Incoherent hydrodynamics of density waves in magnetic fields

We use holography to derive effective theories of fluctuations in spontaneously broken phases of systems with finite temperature, chemical potential, magnetic field and momentum relaxation in which the order parameters break translations. We analytically construct the hydrodynamic modes corresponding to the coupled thermoelectric and density wave fluctuations, and all of them turn out to be purely diffusive for our system. Upon introducing pinning for the density waves, some of these modes acquire not only a gap but also a finite resonance due to the magnetic field. Finally, we study the optical properties and perform numerical checks of our analytical results. A crucial byproduct of our analysis is the identification of the correct current which describes the transport of heat in our system.

Introduction

The holographic conjecture predicts that, in a certain large-N limit, large classes of conformal field theories possess a dual classical gravitational description. Apart from its fundamental implications for quantum gravity, it provides a powerful tool to study strongly interacting regimes of quantum field theories which are inaccessible by standard perturbative techniques. Over the last decade, the duality has been used to study many aspects of strongly coupled systems. One of its exciting applications concerns condensed matter systems at finite temperature, chemical potential and magnetic field [1][2][3][4][5][6][7][8]. In that context, the discussion was sparked by the discovery of electrically charged black hole instabilities which lead to superfluids/superconductors [9][10][11] from the field theory point of view. In this case, the order parameter is given by the expectation value of a complex operator which breaks an internal U(1) symmetry. Soon after the discovery of holographic superfluid phases, black hole instabilities which spontaneously break translations were found in [12]. These phases are expected to play a crucial role in understanding particular physical aspects of various condensed matter systems which exhibit instabilities such as charge and spin density waves, including the cuprate superconductors. In this paper we wish to construct the effective theory of long wavelength excitations in holographic phases with spontaneously broken translations. In order to make contact with realistic condensed matter systems, one needs to tackle the extra complication of the ionic lattice which relaxes the momentum of charge and energy carriers in the system. Momentum relaxation is an essential ingredient in discussing the low frequency transport properties of real materials. In order to accomplish this holographically, we need to deform our UV conformal field theory by relevant operators with source parameters which break translations. In other words, apart from the spontaneous breaking, we also need to implement explicit breaking of translations. The construction of these inhomogeneous black hole backgrounds and the study of the corresponding thermodynamics is technically challenging, mainly because unstable modes naturally lead to inhomogeneous backgrounds where the only expected remaining symmetry is time translations. However, for certain classes of holographic theories with a bulk action which is invariant under global U(1) symmetries, one can follow a Q-lattice construction [13] in order to implement both the holographic lattice and the order parameter that spontaneously breaks translations by simply solving ODEs.
This system was introduced in [14,15] where the focus was on the transport properties and the derivation of analytic formulae for the DC transport coefficients. In [16] it was shown that the spontaneous breaking of the global U (1) in the bulk introduced additional diffusive hydrodynamic degrees of freedom to the system, which are separate from the universal ones associated to the conservation of heat and electric charge in the system. From that point of view, the system we are studying is different from the modulated phases of holography where apart from translations, no additional symmetry breaking occurs. In this work, our aim is to generalise the results of [17] in order to include an arbitrary number of internal broken symmetries as well as a finite magnetic field. 1 Similar systems with spontaneous breaking of translations via an internal symmetry have been studied before in [21]. In the absence of an explicit lattice and at zero magnetic field, the longitudinal hydrodynamic modes included one pair of sound and two diffusive modes. One of the diffusive modes can be accounted to the incoherent thermoelectric mode while the second one was associated to the diffusive mode of the internal symmetry breaking described in [16]. Similarly, the transverse sector of the system contains a single pair of sound modes. It is known that a finite magnetic field has the effect of combining the transverse and longitudinal sound modes to produce a gapped mode and a mode whose frequency was growing quadratically with the wavenumber [22,23]. Based on the hydrodynamic models of [24,25], reference [26] argued that the corresponding constant of proportionality is complex, using numerical techniques in a holographic massive gravity model; see also [27] for related work in effective theories for weak explicit background lattices. In contrast, in our work, we consider phases with strong explicit background lattices, and we analytically show that all hydrodynamic modes remain diffusive even in the presence of a magnetic field. Another aspect of our effective theory is the inclusion of explicit deformation parameters which perturbatively pin the density waves in the system. As one might expect, such deformations introduce a collection of gaps for some of the diffusive modes in our theory. Interestingly, we find that at finite magnetic field pinning also introduces resonance frequencies. As we show, from the retarded Green's functions point of view these show up as poles in the lower half plane. In section 2 we discuss the class of holographic models we are considering along with some important aspects of their thermodynamics. In section 3 we start by introducing the model of hydrodynamics that provides an effective description of the long wavelength excitations, meanwhile identifying the correct current that describes the transport of heat in our system. We then move on to include the effects of pinning in order to compute the resulting gap and resonance of the density waves, as well as compute the retarded Green's functions and extract their optical properties. We conclude the section by discussing how to decouple the Goldstone modes from the U (1) and heat currents, and by deriving the dispersion relations of our hydrodynamic modes. Section 4 contains a number of non-trivial numerical validity checks of the effective theory of section 3. We summarise our most important observations and conclude in section 5. Finally, the appendix contains technical details of the analytical calculations of section 3. 
Setup In this section we introduce the holographic model that captures all the necessary ingredients that we would like to include in our theory. For this reason, we consider holographic theories which in addition to the metric, they contain a gauge field A µ and N Y + N Z complex scalar fields Y J and Z I with a global U (1) N Y +N Z symmetry. Essentially, this is a generalisation of the model that we considered in [17] to include an arbitrary number of complex scalars in the bulk. The gauge field will be used to introduce the chemical potential µ and the magnetic field B in the dual field theory. The first N Y complex scalars are going to implement the explicit lattice and should therefore be relevant operators with respect to the UV theory. The remaining N Z will provide the density wave order parameters in our system. For simplicity, we consider only four bulk spacetime dimensions, corresponding to a 2 + 1 dimensional conformal field theory on the boundary, but all our results can easily be generalized to higher dimensional theories as well. The class of theories we are considering is described by the two-derivative bulk action 2 with G I , W J , τ and V being only functions of the moduli b I = |Z I | 2 and n J = |Y J | 2 . In this case, the global internal symmetries are represented by the invariance under the global transformations Z I → e iθ I Z I and Y J → e iω J Y J . The equations of motion are In order for our bulk theory to admit an AdS 4 solution of unit radius, we demand the small Z I and Y J expansions In this case, the conformal dimensions ∆ I and∆ J of the dual complex operators For the AdS 4 vacuum we use a coordinate system in which the metric reads In these coordinates, the near conformal boundary expansion for the scalars takes the form Provided that the operators O Y J are relevant with∆ J < 3, the Q-lattice construction [13] picks the deformation parameters y J s (t, . At this point we see that we have a set of N Y wavevectors k J si which are determined externally as part of the sources related to the explicit breaking of the lattice. In our theory, the operators O Z I take a VEV spontaneously suggesting that for the bulk fields Z I we should have z I s = 0. In this case, the bulk field Z I are going to be zero above a certain critical temperature. As we lower the temperature of the system, they start developing instabilities which will yield new branches of black holes breaking the global bulk N Z U (1)'s spontaneously. However, in section 3.3 we will turn on z I s perturbatively in order to study the pinning of the density waves. Specifically, within the Q-lattice ansatz for the backgrounds, we have that up to potential contact terms. Thus, we have a set N Z wavevectors k I i associated to the order parameters O Z I . These are dynamically chosen by the system in a way that the free energy is minimised, in contrast to k J si which are fixed by us as part of the boundary conditions. For the particular holographic systems we are studying, the free energy is minimised when k I i = 0. However, following the logic of [17,28], we will still consider background solutions with k I i = 0. In this way the order parameters of the spontaneous breaking also break translations apart from the internal U (1)'s. It is useful to define the real operators which have a constant expectation value The above suggests that from a microscopic point of view, the operator is the right object to focus on in order to study the gapless fluctuations of the system. 
In order to make this point clearer, we parametrise the spacetime fluctuations of the VEVs O Z I according to s ]. In order to solve the bulk equations of motion more efficiently, we find convenient to make the field transformations bringing the bulk action to the form A consistent ansatz that captures all the necessary ingredients we discussed so far for the thermal state is (2.14) According to (2.12), the constants c I in our ansatz (2.14) for the background black holes translate to an overall phase for the complex scalars Z I . The absence of explicit sources in our asymptotic expansion for the corresponding field does not fix it and we have to leave it arbitrary. These are essentially the gapless modes associated with the symmetry breaking in the bulk that we wish to promote to hydrodynamic ones in section 3. We choose our coordinate system so that the conformal boundary of AdS 4 is approached as we take r → ∞. In this case, the asymptotic expansions of the functions in our ansatz (2.14) take the form where we chose to only show the terms where constants of integration of the relevant ODEs appear. The constants of integration g (3) ij that appear in the expansion of the metric have to satisfy δ ij g which is the gravitational constraint and yields the conformal anomaly for the stress tensor. The constant of integration R in (2.15) represents a global shift in the radial coordinate r. This is fixed by demanding that the finite temperature horizon is at r = 0. Demanding our background solutions to be regular imposes the near horizon expansion The equations of motion (2.2) lead to the following equations for the phases of the complex scalars associated to spontaneous breaking At this point, it is useful to note that the fields σ J and χ I are not well defined when either ψ J or φ I are equal to zero. This certainly happens close to the conformal boundary and in order to avoid misinterpretations with the holographic dictionary, we discuss asymptotic expansions in terms of the complex fields Y J and Z I through (2.12). This is well defined in the regime of perturbation theory that we are interested in. The asymptotic expansions for the perturbations of φ I and χ I are (2.18) From these expansions and using (2.12) we obtain where we have used thatz I s = 0, or equivalently φ I s = 0, in the phases we are interested in. This shows that, up to contact terms, δ S I = ∆ I − 3 2 φ I v δc I , and that 2 δφ I s is a source for Ω I while 2 ζ S I is a source for S I , consistent with equation (2.11) and the discussion below it. In the next subsection we discuss aspects of the thermodynamics of our broken phase black holes. This will give us the opportunity to define certain quantities that will appear later in the context of hydrodynamics. Thermodynamics In this subsection we would like to consider the thermodynamics of the background black holes we are interested in. In order to do this we need to regularise the bulk action (2.1) by adding suitable boundary terms which act as counter-terms [29,30]. The purpose of these terms is dual, the first is to render the total on-shell action finite. The second is to make the variational problem well defined, provided we have a unique way to fix the boundary conditions on our bulk fields. 
It is often the case that such terms are not unique and for the purposes of our paper it is enough to list the following terms [30] (2.20) Further counter-terms can be added [31] but these will introduce extra contact terms in the retarded Green's functions that we wish to compute from the bulk theory. In order to compute the free energy of the system we need to consider the Euclidean version of the total action I E = −iS tot . We then need to evaluate the value of I E on the solution with the analytically continued time t = −it E and the periodic identification t E ∼ t E + T −1 . Since our system extends infinitely in the spatial field theory directions, the total free energy W F E = T I E is not meaningful and we instead consider the free energy density w F E where , s and ρ denote the energy density, entropy density and electric charge density respectively. Apart from the thermodynamic data T , µ, B and the explicit lattice data ψ J s , k J si our solutions also depend on the wavenumbers k I i which are related to the spontaneous breaking. Even though different values of c I in (2.14) yield different solutions, the free energy is independent of those in the spontaneous case, when φ I s = 0. 4 The first variation of the free energy yields the first law where the electric charge and entropy densities can be computed from the black hole horizon data and M is the magnetisation and by τ (0) we denote the value of τ when φ I and ψ J are evaluated on the horizon. In order to compute the variation w i I of the free energy with respect to the wavenumber k I i , we simply vary the total action S tot and use the background equations of motion to find 5 Notice that in the spontaneous case this is coming entirely from the variation of the bulk action (2.1) and it is a finite number. Potential contributions from finite counter-terms other than those listed in (2.20) would be possible but these would constitute contact terms which we can ignore for our purposes. In order to obtain the electric magnetisation M ij of the dual field theory, we need to perform a straightforward variation of the bulk action (2.1) with respect to the magnetic field B. Apart from the electric magnetisation, our backgrounds are also going to have a non-trivial thermal magnetisation M ij T . Similarly to the electric magnetisation, this is not immediately obvious from the backgrounds in (2.14) since the homogeneity of our solutions prevents the appearance of explicit heat magnetisation currents. In order to define it, we would need to introduce a larger background ansatz than (2.14) in order to include NUT charges in our metric [35]. Instead of doing that, we simply give the expression for its value in terms of the background Apart from the quantities that appear in the first variation of the free energy, we also find it useful to introduce a set of susceptibilities through the second variation of the free energy An expression for the susceptibilities ν i I , β i I and w ij IL in terms of the background can be obtained by simply varying e.g. equation (2.24). Moreover, the susceptibilities ν i I and β i I can also be found by varying the densities in equation (2.23) with respect to the spontaneous wavenumbers k I i . Even though it is not obvious from the bulk expressions that these two approaches lead to the same result, this is guaranteed by the thermodynamic Maxwell relations. 
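As a sanity check of this statement, the agreement of the two routes can be illustrated with a simple finite-difference computation. The free energy below is a made-up smooth function of (T, µ, k) standing in for the holographic free energy density, so this is only a demonstration of the Maxwell-relation argument and not of the model itself.

```python
# Finite-difference illustration of the Maxwell-relation statement above.
# The free energy w(T, mu, k) is a hypothetical toy function; only the symmetry of
# mixed second derivatives is being demonstrated.

def w(T, mu, k):
    # hypothetical toy free-energy density
    return -T**3 - 0.5 * mu**2 * T - 0.1 * k**2 * T + 0.05 * k**2 * mu**2

def d(f, x, i, h=1e-5):
    # central finite difference of f with respect to its i-th argument
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2 * h)

pt = (1.3, 0.7, 0.4)   # a sample point (T, mu, k)

# route 1: vary the entropy density s = -dw/dT with respect to the wavenumber k
s = lambda T, mu, k: -d(w, (T, mu, k), 0)
route1 = d(s, pt, 2)

# route 2: vary w_k = dw/dk (the analogue of w^i_I) with respect to T
wk = lambda T, mu, k: d(w, (T, mu, k), 2)
route2 = -d(wk, pt, 0)

print(route1, route2)   # the two routes agree up to finite-difference error
```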
This observation will become important in section 3, when we derive the constitutive relations for the currents and the Josephson equation in a derivative expansion. In order to discuss the constraints which we need to impose, we choose to work in a radial foliation and we define the normal one form n = dr normal to constant r hypersurfaces. We now use our equations of motion in (2.2) to define L µ = E µ ρ n ρ and C = C ρ n ρ with E µν = L µν − 1 2 g µν L ρ ρ . The four gravitational constraints are simply L µ = 0 and the Gauss constraint is C = 0. The constraints that we need to satisfy can be imposed on any hypersurface with e.g. constant radial coordinate r. Close to the conformal boundary, they are equivalent to the Ward identities of charge conservation, diffemorphism and Weyl invariance of the dual conformal field theory. Similar to [17], we will derive an effective hydrodynamic theory in section 3 by utilizing a subset of these constraints on a surface close to the background black hole horizon at r = 0, in terms of the constants that appear in (2.30) and (2.31). We now define the horizon currents Building on [35,36], we derive the horizon constraints that the above currents should satisfy 6 Close to the conformal boundary at r = ∞, the expansion of our functions is where we have chosen to only show the undetermined terms involving the constants of integration of the second order ODE's we need to solve. We have also included the constants ζ S I which represent the sources for the operators S I we discussed in section 2 and which we need to fix. For the components which we are free to choose by using diffeomorphism and gauge invariance, we only show their desired behaviour close to the boundary as they offer no additional information as far as the constants of integration are concerned. The remaining 9 + 2N Z + 2N Y constants of integration δg , δc I and δσ J(v) together with the 13+2N Z +2N Y coming from the 6 It is relatively straightforward to check that the r and t components of the gravitational constraints are equivalent in the r → 0 limit. horizon expansion, will fix a unique solution of the 9+2N Z +2N Y second order ODE's and the 4 constraints. In the next section we construct solutions which correspond to the late time, long wavelength hydrodynamic modes of the boundary theory. Linearised hydrodynamics In this section we study the hydrodynamic limit of the fluctuations that we introduced in subsection 2.2. In subsection 3.1 we discuss the construction of these modes up to second order in the derivative expansion. This allows us to write an effective hydrodynamic theory for the conserved currents of the system and its gapless modes related to spontaneous breaking in the bulk in subsection 3.2. Having a complete effective theory for our fluctuations, in subsection 3.3 we examine the gap induced for the spontaneous density waves sliding modes, by perturbatively small deformations for the operators Ω I . To illustrate, we specialise to the isotropic case where we find that apart from a gap, the theory also develops resonance frequencies. In subsection 3.4 we study the retarded Green's functions of the operators in our theory at finite frequency and we give the precise way that the poles of subsection 3.3 influence the transport properties. We then move on to give more general, model-independent, Kubo formulas for some of the transport coefficients in subsection 3.5. 
There, we also define heat and electric current operators which decouple from the Goldstone modes, and we discuss some of their properties. Finally, in subsection 3.6 we give an algebraic equation whose solutions yield the dispersion relations of our hydrodynamic modes. Even though we are not able to find the solutions in closed form, we can show that all our modes are purely diffusive at the order we are working. An interesting outcome of our results is that, after correctly identifying the heat current, the thermodynamic coefficients w i I all drop out from physically interesting quantities. Hydrodynamic perturbations In their infinite wavelength q i → 0 limit, the hydrodynamic modes we wish to study will reduce to a uniform distribution of energy, electric charge and phase shift of the complex scalar Z I which break the global symmetries in the bulk. In order to keep track of our expansion we scale q i → εq i with ε a small number and expand the frequencies and radial functions in the bulk according to where δX(r) can be any of the functions that appear in the ansatz for the perturbation (2.28). A key point of our construction is the leading piece of the ε expansion which according to our earlier discussion has to reduce to The functions X b represent the background fields of the black hole in equation (2.14). In order to generate the perturbations which satisfy the correct boundary conditions (2.30), (2.31) and (2.34), at the same time with a simple partial derivative with respect to T , µ and c I we also need to perform the perturbative coordinate and gauge transformation [36] As expected, the boundary condition requirements for our perturbations do not uniquely fix g(r) in the bulk. It is enough to choose it such that close to the conformal boundary it vanishes sufficiently fast while close to the horizon at r = 0 it approaches Choosing the perturbations as in (2.28) leads to an inhomogeneous system of differential equations coming from the bulk equations of motion (2.2) at the perturbative level. It is clear from the above construction that the seed solution (3.2) satisfies the corresponding homogeneous system of equations [17]. This suggests that we can add them at each order in the ε expansion (3.1) and therefore consider the split, with δX [n] a solution to the inhomogeneous problem which is sourced by lower order terms of the solution. Following closely the analysis of [17], we can show that the eigenmodes of the system necessarily have ω [1] = δT [0] = δµ [0] = 0. Therefore the variation of the temperature and chemical potential starts at order O(ε). The next to leading part of the bulk solution δX [1] will only be driven by a shift in the background phases of the complex scalars according to δχ . Moreover, since we are examining the equations of motion at order O(ε), it is only the first derivatives of the varying exponential that enter the source of the inhomogeneous part δX [1] . Effectively we can say that up to order ε we have The above implies that δX [1] will simply be the change of the background solution under δk I i = i εq i δc I . The same pattern will appear at all orders in the ε expansion, leading us to the further split of the solution to the inhomogeneous problem according to [17] In the end, the whole solution is determined by the variations δT , δµ and δc I and so does the horizon fluid velocity v i , the local chemical potential , and the vector potential δa (0) j . 
More specifically, we can identify (3.7) The above identification will prove useful in the next subsection where we give the effective theory of hydrodynamics that governs the fluctuations up to and including δX [1] in our expansion. The effective theory An important point of our construction is the way that we choose to impose the gravitational and Gauss constraints. As we discussed in section 2, these constraints should be imposed at once on any hypersurface at e.g. constant radial coordinate r. From the dual field theory point of view, the natural choice for this hypersurface would be near the conformal boundary as they become equivalent to the Ward identities of diffeomorphism, Weyl and global U (1) invariance. More specifically, the constraints L b = 0 and C = 0 give with F = dA the field strength of the external source one-form A a andȳ J s ,z I s are the sources for the complex scalar operators. Contracting the stress tensor Ward identity with a vector Λ b gives The thermal gradient ζ and electric field E perturbations enter the boundary metric g ab and external field A a according to along with the source δz I s for the scalar field We are now going to make the choice Λ = ∂ t and perturbatively expand the contracted Ward identity to give the electric charge and heat conservation with δQ a = −δT a t − µ δJ a . Equations (3.12) define two conserved currents at the level of first order perturbation theory. From the point of view of the effective theory, this is a good starting point in order to give a closed system of equations, provided that we can express these currents in terms of the hydrodynamic variables δμ, δT and δĉ I [17]. However, we will see soon that, in the phases we are interested in, the current δJ a H which describes the transport of heat is different from δQ a . As we have argued in the previous subsection, the time derivatives scale according to ∂ t ∝ O(ε 2 ) while for the spatial derivatives we have ∂ i ∝ O(ε). This suggests that we need to consider the charge densities up to order O(ε) and the transport currents up to order O(ε 2 ) in (3.12). We will write our theory in position space where all our functions will be denoted by hats. Moreover, from now on, we find it useful to define hatted thermodynamic quantities which are local functions of the dynamical temperature, chemical potential and phasons. For example, we defineŵ ≡ w(T + δT , µ + δμ, k I i + ∂ i δĉ I ). In appendix A.2 we relate the boundary currents δJ a , δQ a to the horizon currents we have defined in equation (2.32) in the ε-expansion. More specifically, we can write where we have defined the divergence free magnetisation currents At this point it is important to identify the correct current δJ i H that describes the transport of heat. For this reason, using the first law (2.22), we write the conservation equations as 7 The conservation equation (3.15) implies that up to magnetisation current contributions, the currents that correctly describe the transport of heat and electric charge are δĴ i H = δQ i + I w i I ∂ t δĉ I = δQ i (0) and δĴ i = δĴ i (0) . In terms of operators, we can writeĴ At first sight, it might seem surprising that the horizon heat current correctly describes the transport of heat. However, one might have expected this to happen since at the level of thermodynamics the entropy density of the system is determined by the horizon. This also ties well with the common lore in holography that dissipation is captured by horizon physics. 
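To spell out the step leading to this identification, here is a schematic linearised manipulation. It assumes the first law (2.22) takes the standard form used below, with the wavenumber variation realised as δk_{Ii} = ∂_i δĉ^I, and it drops magnetisation-current and external-source contributions, which do not affect the identification:
\[
\partial_t\big(\delta\epsilon-\mu\,\delta\rho\big)+\partial_i\,\delta Q^i\simeq 0\,,\qquad
\delta\epsilon-\mu\,\delta\rho = T\,\delta s+\sum_I w^i_I\,\partial_i\delta\hat c^I\,,
\]
so that, at fixed background data,
\[
T\,\partial_t\delta s+\partial_i\Big(\delta Q^i+\sum_I w^i_I\,\partial_t\delta\hat c^I\Big)\simeq 0
\quad\Longrightarrow\quad
\delta \hat J^i_H=\delta Q^i+\sum_I w^i_I\,\partial_t\delta\hat c^I\,.
\]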
The above discussion suggests that the physically relevant current to discuss iŝ J i H rather thanQ i . In the end, we would like to build our theory of hydrodynamics around the conserved currentsĴ i H andĴ i and the light modes associated to the operators S I . The corresponding triplet of sources is ζ i , E i , ξ S I for our choice of 7 It would be interesting to go beyond linear hydrodynamics and examine whether demanding positivity of entropy production (according to an appropriately defined entropy current) leads to constraints on the transport coefficients defined below, as happens generally [37,38], as well as in related contexts [21,26,[39][40][41]. operators and by examining the source terms in the deformed action we can easily show that Note that, as discussed in section 2.1, thermodynamically preferred phases satisfy w i I = 0, in which caseĴ i H =Q i . Given these results, we are able to express the boundary transport currents in terms of δT [1] , δµ [1] , δc I [0] and v i [2] at order O(ε 2 ). However, in [17] we have shown that within perturbation theory we can choose to impose the constraint L i = 0 on a hypersurface close to the horizon. This allows us to integrate out the horizon fluid velocity v i [2] and therefore obtain local expressions for the currents in terms of our hydrodynamic variables For the specific holographic model we are considering, the transport coefficients can be expressed in terms of horizon data and thermodynamic susceptibilities according to where we have used the definitions from appendix A.3, and indices in N are raised and lowered with the horizon metric g (0)ij . We now turn our attention to the pseudo gapless degrees of freedom δc I related to the density waves in our system. In order to introduce pinning to our system we turn on a perturbative deformation δφ I s ∝ O(ε 2 ) in the background asymptotics (2.15). The constitutive relations (3.18) remain unchanged [17], while in appendix A.1 we integrate the equation of motion (2.17) to obtain the effective Josephson relation with the transport coefficient [21,24,25,42]. It would be interesting to find a holographic model which realizes that. In the following subsections we study the gap of hydrodynamic modes after turning on the pinning parameters δφ I s as well as the dispersion relations without pinning. Finally we compute the finite frequency retarded Green's functions which will help us extract the transport properties of the system. Pseudo-gapless modes In order to identify the gapped modes in our effective theory we need to study perturbations with the sources switched off and with wavevector q i = 0. They belong to the Goldstone mode sector, which leads us to consider the ansatz δT = 0, δμ = 0, δĉ I = δc I 0 e −iδω g t . where we have defined the matrix 8 (3.25) 8 We remind the reader that capital indices are not being summed over. In order for equation (3.24) to have non-trivial solutions, the matrix multiplying the vector of amplitudes should not be invertible. Equating the determinant of this matrix to zero gives an algebraic equation which determines the N Z different values for the gaps δω g in terms of the eigenvalues of the matrix M −1 . In order to illustrate the effect of the magnetic field on the gaps of the theory we consider a simple case with N Z = N Y = 2 and which apart from the U (1) 4 the model has a Z 2 × Z 2 symmetry which exchanges Z 1 ↔ Z 2 and Y 1 ↔ Y 2 . 
This allows us to consider an isotropic background which can be achieved by choosing The symmetries of the model along with the choice of boundary parameters leads to data of integration in which the internal indices can be suppressed, allowing us to write This simplifies the quantities where we see that the magnetic field introduces an antisymmetric piece in the matrix B ij . In the above ω c = ρB/s , (3.29) can be identified with the cyclotron mode frequency [1,2]. As a consequence, the matrix M will now have complex eigenvalues leading to the gaps for the pseudo-massless modes. It is easy to see that in zero magnetic field, these modes lie on the lower imaginary semi-axis and they agree with the expressions that were obtained in [17]. Moreover, due to the isotropy of the model and background we are considering, they also lie at the same point. However, the characteristic frequency of our nearly gapless modes has a resonant frequency apart from a gap at finite magnetic field. It is interesting to consider the extreme limits for the behaviour of the poles (3.30) of our simple example. In the limit where B is the smallest parameter 9 we have a perturbatively small correction to the results of [17] (3.31) We therefore see that for small magnetic fields the two modes split and they move horizontally in opposite directions in the complex plane. In the opposite limit, where B is the largest scale in the system we have 10 32) and the two frequencies become degenerate once again, at the value which is given by the k → 0 limit of (3.30). The same result is obtained if we keep the filling fraction ρ/B finite while taking B → ∞. It is interesting to note that the expression (3.32) is the same with [16] but in a different thermal state. This situation is relevant to weak lattices and close to the phase transition at T ∼ T c . A third, distinct possibility which is relevant to weak lattices and magnetic fields is when σ 0 B ρ and B k 2 In this case we see that the resonant part dominates the poles related to the phase relaxation. The expression (3.33) agrees with the prediction from hydrodynamics in the pseudo-spontaneous regime once we identify the pinning frequency ω 2 0 ∼ k 2 Ω δφ s [23,24,43]. Note that the full expression (3.30) holds for finite magnetic field; in particular, we nowhere assumed that B is perturbatively small. All the background quantities however, including the horizon values Ψ (0) , Φ (0) , depend implicitly on B as well as k s . Thus, the scaling of δω g with B in the above expressions can be determined analytically (or, when not possible, numerically) from the properties of the ground state. 11 In particular, it is not straightforward to compare our results to [26], which found a B 1/2 scaling for large B using a holographic massive gravity model. 9 To be precise, the regime where (3.31) is valid is σ 0 B ρ and ω c k 2 Φ (0) . 10 In the regime σ 0 B ρ and B k 2 s Ψ (0) , k 2 Φ (0) . 11 The only exception is (3.31), since we expect the various background quantities to be continuously connected to the corresponding ones of the B = 0 state. Finite frequency response In this subsection we study linear response at zero wave number q i = 0. In order to achieve this we turn on all our sources with a finite frequency ω and look for a solution to the conservation law equations (3.15) and Josephson relation (3.21). 
A suitable ansatz for this purpose is δT = δT [1] e −iωt , δμ = δµ [1] After combining the transport heat current (3.18) and the Josephson relation (3.21), we obtain Plugging this solution back in the constitutive relations (3.15) we obtain the VEVs for the scalar fields S I and the transport currents in terms of the sources ζ i , E i and ξ S I according to In order to extract the retarded Green's functions, we need to consider the derivative of the VEVs with respect to the time dependent sources. Since we are only considering linear response, we can write 12 for any operator C in our theory. After a little algebra we obtain the expressions The non-trivial frequency dependence in the above thermoelectric conductivities includes only the horizon quantitiesᾱ ij H ,κ ij H , due to the fact that the Goldstone couples only to the heat current, (3.21). Moreover, from the above expressions we see that the gaps of the previous subsection determine the poles of the retarded Green's functions (3.38). From these expressions it is clear that the pseudo-gapless modes we discussed in section 3.3 couple to the conserved currents of our system. It should be emphasised that the Green's functions in equation (3.38) attain the most general form possible. In particular, G S I S K is simply controlled by the existence of a pole related to the pseudo-spontaneous symmetry breaking (subject to the presence of the gap), while S I couple to the currents only through their time derivatives, giving an additional factor of ω in the numerator. Thus, we expect to see the same structure in more general theories of this type. Given the time reversal symmetry of the theory, as a non-trivial check, we can see that the expressions above satisfy the Onsager relations G CD (ω) , with ε C,D = ±1 depending on how the operators C and D transform under time reversal. In particular, we find that which hold by construction. The explicit expressions (3.38) show that the limits ω → 0 and M → 0 do not commute, as also discussed in [17,44]. Specifically, taking the DC limit ω → 0 while the gap remains finite, reduces the conductivities to the corresponding horizon conductivities, first obtained in [35], as well as the expected Goldstone mode susceptibility χ IK with the rest of the correlators vanishing. However, when we first take δφ I s → 0, transport at zero-frequency gets modified by the Goldstone modes while G S I S K diverges as (3.43) Decoupling the Goldstone modes Given the results of the previous subsection we can proceed to obtain Kubo formulas, which can be taken as the fundamental definition of the corresponding transport coefficients in a generic theory with the symmetry breaking pattern we are considering. 13 In purely spontaneous phases the susceptibilities diverge [45], but here this divergence is regulated by the perturbative explicit source δφ I s . We first extract the transport coefficient Λ I K as which is a finite quantity, given by the combination of transport coefficients shown in (3.35) in our specific holographic model. We can then express γ i I and λ i I as The order of the limits is important, as was also noticed in [24]. We first need take the gap to zero in order to include the effects of the Goldstone modes in the low frequency regime, and then take ω → 0. Similarly we can definẽ since, as explained in section 3.2, only the heat current enters the Josephson relation. 
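The importance of the order of limits can be illustrated with a toy correlator that has the pole structure described above; the function G below and the numbers a, b are schematic placeholders, not the holographic expressions (3.38).

```python
# Toy illustration of why the limits omega -> 0 and gap -> 0 do not commute.
# G is a schematic correlator with a pseudo-Goldstone pole at omega = -i*omega_g and
# an extra factor of omega in the numerator from the time-derivative coupling.

def G(omega, omega_g, a=1.0, b=0.5):
    return a + b * omega / (omega + 1j * omega_g)

# DC limit taken first, with the gap held fixed: the Goldstone pole drops out
print(G(1e-8, omega_g=0.1))   # ~ a, the "horizon" value

# gap removed first, then omega -> 0: the Goldstone contribution survives
print(G(1e-8, omega_g=0.0))   # ~ a + b

# subtracting the two orders of limits isolates the Goldstone piece b,
# in the spirit of the Kubo prescription above
omega = 1e-4
print((G(omega, 0.0) - G(omega, 0.1)).real)   # ~ b
```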
However, more generally, equations (3.45),(3.46) will hold in a generic theory in which we do not have explicit expressions for the low energy Green's functions, as long as the expressions (3.45), (3.46) remain finite as δφ I s → 0. It would be interesting to also define modified electric and heat current operators J i ,J i and J i H ,J i H which decouple from the Goldstone modes. This is satisfied as long as we demand that the Green's functions irrespective of the order of limits. Within the hydrodynamic regime, (3.45), (3.46) imply that indeed satisfy Note that these currents are related by Thus, from a holographic perspective, the combinations (3.49) isolate the horizon contribution to the electric and heat currents. As expected, the finite-frequency poles related to the pseudo-gapless modes cancel out in the above Green's functions, which turn out to be frequency-independent for low frequencies up to ω g . The Green's functions for the time-reversed currentsJ i ,J i H are simply the timereversed versions of (3.52) and so they also satisfy Onsager relations similar to (3.39). This can be seen by combining (3.51) and (3.40). We can proceed further by recalling the relation (3.48) between the transport coefficients entering in (3.49). We then see that the combinations do not include contributions from the Goldstone modes, and can be solely expressed in terms of the original currents J i , J i H . As before, J i dec +J i dec is a well-defined vector operator. For the retarded Green's function we find which turns out to be given by the horizon quantity σ ij 0 defined in (3.20). Similarly We thus observe that the horizon quantity σ ij 0 defined in (3.20) corresponds to the conductivity of the part of the U (1) current which decouples from the heat current J H and the Goldstone modes. In the special case of time-reversal invariant backgrounds with B = 0, we have that Then both decoupled combinations (3.53) reduce to the current considered in [46,47]. Furthermore, in the absense of a background lattice, the latter can be identified with the incoherent current which decouples from the conserved momentum operator [48]. Finally, note that all of the above results hold in the strong holographic lattice limit that we are considering in our paper, where the low frequency transport properties are determined by the Goldstone modes and the momentum non-conservation poles are outside our hydrodynamic regime. However, the explicit sources δφ I s also relax momentum apart from the massless modes with a relaxation rate ∼ (δφ I s ) 2 . So, had we not included such a background lattice, the momentum poles would dominate over the Goldstone mode poles in the hydrodynamic regime. Hydrodynamic modes In this subsection we wish to extract the dispersion relations of the hydrodynamic modes in our system at zero pinning. Similarly to the previous section, we switch off all the sources and we look for solutions of the form which solve the conservation law equations (3.15) and Josephson relation (3.21). Similarly to the previous subsection, the resulting system of equations reduces a linear system of equations for the vector of amplitudeŝ where we have defined the matrix which further implies that the dispersion relations are independent of the sign of B, since the determinant is invariant under transposition. However, the Kernel ofŜ which solves equation (3.57) will depend on it and thus the actual modes will change as we change the sign of B. 
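Before giving the general argument, it is perhaps useful to see in a toy numerical example how a determinant condition of this type produces purely imaginary, diffusive frequencies. The matrices below are random positive-definite stand-ins evaluated at B = 0; they are not the holographic matrix Ŝ of (3.58).

```python
# Toy illustration: a determinant condition of the schematic form
# det(-i*omega*chi + q^2*Sigma) = 0, with chi and Sigma positive definite, has only
# purely imaginary roots scaling as q^2.

import numpy as np

rng = np.random.default_rng(0)
n = 4                                  # e.g. 2 + N_Z hydrodynamic variables with N_Z = 2
A = rng.normal(size=(n, n))
chi = A @ A.T + n * np.eye(n)          # toy susceptibility matrix (positive definite)
C = rng.normal(size=(n, n))
Sigma = C @ C.T + n * np.eye(n)        # toy matrix of dissipative coefficients (positive definite)

def modes(q):
    # det(-i*omega*chi + q^2*Sigma) = 0  <=>  i*omega = q^2 * eigenvalues(chi^{-1} Sigma)
    return -1j * q**2 * np.linalg.eigvals(np.linalg.solve(chi, Sigma))

for q in (0.01, 0.02):
    print(np.sort_complex(modes(q)))
# the frequencies are purely imaginary, lie in the lower half plane, and obey
# omega(lambda*q) = lambda^2 * omega(q), i.e. they are purely diffusive
```

The general argument below does not rely on such special choices; it only uses the behaviour of Ŝ under transposition combined with the map B → −B discussed above.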
By directly exploiting these properties, we show below that the frequencies ω(q i ) of our hydrodynamic modes are pure imaginary. Finally, we notice that the transformation q i → λ q i , ω → λ 2 ω and δc I → λ −1 δc I is a symmetry of equation (3.57). This shows that all our modes are diffusion-like with ω(λ q i ) = λ 2 ω(q i ). In contrast, the effective field theory of [24], as well as the holographic model of [26], both include a real quadratic part in the dispersion relation for the magnetophonon mode in the purely spontaneous or pseudo-spontaneous symmetry breaking regime. In our case, the modes are purely diffusive as a result of being in the strong translational symmetry breaking regime. Let us now prove that the modes are pure imaginary. We find convenient to split the mode vector which solves (3.57) according to (3.61) Then for the background with B → −B there is a different mode with |ṽ 1 and |ṽ 2 but with the same dispersion relation ω(q i ). This can be justified by using the transformation property (3.60) and the comment below it. We can write From the above systems and after using (3.59) we obtain the relation showing that iω has to be a real number. For B = 0, the matrices X, W, Σ and Θ are symmetric. We also know that the vectors |v 1 and |v 2 coincide with the vectors |ṽ 1 and |ṽ 2 . This observation shows that if the matrices X and W are positive definite, then iω > 0. In other words, at zero magnetic field, thermodynamic stability implies dynamical stability in the hydrodynamic sector that we focussed on. Numerical checks In this section we numerical confirm the results presented in section 3, and in particular the formula for the dispersion relations of the hydrodynamic modes coming from equation (3.58), the gap (3.30) and the optical conductivities (3.38). This is achieved by focusing on the model of [44], which is a truncation of the general bulk action (2.1) down to the four-dimensional Einstein-Maxwell theory coupled to six real scalars, φ, ψ, χ i and σ i with i = 1, 2 The variation of the above action gives rise to the following field equations of motion The simplest solution to the above equations is the unit radius vacuum AdS 4 , which is dual to a d = 3 CFT with a conserved U (1) charge. In this work we choose to place the CFT at finite temperature and deform it by a chemical potential, an external magnetic field and a background lattice. Within this theory, we are interested in thermal states that correspond to density waves. Putting all the ingredients together, the solutions we are after are captured by the ansatz (2.14), which we rewrite here for convenience where I = 1, 2, i = 1, 2. For simplicity we choose k I i = k i δ Ii , k I si = k si δ Ii . Let us now move on to discuss the boundary conditions. In the IR, we demand the presence of a regular Killing horizon at r = 0 by imposing the following expansion U (r) = 4π T r + . . . , which is specified in terms of 6 constants. In the UV, we demand the conformal boundary expansion U → (r + R) 2 + · · · + W (r + R) −1 + . . . , V 1 → log(r + R) + · · · + W p (r + R) −3 + . . . , Just like in [44], the scalar fields (ψ, σ) are taken to constitute the anisotropic Q-lattice in which both translational invariance and the two U (1) ψ symmetries are explicitly broken, while the density wave phase is supported by (φ, χ) and breaks the the two U (1) φ symmetries spontaneously. As such, the thermal states of interest correspond to taking ψ s = 0 and φ s = 0. Thus, this expansion is parametrised by 8 constants. 
Overall we have 14 constants appearing in the expansions, in comparison to the 11 integration constants of the problem. Thus, for fixed γ, δ, B, k i , k si and temperatures below a critical one T < T c , we expect to find a 3 parameter family of solutions, labelled by ψ s , µ, T . In figure 1 we plot the critical temperature, T c , as a function of k = k 1 = k 2 for a particular choice of parameters. This is obtained by considering linearised fluctuations around the normal phase of the system (φ = 0, χ = 0) and exhibits the usual "Bell Curve" shape. Figure 1: Plot of the critical temperature at which the background Q-lattice becomes unstable as a function of k for (k s1 , k s2 , ψ s , γ, δ, µ, B) = ( 3 10 , 3 10 , 4, 3, 1, 1, 1 10 ). We see that the most unstable mode corresponds to k = 0. Quasinormal modes We now move on to compute quasinormal modes for the backgrounds constructed above. For simplicity, we focus only on isotropic backgrounds characterised by k 1 = k 2 ≡ k, k s1 = k s2 ≡ k s , V 1 = V 2 . We consider perturbations of the form together with (δa t , δa 1 , δa 2 , δφ, δψ, δχ 1 , δχ 2 , δσ 1 , δσ 2 ), where the variations are taken to have the form δf (t, r, x 1 ) = e −iωv(t,r)+iqx 1 δf (r) , (4.8) with v EF the Eddington-Finkelstein coordinate defined as v EF (t, r, Compared to the analytic setup of the problem in section 3, we have chosen S in (2.29) such that S = U −1 , as well as a radial gauge in which all perturbations with an r index vanish. Such a gauge is not compatible with the way we constructed the modes in section 3, but the physical information of the quasinormal modes in the end should of course be the same. Note also that our choice for the momentum q i to point in the direction x 1 is without loss of generality, because the background is isotropic. Plugging this ansatz in the equations of motion, we obtain 5 first order ODEs and 10 second order giving rise to 25 integration constants. Let us now discuss the boundary conditions that we need to impose on these fields. In the IR, we impose infalling boundary conditions at the horizon, which without oss of generality is set at r = 0 δh tt = c 1 r + . . . , δh t x 2 = c 3 + . . . , where the constants c 1 , c 2 , c 3 and c 6 are not free but are fixed in terms of the others. Thus, for fixed value of q, the expansion is fixed in terms of 11 constants, ω, c 4 , c 5 , c 7 , c 8 , c 9 , c 10 , c 11 , c 12 , c 13 , c 14 . On the other hand, in the UV, the most general expansion with φ s = 0 is given by For the computation of quasinormal modes, we need to ensure that we remove all the sources from the UV expansion up to a combination of coordinate reparametrisations and gauge transformations [δg µν + Lζg µν ] → 0 , where the gauge transformations are of the form for ζ µ , λ constants. This requirement demands that the sources appearing in (4.11) take the form δh (s) where λ 2 = Bζ 3 . Therefore, the UV expansion is fixed in terms of 15 constants: ζ 1 , ζ 2 , ζ 3 , ζ 4 , λ and δh (v) 2 . Overall, for fixed q, B, we have 26 undetermined constants, of which one can be set to unity because of the linearity of the equations. This matches precisely the 25 integration constants of the problem and thus we expect our solutions to be labelled by q and B. We proceed to solve numerically this system of equations subject to the above boundary conditions using a double-sided shooting method. 
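As an illustration of the method (rather than of the actual bulk equations), the following sketch applies double-sided shooting to a toy eigenvalue problem with a known spectrum. The real computation applies the same matching idea to the full set of linearised bulk equations, integrating from the horizon with infalling data and from the conformal boundary with the sources removed, and matching all functions at an intermediate radius.

```python
# Sketch of a double-sided shooting method on a toy eigenvalue problem,
# f'' + omega^2 f = 0 with f(0) = f(1) = 0, whose exact spectrum is omega = n*pi.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(x, y, omega):
    f, fp = y
    return [fp, -omega**2 * f]

def matching_determinant(omega, x_mid=0.5):
    # shoot from the left end with f(0) = 0, f'(0) = 1
    left = solve_ivp(rhs, [0.0, x_mid], [0.0, 1.0], args=(omega,), rtol=1e-10).y[:, -1]
    # shoot from the right end with f(1) = 0, f'(1) = 1
    right = solve_ivp(rhs, [1.0, x_mid], [0.0, 1.0], args=(omega,), rtol=1e-10).y[:, -1]
    # a nontrivial global solution exists when the two branches are linearly
    # dependent at the matching point, i.e. when their Wronskian vanishes
    return left[0] * right[1] - left[1] * right[0]

omega_lowest = brentq(matching_determinant, 3.0, 3.3)
print(omega_lowest, np.pi)   # ~ 3.14159: the lowest "mode" of the toy problem
```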
Figure 2 (left) shows the dispersion relations for the four hydrodynamic quasinormal modes in our system for a particular choice of the background configuration. We also illustrate with dashed lines the dispersion relations 14 fixed by the linear system (3.57). Figure 2 (right) shows the analytically predicted diffusion constants from (3.57), and the ones computed numerically from the q → 0 limit of the function iω(q)/q 2 . 15 Let us now make some remarks on figure 2. First of all, we note that all the modes we find are diffusive and purely imaginary, as expected from the analysis in section 3.6. We also see a good quantitative agreement of the numerical solution and the analytical expressions in the regime of validity of hydrodynamics, where q is parametrically smaller than all the dimensionful scales in the system. Actually, we expect the radius of convergence of hydrodynamics to be set by the collision points of the hydrodynamic modes with the first non-hydrodynamic mode [49][50][51]. In figure 2 we chose the parameters such that the lattice is weak; in this case, the lowest lying non-hydrodynamic mode is the momentum relaxation/cyclotron mode. One of the thermoelectric modes, the steepest curve in figure 2, interacts with this non-hydrodynamic mode as q is increased, leading to the quickest deviation from the analytic quadratic dispersion relations. The top curve describes the incoherent thermoelectric mode which decouples from momentum, and agrees very well with the analytic expression even for very large q. 16 This is similar to the case without magnetic field, see [17] for further details. 14 Note that, in order to evaluate the quasinormal modes using the analytic formula (3.58), we need to compute the derivatives w i I and w ij IJ . In order to compute these correctly one needs to consider backgrounds with general k I i , i.e. k 1 1 ≠ k 1 2 ≠ k 2 1 ≠ k 2 2 . 15 Note that, as explained below (4.9), we have chosen the momentum q to point in x 1 , and thus, in this setting, the diffusion matrices D ij become diffusion constants D for each mode. 16 The above characterisation of the modes as thermoelectric versus Goldstone is done by examining the system as k → 0; in general all the modes are coupled.

Pseudo-gapless modes and two-point functions

In this subsection we outline the numerical computation of the pseudo-gapless modes as well as certain two-point functions involving the currents J, Q in the presence of pinning, φ s ≠ 0. We perform a calculation similar to the one for quasinormal modes, but we now consider fluctuations with q = 0 around a background configuration that has a small but finite source φ s ≠ 0. Looking at the ansatz (4.7), it is consistent to set δh tt , δh x 1 x 1 , δh x 1 x 2 , δh x 2 x 2 , δa t , δφ, δψ = 0. We are thus left with 6 second order and 2 first order equations for the remaining fluctuations, giving rise to 14 integration constants. The IR expansion close to the horizon (r = 0) takes a similar form as above, namely δh t x 2 = c 3 + . . . , δa x 1 = c 7 + . . . , δa x 2 = c 8 + . . . , δχ 1 = c 11 + . . . , δσ 1 = c 12 + . . . , δχ 2 = c 13 + . . . , δσ 2 = c 14 + . . . , (4.15) where the constants c 2 , c 3 are fixed in terms of the others. We see that the expansion is fixed in terms of 7 constants, ω, c 7 , c 8 , c 11 , c 12 , c 13 , c 14 . On the other hand, the UV expansion changes slightly in comparison to (4.11) because φ s ≠ 0.
In particular, it is given by Once again, we remove all the sources from the UV expansion apart from an external electric field E and a temperature gradient ζ in the x 1 direction, up to a combination of coordinate reparametrisations and gauge transformations. This is done by imposing the following constraints on the sources in (4.16) δh (s) Let us first consider the case of the pseudo-gapless modes by setting (E, ζ) = (0, 0). We see that the UV expansion is fixed in terms of 8 constants: 2 . Overall, we have 15 undetermined constants, one of which can be set to unity because of the linearity of the equations. This matches precisely the 14 integration constants of the problem and thus we expect to find a discrete set of solutions, labelled by B. We proceed to solve numerically this system of equations subject to the above boundary conditions using a double-sided shooting method aiming to identify the two pseudo-gapless modes of equations (3.30). Note that the two modes have equal imaginary parts and opposite real parts. In figure 3 we plot the real and imaginary part of these modes as a function of the pinning parameter, φ s , and the external magnetic field, B, and we compare with the analytic formulas which are depicted with dashed lines. We see that the numerical and analytic calculations are in good agreement. The reader is reminded that the analytic computation is perturbative in φ s , but exact in B. We finally consider the computation of the conductivities. From (4.17) we see that, for fixed (E, ζ), we have 7 constants in the IR and 8 in the UV. Comparing with the 14 integration constants in the problem, we expect to find a 1-parameter family of solutions labelled by ω. Using the linearity of the equations we set (E, ζ) = (1, 0) or (E, ζ) = (0, 1) depending on which source we want to keep. The diffusion currents are then given by Carrying out the numerical shooting computation, we calculate the (1, 1) and (1, 2) components of the two-point functions For fixed B and φ s , these quantities are plotted in figure 4 with solid lines. In order to compare our numerics with the analytic results of section 3, we use the definition (3.16) to write and thus obtain analytic expressions using (3.38), which are depicted in figure 4 with dashed lines. We see that the two are in good quantitative agreement at small frequencies. The reader is reminded that in this calculation we have set ζ I S = 0 and we only included sources in the x 1 direction; thus we can not compute the (2, 1) and (2, 2) components of the the two-point functions. Discussion In this paper we constructed the effective theory of hydrodynamics which captures holographic phases in which translations are broken explicitly and spontaneously. We have significantly extended the construction of [17] to include an arbitrary number N Z of gapless degrees of freedom emerging from spontaneous density waves and we also included a background magnetic field. A holographic model which incorporates the two Goldstone modes arising from spontaneous breaking of translations in magnetic fields, along with the coupling to the heat current was studied in [26] and a complex quadratic dispersion relation was found. In our setup, the strength of the explicit breaking is large compared to the wavelength of the hydrodynamic fluctuations. In section 3.6 we analytically derived an equation whose roots yield the dispersion relations of the hydrodynamic modes governing our system. 
Despite not being able to write down the dispersion relations of all of our 2 + N Z hydrodynamic modes in closed form, we prove that they are purely imaginary and diffusive, unlike [26]. In our construction we have also included the corresponding N Z perturbative deformation parameters which pin down the density waves and introduce N Z gaps in our theory. Interestingly, we have shown that apart from the gap, the magnetic field causes the corresponding poles to move off the imaginary axis due to resonance effects. In section 3.4 we computed the retarded Green's functions of the operators relevant to the hydrodynamic description of the system. As one might expect, the poles due to pinning have a direct effect on the transport properties of our system, as can be seen from the explicit form of the Green's functions in equation (3.38). Finally, an important byproduct of our work is the identification of the correct current in (3.16) which describes the transfer of entropy, as can be seen from the conservation equation in the last line of (3.15). Given this definition, the variation of the free energy w i I with respect to the wavenumbers k I i drops out of the corresponding Green's functions (3.38). Moreover, the gaps and the resonance frequencies, which can be found by solving the eigenvalue problem in equation (3.24), are also independent of w i I . There are various open questions which one could further explore. It would be interesting to consider second order hydrodynamic perturbation theory and examine what the second law implies for the transport coefficients in phases with spontaneous and explicit symmetry breaking. Additionally, it is important to examine how transport in such phases is constrained by purely field-theoretic considerations, such as the Ward identities, and also to investigate the possible experimental significance of the decoupled/incoherent currents we defined in this paper. Finally, it would be enlightening to move away from homogeneity and explore what kind of novel effects inhomogeneous models with similar symmetry breaking patterns might exhibit.

In the appendices we give the technical details of the construction of the perturbative solutions at each order in our ε expansion (3.1), which we carry out in Appendix A.1. Then, in Appendix A.2 we derive the constitutive relations for the currents by relating the field theory and horizon current densities of our holographic model.

A.1 Perturbations for χ I

After perturbatively expanding the equation of motion (2.17), we have From the form of the solution close to the conformal boundary at r → ∞, we can infer a relation between the sources ζ S I of the operators S I and their vevs δ S I . This will essentially give a Josephson type of equation for the variable δĉ I through equation (3.13). In the next subsections we will solve equation (A.1) in an ε expansion.

A.1.1 Field theory interpretation at order O(ε)

At order O(ε) we obtain the equation We integrate this equation for δχ I [1] while insisting on the near horizon behaviour (2.30). After doing so, we obtain the asymptotic behavior [1] δc I Demanding that the operator S I is not sourced at this order, we must have

A.2 Constitutive relations for the thermoelectric currents

In this subsection we will relate the horizon current densities (2.32) to the boundary quantities δJ i and δQ i that appear in the current conservation equation (3.12). The bulk electric current is defined as The equations of motion (2.2) imply Following [52], for any vector Λ µ in the bulk we can define the bulk two-form where Λ µ F µν = ∂ ν f + β ν , with β a 1-form and f a globally defined function.
After using the equations of motion (2.2), its divergence can be brought to the form

We now consider Λ^μ = ∂_t, and a general perturbation around the background ansatz (2.14) (not necessarily of the form (2.28)). The bulk heat current is defined as

δQ^i_bulk = √(−g) G^{ir} = U^2 √g g^{ij} [ ∂_r(δg_{jt}/U) − ∂_j(δg_{rt}/U) ] − a_t δJ^i_bulk = U^{1/2} √g [ 2K^i_t + U^{1/2} g^{ij} ∂_t δg_{rj} ] − a_t δJ^i_bulk ,   (A.12)

where we have used the result of Appendix B of [36] for the extrinsic curvature component. Writing t̂^μ_ν = −2K^μ_ν + X δ^μ_ν + Y^μ_ν, where X = 2K + · · · and Y are additional terms that come from the counterterms, we recognize t̂^μ_ν as the field theory stress tensor when evaluated on the boundary. Evaluating (A.12) at the boundary gives

where t^μ_ν = r^5 t̂^μ_ν. Note that the contribution from Y^i_t, as coming from (2.20), and the contribution from the term involving a time derivative are subleading even in the presence of sources. This result matches the expression for the boundary heat current obtained from the variation of the action in the presence of the sources as in (2.28),

where h_{μν} = g_{μν} − n_μ n_ν and n is the unit-norm normal vector. Furthermore, equation (A.11) implies the radial dependence

We now solve the horizon vector constraint at order O(ε^2) to write
15,181.6
2021-01-15T00:00:00.000
[ "Physics" ]
Allocating Tax Revenue to Sub-Central Government Levels: Lessons from Germany and Poland

Tax sharing arrangements provide considerable financial resources to sub-central government levels. This statement is true both for unitary and federal states, although tax revenue sharing mechanisms differ significantly across countries. The basic aim of this article is to compare the mechanisms adopted in Germany and in Poland. It assesses the degree of tax autonomy granted to sub-central government levels in the countries analysed, overviews the principles of apportionment of joint (shared) taxes and presents statistics on the tax revenue composition of sub-central government levels.

Introduction

Over the last few decades the theory of fiscal federalism has attracted a lot of scientists' attention. According to this theory, if there are no economies of scale then a decentralized pattern of public outputs reflecting differences in tastes across jurisdictions is welfare enhancing as compared to a centralized outcome (Oates, 2008, p. 314). In order to cover expenditures related to the fulfillment of decentralized functions, the sub-central government units require sufficient financial resources. In the case of restricted fiscal autonomy of these units, the financing of their public tasks in a decentralized system is based mainly on a mechanism of transfers from the central (federal) government. One of the main components of the fiscal autonomy of each government level is therefore taxing power. Governments may have three types of competences: in terms of tax legislation, in terms of tax revenue and in terms of tax administration. The decentralization of these competences may occur in various ways. For instance, the central government may be responsible for tax legislation and tax administration while tax revenue is apportioned between different levels of government; or, in a more decentralized system, local governments may be entitled to set their own tax rates, to impose a surtax on a central (federal) tax liability or even to choose their own tax base, at least within the limits allowed for by a common tax administration system (Boadway, Shah, 2009, pp. 86-87).

The basic aim of this article is to compare the tax autonomy of sub-central government levels and the tax sharing arrangements between different government levels in Germany and in Poland. Although Poland is classified as a unitary state, certain public responsibilities and competences are transferred to lower levels of government. As a result, Poland may be considered more fiscally decentralized than some federal countries. Germany, on the other hand, is a cooperative federation in which, for the majority of policy areas, the federal government sets the policy framework and the states are responsible for its implementation (Kedar, 2009, p. 172). There are significant differences in the scale of federal influence on sub-national governments between federal countries. Germany is amongst the countries where this scale is relatively large and the influence exercised on local governments is strong.
Methodology of the research

This article attempts to compare the taxing power of the public finance subsectors (different government levels) and the tax revenue sharing procedures in two European states which differ extensively with respect to the extent of decentralisation of the public finance sector, i.e. Germany and Poland. It distinguishes three categories of taxing power: power with respect to making tax law, power with respect to obtaining tax revenue and power with respect to managing tax collection (administering taxes). The degree of financial autonomy of a sub-central government unit depends mostly on the first two categories; therefore, they are the ones the article discusses most thoroughly. The methodology of this article is determined by the research topic and the research objectives. The scope of taxing power is specified in the legislation applicable in each of the two countries analysed. Hence, the first part of this article includes a legislative analysis of the regulations comprised in selected national acts. It focuses on provisions included in the Constitutions of both countries and in the acts specifying the sources of financing for local self-government units and defining the principles according to which tax revenues are shared. The relevant legislation is presented as of 31st December 2014. The first part of this article also reviews Polish and foreign literature on the topic analysed, along with the publications of the German Federal Ministry of Finance (Bundesministerium der Finanzen).

The second part of this article contains the analysis of statistical data with regard to tax and public revenue sources. This part includes the presentation of the structure of tax revenues by public finance subsector in Germany and Poland and a comparison of these structures in both countries. The shares of the different public finance subsectors in the joint tax revenues and in the total tax revenues of the public finance sector are calculated, along with the percentage of joint tax revenues in total tax revenues and the share of tax revenues in the total budgetary revenues for each subsector level. The statistical data used for these calculations come from the publications of the Supreme Audit Office (NIK) in Poland, the Central Statistical Office of Poland (GUS), the German Federal Ministry of Finance (Bundesministerium der Finanzen) and the Federal Statistical Office of Germany (Statistisches Bundesamt Deutschland).
Tax autonomy of federal, state and municipal governments in Germany In Germany one of the most important legal acts regulating the taxing powers of the Federation (Bund), the states (Länder) and the municipalities (Gemeinden) is the Constitution.The broadest competences to make taxation law are those of the federal government, which has the exclusive legislative power (ausschließliche Gesetzgebung) and concurrent legislative power (konkurrierende Gesetzgebung).Article 105 (1) of the German Constitution (Grundgesetz vom 23 Mai 1949) gives the Federation the exclusive power to legislate with respect to customs duties and fiscal monopolies.Taxes other than fiscal monopolies are regulated by the concurrent legislation.The Federation has the right to legislate taxes when the whole or the part of the revenue is allocated to the Federation or if the establishment of equal living conditions throughout the federal territory or the maintenance of legal or economic unity makes federal regulation necessary in the national interest.This means that with respect to other taxes, the states have the power to legislate as long as the Federation has not exercised its legislative power.In compliance with the provisions of Article 105 (2a) of the Constitution, the states have the power to legislate local taxes on consumption and expenditures as long as they are not substantially similar to taxes imposed by federal law.Moreover, the states have the power to specify the rates of the tax on the acquisition of immovable property (Grunderwerbsteuer).The municipalities (Gemeinden) do not have legislative powers, only the right to apply multipliers (Hebesätze) on the trade tax (Gewerbesteuer) and the real estate tax (Grundsteuer) within the limits specified by federal law.Legislation concerning other local taxes not regulated in federal acts of law is regulated in acts adopted by particular states. In Germany there is a distinction between the vertical (vertikale Steuerverteilung) and horizontal system of tax revenue distribution (hori-zontale Steuerverteilung).The aim of the vertical system is to divide the revenues within the federation -amongst states, municipalities or unions of municipalities.It involves ascribing whole revenues from certain kinds of taxes to the federation, states and municipalities and proportional shares in the so-called shared (joint) taxes.The horizontal system of tax revenue distribution means the redistribution of these resources amongst units belonging to the same public finance subsector. The rights with respect to obtaining tax revenues are regulated in Articles 106, 106a, 106b and 107 of the Constitution.On this basis the revenue from the following taxes is allocated to the Federation: excise on spirits (Branntweinsteuer), sparkling wines (Schaumweinsteuer), intermediate products (Zwischenerzeugnissteuer), sweet beverages containing alcohol, i.e. alkopops (Alkopopsteuer); tobacco tax (Tabaksteuer), energy tax (Energiesteuer), coffee tax (Kaffeesteuer), insurance tax (Versicherungsteuer), electricity tax (Stromsteuer), nuclear fuel tax (Kernbrennstoffsteuer), and air passenger tax (Luftverkehrsteuer).Apart from that the Federation receives tax revenues from the road haulage tax (Straßengüterverkehrsteuer), the motor vehicle tax (Kraftfahrzeugsteuer), and other kinds of taxes on transactions related to motorised means of transport, one-time capital and compensation levies, subsidiary levies on personal and corporate income taxes, e.g. 
the solidarity surcharge (Solidaritätszuschlag), and levies imposed due to the membership in the European Union.The next group of taxes are those which are allocated entirely to the states; these include: the inheritance and gift tax (Erbschaftsteuer/Schenkungsteuer), tax on the acquisition of immovable property (Grunderwerbsteuer), fire protection tax (Feuerschutzsteuer), betting and lottery tax (Rennwettund Lotteriesteuer), beer tax (Biersteuer) and gaming casinos levy (Abgabe von Spielbanken).As far as municipalities and unions are concerned, they receive tax revenues from the following taxes: the real estate tax (Grundsteuer), dog tax (Hundesteuer), beverage tax (Getränkesteuer), hunting and fishing tax (Jagdund Fischereisteuer), trade tax (Gewerbesteuer), secondary residence tax (Zweitwohnungsteuer), entertainment tax (Vergnügungsteuer), licensing tax on the sale of beverages (Schankerlaubnissteuer), and packaging tax (Verpackungsteuer). The most important taxes in the German tax system are shared taxes.Revenues from these taxes are divided between the Federation, the states and the municipalities.They include: the assessed income tax (veranlagte Einkommensteuer), wage withholding tax (Lohnsteuer), withholding tax on capital gains (Abgeltungsteuer), other withheld income taxes (nicht veranlagte Steuern von Ertrag), corporate income tax (Körperschaftsteuer), and value added tax (Umsatzsteuer).The statutory share of the Federation, the states and municipalities in the revenue from the shared taxes is presented in Table 1.Competences with respect to tax administration are regulated by Article 108 of the Constitution and the Tax Administration Act (Finanzverwaltungsgesetz vom 30. August 1971).The law grants the federal government authorities competences to administer duties, fiscal monopolies and consumption taxes where the principles concerning their collection are specified in federal acts; these levies include VAT on imported goods and other taxes on transactions related to motorised means of transport, collected since 1st July 2009, as well as taxes collected in relation to the European Communities. The remaining taxes are administered by the fiscal authorities of the states.The only exceptions are taxes where the revenue is allocated entirely or partially to the Federation.In compliance with Article 108 (3) of the Constitution these levies are administered by the states on behalf of the Federation.The states are free to choose the organisation of their administration and the only rule to follow when organising the administration is the principle of uniform taxation in compliance with the law (Ulbricht, 2008, p. 198). The statutory share in revenues from the shared taxes allocated to the states is divided amongst them in accordance with a statutory key.With respect to income taxes the principle of residence is taken into account.Thus, a state is given the income tax paid by a taxpayer if the taxpayer lives in this state or -in the case of corporations -if the corporation's management board is located in this state ( § 1 (1) Zerlegungsgesetz vom 6. 
August 1998).As a result the distribution of the revenues depends on the amount of tax revenues generated by a given state.In the case of the distribution of revenues from value added tax, the rules are different: as much as 75% of the revenue from VAT to which the states are entitled is distributed on the basis of the number of inhabitants of the relevant states.The remaining 25% is a subsidy which is divided depending on the so-called fiscal performance index (Steuerkraft).Calculating the fiscal performance index involves taking into account the share of a particular state in the revenues from the assessed income tax, wage withholding tax, withholding tax on capital gains, other withheld income taxes, corporate income tax, trade tax and the revenues of these states from the remaining taxes per inhabitant.A state is entitled to a subsidy if this index is lower than the average calculated for all the states ( § 2 (1) Finanzausgleichgesetz vom 20. Dezember 2001). Both the Federation and the states are entitled to a share in the trade tax.Municipalities are obliged to transfer a share of the tax revenues from that tax to the Federation and the states under Article 106 (6) of the Constitution.The amount to be transferred depends on the local multipliers imposed on the trade tax and the location of a particular municipality (whether it is located in a new or an old state).Moreover, since 1st July 2009, the states participate in the revenues from the motor vehicle tax collected by Federation. The share of a particular municipality in the revenue from income taxes depends on the taxable income and the tax paid by its inhabitants in a statutorily defined year.In the calculation of the revenues from the aforementioned taxes received by a municipality and in order to equalise the differences between municipalities whose inhabitants generate high income and those whose inhabitants generate low income in the case of each taxpayer only income up to a certain level is taken into account.The amount used in the calculation is 35,000 euros, in the case of individual tax settlement, and 70,000 euros, in the case of joint tax declaration (Gemeindeanteil 2014, p. 21). The apportionment of the revenues from value added tax is extremely complicated due to several amendments made in 1998 (implementation of the compensation for the abolition of the trade tax on business capital that accrued to the municipalities) and in 2007 (increase of the standard VAT rate) (Englisch, Tappe, 2011, p. 285).When distributing the municipalities share in the revenues from value added tax amongst particular municipalities, the following elements are taken into account (in statutorily regulated proportions): the revenue from the trade tax, the number of employed obliged to pay insurance contributions, the amount of wages paid from which social insurance contributions must be made. 
Local tax authority and the tax revenue sharing mechanism in Poland Unlike in Germany, in Poland the provisions of the Constitution do not regulate in detail the scope of taxing power of public finance subsectors (the Constitution of the Republic of Poland of 2 April 1997; Konstytucja Rzeczypospolitej Polskiej z dnia 2 kwietnia 1997 roku).Article 16(2) of the Constitution includes, however, the principle of independence of local selfgovernments.In compliance with this principle local self-government units participate in exercising public power and are obliged to perform public tasks assigned to them on its own behalf and responsibility.The execution of these tasks is possible thanks to local self-governments' share in public revenue, including, according to Article 167(2) of the Constitution, their own revenues, general subsidies and specific grants designated from the state budget. In Poland competences in the field of tax law-making are vested in the central government.The authority equipped with the power to enact tax law is Parliament.In principle the units of local self-government and their organs do not have the power to make tax law.However under Article 168 of the Constitution the units of local self-government have the right to establish the rates of local taxes and charges in compliance with statutory regulated rules.As far as municipalities (gminy) are concerned their competences with respect to tax law are implemented primarily by means of tax resolutions adopted by the municipal council (Popławski, 2009, p. 40). The provisions of the Local Taxes and Fees Act of 12 January 1991 (Ustawa z dnia 12 stycznia 1991 roku) entitle the municipal councils to determine the rates of real estate tax, motor vehicle tax, market duty, visitor's and resort duties, as well as the duty on dog owners.The rates may not exceed the upper limits specified in the statute.Municipal councils also have the right to determine the rules for the collection and time of payment of these taxes and duties, and to introduce tax deductions and exemptions.Moreover under Article 6(3) of the Farm Tax Act of 15 November 1984 (Ustawa z dnia z dnia 15 listopada 1984 roku), municipal councils may influence the amount of the farm tax by reducing the purchase prices of rye taken as a basis for calculating the tax, and -on the basis of Article 4(5) of the Forest Tax Act -influence the amount of forest tax by reducing the average selling price of wood taken as the basis for calculating the forest tax (the Act of 30 October 2002; Ustawa z dnia 30 października 2002 roku).Most of the taxes in Poland constitute the source of revenue for the state budget.They include: value added tax, the excise duty on the following products: alcoholic beverages, tobacco products, energy and electricity, passenger cars, the gambling tax, flat-rate tax on registered income, tax on sale of securities, flat-rate tax on income of the clergy and the mineral extraction tax.The Polish tax system does not allow for any taxes which would constitute revenues of the districts (counties; powiaty) or regions (provinces; województwa samorządowe).The sources of revenue for municipalities, on the other hand, are such taxes as: the real estate tax, farm tax, forest tax, motor vehicle tax, gift and inheritance tax, tax on civil law transactions and the fixed sum tax on the business activity of individuals (so called tax card assessment).Apart from that the following duties also contribute to the municipalities' budgets: the stamp duty, market duty, visitor's and 
resort duty, duty on dog owners and a share in the mineral exploitation duty. Local taxes are administered by local authorities: the village head, the mayor or the city president. The only exceptions are the gift and inheritance tax, the tax on civil law transactions and the fixed sum tax on the business activity of individuals, which constitute sources of revenue for municipalities but are administered by the central government authorities, i.e. the heads of tax offices. All other taxes are administered centrally.

In Poland personal and corporate income taxes are shared taxes, i.e. revenues from them are distributed amongst the state, regions, districts and municipalities. The share of the public finance subsectors in the revenue from these taxes is presented in Table 2. In compliance with Article 9 of the Act of 13th November 2003 on the Revenues of Local Self-Government Units (Ustawa z dnia 13 listopada 2003 roku), the amount of a municipality's share in the revenue from personal income tax is calculated by multiplying the total amount of the revenue from this tax by 0.3934 and by an index equal to the ratio of the personal income tax due from persons resident in a given municipality in the year preceding the base year to the total amount of the tax due from all taxpayers in the same year. In the case of a district, the amount of its share in the revenue from personal income tax is calculated by multiplying the total amount of the revenue from this tax by 0.1025 and by an index equal to the ratio of the personal income tax due from persons resident in a given district in the year preceding the base year to the total amount of the tax due in the same year. In the case of regions, this share is calculated by multiplying the total amount of the revenue from this tax by 0.0160 and by an index equal to the ratio of the personal income tax due from persons resident in a given region in the year preceding the base year to the total amount of the tax due in the same year. These indices are established on the basis of statistics from tax returns submitted on the amount of income and annual tax calculations made by taxpayers as of 15th September of the base year.

The amounts of the shares of regions, districts and municipalities in the revenue from corporate income tax depend on the number of taxpayers having registered offices or facilities in their territories. If a corporate income taxpayer has a facility on the territory of a local self-government unit other than the one where its registered office is located, part of the revenue from the share in this tax is transferred to the budget of the local self-government unit in which this facility operates, proportionally to the number of people employed there under a contract of employment. In the case of a corporate income taxpayer conducting business through a foreign facility located in the Republic of Poland, part of the revenue from the share in this tax is transferred to the budget of the local self-government unit where the employees of this taxpayer or of its foreign facility perform work under a contract of employment, proportionally to the number of people employed by the taxpayer or by this foreign facility located in the Republic of Poland.
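As a numerical illustration of the personal income tax (PIT) sharing rule just described, the following sketch multiplies the national PIT take by the statutory rate of the given tier and by the index reflecting the PIT due from residents of that unit. The statutory multipliers are those quoted in the text; the revenue figures and the local index in the example are invented for illustration only.

```python
# Rough sketch of the Polish PIT sharing rule described above.
STATUTORY_PIT_SHARE = {
    "municipality": 0.3934,
    "district": 0.1025,
    "region": 0.0160,
}

def pit_share(total_pit_revenue: float, tier: str,
              residents_tax_due: float, national_tax_due: float) -> float:
    """Revenue of one sub-central unit from the shared personal income tax."""
    index = residents_tax_due / national_tax_due  # unit's share of PIT due nationwide
    return total_pit_revenue * STATUTORY_PIT_SHARE[tier] * index

if __name__ == "__main__":
    # Hypothetical example: a municipality whose residents account for 0.5% of the
    # PIT due nationally, with 80 bn PLN of national PIT revenue.
    revenue = pit_share(80_000_000_000, "municipality", 400_000_000, 80_000_000_000)
    print(f"municipality PIT share: {revenue:,.0f} PLN")
```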
Tax revenue assignment in Germany and Poland in the years 2009-2013 Both in Germany and in Poland taxes constitute an important source of the public revenue.However their significance varies considerably for different public finance subsectors.In Germany the share of tax revenue in the total revenue of the Federation and the states in 2009-2013 was similar (Table 5).The basic source of tax revenue for both the Federation and the states are shared taxes, which constitute about 74% of the total tax revenue of the public finance sector (without taking into account the subsector of social insurance) and about 80% of the tax revenue both for the Federation and the states.From Table 3 it can be concluded that the dominant source of revenue for the Federation is the value added tax.This is related not only to its collection efficiency but also to a considerable statutory share of the Federation in the revenue from this tax.A significant source of revenue for the Federation are also excise duties, the revenue from which account for over 20% of its tax revenue.The most efficient are the excise duties imposed on energy and tobacco products. The tax revenue of the Federation are reduced by amounts transferred to the budgets of other entities of the public finance sector and the budget of the European Union.The amounts deducted from these revenues include their own resources of the EU -both VAT-and GNI-based, as well as supplementary federal grants (Bundesergänzungszuweisungen).Moreover the Federation gives part of its revenue from the taxation of mineral oil to the states under Article 5 of the Regionalization Act (Regionalisierungsgesetz vom 27. Dezember 1993).These resources are used to finance inter alia railway transport.The federal budget also pays the states a percentage of revenue from the motor vehicle tax.The states' share in the revenue from shared taxes and tax revenue in total is slightly lower than the share of the Federation (from 3 to 5 percentage points).The states obtain their own tax revenues mainly from the tax on the acquisition of immovable property, inheritance and gift tax and the excise duty on beer.However the largest revenue of the states comes from their share in value added tax. In the case of municipalities both the proportion of the revenue from the shared taxes in the total tax revenue and of the total tax revenue in the total revenues is lower than in the case of the Federation and the states.The share of municipalities in the revenues from the shared taxes is insignificant and does not exceed 8%.The budgets of municipalities are supplied mostly by subsidies and grants.The largest tax revenue sources of municipalities are their share in the wage withholding tax and the trade tax.Source: Data from Table 3. 
In Poland the share of tax revenue in public revenue is relatively high in the case of the state budget (Table 6). The most important sources of tax revenue are value added tax and the excise tax, the revenues from which account for about 73% of all the tax revenue of the state budget. The percentage of shared taxes in this budget is definitely lower than in Germany. It must also be emphasized that, unlike in Germany, in Poland the value added tax is not a shared tax. At the same time, the proportion of income taxes (which are shared taxes) in the tax revenue of the Federation in Germany was, in the years 2009-2013, 12.5 to 18.0 percentage points higher than the proportion of income taxes in the tax revenue of the state budget in Poland; one of the reasons is the higher collection efficiency of German income taxes. The regional level of local self-government in Poland does not have any own tax revenue; it has a share in the shared taxes, but it is insignificant. In the case of districts, which also do not have their own tax revenue, the proportion of revenue from the shared taxes in the total is even lower.

Municipalities and cities with district rights have their own tax revenue, which also includes shares in the shared taxes. It must be noted that the share of municipalities and cities with district rights in revenues from personal income tax is relatively high. As a result, municipalities in Poland are entitled to a higher share in revenue from shared taxes than in Germany. The situation is different in the case of the share of particular levels of the public finance sector in tax revenue: for German municipalities this share is higher than in Poland. This is a result of the higher collection efficiency of local taxes in Germany. One of the most important sources of tax revenue for German municipalities is the trade tax, while the most efficient source of tax revenue for Polish municipalities is the real estate tax.

Conclusions

The taxing powers of German and Polish sub-central government units differ considerably. These differences are especially visible when comparing the tax competences of regions and districts in Poland with those of the states in Germany. It must be added that the sources of revenue and their structure are adjusted to a greater or lesser extent to the need for public resources and to the public tasks performed. Due to systemic differences between the two countries, the tasks of Polish regions and districts are different from those of German states.

States in Germany have relatively broad taxing powers. The Constitution provides them with the power to legislate with regard to local taxes on consumption and expenditures so long and insofar as such taxes are not substantially similar to taxes regulated by federal law. German states receive revenue from selected wealth and sales taxes. Moreover, the Federation and the states have a similar level of revenue from shared taxes, which results in a comparable level of their shares in tax revenue. In Poland, regions and districts have no legislative powers with respect to tax law. They do not generate any of their own tax revenue but do participate in the revenue from the shared taxes. However, their share in tax revenue, which includes only the revenue from shared taxes, is insignificant. Therefore, their most basic sources of funding are general subsidies and specific grants.
The taxing power of municipalities is significantly restricted both in Germany and in Poland. The legislative powers of municipalities with respect to tax law are limited to deciding the rates of some local taxes (only within statutory restrictions). Municipalities have no power to independently impose taxes and shape the elements of the overall tax design. German municipalities are entitled to a lower share in the revenue from shared taxes than in Poland, but their share in the tax revenue of the public finance sector is still higher. A significant part of the tax revenue of municipalities in Germany comes from the relatively efficient trade tax.

A comprehensive evaluation of the scope of independence of local self-government units in both countries would require taking into account not only the level and structure of their tax revenue but also their revenue from other sources and the level and structure of their expenditure, taking into account the tasks performed at the various levels of the public finance sector.

Table 1. Statutory share of the Federation, states and municipalities in revenues from shared taxes in Germany. Source: Das bundesstaatliche Finanzausgleich (2014, p. 3).
Table 2. Statutory share of public finance subsectors in revenues from shared taxes in Poland. Note: The share of the municipalities in the revenues from personal income tax decreases by the number of percentage points equal to the product of 3.81 percentage points and the index calculated for the whole country. The index is established by dividing the number of inhabitants admitted to residential homes before 1st January 2004, as of 30th June of the base year, by the number of inhabitants admitted by 1st January 2004, as of 31st December 2003. The participation share of municipalities in the revenues from personal income tax was 35.72% in 2004, 35.61% in 2005, 35.95% in 2006, 36.22% in 2007, 36.49% in 2008, 36.72% in 2009, 36.94% in 2010, 37.12% in 2011, 37.26% in 2012, 37.42% in 2013, and 37.53% in 2014.
Table 4. Tax revenues of the state, regions, districts, cities with district rights and municipalities in Poland in the years 2009-2013 (in million PLN).
Table 5. Shares of the Federation, states and municipalities in tax revenues and revenues from shared taxes in Germany in the years 2009-2013.
Table 6. Shares of the state, regions, districts, cities with district rights and municipalities in tax revenues and revenues from shared taxes in Poland in the years 2009-2013.
6,648.6
2016-12-28T00:00:00.000
[ "Economics" ]
An Enhanced TSA-MLP Model for Identifying Credit Default Problems

Credit default has always been one of the critical factors in the development of the personal credit business. By establishing a default identification model, default can be avoided effectively. There are some existing methods to identify credit default; however, these methods have several problems: (1) it is difficult to deal with non-linear data, (2) local stagnation results in a high error rate, and (3) premature convergence leads to a low classification rate. In this paper, the sinhTSA-MLP default risk identification model is proposed to solve these problems. In this model, the proposed sinhTSA method can effectively avoid falling into local optima and premature convergence, and the benchmark test results demonstrate that sinhTSA is superior to other methods. In the two experiments, the classification rate reaches 77.35% and 96.48%. Therefore, the sinhTSA-MLP default identification model has particular advantages in identifying credit problems. The feasibility of the sinhTSA-MLP default identification model has been demonstrated, helping to manage credit default more consciously.

Introduction

Credit risk, also known as default risk, refers to the risk of economic loss caused by the failure of the counterparty to fulfill the obligations in the agreed contract (Oreski & Oreski, 2014). When trustees ignore the obligation to repay principal and interest, the expected return of the grantee may deviate from the actual return, which is the primary type of financial risk (Ramos-Tallada, 2015). In recent years, research on credit default has been limited while international economic growth has been weak. Among banks, the management and control mechanisms for credit default are not yet advanced, and insufficient information means that research on credit default is confronted with many challenges (Butaru et al., 2016). Therefore, there is a need to create a systematic prevention and control mechanism. That is to say, more research on credit default, the establishment of a forward-looking default supervision system, and the reinforcement of default prediction and default identification under supervision are necessary. With the increasingly diversified assets of financial institutions and the rise of internet finance, the management of credit assets becomes more complex, exposing bank credit to more defaults (Ozbayoglu et al., 2020). The loss of bank capital caused by credit default not only affects the survival and development of the bank itself, but also causes chain reactions through correlated institutions (Bhattacharya et al., 2020). To achieve risk control, traditional financial institutions often build credit scoring models based on rules or statistical analysis of historical data. The traditional customer credit default identification system has struggled to meet actual needs. Therefore, the proposal of an appropriate and effective method to identify credit default has become a major focus of both scholars and practical applications (Wang, Ma, et al., 2014). Many methods have been proposed to address credit default identification problems (Chou et al., 2017; Liu et al., 2019), which can be divided into three categories: specialized judgments (Huang et al., 2005), statistical methods and Computational Intelligence (CI) approaches (Chou et al., 2017).
Supervised learning, the most common CI approach to credit default identification, performs better than traditional methods (Carcillo et al., 2021; Zhang et al., 2021). At present, the traditional methods include Random Forest (RF; Rao et al., 2020), Decision Tree (DT; Abri Aghdam et al., 2021), Logistic Regression (LR; Fan et al., 2020), Support Vector Machines (SVM; Danenas & Garsva, 2015) and so on. Credit default identification, which belongs to supervised learning in machine learning, has an explicit target variable, namely the type of customer (Jordan & Mitchell, 2015; Ping & Yongheng, 2011). With the evolution of artificial intelligence, artificial neural networks (ANN) are employed to forecast credit default (Lopez-Garcia et al., 2020; Thomas, 2000). The construction process of supervised learning is the same as that of the traditional method, which follows the basic business norms (Rtayli & Enneya, 2020). The independent and dependent variables, including many typical indicators and good or bad samples, are determined by the business needs. In essence, machine learning does not change the existing credit business in practical applications, and there is some room for its promotion and application. Owing to the complex and nonlinear characteristics of the financial market, these methods generally have limitations in timeliness and data integrity, so that comprehensive and reliable risk control cannot be achieved. What's more, the phenomena of local stagnation and premature convergence constantly emerge (Malhotra & Malhotra, 2003; Pławiak et al., 2019). Consequently, traditional statistical methods are not suitable for the analysis of complex, high-dimensional and noisy data. To summarize, in the big data environment and given the features of massive transaction data, improved artificial neural networks are emerging which can handle high-dimensional, complex data (Falavigna, 2012; Meng et al., 2021), and they have been applied in the fields of intelligent finance and big data risk control. In the financial setting, such a model can be trained to quickly identify credit default.

The Multi-Layer Perceptron (MLP) method has high accuracy in classification problems and real-world problems (Feng et al., 2020; Mohammadi et al., 2021). The MLP with one hidden layer is a typical ANN architecture. It needs to be optimized since it often falls into local optima when dealing with massive data (Meng et al., 2021). In solving specific problems, traditional swarm intelligence methods are relatively complete and mature (Ertenlice & Kalayci, 2018). This motivated us to utilize swarm intelligence methods to optimize the weights and bias terms of the MLP to achieve optimal performance (Meng et al., 2021; Mirjalili et al., 2012). Because of these merits, the MLP can effectively help prevent credit risks and reduce the non-performing loan ratio. The MLP follows the development trend, giving full play to new advantages such as the internet and big data, which compensates for the shortcomings of traditional default identification methods and provides more space for improving the ability of credit default identification. In this paper, a swarm intelligence optimization method optimizes the weights and biases of the MLP used in the field of credit default identification, which makes up for the limitations of traditional techniques.
Tree-Seed Algorithm (TSA) is a swarm intelligence optimization method that imitates the propagation relation between trees and seeds (Kiran, 2015). At present, the TSA method has been applied in many fields: engineering optimization (Jiang et al., 2020a), the symmetric traveling salesman problem (Cinar et al., 2020), and the optimal power flow problem (El-Fergany & Hasanien, 2018). Compared with traditional and some metaheuristic methods, it offers room for research and development due to characteristics such as fewer parameters (Kiran & Hakli, 2021) and ease of implementation (Jiang et al., 2020b). Meanwhile, it has attracted the attention of many scholars, and many variants have been proposed to improve its performance and solve real-world problems, as shown in Table 1. These variants have also produced very significant results in their areas. It is worth mentioning that the basic TSA is not sufficient to optimize an MLP processing massive data, so it is necessary to propose a TSA variant to enhance the performance of TSA and further optimize the MLP. Here, we can state the motivation of this paper:

• The MLP credit default risk identification model can effectively identify credit default problems.
• The TSA variant improves the MLP performance to obtain a precise classification rate.

From the two aspects of the MLP and the swarm intelligence optimization method, this paper aims to build a credit default identification model based on an intelligent optimization method. To achieve the above research objectives, the following research contents are drawn up. Firstly, based on the TSA, candidate and adjustment mechanisms are introduced to propose the TSA variant called sinhTSA. Secondly, the sinhTSA-MLP credit default identification model is proposed to obtain the default results and improve the final classification rate and accuracy. Thirdly, the feasibility and effectiveness of the proposed sinhTSA-MLP default identification model are verified on financial credit identification data.

Theory

With the rapid development of personal consumer loans, default events commonly occur. In fact, there are many factors influencing credit default. Due to asymmetric information, banks and other lenders are in a relatively weak position, while borrowers are the opposite. Banks often do not have knowledge of the borrower's repayment motivation, repayment ability and "project risk" (Ma, 2020). Currently, in the risk control process, the first and second lines of defense of banks are usually triggered by the emergence of obvious risk indicators. This is a typical stop-loss risk control method, applied, for example, when loans are overdue for too many days or contact with borrowers is lost. There is a certain lag in such risk identification and remedial measures. According to the possibility of repayment, loan risk is classified into default and non-default, so as to reveal the real value of the loan. Personal credit default is related to the characteristics of individual loans (Zhang, 2013), including age, education level, length of service, residence, family income, loan-to-income ratio, credit card debts, other debts, sex, the value of fixed assets, loan term, whether there is a mortgage, the family structure and so on. For example, women prefer stability and are less likely to choose default than men.

Data Mining Techniques

Logistic regression (LR) can predict the credit risk of small and medium-sized enterprises for financial institutions (Zhu et al., 2016) and consumer default risk (Costa e Silva et al., 2020).
Naive Bayes (NB) is a classification method based on Bayes' theorem and the assumption of conditional independence of features (Chen et al., 2020). The two most widely used classification models are the decision tree model (Zhou et al., 2021) and the naive Bayesian model (NBM; Yager, 2006). The prediction of first payment default (FPD) loans has been addressed with NB (Koç & Sevgili, 2020). K-Nearest Neighbor (KNN) not only applies to consumer credit risk (Kruppa et al., 2013) but can also be applied to bank loan default prediction (Arora & Kaur, 2020; Kou et al., 2014). Consequently, it can be concluded that KNN has great prospects in predicting credit default. This paper uses a hybrid model to identify credit risk. A hybrid model refers to using relevant data to generate several learners based on certain rules, and then integrating these learners into one model through a model integration strategy. In the model output stage, the results of each learner are fused using pre-determined judgment criteria, and the final output is the output of the hybrid model. Through the proposed model, this paper selects two data sets and judges whether a customer defaults by taking the influencing factors as the input.

Tree-Seed Algorithm

The Tree-Seed Algorithm (TSA) is a heuristic method that simulates the propagation behavior between trees and seeds. It has the following essential parts. Firstly, trees are generated in the initialization phase. Secondly, seeds are generated around the parent trees, controlled by the search tendency (ST). Thirdly, when the fitness value of a seed is less than that of the tree (i.e. better, for minimization), the tree is updated by the seed. Finally, when the maximum number of iterations is reached, the global optimum found so far is returned.

Table 1. TSA variants proposed in the literature:
• (2021), NTSA: a novel algorithm based on four different algorithmic approaches, using two different solution-generating mechanisms in order to improve the balance between local and global search abilities.
• Jiang et al. (2020a), fb_TSA: through a feedback mechanism, the ST and the number of seeds are dynamically adjusted to achieve a balance between exploration and exploitation.
• Jiang et al. (2020b), STSA: an adaptive automatic adjustment mechanism and a new initialization for the number of seeds to balance exploration and exploitation.
• Ding et al. (2020), C-Jaya-TSA: a hybrid swarm intelligence technique based on Jaya and TSA.
• Jiang et al. (2020c), TSASC: two features from the sine-cosine method are integrated into the TSA to balance exploration and exploitation.
• CTSA: application of Deb's rules to solve constrained optimization problems.
• Cinar and Kiran (2018), LogicTSA, SimTSA, SimLogicTSA: logic gates (LogicTSA) and similarity measurement techniques (SimTSA) are used to improve the performance.
• Kiran (2017), TSAWP: a new control parameter named the withering process (WP) to enhance the performance.
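The four TSA steps listed above (initialise trees, generate seeds around each tree under the search tendency ST, replace a tree by its best seed when the seed is fitter, repeat) can be sketched as follows. The objective function and all parameter values here are illustrative choices, not the settings used in this paper.

```python
# Minimal sketch of the basic Tree-Seed Algorithm (minimization).
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def tsa(obj, dim=10, pop=20, st=0.1, iters=500, low=-5.0, high=5.0, rng_seed=0):
    rng = np.random.default_rng(rng_seed)
    trees = rng.uniform(low, high, (pop, dim))
    fits = np.array([obj(t) for t in trees])
    best = trees[fits.argmin()].copy()
    for _ in range(iters):
        for i in range(pop):
            n_seeds = rng.integers(max(1, pop // 10), max(2, pop // 4) + 1)
            seeds = np.empty((n_seeds, dim))
            for s in range(n_seeds):
                r = rng.integers(pop)            # a random tree (may be the tree itself here)
                alpha = rng.uniform(-1, 1, dim)
                use_best = rng.random(dim) < st  # ST controls pull towards the best tree
                seeds[s] = np.where(use_best,
                                    trees[i] + alpha * (best - trees[r]),
                                    trees[i] + alpha * (trees[i] - trees[r]))
            seeds = np.clip(seeds, low, high)
            seed_fits = np.array([obj(s) for s in seeds])
            j = seed_fits.argmin()
            if seed_fits[j] < fits[i]:           # tree replaced by its best seed
                trees[i], fits[i] = seeds[j], seed_fits[j]
        best = trees[fits.argmin()].copy()
    return best, fits.min()

if __name__ == "__main__":
    x, f = tsa(sphere)
    print("best fitness:", f)
```

In the credit default setting, the sphere function would be replaced by the MLP fitness described in the next subsection, so that each "tree" is a candidate vector of network weights and biases.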
Multi-Layer Perceptron

The Multi-Layer Perceptron (MLP) has been widely applied to financial problems. For example, it obtains superior outcomes for the bankruptcy prediction of Iranian companies (Mokhatab Rafiei et al., 2011). The MLP is one type of Artificial Neural Network (ANN; Turkoglu & Kaya, 2020). In addition to the input and output layers, it can have multiple hidden layers between the input layer and the output layer. Equations (1)-(4) describe the complete MLP computation. Firstly, calculate the weighted sum of the inputs by equation (1). Secondly, calculate the output of the hidden nodes by equation (2). Thirdly, combine the hidden node outputs to get the final output by equations (3) and (4). Here n is the number of input nodes, W_ij is the weight linking the ith input layer node and the jth hidden layer node, X_i is the ith input, w_jk is the weight connecting the jth hidden node and the kth output node, and θ_j and θ_k are the biases of the jth hidden node and the kth output node, respectively.
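A minimal sketch of this computation is given below, with all weights and biases stored in one flat vector, which is the quantity a swarm optimizer such as (sinh)TSA would tune; the fitness is the mean squared error between desired and actual outputs, in the spirit of the MSE criterion introduced later in the paper. The layer sizes and the toy data are illustrative assumptions, not the paper's settings.

```python
# MLP forward pass (equations (1)-(4)) evaluated from a flat parameter vector theta.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(theta, n_in, n_hid, n_out):
    """Split a flat parameter vector into (W, theta_h, w, theta_o)."""
    i = 0
    W = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    bh = theta[i:i + n_hid]; i += n_hid
    w = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    bo = theta[i:i + n_out]
    return W, bh, w, bo

def mlp_forward(X, theta, n_hid=8):
    n_in, n_out = X.shape[1], 1
    W, bh, w, bo = unpack(theta, n_in, n_hid, n_out)
    H = sigmoid(X @ W + bh)        # equations (1)-(2): weighted sums and hidden outputs
    return sigmoid(H @ w + bo)     # equations (3)-(4): output layer

def mse_fitness(theta, X, y, n_hid=8):
    out = mlp_forward(X, theta, n_hid)
    return float(np.mean((out.ravel() - y) ** 2))   # averaged squared error

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.random((50, 23))                # e.g. 23 credit attributes per customer
    y = (X[:, 0] > 0.5).astype(float)       # toy default / non-default labels
    n_params = 23 * 8 + 8 + 8 * 1 + 1
    theta = rng.uniform(-1, 1, n_params)
    print("MSE fitness of a random parameter vector:", mse_fitness(theta, X, y))
```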
Proposed sinhTSA

The candidate mechanism. The candidate mechanism is introduced to enhance global diversity and the ability of exploration. It can also adjust the convergence speed relative to the original TSA, guiding the search towards the global optimum.

An adjustment mechanism varying with the iteration. The main contribution is the definition of suitable hyperbolic coefficients, i.e. the dynamic regulation of the expansion factor coefficient with the iteration, inspired by the hyperbolic function as shown in equation (7). The hyperbolic coefficients (k_1, k_2) change with the number of iterations and are updated by equations (8) and (9), which involve sinh and the ratio iter/maxiter. In the basic TSA, the seed generation mechanism results in premature convergence, while the tree update mechanism leads to local stagnation. The hyperbolic coefficients can reduce local stagnation and increase global diversity to a certain extent. The adjustment coefficients k_1 and k_2 decrease to negative values as the iterations proceed; towards the end of the run they help to fine-tune the current search area in order to find the global optimum.

Through the above analysis, the new seed production mechanism is an effective combination of the two mechanisms. Equation (10) utilizes the candidate and adjustment mechanisms to generate a seed from the current and random trees when ST is less than a random constant. This generation method increases global search diversity and decreases the possibility of local stagnation. Otherwise, equation (11) computes the seed generation position; this mechanism generates seeds from the current, best and random trees. It can efficaciously increase the accuracy of finding the global optimum, avoid premature convergence, and accelerate the convergence speed.

The IEEE CEC 2014 benchmark test functions, which include unimodal, multimodal, hybrid, and composition functions, are used to verify the performance of the sinhTSA. To fully test the search precision and convergence rate of the proposed sinhTSA, eight representative methods are employed for comparison: GA (Holland & Reitman, 1977), ABC (Karaboga & Ozturk, 2011), BA (Yang & He, 2013), DE (Wang, Li et al., 2014), SCA (Mirjalili, 2016), BOA (Arora & Singh, 2019), EST-TSA (Jiang et al., 2019), and STSA (Jiang et al., 2020b). The parameter settings of the eight methods are shown in Table 2. Table 3 shows the experimental results on the standard test set, with the best results in bold; the smaller the value, the better the performance. Table 4 shows the Wilcoxon rank sum test, a nonparametric test of whether there is a significant difference between the distributions of the populations from which two paired samples come. When the p-value is less than α, it indicates that the sinhTSA is superior to the other algorithms. It can be seen from Tables 3 and 4 that the sinhTSA has advantages in dealing with complex problems. Hence, the sinhTSA can train the MLP and form a model to identify credit default.

The sinhTSA-MLP Credit Default Identification Model

We use sinhTSA to construct the credit default identification model. Through the continuous optimization of the weights and biases, the performance of the MLP is improved, which enhances the classification rate and reduces the error rate.

Parameter Setting and Criteria

The sinhTSA-MLP is compared with other methods to verify its ability to identify credit default. Table 4 shows the comparison methods' parameter settings. Meanwhile, reasonable evaluation criteria are of great importance; in this paper, equation (12) is used, where m is the number of outputs and d_i^k and o_i^k are the desired output and the actual output of the ith output node for the kth training sample. The average MSE over all training samples is used to ensure efficiency and is computed by equation (13), where s is the number of training samples. There are some uncertainties in training the MLP, and the MSE for the sinhTSA method is consistent with equation (14). The error rate is one of the criteria for evaluating the model (Jain & Duin, 2000; Lessmann et al., 2015), and it is not sensitive to the classification accuracy of the model. Therefore, the error rate is used as a criterion in this paper.

Description of the Data

Taiwan data set. This data set was selected because it is widely used and compares the predictive accuracy of default probability among six data mining methods. The data, from an important bank (a cash and credit card issuer) in Taiwan covering April to October 2005, are adopted to train and verify the proposed default identification model. The data come from the UCI repository and include a total of 25,000 observations, of which 5,529 observations (22.12%) are cardholders with default payments. In this paper, binary variables are used (Yes = 1 and No = 0). As shown in Table 5, 23 variables are selected as inputs and the default indicator is selected as the output (Steenackers & Goovaerts, 1989; Yeh & Lien, 2009).

South German credit (UPDATE) data set. The data, donated by the German professor Hans Hofmann via the European Statlog project, are obtained from the UCI repository. The 20 variables (status, duration, credit history, purpose, amount, savings, employment duration, installment rate, personal status sex, other debtors, present residence, property, age, other installment plans, housing, number credits, job, people liable, telephone, and foreign worker) are selected as inputs and the default indicator is selected as the output (Fahrmeir & Hamerle, 1981).

For the Taiwan data set, one half of the data is used to train the model, and the remaining data are used to validate the model (32.43%). For the South German credit (UPDATE) data set, 50% of the data are used for training and 50% for verification. To reduce the impact of variable inconsistency, this paper preprocesses the data following (Yu et al., 2012).

Empirical Analysis and Suggestion

According to the processed data, we can see the distribution of gender, marriage, education, and age among the defaulting customers. For the Taiwan data set, Figure 1 shows the proportion of customers in different situations.
Through the analysis of the experimental results, it can be seen that the features of defaulting users are uneven, and women are more likely to default than men. Customers with graduate-school education tend to default, and customers of different ages have different degrees of default risk. For the South German credit (UPDATE) data set, Figure 2 shows some characteristics of the debtors. The experimental results show that the features of defaulting users are uneven, and married men are more likely to default than divorced men. Debtors who rent their housing and those with no savings account tend to default.

Analysis and Discussion of the Credit Default Identification Through the sinhTSA-MLP

In order to verify the performance of the proposed model in identifying credit default, several models are compared, namely PSO-MLP (Mirjalili et al., 2012), DE-MLP (Wang, Li, et al., 2014), TSA-MLP (Kiran, 2015), GWO-MLP (Mirjalili, 2015), SCA-MLP (Mirjalili, 2016), GA-MLP (Singh & De, 2017) and AGWO-MLP (Meng et al., 2021). In this section, sinhTSA is combined with the MLP to identify credit default. The final classification rate and the error rate are the criteria used to evaluate the performance of the sinhTSA-MLP.

Taiwan data set. From Table 6, the sinhTSA has the highest final classification rate and the lowest error rate. This is because sinhTSA has strong exploration ability and the ability to avoid local optima. In short, on this data set, the classification accuracy and performance of the credit default identification model using the sinhTSA method are improved.

South German credit (UPDATE) data set. From Table 7, the sinhTSA-MLP has the highest final classification rate and the lowest error rate. The GWO-MLP, DE-MLP, PSO-MLP, SCA-MLP, and AGWO-MLP have the same classification rate but different error rates. It can be seen that the algorithms differ in convergence, exploration and exploitation when updating the MLP. From the results, it can be concluded that sinhTSA has a strong exploration ability that avoids local stagnation, and it can effectively update the weights and biases of the MLP to improve the classification rate.

It can be seen from Tables 6 and 7 that sinhTSA-MLP has clear advantages in credit default identification. Compared with the basic TSA-MLP, GWO-MLP, AGWO-MLP, DE-MLP, PSO-MLP, SCA-MLP, and GA-MLP, sinhTSA-MLP obtains a higher classification rate and a lower error rate. The error rate results show that sinhTSA has strong global search ability, effectively balances exploration and exploitation, and avoids local stagnation, thereby improving the convergence speed. These abilities enhance the performance of the sinhTSA-MLP credit default identification model. The error rate also indicates that overfitting can occur in the classification process. Nevertheless, a precise classification rate can be obtained, which demonstrates that sinhTSA-MLP is an effective credit default identification model.

Countermeasures and Suggestions

Based on the above experimental results and discussion, the following countermeasures and suggestions are put forward. Bank managers should establish a predictive risk supervision and management system to strengthen the control of credit default. A systematic and comprehensive risk response mechanism should be built to strengthen the industry's ability to handle and respond to uncertainty. The default risk of non-performing loans in commercial banks can be effectively mitigated by increasing the risk reserve where the system is imperfect.
Different approval procedures and mechanisms, such as different repayment periods, loan limits, and customer categories, should be set for customers with different economic strengths. In order to handle the banking business, a customer credit database should be built to record the monthly repayment situation. From the perspective of the policy-making level, a knowledge system should be formed based on the accumulation of practical application data. Besides, policy terms should be updated in a timely manner to prevent new credit risks. Taking the macroeconomy as the premise, it is advisable to measure the current and long-term risk of credit loans and predict the possibility of future losses in the bank credit business, so as to effectively resolve the default risk of bank non-performing loans. What's more, it is essential to clarify management regulations, implement the application of information technology, and utilize digital and computational thinking to make more optimized and rational decisions.

Conclusion

Credit default identification is complex and highly nonlinear. Default identification and evaluation models, such as support vector machine models, expert systems, etc., are constantly emerging. However, credit default identification is still a challenging problem: different models need to be combined with different application objects to accurately predict the actual risk in the credit process. This paper effectively integrates a swarm intelligence optimization method and the MLP to identify credit default. According to the No Free Lunch theorem, the swarm intelligence optimization method TSA has some shortcomings. Therefore, a variant of TSA called sinhTSA is first proposed, built on the candidate and adjustment mechanisms. The sinhTSA is tested on the IEEE CEC 2014 benchmark test functions and compared with EST-TSA, STSA, DE, BA, GA, SCA, ABC, and BOA. It is superior to these methods in terms of exploration ability and local optimum avoidance, which demonstrates that sinhTSA can deal with complex and real-world problems. With the rapid growth of personal credit assets, the risk of personal credit assets is gradually emerging, which has attracted the attention of society, regulatory authorities, and the banks themselves. Accurately grasping the risk situation of the personal credit business and promoting its sustainable and healthy development are essential for managers. Therefore, it is necessary to conduct in-depth research on the current risk situation and management means of personal credit to understand exactly the problems existing in the development of personal credit. The sinhTSA-MLP credit default identification model is proposed to identify credit default. Based on the results, the sinhTSA-MLP credit default identification model obtains the highest classification rate and the lowest error rate among the comparative method-MLP credit default identification models. The classification rate of the sinhTSA is highest at 77.3565% and 96.8%, with the lowest error rate. By analyzing the experimental results and applying the method, the corresponding countermeasures are given to reduce the possibility of customer default and minimize the default risk.

Future Prospects

Though the accuracy of default identification by this model is high, other user behaviors, such as whether they have car loans and housing loans, can be considered in future research to achieve better prediction.
The artificial neural network approach achieves high accuracy in credit default identification and can therefore help prevent credit defaults from occurring. It can be further improved for credit risk identification, for example by finding an appropriate network structure and tuning the parameters with gradient-based learning algorithms. At the same time, more complex artificial neural networks could be explored to identify credit defaults.
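For reference, the two evaluation criteria used throughout the experiments above, the final classification rate and the error rate, can be computed as in the minimal sketch below. scikit-learn's gradient-trained MLPClassifier is used here only as a stand-in for the sinhTSA-tuned MLP (the sinhTSA update rules are not reproduced), and the feature matrix, labels, and network size are placeholders rather than the data sets used in the paper.

```python
# Minimal sketch of the classification rate and error rate criteria for an
# MLP-based credit default classifier. The data and the network size are
# placeholders; in the paper the MLP weights and biases are tuned by sinhTSA.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 23))                 # placeholder credit features
y = (rng.random(1000) < 0.22).astype(int)       # placeholder default labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

y_pred = clf.predict(scaler.transform(X_test))
classification_rate = np.mean(y_pred == y_test)  # fraction correctly identified
error_rate = 1.0 - classification_rate           # quantity minimized by the optimizer
print(f"classification rate: {classification_rate:.4f}, error rate: {error_rate:.4f}")
```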
6,031.2
2022-04-01T00:00:00.000
[ "Computer Science", "Business" ]
VEHICLE LOGO RECOGNITION WITH REDUCED-DIMENSION SIFT VECTORS USING AUTOENCODERS Vehicle logo recognition has become an important part of object recognition in recent years because of its usage in surveillance applications. In order to achieve higher recognition rates, several methods have been proposed, such as the Scale Invariant Feature Transform (SIFT), convolutional neural networks, bag-of-words and their variations. A fast logo recognition method based on reduced-dimension SIFT vectors using autoencoders is proposed in this paper. The computational load is decreased by applying dimensionality reduction to the SIFT feature vectors. Feature vectors of size 128 are reduced to 64 and 32 by employing two-layer neural nets called vanilla autoencoders. A dataset consisting of Medialab vehicle logo images [9] and other vehicle logo images obtained from the Internet is used; the dataset may be reached at [10]. Results suggest that the proposed method needs less than half of the memory required by the original SIFT-based method, with decreased processing time per image, in return for a decrease in accuracy of less than 20%. INTRODUCTION The well-known feature extraction method SIFT [1] is used for a large number of computer vision tasks that require object recognition and point matching between different scenes. SIFT descriptors are translation, rotation and scale invariant, robust to illumination variations and very useful in real-world tasks. However, the matching process is expensive because SIFT feature vectors are 128-dimensional and calculating distances between these vectors is very time consuming. Several methods have been proposed to improve the comparison process. We propose a SIFT-based method which is more time and memory efficient than the traditional SIFT method for vehicle logo recognition. SIFT SIFT (scale invariant feature transform) is a successful method for finding interest points in an image. It basically consists of four steps: Scale Space Extrema Detection The Laplacian of Gaussian (LoG), which is a blob detector, can be used to obtain blobs of different sizes via different sigma values. But LoG is a costly function, hence a similar function, the Difference of Gaussians (DoG), is used with different sigma values. The differences of Gaussian-blurred images with different sigma values are obtained in the DoG process. These new images form a pyramid over scale space. In this pyramid, local extrema are marked as possible keypoints. Keypoint Localization The extrema points consist of strong interest points, edge points and points that have low contrast. In order to get rid of the low-contrast points, the points are first localized with more precision and then the points whose intensity values are below a threshold are removed. In order to get rid of the edge points, an algorithm similar to the Harris corner detector is used. Orientation Assignment In order to have rotation-invariant keypoints, orientations are defined for each keypoint according to its neighborhood. This process consists of determining the neighborhood, computing an orientation histogram covering 360 degrees, weighting the histogram, taking the highest peak together with the values above 80% of it, and finally computing the orientation. Hence, differently oriented keypoints at the same location and scale are obtained. 
Keypoint Descriptor In the previous steps, keypoints with scale, orientation and image location information were obtained; in other words, the keypoints are now invariant to these variables. In this step, a descriptor for the local image region is calculated which is also invariant to the remaining variables, such as illumination change and local shape distortion. The obtained descriptors are 128-dimensional vectors. The information in the keypoint descriptor comes from the 16x16 neighborhood of the keypoint: an 8-bin orientation histogram is computed for each 4x4 sub-block of the 16x16 neighborhood. AUTOENCODERS Autoencoders are neural networks that try to reproduce their input with minimum difference. In other words, if the weight and bias sets are W and b and the input is x, the autoencoder tries to map the input x to an output x' with h_{W,b}(x) = x'. The goal is to find the best set of parameters (W, b) that minimizes the difference x - x'. As in other neural network applications, minimizing x - x' is posed as an optimization problem and solved with gradient-based methods. The autoencoder is an unsupervised learning algorithm: it takes no labels. When the autoencoder is shallow and of vanilla type, it learns representations similar to PCA. Stacking more layers of neurons makes it easier to find correlations between different components because each layer adds a nonlinear operation through its activation function. However, if the capacity of the autoencoder is too large, the network simply copies its input instead of finding useful features. Several error functions are commonly used in autoencoders. Eq. (1) and Eq. (2) are two frequently used error functions, where (1) is the L2 norm and (2) is the cross-entropy loss used when the input is a bit probability:

E_{L2}(x, x') = \sum_k (x_k - x'_k)^2   (1)

E_{CE}(x, x') = -\sum_k [ x_k \log x'_k + (1 - x_k) \log(1 - x'_k) ]   (2)

where k indexes the components of the input and reconstructed vectors. In this study the L2 norm is used as the error function. There are several autoencoder types, such as sparse autoencoders, denoising autoencoders, and variational autoencoders; in this study we use vanilla autoencoders. METHOD Our dataset consists of 90 cropped vehicle logo images obtained from 9 car brands [9]. In [1], after the SIFT vectors are obtained, cosine distances are calculated to compare vectors and find matchings. Instead of calculating cosine distances between 128-dimensional vectors, we use cosine distances between the 64-dimensional vectors produced by the autoencoder. Figure 1 shows the autoencoder architecture we implemented. This is a traditional autoencoder model which has 2 encoder and 2 decoder layers with sigmoid activation functions. The left-most rectangle is the input SIFT feature vector with 128 dimensions, and the right-most rectangle represents the generated SIFT feature vector. The encoder layers encode the 128-dimensional input vectors into 64-dimensional vectors, while the decoder layers decode this vector back to 128 dimensions. The cost function is the squared distance between the original input and the generated output. Our purpose is to find a suitable representation for these vectors with a lower number of components. This is a symmetrical network: the first and second encoder layers have 128 and 64 units, respectively, and the decoder mirrors the encoder, with the first decoder layer having 64 units and the second 128. EXPERIMENTAL RESULTS We used two programming languages for different parts of the project. First, logo images are cropped manually from all vehicle images in the dataset. Then SIFT vectors are obtained with [8] in MATLAB. The autoencoder is implemented in Python with TensorFlow [6], and the reduced vectors are compared in MATLAB. 
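As a concrete illustration of the descriptor extraction described above, the sketch below obtains 128-dimensional SIFT vectors with OpenCV; this is only an assumed, equivalent toolchain (the paper itself extracts its SIFT vectors with a MATLAB implementation [8]), and "logo.png" is a placeholder path.

```python
# Minimal sketch: extract 128-dimensional SIFT descriptors from a logo image
# with OpenCV (opencv-python >= 4.4, where SIFT is in the main module).
import cv2

img = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)  # placeholder image path
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# Each row of `descriptors` is one 128-dimensional SIFT feature vector.
print(len(keypoints), descriptors.shape)
```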
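The autoencoder architecture and training setup described above (two encoder and two decoder layers with sigmoid activations, squared-error cost, Adam with a 10^-4 learning rate and batch size 256) can be sketched in TensorFlow/Keras as follows; the random array stands in for the real SIFT vectors, and the epoch count is arbitrary.

```python
# Minimal sketch of the 128 -> 64 -> 128 vanilla autoencoder described in the text.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

sift_vectors = np.random.rand(5621, 128).astype("float32")  # placeholder SIFT data

inputs = layers.Input(shape=(128,))
h = layers.Dense(128, activation="sigmoid")(inputs)    # encoder layer 1
code = layers.Dense(64, activation="sigmoid")(h)       # encoder layer 2 (64-dim code)
h = layers.Dense(64, activation="sigmoid")(code)       # decoder layer 1
outputs = layers.Dense(128, activation="sigmoid")(h)   # decoder layer 2

autoencoder = models.Model(inputs, outputs)
encoder = models.Model(inputs, code)                    # produces the reduced vectors

autoencoder.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")
autoencoder.fit(sift_vectors, sift_vectors, batch_size=256, epochs=10, verbose=0)

reduced = encoder.predict(sift_vectors)                 # (N, 64) reduced descriptors
print(reduced.shape)
```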
The implemented autoencoder architecture is trained with 5621 SIFT vectors through 30,000 iterations. The learning rate and batch size are set to 10^-4 and 256, respectively, and the loss function is optimized with the Adam [7] optimizer. The plot of the loss obtained from Eq. (1) versus the iteration number can be seen in Figure 2. A single CPU is used during the training process. Figure 2. Loss results per iteration. The recognition performance of the proposed method is measured using the "accuracy" metric A, defined by A = TM / N, where TM is the number of true matches and N is the number of all images in the data set. An accuracy of 81% is achieved after training for 30,000 iterations when matching with 64-dimensional vectors. In addition, 32-dimensional vectors are obtained and tested. The memory usage, i.e., the memory occupied by the vectors obtained from all images in the data set and the logo data set, is also computed. The results are shown in Table 1. CONCLUSIONS AND FUTURE WORK A vehicle logo detection method based on reduced-dimension SIFT features is proposed using autoencoders. The dimension reduction of the features is achieved with two-layer neural network structures called vanilla autoencoders. Results indicate that by employing the proposed dimension reduction technique, an accuracy decrease of less than 20% reduces the memory requirement to less than half of that of the original method, along with a reduced processing time per image. Future work includes 4-, 8- and 16-dimensional reduced vectors, quantization and binarization of these vectors, and the use of different similarity measures, such as the Jaccard and Manhattan measures. Table 1. Results obtained with 128, 64 and 32 dimensional feature vectors. Table 2. Examples of input logo images, corresponding true matches and algorithm results obtained utilizing (a) 128, (b) 64 and (c) 32 dimensional SIFT feature vectors.
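The matching step and the accuracy metric A = TM / N above can be illustrated with the following sketch. The cosine-distance matching and the keypoint-voting rule are assumptions made for illustration; the paper's exact matching criterion and threshold are not reproduced.

```python
# Sketch: cosine-distance matching of reduced SIFT vectors and accuracy A = TM / N.
import numpy as np

def cosine_dist(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T                                  # (n_a, n_b) distance matrix

def predict_brand(test_desc, brand_descs, threshold=0.2):
    # brand_descs: {brand name: (n, 64) reference descriptor array}
    votes = {brand: int(np.sum(cosine_dist(test_desc, ref).min(axis=1) < threshold))
             for brand, ref in brand_descs.items()}
    return max(votes, key=votes.get)                      # brand with most matched keypoints

def accuracy(true_brands, predicted_brands):
    tm = sum(t == p for t, p in zip(true_brands, predicted_brands))  # true matches TM
    return tm / len(true_brands)                          # A = TM / N
```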
1,839.6
2018-01-09T00:00:00.000
[ "Computer Science", "Engineering" ]
Identification and Expression Analysis of Four Small Heat Shock Protein Genes in Cigarette Beetle, Lasioderma serricorne (Fabricius) Small heat shock proteins (sHsps) are molecular chaperones that play crucial roles in the stress adaption of insects. In this study, we identified and characterized four sHsp genes (LsHsp19.4, 20.2, 20.3, and 22.2) from the cigarette beetle, Lasioderma serricorne (Fabricius). The four cDNAs encoded proteins of 169, 180, 181, and 194 amino acids with molecular weights of 19.4, 20.2, 20.3, and 22.2 kDa, respectively. The four LsHsp sequences possessed a typical sHsp domain structure. Quantitative real-time PCR analyses revealed that LsHsp19.4 and 20.3 transcripts were most abundant in pupae, whereas the transcript levels of LsHsp20.2 and 22.2 were highest in adults. Transcripts of three LsHsp genes were highly expressed in the larval fat body, whereas LsHsp20.2 displayed an extremely high expression level in the gut. Expression of the four LsHsp genes was dramatically upregulated in larvae exposed to 20-hydroxyecdysone. The majority of the LsHsp genes were significantly upregulated in response to heat and cold treatments, while LsHsp19.4 was insensitive to cold stress. The four genes were upregulated when challenged by immune triggers (peptidoglycan isolated from Staphylococcus aureus and from Escherichia coli 0111:B4). Exposure to CO2 increased LsHsp20.2 and 20.3 transcript levels, but the LsHsp19.4 transcript level declined. The results suggest that different LsHsp genes play important and distinct regulatory roles in L. serricorne development and in response to diverse stresses. Introduction Heat shock proteins (Hsps), a group of molecular chaperones, play important roles in promoting correct refolding and blocking aggregation of denatured proteins [1]. Hsps represent a large gene superfamily and are universally present in the majority of living organisms ranging from bacteria to mammals. Hsps are stress-related proteins that are highly expressed in response to external stresses, including exposure to extreme temperatures [2,3], ultraviolet radiation [4], heavy metals [5], parasitic infection [6], and chemicals [7], as well as in response to starvation [8] and oxidation [9]. In addition, Hsps exhibit a variety of biological functions in early embryogenesis, diapause, and morphogenesis [10]. On the basis of their molecular mass and sequence similarities, Hsps have been classified into six families: Hsp100, Hsp90, Hsp70, Hsp60, Hsp40, and small Hsps [11,12]. Among these families, small Hsps are probably the most diverse protein family and show the greatest variation in sequence, size, and function [13]. Small heat shock proteins (sHsps) range in molecular mass from approximately 12 to 43 kDa [14]. Typically, sHsps are relatively conserved in amino acid structure and composition. The sequence generally contains an α-crystallin domain of 80-100 amino acids located near the C-terminal region, a disorganized N-terminus, and a variable C-terminus [15,16]. Most sHsps display chaperone-like activities, preventing aggregation and facilitating the correct refolding of denatured proteins [17,18]. Apart from their fundamental functions in stressful conditions, sHsps also participate in other physiological processes, including cell growth, differentiation, apoptosis [19], membrane fluidity [20], lifespan [21], and diapause [22]. At present, several sHsp cDNA sequences have been identified in numerous insect species. 
For example, 10 sHsp genes are known in Apis mellifera, seven in Anopheles gambiae, 11 in Drosophila melanogaster, 10 in Tribolium castaneum, and 16 in Bombyx mori [23], 14 in Plutella xylostella [24], 15 in Choristoneura fumiferana [25], and five in Bemisia tabaci [26]. Multiple sHsp genes have diverse and variable functions during insect growth and development. Previous studies have demonstrated that sHsps play essential roles in insect development and in defense against a variety of stresses. For instance, sHsps contribute to thermal tolerance in Laodelphax striatellus [27]. The expression of three sHsp genes in flesh fly (Sarcophaga crassipalpis) is upregulated during overwintering pupal diapause [28]. In D. melanogaster, overexpression of hsp22 increases resistance to oxidative stress and aging [29]. RNA-interference experiments in T. castaneum indicate that knockdown of Tchsp18.3 affects pupal-adult metamorphosis and reduces adult reproduction [30]. In Apis cerana, silencing of sHsp22.6 significantly decreases temperature tolerance, and the recombinant sHsp22.6 protein exhibits remarkable temperature tolerance, antioxidation, and molecular chaperone activities [31]. The cigarette beetle, Lasioderma serricorne (Fabricius) (Coleoptera: Anobiidae), is a destructive and economically important storage pest worldwide [32]. Outbreaks of this species constitute a severe threat to many stored products, including cereals, tobacco, dry foods, and traditional Chinese medicinal materials [33,34]. After egg hatching, L. serricorne larvae bore tunnels into the stored materials and spend their larval stage inside host products [35,36]. Control of L. serricorne has depended heavily on the application of chemical insecticides. Methyl bromide, phosphine, pyrethrin, and organophosphorus are the primary compounds used for cigarette beetle management [37,38]; however, the excessive use of chemicals has resulted in resistance development and environmental contamination [39,40]. Extreme temperature treatments and low-oxygen controlled atmospheres have previously shown promise as ecologically friendly alternatives to conventional control methods [41,42]; however, L. serricorne has developed substantial tolerances to extreme temperatures and oxygen-deficit conditions [43]. The high adaptability of L. serricorne in response to various stresses renders it difficult to control. However, insect sHsps could be used to modify the biological responses elicited by external abiotic stresses. To date, no study has investigated the effects of stresses on sHsps in L. serricorne at the molecular level. In the present study, we identified and cloned the full-length open reading frame (ORF) sequences of four sHsp genes in L. serricorne. We analyzed the expression patterns of the sHsps in different developmental stages and tissues, and in response to 20-hydroxyecdysone (20E) treatment. In addition, we evaluated the responses of the four sHsps to diverse stresses, comprising thermal stress, immune challenges, and CO 2 stress. Insect and Sample Preparation The laboratory stock colony of L. serricorne, originally collected in 2014 from a tobacco warehouse in Guizhou Province, China, was reared on Chinese medicinal material (Angelica sinensis) and maintained at 28 ± 1 • C and 40% ± 5% relative humidity under a scotoperiod of 24 h. 
Samples at different developmental stages, including early larvae (EL, <24 h post-hatching), late larvae (LL, older than fourth instar larvae and before prepupae), pupae (PU, >48 h post-pupation), early adults (EA, <24 h post-eclosion), and late adults (LA, one week old) were collected separately and stored at −80 °C. In the tissue-specific experiment, the fifth instar larvae were used for tissue isolation. The integument, fat body, gut, and carcass of L. serricorne were dissected under a stereomicroscope (Olympus SZX12, Tokyo, Japan). Each tissue type was placed in a 1.5-mL centrifuge tube containing RNA storage reagent (Tiangen, Beijing, China). Pools of 30 larvae were used to prepare the integument, gut, and carcass samples, and 50 individuals were pooled to collect the fat body tissue. All tissue samples were immediately frozen in liquid nitrogen and stored at −80 °C. Each sample was replicated three times. 20E and Temperature Treatment For the 20E (Sigma-Aldrich, St. Louis, MO, USA) treatment, a 10 µg/µL stock solution of 20E dissolved in 95% ethanol was diluted to 1 µg/µL with distilled water and used as the working solution. For the treatment group, fourth instar larvae were injected at a dose of 120 ng/larva using a Nanoliter 2010 injector (World Precision Instruments, Sarasota, FL, USA). The control group was injected with an equivalent volume of distilled water containing 0.1% ethanol. At 4, 8, and 12 h after injection, the whole bodies of surviving insects were frozen in liquid nitrogen and stored at −80 °C. For the temperature treatment, early female and male adults were placed in small glass tubes and exposed to a range of temperatures (5, 40, 42, 44, and 46 °C) for 2 h in a temperature-controlled chamber (DC-3100, Ningbo, China). After exposure, adults were allowed to recover at 28 °C for 2 h, and survivors were frozen in liquid nitrogen and stored at −80 °C. A set of adults maintained at 28 °C was used as a control. At least 30 insects were randomly collected per replication, and three independent biological replications were performed. Immune Challenge and Controlled Atmosphere Treatment For the immune challenge, peptidoglycan from Staphylococcus aureus (PGN-SA; InvivoGen, San Diego, CA, USA) and peptidoglycan from Escherichia coli 0111:B4 (PGN-EB; InvivoGen) were diluted with sterile endotoxin-free water to a final concentration of 1.0 µg/µL. The fourth instar larvae were injected with 200 nL of PGN-SA solution, PGN-EB solution, or sterile endotoxin-free water. The control group was handled in the same manner but without injection. Thirty individuals were randomly selected from each group at 3, 6, and 9 h post-injection. For the controlled atmosphere (CA) treatment, adults were exposed to air containing 70% CO2 for 6 h. The treated insects were then transferred to natural atmospheric conditions for 6 h, and the mortality rate was recorded as 35.81%. The control group was maintained under natural atmospheric conditions. Approximately 40 surviving individuals were randomly selected from each group. Each of the above-mentioned treatments included three biological replications. All samples were frozen in liquid nitrogen and stored at −80 °C for RNA extraction. RNA Extraction and cDNA Synthesis Total RNA was extracted from each sample using the MiniBEST Universal RNA Extraction Kit (TaKaRa, Dalian, China) and treated in a gDNA Eraser spin column for genomic DNA removal. 
The RNA quantity was measured using a NanoDrop 2000C spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) at absorbance wavelengths of 260 nm and 280 nm. The RNA integrity was verified by 1% agarose gel electrophoresis. The cDNA for cloning and quantitative real-time PCR (qPCR) was synthesized using 1 µg RNA from each sample with the PrimeScript® RT Reagent Kit (TaKaRa) with random hexamer and oligo (dT) primers. Identification and Sequencing of LsHsp cDNAs Based on the L. serricorne transcriptome database (unpublished data), four unigene cDNAs encoding sHsps were obtained. The full-length ORFs of the four genes were confirmed by PCR with the corresponding pairs of specific primers (Table 1). The PCR reaction was as follows: 95 °C for 3 min; followed by 35 cycles at 95 °C for 30 s, 55-65 °C (depending on the gene-specific primers) for 30 s, and 72 °C for 1 min, and then a final extension for 10 min at 72 °C. The PCR products were separated by 1% agarose gel electrophoresis and sub-cloned into the pGEM-T Easy vector (Promega, Madison, WI, USA) for sequencing. Bioinformatics and Phylogenetic Analyses The ORFs were predicted with ORF finder (http://www.ncbi.nlm.nih.gov/gorf/gorf.html). DNAMAN 6.0 (LynnonBiosoft, Vaudreuil, Quebec, Canada) was used to generate a multiple-sequence alignment. Sequence similarities and the presence of conserved domains were determined using the Basic Local Alignment Search Tool algorithm-based programs available on the National Center for Biotechnology Information website (http://www.blast.ncbi.nlm.nih.gov/Blast). The molecular weights and isoelectric points were predicted with ExPASy (http://us.expasy.org/tools/protparam.html). Secondary structure predictions were performed with PHD software accessed through the NPS@Web server (http://npsa-pbil.ibcp.fr) [44]. A phylogenetic analysis was performed with MEGA 6.06 [45] using the neighbor-joining method with 1000 bootstrap replicates and pairwise deletion. Quantitative Real-Time PCR The qPCR analysis was performed using the CFX-96 real-time PCR System (Bio-Rad, Hercules, CA, USA) in a total reaction volume of 20 µL, including 10 µL of GoTaq® qPCR Master Mix (Promega), 1 µL of cDNA template, 1 µL of each of the gene-specific primers, and 7 µL of nuclease-free water. All primers used for qPCR were designed using Primer 3 (version 0.4.0) software (http://fokker.wi.mit.edu/primer3/) (Table 1). The reaction procedure was as follows: 95 °C for 2 min, followed by 40 cycles of 95 °C for 30 s and 60 °C for 30 s. A melting curve analysis from 60 °C to 95 °C was applied to all reactions to ensure the specificity and consistency of the generated products. All experiments were performed in triplicate, with two technical replicates each. The 18S ribosomal RNA (18S) gene was used as an internal reference gene. The relative expression levels of the sHsp genes were calculated using the 2^−ΔΔCt method [46]. Data Analyses For all statistical analyses, IBM SPSS Statistics version 20.0 software (IBM Corporation, Armonk, NY, USA) was used. The expression data are presented as the mean ± standard error. Significant differences among treatments were analyzed by one-way analysis of variance (ANOVA) followed by a least significant difference test for multiple comparisons or a two-tailed, unpaired t-test for comparisons of two means. 
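The relative expression levels reported in the following sections follow the 2^−ΔΔCt calculation cited above [46], with 18S as the reference gene. A minimal sketch of that calculation, using made-up Ct values:

```python
# Sketch of the 2^-ddCt (Livak) relative-expression calculation used for the qPCR data.
def relative_expression(ct_target_treat, ct_ref_treat, ct_target_ctrl, ct_ref_ctrl):
    delta_ct_treat = ct_target_treat - ct_ref_treat   # normalize to 18S, treated sample
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl      # normalize to 18S, control sample
    ddct = delta_ct_treat - delta_ct_ctrl
    return 2.0 ** (-ddct)                             # fold change relative to control

# Example with hypothetical mean Ct values for one LsHsp gene:
print(relative_expression(22.1, 15.3, 25.0, 15.2))    # 8.0-fold upregulation
```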
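The molecular weight and isoelectric point predictions mentioned above were obtained from the ExPASy ProtParam server; an equivalent check could be scripted with Biopython's ProtParam module, as in the sketch below. The sequence shown is a short placeholder, not an actual LsHsp sequence.

```python
# Sketch: predict molecular weight and isoelectric point of a protein sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MSLIPFFDDFWSEPFRSLDRMFDQAFGMPRASTSAVNKDGFQVCMDVSQFKPNELTVKVV"  # placeholder
pa = ProteinAnalysis(seq)
print(f"MW: {pa.molecular_weight() / 1000:.1f} kDa, pI: {pa.isoelectric_point():.2f}")
```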
Identification and Characterization of Four LsHsp Genes Four LsHsp genes, namely LsHsp19.4, 20.2, 20.3, and 22.2 (GenBank accession numbers: MK395540, MK395541, MK395542, and MK395543), were cloned from L. serricorne. Sequencing analysis revealed that the cDNAs included ORFs of 510, 543, 546, and 585 bp, respectively, encoding proteins of 169, 180, 181, and 194 amino acids, respectively. The predicted molecular weights of the LsHsps ranged from 19.4 to 22.2 kDa, with theoretical isoelectric points of 5.94 to 6.60 (Table 2). The amino acid residue identities were 20%-63% among the four LsHsps. Multiple alignments of the deduced amino acid sequences revealed that the four sHsp proteins contained a typical α-crystallin domain, which consisted of approximately 100 amino acids and six β-strands (Figure 1). The phylogenetic tree was generated with MEGA 6.06 using the neighbor-joining method. Nodes with >50% bootstrap values (1000 replications) are indicated on branches. The following insect sHsp sequences were used: Apis mellifera (Am), Bombyx mori (Bm), Drosophila melanogaster (Dm), Lasioderma serricorne (Ls), Tribolium castaneum (Tc). The GenBank accession number for each sequence is specified in the terminal label. The sHsp sequences determined in this study are labeled with a black diamond. Expression Levels of Four LsHsp Genes at Different Developmental Stages The expression levels of the four LsHsp genes at different developmental stages (early larvae, late larvae, pupae, early adults, and late adults) were estimated by qPCR. Generally, the four genes were consistently expressed at all tested stages, but exhibited different developmental expression patterns (Figure 3). The highest expression levels of LsHsp19.4 and 20.3 were observed in pupae and were respectively 8.49- and 12.2-fold greater than those at other stages. The maximum expression level of LsHsp20.2 was recorded in late adults and was 24.1-fold greater than those at other stages. LsHsp22.2 transcripts were most abundant in early adults and were 26.9-fold higher than those at other stages. Abbreviations in Figure 3: EL, early larvae; LL, late larvae; PU, pupae; EA, early adults; LA, late adults. 
Different letters above bars indicate a significant difference among developmental stages based on one-way ANOVA followed by a least significant difference test (p < 0.05). Expression Levels of Four LsHsp Genes in Different Tissues All four LsHsp genes were expressed in the four tissue types analyzed. Gene expression varied significantly with tissue type except for LsHsp22.2. Interestingly, the highest expression levels of LsHsp19.4 and LsHsp20.3 were detected in the fat body, while the lowest expression levels were recorded for the carcass (Figure 4). Additionally, the maximum expression level of LsHsp20.2 was observed in the gut, which was 6.17-fold greater than that detected in the other tissues. Effects of 20E Treatment on the Expression Levels of Four LsHsp Genes To determine whether 20E induces LsHsp expression in vivo, the mRNA levels of the four LsHsp genes in the fourth instar larvae were quantified in response to 20E treatment. The transcript levels of LsHsp20.2 and 20.3 increased significantly after injection of 20E at all three time points, but the response of LsHsp20.3 was much more intense than that of LsHsp20.2. Compared with the control specimens, the expression level of LsHsp22.2 dramatically increased in larvae injected with 20E at 8 and 12 h. However, 20E-induced expression was not observed for LsHsp19.4 (Figure 5). Larvae were collected for qPCR analysis at 4, 8, and 12 h after injection with 20E. The corresponding amount of distilled water containing 0.1% ethanol was used as a control. 
Significant differences between treatment and control were determined using a t test and indicated by * (p < 0.05) or ** (p < 0.01). Expression Levels of Four LsHsp Genes in Response to Thermal Stress The expression patterns of the four LsHsp genes in response to thermal stress differed substantially. During cold-shock treatment, the expression levels of three LsHsp genes were remarkably upregulated, whereas that of LsHsp19.4 was not significantly altered. During heat-shock treatment, the expression levels of all four LsHsp genes were significantly increased, and the greatest induced expression of LsHsp20.3 was observed in response to the 42 °C treatment (Figure 6). For LsHsp20.3, the response patterns initially increased and then decreased as the temperature increased. The other three LsHsp genes exhibited similar expression patterns that increased in a temperature-dependent manner. Figure 6. Relative expression levels of four LsHsp genes in L. serricorne in response to thermal stress. Different letters above bars indicate a significant difference based on one-way ANOVA followed by a least significant difference test (p < 0.05). Expression Levels of Four LsHsp Genes in Response to Immune Challenges To evaluate the responses of the LsHsp genes to immune challenges, the larvae were exposed to PGN-SA and PGN-EB at three time intervals (Figure 7). After PGN-EB exposure, expression levels of the four LsHsp genes were significantly increased by 1.34- to 119.1-fold. The maximum induced expression occurred in LsHsp20.2 at 6 h after injection with PGN-EB. Injection of PGN-SA also significantly induced expression of LsHsp genes at 3, 6, and 9 h post-treatment. In addition, the induced expression levels of LsHsp19.4 and 20.2 were greater in the PGN-EB-treated groups than in the PGN-SA-treated groups. Effects of CO2 Exposure on the Expression Levels of Four LsHsp Genes To investigate the effects of CO2 exposure on LsHsp expression, adults were exposed to air containing 70% CO2 for 6 h. 
The expression levels of LsHsp22.2 and 20.2 were significantly upregulated by 2.66- and 5.84-fold, respectively. However, the expression level of LsHsp20.3 decreased by 4.2-fold after exposure to the CO2 treatment compared with that of the control (Figure 8). No difference in LsHsp22.2 expression was observed in the CO2-treated group compared with that of the control. Figure 8. Relative expression levels of four LsHsp genes in L. serricorne in response to CO2 stress. Adults were collected for qPCR analysis at 6 h after exposure to air containing 70% CO2. The control group was maintained under natural atmospheric conditions. Significant differences between the treatment and control were determined using a t test and indicated by ** (p < 0.01). Discussion Insects possess different types of sHsps, which differ in structure and function among species [23,24,27,47]. The sHsps are known to play major roles in the cell response to different stresses [48]. Analyses of these sHsps would provide valuable information for further functional studies in different insects. In the current study, we identified and cloned four sHsp genes (LsHsp19.4, 20.2, 20.3, and 22.2) in L. serricorne based on our previous transcriptome data. Sequence analysis indicated that the four sHsps showed high similarity with other insect sHsps. The deduced amino acid sequences of the four LsHsps contained a conserved α-crystallin domain consisting of common β-strands, which agrees with previous studies of sHsps in other insect species [47,49-51]. Insect sHsps play important roles in the regulation of development. In the current study, we observed that the four LsHsps were expressed in all tested developmental stages, but showed distinct expression patterns. The mRNA levels of LsHsp19.4 and 20.3 increased greatly when larvae molted into the pupal stage, suggesting that these two sHsps may have crucial functions in pupal formation. Similar expression patterns have been observed in other insect species, for example, Hsp23, 26, and 27 in D. melanogaster [52], Hsp19.5 in P. xylostella [53], Hsp19.5, 20.8, and 21.7 in Liriomyza sativae [54], and Hsp20.4 and 20.8 in Spodoptera litura [55]. In addition, LsHsp20.2 and 22.2 showed high expression levels during the adult stages. 
This finding is consistent with previous studies in which sHsp genes were upregulated in adults of Chilo suppressalis [49], Cydia pomonella [56], and Harmonia axyridis [57]. In Sesamia inferens, large quantities of Hsp19.6 and 20.6 transcripts were observed in the egg [58]. Interestingly, Hsp21.4 in S. litura and Hsp21.6 in Bactrocera dorsalis exhibit constitutive expression patterns during all developmental stages [47,51]. Collectively, the diverse expression patterns of sHsp genes indicate that they might fulfill different roles during the developmental progression of insects. The expression of insect sHsps shows distinct tissue specificity. In B. mori, Hsp19.1 and 22.6 were highly expressed in the integument, head, and midgut, whereas Hsp20.1, 20.4, and 27.4 exhibited high expression levels in the ovary and testis [23]. In Oxya chinensis, Hsp19.1, 20.4, 20.7, and 21.1 were selectively expressed in the ovary and testis, whereas Hsp19.8 and 23.8 were mainly expressed in the muscle [50]. Previous studies showed that the Malpighian tubules are the primary site of sHsp expression, such as in S. litura [47], C. suppressalis [49], and C. fumiferana [25]. In the present study, LsHsp19.4 and 20.3 were highly expressed in the fat body. Similar findings have been reported for other insect species, such as B. mori [59] and B. dorsalis [51]. The insect fat body is an important organ involved in diverse biological processes, including detoxification, immunity, energy metabolism, and nutrient storage. However, it remains unclear why sHsps were highly expressed in the fat body, and further studies are required to clarify their precise roles in L. serricorne. Similar to four sHsp genes (Hsp19.5, 20.1, 21.6, and 21.8) from P. xylostella [24], LsHsp20.2 showed a higher expression level in the gut than in other tissues analyzed. Interestingly, LsHsp22.2 exhibited a constitutive expression pattern in the four tested tissues, suggesting that it may play fundamental roles in in vivo activities. There is a functional correlation between hormones and heat-shock regulatory systems [60]. The expression of sHsp genes can be regulated by 20E in numerous insect species. Here, we found 20E upregulated mRNA levels of three sHsp genes in L. serricorne at different time points. This observation is in agreement with studies of B. dorsalis in which the expression levels of BdHsp18.4, 20.4, and 20.6 were induced by 20E [51]. Similarly, significant upregulation of SlHsp20.4 and SlHsp20.8 by 20E was observed in the larval midgut and cell line of S. litura [47,55]. There is strong evidence for a connection between ecdysone and insect sHsps. For example, upregulation of Hsp27 in D. melanogaster and Ceratitis capitata by 20E is mediated by a canonic ecdysone response element (EcRE) located in the promoter region [61,62]. Binding sites for the ecdysone-responsive transcription factor Broad-Complex (BR-C) were detected in the 5 -flanking regions of Drosophila Hsp23 [63], AccHsp23.0, and AccHsp24.2 in A. cerana [64]. Epidermis culture experiments showed that the expression of two AccHsp genes could be regulated by 20E [64]. Interestingly, expression of SlHsp20.4 and SlHsp20.8 was also induced by the juvenile hormone (JH), and the responses of two sHsp genes to JH induction were greater than that to 20E induction [55]. Studies of many sHsp genes in insects have demonstrated upregulated expression after thermal stress exposure. In the present study, three LsHsp genes were dramatically upregulated in response to cold treatment. 
Cold shock induces upregulation of sHsp genes, including Hsp21.4, 20.6, and 19.6 in S. inferens [58], Hsp20.4 and 20.8 in S. litura [47], and Hsp19.8, 21.5, and 21.7b in C. suppressalis [49]. Heat stress also induced the expression of four sHsp genes in L. serricorne, indicating their potential roles in heat resistance. Similar situations have been observed in several insects, including B. mori [23], P. xylostella [24], B. dorsalis [51], and C. fumiferana [25]. The sHsp bind to other cellular proteins under thermal stress and provide protection from denaturation. Different response patterns were observed for the four sHsp genes of L. serricorne in response to heat or cold treatment. Furthermore, LsHsp19.4 was insensitive to cold stress, which is consistent with the responses of Hsp20 and 21.4 in S. litura [47], and Hsp17.7 and 21.6 in B. dorsalis [51]. Thus, different sHsps may have different mechanisms of action for adaptation to thermal stress. It was reported that sHsp expression could be regulated by immune challenge in many insects. In A. cerana, for example, AccsHsp22.6 expression was significantly increased by inoculation with the fungal pathogen Ascosphaera apis [31]. AccsHsp27.6 expression was induced by exposure to S. aureus and Micrococcus luteus and suppressed by Bacillus subtilis and Pseudomonas aeruginosa. These results indicate that sHsps participate in the host immune response to different microbes, and additional analysis of recombinant Hsp27.6 protein provides antimicrobial evidence [9]. Treatment of B. mori larvae with cytoplasmic polyhedrosis virus (CPV) significantly increased the expression of sHsp23.7, suggesting its involvement in anti-BmCPV immunity [65]. Zhang et al. [66] reported that sHsp20.8 expression in Chinese oak silkworm (Antheraea perny) was substantially upregulated after bacterial and viral infection. Recent studies of T. castaneum show that silencing of Hsp18.7 amplifies the serine protease signaling pathway to regulate innate immunity of red flour beetles [67]. In the present study, the expression levels of four LsHsp genes were upregulated after treatment with Gram-positive peptidoglycan (PGN-SA) and Gram-negative peptidoglycan (PGN-EB), indicating that the LsHsps might play important roles in responses to different immune reactions. In addition, the expression of multiple insect sHsp genes is regulated by immune challenge. These effects are indirect and might be due to immune response elements [9,31]. Controlled atmosphere treatments using high CO 2 (hypercapnia) and low oxygen (hypoxia) have been applied commercially to control the cigarette beetle [34,68]. However, several populations are reported to have developed substantial tolerance to evaluated CO 2 [34]. Hypercapnia may cause many physiological consequences to insects, including the reduction of metabolic rate and initiation of anaerobic respiration [69]. Our previous studies showed that the specific activity of carboxylesterase in L. serricorne was increased following exposure to CO 2 -enriched atmosphere [34]. The expression of two glutathione S-transferase genes (LsGSTt1 and LsGSTs1) could also be markedly induced by CO 2 [70]. We concluded that detoxification enzymes may be critical for tolerance to CO 2 stress. It has been demonstrated that sHsps are rapidly synthesized in cells after exposure to environmental stressors and develop protective functions. The sHsps are reported to be the best candidates for adaptation to various stresses. Studies of A. 
cerana show that AccHsp27.6 expression is suppressed by CO 2 treatment [9], which corresponds with the expression of LsHsp20.3 observed in the present study. However, we found that transcription of LsHsp20.2 and 20.3 responded differentially to CO 2 treatment. These results suggest that different sHsps may play distinct roles in response to CO 2 stress. Conclusions We identified and characterized four sHsp genes from L. serricorne. The putative sHsp proteins contained the typical structural and conserved domains of sHsps. The expression levels of these sHsp genes differed among various developmental stages and tissues. The expression levels of the four LsHsp genes were upregulated by 20E exposure. Induction of these genes in response to diverse stresses indicated that the genes play important roles in stress-adaptation mechanisms. Further studies are needed to clarify the precise roles and physiological mechanisms of the sHsp genes in L. serricorne.
8,299.2
2019-05-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Processing of JEFF nuclear data libraries for the SCALE Code System and testing with criticality benchmark experiments. In recent years, a new version of the Joint Evaluated Fission and Fusion File (JEFF) data library, namely JEFF-3.3, has been released with relevant updates in the neutron reaction, thermal neutron scattering and covariance sub-libraries. In the frame of the EU H2020 SANDA project, several efforts have been made to enable the use of JEFF nuclear data libraries with the extensively tested and verified SCALE Code System. For this purpose, the AMPX processing code has been applied, providing insight into the interaction between the code and the new versions of the JEFF data file. This paper provides an overview of the processing of the JEFF-3.3 nuclear data library with AMPX for its application within the SCALE package. The AMPX-formatted cross-section library has been widely verified and tested using a comprehensive set of criticality benchmarks from ICSBEP, comparing both with results provided by other processing and neutron transport codes and with experimental data. Processing of JEFF-3.3 covariances is also addressed along with their corresponding verification using covariances processed with NJOY. This work paves the way towards a successful future interaction between JEFF libraries and SCALE. Introduction Nuclear data processing is the procedure devoted to the conversion of evaluated nuclear data into libraries for specific final applications such as neutron transport or inventory calculations. Computational codes are specifically dedicated to nuclear data processing. AMPX [1] is the modular processing code of the SCALE Code System that takes basic cross section data in Evaluated Nuclear Data File (ENDF) format and provides either multigroup (MG) or continuous energy (CE) libraries for use by the neutron transport codes included within SCALE [2]. The OECD Nuclear Energy Agency (NEA) Data Bank coordinates the Joint Evaluated Fission and Fusion (JEFF) nuclear data library project. In recent years, a new version of the JEFF library, namely JEFF-3.3 [3], has been released with relevant updates in the neutron reaction and the thermal neutron scattering sub-libraries. The library is publicly released through the NEA in ENDF-6 format [4]. Thus, users must perform the nuclear data processing to produce a nuclear data set in an adequate format for the final application. Therefore, the use of JEFF nuclear data libraries within the SCALE system is not straightforward, and the processing of the nuclear data library must be undertaken with AMPX. Past efforts set the first milestones for the usage of JEFF libraries within SCALE [5]. As a continuation of that work, within the EU H2020 SANDA (Supplying Accurate Nuclear Data for energy and non-energy Applications) project, AMPX is being used for processing the JEFF-3.3 neutron library. This paper deals with the processing of JEFF-3.3 neutron data libraries into a CE library for use with SCALE transport codes such as KENO-VI. The main aspects concerning the processing of CE and covariance libraries with AMPX are described. The CE library performance is also evaluated for a set of criticality benchmarks. This makes it possible to identify the application domain of the generated library and those issues that require further development activities. Processing with AMPX AMPX is the modular processing code of the SCALE Code System, developed at Oak Ridge National Laboratory (ORNL). 
In this work, the CE library is generated using the AMPX code available with SCALE6.3β11. This version incorporates relevant updates regarding the generation of probability tables for the unresolved resonance region (URR), affecting intermediate and fast spectrum systems [6]. This section presents a brief summary of the processing of both CE libraries and covariance matrices. Continuous energy libraries The generation of a CE library with AMPX is performed through a multi-step procedure based on the usage of different modules. For each available isotope and starting from the ENDF-6 file, POLIDENT is first used to reconstruct point-wise CE cross sections at 0 K, with a default reconstruction tolerance of 0.1%. Then, BROADEN performs the Doppler broadening for those temperatures required by the user. The first stage is completed by TGEL, which ensures the consistency between partial and total reactions. Separately, PURM generates probability tables for the URR (if present), setting the number of probability bins to 20. Y12 is applied to generate two-dimensional kinematics data for neutron scattering, producing double-differential data, and JAMAICAN converts the data into marginal probability distributions in exit energy. Finally, PLATINUM creates the final CE library by merging the data produced in the previous steps. Y12 and JAMAICAN are also used for processing the Thermal Scattering Libraries (TSL), combining the thermal moderator data with the proper evaluation in the higher energy range (>10 eV). Covariance libraries This work also deals with the processing of JEFF-3.3 covariance libraries. AMPX is applied to generate COVERX-format covariances for the average number of neutrons per fission (MF31), resonance parameters (MF32), neutron cross sections (MF33) and the prompt fission spectrum (MF35). PUFF is the module devoted to generating covariance libraries according to the group-averaged cross section data on the user-defined energy structure. Files produced by PUFF are then merged into a library that contains cross-reaction and cross-material covariance matrices (if present). Corrections are finally applied to the library by means of the COGNAC module. In this work, two sets of covariance libraries are generated using a weighting function generically optimized for fast reactor analyses. The first covariance library is created for general purposes using a 33-energy-group structure. Moreover, a 7-group covariance matrix is also created in the frame of the OECD/NEA WPEC Subgroup-46 [7]. Processing of the JEFF-3.3 cross section library The latest release of the JEFF project, JEFF-3.3, is a thorough update of the neutron, decay data, fission yields, dpa and neutron activation libraries, with TSLs for 20 compounds. It also includes new evaluations for the major nuclides 235U, 238U and 239Pu along with important updates for many other isotopes in terms of neutron cross sections. It is worth mentioning that the JEFF-3.3 improvements targeted the needs of advanced reactor development programs, including upgrades for both sodium and lead. This work addresses the processing of JEFF-3.3 neutron data for 562 isotopes and 20 TSLs. The procedure detailed in Sect. 2.1 is successfully applied for the processing of all the isotopes. Nonetheless, several issues are identified through the processing itself and via the infinite dilution testing phase. 
The latter consists of very simple criticality calculations of an infinite dilution containing fissile material ( 238 U) in a solution, allowing to test all the nuclides included in the library. The main outcomes of the testing phase are detailed below: • Negative cross section values are found when reconstructing cross section from resonance parameters for 19 isotopes. • The total resonance width given in the file differs from calculated for the same set of isotopes. • The lower limit of the URR does not include some unresolved resonance parameters for 239 U. • Regarding TSL, data for 1 H in CaH 2 , Ca in CaH 2 and Mg in Mg metal are not considered since they are not identified in SCALE. The infinite dilution calculation also reveals relevant issues concerning TSLs since it is observed that SCALE is not currently able to manage the following compounds: 16 O in Al 2 O 3 and 16 O in D 2 O. This is due to the lack of available material identification for them. However, this can be solved in future iterations using the COMPOZ module to update the standard composition library. Additionally, certain metastable isotopes are not manageable by SCALE so that they do not pass this phase: 106m Ag, 62m Co, 152m Eu, 94m Nb, and 135m Xe. The rest of the isotopes are successfully processed and can be used in transport calculations. The performance of the library as a whole is evaluated in the next section. Benchmarking In order to test the new cross section library, a comprehensive set of experiments from the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP) [8] has been selected and evaluated. This analysis includes a comparison between KENO-VI and MCNP6.1, that use AMPX and NJOY-processed cross section data respectively. It is worth mentioning that both processing routes rely on the same set of parameters. Results are presented in terms of C/E since experimental values are also considered and provided insight into the behaviour of the library itself. A set of ICSBEP benchmarks is created aiming to cover a variety of fuel, moderators, reflectors, spectra and geometries. This set consists of 120 benchmarks, divided into 43 highly enriched uranium (HEU) cases, 10 intermediate-enriched uranium (IEU) cases, 14 lowenriched uranium (LEU) cases, 8 mixed uranium and plutonium (MIX) cases, 28 plutonium (PU) cases and 17 233 U systems (U233). Of these, 71 corresponds to fast neutron spectra (FAST), 6 as for intermediate spectrum (INTER) and 43 for thermal spectrum (THERM). This set is mostly composed by cases included within ICSBEP database along with updated inputs provided by OECD/NEA, ensuring the consistency between KENO-VI and MCNP inputs. The latter has been widely used in previous works [9]. Multiplication factor calculations for the set of benchmarks using both KENO-VI and MCNP are presented from Fig. 1 to Fig. 4, ensuring a 1σ statistical error lower than 10 pcm. For HEU category (Fig. 1) a good agreement can be observed between the values provided by KENO-VI and MCNP. Nonetheless, a dramatic deviation of around 900 pcm appears for the HMF009-001 benchmark. Further analyses reveals that this difference can be explained by the presence of 9 Be. In fact, this behaviour is systematically found in subsequent cases and this also affects to 9 This issue lies in the description of the (n,2n) reaction in the 9 Be ENDF-6 file provided by JEFF-3.3 evaluation. 
It is described by means of its partial reactions (i.e., MT875+ reaction channels), but the total reaction is not included. AMPX properly deals with these channels but an additional patch should be included to construct the (n,2n) description. This issue also affects to HCI003-007, for which a deviation of around 300 pcm is found. The rest of cases presents discrepancies below 100 pcm except for HMF-003-009, HMF-003-011, HMF-011-001, HMI-006-003 and HMI-006-004. Deviations between 100 and 200 pcm are found for these cases suggesting that models shall be reviewed and updated. Fig. 2 depicts results for IEU, LEU and MIX benchmarks. A very good agreement is obtained for IEU benchmarks since differences are lower than 30 pcm in each case. Nonetheless, only benchmarks with fast spectra are included in this case so that more configurations with different physical forms and spectra may be added to the study for a wider comparison. Regarding LEU category, results are consistent between both codes considering that deviations are not larger than 60 pcm in all cases. This is also observed for MIX benchmarks, where a very good agreement is also obtained (deviations <60 pcm). However, it is worth mentioning that both MCT002-001 and -002 present differences of around 100 pcm because simplified models are used in SCALE while MCNP results are obtained with detailed models. Results for PU benchmarks are presented in Fig 3. In general, both codes predict reasonably similar multiplication factors. The effect of the presence of 9 Be is again observed for PMF018-001, PMF019-001 and PMF021-001, showing differences larger than 2000 pcm. Apart from that, PMF005-001 presents deviations of around 150 pcm, even after updating the KENO-VI model. This benchmark may suggest that additional verification exercises are recommended for W isotopes. This test is performed based on infinite dilution cases along with verification calculations for PMF-005-001. Firstly, infinite dilution tests show remarkable discrepancies between KENO-VI and MCNP for several W isotopes: 182 W, 184 W and 186 W . Concerning PMF-005-001, deviation between both codes is initially around 150 pcm but it is reduced up to 30 pcm when these nuclides are removed from the calculations. This behaviour is also confirmed for other cases such as HMF-003-009, that also contains these isotopes. Thus, further analyses are mandatory to solve this issue. Finally, U233 cases (Fig. 4) are considered covering a wide range of physical forms. Deviations are consistent between KENO-VI and MCNP besides the fact that of 9 Be is involved in both UMF005-001 and -002. On the other hand, UMF004-001 and -002, for which differences of around 200 pcm are found between both codes, are also affected by W isotopes. In general, the AMPX-formatted JEFF-3.3 library shows a reasonably good performance. Results provided in this work are accompanied by extensive verification and validation activities carried out in the frame of the JEFF project. This library has been used to assess temperature trends observed for the IRPhEP KRITZ (KRITZ-LWR-RESR-004) benchmarks [10]. This allowed to test the library at room and elevated temperatures, showing a good performance compared to benchmark results. Additionally, reactor physics calculations have been also performed for the SEFOR fast reactor, evaluating the associated Doppler reactivity effect [11]. 
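Returning to the 9Be case discussed at the start of this section, a patched library needs the total (n,2n) cross section rebuilt from its partial channels. In the ENDF-6 convention, MT = 875-891 hold the partial (n,2n) cross sections and MT = 16 the total; the sketch below only illustrates the summation on a union energy grid and is not the actual AMPX patch, and its input arrays are placeholders rather than 9Be data.

```python
"""Sketch of rebuilding a total (n,2n) cross section by summing its partial
channels (ENDF-6 MT = 875-891) on a union energy grid. The input arrays below
are tiny illustrative placeholders, not 9Be data."""

import numpy as np


def sum_partial_n2n(partials: dict) -> tuple:
    """partials maps MT number -> (energies_eV, xs_barns); returns (grid, total)."""
    keep = {mt: d for mt, d in partials.items() if 875 <= mt <= 891}
    grid = np.unique(np.concatenate([e for e, _ in keep.values()]))
    total = np.zeros_like(grid, dtype=float)
    for e, xs in keep.values():
        total += np.interp(grid, e, xs, left=0.0, right=0.0)  # zero outside each channel's range
    return grid, total


# Two fake partial channels with different thresholds:
e1, xs1 = np.array([1.8e6, 5.0e6, 2.0e7]), np.array([0.0, 0.30, 0.45])
e2, xs2 = np.array([2.5e6, 5.0e6, 2.0e7]), np.array([0.0, 0.10, 0.15])

grid, mt16_candidate = sum_partial_n2n({875: (e1, xs1), 876: (e2, xs2)})
for e, xs in zip(grid, mt16_candidate):
    print(f"E = {e:9.3e} eV  (n,2n) = {xs:.3f} b")
```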
Benchmarking activities successfully support the performance of the processed JEFF-3.3 library, establishing a reference processing route for future releases. In fact, recent preliminary work has been carried out for the JEFF-4T1 testing library [12], paving the way towards an optimized interaction between the future JEFF-4 library and the AMPX processing code. Processing and testing COVERX JEFF-3.3 covariances As aforementioned, the JEFF-3.3 neutron library contains 562 isotopes, of which 447 include covariances, and the procedure detailed in Sect. 2.2 is applied to all of them. Two different energy group structures are used for collapsing both cross sections and covariances in PUFF. For verification purposes, the processing is also carried out with NJOY, specifically using the ERRORR module, allowing the consistency between both codes to be checked. In general, AMPX properly deals with the JEFF-3.3 covariances, and a brief comparison for relevant isotope/reaction pairs is presented in this work. Figs. 5 and 6 show the relative standard deviations for 239Pu (n,f) and 238U (n,γ), respectively. They include results using both AMPX and NJOY and for the mentioned energy group structures. A very good agreement between both codes can be seen in all cases, and also for the 238U (n,γ) correlation matrix (Fig. 7). Figure 6. Relative standard deviation for 238U (n,γ) collapsed into 7- and 33-energy groups using AMPX and NJOY. This agreement can be observed for the rest of the relevant isotopes/reactions, but a more extensive analysis would be worthwhile following the methodology proposed in [13]. This would allow the covariances to be tested in a more comprehensive way. Nonetheless, the COVERX JEFF-3.3 covariance library has already been tested in terms of uncertainty propagation and compared to NJOY-based covariances [14]. A very good agreement was observed between these two different methodologies, ensuring the adequate performance of the covariance matrix. Conclusions and future work The JEFF-3.3 neutron cross section library, including TSLs, has been successfully processed with AMPX using the most recent SCALE release. The CE library has been created for use with the SCALE neutron transport tools. This work is a further step in the interaction between AMPX and the JEFF libraries, identifying remaining issues towards a more efficient procedure. During the testing and benchmarking phases, relevant improvements have been highlighted concerning the treatment of 9Be. On the other hand, several W isotopes require a more specific analysis in order to resolve potential inconsistencies. Nonetheless, the CE library performs adequately for the set of 120 benchmarks and according to all the verification activities already carried out. Associated covariances have also been generated using different energy group structures and verified against NJOY-processed files. Finally, it is worth mentioning that these libraries (along with the JEFF-3.1.1 CE library) have been recently released through the NEA/CPS, aiming to expand the user base of the JEFF nuclear data libraries within the widely used SCALE Code System. This will also contribute to more extensive verification and validation of the current JEFF-3.3 library, targeting future releases such as JEFF-4.
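The quantities compared in Figs. 5-7 follow directly from the group-collapsed covariance matrices: the relative standard deviation is the square root of the diagonal divided by the group cross section, and the correlation matrix is the covariance normalised by the outer product of the standard deviations. A minimal numpy sketch, with placeholder 3-group numbers rather than JEFF-3.3 values, is given below.

```python
"""Minimal sketch of the derived quantities shown in the covariance
verification: relative standard deviations and the correlation matrix.
The 3-group cross section and covariance below are placeholders."""

import numpy as np

xs = np.array([2.0, 1.5, 1.0])                 # group-averaged cross section (barns)
cov = np.array([[4.0e-4, 1.0e-4, 0.0e+0],      # absolute covariance (barns^2)
                [1.0e-4, 9.0e-4, 2.0e-4],
                [0.0e+0, 2.0e-4, 1.6e-3]])

std = np.sqrt(np.diag(cov))                    # absolute 1-sigma uncertainty per group
rel_std = std / xs                             # relative standard deviation (Figs. 5 and 6)
corr = cov / np.outer(std, std)                # correlation matrix (Fig. 7)

print("relative std dev [%]:", np.round(100.0 * rel_std, 2))
print("correlation matrix:\n", np.round(corr, 3))
```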
3,691.8
2023-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Top-pair production at the LHC through NNLO QCD and NLO EW Abstract In this work we present for the first time predictions for top-quark pair differential distributions at the LHC at NNLO QCD accuracy and including EW corrections. For the latter we include not only contributions of O(α_s^2 α), but also those of order O(α_s α^2) and O(α^3). Besides providing phenomenological predictions for all main differential distributions with stable top quarks, we also study the following issues. 1) The effect of the photon PDF on top-pair spectra: we find it to be strongly dependent on the PDF set used, especially for the top p_T distribution. 2) The difference between the additive and multiplicative approaches for combining QCD and EW corrections: with our scale choice, we find relatively small differences between the central predictions, but reduced scale dependence within the multiplicative approach. 3) The potential effect from the radiation of heavy bosons on inclusive top-pair spectra: we find it to be, typically, negligible. Introduction The availability of NNLO QCD predictions for stable top-pair production at the LHC, both for the total cross-section [1][2][3][4] with NNLL soft-gluon resummation [5,6] and for all the main differential distributions [7,8], has made it possible to compare Standard Model (SM) theory with LHC data at the few-percent accuracy level. Such a high precision has led, among others, to further scrutiny of the differences between LHC measurements [9] and the ability of Monte-Carlo event generators to describe hadronic tt production. As a result of these ongoing studies, new MC developments are taking place, such as the incorporation of non-resonant and interference effects [10,11], which builds upon previous works that included NLO top-decay corrections through fixed-order [12][13][14][15][16] and/or showered [17][18][19] calculations. One of the remaining ways for further improving SM theory predictions is by consistently including the so-called electroweak (EW) corrections on top of the NNLO QCD ones. Weak [20][21][22][23][24][25][26][27][28], QED [29] and EW (weak+QED) [30][31][32][33][34] corrections to top-quark pair production have been known for quite some time, and also EW corrections to the fully off-shell dilepton signature are nowadays available [35]. As has been documented in the literature, although EW effects are rather small at the level of the total cross-section, they can have a sizeable impact on differential distributions and also on the top-quark charge asymmetry. The goal of this work is to consistently merge existing NNLO QCD predictions with EW corrections into a single coherent prediction and to study its phenomenological impact. This is achieved by combining the NNLO QCD predictions from ref. [8] with the complete LO and NLO contributions derived within the framework of ref. [34]. Specifically, we include the NLO EW effects of O(α_s^2 α), all subleading NLO (O(α_s α^2) and O(α^3)) terms as well as the LO (O(α_s α) and O(α^2)) contributions. Another motivation for this study stems from recent developments in understanding the photon content of the proton [36,37]. As shown in ref. [34], depending on the PDF set, photon-initiated contributions can be numerically significant in some regions of phase space.
1 If the photon density from the NNPDF3.0QED set [40,41] is employed, the photon-initiated contribution is large in size and of opposite sign with respect to the Sudakov EW corrections, leading to the almost complete cancellation of the two effects. Nevertheless, large PDF uncertainties from the photon PDF are still present after this cancellation. On the other hand, theoretical consensus about the correctness of the novel approach introduced in ref. [37] appears to have emerged by now. 2 The PDF set provided with ref. [37], named LUXQED, includes a photon PDF whose central value and relative uncertainty are both much smaller than in the case of NNPDF3.0QED. Thus, at variance with the NNPDF3.0QED set, neither large cancellation between Sudakov effects and photon-induced contributions nor large photon PDF uncertainty is present in LUXQED-based predictions. In order to document the ambiguity arising from the differences between the photon densities in the available PDF sets, with the exception of sec. 2, in this work we always give predictions for top-pair differential distributions at the LHC based on the LUXQED [37] and NNPDF3.0QED [40,41] PDF sets. We believe that our findings will provide a valuable input to future PDF determinations including EW effects. This paper is organised as follows: sec. 2 is devoted to the phenomenological study of our combined QCD and EW predictions for the LHC at 13 TeV. The reasons behind some of the choices made in sec. 2 -like the choice of PDF set and combination approach -are revealed in sec. 3, where we compare in great detail two approaches for combining NNLO QCD and EW corrections in top-pair differential distributions. The so-called additive approach is discussed in sec. 3.1, while the multiplicative one in sec. 3.2. Section 4 is dedicated to studying the impact of the photon PDF on top-pair spectra. In sec. 5 we provide an estimate of the impact of inclusive Heavy Boson Radiation (HBR), namely the contribution from ttV final states with V = H, W ± , Z. While most of the notation is introduced in the main text some technical details are delegated to Appendix A. Phenomenological predictions for the LHC at 13 TeV In this section we present predictions for tt distributions for the LHC at 13 TeV at NNLO QCD accuracy including also EW corrections. We focus on the following distributions: the top-pair invariant mass m(tt), the top/antitop average transverse momentum (p T,avt ) and rapidity (y avt ) and the rapidity y(tt) of the tt system. The p T,avt (y avt ) distributions are calculated not on an event-by-event basis but by averaging the results of the histograms for the transverse momentum (rapidity) of the top and the antitop. Our calculation is performed using the following input parameters m t = 173.3 GeV , m H = 125.09 GeV , m W = 80.385 GeV , m Z = 91.1876 GeV , (2.1) while all other fermion masses are set to zero. All masses are renormalised on-shell and all decay widths are set to zero. The renormalisation of α s is performed in the 5-flavour scheme while EW input parameters and the associated α renormalisation condition are in the G µ -scheme, with G µ = 1.1663787 · 10 −5 GeV −2 . (2. 2) The EW corrections have been calculated in a completely automated way via an extension of the MadGraph5 aMC@NLO code [43] that has been already validated in refs. [44,45], and in ref. [46] for the calculation of the complete NLO corrections. We work with dynamical renormalisation (µ r ) and factorisation (µ f ) scales. 
Their common central value is defined as µ = m_T/2 for the top and antitop transverse-momentum distributions and µ = H_T/4 for all other distributions, where H_T is the sum of the transverse masses of the top and antitop quarks and the transverse mass is given by m_T ≡ (m_t^2 + p_T^2)^{1/2} for each of the two quarks. As already mentioned, the p_T,avt and y_avt distributions are obtained by averaging the top and antitop distributions for the transverse momentum and rapidity, respectively. These scale choices have been motivated and studied at length in ref. [8]. In all cases theoretical uncertainties due to missing higher orders are estimated via the 7-point variation of µ_r and µ_f in the interval {µ/2, 2µ} with 1/2 ≤ µ_r/µ_f ≤ 2. We remark that the combination of QCD and EW corrections is independently performed for each value of µ_f,r. For theoretical consistency, a set of PDFs including QED effects in the DGLAP evolution should always be preferred whenever NLO EW corrections are computed. At the moment, the only two NNLO QCD accurate PDF sets that include them are NNPDF3.0QED and LUXQED. 3 Both sets have a photon density, which induces additional contributions to tt production [29,34]. As motivated and discussed at length in sec. 3, the phenomenological predictions in this section are based on the LUXQED PDF set and on the multiplicative approach for combining QCD and EW corrections, which we will denote as QCD × EW. We invite the interested reader to consult secs. 3 and 4, where detailed comparisons between the two PDF sets as well as between the two approaches for combining QCD and EW corrections can be found. From the plots shown in fig. 1 we conclude that the impact of the EW corrections relative to NNLO QCD depends strongly on the kinematic distribution. The smallest impact is observed in the two rapidity distributions: the relative effect for y_avt is around 2 permil and is much smaller than the scale uncertainty. The y(tt) distribution is slightly more sensitive, with a relative impact of slightly above 1% for large values of y(tt). This correction is also well within the scale-variation uncertainty band. The impact of EW corrections on the m(tt) distribution is larger. Relative to NNLO QCD it varies between +2% at the absolute threshold and -6% at high energies. Still, this correction is well within the scale variation uncertainty. The small sensitivity of the y_avt and y(tt) distributions to EW corrections supports the findings of ref. [9], where these two distributions were used for constraining the gluon PDF. The largest correction due to EW effects is observed in the p_T,avt distribution. Relative to NNLO QCD, the correction ranges from +2% at low p_T,avt to -25% at p_T,avt ∼ 3 TeV. The correction is significant and is comparable to the scale variation band already for p_T,avt ∼ 500 GeV. Overall, the EW contribution to the p_T,avt distribution is as large as the total theory uncertainty band in the full kinematic range p_T,avt ≤ 3 TeV considered in this work. The fraction of the theory uncertainty induced by PDFs is strongly dependent on kinematics. For the y_avt and y(tt) distributions, the PDF error is slightly smaller than the scale uncertainty for central rapidities, but is larger in the peripheral region, especially for the y(tt) distribution. The PDF uncertainty becomes the dominant source of theory error in the p_T,avt distribution for p_T,avt as large as 500 GeV, while for the m(tt) distribution it begins to dominate over the scale uncertainty for m(tt) ∼ 2.5 TeV. There are many applications for the results derived in this work.
Examples are: inclusion of EW effects in PDF determinations from LHC tt distributions, high-mass LHC searches, precision SM LHC measurements and benchmarking of LHC event generators. A practical and sufficiently accurate procedure for the utilisation of our results could be as follows. One starts by deriving an analytic fit for the QCD × EW/QCD K-factor (all K-factors in fig. 1 are available in electronic form 4); as evident from fig. 1 it is a very smooth function for all four differential distributions. Under the assumption that this K-factor is PDF independent, such an analytic fit could then be used to rescale the NNLO-QCD-accurate differential distributions derived with any PDF set from existing NNLO QCD fastNLO tables [50][51][52]. Regarding the PDF error of NNLO QCD differential distributions, it can be calculated very quickly with any PDF set with the help of the fastNLO tables of ref. [50]. As we show in the following, the PDF error of the QCD and of the combined QCD and EW predictions is almost the same, especially for the LUXQED PDF set used for our phenomenological predictions. Comparison of two approaches for combining NNLO QCD predictions and EW corrections In this work we compare two approaches for combining QCD and EW corrections. For brevity, we will refer to them as additive and multiplicative approaches. As already mentioned, the results presented in sec. 2 have been calculated using the multiplicative approach. In the additive approach the NNLO QCD predictions (defined as the complete set of O(α_s^n) terms up to n = 4) are combined with all possible remaining LO and NLO terms arising from QCD and electroweak interactions in the Standard Model. In other words, at LO we include not only the purely QCD O(α_s^2) contribution, but also all O(α_s α) and O(α^2) terms. Similarly, at NLO we take into account not only the NLO QCD O(α_s^3) contribution but also the O(α_s^2 α) one, the so-called NLO EW, as well as the subleading contributions of O(α_s α^2) and O(α^3). For brevity, we will denote as "EW corrections" the sum of all LO and NLO terms of the form O(α_s^m α^n) with n > 0. Moreover, when we refer to "QCD" results, we understand predictions at NNLO QCD accuracy. For a generic observable Σ in the additive approach we denote the prediction at this level of accuracy as Σ_QCD+EW. In the multiplicative approach one assumes complete factorisation of NLO QCD and NLO EW effects. This approach is presented in sec. 3.2 and is denoted as Σ_QCD×EW. The precise definition of the various quantities mentioned in the text is given in appendix A, where an appropriate notation for the classification of the different contributions is introduced. Here, we just state the most relevant definitions for the following discussion, Σ_QCD+EW ≡ Σ_QCD + Σ_EW and Σ_QCD×EW ≡ Σ_QCD+EW + K_NLO QCD Σ_NLO EW, with K_NLO QCD ≡ Σ_NLO QCD/Σ_LO QCD, (3.1) where Σ denotes a generic observable in tt production and K_NLO QCD is the standard NLO/LO K-factor in QCD. A variation of the multiplicative approach denoted as Σ_QCD^2×EW will also be considered; it is defined similarly to Σ_QCD×EW in eq. (3.1) but with the NNLO/LO QCD K-factor. As discussed in ref. [34], the use of different PDF sets leads to a very different impact of the photon-induced contribution on tt distributions. While in the case of NNPDF3.0QED the impact of photon-induced contributions is relatively large and affected by very large uncertainties, in the case of LUXQED it is expected to be negligible. For this reason, in the rest of this work we always show predictions with both PDF sets.
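Since the scale-uncertainty bands play a central role in the comparison that follows, a minimal sketch of the dynamical scale and of the 7-point variation described in sec. 2 is given below. It assumes the central-scale choices quoted above (m_T/2 for the transverse-momentum spectra, H_T/4 otherwise); the kinematic values in the example are placeholders.

```python
"""Minimal sketch of the dynamical central scale and of the 7-point
(mu_r, mu_f) variation used for the theory-uncertainty bands. It assumes the
scale choices quoted in the text (m_T/2 for the p_T spectra, H_T/4 otherwise);
the kinematics below are placeholders."""

import math
from itertools import product

M_TOP = 173.3  # GeV, as in eq. (2.1)


def m_T(pt: float, m: float = M_TOP) -> float:
    """Transverse mass m_T = sqrt(m^2 + p_T^2)."""
    return math.sqrt(m * m + pt * pt)


def central_scale(pt_top: float, pt_antitop: float, observable: str) -> float:
    if observable in ("pT_t", "pT_tbar"):                 # transverse-momentum spectra
        return m_T(pt_top) / 2.0
    return (m_T(pt_top) + m_T(pt_antitop)) / 4.0          # H_T/4 for all other distributions


def seven_point(mu0: float):
    """The 7 (mu_r, mu_f) pairs: factors {1/2, 1, 2} with 1/2 <= mu_r/mu_f <= 2."""
    return [(a * mu0, b * mu0)
            for a, b in product([0.5, 1.0, 2.0], repeat=2)
            if 0.5 <= a / b <= 2.0]


mu0 = central_scale(pt_top=400.0, pt_antitop=380.0, observable="m_tt")
for mur, muf in seven_point(mu0):
    print(f"mu_r = {mur:7.2f} GeV, mu_f = {muf:7.2f} GeV")
```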
Additive combination Distributions for p T,avt and m(tt) are shown in fig. 2, while the y avt and y(tt) distributions are shown in fig. 3. The format of the plots for all distributions is as follows: for each observable, we show two plots side-by-side, with the same layout. The plot on the lefthand side shows predictions obtained using the LUXQED set, while for the one on the right the NNPDF3.0QED set is employed. Results at NNLO QCD accuracy are labelled as "QCD" while the combination of NNLO QCD predictions and EW corrections in the additive approach are labelled as "QCD+EW". In each plot the three insets display ratios of different quantities 5 over the centralscale QCD result (i.e., in the case of LUXQED, the black line in the main panel of fig. 1). In the first inset we show the scale uncertainty due to EW corrections alone (red band), without QCD contributions (Σ EW using the notation of Appendix A). This quantity can be compared to the scale uncertainty of the QCD prediction at NNLO accuracy (grey band). In the second inset we present the scale-uncertainty band (red) for the combined QCD+EW prediction. The grey band corresponds to the NNLO QCD scale-uncertainty band already shown in the first inset. The third inset is equivalent to the second one, but it shows the PDF uncertainties. We combine, for each one of the PDF members, the QCD prediction and the EW corrections into the QCD+EW result. The PDF uncertainty band of the QCD+EW prediction is shown in red while the grey band corresponds to the PDF uncertainty of the QCD prediction. For all insets, when the grey band is covered by the red one, its borders are displayed as black dashed lines. As can be seen in figs. 2 and 3, the effect of EW corrections is, in general, within the NNLO QCD scale uncertainty. A notable exception is the case of the p T,avt distribution with LUXQED. In the tail of this distribution the effect of Sudakov logarithms is large and negative, of the order of -(10-20%), and is not compensated by the photon-induced contribution. On the contrary, in the case of NNPDF3.0QED, photon-induced contributions mostly compensate the negative corrections due to Sudakov logarithms. As it has already been noted in ref. [34], with this PDF set, the effect of photon-induced contributions is not negligible also for large values of m(tt), y avt and y(tt). As it can be seen in the first inset, in the large p T,avt regime the scale dependence of the EW corrections alone is of the same size as, or even larger than, the scale variation at NNLO QCD accuracy. For this reason, as evident from the second inset, the scale uncertainty of the combined QCD+EW prediction is much larger than in the purely QCD case, both with the LUXQED and NNPDF3.0QED PDF sets. This feature is present only in the tail of the p T,avt distribution. The PDF uncertainties (third inset) for all distributions do not exhibit large differences between QCD and QCD+EW predictions, despite the fact that the photon-induced contribution in NNPDF3.0QED is large and has very large PDF uncertainty (relative to LUXQED). Multiplicative combination and comparison with the additive one The additive approach Σ QCD+EW for combining QCD and EW corrections discussed in sec. 3.1 is exact to the order at which the perturbative expansion of the production crosssection is truncated. An alternative possibility for combining QCD and EW corrections is what we already called the multiplicative approach, Σ QCD×EW . 
This approach is designed to approximate the leading EW corrections at higher orders. In the case of tt production these are NNLO EW contributions of order O(α 3 s α). The multiplicative approach is motivated by the fact that soft QCD and EW Sudakov logarithms factorise, with the latter typically leading to large negative corrections for boosted kinematics. Thus, when dominant NLO EW and NLO QCD corrections are at the same time induced by these two effects, the desired fixed order can be very well approximated via rescaling NLO EW corrections with NLO QCD K-factors. 6 Otherwise, if one is in a kinematical regime for which the dominant NLO EW or NLO QCD corrections are of different origin (i.e. not Sudakov or soft), the difference between the multiplicative and additive approaches given by the term Σ mixed in eq. (A.8) can be considered as an indication of theory uncertainty in that kinematics. It must be stressed that the perturbative orders involved in the additive approach are included exactly also in the multiplicative approach; the only addition the multiplicative approach introduces on top of the additive one is the approximated O(α 3 s α) contribution. One of the advantages of the multiplicative approach is the stabilisation of scale dependence. As we saw in sec. 3.1, when QCD and EW corrections are combined in the additive approach, the scale dependence at large p T,avt can exceed that of the NNLO QCD prediction. On the other hand, the large p T,avt limit is precisely the kinematic regime where the multiplicative approach is a good approximation and can be trusted: at large p T,avt the NLO EW and NLO QCD corrections are mainly induced by Sudakov logarithms and soft emissions, respectively, and as we just pointed out these two contributions factorise. The presence of large Sudakov logarithms in the NLO EW result at large p T,avt is easy to see since for Born kinematics large p T,t implies large p T,t which, in turn, implies largê s,t andû Mandelstam variables. That NLO QCD corrections at large p T,avt are mainly of soft origin can be shown with an explicit NLO calculation; by applying appropriate cuts on the jet, the top and/or the antitop one can easily see that the differential cross-section is dominated by kinematic configurations containing almost back-to-back hard top and antitop and a jet with small p T . Plots demonstrating this can be found in footnote 4. In the following, for all observables Σ considered in this work, we present predictions in the multiplicative approach denoted as Σ QCD×EW . As a further check of the stability of the multiplicative approach we display also the quantity Σ QCD 2 ×EW , whose precise definition can be found in appendix A. Σ QCD 2 ×EW is defined analogously to Σ QCD×EW , but by rescaling NLO EW corrections via NNLO QCD K-factors. By comparing Σ QCD×EW and Σ QCD 2 ×EW one can further estimate the uncertainty due to mixed QCD-EW higher orders. Figure 4 shows the p T,avt and m(tt) distributions, while fig. 5 refers to y avt and y(tt). As in sec. 3.1, the plots on the left are produced using the LUXQED PDF set, while those on the right using the NNPDF3.0QED PDF set. We next describe the format of the plots. Each plot consists of five insets, which all show ratios of different quantities over the central value of Σ QCD . In the first inset we compare the central-scale results for the three alternative predictions: Σ QCD+EW /Σ QCD (red line), Σ QCD×EW /Σ QCD (green line) and Σ QCD 2 ×EW /Σ QCD (violet line). 
These quantities are further displayed in the second, third and fourth inset, respectively, where not only the central value but also the scale dependence of the numerator is shown. In all cases we calculate the scale-uncertainty band as a scale-by-scale combination and subsequent variation in the 7-point approach. Scale variation bands have the same colour as the corresponding central-value line. For comparison we also display (grey band) the relative scale uncertainty of Σ_QCD. Thus, the second inset is exactly the same as the second inset in the corresponding plots in sec. 3.1. The last inset shows a comparison of the ratio Σ_QCD+EW/Σ_QCD including (red line) or not (orange line) the contribution Σ_res, where "res" stands for residual and denotes the fact that Σ_res are contributions to Σ_EW that are expected to be small, regardless of the PDF set used (see eq. (A.6)). As expected, the multiplicative approach shows a much smaller dependence on the scale variation. This is particularly relevant for the tail of the p_T,avt distribution, where the scale uncertainty of Σ_EW alone is comparable in size to that of Σ_QCD; with this reduction of the scale uncertainty the Σ_QCD×EW and Σ_QCD uncertainty bands do not overlap when LUXQED is used. In the case of the m(tt) and y_avt distributions, the Σ_QCD×EW central-value predictions are typically larger in absolute value than those of Σ_QCD+EW, while they are all almost of the same size for the y(tt) distribution. In the case of y_avt the difference between the additive and multiplicative approaches is completely negligible compared to their scale uncertainty. Therefore, besides the kinematic region where Sudakov effects are the dominant contribution, the multiplicative and additive approaches are equivalent. Figure 5. As in fig. 4 but for the y_avt and y(tt) differential distributions. Moreover, the difference between Σ_QCD×EW and Σ_QCD^2×EW is in general small; a sizeable difference between their scale dependences can be noted only in the tail of the p_T,avt distribution. For all the reasons mentioned above we believe that the multiplicative approach should be preferred over the additive one and, indeed, it has been used for the calculation of our best predictions in sec. 2. As can be seen from figs. 4 and 5 and their threshold-zoomed-in versions in footnote 4, the difference between Σ_QCD+EW and Σ_QCD×EW for non-boosted kinematics is much smaller than the total theory uncertainty (scale+PDF) shown in fig. 1. Thus, the difference between the two approaches can be safely ignored in the estimation of the theory uncertainty. One should bear in mind that this conclusion depends on the choice of scale, which in our case, as explained in ref. [8], is based on the principle of fastest convergence. A different scale choice with larger K-factors would likely artificially enhance the difference between Σ_QCD+EW and Σ_QCD×EW. In the last inset in figs. 4 and 5 we compare the quantities Σ_EW and Σ_EW − Σ_res, where the Σ_res contribution is exactly included in both the additive and multiplicative approaches. As expected, one can see that the Σ_res contribution is typically flat and very small. The only exception is the m(tt) distribution, where a visible difference between the two curves (Σ_EW and Σ_EW − Σ_res) is present, especially in the tail.
The Σ_res contribution includes the squared EW tree-level diagrams, i.e. the O(α^2) contribution denoted as Σ_LO,3 in appendix A, together with the subleading NLO terms of O(α_s α^2) and O(α^3). Impact of the photon PDF In this section we quantify the impact on tt differential distributions of the difference between the photon densities provided by the LUXQED and NNPDF3.0QED PDF sets. In other words, we repeat the study performed in ref. [34] for these two PDF sets, since they were not considered in that work. We compare the size of the electroweak corrections with and without the photon PDF for both PDF sets. In each plot of fig. 6 we show the relative impact induced by the electroweak corrections (the ratio Σ_EW/Σ_QCD; see definitions in Appendix A) for four cases: NNPDF3.0QED setting the photon PDF equal to zero (red) or not (green), and LUXQED setting the photon PDF equal to zero (violet) or not (blue). For the cases including the photon PDF, we also show the PDF-uncertainty band of Σ_EW. The impact of the photon-induced contribution can be evaluated via the difference between the green and red lines in the case of NNPDF3.0QED and the difference between the blue and violet lines in the case of LUXQED. As can be seen in fig. 6, the impact of the photon PDF on the p_T,avt, m(tt), y_avt and y(tt) distributions is negligible in the case of LUXQED, while it is large and affected by very large uncertainties in the case of NNPDF3.0QED, as already pointed out in ref. [34] for NNPDF2.3QED. At very large p_T,avt and m(tt) also LUXQED shows a non-negligible relative PDF uncertainty, which is induced not by the photon but by the PDFs of the coloured partons at large x. We checked that a similar behaviour is exhibited also by NNPDF3.0QED when its photon PDF is set to zero. Contributions from heavy boson radiation In the calculation of EW corrections to QCD processes the inclusion of real emissions of massive gauge bosons (heavy boson radiation or HBR) is not mandatory since, due to the finite mass of the gauge bosons, real and virtual weak corrections are separately finite (albeit the virtual corrections are enhanced by large Sudakov logarithms). Furthermore, such emissions are typically resolved in experimental analyses and are generally considered as a different process ttV(+X) with V = H, W±, Z. Figure 6. Impact of the photon PDF on the p_T,avt, m(tt), y_avt and y(tt) differential distributions at 13 TeV. The format of the plots is described in the text. For these reasons, the results in sec. 2 do not include HBR contributions. It is, nonetheless, interesting to estimate the contribution of HBR to inclusive tt production. Our motivation is threefold: first, resolved or not, HBR is a legitimate contribution to the tt(+X) final state considered in this work. Secondly, it is clear that one cannot guarantee that HBR is resolved with 100% efficiency. Therefore, it is mandatory to have a prior estimate for the size of the effect. Finally, we are unaware of prior works where the HBR contribution has been estimated in inclusive tt production. Recently, refs. [44,53] have provided estimates for HBR in the processes ttV(+X), with V = H, Z, W. We have investigated the impact of HBR on all four distributions considered in this work: p_T,avt, m(tt), y_avt and y(tt). Our results are shown in fig. 7, where we plot the effect of HBR on the central scale normalised to the QCD prediction. We show separately the LO HBR effect of order O(α_s^2 α) as well as the NLO QCD HBR prediction, which includes terms of order O(α_s^3 α). As a reference we also show the EW corrections for tt.
In our calculations we include HBR due to H, W and Z. We are fully inclusive in HBR, i.e., no cuts on the emitted heavy bosons are applied. Clearly, any realistic experimental analysis will require an estimate of HBR subject to experimental cuts, but such an investigation would be well outside the scope of the present work. From fig. 7 we conclude that the effect of HBR is generally much smaller than the EW corrections. In particular, higher-order QCD corrections to HBR are completely negligible, i.e. HBR is well described in LO for all the tt inclusive distributions and for the full kinematic ranges considered here. The absolute effect of HBR on the p T,avt distribution is positive and small; it never exceeds 2-3% (relative to the tt prediction at NNLO QCD accuracy) and is always much smaller than the EW correction. The only distribution where the HBR contribution is not negligible compared to the EW one is m(tt) computed with LUXQED. For this distribution the HBR correction is positive and only about half the absolute size of the (negative) EW correction. Still, the absolute size of the HBR, relative to the prediction at NNLO QCD accuracy, is within 1% and so its phenomenological relevance is unclear. The impact of HBR on the two rapidity distributions is tiny, typically within 3 permil of the NNLO QCD prediction. Conclusions In this work we derive for the first time predictions for all main top-quark pair differential distributions 7 with stable top quarks at the LHC at NNLO QCD accuracy and including the following EW corrections: the NLO EW effects of O(α 2 s α), all subleading NLO terms of order O(α s α 2 ) and O(α 3 ) as well as the LO contributions of order O(α s α) and O(α 2 ). We present a detailed analysis of top-pair production at the LHC at 13 TeV and we find that the effect of EW corrections on differential distributions with stable top quarks is in general within the current total (scale+PDF) theory uncertainty. A notable exception is the p T,avt distribution in the boosted regime where the effect of EW corrections is significant with respect to the current total theory error. We have checked that similar conclusions apply also for LHC at 8 TeV. All results derived in this work in the multiplica-tive approach, for both 8 and 13 TeV, are available in electronic form 4 as well as with the ArXiv submission of this paper. Providing phenomenological predictions for the LHC is only one of the motivations for the present study. In this work we also quantify the impact of the photon PDF on top-pair differential distributions and study the difference between the additive and multiplicative approaches for combining QCD and EW corrections. Moreover, we analyse the contribution from inclusive Heavy Boson Radiation on inclusive top-pair differential distributions. In order to quantify the impact of the photon PDF, we use two recent PDF sets whose photon components are constructed within very different approaches. The first set, LUXQED, is based on the PDF4LHC15 set [54] and adds to it a photon contribution that is derived from the structure function approach of ref. [37]. The second set, NNPDF3.0QED, is based on the NNPDF3.0 family of PDFs and adds a photon component that is extracted from a fit to collider data. NNPDF3.0QED photon density has both a much larger central value and PDF uncertainty than those of LUXQED. On the other hand, the two sets are compatible within PDF errors and they both include QED effects in the DGLAP evolution on top of the usual NNLO QCD evolution. 
We confirm the observations already made in ref. [34], namely, that the way the photon PDF is included impacts all differential distributions. The size of this impact is different for the various distributions; the most significant impact can be observed in the p_T,avt distribution at moderate and large p_T, where the net effect from EW corrections based on NNPDF3.0QED is rather small and affected by large PDF uncertainties, while using LUXQED it is negative, with small PDF uncertainties and comparable to the size of the NNLO QCD scale error. The m(tt) distribution displays even larger effects, but only at extremely high m(tt). The y(tt) distribution is also affected at large y(tt) values. 8 It seems to us that a consensus is emerging around the structure-function approach of ref. [37]. Given its appealing predictiveness, this approach will likely be utilised in the future in other PDF sets. Therefore, at present, it seems to us that as far as the photon PDF is concerned predictions based on the LUXQED set should be preferred. Our best predictions in this work are based on the so-called multiplicative approach for combining QCD and EW corrections. We have also presented predictions based on the standard additive approach. In general, we find that the difference between the two approaches is small and well within the scale uncertainty band. The difference between the two approaches is more pronounced for the m(tt) and p_T,avt distributions. Nevertheless, both approaches agree within the scale variation. The scale uncertainty is smaller within the multiplicative approach and, especially in the case of the p_T,avt distribution, does not overlap with the NNLO QCD uncertainty band. We stress that these features may be sensitive to the choice of factorisation and renormalisation scales. Since we are unaware of a past study of Heavy Boson Radiation (i.e. H, W± and Z) in inclusive tt production, for completeness, we have also presented the impact of inclusive HBR on the inclusive top-pair differential spectrum. While it is often assumed that additional HBR emissions can be removed in the measurements, it is nevertheless instructive to consider the contribution of such final states. We find that, typically, the HBR contribution is negligible, except for the m(tt) distribution, where it tends to partially offset the EW correction (when computed with LUXQED). We have also checked that NLO QCD corrections to the LO HBR result are negligible for all inclusive tt distributions considered by us. In appendix A a generic observable Σ in tt production is expanded in powers of α_s^m α^n, with the coefficient of each term denoted as Σ_{m+n,n}. The LO (m + n = 2), NLO (m + n = 3) and NNLO (m + n = 4) contributions read Σ^tt_LO(α_s, α) = α_s^2 Σ_2,0 + α_s α Σ_2,1 + α^2 Σ_2,2 ≡ Σ_LO,1 + Σ_LO,2 + Σ_LO,3 , Σ^tt_NLO(α_s, α) = α_s^3 Σ_3,0 + α_s^2 α Σ_3,1 + α_s α^2 Σ_3,2 + α^3 Σ_3,3 ≡ Σ_NLO,1 + Σ_NLO,2 + Σ_NLO,3 + Σ_NLO,4 , Σ^tt_NNLO(α_s, α) = α_s^4 Σ_4,0 + α_s^3 α Σ_4,1 + ... ≡ Σ_NNLO,1 + Σ_NNLO,2 + ... . In order to simplify the notation, we further define the following purely QCD quantities and those involving EW corrections: Σ_LO QCD ≡ Σ_LO,1 , Σ_NLO QCD ≡ Σ_NLO,1 , Σ_NNLO QCD ≡ Σ_NNLO,1 , Σ_QCD ≡ Σ_LO QCD + Σ_NLO QCD + Σ_NNLO QCD , and Σ_LO EW ≡ Σ_LO,2 , Σ_NLO EW ≡ Σ_NLO,2 , Σ_res ≡ Σ_LO,3 + Σ_NLO,3 + Σ_NLO,4 , Σ_EW ≡ Σ_LO EW + Σ_NLO EW + Σ_res . (A.6) Throughout this work with the term "EW corrections" we refer to the quantity Σ_EW, while the term "NLO EW corrections" will only refer to Σ_NLO EW. In the additive approach, which is presented in section 3.1, QCD and electroweak corrections are combined through the linear combination Σ_QCD+EW ≡ Σ_QCD + Σ_EW . (A.7) The so-called "multiplicative approach", which has been discussed in sec. 3.2, is precisely defined in the following. The purpose of the multiplicative approach is to estimate the size of Σ_NNLO,2, which for convenience we rename Σ_mixed and, assuming complete factorisation of NLO QCD and NLO EW effects, we estimate as Σ_mixed ≈ K_NLO QCD Σ_NLO EW ≡ (Σ_NLO QCD/Σ_LO QCD) Σ_NLO EW . (A.8) In the regime where NLO QCD corrections are dominated by soft interactions and NLO EW by Sudakov logarithms, eq.
(A.8) is a very good approximation, since the two effects factorise and are dominant. In other regimes Σ_mixed can be used as an estimate of the leading missing mixed QCD-EW higher orders. The advantage of the inclusion of Σ_mixed is the stabilisation of the scale dependence of the term Σ_NLO EW, which in tt production has almost the same functional form as Σ_LO QCD (we say "almost" because this order also receives QCD corrections to the Σ_LO EW contributions from the gγ and bb initial states; up to these effects, the scale dependence of Σ_NLO EW tracks that of Σ_LO QCD, i.e. Σ_NLO EW(µ_2) = Σ_NLO EW(µ_1) Σ_LO QCD(µ_2)/Σ_LO QCD(µ_1)). To this end we define the multiplicative combination as Σ_QCD×EW ≡ Σ_QCD+EW + Σ_mixed. In order to test the stability of the multiplicative approach under even higher mixed QCD-EW orders, we also combine NNLO QCD corrections and NLO EW corrections, in the quantity denoted as Σ_QCD^2×EW in the main text, in order to estimate, besides the Σ_mixed term, also NNNLO contributions of order α_s^4 α. Finally, we briefly describe how the dependence on the photon PDF enters the different perturbative orders. At LO and NLO accuracy, all contributions, with the exception of Σ_LO QCD and Σ_NLO QCD, depend on the photon PDF. The dominant photon-induced process is gγ → tt, which contributes to Σ_LO EW and, via QCD corrections to this order, to Σ_NLO EW. In addition, Σ_NLO EW, but also Σ_NLO,3 and Σ_NLO,4, receive contributions from the qγ → ttq and q̄γ → ttq̄ processes. Moreover, in the case of Σ_LO,3 and Σ_NLO,4, also the γγ initial state contributes. As already discussed in ref. [34], almost all of the photon-induced contribution arises from Σ_LO EW. In this work, at variance with ref. [34], we also include the term Σ_res in our calculations. However, since the size of Σ_res is in general small, the previous argument still applies. The numerical impact of Σ_res is discussed in sec. 3.2. Given the structure of the photon-induced contributions described before, it is also important to note that, with LUXQED, the multiplicative approach is a better approximation of Σ_mixed than in the case of NNPDF3.0QED. Indeed, the order Σ_NLO EW contains also terms that can be seen as "QCD corrections" to the gγ contributions in Σ_LO,2 (negligible only with LUXQED), which are not taken into account in the multiplicative approach.
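The bookkeeping of appendix A can be made concrete with a short numerical sketch. The snippet below assembles Σ_QCD, Σ_EW, the additive combination of eq. (A.7) and the multiplicative one built from the Σ_mixed estimate of eq. (A.8); the coupling values and Σ_{i,j} coefficients are placeholders chosen only to make the code run, not results of the calculation.

```python
"""Sketch of the order-by-order bookkeeping of appendix A: each coefficient
Sigma_{i,j} multiplies alpha_s**(i-j) * alpha**j. All numerical values below
are placeholders, not results of the actual calculation."""

ALPHA_S, ALPHA = 0.108, 0.0075   # illustrative coupling values only

# (i, j) -> Sigma_{i,j}, with i the total perturbative order and j the power of alpha
sigma = {
    (2, 0): 900.0, (2, 1): 8.0, (2, 2): 0.3,                 # LO terms
    (3, 0): 400.0, (3, 1): -6.0, (3, 2): 0.2, (3, 3): 0.01,  # NLO terms
    (4, 0): 120.0,                                           # NNLO QCD term
}


def term(i: int, j: int) -> float:
    return sigma.get((i, j), 0.0) * ALPHA_S ** (i - j) * ALPHA ** j


lo_qcd, nlo_qcd, nnlo_qcd = term(2, 0), term(3, 0), term(4, 0)
lo_ew, nlo_ew = term(2, 1), term(3, 1)
res = term(2, 2) + term(3, 2) + term(3, 3)          # Sigma_res

qcd = lo_qcd + nlo_qcd + nnlo_qcd                   # Sigma_QCD
ew = lo_ew + nlo_ew + res                           # Sigma_EW
additive = qcd + ew                                 # Sigma_QCD+EW, eq. (A.7)
mixed = (nlo_qcd / lo_qcd) * nlo_ew                 # Sigma_mixed estimate, eq. (A.8)
multiplicative = additive + mixed                   # Sigma_QCDxEW

print(f"QCD = {qcd:.4f}  EW = {ew:.4f}  QCD+EW = {additive:.4f}  QCDxEW = {multiplicative:.4f}")
```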
8,887.2
2017-05-11T00:00:00.000
[ "Physics" ]
Comparative Proteomic Analysis by iTRAQ Reveals that Plastid Pigment Metabolism Contributes to Leaf Color Changes in Tobacco (Nicotiana tabacum) during Curing. Tobacco (Nicotiana tabacum) is a major non-food agricultural crop cultivated worldwide for its economic value. Among the several color-change-associated biological processes, plastid pigment metabolism is of pivotal importance in postharvest plant organs during curing and storage. However, the molecular mechanisms involved in carotenoid and chlorophyll metabolism, as well as in the color change of tobacco leaves during curing, need further elaboration. Here, proteomic analysis at different curing stages (0 h, 48 h, 72 h) was performed in tobacco cv. Bi'na1 with the aim of investigating the molecular mechanisms of pigment metabolism in tobacco leaves as revealed by the iTRAQ proteomic approach. Our results revealed significant differences in leaf color parameters and ultrastructural fingerprints, indicating an acceleration of chloroplast disintegration and a promotion of pigment degradation in tobacco leaves due to curing. In total, 5931 proteins were identified, of which 923 (450 up-regulated, 452 down-regulated, and 21 common) differentially expressed proteins (DEPs) were obtained from tobacco leaves. To elucidate the molecular mechanisms of pigment metabolism and color change, 19 DEPs involved in carotenoid metabolism and 12 DEPs related to chlorophyll metabolism were screened. The results exhibited a complex regulation of DEPs in carotenoid metabolism, a negative regulation in chlorophyll biosynthesis, and a positive regulation in chlorophyll breakdown, which delayed the degradation of xanthophylls and accelerated the breakdown of chlorophylls, promoting the formation of yellow color during curing. In particular, the up-regulation of the chlorophyllase-1-like isoform X2 was the key protein regulatory mechanism responsible for chlorophyll metabolism and color change. The expression pattern of 8 genes was consistent with the iTRAQ data. These results not only provide new insights into pigment metabolism and color change underlying the postharvest physiological regulatory networks in plants, but also offer a broader perspective that prompts further screening of key proteins in tobacco leaves during curing. Introduction Tobacco (Nicotiana tabacum) is an extensively investigated model plant and one of the most widely cultivated non-food crops. Given its agricultural importance, tobacco is grown in more than 100 countries for its foliage, mainly consumed as cigarettes, cigars, snus, snuff, etc. Plant organs, including fruit, flowers, and leaves, undergo a series of complex physiological and biochemical changes when they are detached from the mother plant [1][2][3]. Curing is the process by which harvested raw materials are transformed into products meeting target requirements through defined treatment steps. Color is one of the quality factors of cash crops and agricultural products, and color change associated with carotenoid and chlorophyll metabolism is one of the most obvious phenomena in postharvest vegetative organs during curing and storage. A substantial amount of research has been carried out over the past few decades to comprehend this fundamental postharvest physiological process and to improve the commercial value of agricultural goods [4][5][6].
The carotenoids represent the most widespread group of pigments in nature, with over 750 members and an estimated yield of 100 million tons per year [6][7][8], and many carotenoid compounds have been examined, such as β-carotene, lutein, violaxanthin, and neoxanthin [9][10][11]. Certain plant proteins, such as carotenoid cleavage dioxygenases (CCDs) and violaxanthin de-epoxidase (VDE), significantly participate in the regulation of the carotenoid and degradation products content in plants [6,10,12]. Additionally, lipoxygenase (LOX) is an important enzyme that catalyzes the co-oxidation of β-carotene and plays a significant part in the deterioration of β-carotene levels [13,14], while peroxidase (POD) is involved in the cleavage of various carotenes, such as xanthophylls and apocarotenals, to flavor compounds [15]. Carotenoid degradation products are important volatile flavor components and precursors for plant growth regulators such as the phytohormone abscisic acid (ABA) and strigolactones in a range of plant species [10,12,16]. In addition, chlorophyll metabolism is an important biological phenomenon, and it has been estimated that about one billion tons of chlorophyll are destroyed on a global scale each year [17,18]. Chlorophyll compounds mainly include chlorophyll a and chlorophyll b in plants [17][18][19]. Proteins, such as chlorophyllide-a oxygenase (CAO) and chlorophyllase (Chlase), are involved in the chlorophyll biosynthetic pathway and the chlorophyll breakdown pathway [17,18,20]. Chlorophyll degradation is a highly controlled sequential process that converts the fluorescent chlorophyll molecules into non-fluorescent chlorophyll catabolites (NCCs), which are stored within the vacuole in a range of plant species [18,21,22]. Furthermore, chlorophyll serves as a precursor for important volatile flavor components such as phytol and neophytadiene [16]. Parameters of the CIEL*a*b color coordinate include lightness L* (positive white and negative black), and two chromatic components a* (positive red and negative green) and b* (positive yellow and negative blue) [2,11]. Color change is one of the most dramatic events occurring in plant postharvest organs during curing and storage [2,5,23]. Pigments are compounds that absorb subsets of the visible spectrum, transmitting and reflecting back only what they do not absorb, and causing the tissue to be perceived as the reflected colors [24]. Carotenoids are pigments that range in color from yellow through orange to red, resulting from their C 40 polyene backbone [6,14]. The green color changes to orange and red due to the breakdown of chlorophylls and the accumulation of the orange β-carotene and the red lycopene in plants [6,25]. The plants seem intensely yellow due to the accumulation of the xanthophylls, namely lutein, neoxanthin, and violaxanthin [11,25,26]. The color change is determined by a dynamic shift in pigment composition and their contents in plants, which is associated with the regulation of differentially expressed proteins (DEPs). Proteins involved in photosynthesis, glyoxylate metabolism, carbon and nitrogen metabolism, anthocyanin biosynthesis, protein processing, and redox homeostasis are crucial for color regulation in plants [8,[27][28][29]. Moreover, leaf color change is a complex programmed process that is closely related to pigment metabolism and is regulated by fine-tuned molecular mechanisms [8,27,30]. 
Tobacco is the most important non-food agricultural economic crop and serves as a model plant organism to study fundamental biological processes [31,32]. In tobacco, leaf senescence during postharvest processing is different from natural senescence, being rather an accelerated one [16,23]. It is worth emphasizing that energy metabolism, photosynthesis, jasmonic acid biosynthesis, cell rescue, and reactive oxygen species scavenging are crucial for leaf senescence, and induced leaf senescence may be involved in nutrient remobilization and the maintenance of cell viability [33][34][35][36]. Strikingly, carotenoid and chlorophyll metabolism associated with color change was one of the most important biological processes in tobacco leaves during curing and senescence [23,36,37]. Fresh tobacco leaves are harvested and processed into flue-cured tobacco raw material in a bulk barn. This curing process of tobacco leaves can be divided into the yellowing stage, the leaf-drying stage, and the stem-drying stage. The yellowing stage is the first key step associated with carotenoid and chlorophyll metabolic and color changes in tobacco leaves [16,23]. Thus, studying plastid pigment metabolic and color changes in postharvest tobacco leaves during curing will provide more information for enhancing the understanding of this biological process, improving crop quality and reducing losses. iTRAQ (isobaric tags for relative and absolute quantification), a high-throughput proteomic technology, is one platform for comparing changes in the abundance of specific proteins among different samples [8,29,38]. Although new advances have been made in our understanding of pigment metabolism and color change in plant organs [4][5][6][13][23], fewer studies have focused specifically on their molecular mechanisms in postharvest tobacco leaves during curing. In this study, iTRAQ-based proteomic analysis was employed to identify important regulators in pigment metabolism pathways and elucidate the molecular mechanism of pigment metabolism and color change in tobacco leaves during the yellowing stage (0 h, 48 h, 72 h). The results herein provide new insights into the molecular mechanisms involved in pigment metabolism and color change for the future study of postharvest physiological regulatory networks in plants. Color and Phenotypic Changes of Tobacco Leaves during Curing During the curing process, significant differences in the leaf color parameters L*, a* and b* were observed (Figure 1A). Strikingly, the L*, a*, and b* values gradually increased, which was consistent with the changes in the tobacco leaf phenotypes from green to yellow during 0-72 h (Figure 1B).
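The L*, a* and b* trajectories can be condensed into a single color-difference number per curing interval using the Euclidean distance in CIELAB space (the CIE76 ΔE*ab). A minimal sketch is given below; the L*a*b* triplets are placeholders, not the measured values of Figure 1A.

```python
"""Minimal sketch of summarising the leaf color shift with the CIE76
Delta-E*ab distance in CIELAB space. The L*, a*, b* triplets below are
placeholders, not the measurements reported in Figure 1A."""

import math

# curing stage -> (L*, a*, b*); placeholder readings
lab = {
    "0h":  (45.0, -15.0, 25.0),   # darker, green-shifted, weakly yellow
    "48h": (60.0,  -5.0, 40.0),
    "72h": (72.0,   5.0, 55.0),   # lighter, red-shifted, strongly yellow
}


def delta_e(c1, c2) -> float:
    """CIE76 color difference: sqrt(dL*^2 + da*^2 + db*^2)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(c1, c2)))


for start, end in [("0h", "48h"), ("48h", "72h"), ("0h", "72h")]:
    print(f"Delta-E*ab({start} -> {end}) = {delta_e(lab[start], lab[end]):.1f}")
```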
Ultrastructural Observations of Tobacco Leaves during Curing Ultrastructural observations indicated that the cells contained relatively intact chloroplasts, grana thylakoids, and starch granules at 0 h; however, at 48 h, the chloroplast membranes and grana thylakoid lamellae were severely disrupted (Figure 2). At 72 h, only a few of the chloroplast and grana thylakoid lamellae remained. These ultrastructural observations showed that the curing process accelerated the chloroplast structural breakdown and promoted the degradation of the pigments in tobacco leaves. Physiological Attributes of Tobacco Leaves during Curing The plastid pigment concentrations in tobacco leaves during curing were analyzed, as presented in Table 1.
Among several carotenoids, the highest levels were displayed by lutein followed by β-carotene, while the violaxanthin concentration was higher than that of neoxanthin during 0-72 h. However, the chlorophylls were found to be the most abundant plastid pigments in tobacco leaves at 0 h, followed by the carotenoids. Although the carotenoid and chlorophyll concentrations decreased significantly, the ratio between the carotenoids and chlorophylls significantly increased during curing. This difference may be explained by the observation that the chlorophyll a (94.05%) and chlorophyll b (87.53%) concentrations and the SPAD value (93.37%) decreased at a greater rate than the carotenoids, including β-carotene (74.35%), lutein (77.56%), violaxanthin (73.71%), and neoxanthin (79.94%), during curing. These results indicated that pigments, particularly chlorophylls, degrade extensively in tobacco leaves during curing. To confirm that proteins involved in pigment metabolism were modulated in tobacco leaves during 0-72 h, we selected physiological parameters that can be measured using established assays (Figure 3). The Chlase activities and MDA content significantly increased in tobacco leaves during curing. However, ascorbate peroxidase (APX) activities and ascorbic acid (ASA) content significantly decreased in tobacco leaves during 0-72 h. In addition, LOX activity was higher at 48 h than at 0 h and 72 h, and POD activity at 0 h was significantly higher than that at 72 h. These findings indicate that the changes in physiological parameters associated with pigment metabolism and color change are significant in tobacco leaves across the different curing stages. Pigment Degradation Products Analysis in Tobacco Leaves during Curing For chemometric analysis, the relative concentrations of the 82 volatile components were analyzed using a comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC×GC-TOF-MS) system (Supplementary Table S1), including 19 carotenoid metabolites and 1 chlorophyll catabolite (Supplementary Table S2). The total concentration of carotenoid and chlorophyll degradation products in tobacco leaves decreased during 0-48 h and 0-72 h and increased during 48-72 h. It is worth noting that the 6-methyl-5-hepten-2-ol, β-ionol, β-ionone, and solavetivone detected in the tobacco leaf samples during 0-72 h and the 3-oxo-α-ionol detected in the mature fresh leaves have not previously been reported as components of tobacco headspace volatiles. Strikingly, isophorone was found to be the most abundant carotenoid volatile metabolite in tobacco leaves, followed by geranylacetone and dihydroactinidiolide, during 0-72 h.
The levels of six carotenoid metabolites, including 6-methyl-5-hepten-2-ol, linalool, isophorone, megastigmatrienone A, megastigmatrienone B, and solavetivone, decreased during 0-72 h. Conversely, the levels of 3-oxo-α-ionol and 3-hydroxy-β-damascone increased during 0-72 h, whereas the remaining carotenoid and chlorophyll metabolites did not persistently increase or decrease during 0-72 h. These findings indicate that the postharvest tobacco leaves underwent a series of complex physiological and biochemical changes involving pigment metabolism during curing. Protein Profile Analysis of Tobacco Leaves Using iTRAQ In order to clarify the molecular mechanisms involved in carotenoid and chlorophyll metabolism and color change, iTRAQ-based data analysis, interpreted against the phenotypic, physiological, and chemical changes in tobacco leaves during 0-72 h, was considered credible. In total, 1,043,678 spectra were identified from the iTRAQ analysis using the leaf samples at different curing stages as the materials. MASCOT, a database search engine that identifies proteins from mass spectrometry data, generated a total of 372,574 spectra matched to in silico peptide spectra, 218,381 unique spectra, 30,498 peptides, 22,993 unique peptides, and 5931 proteins from the iTRAQ experiments Run1, Run2, and Run3 (Figure 4A and Supplementary Table S3). In order to obtain the relationship between the spectrum and the peptide segment, each mass spectrum was matched with the theoretical spectrum, with peptide segments used as the dimension for data processing and calculation (a peptide segment may correspond to more than one spectrum). Among the proteins identified in the three technical replicate experiments, 1296 proteins had 1 identified unique peptide, 3715 had 2, 2564 had 3, 516 had more than 11, and the remainder had 4-10 (Figure 4B). The peptide information confirmed that many peptides are shared among different proteins (the identified peptides were compared with protein databases). The relative molecular mass of the identified proteins was mainly distributed at 10~80 kDa, and the proportion (18.06%) of proteins with a relative molecular mass of 30~40 kDa was the highest (Supplementary Figure S1A). A total of 5488 proteins were identified with 0~10% sequence coverage, and only 1.18% of the proteins were identified with sequence coverage > 20% (Supplementary Figure S1B). The coefficient of variation (CV), defined as the ratio of the standard deviation (SD) to the mean (CV = SD/mean), was used to evaluate the reproducibility of protein quantification; the lower the CV, the better the reproducibility. The CV distribution (mean CV: 0.16) across the three replicates showed good reproducibility (Supplementary Figure S2). DEPs Identified and Functional Analysis The proteins were screened with a fold-change value > 1.5 or < 0.67 and a Q-value of < 0.05.
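To make the reproducibility check and screening rule above concrete, the following is a minimal Python sketch of how a per-protein CV (SD/mean across replicate ratios) and the fold-change/Q-value filter could be applied; the table layout, column names, and numbers are invented for illustration and are not taken from the study's data.

```python
import pandas as pd

# Toy protein-quantification table; columns and values are illustrative only.
df = pd.DataFrame({
    "protein": ["P1", "P2", "P3"],
    "rep1_ratio": [1.80, 0.95, 0.55],   # 48 h / 0 h abundance ratio, replicate 1
    "rep2_ratio": [1.65, 1.10, 0.60],   # replicate 2
    "rep3_ratio": [1.72, 1.02, 0.70],   # replicate 3
    "q_value":    [0.01, 0.40, 0.03],
})

reps = df[["rep1_ratio", "rep2_ratio", "rep3_ratio"]]

# Coefficient of variation across replicates: CV = SD / mean (lower = more reproducible).
df["cv"] = reps.std(axis=1, ddof=1) / reps.mean(axis=1)

# Screening rule quoted in the text: fold change > 1.5 or < 0.67 and Q-value < 0.05.
mean_fc = reps.mean(axis=1)
df["is_DEP"] = ((mean_fc > 1.5) | (mean_fc < 0.67)) & (df["q_value"] < 0.05)

print(df[["protein", "cv", "is_DEP"]])
```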
DEPs Involved in Carotenoid and Chlorophyll Metabolism At the post-transcriptional level, 31 DEPs involved in carotenoid and chlorophyll metabolism were identified in tobacco leaves during curing, and the details of these DEPs and the BLAST data are listed in Supplementary Table S4. iTRAQ analysis revealed that among five DEPs in the carotenoid biosynthetic pathway, two were up-regulated during 0-48 h and/or 0-72 h, and three were down-regulated during 0-48 h and/or 0-72 h (Figure 6A). Alternatively, 14 proteins involved in carotenoid degradation were differentially expressed in tobacco leaves during curing. These proteins included 2 LOX proteins and 12 POD proteins. The two LOX proteins were both up-regulated during 0-48 h and 0-72 h. Of the 12 POD proteins, 3 were up-regulated during 0-48 h and/or 0-72 h, and 9 were down-regulated during 0-48 h and/or 0-72 h and/or 48-72 h. The differences in the abundance of these proteins indicated the complex regulatory network involved in carotenoid metabolism in tobacco leaves during curing (Figure 7). A total of 12 DEPs were involved in chlorophyll metabolism in tobacco leaves during curing, including 11 DEPs in the chlorophyll biosynthetic pathway and 1 DEP in the chlorophyll breakdown pathway (Figure 6B). Ten DEPs in the chlorophyll biosynthetic pathway were significantly down-regulated during 0-48 h and/or 0-72 h and/or 48-72 h. In contrast, ferrochelatase isoform I was up-regulated during 0-48 h and 0-72 h in the chlorophyll biosynthetic pathway, and chlorophyllase-1-like isoform X2 (Chlase-1-X2) was significantly up-regulated during 0-72 h and 48-72 h in the chlorophyll breakdown pathway. The results indicated that these DEPs potentially play important roles in chlorophyll metabolism (Figure 7). Abbreviations (Figure 6): (A) GGPP, geranylgeranyl diphosphate; PSY, phytoene synthase; PDS, phytoene desaturase; ZDS, ζ-carotene desaturase; CRTISO, carotenoid isomerase; LCYB, lycopene β-cyclase; LCYE, lycopene ε-cyclase; BCH, β-carotene hydroxylase; NSY, neoxanthin synthase.
(B) Chlide a/b, chlorophyllide a/b; MCS, metal-chelating substance; CAO, chlorophyllide-a oxygenase; Pheide a, pheophorbide a; Chl a/b, chlorophyll a/b; 7HChl a, 7-hydroxy-chlorophyll a; NCCs, nonfluorescent chlorophyll catabolites. Validation of iTRAQ Data by qRT-PCR To provide accurate data on the molecular mechanisms related to pigment metabolism and color change in postharvest tobacco leaves during curing, the mRNA expression levels of eight key DEPs were detected by quantitative real-time polymerase chain reaction (qRT-PCR). The results showed that the qRT-PCR data of the eight genes aligned with the iTRAQ results (Figure 8). Discussion Proteomics analysis provides a broad perspective on the process of leaf color change, which prompts us not only to pay attention to the pigment metabolism pathway but also to further screen some key proteins related to pigment metabolism and color change in tobacco leaves during curing [27][28][29]. Quantitative proteome analysis revealed that hundreds of DEPs were identified in all leaf samples during curing. Although many DEPs might be associated with pigment metabolism and color change in tobacco leaves during curing, 19 DEPs related to carotenoid metabolism and 12 DEPs involved in chlorophyll metabolism were selected based on bioinformatics analysis. This analysis helped us to clearly identify DEPs associated with pigment metabolism and color change and the postharvest physiological regulatory networks in tobacco leaves during curing (Figure 7). Leaf Color Change is Determined by the Carotenoid and Chlorophyll Content Color change in plants is determined by the content of various plastid pigments, and plant organs become intensely yellow due to the accumulation of the xanthophylls, including lutein, neoxanthin, and violaxanthin [11,25,26]. The green color changes to orange due to the breakdown of chlorophylls and the accumulation of β-carotene in plants [6,25]. Regardless, the concentrations of both carotenoid and chlorophyll significantly decreased in tobacco leaves during curing. Chlorophyll was found to be the most abundant plastid pigment in tobacco leaves at 0 h, but the carotenoid/chlorophyll ratios of the different samples were all greater than or equal to 1.30 during 48-72 h. In addition, the ratios between xanthophylls and β-carotene of the different samples were all greater than or equal to 1.71 during 0-72 h. Thus, the tobacco leaves showed a green phenotype at 0 h and a yellow phenotype at 48 h and 72 h. Pigment metabolism and the relative pigment contents were responsible for the formation of the yellow phenotype, which is consistent with previous reports [11,30,39].
The data were expressed as the color values of lightness L*, greenness-redness a*, and yellowness b* [2,11,27]. In this study, the change in color was quantified as the increase in the values of L*, a*, and b*, which is associated with pigment degradation, the increase in the relative concentrations of the carotenoids, and the phenotypic change in tobacco leaves during curing. Effect of Cell Ultrastructure Damage on Pigment Metabolism and Leaf Color Change The metabolism of chlorophyll and carotenoid occurs in the chloroplast (a complex organelle with several distinct sub-organellar compartments that internally sort its proteins) and chromoplast membranes [8,40]. Compared with 0 h, chloroplasts and grana thylakoid lamellae appeared to be more severely damaged in tobacco leaves at 48 h and 72 h, especially at 72 h. Chloroplast structure and functions are plausibly linked to pigment metabolism and leaf color change [8,30]. Structural damage of the chloroplast might accelerate the degradation of the chlorophyll and carotenoid, alter the proportions of the pigment compositions, and promote the formation of yellow color in tobacco leaves during curing. Role of Physiological Parameters in Pigment Metabolism and Leaf Color Change Chlase, POD, and LOX are all important enzymes involved in pigment metabolism [14,15,19], and APX, ASA, and MDA are significant parameters from which to deduce the physiological state of plants [41,42]. The increased Chlase activities in tobacco leaves during curing might accelerate the degradation of chlorophyll [17,18]. In contrast, the increased LOX activities in tobacco leaves during 0-48 h might promote the degradation of carotenoid [13,14], but the decreased POD activities during 0-72 h might result in the delayed degradation of carotenoid in tobacco leaves during curing [15]. APX is an important enzyme for the detoxification of H2O2 in plants [43], and ASA is a key substrate for the detoxification of reactive oxygen entities [36]. The decreased APX activities and ASA content in tobacco leaves during 0-72 h might accelerate leaf senescence and promote the degradation of pigment. MDA is a marker for lipid peroxidation and a characteristic of senescence in plants, and the increased MDA content in tobacco leaves during curing might also accelerate leaf senescence and promote the degradation of pigment, which is consistent with the results of leaf senescence in tomato plants [42]. Following the progressive physiological stress and the catalysis of these enzymes, the pigment content decreased remarkably during curing, especially the chlorophyll content, and the leaves had acquired a yellow phenotype by 72 h. Role of DEPs in Carotenoid and Chlorophyll Metabolism and Color Change In this study, 31 DEPs involved in carotenoid and chlorophyll metabolism were identified in tobacco leaves during curing. Although these DEPs in the pigment metabolic pathway were mainly responsible for the changes in pigment content and leaf color, carotenoid cleavage and chlorophyll breakdown were the primary biological processes in postharvest plant leaves during curing [16,23,44]. In the present study, 5 DEPs involved in carotenoid biosynthesis and 14 DEPs related to carotenoid cleavage were identified in tobacco leaves during curing. In the carotenoid biosynthetic pathway, geranylgeranyl pyrophosphate synthase, chloroplastic-like (GGPPS), which catalyzes the conversion of (E,E)-farnesyl diphosphate (FPP) into geranylgeranyl diphosphate (GGPP), was down-regulated in tobacco leaves during curing.
In plants, FPP and GGPP are isoprenoid precursors necessary for carotenoid biosynthesis. The down-regulation of GGPPS in tobacco leaves during 0-48 h and 0-72 h might reduce the content of carotenoids, ultimately leading to the decreased content of their degradation products, such as linalool, solavetivone, 6-methyl-5-hepten-2-ol, 6-methyl-5-hepten-2-one, 6-methyl-3,5-heptadien-2-one, β-ionone, and isophorone. In addition, zeaxanthin epoxidase, chloroplastic-like (ZEP) was significantly up-regulated, and violaxanthin de-epoxidase (VDE) was significantly down-regulated, in tobacco leaves during 0-48 h and 0-72 h. VDE is a member of a group of proteins known as lipocalins that bind and transport small hydrophobic molecules [7]. Zeaxanthin is epoxidized by ZEP, finally yielding violaxanthin; however, this epoxidation is reversible through the action of VDE [11]. When the de-epoxidation back to zeaxanthin is limited by the down-regulated VDE, violaxanthin might instead be catalyzed by neoxanthin synthase (NSY) to yield neoxanthin. Neoxanthin could then be converted into 3-hydroxy-β-damascone and β-damascenone under the catalysis of carotenoid cleavage enzymes, resulting in the increase in their concentrations in tobacco leaves during 0-72 h. Furthermore, the violaxanthin concentration was higher than that of neoxanthin during 0-72 h, which might be closely associated with the up-regulated ZEP and down-regulated VDE. In the carotenoid biosynthetic pathway, an unnamed protein product was significantly down-regulated in tobacco leaves during 0-72 h, while a scopoletin glucosyltransferase-like protein was significantly up-regulated during 0-48 h. Both of them were involved in abscisic acid (ABA) biosynthesis in tobacco leaves during curing. The down-regulated unnamed protein product might not favor the biosynthetic conversion of xanthoxin into ABA, whereas the up-regulated scopoletin glucosyltransferase-like protein might be conducive to ABA biosynthesis and finally lead to the decreased content of carotenes. Alternatively, it is worth emphasizing that carotenoid cleavage dioxygenases (CCDs) are known to be important for cleaving carotenoid compounds and forming important flavor and fragrance volatiles or their apocarotenoids [14,39,45]. However, none of them showed significant up- or down-regulation in the different comparisons. Thereby, we speculate that CCDs might not be crucial for carotenoid metabolism in tobacco leaves during 0-72 h. Moreover, LOX and POD are important enzymes involved in the cleavage of carotenoids [13][14][15]. The two up-regulated LOX proteins might accelerate the degradation of carotenoids and keep them at low levels in tobacco leaves during curing. Three POD proteins were up-regulated, but the other nine POD-related proteins were down-regulated in tobacco leaves during curing. The up-regulated POD proteins might accelerate the degradation of carotenoids, but the down-regulated POD proteins might not be conducive to carotenoid cleavage and the formation of important flavor volatiles. In addition, the 19 carotenoid metabolites showed 6 changing trends, with increased and/or decreased relative concentrations at different curing stages. These results suggested that carotenoid metabolism was involved in a complex regulatory network. In contrast, a total of 12 DEPs were involved in chlorophyll metabolism in tobacco leaves during curing, including 1 up-regulated and 10 down-regulated DEPs in the chlorophyll biosynthetic pathway and 1 up-regulated DEP in the chlorophyll breakdown pathway.
The down-regulated delta-aminolevulinic acid dehydratase, porphobilinogen deaminase, chloroplastic-like, protoporphyrinogen oxidase, chloroplastic, magnesium-chelatase subunit ChlI, chloroplastic isoform X1, magnesium protoporphyrin IX methyltransferase, chloroplastic, magnesium-protoporphyrin IX monomethyl ester [oxidative] cyclase, chloroplastic, uncharacterized protein ycf39 isoform X2, protein TIC 62, chloroplastic isoform X3, and geranylgeranyl reductase in the chlorophyll biosynthetic pathway inhibited chlorophyll biosynthesis and indirectly reduced the chlorophyll content in tobacco leaves during curing, which is consistent with previous reports related to color regulation in plants [8,27,29]. Additionally, two DEPs were identified in the chlorophyll biosynthesis shunt related to protoheme. The ferrochelatase isoform I protein was significantly up-regulated during 0-72 h, while the ferrochelatase-2, chloroplastic isoform X1 protein was significantly down-regulated during 0-48 h and 0-72 h, which might indirectly regulate chlorophyll metabolism via an effect targeted on coproporphyrin III and protoporphyrin IX. The results suggested that these DEPs were negative regulators of chlorophyll biosynthesis in tobacco leaves during curing. Alternatively, it is worth emphasizing that Chlase-1-X2 catalyzes the conversion of chlorophyll into chlorophyllide and phytol, which is thought to be a key rate-limiting step in the chlorophyll breakdown pathway [19,46]. Subsequently, neophytadiene is formed via the dehydration of phytol. Chlase-1-X2 was significantly up-regulated during 0-72 h and 48-72 h, which accelerated chlorophyll degradation in tobacco leaves during curing. However, Chlase-1-X2 was down-regulated (mean ratio = 0.83) in tobacco leaves during 0-48 h. The relative concentrations of neophytadiene decreased markedly during 0-48 h and increased significantly during 48-72 h, which indicated a positive correlation between neophytadiene and the Chlase-1-X2 protein. Thus, we inferred that Chlase-1-X2 played a leading role in chlorophyll breakdown in tobacco leaves during curing. Plant Material and Sampling Tobacco cultivar "Bi'na1" was obtained from the Guizhou Academy of Tobacco Science, China (Supplementary Figure S7). It is worth noting that an intelligent curing system (DDMB06YS) was adopted, which can automatically control the dry bulb and wet bulb temperatures during curing. For each experiment, tobacco leaves were collected at 3 phases of flue-curing, i.e., 0 h, 48 h, and 72 h during the yellowing stage. Leaf samples at 0 h were collected from labeled plants in the fields before flue-curing. At 48 h (from the beginning of the curing process), approximately 80% of the leaf area had turned yellow, in the middle of the yellowing stage (dry bulb temperature 38 °C and wet bulb temperature 35~36 °C). At 72 h, the tobacco leaves had yellow laminae and green midribs, indicating the end of the yellowing stage (dry bulb temperature 42 °C and wet bulb temperature 33~34 °C). The samples were divided into two duplicates at each stage during curing; one was used directly to determine the color parameters, cell ultrastructure, and pigment content, and the other was frozen in liquid nitrogen immediately for further measurements of the carotenoid composition and degradation products, physiological parameters, and the DEPs involved in pigment metabolism. Three independent curing experiments were performed.
Color Analysis The leaf color was determined at 0 h, 48 h, and 72 h using a Minolta Chroma Meter CR-10 (Konica Minolta Sensing, Inc., Japan), previously calibrated with a white standard tile, by taking 6 measurements per leaf in the equatorial region. The data were expressed as the color values of lightness (L*, the amount of light reflected), redness (a*, positive red and negative green), and yellowness (b*, positive yellow and negative blue). Thirty leaves were used for these determinations at each yellowing stage during curing. Ultrastructural Observation Sample sections of 1 mm² (2 cm distance from the midrib) were excised from the middle portion of the labeled leaves. Ultrastructural changes were studied by observing ultrathin sections of leaf palisade tissue at 0 h, 48 h, and 72 h using a Hitachi H-600 electron microscope (Kyoto, Japan) [47]. Physiological Measurements Approximately 0.1 g of fresh tissue was immersed in 95% ethanol for 24 h in the absence of light. The absorbance of the extracts was measured using a UV-1800 ultraviolet/visible spectrophotometer (Shimadzu, Kyoto, Japan) at wavelengths of 470, 649, and 665 nm. Chlorophyll a, chlorophyll b, and total carotenoid concentrations were calculated as previously described [48]. The frozen leaf samples were ground into a fine powder using liquid nitrogen with a mortar and pestle, then freeze-dried. The β-carotene, lutein, neoxanthin, and violaxanthin were quantified via HPLC, as described previously [49]. Moreover, the chlorophyll content, as expressed by the SPAD value, was measured using a Chlorophyll Meter (Model SPAD-502, Tokyo, Japan). Thirty leaves were measured by taking 6 measurements per leaf in the equatorial region. The leaf samples were analyzed for Chlase, LOX, POD, and APX activities, along with ASA and malondialdehyde (MDA) contents. Plant enzyme-linked immunosorbent assay kits were purchased from the Shanghai Jianglai Bio-Technology Co., Ltd. (Shanghai, China) to measure Chlase and LOX activity, and from the Nanjing Jiancheng Bioengineering Institute (Nanjing, China) to determine the APX and POD activities, along with the ASA and MDA contents. Pigment Degradation Products Analysis Carotenoid and chlorophyll degradation products were determined using qualitative and quantitative methods in postharvest tobacco leaves during curing. Freeze-dried tobacco leaf samples were pre-treated using headspace solid-phase micro-extraction (HS-SPME) and analyzed using GC×GC-TOF-MS [50]. Protein Extraction Total proteins were extracted from leaf tissue at 0 h, 48 h, and 72 h during curing, as previously described [51]. The samples were transferred to a 2 mL centrifuge tube, and 5% cross-linked polyvinylpyrrolidone (PVPP) powder and homogenization lysis buffer (7 M urea, 2 M thiourea, 4% 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate [CHAPS], 40 mM Tris-HCl, pH 8.5) were added. A grinder (power 60 Hz, time 2 min) was used to break up the tissues, then 2× volume of Tris-saturated phenol was added and the mixture shaken for 15 min. After centrifugation (25,000× g for 15 min at 4 °C), the upper phenol phase was transferred to a 10 mL centrifuge tube, and 5× volume of 0.1 M cold ammonium acetate/methanol and 10 mM dithiothreitol (DTT; final concentration) were added; the tube was then placed at −20 °C for 2 h. These steps were repeated twice. Then, 1 mL of cold acetone was added and the tube again placed at −20 °C for 30 min. The supernatant was discarded after centrifugation, and this step was repeated once.
After air-drying the precipitate, 1× Cocktail (with SDS L3) and ethylenediaminetetraacetic acid (EDTA) were added, and 10 mM DTT was added after 5 min on ice. The protein was solubilized using a grinder and centrifuged; the supernatant was then collected and incubated in a water bath for 1 h at 56 °C after adding 10 mM DTT. Afterward, 55 mM iodoacetamide (IAM) was added and the sample placed in a dark room for 45 min. Then 1 mL of cold acetone was added, the sample placed at −20 °C for 2 h, and then centrifuged. The steps of protein solubilization and centrifugation were repeated. The protein concentration was determined by the Bradford assay using bovine serum albumin (BSA) as a standard [52]. The samples were kept at −80 °C for further analysis. LC-ESI-MS/MS Analysis Each fraction was resuspended in buffer A (2% acetonitrile, 0.1% formic acid) and centrifuged at 20,000× g for 10 min. The samples were loaded at 8 µL min−1 for 4 min, and the 44 min gradient was then run at 300 nL min−1, starting from 2% to 35% B (98% acetonitrile, 0.1% formic acid), followed by a 2 min linear gradient to 80%, maintenance at 80% B for 4 min, and finally a return to 5% in 1 min. The peptides were subjected to nanoelectrospray ionization followed by tandem mass spectrometry (MS/MS) in a Q EXACTIVE (Thermo Fisher Scientific, San Jose, CA, USA) coupled online to the HPLC for data-dependent acquisition (DDA) mode detection. The main parameters were set as follows: the ion source voltage was 1.6 kV; the MS1 scan range was 350~1600 m/z; the MS1 resolution was 70,000; the MS2 starting m/z was fixed at 100; the MS2 resolution was 17,500. The screening conditions for the secondary fragmentation were: charge 2+ to 7+, and the top 20 parent ions with a peak intensity exceeding 10,000. The ion fragmentation mode was high-energy collision dissociation (HCD), and the fragment ions were detected in the Orbitrap. The dynamic exclusion time was set to 15 s. Automatic gain control (AGC) was set to: MS1 3E6, MS2 1E5. iTRAQ Protein Identification and Quantification The MASCOT search engine (Matrix Science, London, UK; version 2.3.02) was used to simultaneously identify and quantify proteins against the Nicotiana tabacum database (http://www.ncbi.nlm.nih.gov/protein?term=txid4085[Organism]; 85,194 entries). For protein identification, a mass tolerance of 20 ppm was permitted for intact peptide masses, and a mass tolerance of 0.05 Da was permitted for fragment ions; there was an allowance for one missed cleavage in the trypsin digests. Oxidation (M) and iTRAQ8plex (Y) were set as variable modifications, and carbamidomethyl (C), iTRAQ8plex (N-term), and iTRAQ8plex (K) were set as fixed modifications. All unique peptides (at least one unique spectrum) were permitted for protein quantitation. Automated software called IQuant [53] was employed for quantitatively analyzing the labeled peptides with isobaric tags. It integrates Mascot Percolator [54], a well-performing machine learning method for rescoring database search results, to provide reliable significance measures. To assess the confidence of peptides, the peptide spectral matches (PSMs) were pre-filtered at a 1% PSM-level false discovery rate (FDR). In order to control the false-positive rate at the protein level, a protein FDR of 1%, based on the "picked" protein FDR strategy [55], was also estimated after protein inference (protein-level FDR ≤ 0.01).
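The PSM-level FDR control mentioned above was performed with Mascot Percolator inside IQuant; purely as an illustration of the underlying target-decoy idea, here is a minimal Python sketch that assigns q-values to PSMs and keeps those at or below 1%. The scores and the simple decoys/targets estimator are assumptions for illustration, not the actual Percolator or picked-protein-FDR algorithms.

```python
import numpy as np

def psm_qvalues(scores, is_decoy):
    """Estimate q-values for PSMs by a simple target-decoy approach.

    scores: higher = better match; is_decoy: True for decoy-database hits.
    FDR at a score threshold is approximated as (#decoys / #targets) above it.
    """
    order = np.argsort(scores)[::-1]            # best scores first
    decoy = np.asarray(is_decoy)[order]
    n_decoy = np.cumsum(decoy)
    n_target = np.cumsum(~decoy)
    fdr = n_decoy / np.maximum(n_target, 1)
    # q-value: minimum FDR at which this PSM would still be accepted.
    qvals = np.minimum.accumulate(fdr[::-1])[::-1]
    out = np.empty_like(qvals)
    out[order] = qvals
    return out

scores = np.array([95.0, 80.0, 62.0, 61.0, 40.0, 35.0])      # invented PSM scores
is_decoy = np.array([False, False, False, True, False, True])
q = psm_qvalues(scores, is_decoy)
accepted = q <= 0.01                             # 1% PSM-level FDR, as in the text
print(q, accepted)
```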
DEPs were required to satisfy the following conditions for identification: confident protein identification involving at least one unique peptide, a change of greater than 1.5-fold or less than 0.67-fold, and a Q-value of less than 0.05 in at least 2 replicate experiments. The quantitative protein ratios were then weighted and normalized by the median ratio in MASCOT. Bioinformatics Analysis The GO database (http://en.wikipedia.org/wiki/Gene_Ontology) represents an international standardization of gene functional classification systems. The KOGs database (https://www.ncbi.nlm.nih.gov/pubmed/14759257) was used for orthologous protein classification. The pathways were used as queries to search the KEGG pathway database (http://www.genome.jp/kegg/pathway.html). Heatmaps/hierarchical clustering of DEPs were produced with the pheatmap package in R. The mass spectrometry proteomics data have been deposited in the iProX data repository (National Center for Protein Sciences, Beijing, China) with the dataset identifier IPX0001410001 (https://www.iprox.org/). RNA Extraction and qRT-PCR Analysis Total RNA was extracted from tobacco leaf samples with TRIzol reagent (Invitrogen), and cDNA was reverse transcribed from 1 µg of total RNA using the PrimeScript™ RT Reagent Kit (TaKaRa), according to the manufacturer's instructions. qRT-PCR was performed using the iQ™5 real-time PCR detection system (Bio-Rad, USA) with the following conditions: 95 °C for 15 s, followed by 40 cycles of 95 °C for 15 s, 60 °C for 30 s, and 72 °C for 30 s. The tobacco β-actin gene was used as an endogenous control. The transcript levels of genes were calculated according to the 2^−ΔΔCt method [56]. Experiments were performed in triplicate for each treatment. Primer sequences are listed in Supplementary Table S5. Data Analysis Data were analyzed statistically using Duncan's Multiple Range Test with SPSS version 16.0 (SPSS, Chicago, IL, USA). All the photographs and figures were processed and analyzed using Adobe Illustrator CS5 (Adobe Systems Inc., San Francisco, CA, USA) or Origin 8.0 software (OriginLab Corp., Northampton, MA, USA). Conclusions The chlorophyll content decreased more markedly than the carotenoid content in tobacco leaves during curing, which was not only associated with the complex regulation of DEPs in carotenoid metabolism, but also correlated with DEPs playing a negative role in chlorophyll biosynthesis and a positive role in chlorophyll breakdown. The total concentration of carotenoid and chlorophyll degradation products in tobacco leaves decreased during 0-48 h and 0-72 h and increased during 48-72 h, which was the result of the combined action of DEPs in the pigment metabolic pathway, especially in the breakdown pathway. These DEPs delayed the degradation of xanthophylls and accelerated the breakdown of chlorophylls, promoting the formation of yellow color during 0-72 h. In particular, the up-regulation of Chlase-1-X2 was the key protein regulatory mechanism responsible for chlorophyll metabolism and color change. In the future, we will attempt to carry out further research to elucidate the regulatory factors (e.g., environmental and genetic factors) that regulate pigment metabolic flow and color change in postharvest tobacco leaves during curing. All these findings provide useful molecular information for a better understanding of the complicated postharvest physiological regulatory networks and the molecular mechanisms involved in pigment metabolism and color change in plants.
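As a supplementary note on the qRT-PCR quantification described in the Methods above, the 2^−ΔΔCt calculation [56] can be sketched as follows; the Ct values are hypothetical, and the reference gene is the β-actin control mentioned in the text.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the 2^-ΔΔCt method.

    ct_target / ct_ref: Ct of the gene of interest and of the reference gene
    (β-actin) in the treated sample; *_ctrl: the same values for the calibrator
    sample (e.g. 0 h leaves). Returns the fold change relative to the calibrator.
    """
    delta_ct = ct_target - ct_ref                  # normalise to the reference gene
    delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    delta_delta_ct = delta_ct - delta_ct_ctrl
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values for one gene at 48 h versus 0 h.
print(relative_expression(ct_target=24.1, ct_ref=18.0,
                          ct_target_ctrl=26.5, ct_ref_ctrl=18.2))   # ≈ 4.6-fold up
```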
Conflicts of Interest: The authors declare no conflict of interest.
9,821
2020-03-31T00:00:00.000
[ "Environmental Science", "Biology" ]
Histopathology of Aeromonas caviae Infection in Challenged Nile Tilapia Oreochromis niloticus (Linnaeus, 1758) Aeromonas caviae is one of the predominant motile aeromonads in Nile tilapia, Oreochromis niloticus. In this study, the histopathological alterations in the kidney, liver and pancreas of O. niloticus juveniles experimentally challenged with α-haemolytic A. caviae are described. The Nile tilapia experienced 60% mortalities at a challenge dose of 6×10^8 cells/fish. Post-challenge, well-defined histopathological changes were observed, with nephritis and the loss of structural integrity of the kidney tissues. The liver was dispersed, necrotized and had fatty changes in the hepatic parenchyma. Inflammation of the pancreas and pancreatic acinar cells, as well as disintegration of intrahepatic exocrine pancreatic tissues, were also noted. The results, thus, demonstrated that A. caviae can cause severe damage in the kidney, liver and pancreas of O. niloticus, similar to that caused by other known fish bacterial pathogens. Background Disease is one of the major problems that affect tilapia production and the livelihood of the farmers. The bacteria that cause diseases and mortalities in tilapia are Flavobacterium columnare, Edwardsiella tarda, Aeromonas spp., Vibrio spp., Francisella spp., Streptococcus agalactiae, etc (Zamri-Saad et al., 2014;Huicab-Pech et al., 2016). The genus Aeromonas includes ubiquitous bacteria found in aquatic habitats and at least 31 species have been described (Chen et al., 2016). Aeromonas spp. are considered to be opportunistic pathogens and capable of producing disease only in weakened populations of fish or as secondary invaders in fish suffering from other diseases (Harikrishnan and Balasundaram, 2005;Austin and Austin, 2012). Aeromonas spp. are divided into two principal subgroups: the non-motile and psychrophilic species (A. salmonicida) and the motile and mesophilic species, namely A. hydrophila, A. sobria, A. veronii, A. caviae and others (Austin and Austin, 2012). Several species of the genus Aeromonas frequently cause problems in both feral and cultured fish and are responsible for heavy economic losses due to high mortality (El-Sayed, 2006;Martins et al., 2008;Austin and Austin, 2012). The Nile tilapia O. niloticus is reportedly sensitive to streptococcosis and also to motile Aeromonas septicemia (MAS). Diseases associated with streptococcosis and MAS are, however, most severe among fish that are cultured under intensive conditions (El-Sayed, 2006;Martins et al., 2008;Zamri-Saad et al., 2014). In a study by Martins et al. (2008), A. caviae was associated with mortalities of Nile tilapia held in cages, wherein they isolated large numbers of A. caviae from the liver and kidney. Saleh et al. (2017) reported A.
caviae as one of the predominant species of motile aeromonads in Nile tilapia. As Nile tilapia culture is fast picking up, we started investigating the diseases of cultured tilapia in West Bengal, India. In our earlier studies, streptococcal (Adikesavalu et al., 2017) and A. hydrophila (Julinta et al., 2017) infections in Nile tilapia were reported. In this study, we report the histopathological alterations in the kidney, liver and pancreas of O. niloticus juveniles caused by experimental A. caviae infection. district, West Bengal, India in oxygen-filled polythene bags to the laboratory. The fish were acclimatized for an hour, followed by disinfection with 5 ppm potassium permanganate for 10 min, and then stocked in 500 L capacity fibreglass-reinforced plastic tanks at 100 fish/tank containing 400 L of clean bore-well water. The fish were acclimatized for about two weeks and fed ad libitum with commercial floating pellet feed containing 30% protein. For the experimental challenge, four glass aquaria of size L60 × H30 × W30 cm were selected (two each for treatment and control groups), disinfected, cleaned, dried for a week and filled with clean bore-well water. Prior to experimentation, two apparently healthy tilapia juveniles were euthanized and dissected aseptically, and inocula from the kidneys were streaked onto Rimler-Shotts agar and tryptic soya agar to check whether or not the fish were infected, as per Austin and Austin (2012). Ten healthy tilapia each were released into the experimental tanks and acclimatized for about 3 days. All the tanks were covered with nylon netting for adequate protection. The α-haemolytic bacterium Aeromonas caviae-T1K2 used in this study was isolated from diseased O. niloticus with MAS and preserved as a glycerol stock at -20°C in the Department of Aquatic Animal Health, West Bengal University of Animal and Fishery Sciences, Kolkata, India. The glycerol stock was revived in tryptic soya broth and the cell suspension of A. caviae was prepared as described in Adikesavalu et al. (2015) and used immediately. Our preliminary studies with A. caviae-T1K2 determined the LD50 value in Nile tilapia juveniles as 6.76×10^8 cells/fish (data not shown). Aliquots (0.1 mL) of the undiluted A. caviae cell suspension (6×10^9 cells/mL) were injected intramuscularly, i.e., on the dorsal side of the body at a 45° angle at the base of the dorsal fin, in such a way as to deliver 6×10^8 cells/fish. The control fish received sterile saline, 0.1 mL each. The challenged fish were maintained in their respective tanks for observations on mortality, external signs of infection and behavioural changes. Histopathology The organs such as the kidney and liver of the freshly dead O. niloticus on day 4 post-injection were fixed in Bouin's solution for 24 h. The fixed organs were processed, embedded in paraffin wax and thin (5 μm) sections prepared. The sections were then stained with haematoxylin and eosin for the detection of histopathological changes as per Roberts (2012). Results Aeromonas caviae-challenged O. niloticus exhibited sluggishness and abnormal behaviour such as wandering around the corners, resting at the bottom and vertical swimming. The kidney tissues had glomerulopathy with dilated Bowman's space, nephropathy with the loss of tubular epithelial cells, obliterated as well as inflamed nephritic tubules, melanomacrophage aggregates, necrosis, widened lumina and thickening of the lumen lining (Figures 1a-c).
The liver demonstrated melanomacrophage aggregates, infiltration of haemocytes, loss of the normal architecture of the hepatic tissue and fatty changes in the hepatic parenchyma, while the pancreas showed degradation and/or inflammation of the pancreas and pancreatic acinar cells, and vacuolation in the pancreatic tissue (Figure 2a-c; Figure 3a-c). Discussion The Aeromonas caviae strain used in this study was isolated from the kidney of septicemic O. niloticus along with other motile aeromonads such as A. hydrophila, A. veronii, A. schubertii and A. bestiarum (data not shown). The challenged tilapia showed haemorrhages at the site of injection and experienced 60% mortalities. The main visceral organs, viz., kidney, liver and pancreas, were affected in the injected fish, thus revealing that A. caviae was able to infect many visceral organs, similar to an Aeromonas caviae-like bacterium (Thomas et al., 2013) and A. hydrophila (Azad et al., 2001). Systemic infections are generally characterized by diffused necrosis in several internal organs; primarily the liver and kidney are the target organs of an acute septicemia. When attacked by bacterial toxins, these organs undergo acute haemorrhage and necrosis and lose their structural integrity (Huizinga et al., 1979). In control fish, the normal structure and systematic arrangement of kidney tissues with well-defined glomeruli were observed. In challenged tilapia, by contrast, the structural integrity of the kidney tissues was lost and well-defined histopathological changes were observed at 4 dpi, which indicated disease progression with extensive changes in the tissues of this vital organ. The inflammation of the nephritic tubules of the challenged tilapia exemplified nephritis. The glomerulopathy and dilation of Bowman's space are indications of defective glomerular filtration of blood, which, in turn, hampers the removal of excess wastes and fluids from the kidney. This clearly justified the ability of A. caviae to cause systemic infection, as it contains many putative virulence genes, including those encoding a type 2 secretion system, an RTX toxin, and polar flagella (Sudheesh et al., 2012). The findings of this study are reasonably similar to those observed by Julinta et al. (2017) in O. niloticus intramuscularly challenged with A. hydrophila, a strain isolated along with the A. caviae strain used here. On the other hand, in A. hydrophila-challenged O. mossambicus, Azad et al. (2001) noted aggregation of melanomacrophage centers (MMC) in the pronephros, necrosis of the cells in the renal interstitium, tubular necrosis and glomerular degeneration, oedematous degeneration of the tubules, depletion of cells in the tubular interstitium, and occlusion of the opisthonephric collecting duct with MMC. The liver of the control fish showed a normal structure and systematic arrangement of hepatocytes. Pancreatic tissue inside the liver is not common in all kinds of fish, but where it occurs, the organ is called the hepatopancreas (El-Bakary and El-Gammal, 2010). The observations of this study revealed the presence of pancreatic tissue inside the liver of O. niloticus.
In the liver and pancreas of challenged tilapia, dispersed and necrotized tissue, infiltration of haemocytes, loss of the normal architecture of the hepatic tissue, fatty changes in the hepatic parenchyma, inflammation of the pancreas as well as pancreatic acinar cells, and disintegration of intrahepatic exocrine pancreatic tissues were commonly observed, which corroborates the observations of several earlier studies conducted on different fish species with Aeromonas infection (Azad et al., 2001;Ghosh and Homechaudhuri, 2012;Al-Yahya et al., 2018). Azad et al. (2001) documented vacuolation, congestion of hepatic sinuses with blood cells and internal haemorrhages, and pyknotic necrosis of hepatocytes in the liver of A. hydrophila-challenged O. mossambicus; while in the pancreas, they observed acinar cell degradation at 3-5 dpi and mild necrosis in the pancreatic acini. On the other hand, Al-Yahya et al. (2018) noted massive haemocyte aggregation, pyknotic nuclei in the hepatopancreas and perivascular cuffing of hepatopancreatic haemolymph vessels in A. hydrophila-infected blue tilapia, O. aureus. These changes may lead to a disorder of lipid metabolism in the liver tissues, i.e., lipidosis, possibly associated with toxins and extracellular products such as hemolysin, protease and elastase produced by aeromonads (Yardimci and Aydin, 2011). In contrast, the study by Islam et al. (2008) revealed the development of internal tissue abscesses characterized by focal necrosis and haemorrhage. According to them, the distribution of bacterial cells all over the hepatic tissue caused massive diffused necrosis, represented by vacuolation and atrophy, in the liver of fish challenged with Aeromonas. The infiltration of haemocytes in the hepatic tissue is a measure of cellular response, which indicated the ability of Nile tilapia to respond to the A. caviae infection. The results of the present study, thus, demonstrated that A. caviae can cause serious pathology in the kidney, liver and pancreas of O. niloticus, similar to that of other known fish bacterial pathogens such as A. hydrophila (Azad et al., 2001;Julinta et al., 2017;Al-Yahya et al., 2018) or S. agalactiae (Adikesavalu et al., 2017). Since A. caviae caused mortalities only at a challenge dose of 6×10^8 cells/fish, it is highly apparent that risk factors such as temperature fluctuations, poor water quality, accumulation of organics, crowding, etc, together with other virulent motile aeromonads, may affect the physiological functioning of tilapia and increase their susceptibility to pathogenic agents. Therefore, the search for timely corrective measures should first address the identification of the risk factor(s) that predispose Nile tilapia to Aeromonas outbreaks, and then mitigate them responsibly.
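As a numerical aside on the challenge protocol, the sketch below first checks the inoculum arithmetic reported above (0.1 mL of a 6×10^9 cells/mL suspension per fish) and then illustrates one simple way an LD50 could be interpolated from dose-mortality data. The dose-mortality numbers are invented; the authors' actual LD50 (6.76×10^8 cells/fish) came from preliminary trials whose raw data are not shown.

```python
import numpy as np

# Inoculum arithmetic: 0.1 mL of a 6×10^9 cells/mL suspension per fish.
dose_per_fish = 0.1 * 6e9
print(f"{dose_per_fish:.1e} cells/fish")          # 6.0e+08

def ld50_log_interpolation(doses, mortality):
    """Rough LD50 estimate by linear interpolation of mortality on log10(dose).

    doses: challenge doses (cells/fish); mortality: observed mortality fractions,
    assumed to increase with dose. This is only an illustrative sketch, not the
    method used by the authors.
    """
    logd = np.log10(doses)
    return 10 ** np.interp(0.5, mortality, logd)

# Hypothetical dose-mortality data bracketing 50% mortality.
doses = np.array([1e8, 5e8, 1e9, 5e9])
mortality = np.array([0.1, 0.4, 0.7, 1.0])
print(f"LD50 ~ {ld50_log_interpolation(doses, mortality):.2e} cells/fish")
```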
2,741.2
2018-08-31T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Interferon‐γ and IL‐5 associated cell‐mediated immune responses to HPV16 E2 and E6 distinguish between persistent oral HPV16 infections and noninfected mucosa Abstract Objectives The natural history of human papillomavirus (HPV) infection in the head and neck region is poorly understood, and the impact of these infections on collective HPV‐specific immunity is not known. Materials and methods In this study, we have performed a systematic analysis of HPV16‐specific cell‐mediated immunity (CMI) in 21 women with known oral and genital HPV DNA status and HPV serology (Ab) based on 6‐year follow‐up data. These women, a subgroup of the Finnish Family HPV Study, were recalled for blood sampling to be tested for their CMI responses to HPV16 E2, E6, and E7 peptides. Results The results showed that HPV16 E2‐specific lymphocyte proliferation was more prevalent in women who tested HPV16 DNA negative in oral mucosa and were either HPV16 seropositive or negative than in HPV16 DNA+/Ab+ women (p = 0.046 and p = 0.035). In addition, the HPV16 DNA−/Ab− women most often displayed E6‐specific proliferation (p = 0.020). Proportional cytokine profiles indicated that oral HPV16‐negative women were characterized by prominent IFN‐γ and IL‐5 secretion not found in women with persisting oral HPV16 (p = 0.014 and p = 0.040, respectively). Conclusions Our results indicate that the naturally arising immune response induced by oral HPV infections displays a mixed Th1/Th2/Th17 cytokine profile, while women with persisting oral HPV16 might have an impaired HPV16‐specific CMI, shifted partly toward a Th2 profile, similarly as seen earlier among patients with high‐grade genital HPV lesions. Thus, the lack of HPV16 E2 and E6 specific T memory cells and Th2 cytokines might also predispose women to persistent oral HPV16 infection, which might be related to the risk of cancer. | INTRODUCTION Human papillomavirus (HPV) infection with high-risk genotypes is the main etiological factor of cervical cancer (CC), being present in over 90% of the cases. HPV can also infect the head and neck region, which includes the oral cavity, oropharynx, nasopharynx, hypopharynx, and larynx. HPV is known to be associated with (i) totally benign lesions, (ii) potentially malignant lesions, and (iii) a subgroup of squamous cell carcinomas (HNSCC) (Rautava & Syrjänen, 2011;Syrjänen, 2003;Syrjänen et al., 2007). According to a recent meta-analysis, approximately 22%, 50%, and 21% of oral, oropharyngeal, and laryngeal cancers are associated with HPV, mostly with the HPV16 genotype (Ndiaye et al., 2014). Studies on the natural history of HPV in the head and neck region are scarce (Kero et al., 2012;Kero et al., 2014;Louvanto et al., 2010;Pierce Campbell et al., 2015;Rautava et al., 2012). Even less is known on the role of HPV-specific immunity in oral and oropharyngeal HPV infections. In our previous study on the males in the Finnish Family HPV cohort, oral HPV infections were associated with HPV type-specific humoral immune responses, but this association was not found in women (Paaso et al., 2011). In our earlier work on genital infections, while both the women with cervical intraepithelial neoplasia (CIN) and the genitally always negative control women exhibited HPV16-specific CMI, the two groups could be distinguished on the basis of their IL-17A secretion after HPV16 E6 stimulation (Paaso et al., 2015). There are hardly any studies on CMI in the context of oral HPV infections. However, one study showed that local HPV-specific T cells were more often present or less suppressed in HPV-induced HNSCCs than in CCs (Heusinkveld et al., 2012).
The aim of the present study was to assess the association between oral HPV16 infection and immunity against HPV16. We analyzed women (the cases) with oral HPV16 infection who were HPV16 L1 seropositive or -negative during the 6-year FU. The control group consisted of women who remained constantly HPV16 DNA-negative in their oral brush samples during the FU, being either HPV16-seropositive or -negative. These four subgroups (DNA+/Ab+, DNA+/Ab−, DNA−/Ab+, DNA−/Ab−) were analyzed to disclose any differences in their HPV-specific CMI. | Women The women in the present study represent a subgroup of the Finnish Family HPV Study. Enrollment has been described previously (Rintala et al., 2005;Rintala et al., 2006). In addition, 131 male spouses participated in the study (Kero et al., 2012). For the present study, four subgroups of women (n = 21) were individually invited based on the known FU data on their oral HPV DNA and HPV serological status as follows: group 1 (n = 4), oral HPV16 DNA+/Ab+, that is, women with persisting HPV16 DNA for at least 24 months who also had HPV16 L1-specific antibodies tested with the multiplex HPV serology analysis; group 2 (n = 6), oral HPV16+/Ab−, that is, women with persistent oral HPV16, but HPV16-seronegative; group 3 (n = 4), oral HPV16−/Ab+, that is, constantly HPV16 DNA-negative but HPV16-seropositive; and group 4 (n = 7), oral HPV16−/Ab−, that is, those being both oral HPV16 DNA- and seronegative. Written informed consent was obtained from all women. The mean age of the women at the time of CMI testing was 38 years 7 months (range: 34-45 years), with no significant differences between the four groups. The FU data of the women are summarized in Figure 1. | Oral samples A subgroup of mother-child pairs were recalled 10-14 years after entry into the study for HPV-specific CMI studies. At this visit, venous blood samples were taken for CMI in addition to oral brush sampling for HPV testing, both from mothers and children. Briefly, the buccal mucosa of the cheeks and vestibular areas was brushed with a Cytobrush® (MedScan, Malmö, Sweden). The Cytobrush was then inserted into 80% ethanol and instantly frozen and stored at −70°C until analyzed. A dentist specialized in oral mucosal diseases (JR) also carried out a comprehensive clinical examination of all women. | Cervical specimen A gynecological examination was also conducted at the 14-year FU visit by a gynecologist (KK), including a Pap smear and additional cervical sampling with a Cytobrush® for HPV testing. The Cytobrush for HPV testing was placed in a tube with 0.05 M phosphate-buffered saline (PBS) with 100 μg of gentamycin and immediately frozen at −20°C. The Cytobrush was then stored at −70°C until tested (Rintala et al., 2005). Three of the 21 women declined the gynecological examination and sampling. | Anal samples Brush samples (Cytobrush®) were also taken from the anus by the gynecologist. Xylocain gel (AstraZeneca AB, Södertälje, Sweden) was applied to the anal region before sampling to increase the acceptability of the sampling device. The processing of the samples was similar to that of the gynecological samples. Three of the 21 women refused the anal sampling. | HPV antibody screening Blood samples for antibody determination were taken in a clot activator tube at baseline and at the 12-, 24-, and 36-month FU visits. At first, the samples were centrifuged at 1150 g for 10 min (Sorval GLC-2; DuPont instrument).
Then the serum was divided into three 1 ml aliquots and stored at −20°C for no longer than 1 week. Until being sent for analysis at the DKFZ, Heidelberg, Germany, the samples were stored at −70°C. Multiplex HPV serology based on glutathione S-transferase fusion-protein capture on fluorescent beads (Syrjänen et al., 2009;Waterboer et al., 2005) was used to analyze antibodies to the major capsid protein L1 of HPV types 6, 11, 16, 18, and 45. The median reporter fluorescence intensity (MFI) of at least 100 beads was computed for each bead set in the sample, with the cut-off value separately defined for each HPV probe as 1.5× background MFI + 5 MFI. The cut-off for HPV seropositivity was MFI 200. | Blood samples for CMI The isolation of peripheral blood mononuclear cells (PBMCs) was performed as described previously (Koskimaa et al., 2014;Paaso et al., 2015). Venous blood samples (74 ml) were collected in sodium-heparin collection tubes. The isolation was performed by centrifugation for 3 h over a Ficoll-Paque gradient (GE Healthcare Life Sciences, Uppsala, Sweden). Around 10 × 10^6 PBMCs were used for the lymphocyte stimulation test (LST). The leftover cells were cryopreserved in 80% Fetal Bovine Serum (FBS, Biowest, EU quality) and 20% DMSO (Merck, Darmstadt, Germany) at a density of 10 million PBMCs/vial. Autologous serum was used for the cell cultures in the short-term T cell proliferation assay (Koskimaa et al., 2014;Paaso et al., 2015). Mass spectrometry and high-performance liquid chromatography (HPLC) were used to test the peptide quality. Memory response mix (MRM) stock solution (50×) was used as a positive control for the proliferation assay and the cytokine production capacity of the PBMCs. It consisted of tetanus toxoid, 0.75 fl/ml (Statens Serum Institut, Copenhagen, Denmark), Tuberculin PPD, 5 μg/ml (Statens Serum Institut), and Candida albicans, 0.015% (Greer Laboratories, Lenoir, USA). | Proliferative capacity determination of HPV16-specific T cells by short-term lymphocyte stimulation test The protocol for the LST is described in more detail in our previous communications (Koskimaa et al., 2014;Paaso et al., 2015). Briefly, the PBMCs were seeded into U-bottomed 96-well microtiter plates. | Analysis of cytokine secretion At day 6, the supernatants were collected from the samples that were LST-positive. The Cytometric Bead Array (CBA) Human Enhanced Sensitivity Flex Set system (BD Biosciences, Temse, Belgium) was used to determine the levels of IFN-γ, TNF-α, IL-2, IL-5, IL-10, and IL-17A (Koskimaa et al., 2014). As described by the manufacturer, the detection limits for the cytokines were based on standard curves, with a limit of 274 fg/ml. A cytokine concentration exceeding twice the concentration of the medium-only control was considered as the limit of a positive antigen-induced cytokine production (Heusinkveld et al., 2011). | RESULTS 3.1 | HPV-specific information during the FU All HPV-specific data of the 21 women followed up for 14 years are given in Figure 1, which also includes the results of the HPV testing of anal, genital and oral samples collected at the last visit, when the blood sample was also taken for CMI analyses.
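The positivity rules quoted above (a probe-specific serology cutoff of 1.5× background MFI + 5, an overall seropositivity cutoff of MFI 200, and a cytokine response scored positive when it exceeds twice the medium-only control) can be summarized in a small sketch. How the two serology cutoffs combine is not spelled out in the text, so taking the stricter of the two is an assumption here; all numbers in the usage lines are invented.

```python
def hpv_seropositive(mfi, background_mfi):
    """Seropositivity call: probe-specific cutoff of 1.5 * background MFI + 5,
    combined (by assumption) with the overall seropositivity cutoff of MFI 200
    by taking the stricter of the two."""
    probe_cutoff = 1.5 * background_mfi + 5
    return mfi >= max(probe_cutoff, 200)

def cytokine_positive(sample_pg_ml, medium_only_pg_ml):
    """A cytokine response is scored positive when the antigen-stimulated
    concentration exceeds twice the medium-only control."""
    return sample_pg_ml > 2 * medium_only_pg_ml

print(hpv_seropositive(mfi=520, background_mfi=40))                  # True
print(hpv_seropositive(mfi=150, background_mfi=40))                  # False
print(cytokine_positive(sample_pg_ml=900, medium_only_pg_ml=300))    # True
```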
The results of the clinical examination of the oral mucosa at the same visit are also reported. As to the total response to E2 or E6, the strength of the proliferative response (Figure 3) and the percentage of individuals responding to E2 were higher in the HPV16 DNA-negative groups (groups 3 and 4) than in the HPV16 DNA-positive group (group 1) (p = 0.046 and p = 0.035, respectively). The proliferative response (Figure 3) and the percentage of patients responding to the complete E6 were higher in group 4 than in any other group (group 4 vs. group 2; p = 0.020) (Figure 2b). The women were then grouped together according to their oral HPV16 DNA+/− status (groups 1 and 2; groups 3 and 4). The strength of the proliferative response (Figure 3) and the percentage of women responding to either E2 or E6 were higher among the oral HPV16 DNA− groups than among the HPV16 DNA+ women (p = 0.030 and 0.020, respectively) (Figure 2c). | DISCUSSION The number of subjects in this study was small. Despite this fact, we were able to demonstrate differences in HPV16-specific CMI between HPV-negative women and women with persistent oral HPV16 infection. Oral HPV-negative women showed strong proliferative E2- and E6-specific responses. These responses were associated with prominent IFN-γ and IL-5 secretion. The oral cavity is the first possible route of HPV infection, especially in children, and could even be the first anatomic site of productive infection resulting in HPV-specific immunity (Koskimaa et al., 2012; Koskimaa et al., 2014). Our main interest was to find out whether any CMI-specific indicators could predict the outcome of oral HPV16 infections. Thus, based on their oral HPV16 DNA and HPV16 serology status during the FU, the women were classified into four subgroups. We hypothesized that the HPV-specific immune system is compromised in women with persistent HPV16 infection and a lack of protective HPV-specific antibodies (group 2) during the FU. Those women might be at increased risk for the development of HPV-induced oral mucosal lesions and even cancer. As expected, E2-specific proliferation was most common among the HPV16 DNA-negative women (groups 3 and 4). Similarly, E6-specific responsiveness was predominantly found in group 4. It was previously found that reactivity against E2 and E6 is typical for HPV-negative persons. This explains why circulating CD4+ and CD8+ T cells specific for HPV16 E2 and E6 antigens are frequently detected in healthy subjects (i.e., devoid of genital HPV) who have successfully cleared the virus (de Jong et al., 2002; Welters et al., 2003). The fact that none of the women had clinical oral lesions could be explained by the lack of E7-specific proliferation. Among HPV-positive oropharyngeal or CC patients, E2-specific proliferation is less prevalent, but E6- and E7-specific T cell proliferation is detected in many cases (de Jong et al., 2004; Piersma et al., 2008). This difference in immune reactivity suggests that the HPV16 DNA- and antibody-negative women (group 4) have the best protective CMI against progressive oral HPV16 infection. While oral HPV16 DNA was never detected in those women, the difference between the two HPV16 DNA-negative groups (3 and 4) was the presence of HPV16 antibodies in sera and the absence of E6 reactivity in the former. The presence of antibodies indicates a previous exposure to HPV16, which was not resolved in time to prevent the development of antibodies (Sasagawa et al., 1998). 
Potentially, the lack of E6 reactivity, which is usually found in healthy (HPV antibody-negative) individuals, is related to this. For genital infections, a role for E2- and E6-specific T cells in resolving the early stages of HPV infection has also been suggested (Dillon et al., 2007; Farhat et al., 2009; Paaso et al., 2015; Welters et al., 2003; Woo et al., 2010). In addition to the Th1/Th2 cytokines, IL-17 cytokines (mainly IL-17A) produced by Th17 T cells also play an important role in host immune responses during the elimination of pathogens. IL-17 might also stimulate the production of IFN-γ and TNF-α by T cells and natural killer (NK) cells, which amplifies the inflammation. Therefore, persistent production of IL-17 is a risk factor for chronic inflammation, and elevated levels of IL-17 have been found in CC patients (Chen et al., 2013; Zhang et al., 2011). We measured lower concentrations of IL-17A among women with persistent oral HPV16 infection than among the HPV-negative control women, and the overall concentrations were low. The most probable reason for the divergent results is the small sample size; another possible explanation might be the stage of the HPV infections. In other studies, such divergences have been found when comparing cancer patients, women with different stages of CIN, and healthy controls (Xue et al., 2018). All women in our study were basically healthy, without any clinically detectable lesions in the mouth or squamous intraepithelial lesions (SIL) in Pap smears. The most protective cytokine combination seemed to be that of the oral HPV16 DNA− and Ab− women (group 4). Mixed Th1/Th2-type cytokine secretion is typical of healthy women as compared to CC patients (de Jong et al., 2004; Welters et al., 2003; Welters et al., 2006). In line with these previous results, we could show evidence that the proportional ratios of IFN-γ and IL-5 cytokine secretion were also significantly higher in the oral HPV16 DNA-negative women than in the oral HPV DNA persistors (de Jong et al., 2004). Some previous evidence suggests that IFN-γ is one possible prognostic marker for HR-HPV clearance (Scott et al., 1999; Welters et al., 2003). (FIGURE 5 caption: Proportional ratios of the cytokines measured in the lymphocyte stimulation test (LST)-positive samples. In the upper part of the figure, the women were divided into four subgroups: HPV16 DNA+/Ab+ (n = 4), HPV16 DNA+/Ab− (n = 6), HPV16 DNA−/Ab+ (n = 4) and HPV16 DNA−/Ab− (n = 7). In the lower part of the figure, the smaller subgroups were combined to create two groups according to the oral HPV DNA status: oral HPV16 DNA-positive (n = 10) and -negative (n = 11) women. In each study subject, single cytokines were summed up from the LST-positive samples only. Then, the sum value of each cytokine was divided by the total cytokine sum value within the LST+ samples to obtain the cytokine-specific ratios. Finally, the mean values of the given ratios were calculated for each subgroup.) Our previous study also showed that women who tested HPV-negative in the genital tract or who could clear their HPV infection had higher levels of IFN-γ after HPV16 peptide stimulation than women with HPV-induced cervical intraepithelial neoplasia (CIN) (Paaso et al., 2015). In line with our results, Ondondo and coworkers recently reported that men with HPV clearance had significantly higher IFN-γ levels than those with persistent HPV infection (Ondondo et al., 2019). They concluded that a Th1 cell-mediated cytokine response was associated with natural HPV clearance in men (Ondondo et al., 2019). 
Similarly, they found that IL-2 levels were higher in the HPV clearance group than in noninfected men, which was, however, not evident in our study. In contrast to us, they used L1 peptides for peripheral blood lymphocyte stimulation, which is a better indicator of productive HPV infection than stimulation with E2, E6, or E7 peptides. Increased IL-10 levels have been identified in CIN lesions. T cell activation and Th1 cell differentiation can be inhibited by IL-10 (Alcocer-Gonzalez et al., 2006; de Jong et al., 2004; Lin et al., 2019; Prata et al., 2015). The role of IL-10 during virus infection is crucial. Many cell types are able to secrete IL-10, and the major source of IL-10 varies depending on the stage of infection (acute or chronic). For example, through co-operation with Th1 cytokines, IL-10 is able to regulate Th2 cytokines, such as the overproduction of IL-4 and IL-5 (Couper et al., 2008). The highest concentration of IL-10 after HPV16 peptide exposure was secreted by the peripheral blood mononuclear cells derived from women with persistent oral HPV16 infection and HPV16 seropositivity (group 1). Despite our limited results, we assume that oral HPV16 infections could affect the host's CMI similarly to what has been described by earlier CMI studies, which so far have focused only on HPV infections in the cervix. To conclude, our results indicate a mixed Th1/Th2/Th17 cytokine profile, while in oral HPV16 persistors the proliferative E2 and E6 responses were partly impaired and lacked IFN-γ and interleukin-5 secretion. Also, the proportional ratios of cytokines measured from the LST-positive samples indicate a slight shift toward a Th2/Th17 profile, similar to what was found earlier in patients with CC or severe CIN (de Jong et al., 2004; Lin et al., 2019). However, the differences were less prominent here, as we studied women with chronic oral HPV infection who remained clinically healthy during a long FU (de Jong et al., 2004). In the future, there is an urgent need for additional studies on patients suffering from persistent HPV infections with SIL and carcinomas in the head and neck region, especially in the oral, oropharyngeal, and sinonasal tracts, which might be the first sites of HPV infection in early life. Furthermore, these areas are in or adjacent to lymphoid tissues, facilitating early recognition of infectious agents, HPV included. ACKNOWLEDGMENTS We are grateful to the women who participated in the Finnish Family HPV Study. CONFLICT OF INTEREST There is no conflict of interest. DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request.
4,463.2
2021-01-09T00:00:00.000
[ "Biology" ]
Analysing Twitter and web queries for flu trend prediction Background Social media platforms encourage people to share diverse aspects of their daily life. Among these, shared health related information might be used to infer health status and incidence rates for specific conditions or symptoms. In this work, we present an infodemiology study that evaluates the use of Twitter messages and search engine query logs to estimate and predict the incidence rate of influenza-like illness in Portugal. Results Based on a manually classified dataset of 2704 tweets from Portugal, we selected a set of 650 textual features to train a Naïve Bayes classifier to identify tweets mentioning flu or flu-like illness or symptoms. We obtained a precision of 0.78 and an F-measure of 0.83, based on cross validation over the complete annotated set. Furthermore, we trained a multiple linear regression model to estimate the health-monitoring data from the Influenzanet project, using as predictors the relative frequencies obtained from the tweet classification results and from query logs, and achieved a correlation ratio of 0.89 (p < 0.001). These classification and regression models were also applied to estimate the flu incidence in the following flu season, achieving a correlation of 0.72. Conclusions Previous studies addressing the estimation of disease incidence based on user-generated content have mostly focused on the English language. Our results further validate those studies and show that by changing the initial steps of data preprocessing and feature extraction and selection, the proposed approaches can be adapted to other languages. Additionally, we investigated whether the predictive model created can be applied to data from the subsequent flu season. In this case, although the prediction result was good, an initial phase to adapt the regression model could be necessary to achieve more robust results. Background With the establishment of the Web2.0 paradigm, the Internet became a means of disseminating personal information rather than being used only as a source of information. 
Also thanks to the widespread access to computers, and to the appearance of other more mobile platforms such as smartphones and tablets, large quantities of user generated content (UGC) are currently created every day, on web pages, blogs and through social networking services like Twitter or Facebook. This content includes for example, personal experiences, knowledge, product reviews, and health information [1,2], representing great opportunities with many possible applications. Mining these data provides an instantaneous snapshot of the public's opinions, and longitudinal tracking allows identification of changes in opinions [3]. Twitter [4], for example, offers a micro-blogging service that allows users to communicate through status updates limited to 140 characters, commonly referred to as "tweets". It has over 200 million active users, and around 400 million tweets are published daily. Among the various forms of content that is created, people often use social networking services to share personal health related information, such as the appearance of flu symptoms or the recovery of those symptoms. Other types of user-generated content, such as Internet searches or comments to news articles, may also contain information related to some of these aspects. Thus, this information can be used to identify disease cases and estimate the disease incidence rate through time. The use of these different forms of internet data to infer and predict disease incidence is the subject of infodemiology, an emerging field of study also focused on syndromic surveillance and on measuring and tracking the diffusion of health information over the internet [5]. Various previous works have used Twitter and other user-generated data to assess and categorize the kind of information sought by individuals, to infer health status or measure the spread of a disease in a population [6]. In Lyon et al. [7], for example, the authors compared three web-based biosecurity intelligence systems and highlighted the value of social media, namely Twitter, in terms of the speed the information is passed and also because many issues or messages were not disseminated through other means. Chew and Eysenbach [3] suggested a complementary infoveillance approach during the 2009 H1N1 pandemic, using Twitter. They applied content and sentiment analysis to 2 million tweets containing the keywords "swine flu", "swineflu", or "H1N1". For this, they created a range of queries related to different content categories, and showed that the results of these queries correlated well with the results of manual coding, suggesting that near real-time content and sentiment analysis could be achieved, allowing monitoring large amounts of textual data over time. Signorini et al. [8] collected tweets matching a set of 15 pre-specified search terms including "flu", "vaccine", "tamiflu", and "h1n1" and applied content analysis and regression models to measure and monitor public concern and levels of disease during the H1N1 pandemic in the United States. Using a regression model trained on 1 million influenza-related tweets and using the incidence rates reported by the Centers for Disease Control (CDC) as reference, the authors reported errors ranging from 0.04% to 0.93% for the estimation of influenza-like illness levels. Chunara et al. [9] analysed cholera-related tweets published during the first 100 days of the 2010 Haitian cholera outbreak. 
For this, all tweets published in this period and containing the word "cholera" or the hashtag "#cholera" were considered, and these data were compared to data from two sources: HealthMap, an automated surveillance platform, and the Haitian Ministry of Public Health (MSPP). They showed good correlation between Twitter and HealthMap data, and showed a good correlation (0.83) between Twitter and MSPP data in the initial period of the outbreak, although this value decreased to 0.25 when the complete 100 days period was considered. Aramaki et al. [10] applied SVM machine learning techniques to Twitter messages to predict influenza rates in Japan. Lampos and Cristianini [11] and Culotta [12,13] analysed Twitter messages using regression models, in the United Kingdom and the United States respectively, obtaining correlation rates of approximately 0.95. More recently, Lee et al. [14] proposed a tool for real-time disease surveillance using Twitter data, showing daily activity of the disease and corresponding symptoms automatically mined from the text, as well as geographical incidence. Different works also rely on query logs to track influenza activity. One of earliest works in this field was performed by Eysenbach [15], who observed a Pearson's correlation coefficient of 0.91 between clicks on a keyword-triggered advert in the Google search engine with epidemiological data from Canada. This work was later extended by Ginsberg et al. [16] who presented Google Flu Trends, a tool based on Google search queries and that achieved an average correlation of 0.97 when compared against the ILI percentages provided by the Centers for Disease Control (CDC). The greatest advantage of these methods over traditional ones is instant feedback: while health reports are published in a weekly or monthly basis, Twitter and/or query log analyses can be obtained almost instantly. This characteristic is of extreme importance because early stage detection can reduce the impact of epidemic breakouts [10,16]. In this work, we adapted previously described methods in order to estimate the occurrence rate of influenza in Portugal using user-generated content from different sources, namely tweets and query logs, combined through multiple linear regression models. We take a two-step approach, as used in similar studies: first, tweets and user queries are selected according to manually crafted regular expressions, and tweets are further classified as 'positive' or 'negative', according to whether the text points to the occurrence of flu or not; secondly, we use the relative frequencies of the selected tweets and search queries to estimate flu incidence rates as predicted through an online self-reporting scheme. We show validation results for each of these steps and also evaluate if the trained models can be directly applied in the following flu seasons, allowing to track disease trends. The article is organized as follows: the next section describes the methodology, followed by a presentation and discussion of the obtained results; finally, some conclusions and proposals for further studies are presented in the last section. Methods The main objective of this work was to evaluate if user-generated content could be used to create a reliable predictive model to obtain instant feedback regarding the incidence of flu in Portugal, allowing to track its changes over time during the main flu period. 
In order to assess the performance of our approach, we compare it to epidemiological results from Influenzanet [17], a health-monitoring project related to flu. Influenzanet data is collected from several participants who sign up to the project and report any influenza symptoms, such as fevers or headaches, on a weekly basis. Influenzanet results are presented as weekly activity levels, defined as the number of onsets of the symptoms on a given week divided by the number of participant-weeks. As of May 2013, over 41200 volunteers from nine european countries were contributing to the Influenzanet project. In Portugal, 1552 participants were registered and contributing to the data at this time. Our proposed method is based on the analysis of tweets and web search queries. We start by identifying tweets and queries that are relevant for indicating the presence of flu or flu symptoms. This might be a statement on a Twitter status or a web search for flu medication, for example. We then calculate the weekly relative frequencies of flu related tweets and web queries, that is, the fraction of tweets (queries) in a given week that are related to flu. Given these values we trained various regression models using the Influenzanet results as the dependent variable, or predictand. The data were time aligned with the Influenzanet results. We used training data from nearly 14 million tweets originating in Portugal and covering the period between March 2011 and February 2012, an average of 324 thousand tweets per week. An initial analysis showed that a large quantity of tweets including links to web addresses (URLs) were from news sources, or were linking to news stories. Therefore, all tweets containing an URL were removed from the dataset in order to avoid bias. Similarly, replies (re-tweets) were also excluded. Besides Twitter, we also used around 15 million query log entries from the SAPO [18] search platform from December 2011 to May 2012, an average of 780 thousand log entries per week. In order to verify the possible application of the proposed method in a real scenario, we tested the hypothesis that the classification and regression models trained in one flu season could be applied in the following season. This would allow measuring the incidence of flu on a weekly basis, in almost real time. For this, we obtained a total of 24 million tweets, created from December 2012 to April 2013, an average of 1,1 million tweets per week. From the web search logs, we obtained a total of 14 million queries for the same period, an average of 650 thousand searches per week. Regular expressions We start by applying regular expressions in order to capture tweets and queries that contain influenza related words. For the queries, we used a simple regular expression that matches "gripe" (the Portuguese word for influenza) word derivations: "(en)?grip[a-z]+". A set of 1547 searches was identified, an average of 47 searches per week. For filtering the Twitter data we used a more complex expression, since tweets may contain a more descriptive account of someone's health status. The regular expression was built according to common insights about how people describe flu and flu-like symptoms, and can be divided into three groups, as described in Table 1: "gripe" (influenza) word derivations, "constipação" (cold) word derivations and flu related symptoms, such as body/throat pains, headache and fever. Using this regular expression, a set of 3183 tweets was identified, an average of 67 tweets per week. 
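As a rough illustration of this filtering step, the query pattern quoted above can be applied directly with Python's re module. The tweet pattern in the sketch below is only a simplified stand-in for the full expression of Table 1 (which combines "gripe" and "constipação" derivations with symptom terms), not the exact pattern used in the study:

```python
import re

# Query pattern quoted in the text: matches "gripe" (flu) word derivations.
QUERY_PATTERN = re.compile(r"(en)?grip[a-z]+", re.IGNORECASE)

# Simplified stand-in for the tweet pattern of Table 1 (illustrative only).
TWEET_PATTERN = re.compile(
    r"(en)?grip[a-zç]+|constipa[a-zçã]+|febre|dor(es)? de (garganta|cabeça|corpo)",
    re.IGNORECASE,
)

def filter_texts(texts, pattern):
    """Keep only the texts that mention flu-related words."""
    return [t for t in texts if pattern.search(t)]

queries = ["remedios para a gripe", "resultados futebol", "engripado o que tomar"]
print(filter_texts(queries, QUERY_PATTERN))  # the two flu-related queries are kept
```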
Classification methods Using filtering based on regular expressions as described above is not sufficient, as many tweets that contain words related to flu do not imply that the person writing the text has the flu. Tweets like "Hoping the flu doesn't strike me again this winter" contain the keyword flu but do not tell us that this person has the flu. To solve this problem, we applied machine-learning techniques to classify each tweet as "positive" or "negative" according to its content. Manual data annotation In order to create the predictive models, we asked a group of 37 independent annotators to manually classify a set of tweets, using a simple and intuitive web form. During the annotation task, each annotator was repeatedly assigned a random tweet, with the following restrictions: each tweet had to be labelled by up to three annotators, and each annotator could not label the same tweet more than once. Annotators were instructed to consider a tweet as positive if it revealed that the person who wrote it was with the flu, was having flu symptoms or was recently ill with the flu. A third category was also used, to indicate tweets referring to "cold". We explore the inclusion of these tweets as positive or negative in the results section. To reduce incorrect answers, the annotators could also label the tweet as unknown. Then, a final label was assigned to each tweet according to majority vote, that is, a tweet was considered positive if at least two annotators marked it as positive. Tweets with inconsistent or insufficient labelling information did not receive a final label and were not included in the dataset. Feature extraction and selection In order to train the classification models, tweets were represented by a bag-of-words (BOW) model. The Natural Language Processing Toolkit [19] (NLTK) was used to tokenize the text, remove Portuguese stopwords and stem all remaining words in each tweet. Character bigrams for each word were also generated, making up a total of 5106 features. Bigrams of words were also tested, but these did not improve the classification results and were therefore removed. We applied feature selection techniques for defining the best set of features to use. For this, each feature was compared to the true class label to obtain the mutual information (MI) value. The higher a feature's MI score, the more it is related to the true class label, meaning that the feature contains discriminative information to decide if that tweet should be classified as positive or negative. We selected the optimal number of features empirically, by selecting features with MI value above different threshold values and running cross-validation with the training data. Machine learning methods Several machine learning techniques (SVM, Naïve Bayes, Random Forest, Decision Tree, Nearest Neighbour) were tested in order to evaluate which would produce better results. We used the SVM-light [20] implementation of SVMs. The remaining classifiers were trained using the Scikit-learn toolkit [21]. Linear regression models We used linear regression models to estimate the flu incidence rate, using the Influenzanet data to train and validate the regression. 
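A compact sketch of the classification pipeline described above (tokenization, Portuguese stop-word removal and stemming with NLTK, mutual-information feature selection, and a Naïve Bayes classifier) could look as follows. The corpus size, labels and the choice of 650 features follow the text, while the helper names are illustrative and the exact preprocessing (for example, the character-bigram features) is simplified:

```python
# Requires the NLTK "punkt" and "stopwords" data packages.
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

stemmer = SnowballStemmer("portuguese")
stop_words = set(stopwords.words("portuguese"))

def preprocess(text):
    # Tokenize, drop Portuguese stop words and stem, as described in the text.
    tokens = [t.lower() for t in word_tokenize(text, language="portuguese")]
    return [stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words]

# Bag-of-words over the stemmed tokens (character bigrams omitted for brevity).
vectorizer = CountVectorizer(analyzer=preprocess)

model = make_pipeline(
    vectorizer,
    SelectKBest(mutual_info_classif, k=650),  # keep the 650 highest-MI features
    MultinomialNB(),
)

# tweets: list of annotated tweet texts; labels: 1 = author has/had the flu, 0 = not.
# model.fit(tweets, labels)
# positive_probabilities = model.predict_proba(new_tweets)[:, 1]
```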
We trained both single and multiple linear regressions, combining the predicted values obtained from the different classifiers, query logs and regular expressions: y_i = b_0 + b_1 x_{i,1} + ... + b_K x_{i,K} (Eq. 1), where y_i represents the flu rate in week i, b_0 is the intercept, x_{i,k} is the value of predictor k in week i, b_k is the coefficient of predictor k, and K is the total number of predictors used. As the input to the regressions, we used the weekly relative frequencies obtained after applying the regular expressions to the web queries and to tweets, and after classifying the tweets with the various classifiers tested. Also, instead of using the number of positive predictions from the classifiers to calculate the weekly relative frequencies, we tested summing over the classification probabilities of the positive predicted documents in each week, similarly to what was proposed by Culotta [12,13]: f_i = (1/|D_i|) Σ_{x_j ∈ D_i : p(y_j = 1|x_j) > t} p(y_j = 1|x_j) (Eq. 2), where f_i is the predicted incidence in week i, D_i is the set of documents for week i, p(y_j = 1|x_j) is the probability for classifying document x_j as positive, |D_i| is the number of documents in week i, and t is the classification threshold. Data annotation A total of 7132 annotations were obtained, resulting in 2704 labelled tweets of which 949 were positive for flu. Although a large number of annotators was recruited, which could introduce inconsistencies in the data, this was minimized by the fact that a smaller number of annotators actually contributed to the classification of most of the dataset: the three top annotators contributed to the labelling of 56% of the dataset, and the top ten annotators contributed to 90% of the final labels (Table 2). Moreover, to validate the data obtained from the annotators, the majority voted labels of 500 random tweets were verified by one of the authors, leading to an annotator accuracy of 95.2%. Binary classification of Twitter messages The performance of the different classifiers was compared through 5 × 2-fold cross validation using the entire dataset of 2704 tweets, covering the period from May 2011 to February 2012. Using the full set of features, the best results were obtained with the SVM classifier, with an F-measure of 0.75. After feature selection, the best overall results were obtained for a set of 650 features, achieving an F-measure of 0.83 with both SVM and Naïve Bayes classifiers (Table 3). For each classifier, we selected from the receiver operating characteristic (ROC) analysis (Figure 1), and based on the training data only, an operating point that maximized the classification precision without a severe loss on the classifier recall. Although operating points with higher F-measure values could have been selected, these would represent higher recall, at the expense of a lower precision. We therefore chose the more stringent models, in order to reduce the amount of false positive hits, and consequently, the amount of noise present in the final results. Applying a simple linear regression between the predictions of each of these classifiers and Influenzanet data resulted in an average correlation ratio of 0.76 for both SVM and Naïve Bayes. When the classifiers were trained with tweets marked as "cold" treated as positive data, the results improved considerably for the Naïve Bayes classifier (0.82) but only a slight change was obtained with the SVM classifier (0.77). Flu trend prediction For flu trend prediction, we tested linear regression models with the relative frequencies calculated from the classification results, query logs and regular expressions as the predictors. 
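The two estimators defined earlier in this section (Eqs. 1 and 2) translate directly into a few lines of code. The sketch below, with illustrative variable names and placeholder numbers, computes the probability-based weekly relative frequency of Eq. 2 and fits the multiple linear regression of Eq. 1 with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def weekly_relative_frequency(positive_probs, n_docs, threshold=0.5):
    """Eq. 2: sum the classifier probabilities of the documents predicted
    positive in a week and divide by the number of documents in that week."""
    kept = [p for p in positive_probs if p > threshold]
    return sum(kept) / n_docs if n_docs else 0.0

# Hypothetical weekly predictors: column 0 = tweet relative frequency,
# column 1 = query relative frequency (both computed per week).
X = np.array([[0.012, 0.030], [0.018, 0.041], [0.025, 0.055], [0.016, 0.038]])
# Influenzanet weekly ILI activity used as the dependent variable (illustrative values).
y = np.array([0.8, 1.3, 2.1, 1.1])

reg = LinearRegression().fit(X, y)      # Eq. 1: y_i = b_0 + sum_k b_k * x_{i,k}
print(reg.intercept_, reg.coef_)        # b_0 and the coefficients b_k
print(reg.predict([[0.020, 0.045]]))    # estimated flu rate for a new week
```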
In order to select the best regression model, we executed a series of cross-validation experiments, using data from the period from December 2011 to April 2012 (20 weeks). To avoid overlapping between training and test data, the NB and SVM models for tweet classification were trained with a subset of 1728 manually annotated tweets, covering the period from March 2011 to November 2011. To run the experiments, we randomly partitioned the data into a training set and a test set, each covering ten data points (weeks). This was repeated ten times, and the Pearson's correlation coefficient between the predictor output and the Influenzanet rates, used as gold standard, was calculated for each partition. The average results are shown in Table 4. (Table 4 caption: Pearson's correlation ratios between linear regression estimates and Influenzanet data. The "flu" and "expanded" regular expressions correspond to the pattern for "gripe" (flu) word derivations and the complete pattern as shown in Table 1, respectively. The weekly relative frequency was calculated based on the number of positively classified tweets (counts) or on the probabilities given by the classifier (Eq. 2). Tweets referring to "cold" were used either as positive or negative data when training the classifier.) In the results shown in the table, the Naïve Bayes classifier was used to classify the tweets. A correlation of 0.886 was obtained for a multiple regression combining the queries' relative frequency with the tweets' relative frequency, using classification probabilities instead of counts in the calculation, as shown in Table 4. For comparison, the best regression obtained with the SVM classifier was achieved with the same configuration, resulting in a correlation ratio of 0.849. Figure 2 shows the resulting predicted trend on these data, using the best regression model. The model was trained with the data for the first ten weeks and applied to the entire time sequence. One of the possible limitations of this study is the reduced size of the dataset, when compared to similar works. Indeed, despite Twitter being a largely used social web platform, it is not very popular in Portugal, which limited the size of our dataset. As a comparison, we had access to around 14 million tweets as training data, with a daily average of nearly 40,000 tweets, from which 1728 were used to train the binary classifiers. Aramaki et al. [10] used 300 million tweets, from which 5,000 were used for training. On the other hand, Culotta [12] used a total of 500,000 messages, selecting 206 of those messages to train a model. Due to the limited amount of used data, overfitting problems are reported in that work. However, although the data used for calculating the model coefficients was limited, good regression results could still be obtained. In order to test the hypothesis that the classification and regression models trained in one flu season could be applied in the following season, we trained a NB classification model using the complete set of 2704 annotated tweets and a regression model using the 20 weeks from December 2011 to April 2012. Applying the regular expressions and the previously trained classifier to the 24 million tweets from the period between December 2012 and April 2013, we obtained a total of 5594 positive tweets, representing an average of 266 tweets per week. Similarly, applying the simple regular expression for flu related words to the 14 million queries relative to this period, we obtained 1428 queries, an average of 68 per week. 
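The validation scheme described earlier in this section (ten random splits of the 20 weeks into 10 training and 10 test weeks, scored with Pearson's correlation) can be sketched as follows; the arrays are placeholders standing in for the weekly predictor and Influenzanet values:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_weeks = 20

# Placeholder weekly data: two predictors (tweets, queries) and Influenzanet rates.
X = rng.random((n_weeks, 2))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, n_weeks)

correlations = []
for _ in range(10):                       # ten random train/test partitions
    idx = rng.permutation(n_weeks)
    train, test = idx[:10], idx[10:]
    model = LinearRegression().fit(X[train], y[train])
    r, _p = pearsonr(model.predict(X[test]), y[test])
    correlations.append(r)

print(np.mean(correlations))              # average Pearson correlation over partitions
```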
For each week we calculated the relative frequency for queries and tweets, considering classification probabilities as above, and applied the regression model trained with the data from the previous flu season (2011 to 2012). Figure 3 illustrates the regression results obtained. (Figure 3 caption: Flu trend prediction. Prediction of the flu incidence rate in Portugal for the period from December 2012 to April 2013. The models trained with data from the previous flu season were used to generate the prediction. Stacked areas indicate the minimum, average and maximum values registered by the Influenzanet project for each week. The incidence rate reported by the European Influenza Surveillance Network (EISN) is also shown (right axis).) The Influenzanet project measures the ILI activity levels by week, but updates and reports these results daily, taking into account the onset of symptoms in the previous 7 days, and therefore considering weeks starting on each day of the week. Since only the symptom onset is considered, this introduces variability in the data, depending on whether we consider the week to start on a Sunday or a Monday. In order to deal with this variability, we considered the average and the maximum reported activity for a given week for training the regression model. We take Monday as the first day of the week. The stacked areas in Figure 3 show the minimum, average and maximum ILI activity reported by the Influenzanet project for each week. The solid line shows the predicted trend when the maximum ILI activity value was used to train the regression model. The dashed line shows the trend when the average activity was used. The best result, r = 0.72, was obtained considering the maximum activity value in the regression. When the average was considered, the correlation coefficient dropped to r = 0.67. Also shown is the weekly incidence rate as reported by medical doctors and national agencies to the European Influenza Surveillance Network (EISN). The correlation coefficient was in this case much lower, r = 0.62, using either the average or the maximum activity values in the regression. However, considering a time delay of one week for this curve in relation to the predicted trend, this correlation increases considerably, to r = 0.79. In fact, inspecting the lines in the graph, it is possible to observe that the EISN trend seems to lag behind the predicted trend by one week. This difference could be a result of the time people take before they visit the doctor, as opposed to the real-time nature of social networks. Another interesting observation from the graph is the seemingly over-estimated peak at week 9 (February 27th to March 4th 2013). In fact, although not indicated by the Influenzanet and EISN values, this corresponds to a period of abnormally high incidence of flu in Portugal, as reported in the media and by the National Institute of Health (INSA) in its weekly flu surveillance bulletin. It is also possible that more references to flu were made in the social networks due to the high impact of news reports. This needs to be inspected in more detail. Conclusions Although many studies targeting the prediction of flu incidence using data from search engine logs or from social media have been presented, this is, to the best of our knowledge, one of the first works on this subject done specifically for the Portuguese language. 
Although most of the used methods are similar and applicable across languages, the limited amount of available data in languages other than English, as well as language specificities, may influence the final results obtained. Another important novelty of our work is the combination of tweets and user queries, through multiple linear regression models. This contributed to a better approximation to health monitoring results used as gold-standard in this work. A possible extension to this would be to use other sources of user-generated content, such as blog posts and comments on web pages. The best result reached a Pearson's correlation ratio, between the estimated incidence rate and the Influenzanet data, of 0.89 (p < 0.001). This result indicates that this method can be used to complement other measures of disease incidence rates. Unfortunately, the amount of data available for validating the prediction model was reduced, which may limit the relevance of the results. We also evaluated the application of the classification and regression models from one flu season to the next. The best result, r = 0.72, indicates that a good estimate can be obtained, although further work is needed in order to improve this. One possibility to pursue is to apply adaptive learning to update the classification and regression models as new information becomes available, for example from weekly epidemiological reports. Another important aspect to consider in further studies is whether it is possible to detect in (almost) real-time or predict, with some advance, an increase in the incidence of flu (or other illnesses) in order to optimize the response by the health authorities. The one week delay observed between the EISN data and the predicted trend seems to point in this direction, but this needs further validation.
6,522.6
2014-05-01T00:00:00.000
[ "Computer Science", "Environmental Science", "Medicine" ]
Mechanisms of knowledge flows in bottom-up and top-down cluster initiatives Abstract Knowledge flows are widely believed to be a phenomenon of clusters, and inducing them is one of the chief objectives in establishing and promoting cluster initiatives (CI). However, not many studies discuss how these flows and their effects may differ depending on the mode of CI creation and on the role of public authorities in this process. The main aim of this article is to compare mechanisms of knowledge flows in bottom-up and top-down cluster initiatives. The results of empirical research involving two case studies in western Poland, obtained through the use of Social Network Analysis (SNA), show that in bottom-up cluster initiatives firms which were innovation leaders played a prime role in disseminating technological and business knowledge, while in the top-down initiatives the most important were representatives of universities and research centres as well as formal coordinators of cooperation. Policy implications stemming from these results were identified. INTRODUCTION Clusters have recently become a key theoretical and empirical issue in regional economic competitiveness and innovativeness (Cruz & Teixeira, 2010), in particular the way that knowledge flows or spillovers occur between cluster actors: firms, universities, research institutions, agencies for regional development and other cooperating entities. Knowledge flowing between cluster agents can drive a variety of positive knowledge externalities, thereby stimulating regional innovativeness (Karlsson & Gråsjö, 2014). Generating knowledge flows is one aim underlying cluster initiatives -organized efforts to increase growth and competitiveness within a cluster (Lindqvist, Ketels, & Sölvell, 2013). We know relatively little about how different actors influence the mechanisms of cluster knowledge flows. Spillovers involve third parties tapping into existing innovation networks, changing those networks' nature and, hence, how knowledge creation, exchange and spillover occurs (Maskell, 2001). Because clusters involve heterogeneous constellations of these actors, knowledge spillover mechanisms may function differently depending on the way these cooperating entities formalize cooperation. Two extremes are stylized here: a bottom-up cluster initiative is created where relations already exist between the firms, as against a top-down cluster initiative where public authorities or business support units formalize cooperation (Fromhold-Eisebith & Eisebith, 2005). Therefore, three research questions are asked in this paper: • How do a cluster initiative's origins (bottom-up or top-down) affect cluster-based knowledge spillover mechanisms? • How does entrepreneurs' willingness to share their knowledge in bottom-up and top-down cluster initiatives affect spillover creation? • What instruments of cluster policies should be employed to achieve the highest knowledge externalities in both investigated types of initiatives? KNOWLEDGE FLOWS IN CLUSTERS The theoretical foundations of cluster processes were laid by Porter (1990) and Krugman (1991), who posited that geographical proximity of firms from one sector favoured knowledge spillovers. Face-to-face contacts engendered interactive learning processes in clusters facilitating exchange of both explicit and tacit knowledge (Storper & Venables, 2004), also augmented by cluster agents building channels of communication with outsiders (Bathelt & Cohendet, 2014). 
Among the many kinds of economic knowledge, firms use clusters primarily to obtain technological and business (entrepreneurial) knowledge (Karlsson & Gråsjö, 2014). Dahl and Petersen (2004) considered these cluster knowledge flow processes empirically, whilst others measured firms' patent stocks or patent networks to analyse cluster knowledge creation and knowledge spillovers (e.g. McCann & Folta, 2011). However, such approaches cannot investigate the mechanisms and processes of knowledge flows between particular actors (knowledge sources and receivers): Swann's (2009) 'Ladder model of cluster richness' claimed informal knowledge exchange was the highest level of cluster advancement, but also the most difficult to measure. Stough (2015) likewise argued that as clusters mature, the knowledge that exists and is exchanged (purposefully and spontaneously) internally becomes more homogeneous. There are several reasons for entrepreneurs to share their knowledge in clusters (e.g. Bessant & Tidd, 2011), including: • Complementarity of the knowledge possessed (bilateral exchange). • Willingness to help friends or colleagues (whether positional or philanthropic). • Building confidence and creating a chance to establish long-lasting cooperation. • Difficulty in hiding knowledge, or unplanned transmission during personal meetings and conversations. • A desire for self-innovativeness, especially when talks/discussions encourage managers to think how to improve their business. • Encouragement from public authorities. Of course, not all of a company's knowledge can be shared or disseminated. Firm owners will resist giving away secrets of production or promotion, especially if they ensure competitive market advantages. Universities and research institutions also play their roles in cluster knowledge flows: their knowledge may spill over spontaneously, and they also offer paid knowledge transfers in the form of consultations, training courses or contract research (Runiewicz-Wardyn, 2013). The extent to which firms can exploit valuable knowledge in clusters depends on their ability to build a relational network position inside a cluster, and on their skills in finding the best knowledge source inside such networks (van der Valk & Gijsbers, 2010). Research shows that different methods of initiating cluster initiatives (top-down and bottom-up) significantly affect rules of interaction, routines of collaboration and collective learning (Fromhold-Eisebith & Eisebith, 2005). The present paper investigates how cluster initiative origins can influence knowledge flows between cluster agents (mostly firms, but also universities, research institutions and coordinators of cooperation) and what role they play in disseminating knowledge within those initiatives. Identifying such mechanisms affects regional policies, as it can help in formulating recommendations on optimizing knowledge spillovers in regional contexts. SELECTION OF CASE STUDIES AND METHODOLOGY Social network analysis (SNA) is widely accepted as a good tool for investigating different actors' positions and behaviour patterns in innovation networks, systems or processes, including clusters (Giuliani & Pietrobelli, 2011; Ter Wal & Boschma, 2009). The present paper explores knowledge flows in two cluster initiatives in the Wielkopolska region (western Poland): The Swarzędz Cluster of Furniture Producers and The Leszno Flavours Food Cluster. 
Both represent low-tech industries whose respective branches had been selected in 2014 as smart specializations for Wielkopolska. Formal cooperation between the firms had been established in 2011, with both initiatives actively operating in 2014 and 2015. The most substantive difference between them was their mode of creation. The furniture cluster initiative had been set up by several local entrepreneurs who had known each other for years and decided to reinforce and formalize their cooperation. The food cluster was initiated by the Leszno Business Centre with the support of Leszno municipality authorities (Table 1). The author conducted personal interviews with the owners of every firm in both initiatives. Each interview started by giving the interviewee a list of all entities formally constituting the given cluster initiative, including firms (12 and 18 respectively, all small in size), as well as coordinators and cooperating universities and research units (2 and 5 respectively). The firm owners were asked from whom they had acquired two kinds of knowledge: technological (concerning machines, materials, components, the production process) and business (concerning entrepreneurial matters: financing, organization of production, promotion, marketing). These two types of knowledge were selected because both are important for firm performance and relatively easy for managers to differentiate. (Figure note: (a) diamonds - coordinators of cluster initiatives; circles - firms; squares - universities/research units; (b) ties indicate bilateral relations; (c) node size is directly proportional to the actor's centrality degree; (d) an arrow pointing from actor A to B means that A acquires knowledge from B (B is the source of knowledge for A); (e) the layout of nodes is random (it does not reflect the spatial distribution of firms). Panels: food cluster technological knowledge flows; business knowledge flows; knowledge flows (together). Source: author's own work in Ucinet and NetDraw.) The method chosen was 'roster recall': interview participants recall from memory their relations to other network members on the basis of the presented list. It was assumed that firms had similar absorptive capacities and that answers reflected business contacts rather than personal sympathies. Matrices for the two initiatives for both types of knowledge were constructed (12 × 12 and 18 × 18, recording 1 if a knowledge flow from an agent was clearly declared and 0 if not), as well as one extra matrix covering both network flows (with 2 if both types of knowledge were obtained from the same agent, 1 if only one type, 0 if none). These matrices were then analysed with Ucinet VI software (Borgatti, Everett, & Freeman, 2002) and visualized in Netdraw software (Borgatti, 2002). The networks were analysed in terms of four network characteristics (Wasserman & Faust, 1994): • Network density: the overall number of ties relative to the number of possible ties. • In-degree centrality: the number or per cent of all ties oriented toward actors, sometimes termed popularity or attractiveness. • Out-degree centrality: the number or per cent of all ties emanating from actors, also known as expansiveness. • An actor's centrality degree: the number of an actor's ties relative to the number of all the actor's possible ties; it reflects the actor's level of network activity or involvement. 
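The study computed these measures in Ucinet; the same four characteristics can also be reproduced with open-source tools, as in the following sketch for a small, hypothetical directed knowledge-flow network (an edge from A to B meaning that A acquires knowledge from B, as in the matrices described above):

```python
import networkx as nx

# Hypothetical knowledge-flow network: an edge (A, B) means A acquires knowledge from B.
edges = [
    ("firm1", "firm2"), ("firm3", "firm2"), ("firm4", "firm2"),
    ("firm2", "university"), ("firm1", "coordinator"), ("firm4", "university"),
]
G = nx.DiGraph(edges)

density = nx.density(G)               # ties relative to possible ties
in_deg = nx.in_degree_centrality(G)   # "popularity": how often an actor serves as a knowledge source
out_deg = nx.out_degree_centrality(G) # "expansiveness": how actively an actor seeks knowledge
total_deg = nx.degree_centrality(G)   # overall involvement of each actor

print(f"density = {density:.2f}")
print("most central actor:", max(total_deg, key=total_deg.get))
```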
COMPARING KNOWLEDGE FLOWS IN BOTTOM-UP AND TOP-DOWN CLUSTER INITIATIVES The network analysis conducted yielded substantially different results for the bottom-up and top-down cluster initiatives (Tables 2 and 3). Network densities in the bottom-up furniture cluster were higher than those in the top-down food cluster, possibly suggesting that well-established relations influence knowledge flows' intensity. Moreover, in the furniture cluster the network density of technological knowledge flows was higher than that of business knowledge flows, a pattern reversed for the food cluster. Actors' roles differed in the two networks. In the furniture cluster, the most central actors in both technological and business knowledge flows were two 'innovation leader' firms. In the food cluster, the most central actors in both knowledge networks were the Poznań University of Life Sciences and the Poznań University of Economics, which functioned as 'external knowledge brokers'. The cooperation coordinator also played an important role here (especially in disseminating business knowledge and organizing events transferring technological knowledge to firms). Many relations between actors were bilateral in both cluster initiatives, and the findings (average out-degree centrality was lower than in-degree centrality) suggest that firms tend to seek knowledge suppliers rather than knowledge users. Certainly the presence of important knowledge sources (universities and research centres) on the list affected these results. These two cases suggest that a cluster initiative's origins affect the processes of internal knowledge flows. Figure 1 shows the initial models of mechanisms of knowledge flows in bottom-up and top-down cluster initiatives. In bottom-up clusters, strong relations link those firms playing major roles in disseminating knowledge. Other cluster firms link to innovation leaders and their inside knowledge as the main source of their new knowledge, whilst coordinators and external experts play largely insubstantial roles in disseminating knowledge. In top-down type clusters, firms want to access outside knowledge, coming from universities and research institutions. Here, representatives of universities and research centres are considered trustworthy, becoming important sources of sector-specific technological and business knowledge, whilst inter-firm knowledge exchange is more restricted. CONCLUSIONS AND POLICY RECOMMENDATIONS In countries of Central and Eastern Europe (CEE), cluster initiatives recently became a popular innovation policy instrument (Ketels, Lindqvist, & Sölvell, 2006), particularly driven by European Union funding (Churski & Stryjakiewicz, 2006; Kowalski, 2013). In CEE countries, market rules and network cooperation are often new to some firms, with cluster alliances facing many problems, including meeting legal requirements or overcoming lack of trust in relations between entrepreneurs, scientists and public authorities (Stryjakiewicz, 2005). Regional innovation strategies should emphasize more strongly the significance of supporting knowledge flows between cluster agents to reinforce 'regional advantages' and foster regional smart specializations (Asheim, Boschma, & Cooke, 2011). This paper demonstrates that mechanisms of cluster initiative knowledge flows appear to depend on those initiatives' origins. Cluster agents' positions in knowledge flows differed between the bottom-up and the top-down cluster initiative. 
In the bottom-up cluster, innovation-leading firms played the prime role in disseminating technological and business knowledge, while in the top-down cluster, universities and research centres, playing a 'knowledge broker' role, were most important. Firms in the bottom-up initiative used mainly other cluster firms' knowledge, while those in the top-down one exploited external knowledge. Entrepreneurs' willingness to share knowledge also differed: in the bottom-up initiative, entrepreneurs were generally more active than in the top-down initiative, which suggests that long-lasting relations impact upon knowledge flows. This analysis gives two examples of low-tech Polish industries, leaving open the question of whether similar patterns might be visible in medium- or high-tech clusters outside CEE countries. Certainly, further studies of this issue are needed, concerning also qualitative research on network orchestration as well as relations between knowledge flows and the level of trust or cooperation/competition inside a cluster initiative. These results suggest that further studies of knowledge spillovers in clusters, as well as regional policy tools concerning knowledge creation and dissemination in cluster initiatives (especially in CEE countries), should account for differences resulting from their bottom-up or top-down character. There are also consequences here for regional policy-makers supporting cluster initiatives. For bottom-up cluster initiatives, policy should firstly provide financial and organizational support for research and innovation work in the most active and innovative firms. Thereafter, events should be organized (e.g. meetings, conferences, trade fairs) to create long-lasting trust relationships with other firms, which then benefit from the innovation leaders' knowledge. In top-down cluster initiatives, superficial relationships and lack of confidence between firms hinder immediate knowledge spillovers, encouraging innovation leaders to behave protectively toward their knowledge, and raising distrust among other cluster companies. Therefore, supporting direct knowledge transfer from universities or research institutes to firms, via various types of training courses, conferences and open meetings, is a useful first step.
3,143
2016-01-01T00:00:00.000
[ "Sociology", "Business", "Economics" ]
The Functional Expansion Approach for Solving NPDEs as a Generalization of the Kudryashov and G (cid:48) / G Methods : This paper presents the functional expansion approach as a generalized method for finding traveling wave solutions of various nonlinear partial differential equations. The approach can be seen as a combination of the Kudryashov and G (cid:48) / G solving methods. It allowed the extension of the first method to the use of second order auxiliary equations, and, at the same time, it allowed non-standard G (cid:48) / G -solutions to be generated. The functional expansion is illustrated here on the Dodd–Bullough–Mikhailov model, using a linear second order ordinary differential equation as an auxiliary equation. Introduction Solving nonlinear partial differential equations (NPDEs) is an important issue in many problems from mathematical physics. This is mainly related to the fact that the integrability of these equations is a problem in itself and there are not any clear prescriptions or algorithms that can be used for solving such equations. Many approaches have been proposed, both for establishing if the equations are integrable or not, and for solving the integrable ones. It is important to mention that the same NPDE could present many classes of solutions, depending on the values of the parameters appearing in the equation. An important class of solutions is represented by the traveling wave solutions. They are very important in the theory of solitons and are related to a symmetry transformation, which leads to a one-dimensional, nonlinear ordinary differential equation (NODE) [1]. This is obtained by using the wave variable and a whole symmetry group accepted by the initial equation. Let us consider that the variable u(t, x) defined in a 2D space satisfies an NPDE of the following form: F(u, u t , u x , u xx , u tt , · · · ) = 0. (1) Traditionally, the wave variable includes the wave velocity V, and has the following form: It transforms (1) into a NODE of the form: ∆(u, u , u , · · · ) = 0, where u = du(ξ)/dξ. In principle, solving (3) is simpler than solving (1) and then, by pulling back the solutions of (3) to the initial variables {x, t}, one can find solutions of the NPDE (1). Many approaches for finding traveling wave solutions have been proposed and are currently used in literature. Some of them have a strong theoretical basis and are related to approaches such as the inverse scattering method [2], Lax operators [3,4], Hirota and super-Hirota biliniarization [5][6][7][8], Lie symmetry theory [9,10], the ghost field method [11][12][13][14], the homotopy technique [15], etc. There are also direct approaches, trying to see if the investigated NPDE accepts traveling wave solutions with a pre-defined mathematical form: harmonic solutions expressed through sine-cosine [16], hyperbolic solutions expressed by cosh or tanh [17,18], the first integral method [19,20], etc. These attempts were generated by the fact that such solutions correspond to important equations from soliton theory, such as Riccati [21] or Jacobi [22,23] equations. These were only a step before the invention of the so-called auxiliary equation method for solving NPDEs [24]. In this case the NPDE solutions u(ξ) have to be expressed as combinations or expansions of any of the known solutions G(ξ) of these basic "auxiliary" equations. Many investigation methods based on auxiliary equations have been proposed, such as the exponential method [25,26] or the Kudryashov method [27,28]. 
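The displayed equations around (1)-(3) did not survive extraction cleanly. Under the standard conventions of this literature, the travelling-wave reduction being described presumably takes the following form; this is a reconstruction for readability, not a verbatim copy of the original displays:

```latex
% Reconstruction of the travelling-wave reduction sketched in the text (not verbatim).
\begin{align}
& F\bigl(u,\,u_t,\,u_x,\,u_{xx},\,u_{tt},\,\dots\bigr)=0,            && \text{(1)}\\
& \xi = x - V t,                                                      && \text{(2)}\\
& \Delta\bigl(u,\,u',\,u'',\,\dots\bigr)=0, \qquad u'=\frac{du(\xi)}{d\xi}. && \text{(3)}
\end{align}
```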
In the last-mentioned case, a solution of the form (4) is supposed, where G(ξ) represents a solution of a Riccati equation. As the Riccati equation is a very simple first order equation, the solution (4) depends on G(ξ) only. Many other types of auxiliary equations have been considered in the literature, some of them of higher differential order. For example, if a second order auxiliary equation, ∆(G, G′, G″) = 0, is considered, the solution u(ξ) will also depend on the first derivative of G(ξ) and should be expressed in a form of the type (6). The question is how the Kudryashov method can be extended to the case of second order auxiliary equations. Many authors use the G′/G method, in its classic form [29], or in various extended or generalized versions [30,31]. In this approach the solution (6) is mainly considered as an expansion, with constant or function coefficients a_i, of the form (7). The method does not offer a clear answer as to why the only possible combination is of the form (7) and whether other extended forms are still possible; answering these questions was the main aim of this work. The starting point is represented by our previous paper [32], where the functional expansion method was proposed. This approach can be seen as an extension of many other approaches, including the Kudryashov method, and it has been shown that for models such as KdV, Gardner and Kundu–Mukherjee–Naskar (KMN) [33], solutions more general than the G′/G ones can be generated using a linear second order ODE as an auxiliary equation. Here, the Dodd–Bullough–Mikhailov (DBM) equation is used as an exemplifying model. As with many other methods for obtaining traveling wave solutions, the functional expansion method has two important ingredients: (i) transformation of the NPDE into a NODE using the wave variable; (ii) finding solutions of the NODE in terms of the known solutions of the auxiliary equation. Both ingredients bring specific aspects and could generate intensive analysis and discussions. The NPDE solutions strongly depend on how these ingredients are chosen. We will see that the functional expansion method supposes a very specific (double) balancing procedure that distinguishes it from the other approaches. The paper is structured in the following sections: after these general ideas, the functional expansion method will be briefly reviewed in the next section. A second order auxiliary equation and a specific class of NPDEs will be considered, which is quite a general class of equations, including models such as Korteweg de Vries, nonlinear Schrödinger, Klein–Gordon, etc. As already mentioned, to illustrate how the functional expansion works, a specific case belonging to the mentioned class of equations, namely the DBM equation, will be considered in the third section of the paper. We obtain traveling wave solutions for the model and we check whether they are new or can be reduced to already-known solutions for the same model. This checking is very important [34], and we will comment on it at the end of the paper. The Functional Expansion Method Let us consider that to solve (1) we transform it into a NODE of the form in (3), using the wave variable (2).
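As a quick illustration of this reduction step, the short sketch below (our own addition, not part of the original paper) applies the substitution u(x, t) = U(ξ), ξ = x − Vt, with SymPy; the KdV-type equation and all symbol names are illustrative choices only.

```python
# Minimal sketch of step (i): reduce a PDE to a NODE with the wave variable xi = x - V*t.
# The KdV example and the symbol names are illustrative choices, not taken from the paper.
import sympy as sp

xi, V = sp.symbols('xi V')
U = sp.Function('U')(xi)

# Under u(x, t) = U(xi) with xi = x - V*t:  d/dt -> -V d/dxi  and  d/dx -> d/dxi.
u_t = -V * U.diff(xi)
u_x = U.diff(xi)
u_xxx = U.diff(xi, 3)

# KdV, u_t + 6*u*u_x + u_xxx = 0, becomes an ODE in xi only:
ode = sp.Eq(u_t + 6*U*u_x + u_xxx, 0)
sp.pprint(ode)   # -V*U' + 6*U*U' + U''' = 0
# Integrating once (with the integration constant set to zero) gives the
# reduced NODE -V*U + 3*U**2 + U'' = 0, of the general class discussed below.
```

The same two lines of chain-rule substitutions implement the reduction for any equation of the form (1); only the expression passed to `sp.Eq` changes.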
We are looking for the solutions of (3) that can be expressed as combinations of solutions G(ξ) of an auxiliary equation of the general form (8). Depending on the differentiability order of (8), the most general form for the solution of (3) is given by (9); in particular, the previous relation can be considered in the form (10). Here the P_i(G) are 2m + 1 functionals depending on G(ξ) that have to be determined. H(G, G′, G″, · · ·) can be a very general expression containing G(ξ) and its derivatives. Depending on the form of P and H, one can generate very complex solutions. The choices covering almost all the approaches currently used in the literature are those with H depending on G and G′ only. More strictly, H is usually considered as a formal series expansion, at most linear in the two variables, as in (11). The generalized Kudryashov method corresponds to the case h_0 ≠ 0, h_1 ≠ 0, h_2 = 0. There are other choices, too. For example, if we consider the opposite situation, h_0 = h_1 = 0 and h_2 ≠ 0, we obtain from (10) an expression of the form (12). The generalized and improved G′/G method [30,31] corresponds to (12) with specific choices of the functionals P_i. The approach from [35] corresponds to (12) with P_i = a_i G^{−i} + b_{i+1} G^{−i+1}, while the (w/g) method is recovered for P = 1/G and H = w(G), with an adequate choice for the auxiliary equation. The representation used in [36] is also included in (9), but it does not accept the condensed form (12); it imposes an H(G, G′) of a particular form. The functional expansion method deals with solutions of the form (12), where the P_i(G) are arbitrary functionals that have to be determined. This is performed by applying a balancing procedure for determining m, followed by setting to zero the coefficients of the different powers of G′. At the end, we obtain a system of NODEs in the functionals P_i(G) and their derivatives, Ṗ_i ≡ dP_i/dG, P̈_i ≡ d²P_i/dG², etc. If this system can be directly solved, we obtain very general solutions of the form (12). However, it seems that direct solving is not always straightforward, so we need to resort to particular solutions. Such a way, as proposed in [32], consists of looking for functionals P_i(G) that can be represented as rational expressions with a polynomial numerator N_i(G) and denominator D_i(G), as in (15). This choice, with the functionals {P_i, i = 0, 1, · · ·, m} as ratios of polynomials, gives an answer to the previously formulated question on how to generalize the Kudryashov method to second order auxiliary equations, with (15) being quite similar to (4), Kudryashov's choice. In (15), n(N_i) and n(D_i) are the degrees of N_i(G) and D_i(G), respectively. The parameters {π_iα, ω_iβ} are constants that have to be determined in order to be able to write down the effective solutions u(ξ). To determine these degrees, we need, as we will see, a supplementary balancing procedure, taking into account the "degree" attached to P_i(G), defined in (16) as n(P_i) ≡ n(N_i) − n(D_i). In principle, these degrees can be positive, negative or zero but, considering this supplementary balancing, we will obtain negative values only. The Balancing Procedure An important step in applying the functional expansion method is related, as in almost all expansion methods, to the balancing procedures, which allow the limits of the expansions to be found. As we already mentioned, the functional expansion method supposes two different expansions: one in terms of the various powers of the derivative G′(ξ), and one in the chosen form of the functionals P_i(G).
This fact automatically leads to two balancing procedures: the first one allows the maximal value of m in (12) to be found, while the second one leads to the possible forms of the functionals P_i(G) and allows their degrees n(P_i), as defined by (16), to be determined. When the n(P_i) are known, n(N_i) and n(D_i) can be fixed, once the representation (15) is considered. Speaking about balancing, an important issue to also be considered is the mathematical form of the equation to be solved. To illustrate how effectively all these balancing procedures work, let us consider that the ODE (3) to be solved has the general form (17). This class of NODEs covers many important nonlinear 2D equations. One of the examples, intensively tackled in this work, is the Dodd–Bullough–Mikhailov equation; in this case the attached NODE has the form (18). Other important equations lead to NODEs with B(u) = 0; such equations include the KdV equation, the cubic nonlinear Schrödinger equation, the nonlinear Klein–Gordon equation and the ZKBBM equation. The previously mentioned equations could have more general nonlinear terms. For example, in the Klein–Gordon equation considered in quantum field theory, other nonlinearities can appear [37][38][39][40]. To keep the discussion as general as possible, we consider that in (17) A(u), B(u) and E(u) are polynomials of the form (23). We start by discussing the last aspect mentioned: a balancing procedure imposed by the form of the equation to be solved. We suppose the functions in (23) are known, that is, the degrees n_A, n_B and n_E of each expansion are known. By introducing (12) in (17), a system of nonlinear ordinary differential equations for the functionals {P_i(G), i = 0, 1, · · ·, m} is generated by equating the coefficients of the terms with the same powers of G′ to zero. This is called the determining system and is used for finding the degrees attached to the functionals, following (16). In the first step, we determine which values the summation limit m appearing in (12) can take, as a function of the parameters n_A, n_B and n_E. They can be obtained through a first balancing procedure between the term with the highest derivative and the terms with the highest order of nonlinearity. The maximal order of derivation in G′ is generated by the first term of (17); its leading contribution is given by (24). Depending on the degrees of the polynomials B(u) and E(u), the terms that generate the highest nonlinearity are (25) and (26). If, for example, n_E > n_B, the first balancing has to be made between (24) and (26). It allows us to find the summation limit m in (12), leading to m(n_A + 1) + 2 = m·n_E, that is, m = 2/(n_E − n_A − 1). (27) Imposing m ∈ N, we conclude that m can take two integer values only: either m = 2 (for n_E − n_A = 2) or m = 1 (for n_E − n_A = 3). The case n_E − n_A = 1 asks for special consideration, which is not the object of analysis in the current paper; such a situation appears in the Chafee–Infante model. Let us go further by trying to obtain the conditions the functionals {P_i, i ≤ m} have to satisfy, for an already fixed m. For this purpose, the equation in P_m, the functional of maximum degree with m given by (27), contains only two terms and has the form (28). We introduced the notation α = e_{n_E}/a_{n_A}, the ratio of the coefficients appearing in front of the maximal order terms in the expansions from (23). This is a constant with a known value for a given model.
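The first balancing relation quoted in (27) can be checked mechanically; the small sketch below (ours, purely illustrative) solves m(n_A + 1) + 2 = m·n_E for m and recovers the two admissible integer cases mentioned in the text.

```python
# Sketch: solve the first balancing relation m*(nA + 1) + 2 = m*nE for m and keep
# only the integer solutions, reproducing m = 2 (nE - nA = 2) and m = 1 (nE - nA = 3).
import sympy as sp

m, nA, nE = sp.symbols('m n_A n_E', positive=True)
m_expr = sp.solve(sp.Eq(m*(nA + 1) + 2, m*nE), m)[0]
print(m_expr)                                  # 2/(n_E - n_A - 1)

for d in range(2, 6):                          # scan the difference n_E - n_A
    val = m_expr.subs({nA: 1, nE: 1 + d})      # only the difference matters here
    if val.is_integer:
        print(f"n_E - n_A = {d}  ->  m = {val}")
```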
Let us take P_m(G) as a rational polynomial expression of the form (15). The compatibility requirement for (28) allows, by applying a second balancing procedure, the degree n(P_m), attached to P_m by (16), to be effectively determined. More precisely, it leads to a relation between the already known m and the degree n(P_m): n(P_m) = −m. (29) Considering the case described by (27) with m = 2, which will be tackled below, as m is positive, we deduce that n(P_m) has to be negative. Up to now, m and the functional P_m have been determined. The other functionals {P_i, i < m} appearing in the solution (12) can be determined by considering the other equations of the determining system that were generated when (12) was introduced in (17). Similar reasoning leads to negative values for all the functionals' degrees, as expressed by (30). This will be proven in the next section, using one of the equations identified as belonging to (17) as an example. Remark 1. With (15) being quite similar to the choice used in [27], the relation (29) leads in our approach to n(N) < n(D), while [27] shows the opposite: the numerators have higher degrees than the denominators. Remark 2. It is important to note that (29) and (30) fix only the difference between the two degrees n(N_i) and n(D_i). It is clear that there are many solutions that can be considered and that there are many possible choices of the type (15) for the same functional P_i(G). For m = 1, (30) imposes n(P_i) = −1, which can be achieved considering, for example, n(N_i) = 1 and n(D_i) = 2, but also n(N_i) = 2 and n(D_i) = 3. The Example of the Dodd–Bullough–Mikhailov Equation To see explicitly how the previous assertions work, we considered a specific model of 2D NPDEs, leading, when the wave variable (2) is introduced, to a particular case of (23), namely to (18); this is the Dodd–Bullough–Mikhailov (DBM) equation (31). With the change of variable u(x, t) = e^{w(x,t)}, the previous equation takes the form (32). This is an important equation, with many applications in hydrodynamics and quantum field theory. Various types of periodic, hyperbolic or rational solutions, of traveling wave or soliton type, were pointed out using methods such as the tanh method [41], the exp-function method [42] or the G′/G method [43,44]. Here the equation is investigated using the functional expansion method and, as we will see, this approach allows the recovery of all the mentioned solutions and, moreover, enables new solutions, larger than the G′/G ones, to be found. As already mentioned, the first step of the functional expansion method consists of the reduction of (32) to an ODE using the wave variable (2). This reduction leads to (18), which can be rewritten as (33). The Determining System for the Functionals Let us look for solutions of the DBM equation of the form (12). We use the expression in (33) and we apply the first balancing procedure for determining the number of terms to be considered in the expansion. Taking into account the term with a second order derivative and the third order nonlinearity, the balancing leads to m = 2, so the sought DBM solution has the form u(ξ) = P_0(G) + P_1(G) G′ + P_2(G) (G′)². (34) Finding the DBM solutions amounts to finding P_0, P_1, P_2 as functionals depending on the solutions G(ξ) of an auxiliary equation.
We choose here, as an auxiliary equation, a general second order differential equation, Equation (35). From (33)–(35), equating the coefficients of the different powers of G′ to zero, we obtain, by hand but also using Wolfram Mathematica, a determining system of seven ordinary differential equations, (36)–(42), for P_0, P_1, P_2. Remark 3. The last Equation (42) of this determining system can be rewritten in a more transparent form. As shown below, this equation leads, in all the cases, to a constraint showing that the parameters λ and µ from the auxiliary equation cannot take arbitrary values; they are related to each other and to the wave velocity V. The same constraint also appears when G′/G solutions are considered. Remark 4. A first attempt at directly solving the previously obtained system would probably lead to the most general solution accepted by the DBM model. It is quite easy to verify, for example, that the first Equation (36) accepts a solution depending on two constants C_1 and C_2; for C_1 = V and C_2 = 0 it becomes (45). Using (45), Equation (37) can be solved, obtaining the solution for P_1. Unfortunately, this approach of finding DBM solutions by directly solving the determining system (36)–(42) fails when trying to find P_0 from the remaining Equations (38)–(42). It seems that it is not possible, at least for the DBM model, to obtain general P_0, P_1 and P_2 compatible with the whole system. This is why another approach is needed to find solutions, with the functionals chosen as in (15). An important step is determining the limits n(N_i) and n(D_i) in the expansions of the numerator and the denominator of each {P_i, i = 0, 1, 2}. For this purpose the second balancing procedure is used, applied this time to the determining Equations (36)–(42). Taking into account that for DBM we obtained m = 2, from (29) we obtain n(P_2) = −m = −2. Similarly, we conclude that the degrees attached to the functionals {P_i, i = 0, 1, 2} by (16) have to be as follows: n(P_2) ≡ n_2 ≡ n(N_2) − n(D_2) = −2, n(P_1) ≡ n_1 ≡ n(N_1) − n(D_1) = −1, n(P_0) ≡ n_0 ≡ n(N_0) − n(D_0) = 0. (47) The limited constraints (47) offer a large freedom in the choice of the mathematical form of the functionals P_0, P_1 and P_2. Correspondingly, larger classes of DBM solutions may be generated through the functional expansion method. As a proof, in the next subsection we analyze three choices that are more general than those considered in the G′/G method. Remark 5. Let us mention again that, in all three cases, the simplest choice leads the DBM solution (34) to take exactly the form given by the G′/G approaches. It is obvious that the choices (48)–(50) are more general. Examples of DBM Solutions Generated through the Functional Expansion We now show how the proposed method effectively works, in order to see whether solutions more general than those arising in the G′/G method can be generated. The procedure is quite simple and obvious: we introduce the chosen forms of the functionals P_i in the determining system (36)–(42), taking into consideration the explicit form of the auxiliary Equation (35). A set of algebraic equations arises, relating the parameters {ω_ij, π_ij} from the P_i with the wave velocity V from the main Equation (33) and with the parameters {λ, µ, ρ} from the auxiliary equation. All the compatible solutions of this algebraic system lead to solutions for the functionals {P_i, i = 0, 1, 2} and, implicitly, for the DBM Equation.
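The coefficient-collection step that produces the determining system can be automated, as the text notes was done with Wolfram Mathematica. The sketch below is our own illustration of that step in SymPy; since the extracted text does not show Equations (33)–(35) explicitly, it assumes the commonly quoted reduced DBM form V·U·U″ − V(U′)² − U³ − 1 = 0 for (33) and a linear auxiliary equation G″ + λG′ + µG = ρ for (35). It is a sketch under these assumptions, not a reproduction of the authors' computation.

```python
# Sketch of how the determining system can be generated symbolically.
# Assumptions (not shown explicitly in the extracted text): the reduced DBM ODE (33) is taken
# as V*U*U'' - V*(U')**2 - U**3 - 1 = 0, and the auxiliary equation (35) as G'' + lam*G' + mu*G = rho.
import sympy as sp

G, gp = sp.symbols('G Gp')                     # G(xi) and its first derivative G'
lam, mu, rho, V = sp.symbols('lambda mu rho V')
P0, P1, P2 = (sp.Function(f'P{i}')(G) for i in range(3))

Gpp = rho - lam*gp - mu*G                      # G'' expressed through the auxiliary equation

def Dxi(expr):
    """d/dxi via the chain rule: dG/dxi = G', dG'/dxi = G''."""
    return expr.diff(G)*gp + expr.diff(gp)*Gpp

U = P0 + P1*gp + P2*gp**2                      # the ansatz (34)
Up, Upp = Dxi(U), Dxi(Dxi(U))

ode = sp.expand(V*U*Upp - V*Up**2 - U**3 - 1)  # assumed form of (33)

# collect the coefficients of the powers of G'; each must vanish separately
coeffs = sp.collect(ode, gp, evaluate=False)
print(f"{len(coeffs)} determining equations")  # 7 with these assumptions, matching the count above
for power, eq in coeffs.items():
    print(power, ':', sp.simplify(eq))
```

With these assumed forms, the highest power of G′ is six, so the collection yields seven coefficient equations in P_0, P_1, P_2 and their G-derivatives, consistent with the seven-equation determining system mentioned in the text.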
We have to keep in mind that the solution of (35) can be written as in [45], Equation (53). Depending on the relation between λ and µ, we have three different situations: (i) if λ² − 4µ < 0, the solution takes a harmonic form, (54); here, as well as in the forthcoming expressions, we use the notations C_1 = Ae^{−iϕ} and C_2 = Ae^{iϕ}, respectively; (ii) if λ² − 4µ > 0, the solution (53) can be written in a hyperbolic form, (55); (iii) if λ² − 4µ = 0, the solution (53) reduces to a degenerate form. Examples of Solutions in Case I The functionals {P_i, i = 0, 1, 2} have numerators of degree zero: n(N_0) = n(N_1) = n(N_2) = 0. To observe the constraints (47), the denominators of the functionals have to satisfy n(D_0) = 0; n(D_1) = 1; n(D_2) = 2. Choosing simplified notations for the coefficients appearing in (48)–(50), we consider the choice (57). We note that, with the choice in (57), Equation (42) of the determining system leads to the constraint (58). Many DBM solutions can be generated with these choices. Some of them correspond to the already-reported solutions obtained through the G′/G method. For example, one of the solutions accepted by the determining system (36)–(42) corresponds to the case ρ = 0, λ = 0, and it leads to a solution of (33) of the already-known type. On the other hand, even observing the constraint in (58), non-standard solutions of the determining system appear, together with the corresponding solutions of (33). Other solutions, apparently more complex than (62), can also be written down, although, when (58) is imposed, these solutions take the form of (61). Examples of Solutions in Case II When the functionals {P_i, i = 0, 1, 2} have numerators of degree one, n(N_0) = n(N_1) = n(N_2) = 1, the constraint in (47) asks for n(D_0) = 1; n(D_1) = 2; n(D_2) = 3. Again, considering simplified notation for the coefficients in (48)–(50), we choose specific forms and also consider here that ρ ≠ 0. The algebraic equations generated by the determining system (36)–(42) lead to relations among the parameters {a, b, c, d, e, f, h, j, n, p, r, V, λ, µ} and allow the functionals P_0, P_1 and P_2 to be brought to the simpler forms (65)–(67). Compatibility conditions impose, in this case too, a supplementary constraint, Equation (68); similarly to (58), it restricts the possible values of the parameters in the auxiliary equation. As the velocity is a real quantity and, as we have seen, the solutions of the auxiliary equation require λ² − 4µ to be real (positive, negative or zero), we retain from (68) the relation (69). We note that only two situations, (54) and (55), can be fulfilled, considering adequate values, positive or negative, for the velocity V. Correspondingly, we have to consider only these two types of solutions, harmonic and hyperbolic, for the auxiliary equation; there are no realistic velocities leading to λ² − 4µ = 0. Let us also note that for ρ = 0 the functionals (65)–(67) simplify, and the final DBM solution of Equation (33) becomes, in this case, the already-known expression provided by the G′/G approach. Another remark is that, when (68) is observed, the expressions (66) and (67) take the simple forms from (61) mentioned in Case I. Examples of Solutions in Case III Consider now the case when all the functionals P_i have identical quadratic denominators: n(D_0) = n(D_1) = n(D_2) = 2. The relation in (47) then imposes n(N_0) = 2; n(N_1) = 1; n(N_2) = 0.
We may choose P_0, P_1, P_2 as having appropriate rational forms of the type (15). The procedure mentioned before leads, in this case too, to non-standard G′/G solutions. It is interesting that, again, Equation (42) is fulfilled if and only if the wave velocity is related to λ and µ from (35) by a relation of the same form as in the two previous cases. In fact, (69) and Equation (72) now also take simplified forms. Again, because of (69), we should consider only the harmonic and the hyperbolic solutions of the auxiliary Equation (35), that is, (54) and (55), respectively. For example, for negative velocities we have λ² − 4µ > 0, and for positive velocities λ² − 4µ < 0. Below, some comments are given on the DBM solutions obtained through the functional expansion method in the three examples considered before; there are some similarities that do not depend on the chosen form of the functionals P_0, P_1, P_2. Equation (42) generates, in all the cases, the same type of constraint. In all the cases, the general solutions obtained can, at the end, be reduced to the same expressions (75). Using them in (34), they lead to DBM solutions different from those generated by the G′/G approach. Apparently, the non-standard G′/G solutions (75) are related to the factor ρ ≠ 0 in (35). It is important to note, however, that such non-standard solutions appear even if ρ = 0. It is simple to check, for example, the two solutions (76) and (77) of the determining system (36)–(42) corresponding to ρ = 0; these are also different from what is obtained by the G′/G approach. Recovering the Main Types of DBM Solutions The DBM solutions obtained using the functional expansion have the general form (34). We effectively wrote down a few of these solutions, using the expressions (75)–(77) for the functionals {P_i, i = 0, 1, 2}. It is interesting to note that, whatever expression is used, we obtain quite similar DBM solutions. They depend on the wave velocities, and all the parameters appearing in (75)–(77) are captured in two other parameters, denoted below by C_1 and C_2, respectively. As already mentioned, only two cases arise and they correspond to the solutions (55) and (54), respectively, of the auxiliary Equation (35). We show here that these two cases, corresponding to negative and positive wave velocities, respectively, practically allow the recovery of all the important types of DBM solutions. For negative velocities, V = −3/(λ² − 4µ) < 0, the auxiliary equation admits the solution (55), and it leads to the DBM solution (78) (C_1 and C_2 are integration constants). In Figure 1a this solution is represented for V = −1 and for any C_1 = C_2. It has the form of a bright soliton, which is the type of solution already reported in the literature for the DBM model; its specific mathematical form is given by (79). For V = −1 and any C_1 = −C_2, the solution is plotted in Figure 1b. It is the typical dark soliton accepted by DBM and can be rewritten as (80). If we consider bigger values for the two constants C_1 and C_2, the solution profile (78) changes. It takes the form of periodic peaks propagating in time and along the x-axis. The peaks can have unbounded amplitudes and a periodicity depending on the effective values of C_1 and C_2. This behaviour is illustrated by the two specific solutions plotted in Figure 2. In principle, bigger values of the constants C_1, C_2 lead to decreased wave periods and amplitudes.
We noticed that the amplitudes changed from ~10³⁰ in Figure 1b to ~10⁴ in Figure 2. For positive velocities, V = −3/(λ² − 4µ) > 0, the auxiliary equation admits the solution (54). In this case the DBM solution takes the form (81). Considering V = 1, the solution (81) for C_1 = 0, C_2 = 1 is plotted in Figure 3a. It has the form of many propagating periodic waves. For bigger values of the two constants, C_1 = 5, C_2 = 7, the wave amplitudes and frequencies decrease, as can be seen in Figure 3. For C_1 and C_2 with opposite signs, the solution has a similar shape: many waves propagating along the axis. From Figure 4a,b, made for different values of C_1, C_2, we note that there is not a strong dependence on the values of these constants. This is quite natural considering the mathematical form of the solution. Conclusions This paper presented in detail how the functional expansion method proposed in [32] works for a large and important class of equations that can be expressed as in (17). Such nonlinear equations have important applications in various fields, such as optics and plasma physics [46][47][48], for example. Our claim is that this approach for solving NPDEs is more general than almost all the others based on the use of an auxiliary equation. Two such approaches were specially considered: the Kudryashov method, which is suitable when first order auxiliary equations are considered, and the G′/G method, which is the traditional approach when the focus is on second order auxiliary equations. Compared with previously published papers, the novelty this paper brings is related to the explicit presentation of the balancing procedure that, in the case of the functional expansion method, requires a double balance: the first one gives the maximal term in the expansion (12), and the second one is used for determining the functionals P_0, P_1, P_2, as explained in Section 3.1. The application of the functional expansion to the DBM model represents another novelty of this paper. The method is based on expansions of the type (10) or, more exactly, of the type (12). These are in fact the most general possible forms of solutions and they include almost all the choices used in various approaches to the direct finding of exact solutions of nonlinear differential equations. The method presents many advantages, one of them being that it generalizes other approaches to the direct solving of NPDEs. Here we considered the Kudryashov and G′/G methods [28]. The choice in (15) is similar to the one the Kudryashov method considers. Practically, the functional expansion approach extends the Kudryashov approach to second order auxiliary equations, and it allows solutions more general than in the G′/G approach to be obtained. This is another important merit of our method, and it was illustrated for the DBM equation using the non-standard solutions of type (75)–(77). Expressions containing the G′/G ratio now appear in the most natural way, as particular sub-cases of more general solutions. It is true that the non-standard solutions were limited to first order denominators; this introduces a limitation of our method, at least for the DBM model. Another important issue approached in the paper is related to the balancing procedures that are traditionally applied to limit the number of terms considered in the expansions.
It was pointed out that the functional expansion asks for two different balancing procedures: one following the powers of G′ and a second one following the powers of G. The connection between the two, as well as the relation with the form of the equations, was investigated for equations belonging to the class (17). The outcome expressed through (27) is quite important for investigating equations such as KdV, nonlinear Schrödinger, Klein–Gordon, KMN [33] or Benjamin–Bona–Mahony. Limits of the method in investigating special types of equations, such as Chafee–Infante or Fisher, for example, were also mentioned. How these limitations could be overcome using alternative approaches will be tackled in future works. Acknowledgments: Three of the authors (C.N.B., R.C. and R.E.) acknowledge the mobility grants received from the H2020 Project "Dynamics", 2017-RISE-777911, as well as the support they obtained in the frame of the NT-03 Agreement between SEENET-MTP and ICTP. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript: NPDE Nonlinear partial differential equation; NODE Nonlinear ordinary differential equation; ODE Ordinary differential equation; KdV Korteweg de Vries; KMN Kundu–Mukherjee–Naskar; DBM Dodd–Bullough–Mikhailov
7,977.2
2022-04-15T00:00:00.000
[ "Mathematics", "Physics" ]
Positive solutions of three-point boundary value problems for systems of nonlinear second order ordinary differential equations We study a three-point nonlinear boundary value problem with higher-order p-Laplacian. We show that there exist countably many positive solutions by using the fixed point index theorem for operators in a cone. In recent years, because of their wide mathematical and physical backgrounds [7,8], the existence of positive solutions for nonlinear boundary value problems with p-Laplacian has received wide attention. In particular, when p = 2, the existence of positive solutions for nonlinear singular boundary value problems has been obtained (see [5,6,10]); when p = 2 and the nonlinearities are continuous, many existence results for positive solutions have been obtained [1][2][3][4][9] by using comparison results and topological degree theory. Recently, on the existence of positive solutions of multipoint boundary value problems for second-order ordinary differential equations, some authors have obtained existence results (see [5][6][7][8][10]). However, all of the above-mentioned references dealt with the case of a nonlinearity without singularities. For the singular case of multipoint boundary value problems, to our knowledge, no one has studied the existence of positive solutions. Very recently, Kaufmann and Kosmatov [3] established a result on countably many positive solutions for two-point boundary value problems with infinitely many singularities, where a ∈ L^p[0,1], p ≥ 1, and a(t) can have countably many singularities on [0, 1/2). Lian and Ge [4] investigated the boundary value problem (1.4), where φ_p(s) = |s|^{p−2}s, p > 1, α, β, γ, δ ≥ 0, αγ + αδ + γβ > 0, and obtained that the problem has at least one positive solution by using the fixed point theorem of compression and expansion of norm in a cone. Motivated by the results mentioned above, in this paper we extend the results obtained in [4] to the more general three-point boundary value problems (1.1)-(1.2), which are a generalization of problem (1.4). We would stress that the results presented in this paper complement and improve those obtained in [3,4], since we allow the nonlinearity to have infinitely many singularities and the boundary value conditions are more general. We will show that the problems (1.1)-(1.2) have infinitely many solutions if g and f satisfy some suitable conditions. Preliminaries and lemmas We first fix the notation and the norm to be used. Our main tool in this paper is the following fixed point theorem of cone expansion and compression of norm type. Lemma 2.1 [1]. Suppose E is a Banach space, K ⊂ E is a cone, and let the completely continuous operator under consideration map K into itself. Suppose that one of the following two conditions holds. Now we define a mapping in which w(t) is given explicitly and δ is a solution of the equation y_0(x) = y_1(x); see (2.4). Obviously, y_0(x) is a nondecreasing continuous function defined on [0,1] with y_0(0) = 0, and y_1(x) is a nonincreasing continuous function defined on [0,1] with y_1(1) = 0; for the solutions of the equation y_0(x) = y_1(x) we then have (2.6). Obviously, we can obtain the following results, where θ ∈ (0, 1/2) is a given constant, and we can easily derive the corresponding lemmas.
The main result In this section, we present our main results and also provide an example of a family of functions a(t) that satisfies condition (H_3). For convenience, we fix some notation. Three existence theorems are then stated; in each of them, for every natural number k, the nonlinearity f is assumed to satisfy suitable growth conditions, and the conclusion is that the boundary value problem (1.1), (1.2) has infinitely many positive solutions {u_k}, k = 1, 2, · · ·.
862.4
2006-11-27T00:00:00.000
[ "Mathematics" ]
Achieving SDG 3.8.2: Financial Protection Against Catastrophic Health Expenditure in Malaysia Background: The Sustainable Development Goal (SDG) 3.8.2 is one of the two indicators to monitor a country's progress towards universal health coverage. It concerns financial protection against catastrophic spending on health based on the budget share approach. The purpose of this study is twofold: 1) to measure SDG 3.8.2 on the proportion of households with catastrophic health expenditure (CHE), and 2) to determine households at risk of CHE. Methods: A cross-sectional study was conducted using secondary data from the 2015/2016 Household Expenditure Survey. The inclusion criterion was Malaysian households with some health spending in the past 12 months before the date of the survey. The World Health Organization method of calculating CHE was applied, and a threshold of 10% out-of-pocket health spending out of total household expenditure was used to determine CHE. Data were analysed descriptively, and multiple logistic regression was used to determine factors associated with CHE. Results: A total of 13015 households were involved in the study. The proportion of CHE was 2.8%. Four associated factors that were statistically significant were female-headed household (Adjusted OR 1.6; 95% CI 1.25, 2.03; p-value <0.001), household living in a rural area (Adjusted OR 1.29; 95% CI 1.04, 1.61; p-value = 0.022), small household size (Adjusted OR 2.4; 95% CI 1.81, 3.18; p-value <0.001) and head of household aged below 60 years old (Adjusted OR 2.34; 95% CI 1.81, 3.18; p-value <0.001). Conclusions: The low proportion of CHE revealed that Malaysia is on the right track towards achieving SDG 3.8 on universal health coverage status by 2030. However, there is an increasing trend in the proportion of CHE. Households at risk of CHE require financial protection to afford healthcare and safety net measures to prevent them from spiralling further into the vicious cycle of illness and poverty. Background In September 2015, the United Nations General Assembly adopted 17 Sustainable Development Goals (SDG) as a universal call to end poverty, protect the planet and ensure that all people enjoy peace and prosperity.
SDG 3 aims to ensure healthy lives and promote well-being for all, while SDG 3.8 focuses on achieving universal health coverage, which includes financial risk protection, access to quality essential healthcare services and access to safe, effective, quality and affordable essential medicines and vaccines for all. Target 3.8 has two indicators: 3.8.1 on coverage of essential health services and 3.8.2 on the proportion of a country's population with catastrophic spending on health (1,2). This study aims to measure the SDG target 3.8.2 as it is key to achieving universal health coverage status by 2030, which means that all individuals and communities should have access to equitable and quality essential health services without suffering financial hardship. Financial hardship is measured through catastrophic health expenditures (CHE), which occur when out-of-pocket spending for health exceeds a household's ability to pay, thus forcing a household to divert spending away from essential items such as food, shelter and clothing until the spending on these items is reduced below the level indicated by the poverty line (1). It was estimated that half of the world population has limited coverage of vital health services. In 2010, among those who have access to health care, 11.7% (808 million people) of the world population suffered from CHE (2). There are many approaches to measuring CHE. The most common is the World Bank and World Health Organization (WHO) budget share method, which defines CHE as the proportion of the population with large household expenditure (10% and 25%) on health as a share of total household expenditure or income (1). The 10% total household expenditure threshold was selected as it is the preferred indicator used by the World Bank and the WHO in their 2017 Global Monitoring Report on universal health coverage. OOP is defined as payments made by households at the point of receiving any health services. Out-of-pocket payments could be financed out of a household's income, savings or borrowing. They exclude any reimbursement by a third party, such as an employer or individual private insurance. Data on household OOP health spending can be collected from various sources such as the national Household Expenditure Survey (HES), the Household Income and Basic Amenities survey, the Malaysia National Health Account and the National Health and Morbidity Survey (3)(4)(5). Data on OOP health spending alone cannot determine whether a household is suffering from financial hardship. As such, high OOP spending on health by wealthy households is not considered catastrophic, as there is more expendable income to spend. Poorer households, on the other hand, are more vulnerable to sudden, unbudgeted OOP health spending in the occurrence of a sudden health event. They are more likely to prioritise household income on necessities; thus OOP health payments incurred may deter the poor from seeking care (6). In 2010, 808 million people incurred CHE at the 10% threshold of total household consumption or income, and 179 million suffered such payments at the 25% threshold (7). Between 2000 and 2010, the region with the fastest increase in the population facing CHE at the 10% threshold was Africa, followed by Asia (1). The increasing trend of CHE occurring around the world is alarming. Since the year 2000, a rising trend in the proportion of CHE was noted globally, from 9.7% to 11.4% in 2009 and 11.7% in 2010.
Countries from Latin America and the Caribbean had the highest proportion of CHE (14.8%) in 2010, followed by Asia (12.8%). Countries from Oceania and North America had the lowest proportion of CHE compared to other countries in 2010 (3.9% and 4.6% respectively) (7). A study by Van Doorslaer et al. (8), using data obtained from national expenditure surveys of 11 low- to middle-income countries, showed that households in countries like China, Vietnam, Bangladesh and Nepal had to spend more than 60% of their health expenditure from OOP. This resulted in a marked increase in poverty estimates, from an additional 1%-2% of the population in Vietnam to 3%-8% of the population in Bangladesh. In Nigeria, a study using the Harmonized Nigeria Living Standard Survey of 2009/2010 showed that 16.4% of households in Nigeria suffered from CHE at the 10% expenditure threshold. The same study also showed a lower proportion of CHE (13.7%) in Nigeria when using 40% of total non-food expenditure. Other African countries experienced a higher proportion of CHE; for instance, 22.8% in Uganda and 22.4% in Egypt with the 10% threshold of total health expenditure (9). In Malaysia, unpublished data reported that only 1.44% of households experienced CHE at the 10% threshold (10), while the WHO Regional Office for the Western Pacific reported zero per cent CHE at the 25% threshold (11). However, these findings were based on the 2004/2005 HES, and since then no newer study has been done on the measurement of CHE and its associated factors at a national scale. Moreover, due to the nature of these reports, there was an incomplete explanation of the methodology as to how these numbers were derived. Data from nationally representative household surveys containing information on OOP health spending and household expenditures can be used to measure SDG 3.8.2 on the incidence of CHE (1). It is recommended as a methodologically sound measure of the financial burden of healthcare costs on households that will assist health policy-makers to better comprehend the effectiveness of different policy instruments and support evidence-based policy-making (12). In Malaysia, the data on OOP health spending are available from the Household Expenditure Survey (HES), conducted by the Department of Statistics Malaysia. This survey provides statistics and data about general household consumption, including those on health. Findings from this survey were used by government agencies in planning, formulating and monitoring the national development plan. This five-yearly survey was carried out by probability sampling that represented all households in Malaysia and was first conducted in 1957. The purpose of this survey was to collect information on households and the pattern of consumption expenditure on a variety of goods and services. This information was used to update the consumer price index, which is a measure of the average rate of change in prices of a fixed basket of goods and services representing the expenditure pattern of all households in Malaysia. The data are available for interested parties such as economists, academicians and others for research purposes (13). The 2020 HES is currently ongoing. As such, the latest completed survey data available to the public is the 2015/2016 HES and, to date, it has not been utilised to measure CHE. The HES collects household data on OOP spending on health expenditure, but it alone does not reflect the actual financial hardship resulting from care-seeking events. Therefore, a better indicator is the measurement of CHE.
The HES data is currently underutilised; it is not being used to measure CHE. Those who experience CHE are at risk of poverty, affecting their ability to purchase food and other necessities. On the other hand, those who cannot afford care may have to forgo treatment, resulting in a barrier to care. Catastrophic health expenditure will also affect universal health coverage status. It is one of the key indicators used in monitoring target 3.8.2 of the Sustainable Development Goals. Therefore, the findings of the study will help to partly answer whether Malaysia has achieved universal health coverage status. This study is also intended to fill the knowledge gap and provide a better understanding of the factors associated with CHE in Malaysia. By providing evidence-based information, it may be used to develop a health financing policy and a safety net strategy to reduce CHE in Malaysia. Study setting Malaysia is an upper-middle-income country located in South-East Asia. With a population of 32.6 million, the total expenditure on health as a percentage of GDP in 2017 was 4.24% (5). Following the Beveridge model, health financing in Malaysia is primarily from general taxation; there is no social or national health insurance. Public health care is provided at a highly subsidised rate. Maternal and child care and infectious disease services are provided free for all Malaysians. Government employees, the disabled and the disadvantaged are exempted from paying all treatment-related charges. Meanwhile, private health care contributes 48.9% of the total expenditure on health and is financed on a non-subsidised fee basis through out-of-pocket payments or voluntary private health insurance (14). Study design and study sample The study design was a cross-sectional study that used secondary data from the 2015/2016 HES. We included all households recruited in the 2015/2016 HES with some health spending (greater than MYR 0; Malaysian Ringgit). The sample size was calculated using single- and two-proportion formulas via the PS software. The highest sample size required for the study was 9156 households. However, no sampling method was applied; all households which fulfilled the study criterion were included in the study. Study data The study used data from the 2015/2016 HES. Data collection The 2015/2016 HES data was provided by its custodian, Data Mikro, School of Mathematical Sciences, Universiti Sains Malaysia. Permission to use the data was obtained prior to the data extraction. The raw data was emailed to the researcher in separate Microsoft Excel files, due to its size, in February 2019. Data extraction was performed using a proforma. The data acquired include: households' characteristics (age, gender, ethnicity, household size, household members' age, household income, location, education, health insurance status and marital status), all out-of-pocket spending on healthcare, and all household expenditures for consumption (personal goods and services including tax, food and house rental fees) and non-consumption items (social security, charity, income tax, summons and alimony payments). Data analysis Data were screened and imputed in Microsoft Excel spreadsheets. They were then combined and exported to IBM SPSS version 24 software for analysis. Descriptive statistics were used to summarise the socio-demographic characteristics of households. Numerical data were presented as mean (SD) or median (IQR) based on their normality distribution. Categorical data were presented as frequency (percentage).
Following the WHO approach, CHE is defined as OOP health expenditure exceeding 10% of total household expenditure (that is, OOP/HH exp > 0.1). Here OOP refers to direct payments made by individuals to health care providers at the time of service use, and HH exp to total household expenditure on consumption (personal goods and services including tax, food and house rental fees) and non-consumption items (social security, charity, income tax, summons and alimony payments). The dependent variable was CHE (yes/no), whilst the independent variables were age of head of household, ethnic group, income classification, household size, level of education, location, gender of head of household, health insurance status and marital status. Simple and multiple logistic regression were applied to determine the associated factors. To obtain the preliminary final model, variables were selected using Forward Likelihood selection and Backward Likelihood elimination methods. Multicollinearity and interactions between significant variables were checked using the correlation table, standard errors and variance inflation factors. All possible two-way interactions between independent variables were checked. Subsequently, the fitness of the model was checked using the Hosmer-Lemeshow test, the classification table and the receiver operating characteristic (ROC) curve. The final model of the multiple logistic regression is presented as adjusted OR with 95% confidence interval, Wald statistic and corresponding p value. A variable with a p value less than 0.05 was considered to be a significantly associated factor for CHE. Results A total of 13015 households from the 2015/2016 HES were included in this study. Around 10.5% of households were excluded from the study as they did not report any health expenditure in the survey. The mean age of the household heads was 47 years old (SD 13.57) and the mean household size was 4.14 (SD 2.05). The majority of household heads were aged below 60 years old (82%), male (83%), native Bumiputera (66.6%), still married (77.4%), lived in urban areas (69.3%) and had a low education level (75.8%). The OOP data were skewed; means were presented together with medians to enable comparison with national averages in discussing the study findings. The numbers may seem insignificant due to the conversion of Malaysian Ringgit (MYR) to US dollars (conversion rate MYR 1: USD 0.24). From this study, the median monthly household income was USD1140.41 (IQR1139.87). Around 46% came from low-income households, and only 6.2% of households purchased private health insurance. This study also found that the median monthly household expenditure was USD756.52 (IQR588.06) and the median monthly OOP health expenditure was USD5.84 (IQR22.16). In Table 1, the median OOP health expenditures across household size classification, gender and marital status were generally about the same. However, households headed by individuals aged more than 60 years old, of Chinese ethnicity, highly educated, residing in urban areas, in the high-income group and with private health insurance had relatively higher median OOP health spending than their counterparts. Chinese households spent the highest OOP health expenditure compared to other ethnic groups, with a median of USD10.17 (IQR24.21). There was a considerable gap in the median OOP health expenditure between income groups.
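To make the indicator and the regression step concrete, the following sketch (ours; the file name and column names are hypothetical placeholders, not the actual HES variable names) flags CHE at the 10% budget-share threshold and fits a logistic model with covariates of the kind listed above.

```python
# Sketch with hypothetical column names: flag CHE at the 10% budget-share threshold
# and fit a logistic regression, broadly mirroring the analysis described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hes_households.csv")             # hypothetical extract of HES microdata

# inclusion criterion: households with some health spending
df = df[df["oop_health"] > 0].copy()

# SDG 3.8.2 budget-share indicator: OOP share of total household expenditure above 10%
df["che"] = (df["oop_health"] / df["total_expenditure"] > 0.10).astype(int)
print("Proportion with CHE:", df["che"].mean())

# logistic regression on a subset of the covariates used in the study
model = smf.logit(
    "che ~ C(female_head) + C(rural) + C(small_household) + C(head_under_60)",
    data=df,
).fit()

# adjusted odds ratios with 95% confidence intervals
odds = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
odds.columns = ["adjusted OR", "2.5%", "97.5%"]
print(odds)
```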
The wealthiest households spent the most on health, with a median of USD13.79 (IQR38.67), compared to the lowest income group, with a median of USD3.51 (IQR7.98). In the current study, a threshold of a 10% OOP share of health expenditure was used to define CHE. However, proportions of CHE calculated based on various thresholds of expenditure and income were presented to enable comparison with other studies in the literature. Factors associated with CHE Multiple logistic regression analysis showed that gender and age of household head, household location and household size were significantly associated with CHE. There was no multicollinearity between the variables, as evidenced by the low correlation between the variables in the correlation matrix, the small standard errors and VIF values of less than ten. All possible two-way interactions between the variables were checked, and no interaction was found. The model proved to be fit, as the Hosmer-Lemeshow test was not significant (p-value 0.82) and the classification table showed 97.2%. The area under the ROC curve was 68.8% (95% CI 0.659, 0.717; p-value < 0.001). However, the proportion of CHE in this study was far lower than the global estimation of 11.7% in the 2010 World Bank report. The low proportion of CHE may be due to the tax-based health financing system in Malaysia, which provides subsidised healthcare in all public healthcare institutions (5). Extremely low proportions (0.00%-2.99%) of CHE were observed in countries with near-identical health financing models such as the United Kingdom, Italy, New Zealand, Denmark, Sweden, Brunei and Saudi Arabia (1). Our study findings reaffirm the fact that increasing the share of total health expenditure that is prepaid through taxes will lead to reduced catastrophic payment incidence (7). On the other hand, our findings may be an underestimation of the true number of CHE cases. According to Cylus et al., the budget share method overestimates financial hardship among rich households and underestimates hardship among poor households (16). However, this finding was generated among the European population, which may not be relevant to the local context. A low incidence of CHE may also indicate that people are not getting (and not paying for) needed care (7), or reflect HES sampling of healthier populations who may not require regular medical care. Previous studies in Malaysia found the proportion of CHE to be as high as 47.8% among households affected by colorectal cancer (17), 33.0% for acute gastroenteritis requiring hospitalization (6) and 16.0% among patients with ischaemic heart diseases (18). Discussing OOP health spending in the 2015/2016 HES is crucial. High OOP expenditures were often misjudged as an undesirable outcome. The Malaysia National Health Accounts reported that 38% of healthcare expenditure in Malaysia is paid OOP; however, if one were to understand the concept of CHE, high OOP does not necessarily translate into catastrophic spending. For example, we discovered that households headed by individuals < 60 years old were 2.34 times more likely to suffer CHE, whereas the median OOP health spending among older households was, in fact, higher. Furthermore, these older households were smaller in size (79.7% have < 5 household members), yet they spent more on their healthcare expenditures. This may be due to these older households requiring more money for treatment and rehabilitation of chronic diseases, which are more prevalent among the elderly.
A similar observation was found in a study among Malaysians which reported higher mean expenditure on health among those aged > 50 years old compared to those < 35 years old (USD15.98 vs USD9.25). Young Malaysians were found to spend a more significant portion of their total spending on holidays, clothing and entertainment, which resulted in a smaller balance of their spending on health (19). The OOP health spending among urban households was higher compared to rural households in this study. This finding was in line with the 2015 National Health and Morbidity Survey report which showed markedly higher OOP health spending among urban populations. This may be due to the high utilisation of public health facilities in rural areas. However, this finding was also due to the differences in income and higher spending power among the urbanites. This study also found that 44.3% of the rural households were from the low-income group compared with only 36.8% in urban populations. Furthermore, higher concentrations of private healthcare facilities in urban areas, due to the high demand, also lead to higher OOP health spending among the urbanites. This is also evidenced by the National Health and Morbidity Survey 2015, which reported higher utilisation of private healthcare facilities among urban populations (30.4%; 95% CI, 28.2, 32.7) (4). Highly educated households were found to have more OOP health expenditure, and the same finding was also reported in the National Health and Morbidity Survey 2015. The difference in health spending was a direct reflection of the higher income among most households with high education status. The OOP health spending among married and unmarried heads of households was approximately the same; however, among divorced or widowed heads of household it was found to be slightly lower. In the National Health and Morbidity Survey 2015, high OOP health spending was recorded among married households. A recent study showed that married households usually have more frequent visits to healthcare facilities compared to unmarried households (20). As for widowed and divorced households, the low OOP health spending can be due to the smaller number of breadwinners in the household, which contributed to the low income and expenditure. Households categorised in the high income group in this study were spending more than the others. This finding is identical to other studies which showed higher OOP health spending among wealthier households (21)(22)(23). Such a common scenario is due to the high spending power within the wealthier households, which is the main reason OOP spending is concentrated among the more affluent population. Such high OOP spending will not incur CHE, since their total spending is in concurrence with their income. Most of the affluent households utilise private health facilities rather than government health facilities, which leads to an increase in OOP expenditure on health (21). Notably, households with health insurance had higher OOP spending compared to households without health insurance. Further analysis showed that 46.7% of households with no health insurance were from the low income group, which explained the low OOP health spending among these households. The multivariable analysis identified four factors that were significantly associated with CHE in this study, namely head of household age and gender, location and household size. Female-led households were more likely to suffer from CHE compared to their male counterparts.
Even though the OOP health expenditure was about the same between male- and female-led households, the median monthly expenditure of female-led households (USD639.16; IQR 553.73) was less than that of male-led households (USD774.48; IQR 589.43). The low expenditure among the female-led households may be due to the lower income received compared to male-led households. A previous study done in Malaysia showed that a 1% increase in income would increase total expenditure by 0.5% (19). This finding was also evidenced in the Salaries and Wages Survey Report in 2016. The report showed that the median monthly salary for employed Malaysian females was USD405.68, which is lower than that for employed Malaysian males, with a median of USD414.35 (24). Findings from a study in Portugal also showed that male-led households were protective against CHE in 2000. However, that study also highlighted a shift in gender preference towards CHE in 2005, whereby male-led households were more prone to incur CHE. It is important to note that this study used 40% OOP health expenditure over non-food expenditure as an indication of CHE, which meant the proportion of CHE was, in fact, higher (25). In Malaysia, the median salary in urban areas was higher compared with that of those employed in rural areas. In 2016, the median monthly salary among urbanites was USD481.52 compared to USD302.40 in rural areas. This explained the finding in this study where rural households were more likely to develop CHE. In this study, even though the urban population had higher OOP, they were less likely to develop CHE than rural households. Furthermore, health insurance coverage may also play a role, since 82.4% of those with health insurance were among the urban population, which would give them more financial protection. Rural households are also more prone to CHE since they usually have to travel further to seek healthcare (mean distance 13.26 km) compared to urban households (mean distance 9.18 km) (4). Households of 1 to 2 persons were found to be more at risk of developing CHE compared to households with 5 or more members. This was in line with findings in other countries such as Vietnam and Peru (23,26). Two postulations may support this finding. First, in larger households, family members may provide better care to each other and encourage a healthier lifestyle, thus reducing the utilisation of health services. Second, larger households, especially those with working members, can draw on more resources and will share the financial burden during illness episodes and at times of need (23,25,27,28). Another factor that was found to be associated with CHE in this study was the age of the head of household. Households with a younger head of household (< 60 years old) were more likely to develop CHE compared to those with older household heads. A similar finding was also noted in Peru, where heads of household aged between 18-24 years old were more likely to incur CHE compared to heads of household aged 45-54 years old (26). Although the mean OOP spending was higher among heads of household aged more than 60 years old, the multivariable analysis showed that heads of household aged less than 60 years old were more prone to develop CHE. This was due to the significantly lower total expenditure among the younger heads of household. Also, 75.2% of the low income group were from households with younger heads, which could have contributed to the occurrence of CHE. Study limitations Although this study used a large national household survey, there were limitations that can be improved in future studies.
This study uses secondary data gathered from the 2015/2016 HES published by the Department of Statistics, Malaysia. The variables that were collected in this survey were limited, and some of the important factors associated with CHE were not available from the survey. For example, factors identified in the literature review, such as type of illness and number of household members with disabilities, were not available in the survey. Since this is a household population survey, respondents may be exposed to recall bias during the survey, which may affect the accuracy and the quality of the data obtained. Despite these limitations, however, a household survey remains the best source of data in determining CHE of a country (1). In addition, respondents may misreport their monthly income. This was apparent for those who worked in informal sectors without documented payslips. More accurate data on income could be obtained through income tax records, but these are not made available to researchers. Since this is a cross-sectional study in which both independent and dependent variables were collected simultaneously, cause-and-effect relationships cannot be determined. Conclusions The SDG indicator 3.8.2, the proportion of households with CHE, was 2.8% among Malaysian households. Four factors were significantly associated with CHE, i.e. age and gender of household heads, household size and location. Although the majority of Malaysians were financially protected from CHE, the number of households suffering from this situation is on an increasing trend. Undeniably, vulnerable groups among the Malaysian population who suffered from CHE should not be overlooked, as they require assistance to prevent further debt and the vicious cycle of illness and poverty. To obtain a more accurate analysis of financial risk protection and health coverage, a more specific survey is required to identify vulnerable groups who suffer financial catastrophe from a specific disease. Since the proportion of CHE among Malaysian households is low in this study, the current health financing should be maintained. However, the increasing cost of healthcare makes it challenging for healthcare in Malaysia to remain financially sustainable. Malaysians, rich or poor, take advantage of the inexpensive healthcare services in public hospitals. Hospitals become crowded and health resources are stretched thin, thus jeopardising future access to care for the most vulnerable groups. A health financing policy in Malaysia is required to avoid abuse and to ensure these financial assistances are provided only to genuine CHE cases determined by the
6,665.4
2020-08-13T00:00:00.000
[ "Economics", "Medicine" ]
Fabrication and characterization of 3D-printed composite scaffolds of coral-derived hydroxyapatite nanoparticles/polycaprolactone/gelatin carrying doxorubicin for bone tissue engineering In this study, nanocomposite scaffolds of hydroxyapatite (HA)/polycaprolactone (PCL)/gelatin (Gel) with varying amounts of HA (42–52 wt. %), PCL (42–52 wt. %), and Gel (6 wt. %) were 3D printed. Subsequently, a scaffold with optimal mechanical properties was utilized as a carrier for doxorubicin (DOX) in the treatment of bone cancer. For this purpose, HA nanoparticles were first synthesized by the hydrothermal conversion of Acropora coral and characterized by using different techniques. Also, a compression test was performed to investigate the mechanical properties of the fabricated scaffolds. The mineralization of the optimal scaffold was determined by immersing it in simulated body fluid (SBF) solution for 28 days, and the biocompatibility was investigated by seeding MG-63 osteoblast-like cells on it after 1–7 days. The obtained results showed that the average size of the synthesized HA particles was about 80 nm. The compressive modulus and strength of the scaffold with 47 wt. % HA was reported to be 0.29 GPa and 9.9 MPa, respectively, which was in the range of trabecular bones. In addition, the scaffold surface was entirely coated with an apatite layer after 28 days of soaking in SBF. Also, the efficiency and loading percentage of DOX were obtained as 30.8 and 1.6%, respectively. The drug release behavior was stable for 14 days. Cytotoxicity and adhesion evaluations showed that the fabricated scaffold had no negative effects on the viability of MG-63 cells and led to their proliferation during the investigated period. From these results, it can be concluded that the HA/PCL/Gel scaffold prepared in this study, in addition to its drug release capability, has good bioactivity, mechanical properties, and biocompatibility, and can be considered a suitable option for bone tumor treatment. Graphical Abstract Bone tissue engineering (BTE) has emerged as a promising alternative to traditional bone grafts, driven by its potential for an endless supply and the absence of disease transmission. Despite these advantages, BTE procedures have not progressed into clinical practice, primarily due to various constraints. The aim is to harness the regenerative capabilities of local or implanted stem/progenitor cell populations by integrating biodegradable and osteoconductive three-dimensional (3D) scaffolds with controlled delivery of osteoinductive molecules [1,2]. Addressing large and serious bone defects and promoting bone regeneration stands as a crucial concern for orthopedic surgeons, as highlighted by Alidadi et al. [3]. In response, numerous researchers have undertaken extensive efforts to identify methodologies with minimal side effects [4,5]. The utilization of 3D-printing technology in BTE has proven effective, thanks to its rapid, precise, and controlled production process. While conventional bioceramic scaffolds are commonly employed in BTE, the development of bioceramic scaffolds with a hierarchical structure, comprising macro-, micro-, and nanomaterials, has gained prominence. This innovation aims to provide a 3D environment conducive to cell adhesion and proliferation [6][7][8][9]. 
Extensive research has focused on bioceramics for numerous years, driven by their similarity to the inorganic composition of bone. Bioceramics possess notable characteristics such as high stiffness, hydrophobicity, bioactivity, biocompatibility, osteoconductivity, and potential osteoinductivity, all contributing to their ability to promote bone regeneration by modifying in vivo conditions. Ceramics offer a significant advantage over other implant materials due to their varied biocompatibility: some are inert under biological conditions, while others elicit a regulated response in the body. Bioactive ceramics, including hydroxyapatite (HA), glass ceramics, and bioactive glasses, interact with biological fluids through cellular activity, bridging the gap between hard and soft tissues [10,11]. Frequently employed as metal-support coatings, HA, with its chemical composition (Ca10(PO4)6(OH)2) identical to the main constituents of bone, has been extensively studied in BTE. It has demonstrated beneficial effects on osteoblast adhesion and proliferation, being a key component of the mineral phases in teeth and bones. Notably osteoconductive, biocompatible, and non-toxic, HA, along with other calcium phosphate compounds, holds a special place in biomedical and dental materials [12][13][14]. The increasing demand for novel, intricate, and multifunctional materials has brought attention to natural composite materials that underwent substantial modification through extended evolution, selection pressures, and adaptation processes. Among these, marine biological materials stand out as vital sources of inspiration for biomimicry and raw materials with applications spanning technology and biomedicine [15]. Studies on natural engineering structures, or biomimicry as defined by Vincent et al. [16], have offered diverse methods for creating distinctive scaffolds applicable in regenerative medicine, showing promise to significantly enhance conventional human-made biomaterials. Corals, with their osteoconductivity, biocompatibility, and favorable dissolution qualities, emerge as effective candidates for scaffold development [17]. Utilizing coral as the raw material results in a porous implant structure, facilitating the proliferation and invasion of hard and soft tissues. This structural characteristic fosters the formation of robust mechanical and chemical bonds. Furthermore, leveraging coral as the starting material offers the advantage of shape customization before conversion into HA [18]. 
Only a select few coral genera, notably Porites, Goniopora, and Acropora, exhibit morphological and structural features almost identical to bone. These corals possess the potential to serve as temporary bone replacements due to their striking resemblance to bone structure. Characterized by extensive networks of channels and pores, these corals display a geometrical arrangement of connected spaces in two main directions, mirroring the pore configurations of decellularized bone. This structural similarity facilitates the permeation of new blood vessels and, ultimately, the development of endogenous bone. In particular, Acropora corals stand out for their specialization in resisting strong mechanical loads, thanks to their compact structure and low porosity. Additionally, their irregular yet well-organized pores contribute to enhanced permeability [19,20]. Various HA-based degradable polymer composites, such as HA/collagen, HA/gelatin (Gel), and HA/polycaprolactone (PCL), have been employed in bone tissue engineering, exhibiting favorable mechanical properties and remarkable bioproperties. For instance, HA/collagen composites have shown improvements in the adhesion, proliferation, and differentiation of seeded stem cells. In a study by Hamlekhan et al. [21], PCL/HA/Gel composite scaffolds were investigated to enhance mechanical effectiveness, leading to increased stress, stiffness, and compressive modulus. The incorporation of HA in the HA/Gel composite resulted in heightened compressive modulus and toughness, reaching approximately 0.18 GPa, remarkably similar to that of natural spongy bone. Similarly, in the HA/PCL scaffold, an increase in HA content from 0 to 30 wt. % correlated with a rise in compressive modulus from 0.3 to 0.5 GPa, marking a 2.4-fold increase compared to PCL alone [22]. The slow degradation rate, poor mechanical strength, and low fracture toughness of pure HA can hinder complete bone regeneration and increase the risk of infection. To address these limitations, composite materials with desirable features, including porosity, mechanical strength, thermal properties, regulated degradation rates, and the incorporation of bioactive substances, are essential for enhanced repair and regeneration in bone tissue engineering. In addition, porous HA/PCL scaffolds have demonstrated a more efficient promotion of osteoblast proliferation and viability compared to pure PCL scaffolds. Gómez-Lizárraga et al. [23] conducted a study comparing 3D-printed scaffolds made of pure PCL, PCL/synthetic-HA, and PCL/bio-HA derived from bovine bones. The findings revealed that PCL/bio-HA scaffolds exhibit enhanced bioactivity over pure PCL, fostering improved cell adhesion, activation, and proliferation. These composite materials capitalize on the advantages offered by a variety of biodegradable materials, finding extensive applications in BTE. Instead of relying solely on either natural polymers (e.g., collagen, gelatin, alginate, hyaluronic acid, and chitosan) or synthetic polymers (e.g., poly(lactic-co-glycolic acid) (PLGA), polylactic acid (PLA), and PCL), and bioceramics like HA, composite forms have gained widespread use in BTE [24][25][26]. This study aimed to evaluate the synthesis of HA nanoparticles derived from a biomimetic source. Subsequently, the effectiveness of HA/PCL/Gel scaffolding, optimized for HA percentage, was investigated in terms of bioactivity, biodegradability, viability, and the release profile of doxorubicin (DOX). 
In previous studies, researchers have explored the utilization of HA, PCL, and Gel in scaffold fabrication, acknowledging their merits in bone regeneration and drug delivery systems. However, a comprehensive understanding of how varying compositions of these components impact the mechanical properties and drug-release capabilities of 3D-printed nanocomposite scaffolds remains an area requiring further exploration. Furthermore, while the bioactivity of scaffolds in simulated body fluid (SBF) has been studied, the specific implications of mineralization on the scaffold's potential for bone tumor treatment have yet to be fully elucidated. This study aims to build upon the foundation laid by previous research by systematically investigating the influence of different ratios of HA, PCL, and Gel on the mechanical properties and drug release profiles of 3D-printed nanocomposite scaffolds. Additionally, we delve into the mineralization process of the optimal scaffold in SBF, unraveling its implications for bioactivity and its potential in the context of bone tumor treatment. Through these endeavors, we strive to contribute valuable insights to the evolving landscape of bone tissue engineering, identifying pathways for innovation and addressing unmet needs in the field. Materials Acropora coral was sourced from Kish Island, Iran. PCL with a molecular weight of 80 kDa, phosphoric acid, ammonia solution, and Gel powder were procured from Sigma-Aldrich, Germany. Phosphate-buffered saline (PBS) and simulated body fluid (SBF) were obtained from Topal Advance Materials, Iran. Synthesis of coral-derived HA The method proposed by Roy and Linnehan [27] was employed in this study to convert coral powder into HA through a hydrothermal process. Initially, coral powder was prepared by mechanically milling crushed and washed corals for 5 h in a planetary ball mill, utilizing zirconia (ZrO2) balls and cups with a ball-to-powder ratio of 30:1. This milling process was repeated until the desired amount of powder was obtained. Subsequently, 2 g of milled coral powder was combined with 1.6 mL of H3PO4 and distilled water (100 mL) at a Ca/P ratio of 1.67. The pH was adjusted to 7 by introducing an NH4OH solution. After thorough stirring, the mixture was tightly sealed in a 200 mL Teflon-lined stainless-steel autoclave and kept in an oven at 180 °C for 46 h. The resulting product was filtered, washed with distilled water, and then dried in an oven at 80 °C for one day. Fabrication of 3D scaffolds Three ink compositions with varying ratios of the constituent materials were prepared (see Table 1). To mitigate the risk of agglomeration and potential nozzle clogging during printing, the synthesized powder underwent a sieving process using a 150-mesh-size sieve. Subsequently, PCL was dissolved in chloroform within a sealed container at 40 °C for 2 h. The dissolved PCL was then blended with Gel and the sieved HA. The Gel content remained constant across all three ink compositions. Additionally, a pure PCL scaffold was manufactured for comparative purposes. 
A NIKA 3D printer from Adli Regeneration Medicine Company in Isfahan, Iran, was employed for the printing process, with the Repetier Host Software used to control the printer. The printing geometry, a 2 × 2 cm² cubic-shaped block with a strut spacing of 500 μm, was designed using the Solidworks program ver. 2017 and printed through the Repetier software interface ver. 2.1.6. The Slic3r slicing profile was applied to convert the STL file into G-code, with a designated layer height of 300 μm. The porous 3D constructs were printed layer-by-layer, involving the continuous extrusion of the composite ink for up to 20 layers. The printing ink, a composite of PCL, Gel, and HA, was extruded through a 22 G stainless steel nozzle with an internal diameter of 410 μm. Printing configurations varied among scaffolds depending on the ink composition. For ink 2, where PCL and HA percentages were equal, the extrusion rate was set at 0.01 mm/s, and the line deposition rate was 2 mm/s. In the case of ink 3, where PCL content was lower than HA, potentially causing agglomeration, the extrusion rate was increased to 0.013 mm/s. Additionally, to address quick ink setting due to low PCL, resulting in cracks, the line deposition rate was increased to 6 mm/s. For ink 1, characterized by stickiness and lower viscosity due to a reduced HA amount, the extrusion rate was decreased to 0.009 mm/s, and the line deposition rate was reset to 2 mm/s. Characterization Different techniques were employed to characterize the raw coral, synthesized HA powder, and fabricated scaffolds. Structural analysis of milled Acropora coral and the synthesized HA powder was performed using X-ray diffraction analysis (XRD) on a D8 Advance Bruker instrument with a wavelength of 1.54 Å. The chemical compositions of the raw coral and synthesized HA were determined through X-ray fluorescence spectrometry (XRF) using a Bruker S4 Pioneer instrument. Morphological investigations of the as-received coral and synthesized HA were conducted using field-emission scanning electron microscopy (SEM) on a ZEISS SIGMA VP-500 instrument. SEM was also employed to characterize the apatite layer formed and cell adhesion on the surface of the fabricated scaffold after biological tests. The specific surface area, pore volume, and average pore diameter of the synthesized HA powder were determined through the Brunauer-Emmett-Teller (BET) method, involving the adsorption/desorption of N2 gas at liquid nitrogen temperature (~77 K) using a Series BEL SORP mini II. Mechanical compression tests were performed using a 2 T SANTAM testing machine. Three specimens of approximately 5 × 5 × 5 mm³ were tested for each scaffold group. The compressive modulus for each sample was determined by linearly fitting the elastic part of the stress-strain curve. Biodegradability To assess the in vitro behavior of the scaffolds, a degradation test was employed. Initially, the samples were weighed (W0) before immersion in PBS. Following incubation in PBS at 37 °C for 7, 14, 21, and 28 days, the samples were retrieved and wiped, and the pH of PBS was measured every 24 h. Subsequently, the samples were washed with PBS and placed in a vacuum oven at 37 °C for 12 h. Their weights were then measured (Wt), and Eq. 1 was applied to calculate the weight loss. Additionally, concentrations of calcium and phosphorus ions in the PBS solution were determined using inductively coupled plasma atomic emission spectroscopy (ICP-OES) on an Analytik Jena PQ 9000 instrument. 
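Eq. 1 is cited in the biodegradability protocol above but is not reproduced in this text; gravimetric degradation studies conventionally express weight loss as the relative dry-mass change, which is presumably the intended form (an assumed reconstruction, not the authors' stated equation):

$\text{Weight loss}\,(\%) = \frac{W_0 - W_t}{W_0} \times 100$

where $W_0$ is the dry weight before immersion and $W_t$ the dry weight after the given soaking time in PBS.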
Bioactivity To evaluate the in vitro mineralization activity of the scaffolds, they were immersed in SBF. Samples of approximately 5 × 5 × 3 mm³ were submerged in 10 mL of SBF (a mass-to-volume ratio of about 20 mg/mL) for 28 days at 37 °C. Subsequently, SEM was employed to examine the formation of an apatite layer on the scaffold surface. In order to assess the bioactivity mechanism, the pH of SBF in the presence of the scaffold samples was measured. Drug loading and releasing A serially diluted solution of the drug was prepared from DOX stock (2 mg/mL, Iranian Red Crescent) with concentrations ranging from 10 to 100 µg/mL. The UV absorption of each solution was measured at 480 nm using a Cary 60 UV-Vis spectrophotometer, and a calibration curve was constructed through the linear regression method. For the preparation of the sample for drug loading, the scaffold was cut into small disks (120 mg) and subjected to sterilization under UV light for 1 h. Subsequently, a 4 mL tube containing the sterile scaffold received 3 mL of DOX solution, and the mixture was stored for 24 h in a biosafety cabinet. Loading capacity (LC) and entrapment efficiency (EE) were calculated using Eqs. 2 and 3, respectively. For the in vitro release test, the 3D-printed scaffold loaded with the drug was immersed in 4 mL of PBS solution at 37 °C. The DOX release medium was periodically replaced with fresh PBS at selected time intervals. Subsequently, the UV absorption of each medium was measured using a UV-Vis spectrophotometer at 480 nm. The cumulative concentration of DOX released from the scaffold was then plotted against time. Cell culture The MG-63 cell line obtained from the Royan Institute, Iran, was employed in this study. To cultivate the cell line, Dulbecco's modified Eagle's medium, comprising 10% fetal bovine serum (FBS), 1% Glutamax, 1% penicillin/streptomycin antibiotic, and 1% essential amino acids, served as the culture medium. Before cell seeding, the scaffolds underwent sterilization in 70% ethanol for 1 h, followed by 30 min of UV irradiation on each side. MG-63 cells were then seeded onto the scaffolds with a density of 20,000 cells/mL for subsequent experiments. Cell adhesion To assess the scaffolds' capability for supporting cell attachment, MG-63 cells were cultured on the samples for 24 h. Subsequently, the cells were fixed with 2.5% glutaraldehyde for 2 h at room temperature and dehydrated using increasing concentrations of ethanol (30, 70, and 100%) for 10 min each. The prepared samples were then gold-sputtered and imaged using SEM. Fluorescent staining To visualize the cell cytoskeleton architecture, the nucleus and actin filaments were stained with 4′,6-diamidino-2-phenylindole (DAPI, blue) and TRITC-labeled phalloidin (red), respectively. Following 24 h of scaffold culture, samples were fixed with 4% paraformaldehyde, permeabilized with 0.2% Triton X-100, and subjected to staining with phalloidin and DAPI. Subsequently, imaging was conducted using an Olympus BX51 fluorescence microscope. Cell activity To assess the metabolic activity of cells cultured on the scaffold, the MTS assay was conducted on days 1, 3, 5, and 7. 
Before cell seeding, the 96-well plate was coated with polyhydroxy ethyl methacrylate (p-HEMA) to prevent cell adhesion to the plate's bottom. Following the manufacturer's instructions, MTS/PMS solution (Promega, WI, USA) was substituted with the culture medium at each time point, and formazan production was measured after 3.5 h using a microplate reader (Fluostar Optima, BMG Lab Technologies, Germany) at 492 nm. The obtained absorption values were normalized to the absorption value of the control sample (containing culture medium + MTS without cells), and the resulting number was reported as the net absorption value. Statistical analysis All quantitative outcomes were presented as the mean (n = 3) accompanied by the standard deviation (SD). Characterization of Acropora coral Initially, XRF analysis was employed to determine the elemental composition of the as-received Acropora corals, and the obtained results are presented in Table 2. A comparison with the findings of Guillemin et al. [28] confirmed that the coral exoskeleton primarily consisted of calcium carbonate (CaCO3) with approximately 3 wt.% of other trace elements. After ball-milling, a trace of ZrO2 (~0.8 wt.%), originating from the milling balls and cups, was observed in the chemical composition of the coral powder. It should be noted, however, that ZrO2 milling media are very hard, so the extent of such contamination is expected to be small. Fig. 1 XRD pattern of the as-received coral after ball milling. XRD analysis was employed to determine the phase structures of the raw corals. Figure 1 illustrates the XRD pattern of the milled coral. The predominant phase in the as-received coral powder is aragonite, a high-pressure stable phase of CaCO3, as observed in similar studies by Hansel et al. [29]. Additionally, a minor phase identified as ZrO2, resulting from the ball-milling process, was present in the milled coral powder. Calcite peaks, indicative of a stable phase of CaCO3 at atmospheric pressure, were also observed in the unconverted coral powder. Notably, the element Ca, as reported by Demers et al. [19], plays a role in bone regeneration when corals are implanted in vivo. SEM analysis was conducted to study the morphology of the raw coral pieces. Figure 2, derived from SEM examination of the unconverted coral, reveals a microporous surface with pore sizes ranging from 5 to 50 μm. According to Wu et al. [30], Acropora exhibits numerous irregularly shaped but similarly oriented pores, featuring the largest pore diameter and highest permeability. This characteristic facilitates fluid transport in vitro, contributing to the success of these constructs in tissue engineering applications. Characterization of synthesized HA powder The results of the XRF analysis for the powder synthesized using the hydrothermal method are presented in Table 2. The Ca/P molar ratio, determined by XRF, closely matched the theoretical value of HA [31], with a calculated value of 1.65. Ca and P constituted 95 wt.% of the total mineral content of the natural HA. Following the hydrothermal process, XRD analysis identified the product as a single-phase highly crystalline HA with sharp peaks, as illustrated in Fig. 3. Notably, no peaks corresponding to CaCO3 or CaO were observed, indicating complete conversion. These findings align with a study conducted by Sivakumar et al. [32] on hard coral converted to HA using a hydrothermal process. The morphology of the synthesized HA particles was investigated by SEM, as illustrated in Fig. 
4a. The particles exhibit a predominantly spherical morphology with an average diameter of approximately 80 nm. The micrograph reveals a high degree of agglomeration in the synthesized powder. To confirm the composition of the synthesized HA nanoparticles, energy dispersive spectroscopy (EDS) was employed, as depicted in Fig. 4b. The EDS results indicate that the synthesized HA nanoparticles are primarily composed of the elements Ca, P, and O. Additionally, minor peaks in the EDS spectrum suggest the presence of Zr ions resulting from the ball milling process. The BET test was utilized to determine the textural properties of the synthesized HA nanopowder. Figure 5 illustrates the corresponding N2 adsorption/desorption isotherms obtained through BET analysis. Research suggests that the shape and size of synthesized HA powder can be controlled by adjusting hydrothermal temperature, time, and reaction concentration [33]. The specific surface area of the material was determined to be ~48 m²/g, and the pore size was found to be broader in the mesoporous range, with an average diameter of ~10 nm and a total pore volume of ~0.4 cm³/g. The increased internal porosity and specific surface area of HA are known to accelerate the healing of bone defects. Moreover, porous HA particles have been explored to maximize the porosity and surface area of osteoinductive scaffolds. Dawson et al. [34] demonstrated that a high surface area of porous HA leads to an increased rate of material resorption, resulting in physical degradation and potentially accelerating the osteoclastic breakdown of HA. Mechanical properties of fabricated scaffolds The mechanical properties, a critical parameter in implant design for tissue engineering, were evaluated for four different fabricated scaffolds. The stress-strain curves and corresponding data are presented in Fig. 6 and Table 3, respectively. As strain increased, the curves deviated from linearity, revealing distinct values of strength and strain at break depending on the HA concentration. It was observed (Table 3) that increasing the HA content to 52 wt.% enhanced the compressive modulus and compressive strength of the pure PCL scaffold from 0.16 ± 0.02 to 0.31 ± 0.02 GPa and from 5.2 ± 0.2 to 9.9 ± 0.3 MPa, respectively. Similar findings have been reported in the literature [35]. The mechanical properties obtained fall within the reported range for trabecular bone (2-6 MPa compressive strength and 0.1–0.3 GPa elastic modulus) [36]. Table 3 also illustrates that the compressive strength and modulus of the nanocomposite scaffold improved with a higher HA content. This improvement is attributed to the high elastic modulus and crystallinity of HA [37,38] compared to PCL. While pure PCL scaffolds lack the necessary mechanical and bioactive properties for bone regeneration, using a high HA content in the scaffold composition could slightly impact the mechanical properties due to discontinuities between HA particles. In other words, a higher PCL content may enhance flexibility but could impact the overall stiffness of the scaffold. 
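As stated in the Characterization section, the compressive modulus was obtained by linearly fitting the elastic part of the stress-strain curve. The sketch below illustrates only that fitting step; the elastic-strain window and the synthetic curve are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def compressive_modulus_gpa(strain, stress_mpa, elastic_limit=0.02):
    """Slope of a linear fit to the assumed elastic region of a stress-strain curve.

    strain        : engineering strain (dimensionless)
    stress_mpa    : engineering stress in MPa
    elastic_limit : strain up to which the response is treated as linear (placeholder)
    Returns the compressive modulus in GPa.
    """
    strain = np.asarray(strain, dtype=float)
    stress_mpa = np.asarray(stress_mpa, dtype=float)
    mask = strain <= elastic_limit
    slope_mpa, _intercept = np.polyfit(strain[mask], stress_mpa[mask], 1)
    return slope_mpa / 1000.0

# Synthetic curve with a ~0.29 GPa initial slope, for illustration only.
strain = np.linspace(0.0, 0.05, 51)
stress = np.where(strain <= 0.02, 290.0 * strain, 5.8 + 60.0 * (strain - 0.02))
print(round(compressive_modulus_gpa(strain, stress), 2))  # ~0.29
```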
In conclusion, the mechanical properties of the fabricated scaffolds, particularly in the optimal composition, exhibit characteristics that make them suitable for bone cancer treatment. While the compressive modulus is slightly higher than that of natural trabecular bone, the compressive strength and yield strength fall within or above the reported ranges for trabecular bone. These scaffolds have the potential to provide the necessary mechanical support for bone cancer patients undergoing treatment, emphasizing their utility in load-bearing applications for bone regeneration and functional recovery. Therefore, the scaffold with the optimal composition of 47% PCL + 47% HA + 6% Gel exhibited the best mechanical properties and was selected for further investigation in this study. Biodegradability of the HA/PCL/Gel scaffold Figure 7a illustrates the weight loss graph of the scaffold with a composition of 47% PCL, 47% HA, and 6% Gel during immersion in the PBS solution from 7 to 28 days. The graph indicates low biodegradability until 14 days, with a significant increase in weight loss after 21 days. The maximum weight loss, approximately 40%, was observed in the HA/PCL/Gel scaffold after 28 days. Given the minimal degradation of PCL over 28 days, the weight loss in the scaffold primarily results from the decomposition of HA nanoparticles and gelatin. These observations align with a previous study that highlighted the higher hydrophilicity of HA/PCL scaffolds, attributed to hydrophilic HA nanoparticles, allowing water infiltration and slightly faster degradation of HA/PCL scaffolds [39]. In Fig. 7b, the pH of the PBS solution is documented during the soaking of various samples. Initially, the pH of the medium experiences a slight increase from the original level (~7.4) to approximately 7.6. This initial rise is attributed to the degradation of gelatin, causing an increase in pH during the first three days of incubation. The presence of PCL in the scaffold contributes to a subsequent decrease in the pH of the medium during degradation, likely due to the acidic degradation product of PCL, typically the carboxyl end groups. Other researchers [40,41] have noted that the presence of bioactive particles can compensate for the acidification of PBS caused by acidic polymer degradation products. However, HA exhibits a reverse effect, potentially accelerating the rate of degradation. In vitro bioactivity assessment of the HA/PCL/Gel scaffold Figure 8a shows how the pH values of the SBF solution change during the soaking of samples of the 47% PCL + 47% HA + 6% Gel scaffold for different periods. Over the initial three days, there was an increase in the pH of the SBF solution, attributed to the release of OH⁻ ions from calcium hydroxide in the HA nanoparticles. This rise in pH aligns with observations from previous studies on the immersion of HA in SBF [42,43]. Subsequently, the pH gradually decreases, likely due to the consumption of OH⁻ during apatite deposition. An ICP test was conducted to analyze the concentrations of Ca and P ions in the SBF solution after 28 days of soaking the scaffold (Fig. 8b). The concentrations of Ca and P ions showed a reduction after three days, indicating a higher rate of apatite formation, particularly on the surface of the scaffold. This in vitro bioactivity analysis aligns with previous research, emphasizing that the presence of HA promotes the production of crystalline apatite layers in SBF, showcasing its remarkable bioactivity [44,45]. In Fig. 
9, SEM was employed to examine the surface morphology of the 47% PCL + 47% HA + 6% Gel scaffold before and after soaking in the SBF solution. The immersion of bioactive compounds in SBF is a well-established method to assess in vitro apatite formation potential. However, the reliability of this method depends on the type of bioceramics tested. Studies have indicated that carbonate-based bioceramics may not exhibit obvious apatite formation when soaked in SBF for a short time [46]. Contrary to this, the current results reveal the formation of an apatite layer on the scaffold's surface after 14 days of soaking (Fig. 9b). Theoretically, a higher HA content enhances bioactivity, and the presence of even trace amounts of Zr ions influences this bioactivity. This observation aligns with findings by Montazerian et al. [47], who reported that the presence of Zr ions reduces the time required for HA formation. Notably, after 28 days of soaking, a thick layer of fine particles developed on the scaffold's surface (Fig. 9c, d). The formation of an apatite layer can be attributed to the presence of negatively charged OH⁻ and PO₄³⁻ ion groups in the HA structure. These groups attract positive Ca²⁺ ions from the surrounding SBF solution, leading to a positively charged surface. This enhances the attraction of negatively charged OH⁻ and PO₄³⁻ ions from the surrounding SBF solution, initiating the formation of an HA layer on the scaffold surface. This repeated reaction over time results in a well-developed bioactive surface. The SEM results, indicative of the coated samples, affirm their bioactive properties based on their chemical behavior [48,49]. In vitro investigation of DOX release from drug-loaded HA/PCL/Gel scaffold In the 'Drug loading and releasing' section, the calibration curve for DOX in PBS was established, and the resulting curve is depicted in Fig. 
10. The regression equation for the standard addition curve was determined as follows: y = 0.0184x + 0.0064, where y represents the absorbance value, and x is the DOX concentration. The high R² value of 0.999 indicates excellent linearity for the equation, demonstrating its reliability for determining DOX concentrations within the range of 10–100 µg/mL. Before investigating the DOX release behavior from the 47% PCL + 47% HA + 6% Gel scaffold, LC and EE were calculated. DOX exhibited an EE of 30.8 ± 5.6%. For experimental purposes, the LC of the drug was set at 1.5%. UV-Vis spectrophotometry was then employed to determine the concentration of DOX released from the scaffold in the PBS solution for 14 days. Figure 11 illustrates the cumulative release of DOX from the fabricated nanocomposite scaffold with the 47% PCL + 47% HA + 6% Gel composition. The release process was characterized by two distinct stages: burst and sustained release. The burst release extended until day 5, after which the drug release rate stabilized. UV-Vis spectroscopy results indicated that the sample released the maximum amount of DOX (94%), with an exceptionally high release rate during the initial two days. This initial burst release was attributed to the rapid desorption of the drug from the scaffold surfaces. By day 14, the amount of released drug gradually decreased at a relatively constant rate. The calculated amounts of drug released from the nanocomposite scaffold in the PBS solution were influenced by the scaffold composition. In this study, a moderate amount of HA (47%) acted as a barrier against the release of the drug from the nanocomposite scaffolds during the first two days. Previous studies have indicated that samples with higher HA content tend to have higher encapsulation efficiency, suggesting that increasing the HA content in the scaffold could decrease the drug release rate [50]. Advanced methods for drug integration into scaffolds, including precise 3D printing of drug patterns, have been recently employed. These technologies offer more precise control over the location and dosage of drugs, their proximity to target sites, and their distance from cells, leading to more efficient drug loading and release [51]. It should be noted that the achievement of a stable drug release behavior for 14 days in the fabricated scaffolds in the current study holds significant implications for their effectiveness in bone tumor treatment. The stable drug release for 14 days implies a prolonged and sustained exposure of the tumor site to the therapeutic agent, in this case, DOX. This extended duration is crucial for maximizing the therapeutic effect on cancer cells over an extended period. In other words, sustaining drug release for 14 days helps maintain an optimal concentration of the anticancer drug within the local microenvironment. Consistent drug levels are essential for ensuring that cancer cells are continuously exposed to effective concentrations, minimizing the risk of drug resistance and enhancing treatment outcomes. Also, localized and sustained drug release at the tumor site is advantageous for minimizing systemic side effects. By reducing systemic exposure to the drug, the potential for adverse effects on healthy tissues and organs is mitigated, improving the overall safety profile of the treatment. 
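The loading and release quantities above rest on the reported calibration (absorbance = 0.0184 × concentration + 0.0064) and on Eqs. 2 and 3, which are cited but not reproduced in this text. The sketch below assumes the conventional definitions EE = (drug taken up / drug supplied) × 100 and LC = (drug taken up / scaffold mass) × 100; all absorbance readings and the loaded amount used in the example are placeholders, not measurements from the study.

```python
SLOPE, INTERCEPT = 0.0184, 0.0064   # reported calibration at 480 nm: A = 0.0184*C + 0.0064

def dox_concentration_ug_per_ml(absorbance):
    """Invert the reported calibration; valid roughly over the 10-100 ug/mL range."""
    return (absorbance - INTERCEPT) / SLOPE

def entrapment_efficiency(loaded_ug, supplied_ug):
    """Assumed form of the EE relation (Eq. 3 is not reproduced in the text)."""
    return 100.0 * loaded_ug / supplied_ug

def loading_capacity(loaded_ug, scaffold_mg):
    """Assumed form of the LC relation (Eq. 2 is not reproduced in the text)."""
    return 100.0 * (loaded_ug / 1000.0) / scaffold_mg

def cumulative_release_percent(absorbances, medium_ml, loaded_ug):
    """Cumulative % release when the PBS medium is fully replaced at each time point."""
    released = [dox_concentration_ug_per_ml(a) * medium_ml for a in absorbances]
    return [100.0 * sum(released[:i + 1]) / loaded_ug for i in range(len(released))]

# Placeholder readings: 4 mL PBS release medium, hypothetical loaded amount of 1800 ug.
profile = cumulative_release_percent([0.95, 0.62, 0.33, 0.19, 0.11],
                                     medium_ml=4.0, loaded_ug=1800.0)
```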
In vitro biological assay The ability of cells to adhere to the scaffold's surface is a key indicator of its biocompatibility. Cell adhesion is essential for the initial stages of tissue integration and regeneration. In the context of bone cancer treatment, a scaffold that promotes cell adhesion is advantageous. It supports the attachment of both healthy and cancerous cells, facilitating their interaction with the scaffold and influencing subsequent cellular behaviors. For this reason, the morphology and attachment capability of MG-63 cells were studied after one day of culture on the 47% PCL + 47% HA + 6% Gel scaffold. As depicted in Fig. 12, there was a favorable interaction between MG-63 cells and the scaffold, with attached cells exhibiting a relatively rounded morphology and lower cytoplasmic spreading compared to normal cells. The presence of Gel is thought to have enhanced cell attachment and spreading. Some studies suggest that the arginine-glycine-aspartic acid (RGD) sequence in Gel plays a significant role in establishing stable interactions between cells and the surrounding extracellular matrix, making it highly effective for cell adhesion in clinical applications [52]. The incorporation of Gel into the composite resulted in the development of a suitable scaffold with optimal bioactivity, biocompatibility, and mechanical properties. The cell nuclei were stained with DAPI to observe the density of cell nuclei on the 47% PCL + 47% HA + 6% Gel scaffold (Fig. 13a). Additionally, filamentous actins were stained with phalloidin to analyze the cytoskeletal features of the cells (Fig. 13b) [53,54]. On the first day, DAPI staining revealed an acceptable number of cells adhered to the scaffold, with a relatively high density of cells on the strands of the printed scaffold. Simultaneously, the relatively low cytoplasmic expansion in the images obtained from phalloidin staining indicated that the cells did not exhibit significant spreading after one day. Overall, the results of this staining confirmed the reasonable attachment and morphology of the cells on the scaffold. Cytotoxicity assessments provide insights into how the fabricated scaffold interacts with cells. A lack of cytotoxic effects indicates that the scaffold components do not induce harmful responses in cells, ensuring a biocompatible environment. The absence of cytotoxicity is particularly important for cancer treatment, as it ensures that the scaffold itself does not compromise the viability of normal cells in the surrounding tissue. This is critical for maintaining overall tissue health and facilitating the integration of the scaffold with the host environment. For this assessment, the MTS test was employed to determine the viability and activity of MG-63 cells seeded on the 47% PCL + 47% HA + 6% Gel scaffold after 1, 3, 5, and 7 days of culture (Fig. 14). The MTS test confirmed the appreciable attachment of MG-63 cells on the surface of the scaffold on the first day. Furthermore, the increasing trend until the seventh day indicated a significant rise in the vital capacity of the cells over the study period, which can be attributed to their proliferation and expansion on the strands. The growth curve appeared as a logarithmic pattern that reached the stationary phase on the seventh day. Generally, the exponential growth curve of the cells in this test confirmed the non-toxicity of the biological materials used in scaffold preparation, as well as the scaffold's non-destructive effect on cell activity. Chuenjitkuntaworn et al. 
[55] illustrated that a 3D PCL/HA scaffold can support cell growth and osteogenic differentiation, implying that a 3D porous PCL/HA scaffold could be a potential candidate material for bone tissue engineering. Discussion The results obtained from the XRF, XRD, and SEM analyses provided a comprehensive understanding of the characteristics of Acropora coral. The presence of calcium carbonate (CaCO3) as the major component and the identification of trace elements align with previous studies [28]. Similarly, the synthesized HA powder exhibited a Ca/P molar ratio close to the theoretical value of HA [31], confirming the success of the hydrothermal synthesis. The SEM images illustrated the spherical morphology of the nanoparticles, a crucial factor for their application in bone tissue engineering [33]. The mechanical properties of the fabricated scaffolds, crucial for their implantation in tissue engineering, were systematically evaluated. The compressive modulus and strength increased with higher HA content, aligning with previous studies [35]. However, it is noteworthy that an excessively high HA content may introduce discontinuities, affecting mechanical properties. The 47% PCL + 47% HA + 6% Gel composition demonstrated optimal mechanical properties, making it a suitable candidate for further investigations. The biodegradability assessment revealed a gradual weight loss, primarily attributed to the decomposition of HA nanoparticles and gelatin. The influence of PCL degradation on the pH of the medium was observed, emphasizing the complex interplay of scaffold components during degradation [40,41]. The in vitro bioactivity evaluation demonstrated the scaffold's ability to induce apatite formation, a promising sign for its performance in bone tissue engineering [42,43]. The drug release profile of DOX from the nanocomposite scaffold exhibited a biphasic pattern, with an initial burst release followed by sustained release. The composition of the scaffold significantly influenced the release rate, with higher HA content acting as a barrier during the initial release. This finding is consistent with previous studies, emphasizing the importance of scaffold composition in drug delivery systems [50]. The in vitro biological assay confirmed the biocompatibility of the scaffold. Cell attachment, morphology, and metabolic activity of MG-63 cells on the scaffold indicated its potential for supporting cell growth and proliferation. The presence of Gel likely contributed to enhanced cell attachment, emphasizing the role of scaffold composition in cellular interactions [52]. The observed results align with existing literature on scaffold development for bone tissue engineering. The mechanical properties of the fabricated scaffold fall within the reported range for trabecular bone, emphasizing their suitability for load-bearing applications [36]. The gradual degradation of the scaffold and its ability to induce apatite formation corroborate findings from similar studies, underlining the potential for effective integration with the host tissue [39,42,43]. 
The synthesized scaffold, with its optimal mechanical properties, controlled biodegradability, and demonstrated in vitro bioactivity, holds significant promise for bone tissue engineering applications. The ability to finely tune the scaffold composition, influencing both mechanical and drug release properties, opens avenues for personalized approaches in regenerative medicine. The sustained release of DOX from the scaffold suggests its potential for localized drug delivery in cancer therapy, enhancing the therapeutic efficacy while minimizing systemic side effects. The study identified an optimal scaffold composition (47% PCL, 47% HA, 6% Gel) for achieving a balance between mechanical properties, biodegradability, and drug release. Further research can explore variations in composition to fine-tune these properties and potentially enhance specific aspects such as drug release kinetics or scaffold strength. The success of incorporating HA, PCL, and Gel opens avenues for integrating additional bioactive agents. Researchers can explore the inclusion of growth factors, antimicrobial agents, or other therapeutic molecules to create multifunctional scaffolds. This approach aims to address various aspects of bone cancer treatment, such as promoting tissue regeneration or preventing infections. Building on the controlled drug release observed in this study, further research can delve into advanced drug delivery strategies. This may include incorporating stimuli-responsive materials for on-demand drug release, exploring combination therapies with multiple drugs, or utilizing nanotechnology for precise control over drug delivery patterns. In conclusion, applying the findings from this study to further research involves exploring new avenues for scaffold improvement, incorporating advanced technologies, and progressing toward preclinical and clinical validations. This iterative process is essential for developing innovative and effective solutions for bone cancer treatment. While the current study provides valuable insights, it is not without limitations. The in vitro assessments, while informative, do not fully replicate the complex in vivo environment. Further studies, including in vivo experiments, are essential to validate the scaffold's performance in a more physiological setting. Additionally, exploring alternative drug-loading techniques and investigating the long-term effects of scaffold degradation are avenues for future research. The expansion of nanocomposite scaffold production for clinical applications may face challenges and limitations such as scalability issues, reproducibility concerns, regulatory hurdles, cost implications, and potential safety and toxicity considerations. Additionally, the complexity of the manufacturing process and the need for specialized expertise could pose obstacles to widespread scalability. It is crucial to address these factors to ensure the successful and efficient translation of nanocomposite scaffolds from laboratory settings to clinical applications. Addressing these challenges requires a comprehensive approach involving technological innovation, regulatory compliance, quality assurance, and close collaboration between researchers, manufacturers, and regulatory bodies. Despite these challenges, overcoming them can unlock the potential of nanocomposite scaffolds for impactful clinical applications in tissue engineering and regenerative medicine. 
Conclusion In this research, HA nanoparticles were successfully synthesized from coral using a hydrothermal method, showcasing a spherical morphology with an average diameter of approximately 80 nm. The study focused on the fabrication of 3D-printed scaffolds using HA, PCL, and Gel, exhibiting porous structures with optimal mechanical properties akin to trabecular bone. These scaffolds demonstrated exceptional bioactivity, forming crystalline apatite layers, and showed a controlled drug release profile. Additionally, the presence of Gel enhanced cell attachment and spreading, making the HA/PCL/Gel scaffolds promising candidates for bone tissue engineering applications. In conclusion, while the use of Acropora coral-derived HA nanoparticles in scaffolds for bone cancer treatment holds promise, thorough investigations and optimization are necessary to ensure the safety and efficacy of these scaffolds in a clinical setting. Fig. 4 (a) SEM image and (b) corresponding EDS results of the powder synthesized via the hydrothermal method. Fig. 7 (a) Mass variation of the scaffold and (b) changes in the pH value of the PBS solution during soaking of the 47% PCL + 47% HA + 6% Gel scaffold for 28 days (n = 3, error bars are standard deviation). Fig. 8 Changes in (a) the pH value and (b) the Ca and P ion concentrations of the SBF solution during soaking of the 47% PCL + 47% HA + 6% Gel scaffold for 28 days (n = 3, error bars are standard deviation). Table 1 Composition of the three different inks used for scaffold fabrication
9,318.4
2024-01-29T00:00:00.000
[ "Medicine", "Engineering", "Materials Science" ]
Using hierarchical linear models to test differences in Swedish results from OECD’s PISA 2003: Integrated and subject-specific science education Abstract The possible effects of different organisations of the science curriculum in schools participating in PISA 2003 are tested with a hierarchical linear model (HLM) of two levels. The analysis is based on science results. Swedish schools are free to choose how they organise the science curriculum. They may choose to work subject-specifically (with Biology, Chemistry and Physics), integrated (with Science) or to mix these two. In this study, all three ways of organising science classes in compulsory school are present to some degree. None of the different ways of organising science education displayed statistically significantly better student results in scientific literacy as measured in PISA 2003. The HLM model used variables of gender, country of birth, home language, preschool attendance, an economic, social and cultural index as well as the teaching organisation. Introduction In Sweden, schools are free to choose different ways of organising the science curriculum at the local level. This gives schools the opportunity to teach science integrated or subject-specifically (Skolverket, 2001a). In the following we name the different ways of organising the curriculum "teaching organisation", which is a less precise, but shorter, term. We also use the terms "integrated" and "thematic" as synonymous concepts. In this explorative study, the results from PISA 2003 are matched with essentially one question asked in a survey of the same schools. The question was whether the schools taught thematically (as the integrated teaching organisation was called in the survey) or subject-specifically. The original question, asked in Swedish, is reproduced in translation in Box 1. The additional survey of the PISA 2003 schools was performed in the autumn of 2003, after the main collection of PISA 2003 results was completed. Data from the two connected studies are used to discover if there is a relation between the different ways of organising science teaching and student results in science in the PISA 2003 study in Sweden. The question asked in this study is: Does organising teaching in an integrated or a subject-specific way affect student results in scientific literacy? Since curriculum discussions regarding science teaching in Sweden as well as in other countries have a component that concerns the use of integrated or subject-specific teaching, it should be of interest to investigate if these two teaching organisations produce different results. A description of the essential features of the data used in this study is presented first. The next section deals with the methods of statistical analysis performed in this study. After that, results and their interpretation follow, after which the model is assessed and diagnosed. A discussion of the results concludes the article. 
How PISA measures results Harlen (2001) discusses the rationale of the PISA study. PISA aims at defining each domain not merely in terms of mastery of the school curriculum, but to test adolescents on important knowledge and skills needed in adult life. The aims of PISA are well aligned with ideas regarding an integrated curriculum (Aikenhead, 2003; Schwab, 1989; Showalter, 1973), especially the idea that the student is learning for life and needs to be able to learn new things later in life. The framework for PISA is also rather well aligned with the Swedish science curriculum (Skolverket, 2006). Scientific literacy in PISA 2003 is defined as: 'Scientific literacy is the capacity to use scientific knowledge, to identify questions and to draw evidence-based conclusions in order to understand and help make decisions about the natural world and the changes made to it through human activity.' (OECD, 2003) In this study the overall science results of PISA 2003 have been used. This result is calculated from the raw score (the students' results from the test booklets) of the science questions, iterated by Item Response analysis and weighted to a mean of 500 points with a standard deviation of 100 points (OECD, 2002, 2005a, 2005b). Examples of items used in PISA can be found in the Swedish reports (Skolverket, 2001b, 2004). Sample The OECD's PISA is an international study of student performance in Mathematical, Reading and Scientific literacy. In the year 2003 Mathematical literacy was the main subject of study and the other subjects were minor. The students are randomly sampled in two steps (OECD, 2005b). This sampling procedure results in students nested at the first level within schools at the second level. The sample in PISA 2003 consisted of fifteen-year-old students. Box 1. Original question to the schools of PISA 2003, asked in autumn 2003 (translated from Swedish): 'Teaching group' refers to the teaching groups, indicated by you, that had students who participated in the PISA study during spring 2003. In the column 'Thematic or subject teaching', mark how the teaching was mainly carried out in the teaching group during spring 2003. In the following columns, give the names and e-mail addresses of the teachers teaching the group. For groups that had a science (NO) teacher who worked with thematic/integrated subject teaching, the name is given in the column for thematic teacher. For groups that had teachers who taught the separate subjects chemistry/biology/physics, the names are given in the columns indicated for these subjects. This study is restricted to students that took the science part of the PISA 2003 test and were in the ninth grade of compulsory school. Since PISA 2003 collected data from students in the seventh, eighth, ninth and tenth grades, students outside the ninth grade were excluded. In the original sample of PISA 2003 there were 4624 students, of which 4420 students were in grade nine. Of these, 2359 students have science results from PISA 2003, and in our sample there are 1867 students. The remaining 492 students were in schools that did not answer the survey of autumn 2003. 
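The sample restriction just described (ninth-graders with a science score, in schools that answered the autumn 2003 survey) amounts to a simple filter-and-merge step. A minimal sketch under assumed file and column names (student_id, school_id, grade, wle_science, teach_org; none of these are the actual PISA variable names) is:

```python
import pandas as pd

students = pd.read_csv("pisa2003_students.csv")   # hypothetical extract, one row per student
survey = pd.read_csv("school_survey_2003.csv")    # hypothetical: school_id, teach_org

grade9 = students[students["grade"] == 9]                     # 4420 of the 4624 sampled students
with_science = grade9.dropna(subset=["wle_science"])          # 2359 students with a science WLE
analysed = with_science.merge(survey, on="school_id", how="inner")  # 1867 students; 492 excluded
```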
The data collected in the survey of autumn 2003 shows different organisations of science teaching (integrated, subject-specific and mixed science). 132 out of 172 schools answered the survey, corresponding to 77 percent of the PISA 2003 schools that had students in grade nine. Analysis of the data at the school level reveals that around 20 per cent of the schools organised science teaching in an integrated way and around 20 per cent of the schools used a mixed organisation (e.g. subject-specific in some classes and thematic in others, or sometimes subject-specific and sometimes thematic). The rest of the schools used subject-specific teaching. Small schools taught integrated science more often than large schools (Åström, 2004). When the sample was analysed at the student level, the proportion of students with integrated science remained at 22 per cent, but students with mixed teaching were only 12 per cent. The rest of the students received subject-specific teaching. This means that, at mixed schools, there were several classes where teachers taught subject-specifically and only a few classes where teachers taught in an integrated way. The sample was analysed with a simple mean comparison for the students with different teaching organisations, both at individual level and at school level. This comparison is found in Table 1. There is a slight tendency for the mixed group to have a higher mean than the other groups, but as seen from the t-values this is not significant. A comparison of some variables that were used by PISA in the analysis of student results showed that there were differences between the groups; for instance, the economic, social and cultural index was higher in the mixed group than in the other groups. There were also differences between the groups of students with regard to ethnicity. It was therefore decided to analyse the data further with a more complex model, using variables that have been shown to produce differences in student results. The variables used in the study are found in Table 3. Table 3. Variables of the study and their description. The independent variables used in this study were selected from models used in the PISA 2003 main report (OECD, 2004, p. 439). Those variables concerned gender, country of birth, home language, preschool attendance and an economic, social and cultural index at the student level (OECD, 2005a). Science result (the weighted likelihood estimate, WLE) is the dependent variable. A list of the variables and a short description of them is in Table 3. As is seen in Table 3 there are missing data for some of the variables. The missing data cases were not substituted with dummy indicators or cases in the present analysis. The additional variable of teaching organisation was collected separately and added to the PISA data file. This additional variable is a nominal variable that groups students into three groups: integrated, subject-specific or mixed. Data on the teaching variable was collected at class level and the variable of teaching organisation can thus be connected to each student. 
Analysis method
The sample was analysed with hierarchical linear models (HLM) using SPSS MIXED Linear models, since two levels of data were nested (Tabachnick & Fidell, 2007). The variable of teaching organisation was used additionally in two of the models. The variables are presented in Table 3. The gender variable did not contribute significantly to the results and could have been excluded, but was kept for comparison with PISA results. A maximum likelihood method was used in fitting the three models.

A two level hierarchical model of the variables used in PISA 2003 was modelled first (called PISA_plain in this study). The variables chosen were gender, country of birth, home language, preschool attendance and an economic, social and cultural index. The variables were fixed at the first level and the means of the schools could vary.

Secondly, a two level hierarchical model of the variables used in PISA 2003 and the variable collected in the survey from autumn 2003 was modelled (called TEACH_simple). This model was built from the same variables plus the teaching organisation at the first level. The variables were fixed at the first level and the means of the schools could vary. The first level equation is:

WLE_ij = ß_0j + ß_1 X_1ij + ß_2 X_2ij + ß_3 X_3ij + ß_4 X_4ij + ß_5 X_5ij + ß_6 X_6ij + e_ij   (1)

where ß_0j is the intercept, ß_1, ß_2, ß_3, ß_4, ß_5 and ß_6 are coefficients of increase or decrease of the intercept depending on the respective student-level variable in Table 3, and e_ij is an error term. The index i stands for student i at school j and runs from 1 to n, with n as a function of j, where n is the number of students with a WLE science score at school j. The index j runs from 1 to 132.

The second level equation of the second model (TEACH_simple) is:

ß_0j = γ_00 + u_0j

where ß_0j is the random intercept, γ_00 is the grand mean over schools and u_0j is an error term that shows variation between schools.

Thirdly, a two level hierarchical model to test the variable TEACH was performed and tested (called TEACH_complex). The model's first level equation was the same as in equation (1), and the second level equation was specified to allow random variation of the variable TEACH between schools. It is:

ß_5j = γ_50 + u_5j

where ß_5j is the random effect of the variable TEACH, γ_50 is the fixed effect of the variable TEACH and u_5j is an error term that shows variation between schools.

The models were tested with a type III test (the sums of squares adjusted for any other effects that do not contain it and orthogonal to any effects, if any, that do contain it) to find out whether there were dependences between the variables in the model. A null model with no variables, but with a random intercept, was calculated to use as a reference in the evaluation of the different models (Tabachnick & Fidell, 2007). Then a full model for each of the different models was calculated, and the variances of the null model and the full model were compared in the analysis (Snijders & Bosker, 1998).

Evaluation of the different models
Three different measures were used to evaluate the models. An intra-class correlation was calculated to find out if a two level hierarchical model is appropriate. The CHI2 values were calculated and compared to tabled values for the appropriate degrees of freedom, to check whether the model explains better than random. The effect sizes (that is, the percentage of explained variance at each level of the analysis, see below) of the three models were calculated and compared. A closer description of the methods used is found in Snijders & Bosker (1998) and Tabachnick & Fidell (2007).
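The model fits and the evaluation measures described above were computed with SPSS MIXED Linear; a minimal sketch of the corresponding steps in Python (statsmodels), with assumed variable names such as wle_science, escs and school_id and a formula that only approximates the SPSS specification, might look as follows:

import statsmodels.formula.api as smf

# Null model: random intercept only, used as reference and for the intra-class correlation
null = smf.mixedlm("wle_science ~ 1", data=analysis_sample,
                   groups=analysis_sample["school_id"]).fit(reml=False)

# PISA_plain: student-level variables fixed, school intercepts random, maximum likelihood
pisa_plain = smf.mixedlm("wle_science ~ gender + born + lang + preschool + escs",
                         data=analysis_sample,
                         groups=analysis_sample["school_id"]).fit(reml=False)

# TEACH_simple: the same variables plus teaching organisation at the first level
teach_simple = smf.mixedlm("wle_science ~ gender + born + lang + preschool + escs + teach_org",
                           data=analysis_sample,
                           groups=analysis_sample["school_id"]).fit(reml=False)

# Intra-class correlation: between-school variance over total variance in the null model
s_bg = null.cov_re.iloc[0, 0]   # variance between schools
s_wg = null.scale               # residual variance within schools
icc = s_bg / (s_bg + s_wg)

# CHI2 comparison of a full model against the null model (difference in -2 log likelihood)
chi2 = 2 * (pisa_plain.llf - null.llf)

# Effect size as the proportional reduction of residual variance between null and full model
eta2 = (null.scale - pisa_plain.scale) / null.scale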
Intra-class correlations
To test for dependence within schools, the intra-class correlation ρ can be calculated: ρ = s_bg² / (s_bg² + s_wg²), where s_bg² is the variance between groups (schools) and s_wg² is the variance within groups (schools). Intra-class correlation is a measure of the degree of dependence of individuals. The existence of intra-class correlation represents the effect of all omitted variables and measurement errors, under the assumption that the errors are unrelated (Kreft & de Leeuw, 1998, p. 9). The intra-class correlation is the same for all the models (2.3 percent), which is quite small. However, even a small amount of intra-class correlation in a large sample makes a hierarchical linear model applicable.

Assessing models
A test to assess whether the model makes better than random predictions was performed. This was calculated from the CHI2 value of the intercept-only model and the CHI2 value of the full model with all variables fixed. The difference in CHI2 values was compared to the tabled CHI2 value with the appropriate degrees of freedom, at a significance level of 0.05. The PISA_plain model has seven degrees of freedom (the tabled CHI2 is 14) and the two other models (TEACH_simple and TEACH_complex) have nine degrees of freedom (the tabled CHI2 is 17). The calculated CHI2 values for the models are 201 and 202, so the models predict better than chance.

Effect sizes
The models' effect size is calculated as η² = (s_1² - s_2²) / s_1², where s_1² is the residual variance of the null model and s_2² is the residual variance of the full model. However, the commonly used measure of explained variance (or effect size) is composed of both a within-group and a between-group component in hierarchical linear modelling (Kreft & de Leeuw, 1998). This complicates the calculation of the explained variance, and two different values are therefore given for each of the models. The model used to calculate the explained variances is taken from Snijders & Bosker (1998, p. 102), who use a combined explained variance that can be compared to the tabled PISA values for the same entities. A typical group size of fourteen students was used in the calculations.

Result
The hierarchical linear models were analysed with SPSS MIXED Linear. The following presents a description of all three models (PISA_plain, TEACH_simple and TEACH_complex). The results of the calculation of the first model are explained in detail. The others are presented with shorter descriptions, since they are all calculated in a similar way. The variables in the tables are written with explanatory denotations, to facilitate the reading.

The PISA_plain model
A Type III test of the fixed effects of PISA_plain was conducted first. In this test, the variables Country of birth and ESCS were significant at p << 0.001 (less than 0.1 percent). The influence of Language at home and Preschool attendance was significant at p < 0.05 (at a five percent level), but the variable GENDER did not contribute significantly to the model. The economic, social and cultural index used in the analysis is centred at the OECD mean, and the Swedish mean is slightly higher. A table of estimated coefficients, standard errors and t-ratios of the variables in the PISA_plain model is in Table 4.
As may be seen in Table 4, a female student with an economic, social and cultural index of zero (in accordance with the OECD mean value), not born in Sweden, not speaking Swedish at home and with more than one year of preschool had on average 431.2 PISA points. When the student is born in Sweden, this yields an increase in the dependent variable of 47.2 points compared to students born in other countries. When a student speaks Swedish at home, this yields an increase of 32.2 PISA points compared to a student who speaks some other language at home. A student with no preschool education has a decrease of 23.9 points compared to a student with more than one year of preschool. A one unit increase in the economic, social and cultural index yields an increase of 33.6 PISA points. A typical female student in Sweden, born in Sweden, who speaks Swedish at home, has the Swedish mean economic, social and cultural index of 0.3 and attended preschool for one year or more, is predicted to have 520.8 PISA points according to this analysis. A boy in Sweden with the same characteristics is predicted to have an average of 518.7 PISA points.

The TEACH_simple model
The TEACH_simple model contains the variables as described above in the methods section. All regression coefficients are fixed at the first level of the model. The intercept varies between schools with a fixed mean. A type III test of fixed effects was performed. It showed that the variables Country of birth and ESCS contribute significantly to variation in the dependent variable (at p << 0.001). The variables Language at home and Preschool attendance contribute significantly at a significance level of 5 percent. The variables GENDER and TEACH are not significant. Table 5 shows estimates of the fixed effects of the variables.

The effect estimates can be interpreted as above. The effect of teaching organisation is not significant, but the table shows that students attending subject-specific teaching have 4.9 fewer PISA points than students with a mixed teaching organisation. The group with integrated teaching has 1.8 fewer PISA points than students with a mixed teaching organisation. Care should be taken in interpreting these results, since the t-test does not show a significant result. According to this analysis, a typical Swedish student (girl) who was born in Sweden, has Swedish as her home language, has a mean economic, social and cultural index of 0.3, attended preschool for more than one year and also attended subject-specific science classes has a predicted score of 519.5 PISA points.

The TEACH_complex model
The TEACH_complex model includes the variables as described above. Regression coefficients are all fixed at the first level of the model. This model also contains a random regression coefficient at the second level: the intercept that depends on the TEACH variable. This tests for the presence of differences in students' results between different teaching organisations. A type III test of fixed effects was conducted. The results were essentially the same as for the type III test of the TEACH_simple model. The variables Born in Sweden and Economic, social and cultural index contribute significantly (at the p << 0.001 level), as do the variables Language at home and Attended preschool (at the p < 0.05 level), to variation in the dependent variable. The variables GENDER and TEACH are not significant.
Table 6 shows estimates of the fixed effects of the variables in the TEACH_complex model. The estimated effects can be interpreted as above. This table is essentially the same as the table for the TEACH_simple model. Once again the analysis shows that a typical Swedish student (girl) who was born in Sweden, has Swedish as her home language, has a mean ESCS of 0.3, attended preschool for more than one year and also received subject-specific science classes has a predicted score of 518.9 PISA points.

Evaluation of the models
The three models were evaluated for effectiveness, possibility of prediction and intra-class correlation, as described in the method section above. The two proportional reductions of error were calculated according to the Snijders & Bosker (1998) model of calculations of effect size (or explained variance).

Table 7. Evaluation of the three HLM models in this study.

As can be seen in Table 7, both effect sizes (i.e. the proportional reduction of error for predicting an individual outcome and the proportional reduction of the between-school variance) are nearly the same for the three models, about twelve and thirty-five per cent respectively.

Discussion
The science teaching organisation does not show statistically significant differences in science literacy results in the PISA 2003 study, according to this study. There may be several reasons for the lack of differences found between the different teaching organisations, and this section deals with some of them. We also briefly discuss the treatment of missing values.

The variables in the three models explain about twelve percent of the variance between students and thirty-five per cent of the variance between schools. These numbers are comparable to the OECD model (OECD, 2004, p. 439). That model deals with mathematics results as the dependent variable and explains thirty-two percent of the between-school variance and seven percent of the within-school variance. In the OECD model gender is included, but in the Swedish data there is no significant difference between boys' and girls' science results, so that variable does not contribute to the model. The difference between this study and the PISA 2003 model may be explained by differences in the models used and, of course, by the fact that the PISA model is applied across countries. The PISA 2003 model (p. 439) was modelled as a three level model, with fixed variables and random intercepts across schools and countries (W. Schoulz [1], personal communication, Sept. 20, 2006). The model in this study is a two level model with fixed variables and random intercepts across schools.

The additional variance explained by the TEACH_simple model is very small. The three models make better than random predictions of student results according to CHI2 tests. This is mainly due to the economic, social and cultural index (ESCS) and country of birth (BORN). Other variables in the model contribute less to predictions of student science results.

Some possible explanations of why students' test results in science are unaffected by the science organisation are listed below.
1. The variable is not well enough defined and would have to be better defined in the teacher survey to be accounted for. Compare with social facts as described by Searle (1997).
2. The variable may mean different things to different respondents. An answer from one survey respondent may mean something different to that person than the same answer from another respondent (Searle, 1997).
3. The variable of science organisation doesn't really affect students' science literacy results, or it is not possible to assess differences in students' science literacy results due to science organisation with the present assessment.

As to statements 1 and 2, it should be pointed out that the word used in the question to PISA 2003 schools about teaching organisation was "theme" and not "integrated teaching". The word 'theme' was used in Swedish curricula between 1980 and 1994, at which time a new national curriculum was implemented. The curriculum from 1994 (revised in 2001) granted a great deal of freedom to individual schools to develop new ways of organising work in schools. The curriculum of 1994 discusses subject integration. Since science teachers have not used the word integration very often, it was considered appropriate to use the word "theme", as this would be better recognised by the survey respondents. The original question, translated from Swedish, is found in Box 1, so readers can judge for themselves how they would answer it.

Regarding statement 3 above, it is worth mentioning that there are difficulties in determining what should be taught in the general curriculum during the last years of compulsory school, since there are no specifics listed in the curriculum (Skolverket, 2001a). Sweden is a small country with a fairly homogeneous culture and a tradition of explicitly regulated content and teaching methods (Gustafsson, 1999). Gustafsson writes that teachers in a decentralised curriculum do not experience the freedom intended. Nevertheless, different approaches to the national science curriculum have been developed in recent years, reflecting teachers' various interpretations of the curriculum (Åström, forthcoming). It is thus possible that the variable of teaching organisation relates to different contents, according to the ways teachers interpret the curriculum. The curriculum is the same for all teachers, so they are supposed to accomplish the same thing in terms of student outcome, but how they choose to organise the science teaching can differ. Another possibility is that differences are ruled out by other factors that influence students' results, such as motivation and teacher interaction (Lee, Smith, & Croninger, 1995). Influences deriving from differences in teacher behaviour have not been included in this study. The teachers' ways of working in classrooms are usually a main factor to consider when comparing student results. One possible explanation for the lack of difference between different forms of teaching organisation could be that individual teachers' personal methods reduce or nullify the diversity that different teaching organisations otherwise would provide.
The goals of the science curriculum in Sweden are well aligned with the framework of PISA 2003 in some of the science subjects, and somewhat more weakly, but still applicably, aligned in others, as found in a study where TIMSS, PISA and the Swedish curriculum were compared (Skolverket, 2006). PISA focuses on the process knowledge that students have achieved during their studies. Knowledge of science processes is an important part of the Swedish curriculum. An argument for integrated teaching has been that students will get a whole and integrated picture of the topics studied (Fogarty, 1995; Penick, 2003), which would promote knowledge of science processes. It might therefore be expected that students who have studied integrated science would have acquired more knowledge of science processes and would perform better on the PISA assessment than students who have studied subject-specific science. As seen in this study, this does not seem to be the case. The chain of evidence has, however, more than one difficulty, since the variable of teaching organisation is fuzzy, as discussed above.

Conclusions
Teachers and schools in Sweden are able to organise science teaching differently. It is therefore possible to study the relationship between teaching organisation and student results in scientific literacy by investigating a randomly selected sample of schools in the compulsory education system from PISA 2003. In this study no differences were found in PISA 2003 science results for students with different teaching organisations. The same result, no difference between different teaching organisations, is found at the individual level with a simple mean comparison between groups (Åström, 2005), and also as described in the introduction. The result applies for the group of students as a whole as well as when the variables of country of birth, home language, preschool attendance and economic, social and cultural index are taken into account. The variable of teaching organisation does not contribute significantly to the test results. In conclusion, the teaching organisation of science in Sweden, be it integrated, subject-specific or mixed organisation, has no statistical significance for students' results in scientific literacy as measured by PISA 2003, according to this investigation.

Table 1. Mean comparison between the groups and mean for the whole group at individual level and school level. The mean values for the different groups were compared and tested against the grand mean with t-tests. A table of tested t-values is in Table 2.
Table 4. Estimates of fixed effects in a full model of the PISA_plain model. ** = significant at p << 0.001, * = significant at p < 0.05.
Table 5. Estimates of fixed effects in a full model of the TEACH_simple model. ** = significant at p << 0.001, * = significant at p < 0.05.
Table 6. Estimates of fixed effects in a full model of the TEACH_complex model.
[1] Senior research fellow at the Australian Council for Educational Research.
6,298.6
2012-06-29T00:00:00.000
[ "Education", "Physics" ]
IMPACT OF AGRICULTURAL PROTECTION ON AGRICULTURAL GROWTH IN NIGERIA: POLITICAL ECONOMY PERSPECTIVE (1980-2016)

This study examined the impact of agricultural protection and other macroeconomic variables on agricultural growth in Nigeria from 1980 to 2016. The specific objectives were to (i) estimate the level of agricultural protection in Nigeria; (ii) determine the effects of agricultural protection on agricultural growth, and (iii) analyse the causal relationship between agricultural protection and agricultural growth in Nigeria. The data were obtained from annual time series datasets from the Central Bank of Nigeria (CBN), the World Bank, and the Food and Agriculture Organisation (FAO) and were tested using unit root and cointegration tests. Descriptive statistics, the Nominal Protection Coefficient (NPC) model, multiple regression and Granger causality were the analytical tools used, while the hypotheses were tested with the F-test. Results revealed a significant presence of protection in the agricultural sector, but not statistically commensurate with the share of agriculture in Nigeria's gross domestic product (GDP). All hypotheses were tested at the 1% probability level, i.e. p < 0.01. There was a negative and significant relationship between agricultural growth and protection in agriculture. A significant and positive relationship exists between agricultural growth and budgetary appropriation to the agricultural sector, while foreign direct investment and farmers' economic welfare had a non-significant and negative relationship with the protection level. There was significant causality running from budgetary appropriation (agriculture) to agricultural protection and from the protection level to GDP (agriculture). One of the major recommendations is that government should review its policy instruments, programmes and projects to ensure that targeted policy objectives, such as an increase in agricultural growth, are achieved by increasing its budget and liberalizing the sector.
INTRODUCTION
Nigeria is one of the developing economies with significant expenditures on agricultural protection through interest and exchange rate differentials, price mechanisms, input subsidies, research, embargoes and regulations promulgated in various protectionist policy reforms, projects and programmes. Before 1980, African economies were deeply confronted with a crisis situation, but Nigeria's experience of the economic crisis was delayed until the early and mid-1980s with the collapse of the global oil price. Following this, many African countries including Nigeria adopted remedial and protectionist measures to address their economic problems, either on their own or at the instance of multinational finance/development agencies such as the International Monetary Fund and the World Bank. Such protectionist measures, policies, reforms, projects and programmes executed in Nigeria from 1980 include, but are not limited to, the Green Revolution in 1980, the Directorate of Food, Roads and Rural Infrastructure (DFRRI) in 1986, Better Life for Rural Women in 1992, the National Agricultural Land Development Authority (NALDA) in 1992, the Family Support Programme (FSP) and Family Economic Advancement Programme (FEAP) in 1996, the National Fadama Development Project (NFDP) in 1990, the National Economic Empowerment and Development Strategy (NEEDS) in 1999, the National Special Programme on Food Security (NSPFS) in 2002 and the Root and Tuber Expansion Programme (RTEP) in 2003 (Iwuchukwu and Igbokwe, 2012). Others include the Growth Enhancement Scheme (GES) in 2011, the Agricultural Transformation Agenda (ATA) in 2015 and the Agricultural Promotion Policy (APP) in 2016. Each of these reforms consists of one or more agricultural protection instruments such as tax exemption, tariff reduction, subsidies, credit facilities, reduced interest rates and regulations, and each of them has cost implications.

Agricultural protection is a political economy tool designed to boost domestic production, and it is justified not only on the grounds that it can contribute to domestic food security and foster more stable societies, but also because there are sound economic reasons to do so (FAO, 1999). One of these economic reasons is to increase GDP in the sector, but Gardner (1992) highlighted the paradox of growing protection alongside a declining share of agriculture. Also worth noting is that the oil sector, which used to contribute a meagre 2.6% of GDP in 1960, later contributed 57.6% to GDP in 1970 and up to 99.7% in 1972 (Keke, 1992). Agriculture, on the other hand, contributed only 12% to GDP in 1970 and has remained stagnant until 2017. This has supposedly culminated in a rising food import bill, leading to the persistent huge deficit in the balance of payments over the years (Ugwu, 2007; CBN, 2017). These conflicting claims call for empirical research and investigation into this paradox of increasing expenditure on protectionist programmes and decreasing GDP in agriculture.
On the issue of political economy variables that affect protection, Moon, Pino, and Asirvatham (2016) theorized that agricultural protection represents an effort by the political class to increase agricultural growth by improving national food security and minimizing food dependence on foreign countries. Rooted in the realist view of the world, the theory suggests that a state's concern about food dependence on foreign countries or about national food insecurity would be heightened as the extent of vulnerability to national food insecurity increases and as per capita income rises. In turn, concern about national food insecurity in a country is hypothesized to lead to growth in agricultural protection. According to Akanegbu (2015), the pace of economic growth of Nigeria is best indicated by the trend of its gross domestic product (GDP) or gross national product (GNP).

The patterns of agricultural protection policies in Nigeria and other developing economies in Africa suggest that developing nations strongly subsidize agriculture (Olper, 1998). However, scholars have conflicting opinions about the impact of such a political economy tool, because poverty and other expected macroeconomic indices are not commensurate with the claims of huge expenditures by the political class over the years. For instance, Inhwam (2008) and Barrette (1999) argued that agricultural protection is capable of creating negative externalities for developing countries, because agricultural protection distorts trade in agricultural products which some developing countries have a comparative advantage in producing. On the contrary, Goldin and Knudsen (1990) opined that since agriculture is a sector of comparative advantage for many developing countries, now and for some time to come, agricultural protection does not materially impair their potential for economic growth. Moon, Pino, and Asirvatham (2016) also hold that protection could bring about agricultural growth in the economy.
To determine the relationship between agricultural growth and protection, some other relevant political economy factors or indicators are expected to guide the decision. Bratton and van de Walle (1994) viewed political economy variables as those factors taken into consideration as economic and political exigencies when analysing protectionism. Such political economy variables may include the state of food security or food self-sufficiency; the contribution of foreign exchange earnings from the sector's exports; the general economic welfare of farm producers; the GDP of the sector; budgetary allocation to the sector; and political or structural changes in the economy. In the same line of thought, Amin (1972) explained that different regimes reflected varying economic and political interests. It is expected that a nation whose food supply is grossly dependent on imports would be politically vulnerable. Pejout (2010) opined that food riots and violence became more prevalent in African cities following the rapid escalation of food prices in 2008, and this resulted in political instability and drove governments to re-analyse their agricultural policy. The general economic welfare of farmers is also a political indicator that determines the demand push for protection from voters/farmers. It is expected that when farmers are not making much profit, their demand for protection is likely to increase. Sometimes it is suspected that the political class purposely increases the agricultural budget for protection or subsidies in order to gain political support during elections. In line with this, Bratton and van de Walle (1994) opined that the political class or elite mobilize political support by using their public positions to distribute rent-seeking opportunities such as subsidies, interest-free loans or grants. A nation's GDP appears to be a quick tool in the hands of politicians for measuring the progress of policies and programmes. The GDP situation during a specific period or policy regime may guide the political class on whether the sector needs promotion or not. The rise and growth of agricultural protection coincide with the long-term decline in the share of agricultural labour and in the share of agriculture in overall GDP (Binswanger and Deininger, 1997).

Empirically, data from CBN (2018) show that in 1960 agriculture contributed about 64% to total GDP; in the 1970s, however, the contribution from agriculture to GDP decreased to 48%. The decrease continued to 20% in 1980 and 19% in 1985, and the share has remained weak to date.

In the opinion of Iwuchukwu and Igbokwe (2012), Nigeria's agricultural policies and programmes have undergone changes, especially in the post-colonial era. These changes, according to Amalu (1998), have been a mere reflection of changes in government and administration. Amalu emphasized that these policies and programmes vary only in nomenclature and organizational network. Arguably, little empirical research has bothered to investigate the claim that, despite these policies and reforms, which consumed billions of taxpayers' money, poverty and poor agricultural growth still prevail.
Olawepo (2010) opined that income from agricultural production is generally low. Also, the International Fund for Agricultural Development (IFAD, 2016) and NBS (2017) reported that despite all these efforts, poverty is still widespread in the country and has been on the increase. Also, CBN (2018) reported that the share of GDP from agriculture has remained between 11% and 21% from 1980 to date. In any country where a government intervention is promulgated in any sector, questions of accountability and appraisal arise: to what extent or degree does government support such a policy, and how much has the policy contributed to the growth of the sector? The main objective of this study was to examine the impact of agricultural protection on agricultural growth in Nigeria. The specific objectives were to (i) estimate the level of agricultural protection in Nigeria; (ii) determine the effects of agricultural protection on agricultural growth, and (iii) analyse the causal relationship between agricultural protection and agricultural growth in Nigeria. The null hypothesis tested was that agricultural protection does not have a statistically significant impact on agricultural growth in Nigeria.

Analytical framework
Studies on agricultural protection or other political economy issues have employed alternative measurement concepts which differ in their meanings, their uses and their degree of complexity. However, where the effects of government policies are not directly translated into domestic prices, these measures would provide only a partial indication of the extent of the government's protection interventions. The simplest and most widely used measurements of the protection level are the nominal rate of protection (NRP) and the nominal protection coefficient (NPC) (Tyers and Anderson, 1992; Krueger, Schiff and Valdés, 1991). Amin (1996) states that the nominal protection coefficient (NPC) is the ratio of the producer price (Pi) to the border price (Pf), with adjustments made for transport, storage and other costs. Also, the relationship between agricultural GDP and agricultural protection is akin to an output-input relation. While government stimulates agricultural production with protection policy instruments such as fertilizer subsidies, direct transfers, distribution of improved seedlings, etc., it is expected that these investments will translate into an increase in GDP.

The effect of agricultural protection on agricultural growth was analysed in the standard growth accounting framework. The validity or strength of the multiple linear regression method used in this study is based on the Gauss-Markov assumptions, in which the dependent variable (GDP) and the independent variables (political economy/macroeconomic variables) are expected to be linearly related, the estimators (β0, β1, β2, β3, β4, β5) are BLUE, and the error term has an expected value of zero, i.e. E(ε) = 0, which implies that on average the errors cancel each other out.

Model specification
The coefficient of the protection level in the agricultural sector is widely estimated using the nominal protection coefficient (NPC). According to De Gorter and Tsur (1991) and Krueger, Schiff, and Valdés (1991), the simplest and most widely used measurements of the price wedge are the nominal rate of protection (NRP) and the nominal protection coefficient (NPC) (Krueger, Schiff and Valdés, 1991; Miller and Anderson, 1992; Arene, 2008). The level of protection estimation equation is given in Eq. 1:

NPC = PD / PW   (1)

where PD is the domestic producer price and PW is the world price.
The measurement concepts refer to the protection level for a single agricultural commodity, but these can easily be aggregated to reflect overall protection of the agricultural sector. Secondly, to represent the relationships between agricultural output and its political economy determinants, the standard model of economic growth as applied by Owutuamor and Arene (2018) was followed. In the same line, the Solow (1956) growth model was adopted, in which the output of the agricultural sector, usually measured by the gross domestic product (GDP) of the sector, is represented in a production function where its growth depends on a number of factors X1, X2, X3, ..., Xn. The function is shown in equation 2:

Yt = f(X1(t), X2(t), X3(t), X4(t), X5(t))   (2)

where Y is output and X1(t), ..., X5(t) are the factors that determine the rate of output at time t. Assuming there is a steady state, say a linear relationship, as seen in standard output models, output is estimated by a multiple linear equation of the form in Eq. 3, which formed the basis for the estimation of the model in this study:

Yt = β1X1t + β2X2t + β3X3t + β4X4t + β5X5t   (3)

This study is also based on the assumption that there may be other influential factors affecting growth, but it is restricted to political economy variables as indicators for quick and easy policy considerations. In order to establish the mathematical function of this model, the intercept β0, the error term and the parameters of estimation β1, β2, ..., βn are added in Eq. 4:

Yt = β0 + β1X1t + β2X2t + β3X3t + β4X4t + β5X5t + εt   (4)

Choice of variables
The choice of political economy variables that could affect agricultural protection was conceptualised in line with the views of Moon, Pino, and Asirvatham (2016), who theorized that agricultural protection represents an effort by the political class to increase agricultural growth. Rooted in this realist view of the political economy relationship, the study selected only variables assumed to have strong political and economic implications for agricultural policy. These variables stand as indicators in the hands of the political class which guide their political and economic decisions on the timing, budgeting and degree of protection in the sector.

In line with this conceptualisation, Bratton and van de Walle (1994) viewed political economy variables as those factors taken into consideration as economic and political exigencies when analysing protectionism. Such political economy variables may include the state of food security or food self-sufficiency; general economic welfare or the GDP of the sector; budgetary allocation to the sector; and policy structure changes in the economy. In the same line of thought, Amin (1972) explained that different regimes reflected varying economic and political interests. Also, according to Akanegbu (2015), the pace of economic growth of Nigeria is best indicated by the trend of its gross domestic product (GDP). Following this, the variables of the regression model given in Eq.
4 are specified as follows:

Y (GDP): gross domestic product, the dependent variable, which represents the GDP share of the agricultural sector; an indicator or tool for making quick political decisions on adjustments or performance assessment in the economy;
X1: nominal protection coefficient (NPC), used as a proxy for measuring the degree of agricultural price protection in the economy;
X2: foreign direct investment (FDI) share to the agricultural sector, which represents the economic and political will of individuals to invest in the sector;
X3: budgetary allocation to the agricultural sector, which is an indicator of the political willingness of the ruling class to motivate or invest in the economy;
X4: policy structure changes (protection = 1, no protection = 0);
X5: form/type of government (civilian = 1, military = 0);
β0: intercept; t: time index; ε: stochastic error term; and β1, ..., β5: estimation coefficients.

A priori expectations: a priori, the following relationship, in line with Eq. 2, is expected, as shown in Eq. 5:

GDP = f(NPC)   (5)

In order to improve the linearity of the equation, Owutuamor and Arene (2018), following the advice in Obansa and Maduekwe (2013), note that there is a need to log-linearize all the incorporated variables in order to avoid multicollinearity and to revert the mean generating process. As such, the natural log is introduced into Eq. (4), thereby giving the econometric model in Eq. 6 (with the dummy variables X4 and X5 entering in levels):

ln Yt = β0 + β1 ln X1t + β2 ln X2t + β3 ln X3t + β4X4t + β5X5t + εt   (6)

The model's empirical strategy is based on the a priori expectations shown in Eq. 7:

β1 > 0, β2 > 0, β3 > 0, β4 > 0, β5 > 0   (7)

The empirical model specified above was estimated following the literature. First, the observed variables X1-X5 are fully accounted for in the equations, based on the assumption that agricultural growth does not happen without some factors acting on it (Inhwan, 2008; Moon, Pino, and Asirvatham, 2016). It is, however, expected that many factors could affect the growth of the sector, but this study was limited to the political economy perspective. The reason was to specifically examine the dynamics of government interventions in the sector. In line with the assumptions, Bratton and van de Walle (1994) opined that political economy variables are those factors taken into consideration as economic and political exigencies. Also, it is expected that an increase in the NPC in the economy will motivate growth in the sector; an increase in the FDI share to the agricultural sector will increase the volume of production and growth; and an increase in the budgetary allocation to the agricultural sector will spur investment and growth. Also, when the policy structure changes from exploitation or liberalization to a protection policy, many young investors would feel protected and invest more. Finally, it is expected that government under democracy would attract more investment and growth in the sector.

Hypotheses
The following two hypotheses were tested in the study: Ho1: the agricultural protection level does not have a significant effect on agricultural growth in Nigeria; and Ho2: there is no causal relationship between the agricultural protection level and agricultural growth. The null hypotheses, H0, were tested using the F-statistic at the five percent (5%) level of significance. The calculated F value (F-cal) was compared to the critical value of F (F-tab). Usually, if the value of F-cal is greater than that of F-tab at the 5% level of significance, the null hypothesis is rejected; otherwise, it is accepted. The F-statistic formula is given as Eq. 8:

F = (R²/k) / ((1 - R²)/(n - k - 1))   (8)

where R² is the coefficient of determination, k is the number of explanatory variables and n is the number of observations.
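A minimal sketch of the estimation steps specified above, using Python with pandas and statsmodels instead of Excel and SPSS, could look as follows (the file name and the column names domestic_price, world_price, gdp_agric, fdi_agric, budget_agric, policy and civilian are assumptions used only for illustration):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("nigeria_1980_2016.csv")    # one row per year, 1980-2016

# Eq. (1): nominal protection coefficient per year, here from one aggregated price pair
data["npc"] = data["domestic_price"] / data["world_price"]

# Log-linearise the continuous variables as in Eq. (6); the dummies stay in levels
for col in ["gdp_agric", "npc", "fdi_agric", "budget_agric"]:
    data["ln_" + col] = np.log(data[col])

# Eq. (6): multiple linear regression of agricultural growth on the political economy variables
model = smf.ols("ln_gdp_agric ~ ln_npc + ln_fdi_agric + ln_budget_agric + policy + civilian",
                data=data).fit()

# Overall significance: the F-statistic of Eq. (8) and its p-value are part of the OLS output
print(model.summary())
print("F =", model.fvalue, "p =", model.f_pvalue)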
The Study Area
The study area is officially known as the Federal Republic of Nigeria, here often referred to simply as Nigeria. The major exports of the country are crude oil (petroleum), natural gas, cashew nuts, skin and fur, tobacco, cocoa, cassava, rubber, food, live animals, aluminium alloys and other solid minerals (CIA World Factbook, 2018), while the major imports are refined petroleum products, wheat, rice, sugar, herbicides, fertilizers, chemicals, vehicles, aircraft parts, vessels, vegetable products, processed food, beverages, spirits and vinegar, equipment, machines, and tools (NBS, 2015). Despite its considerable agricultural resources, Nigeria is still a net importer of food and agricultural products in general (USAID, 2009); as such, the agricultural sector has been one of the least attractive sectors (Owutuamor and Arene, 2018) and has lost its leading contribution to Nigeria's GDP (CBN, 2018; FAO, 2012).

Data Specification
This work made use of secondary data. The annual time series data of agricultural output, measured by the share of agriculture in GDP, and of FDI inflows into the sector were collected from the CBN, spanning 1980-2015, while 2016 was extrapolated. The NPC was calculated from annual data on domestic prices collected from FAOSTAT and world prices collected from the World Bank. This study, covering a 37-year period from 1980 to 2016, employed descriptive statistics, aided by the use of Microsoft Excel, and inferential statistics in the form of multiple linear regression and the Granger causality test, which were applied as the estimation techniques in evaluating the relationships and causality between the dependent variable (agricultural growth) and the political economy variables (agricultural protection level, foreign direct investment inflows to agriculture, gross domestic product (GDP) inflows from the agricultural sector into the economy, political structure changes in national policy reforms and form of government in power).

The regression equation was estimated after carrying out pre-estimation tests for stationarity of the explanatory variables in order to avoid spurious regression. To eliminate the presence of autocorrelation in the model, this study applied the Augmented Dickey-Fuller (ADF) test to detect the stationarity of the variables at the 5% level of significance and also to identify the order of integration of the variables in the model.

For objective one, the level of protection in agriculture was estimated using the NPC model. For objective two, the effects of protection (as estimated in objective one) and other political economy tools on agricultural growth were determined using multiple linear regression with SPSS. For objective three, the causal relationships between agricultural growth and the independent variables (political economy variables) were determined using the Granger causality test with EViews software.

Pre-Estimation Techniques
Before the main analyses were conducted, the data were tested for unit roots. The ADF test was used under its traditional conditions, hypotheses and decision rules, as adopted by Nwosu and Okafor (2014). In a related study, Njoku, Chigbu and Akujobi (2015) also adopted the use of a unit root test on some residuals using the ADF test. The variables were further tested for endogeneity and corrections made. The variables were also subjected to a cointegration test to check for a long-term association.
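The pre-estimation and causality tests described above (ADF unit root test, Johansen cointegration test and Granger causality test) could be sketched as follows, again in Python with statsmodels and the illustrative column names used in the previous sketch:

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import coint_johansen

data = pd.read_csv("nigeria_1980_2016.csv")
data["npc"] = data["domestic_price"] / data["world_price"]
series = np.log(data[["gdp_agric", "npc", "fdi_agric", "budget_agric"]]).dropna()

# ADF unit root test on levels and first differences at the 5% significance level
for col in series.columns:
    stat, p_level, *_ = adfuller(series[col], autolag="AIC")
    stat_d, p_diff, *_ = adfuller(series[col].diff().dropna(), autolag="AIC")
    print(col, "levels p =", round(p_level, 3), "| first difference p =", round(p_diff, 3))

# Johansen cointegration test (constant term, one lagged difference) on the I(1) variables
johansen = coint_johansen(series, det_order=0, k_ar_diff=1)
print("trace statistics:   ", johansen.lr1)
print("5% critical values: ", johansen.cvt[:, 1])

# Pairwise Granger causality between the protection level and agricultural GDP (up to 2 lags);
# the second column is tested as a Granger cause of the first column
grangercausalitytests(series[["gdp_agric", "npc"]], maxlag=2)   # does NPC Granger-cause GDP?
grangercausalitytests(series[["npc", "gdp_agric"]], maxlag=2)   # does GDP Granger-cause NPC?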
The decision rule showed that prob(t-stat) > 0.05, which implied that the null hypothesis of no cointegration was rejected; we therefore concluded that the variables in the model have a long-term relationship.

To eliminate the presence of autocorrelation in the model, this study applied the Augmented Dickey-Fuller (ADF) test to detect the stationarity of the variables at the 5% level of significance and also to identify the order of integration of the variables in the model. The ADF test was based on the following regression in Eq. (8):

ΔZt = α0 + α1 t + γZt-1 + Σ(i=1..p) δi ΔZt-i + εt   (8)

The Johansen (1991) cointegration method was used to test for a long-term relationship between the variables. This involves looking for linear combinations of I(1) time series that are stationary. The procedure focuses on the rank of the Π-matrix, as shown in Eq. (9):

ΔZt = ΠZt-1 + Σ(i=1..k-1) Γi ΔZt-i + εt   (9)

where Z is an n x 1 vector of variables that are integrated of order one, often denoted I(1), and Π is the coefficient matrix whose rank gives the number of cointegrating relationships. If the Π-matrix has reduced rank, the endogenous variables depicted by Z are cointegrated, with α as the cointegrating vector; however, if the variables are stationary in levels, Π has full rank.

The results of the ADF and cointegration tests are shown in Tables 1 and 2. The results show that all the variables were stationary at their first difference (i.e. I(1)). The results in Table 2 confirm that the variables were cointegrated in the long run, as indicated by the normalized cointegration coefficient with the highest log-likelihood in absolute terms.

Estimates of the Agricultural Protection Coefficient in Nigeria
The level of protection in the agricultural sector in Nigeria (Table 3) shows an unsteady trend. In general, the average coefficient of protection, measured from selected major staple food and agricultural export commodities in Nigeria, had a mean value of 31.8%, a minimum of 19.6% (2009) and a maximum of 53.2% (2000). This suggests that Nigeria protected its agricultural sector. This result is in line with previous studies (Olper, 1998), which state that the patterns of agricultural policies in Africa suggest that developing nations strongly subsidize or protect agriculture.

Effects of Agricultural Protection and Other Political Economy Variables on Agricultural Growth
The results in Table 4 showed that about 33.5% of the variation in agricultural growth was explained by variation in the selected political economy variables, and this was statistically significant (p < 0.01). This means that the variables specified in the model significantly affected growth in the agricultural sector. As such, the null hypothesis, which states that the agricultural protection level does not have a significant effect on agricultural growth in Nigeria, was rejected, and the alternative hypothesis accepted.
The specific political economy factors that had significant effects on agricultural growth were the agricultural protection level, the nation's budgetary allocation to agriculture and the form of governance. These political economy variables are discussed below.

Agricultural protection level and agricultural growth
This research reveals that the agricultural protection level had a negative and significant effect on agricultural growth in Nigeria. For every one unit change in the agricultural protection level, there is a change of -280 units, showing a decrease in the agricultural growth measure (GDP share to agriculture). This is related to other findings by Saibu and Keke (2014) and Usman and Arene (2014), who inferred that some macroeconomic variables move in opposite directions. In related studies, Barrette (1999) and Inhwam (2008) argued that agricultural protection is capable of creating negative externalities for developing countries. Also, Ubogu (1988) concluded that a liberal trade regime with low tariffs and without quotas up to 1973 translated into export-led growth in the world economy and relative stability in Nigeria's export earnings and inflows of foreign capital.

The policy implication of this result is that funding meant for agriculture should rather be used for investment in other areas of the sector than for offering protection to farmers through subsidies and incentives. The sector is in urgent need of massive investment under liberal trade, since this study has shown that protecting the sector would do more harm than good. This result has also revealed that food policy involves not only activities in agricultural production, but also feeding industries with raw materials, food processing and manufacturing to reduce post-harvest losses, distribution and marketing of value-added products, and trade and consumption, all of which are capable of spurring industrialization.
Budgetary allocation to agriculture
This research reveals that the agricultural budgetary allocation had a positive and significant impact on agricultural growth in Nigeria. For every one unit change in the agricultural budget, there was a positive change of 2.99% in the GDP share to agriculture, showing a significant increase in agricultural growth. It is logical and expected that a unit increase in the budgetary allocation to agriculture causes a positive impact on the growth and productivity of agriculture. The result obtained in the study suggests that agricultural budgets have a positive impact on agricultural growth in Nigeria, and that Nigeria has to encourage increased investment and budgetary allocation to the sector. If investment and the budget are increased in the sector, this could support a vibrant agricultural sector capable of ensuring the supply of raw materials for the industrial sector as well as providing gainful employment for the teeming population. It would also address the economic problem of rural poverty, which is rampant, and reduce dependence on oil and food importation. This call needs urgent attention, especially now that Nigeria's poverty rate is reportedly alarming. Moreover, if the agricultural sector is encouraged with the introduction of improved technology, so as to diversify the economic base and reduce dependence on oil revenue in the bid to return the economy to the path of self-sustaining growth and industrialization, then it will enhance economic prosperity. Zietz and Valdes (1993) also identified that the size of the government's budget is likely to shift the supply curve of protection, adding that this is particularly true when agricultural protection is provided through subsidies or incentives. Therefore, caution should be taken to invest the funding in areas that require investment rather than in agricultural protection.

Causal Relationship between Agricultural Protection Level and Agricultural Growth
The null hypothesis, which states that there is no causal relationship between the agricultural protection level and agricultural growth, was tested using the Granger causality test, and the result is presented as follows. The result showed that the null hypotheses contained in Table 5 were rejected. This therefore means that the GDP share from agriculture causes significant changes in agricultural protection and that, in the short run too, the protection level in agriculture is significant in causing changes in the GDP growth share from agriculture. This is related to the findings of Obansa and Maduekwe (2013), Oloyede (2014), and Owutuamor and Arene (2018) that agricultural growth can be induced by a macroeconomic variable. Since GDP can be used to measure general economic welfare in an economy (Gardner, 2012), and since Bratton and van de Walle (1994) opined that the political class or elite mobilize political support by using their public positions to distribute rent-seeking opportunities such as subsidies, interest-free loans or grants, it follows that when GDP is low, politicians are likely to increase agricultural protection as a way of buying support from farmers, who are also the majority of the voters. Paradoxically, the increase in agricultural protection causes a negative change in the sector's GDP, as seen in the regression in Table 5.
CONCLUSIONS
This study was carried out to statistically analyse the impact of agricultural protection on agricultural growth, measured by agricultural output (GDP), in Nigeria. The variables were deliberately restricted to political economy indicators, as tools in the hands of the political class for managing the economy of the nation. The study describes the trends in agricultural growth and the level of protection in agriculture, and empirically analyses the effects of the agricultural protection level on gross domestic product inflows from the agricultural sector into the economy, together with other political economy and macroeconomic variables: the foreign direct investment (FDI) share to the agricultural sector, which represents the economic and political will of individuals to invest in the sector; the budgetary allocation to the agricultural sector, which is an indicator of the political willingness of the ruling class to motivate or invest in the economy; political structure changes, coded as a dummy variable (protection = 1, no protection = 0); and form/type of government (civilian = 1, military = 0).

The empirical results show that about 34 percent of the total variation in agricultural growth can be explained by agricultural protection and the other political economy variables considered in the model, whereas the remaining 66 percent is accounted for by the error term and other variables not included in the political economy model. There was a negative relationship between agricultural growth and protection in agriculture, and this was significant. The results also reveal that the budgetary allocation to agriculture had a positive and significant impact on agricultural growth in Nigeria between 1980 and 2016.

The findings in this study suggest strong policy implications, which are recommended as follows: the Nigerian government should rearrange its food policies to position agriculture in a more liberalized, commercial form as a serious business rather than as a means of addressing farmers' demands for subsidies and price effects. The government should also increase its budgetary allocation to the sector for the purpose of embarking on the massive construction of agro-industries, silos and other important capital projects that would cover many other aspects of agriculture, such as processing, storage, marketing and industrialization.

Figure 1: Graphical presentation of the protection level in agriculture from 1980 to 2016. Source: Author's Computation, 2018.

Note: X1, X2, X3, X4 and X5 are the factors that determine the rate of output. To account for the time factor in the model, following Mankiw, Romer and Weil (1992), output, i.e. agricultural GDP growth, becomes a function of government income (measured by the agricultural budget), foreign direct investment (FDI), the amount of protection in the sector (measured by the nominal protection coefficient, NPC), policy structure changes and form of government at time (t).

Table 1: ADF unit root test results. Source: computed output with EViews.
Table 2: Johansen cointegration test results.
Table 5: Parameter estimates for short-run pairwise Granger causality tests between the level of agricultural protection and growth in the sector.
7,116.2
2019-03-31T00:00:00.000
[ "Economics" ]
Technology of CO2 capture and storage

This paper studies carbon capture and storage in the context of carbon emissions. There are three main technical routes for CO2 emission reduction: pre-combustion capture, oxygen-rich combustion, and post-combustion capture. CO2 separation technology mainly includes the chemical absorption method, the solid adsorption method and the membrane separation method. Captured CO2 needs to be transported to a dedicated site for storage, which can generally be divided into geological storage, marine storage and chemical storage. Future carbon capture research will focus on cost savings and energy savings.

Introduction
The impact of greenhouse gases on the global climate is becoming increasingly apparent. The burning of fossil fuels has become the most important source of CO2 emissions, and the continuous increase of the CO2 concentration in the atmosphere will lead to climate change and global warming. Therefore, it is urgent to reduce CO2 emissions as much as possible. Power plants are currently the largest industrial source of carbon, so controlling CO2 emissions from power plants can effectively reduce the concentration of CO2 in the atmosphere. CO2 capture and storage (CCS) is an important way to reduce CO2 emissions; CCS refers to the separation of CO2 from industrial or energy-related emission sources, transportation to certain locations for storage, and long-term isolation from the atmosphere [1]. Current CO2 technology has some limitations: emission reduction measures consume energy in the process of equipment manufacturing, installation and operation, and the utilization efficiency of new energy sources, such as wind energy, solar energy, biomass energy and other low-carbon energy, is still low [2]. Therefore, it is very important to adopt reasonable CO2 emission reduction methods. CO2 capture and separation play an important role in the CO2 emission reduction process.

Post-combustion capture
Post-combustion capture technology captures CO2 from the flue gas generated after the fuel is burned in air. The CO2-generating parts of a power plant are mainly the boilers and internal combustion engines, which are large pieces of equipment and difficult to retrofit. Therefore, CO2 emissions can only be reduced by capturing CO2 from the flue gas after combustion. There are many power plants in China, so post-combustion capture has very wide applicability. However, the disadvantages of this technology lie in the large volume of flue gas, the low emission pressure and the low partial pressure of CO2 in the power plant flue gas, resulting in high investment and operating costs [4].

Pre-combustion capture
First, fossil fuels are gasified under certain pressure and temperature conditions in a steam environment. The CO in the resulting gas is converted into H2 and CO2 in a water-gas shift reactor, which transfers the fuel's chemical energy to H2. Then CO2 and H2 are separated by separation technology, and the separated H2 can be used as fuel for the power plant or for fuel cell development [5]. The CO2 stream produced by this pre-combustion capture method is relatively high in concentration and easier to sequester. The gasification of solid fuel and the storage of H2 as a by-product are the main technical bottlenecks of this technology.
Oxygen-enriched combustion Oxygen-rich combustion uses extremely pure oxygen in place of traditional air; together with some high-concentration CO2 recirculated after combustion, it takes part in the combustion reaction with the fossil fuel in the combustion chamber to generate a flue gas dominated by H2O and CO2 [6]. The water is then condensed into a liquid by physical cooling to separate the pure CO2. The advantage of oxygen-rich combustion is that the products are almost only H2O and CO2, so CO2 can be captured effectively. Meanwhile, due to the reduction of nitrogen content, the NOx content generated is also reduced, thus reducing the cost of NOx removal. The difficulty of this scheme lies in the high cost of obtaining pure oxygen. CO2 separation technique The separation technologies used for CO2 capture mainly include chemical absorption, solid adsorption, and membrane separation. Chemical absorption method The main method used for large-scale CO2 capture in industry is the alcohol amine method, among which the commonly used absorbents are ammonia, ethanolamine (MEA), diethanolamine (DEA), methyldiethanolamine (MDEA) and triethanolamine (TEA), etc. [7]. These substances capture CO2 mainly through the interaction of the amino groups in the amines with CO2. The chemical absorption process is shown in Fig. 1 [8]. After the flue gas enters the absorber, CO2 in the flue gas reacts with the absorbent to form intermediate compounds. After that, the liquid intermediate compound passes through the heat exchanger to exchange heat with the dilute solution coming out of the regenerator. It is then heated in the regenerator and decomposed into absorbent and CO2. Finally, the regenerated absorbent is transported to the liquid storage tank, forming a complete cycle. The pressure difference required for the solvent to return to the absorption chamber is provided by the booster pump [8]. The captured CO2 is condensed, dehydrated, compressed and transported to storage sites for storage. The advantage of this method is that it can produce a relatively pure CO2 stream, but the disadvantages are the high equipment cost and the large energy consumption. Solid adsorption method The solid adsorption method refers to the selective adsorption of CO2 by an adsorbent under certain conditions (temperature, pressure, etc.), and the desorption of CO2 by changing those conditions so as to achieve the purpose of separating CO2. Solid adsorbent materials mainly include activated carbon, molecular sieves, zeolites, MOF materials, etc. According to the adsorption conditions, the adsorption method can be divided into temperature swing adsorption (TSA) and pressure swing adsorption (PSA). The adsorption method mainly relies on van der Waals forces to adsorb CO2 on the surface of the adsorbent, and its adsorption capacity mainly depends on the structural characteristics of the adsorbent and the pressure and temperature conditions of the process. In the case of temperature swing adsorption, the control of temperature regulation is slow, the efficiency is low, and a large amount of adsorbent is needed, which leads to the high cost of this technology. Therefore, the pressure swing adsorption method is generally adopted in industry, while the temperature swing adsorption method is rarely used [9]. Solid adsorption technology can be divided into: (1) fixed bed technology. Fixed bed adsorption usually involves the use of packed beds containing granulated sorbent or structural sorbent.
These packed beds can be configured as single containers or used to simulate moving beds or rotating arrangements [10]. (2) Moving bed technology. A moving-bed adsorption system refers to any system involving the transport of granulated adsorbent between different containers used for adsorption; examples are moving beds and fluidized beds. Moving bed adsorption means that the fluid to be treated flows from top to bottom in the tower; when it comes into contact with the adsorbent, it is adsorbed. The saturated adsorbent is discharged continuously or intermittently from the bottom of the tower, and fresh or regenerated adsorbent is added at the top of the tower. Fluidized bed adsorption is an adsorption operation in which the fluid flows from bottom to top and the velocity of the fluid is controlled within a certain range to ensure that the adsorbent particles are suspended in a fluidized state but not carried out of the bed [11]. From the perspective of engineering application, solid adsorption eliminates the flow of liquid waste. In addition, solid adsorbents do not volatilize, which avoids the large energy loss associated with the regeneration of liquid absorbents. Membrane separation method Membrane separation refers to the selective separation of CO2 from gas through a membrane under certain conditions. According to the membrane material, there are mainly polymeric membranes, inorganic membranes, mixed-matrix membranes and other filtration membranes under development [4]. Membrane separation technology has the advantages of low investment, low energy consumption, small footprint and convenient maintenance, and has therefore attracted much attention in the field of CO2 capture. The transfer of CO2 in the membrane depends on solution-diffusion, and the transfer process consists of three steps: ① CO2 is adsorbed and dissolved on the upstream surface of the membrane; ② CO2 diffuses through the membrane under the action of the partial pressure difference across its two sides; ③ CO2 desorbs on the downstream surface of the membrane. The permeation rate mainly depends on the solubility and diffusion coefficient of CO2 in the membrane. At present, CO2 membrane separation technology has been developed to some extent, but the limited separation performance of the membranes restricts the development of this technology. CO2 sequestration After CO2 capture, the high-purity CO2 needs to be further treated and transported to a storage site. Due to the large CO2 emissions involved, permanent storage is generally required, so this step is also called sequestration. CO2 sequestration requires the selection of suitable sequestration sites together with monitoring, verification and risk assessment. In addition, environmental impact, cost and national and international legal norms must also be considered [11]. The main ways of storing CO2 are geological storage, marine storage and chemical storage. Geological storage: geological storage refers to storing CO2 underground using principles similar to those of natural gas geological storage. CO2 geological storage technologies can be broadly classified into the following categories: ① depleted reservoir storage; ② oil and gas reservoir storage (CO2 is stored in mined oil and gas fields using enhanced oil recovery technology and high-pressure gas recovery technology); ③ storage in unmineable coal seams (using enhanced coal-bed methane recovery technology); ④ deep saline aquifer sequestration; ⑤ sequestration in other rock formations, such as basalt and oil shale [12].
Ocean storage: The carbon sequestration capacity of the oceans far exceeds that of the terrestrial biosphere and the atmosphere. Ocean sequestration refers to liquefying the captured CO2, sending it to a designated sea area, and injecting it by pipeline to a certain depth of the ocean for storage. Marchetti [13] first proposed the idea of CO2 ocean storage in 1977. He proposed to inject CO2 collected in different ways into the deep sea in gaseous, liquid and solid form respectively, where it automatically forms very stable solid hydrates under the specific high-pressure and low-temperature conditions of the deep sea, so as to realize long-term sequestration and isolation of CO2. Chemical storage: metal oxides react with CO2 through chemical reactions to form inorganic carbonates, which are then permanently sealed. This approach is currently in the research stage, but small-scale applications have been successful. However, the application of this technology requires large amounts of energy and minerals, and proper disposal of the waste. Conclusion Aiming at the demand for energy conservation and emission reduction, this paper conducted research on CO2 capture and storage. The three technical routes of CCS technology are: (1) post-combustion capture, (2) pre-combustion capture, and (3) oxygen-rich combustion. The separation methods for CO2 mainly include chemical absorption, solid adsorption and membrane separation. The captured CO2 then needs to be sequestered, mainly through three methods: geological sequestration, marine sequestration and chemical sequestration. China's stage of development determines that it will continue to use fossil fuels on a long-term and large scale. At the present stage, CO2 capture and storage has made a considerable contribution to China's CO2 emission reduction. At present, the cost and energy consumption of CCS technology are still very high, so the key to the future development of CO2 capture and storage technology is to reduce energy consumption and cost.
2,706.2
2019-01-01T00:00:00.000
[ "Engineering" ]
Cavity-Backed Patch Filtenna for Harmonic Suppression A co-design consisting of a filtering antenna integrating a cavity-backed patch antenna and a low-pass coaxial filter is proposed for size reduction of the RF front-end. The cavity-backed patch antenna is developed to exhibit a broad impedance bandwidth and a unidirectional radiation pattern. The low-pass coaxial filter is implemented to suppress harmonic resonances and gain in the stop-band of the antenna and is embedded directly inside the antenna cavity to realize a compact, small-footprint co-designed filtering antenna structure. Two prototypes of the proposed filtering antennas, which integrate cavity-backed patch antennas with 4th and 5th order low-pass coaxial filters and have overall dimensions of 0.697λ0 × 0.585λ0 × 0.236λ0 and 0.697λ0 × 0.585λ0 × 0.320λ0 (where λ0 is the free-space wavelength at 3.15 GHz), respectively, are fabricated and measured. The experimental results show fractional bandwidths of 25% and 23.8% and gain suppression levels exceeding 11 dB and 22 dB in the stop-bands for the filtering antennas with the 4th and 5th order filters, respectively. The measured gain is more than 6.5 dBi in the pass-band for both filtering antennas. In addition, excellent agreement is obtained between the simulated and measured results. I. INTRODUCTION In recent years, there has been an increasing level of demand for multifunction, multipurpose RF front-end components with miniaturized designs for use in wireless communication systems [1]- [6]. For these applications, the antenna and filter are the two most essential components at the front end of a typical RF system, but they are usually larger than other RF components. Thus, there have been numerous studies that aimed to miniaturize the overall size of RF front ends by integrating an antenna and a filter into a single module, referred to as a co-designed filtering antenna [7]- [12]. Traditionally, RF components are usually connected via a standard 50-Ω interface, such as a coaxial connector, resulting in a bulky structure. In addition, in order to match the input and output impedances of each component with the 50-Ω interface, a matching network is also required inside each component, which increases the complexity, size, and total losses of the overall system. However, if the components can be integrated into a single module, the impedance at the interfaces between the components can also be used to optimize the performance of the overall system, hence eliminating the need for matching networks inside each component.
Therefore, a co-designed filtering antenna that combines an antenna and a filter into a single module is more advantageous than the traditional cascaded method in terms of complexity, size, and losses. Thus far, a variety of filtering antennas have been proposed using a planar microstrip line [7]- [9] and a substrate-integrated waveguide (SIW) [10]- [12]. In some of the reported designs, the antenna acted as a dispersive complex load for the filter, including coupled planar resonator filters connected to microstrip patch antennas [7] and a coupled SIW resonator filter connected to a planar coaxial collinear antenna [10]. In other filtering antennas, the antenna served as both a radiator and as the last resonator of the filter simultaneously. Examples include planar monopole antennas integrated with different types of coupled-line filters [8], [9] and coupled SIW cavity filters cascaded behind slot antennas [11], [12]. As metallic cavity structures have high power capacities and low insertion losses, they are widely used in the design of the high-Q filters installed in the base stations of wireless communication systems [13]. In relation to this, a 3-D metallic cavity-backed antenna has a feasible design that allows it to attain a unidirectional radiation pattern, as indicated in earlier studies [14], [15]. Therefore, a co-designed filtering antenna based on a metallic cavity structure would be an important advance [16]. In recent work [17], a broadband duplex-filtenna using a 3-D metallic cavity structure was presented. In this design, however, the antenna and filter are cascaded through a small section of a 50-Ω coaxial cable, resulting in a large footprint. In this communication, we present a compact, small-footprint cavity-backed filtering antenna. The filter is directly inserted inside the antenna cavity at the feed position and replaces the feeding part of the antenna. The 4th and 5th order low-pass coaxial filters are designed and integrated with wideband cavity-backed patch antennas to effectively suppress the harmonic resonances and gains in the stop-bands of the antennas. The proposed cavity-backed filtering antenna has a smaller volume as compared to a traditional antenna cascaded with a filter. A simulation is conducted using the ANSYS high-frequency structure simulator (HFSS). The performance of the proposed filtering antenna is verified through its fabrication and measurement. II. ANTENNA DESIGN A. CAVITY-BACKED PATCH ANTENNA DESIGN Figure 1 shows the geometry of the proposed cavity-backed patch antenna. The antenna consists of a rectangular patch of size pw1 × pl1, a Taconic TLE-95 (εr = 2.95, tanδ = 0.0028) substrate with a thickness of t = 1.27 mm, and an air-filled metallic cavity. The inner dimensions of the cavity are hw1 × hl1 × h2. The overall dimensions of the antenna are sw1 × sl1 × h1. The antenna is excited by a metal post inside the cavity through the coupled feeding technique. The metal post, with a diameter of v1, is assembled with an SMA connector and is located fp away from the center of the cavity-backed antenna. The antenna is designed to operate in the S-band at a design frequency of 3.15 GHz. The optimized parameters of the cavity-backed antenna are listed in Table 1. In the process of designing a cavity-backed patch antenna, a coupled feed is used to improve the bandwidth. An equivalent circuit, as shown in Figure 2, is introduced to describe the mechanism by which a wide bandwidth is obtained through the coupled feed.
The cavity can be modeled as a shorted waveguide with admittance of Y in when viewed from the aperture. Y in depends on the height of the cavity, and it represents an inductive property for an electrically short cavity model. The aperture part of the cavity is modeled as a circuit with admittance of Y AP , and Y AP is also inductive for a small aperture [18]. Because both the cavity and the radiation aperture are inductive, the resonance of the cavity-backed patch antenna is caused by the capacitance of the patch. However, it is difficult to implement a wide bandwidth with only resonance by the patch, and the bandwidth is expanded by generating additional resonance through a coupled feed. In the equivalent circuit of the coupled feed part, C g refers to the capacitance between the metallic post and the patch, C s denotes the capacitance between the metallic post and the cavity wall, and L P refers to the inductance due to the current flowing through the metallic post. In order to confirm the additional resonance caused by the coupled feed and the corresponding bandwidth expansion, the reflection coefficients according to the feeding method are compared, as shown in Figure 3. In Figure 3, direct feed refers to a feeding method that electrically connects the metal post and the patch by extending the length of the metal post. With the direct feed approach, it can be seen that only a single resonance arises, and as described above, this is the resonance caused by the patch. On the other hand, it can be seen that the use of the coupling feed causes additional resonance, which increases the bandwidth. A prototype is fabricated for validation, as shown in Fig Figure 4(b) shows the measured and simulated reflection coefficients and the peak gains of the proposed cavity-backed patch antenna. Good agreement is obtained between the simulation and the measurement. As shown in Figure 4, the antenna achieves a measured −10 dB reflection bandwidth of 25.12% (2.82-3.63 GHz). Within the −10 dB reflection bandwidth, a measured maximum gain of 7.3 dB is attained at 3.2 GHz. As also observed in Figure 4, there are several harmonic resonances that occur at the frequencies of 6 GHz, 9 GHz, and 12 GHz, which are multiples of the fundamental resonance frequency f 0 of 3 GHz. At these harmonic frequencies, the peak gains are also very high. Because harmonic resonance also transmits and receives signals in the unwanted frequency range, it is necessary to suppress the harmonic resonances with a lowpass filter. B. LOW-PASS COAXIAL FILTER DESIGN This subsection presents the design of a coaxial-type lowpass filter, which is then integrated with the designed cavitybacked patch antenna to suppress the harmonic resonances and gains in the stop-band of the antenna. Figure 5 presents an illustration of the configuration of the low-pass coaxial filter. The structure of the overall low-pass coaxial filter consists of inner and outer conductors, as depicted in Figure 5(a). The coaxial filter has a different order and characteristics depending on the structure of the inner conductor. The inner conductor consists of a cascading structure of capacitive and inductive steps. According to the number of steps, the order of the coaxial filter is determined. We selected 4 th order and 5 th order Chebyshev low-pass filter prototypes. Corresponding side-cut views are shown in Figures 5(b) and 5(c). 
As indicated, in the 4th order filter, the inner conductor has four steps, whereas the inner conductor of the 5th order filter has five steps. These characteristics can also be seen through the equivalent circuit shown in Figure 6. Figure 6 presents the equivalent circuits of a 4th order low-pass filter and a 5th order low-pass filter. The inductances and capacitances form a ladder network, with each inductance and capacitance corresponding to one step of the inner conductor described above. In the equivalent circuit, each element has a value defined as a g-value. In general, the g-values are normalized such that the source resistance g0 becomes 1. In the equivalent circuit of an n-th order low-pass filter, gi (i = 1 to n) represents the value of an inductance or capacitance, and g0 and gn+1 indicate the input impedance and the output impedance, respectively. In order to design the filter, it is necessary to calculate the element values g that implement the Chebyshev response characteristics [13]. In this paper, both low-pass coaxial filters are designed to have a cutoff frequency of 4 GHz and a reflection coefficient of less than -15 dB in the pass-band. The input and output impedances of the filter are both set to 50 Ω. The g-values for the 4th order Chebyshev response are g0 = 1, g1 = 1.1955, g2 = 1.3001, g3 = 1.8626, g4 = 0.8345, and g5 = 1.4326, and the g-values for the 5th order Chebyshev response are g0 = 1, g1 = 1.2328, g2 = 1.3591, g3 = 2.0599, g4 = 1.3591, g5 = 1.2328, and g6 = 1. With the calculated g-values, the lengths of the inductive and capacitive steps constituting the inner conductor of the filter are computed from (1) and (2) [13]. Here, gL and gC are determined by the g-values calculated above: if a g-value is used to calculate the length of an inductive step, it replaces gL in (1); in contrast, if it is used to calculate the length of a capacitive step, it replaces gC in (2). R0 and fc represent the reference impedance and the cutoff frequency, respectively. Zlow and Zhigh are the impedances of the capacitive and inductive steps; Zlow = 10 Ω and Zhigh = 100 Ω are used in this design. Based on the calculated parameters, an additional tuning process is also needed to adjust the parameters so as to attain better performance. The final parameters of the 4th order and 5th order low-pass coaxial filters are shown in Table 2. Figure 7 illustrates the response characteristics of the designed low-pass coaxial filters. As observed, both filters achieve an S11 value of less than -15 dB over the entire pass-band. The S21 results show that both filters have a cutoff frequency of 4 GHz. These results are consistent with the design specifications; hence, the low-pass coaxial filters are shown to be well designed. In addition, as depicted in Figure 7, the higher the filter order, the sharper the skirt characteristics and the better the filtering characteristics. Clearly, the 5th order filter has better characteristics than the 4th order filter. However, as the order of the filter increases, the overall length of the filter also increases. Therefore, the order should be selected in consideration of the size and characteristics of the filter. In this work, the 4th order and 5th order filters are selected to design the filtering antennas.
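Expressions (1) and (2) are not reproduced in the text above. Purely as a hedged illustration, the sketch below combines the standard Chebyshev low-pass prototype recursion with the common textbook stepped-impedance length approximation (electrical length ≈ g·R0/Zhigh for an inductive step and g·Zlow/R0 for a capacitive step, on a Teflon-filled line with εr ≈ 2.1); the authors' exact equations (1)-(2) may differ. The prototype recursion does reproduce the quoted g-values (e.g., g1 = 1.1955 and g5 = 1.4326 for the 4th order design with a −15 dB pass-band reflection coefficient).

```python
# Illustrative sketch: Chebyshev low-pass prototype g-values plus hypothetical step lengths.
# The length formula is the generic stepped-impedance textbook approximation (an assumption),
# not necessarily the paper's equations (1)-(2).
import math

def chebyshev_g(n, return_loss_db=15.0):
    """Standard Chebyshev low-pass prototype element values g1..g(n+1)."""
    gamma_sq = 10 ** (-return_loss_db / 10)             # |Gamma|^2 in the pass-band
    ripple_db = -10 * math.log10(1 - gamma_sq)          # equivalent pass-band ripple
    beta = math.log(1 / math.tanh(ripple_db / 17.37))
    gam = math.sinh(beta / (2 * n))
    a = [math.sin((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    b = [gam ** 2 + math.sin(k * math.pi / n) ** 2 for k in range(1, n + 1)]
    g = [2 * a[0] / gam]
    for k in range(2, n + 1):
        g.append(4 * a[k - 2] * a[k - 1] / (b[k - 2] * g[-1]))
    g.append(1.0 if n % 2 else 1 / math.tanh(beta / 4) ** 2)   # load g_{n+1}
    return g

def step_length_mm(g, inductive, f_c=4e9, r0=50.0, z_low=10.0, z_high=100.0, eps_r=2.1):
    """Hypothetical physical step length from the generic stepped-impedance relation."""
    beta_l = g * r0 / z_high if inductive else g * z_low / r0   # electrical length (rad)
    v = 3e8 / math.sqrt(eps_r)                                  # phase velocity in Teflon
    return beta_l * v / (2 * math.pi * f_c) * 1e3               # length in mm

g4 = chebyshev_g(4)   # ~[1.1955, 1.3001, 1.8626, 0.8345, 1.4326], matching the quoted values
print([round(v, 4) for v in g4])
print(round(step_length_mm(g4[0], inductive=False), 2), "mm if g1 realizes a capacitive step (illustrative)")
```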
C. CAVITY-BACKED FILTERING ANTENNA DESIGN With the designs of the antenna and filter shown in subsections A and B, respectively, the cavity-backed co-designed filtering antenna is implemented. Figure 8 depicts the configuration of the proposed cavity-backed co-designed filtering antenna. The low-pass coaxial filter is directly inserted inside the antenna cavity at the feed position, replacing the feeding part of the cavity-backed patch antenna (see Figures 8(a) and 8(b)). The bottom part of the metal post is reduced so that the filter can be inserted, while its top part remains. The input of the filter is connected by a standard 50-Ω SMA connector, whereas the output of the filter is connected to the top part of the metal post to excite the rectangular patch antenna. The feeding signal is sent to the metal post through the low-pass coaxial filter. This signal is then coupled to the rectangular patch, and radiation occurs. The output impedance of the filter is optimized to attain better impedance matching between the antenna and the filter without 50-Ω constraints. Two filters, in this case the 4th order and 5th order low-pass coaxial filters, are used for integration with the cavity-backed patch antennas. The configurations of these two filters are illustrated in Figures 8(c) and 8(d). For the fabrication of the coaxial filters, to maintain the distance between the inner and outer conductors, the gaps between the conductors must be filled with a material, such as a dielectric. For this reason, we use a hollow cylinder of Teflon, which has a dielectric constant of 2.1 and is inserted between the inner and outer conductors of the filters. In addition, the Teflon is extended by half of the length of the metal post to hold the metal post. The design parameters are optimized to achieve optimum performance. The optimized parameters of the filtering antennas are shown in Table 3. For the filtering antenna using the 4th order filter, the design parameters are pl2 = 26.9 mm, pw2 = 27 mm, h3 = 21.2 mm, h4 = 12.1 mm, fp = 5.3 mm, hl2 = 53.1 mm, and hw2 = 41.1 mm, whereas for the filtering antenna using the 5th order filter, the design parameters are pl2 = 26.7 mm, pw2 = 29.2 mm, h3 = 29.2 mm, h4 = 12.1 mm, fp = 4.5 mm, hl2 = 52.1 mm, and hw2 = 43.3 mm. Figure 9 shows a comparison of the simulated reflection coefficients and peak gains of the cavity-backed patch antenna and the cavity-backed co-designed filtering antennas. As can be observed in Figure 9(a), the cavity-backed patch antenna without a filter has a −10 dB impedance bandwidth of 2.82-3.63 GHz (25.12%). Meanwhile, the cavity-backed co-designed filtering antennas with the 4th order low-pass coaxial filter and the 5th order low-pass coaxial filter achieve −10 dB impedance bandwidths of 2.84-3.61 GHz (23.88%) and 2.79-3.49 GHz (22.29%), respectively. In addition, as indicated in Figure 9(a), the harmonic resonances that occur at the frequencies of 6 GHz, 9 GHz, and 12 GHz in the cavity-backed patch antenna are wholly suppressed due to the integration with the low-pass filters, demonstrating the feasibility of the co-design. Figure 9(b) presents the peak gains of the antennas with and without filters.
FIGURE 11. Fabrication model: (a) cavity-backed co-designed filtering antenna with the 4th order low-pass coaxial filter before assembly, (b) cavity-backed co-designed filtering antenna with the 5th order low-pass coaxial filter before assembly, and (c) after assembly.
All three antennas have similar gains in the pass-band. In the stop-band, the gain of the co-designed filtering antennas is significantly reduced as compared to that of the cavity-backed patch antenna without a filter. The gain reduction exceeds 11 dB and 22 dB for the co-designed filtering antennas with the 4 th and 5 th order low-pass coaxial filters, respectively. In addition, the skirt and filtering characteristics of the design integrating the 5 th order filter is more improved than those of the design integrating the 4 th order filter. Figure 10 presents E-field distribution of the cavity-backed antenna in the form of a cross-section. A simulation is conducted to compare the radiation characteristics with and without an integrated filter in the harmonic band (8.65 GHz). Figure 10(a) shows the E-field distribution of a cavity-backed antenna without a filter, where it can be seen that the intensity of the field radiated from the antenna is high. On the other hand, as shown in Figures 10(b) and 10(c), when a filter is integrated into the cavity-backed antenna, there is very little E-field emitted from the antenna. In addition, the field intensity of the cavity-backed antenna with the 5 th order filter (see Figure 10(c)) is weaker than that of the cavity-backed antenna with a 4 th order filter (see Figure 10(b)). Through the results shown in Figure 9 and Figure 10, it is confirmed that the proposed filter-integrated cavity-backed antenna has excellent filtering characteristics in the stop band and radiation characteristics in the pass band. III. EXPERIMENTAL RESULTS AND DISCUSSION Two prototypes, i.e., the cavity-backed co-designed filtering antennas with the 4 th order and 5 th order low-pass coaxial filters, were fabricated for experimental validation. Figure 11 shows photographs of these two prototypes. The overall dimensions of the prototype co-designed filtering antenna with the 4 th order filter are 66.4 mm × 55.7 mm × 22.47 mm (corresponding to 0.697λ 0 × 0.585λ 0 × 0.236λ 0 ; λ 0 is the wavelength at the designed frequency of 3.15 GHz), identical to the overall dimensions of the prototype cavity-backed patch antenna (shown in Section II.A). This indicates that proposed co-designed filtering antenna has a lower profile as compared with the traditional cascading antenna with a filter. The overall dimensions of the prototype co-designed filtering antenna with the 5 th order filter are 66.4 mm × 55.7 mm × 30.47 mm, corresponding to 0.697λ 0 × 0.585λ 0 × 0.320λ 0 . The height of this prototype is slightly increased because a longer filter with a higher order is implemented. Figure 12 depicts the simulated and measured results of the reflection coefficients and gains for the two prototype antennas. In both designs, resonance does not occur in bands other than the S-band. The simulation and measurement results are in good agreement. The cavity-backed co-designed filtering antennas with 4 th and 5 th order low-pass coaxial filters achieve measured −10 dB impedance bandwidths of 2.8-3.6 GHz (25%) and 2.76-3.48 GHz (23.08%), respectively. As also observed in Figure 12, the maximum gain measured in the pass-band is more than 6.5 dBi for both prototypes. In addition, in the stop-band, the measured gain suppression exceeded 11 dBi and 22 dBi for the cavity-backed codesigned filtering antennas with 4 th and 5 th order low-pass coaxial filters, respectively, as compared to the standalone cavity-backed patch antenna. 
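As a quick consistency check of the quoted electrical sizes (simple arithmetic on the figures already given above): λ0 = c / f0 = (3 × 10^8 m/s) / (3.15 GHz) ≈ 95.2 mm, so 0.697λ0 ≈ 66.4 mm, 0.585λ0 ≈ 55.7 mm, 0.236λ0 ≈ 22.5 mm and 0.320λ0 ≈ 30.5 mm, in agreement with the stated 66.4 mm × 55.7 mm × 22.47 mm and 30.47 mm prototype dimensions.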
Figure 13 compares the measured radiation patterns of a cavity-backed patch antenna with and without a coaxial filter. Regardless of the integration of the filter, the radiation pattern does not change within the passband and has unidirectional radiation characteristics. Hence, it can be seen that the filter integrated in the cavity does not have much of an effect on the radiation pattern. Figure 14 and Figure 15 show the radiation patterns when a 4 th order low pass filter and a 5 th order low pass filter are integrated into the cavity, respectively. Simulation and measurement results are shown, and the intensity of co-polarization is more than 24.98 dB higher than that of cross-polarization in the boresight direction. On the yz−plane, the radiation pattern is slightly tilted in the +ydirection due to the off-center feed position. The proposed filtering antennas are designed to have low back-radiation for radar applications. Both types of filtering antennas have a front-to-back ratio of 16.9 dB or more, as can be seen in the radiation pattern graph. In addition, the 3 dB beamwidth at the center frequency is 79 • in the E-plane direction and 84 • in the H-plane direction in both cases. Accordingly, the gain reduction is expected to be insignificant when steering the beam within the ±40 • range. Table 4 shows a comparison between the proposed filtenna design and previous filtennas in the literature. Compared to the planar filtenna designs in [8] and [9], our designs have much higher gains, wider impedance bandwidths, and unidirectional radiation patterns, all of which are desirable for radar applications. Although the SIC-backed slot filtennas [11] [12] have a lower profile, our filtennas achieve a wider impedance bandwidth by approximately three times and occupy a smaller antenna footprint. Compared to a previous cavity-backed slot filtenna [16], the size of our design is smaller overall. Specifically, to realize 5 th order filtenna design [16], earlier researchers use two cavities, resulting in a very high-profile structure. Meanwhile, we only use one cavity and insert a 5 th order low-pass coaxial filter inside the cavity, achieving a low-profile 5 th order filtenna design. Therefore, it is concluded that the proposed filtering antenna can outperform other filtering antennas. IV. CONCLUSION A compact, small-footprint cavity-backed co-designed filtering antenna integrating a broadband cavity-backed patch antenna and a low-pass coaxial filter was designed, fabricated, and tested. Prototype 4 th and 5 th order low-pass coaxial filters were developed for integration. The filtering antenna with the 4 th order filter has dimensions of 0.697λ 0 × 0.585λ 0 × 0.236λ 0 , identical to those of standalone cavitybacked patch antenna, but exhibits gain suppression of more than 11 dBi in the stop-band. When integrated with a 5 th order filter, the filtering antenna has dimensions of 0.697λ 0 × 0.585λ 0 × 0.320λ 0 and exhibits better gain suppression of more than 22 dBi in the stop-band. The measured fraction bandwidths of these two filtering antennas are 25% and 23.8%, respectively. Moreover, both filtering antennas achieve a measured gain of more than 6.5 dBi throughout the pass-band. Therefore, the proposed filter-integrated cavitybacked antenna is an excellent candidate for both size miniaturization of the RF front ends and for harmonic suppression.
5,465.6
2020-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Virtual Simulation Design Facing Smart Payment Screen on the Background of Artificial Intelligence With the rapid development of virtual simulation technology, virtual simulation has been used in education, transportation, anti-theft, and other fields, but it has not yet been applied in the field of intelligent payment. This study designed an intelligent payment screen based on virtual simulation technology and evaluated the relevant performance. The evaluation results found that, compared with the traditional payment screen, payment with the smart payment screen showed the fastest improvement in security and the slowest improvement in usability, an increase of 4.2%; the youth group believed that the convenience of use of the smart payment screen based on virtual simulation technology had improved the fastest, while the elderly group believed that its ease of use had improved the slowest, an increase of 9.4%; the average payment time at each place decreased, with place 4 having the longest average payment time and place 6 the shortest, and it was concluded that the higher the payment success rate, the shorter the payment time; in terms of users' preference for the smart payment screen, it was concluded that 18:00 was the peak period for users to use the smart payment screen; in terms of the risk assessment of using smart payment screens, it was concluded that, compared with traditional payment screens, operational risk was reduced by 3.88%, credit risk was reduced by 4.67%, liquidity risk was reduced by 5.06%, and settlement risk was reduced by 6.01%. The design of the smart payment screen makes the user's payment more secure, and the payment risk is greatly reduced. Introduction The era of artificial intelligence has arrived, and artificial intelligence technology is used in speech recognition, autonomous driving, and smart cities in daily life. With the popularization of computer applications, artificial intelligence technology has also been applied and promoted. People are delighted by the better life brought by artificial intelligence, and at the same time they are eager for artificial intelligence to bring more convenience to life. In the field of payment, users also have new expectations of smart payment: people expect payment to be more secure and hope to see new changes. In recent years, mobile payment (MP) has become popular, and many different payment screens have emerged on the market. However, most current MPs have defects, such as poor security and weak systems, and cannot meet people's daily payment needs. Moreover, in the environment of artificial intelligence, virtual simulation technology is also maturing and has been applied in various fields. In order to improve the security of smart payment and remedy the defects of current market payment, it is urgent to design a smart payment screen based on virtual simulation technology. At this stage, payment systems and their technology are a hot topic, and many scholars have conducted research on intelligent payment. The HHS Centers for Medicare & Medicaid Services described changes in the amounts and factors used to determine payment rates for Medicare services paid under the Medicare Hospital Outpatient Prospective Payment System and Medicare Ambulatory Surgery Center payment systems [1]. Layton et al.
developed a measure of the efficiency consequences of price and benefit distortions under a particular payment system, building on previous research on the evaluation of health plan payment systems [2]. Feng et al. proposed a blockchain-based privacy-preserving payment mechanism, which enabled data sharing while ensuring the security of sensitive user information. This mechanism introduced a registration and data maintenance process based on blockchain technology, which realized the payment audit of privileged users while ensuring the anonymity of user payment data [3]. Mondal et al. introduced a generalized order-level inventory system that fully allowed late payments within various transaction credit intervals [4]. Barkhordari et al. conducted an experimental investigation of important factors influencing trust in Iran's electronic payment system. Research has found that both perceived security and trust have a positive impact on the use of electronic payment systems [5]. Tsao et al. explored the impact of dynamic discount-based credit payments on supply chain network design issues, also considered the time value of money, and applied cash flow discounting to formulate a model that determines the optimal replenishment cycle, selling price, and scope of influence for distribution centers while maximizing the present value of total profits [6]. Khalilzadeh et al. provided a comprehensive model to study the determinants of near field communication (NFC)-based MP technologies in the restaurant industry. The results showed that the proposed model provided about 20% more explanatory power and predictive accuracy than the original unified theoretical model of technology acceptance and use, and proved the impact of risk, safety, and trust on customers' intentions to use NFC-based MP technology in the restaurant environment [7]. To summarize, payment is an issue worthy of study, but there is currently no research on smart payment screens in the market. Virtual simulation technology has played an indispensable role since the rapid development of information technology, and many scholars have conducted in-depth research on it. Cagnazzo used Sim&Size software to conduct virtual simulations to determine whether there would be an impact on technical, angiographic, and clinical outcomes after WEB treatment. It was found that virtual simulation with Sim&Size software appeared to be helpful in selecting an appropriate braided endo-bridge device for aneurysm treatment, thereby reducing intervention time, radiation dose, number of undeployed devices, and the need for corrective interventions [8]. Hudder et al. compared students studying neonatal assessment using virtual simulations with students in a traditional laboratory setting to assess students' knowledge, skills, satisfaction, self-confidence, and clinical judgment. Results showed that students' knowledge acquisition of neonatal assessment was greater when content and demonstrations were provided through virtual simulations, but student satisfaction and self-confidence were higher due to the opportunity to participate in live laboratory activities [9]. Mackenna et al. used an exploratory descriptive design for students to complete virtual simulations and then conducted self-reporting activities. The results showed that students demonstrated different levels of reflective thinking through self-reporting [10]. Mht et al. 
investigated whether the inclusion of virtual simulations in the required first-year self-care therapy course would affect the frequency of interactions, self-reported student confidence, and the performance of students reported by recipients during the second-year Introduction to Community Pharmacy Practice Experiences [11]. Fogg et al. used the clinical judgment scale to assess the effect of virtual simulations on students' self-perceived clinical judgment abilities. The results showed that virtual simulation was beneficial to students' learning and to the development of their clinical judgment ability [12]. Padilha et al. assessed the ease and usefulness of clinical virtual simulations and the intention of emergency nurses to use clinical virtual simulations to improve their clinical reasoning skills for lifelong learning [13]. Verkuyl et al. delved into the advantages of self-reporting immediately after a virtual game simulation and the value of maximizing reflection by adding group debriefing [14]. Therefore, most existing research applied virtual simulation technology to medical treatment, education, and similar areas; its application in the field of smart payment has rarely been reported. Based on this, this article adopts virtual simulation technology for the design of an intelligent payment screen. In this article, simulation technology is used to design an intelligent payment screen, and the security, usability, reliability, and convenience of the intelligent payment screen are compared with those of the traditional payment screen. At the same time, a survey of users' preference for the smart payment screen was conducted, and the possible risks of using the smart payment screen were evaluated, in order to provide more possibilities for intelligent payment, enrich payment methods, and greatly improve the convenience of payment. Algorithm of Virtual Simulation Technology The first-order differential equation is set as dy/dt = f(t, y) with initial value y(t0) = y0. (1) Fourth-order Runge-Kutta method. With a fixed step size h, the vector form of the fourth-order Runge-Kutta method is k1 = f(tn, yn), k2 = f(tn + h/2, yn + h k1/2), k3 = f(tn + h/2, yn + h k2/2), k4 = f(tn + h, yn + h k3), and y(n+1) = yn + (h/6)(k1 + 2k2 + 2k3 + k4). A Taylor series expansion of y(n+1) at tn is performed; for dy/dt = λy one has y^(i) = λ^i y, and substituting this into the expansion gives the iteration coefficient R(hλ) = 1 + hλ + (hλ)^2/2! + (hλ)^3/3! + (hλ)^4/4!. The stability condition of this formula is that the absolute value of this iteration coefficient is less than 1 (a small numerical sketch of the scheme is given at the end of this passage). Virtual Simulation Design for Smart Payment Screen 3.1. Payment Tokenization. Payment tokenization technology is a security technology developed to prevent information leakage. That is to say, it is a technical means of converting the sensitive information of financial institution users into a corresponding token (Token), and using the Token in place of the original sensitive information for information exchange during payment [15,16]. This technology can effectively reduce the risk of sensitive information leakage, prevent financial losses, and protect the legitimate rights and interests of consumers. The smart payment screen designed in this article uses payment tokenization technology to ensure the payment security and convenience of those who use the smart payment screen. The bank card data and payment data stored in the terminal are encrypted and protected by the security module, and the user's identity is verified by biometric features so that manual input of payment data can be avoided.
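The sketch below is a minimal, self-contained illustration of the fourth-order Runge-Kutta step and the |R(hλ)| < 1 stability check described in the algorithm section above; it is generic numerical code, not the paper's simulation software.

```python
# Minimal illustration of the classical fourth-order Runge-Kutta step and its
# linear stability factor; generic code, not the paper's implementation.
import numpy as np

def rk4_step(f, t, y, h):
    """One RK4 step for dy/dt = f(t, y); y may be a vector."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def amplification_factor(z):
    """R(z) for the test equation dy/dt = lambda * y, with z = h * lambda."""
    return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# Example: integrate dy/dt = -2y from y(0) = 1 and check the stability of the chosen step size.
lam, h, y, t = -2.0, 0.1, np.array([1.0]), 0.0
f = lambda t, y: lam * y
for _ in range(10):
    y = rk4_step(f, t, y, h)
    t += h
print("y(1.0) ~", y[0], "exact:", np.exp(lam * t))
print("stable step?", abs(amplification_factor(h * lam)) < 1)
```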
3D modeling technology is used to build an immersive payment scene and to design a standard payment solution that can not only improve the user experience but also ensure security, so as to address the industry's pain points. The specific design scheme is shown in Figure 1. The payment tokenization system can effectively guarantee the payment security of those who use the smart payment screen: the user presents the token to the merchant, and the token then flows to the card issuer through different paths [17,18]. The merchant's token flows to two parties: on one path, the token is requested from the token requester, passes through the token management library, and finally flows to the card issuer under the token guarantee; on the other path, it flows through the token transfer to the acquirer, then to the card organization, and finally to the card issuer in the same way as the other path. The payment tokenization system contains several important roles: (1) persons using smart payment screens: they hold a smart payment screen, and the token can flow from them to the card issuer. (2) Card issuer: it owns the account relationship with the cardholder and is responsible for authorization and ongoing risk management in the payment token ecosystem. (3) Merchant: it may be the recipient of the payment token in place of the PAN, or it may be the token requester. (4) Acquirer: all transactions are handled in the same way as now, including authorization authentication, payment consumption, clearing, and exception handling. 3.2. Architecture Design of Smart Payment Screen. The design of the smart payment screen needs to meet users' security and practical needs, and it involves not only contracting institutions and card organizations but also virtual reality (VR) equipment [19]. The specific architecture design of the smart payment screen is shown in Figure 2. 3.2.1. Institution. Institutions include banking institutions and third-party institutions. Both are security carrier issuers, and their main functions are: controlling the master key of the primary security domain, assisting in the installation of "auxiliary security domains," and managing the life cycle. Card Organization. The card organization includes three parts: trusted service, trust service provider (TSP), and online payment platform. Trusted services are interconnected with trusted service systems, such as operators and terminal manufacturers, to provide banks with services such as over-the-air card issuance and security module life cycle management; the TSP provides payment tokenization services; online payment platforms provide online payment access, verify payment data, and accept transactions. VR Equipment. The security module is used to provide secure storage of financial information and secure access control, including the issuance and update of symmetric and asymmetric encryption algorithms and digital certificates, the dynamic issuance, deletion and update of bank financial data, the security isolation of financial data, and the provision of APIs for secure transactions. The security module includes four modules: access control, secure transmission, secure interface, and secure storage. The main component of the VR application part is the VR payment control.
The VR payment control provides users with payment portals in VR scenarios, allowing users to select payment cards and discounts, use biometrics to authenticate user identities, dynamically generate ciphertext of payment data, and verify payment data and accept transactions through online payment platforms. 3.3. Security Module. The original intention of designing an intelligent payment screen based on simulation technology in this article is to improve the security of payment, so the security module is carefully designed. The security module includes access control, key exchange, a secure interface, and secure communication [20]. The specific architecture diagram is shown in Figure 3. Access control ensures the use and management of data resources within the legal scope. This design effectively protects the rights and interests of the visiting subject, protects its legal status from being infringed, and also protects the user's payment security. Unknown and illegal software and hardware resources cannot intrude into the payment system of the visiting subject without reason, and the user's access security is greatly guaranteed. Key Exchange. Simply put, key exchange uses an asymmetric encryption algorithm to encrypt a symmetric key to ensure the security of transmission, and then uses the symmetric key to encrypt the data (a minimal sketch of this wrap-then-encrypt pattern is given at the end of this passage). The key exchange guarantees the user's payment security to a large extent: the user's payment data can be strongly encrypted through this method, thus preventing attacks by data thieves. Security Interface. The security interface ensures the security of user data, and prevents malicious calls to the interface by third parties and requests from being modified during transmission. The design of the security interface makes users feel more secure in payment and in the use of the smart payment screen. Secure Communication. Secure communication guarantees the security of the channel between the communication endpoints, with both confidentiality and integrity. Confidentiality is guaranteed with encryption, and integrity is guaranteed with message authentication codes. This not only ensures that user data are private and confidential and will not be viewed by eavesdroppers using network monitoring software, but also protects user data from unauthorized or malicious modification during transmission. Setting of the Virtual Simulation Design Experiment for the Smart Payment Screen This article designs an intelligent payment screen based on virtual simulation technology and puts it into use. Using the VR payment scene, the safety, usability, and reliability of payment with the smart payment screen were compared with those of payment using the traditional payment screen, in order to test whether payment with the smart payment screen can achieve the effect people expect and whether it can meet the needs of large-scale use in the market. The test results are recorded in Table 1 and Figure 4. According to Table 1 and Figure 4, it can be seen that the payment security, usability, and reliability of using the smart payment screen are improved compared to payment using the traditional payment screen. Security has increased by 19.2%, which is the fastest improvement, reliability has increased by 11.1%, and usability has increased by 4.2%, which is the slowest improvement.
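The following sketch illustrates the wrap-then-encrypt key-exchange pattern described in Section 3.3 above, using the widely available Python cryptography package (RSA-OAEP to wrap an AES key, AES-GCM for the payment data). It is a generic illustration under those assumptions, not the paper's actual security-module implementation; the payload is a placeholder.

```python
# Generic illustration of the "asymmetric key wraps a symmetric key" pattern from Section 3.3;
# not the paper's security-module code.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Receiver's asymmetric key pair (e.g., held by the online payment platform).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender side: encrypt payment data with a fresh symmetric key, then wrap that key.
payment_data = b'{"card_token": "tok_demo", "amount": "12.30"}'  # placeholder payload
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, payment_data, None)   # confidentiality + integrity
wrapped_key = public_key.encrypt(
    aes_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Receiver side: unwrap the symmetric key, then decrypt and authenticate the data.
recovered_key = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))
```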
It can be seen that the use of the smart payment screen based on virtual simulation technology can greatly improve the security of payment, but the usability of the smart payment screen needs to be further improved to meet the needs of more consumers. The most important purpose of the design of the smart payment screen is to improve the convenience of payment, make the payment method more convenient for people's life, and let the life get a better consumption experience due to smart payment. Therefore, this study selected different stages of the population and selected representatives to test the convenience of the smart payment, and compared it with the traditional payment. The test results are shown in Figure 5. It can be seen from Figure 5 that in the survey on the convenience of use of the smart payment screen based on virtual simulation technology, teenagers and young people believe that the convenience of use of the smart payment screen based on virtual simulation technology is improved rapidly. The youth group believes that the convenience of use of the smart payment screen based on virtual simulation technology has improved the fastest, with an increase of 22.8%; the middle-aged and elderly groups believe that the convenience of use of the smart payment screen based on virtual simulation technology has been improved slowly, and they believe that the convenience of use of the smart payment screen based on virtual simulation technology has improved the fastest. The elderly group believes that the use of the smart payment screen based on virtual simulation technology has the slowest improvement in ease of use, an increase of 9.4%. The reason is that the majority of young people have a strong ability to accept new things, and they do not feel burdened by the use of the intelligent payment screen based on virtual simulation technology, but only feel novel; however, for the elderly, smart payment methods are a burden, and they cannot use smart payment screens well. Payment convenience is a concept proposed that is relative to traditional paper currency payment. Compared with paper currency payment, MP only needs to carry a mobile phone instead of holding a lot of banknotes, thus enhancing the convenience of payment. There are two important indicators for evaluating the convenience of payment: first is the time cost of the payment process. The shorter the payment time is, the better the convenience of payment is; another important evaluation indicator related to the convenience of payment is the payment success rate. Due to the complexity of payment methods, many users may have low payment success rates due to improper operation or unfamiliarity with payment software, which greatly reduces payment efficiency and violates the original intention of payment convenience. Therefore, it is necessary to test the payment success rate to improve the convenience and efficiency of payment. Based on this, in order to investigate the convenience of using the smart payment screen based on virtual simulation technology, this study conducted an experimental investigation on two factors that affected its convenience. This experiment randomly selected 6 places that used the intelligent payment screen based on virtual simulation technology, and conducted a follow-up investigation on them, and compared it with the traditional MP screen. The test results are shown in Figures 6 and 7. 
It can be seen from Figure 6 that the average payment duration of using the smart payment screen based on virtual simulation technology in various places is reduced compared to the average payment duration of using the traditional payment screen. The overall trend is roughly the same. Location 4 has the longest average payment time and location 6 has the shortest average payment time. It can be seen that the fundamental reason for the difference in the average payment time of each place is the difference in the flow of people and the network system of each place. Correspondingly, the reason can be found from Figure 7. The higher the payment success rate is, the shorter the payment time is. In order to understand the user's preference for the smart payment screen, this article investigated the frequency of users using the smart payment screen. Location 4 was selected as the subjects of this experiment. Since the opening time of place 4 is from 10:00 to 20:00, five points were selected for this survey during this time period, and the survey results of users using the smart payment screen at different time points were recorded in Figure 8. It can be seen from Figure 8 that 18:00 is the peak period for users to use the smart payment screen. Most people tend to use the smart payment screen at this point in time. Therefore, it is necessary to make preparations for the payment environment at this point in time, to ensure the smoothness of the network and the system to keep running well, to prevent the system from crashing and bring a bad payment experience to users. Applied Bionics and Biomechanics The use of smart payment screens makes payment safe and convenient, but it cannot be denied that the use of smart payment screens also has certain risks. There may be operational risks in the use of smart payment screens, such as electronic pickpockets, online fraud, online hacking, credit risks, liquidity risks, and settlement risks. In order to avoid risks in the payment process and to improve the security of payment, this experiment evaluated the possible risks in the use of smart payment screens and compared them with traditional payment screens. A represents using the traditional payment screen, B represents using the smart payment screen, and the evaluation results are shown in Figure 9. Applied Bionics and Biomechanics It can be seen from Figure 9 that the use risk of the smart payment screen is greatly reduced compared to the use risk of the traditional payment screen. Operational risk was reduced by 3.88%, credit risk by 4.67%, liquidity risk by 5.06%, and settlement risk by 6.01%. Users no longer have to worry about the risks of using smart payment screens, and merchants no longer have to provide users with unnecessary operations to reduce user experience, thereby improving corporate competitiveness. Advantages and Disadvantages of Virtual Simulation Design for Smart Payment Screen Traditional payment method Smart payment method and greatly improve the accuracy of information comparison. Moreover, the improvement of the algorithm will allow the payment process to be completed in a very short time, and the user experience will be better. Compared with traditional payment screens, they are more 9 Applied Bionics and Biomechanics likely to accept this efficient and convenient payment screen. Payment Risk Is Reduced. The use of virtual simulation technology brings security to payment, and users are more willing to use the smart payment screen to pay. 
Compared with the traditional payment screen, the smart payment screen greatly reduces payment risk. Consumers can complete a payment at any time and anywhere: the user only needs to connect a mobile phone or tablet to the smart payment terminal to complete the payment operation, which addresses the problem of user privacy leakage during the payment process and thereby greatly improves payment security. Disadvantages of Virtual Simulation Design for Smart Payment Screens. The design of the smart payment screen still has some shortcomings, and using it is a burden for elderly users. The design therefore needs to add a usage module for the elderly, so that this group can also experience the convenience of the smart payment screen and enjoy a better user experience. Conclusions In the context of artificial intelligence, this study designed a smart payment screen based on virtual simulation technology, compared it with the traditional payment screen, and evaluated the relevant content. In the performance test of smart and traditional payment screens, it was concluded that the payment security, usability, and reliability of the smart payment screen were all improved compared with those of the traditional payment screen: security improved by 19.2%, the fastest improvement; reliability improved by 11.1%; and usability improved by 4.2%, the slowest improvement. In the survey on the convenience of the smart payment screen based on virtual simulation technology, the youth group reported the fastest improvement in convenience, an increase of 22.8%, while the elderly group reported the slowest improvement in ease of use, an increase of 9.4%. In terms of payment duration, the average payment duration in each place was reduced when the smart payment screen was used; the average payment time of place 4 was the longest and that of place 6 the shortest, and it was concluded that the higher the payment success rate was, the shorter the payment time was. In terms of user preference, 18:00 was found to be the peak period for using the smart payment screen. In the risk assessment, it was concluded that, compared with traditional payment screens, operational risk was reduced by 3.88%, credit risk by 4.67%, liquidity risk by 5.06%, and settlement risk by 6.01%. Data Availability Data supporting this research article are available from the corresponding author or first author on reasonable request. Conflicts of Interest The authors declare that they have no conflicts of interest.
5,829.6
2022-10-08T00:00:00.000
[ "Computer Science" ]
Cytofkit: A Bioconductor Package for an Integrated Mass Cytometry Data Analysis Pipeline Single-cell mass cytometry significantly increases the dimensionality of cytometry analysis as compared to fluorescence flow cytometry, providing unprecedented resolution of cellular diversity in tissues. However, analysis and interpretation of these high-dimensional data poses a significant technical challenge. Here, we present cytofkit, a new Bioconductor package, which integrates both state-of-the-art bioinformatics methods and in-house novel algorithms to offer a comprehensive toolset for mass cytometry data analysis. Cytofkit provides functions for data pre-processing, data visualization through linear or non-linear dimensionality reduction, automatic identification of cell subsets, and inference of the relatedness between cell subsets. This pipeline also provides a graphical user interface (GUI) for ease of use, as well as a shiny application (APP) for interactive visualization of cell subpopulations and progression profiles of key markers. Applied to a CD14−CD19− PBMCs dataset, cytofkit accurately identified different subsets of lymphocytes; applied to a human CD4+ T cell dataset, cytofkit uncovered multiple subtypes of TFH cells spanning blood and tonsils. Cytofkit is implemented in R, licensed under the Artistic license 2.0, and freely available from the Bioconductor website, https://bioconductor.org/packages/cytofkit/. Cytofkit is also applicable for flow cytometry data analysis. Introduction Mass cytometry, or cytometry by time-of-flight (CyTOF), uniquely combines metal-labeling of antibodies with mass spectrometry to enable high-dimensional measurement of the characteristics of individual cells [1,2]. The high purity and choice of metal isotopes overcome the limitations of spectral overlap in flow cytometry, and allow for simultaneous analysis of more than 40 markers per cell [3,4]. This technology has been successfully applied in a number of areas including mapping phenotypic heterogeneity of leukemia [5], inferring cellular progression and hierarchies [6], assessing drug effects on immune cells [7,8] and uncovering mechanisms of cellular reprogramming [9]. Despite the advantages of mass cytometry, effective analysis and interpretation of these high dimensional and large-scale datasets remain challenging. Traditional manual gating, the gold-standard method for flow cytometry data analysis, is not practical for mass cytometry due to its high dimensionality. In addition, most automated methods designed for flow cytometry data do not perform well for mass cytometry data [10]. Analysis of mass cytometry data has several key challenges including debarcoding [11], batch normalization [12], visualization of high-dimensional data, identification of cell subsets, inference of relatedness between cell subsets, and detection of changes in subset abundance. This manuscript focuses on addressing the following three key challenges for data that don't display batch effect. The first challenge is efficient visualization of these high-dimensional data. A biaxial plot that displays the correlation of every two markers is a common way to visualize flow cytometry data. With the fact that m(m − 1)/2 biaxial plots are needed to fully visualize an mdimensional dataset, this approach is impractical for mass cytometry data as the parameter m of mass cytometry is usually greater than 40. 
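For example, with m = 40 markers, m(m − 1)/2 = 40 × 39/2 = 780 biaxial plots would be needed to cover every marker pair, and a 45-marker panel would already require 990.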
Alternative dimensionality reduction approaches have been used to transform the high-dimensional data to a low-dimensional representation, thus allowing visualization of the cells in a single plot. In Newell et al. [13], principal component analysis (PCA) was used to visualize a 25-parameter mass cytometry panel for CD8 + T cells. However, PCA is a linear transformation, and it cannot capture nonlinear relationships. To address this limitation, Amir et al. [5] developed a visualization tool named viSNE which utilizes the t-distributed stochastic neighbor embedding (t-SNE) algorithm. t-SNE is a nonlinear dimensionality reduction approach [14] which embeds the data from high dimensional space into a lower dimensional map based on similarities. On a t-SNE map, similar cells are placed to nearby points, while dissimilar cells are placed far apart. It has been demonstrated that t-SNE can effectively visualize cellular heterogeneity in normal and leukemic bone marrow [5]. The second challenge is to identify cell subpopulations. To address this challenge, the ACCENSE method has been developed to automatically identify cellular subpopulations using a density peak-finding algorithm on a t-SNE transformed 2-D map [15]. However, not all cells are assigned to a defined subpopulation in this method. DensVM extends ACCENSE by using support vector machine (SVM) to assign any unassigned cells to the subpopulations in a machine learning manner [16]. This approach has been demonstrated to precisely detect the boundaries of cell populations in murine myeloid data. DensVM has also been applied to map the numerous subtypes of follicular helper T cells derived from human blood and tonsils [16,17]. However, both ACCENSE and DensVM rely on a computationally intensive search for an optimal number of subpopulations. PhenoGraph, a graph-based partitioning method, has demonstrated efficiency in subpopulation detection [10]. PhenoGraph first constructs a nearest-neighbor graph which captures the phenotypic relatedness of the high-dimensional data, and then it applies a graph partition algorithm called Louvain [18] to dissect the nearest-neighbor graph into phenotypically coherent subpopulations. Applied to the study of acute myeloid leukemia, PhenoGraph provided a comprehensive view of the major phenotypes and elucidated intra-and inter-tumor heterogeneity. PhenoGraph has also been tested on three different mass cytometry datasets of healthy human bone marrow, and it displayed superior accuracy and robustness in immune cell type detection as compared to other methods. The third challenge is to detect cellular progression. In addition to defining distinctive cell subsets, there is great interest in resolving the order of cellular differentiation to reveal their developmental relationships. For example, Bendall et al. developed a graph-based trajectory detection algorithm named Wanderlust, which orders cells into a unified trajectory that reflects the developmental path [19]. This method correctly predicted the early developmental path of human B-lymphocytes. Nevertheless, this algorithm was designed for linear and non-branching developmental path, and hence is less useful for interpreting complex single-cell data with multiple developmental lineages. Wishbone extended the ability of Wanderlust to capture bifurcating developmental trajectories through introducing waypoints and identifying branch points [20]. 
Wishbone is based on diffusion map, which has been demonstrated to be powerful and robust for detecting the global geometric structures from the data [21]. However, Wishbone requires the input of a starting cell. SPADE is an innovative approach designed to extract cellular hierarchy using minimum spanning tree (MST) [6]. While SPADE enables the prediction of multi-branched cell developmental pathways, the hierarchical clustering used in SPADE needs a pre-specification of the number of clusters, additionally, the MST used by SPADE is susceptible to over-fitting and is not robust for local variation [9]. Our recent novel method named Mpath constructs multi-branching cell lineages from single-cell data using neighborhood-based cell state transitions [22]. However we have only demonstrated its applications for single-cell RNA-sequencing data. We are currently testing and optimizing Mpath for mass cytometry and flow cytometry data. In this report, we present an integrated analysis pipeline, named cytofkit. It is designed to analyze mass cytometry data in four main steps. In the first step, cytofkit performs data pre-processing, and enables combined analysis of multiple Flow Cytometry Standard (FCS) files. Users are allowed to customize their data merging strategy to combine the data using selectable transformation methods. The remaining three steps address respectively, each of the three challenges discussed above. Firstly, cytofkit provides state-of-the-art clustering methods including DensVM [16], FlowSOM [23] and PhenoGraph [10], as well as an in-house newly developed algorithm named ClusterX for automatic detection of cell subpopulations. Secondly, it provides functions to visualize the high-dimensional data with color-labeled cell types using either linear transformation such as PCA or non-linear dimensionality reduction such as ISOMAP [24], diffusion map or t-SNE (we use Barnes-Hut variant of t-SNE, a speed optimized implementation of t-SNE [25]). Lastly, it infers the relatedness between cell subsets using ISOMAP or diffusion map. In addition to providing an integrated analysis pipeline, cytofkit provides a user-friendly GUI and an interactive shiny APP to facilitate result exploration and interpretation. Through the application of cytofkit to a CD14 − CD19 − PBMCs dataset, cytofkit was able to accurately identify known populations of lymphocytes including CD4 + , CD8 + , γδT, NK, and NKT cells, and further segregate these subsets to reveal subpopulations such as different stages of CD4 + and CD8 + T cell differentiation, as well as three subsets of γδT and two subsets of NK cells. Moreover, as shown in our previous publication [17], application of cytofkit for an objective comparison of human T helper (TH) cells derived from peripheral blood versus tonsils revealed numerous subtypes of follicular helper T cells (T FH ) cells that followed a continuum spanning both blood and tonsils. Design and Implementation We have developed an integrated mass cytometry data analysis pipeline as an open-source R/ Bioconductor package called cytofkit. As shown in Fig 1, the pipeline consists of four major components: (1) pre-processing, (2) cell subset detection, (3) cell subset visualization and interpretation and (4) Inference of the relatedness between cell subsets. 
FCS file; secondly the extracted data are transformed using either negative value pruned inverse hyperbolic sine transformation (cytofAsinh) or automatic logicle transformation (autoLgcl) [26] (see details in S1 file); finally expression matrixes from each FCS file are combined into a single matrix using one of the four selectable strategies, including i) ceil which samples up to a user specified number of cells without replacement from each FCS file, ii) all which takes all cells from each FCS file, iii) min which samples the minimum number of cells among all the selected FCS files from each FCS file and iv) fixed which samples an user specified number of cells (with replacement when the total number of cell in the file is less than the specified cytofkit number) from each FCS file. In the combined expression matrix, each cell is given a unique ID, which is the concatenation of its original FCS file name and its sequence ID in the file. Cell subset detection The subset detection is implemented by clustering algorithms. Cytofkit provides three state-ofthe-art clustering methods DensVM [16], PhenoGraph [10], FlowSOM [23] and one in-house developed clustering algorithm called ClusterX. DensVM and ClusterX are density-based clustering algorithms, which are applied to the t-SNE embedded map, whereas PhenoGraph is a graph based clustering algorithm, which works directly on the high-dimensional data. DensVM. DensVM (Density-based clustering aided by support Vector Machine) is an extension of ACCENSE's density-based clustering algorithm [15]. ACCENSE's clustering algorithm first computes 2D probability density from the t-SNE map using the Gaussian kernel transform. A 2D peak-finding algorithm is then applied to identify local density maxima that represent the center of cellular subpopulations. For each peak k, the nearest neighboring peak is identified and distance to the nearest neighbor d k is calculated. ACCENSE then draws a circle of radius d k /2 centered at the peak k, and assign all cells within the circle to cluster k. By using this approach, a significant number of cells are located outside any circle and left unclassified, which hampers the estimation of subpopulation frequencies and downstream statistical tests. DensVM overcomes this limitation by utilizing a machine-learning algorithm called support vector machine to train a classifier that learns the patterns of cells that were assigned to ACCENSE clusters. The trained classifier then takes as an input the marker expression profiles of unclassified cells and assigns each of them to one of the ACCENSE clusters based on the assumption that cells from the same cluster should share similar patterns of marker expression (details in paper [16]). DensVM is able to objectively assign every cell to an appropriate cluster. PhenoGraph. PhenoGraph works on an m-by-N intensity matrix, which comprises m parameters of N cells. For each cell, PhenoGraph first identifies k nearest neighbors using Euclidean distance, resulting in N sets of k-neighbors. Based on the number of neighbors shared by every two cells, it calculates the similarity between cells using the Jaccard similarity coefficient and generates a cell-cell similarity matrix, which is then converted into a network. Subsequently, PhenoGraph partitions the network using the Louvain algorithm to extract communities with optimal modularity [18]. This algorithm makes no assumption about the size or number of subpopulations, which make it applicable to many different datasets. 
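As a rough illustration of this graph-based approach (not the PhenoGraph or cytofkit code), the k-nearest-neighbour graph, Jaccard weighting, and community detection steps can be sketched as follows. NetworkX's greedy modularity routine is used here as a stand-in for the Louvain partitioning, and the simulated expression matrix is a placeholder.

```python
# PhenoGraph-style sketch: k-NN graph, Jaccard-weighted edges, community detection.
import numpy as np
import networkx as nx
from networkx.algorithms import community
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 1.0, size=(300, 30)) for c in (0.0, 4.0)])  # cells x markers

k = 15
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)                       # each row includes the cell itself
neighbour_sets = [set(row[1:]) for row in idx]  # drop self, keep the k neighbours

G = nx.Graph()
G.add_nodes_from(range(len(X)))
for i, ni in enumerate(neighbour_sets):
    for j in ni:
        nj = neighbour_sets[j]
        jaccard = len(ni & nj) / len(ni | nj)   # shared-neighbour similarity
        if jaccard > 0:
            G.add_edge(i, int(j), weight=jaccard)

# Greedy modularity maximisation used as a stand-in for Louvain.
parts = community.greedy_modularity_communities(G, weight="weight")
labels = np.empty(len(X), dtype=int)
for c, members in enumerate(parts):
    labels[list(members)] = c
print("detected subsets:", len(parts))
```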
In cytofkit, we converted the original Python code of PhenoGraph into an R script. ClusterX. ClusterX is a clustering method improved from Clustering by Fast Search and Find of Density Peaks (CFSFDP) [27]. The CFSFDP algorithm is fast and able to recognize clusters regardless of their shape. However, it has two main limitations. The first is that it takes a dissimilarity matrix as input, which results in an O(n²) memory burden for a dataset of n cells. The second is that it requires manually decided cut-off values to determine density peaks, which is inefficient and subjective. ClusterX addresses the memory issue with a split-apply-combine strategy [28], and automates density peak detection using the generalized extreme Studentized deviate (ESD) test [29]. When combined with t-SNE, ClusterX extends its capacity for clustering high-dimensional data. The workflow of ClusterX for mass cytometry data clustering is illustrated in Fig 2 (see the detailed description of ClusterX in S1 File). Fig 2. Workflow of ClusterX for mass cytometry data clustering. (a) Depicts the workflow of ClusterX for mass cytometry data clustering, which contains four steps: (i) t-SNE dimensionality reduction, (ii) estimation of the local density on the t-SNE map, (iii) detection of the density peaks that represent cluster centers, and (iv) assignment of the remaining cells to clusters. (b) Explains the local density estimation method. (c) Illustrates the cluster assignment step using two peaks, peak 1 and peak 2. Each point is a cell and the color intensity represents the local density of the cell; each cell is then assigned to the same cluster as its nearest neighbor cell of higher density. doi:10.1371/journal.pcbi.1005112.g002 Cell subset visualization and interpretation Three dimensionality reduction methods are integrated into cytofkit for visualizing the high-dimensional mass cytometry data. These include one linear transformation method, PCA, and two non-linear transformation methods, ISOMAP and t-SNE. After dimensionality reduction, cytofkit plots the transformed two-dimensional maps with point color representing the cell type detected from cluster analysis and point shape representing which sample (i.e., FCS file) the cell belongs to. The expression pattern of a specified marker can also be visualized on the dimensionality-reduced map with values represented by colors. A heat map is generated to visualize the median expression level of each marker in each cell type. This heat map facilitates the annotation of known cell types based on prior knowledge of cell-type-specific marker expression, as well as the detection of novel cell types with novel expression patterns. The percentage of cells in each cluster for each FCS file can also be visualized using a heat map, which helps the detection of changes in abundance of subsets among different samples. All these plots can either be saved automatically by the cytofkit package or interactively visualized with our specifically designed shiny APP (see the Pipeline Implementation section). Example t-SNE plots and heat map plots can be found in the Results and Discussion section. Inference of inter-subset relatedness Instead of directly estimating the cellular developmental path from individual cells, which is computationally challenging and error prone, cytofkit provides assistant approaches for inferring the progression based on the relationship of cell subsets.
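Before turning to inter-subset relatedness, the density-peak assignment that ClusterX applies on the t-SNE map (Fig 2) can be made concrete with a minimal sketch. This is an illustrative simplification rather than the ClusterX implementation: a Gaussian kernel density and a fixed delta quantile stand in for ClusterX's density estimation and ESD-based automatic peak detection, and the brute-force distance matrix is only practical for small toy inputs.

```python
# CFSFDP-style assignment: each cell inherits the cluster of its
# nearest neighbour of strictly higher local density.
import numpy as np

def density_peak_clusters(points_2d, bandwidth=1.0, peak_quantile=0.99):
    n = len(points_2d)
    d = np.linalg.norm(points_2d[:, None, :] - points_2d[None, :, :], axis=-1)
    density = np.exp(-(d / bandwidth) ** 2).sum(axis=1)   # Gaussian kernel density
    # delta: distance to the nearest point of strictly higher density
    delta = np.full(n, np.inf)
    nearest_higher = np.full(n, -1)
    for i in range(n):
        higher = np.where(density > density[i])[0]
        if higher.size:
            j = higher[np.argmin(d[i, higher])]
            delta[i], nearest_higher[i] = d[i, j], j
    # crude peak rule: unusually large delta marks a cluster centre
    threshold = np.quantile(delta[np.isfinite(delta)], peak_quantile)
    peaks = np.unique(np.append(np.where(delta > threshold)[0], np.argmax(density)))
    labels = np.full(n, -1)
    labels[peaks] = np.arange(len(peaks))
    for i in np.argsort(-density):                         # high to low density
        if labels[i] == -1:
            labels[i] = labels[nearest_higher[i]]
    return labels

# Example on a toy 2-D "t-SNE-like" map with two blobs.
pts = np.vstack([np.random.default_rng(2).normal(c, 0.5, size=(200, 2)) for c in (0.0, 5.0)])
print(np.unique(density_peak_clusters(pts)).size, "clusters found")
```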
As we will demonstrate later in our Results and Discussion section, ISOMAP or diffusion map perform better for reserving the global inter-relatedness between cell subsets compared to tSNE. ISOMAP takes into account local distances between similar cells and is able to capture the global geometry between different cell types. In CD4 + T cell dataset, we applied ISOMAP to detect three hypothesized progression paths spanning across blood and tonsils derived from the naïve T cells (see details in our previously published paper [17]). Diffusion map is a dimensionality reduction algorithm, which captures the non-linear structure of data as a continuum. It demonstrated considerably better performance than the other dimensionality reduction methods PCA or t-SNE for revealing the differentiation structure in single-cell data analysis [21]. In cytofkit, we combined dimensionality reduction methods including ISOMAP and diffusion map with the clustering results to infer inter-subset relatedness, which is expected to help detection of cell differentiation trajectories. Firstly, we down-sampled the number of cells in each cluster to an equal size, thus reducing cell subset density heterogeneities and removing the dominating effect of large populations in the data. Then we ran ISOMAP or diffusion map on the down-sampled dataset and overlaid the clusters onto the transformed dimensions. By checking the median position of clusters in ISOMAP or diffusion map, hypothesized paths of subset progression can be drawn and annotated. The expression profiles of selected markers can be visualized with a Tobit-family generalized linear model (GLM) [30] along the manually defined progression path to either validate the hypothesized path or detect potential progression dynamics. Pipeline implementation We implemented the cytofkit pipeline in R, and built it as a Bioconductor package (https:// bioconductor.org/packages/cytofkit/). ClusterX, as a newly developed clustering algorithm, was implemented as an R package named ClusterX and is available on github (https://github. com/JinmiaoChenLab/ClusterX). PhenoGraph is originally available as python code. We reimplemented the algorithm into an R package named Rphenograph and it is also available on github (https://github.com/JinmiaoChenLab/Rphenograph). ClusterX and Rphenograph are both integrated into the cytofkit package. To facilitate the easy access of cytofkit package, we developed a user-friendly GUI using R tcltk package as shown in Fig 3. To facilitate interactive visualization of the analysis results, the cytofkit package provides a shiny APP which can be deployed locally with function cytofkitShinyAPP(). The analysis results from cytofkit will be saved as an RData object, which can be easily loaded into this shiny APP. This shiny APP provides an interactive interface to visualize and explore the analysis results as shown in Fig 4. In addition, an online version of the shiny APP is also publicly available at https://chenhao. shinyapps.io/cytofkitShinyAPP/. An instruction on usage of the GUI and the package can be found in S2 File as well online in the package vignettes (https://www.bioconductor.org/ packages/release/bioc/vignettes/cytofkit/inst/doc/cytofkit_example.html). An instruction on the usage of the shiny APP is included in S3 File as well as online in the package vignette (https://www.bioconductor.org/packages/release/bioc/vignettes/cytofkit/inst/doc/cytofkit_ shinyAPP.html). 
A detailed Rmarkdown file including the analysis procedures and all the data used in the manuscript are available on GitHub (https://github.com/JinmiaoChenLab/cytofkit_analysis_data_code) for reproducing our analysis results. The cytofkit package adds dimensionality reduction and clustering results as additional parameters to the FCS files. Users can open the modified FCS files using other software such as FlowJo to visually verify the clusters against their prior knowledge. They can also overlay manually gated populations onto the t-SNE (ISOMAP, diffusion map) plots, or perform manual gating according to the t-SNE plot or clustering results. Results and Discussion We demonstrate the utility of this package using two datasets (included in S1 Dataset). One is a CD14−CD19− PBMCs dataset and the other is a CD4+ T cell dataset combined from human blood and tonsils. In order to assess the accuracy of cytofkit, we manually gated populations of CD4+, CD8+, γδT, CD3+CD56+ NKT and CD3−CD56+ NK cells from the CD14−CD19− PBMCs dataset (gating strategy included in S1 Fig). Populations of naïve (CD45RA+CCR7+CD45RO−), TH1 (IFN-γ+), TH17 (IL-17A+) and TFH (CXCR5hi PD-1hi) cells were manually gated from the CD4+ T cell dataset (see [17]). More information about these two datasets is included in the S1 File data description section. Comparison of dimensionality reduction methods for visualization In order to assess the performance of the three dimensionality reduction methods PCA, ISOMAP and t-SNE, we applied these methods to the above two datasets. For the CD14−CD19− PBMCs dataset, we overlaid the gated lymphocyte and NK cell populations onto the plots of the three methods. In Fig 5(a), we observed that PCA displayed a continuous U-shaped pattern of cellular clusters. ISOMAP preserved the U-shaped continuum while showing better resolution of CD4+, CD8+, γδT, CD3+CD56+ NKT and CD3−CD56+ NK cells. The preserved continuum shows the interrelatedness between these subsets. In contrast, t-SNE showed geometrically distinct clusters at much higher resolution and discriminated several populations within the CD4+ T cell population. However, we did not observe the continuum as seen with ISOMAP. In the CD4+ T cell dataset, after overlaying naïve (CD45RA+CCR7+CD45RO−), TH1 (IFN-γ+), TH17 (IL-17A+) and TFH (CXCR5hi PD-1hi) cells onto the dimensionality-reduced map, we observed that each subset occupied distinct regions in ISOMAP and t-SNE, whereas TH1 and TH17 cells overlapped in the same region for PCA, as shown in Fig 5(b). Overall, these analyses of two independent datasets highlighted the advantages of non-linear approaches over linear PCA for visualizing and interpreting mass cytometry data. Comparison of clustering methods for subset detection Cytofkit contains three clustering methods for automatic subset identification; they are ClusterX, DensVM and PhenoGraph. To assess the performance of these clustering methods, we quantitatively calculated the precision, recall and F-measure of each clustering method, using manually gated populations of CD4+, CD8+, γδT, NK and NKT cells from the CD14−CD19− PBMCs dataset as the gold standard. Fig 6 shows that DensVM detected 13 clusters, PhenoGraph identified 14 clusters and ClusterX 15 clusters. These clusters were mapped to the manually gated populations using FlowJo.
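One hedged way to obtain such precision, recall and F-measure values against manual gates is sketched below; the majority-overlap rule used to map clusters to gates is a simple convention assumed here, not necessarily the exact evaluation procedure used in the paper.

```python
# Score automatic clusters against manually gated populations.
import numpy as np

def score_against_gates(cluster_labels, gate_labels):
    cluster_labels = np.asarray(cluster_labels)
    gate_labels = np.asarray(gate_labels)
    # Map every cluster to the manual gate it overlaps most.
    cluster_to_gate = {}
    for c in np.unique(cluster_labels):
        gates_in_c, counts = np.unique(gate_labels[cluster_labels == c], return_counts=True)
        cluster_to_gate[c] = gates_in_c[np.argmax(counts)]
    predicted_gate = np.array([cluster_to_gate[c] for c in cluster_labels])
    results = {}
    for g in np.unique(gate_labels):
        tp = np.sum((predicted_gate == g) & (gate_labels == g))
        precision = tp / max(np.sum(predicted_gate == g), 1)
        recall = tp / max(np.sum(gate_labels == g), 1)
        f_measure = 2 * precision * recall / max(precision + recall, 1e-12)
        results[g] = (precision, recall, f_measure)
    return results
```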
As shown in Table 1, ClusterX produced the highest precision in this case; nevertheless, the precision score differences among the three clustering methods are quite small. The F-measures for DensVM, ClusterX and PhenoGraph are 0.886, 0.894 and 0.854 respectively, which shows that all three clustering methods can accurately identify the manually gated cellular populations. We annotated the clusters detected by ClusterX based on the median expression of markers, which revealed different stages of CD4+ and CD8+ T cell differentiation, and three subsets of γδT, NK and NKT cells (Fig 7(a)). Unlike ClusterX, DensVM did not distinguish the CD8 effector population and the CD4 late effector population (Fig 7(b)). PhenoGraph detected the ClusterX-annotated CD8 effector population and the NKT population as one population (Fig 7(c)). It should be noted that these manually annotated cell populations need to be further validated experimentally. Without experimental validation, we could not determine whether clusters 10, 12 and 15 in ClusterX represent truly distinct cell populations or are a result of over-fragmentation. Despite these small differences, all three methods were able to define cellular heterogeneity with higher efficiency and resolution than manual gating, and we suggest that users try multiple clustering methods for their own data analysis. The clustering results for the CD4+ T cell dataset can be seen in S2 Fig. Assessing ISOMAP, diffusion map and t-SNE for inferring inter-cluster relationships To investigate the performance of ISOMAP, diffusion map and t-SNE for mapping potential relationships between cell subsets, we sub-sampled 10000 cells from the CD14−CD19− PBMCs dataset and repeated the ISOMAP, diffusion map and t-SNE analysis three times. Fig 8 shows that the relative geometric locations of ClusterX clusters on a t-SNE map are a poor measure of between-cluster similarities. This is manifested by the evident shift of the relative positions of cell clusters on the t-SNE maps of the three subsamples. For example, cluster 11 and cluster 3 were close to each other in subsample 1 and subsample 3 but far apart in subsample 2. Similar changes were also observed in the positional relationships between clusters 11 and 10, or clusters 13 and 6. In contrast, ISOMAP and diffusion map were both able to consistently reproduce the structure of the cluster relationships, and the relative locations of these clusters remained consistent in all three subsamples. To remove the density heterogeneity among cell subsets, we down-sampled 500 cells from each cluster using method ceil. Then we plotted the cell subsets using the first two components calculated by ISOMAP and diffusion map (Fig 9). The two methods both give a U-shape-like structure of the relationship of cell subsets. On one arm of the U-shape are CD4+ and naïve CD8+ T cells, which do not exhibit cytotoxic capabilities, as evidenced by the lack of Perforin expression (Fig 9(b)). On the opposite arm are γδ Vd+, γδ Vd−, CD8 Eff, NKT and NK cells, which were located in order along the second component. We found a continuous increase in the expression of Perforin and GranzymeB along the second component, indicating a progression of increased cytotoxic capabilities of these subsets (Fig 9(c)). On another dataset which we previously published, ISOMAP was able to display three hypothesized progression paths of CD4+ T cells spanning across blood and tonsils [17].
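A minimal sketch of this down-sample-then-embed step is given below, assuming an expression matrix and cluster labels are already available; the neighbour count, per-cluster sample size and other parameters are arbitrary choices, and this is not the cytofkit implementation.

```python
# Equal-size down-sampling per cluster, ISOMAP embedding, per-cluster medians.
import numpy as np
from sklearn.manifold import Isomap

def subset_relatedness(expression, labels, per_cluster=500, seed=0):
    rng = np.random.default_rng(seed)
    keep = []
    for lab in np.unique(labels):
        members = np.where(labels == lab)[0]
        take = min(per_cluster, members.size)
        keep.append(rng.choice(members, size=take, replace=False))
    keep = np.concatenate(keep)
    emb = Isomap(n_neighbors=15, n_components=2).fit_transform(expression[keep])
    # Median position of each cluster on the embedding suggests how subsets relate.
    medians = {int(lab): np.median(emb[labels[keep] == lab], axis=0)
               for lab in np.unique(labels[keep])}
    return emb, medians
```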
To summarize, although t-SNE better discriminates cells of distinct phenotypes, we highlight this limitation of t-SNE and suggest using ISOMAP or diffusion map for inferring relatedness between subsets. Conclusion In summary, we developed an integrated analysis pipeline for mass cytometry data, termed cytofkit. Combining state-of-the-art methods and in-house developed algorithms, we aim to provide a one-stop analysis toolkit for mass cytometry data with user-selectable options and a customizable framework. Cytofkit can take commands from a user-friendly GUI and performs analysis including pre-processing, cell subset detection, plots for visualization and annotation, and inference of the relatedness between cell subsets. In the end, the analysis results can be further explored in an interactive way using the specifically designed shiny APP. Our analytical pipeline provides an automated mass cytometry data analysis toolset which can be used by bench scientists without any training. Cytofkit is developed with a general framework, which makes it easily extensible to add new methods and also applicable to other multi-parameter data types. We are continually working on new algorithms for inferring cellular progression as well as meta-clustering methods for comparative analysis between multiple batches of data. New methods will be added to cytofkit to make it more useful for automatic mass cytometry data analysis. Supporting Information S1 Dataset. Zip file containing the cytofkit package source code, the CD14−CD19− PBMCs dataset and the CD4+ T cell dataset. (ZIP) In ClusterX, data are first split row-wise into chunks so that the distance matrix computed within each chunk is restricted to a limited size; the required quantities are then calculated for each parameter in each chunk; finally, the results from all chunks are combined for post-processing. (TIF)
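The split-apply-combine idea described for ClusterX can be illustrated with a short sketch that computes a k-nearest-neighbour-based local density chunk by chunk, so that only a bounded block of distances is held in memory at any time; the chunk size and the density definition are assumptions made for illustration only.

```python
# Chunked (split-apply-combine) computation of a simple local density estimate.
import numpy as np

def chunked_knn_density(points, k=30, chunk_size=1000):
    n = len(points)
    density = np.empty(n)
    for start in range(0, n, chunk_size):
        block = points[start:start + chunk_size]                  # split
        d = np.linalg.norm(block[:, None, :] - points[None, :, :], axis=-1)
        kth = np.partition(d, k, axis=1)[:, k]                    # apply: k-th NN distance
        density[start:start + chunk_size] = 1.0 / (kth + 1e-12)   # combine
    return density
```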
6,153
2016-09-01T00:00:00.000
[ "Biology", "Computer Science" ]
THE IMPLEMENTATION OF TECHNOLOGY-BASED MEDIA IN IMPROVING ENGLISH SPEAKING SKILL OF HOSPITALITY STUDENTS IN MATARAM TOURISM COLLEGE This research aims to find out the effectiveness of the application of technology-based teaching media to improve the speaking skills of the Hospitality Study Program Students of STP Mataram. In this study, several media that utilize computers and the internet will be used in learning speaking. The use of the application Rosetta Stone, Duolingo, and YouTube videos integrated with the google classroom are used as learning media for students' speaking learning of the hospitality study program students. In this study, 30 students are used as the experimental group and 30 students as a controlled group. The results of this study indicated that the value of t-test > t table (3.462> 2.676), which means that there is a significant difference in English speaking skills using technology-based learning media. In conclusion, the use of technologybased learning media is effective in enhancing students’ speaking skill. Article History: Received: October, 2020 Revised: November, 2020 Published: December, 202020 INTRODUCTION The development of tourism as a phenomenon in the modern era cannot be avoided. As an industry, tourism has contributed to people's lives, especially from an economic standpoint. The rapid advancement of science and technology that continues to develop requires capable human resources, including human resources in tourism who are expected to continuously change themselves so that they could follow the developments that occur. This development is a big business opportunity for the hospitality industry where the need for professional, qualified, and ready-to-work human Resources also increases along with the development of the hospitality industry. One of them is the ability to use English well. Therefore, tourism practitioners or graduate students of the tourism program is very important and crucial. However, the reality in the field shows that the graduate students of tourism programs do not have the expected English language skills for work. Genc and Bada (2005) stated that for daily communication, speaking is an important and basic skill that helps students learn English to become good readers and writers. Based on the results of previous research regarding the work preferences of students in the work field, it was found that students who chose to work in a section that used language competences more had better English skills compared to students who chose to work in fields that did not use language competence much (Wahyuningsih, 2019). Technology advances also have an influence on the learning media in English. In learning to talk (speaking English) in the Mataram Tourism College Hospitality Study Program, lecturers in the spoken subject frequently apply the learning method that is focused on them (teacher-centered) so that it appears that lecturers dominate the classroom rather than students in the learning process. Many students are unable to use English successfully in communication and communications with them. This research used technology-based teaching media to see how effective it is in developing student speaking skills to address students' difficulties in speaking activities. The technology-based teaching media that will be used are in the form of video media, computers, and interactive applications that can be used in learning English, especially in improving students' speaking skills. 
To see the student's ability to speak in English, the researcher used a speaking assessment that was adjusted to the specific skills that the students would do, in this case, the students' ability to use English. In this study, several technologies that utilize computers and the internet will be used in learning speaking. The use of Rosetta Stone applications, Duolingo, learning with videos, and google classroom are used as a medium for speaking learning for students in the hospitality study program. The researchers selected several applications and technologies from the results of the research findings that had been conducted by several previous researchers. As a global communication media, the Internet allows it to be used in language teaching and learning, for example, learning English. The Internet provides various addresses and web pages that can be used as a place of learning. The web pages have been grouped according to their domains, such as vocabulary, grammar, phonetics, and according to the language skills being taught, such as speaking, listening, reading, and writing. Cord-Mounouray in Kartal (2005) categorized the learning experiences provided by the Internet into several types: (1) Communication: correspondence projects, distance learning, research into specific areas of Internet society, virtual meetings, role-playing, etc .; (2) Documentation: documentary research, providing readers with a variety of sources as needed; (3) Publishing: publishing manuscripts (personal or collective), both those that already exist on the internet and those that have never existed; (4) Collaborative studies: competitions, group performances, collaborative writing, simulations, telepresence; (5) Individual study: on-line learning, virtual campus. In integrating technology into the learning process, experts develop various models. Figure 1 is a model proposed by Woodbridge (2004) and modified by the other researchers. Some essential notions from this model are as follows. Technology (ICT) plays role in three functions: first, creating a pleasant and exciting learning environment (emotional effect); second, equip students' skills to use high technology. It addresses the challenge of its relevance to the world outside of schools. Third, technology functions as learning tools with application and utility programs, which, apart from simplifying and speeding up work, also increases the variety and techniques of analysis and interpretation. Positive emotions, skills in using technology, and skills in utilizing programs and utilities: developing the ability to create, manipulate, and learn, practice with problem solving-based tasks, build constructivist learning environments. Speech skills in English is one of the key goals of studying English. The ability to speak English in the age of globalization, where the borders are too limited, is therefore crucial at present, and with the ability to speak English, each person will be able to communicate well, not just in higher education, but in English. Such technical reasons, for example; the needs of the field of work, manufacturing, travel, etc. As far as teaching English speaking is concerned, Nunan (1991) states that "success is measured by the ability to speak using the target language" ("success is measured in terms of the ability to carry out a conversation in the target language"). So if students do not learn to talk or have no chance of communicating, they will lose confidence in learning the language. 
In the other hand, if the speaking lesson is delivered appropriately, students will be inspired to learn and the classroom environment will be vibrant and dynamic. Lawtie (2004: 1) claims that the difficulty of speech is due to a variety of factors: students do not want to talk or say anything in class, students laugh with their peers using their mother tongue (L1) and the class is too loud for the instructor to lose hold of the class. Rosetta Stone application is an application for learning foreign languages in an interactive way that can be used easily by users to learn foreign languages. The preferred method in using this application is Dynamic Immersion, which is without translation in other languages. The learning media is in the form of visuals/images, so it is hoped that users can immediately get used to associating foreign language words they are learning with the visual images shown. The purpose of this is to teach the various vocabulary terms and grammar of language intuitively, without practice or translation to learners. Therefore, using the Rosetta Stone media as a learning medium is expected to improve English speaking skills. Advances in technology have made it easier for us to do various things that may not have been previously imagined, one of which is learning foreign languages. If in the past we studied foreign languages at course institutions or at least bought books and tapes, now learning foreign languages can be done anywhere and anytime online or mobile. One of the learning applications that can be used on smartphones is Duolingo application. For English lessons for Indonesian speakers, there are 55 phases that must be passed while one phase consists of several lessons. However, it is not only the eyes that are trained (reading), the ears and the mouth are also included in the Duolingo learning material. Sometimes Duolingo will play a word or sentence in a foreign language and we are asked to write or translate what we hear. We are also sometimes asked to say a word or sentence in the language we are learning. Google Classroom is a platform that promotes student and instructor collaboration; teachers may also build and administer assignments for students in online classrooms free of charge (Beal, 2017). It just lets the instructor create communities to exchange homework and announcements. Google Classroom may be a platform that encourages students to engage. Nagele (2017) said that teachers can build student-centered, interactive and memorable active lessons only via Google Classroom because it offers easy-to-use learning features for students in all categories who can work together. Google Classroom is suitable for all types of pupils, including adult learners. It also has a variety of advantages, such as being paperless, available everywhere and everywhere as long as there is an internet link and from any computer, for communication between teachers and students, for giving input to students, and for customized learning. It has a learning function that lets teachers actively develop and administer assignments and also gives guidance to students. Google Classroom makes it easier for instructors to do student assignments. Really good for teachers and pupils, since it's easy to use. RESEARCH METHOD Research Design This study is quasi experimental research with pre-test and post-test design. 
The effectiveness of using technology-based teaching media was carried out by comparing the effectiveness and efficiency of the conditions before and after treatment or by comparing it with groups who used conventional media of teaching. Population and Sample In this study, the two groups namely the experimental group and the control group. The subjects in this study were 2 groups, each of which consisted of 30 students from the hospitality study program at the Mataram Tourism College. One class will be the control group while the other class will be the experimental group. Instruments The data obtained in the study are quantitative. These quantitative data are in the form of test scores of students using technology-based teaching media and students who use conventional media. The data will be taken using the speaking skill test instrument which is calculated using the speaking assessment rubric. Data Analysis Data analysis in this study used non-independent t-test statistics to compare the pre-test and post-test results achieved by the experimental group and control groups who take learning with technology-based teaching media. The data were analyzed with the help of statistical software, namely SPSS version 26. The test criterion is as follows; if the t value obtained is greater than the t table value (t test> t table) it can be concluded that there is a difference in the pre-test score with the post-test score of students who take learning with technology-based teaching media. Otherwise. If the t value obtained is smaller than the t -test <t table), it can be concluded that there is no difference between the pre-test and post-test scores of students who take learning with technology-based teaching media. RESEARCH FINDINGS AND DISCUSSION Research Findings The test to measure the effectiveness of using media based on English learning technology was attended by 30 students as an experimental group and 30 other students as a control group. The two groups were selected based on the equivalent English proficiency score. The experimental group took learning using technology-based media, while the control group took learning using conventional methods. Before learning is carried out, each student takes a Pre-test, to see their initial ability to communicate in English in the field of work/professional Front Office Hotel staff, and at the end of the learning program, they take a Post-test to measure their achievements from the learning process they have taken with the teaching materials developed in this study. The effectiveness test was carried out to determine the significance of improving communication skills in English in the Front Office Hotel profession. The significance is based on the results of the pre-test and post-test scores. The results of learning English in this study focused on students' speaking skills in carrying out a series of competencies in accordance with the expected learning outcomes using technology-based learning media. The technology-based teaching media referred to in this study are three learning media used, first using language learning applications on students' computers/laptops using the Rosetta Stone application. The second uses the Duolingo learning application on student smartphones, and the third uses speaking learning with YouTube and Google Meet which is integrated with Google Classroom. The results of learning English were obtained through pre-test and post-test with performance tests. The results obtained are entered into the data and then calculated. 
The learning outcome data were arranged according to the scoring guidelines, namely a highest score of 100 and a lowest score of 0. All data in this study were calculated using the SPSS version 26 statistical package. A comparison of learning outcomes between the two groups is presented in Table 1. The normality test is one of the prerequisite tests for the analysis: before the t-test, the data must be normally distributed. The data normality test was carried out using the One-Sample Kolmogorov-Smirnov test with the help of the SPSS program. The homogeneity test of the data is another prerequisite test: before the t-test is carried out, the collected data must be homogeneous, that is, come from the same population. To test the homogeneity of the data, the F-test analysis was used. The SPSS output in Table 3 shows a Levene statistic of 0.656 (pre-test) with a significance value (Sig) = 0.518. For the post-test results, the Levene statistic is 0.528 and (Sig) = 0.526. Based on the homogeneity test results above, it can be concluded that all data for hypothesis testing have homogeneous variance, since the significance values are greater than 0.05. Tests were then carried out using the independent-samples t-test. The test aims to determine whether there are differences in learning outcomes and English-speaking skills when using technology-based learning media. The t-test in this study was carried out with the help of the SPSS program. The test criterion for this t-test was that if the t value (positive) was greater than the t-table value, then H0 was rejected. The results are presented in Table 4. From Table 4, it is known that t-test > t table (3.462 > 2.676) and the P-value (0.001) < 0.05, so H0 is rejected, meaning that there is a significant difference in English speaking ability when using technology-based learning media. It can also be seen that Sig. (2-tailed) = 0.001 is smaller than 0.05, which means that H0 is rejected (Ha is accepted). This shows that there is a significant difference in English speaking skills using technology-based learning media.
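For readers without SPSS, the same sequence of checks can be reproduced, in a hedged form, with SciPy; the score arrays below are synthetic placeholders rather than the study's data, and the critical t value is computed for a two-tailed test at α = 0.05, which need not match the t-table value quoted above.

```python
# Normality check, homogeneity of variance, and independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
experimental = rng.normal(78, 8, size=30)   # placeholder post-test scores, experimental group
control = rng.normal(70, 8, size=30)        # placeholder post-test scores, control group

for name, scores in (("experimental", experimental), ("control", control)):
    z = (scores - scores.mean()) / scores.std(ddof=1)
    ks_stat, ks_p = stats.kstest(z, "norm")            # Kolmogorov-Smirnov normality check
    print(name, "KS p =", round(ks_p, 3))

lev_stat, lev_p = stats.levene(experimental, control)  # homogeneity of variance
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=lev_p > 0.05)

critical_t = stats.t.ppf(0.975, df=len(experimental) + len(control) - 2)
print("t =", round(t_stat, 3), "critical t =", round(critical_t, 3), "p =", round(p_value, 3))
# Decision rule as in the text: reject H0 when t > critical t (and p < 0.05).
```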
3,495.8
2020-01-01T00:00:00.000
[ "Education", "Computer Science" ]
Bacillus spp. Inhibit Edwardsiella tarda Quorum-Sensing and Fish Infection The disruption of pathogen communication or quorum-sensing (QS) via quorum-quenching (QQ) molecules has been proposed as a promising strategy to fight bacterial infections. Bacillus spp. have recognizable biotechnology applications, namely as probiotic health-promoting agents or as a source of natural antimicrobial molecules, including QQ molecules. This study characterized the QQ potential of 200 Bacillus spp., isolated from the gut of different aquaculture fish species, to suppress fish pathogens' QS. Approximately 12% of the tested Bacillus spp. fish isolates (FI) were able to interfere with synthetic QS molecules. Ten isolates were further selected as producers of extracellular QQ molecules and their QQ capacity was evaluated against the QS of important aquaculture bacterial pathogens, namely Aeromonas spp., Vibrio spp., Photobacterium damselae, Edwardsiella tarda, and Shigella sonnei. The results revealed that A. veronii and E. tarda produce QS molecules that are detectable by the Chr. violaceum biosensor, and which were degraded when exposed to the extracellular extracts of three FI isolates. Moreover, the same isolates, identified as B. subtilis, B. velezensis, and B. pumilus, significantly reduced the pathogenicity of E. tarda in zebrafish larvae, increasing its survival by 50%. Taken together, these results identified three Bacillus spp. capable of extracellularly quenching aquaculture pathogen communication, which thus become a promising source of bioactive molecules for use in the biocontrol of aquaculture bacterial diseases. Introduction Despite experiencing continuous growth, the development of aquaculture remains highly vulnerable to the occurrence of infectious bacterial diseases. Furunculosis, edwardsiellosis, vibriosis, and photobacteriosis are among the most prevalent diseases, with serious effects on marine fish species [1]. Although antibiotics are an important tool for disease treatment, their damaging effects on the environment and public health have led to increased restrictions on their use in aquaculture [2]. An alternative to antibiotics relies on attenuating pathogen virulence, a process called anti-virulence therapy [3]. Virulence factors (e.g., motility, extracellular polysaccharides, biofilms, lytic enzymes, or secretion systems) enable bacterial pathogens to colonize and damage their host. B. cereus strains have shown QQ and protective activities against Y. ruckeri in rainbow trout [34]. This study screened the QQ capacity of the extracellular components of a collection of Bacillus spp., which were previously isolated from the gut of different marine fish species, and tested their ability to degrade the AHL signals produced by important fish pathogenic strains. After validation of fish isolates (FIs) with QQ capacity, their potential protective effects were determined in an in vivo model (zebrafish larvae) when challenged with E. tarda. Figure 1. Violacein pigment production by Chr. violaceum biosensors when exposed to spore-forming fish isolates. (A) Inhibition of biosensor's violacein pigment production by FI isolates (FI numbers on top).
(B) Biosensor's violacein pigment inhibition by the cell-free supernatant of spore-forming fish isolates (FI numbers on top). All photos were taken with a Sony IMX240 camera and are at the same scale. When testing their cell-free supernatants (to establish the intra- or extracellular localization of the QQ compounds), it became clear that the FI did not produce extracellular compounds capable of interfering with all AHLs of the wild-type biosensor (Figure 1B). In contrast, the inhibition of violacein pigment production was observed in the CV026 biosensor supplemented with 5 µM of 3-Oxo-C6-HSL, and was particularly evident in FI314, FI330, and FI464, which showed stronger activity when compared to the other isolates (Figure 1B). These results indicate that FI isolates produce extracellular compounds with QQ capacity against at least 3-Oxo-C6-HSL QS molecules. The laboratory strain B. subtilis 168 could not inhibit pigment production of the biosensors in either bioassay. Isolates QQ Activity Is Mediated through AHLs Enzymatic Inactivation To evaluate the catalytic nature of the QQ activity observed in the previous experiment, the extracellular compounds of all positive strains (FI314, FI330, FI333, FI335, FI346, FI383, FI423, FI436, FI442, and FI464) were mixed for 24 h with 30 µM of synthetic 3-Oxo-C6-HSL before a bioassay with the CV026 biosensor. In this assay, a reduction in the violacein halo obtained with the 24 h reaction mixture (H24), when compared to a fresh reaction mixture (H0), indicates lactone degradation by the extracellular compounds present in the FI supernatants, and thus an enzymatic reaction. The FI extracellular compounds mixed with 3-Oxo-C6-HSL led to a reduction in pigment production by the biosensor in all combinations, with strains FI314, FI330, FI346, and FI464 showing the highest activity (Figure 2A). A reduction in pigment production was also noticed when using the fresh (H0) reactions of strains FI314, FI330, and FI464, which might indicate the presence of signal blockers in their supernatants (Figure 2A). To clarify whether the degradation of 3-Oxo-C6-HSL could be due to a lactonase, FI extracellular compounds were simultaneously tested for lactonase enzymatic activity through acidification, since the lactonase enzymatic reaction is pH-mediated and reversible by acidification. After the acidification process, the enzymatic reaction could be reversed, as confirmed by the regain of purple pigmentation intensity, suggesting a lactonase-type FI QQ activity (Figure 2A). As expected, the negative control (LB medium mixed with 30 µM of 3-Oxo-C6-HSL) did not reduce pigment production during the bioassay (Figure 2A). Figure 2. Activity and genomic detection of putative QQ lactonases. (A) Enzymatic degradation of 3-Oxo-C6-HSL by the fish isolates' extracellular compounds (FIs on the top), followed by reversion of the enzymatic reaction through acidification, revealed by the reduction and restoration of the violacein pigment production by the biosensor. All photos were taken with a Sony IMX240 camera and are at the same scale. (B) PCR detection of genes coding for a putative QQ lactonase (ytnP) and N-acyl homoserine lactonase (aiiA) in the genomes of B. subtilis 168 (Bsub) and fish isolates (FI numbers on top). The amplicon size, in base pairs (bp), is shown on the right. The figure was constructed using different zones of the agarose gel. PCR products marked with a red circle in the figure were sequenced using the corresponding forward and reverse primers (Table 1). To elucidate the lactonases behind the observed QQ activities, all 10 FI isolates were first identified, based on partial sequencing of the 16S rRNA gene (~1000 bp), as part of the Bacillus genus. FI314, FI330, FI346, and FI442 were identified as B. subtilis; FI333 and FI423 as B. amyloliquefaciens; FI335, FI383, and FI436 as B. velezensis; and FI464 as B. pumilus (Table 1). The presence of genes previously correlated with QQ activity in Bacillus species was first investigated using oligonucleotide primers described by Kalia et al. [37] to amplify the Firmicutes AHL-lactonase gene aiiA (Table 1). Since no amplification could be obtained in the target FI (data not shown), novel oligonucleotide primers were specifically designed in highly conserved genomic regions to target the genes of interest: aiiA (N-acyl homoserine lactonase) and ytnP (probable quorum-quenching lactonase). As shown in Figure 2B, all strains except FI464 showed a PCR band with the expected size (559 bp) of the ytnP gene. In isolate FI464, a strong PCR band was observed at a higher molecular weight (~1500 bp). However, only faint bands with the expected size (583 bp) of aiiA could be detected in FI314, FI330, FI333, FI383, FI423, and FI442. Combining the previously observed genomic profile and QQ bioactivities, five PCR products were sequenced to confirm whether they matched the targeted genes (bands marked in red in Figure 2B). All DNA sequences could be assigned to a protein family using BLASTx (https://blast.ncbi.nlm.nih.gov/Blast.cgi; accessed on 15 January 2020) (Table 2). The amino acid sequences of putative AHL-lactonases were aligned with other known AHL-lactonases, and the conserved motif "HXHXDH" present in the metallo-β-lactamase superfamily was searched for, as illustrated in Supplementary Figure S1A. Putative ytnP-like AHL-lactonases from FI314 and FI436 have the zinc-binding motif "HXHXDH" (Supplementary Figure S1A) and cluster with the B. subtilis 168 ytnP (Supplementary Figure S1B), being identified through BLASTx as a ytnP-like metallo-hydrolase and an MBL-fold metallo-hydrolase, respectively. However, the high-molecular-weight band of FI464 did not correspond to a putative QQ lactonase but to a DAK2 domain-containing protein (Table 2) and did not cluster with any QQ enzymes known in Bacillus spp. (Supplementary Figure S1B). Putative AHL-lactonase aiiA sequences from strains FI314 and FI383 clustered separately from the other AHL-lactonases in the metallo-β-lactamase family and were identified as containing a rhodanese-like homology domain (Table 2). a Closest known protein using BLASTx based on partial sequences of QQ genes (~300-400 nt). b Query Cover: the percentage of the query sequence covered by the reference sequence. c Percent Identity: the percentage of similarity between the query sequence and the reference sequence.
c Percent Identity: the percentage of similarity between the query sequence and the reference sequence. Isolates QQ Compounds Can Interfere with Fish Pathogens' AHLs After establishing the putative lactonase type of FI QQ activity, FI QQ compounds were tested for interference with fish pathogens' AHLs. For this, 13 different fish pathogens (Table 3) were screened for the production of AHLs detectable by the Chr. violaceum CV026 biosensor. Of the 13 tested pathogens, only Aeromonas salmonicida, A. veronii, A. bivalvium, and Edwardsiella tarda could induce the production of violacein pigment on the biosensor by cross-feeding (Supplementary Figure S2). As AHLs can diffuse through the cell, their production kinetics and cell release were investigated in those four pathogens. Using the pathogens' extracellular medium in a well-diffusion method, it was observed that the production of AHLs by A. veronii started at the beginning of its growth curve, lasting for 12 h, with a maximum peak at the transition from the late exponential to the early stationary phase of growth (Figure 3A,B). In E. tarda, AHL detection started at the transition from the exponential to the stationary growth phase and extended until 36 h of growth (Figure 3A,B). Although observable by cross-feeding (Supplementary Figure S2), the use of the extracellular medium in the well-diffusion method did not allow for the detection of AHLs produced by A. salmonicida and A. bivalvium by the CV026 biosensor at 48 h of the assay (Figure 3B). Before ascertaining the FI QQ capacity against natural AHLs from fish pathogens, the QQ kinetics of FI314, FI436, and FI464 (selected by combining the previous QQ bioassays, the molecular identification, and the QQ genomic profile) was assessed using synthetic 3-oxo-C6-HSL. For all strains, the maximum production of QQ compounds occurred during the early stationary growth phase (~8 h) (Supplementary Figure S3). Fish pathogens' AHLs were extracted from the cell-free supernatants of 6 h and 14 h cultures of A. veronii and E. tarda, respectively, and analyzed through an agar well-diffusion assay using different supplementation amounts (10-100 µL in 8 mL of soft agar) (Figure 4). A. veronii crude AHL extracts induced a slight violacein pigmentation on the CV026 biosensor with 100 µL of supplementation. The AHL crude extract from E. tarda induced purple pigmentation on the biosensor CV026 with 10-100 µL of supplementation. As illustrated in Figure 4, all three selected FI strains could inactivate the signals produced by A. veronii and E. tarda, with FI314 and FI464 showing a higher QQ potential. Additionally, a decrease in QQ activity could also be observed with an increase in E. tarda AHL supplementation (Figure 4), indicating a concentration-dependent activity. Figure 4. Biosensor's violacein pigment production when supplemented with natural AHLs extracted from A. veronii and E. tarda (10, 20, 40, 60, 80 and 100 µL) around wells containing cell-free supernatant of sporeforming fish isolates (FI numbers on top). All photos were taken with a Sony IMX240 camera and are at the same scale. Isolates QQ Compounds Protect Zebrafish Larvae upon E. tarda Challenge The next question was whether the in vitro FI QQ activity against E. tarda AHLs could translate into in vivo protection, by the FI strains, of zebrafish larvae challenged with E. tarda. For this, the maximum non-toxic extract concentration (MNTC) of each FI to be employed in the protective assays was first established using 4 dpf zebrafish larvae (Supplementary Figure S4A). The results showed that extracts from all strains induced toxicity in zebrafish larvae when administered at concentrations above 250 µg mL^−1 and, thus, FI extracts were used at 250 µg mL^−1 (MNTC). This concentration did not influence E. tarda growth (data not shown). Secondly, an E. tarda infection model was established by exposing 10 dpf zebrafish larvae by immersion to 5 × 10^7, 1 × 10^8, and 3 × 10^8 CFU mL^−1 of E. tarda for 24 h. Larvae exposed to 3 × 10^8 CFU E. tarda mL^−1 started to show mortality at 10 h post-infection (hpi), which progressed rapidly until 18 hpi, reaching 100% mortality (data not shown and Supplementary Figure S4). Mortalities in larvae exposed to 1 × 10^8 CFU E. tarda mL^−1 began at 17 hpi and progressed over time, reaching ~60% at 24 hpi. On the other hand, 5 × 10^7 CFU E. tarda mL^−1 started to induce mortalities only at 23 hpi (Supplementary Figure S4B). Control larvae did not exhibit any mortality throughout the experimental trial. From the overall results, 1 × 10^8 CFU E. tarda mL^−1 was selected as the bacterial concentration for the challenge experiment. Finally, the protective effect of FI extracellular compounds was evaluated using the E. tarda infection model. As illustrated in Figure 5, FI extracellular compounds from FI314, FI436, and FI464 were able to protect zebrafish larvae from E.
tarda infection, significantly increasing larvae survival rates when compared to the control (non-treated zebrafish larvae infected with E. tarda). In the control group, only 30% of zebrafish larvae survived at 24 hpi, and mortality reached 100% at 48 hpi. Compared to the control group, after 48 hpi, FI314 increased the average survival rate of challenged larvae by 43% (p < 0.01), and strains FI436 and FI464 increased the survival rate upon challenge by 50% (p < 0.001). Figure 5. FI extracts' protection of zebrafish larvae against infection with E. tarda. 7 dpf zebrafish larvae were immersed for 2 h in 250 µg mL^−1 of each FI extract and, three days later, challenged with E. tarda at a final concentration of 1 × 10^8 CFU mL^−1 for 24 h. Untreated larvae challenged with E. tarda, and untreated and unchallenged larvae, were used as positive and negative controls, respectively. Data are composed of three independent experiments. Significant differences (p < 0.01; p < 0.001) in relation to the control are represented by asterisks (**, ***, respectively). Discussion Bacillus spp. have been extensively studied for their use in aquaculture due to their probiotic attributes, which include the production of bioactive compounds and the modulation of the host immune response [40]. In addition to their well-known antimicrobial activity, Bacillus spp. are also producers of quorum-quenching (QQ) molecules [27,29,41]. The disruption of pathogens' communication, or quorum sensing, via QQ molecules has been proposed as a promising alternative for fighting bacterial infections in aquaculture [15], which remain a major constraint to the sustainable development of the sector. We recently described the potential of fish-gut Bacillus spp. isolates (FI) as a source of QQ molecules [27]. In this study, we took advantage of 200 FI isolated from the gut of different fish species (Sparus aurata, Dicentrarchus labrax, and Diplodus sargus) [24,27] to further explore their QQ potential, by testing their ability to partially or completely degrade the AHL QS molecules used by gram-negative fish pathogens. Consistent with the literature, the results indicate that fish-gut Bacillus spp. have QQ potential and, based on the tests of their extracellular compounds, the majority of these Bacillus spp. produce and release QQ compounds to the extracellular environment.
This observation contradicts previous reports that described AHL-degrading activities in Bacillus spp. as cytoplasmic or cell-wall-associated [14,16,42] and might represent a technological advantage for QQ disease control, since compounds that work extracellularly are believed to exert less selective pressure for the evolution of resistance among pathogens [3,14,43]. AHLs can be degraded through the enzymatic activity of lactonases, acylases, or oxidoreductases. To date, mainly lactonases, but also one oxidoreductase and one putative acylase, have been described in Bacillus species [42,[44][45][46]. When an AHL-lactonase is present, cleavage of the homoserine lactone ring of the AHL molecule occurs. The opening of the lactone ring makes the AHL molecule incapable of binding to the target transcriptional regulators (e.g., VanT, AhyR, and LuxR homologs), attenuating its effectiveness and its detection by the biosensor [47]. This hydrolysis is pH-mediated and can be reversed by acidification. Taking this into consideration, the extracellular enzymatic activity of the positive FI isolates was tested, and all 10 strains enzymatically degraded the AHLs after 24 h of incubation, with a partial restoration of the AHL molecule, inferred from the recovery of the violacein pigment on the biosensor during the acidification process. The restoration of the violacein pigment on the biosensor after the acidification process indicated the putative presence of an AHL-lactonase. However, pigment regain was only partial, which may be explained by the lack of pH control during incubation, which is essential for the reversal of the lactone-ring hydrolysis [48]. Although the presence of an acylase, an oxidoreductase, or small molecules that block the biosensor signal receptors cannot be ruled out, the AHL degradation, seen as weaker colour formation on the biosensor exposed to the 24 h AHL-SUP mixture compared to the colour observed with the fresh (H0) AHL-SUP mixture, followed by colour restoration upon acidification, indicates that the observed QQ activity is probably due to a lactonase-like enzyme. In the lactonase-like enzyme family, the aiiA gene and AiiA enzyme were the first to be discovered and characterized for their QQ activity in Bacillus spp. [16]. Over the years, different QQ studies have highlighted the potential of the AiiA enzyme in the prevention of plant and animal bacterial infections [16,28,49]. In the present study, attempts to amplify the aiiA gene, both using the primers described in the literature and primers designed by us, were unsuccessful. Despite the presence of faint PCR bands in the agarose gel (using the primers designed in this study), the analysis of their nucleotide and translated sequences showed that they did not correspond to the aiiA gene or contain the AHL-lactonases' conserved zinc-binding motif "HxHxDH" [42]. Although metallo-β-lactamase family members share the same folding and conserved sequences, the group encompasses proteins with divergent sequences and biological functions [50]. Thus, it was hypothesized that, by using the primers designed on homologous sequences, other metallo-β-lactamase proteins were amplified, with no similarity to the aiiA gene. In fact, considering the taxonomic identification of the QQ strains, only a few studies have reported the presence of the aiiA gene in B. subtilis and B. amyloliquefaciens [49,51], and none in B. velezensis and B. pumilus.
Meanwhile, Schneider, Yepes, Garcia-Betancur, Westedt, Mielich and López [20] reported the expression of another QQ gene, ytnP from B. subtilis, capable of interfering with the signaling pathways of biofilm formation and streptomycin production in Pseudomonas aeruginosa and Streptomyces griseus. Here, the ytnP gene was successfully amplified in 9 out of the 10 QQ strains. Protein sequence analysis of FI314 and FI436 showed similarity with ytnP-like and MBL fold proteins, respectively, and revealed the presence of the AHL-lactonase zinc-binding motif containing the key histidine residues "HxHxDH" of the lactonase architecture [42]. The YtnP enzyme has been described to accumulate in the cytoplasm, and only in the presence of other antimicrobials and stress factors [20]. In the strains used in this study, the observed QQ activity was extracellular, and is thus unlikely to be due to the action of YtnP alone. An additional, undescribed QQ lactonase, or at least a differently regulated ytnP, must be present. The deletion of ytnP in the FI isolates' background could provide a glimpse into their QQ mechanism, but transformation of these strains (both by chemical methods and by electroporation) has been unsuccessful to date (data not shown). Additionally, despite showing putative AHL-lactonase activity in the biochemical test, neither the tested QQ genes nor the conserved motif of AHL-lactonases could be amplified from FI464, identified as B. pumilus. In fact, to the authors' knowledge, there is only one report describing the QQ potential of B. pumilus. Nithya, Aravindraja and Pandian [46] reported a B. pumilus strain with a putative acylase activity, which effectively reduced biofilm formation and other virulence factors in P. aeruginosa. However, the authors did not characterize the protein in detail or provide the gene sequence. Thus, the FI464 genome is undergoing further investigation, as it is assumed that this strain possesses a lactonase activity completely different from those described in the literature for the other Bacillus species. The ability of the FI to interfere with fish pathogens' QS was the core of this study. Thus, the three strains with the best QQ profile, FI314, FI436, and FI464, were evaluated for their ability to degrade the natural signals produced by different species of problematic fish pathogens. QS systems based on AHLs are well described and reviewed in fish bacterial pathogens [8,12,13,[52][53][54][55][56][57][58]. Here, AHL production was evaluated by cross-feeding fourteen different fish pathogens, including species from the Aeromonas, Vibrio, Photobacterium, Edwardsiella, and Shigella genera, with the biosensor Chr. violaceum CV026. Of the tested pathogens, only A. salmonicida, A. veronii, A. bivalvium, and E. tarda had the capacity to induce violacein pigmentation on the biosensor Chr. violaceum CV026. This biosensor has the limitation of being stimulated only by AHLs with acyl chains ranging from C4 to C8 and, when these are present alongside long-acyl-chain molecules (C8-C14), inhibition of violacein production occurs [36]. This might explain the lack of violacein induction on the biosensor by the tested Vibrio spp., which are known to produce several QS molecules, including AHLs of different lengths and derivatives, such as C4-HSL, C6-HSL, 3-oxo-C10-HSL, and 3-oxo-C14-HSL [9,56,57,59]. Bruhn, Dalsgaard, Nielsen, Buchholtz, Larsen and Gram [52] also reported a lack of violacein induction on the CV026 biosensor when testing V. anguillarum, V.
vulnificus, and Photobacterium damselae subsp. damselae strains, but demonstrated AHL production in Vibrio spp. using another biosensor. E. tarda and Aeromonas spp. have previously been reported to induce the violacein pigment on the CV026 biosensor [12,54,55,60]. The exception in these results, as anticipated, was the A. hydrophila LMG 2844 strain, as it is unable to induce an AHL-mediated response detectable by different biosensors, including CV026 [60,61]. An unexpected result was the lack of violacein pigment when testing T. maritimum, since Romero, Avendano-Herrera, Magarinos, Camara and Otero [13] reported that this species produces C4-HSL. Although the CV026 biosensor detects C4-HSL molecules, no activity has been reported in the Flavobacteriaceae family using this biosensor [52], and, to the authors' knowledge, there is no study reporting violacein induction on a CV026 biosensor by T. maritimum AHLs. Moreover, genome sequencing of the T. maritimum strain NCIMB 2154 T revealed the absence of homologous genes for AHL synthesis [62]. Next, the pathogens that induced violacein pigmentation on the Chr. violaceum biosensor, i.e., A. salmonicida, A. veronii, A. bivalvium, and E. tarda, were tested for the maximum peak in AHL production and extracellular accumulation. Although A. salmonicida and A. bivalvium induced the violacein pigment in the cross-feeding bioassays, they failed to induce it when the extracellular accumulation was tested for 48 h. This might be due to low AHL concentration or stability outside the cell. A. veronii accumulated AHLs during the whole exponential phase, with a maximum peak at the time of transition to the stationary phase, followed by a rapid decline, as described by Jangid et al. [63]. However, AHL accumulation in E. tarda was only detected when the cells were entering the stationary phase, and was stable throughout this phase, with the maximum peak detected at 16 h of growth. Accordingly, Han, Li, Qi, Zhang and Bossier [55] also detected AHL production during the E. tarda growth phase (1-21 h) using the Ag. tumefaciens KYC55 biosensor and, since this biosensor is considered to be ultra-sensitive in AHL detection, the accumulation of AHLs was detected 5 h earlier. A. veronii is known to produce natural short- and medium-acyl-chain AHLs, such as C6-HSL, C8-HSL, 3-oxo-C8-HSL, and 3-hydroxy-C8-HSL [64]. In CV026, C8 acyl chains lead to an inhibition of violacein induction by other molecules, such as C6-HSL [36]. This might explain the minor induction of violacein pigment on the biosensor observed when the crude and concentrated A. veronii extracts were used. Nonetheless, FI314 and FI464 were able to interfere with these natural AHLs, either through enzymatic degradation or by interrupting their detection by bacterial receptors. When testing different quantities of E. tarda AHL supplementation, it could be observed that an increase in supplementation led to a decrease in QQ activity, which might indicate an enzymatic degradation of C4-HSL, C6-HSL, 3-oxo-C6-HSL, and C7-HSL [54,55] by all the tested FIs. Recent studies have highlighted the QQ potential of Bacillus spp. against natural AHLs produced by fish pathogens such as A. hydrophila [30,65,66], Yersinia ruckeri [34], and V. harveyi and V. alginolyticus [41]. Similarly, Gui, Wu, Liu, Wang, Zhang and Li [53] demonstrated that a purified lactonase AiiA AI96 from Bacillus sp. enzymatically inactivated the AHLs produced by A. veronii, as well as other QS-controlled behaviors.
To the authors' knowledge, to date, only Romero et al. [67] have described an in vitro quenching of E. tarda AHLs (C6-HSL and 3-oxo-C6-HSL), using Tenacibaculum sp. strain 20J cell extracts. Thus, this is the first report demonstrating the potential of QQ Bacillus spp. in inhibiting the natural AHLs produced by E. tarda. E. tarda is an important bacterial pathogen that causes hemorrhagic septicemia (edwardsiellosis), affecting economically important aquaculture fish species such as turbot [68], Senegalese sole [69], and tilapia [70][71][72], and has been associated with gastro- and extraintestinal infections in humans [73,74]. Bacillus QQ strains have been used in several in vivo studies for fish disease mitigation, but, to date, none has addressed E. tarda infections. For example, oral administration of the purified QQ enzyme AiiA increased zebrafish survival by 40% when challenged with A. hydrophila [28], and increased shrimp survival by 50% when challenged with V. parahaemolyticus [33]. Similarly, when cells of QQ B. cereus and B. thuringiensis strains were fed to rainbow trout, fish survival upon infection with Y. ruckeri increased by 80% [34]. Additionally, Chen et al. [75] co-injected a purified QQ enzyme (AiiA B546) with A. hydrophila, reducing common carp mortality by 25%, and, in a similar experiment, cells of the QQ B. licheniformis T-1 strain decreased zebrafish mortality by 50% when co-injected with A. hydrophila [29]. In the present study, the treatment of zebrafish larvae with FI314, FI436, and FI464 extracts significantly reduced their mortality when challenged with E. tarda by immersion. In E. tarda, the routes of infection are believed to be the gut, the skin, and the gills [76]. The gills and the skin are in constant contact with the environment and, consequently, are accessible to pathogen entry. Thus, the presence of QQ molecules at these sites may help prevent or delay fish infection. On the other hand, in the present study, there was no physical contact between the FI extracts and E. tarda, suggesting that these molecules can circulate through the host, exerting their protective effect. Although the FI extracts did not inhibit E. tarda growth (data not shown), a possible direct stimulation of the larvae's immune system, facilitating disease resistance, cannot be ruled out. This interpretation is only speculative and requires further investigation, including the determination of a putative stimulation of the fish immune system by such extracts. Importantly, the extracts showed good stability (up to 12 months) under non-specific storage conditions (room temperature). Moreover, a small bacterial culture volume (200 mL) was sufficient to obtain enough lyophilized extract to make up to 40 L of treating water. These characteristics reinforce the practical applicability of these extracts in the aquaculture industry. In conclusion, this work describes three Bacillus spp. which are capable of extracellularly quenching E. tarda AHLs while protecting model zebrafish larvae from infection. The lack of studies regarding E. tarda infection and its growing impact in aquaculture emphasize the novelty and importance of this study. Nonetheless, further studies are required to fully characterize the QQ molecules responsible for the described bioactivities and to clarify the QQ mechanism involved. QS plays an important role in bacterial pathogenesis and virulence; thus, the QQ molecules from FI314 (B. subtilis), FI436 (B. velezensis), and FI464 (B.
pumilus) may be promising tools for disease control in aquaculture. Evaluation of Isolates QQ Activity A flow diagram of the methodology used for evaluating QQ activity is presented in Supplementary Figure S5. The extracellular QQ activity was measured in all positive strains from the previous assay. For that, the cell-free supernatant of each FI was prepared from overnight cultures grown at 37 °C and 140 rpm, followed by centrifugation for 10 min at 13,000× g and sterilization by filtration through a 0.22 µm cellulose acetate filter. LB agar plates were overlaid with LB soft agar (0.8% agar) inoculated with Chr. violaceum WT (OD600 ~ 0.1), or with Chr. violaceum CV026 (OD600 ~ 0.1) supplemented with 5 µM of 3-Oxo-C6-HSL. Once the plates solidified, 9 mm diameter wells were punched and filled with 100 µL of the cell-free supernatant of each FI. As the cell-free supernatants of all tested fish isolates do not interfere with the biosensor's growth, zones of violacein pigmentation inhibition around the wells after 48 h at 30 °C were considered a positive result for extracellular QQ activity. The laboratory strain B. subtilis 168 was used as a control for FI bacterial growth, and strains FI314, FI330, and FI442 were used as positive controls for QQ activity, as described earlier [27]. All digital photos were taken with a Sony IMX240 camera and zones (in mm) of pigment inhibition were recorded. AHLs Enzymatic Inactivation by Isolates' Extracellular Compounds Fish isolates' cell-free supernatant was tested for enzymatic degradation of 30 µM of 3-Oxo-C6-HSL, using a well-diffusion method, as follows: LB plates were overlaid with 8 mL of LB soft agar (0.8% agar) previously inoculated with Chr. violaceum CV026 (OD600 ~ 0.1). After plate solidification, 9 mm diameter wells were punched and filled with 100 µL of AHL-SUP reaction mixtures (cell-free supernatant of each FI strain mixed with 30 µM of 3-Oxo-C6-HSL). To evaluate enzymatic activity, the AHL-SUP reaction mixtures were incubated at room temperature, with agitation at 120 rpm, for 0 h and 24 h before use in the bioassays with Chr. violaceum CV026. LB media with and without the same concentration of 3-Oxo-C6-HSL were used as negative controls. A reduction in the intensity and size of the violacein pigment halo on the biosensor exposed to the 24 h reaction mixture, compared to the 0 h mixture, was considered a positive result for enzymatic degradation. The method described by Edwin A. Yates [48] was used to elucidate whether the enzymatic degradation of 3-Oxo-C6-HSL could be due to a lactonase. Reaction mixtures were acidified with 10 N HCl to pH 2.0, and then used again in a new bioassay with Chr. violaceum CV026, as described. Since the lactonase enzymatic reaction is pH-mediated and reversible by acidification, the restoration of the violacein pigment halo on Chr. violaceum CV026 was considered positive for lactonase activity. All digital photos were taken with a Sony IMX240 camera and zones (in mm) of pigment inhibition were recorded. Design of QQ Primers To obtain a set of primers specific to genes encoding putative QQ enzymes, an initial search was conducted at the Protein Knowledgebase (UniProtKB) for "Bacillus Quorum Quenching lactonase". Selected enzymes included YtnP (probable QQ lactonase) (https://www.uniprot.org/uniprot/O34760, accessed on 9 June 2019) and AiiA (N-acyl homoserine lactonase) (https://www.uniprot.org/uniprot/Q9L8R8, accessed on 9 June 2019).
The protein sequence of each enzyme was used to search for similar proteins in the translated nucleotide database using NCBI (https://www.ncbi.nlm.nih.gov, accessed on 9 June 2019). Nucleotide sequence alignments with ClustalW (GenomeNet, Kyoto University, Japan) allowed for the detection of regions of sequence conservation, which were used to design a pair of primers for each enzyme-encoding gene (ytnP and aiiA) with the SnapGene software version 5.2.3 (GSL Biotech LLC, San Diego, CA, USA) (Table 1). Phylogenetic analysis was performed against the GenBank non-redundant nucleotide database (Blastn) and the GenBank protein database using a translated nucleotide sequence (Blastx) with BLAST (http://www.ncbi.nlm.nih.gov, accessed on 15 January 2020). The amino acid sequences of the putative AHL-lactonase enzymes were aligned with the ClustalW software (https://www.genome.jp/tools-bin/clustalw, accessed on 15 January 2020) and the phylogenetic tree was built using the neighbor-joining method available in ClustalW. Pathogens' AHL production kinetics were evaluated in the supernatants of A. salmonicida, A. veronii, A. bivalvium and E. tarda. In brief, an inoculum (OD600 ~ 0.05) of A. bivalvium, A. salmonicida, A. veronii and E. tarda was prepared in BHI medium and grown for 48 h at 25 °C (or 37 °C in the case of E. tarda) and 140 rpm. Every 2 h (or every 4 h after the first 24 h of growth), the optical density (OD600) was measured and the pathogens' supernatant was obtained by centrifugation for 10 min at 16,000× g and filtration through a 0.22 µm cellulose acetate filter. The presence of AHLs in the pathogens' supernatants was revealed by overlaying an LB agar plate with 8 mL of LB soft agar (0.8% agar) inoculated with Chr. violaceum CV026 (OD600 ~ 0.1). After plate solidification, 9 mm diameter wells were punched and filled with 100 µL of each pathogen's supernatant. Plates were incubated for 48 h at 30 °C and violacein pigment halos around the wells were considered a positive result. All digital photos were taken with a Sony IMX240 camera (Sony, Tokyo, Japan) and zones (in mm) of pigment inhibition were recorded. Extraction of Fish Pathogens' AHLs The AHLs produced by A. salmonicida, A. veronii, A. bivalvium, and E. tarda were extracted as described in [78], with some modifications. In brief, an inoculum of each bacterial strain (OD600 ~ 0.05) was prepared in 25 mL of BHI medium from overnight cultures. A. salmonicida, A. veronii, and A. bivalvium were grown for 6 h at 25 °C, and E. tarda was grown for 14 h at 37 °C, 140 rpm. The pathogens' supernatant was obtained by centrifugation for 10 min at 16,000× g and filtration through a 0.22 µm cellulose acetate filter, and mixed with an equal volume of acidified ethyl acetate (0.1% of acetic acid). Mixtures were shaken for 30 min, followed by phase separation. The organic phases were pooled and stored at 4 °C. The extraction procedure was repeated three times to improve AHL extraction. The pooled fraction was concentrated in a rotary evaporator at room temperature. Finally, the dried extract was dissolved in 500 µL of ethyl acetate and stored at 4 °C until use. Fish Isolates QQ Activity on Fish Pathogens' AHLs Fish isolates' QQ activity on A. salmonicida, A. veronii, A. bivalvium, and E. tarda AHLs was evaluated by overlaying LB plates with 8 mL of LB soft agar inoculated with Chr. violaceum CV026 (OD600 ~ 0.05) supplemented with the natural AHLs extracted from the fish pathogens (10, 20, 40, 60, 80 and 100 µL).
Once the plates solidified, 9 mm diameter wells were punched and filled with 100 µL of cell-free supernatant of each fish isolate strain. As in previous bioassays, zones of violacein inhibition around the wells (without interference on biosensor growth) after 48 h at 30 °C were considered a positive result for QQ activity. All digital photos were taken with a Sony IMX240 camera and zones (in mm) of pigment inhibition were recorded. Ethics Statement Zebrafish experiments and handling were approved by the Animal Welfare Committee of the Interdisciplinary Centre of Marine and Environmental Research (CIIMAR), performed by trained scientists (with FELASA category C), and carried out in a registered installation (N16091.UDER), in compliance with the European Directive 2010/63/EU for the care and use of laboratory animals. Zebrafish Larvae General Care Zebrafish embryos were obtained from a wild-type zebrafish broodstock, and incubated in egg water at 28 °C under a photoperiod of 14 h of light:10 h of darkness until hatching. After hatching, larvae were kept in the same conditions and, from 6 dpf, were fed twice a day (diet containing 36.7% of total crude protein and 15% of total lipids). After each experiment, the surviving larvae were euthanized with a lethal dose of tricaine methanesulfonate (MS-222, 300 mg L^−1). Isolates Extracts Preparation and Testing for Toxicity in Zebrafish Larvae The extracts of the 3 most promising QQ fish isolates were obtained by freeze-drying the filtered cell-free supernatant of 8 h cultures and resuspending it in sterile 1×PBS. The evaluation of their in vivo toxicity was performed using zebrafish larvae (Danio rerio) as a model, following the Organisation for Economic Co-operation and Development (OECD) Guidelines for Fish Embryo Toxicity Tests [79]. Zebrafish larvae at 4 days post-fertilization (dpf) were distributed into 6-well plates containing 10 larvae/well in 5 mL of egg water (26.4 mg L^−1 of Instant Ocean® Salt) and exposed to fish isolates' extracts with concentrations ranging from 67.5 µg mL^−1 to 1 mg mL^−1. Larval mortality was recorded at 4, 5, 6, and 7 dpf, and the dead larvae were removed and discarded. Larvae kept in egg water (without treatment) were used as a negative control. The experiment was performed in triplicate to determine the maximum non-toxic extract concentration (MNTC) to be used in subsequent assays. E. tarda Infection Model Zebrafish larvae were used to establish an infection model of E. tarda by bath immersion. E. tarda was cultured for 24 h in BHI at 37 °C with 140 rpm, pelleted by centrifugation (6000× g) at room temperature, washed twice with sterile 1×PBS and then diluted to the correct concentration in 1×PBS. Before the establishment of the infection model, bacterial cell densities ranging from 10^4 to 10^9 were tested to evaluate their virulence in zebrafish larvae during 24 h and determine the lowest lethal concentration causing 100% mortalities (5 × 10^8 CFU mL^−1) and the non-lethal dose (1 × 10^7 CFU mL^−1) (data not shown). Zebrafish larvae at 10 dpf were distributed into 6-well plates containing 10 larvae/well in 5 mL of egg water and inoculated with 5 × 10^7, 1 × 10^8, and 3 × 10^8 CFU mL^−1 of E. tarda. After inoculation, larvae were fed, and the plate was incubated at 28 °C. The E. tarda inoculum was kept in the water throughout the whole experiment (24 h). Cumulative mortalities were registered for 24 h, and dead larvae found during the assay were removed and discarded.
Control groups were included: (i) non-inoculated larvae, with egg water only; (ii) larvae inoculated with 1×PBS. The experiment was independently performed 3 times. Fish Isolates' Protection Assay against E. tarda Infection in Zebrafish Larvae The fish isolates' protection of zebrafish larvae against E. tarda infection was evaluated by testing the lyophilized extracts at a final concentration of 250 µg mL^−1. Before the assay, a pre-treatment experiment was performed, where zebrafish larvae were treated with the extracts once, twice, or thrice for 2 or 24 h before challenge with E. tarda, allowing for the establishment of the best protection method. Thus, at 4 dpf, 10 healthy larvae were distributed into each well of a 6-well plate containing 5 mL of egg water. Larvae were treated with the extracts once after mouth opening (7 dpf), for 2 h at 28 °C, and then transferred to 5 mL of fresh egg water. The treated 10 dpf larvae were challenged by immersion with E. tarda at 1 × 10^8 CFU mL^−1. After inoculation, larvae were fed, and the plate was incubated at 28 °C for 24 h. Cumulative mortalities were registered between 16 and 24 h, and the dead larvae were removed and discarded. Control groups were included: (i) non-treated larvae inoculated with E. tarda; (ii) non-inoculated larvae; (iii) larvae inoculated with 1×PBS. The experiment was independently performed 3 times. Statistical Analysis Survival data were analysed using the Kaplan-Meier method, and group differences were analysed with the log-rank test, using GraphPad Prism 9. One-way ANOVA was performed to compare treatments with the control, using the SPSS 26.0 software package for Windows (IBM Corp., New York, NY, USA). When p-values were significant (p < 0.05), means were compared with Dunnett's test. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/md19110602/s1, Table S1: Identification of the 10 fish-gut isolates with QQ activity, using 16S rRNA gene analysis. Figure S1: Amino acid sequence comparison of the putative AHL-lactonases with known AHL-lactonase enzymes. Figure S2: Detection of AHL production by gram-negative fish pathogens using Chr. violaceum CV026 as a biosensor. Figure S3: Growth curves and QQ kinetics of FI314, FI436 and FI464. Figure S4: Toxicity of fish isolates' extracellular compounds and establishment of the E. tarda infection model in zebrafish larvae. Figure S5: Flow diagram of the methodology used for evaluating FI isolates' QQ activity. (Foundation for Science and Technology), under the POCH program; CRS has a scientific employment contract supported by national funds through FCT-Portuguese Foundation for Science and Technology. This research was partially supported by the Strategic Funding UIDB/04423/2020 and UIDP/04423/2020, UIDB/04033/2020 and UIDB/00772/2020 through national funds provided by FCT and the European Regional Development Fund (ERDF), in the framework of the programme PT2020. Institutional Review Board Statement: Animal experiments were approved by the Animal Welfare Committee of the Interdisciplinary Centre of Marine and Environmental Research (CIIMAR), carried out in a registered installation (N16091.UDER), and performed by trained scientists (following FELASA category C recommendations) in full compliance with national rules and the European Directive 2010/63/EU of the European Parliament and the European Union Council on the protection of animals used for scientific purposes.
Data Availability Statement: The 16S rRNA gene sequences of the fish isolates described in this manuscript have been deposited in GenBank under the accession numbers provided in Table 2. The authors confirm that all relevant data are included in the article. Conflicts of Interest: RJ is an employee of Epicore Inc., a subsidiary of Archer Daniels Midland, which supported this research by completing a commercial licensing agreement for the potential utilization of the probiotic bacteria FI314, FI330, and FI442 used in this research in commercial products for aquaculture. The opinions expressed in the manuscript are those of the authors and do not necessarily reflect company policies. All other authors declare no competing interests.
11,389
2021-10-23T00:00:00.000
[ "Biology", "Engineering" ]
Power-Optimal Control of a Stirling Engine’s Frictional Piston Motion The power output of Stirling engines can be optimized by several means. In this study, the focus is on potential performance improvements that can be achieved by optimizing the piston motion of an alpha-Stirling engine in the presence of dissipative processes, in particular mechanical friction. We use a low-effort endoreversible Stirling engine model, which allows for the incorporation of finite heat and mass transfer as well as the friction caused by the piston motion. Instead of performing a parameterization of the piston motion and optimizing these parameters, we here use an indirect iterative gradient method that is based on Pontryagin’s maximum principle. For varying friction coefficient, the optimization results are compared to both a harmonic piston motion and the optimization results found in a previous study, where a parameterized piston motion had been used. Thus we show how much performance can be improved by using the more sophisticated and numerically more expensive iterative gradient method. Introduction Stirling engines [1] are devices capable of transforming heat into mechanical work by utilizing almost any external heat source. Thus, they constitute an interesting alternative for power production in various scenarios, e.g., for waste heat or burnable waste gas utilization, or as part of electrothermal energy storage systems for renewable energies. The most essential parts of a Stirling engine are the hot working space, the cold working space, and the regenerator. The two working spaces are thermally connected to external heat baths through heat exchangers. The volumes of the working spaces are cyclically varied over time so as to compress the gas at low temperature, heat it up with the help of the regenerator, expand it at high temperature, and cool it down again using the regenerator. One technical configuration to realize this is referred to as the alpha-Stirling engine, where the two working spaces are in separate cylinders and their volumes are changed by independently movable working pistons. To determine proper design parameters of a Stirling engine suitable for a specific application, parameter optimizations can be performed, see for example [2][3][4][5][6]. There, the piston motions are often prescribed as harmonic functions or parametric functions representing specific piston drive mechanisms. In this theoretical study we focus on how the piston motion influences the engine performance by applying methods similar to previous optimizations of Stirling engines' piston motions [7][8][9][10][11][12][13]. Experimental studies [14,15] have shown that power output improvements of Stirling engines are feasible through altering the piston motion. In the current study we revisit [12], where we performed the optimization of an alpha-Stirling engine for a parameterized class of smooth piston motions. In contrast to this previous publication, we will not restrict ourselves to such a parameterized class of piston motions here, but we will use optimal control theory to obtain a more general solution of the optimal control problem. To this end we apply an indirect iterative gradient method [11] that exploits limit cycles in the state and costate problems to solve them for periodic boundary conditions. In order to make such optimizations feasible, models with few degrees of freedom and low numerical effort are required. Stirling engine models are often categorized as first, second, and third order [16,17].
Especially in the case of the more detailed third order models, relatively large numerical effort is typically connected to the description of the regenerator. Therefore, several attempts [10,11,18,19] have been made to develop reduced-order regenerator models that constitute proper tradeoffs between accuracy and numerical effort for optimal control problems. In the current study we use an ideal regenerator model [12] that is based on Endoreversible Thermodynamics. This model does not require additional differential equations to describe the regenerator dynamics. Instead, the regenerator is described as an endoreversible engine that instantaneously balances fluxes of particles, energy, and entropy. Hence, this model allows for a very-low-effort Stirling engine description including finite heat transfer between the working gas and the external heat baths, finite mass transfer through the regenerator, as well as friction of the pistons. In the endoreversible approach, physical systems are described as networks of reversible subsystems, which exchange extensities (entropy, volume, particles, ...) and energy through reversible or irreversible interactions. Hence, in endoreversible modeling all irreversibilities are typically captured by the interactions, whereas the subsystems can be described with the convenient tools of equilibrium thermodynamics. The most basic kinds of subsystems are (in-)finite reservoirs that contain extensities on the one hand, and engines, which represent ideal energy conversion devices, on the other. A finite reservoir i is characterized by a state function E(X^α_i) that determines its energy content depending on the amount of the extensities X^α_i contained in the reservoir. Here, the superscript α specifies the extensity, e.g., entropy S_i = X^S_i, volume V_i = X^V_i, and particle number n_i = X^n_i. The corresponding intensity follows as Y^α_i = ∂E/∂X^α_i, where Y^S_i = T_i is the temperature, Y^V_i = p_i is the pressure, and Y^n_i = µ_i is the chemical potential of the reservoir. When it comes to specifying the state of the finite reservoir, one has some freedom in the choice of state variables. One way is to specify all extensities. The reservoir can have one or several contact points r, to each of which one interaction is attached. Through these interactions, the reservoir can take up or release extensities. According to the Gibbs relation, every extensity flux J^α_{i,r} that enters (J^α_{i,r} > 0) or leaves (J^α_{i,r} < 0) the reservoir at r carries an energy flux I^α_{i,r} = Y^α_i J^α_{i,r}. If the interaction involves several extensity fluxes (see multi-extensity fluxes [39]), then the overall energy flux at r is I_{i,r} = Σ_α Y^α_i J^α_{i,r}. The dynamics of the finite reservoir can then for example be defined by a number of ordinary differential equations (ODEs), each describing the balance equation for one of the respective extensities: Ẋ^α_i = Σ_r J^α_{i,r}. In contrast, if the reservoir is considered as infinite, it is characterized by prescribing the full set of intensities Y^α_i, which do not change regardless of the size of the extensity fluxes J^α_{i,r}. As stated above, engines represent energy conversion devices. They can either operate cyclically or continuously, where we solely consider the latter here. They do not contain extensities and energy but only pass them on.
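Before the engine and interaction bookkeeping is spelled out, the reservoir side of this formalism can be illustrated with a minimal numerical sketch. The Python snippet below is an illustration only (not code from [11] or [12]): the class name, the intensity model, and all numerical values are assumptions chosen for demonstration, while the two computations it performs, the Gibbs-relation energy flux I = Σ_α Y^α J^α at a contact point and the extensity balance Ẋ^α = Σ_r J^α_r, follow the relations stated above.

```python
class FiniteReservoir:
    """Finite endoreversible reservoir: stores extensities X^alpha and derives
    intensities Y^alpha from a user-supplied model (illustrative sketch)."""

    def __init__(self, extensities, intensity_model):
        self.X = dict(extensities)               # e.g. {"S": ..., "V": ..., "n": ...}
        self.intensity_model = intensity_model   # maps X -> {"S": T, "V": p, "n": mu}

    def intensities(self):
        return self.intensity_model(self.X)

    def energy_flux(self, fluxes):
        """Gibbs relation at one contact point: I = sum_alpha Y^alpha * J^alpha."""
        Y = self.intensities()
        return sum(Y[a] * J for a, J in fluxes.items())

    def balance(self, flux_list):
        """Extensity balance dX^alpha/dt = sum over contact points r of J^alpha_r."""
        return {a: sum(f.get(a, 0.0) for f in flux_list) for a in self.X}


# Usage sketch with assumed numbers: one contact point receiving an entropy flux.
res = FiniteReservoir({"S": 10.0, "V": 0.01, "n": 1.0},
                      lambda X: {"S": 350.0, "V": 1.0e5, "n": 0.0})  # fixed T, p, mu
J_contact = {"S": 0.02, "V": 0.0, "n": 0.0}    # 0.02 W/K of entropy entering at r
print(res.energy_flux(J_contact))               # I = T * J_S = 7.0 W
print(res.balance([J_contact]))                 # extensity balance dX/dt
```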
Correspondingly, engines are characterized by a set of balance equations for all extensities and energy: 0 = Σ_r J^α_{i,r} for each extensity α and 0 = Σ_r I_{i,r}. While the intensity values are equal at all contact points of a reservoir, with engines the intensity values generally differ from contact point to contact point so that all these balance equations are fulfilled. Interactions are the modeling objects in Endoreversible Thermodynamics which generally capture all irreversibilities. They are characterized by balance equations for the extensities and energy, as well as by transfer laws. We will here only consider bilateral interactions that connect two subsystems. A bilateral interaction is reversible if and only if the intensity values are equal at the two connected contact points. This is achievable with infinitely fast transfer laws, making sure that small intensity differences are balanced instantaneously. If the transfer laws are finite, the system might evolve in a way such that the intensity values at the bilateral interaction's two contact points deviate. Then the interaction becomes irreversible: now energy and all extensities but entropy are conserved in it. Any proper definition of transfer laws must assure that entropy can only be produced and never annihilated. State Dynamics The state dynamics of the endoreversible Stirling engine model is characterized by six coupled ODEs, Equations (5)-(7), describing the dynamics of the working space volumes, particle numbers, and entropies, where 1 refers to the hot working space, 2 to the cold working space, H to the hot heat bath, and C to the cold heat bath, as indicated in Figure 1. The heat bath temperatures are defined as T_H = 400 K and T_C = 300 K. The volume dynamics from Equation (5) is defined in terms of prescribed periodic control functions u_i(t), i ∈ {1, 2}. Here, ν_V is a large number (where we will use ν_V = 500) and τ is the cycle time. This volume dynamics is similar to an over-damped mass-spring system with moving spring support. If ν_V (the "spring constant") is chosen large, then V_i(t) will approach u_i(t) for t → ∞. This indirect way of controlling the volume through u_i(t) allows the dynamics to approach the limit cycle independently of the chosen initial value for the volume. Figure 1. On the left side the hot cylinder 1 is located with its interactions to the hot heat bath H and a transmission unit T1. On the right side the cold cylinder 2 is displayed with corresponding interactions to the cold heat bath C and a transmission unit T2. Both are connected by the regenerator R in the middle, which interacts with an entropy and a work reservoir, SR and WR, respectively. Further reservoirs are the work reservoirs WT and WF collecting the net power and friction losses, respectively, from the energy-converting engines T1 and T2, as well as volume reservoirs E representing the environment [12]. The particle dynamics from Equation (6) essentially features two terms. The first term, α(p_j − p_i), describes a pressure-driven particle flux between the two working spaces i and j, where α is a particle transfer coefficient. The second term was added in the current model to make sure that the particle dynamics features a limit cycle, which is essential for the application of the indirect optimization algorithm used in this study and, as above, gives freedom in choosing the initial values for the particle numbers. Here, n_tot and ν_n are fixed parameters, where we will use n_tot = 1 mol and ν_n = 20. For t → ∞, n_1 + n_2 → n_tot and then this term becomes an additive zero.
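To illustrate how a controlled state dynamics of this kind is driven toward its limit cycle, the following sketch integrates a relaxation-type volume law with a τ-periodic control using SciPy. It is a simplified stand-in, not Equations (5)-(7) of the model: only the volume relaxation is spelled out, its exact prefactor (ν_V/τ) is an assumption consistent with the over-damped spring analogy above, and the particle and entropy equations are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

TAU  = 1.0     # cycle time tau (reference value)
NU_V = 500.0   # dimensionless "spring constant" of the volume relaxation

def u_harmonic(t):
    """tau-periodic volume controls u_1, u_2 (STD-like, 90 deg phase shift), in litres."""
    y = 2.0 * np.pi * t / TAU
    return np.array([6.0 + 5.0 * np.sin(y), 6.0 + 5.0 * np.sin(y - np.pi / 2.0)])

def volume_rhs(t, V):
    """Assumed relaxation law dV_i/dt = nu_V * (u_i(t) - V_i) / tau.
    (Illustrative form of the over-damped 'moving spring support' dynamics;
    the particle and entropy equations of the full model are not included.)"""
    return NU_V * (u_harmonic(t) - V) / TAU

# Integrate over several periods; because nu_V is large, V_i(t) quickly locks
# onto u_i(t), i.e. the dynamics approaches its limit cycle regardless of V(0).
sol = solve_ivp(volume_rhs, (0.0, 5.0 * TAU), y0=[1.0, 1.0], max_step=1e-3)
print(sol.y[:, -1], u_harmonic(sol.t[-1]))   # final volumes vs. controls
```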
The entropy dynamics from Equation (7) is the very same as in [12]. It also features two terms, the first one describing heat transfer from or to the corresponding external heat bath with the heat transfer coefficient κ, and the second one describing reversible entropy exchange with the ideal endoreversible regenerator. The intensities in the two working spaces i ∈ {1, 2} can be calculated from the ideal gas relations given by Equations (8)-(10) [12], where we use ĉ_V = 5/2 for the dimensionless specific heat capacity, R is the ideal gas constant, and S_0/n_0 is the working gas's molar entropy at reference conditions T_0 and p_0. Corresponding data can for example be found in [45]. In fact, apart from constant shifts in the entropies, the system dynamics is not influenced by the definition of this reference entropy. Hence we refrain from giving values here. Note that even though Equations (5)-(10) describe the complete system dynamics, additional care has to be taken regarding the evaluation of output quantities like mechanical power. This is due to the use of the ideal endoreversible regenerator model and will be addressed next. Performance Measures As displayed in Figure 1, the ideal endoreversible regenerator is represented by the engine R. This engine has interactions with the working spaces 1 and 2. Moreover, in order to allow for the instantaneous balancing of energy and entropy fluxes that enter or exit R at the corresponding contact points, R is given additional interactions to a heat bath SR and a work reservoir WR. The heat bath SR is chosen to have the cold heat bath's temperature T_C. The power P_R exchanged with the work reservoir WR and the entropy flux into SR then follow from the energy and entropy balances of R (Equations (11) and (12)). In this rather simple, ideal endoreversible regenerator model, the integral surplus or deficit of the energy, ∫_0^τ P_R dt, is then assumed to enter the overall work output W_out defined in Equation (13), where β is the friction coefficient of the pistons. Correspondingly, the average net power output of the Stirling engine is P_out = W_out/τ. The heat Q_in taken from the hot heat bath during one cycle is given by Equation (14), and the efficiency of the Stirling engine results as η = W_out/Q_in. Note that detailed analyses of Stirling engines, as necessary during design development, require more detailed regenerator models than the ideal endoreversible regenerator model used here. Nevertheless, this model is considered useful when the behavior of regenerative systems is studied in a rather general manner, as is the case here, and if additionally very low numerical effort is a key requirement. Optimization In this study we revisit a previous work [12] where we used a parametric optimization method to optimize the piston motion of the Stirling engine described above. This parametric optimization leads to a power-optimal piston motion that we will refer to as the OS motion. In the following we will briefly introduce this parametric optimization method. Afterwards, we will describe the optimization method based on Optimal Control Theory, which we use in the present study to obtain a more general optimization result labeled the COC motion. Parametric Optimization (OS Motion) In the study mentioned above [12], the piston motion of the alpha-Stirling engine was parametrized by the shape function f_1(y, σ) = (sin(2πy + σ sin(4πy)) + 1)/2. This motion depends on the two parameters σ and δ, which can be different for the two cylinders of the engine, hence we have σ_i, δ_i with i = 1, 2. The state dynamics, and consequently the power output, is thus influenced by the parameters σ_i, δ_i.
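The parameterized class of motions is simple enough to write down directly; the helper below implements f_1 from the expression above and turns it into a volume trajectory between V_min and V_max. Note that the text does not spell out how the phase parameter δ enters, so the phase-shift form used here (δ acting on the dimensionless time y = t/τ) is an illustrative assumption, as are the function names and default values.

```python
import numpy as np

def f1(y, sigma):
    """Shape function f_1(y, sigma) = (sin(2*pi*y + sigma*sin(4*pi*y)) + 1) / 2."""
    return 0.5 * (np.sin(2.0 * np.pi * y + sigma * np.sin(4.0 * np.pi * y)) + 1.0)

def piston_volume(t, tau, sigma, delta, v_min=1.0, v_max=11.0):
    """Volume trajectory built from f_1 in litres; the phase shift delta is assumed
    to act on the dimensionless time y = t/tau (illustrative choice only)."""
    return v_min + (v_max - v_min) * f1(t / tau - delta, sigma)

# Example: sigma deforms the harmonic motion (sigma = 0 recovers a pure sine),
# delta shifts the two cylinders against each other.
t = np.linspace(0.0, 1.0, 5)
print(piston_volume(t, tau=1.0, sigma=0.3, delta=0.0))
print(piston_volume(t, tau=1.0, sigma=0.3, delta=0.25))
```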
Then, using an iterative optimization algorithm, σ_i and δ_i are adapted to maximize the power output. The resulting piston motion we refer to as the OS motion. For more details see [12]. Note, however, that even though the OS motion might capture important features of the fully optimized motion, the possible shapes that can be realized with this parametric approach are quite limited. For example, the swept volume (V_max − V_min) was not part of the optimization in [12]. Thus, it must be expected that the optimal power output obtained with the OS motion can be outmatched by using a more general parametrization of V_i(t). In this study we will use Optimal Control Theory to optimize the piston motion, as described below. It does not require any kind of parametrization of the piston motion and will thus lead to a more general optimization result. Optimal Control Theory (COC Motion) The working volumes of the Stirling engine are here considered to result from τ-periodic control functions u_i(t), i ∈ {1, 2}, through the relaxation-type differential equations described in Section 2.2. Now, for prescribed cycle time τ, our goal is to choose u_i(t) in a way such that the work output W_out is maximized for the solution of the system of ODEs ẋ = f(x, u) in the time domain t ∈ [0, τ] under periodic boundary conditions x(0) = x(τ). Here, we use the following definitions of the state vector x := (V_1, V_2, n_1, n_2, S_1, S_2)^T and the control vector u := (u_1(t), u_2(t))^T, as well as the state dynamics f defined as a vector function of the latter according to Equations (5)-(10). This constitutes a cyclic optimal control problem. To set up the necessary conditions of optimality, we define the Hamilton function H := ζ + λ^T f, where λ is a costate vector and ζ is the path target function, i.e., the integrand of the work output. This is in accordance with the definition of the overall work output from Equation (13). However, here a penalty term (Equation (22)) was added in order to account for minimum and maximum volume constraints, where the prefactors are defined as ν_p0 = 1 W and ν_p1 = 500, the maximum admissible swept volume is ∆V = V_max − V_min, and the minimum and maximum volumes are V_min = 1 L and V_max = 11 L, respectively. The actual swept volume ∆V_COC that the optimized piston paths of the COC motion involve may also turn out to be smaller than ∆V, as will be seen later. Then the first order necessary conditions of optimality are [46,47] the state equation ẋ = ∂H/∂λ = f (Equation (24)), the costate equation λ̇ = −∂H/∂x (Equation (25)), and the stationarity condition ∂H/∂u = 0 (Equation (26)). Equations (24)-(26) need to be solved for periodic boundary conditions for both the state and the costate variables: x(0) = x(τ), λ(0) = λ(τ). This is here done with an indirect iterative gradient method described in [11], which exploits the existence of attractive and repulsive limit cycles in the state problem (Equation (24)) and the costate problem (Equation (25)) for obtaining the periodic solutions. Results In [12] the Stirling engine piston motion was power-optimized for a parameterized class of smooth piston motions. The optimized piston motion is referred to as OS, whereas the "standard harmonic piston motion" with V_min = 1 L, V_max = 11 L and 90° phase shift is referred to as STD and used as a benchmark. The alpha-Stirling engine model introduced in Section 2 is equivalent to that used in [12]. However, in the current study additional care was taken to make sure that the system dynamics features a limit cycle in order to allow for the application of the abovementioned optimization method.
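Before discussing the reference-case results, it may help to make the structure of the indirect iterative gradient method explicit. The outline below is a structural sketch, not the implementation used in [11] or [12]: all callables (f, dH_dx, dH_du) are placeholders to be supplied by the user, explicit Euler steps stand in for whatever integrator is actually used, and the loop counts and step size are arbitrary assumptions. It only illustrates the three ingredients named above: forward integration of the state problem over several periods toward its attractive limit cycle, repeated backward integration of the costate equation λ̇ = −∂H/∂x, and a gradient step on the control using ∂H/∂u.

```python
import numpy as np

def gradient_iteration(x0, u, f, dH_dx, dH_du, tau, n_grid=1000,
                       n_relax=20, step=1e-3, n_iter=100):
    """Schematic indirect gradient method for a cyclic optimal control problem.

    f(x, u_k)        : state dynamics right-hand side
    dH_dx(x, u_k, l) : partial derivative of the Hamiltonian w.r.t. the state
    dH_du(x, u_k, l) : partial derivative of the Hamiltonian w.r.t. the control
    u                : control samples on a uniform grid over one period tau
    """
    u = np.asarray(u, dtype=float)
    dt = tau / n_grid
    for _ in range(n_iter):
        # 1) forward integration over several periods -> attractive limit cycle
        x = np.array(x0, dtype=float)
        for _ in range(n_relax):
            traj = []
            for k in range(n_grid):
                traj.append(x.copy())
                x = x + dt * f(x, u[k])              # explicit Euler for brevity
        traj = np.array(traj)

        # 2) backward integration of the costate equation lambda' = -dH/dx,
        #    repeated over periods so that it settles on its periodic solution
        lam = np.zeros_like(x)
        for _ in range(n_relax):
            costate = [None] * n_grid
            for k in reversed(range(n_grid)):
                costate[k] = lam.copy()
                lam = lam + dt * dH_dx(traj[k], u[k], lam)   # step backward in time
        costate = np.array(costate)

        # 3) gradient (ascent) step on the control: u <- u + step * dH/du
        grad = np.array([dH_du(traj[k], u[k], costate[k]) for k in range(n_grid)])
        u = u + step * grad
    return u
```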
The following set of parameter values we will refer to as "reference values" in this study: β 0 = 10 5 Js/m 6 , α 0 = 100 mol/(s bar), κ 0 = 10 5 W/K, τ 0 = 1 s. Here, the values α 0 and κ 0 were chosen so that the associated pressures and temperatures equilibrate very fast compared to the cycle time τ 0 . Correspondingly, the associated irreversibilities are relatively small. Hence, apart from the friction irreversibility the model can be considered near-ideal. The results obtained for these values are called "reference case". The piston motions obtained with the cyclic optimal control algorithm described in Section 3.2 will be referred to as the COC motion. In the following, we will compare it to the STD motion as well as the OS motion from [12]. In Figure 2 the volumes of the working spaces are plotted against time for the reference case. The STD motion is represented by dotted lines, the OS motion by dashed lines, and the COC motion by solid lines. The red lines represent the hot working space and the blue lines the cold working space. It can be seen that there are considerable deviations between all three types of piston motions. However, the OS and the COC motion do have in common that the pistons tend to spend more time close to their bottom and top dead centers than with the STD motion. Consequently, the average piston velocities are increased compared to the STD motion. The highest piston velocity occurs for the OS motion. For the COC motion the maximum piston velocity is lower but the velocity is almost constant during four clearly distinguishable strokes. This leads to large accelerations at the transitions between those strokes, especially at t/τ ≈ 5/8. This is connected to the fact that the path target function used here does contain friction as a function of the velocity (viaV i ), whereas it does not contain the acceleration (viaV i ) and thus there is no penalty on fast accelerations. We will come back to this sharp transition later. Figure 2. Resulting cylinder volumes V 1 and V 2 against relative time t/τ for the STD, OS, and COC motions with reference case parameters. The overall gas volume V 1+2 := V 1 + V 2 is depicted in Figure 3 for all three types of piston motions by black curves. The cold working volume is again shown by blue lines. As indicated above, apart from the moderate irreversibility due to friction, the reference values lead to a near-ideal thermodynamic model. It is interesting that for the considered friction law and volume constraints corresponding to an alpha-type Stirling configuration, the COC motion does not contain isochoric strokes-in contrast to the ideal Stirling cycle. Instead, the overall gas volume V 1+2 changes with approximately constant absolute rate during almost the whole cycle, in order to minimize frictional losses. This may be different for beta-or gamma-Stirling configurations. Figure 3. Overall gas volume V 1+2 and cold cylinder volume V 2 against relative time t/τ for the STD, OS, and COC motions with reference case parameters. In Figure 4 the gas pressures in the working spaces are plotted against time for the reference case. The STD motion is represented by dotted lines, the OS motion by dashed lines, and the COC motion by solid lines. The red lines represent the hot working space and the blue lines the cold working space. Note that in this figure the blue lines lie on top of the respective red lines since there are only very small pressure differences between the two working spaces. 
This is due to the choice of a relatively large parameter value for the mass transfer coefficient α = α 0 . The optimizations raise the overall difference between the minimum and maximum cycle pressures. While the minimum pressure values approximately remain the same, the OS and COC motions lead to much higher maximum pressures than the STD motion. For the COC motion the pressure curves are much more peak-shaped than for the OS motion and their maximum values are about 18% higher than that of the OS motion. With the COC motion the pressure peak occurs at the minimum of the overall gas volume V 1+2 at t/τ ≈ 5/8, which is much lower for COC than for OS, as can be seen in Figure 3. Obviously, the shape and maximum value of that peak strongly depend on the volume constraints. In the following we will discuss the influence of friction on the optimal piston motion and the resulting performance measures. As can be seen in Equations (13) and (22), friction is in this work modeled depending on the piston velocity in terms of βV 2 i with the friction coefficient β. We repeated the optimization for varying β. Here, we chose a range with β ≥ 0.5 × 10 5 Js/m 6 (for the computation of the COC motions) since for very small friction the tendency to perform more than one reciprocating piston movement in the prescribed time period grows. Correspondingly, by choosing β large enough as to prevent additional reciprocating movements, the results for the COC motion remain comparable to those for the STD and OS motions. In Figure 5 the volumes of the working spaces of the COC motion are plotted against time for varying friction coefficient β. The hot working space is represented by solid lines, the cold working space by dotted lines. Obviously, for the increasing friction coefficient the piston's dwell times at the bottom dead center (maximum volume) reduce and above β ≈ 2 × 10 5 Js/m 6 , the curves eventually detach from the maximum volume bounds. At the highest considered value of β = 8 × 10 5 Js/m 6 , the swept volumes have reduced to about one half of the available volume. In contrast, the piston's dwell times at the top dead center (minimum volume) are only slightly reduced for increasing β. Remarkably, the sharp edge at t/τ ≈ 5/8 is not affected by increasing friction. Only the absolute values of the curvature around the volume maximums (bottom dead centers) become smaller and smaller. This behavior can be related to the path target function from Equation (22): The volumes are kept minimal as to decrease the effective dead volume and increase the pressure in the engine and thus the indicated work, which leads to the sharp edge at t/τ ≈ 5/8. On the other hand as friction becomes more dominant the quadratic average of the piston speeds (translating to friction) are reduced while trying to achieve swept volumes (indicated work) as high as possible. This results in the rounded volume maxima. From the temporal evolution of the state variables, the work output per cycle W out is determined by integration, as defined in Equation (13). The cycle-averaged power output follows as P out = W out /τ. In Figure 6 the average power output P out of the Stirling engine is plotted against the friction coefficient β for the STD, OS, and COC piston motions. For low friction coefficient the COC motion leads to about 10% power gain relative to the OS motion. For higher values of β the power gain due to the COC motion becomes much larger than that of the OS motion. 
This is connected to different effects:
• STD motion: As β is increased for fixed piston motion, frictional losses increase linearly with β. Therefore, the average power output P_out decreases linearly with β.
• OS motion: As β changes, the piston motion adapts. Therefore, the average power output P_out decreases non-linearly with β. However, since the actual swept volume is fixed to the maximum admissible swept volume ∆V, the net power output P_out decays at least with a rate of −2(2∆V/τ)^2. This follows from Equation (13) for pistons moving according to a triangle wave with V̇_i = ±2∆V/τ.
• COC motion: As β changes, the piston motion adapts not only in its shape, but also in its actual swept volume ∆V_COC. This can be seen in Figure 5. Starting from β ≈ 2 × 10^5 Js/m^6, the actual swept volume ∆V_COC continuously decreases as β is increased. Therefore, the lower bound for the rate of decay of P_out is now only −2(2∆V_COC/τ)^2, which reduces quadratically with ∆V_COC. Correspondingly, it can be seen in Figure 6 that the decay of P_out with increasing β is much slower for the COC motion.
To make the latter point clearer, we plot the optimization result of the COC motion already shown in Figure 6 (black line) against both the friction coefficient β and the actual swept volume (of the cold piston) in Figure 7. The COC motion is there represented by the thick black line. The color surface apart from that line was obtained by varying β while leaving the swept volume and the shape of the piston motion fixed. That is, in Figure 7 the swept volume is a proxy for the shape of the piston motion according to Figure 5. It can be seen that, starting in the upper corner, the net power output is reduced both by increasing the friction coefficient and by decreasing the swept volume. If the swept volume is held constant as β is increased, the decay of net power is very strong, much as it is for the STD and OS motions. The COC motion (thick black line), however, avoids this strong decrease by reducing the swept volume for larger β. The influence of the friction coefficient β on the Stirling engine's efficiency is shown in Figure 8. For small β the efficiency approaches a value of 0.25 for all piston motions, which corresponds to the Carnot efficiency. This is because the reference values κ_0 and α_0 of the heat and mass transfer coefficients were chosen relatively large, so that the corresponding irreversibilities are negligible.
Figure 7. Average power output P_out of the COC motion against the friction coefficient β and the actual swept volume of the cold piston (cf. Figure 5). The color surface was obtained by varying β while leaving the swept volume and the shape of the piston motion fixed; the swept volume is thus a proxy for the shape of the piston motion from Figure 5. The plot range is restricted to values above zero watts.
The optimizations of the piston motion were performed with the power output as the target function in both cases, the OS and the COC motion. Therefore, the efficiency resulting from the optimized motions will not necessarily be larger than that of the STD motion. In fact, for the OS motion it can be observed that its efficiency falls below that of the STD motion for β below about 2.1 × 10^5 Js/m^6. For the COC motion this does not occur in the considered range β ≥ 0.5 × 10^5 Js/m^6. For large β, both the OS and the COC motion lead to increased efficiency.
However, the efficiency increase due to the COC motion is much more significant, which is again partially related to the reduction in the swept volumes. Conclusions In this study we applied cyclic optimal control theory to power-optimize the piston motion of an alpha-Stirling engine with dominating mechanical friction irreversibility. The underlying endoreversible Stirling engine model additionally takes finite heat and mass transfer into account. However, we here used large transfer coefficients so that, apart from friction, a near-ideal thermodynamic model was obtained. The optimizations were repeated for the varying friction coefficient. The results for the optimized piston motions (COC) were compared to results of a previous study [12], where parameterized piston motions had been optimized (OS). Moreover, harmonic piston motions (STD) were resorted to as a benchmark. The optimized piston motions OS and COC lead to increased pressure variations during the cycle, which bring about significant gains in power. The COC motion obtained with cyclic optimal control theory in the current study, outmatches the OS motion from [12] regarding both power and efficiency. This especially holds true for high values of the friction coefficient. However, in the COC motion considerably higher accelerations of the pistons occur. This is connected to the used definition of frictional losses involving only the piston velocities, not accelerations. An interesting result is that for given engine parameters, there is a certain swept volume for which net power becomes optimal. Increasing swept volume beyond this value would result in reduction of net power. Moreover, it was shown that for the considered friction law and volume constraints corresponding to an alpha-Stirling configuration, the COC motion does not contain isochoric strokes, which is in contrast to the ideal Stirling cycle. For other Stirling configurations this might however be different. For more detailed analyses, as required in engineering, additional subsystems as well as transfer and friction laws describing a specific Stirling engine design can be included in the model. Moreover, in this case a more detailed irreversible regenerator model should be used. A low-order endoreversible regenerator model developed for this purpose is, for example, described, numerically validated and applied in [11].
7,168
2022-03-01T00:00:00.000
[ "Engineering", "Physics" ]
The Contribution of Word-, Sentence-, and Discourse-Level Abilities on Writing Performance: A 3-Year Longitudinal Study Writing is a foundational skill throughout school grades. This study analyzed the development of different levels of written language (word, sentence, and discourse) and explored the relationship between these levels and writing performance. About 95 Portuguese students from two cohorts—Grades 4–7 (n = 47) 6–9 (n = 48)—were asked to produce a descriptive text two times, with a 3-year interval. The produced texts were used to assess spelling, syntactic correctness and complexity, and descriptive discourse as well as text length and quality. The main results showed that there were improvements from Grades 4 to 7 and 6 to 9 in word- and sentence-level skills, along with increases in some dimensions of the descriptive discourse. Moreover, the older cohort performed better than the younger cohort in terms of spelling, syntactic complexity, and text quality, but not in terms of syntactic correctness, one dimension of the descriptive discourse, and text length. Regression analyses showed that writing performance was predicted by word and sentence levels in the younger cohort only, and by discourse-level variables in both cohorts. Overall, despite indicating a generalized growth in writing skills throughout schooling, this study also highlighted the areas that may need additional attention from teachers, mainly in terms of the descriptive features. INTRODUCTION Writing is a complex skill (Dockrell et al., 2014). It requires the production of legible letters following conventional spellings, to produce words that are organized into sentences and form a coherent written text, expressing the writer's ideas . Given the complexity of writing, research into the development of this ability throughout schooling is particularly relevant to understand the trajectories of learning and, based on these, to provide educational guidelines to foster writing skills. Much of extant research provides cross-sectional comparisons (Lerkkanen et al., 2004), and a few studies provide a longitudinal analysis of writing development Jagaiah et al., 2020), mainly with long gaps between the measurement points. These may help to better gauge the development of writing, given its long learning curve. This was the goal of the present study in which we examined the development of different levels of written language and their contribution to writing performance in two cohorts of Portuguese students (from Grades 4 to 7 and 6 to 9). These grades were chosen as they represent critical transitions for Portuguese students, from the first cycle of basic education (Grades 1-4) to the second one (Grades 5-6), and from the second cycle to the third cycle (Grades 7-9). Levels of language, an analytic tool, is used to understand the complexity of oral and written language (Berninger and Garvey, 1982) based on the analysis of words, sentences, and discourse. In the word level, spelling could be defined as the ability to retrieve, assemble, and select orthographic symbols (Abbott and Berninger, 1993). Large-span cross-sectional studies found that spelling errors (an indicator of spelling skill) decreased throughout schooling, for example, from Grades 2 to 5 (Alves and Limpo, 2015), Grades 2 to 6 (Llaurado and Dockrell, 2020;Magalhães et al., 2020), and Grades 1 to 9 (Bahr et al., 2012). However, a few studies examined the type of errors produced, which can inform about the spelling difficulties of students in each language. 
Portuguese is a romance language with a simple syllabic structure and orthographic complexities and inconsistencies, classified as an intermediate depth orthography (Seymour et al., 2003; see more details about the Portuguese spelling system in Supplementary Material, section 1). Among several error categorization systems (Treiman et al., 2019), phonological, orthographic, and morphological assessment of spelling (POMAS) seems to be particularly useful, given its specificity of analysis and theoretical support. Grounded on the triple word form theory (Bahr et al., 2012), POMAS codes misspellings into three categories: phonological, orthographic, and morphological. Findings from POMAS revealed that from Grades 1 to 9 there was a decrease in phonological errors coupled with an increase in morphological ones, with most errors across grades being orthographic (Bahr et al., 2012). Despite not assessing morphological errors, Magalhães et al. (2020) found a similar pattern in Portuguese children from Grades 2, 4, and 6. The authors also found that stress marks errors-largely underexplored in Portuguese studies-were present equally in the assessed grades. Writing a text also requires sentence-level abilities as children need to convert their ideas into sentences. Two key sentencelevel measures are syntactic complexity and correctness (Dockrell et al., 2014). One of the most frequent measures of syntactic complexity is clause length (Jagaiah et al., 2020), which is the mean number of words per clause (Berman and Slobin, 1994). Syntactic correctness can be measured through the correctness of word sequences, defined as two contiguous syntactically and semantically acceptable writing units (Videen et al., 1982). A systematic review on Grades 1-12 concluded that syntactic complexity increased throughout schooling (Jagaiah et al., 2020). Similar evidence was found for syntactic correctness. In Grades 3-5, Dockrell et al. (2014) found that younger students produced significantly less correct word sequences than older ones. Likewise, similar findings were found by Malecki and Jewell (2003) showed that several indicators of syntactic correctness consistently increased from early elementary (Grades 1-2) to elementary grades (Grades 3-5), and from elementary to middle grades (Grades 6-8). By serving specific communicative goals and functions, writing a text requires discourse-related knowledge concerning the structural features of each genre (Berman and Nir-sagiv, 2007;Graham et al., 2013;Dockrell et al., 2014). Like word and sentence levels, discourse-level abilities seem to increase throughout schooling, with students progressively producing texts with more and more genre-specific features. Tolchinsky (2019) found that descriptiveness (i.e., degree to which descriptive texts include the representative features of this genre) increased from Grades 1 to 4. Berman and Nir-sagiv (2007) found similar increases across grades in narrative and expository writing in older samples (Grades 4,7,and 11 and University). In addition to improvements in word, sentence, and discourse levels throughout schooling, research has shown that mastering these levels is important for writing performance, assessed in terms of the quality and amount of writing (Berninger, 2012). Both in primary and middle grades, a few studies found that better writing performance is predicted by (a) higher spelling skills [Grades 1-3 in Graham et al. (1997); Grades 3-6 in Abbott et al. (2010); and Grades 7-8 in Limpo et al. 
(2017)], (b) greater sentence-level abilities [Grades 2 and 3 in Arfé et al. (2016); Grades 3-5 and 5-7 in Beers and Nagy (2011); and Grades 7-8 in Limpo et al. (2017)]; and (c) more genre-related knowledge, including descriptive texts Tolchinsky, 2019). Providing stronger evidence on these links, meta-analyses showed that interventions promoting the writing levels of students improved the overall writing performance [Grades 1-6 in Graham et al. (2012) and Grades 4-12 in Graham and Perin (2007)]. Present Study As most findings surveyed above came from cross-sectional studies, it seems crucial to complement them with longitudinal findings to bring new inputs about writing development. To that end, the following research questions were addressed in a two-cohort sample of Portuguese students: Which are the developmental trajectories of word, sentence, and discourse levels of written language? Moreover, to which degree do these levels predict writing performance? Word, sentence, and discourse levels were measured through spelling, syntactic complexity/correctness, and descriptiveness, whereas writing performance was measured via text length and writing quality. The younger and older cohort were assessed at Grades 4 and 7 and 6 and 9, respectively. Based on the previous research, we expected a skill increase in all levels and an association between these and writing performance. We also hypothesized that the older cohort would show better writing skills than the younger cohort. Participants and Design Participants were 101 students from Grades 4 to 9 and enrolled in a cluster of public schools located in urban middle-class neighborhoods from the Center of Portugal. Among these, six children were dropped from the analyses based on the following criteria: four had special education needs and two were identified as extreme outliers in one of the variables under analysis (viz., morphological misspellings per 100 words, which lied more than 3.0 times the interquartile range above the third quartile). All analyses were then based on the data from 95 students and were divided into two cohorts that were assessed twice (T1-T2), with a 3-year gap. The younger cohort was composed of 47 students in Grade 4 (51% girls) with an average age of 9.28 years at T1 (SD = 0.45). The older cohort included 48 students in Grade 6 (62% girls) with an average age of 11.35 years at T1 (SD = 0.53). Procedure After the formal agreement from the principal of the school cluster, permission was given to contact the teachers of Grade 4 classes (a total of five) and Grade 6 classes (a total of four). After being explained about the goals and procedures of the study, including the possibility to withdraw at any moment, all teachers and students agreed to participate. In group, students were asked to produce a descriptive text in response to the prompt "Please describe your school, " which has been successfully used in previous research (e.g., Berninger et al., 2009;Dockrell et al., 2014). The full administration procedure lasted for 50 min. This is the typical duration of writing tasks in the participating schools, also used in prior studies (e.g., Llaurado and Dockrell, 2020). The exact same procedure was followed for both cohorts and testing moments. To potentiate the engagement of students, they were told that the best texts would be posted at the school webpage. Measures Further details on the measures described below can be found in Supplementary Material, sections 2 and 3. 
Word-Level Measures Based on POMAS (Bahr et al., 2012), misspellings were counted separately by category: phonological errors, orthographic errors, morphological errors, stress marks, and illegible errors. However, the illegible errors were ignored as they were negligible (below 1%). This measure was re-scored by a second judge in the written products of 20% of the pupils at both T1 and T2. Reliability was good for all misspelling types, as indicated by the intraclass correlation coefficient (ICC) for single measures (>0.80). Sentence-Level Measures The sentence-level measures included the clause length and percentage of incorrect word sequences. Clause length was computed by averaging the number of words per clause, employing the computerized language analysis (CLAN) software (MacWhinney, 2000). The percentage of incorrect word sequences was calculated by examining the total number of incorrect sequences divided by the total number of sequences. Based on the scoring of 20% of the measures by a second judge, we concluded that reliability was high (ICC for individual measures >0.94). Discourse-Level Measures To measure descriptiveness (i.e., the presence of features typical of the descriptive text), we followed the taxonomy proposed by Adam (2001), including anchoring, aspectualization, relation, and subthematization categories. This taxonomy was used in previous studies, which relied on a dichotomous scale to indicate the presence or absence of the category [Grades 4-6 in Moura et al. (2015); Grade 3 in Pereira and Gonçalves (2017)]. Because our sample was older and we were concerned that this dichotomic coding would lead to ceiling effects, we added a new level indicating the presence and elaboration of information in each category. Thus, we used a three-point scale from 0 to 2, with the highest scores indicating a higher degree of discourse elaboration [for a similar coding scheme in argumentative texts, see Limpo and Alves (2013)]. Two independent judges scored these dimensions across all texts. Disagreements were solved through a discussion. Writing Performance Two measures were used: text length and text quality. Text length was measured through the total number of words provided by CLAN. Text quality was assessed using a holistic scale ranging from 1 (low quality) to 7 (high quality), on creativity, coherence, syntax, and vocabulary (Alves et al., 2016). To avoid transcription biases on quality assessment, texts were previously typed, and misspellings were corrected (Berninger and Swanson, 1994). Two independent judges rated the text quality of all texts produced. Inter-reliability was high, as measured by the ICC for average measures, which was 0.91 at T1 and T2. RESULTS Excepting one measure, we confirmed that our study revealed no distributional problems, as the absolute values of these indexes of skewness and kurtosis did not exceed 3.0 and 10.0, respectively (Kline, 2005). We found a ceiling effect in the descriptive dimension of aspectualization in the younger cohort at T2. Thus, this variable was not included in the analyses. Descriptive statistics for variables are presented on Table 1. Cohort and Time Differences at the Word Level To examine whether the type of misspellings varied across cohort and time, we conducted a 2 (cohort [younger, older] . These were decomposed with simple-effect analyses followed up by pairwise comparisons with Bonferroni adjustments, for misspellings type × cohort and misspellings type × time. 
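As a rough illustration of this analysis strategy, the sketch below runs one 2 (cohort) × 2 (time) mixed ANOVA per misspelling category with the pingouin package. The data frame layout and column names are assumptions for illustration, not the authors' actual files, and the single omnibus ANOVA with misspelling type as an additional factor is simplified here into one analysis per category.

```python
import pandas as pd
import pingouin as pg

# Long-format data assumed: one row per student and time point, with columns
# 'id', 'cohort' (younger/older), 'time' (T1/T2) and error counts per 100 words.
df = pd.read_csv("misspellings_long.csv")

for error_type in ["phonological", "orthographic", "morphological", "stress_marks"]:
    aov = pg.mixed_anova(data=df, dv=error_type, within="time",
                         subject="id", between="cohort")
    print(error_type)
    print(aov[["Source", "F", "p-unc", "np2"]])
    # Significant interactions would then be decomposed with simple-effect
    # analyses and Bonferroni-adjusted pairwise comparisons, as described above.
```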
Overall, the results showed that stress mark errors were the most frequent and that, except morphological errors, the younger cohort produced more misspellings of all types than the older one (for complete results, see Supplementary Material, section 4). Cohort and Time Differences in Writing Performance To examine whether text length and text quality varied across cohort and time, we conducted two 2 (cohort [younger, older]) × 2 (time [T1, T2]) ANOVAs. For text length, the results revealed a main effect of time, [F (1,93) = 30.63, p < 0.001, η 2 p = 0.25], with pupils writing longer texts at T2 than T1. For text quality, the results showed a main effect of cohort, [F (1,93) = 6.99, p = 0.01, η 2 p = 0.07], with the older cohort producing better texts than the younger one; and a main effect of time, [F (1,93) = 94.80, p < 0.001, η 2 p = 0.51], with pupils writing better texts at T2 than T1. Table 2 presents the bivariate correlations between all variables at T1 and T2 for both cohorts. In general, we found that (a) for the younger cohort, writing performance was associated with word and discourse variables at T1, but only with discourse variables at T2 and (b) for the older cohort, text length was associated with sentence-level variables and text quality with spelling-level, sentence-level, and discourse-level variables at T1, whereas at T2 only discourse level was related to writing performance. Contribution of Written Language Levels to Writing Performance To test the contribution of the three levels of language to writing performance, we conducted a set of stepwise regression analyses to predict text length and quality at T1 and T2, separately by cohort (final model estimates are presented on Tables 3, 4, for text length and text quality, respectively). For predicting text length and quality at T1, we progressively introduced word-, Frontiers in Psychology | www.frontiersin.org sentence-, and discourse-level variables at T1 step-by-step. For predicting text length and quality at T2, we introduced text length and quality at T1 as a first step, followed by a step-by-step inclusion of writing levels at T2. Predicting T1 Text Length For the younger cohort, Steps 1 and 2 did not reach statistical significance, but the inclusion of discourse-level variables increased the amount of variance explained in text length for both the younger, R change = 0.19, [F change(3,37) = 4.09, p = 0.01]. The final model with all predictors explained 43% of the variance in text length at T1, [F (9,37) = 3.04, p = 0.01]. Significant and independent predictors were clause length (b = 0.28) and the descriptive dimensions of anchoring (b = 0.28) and subthematization (b = 0.30). The first steps and the final model were, however, not significant for the older cohort, R 2 = 0.26, [F (9,38) = 1.52, p = 0.18]. Predicting T2 Text Length For the younger cohort, Step 1 with text length at T1 and Step 2 with word-level predictors did not reach statistical significance. However, the inclusion of sentence-level predictors at Step 3 Predicting T1 Text Quality For the younger cohort, Step 1 with word-level predictors made a significant contribution to text quality, R 2 = 0.23, [F (4,42) = 3.14, p = 0.02]. The inclusion of sentence-level predictors did not increase the amount of variance explained, but the inclusion of discourse-level predictors did, R 2 change = 0.22, [F change(3,37) = Predicting T2 Text Quality Step 1 of the analyses, including T1 text quality, proved significant for both the younger, R 2 = 0. 
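The hierarchical entry of predictor blocks described above can be mimicked with ordinary least squares. The following sketch reports the R² change as each level of language is added; the file name and variable names are invented stand-ins for the measures described earlier, and plain block entry is used instead of the authors' stepwise selection.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical wide-format data, one row per student.
df = pd.read_csv("writing_T1_younger.csv")

blocks = [
    "phonological + orthographic + morphological + stress_marks",  # word level
    "clause_length + incorrect_word_sequences",                    # sentence level
    "anchoring + relation + subthematization",                     # discourse level
]

terms, previous_r2 = [], 0.0
for block in blocks:
    terms.append(block)
    fit = smf.ols("text_quality ~ " + " + ".join(terms), data=df).fit()
    print(f"R2 = {fit.rsquared:.2f} (change = {fit.rsquared - previous_r2:.2f}), "
          f"F = {fit.fvalue:.2f}, p = {fit.f_pvalue:.3f}")
    previous_r2 = fit.rsquared
```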
DISCUSSION The first goal of the present study was to trace the developmental path of writing at word, sentence, and discourse levels. The overall results indicated that older students showed better performance at word and sentence levels than younger students, but mixed findings were found for the discourse level. In line with prior studies (Alves and Limpo, 2015;Llaurado and Dockrell, 2020;Magalhães et al., 2020), we found a general decrease in misspellings from T1 to T2 and more misspellings in the younger than the older cohort. This finding is not surprising as students in higher grades had more years of formal instruction and therefore had more spelling knowledge and writing experience. Moreover, it has been suggested that in free writing older students may be better at selecting words they know how to spell correctly (Graham and Santangelo, 2014). A misspelling analysis revealed four noteworthy findings. First, the pattern of older students producing less misspellings than younger ones was not observed for morphological errors. This finding aligns well with the proposal of Bahr et al. (2012), who suggested that more morphological errors may occur in older students due to the use of more complex vocabulary. This may require advanced morphological (derivational) knowledge that takes time to growth (Nagy et al., 2006;Berninger et al., 2010). In the future, POMAS could be used to examine the development of misspellings beyond Grade 9. Second, phonological misspellings decreased from T1 to T2 in both cohorts, being the least frequent type of misspelling in the older cohort. This is in line with previous research (Bahr et al., 2012) and suggests that sound-based spellings are learned in the earliest phases of spelling development (Treiman and Bourassa, 2000). Third, stress mark errors were the most frequent misspelling in both timepoints and cohorts, also corroborating past findings with younger Portuguese students (Magalhães et al., 2020). Stress mark errors indicate poor lexical knowledge of stress and difficulties in prosodic and orthographic mapping (Defior et al., 2012). From an applied standpoint, this means that spelling instruction is not being entirely successful in fostering this kind of knowledge. Finally, after stress mark errors, orthographic misspellings were the most predominant errors in Grades 4, 6, and 7. This is a common finding in the field (Bahr et al., 2012;Magalhães et al., 2020;Mesquita et al., 2020), suggesting that the Portuguese orthographic complexities and inconsistencies take several years to be mastered. Future research seems to be needed for developing evidence-based practices for improving orthographic knowledge beyond primary grades. In line with previous meta-analytic findings (Jagaiah et al., 2020), the sentence-level results showed longer clauses in the older than the younger cohort. Moreover, we found stronger time-related increases in the ability of students to craft complex sentences in the older cohort. These results indicate that improvements in syntactic complexity are more salient in older writers, as it has been proposed by scholars in the field (Hunt, 1970;Berninger et al., 2011). This may be related to teacher practices, with a sentence-related explicit instruction, including vocabulary and sentence expansion exercises only at later stages when students are more familiarized with complex genres (Connors, 2000). 
Our results showed a growth in syntactic correctness from Grades 4 to 7 and 6 to 9, with better performances in the older than the younger cohort. Similar findings have been reported in the field (Malecki and Jewell, 2003;Dockrell et al., 2014), indicating that the ability to craft syntactically correct sentences progresses throughout the primary and middle school. The measure of incorrect word sequences seems particularly sensitive to gauge that progress, including in older students (Espin et al., 2000;Weissenburger and Espin, 2005). The analyses examining cohort and grade differences in the discourse level revealed three main findings. First, the younger cohort performed better than the older one in terms of anchoring. This unexpected finding may be related to the Portuguese curricula, which, in the initial grades, emphasize the use of titles and introductory sentences to contextualize the theme to readers (Buescu et al., 2015). Though vivid in late primary and early middle grades, these recommendations may be lost over the years. Second, we observed the anticipated increase from T1 to T2 in the ability of students to connect the prompt with other topics, with older students performing better than younger ones. Tolchinsky (2019) already suggested that older students tend to provide elaborated descriptions of topic-related aspects, whereas younger ones usually present lists of attributes, with a few efforts to articulate content. This progressive increase in ideas elaboration over time is common to other genres (Berman and Nir-sagiv, 2007;Beers and Nagy, 2009) and may be linked to progressive mastery of writing of students. Third, relationship was the most absent descriptive feature (either alone in Grade 4, or together with anchoring in Grades 6, 7, and 9). Clearly, ability of the students to relate and compare concepts was poor in all grades assessed, which is alarming, given the importance of this feature. Descriptive texts should establish links between sub-topics through comparisons or metaphors, which allow readers to form a picture in their minds (Adam, 2001). More research is needed to understand which factors underlie this poor performance and which strategies may be used to foster it. The second goal of this study was to examine the contribution of word, sentence, and discourse levels to writing performance. However, considering the participants/predictors ratio, some caution is needed when interpreting these findings, which should be replicated in future studies with larger samples. Concerning word-level predictors, we found that more stress mark errors were associated with poorer texts at T1. This finding aligns with those of Magalhães et al. (2020) showing that in Grade 4, stress mark errors were reliable predictors of text quality. Regarding sentence-level predictors, we found the overall contribution of syntactic complexity to the amount and quality of students writing, confirming the importance of producing complex and good sentences to perform well in writing Limpo et al., 2017). Interestingly, the contribution of wordand sentence-level predictors to writing performance was only observed in the younger cohort, indicating that once students master a writing level, its role in writing performance diminishes (Graham, 2006). A striking finding involving both word-and sentence-level predictors at T2 in the younger cohort was that more stress mark errors, and more incorrect word sequences were associated with longer texts. 
Although this finding may be an artifact of the current study, it may also hint that by ignoring some aspects of writing, such as word stress and sentence correctness, students may be able to write more. Additional research is needed to replicate and explore these results. Discourse-level variables were the most salient predictors of writing performance in both cohorts, confirming that the amount and quality of the writing of students is heavily dependent on their ability to follow genre-specific structures (Graham, 2006). This means that a powerful way to increase writing performance in a given genre is to improve student's knowledge about its underlying. Previous meta-analyses on the best methods to develop writing are in line with this conclusion (Graham and Perin, 2007;Graham et al., 2012). It should, however, be noted that our findings showed that not all descriptive categories contributed equally to writing performance. The more relevant category seems to be subthematization, that is, the ability of students to elaborate on the content. Experimental research is needed to examine the degree to which teaching of each of the descriptive categories results in better writing performance. Implications for Applied Settings The findings of the current study provide relevant hints for practice. Despite the general growth in writing, there seems to be room for improvement. In younger students, teachers may need to provide additional instruction in terms of syntactic complexity, whereas in older students, they may need to focus on stress mark and orthographic knowledge as well as on descriptive features, mainly, anchoring. CONCLUSION In sum, this study showed the overall growth in word and sentence levels and an increase in the discourse level only for sub-thematization. Moreover, whereas word and sentence level predicted writing performance only in the younger cohort, the discourse level was a relevant predictor in both cohorts. By helping us to understand the long-term curve of writing development, these findings provide hints for researchers to develop evidence-based practices tailored to the writing needs of students. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS LP was responsible for initial study design and data collection. LC coordinated data coding and the manuscript preparation. TL analyzed the data. All authors contributed to the design and conceptualization of the study, literature review, discussion, and wrote and reviewed the manuscript and approved its final version. FUNDING This project was supported by National Funds through FCT-Fundação para a Ciência e Tecnologia, I. P., under the project UIDB/00914/2020. Universidade Portucalense, Infante D. Henrique funded this work by supporting its publication fees.
5,951.4
2021-08-03T00:00:00.000
[ "Education", "Linguistics" ]
A Novel True Random Number Generator Based on Mouse Movement and a One-Dimensional Chaotic Map We propose a novel true random number generator using mouse movement and a one-dimensional chaotic map. We utilize the x-coordinate of the mouse movement to be the length of an iteration segment of our TRNs and the y-coordinate to be the initial value of this iteration segment. When it iterates, we perturb the parameter with the real value produced by the TRNG itself. We find that the TRNG we propose overcomes several flaws of some former mouse-based TRNGs. Finally, we perform experiments and test the randomness of our algorithm with the NIST statistical test suite; the results illustrate that our TRNG is suitable to produce true random numbers (TRNs) on universal personal computers (PCs). Introduction Random number generators (RNGs) have been widely used in recent science and technology, such as simulation, sampling, numerical analysis, computer programming, decision making, recreation, cryptographic protocols, and cryptosystems [1-7]. RNGs have two basic types: true random number generators (TRNGs) and pseudorandom number generators (PRNGs). TRNGs produce true random numbers, which are nondeterministic. That means even if all the previous values have been obtained, the next value is unpredictable. PRNGs, on the contrary, are deterministic. The pseudorandom numbers (PRNs) are generated by deterministic programs and an input called a seed, and can be predicted from past time series. Because TRNGs have better security properties, in areas with higher security requirements, like generating cryptographic keys and initialization variables in cryptographic protocols, and so forth, TRNGs are irreplaceable. However, it is widely known that TRNGs need certain physical phenomena, like thermal noise [8, 9], atmospheric noise [10, 11], and coin tossing [1]. So if one wants to utilize the above methods to produce true random numbers (TRNs) on a personal computer (PC), which is the most widely used platform, additional equipment is required, which is impossible when funds are limited. So we turn to solving the problem with existing equipment. We consider the mouse a proper source for TRNs, because it is used universally and it produces a truly random source. People move the mouse in a truly random pattern, and others can hardly discern any regularity in it. Even if attackers obtain the pattern of past mouse movements, they cannot tell the following action the user will take. Thus, the mouse can serve as a TRNG source, and an increasing number of researchers are focusing on this field. Hu et al. [12] and Zhou et al. [13] discuss algorithms for generating TRNs based on mouse movement and chaotic cryptography, respectively. In [12], Hu et al. provide three approaches to postprocess the mouse movement pattern, and one way to transform the mouse movement pattern into digital numbers. Experimental results show that their methods pass statistical tests. However, in [13], Zhou et al.
also provide three approaches, two of which are different from the approaches in [12], while one is the same. Although two of the approaches in [12] and all of those in [13] can pass the randomness test, there are some drawbacks to their algorithms. Firstly, their methods require a proper number of movements in the mouse movement pattern; too many or too few points break the security of the algorithm. This is because their postprocessing is essentially a way of counting pixels of the image, so if the distribution of the image's pixels is not uniform, the produced TRNs are not uniform either. Thus, the produced RNs cannot pass statistical tests [13] and cannot be guaranteed to be random. That is to say, the properties of the TRNs rely on the properties of the mouse-produced image. Combining this reason with the issue of key space, [12] requires the number of sample points to be between 256 and 1024, which limits the user's action and is inconvenient. Second, the "MASK" method, which is considered in [12] to be the best approach among its three methods and is concluded there to be practicable in common PC applications, is a parallel image encryption algorithm [13]; it is not suitable for use on a single-processor PC. In addition, the tent-map-based approach (TMA) contained in [13] is judged by its authors to be inconvenient for users, since it can only produce a 104-bit random number with a single mouse movement. Likewise, the other method mentioned in [13], the free forward-feedback nonlinear digital filter method (FFNF), is also not suitable for use because of its low speed. And the discrete 2D chaotic map permutation method in [12] is not considered random at all. Thus, we only need to compare the remaining two approaches, the spatiotemporal chaos method in [12] and the new approach based on the tent map (NPTM) in [13]. Here we propose a simple TRNG algorithm based on the user's mouse movement and a one-dimensional chaotic map. We utilize the x-coordinate to be the length of an iteration segment of our TRNs and the y-coordinate to be the initial value of this iteration segment. And, when it iterates, we perturb the parameter. We perform several experiments and use a NIST statistical suite to test the randomness of this TRNG and compare it to the two methods mentioned above. Results show that our algorithm is random and that most of its randomness properties are better than those of the two methods. Moreover, the time cost of our algorithm is even lower, and our algorithm needs no additional equipment. Therefore, we conclude that our algorithm is an effective and practicable method to produce TRNs for universal computers. Introduction of the One-Dimensional Chaotic Map In 2009, Aguirregabiria proposed a class of one-dimensional smooth maps [14]. This class has the good property that, in a certain parameter interval, its maps have a positive Lyapunov exponent, provided that f satisfies three conditions; in particular, (ii) the Schwarzian derivative, Sf(x) = f'''(x)/f'(x) − (3/2)(f''(x)/f'(x))², is negative on the whole interval, and (iii) f is a unimodal map with f(0) = f(1) = 0. That is to say, the map increases from f(0) = 0 until it reaches its maximum f(c), c ∈ (0, 1), and then decreases until f(1) = 0 again.
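For the example map f(x) = x(1 − x) used below, conditions (ii) and (iii) can be checked symbolically; a short sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = x * (1 - x)                        # the example map used below

# Condition (iii): f(0) = f(1) = 0 (f is clearly unimodal with its maximum at x = 1/2)
assert f.subs(x, 0) == 0 and f.subs(x, 1) == 0

# Condition (ii): negative Schwarzian derivative on the whole interval
f1, f2, f3 = (sp.diff(f, x, n) for n in (1, 2, 3))
schwarzian = sp.simplify(f3 / f1 - sp.Rational(3, 2) * (f2 / f1) ** 2)
print(schwarzian)                      # -6/(2*x - 1)**2, negative wherever it is defined
```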
When a map satisfies the three conditions above, we call it "S-unimodal" [14]. Take f(x) = x(1 − x), for example; in the rest of this paper, f(x) is defined in this way. Then, by the proof in [14], the function f_r(x) has a positive Lyapunov exponent for parameters in the interval (−0.75, ∞). We draw the Lyapunov exponent in Figure 1(a), computed numerically from the standard estimate λ ≈ (1/N) Σ ln|f_r'(x_i)|. We then use a modified class of one-dimensional maps, given in Equation (2.4), where k is the expansion coefficient of r on the negative axis. By simple calculation, one finds that when r > k, f_{r,k} is S-unimodal too. Here we take k = 30; one can see that f_{r,k} has a positive Lyapunov exponent λ, shown in Figure 1(b), when r > −22.5. In the rest of this paper, we use these parameters to build our TRNG. TRNG Algorithm Based on Mouse Movement Here we describe our algorithm in detail. We first choose the one-dimensional map introduced above as the iteration map of our TRNG algorithm. We then obtain the pattern of the mouse movement shown in Figure 2, which is the raw material of our TRNG. We utilize the points of the mouse movement, which make up Figure 2. We consider the x-coordinate of a point to be the number of iterations and the y-coordinate to be the initial value of the iteration. In this way, one point yields a sequence of numbers produced by the iterations. The more points the mouse movement contains, the more sequences are produced. And since the initial value and the number of iterations of a segment are unknown and cannot be predicted, those sequences are TRNs. Even if only a few points exist, a large amount of numbers is produced. That solves the problem in [12] of constraining the number of points of the mouse movement pattern. Note that the y-coordinate should be divided by 900 before being used as the initial value. What is more, we add a perturbation in each iteration, which increases the randomness and breaks periodic behavior. We replace the parameter r_i with the value x_i − 1.5. That guarantees that the parameter r does not depart too much from, but stays close to, the value −1, which preserves the properties of the TRNs we produce. We use binary quantization, given in Equation (3.1), to transform our TRNs into a binary sequence. Here t is the threshold, and we adopt t = 0.5. Although there are many possible transformations [15-18], we use binary quantization for its simplicity. In the next section, we introduce several experiments and statistical tests to check whether the generated sequence is random, and compare the time cost and the randomness with the spatiotemporal chaos approach. Kernel Density Map, Histogram, and Autocorrelation Function Using the algorithm proposed above and the pattern we have, we produced 1,000,000 TRNs. We first draw the kernel density map in Figure 3 and the histogram in Figure 4 from the real-valued sequence of our TRNs. Consider x to be an n-bit binary sequence and x′ a shifted sequence relative to x. Let D be the number of bit-to-bit disagreements between x and x′, and let d be the number of time steps by which x′ is shifted relative to x. We then draw the autocorrelation function of our binary TRN sequences using Equation (4.2). The resulting autocorrelation function C is shown in Figure 5 and is a δ-like function.
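A compact sketch of the generator just described is given below. The published modified map f_{r,k} of Equation (2.4) is not reproduced in this excerpt, so a generic chaotic surrogate built from f(x) = x(1 − x) stands in for it here; everything else (x-coordinate as segment length, y/900 as initial value, parameter perturbation r_i = x_i − 1.5, threshold t = 0.5) follows the description above.

```python
import numpy as np

def generate_bits(mouse_points, chaotic_map, threshold=0.5):
    """Sketch of the proposed TRNG: each recorded mouse point (x, y) gives one segment."""
    bits = []
    for px, py in mouse_points:
        value = py / 900.0                  # y-coordinate (divided by 900) = initial value
        r = value - 1.5                     # parameter perturbation, stays close to -1
        for _ in range(int(px)):            # x-coordinate = length of the iteration segment
            value = chaotic_map(value, r)
            r = value - 1.5                 # self-perturbation of the parameter
            bits.append(1 if value > threshold else 0)   # binary quantization, t = 0.5
    return np.array(bits, dtype=np.uint8)

def surrogate_map(x, r):
    """Placeholder for the paper's f_{r,k}: a plain logistic-type map that ignores r,
    used here only to keep the sketch runnable."""
    x = min(max(x, 1e-12), 1 - 1e-12)
    return 4.0 * x * (1.0 - x)

# Example usage with a few fabricated mouse samples (pixel coordinates).
bits = generate_bits([(312, 457), (120, 88), (640, 733)], surrogate_map)
```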
NIST Statistical Test Suite We also test our binary sequences with the NIST statistical test suite [19], which detects deviations of a binary sequence from randomness. Unlike Version 1.7 and before [1], which still includes the Lempel-Ziv complexity test (LZCT), we use the 2010 version of the NIST statistical test suite [19], which comprises 15 test items; thus, in this paper we list 15 items for comparison. The value of each test represents the degree of randomness of the tested sequence: if the value is bigger than 0.01, the sequence passes the test and can be considered random, and the bigger the value is, the more random the sequence. For more details, please refer to [19]. All approaches were implemented with non-optimized Matlab code running on an ordinary PC with a 1.5 GHz Intel Celeron CPU. In Table 1 we compare the time used to generate 256-bit data for our algorithm, the spatiotemporal chaos approach in [12], and the new approach based on the tent map (NPTM) in [13]. Moreover, we list the results of the NIST statistical test suite for our algorithm, the spatiotemporal chaos approach in [12], and NPTM in [13] in Table 2. As there are two tests of the spatiotemporal chaos approach, for n = 1000 and n = 10000, respectively, we list them both in the table. Results and Comparison By comparing the items in Table 2, we find that most, although not all, of the items of our algorithm are better than those of the other three methods. For example, compared with spatiotemporal chaos (n = 10000) and spatiotemporal chaos (n = 1000), 12 of the 15 items in the table are better for our method than for those two methods. Compared with NPTM, 11 of our items are better than its. Conclusion In this paper, we first summarize some drawbacks of two previously proposed mouse-movement TRNGs and then propose a novel TRNG which overcomes the flaws of the former two. The new algorithm is based on mouse movement and a one-dimensional chaotic map. The approach utilizes the x-coordinate of the mouse movement as the length of an iteration segment and the y-coordinate as the initial value of this segment, and we perturb the parameter with the real value produced by the TRNG itself as it iterates. We then perform experiments and compare the time cost of the three approaches; ours turns out to be slightly faster than the other two. Last but not least, we test the three sequences with the NIST statistical test suite. The results show that our algorithm performs better than the other two and is suitable for producing TRNs on universal PCs.
Figure 4: Histogram of the sequence.
Table 1: Average time required to generate a random number using different approaches.
Table 2: NIST SP 800-22 test results.
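As an illustration of the pass/fail criterion applied in Table 2 (a test is passed when its value exceeds 0.01), here is a minimal sketch of the frequency (monobit) test from NIST SP 800-22; it is only one of the 15 items and is shown purely as an example of the kind of check the suite performs.

```python
import math
import numpy as np

def monobit_test(bits):
    """Frequency (monobit) test from NIST SP 800-22: the sequence is not rejected
    when the returned p-value exceeds 0.01."""
    bits = np.asarray(bits, dtype=int)
    n = bits.size
    s = np.sum(2 * bits - 1)                  # map {0, 1} -> {-1, +1} and sum
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# Example: a balanced-looking sequence is almost surely accepted,
# while a constant sequence is clearly rejected.
rng = np.random.default_rng(0)
print(monobit_test(rng.integers(0, 2, 10**6)))   # expected to exceed 0.01
print(monobit_test(np.ones(10**6, dtype=int)))   # ~0, rejected
```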
2,961
2012-02-02T00:00:00.000
[ "Computer Science" ]
A Comparison of Computational Methods for Identifying Virulence Factors Bacterial pathogens continue to threaten public health worldwide today. Identification of bacterial virulence factors can help to find novel drug/vaccine targets against pathogenicity. It can also help to reveal the mechanisms of the related diseases at the molecular level. With the explosive growth in protein sequences generated in the postgenomic age, it is highly desired to develop computational methods for rapidly and effectively identifying virulence factors according to their sequence information alone. In this study, based on the protein-protein interaction networks from the STRING database, a novel network-based method was proposed for identifying the virulence factors in the proteomes of UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv. Evaluated on the same benchmark datasets derived from the aforementioned species, the identification accuracies achieved by the network-based method were around 0.9, significantly higher than those by the sequence-based methods such as BLAST, feature selection and VirulentPred. Further analysis showed that the functional associations such as the gene neighborhood and co-occurrence were the primary associations between these virulence factors in the STRING database. The high success rates indicate that the network-based method is quite promising. The novel approach holds high potential for identifying virulence factors in many other various organisms as well because it can be easily extended to identify the virulence factors in many other bacterial species, as long as the relevant significant statistical data are available for them. Introduction The Escherichia coli O104:H4 bacteria outbreak since May-02-2011 in Germany has brought into focus the need to use reagents to rapidly identify pathogenic organisms and genes involved in the mechanisms of pathogenicity. Although the majority of E. Coli strains are beneficial to human bodies, the genome of this new strain of O104 was modified by mutations or the genetic materials secreted from other bacteria, rendering it able to produce Shiga toxin and resist to many kinds of antibiotics and also to the mineral tellurium dioxide, causing foodborne illness [1]. In the course of pathogens infection and pathopoiesis, virulence factors (VFs) play a key role. VFs are the molecules produced by pathogens that increase the ability of pathogens to cause disease. According to their mechanisms and functions, VFs can be generally classified into the following seven groups: (1) adhesins that attach microbes to their hosts, (2) colonization factors that enable certain bacteria to colonize within host cells, (3) effectors that suppress hosts' defenses, (4) invasions that disrupt the host membranes and stimulate endocytosis, (5) toxins that poison the host cells and cause tissue damage, (6) capsular polysaccharides that protect pathogens from host defenses, and (7) siderophores that take up iron [2][3][4]. House-keeping proteins that are required for maintaining the basic cellular functions and are not related to pathogenesis are not virulence factors [2]. Therefore, virulence factors can be the potential targets of drugs to treat infectious diseases specifically, without killing or inhibiting other bacterial growth, avoiding the higher evolutionary pressure to develop drug resistance [4]. 
At present, complete genome sequences of almost all major bacterial pathogens have been determined (http://cmr.jcvi.org/tigrscripts/CMR/CmrHomePage.cgi), providing significant insights into microbial pathogenesis and drug resistance. Meanwhile, several repositories aiming to collect virulence factors together with their structures, functions and mechanisms have also emerged, facilitating the study of the virulence factors of bacterial pathogens. The Virulence Factor Database (VFDB, http://www.mgc.ac.cn/VFs/), constructed with a virulence-guided classification system, currently contains 409 virulence factors and 2,353 VF-related genes (accessed June 2011) [5]. The Lawrence Livermore National Laboratory Virulence Database (MvirDB, http://predictioncenter.llnl.gov/) integrates DNA and protein sequence information from various databases and provides a browser tool enabling keyword and virulence classification searches [6]. Researchers have identified many genes of potential virulence factors by comparative genomics analysis or by homology searching against the virulence factor databases. For example, Gulig et al. [7] identified 80 genes exclusively found in clade 2, which was the predominant clade among the clinical strains and generally possessed higher virulence potential in animal models of Vibrio vulnificus. Conducting the investigation with a different approach, Wegmann et al. [8] used BLASTP to search against the toxins in the virulence factor database MvirDB to assess the GRAS (generally regarded as safe) status of L. lactis MG1363. Although the data relevant to virulence factors are expanding rapidly, the use of computational tools to interpret, identify and characterize virulence factors is still quite limited. A large number of proteins in microbial genomes are still annotated as hypothetical, or with little functional characterization, or with contradictory information that confuses comparative genomics analysis. Homology searching methods like BLAST [9] can only identify conserved virulence factors and fail to identify novel virulence factors that are evolutionarily distant from known virulent proteins. To deal with this situation, several machine-learning approaches have been proposed, such as SPAAN [10] for identifying adhesins and adhesin-like proteins and VICMpred [11] for classifying bacterial proteins into the following four functional classes: cellular process, information molecule, metabolism molecule and virulence factors. However, the former was restricted to adhesins only, while the latter was trained with merely 670 gram-negative bacterial proteins [10,11]. To improve on these methods, VirulentPred [12] and Virulent-GO [13] were developed recently for predicting bacterial virulent proteins based on their sequence information alone: the samples in the former were formulated as a vector consisting of five kinds of sequence features, while the samples in the latter were formulated as a vector containing GO [14] information. It was reported that the two predictors yielded overall success rates of 81.8% [12] and 82.5% [13], respectively. The present study was devoted to developing a novel network-based method that incorporates protein-protein interaction (PPI) information for identifying bacterial virulence factors in UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv.
Compared with the sequence-based methods such as BLAST, feature selection and VirulentPred, the network-based method achieved a remarkable improvement, with an identification accuracy of 0.9. Further analysis showed that functional associations such as gene neighborhood and co-occurrence were the primary associations between these virulence factors in the STRING database. The high success rates indicate that the network-based method is quite promising. It is anticipated that, with PPI networks becoming available for more and more organisms, the current network-based approach will play an increasingly important role both in applications and in stimulating new strategies for in-depth investigation of the relevant areas. Benchmark Dataset Datasets of virulence factors were downloaded from VFDB [5], a well-established database of experimentally validated virulence factors extracted from the literature and supplemented with comprehensive genomic information on bacterial pathogens. A total of 2,295 virulence factor proteins were obtained, involving 24 pathogens from Bacillus to Yersinia. According to the total number of virulence factors in each of these species, we selected the five species that contained the largest numbers of virulence factors. These five species were: (i) Escherichia coli 536 (UPEC 536), (ii) Pseudomonas aeruginosa PAO1 (P. aeruginosa PAO1), (iii) Salmonella enterica serovar Typhimurium LT2, (iv) Escherichia coli CFT073 (UPEC CFT073), and (v) Legionella pneumophila Philadelphia 1 (L. pneumophila Philadelphia 1). The numbers of virulence factors in the above five species were 230, 190, 165, 117 and 117, respectively. Since these five species are closely related, we also selected another two species that are phylogenetically distant. These two species were Campylobacter jejuni NCTC 11168 (C. jejuni NCTC 11168) and Mycobacterium tuberculosis H37Rv (M. tuberculosis H37Rv), and they contained 98 and 86 virulence factors, respectively. All the aforementioned species, except Salmonella enterica serovar Typhimurium LT2, were included in the STRING database [15]. Consequently, the virulence factors in the remaining six species formed our first-hand dataset. The protein-protein interaction (PPI) network used here was retrieved from the STRING database [15] (http://string-db.org/). For each of the six species, a PPI network was constructed by integrating different sources of information derived from experimental, computational, and text-mining methods. Furthermore, all interactions in STRING are provided with a probabilistic confidence score, representing a rough estimate of how likely a given interaction, describing a functional linkage between two proteins, is to occur. In order to predict virulence factors based on the STRING database, we extracted all the proteins and the interactions between them for the 6 species mentioned above. Mapping the known virulence factors from VFDB to STRING proteins by BLASTP with an HSP score cutoff of 90, we found 207, 110, 189, 116, 98 and 83 proteins for UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168, and M. tuberculosis H37Rv, respectively. These proteins comprised our positive dataset. Proteins not known as virulence factors were randomly selected from the remaining proteins of each species in STRING to compose the negative dataset, with the ratio of the negative dataset size to the positive dataset size equal to 5:1.
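As a concrete illustration of the benchmark construction just described (VFDB virulence factors mapped to STRING proteins as positives, and five times as many non-virulence-factor proteins sampled as negatives), a small Python sketch follows. All function and variable names are hypothetical, and the BLASTP mapping step is assumed to have been performed beforehand.

```python
# Sketch (hypothetical names): assemble positive/negative sets per species at a 1:5 ratio.
import random

def build_benchmark(string_proteins, vf_positives, ratio=5, seed=0):
    """string_proteins: all protein IDs of one species in STRING;
    vf_positives: VFDB virulence factors already mapped to STRING IDs (BLASTP, HSP >= 90)."""
    positives = [p for p in vf_positives if p in set(string_proteins)]
    candidates = [p for p in string_proteins if p not in set(positives)]
    rng = random.Random(seed)
    negatives = rng.sample(candidates, min(ratio * len(positives), len(candidates)))
    return positives, negatives

# Toy example: 3 known virulence factors in a 30-protein "proteome".
proteome = [f"prot{i}" for i in range(30)]
vfs = ["prot1", "prot5", "prot9"]
pos, neg = build_benchmark(proteome, vfs)
print(len(pos), len(neg))   # -> 3 15
```

The 80%/20% training/testing split described in the following paragraph would then be applied to the union of these two sets.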
Then, all the virulent and non-virulent sequences of the six species were randomly divided into a training dataset containing 80% of the sequences and a testing dataset containing the remaining 20%. The training dataset was used with the jackknife cross-validation method to assess the identification performance of each virulence factor classifier developed by us, while the testing dataset was used to compare our methods with other existing tools (such as VirulentPred) in identifying virulence factors. STRING Network-Based Method It has been demonstrated that the STRING network-based method can be used to predict protein phenotypes [16]. The prediction accuracy thus obtained was 65.4% for budding yeast, much higher than the success rate (15.4%) of a random guess. In this study, we apply this method to predict virulence factors. In the PPI network, when predicting whether a protein is a virulence factor or not, we considered two kinds of information: the number of its neighbor nodes (proteins) and the strengths of its interactions (confidence scores) with them. The detailed prediction process based on the STRING network is as follows. Firstly, suppose a PPI network consists of n nodes {p_1, p_2, ..., p_n}, in which each node belongs to one of 2 classes (T = [T_1, T_2]), where T_1 stands for "virulence factor" and T_2 for "non-virulence factor". We denote the class of the i-th protein in the PPI network by T(P_i) = [t_{i,1}, t_{i,2}] (i = 1, 2, ..., n) (1), where t_{i,j} = 1 if P_i belongs to the j-th class and t_{i,j} = 0 otherwise (2). For a query protein P_k, its interaction weights with the m proteins (nodes) in the dataset are defined by W(P_k) = [w_{k,1}, w_{k,2}, ..., w_{k,i}, ..., w_{k,m}]^T (i = 1, 2, ..., m) (3), where w_{k,i} is the interaction weight (confidence score) between P_k and the i-th protein P_i in the dataset concerned. If there is no interaction between them, we set w_{k,i} = 0. Since self-interactions of proteins were not taken into account here, w_{k,i} = 0 when k = i. In order to estimate the likelihood of the protein P_k belonging to the j-th class, we defined the score function S(P_k ∈ j) = Σ_{i=1}^{m} w_{k,i} t_{i,j}, so that proteins without any association with the queried protein contribute nothing to S(P_k ∈ j). Thus, the likelihood of protein P_k belonging to the j-th class can be regarded as the sum of the interaction weights of all its neighbor proteins labeled with the j-th class in the training dataset. Evidently, the larger the value of S(P_k ∈ j), the more likely the protein P_k belongs to the j-th class. The class of the queried protein P_k is therefore determined by the ratio r = S(P_k ∈ 1) / S(P_k ∈ 2): if r > 1, the queried protein P_k is predicted to be a virulence factor; otherwise, another kind of protein. BLAST For the purpose of comparison, we also used BLAST to predict the virulence factors, as follows. First, denote the training set as {p_1, p_2, ..., p_n} and a queried protein as P_k; the queried protein P_k is then compared against the training set proteins by BLASTP with default parameters. From the list of hits {p_1, p_2, ..., p_m} (1 ≤ m ≤ n), we chose the positive and the negative sample with the smallest e-values. If either a positive or a negative sample did not exist in the list, the corresponding e-value was set to 10. We computed the ratio of the positive versus the negative sample's e-value, r = e(p_m ∈ 1) / e(p_m ∈ 2), where p_m ∈ 1 means that the protein p_m is a virulence factor and p_m ∈ 2 that it is not.
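Before continuing with the BLAST baseline, a minimal Python sketch of the STRING scoring rule just described may make it concrete. The data structures (a weight dictionary and a label dictionary) and the function names are hypothetical; only the logic — summing confidence scores of labeled neighbors and comparing the two class scores — follows the description above.

```python
# Sketch (hypothetical names/data): STRING-style neighbor-weighted classification.
# weights[(a, b)] holds the STRING confidence score of the interaction a-b (absent = 0);
# labels[p] is 1 for "virulence factor", 2 for "non-virulence factor" (training proteins only).

def class_score(query, class_id, weights, labels):
    """S(query in class_id): sum of interaction weights to training proteins of that class."""
    return sum(w for (a, b), w in weights.items()
               if a == query and labels.get(b) == class_id)

def predict_virulence(query, weights, labels):
    """Predict 'virulence factor' when the class-1 score exceeds the class-2 score (r > 1)."""
    s1 = class_score(query, 1, weights, labels)
    s2 = class_score(query, 2, weights, labels)
    if s2 == 0:                      # guard for isolated proteins, only to keep the sketch runnable
        return "virulence factor" if s1 > 0 else "unclassified"
    return "virulence factor" if s1 / s2 > 1 else "non-virulence factor"

# Toy example with made-up confidence scores:
weights = {("Pq", "P1"): 0.9, ("Pq", "P2"): 0.4, ("Pq", "P3"): 0.2}
labels = {"P1": 1, "P2": 2, "P3": 2}
print(predict_virulence("Pq", weights, labels))   # -> "virulence factor" (0.9 / 0.6 > 1)
```

Note that in the study itself, proteins with no interactions in the training set are simply discarded (as stated later in the text) rather than labeled "unclassified".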
Obviously, the queried protein is more likely to belong to the same class as the hit protein with the smallest e-value in the hit list. Thus, if r < 1, the queried protein P_k was assigned to the category of virulence factors; otherwise, to the other kinds of proteins. Amino Acid and Pseudo Amino Acid Composition In this method, virulence factors were coded by the amino acid composition (AAC) and the pseudo-amino acid composition (PseAAC) [17], from which important features were selected by a feature selection method. Generally, the frequency of occurrence of each amino acid in a protein sequence can be used to code the sequence; that is, a protein can be represented by a 20-D (dimensional) numerical vector. However, this traditional amino acid composition loses nearly all of the sequence-order information. To improve on it, the pseudo amino acid composition (PseAAC) was proposed [17,18] to complement the simple amino acid composition (AAC) in representing a protein sample. Since the concept of PseAAC was introduced, it has been widely used to study various problems in proteins and protein-related systems, such as predicting the subcellular location of proteins [19], the structural classes of proteins [20] and DNA-binding proteins [21], etc. In this study, we only employed the sequence-order information reflected by a series of PseAAC components [17] to code proteins. This sequence-order information was derived from the following five physicochemical and biochemical properties of amino acids: (i) codon diversity, (ii) electrostatic charge, (iii) molecular volume, (iv) polarity, and (v) secondary structure propensity. The values of these five properties were retrieved from [22][23][24]. To get the optimal results, we set λ = 50 and ω = 0.15 for the PseAAC, as done in [24]. Since each of the aforementioned five properties generates λ = 50 discrete numbers, each protein sample is coded by a (20 + 50×5 = 270)-D vector in the feature space. Feature Selection and NNA Classifier In machine learning, feature selection is a technique that selects an optimal subset of features to build a more robust learning model. Here, we used the Maximum Relevance Minimum Redundancy (mRMR) method [25] to rank the 270 features based on their relevance to the classification variable (maximum relevance) and the redundancy among them (minimum redundancy). More important features are selected earlier and ranked in higher positions. Meanwhile, even with the features ranked according to the mRMR criteria, it remains a challenge to determine the optimal number of features to use for the prediction. To solve this problem, we adopted Incremental Feature Selection (IFS) [26] to find the optimal number of features. Going through the 270 features from higher to lower rank, we added features one by one to code the protein. Thus, we obtained a series of feature subsets S_i = {f_1, f_2, ..., f_i} (i = 1, 2, ..., 270), where f_i is the i-th feature in the ranked feature list. Subsequently, a Nearest Neighbor Algorithm (NNA) [27] classifier was constructed for each feature subset to predict whether a protein is a virulence factor or not. NNA is one of the simplest and most effective machine learning algorithms; it assigns an unknown sample to the class of its nearest neighbor. The core of this algorithm is the distance function D(v_i, v_j) = 1 − (v_i · v_j)/(||v_i|| ||v_j||), where v_i · v_j is the inner product of the two coding vectors v_i and v_j, and ||v|| represents the modulus of the vector v.
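A minimal sketch of the nearest-neighbor step is given below; the decision rule in terms of the ratio of nearest distances is spelled out in the next paragraph. The distance is written here as one minus the cosine similarity of the two coding vectors, which is how the inner-product/modulus description above has been reconstructed; if the original equation used a different normalization, only the `distance` function would change. Names are hypothetical.

```python
# Sketch: nearest-neighbor assignment with a cosine-type distance, as described above.
# Assumption: D(u, v) = 1 - (u . v) / (||u|| * ||v||); smaller means more similar.
import math

def distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def nna_predict(query_vec, training):
    """training: list of (feature_vector, class_label), label 1 = virulence factor, 2 = not.
    Returns the label of the single nearest training sample."""
    nearest = min(training, key=lambda item: distance(query_vec, item[0]))
    return nearest[1]

# Toy 3-D feature vectors (real vectors would be the selected subset of the 270-D coding):
train = [([0.9, 0.1, 0.0], 1), ([0.1, 0.8, 0.1], 2), ([0.2, 0.7, 0.1], 2)]
print(nna_predict([0.8, 0.2, 0.05], train))   # -> 1
```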
Since each protein is coded by an i-D (1 ≤ i ≤ 270) vector and the training set contains n proteins {p_1, p_2, ..., p_k, ..., p_n}, we can determine the class of a queried protein p from the ratio r = D_min(p, class 1) / D_min(p, class 2), where D_min(p, class j) is the nearest distance between the queried protein p and a j-th-class protein p_k ∈ j, in which j = 1 means that the protein p_k belongs to the positive samples and j = 2 that it belongs to the negative samples. According to the NNA rule, if r < 1 the queried protein is assigned to the virulence factors; otherwise not. Since the NNA classifier can be applied to every feature subset to perform a prediction, we draw an IFS (Incremental Feature Selection) curve to reflect the relationship between the performance of the NNA classifier and the feature subset. In the curve, the x-axis is the number of features in the subset S_i and the y-axis is the prediction accuracy of the NNA classifier. The optimal prediction result corresponds to the highest point of the curve, whose x-coordinate gives the feature subset that achieves the highest overall accuracy. Jackknife Cross-Validation and Evaluation In statistical prediction, the jackknife cross-validation, also known as leave-one-out cross-validation (LOOCV), is regarded as an objective and effective method to evaluate the effectiveness of a classifier in practical applications. Accordingly, we adopted this method here to examine the quality of the present classifiers. During the jackknifing process, each of the proteins in the dataset was in turn singled out for testing by the classifier trained with the remaining proteins. To evaluate the performance quality, we calculated the following six indexes: sensitivity (S_n), specificity (S_p), precision (P), recall (R), accuracy (AC) and the Matthews correlation coefficient (MCC): S_n = TP/(TP + FN), S_p = TN/(TN + FP), P = TP/(TP + FP), R = TP/(TP + FN), AC = (TP + TN)/(TP + TN + FP + FN), and MCC = (TP·TN − FP·FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN)), where TP denotes the true positives, TN the true negatives, FP the false positives, and FN the false negatives. S_n, S_p and AC are the percentages of virulent proteins, non-virulent proteins and all proteins, respectively, that are correctly predicted. Precision (P) is the proportion of true positives among all positive results (both true positives and false positives), while recall (R) in our classification context is in fact the true positive rate (S_n), but is the term used in precision/recall curves. An MCC equal to 1 indicates a perfect prediction, whereas 0 means a completely random prediction. We then further calculated the ROC score, defined as the area under the ROC curve, i.e. the plot of the true positive rate (S_n) as a function of the false positive rate (1 − S_p), using the R package ROCR [28]. We also used ROCR to draw precision/recall curves for the comparison of the aforementioned three methods. Virulence Factors and Databases By means of a molecular version of Robert Koch's postulates, which built a causal relationship between pathogens and disease, Stanley Falkow attempted to provide a definition of the term 'virulence factor': (1) the potential virulence factor gene should be found in all pathogenic strains of the genus or species but be absent from non-pathogenic strains; (2) the virulence of the microbe with the inactivated gene should be less than that of the unaltered microbe in an appropriate animal model; (3) reintroduction of the relevant gene into the microbe should restore virulence in the animal model [29,30]. His work has provided an experimentally rigorous approach to the study of virulence in certain bacterial pathogens.
However, it should be noted that the definition of the virulence factor is also problematic and controversial [31,32]. For example, some "classic" virulence factors, such as invasion genes (e.g., yjjp, ibeB and ompA), were also found in the genomes of commensal bacteria [32]. In spite of this imprecise definition, the virulence factor concept has still been used as a powerful engine driving research in the fields of microbial pathogenesis and infectious diseases, and has thus greatly furthered our understanding of microbial pathogenesis [33][34][35][36]. Apart from VFDB and MvirDB mentioned above, several other databases have been developed specifically for virulence factors, such as PHI-base (Pathogen Host Interactions Database) [37], ARDB (Antibiotic Resistance Genes Database) [38] and ATDB (Animal Toxin Database) [39]. Among these databases, VFDB was found to be the broadest and most comprehensive and to have the highest quality, with its curated dataset and virulence-guided classification system [5,33]. Via exhaustive literature screening and expert review, VFDB provides up-to-date information on experimentally validated bacterial virulence factors from genera of medically important bacterial pathogens. We therefore used the virulence factors from VFDB as our primary dataset. Results by BLAST At first, we conducted the homology search for each species by BLASTP with an HSP score cutoff of 90. However, most of the proteins (more than 80 percent) in the training dataset would then be discarded because of the poor homology among them. Therefore, in the following study, to make use of most of the data, no cutoff was set for the BLAST method. If the ratio of the smallest e-values of the positive and negative samples was less than one, the query protein was assigned to the virulence factor class regardless of how poor the alignment was; if not, to the non-virulence factor class. In some cases, it was also possible that no hit whatsoever existed for a query protein, and then the query protein was excluded from the training dataset. For example, in UPEC 536, among 993 (207×6×0.8) proteins, 960 were predicted by BLAST and 33 proteins were discarded. The prediction results are given in Table 1. As can be seen, the S_n, S_p and AC for UPEC 536 were 0.460, 0.869 and 0.800, respectively; the MCC and the results for the other species are also listed in Table 1. We can see that the overall prediction accuracies are around or below 0.8. Results by the Feature Selection Method We also applied the feature selection method to predict whether a protein is a virulence factor or not. The model was constructed as follows. First of all, each of the proteins in the training dataset was coded as a 270-D feature vector in the feature space (see Section 4 of Materials and Methods). Then, the mRMR program was run to rank the 270 features according to the criteria of Maximum Relevance and Minimum Redundancy. The mRMR-ranked features can be found in Table S1 and were used in the IFS procedure for feature selection and analysis. For each feature subset, an NNA classifier was built and its prediction accuracy was calculated by the jackknife cross-validation. Based on the number of features in a feature subset and the corresponding prediction accuracy, we plotted the IFS curve (Figure 1). Again taking UPEC 536 as an example, it was observed that when the feature subset contained the first 47 features, the prediction accuracy reached its highest value of 0.824773. Hence, the optimal prediction model should be constructed from the first 47 features in the mRMR feature list. For the other five species, the optimal number of features and the corresponding accuracy were (148; 0.797348), (112; 0.807554), (20; 0.777288), (204; 0.834043) and (26; 0.796482), respectively. As described in the Materials and Methods, two kinds of features were used to code the protein sequences. They were conventional amino acid compositions and pseudo-amino acid compositions, the latter being based on 5 kinds of physicochemical and biochemical properties of amino acids: codon diversity, electrostatic charge, molecular volume, polarity and secondary structure. The distribution of the number of features from each property in the optimized feature subset was investigated and is shown in Figure 2. As panel A of the figure shows, in the optimized feature subset of UPEC 536 there were 15 features of amino acid composition, 8 features of codon diversity, 6 features of electrostatic charge, 6 features of molecular volume, 7 features of polarity and 5 features of secondary structure. This indicates that both amino acid composition and pseudo-amino acid composition contributed to the prediction of virulence factors and that conventional amino acid composition may play an irreplaceable role in the prediction. Furthermore, the amino acid composition analysis of virulence and non-virulence factors revealed some interesting results. According to the criterion of maximum relevance to the target (Table S1), we selected the top 4 amino acid composition features ranked by mRMR to investigate the feature distribution between virulence and non-virulence factors (Figure 3). It was observed that the compositions of the residues Ala, Ser, Arg and Val, corresponding to AA composition 1, AA composition 16, AA composition 15 and AA composition 18 in Table S1 respectively, contributed significantly to the classification of virulence and non-virulence factors. This is supported by the findings of Garg et al. [12]. Amino acid compositions have been successfully applied to the prediction of antimicrobial peptides [24], bacterial virulent proteins [12] and subcellular localization [40,41], etc., and in many cases the approach outperformed homology searching methods [12,40], consistent with our results. Figure 3. Histogram illustration showing the difference in amino acid occurrence frequency between virulence and non-virulence factors. The histograms were plotted for Ala, Ser, Arg, and Val in UPEC 536, respectively. The x-axis is the amino acid composition, while the y-axis is the frequency of sequences in the dataset that have the corresponding amino acid composition. P-values are given by the Wilcoxon rank sum test and measure how much evidence we have against the null hypothesis that the amino acid composition distribution is the same for virulence and non-virulence factors. Traditionally, when the p-value is < 0.05, the null hypothesis is rejected, that is, the amino acid composition distribution is significantly different for virulence and non-virulence factors. The feature distribution histograms and p-values show that the difference in amino acid composition frequencies between virulence and non-virulence factors is significant, and it is thus reasonable to pick out virulence factors from proteomes based on amino acid composition features. doi:10.1371/journal.pone.0042517.g003
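The mRMR ranking plus IFS procedure described above amounts to a simple loop; the sketch below illustrates it under the assumption that a helper `jackknife_accuracy` evaluates an NNA classifier on a given feature subset (both names are hypothetical, and the helper is not implemented here).

```python
# Sketch: Incremental Feature Selection over an mRMR-ranked feature list.
# ranked_features: list of feature indices, most relevant / least redundant first.
# jackknife_accuracy(X, y, cols): assumed helper returning the LOOCV accuracy of an
# NNA classifier that uses only the columns in `cols`.

def incremental_feature_selection(X, y, ranked_features, jackknife_accuracy):
    """Return (best_k, best_accuracy, curve), where curve[i] is the accuracy with i+1 features."""
    curve = []
    best_k, best_acc = 0, 0.0
    for k in range(1, len(ranked_features) + 1):
        cols = ranked_features[:k]          # the feature subset S_k = {f_1, ..., f_k}
        acc = jackknife_accuracy(X, y, cols)
        curve.append(acc)
        if acc > best_acc:                  # highest point of the IFS curve
            best_k, best_acc = k, acc
    return best_k, best_acc, curve
```

For UPEC 536, the highest point of this curve is reported above at k = 47 features with an accuracy of 0.824773.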
By analyzing the feature subset that achieved the best prediction accuracy for each species (Figure 2), it was revealed that the distribution of the features differed among the six species. For UPEC 536 and P. aeruginosa PAO1, conventional amino acid compositions played the most important role, while for the other 4 species, pseudo-amino acid components such as codon diversity, electrostatic charge, polarity and secondary structure contributed more towards the prediction. The reasons may come from two factors. One is that the completeness of the annotation of virulence factors is not the same in each species: some may have been studied by more research groups and have more detailed and accurate annotations. The other may be inaccurate annotation, where some virulence factors are still annotated as non-virulence factors. Listed in Table 2 are the results obtained by the feature selection method on the six species via the jackknife tests. Performance of the Network-Based Method From STRING, the probabilistic confidence scores of interactions between proteins can usually be acquired, and these can then be used to investigate biological problems [16,42,43]. However, some proteins may not interact with any of the other proteins in the same training dataset. Taking UPEC 536 as an example, only 959 proteins in its training dataset have interactions with other proteins, while the remaining 993 - 959 = 34 proteins have no interactions at all with the other proteins, although they may interact with proteins outside the training dataset. Considering that the negative dataset was generated randomly, it is always possible that some proteins do not interact with any others in the training dataset. One feasible solution is to put all the non-virulence factors in STRING into the negative dataset. Unfortunately, this would make the number of negative samples so large that S_n would be very low, even though AC could be high. In order to balance the positive and negative samples, we tested the performance by setting the ratio between positive and negative samples to 1:2, 1:5 and 1:10, and we found that the ratio 1:5 gave the most desirable performance. For the other five species (i.e., UPEC CFT073, L. pneumophila Philadelphia 1, P. aeruginosa PAO1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv), the corresponding numbers of proteins without any interaction with the others are 528 - 461 = 67, 556 - 506 = 50, 907 - 880 = 27, 470 - 467 = 3 and 398 - 372 = 26, respectively. All these proteins were discarded. Listed in Table 3 are the results obtained by the current network-based method on the six species via the jackknife tests. As can be seen from the table, the AC values were more than 0.90 for all species except M. tuberculosis H37Rv, significantly higher than those of either the BLAST method or the feature selection method, indicating that the current network-based method is quite promising and may hold very high potential for identifying virulence factors in various organisms. However, it should be noted that, although the AC value achieved by the network-based method for M. tuberculosis H37Rv was higher than those of the BLAST and feature selection methods, the value was only 0.84140, much less than those for the other five species. The poor prediction performance for M. tuberculosis H37Rv might be due to the fact that the quality of the protein-protein interaction data for this organism in the STRING database is much poorer [44].
Comparison between the Network-Based and Other Methods In this study, we developed three different methods to identify virulence factors. As shown in Tables 1, 2, and 3, the network-based method significantly outperformed the BLAST method and the feature selection method. Meanwhile, we also performed ROC and precision/recall comparisons. For the BLAST method, when the query protein sequence was very similar to some of the protein sequences in the database, the e-value would be close to zero, and hence the corresponding distance would also be near zero in the feature selection method described above. Consequently, many ratios would take extreme values, making the ROC and precision/recall curves for both the BLAST and feature selection methods look abnormal. To tame these extreme values, we adopted a monotone decreasing function (Eq. 11), where x is either the e-value or the distance. By means of Eq. 11, all the e-values and distances could be mapped into the interval (0,1]. After this transformation, we redrew the ROC and precision/recall curves (Figures 4 and 5). As expected, the two kinds of curves showed once again that the network-based method achieved the best performance among the three methods for all six species. Moreover, based on the independent testing datasets for the six species, we planned to compare the prediction performance of our three methods with other existing methods, including VirulentPred [12] and Virulent-GO [13]. Unfortunately, no downloadable or online tool whatsoever was available for Virulent-GO. Thus, only the comparison with VirulentPred was made here as a compromise. The concrete comparison procedure is as follows. The positive and negative testing datasets were submitted to the VirulentPred online service (http://203.92.44.117/virulent/submit.html) directly with default parameters. For our three methods, it should be noted that the feature set used to code the testing dataset in the feature selection method was the optimal subset obtained from the training dataset. The values of Sn, Sp, AC and MCC were also calculated for each method. As can be seen from Table 4, the network-based method achieved much better prediction performance than the BLAST and feature selection methods here, too. Although the Sn value of VirulentPred was slightly higher than that of the network-based method, its Sp value was much lower, indicating that false positives are a serious problem for VirulentPred, leading to its poor prediction accuracy (AC) and MCC. As for M. tuberculosis H37Rv, Zhou and his colleagues [44] have demonstrated that the protein-protein interaction data for this organism in the STRING database are of low quality and may thus unfavorably affect our network-based method. Accordingly, it was not surprising that the performance on the testing dataset for this species was quite poor compared with the other five. Taken together, we can conclude that the method based on the STRING networks is indeed better at identifying bacterial virulence factors. From the Sequence to the Network Determining protein function is one of the most challenging problems in the post-genomic era. In this context, sequence-based methods such as BLAST are the primary tools for dealing with this kind of problem. However, their accuracy is considerably affected by the type and amount of information available on the specific protein family.
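Eq. 11 itself is not reproduced in the text above. One common monotone decreasing map onto (0,1] with the stated behaviour (bounding the extreme ratios produced by near-zero e-values or distances) is f(x) = 1/(1 + x); it is used below purely as an illustration and is an assumption, not necessarily the paper's Eq. 11.

```python
# Illustrative stand-in for a monotone decreasing map of e-values/distances into (0, 1].
# Assumption: f(x) = 1 / (1 + x); any strictly decreasing map onto (0, 1] would serve
# the same purpose of taming extreme ratios before drawing ROC / precision-recall curves.

def squash(x):
    return 1.0 / (1.0 + x)

scores = [0.0, 1e-30, 0.5, 10.0]               # e-values or NNA distances
print([round(squash(x), 4) for x in scores])   # -> [1.0, 1.0, 0.6667, 0.0909]
```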
Also, these methods would fail for systems that contain a significant proportion of novel proteins without functionally known homologous counterparts in the current databases. Therefore, many new computational methods have been developed to infer protein function using the principle of guilt-by-association with other functional properties, complementing the sequence-based methods [45]. Our method based on the STRING protein-protein interaction network reflects one of the efforts in this regard. As the cornerstone of the current network-based method, the STRING database quantitatively integrates interaction data from many information sources, such as phylogenetic, experimental and existing knowledge information, extending the direct (physical) associations to the indirect (functional) associations. We have analyzed the detailed sub-score information of our STRING network data for the virulence factors in the six species. It was found that most of the interactions among virulence factors in the STRING database were functional associations, mainly the neighborhood and co-occurrence associations (Figure 6). In view of this, we further studied the locations of the virulence factors in the genomes and the biological processes they are involved in. It has been noted by previous investigators [32,46] that many virulence factors are present in pathogenicity islands involved in horizontal gene transfer. In 2009, with the growing number and diversity of bacterial genomes sequenced, a systematic large-scale analysis across diverse genera indicated that virulence factors are disproportionately associated with genomic islands (GIs) [33]. Subsequently, we mapped our virulence factors of the six species to the SEED subsystems using the SEED Viewer version 2.0 (http://pubseed.theseed.org/seedviewer.cgi) [47]. In microbial genome annotation, the SEED is the first annotation environment that curates genomic data via the curation of subsystems by an expert annotator across many genomes, rather than on a gene-by-gene basis. These subsystems group genes by the pathways or structures in which they participate. For instance, type 4 secretion and conjugative transfer comprise a set of functional roles that some proteins perform (the type IV secretion system protein VirD4, the inner membrane protein that forms the channel for type IV secretion of the T-DNA complex VirB3, the minor pilin of the type IV secretion complex VirB5, etc.). Our results revealed that more than half of the mapped virulence factors participated in a specific biological process or structural complex with at least one other virulence factor (Figure 7, Figures S1, S2, S3, S4, and S5). As Figure 7 shows, in C. jejuni NCTC 11168, as many as 29 and 13 virulence factors were involved in the flagellum subsystem and the flagellar motility subsystem, respectively. Flagella are listed as a major virulence factor of Campylobacter in VFDB; they can penetrate the mucus barrier and are important for intestinal colonization. The clustering of virulence factors in prokaryotic genomes and their enrichment in biological pathways make their functional associations, such as neighborhood and co-occurrence, common and confident in the STRING database. Our network-based method is based on the hypothesis that proteins participating in the same cellular processes or localized to the same cellular compartment usually share similar functions.
This is reasonable because a pair of proteins participating in the same pathway or located in the same complex is many times more likely to interact than a random pair of proteins [48]. In fact, during the course of infecting susceptible hosts, multiple virulence factors in bacterial pathogens need to cooperate with each other [13,34,49]. For example, it has been shown that the prototypical type 1 secreted toxin, α-hemolysin (HlyA), is encoded by UPEC 536 and CFT073, and its expression is associated with increased clinical severity in urinary tract infection patients [50]. However, the HlyA protein requires a post-translational modification for activity. The inactive protoxin pro-HlyA is activated by another virulence factor protein, HlyC, an acyl carrier protein that acts as the fatty acid donor and is responsible for the acylation of HlyA, resulting in toxin activation [49]. Another example is that the virulence factors secreted by Pseudomonas aeruginosa, including β-lactamase, alkaline phosphatase, hemolytic phospholipase C, and Cif, are not released individually as naked proteins into the surrounding milieu. Instead, bacteria-derived outer membrane vesicles (OMVs) deliver these virulence factors simultaneously and directly into the host airway epithelial cells in a coordinated manner [34]. In addition, Lilburn et al. [42] proposed an approach that assembles a list of known virulent proteins and uses them as bait proteins in the STRING functional association network to detect candidate proteins involved in virulence in Vibrio cholerae, including proteins that are overlooked because of incomplete annotation or that require a follow-up investigation to confirm their roles in virulence. All these facts are consistent with the notion that virulent functions depend on the interaction of a large number of proteins. That is the essence of why the STRING network-based method is able to perform better than the sequence-based methods such as BLASTP and the feature selection method (Tables 1, 2 and 3). Figure 6. The functional associations of virulence factors in the STRING database. For each protein-protein interaction in the STRING database, there are seven evidence channels; each is assigned a confidence subscore, and these are integrated into a combined score reflecting the likelihood of the interaction. We analyzed all the interactions of the virulence factors of the six species and computed the mean scores of the seven evidence channels and the percentage of interactions for which each evidence channel had a score greater than 0. After normalization based on the combined score, we found that gene neighborhood and co-occurrence were the main associations between these virulence factors. doi:10.1371/journal.pone.0042517.g006 Application and Improvement Although the network-based method was only tested on the proteins of six species, the high success rates obtained indicate its promising potential to be applied to other species as well. At present, we only considered the virulence factors annotated in VFDB and the protein-protein interactions in the STRING database. Many other databases, such as MvirDB and SwissProt [51], also contain a large number of virulence factors, some of which are not collected in VFDB. Accordingly, for any other given bacterial species, we can also use the current network-based method to identify the virulence factors concerned, once significant statistical data are available for the species. In other words, the current method can be easily extended to identify the virulence factors in many other bacterial species. Despite the quite high prediction accuracy of the network-based method, the following limitations should be pointed out. Firstly, some of the hypothetical non-virulent proteins in the training set could turn out to be virulence factors after more of their functions are determined in the future. This will be less of a problem when more proteins are accurately annotated by experiments. Secondly, some protein-protein interactions from the STRING database might not be reliable, as is the case for M. tuberculosis H37Rv. Also, some of the methods that generate protein interaction data - e.g., two-hybrid assays or gene-neighborhood inference - are susceptible to noise and might have a high false-positive rate [52][53][54]. Nevertheless, STRING, by combining protein-protein interactions from multiple sources, can improve their expected accuracy to at least 80% for more than half of the genes, clearly demonstrating the reliability of the data [55] in many cases. With enhanced quality of this small fraction of PPI networks in STRING, the performance of our network-based method can be further improved. Thirdly, the above network-based method has only taken into account the neighbors that directly interact with the query protein, without considering the full topology of the network during the prediction process. Yet it has been observed that up to 69% of yeast proteins share functions with their indirect interaction partners, while only 48% share functions with their immediate interaction neighbors, as indicated in BioGRID [56]. Lastly, since the pathogenicity mechanism involves interactions between host and pathogen proteins [57,58], more information about these kinds of interactions would be very useful for improving the methodology and might even provide clues or insights for revealing the mechanism. Supporting Information Table S1 The feature list for all six species by mRMR. The first part lists the features ranked according to the criterion of maximum relevance to the target, and the second part lists the features ranked according to maximum relevance and minimum redundancy. The mRMR method assigns a score to each feature and then ranks the features based on these scores.
9,458.4
2012-08-03T00:00:00.000
[ "Biology", "Computer Science", "Medicine" ]
Kant’s ethical attitudes in his master’s thesis On fire . The significance of the ethical concepts of Kant, developed in the critical period of his philosophy, goes far beyond the boundaries of the Enlightenment. In the Groundwork of the Metaphysics of Morals , the Critique of Practical Reason , The Metaphysics of Morals Kant criticises the most common doctrines of morality in the eighteenth century, including the philosophy of moral sense and the Wolffian ethics. However, almost all of Kant’s life took place in the “Age of Reason”, and his early works fit completely into the cultural context of the Enlightenment. True, a significant part of them is devoted to natural science problems that are not directly related to practical philosophy. In this material it is difficult to identify the ethical foundations which Kant, the natural philosopher, followed and to establish how typical they are for the German Enlightenment of the mid-eighteenth century. It is similarly difficult to see which of them would continue to be significant for Kant in his mature period. It is advisable to begin with an analysis of Kant’s Master’s thesis On Fire . As examples of Kant’s ethical attitudes, I can point out: observance of the etiquette norms of writing in the Gallant Century; trust in the geometric method; declaration of distrust of philosophical speculation in itself, combined with the construction of empirically verifiable hypotheses; the desire to establish the truth, synthesising speculative and experimental knowledge. Introduction At the heart of Kant's practical philosophy, which he formulated in the critical period, is the metaphysics of morals. It tries, by the use of mere reason, to establish the principle of moral behaviour expressed as a synthetic a priori judgement. Kant believed that he had managed to prove that the categorical imperative of morality, which contains the form of volition generally, was the desired "synthetic practical proposition" (GMS, АА 04: 444; Kant, 1997, p. 51), whereas the ethics founded on the latter was necessary and universal. In March 1789, deeply impressed by the Critique of Practical Reason, Heinrich Jung-Stilling wrote to Kant: "As soon as one comprehends the Critique of Reason one sees that no refutation of it is possible. Consequently, your philosophy must be eternal and immutable" (Br, AA 11: 7; Kant, 1999, p. 288). It is a simplification to reduce the ethics of the mature Kant, which pretends to applicability to every rational being, to the cultural context of the Enlightenment; a simplification that both Kant and his contemporaries would have condemned. In the second note to the fourth theorem of the Critique of Practical Reason Kant shows that the enlighteners Mandeville, Hutcheson, Wolff, Crusius, like other moralists, proceeded from the wrong material foundations, defining the principle of morality instead of beginning, as was necessary with the form: "[…] the matter of the maxim can indeed remain, but it must not be the condition of the maxim since the maxim would then not be fit for a law" (KpV, AA 05: 34; Kant, 2015, p. 31). Nevertheless, as an original thinker, Kant undoubtedly came to maturity in the Age of Enlightenment. Therefore, it is advisable to consider particularly the early works of Kant in the context of the ethics of the Enlightenment. 
Kant gives his most comprehensive account of moral problems in two works of 1764 -the Observations on the Feeling of the Beautiful and Sublime and the Inquiry Concerning the Distinctness of the Principles of Natural Theology and Morality. In these writings his position approaches the ideas that were voiced in the early eighteenth century by the advocates of the moral sense theory: 1 "Hutcheson and others have, under the name of moral feeling, provided us with a starting point from which to develop some excellent observations" (UD, AA 02: 300; Kant, 1992b, p. 274). Kant's other printed works of the pre-critical period focus mostly on natural science and do not have a direct bearing on practical philosophy. Analysis of these writings helps to identify the ethical rules that Kant, the natural scientist, observed and to reconstruct the moral principle that he applied. This leads to the historical-philosophical task of determining to what extent the moral norms accepted by the young Kant were typical of the German Enlightenment of the eighteenth century, which of them would be criticised in his late works and which would endure. Fulfilling this task requires a series of philosophical investigations, and one does well to begin with Kant's first Latin dissertation On Fire. The ethos of the scientist in the treatise Succinct exposition of some meditations on fire The natural philosophical dissertation On Fire uses the geometrical method. It includes an introduction, two sections comprising twelve propositions, and a conclusion. There are eleven drawings in the dissertation. A bibliography and elements which are now obligatory in a research paper (problem statement, description of methods) are missing. In most cases, however, Kant mentions in the text the names of scholars whose works he consulted and sometimes cites the titles of the books as well as the publishing details. The introduction notifies the reader that the ensuing meditations on fire are preliminary in nature. Kant emphasises that he "carefully guarded against […] hypothetical and arbitrary proofs" (Di, AA 01: 371; Kant, 2012, p. 312). He proclaims the geometrical method and reliance on empirical data the prerequisites for the veracity of a piece of research and expresses his mistrust of speculative reflections. The first section deals with the nature of solid and liquid bodies. Kant deems it necessary to begin with this issue since "the force of fire is manifested principally in the rarefaction of bodies and in breaking down their combination" (Di, AA 01: 371; Kant, 2012, p. 312). Kant is interested in the cohesion of elements constituting bodies. Misinterpreting Cartesian ideas, Kant argues in Proposition I that the fluidity of matter cannot be explained by its division into "smooth minute parts that loosely cohere" (Di, AA 01: 371; Kant, 2012, p. 312). In Proposition II Kant analyses Pascal's law and concludes that liquids consist of both "minute parts" and "elastic matter, which is present between the elementary parts of a fluid body." This matter "is nothing other than the matter of heat" (Di, AA 01: 372; Kant, 2012, p. 313). Proposition III holds that this elastic matter is found between elements in solid bodies. Proposition IV invokes "elastic matter" to explain the ability of metal wires to "stretch[] a little without breaking when a weight is hung from them" (Di, AA 01: 373; Kant, 2012, p. 314).
In Proposition V Kant employs his hypothesis about intervening elastic matter to analyse Hooke's law (in so doing, he makes a calculation error) and to inquire into experiments on the compression of elastic bodies. Kant ends the section with the conclusion that "every body consisting of solid parts is held together by some elastic matter as the bond of its unity" (Di, AA 01: 375; Kant, 2012, p. 317). The second section focuses on the matter of fire and its modifications. It begins with Proposition VI, which examines some empirically observed properties of fire, heat, and cold. Proposition VII identifies the matter of fire with the intervening elastic matter discussed in section one, whereas heat is defined as the "undulatory or vibratory motion" (Di, AA 01: 376; Kant, 2012, p. 318) of this matter. Later Kant describes the phenomenon of boiling, using, as Erich Adickes believes, contemporary textbooks of physics (Adickes, 1925, p. 40). Proposition VIII, which contains a reference to Newton's Opticks (Newton, 1717), says that "[t]he matter of heat is nothing but the ether (the matter of light) compressed by a strong attractive (adhesive) force of bodies into their interstices" (Di, AA 01: 377; Kant, 2012, p. 318). At the same time, Kant writes that his opinion agrees well with both the fact that glass is transparent and Euler's hypothesis of light being the pressure of the ubiquitous ether. In Proposition IX Kant recounts the method for measuring heat which was put forward by Guillaume Amontons and summarises the ideas of Daniel Gabriel Fahrenheit, Herman Boerhaave, Pierre Charles le Monnier and Jean-Baptiste Baron de Secondat to reflect on the properties of air at different heights. The focus of Proposition X is on the nature of vapours, which, according to Kant, consist of tenuous bubbles, particles of liquids that hold elastic ether within their thin walls. Proposition XI explores the properties of vapour (or gas, or "elastic liquid", as Kant calls it) as described by Stephen Hales (1727) (Di, AA 01: 381; Kant, 2012, p. 323). Finally, Proposition XII reveals the nature of fire. Kant defines it as "vapor brought to that degree of fire that it flashes with light and goes out only when there is insufficient fuel" (Di, AA 01: 383; Kant, 2012, p. 325). Further on, Kant draws on Euler's works (without referencing them) to explain how a tiny spark can start a big fire without violating the principles of mechanics that state that the cause is equivalent to the effect. In the brief conclusion Kant writes courteously that he commends his dissertation and himself "to the indulgent and benevolent will of the [Most Distinguished Faculty of Philosophy]" (Di, AA 01: 384; Kant, 2012, p. 326). When reading On Fire in Latin, one immediately sees that Kant meticulously adheres to the conventions of eighteenth-century writing etiquette. Following the tradition of the time, the full title of the dissertation contains thirty-one words, including the adverbs "graciously" (benevole) and "humbly" (humillime) (cf. Di, AA 01: 369; Kant, 2012, p. 311). Although Kant refers to his dissertation as a "succinct exposition", delineatio (Di, AA 01: 369; Kant, 2012, p. 311), or a "little work", opusculum (Di, AA 01: 384; Kant, 2012, p. 326), and writes that it is merely "the outlines of a theory" (Di, AA 01: 369; Kant, 2012, p. 311), which will not occupy much of the time of university professors, whom he calls "men occupied with heavier duties" (Di, AA 01: 384; Kant, 2012, p.
326), he honestly attempts to create a work of scientific value. Kant advances an original hypothesis about the elastic matter, i.e. ether, existing alongside atoms. He uses this matter to explain some observable properties of liquids, solids, gases, fire and flame. He is concerned about the reliability of his findings and places restrictions on speculative reason, which is eager to prove anything. Kant insists that he follows "the thread of experience and geometry, without which the way out of the labyrinth of nature can hardly be found" (Di, AA 01: 371; Kant, 2012, p. 312). I suppose that this research approach reveals the influence of the tradition of experimental natural philosophy, with which Kant familiarised himself by reading Newton's works, Boerhaave's textbooks, Euler's writings and articles on physics and chemistry published in the "Transactions of the Royal Academy of Sciences at Paris". Despite proclaiming the significance of experiments in investigating the nature of fire, Kant acts as the opposite of a field scientist. The only experimental method he uses is found in Proposition VI. Consisting in the enumeration of observable well-known properties of fire, it is equipped with neither measurements nor calculations. The other empirical data are drawn from scientific publications published many years before. For example, Philippe de la Hire (1705) conducted his experiments on compression of elastic bodies as early as the eighteenth century. Nevertheless, all the descriptions of experiments that Kant cites are authoritative, verifiable, and reproducible. He offers experimental physicists some of his insights into matters of natural science as "an opinion [...] worthy of their most accurate investigation" (Di, AA 01: 382; Kant, 2012, p. 324). Probably the absence of Kant's own experiments 2 is due, firstly, to a shortage of time -Kant was preparing for the oral Master's examination while working on the voluminous Universal Natural History and Theory of the Heavens. Secondly, as almost all German universities of the mid-eighteenth century, the Albertina was faithful to the Wolffian philosophy, which taught that strict adherence to the geometrical method (mos geometricus) was a sufficient condition for the veracity of scientific data. The selection of this method of presenting research findings and a dearth of experiments indicate that, at the beginning of his career as a researcher, the young Kant strongly relied on the Wolffian tradition. I think that, bereft of an opportunity to study fire empirically and forced to construct speculative hypotheses that could be verified experimentally (and required such verification), Kant tried to abide by the ethical rule "perform the most perfect action in your power" (UD, AA 02: 299; Kant, 1992b, p. 273), which was propagated by Wolff and his acolytes. Thus, the essence of the ethical position Kant takes in his doctoral dissertation On Fire was adherence to the etiquette of the age of fêtes galantes, trust in the geometrical method, outright doubts about philosophical reasoning per se (accompanied by the construction of empirically verifiable hypotheses), and a desire to establish the truth by synthesising speculative and empirical knowledge. All these attitudes flow from the principle of perfection (in this case, perfection in the cognition of nature), typical of Wolffian ethics. 
The development of the ethical position from On Fire to Kant's later works In his Latin dissertations written immediately after the treatise On Fire, Kant continues to adhere to the writing etiquette of the time, employs the geometrical method 3 and tries to synthesise the speculative and empirical approaches. In A New Elucidation of the First Principles of Metaphysical Cognition Kant cites many empirical examples from physics and astronomy. In the introduction to the Physical Monadology, he states that "[m]etaphysics, therefore, which many say may be properly absent from physics is, in fact, its only support; it alone provides illumination" (MonPh, AA 01: 475; Kant, 1992c, p. 51). Later he will immerse himself more deeply in the problems of natural science and adopt the attitudes shared by empiricists. For instance, in the Observations on the Feeling of the Beautiful and Sublime, his first printed work that directly touches upon ethical issues, Kant repeatedly mentions moral perfection. Yet he calls conscious moral feeling the moral principle: " […] true virtue can only be grafted upon principles, and it will become the more sublime and noble the more general they are. These principles are not speculative rules, but the consciousness of a feeling that lives in every human breast and that extends much further than to the special grounds of sympathy and complaisance. I believe that I can bring all this together if I say that it is the feeling of the beauty and the dignity of human nature" (GSE, AA 02: 217; Kant, 2007, p. 31). Kant attends to the relationship between moral sentimentalism and the Wolffian rational ethics in his Inquiry Concerning the Distinctness of the Principles of Natural Theology and Morality. He comes to believe that the Wolffian principle of perfection, which is contained in two formal grounds of all obligation to act -"Perform the most perfect action in your power!" and "Abstain from doing that which will hinder the realisation of the greatest possible perfection!" (UD, AA 02: 299; Kant, 1992b, p. 273), is founded on the ability to represent the truth, i.e. cognition. Still, "no specifically determinate obligation flows from these two rules of the good, unless they are combined with indemonstrable material principles of practical cognition" (ibid.). These principles appear by virtue of the feeling of the good described by Hutcheson, Shaftesbury, and Kant himself (see the above quotation). For Kant, moral feeling is fundamental since the notion of the good "arises from simpler feelings of the good" (ibid.). Nevertheless, he notes that it has yet to be determined "whether it is merely the faculty of cognition, or whether it is feeling (the first inner ground of the faculty of desire), which decides [morality's] first principles" (UD, AA 02: 300; Kant, 1992b, p. 274-275). As we know, Kant will dedicate many years to the search for the answer. And the one he finds will expedite the Copernican turn in practical philosophy and become the major argument against all heteronomous concepts of morality, including moral sentimentalism and Wolffian ethics. During the critical period Kant analysed the nature of moral feeling in the Groundwork of the Metaphysics of Morals, the Critique of Practical Reason, and The Metaphysics of Morals. In the first of these works he argues that moral feeling denotes the interest a person has in moral laws.
But "[this feeling] must rather be regarded as the subjective effect that the law exercises on the will, to which reason alone delivers the objective grounds" (GMS, АА 04: 460; Kant, 1997, p. 64). Kant explores this thought further: "[…] it is not because the [moral] law interests us that it has validity for us (for that is heteronomy and dependence of practical reason upon sensibility, namely upon a feeling lying at its basis, in which case it could never be morally lawgiving); instead, the law interests because it is valid for us as human beings, since it arose from our will as intelligence and so from our proper self" (GMS, АА 04: 460-461; Kant, 1997, p. 64). In the Critique of Practical Reason, Kant writes that moral feeling must be preceded by "[t]he concept of morality and duty" (KpV, AA 04: 38; Kant, 2015, p. 35). In The Metaphysics of Morals, he defines this feeling as "the susceptibility to feel pleasure or displeasure merely from being aware that our actions are consistent with or contrary to the law of duty" (MS, AA 06: 399; Kant, 1991, p. 201) and restates that it simply proceeds from the representation of moral law. If a feeling is the principal motive of an action, this action is pathological according to Kant. This feeling is rooted in the desire for personal happiness and leads to heteronomy of the will. Therefore, for the mature Kant, the statement that we have a special ability to determine what is morally good is false: "We have, rather, a susceptibility on the part of free choice to be moved by pure practical reason (and its law), and this is what we call moral feeling" (MS, AA 06: 400; Kant, 1991, p. 202). The Critique of Practical Reason also dwells on the Wolffian interpretation of perfection as the objective internal practical and material ground for the determination of the will in the principle of morality. While defining perfection in practical terms as "the fitness or adequacy of a thing for all sorts of ends" (KpV, AA 04: 41; Kant, 2015, p. 36), Kant emphasises that "ends must first be given to us, in relation to which alone the concept of perfection […] can be the determining ground of the will" (ibid.). Yet in this case the end will precede the determination of the will and become its empirical matter. Just like moral feeling, the end will be inseparable from the principle of personal happiness. Consequently, Wolffian ethics is heteronomous. 4 Conclusion The style of the Master's thesis On Fire, the geometrical method employed in that work and the absence of original experiments in it show clearly that the young Kant strongly relies on Wolffian rationalism. This he cannot yet overcome, notwithstanding his declarations of the significance of experiments for cognising nature. The ethical principle, to which the young Kant adhered when working on his dissertation, was the search for perfection. All the above was distinctive of research conducted at Prussian universities in the mid-eighteenth century. In later pre-critical texts Kant was increasingly embracing empiricism. His ethical position was now compatible with that of the Scottish school of moral philosophy. He proclaimed Wolffian ethics fit for cognition or, more precisely, the elucidation of the notion of the good arising from the feeling of the good, rather than for the regulation of behaviour. In the critical period, Kant abandoned empiricism in ethics, re-adopted ethical rationalism, and created an original practical philosophy, at the heart of which was the autonomous good will. 
From this perspective, he criticises both ethical sentimentalism and the Wolffian principle of perfection, which is unacceptable for the mature Kant because of its heteronomy, its association with eudemonism and, finally, empiricism. Therefore, the ethical position that Kant adopted in On Fire is different from that which he would take in the practical philosophy of criticism. The tendency towards synthesising empiricism and rationalism in cognising nature, one that was already apparent in Kant's first dissertation, is very much in line with his intentions of the critical period.
4,786.6
2023-01-01T00:00:00.000
[ "Philosophy" ]
The immunometabolite S-2-hydroxyglutarate exacerbates perioperative ischemic brain injury and cognitive dysfunction by enhancing CD8+ T lymphocyte-mediated neurotoxicity Background Metabolic dysregulation and disruption of immune homeostasis have been widely associated with perioperative complications including perioperative ischemic stroke. Although immunometabolite S-2-hydroxyglutarate (S-2HG) is an emerging regulator of immune cells and thus triggers the immune response, it is unclear whether and how S-2HG elicits perioperative ischemic brain injury and exacerbates post-stroke cognitive dysfunction. Methods Perioperative ischemic stroke was induced by transient middle cerebral artery occlusion for 60 min in C57BL/6 mice 1 day after ileocecal resection. CD8+ T lymphocyte activation and invasion of the cerebrovascular compartment were measured using flow cytometry. Untargeted metabolomic profiling was performed to detect metabolic changes in sorted CD8+ T lymphocytes after ischemia. CD8+ T lymphocytes were transfected with lentivirus ex vivo to mobilize cell proliferation and differentiation before being transferred into recombination activating gene 1 (Rag1−/−) stroke mice. Results The perioperative stroke mice exhibit more severe cerebral ischemic injury and neurological dysfunction than the stroke-only mice. CD8+ T lymphocyte invasion of brain parenchyma and neurotoxicity augment cerebral ischemic injury in the perioperative stroke mice. CD8+ T lymphocyte depletion reverses exacerbated immune-mediated cerebral ischemic brain injury in perioperative stroke mice. Perioperative ischemic stroke triggers aberrant metabolic alterations in peripheral CD8+ T cells, in which S-2HG is more abundant. S-2HG alters CD8+ T lymphocyte proliferation and differentiation ex vivo and modulates the immune-mediated ischemic brain injury and post-stroke cognitive dysfunction by enhancing CD8+ T lymphocyte-mediated neurotoxicity. Conclusion Our study establishes that S-2HG signaling-mediated activation and neurotoxicity of CD8+ T lymphocytes might exacerbate perioperative ischemic brain injury and may represent a promising immunotherapy target in perioperative ischemic stroke. Supplementary Information The online version contains supplementary material available at 10.1186/s12974-022-02537-4. Introduction Perioperative ischemic stroke is one of the most catastrophic complications of surgery, which has strong public health implications [1]. With difficulties in prompt diagnosis, a narrow therapeutic time window, malignant brain edema, and high risks of lethal hemorrhagic transformation, less than 5% perioperative ischemic stroke patients benefit from tPA-mediated thrombolysis [2]. Post-stroke cognitive dysfunction is a common consequence of stroke, resulting in reduced quality of life. Despite medical and technological advances, the incidence of perioperative ischemic stroke and post-stroke cognitive dysfunction has not yet decreased. Therefore, an in-depth investigation of the molecular mechanism involved in perioperative ischemic stroke may be particularly important to the identification of diagnostic and therapeutic targets. Disruption of immune homeostasis plays a key role in exacerbation of cerebral ischemic stroke. Previous studies have mainly focused on neutrophils, and monocytes in ischemic brain injury [3]. However, the adaptive immune cells-cytotoxic CD8 + T lymphocytes are gaining increasing attention in recent years [4]. 
In response to antigen stimulation or hypoxia, quiescent CD8 + T lymphocytes convert to multiple T cell subsets including effector and memory cells [5], but in vivo activation and differentiation in perioperative ischemic stroke remain largely elusive. Moreover, although deleterious effects of proinflammatory cytokines of CD8 + T lymphocytes are well characterized, direct neurotoxic effects of brain-infiltrating CD8 + T lymphocytes in perioperative ischemic stroke are essentially unknown [6]. The essential roles of immunometabolism in modulating cell fate have gradually begun to be unraveled [7,8], but context-dependent metabolic effects in vivo remain unclear. Cellular metabolism alters the proliferative effector state of CD8 + T lymphocytes [9]. A recent study indicates that hypoxia and mitochondrial deficits may result in S-2HG accumulation, which can regulate the differentiation of CD8 + T lymphocytes via altering histone and DNA demethylation and thus trigger adaptive immune responses [10]. Given perioperative risk factors such as surgical insults, traumatic injuries, anesthesia, or hypoxia, there is an urgent need to explore whether and how S-2HG modulates CD8 + T lymphocytes and elicit secondary brain damage in perioperative ischemic stroke. In the present study, we demonstrated that the perioperative stroke mice exhibited exacerbated cerebral ischemic injury and neurological dysfunction more than the stroke-only mice. CD8 + T lymphocyte invasion of brain parenchyma and neurotoxicity augment cerebral ischemic injury in the perioperative stroke mice. Perioperative ischemic stroke triggered aberrant metabolic alteration of S-2HG in peripheral CD8 + T cells. S-2HG-mediated activation and neurotoxicity of CD8 + T lymphocytes might present a novel mechanism and therapeutic target for perioperative ischemic stroke. Experimental animals Male C57BL/6 WT mice were obtained from the Chinese PLA General Hospital Laboratory Animal Center and male Rag1 −/− mice (B6/JGpt-Rag1 em1Cd /Gpt) were purchased from GemPharmatech Co., Ltd. All mice used in the study were bred and housed in specific pathogenfree conditions. All animal experiments were undertaken in accordance with the National Institute of Health Guide for Care and Use of Laboratory Animals, with the approval of the Ethics Committee for Animal Experimentation of the Chinese PLA General Hospital. All animals were used at 6-8 weeks of age and randomly assigned for all experiments. Mouse model of perioperative ischemic stroke Ileocecal resection (ICR) and tMCAO surgery were performed to establish the perioperative ischemic stroke model. Animals were anesthetized with inhalational anesthesia using sevoflurane delivered through a nose cone mask. During ICR surgery, ileocecal artery was exposed and ligated with 5-0 silk suture. Identify and divide the ischemic portions of ileum and colon ensuring that blood supply to the transected ends is adequate. Ileum and colon were anastomosed for digestive tract reconstruction with polypropylene 8-0 interrupted sutures. A typical anastomosis will require 14 to 16 interrupted sutures. One day after ICR, perioperative ischemic stroke was induced by transient intraluminal occlusion of the left middle cerebral artery (MCA) with silicone-coated suture (Doccol Corporation) for 60 min followed by reperfusion as described previously [11]. 
7T rodent magnetic resonance imaging scanning 7T/400 mm ultra-high field magnetic resonance system (BioSpec70/20USR, Bruker Corporation) was performed to evaluate the cerebral infarct volume in live mice. Mice were anesthetized with 2.5-3.5% isoflurane, while vitals signs were continually monitored with the monitoring system during the examinations. T2-weighted images were captured using relaxation enhancement sequences. Imaging parameters were as follows: repetition time (TR) = 3500 ms, effective echo time (TE) = 40 ms, slice thickness = 0.5 mm, field of view (FOV) = 15 × 15 mm, matrix size = 120 × 120. Infarct volumes were calculated as scanned volumes of contralateral brain tissue minus scanned volumes of the non-infarcted areas of the ipsilateral lesioned brain tissue using Image J software (NIH). Untargeted metabolomic profiling of CD8 + T lymphocytes by LC-MS/MS Isolate untouched and highly purified CD8 + T cells from stroke mouse splenocytes by using EasyStep Mouse CD8 + T Cell Isolation Kit (StemCell Technologies). The analysis process of untargeted metabolomics was divided into two parts: experimental and bioinformatics analysis. The cell metabolites were extracted using the methanol:acetonitrile:water extraction protocol and then analyzed by liquid chromatography system coupled with high-resolution mass spectrometer (Thermo Fisher Scientific, USA). The bioinformatics analysis mainly included: data preprocessing, data quality control, statistical analysis, screening for differential metabolites, and pathway enrichment analysis. Adoptive cell transfer of Rag1 −/− mice CD8 + T lymphocytes isolated from healthy mice spleen using magnetic sorting (StemCell Technologies) were plated at 5 × 10 5 per well in a 24-well plate with anti-CD3/CD28 bead-based stimulation (Miltenyi) for 48 h. After being transfected with lentivirus L2hgdh-Flag to upregulate L2hgdh expression, 2 × 10 6 CD8 + T cells were injected via femoral vein to recipient Rag1 −/− mice before the perioperative stroke model established. Behavioral tests All behavioral tests were carried out by the blinded experimenter. The modified Garcia score test [12,13] and foot fault test [14] were performed as described previously to assess sensorimotor deficits. Morris water maze (MWM) was used to investigate spatial learning, reference memory, and working memory. The threechamber paradigm test for sociability and social novelty preference was performed to assess sociability. Antibodies for western blot and immunofluorescent staining The primary antibodies used for western blot are as follows: rabbit anti-HIF-1α (1:1000, Cell Signaling Technology, RRID: AB_2799095), and rabbit anti-LDHA Assessment of infarct volume To evaluate the extent of infarct area, brains were sectioned and incubated with 2.0% 2,3,5-triphenyltetrazolium chloride staining (TTC, Sigma-Aldrich) as previously described [15]. Microtubule-associated protein 2 (MAP2) staining was used to delineate infarct area with rabbit anti-MAP2 antibodies (1:200, Cell Signaling Technology). The stained sections were scanned, and infarct areas were measured using Image J software (NIH). Relative infarct volumes with correction for cerebral edema were assessed based on the following equation: (volumes of contralateral brain tissue minus volumes of the non-infarcted areas of the ipsilateral lesioned brain tissue). 
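The edema-corrected infarct volume defined above is a simple volumetric subtraction over the measured sections. As a minimal illustration (Python, with made-up per-slice areas; this is a sketch, not the authors' ImageJ workflow):

```python
import numpy as np

def corrected_infarct_volume(contra_areas_mm2, ipsi_noninfarct_areas_mm2,
                             slice_thickness_mm=1.0):
    """Edema-corrected infarct volume (mm^3): volume of the contralateral hemisphere
    minus volume of the non-infarcted areas of the ipsilateral (lesioned) hemisphere."""
    v_contra = np.sum(contra_areas_mm2) * slice_thickness_mm
    v_ipsi_spared = np.sum(ipsi_noninfarct_areas_mm2) * slice_thickness_mm
    return v_contra - v_ipsi_spared

# Made-up per-slice areas (mm^2) for 6 consecutive 1 mm coronal sections.
contra  = [42.1, 45.3, 46.8, 46.0, 44.2, 40.9]
ipsi_ok = [38.5, 36.2, 33.9, 34.7, 37.8, 39.6]
print(f"{corrected_infarct_volume(contra, ipsi_ok):.1f} mm^3")
```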
Evans blue administration and quantification Mice were injected intravenously with 4 ml/kg of 2% Evans Blue dye (Sigma-Aldrich), followed by a 3-h circulation in vivo before euthanasia. The brain tissue was minced and sonicated in N,N-dimethylformamide (10 ml/kg, Sigma-Aldrich), incubated for 24 h at 60 °C, and centrifuged at 3000 rpm for 10 min [16]. The supernatants were obtained and analyzed at 620 nm using spectrophotometer (DU-640 spectrophotometer, Beckman Coulter). RT-PCR and quantitative PCR analysis Total RNA of occludin was extracted and purified using TRIzol reagent (Invitrogen). 1 μg of total RNA was reversed transcribed into complementary DNA using the Superscript III system (Invitrogen). Quantitative real-time PCR was carried out with 7500 Real Time PCR system (Applied Biosystems), and the relative expression was calculated using the 2 ΔΔCt method. HE staining, Nissl staining, Tunel staining and NeuN staining HE staining, Nissl staining, Tunel staining and NeuN staining were performed to assess the ischemia-induced neuronal damage in the penumbra 7 days after stroke. The brain tissue was embedded in paraffin and coronally sectioned to a thickness of 4 μm. HE + , Nissl + , Tunel + or NeuN + cells were counted in 5 different areas of the ischemic penumbra by the blinded investigators. Statistical analyses All results were analyzed by an investigator who was blinded to the study protocol. All values are presented as mean ± standard error of the mean (SEM) for continuous numerical variables. Experimental group sizes were predetermined based on previous similar studies (power 0.8, α 0.05). Student's t-test was performed to compare the difference between two groups. When comparing multiple groups, one-way ANOVA was applied in the statistical analysis, with post hoc Bonferroni correction for multiple comparisons. Comparisons of different groups in each time point were carried out using two-way ANOVA plus post hoc Bonferroni test. We performed statistical analyses using GraphPad Prism 9 (GraphPad Software). All statistical tests were two-tailed and P values < 0.05 were deemed statistically significant. Results The perioperative stroke mice exhibit more severe ischemia-induced cerebral ischemic injury than the stroke-only mice To investigate how perioperative ischemic stroke influences cerebral injury, we established the perioperative stroke model and quantified the brain injury on infarct size by TTC staining and MAP2 staining (Fig. 1A). Compared with stroke-only mice (tMCAO group), the perioperative stroke mice (ICR + tMCAO group) exhibited significantly increased infarct volume by TTC staining (t (18) ischemia) and by MAP2 staining (t (8) = 2.925, P = 0.0191, Fig. 1C) (7 days following ischemia). Meanwhile, we observed that the edema and infarct territories were highlighted using a T2-weighted imaging sequence of 7T rodent magnetic resonance imaging scanning (Fig. 1D). Compared with stroke-only mice, the proportion of HE + (P < 0.0001), Nissl + (P < 0.0001) and NeuN + (P = 0.0097) cells in the penumbra of ischemic hemisphere were significantly diminished in mice with perioperative stroke. Meanwhile, perioperative stroke mice showed an obvious augment of Tunel + cells (P < 0.0001) in the ischemic penumbra, suggesting a worsen neuronal injury (Additional file 1: Fig. S1). We performed the blood-brain barrier (BBB) permeability experiment using Evans blue administration intravenously in vivo. 
We confirmed that the perioperative stroke mice exhibited an augment of BBB breakdown at 3 days following ischemia, as indicated by the quantification of Evans blue leakage (P < 0.0001, Fig. 1E). Next, as occludin is a critical molecule for preserving the BBB permeability, we demonstrated that the perioperative stroke mice exhibited more severe occludin loss in the ischemic penumbra when assayed at 3 days following tMCAO (P < 0.0001, Fig. 1F). Notably, the perioperative stroke mice demonstrated dramatically augmented mortality within 28 days following ischemia compared to stroke-only mice, as indicated in Kaplan-Meier (KM) survival curves (log-rank χ 2 = 16.10, P = 0.0011, Fig. 1G). Taken together, these findings suggest that ischemia-induced cerebral ischemic injury in the perioperative stroke mice is more severe compared to the stroke-only mice. The perioperative stroke mice are more vulnerable to sensorimotor, cognitive, and social dysfunction We next performed the modified Garcia score test and foot fault test to assess sensorimotor deficits 0, 1, 3, 7, 14, 21, and 28 days after ischemia onset, respectively. Sensorimotor impairments were robustly exacerbated in perioperative stroke mice compared with strokeonly mice, as proved by neurobehavioral changes on the modified Garcia score test for total neurological assessments (P < 0.0001) including gross motor, body proprioception, and motor coordination ( Fig. 2A and Additional file 1: Fig. S2). Moreover, we observed an apparent difference in both forelimb foot fault errors (P = 0.0006) and total foot fault errors (P = 0.0003) between perioperative stroke mice and stroke-only mice, indicating a more severe fine motor impairment in perioperative stroke mice (Fig. 2B, C). We then used Morris water maze to investigate whether perioperative ischemic stroke would affect spatial learning, reference memory, and working memory. In both perioperative stroke mice and stroke-only mice, the escape latency prolonged significantly compared to sham mice from day 20 to day 24 post-ischemia, suggesting that the spatial learning ability declined after stroke. However, perioperative stroke mice presented with only an increasing trend with respect to spatial learning impairment without statistical significance as compared with strokeonly mice (P = 0.8251, Fig. 2E). On day 25 poststroke, no significant difference was detected in swimming velocity among groups in the probe test (F (3,39) = 0.4777, P = 0.6997), suggesting that the performance on MWM test was not compromised by differences in locomotor deficits (Fig. 2F). Interestingly, the mice with perioperative stroke had significant deficits in the ability to remember the exact platform location during the probe test and spent less time searching in the target quadrant than stroke-only mice (F (3, 39) = 6.727, P = 0.0009), indicating a worse spatial reference memory (Fig. 2D, G). On days 26 to 28 post-ischemia, we performed spatial working memory test by randomly changing the hidden platform and initial entry quadrant of the mice tested. More severe impairment in spatial working memory in perioperative stroke mice was verified (P = 0.0362), as indicated by a relatively higher escape latency compared with strokeonly mice (Fig. 2D, H). Since previous animal and human studies had reported that the stroke or a spectrum of neurodegenerative diseases could lead to high risks of social dysfunction [17], we hypothesized that perioperative ischemic stroke could affect social interaction behaviors. 
The three-chamber paradigm test for sociability and social novelty preference has been successfully employed to assess sociability in mice [18,19]. Fig. 2 The perioperative stroke mice develop profound neurological deficits such as sensorimotor, cognitive, and social dysfunction. A-C Sensorimotor function was assessed using the Garcia score test (A) or the foot fault test (B, C) until 28 days after tMCAO (n = 10-15/group). D Representative trajectories of mice showing spatial reference memory or spatial working memory. E Spatial learning in the navigation test of the MWM on days 20, 21, 22, 23, and 24 after stroke. Average escape latency is shown for the training sessions. F Quantification of swimming speed during the probe test on day 25 after stroke. G Time spent in the target quadrant during the probe test (n = 10-11/group). H Quantification of latency to the platform during spatial working memory testing on days 26, 27, and 28 after stroke (n = 10-11/group). I Representative trajectories of mice showing sociability and social novelty preference. J During the sociability test, the time spent interacting with a stranger mouse or with an empty cup was recorded on day 30 after stroke (n = 10-11/group). K During the social novelty preference test, the time spent interacting with a newly introduced stranger versus an initial stranger was recorded on day 30 after stroke (n = 10-11/group). *P < 0.05, **P < 0.01, ns indicates nonsignificant. In the sociability test, the perioperative stroke mice exhibited no preference between the stranger mouse (mouse 1) and the empty cup at 30 days following ischemia, indicating impaired sociability (t(18) = 0.1570, P = 0.8770, Fig. 2I, J). In the social novelty preference test, the perioperative stroke mice showed no preference between a newly encountered stranger (mouse 2) and the initial stranger (mouse 1) at 30 days post-stroke, indicating impaired social novelty preference (t(18) = 0.6937, P = 0.4967, Fig. 2I, K). These findings indicate that perioperative stroke led to sensorimotor, cognitive, and social dysfunction, suggesting that the related brain regions, or the structural and functional connectivity between brain areas, may be disrupted owing to the larger infarct volume. CD8+ T lymphocyte invasion of brain parenchyma and direct neurotoxicity are augmented in perioperative stroke mice Immune cells are recognized as key players in the exacerbation of ischemic brain injury. Considering that profound immune responses can be rapidly activated during the perioperative period or ischemic stroke, we next clarified which cell subsets mediated the exacerbation of cerebral ischemic injury in perioperative stroke mice and focused on the direct cytotoxic effects of the brain-infiltrating immune cells. To this end, we assessed the major subsets of immune cells at 7 days after stroke by flow cytometry (FCM) analysis. Compared to Sham mice, ICR mice showed an obvious increase in CD44hi CD62Llo CD8+ T lymphocytes in blood (P = 0.0365) and spleen (P = 0.0401), demonstrating an active functional status of peripheral CD8+ T lymphocytes. Compared to stroke-only mice, the percentage of activated CD8+ T lymphocytes in blood (P = 0.0131) and spleen (P = 0.0223) increased significantly in perioperative stroke mice, indicating an increased peripheral activation state of CD8+ T lymphocytes in the perioperative stroke model (Fig. 3A, B). 
Notably, greatly increased brain-invading CD8 + T lymphocytes (F (3,24) = 6.398, P < 0.0001) were detected in perioperative stroke mice by the flow analysis of ischemic hemisphere, suggesting that the perioperative stroke led to CD8 + T lymphocytes recruitment to brain parenchyma, which could potentially enhance the neurotoxic effect (Fig. 3C). Meanwhile, perioperative stroke mice showed an obvious augment of CD44 hi CD62L lo CD8 + T lymphocytes in the ischemic hemisphere, demonstrating an active functional status of brain-infiltrating CD8 + T lymphocytes (Fig. 3D, E). Moreover, brain-infiltrating CD8 + T lymphocytes of perioperative stroke mice responded with more Perforin (t (8) = 2.481, P = 0.0380) and Granzyme B (t (8) = 7.065, P < 0.0001), which can exert cytotoxicity to neurons [20] (Fig. 3F, G). Cerebral mRNA expressions of Perforin (t (8) = 3.553, P = 0.0075) and Granzyme B (t (8) = 2.680, P = 0.0279) in perioperative stroke mice were obviously higher than stroke-only mice 7 days after stroke onset, whereas the mRNA level of IFN-γ (t (8) = 0.9686, P = 0.3611) and TNF-α (t (8) = 1.208, P = 0.2617) did not reflect abnormal variability (Fig. 3H). These suggested the direct neurotoxicity of CD8 + T lymphocyte plays a more important role in perioperative ischemic brain injury than humoral pathways (IFN-γ or TNF-α). In contrast, the flow cytometry analysis of brain-infiltrating CD4 + T lymphocytes (t (9) = 0.6418, P = 0.5370), B cell (t (9) = 1.332, P = 0.2157) and neutrophil (t (9) = 1.394, P = 0.1969) did not reveal a significant difference between perioperative stroke mice and stroke-only mice, respectively (Additional file 1: Fig. S3A-F). In addition, we further performed immunofluorescence staining to evaluate the invasion of CD8 + T lymphocytes or CD4 + T lymphocytes in the ischemic brain 7 days after stroke onset. The enrichment of brain-invading CD8 + T cells (F (3,16) = 61.43, P < 0.0001, vs. tMCAO group) in the ischemic hemisphere was more evident in perioperative stroke mice (Additional file 1: Fig. S4A, B). However, the perioperative stroke mice did not show an obvious enhancement in recruiting CD4 + T lymphocytes (F (3,16) = 15.43, P = 0.9973, vs. tMCAO group) to the ischemic hemisphere (Additional file 1: Fig. S4A, C). We then explored the activation effects of astrocytes and microglia in ischemic penumbra, respectively. Differences between perioperative stroke and stroke-only mice in astrocytic (P = 0.0697) and microglial (P = 0.0684) activation were statistically insignificant (Additional file 1: Fig. S4A, D). Collectively, these results suggest that the invasion of brain parenchyma and direct neurotoxicity of CD8 + T lymphocytes may play a critical role in immune-mediated cerebral ischemic injury and contribute to the exacerbation of ischemic brain injury in perioperative stroke mice. Neuroprotection of CD8 + T lymphocyte depletion in perioperative stroke mice Having determined that the active status of CD8 + T lymphocytes increased both peripherally and centrally in perioperative stroke mice, we next sought to elucidate the direct effect of CD8 + T lymphocytes on brain infarct size and neurobehavioral deficits after tMCAO in perioperative stroke mice by in vivo depletion of CD8 + T cell populations using anti-CD8 monoclonal antibodies (mAb) (Fig. 4A). Flow cytometry analysis confirmed that splenic CD8 + T lymphocytes were depleted in anti-CD8mAb-treated mice, whereas splenic CD8 + T cell populations were not diminished by the isotype IgG treatment (Fig. 4B). 
Three days after stroke, administration of anti-CD8α mAb led to reduced brain infarct size in perioperative stroke mice (P < 0.0001) and stroke-only mice (P = 0.0262) compared with isotype IgG-treated mice (Fig. 4C, D). Interestingly, we further compared brain infarct size between perioperative stroke mice and stroke-only mice both treated with anti-CD8α mAb and observed no measurable difference between these two groups (P = 0.8075), indicating a therapeutic potential for exacerbated cerebral ischemic brain damage (Fig. 4D). Moreover, compared with isotype IgG-treated mice, administration of anti-CD8α mAb led to reduced Tunel+ cells (P < 0.0001) and increased NeuN+ cells (P < 0.0001) in the ischemic penumbra of perioperative stroke mice (Fig. 4E-G). By 7 days and 14 days after ischemia, neurobehavioral dysfunction was significantly ameliorated in perioperative stroke mice by depletion of CD8+ T lymphocytes as compared to isotype IgG-treated mice (P < 0.0001, Fig. 4H). Meanwhile, there was a trend toward improved survival for anti-CD8α-treated mice (Fig. 4I). Fig. 3 The activation and brain invasion of CD8+ T lymphocytes exacerbate ischemic brain injury in perioperative stroke mice. A, B Flow cytometry analysis of the CD44hi CD62Llo percentage among CD8+ T lymphocytes in blood (A) and spleen (B) 7 days after stroke (n = 6-7/group). C Representative dot plots and absolute numbers of brain-invading CD8+ T cells 7 days after stroke (n = 7/group). D-G CD44 (D), CD62L (E), Perforin (F), and Granzyme B (G) mean fluorescence intensity (MFI, arbitrary units) of brain-infiltrating CD8+ T cells 7 days after stroke (n = 5/group). H mRNA levels of Perforin, Granzyme B, IFN-γ, and TNF-α as measured by RT-PCR in ischemic and non-ischemic hemispheres (n = 5/group). The sham mice were defined as mice that underwent laparotomy without ICR. *P < 0.05, **P < 0.01, ns indicates nonsignificant. Taken together, these results demonstrate that CD8+ T lymphocyte depletion reverses exacerbated immune-mediated cerebral ischemic brain injury and is crucial for the reduction of brain infarct volume, the protection of neurons, and the remission of neurobehavioral deficits in perioperative stroke mice. Immunometabolite S-2HG upregulation in CD8+ T cells after perioperative ischemic stroke Metabolic dysregulation has been widely associated with immune cell production and differentiation [21][22][23]. Unlike non-perioperative stroke, perioperative stroke has a distinct characteristic: surgical intervention [24]. Given that surgical insults, traumatic injuries, or inflammatory stress events exist during the perioperative period and are important triggers of aberrant metabolic alterations [25], we hypothesized that metabolic responses are essential for CD8+ T cells to cope with the demands of cell growth and multiple rounds of division. To test this, we established the perioperative stroke model and performed untargeted metabolomics on sorted splenic CD8+ T lymphocytes 3 days after tMCAO. Consistently, partial least-squares discriminant analysis (PLS-DA) (data not shown) and unsupervised hierarchical clustering indicated that the metabolomic profile of perioperative stroke mice was significantly different from that of stroke-only mice (Fig. 5A). Among a total of 1811 metabolites, 25 upregulated metabolites and 40 downregulated metabolites were detected in perioperative stroke mice. 
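Such a screen is conceptually simple: metabolites are flagged by the fold change of group means and an adjusted P value, then ranked for the volcano plot. The sketch below is a minimal Python illustration with hypothetical column names and made-up numbers (except that S-2HG is shown as upregulated, as reported); it is not the authors' bioinformatics pipeline. The fold-change cutoff of > 2.0 or < 0.5 is the one quoted later in the Fig. 5 legend.

```python
import numpy as np
import pandas as pd

def screen_metabolites(df, fc_hi=2.0, fc_lo=0.5, alpha=0.05):
    """Flag differential metabolites from group-mean intensities.
    Expects (hypothetical) columns: metabolite, mean_icr_tmcao, mean_tmcao, p_adj."""
    out = df.copy()
    out["fold_change"] = out["mean_icr_tmcao"] / out["mean_tmcao"]
    out["log2_fc"] = np.log2(out["fold_change"])          # x-axis of the volcano plot
    out["neg_log10_padj"] = -np.log10(out["p_adj"])       # y-axis of the volcano plot
    sig = out["p_adj"] < alpha
    out["direction"] = np.select(
        [sig & (out["fold_change"] > fc_hi), sig & (out["fold_change"] < fc_lo)],
        ["up", "down"], default="ns")
    return out.sort_values("neg_log10_padj", ascending=False)

# Toy input: S-2HG as an upregulated hit, plus a hypothetical downregulated metabolite.
toy = pd.DataFrame({
    "metabolite":     ["S-2-hydroxyglutarate", "metabolite_X"],
    "mean_icr_tmcao": [8.4, 1.1],
    "mean_tmcao":     [2.9, 3.6],
    "p_adj":          [0.004, 0.012]})
print(screen_metabolites(toy)[["metabolite", "log2_fc", "neg_log10_padj", "direction"]])
```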
The volcano plot further highlighted the top 5 upregulated and the top 5 downregulated differential metabolites between the perioperative stroke and stroke-only mice, including lignocaine, S-2-hydroxyglutarate, citrate, N-acetylserotonin, fexofenadine, inosine, riboflavin, S-malate, oxaloacetate, and L-proline (Fig. 5B). KEGG pathway analysis with hierarchical clustering revealed that the differential metabolites in perioperative stroke mice were mainly enriched in early citrate cycle (TCA cycle) metabolism (Fig. 5C). Notably, S-2HG was more abundant in CD8+ T cells of perioperative stroke mice than in stroke-only mice. Moreover, we confirmed by quantitative liquid chromatography-mass spectrometry (LC-MS) that S-2HG in CD8+ T cells significantly increased in ICR mice (P = 0.0233, vs. Sham group) and in perioperative stroke mice (P = 0.0263, vs. tMCAO group), indicating a strong association between the accumulation of S-2HG and the activation of CD8+ T cells (Fig. 5D). Previous studies [9] suggest that hypoxia-inducible factor 1α (HIF-1α) protein accumulates in the context of hypoxia or stress, induces the overexpression of lactate dehydrogenase A (LDHA), and thereby elevates the production of S-2HG (Fig. 5E). We next examined the expression of HIF-1α and LDHA in CD8+ T lymphocytes isolated 3 days after tMCAO using western blot. HIF-1α (F(3,8) = 45.59, P < 0.0001) and LDHA (F(3,8) = 33.37, P < 0.0001), known to be important for S-2HG production and accumulation, were strikingly increased in perioperative stroke mice compared with stroke-only mice (Fig. 5F, G). Collectively, we conclude that CD8+ T cells undergo profound metabolic alterations in perioperative stroke mice, with an aberrant accumulation of the immunometabolite S-2HG. S-2HG alters CD8+ T lymphocyte proliferation and differentiation Next, we tested whether and how S-2HG directly alters CD8+ T lymphocyte proliferation and differentiation. Homeostatic proliferation and differentiation were evaluated by EdU incorporation assay and flow cytometric analysis, respectively (Fig. 6A). We first separated CD8+ T lymphocytes from the spleens of healthy C57BL/6 mice using magnetic sorting and then cultured them with cell-permeable S-2HG for 7 consecutive days to characterize the responses to the treatments. In the 200 μM S-2HG-treated cells, we observed an obvious increase in the percentages of EdU+ cells and effector CD8+ T lymphocytes compared with control cells treated with PBS, suggesting that exogenous S-2HG promoted homeostatic proliferation (F(2, 57) = 21.660, P < 0.0001) and differentiation (F(2, 15) = 9.992, P = 0.0017) of CD8+ T lymphocytes (Fig. 6B, E). However, unexpectedly, 500 μM S-2HG treatment inhibited homeostatic proliferation (P = 0.0014, vs. vehicle group) compared with control cells treated with PBS and produced only a decreasing trend in effector CD8+ T lymphocytes without statistical significance (P = 0.4406, vs. vehicle group) (Fig. 6B, E). Fig. 4 CD8+ T lymphocyte depletion is crucial for the reduction of ischemic brain injury in perioperative stroke mice. A Flowchart illustrating the experimental design. Three days before tMCAO, C57BL/6 mice were injected intraperitoneally with 100 μg InVivoMAb anti-mouse CD8α antibody or 100 μg rat IgG2b (isotype antibody). B Representative dot plots of flow cytometry of CD8+ T cells in the spleen after CD8+ T lymphocyte depletion. 
C, D Representative images and quantification of TTC staining in 6 consecutive coronal sections (1 mm apart) 3 days after stroke (n = 6-7/group). E Representative images of brain tissue with Tunel staining or NeuN staining in the ischemic penumbra 7 days after stroke. Red arrows signify injured neurons. Scale bar = 20 μm. Next, we sought to explore the role of endogenous S-2HG in CD8+ T lymphocyte function. L2hgdh is a crucial rate-limiting enzyme that exerts rapid oxidative degradation of S-2HG [26]. Compared with the empty vector and PBS treatments, transfection of CD8+ T lymphocytes with a lentivirus overexpressing L2hgdh (denoted L2hgdh-Flag) led to downregulation of EdU+ cells and effector CD8+ T lymphocytes, suggesting that depletion of endogenously produced S-2HG decreased proliferation (F(2, 57) = 13.430, P < 0.0001) and triggered phenotypic marker conversion (F(2, 12) = 6.364, P = 0.0131) (Fig. 6C, F). In contrast, knockdown of L2hgdh with lentiviral shRNA (denoted shL2hgdh) augmented proliferation (F(2, 57) = 11.990, P < 0.0001) and differentiation (F(2, 12) = 8.514, P = 0.0050) of CD8+ T lymphocytes compared with control shRNA (shScramble) and PBS (Fig. 6D, G). These results suggest that exogenous or endogenous S-2HG regulates the proliferation and phenotypic marker transformation of CD8+ T lymphocytes ex vivo. Discussion Perioperative ischemic stroke is an under-appreciated and catastrophic neurological complication of surgery with high mortality and disability [27]. Nevertheless, the mechanisms by which perioperative stroke occurs remain largely uncertain. In the present study, we focused on the combined effects of perioperative risk factors such as surgical insults, traumatic injuries, anesthesia, and hypoxia and demonstrated that the immunometabolite S-2-hydroxyglutarate exacerbated perioperative ischemic brain injury and post-stroke cognitive dysfunction, with structural and functional brain changes, by enhancing CD8+ T lymphocyte-mediated neurotoxicity. The potential clinical implication of our findings is that S-2HG and CD8+ T lymphocytes could become promising immunomodulatory targets for perioperative ischemic stroke therapy. The blood-brain barrier protects the brain from most endogenous and exogenous risk factors. However, neuroinflammation in CNS disorders can potentially disrupt BBB structural integrity and increase BBB permeability, while even peripheral systemic inflammation can trigger BBB damage [28]. During an ischemic stroke event, substantial inflammatory factors or cytokines released from necrotic neurons enter the peripheral circulation through the pathological BBB, which may affect the activation of peripheral immune cells that do not enter the brain parenchyma under physiological conditions [29]. 
Fig. 5 Accumulation of the immunometabolite S-2HG in CD8+ T lymphocytes after perioperative ischemic stroke. A Heat map of all differentially expressed intracellular metabolites. The differentially expressed metabolites were confirmed with a fold-change distribution of ICR + tMCAO/tMCAO > 2.0 or < 0.5. B Volcano plot indicating the metabolites of differential accumulation [log2(fold change) on the x-axis] and significant change [−log10 P adj on the y-axis] in the ICR + tMCAO and tMCAO groups. C KEGG analysis of enriched biological processes with differentially expressed metabolites. D S-2HG levels in splenic CD8+ T cells sorted from perioperative stroke mice or stroke-only mice 3 days after tMCAO (n = 3/group). E Schematic showing the physiological and pathophysiological mechanisms of S-2HG production, listing representative enzymes and metabolic molecules. F, G Representative western blot images and quantification of HIF-1α or LDHA 3 days after stroke in sorted splenic CD8+ T cells from perioperative stroke mice or stroke-only mice (n = 3/group). The sham mice were defined as mice that underwent laparotomy without ICR. *P < 0.05, **P < 0.01. Moreover, migration and adhesion of activated peripheral immune cells to the disrupted BBB and brain parenchyma will further exacerbate BBB damage and ischemic brain injury [30]. In our study, we demonstrated that degradation of the tight junction protein occludin and BBB disruption were progressively worse, together with a potentially augmented inflammatory response, in perioperative stroke mice. We reasoned that the dual origin of inflammatory factors, from both surgical intervention and necrotic neuronal death, induced the exacerbated BBB disruption in perioperative stroke mice. We went a step further and focused on the role of immune responses and secondary neuronal damage in perioperative ischemic brain injury. We observed that invasion of the brain parenchyma and direct neurotoxicity of CD8+ T lymphocytes may play a crucial role in ischemic brain injury and contribute to its exacerbation in perioperative stroke mice. The deleterious effects of CD8+ T lymphocytes are mediated more via direct cytotoxicity than via humoral pathways in perioperative ischemia. Furthermore, we performed CD8+ T lymphocyte depletion using antibody neutralization, which confirmed the deleterious effect of CD8+ T lymphocytes in immune-mediated cerebral ischemic brain injury in perioperative stroke mice. In response to T cell receptor (TCR) triggering, quiescent CD8+ T lymphocytes convert to memory CD8+ T cells and effector T cells [31][32][33]. Cellular metabolism controls CD8+ T cell activation and differentiation. The metabolic programs of CD8+ T lymphocyte states are important for the expression of key phenotypic markers [34]. To systematically identify metabolic factors that induce the activation and differentiation of CD8+ T lymphocytes, we performed untargeted metabolomics on sorted splenic CD8+ T lymphocytes from the stroke models and demonstrated that the differential metabolites in perioperative stroke mice were mainly enriched in early citrate cycle (TCA cycle) metabolism, which is associated with glucose utilization and oxygen consumption [35]. These results indicated that the CD8+ T lymphocytes of perioperative stroke mice underwent a massive metabolic switch to adapt to the demands of cell growth and differentiation owing to the combined effects of perioperative risk factors such as surgical insults, traumatic injuries, anesthesia, or hypoxia. S-2-Hydroxyglutarate is present in the urine of healthy individuals and is elevated in patients with an inborn metabolic disease that results from L2hgdh deficiency [36]. Emerging studies highlight that accumulation of S-2HG can modulate the differentiation of CD8+ T lymphocytes by altering histone and DNA demethylation and thus trigger the immune response [10,37]. The production of S-2HG derives from lactate dehydrogenase or malate dehydrogenase and is strongly dependent on HIF-1α signaling [38,39]. 
Furthermore, the expression of lactate dehydrogenase in a hypoxic environment is HIF-1α-dependent and makes a greater contribution to hypoxia-induced S-2HG production than malate dehydrogenase does. Consistent with these previous studies, we also found that HIF-1α and LDHA, triggered by inflammatory and hypoxic responses, were strikingly increased in perioperative stroke mice compared with stroke-only and sham mice. We observed that the accumulation of S-2HG in CD8+ T cells, in response to TCR triggering and hypoxia, significantly increased in perioperative stroke mice compared to stroke-only mice. In a normal physiological state, the production of S-2HG relies heavily on oxidative phosphorylation and involves glucose, acetyl-CoA, and α-oxoglutarate. However, in the context of hypoxia and mitochondrial deficit, glutamine is the major source of S-2HG, with lower glucose and oxygen consumption [40]. In addition, we used treatment with cell-permeable S-2HG and modulation of L2hgdh activity by lentiviral transfection to alter S-2HG levels, and demonstrated that exogenous or endogenous S-2HG regulates the proliferation and differentiation of CD8+ T lymphocytes ex vivo. Nevertheless, given that 500 μM S-2HG treatment inhibited the homeostatic proliferation of CD8+ T lymphocytes compared with control cells, we reasoned that there may be an increase in apoptosis at this dose due to drug-related toxicity. Using selective adoptive transfer of CD8+ T cells into Rag1−/− mice in vivo, we found that reduction of endogenous S-2HG alleviated ischemic brain injury and post-stroke cognitive dysfunction, with fewer brain-infiltrating CD8+ T lymphocytes. These data support our conclusion that the immunometabolite S-2HG may exacerbate perioperative ischemic brain injury and post-stroke cognitive dysfunction (Fig. 8). However, in the current study, we did not evaluate the in vivo influence of adoptively transferred CD8+ T cells transfected with lentiviral L2hgdh shRNA into Rag1−/− mice on ischemic brain injury and MWM performance, which warrants further attention. Given the neuroprotective effects of female hormones and the number of animals needed to reach statistical significance, we chose only male mice in this study. Conclusion Our study advances an important new understanding of the mechanisms of immune-mediated ischemic brain injury and post-stroke cognitive dysfunction in perioperative ischemic stroke. Invasion of the brain parenchyma and direct neurotoxicity of CD8+ T lymphocytes may play a critical role in immune-mediated ischemic brain injury. Aberrant accumulation of the immunometabolite S-2HG mediates the activation and migration of CD8+ T lymphocytes, which may present a novel mechanism and therapeutic target underlying the exacerbating effect of perioperative ischemic stroke. Fig. 8 Schematic illustrating the novel mechanism underlying S-2HG-mediated perioperative ischemic brain injury. Perioperative risk factors trigger an aberrant metabolic accumulation of S-2HG in peripheral CD8+ T cells and thus enhance the activation of CD8+ T cells. During the ischemic stroke event, migration and adhesion of activated peripheral CD8+ T cells to the disrupted BBB and brain parenchyma further exacerbate ischemic brain injury and neurological dysfunction.
8,517.6
2022-07-07T00:00:00.000
[ "Biology", "Medicine", "Psychology" ]
Magnetic Control of Flexoelectric Domains in a Nematic Fluid The formation of flexoelectric stripe patterns (flexodomains) was studied under the influence of external electric and magnetic fields in a nematic liquid crystal. The critical voltage and wavevector of flexodomains were investigated in different geometries by both experiments and simulations. It is demonstrated that, upon altering the orientation of the magnetic field with respect to the director, the critical voltage and wavenumber behave substantially differently. In the geometry of the twist Freedericksz transition, a non-monotonic behavior as a function of the magnetic field was found. Introduction Nematic liquid crystals are anisotropic fluids with uniaxial orientational order, but without discrete translational symmetry.1 They typically consist of elongated molecules that fluctuate around the local axis of symmetry described by a unit vector called the director (n). The practical importance of nematic liquid crystals originates from their controllability by external electric and magnetic fields. In display applications, an electric field is used to switch the director, which can adjust the optical properties of a device.2 The majority of display modes utilize the Freedericksz transition: an external field-induced director reorientation, where the driving torques originate from the anisotropies of the dielectric constant (ε_a) and/or the diamagnetic susceptibility (χ_a).3 The value of ε_a (χ_a) is given by the difference of the dielectric constants (diamagnetic susceptibilities) measured in an electric field (magnetic field) parallel and perpendicular to the director: ε_a = ε_∥ − ε_⊥ (χ_a = χ_∥ − χ_⊥). If ε_a > 0 (χ_a > 0), the director tends to be parallel to the applied electric (magnetic) field. Otherwise, the perpendicular configuration is more favorable. If a destabilizing field is precisely perpendicular to (or, for negative anisotropies, parallel to) the initial homogeneous director, the torque vanishes and the reorientation starts due to small fluctuations, above a well-defined threshold field. In the electric Freedericksz transition, the dielectric interaction dominates, which is described by a free energy contribution quadratic in the magnitude of the electric field. In addition, the director may be coupled linearly with the electric field via the flexoelectric interaction.4,5 Flexoelectricity means that a polarization is induced by a splay or bend deformation of the director n, and is defined by eqn (1), where e_1 and e_3 are the splay and bend flexoelectric coefficients, respectively. The usual order of magnitude for e_1 and e_3 is pC m⁻¹, though giant values (a few nC m⁻¹) were also reported6,7 for e_3 of bent-core8-14 liquid crystals. Nematics are excellent materials to study spontaneous pattern formation,15 as nonlinearities in their physical properties provide a rich source of patterns, and external electromagnetic fields can serve as control parameters. For example, applying an electric field on a planar nematic layer can induce instabilities that result in different types of periodic director deformations. In the present paper, we focus on a particular pattern, the so-called flexodomains (FDs), which represent an equilibrium director modulation caused by flexoelectricity. 
16,17 They appear as stripes parallel to the initial director n_0. The first theoretical model of FDs considered only the one-elastic-constant approximation,16 but this already gave a good qualitative explanation of the phenomenon. Recently, a detailed theoretical description of FDs was developed18,19 that also accounted for unequal elastic constants and for the dynamic behavior of FDs exposed to sinusoidal voltage excitation.18 Furthermore, it recognized the similarity between FDs and the splay-twist domains of the periodic Freedericksz transition; the latter were observed in polymeric liquid crystals with large elastic anisotropy.20 Recently, nonlinear field effects and defect dynamics were also investigated in flexodomains21,22 in a bent-core compound. Moreover, flexoelectric patterns were studied in special geometries, such as twisted nematic (TN) cells using rod-like compounds23 and recently bent-core nematic liquid crystals,24,25 where the voltage-polarity-dependent orientation of the flexoelectric stripes indicated that those domains are localized near the electrodes due to an electric field gradient. In this work, we study how an additional magnetic field affects the formation of flexodomains. In order to give a complete answer, we performed experiments and developed a theoretical description, including magnetic fields applied in different geometries. Since flexodomains appear as an electric field-induced equilibrium deformation, similar to the electric Freedericksz transition, it is a plausible idea to compare the characteristics of these two phenomena in the presence of applied magnetic fields. In the present paper, we also make this comparison using our findings on flexodomains in magnetic fields. The practical importance of flexodomains lies in the fact that they offer a method to determine the flexoelectric parameter e* = |e_1 − e_3|, which is otherwise only measurable by complicated or unreliable techniques. Classical measuring methods deduce flexoelectric parameters from the electro-optical response and require precise knowledge of the voltage applied on the liquid crystal.5 Since the director deformation originating from flexoelectricity is linearly coupled to the electric field, very low frequencies or DC voltages should be applied in order to avoid the damping of the optical response by the viscosities of the liquid crystal. Unfortunately, under such conditions, internal voltage attenuation at the aligning layers and ionic effects [26][27][28][29][30][31][32][33][34][35][36] are unavoidable, resulting in erroneous voltage data. The main advantage of using FDs for determining e* is that the flexoelectric parameter can be calculated solely from the critical wavenumber, regardless of the value of the critical voltage. Indeed, analysis of FDs using the sophisticated theoretical description18 has recently been successfully employed for measuring e* in a rod-like nematic.37 It should be noted, however, that the applicability of this method is limited; only a few compounds exhibit flexodomains, as the flexoelectric instability requires a special combination of material parameters.18 If the dielectric torque acting on the director is too large, flexoelectric pattern formation is suppressed. Thus, an important requirement is a small |ε_a|. We will show that the limits of applicability might be extended if a magnetic field is also applied. 
Experimental conditions Our experimental investigations were performed on a typical rod-like nematic liquid crystal, 4-n-octyloxyphenyl 4-n-methyloxybenzoate (1OO8). The chemical structure of 1OO8 is shown in Fig. 1. 1OO8 exhibits only a nematic mesophase below the clearing point (T_NI) of 76.7 °C. On heating, it melts from the crystalline phase to the nematic phase at 63.5 °C; the nematic phase can be supercooled down to 53 °C. Several material parameters of 1OO8 were determined as a function of temperature in a previous work.37 Here, we use the bulk elastic constants (K_11, K_22, and K_33) and the dielectric and diamagnetic susceptibility anisotropies in our calculations. Our measurements were performed at 53 °C, so we used the material parameters of 1OO8 corresponding to the same temperature in our simulations, namely: K_11 = 8.54 pN, K_22 = 3.83 pN, K_33 = 10.6 pN, ε_a = −0.48, and χ_a = 9.65 × 10⁻⁷. The compound 1OO8 was studied in a sandwich cell with ITO electrodes coated with rubbed polyimide layers for planar alignment. The electrode area was 5 mm × 5 mm. The thickness of the empty cell (d = 19.5 μm) was measured by interferometry using an Ocean Optics spectrophotometer. During the experiments, the sample was held in a custom-made heat stage that provided a constant temperature with a precision better than 0.1 °C. The heat stage was placed between the two poles of an electromagnet capable of producing a maximum homogeneous magnetic inductance of B = 1 T at the sample position. The magnetic inductance was measured using an Alphalab 100 Gaussmeter. The magnetic field lay in the plane of the liquid crystal cell due to mechanical constraints. By rotating and fixing the stage, the angle between the magnetic field and the rubbing direction could be adjusted. Our measurements were performed in three geometries where this angle was set to 0°, 45°, and 90°, henceforth denoted as the parallel (∥), the oblique, and the perpendicular (⊥) geometries, respectively (Fig. 2). DC voltage (U) was applied to the cell using the function generator output of a TiePie Handyscope HS3 device via a high-voltage amplifier. The sample was observed using a Questar QM100 long-range microscope in transmission mode with white light illumination. The electric field-induced patterns were visualized by the shadowgraph technique,41 without using any polarizer in the present case. The micrographs were recorded using a Foculus FO323B digital camera. In each geometry, for a given value of the magnetic field, voltage scans with 0.2 V steps were performed over a predefined voltage interval. After each voltage step, the DC driving was kept constant for 5 seconds before recording the image. Fig. 2 The schematics of the measurement geometries referred to as parallel, oblique, and perpendicular. The plane of the sandwich cell lies in the plane of the figure (x-y plane); the observation direction and the electric field are parallel to the z-axis. In order to understand the physics of flexoelectric pattern formation in the presence of the external magnetic field, one has to calculate the director distortions under the combination of electric and magnetic fields. 
A planar cell lled with a nematic liquid crystal is considered in a three-dimensional Cartesian coordinate system.The x-axis coincides with the rubbing direction, and the cell lies in the x-y plane.We assume strong anchoring of the director and no pretilt at the boundaries.The general director eld n ¼ n(x, y, z) is represented by the tilt angle q and the azimuthal (twist) angle f: n ¼ (cos q cos f, cos q sin f, sin q). ( Then, the initial homogeneous orientation n 0 corresponds to q ¼ f ¼ 0, and both q and f should remain zero at the boundaries, even in the distorted state. A homogeneous magnetic inductance B ¼ (B k , B t , 0) parallel to, and a homogeneous electric eld perpendicular to the cell plane are considered.Naturally, the assumption on the homogeneity of the electric eld is appropriate until the variation in the z-component of the director remains very small inside the cell, which is valid if U ( U c . Since exodomains represent an equilibrium deformation, the nal state can be calculated by minimizing the free energy.In our case, the density of free energy (f) is given by the sum of the elastic (f elast ), dielectric (f electr ), exoelectric (f exo ), and magnetic (f magn ) contributions: The Frank elastic constants K 11 , K 22 , and K 33 correspond to the splay, twist, and bend director deformations, respectively.The permittivity and permeability of vacuum are denoted by 3 0 and m 0 , respectively.For the minimization of the free energy the Euler-Lagrange formalism is used. The characteristic parameters of the exodomains, namely their threshold voltage U c and the critical wavevector q c at the onset of the exoelectric instability, can be obtained via a linear stability analysis with respect to periodic director deformations.These detailed calculations will be performed below for two special cases, the parallel and the perpendicular geometries shown in Fig. 2. The parallel geometry In the parallel geometry, the magnetic inductance is Assuming c a > 0, no magnetic Freedericksz transition is expected in this geometry; thus, the modulated director eld of exodomains emerges from a homogeneous planar basic state.The stripes of FDs are assumed to remain parallel to the rubbing direction, q c ¼ (0, q, 0).Consequently, all variables depend only on the y-and z-coordinates.The free energy is minimized by solving the system of Euler-Lagrange equations: where spatial partial derivatives are denoted in the lower indices by commas and the corresponding space coordinates. Combining eqn ( 2)-( 11) results in a complicated system of nonlinear partial differential equations that has to be further processed as follows.Near the onset of exodomains, the director distortions are small and their periodic part characterized by the wavenumber q can be separated from the zdependent amplitudes of the tilt (q 0 (z)) and twist (f 0 (z)) modulations via: f(y, z) ¼ f 0 (z)sin(qy). The director deformation prole of FDs in the middle of the cell (z ¼ 0) is shown in Fig. 3. Using the above ansatz, eqn (10) and ( 11) can be linearized with respect to the small quantities q 0 and f 0 .Aer switching to the dimensionless space variable ẑ ¼ zp/d and wavenumber q ¼ qd/p, straightforward calculations result in: Fig. 3 The director profile of flexodomains in the middle of the cell (z ¼ 0) in two views.The director is symbolized by ellipses.The electric field is parallel to the z-direction. 
where K_av = (K_11 + K_22)/2 and δK = (K_11 - K_22)/(K_11 + K_22). In this final system of ordinary differential equations, the prime and double prime denote the first and second ẑ-derivatives, respectively. In addition, scaling quantities are introduced that are formally similar to the expressions for the threshold magnetic inductances of the magnetic splay (B_s) and the magnetic twist (B_t) transitions, and to the threshold voltage U_s of the electric splay Freedericksz transition; the applied voltage corresponds to U.

The cell is symmetric with respect to its midplane; therefore it is sufficient to perform the calculations for only one half of the cell. For given values of q̂ and U, the system of eqn (14) and (15) was numerically solved for φ_0(ẑ) and θ_0(ẑ) in Matlab in the interval ẑ ∈ [-π/2, 0], which corresponds to the lower half of the planar cell. Mixed boundary conditions were used as follows: θ_0(-π/2) = φ_0(-π/2) = 0 and θ_0'(0) = φ_0'(0) = 0. For U < U_c∥(q̂) the homogeneous director field is stable, thus φ_0(ẑ) = θ_0(ẑ) = 0. The critical voltage U_c∥(q̂) is identified by the appearance of nonzero solutions for the director modulation amplitudes φ_0(ẑ) and θ_0(ẑ), showing the emergence of the pattern. Since our model is linearized, no quantitative information can be obtained about the director field above the critical voltage.

Calculating U_c∥(q̂) as a function of q̂ yields a neutral curve whose minimum corresponds to the actual critical voltage U_c∥ and wavenumber q̂_c∥. As an example, the U_c∥(q̂) vs. q̂ curve is shown for B_∥ = 0 T (solid line), 0.25 T (dotted line), 0.375 T (dashed line), and 0.5 T (dash-dotted line) in Fig. 4. The red crosses show U_c∥ and q̂_c∥ for each value of the applied parallel magnetic inductance. The calculations were performed with the material parameters of 1OO8 listed in Section 2 and e* = 6.9 pC m^-1. It can be clearly seen in Fig. 4 that both U_c∥ and q̂_c∥ increase with higher values of B_∥.

The perpendicular geometry

In the perpendicular geometry, the magnetic inductance is given by B = (0, B_⊥, 0). The main difference to the parallel case is that here the magnetic field does not stabilize the initial homogeneous planar configuration; instead it induces a twist Freedericksz transition. As a consequence, in the absence of the electric field, the director can be described solely by a ẑ-dependent twist angle ψ = ψ(ẑ), i.e., n = (cos ψ, sin ψ, 0). The determination of the director profile via minimization of the free energy is a well-known procedure; ψ(ẑ) can be obtained as the solution of a second-order nonlinear ordinary differential equation [42], eqn (17), with the boundary conditions ψ(-π/2) = 0 and ψ'(0) = 0.

Fig. 5 shows the resulting ψ(ẑ) profile inside the cell, calculated with the parameters of our particular material (1OO8) for three different values of the applied magnetic inductance. For B_⊥ < B_t, the twist angle naturally equals zero. Above the Freedericksz threshold field, ψ increases and reaches its maximum in the middle of the cell: ψ_m = ψ(0). At higher B_⊥, ψ_m approaches 90°, but in the largest part of the cell the twist angle is still significantly below 90°, even at B_⊥/B_t = 2.78, which corresponds to our maximum inductance of B_⊥ = 1 T. Note that an electric field below the onset of FDs (i.e., U < U_c⊥) does not affect the basic (homogeneous or twisted) state.
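The explicit form of eqn (17) is not reproduced in the extracted text; as an illustration, the minimal sketch below assumes the standard twist Freedericksz Euler-Lagrange form, K_22(π/d)² ψ̈ + (χ_a/μ_0) B_⊥² sin ψ cos ψ = 0 in the dimensionless coordinate ẑ, which with the Section 2 material parameters reproduces the twist threshold B_t ≈ 0.36 T quoted later in the text. Function and variable names are illustrative.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Material and cell parameters of 1OO8 (Section 2)
K22, chi_a, d = 3.83e-12, 9.65e-7, 19.5e-6   # [N], [-], [m]
mu0 = 4e-7 * np.pi

# Twist Freedericksz threshold; evaluates to about 0.36 T for these parameters
B_t = (np.pi / d) * np.sqrt(mu0 * K22 / chi_a)

def twist_profile(B_perp, n=201):
    """Solve psi'' = -(B/B_t)^2 sin(psi) cos(psi) on z_hat in [-pi/2, 0],
    with psi(-pi/2) = 0 (strong anchoring) and psi'(0) = 0 (midplane symmetry)."""
    b2 = (B_perp / B_t) ** 2

    def rhs(z, y):
        return np.vstack([y[1], -b2 * np.sin(y[0]) * np.cos(y[0])])

    def bc(ya, yb):
        return np.array([ya[0], yb[1]])

    z = np.linspace(-np.pi / 2, 0.0, n)
    guess = np.vstack([np.cos(z), -np.sin(z)])   # non-trivial starting profile
    return solve_bvp(rhs, bc, z, guess)

for B in (0.5, 0.75, 1.0):                        # example inductances above threshold
    sol = twist_profile(B)
    print(f"B = {B:4.2f} T -> psi_m = {np.degrees(sol.y[0, -1]):5.1f} deg")
```

Below the threshold only the trivial solution ψ = 0 exists; above it, the solver converges to the twisted profile whose midplane angle ψ_m grows towards 90° with increasing field, in line with Fig. 5.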
Fig. 4 The critical voltages U_c∥(q̂) of flexodomains for different wavenumbers in the case of B_∥ = 0 T (solid line), 0.25 T (dotted line), 0.375 T (dashed line), and 0.5 T (dash-dotted line). For a given magnetic inductance, the red cross shows the smallest critical voltage U_c∥ at the critical wavenumber q̂_c∥ that should actually be realized by the system.

Fig. 5 The twist angle ψ versus the cross-section coordinate ẑ of the planar cell for different perpendicularly applied magnetic inductances.

If B_⊥ < B_t, the initial director configuration is homogeneous, thus the director modulation caused by the onset of flexodomains can be described similarly to the parallel case, using eqn (2). Above the Freedericksz threshold, however, the periodic structure of the FDs emerges from a twisted director field, hence:

n = (cos θ cos(ψ + φ), cos θ sin(ψ + φ), sin θ), (18)

where θ = θ(x, y, z) and φ = φ(x, y, z) now depend on all space coordinates. The free energy minimization leads to the system of Euler-Lagrange equations (19) and (20). In eqn (19) and (20), additional terms appear compared to eqn (10) and (11) of the parallel case, due to the x-dependence of the angles θ and φ. The combination of eqn (3)-(8) and (16)-(20) leads to lengthy expressions that must be linearized next in order to have a chance to calculate the threshold parameters of the FDs.

In the perpendicular geometry we still assume that the flexoelectric instability results in unidirectional stripes, but in contrast to the parallel case, the stripes are allowed to run at an angle β with respect to the initial planar director n_0, i.e., q_c = (q sin β, q cos β, 0). Hence the following ansatz is applied to the θ and φ angles:

θ(x, y, z) = θ_0(z)cos((q cos β)y - (q sin β)x),
φ(x, y, z) = φ_0(z)sin((q cos β)y - (q sin β)x).

Switching again to dimensionless variables as in Section 3.1, straightforward calculations yield the final system of ordinary differential equations, in which an additional constant κ = K_33/K_av is introduced.

Below the twist Freedericksz threshold, the procedure to find U_c⊥ and q̂_c⊥ for different values of B_⊥ is similar to that discussed in Section 3.1, as β and ψ can be fixed to zero. If B_⊥ > B_t, however, ψ and ψ' have to be taken from the solution of eqn (17). Calculating U_c⊥(q̂, β) as a function of q̂ and β gives a surface whose minimum corresponds to the actual critical voltage U_c⊥, wavenumber q̂_c⊥, and stripe angle β_c⊥. As an example, U_c⊥(q̂, β) plotted as a function of q̂ and β for B_⊥ = 0.7 T is shown in Fig. 6. The calculations were performed using the material parameters of 1OO8 presented in Section 2 and e* = 6.8 pC m^-1. The minimum is clearly seen at around the middle of the surface.

Experimental results

Parallel geometry

Snapshots of flexodomains taken at B_∥ = 0 T (U = 23 V) and B_∥ = 1 T (U = 52.6 V) are presented in Fig. 7a and c. The micrographs were captured slightly above the threshold voltages (U_c∥) of the patterns in the parallel geometry, covering an area of 106 μm × 106 μm. The two-dimensional Fourier transforms (amplitude spectra) of the images in Fig. 7a and c are shown in Fig. 7b and d, respectively.
It can immediately be seen in Fig. 7 that the dimensionless wavenumber q̂ of the FDs is significantly larger at B_∥ = 1 T than at zero magnetic field, while the direction of the wavevector remains the same, as expected. The threshold voltage is also larger at the higher B_∥. In order to determine the threshold parameters U_c∥ and q̂_c∥ precisely, the emergence of the pattern has to be followed as a function of the applied voltage. The proper analysis of this process requires the definition of a quantity that indicates the presence of a pattern. In our case, this quantity was the maximal Fourier amplitude (C_FFT) in a broad region of that part of Fourier space where the peaks of the FDs were expected. The value of C_FFT is essentially a measure of contrast, which is expected to be minimal in the homogeneous initial state and to increase with the emergence of the pattern.

The measured voltage dependence of C_FFT is shown in Fig. 8 for different values of B_∥. We note that here C_FFT is background corrected, meaning that the maximal Fourier amplitude of the snapshot taken in the homogeneous initial state is subtracted from all measured values.

Threshold behavior is observed in Fig. 8 for all values of B_∥. Below the appearance of the patterns, the contrast equals the background value, thus C_FFT = 0. At higher voltages, the emergence of the FDs is indicated by an increase in the contrast. The critical voltages U_c∥ (versus B_∥) were determined by extrapolation of the lines fitted to the linear parts of the C_FFT(U) functions for each value of B_∥ (dashed lines in Fig. 8).

The q̂ data were obtained by fitting the peaks in the Fourier transforms of the micrographs with 2D Gaussian surfaces for each applied voltage. The fitted centres of the Gaussians were used to acquire the values of q̂. In Fig. 9, the wavenumbers of the FDs are plotted as a function of the reduced voltage U/U_c∥ for several values of B_∥. The data show that the wavenumber increases linearly with the applied voltage above the threshold. Therefore, the critical wavenumber q̂_c∥ was determined by extrapolation to U/U_c∥ = 1. The extrapolating dashed lines in Fig. 9 were fitted to the data points lying in the range 1.02 < U/U_c∥ < 1.06, which is approximately the same interval as the one used in the extrapolation to determine U_c∥ (see Fig. 8).

Applying the procedure presented above to a number of different B_∥ values, the magnetic field dependence of the threshold parameters can be determined. Fig. 10a and b depict how B_∥ affects U_c∥ and q̂_c∥, respectively. The solid symbols show the experimentally obtained data.

In order to see how the experimental results match our theoretical considerations, the threshold parameters U_c∥ and q̂_c∥ were also determined by the simulation technique described in Section 3.1, using the material parameters of 1OO8 listed in Section 2. Only the parameter e*, i.e., the difference of the flexoelectric coefficients, was determined by fitting our theoretical model to the measured value of q̂_c∥ at B_∥ = 0. Our method gave e* = 6.9 pC m^-1, which was used in the simulations of the parallel geometry. The open symbols in Fig. 10a and b show the magnetic inductance dependence of the critical voltage and the wavenumber obtained from the simulation (the connecting lines are guides for the eye).
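The contrast-based threshold determination described above can be condensed into a short numerical sketch. The region-of-interest mask and the crude selection of the linear fitting range are simplifications (assumptions) of the procedure in the text.

```python
import numpy as np

def c_fft(image, roi_mask, background=0.0):
    """Background-corrected pattern contrast: the maximal Fourier amplitude
    within the region of Fourier space where the FD peaks are expected."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean())))
    return amp[roi_mask].max() - background

def critical_voltage(voltages, contrasts, frac=0.1):
    """Fit a line to the rising (supra-threshold) part of C_FFT(U) and
    extrapolate it to zero contrast; the intercept estimates U_c."""
    v, c = np.asarray(voltages, float), np.asarray(contrasts, float)
    sel = c > frac * c.max()          # crude pick of the linear rise
    a, b = np.polyfit(v[sel], c[sel], 1)
    return -b / a
```

The same extrapolation idea applies to the wavenumber: fitting a line to q̂(U/U_c) over the quoted reduced-voltage window and evaluating it at U/U_c = 1 gives q̂_c.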
It is seen in Fig. 10 that the theoretical U_c∥(B) dependence is nicely reproduced experimentally; U_c∥ increases monotonically with B_∥, but the measured threshold voltages are systematically larger than the theoretical ones. This deviation can be attributed to the ionic conductivity of the liquid crystal and to the structure of the cell. Though in many situations liquid crystals can be considered insulators, the finite conductivity of nematics becomes important if a low-frequency AC voltage is applied to the material [43]. The effect is even more apparent when a DC voltage is used. Common liquid crystals, such as 1OO8, exhibit electrolytic conductivity where the charge carriers are ionic impurities. If the applied electric field changes very slowly or is constant in time, ions of opposite charge have time to reach the opposite electrodes, where they can accumulate, forming a Debye layer. This screening may reduce the electric field in the cell. However, if the total number of charge carriers is sufficiently low, this effect is negligible. In typical liquid crystal test cells, the ITO electrodes are coated with electrically insulating polyimide layers. Their thickness of approximately 100-120 nm can provide barriers strong enough to stop ions and minimize the charge transfer from the electrodes [28]. In the static case, the voltage U applied to the cell should be larger than that on the liquid crystal itself (U_LC), because of the voltage drop across the polyimide and Debye layers. The internal voltage attenuation may be estimated by U_LC/U = R_LC/(R_b + R_LC), where R_LC and R_b are the resistances of the liquid crystal and of the boundary layers, respectively. This simple model can explain why systematically larger critical voltages were obtained in the experiments compared to the simulations.

Another peculiarity seen in Fig. 10a is that the difference between the experimental and calculated critical voltages is larger at lower applied voltages. This effect is consistent with the above model. Increasing the applied DC voltage decreases the effective number of charge carriers and thus increases R_LC, while R_b may be regarded as voltage independent. Consequently, the internal attenuation is reduced; the ratio U_LC/U approaches 1 as the applied voltage increases.

Besides the critical voltage, the wavenumber q̂_c∥ also shows a significant increase with the applied magnetic field (see Fig. 7 and 10). This tendency is a consequence of the monotonically increasing U_c∥(B_∥); higher voltages allow higher wavenumbers. The simulation results agree very well with the experimental data. It should be noted that the calculated magnetic field dependence of q̂_c∥ in Fig. 10b is not a fit; the material parameters used were not varied in order to achieve a better match with the measured results. This implies that a fine tuning of the elastic constants and the other parameters may result in even better agreement.

The oblique geometry

In Fig. 11a-d, micrographs of FDs in the oblique geometry, taken at B_obl = 0 T, 0.3 T, 0.6 T, and 1 T, respectively, are shown. The images were recorded slightly above the threshold voltages of the flexodomains, and they cover an area of 106 μm × 106 μm, similar to the ones presented in the previous subsection for the parallel geometry. In Fig. 11a-d, one can clearly identify the most spectacular feature of the oblique geometry: not only the wavenumber of the FD stripes is influenced by the magnetic field, but the direction of the wavevector as well.
In the oblique geometry, the threshold voltages U_c,obl and the critical wavenumbers q̂_c,obl were determined following the procedure presented in Section 4.1 in conjunction with Fig. 8 and 9. In contrast to the parallel case, the angle β_obl between n_0 and the flexoelectric stripes had to be measured too.

The dependence of the stripe direction β_obl on the reduced voltage U/U_c,obl is plotted in Fig. 12 for different magnetic inductances. At nonzero values of B_obl, the angle β_obl shows a decreasing tendency with increasing U/U_c,obl. Therefore the stripe direction angle at the onset of the flexoelectric patterns (β_c,obl) can be determined by extrapolation (see the dashed lines in Fig. 12), analogous to the determination of q̂_c,obl.

The magnetic inductance dependence of the threshold parameters U_c,obl and q̂_c,obl is shown in Fig. 13a. The critical voltage increases with B_obl; q̂_c,obl exhibits a similar character. The tendency of increasing critical voltage and wavenumber at high magnetic fields is similar to that found in the parallel geometry and is due to the stabilizing magnetic torques.

The stripe angle β_c,obl at threshold versus the magnetic inductance is shown in Fig. 13b. The data indicate that β_c,obl increases with B_obl monotonically from zero and approaches 45° at high fields.

In the oblique geometry, a pure magnetic field induces a thresholdless, homogeneous twist deformation. Therefore, for B_obl ≠ 0, a twisted structure forms the basic state of the flexoelectric instability at U = U_c,obl. For high magnetic fields, the director realigns to be parallel to B_obl, i.e., the maximal rotation angle is 45°. Fig. 13b clearly shows that β_c,obl follows the director rotation and saturates approaching the same 45° angle; thus, at large magnetic fields, the FD stripes are parallel to the (average) director, just as in the case of B_obl = 0.

The perpendicular geometry

Micrographs of flexodomains in the perpendicular geometry, recorded slightly above their critical voltage at B_⊥ = 0 T, 0.2 T, 0.35 T, 0.5 T, 0.75 T, and 1 T, are shown in Fig. 14a-f, respectively. All images cover an area of 106 μm × 106 μm (the same as shown in Fig. 7 and 11). The direction of the rubbing (n_0) and that of the magnetic field (B_⊥) correspond to the vertical and the horizontal directions, respectively.

One can observe in Fig. 14a-c that the distance between the stripes increases, but the orientation of the flexodomains remains parallel to the rubbing direction. In contrast, in Fig. 14d-f, the wavenumber of the pattern increases and the stripes become oblique, gradually approaching the horizontal.

The critical parameters U_c⊥, q̂_c⊥, and β_c⊥ versus B_⊥ were determined by following the same procedure presented in the previous subsections. The experimental values of the critical voltage, wavenumber, and stripe angle are shown as functions of B_⊥ in Fig. 15a-c, respectively (solid symbols).

The threshold parameters U_c⊥, q̂_c⊥, and β_c⊥ were also determined by the simulation technique described in Section 3.2. The same material constants were used as for the parallel geometry, except e*. A slightly different value of e* = 6.8 pC m^-1 was used here (instead of 6.9 pC m^-1), in order to fit the experimental value of the critical wavenumber in the perpendicular geometry at zero magnetic field, which differed slightly from that measured previously in the parallel geometry.
The open symbols in Fig. 15a-c show the magnetic inductance dependence of the critical voltage, wavenumber, and stripe angle obtained from the simulations (the connecting lines are just guides for the eye). It is clear from Fig. 15 that the characteristics of the flexoelectric patterns are different below and above the threshold magnetic inductance (B_t = 0.36 T) of the twist Freedericksz transition. Nevertheless, for both B_⊥ ranges, the theoretical curves nicely reproduce the experimental dependence. In the range B_⊥/B_t < 1, U_c⊥ and q̂_c⊥ decrease with increasing magnetic field, while the direction of the FD stripes remains parallel to n_0, thus β_c⊥ is essentially zero. It is important to note that close to B_t, both U_c⊥ and q̂_c⊥ are far below their values at B_⊥ = 0. This decrease becomes clear if we invoke the structure of the flexodomains. The periodic director deformation of the FDs involves both tilt and twist components, as described by eqn (12) and (13). The torque exerted by a bias magnetic field applied perpendicular to the initial director helps to twist the director and thus reduces the threshold voltage of the FDs, even if the field is still too low to induce a homogeneous twist deformation by itself.

Above the Freedericksz threshold, both U_c⊥ and q̂_c⊥ increase with B_⊥. Furthermore, the orientation of the stripes changes gradually from the rubbing direction towards a state where they are more parallel to the magnetic field. Therefore, β_c⊥ increases from zero and approaches 90° at high values of B_⊥. In order to see how the critical rotation angle β_c⊥ of the FDs is related to the twist-deformed basic state of the director field, we included the maximal twist angle ψ_m as a function of B_⊥ in Fig. 15c (solid line). It is clearly seen that ψ_m is always larger than β_c⊥, as expected. The difference between the stripe angle and the maximal twist angle is relatively small, despite the fact that the ψ(ẑ) profile is not flat, even at B_⊥ = 1 T. This leads to the conclusion that the flexoelectric domains in our system are localized in the middle of the cell. As a consequence, the twist deformation of the director is directly visualized by the rotation of the FD stripes.

Though the numerically obtained U_c⊥(B_⊥) curves exhibit a similar B-dependence to the experimental ones, the latter values are slightly, though systematically, higher, just as was found in the parallel geometry. The explanation given for the voltage deviation in Section 4.1 applies here as well. In contrast to the threshold voltages, the measured and calculated q̂_c⊥(B_⊥) curves match almost perfectly, despite the fact that the B-dependent values were not obtained by a fit (no free parameters were varied in the simulations).

Similarly good agreement can be seen between the calculated and the measured β_c⊥(B_⊥) dependence as well, though the threshold of the twist Freedericksz transition seems to be less sharp in the experiment. This is most likely due to experimental imperfections, e.g., a slight misalignment of the magnetic field direction.

We note here that the vertical dashed line in Fig. 15a-c is not an experimental value of the threshold of the twist Freedericksz transition; it was calculated from the material parameters of 1OO8 listed in Section 2, which were taken from independent measurements [37].
Discussion

We have shown in the previous sections that the presence of an additional magnetic field has a significant influence on the formation of flexodomains in a nematic liquid crystal. It is well known that a magnetic field also affects the electric Freedericksz transitions in certain geometries. This is not surprising as, depending on its direction, the torque exerted by the magnetic field either stabilizes or destabilizes the initial state. In the following, we discuss these analogies in more detail.

Let us start with the magnetic field applied along n_0. Here, the critical voltage U_c∥ of the FDs was found to increase monotonically with B_∥. Qualitatively similar behavior is expected in the same geometry for the homogeneous splay Freedericksz transition, assuming that the nematic compound exhibits positive dielectric and magnetic susceptibility anisotropies. There, the magnetic field has a stabilizing effect: it tends to suppress the director tilt and twist alike. Our findings point out that this stabilizing effect works similarly in the case of flexodomains, where the director deformation is induced by flexoelectricity, thus acting against the negative dielectric anisotropy that stabilizes the homogeneous planar state.

The perpendicular geometry has some more interesting aspects. Our results show that the critical properties of the FDs exhibit a completely different nature in the two distinct magnetic field ranges separated by the twist Freedericksz threshold field. For B_⊥ < B_t, the critical voltage was found to decrease with B_⊥, while for B_⊥ > B_t the opposite tendency was detected. Comparing this with the splay Freedericksz transition of a nematic with ε_a > 0, we do not find an analogy, in contrast to the parallel geometry. The threshold voltage for the homogeneous director reorientation is not affected by the magnetic field at all if B_⊥ < B_t [44]. This is due to the fact that the electric splay Freedericksz distortion involves only the tilt of the director, while in the FDs both tilt and twist are present. A magnetic inductance below B_t cannot create twist, but may alter twist if it is already present.

Measurements in the oblique and perpendicular geometries showed that the direction of the FD stripes rotates if there is a twist deformation in the sample. This unambiguously proves that the FDs observed here are of bulk origin, just as was assumed in the first theoretical interpretation [16]. Thus, our findings are in contrast to some recent results on flexoelectric pattern formation in bent-core nematic compounds using twisted cells [24,25], where the patterns were found to be localized near the electrodes and changed their direction upon reversal of the voltage polarity. Despite similarities in their appearance, we assume that those patterns are flexodomains of another type with a different, not yet fully explored formation mechanism, where surface effects (e.g., anchoring and ion blocking strength, surface polarization, large electric field gradients near Debye layers) as well as differences in material parameters (ion concentration, elastic constants, etc.) may play an important role.

In Section 1 we have already pointed out the advantages of using flexodomains in determining e*, as well as the main drawback of this technique: only a few compounds possess the combination of material parameters required for the appearance of flexodomains [18].
For example, in compounds with large positive dielectric anisotropy, FDs cannot be seen because their threshold voltage would be larger than that of the electric Freedericksz transition. However, we showed that applying a magnetic field in the perpendicular geometry substantially decreases U_c⊥, while the electric Freedericksz threshold remains unaffected for B_⊥ < B_t. Therefore, our results open up a perspective to enlarge the number of nematics that may show FDs. Namely, we think that the application of a suitable B_⊥ will allow the observation of FDs in compounds where no flexoelectric pattern formation can be seen in the absence of a magnetic field. Proving this will be the subject of further studies.

Fig. 6 The critical voltages U_c⊥(q̂, β) of flexodomains for different wavenumbers q̂ and stripe angles β in the case of B_⊥ = 0.7 T. The minimum of the surface in the center corresponds to the actual threshold of the flexoelectric instability.

Fig. 7 Micrographs of flexodomains taken at (a) B_∥ = 0 T (U = 23 V) and (c) B_∥ = 1 T (U = 52.6 V) in the parallel geometry. The images cover an area of 106 μm × 106 μm. The magnetic field and the rubbing direction lie parallel to the horizontal direction. The two-dimensional Fourier transforms of (a) and (c) are shown in (b) and (d), respectively.

Fig. 8 The voltage dependence of the pattern contrast (symbols), based on the maximal Fourier amplitude (C_FFT), for different applied magnetic inductances in the parallel geometry. The dashed lines indicate the linear extrapolation.

Fig. 9 The wavenumber of flexodomains as a function of the reduced voltage (symbols) for different applied magnetic inductances in the parallel geometry. The dashed lines indicate the linear extrapolation.

Fig. 10 The magnetic inductance dependence of (a) the critical voltage and (b) the wavenumber in the parallel geometry. The solid (connected open) symbols were obtained by experiments (by numerical simulations).

Fig. 11 Micrographs of flexodomains taken at (a) B_obl = 0 T (U = 23 V), (b) B_obl = 0.3 T (U = 27.6 V), (c) B_obl = 0.6 T (U = 38.8 V), and (d) B_obl = 1 T (U = 54.8 V) in the oblique geometry. The images cover an area of 106 μm × 106 μm. The magnetic field lies in the horizontal direction. The rubbing direction is at an angle of 45° with respect to the horizontal direction (parallel to the stripes in (a)).

Fig. 12 The angle of the FD stripes with respect to the rubbing direction as a function of the reduced voltage (symbols) for different applied magnetic inductances in the oblique geometry. The dashed lines indicate the linear extrapolation.

Fig. 13 The magnetic inductance dependence of (a) the critical voltage and wavenumber and (b) the FD stripe angle with respect to the rubbing direction in the oblique geometry.

Fig. 14 Micrographs of flexodomains taken at (a) B_⊥ = 0 T (U = 24 V), (b) B_⊥ = 0.2 T (U = 22.4 V), (c) B_⊥ = 0.35 T (U = 18.4 V), (d) B_⊥ = 0.5 T (U = 31.6 V), (e) B_⊥ = 0.75 T (U = 45.6 V), and (f) B_⊥ = 1 T (U = 56 V) in the perpendicular geometry. The images cover an area of 106 μm × 106 μm. The magnetic field and the rubbing direction lie parallel to the horizontal and the vertical directions, respectively.

Fig. 15 The magnetic inductance dependence of (a) the critical voltage, (b) the wavenumber, and (c) the FD stripe angle with respect to the rubbing direction in the perpendicular geometry. The solid (connected open) symbols were obtained by experiments (numerical simulations).
An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years. However, it has difficulty coping with factors such as occlusion, appearance changes, and pose variation. The reasons are that, firstly, even though the naive Bayes classifier is fast to train, it is not robust to noise, and secondly, its parameters must be tuned to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we combine ideas from adaptive compressive sensing theory regarding weighted random projections to exploit both local and discriminative information of the object. Second, we adopt an online random forest classifier for online tracking, which is demonstrated to be adaptively more robust to noise and computationally efficient. The experimental results show that the proposed algorithm performs better under occlusion, appearance changes, and pose variation than the fast compressive tracking algorithm.

Introduction

In current target tracking, visual tracking algorithms are roughly divided into two categories, generative and discriminative. Most generative algorithms only take a holistic representation into consideration and do not make full use of the discriminative information between the target and the background, so these algorithms struggle to represent the target correctly under occlusion and pose variation.

Discriminative models, by contrast, regard the tracking problem as a binary classification problem, obtaining a decision boundary between target and background to distinguish them; examples include online boosting, the MIL tracker, and online Semi-Boost. In recent years, Kaihua Zhang combined ideas from the generative and discriminative models and proposed the fast compressive tracking algorithm [1], which, however, is not robust enough to noise and thus suffers from the drifting problem.

Compressive tracking

Compressive sensing theory states that a high-dimensional feature space can be projected to a low-dimensional space which contains enough information to reconstruct the original high-dimensional signal, via a random projection matrix: v = Rx, with x ∈ R^n, v ∈ R^m, R ∈ R^{m×n}, and m ≪ n. According to the Johnson-Lindenstrauss lemma [2], for two K-sparse vectors the pairwise distances are preserved with high probability under such a projection, which ensures the integrity of the important information.

To deal with the scale change problem, a series of rectangle filters is used to convolve the original image; the rectangle filters are defined as

h_{i,j}(x, y) = 1 if 1 ≤ x ≤ i and 1 ≤ y ≤ j, and 0 otherwise.

Diaconis and Freedman [3] show that low-dimensional vectors projected from a high-dimensional space through a random measurement matrix are almost Gaussian, so each compressed feature is modeled by a Gaussian distribution. Taking the introduction of new samples in each new frame into account, the scale parameters of these Gaussians (means and standard deviations) are incrementally updated with a learning rate. In the end, a naive Bayes classifier is used to predict the position from the candidate which is most likely to be the target sample. The main components of the proposed compressive tracking algorithm are shown in Figure 1.

Figure 1 Main components of the proposed compressive tracking algorithm.

Online random forest

Ensemble learning provides robust machine learning algorithms; the random forest in particular has proven strong anti-interference ability, so that updating with a wrongly labeled, occluded sample does not have a large impact. The online random forest combines the ideas of online learning and ensemble learning.
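As an illustration of the random projection step, the sketch below builds a very sparse measurement matrix of the Achlioptas type and compresses a high-dimensional feature vector; the sparsity parameter s and the vector sizes are illustrative choices, not the exact values used by FCT.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_projection(m, n, s=4):
    """Sparse random matrix: entries are +sqrt(s) or -sqrt(s), each with
    probability 1/(2s), and 0 otherwise; such matrices preserve pairwise
    distances in the Johnson-Lindenstrauss sense."""
    u = rng.random((m, n))
    R = np.zeros((m, n))
    R[u < 1 / (2 * s)] = np.sqrt(s)
    R[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return R

n, m = 10_000, 50          # feature dimension -> compressed dimension
R = sparse_projection(m, n)
x = rng.random(n)          # stand-in for a high-dimensional Haar-like feature vector
v = R @ x                  # compressed representation fed to the classifier
```

Because most entries of R are zero, the projection costs far less than a dense matrix-vector product, which is what makes the per-frame feature compression cheap enough for real-time tracking.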
Random forest

A random forest is composed of many classification and regression trees. The randomness of the random forest is reflected not only in the choice of the samples, but also in the choice of the random tests: the random forest selects a series of tests randomly from the feature set and then picks the best test among them according to the maximum information gain. This is the reason why the random forest is not easy to over-fit.

Given a training set X with labels Y, and an ensemble of trees {t_1(x), ..., t_K(x)}, we define the margin at a sample (x, y) as the average number of votes for the right class minus the maximum average number of votes for any other class:

mg(x, y) = (1/K) Σ_k I(t_k(x) = y) - max_{j≠y} (1/K) Σ_k I(t_k(x) = j),

where I is the indicator function and K is the number of trees. The larger the margin between the right class and any other class, the more confident the classifier is, so the generalization error E is defined as the probability that the margin is negative. It has been shown by Breiman [4] that as the number of trees increases, for almost surely all sequences of trees, E converges to a limiting value. This result explains why the random forest does not over-fit as more trees are added. It has also been shown by Breiman [5] that this error has the upper bound

E ≤ U(1 - s^2)/s^2,

where U is the mean correlation between pairs of trees in the forest and s is the strength of the ensemble.

Online learning

In the tracking process, the samples arrive continuously with the image sequence; we cannot obtain the entire training set in advance. What we need to do is train the classifier with the samples that arrive sequentially, so that it converges to the offline model. Online bagging provides a good recipe for this online update. Given a training set T of size N, the probability that a given sample is randomly selected k times is binomial, and as N → ∞ this distribution tends to a Poisson distribution; therefore each classifier is updated with each sample k times, where k is generated randomly from a Poisson distribution (see the sketch after this section).

In the online random forest there is another important problem: telling a node when it is appropriate to perform a split. Firstly, the samples are gathered over time, so we should know whether enough samples have been seen for a node to perform a split. Secondly, the split must be good enough for classification purposes. It has been shown by Amir Saffari that splitting a node only when it has accumulated a minimum number of samples D and the best test reaches a minimum information gain E lets the online forest converge to the offline model.

Improved method

The parameters of the naive Bayes classifier are continuously updated in the tracking process. However, when the target is partially or fully occluded for some time, noisy samples are introduced into the model and previous samples are forgotten. The usual solution is to judge when the target is under occlusion and to design different update rates accordingly, but detecting occlusions not only complicates the algorithm considerably, it also does not guarantee accuracy. Nevertheless, it has been shown by Kaihua Zhang [7] that compressive tracking provides a good image representation for the tracking problem, so we use it to make a discriminative representation of the samples in our algorithm.

The online random forest selects a subset of the samples and features from the total training set for the training of each tree; therefore, most of the trees will not be polluted by the noise. This is the most important reason why the online random forest does not readily over-fit. Moreover, we can estimate the out-of-bag error of the trees; through the out-of-bag error, we propose to discard any tree which is polluted by the noisy samples, which reduces the influence of factors such as occlusion, appearance changes, and illumination change.
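A skeleton of the online bagging update with out-of-bag error tracking, as used above to decide when a tree is too polluted to keep, might look as follows. The tree objects (with update/predict methods) and the discard threshold are abstractions, not part of the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class OnlineForest:
    """Online bagging (Oza-Russell style): each incoming sample updates each
    tree k ~ Poisson(1) times; when k == 0 the sample is out-of-bag for that
    tree and is used to estimate its out-of-bag (OOB) error instead."""

    def __init__(self, trees, discard_oob=0.5):
        self.trees = trees                    # objects with update(x, y) / predict(x)
        self.oob_err = np.zeros(len(trees))
        self.oob_cnt = np.zeros(len(trees))
        self.discard_oob = discard_oob        # OOB error above which a tree is dropped

    def update(self, x, y):
        for i, tree in enumerate(self.trees):
            k = rng.poisson(1.0)
            if k > 0:
                for _ in range(k):
                    tree.update(x, y)
            else:                             # out-of-bag sample for tree i
                self.oob_cnt[i] += 1
                self.oob_err[i] += (tree.predict(x) != y)

    def prune(self, fresh_tree):
        """Replace trees whose OOB error exceeds the threshold with new ones."""
        for i in range(len(self.trees)):
            if self.oob_cnt[i] > 0 and self.oob_err[i] / self.oob_cnt[i] > self.discard_oob:
                self.trees[i] = fresh_tree()
                self.oob_err[i] = self.oob_cnt[i] = 0
```

The Poisson(1) draws reproduce, in the limit of an infinite stream, the with-replacement resampling of offline bagging, while the OOB bookkeeping comes essentially for free from the samples a tree happens to skip.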
Eventually, when the target is completely or severely occluded, there are no candidate positive samples in the current frame. In this situation the output of each tree will be zero, and this result can help us decide whether the target is completely occluded. What we need to do next is two things. Firstly, we should abandon the noisy samples collected from the current frame. Secondly, we should use the motion information of the target over the last several frames, including its velocity, acceleration, and other information, to predict the target position. Concretely, for the candidate samples in the current frame, if all the responses of the trees are zero (output[1...M] == 0), all candidates are considered negative samples; we then predict the position of the target from the motion model and discard the samples collected from the current frame. The entire proposed algorithm is depicted below (reconstructed from the garbled listing in the source):

Algorithm: improved fast compressive tracking based on online random forest
Require: the t-th and (t+1)-th image frames.
Require: the number of positive and negative samples per frame, N.
Require: the number of trees in the forest, M.
Require: the minimum number of samples, D.
Require: the minimum gain, E.
1. According to the position of the target in the t-th frame, collect positive and negative samples and compute their compressed representations.
2. For each sample and each tree k = 1 : K, find the leaf node that the sample belongs to (l = findLeaf(i)) and update that node, splitting it when the conditions on D and E are met.
3. In the (t+1)-th frame, collect candidate samples and input them into the trained classifier.
4. If output[1...M] == 0 for all candidates, discard the frame's samples and predict the position from the motion information; otherwise select the candidate with the strongest ensemble response.
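Steps 3-4 of the algorithm can be sketched as below. The constant-velocity/acceleration predictor is one possible realization of the "motion information" mentioned in the text, not the paper's exact formula.

```python
import numpy as np

def track_step(forest, candidates, positions, history):
    """If every tree responds 0 to every candidate (output[1..M] == 0),
    declare the target occluded: discard this frame's samples and predict
    the position from recent motion; otherwise pick the strongest response."""
    votes = np.array([[tree.predict(c) for tree in forest] for c in candidates])
    if not votes.any():
        p1, p2, p3 = history[-1], history[-2], history[-3]
        velocity = p1 - p2
        accel = (p1 - p2) - (p2 - p3)
        return p1 + velocity + 0.5 * accel, False    # predicted position, no update
    best = votes.sum(axis=1).argmax()                # strongest ensemble response
    return positions[best], True                     # tracked position, allow update
```

Returning the boolean update flag lets the caller skip the classifier update for occluded frames, which is exactly what prevents the noisy samples from polluting the forest.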
In the experiments, the dimension of the compressed feature was set to 50. The number of trees is 100, the maximum depth of a tree is 5, and 10 tests are drawn from the test set to split a node. The minimum number of samples is 100. The dataset consists of two sequences: a surveillance video from the CMBC and the public sequence 'Rotating Girl', which present pose variation, scale variation, and occlusion. As can be seen from the results, our method tracks the target more accurately than the FCT algorithm when the target is under occlusion and pose variation. The results for the two sequences are shown in Figure 2 and Figure 3, respectively.

In the compressive tracking algorithm, when the girl turns her face quickly, a faster update rate is needed to adapt to the changes in appearance. However, with a high update rate it is easy to introduce noise, leading to drift in the tracking, as can be seen in the result for the 180th frame. Meanwhile, when we repeat the experiment with a smaller update rate, drift occurs in the 250th and 255th frames because the update rate cannot keep up with the changes in appearance. Our method, in contrast, adaptively abandons the trees that were updated with outdated samples by estimating the out-of-bag error, and new trees are created from the current samples to accommodate the changes in appearance.

We evaluate the results with the center location error, defined as the Euclidean distance between the central location of the tracked object and the manually labeled ground truth. The center location errors for the CMBC and Rotating Girl sequences are shown in Figures 4 and 5, respectively.

Figure 2 The results of the surveillance video from the CMBC.
Figure 3 The results of the 'Rotating Girl' sequence.
Figure 4 The center location error for the CMBC sequence: FCT vs. FCT-ORF.
Figure 5 The center location error for the 'Rotating Girl' sequence: FCT vs. FCT-ORF.

Conclusion

In this paper, we proposed an improved compressive tracking algorithm based on an online random forest. The simulation results show that our algorithm performs better than the original algorithm when the target is under occlusion, pose variation, and appearance changes. The reasons are that we adopt the adaptive compressive sensing theory to make a discriminative representation of the target, and that the random forest classifier is robust to noise since it randomly selects the features and samples used to train the decision trees. Finally, the out-of-bag error tells us whether a decision tree is completely polluted so that it can be discarded.
Thermodynamic modeling of a spectrum split perovskite/silicon solar cell hybridized with thermoelectric devices

To facilitate the attainment of higher performance in the tandem perovskite/silicon solar cell, this work conducts the thermodynamic modeling and analysis of a spectrum-splitting tandem perovskite/silicon solar cell hybridized with thermoelectric (TE) devices. A dichroic beam splitter is employed as the solar concentrator for the utilization of the entire solar spectrum, and the effects of temperature dependency and leg geometry alteration are considered in the TE device pins. Additionally, a TE cooler (TEC) is attached behind the backplate of the tandem solar cell to reduce overheating and allow for higher power generation from the hybrid system. Finally, novel equations are developed to study the effects of varying the halide composition of the perovskite on the overall system performance. Among several fascinating results, it was disclosed that increasing the bromine composition only improved the system's performance when a halide composition of 0.2 was used, for hybrid organic-inorganic perovskite thicknesses ranging from 300 to 400 nm. It was also forecast that as the solar cells are exposed to higher concentrated solar irradiances, the effects of increasing the halide composition on the system's performance will become more notable. Additionally, utilizing a TEC to maintain the tandem solar cell backplate temperature at 293 K is not beneficial to the system's performance when a beam splitter is used. Finally, it was found that the highest system efficiency of 42% was obtained at a split wavelength of 800 nm, which is considerably higher than the 23.6% reported for the standalone perovskite/Si cell. These results provide useful insights regarding the operation of spectrum-splitting tandem perovskite/silicon solar cells with TE devices.

| General overview on photovoltaics

The use of photovoltaic (PV) modules is on the increase due to the clean energy they provide relative to greenhouse-gas-emitting fossil fuel sources like coal-fired power plants [1]. Additionally, the fear that fossil fuel sources will soon be exhausted, due to the exponentially increasing human population, has accelerated the search for efficient alternative energy sources [2]. However, the sensitivity of solar cells to light and temperature causes a severe reduction in PV module efficiency [3]. This is why previous works sought to mitigate the efficiency reduction of PVs by combining them with different solid-state energy devices which convert the waste heat to useful electricity [4]. Due to the adverse effects of high temperature on the PV module, spectrum splitters were utilized to split the concentrated beam into individual energy components [5]. The major components are short wavelengths for the PV module (such as ultraviolet and visible light) and long wavelengths (near-infrared and infrared) for the solid-state energy converters [6].

| Benefits of tandem perovskite/silicon solar cells

Research has shown that higher efficiencies are obtainable by increasing the number of stacked PV cells [7]. The major objective in the design of PV solar cells is to maximize the efficiency-to-cost ratio, and perovskite solar cells (PSCs) have reasonably met that requirement [8]. The efficiency of lead halide-based PSCs has rapidly improved, reaching 20% on small-area devices and making this material a competitive thin-film PV technology [9].
Furthermore, recent research has shown that introducing a mixture of bromide and iodide in the halide composition of methylammonium lead perovskites allows continuous fine-tuning of the band gap, which is a fine property for multi-junction solar cells [10]. The major variations of the tandem device configuration for solar cells are the perovskite-perovskite and perovskite-silicon tandem solar cells [11]. The perovskite-perovskite configuration has a lower greenhouse gas emission factor and energy payback time compared to the silicon benchmark. Also, the thermodynamic efficiencies of the perovskite-perovskite and perovskite-silicon tandem solar cells are very high (exceeding 30%), thus paving the way for upgrading silicon cell performance at little extra cost on a short-term and long-term basis, respectively [12].

The perovskite/Si combination as a tandem solar cell is of great interest due to the possibility of boosting efficiencies above 30% while reducing the cost per kilowatt hour (kWh) [13]. Futscher and Ehrler [14] indicated that perovskite/Si cells at different locations may require different tandem configurations and/or perovskite band gaps to minimize the cost per kWh. They also showed that by using a perovskite top cell with the ideal band gap for the respective tandem configuration, perovskite/Si power conversion efficiency limits above 41% are possible for all three tandem configurations, even under non-ideal climate conditions. Al-Ashouri et al. [15] reported a monolithic tandem perovskite/Si solar cell with a record-breaking efficiency of 29.15%. The absorber of the perovskite cell was made to remain stable under illumination via very fast extraction of holes and minimization of nonradiative recombination at selected hole interfaces. Kim et al. [16] designed and developed a bifacial four-terminal tandem perovskite/Si heterojunction solar cell that generated 30% efficiency, greater than the 29.43% Shockley-Queisser limit for crystalline-silicon solar cells. He et al. [17] noted that the high conversion efficiencies obtained from tandem perovskite/Si solar cells were due to the advantages of perovskite materials, such as their low cost, tunable band gaps, and easy fabrication.

| Thermoelectric devices applied in tandem PVs

Thermoelectric (TE) modules are solid-state energy conversion devices that directly convert thermal energy to electricity based on TE effects [18]. These devices offer several desirable perks such as solid-state and noiseless operation, environmental friendliness, and zero maintenance costs [19]. By harnessing the TE effects to convert thermal energy, they have found various applications in converting the waste heat of several power systems to electricity [20]. More specifically, they have proven to be good companions to PV systems in hybrid designs such as PV-thermoelectric generators (TEGs) [21] and PV-TE coolers (TECs) [22]. The PV-TEC sandwiched module has shown improved efficiency and overall power output of the hybrid system due to the Peltier cooling process [23]. In addition, a PV-TEG hybrid system with a beam splitter demonstrated higher efficiency in comparison to the sandwiched system [24], due to the conversion of waste heat from the back plate to electric output.
In addition, some authors [25] hybridized the PV-TEG system with a single-junction PV and beam splitters to enable spectrum management and reduce the thermal losses encountered in the direct coupling/sandwiching method. The emerging high-efficiency PSC has also been combined with TEGs to form hybrid generating systems capable of utilizing the broad solar spectrum [26]. Zhang et al. [27] estimated the features and feasibility of directly coupling a PSC with TE modules and reported that the efficiencies obtained from the hybrid and standalone systems were 18.6% and 17.8%, respectively, for a temperature coefficient lower than 2%. It was further shown that the PSC is a very suitable option for the PV-TEG system. Xu et al. [28] optimized the performance of a spectrum-splitting PSC integrated with a TEG and recorded an efficiency of 20.3% after optimizing the device, operated under AM 1.5G conditions with an open-circuit voltage of 1.29 V and an irradiance of 100 mW cm^-2. Liao et al. [29] recorded efficiencies of 20.4%, 5.16%, and 20.8% for a standalone PSC, a TEG, and a hybrid PSC-TEG system, respectively. After optimizing the operating parameters of the PSC-TEG system, the hybrid device realized a maximum efficiency of 22.9% at an optimum layer thickness of 449.7 nm. It was finally stated that incorporating the TEG in the PSC facilitated the reduction of waste heat emission and the improvement of the power and conversion efficiency of the PSC. Zhou et al. [7] utilized a TEG to reduce the heat generation in a concentrated PV system made of a metal-halide PSC. At a concentrated solar irradiance of 3 suns, the hybrid system achieved a maximum efficiency of 35%, which was 4.7% higher than that of the standalone PSC. Finally, Lorenzi et al. [30] showed that, relative to amorphous silicon and gallium indium phosphide cells, the PSC showed a maximum efficiency enhancement of 3.1% when a bismuth-telluride-based TEG was operated in tandem. Based on these results, they further concluded that the PSC has a greater potential for harnessing waste heat via TEGs than the popular Si solar cells. However, no effort had been made to combine the hybrid PSC/Si system with TE devices so as to improve the system efficiency by reducing the overheating of the tandem cells.

| Scientific novelty

The literature review shows the potential of the PSC for utilizing the broad solar spectrum by incorporating TE devices. Furthermore, the emerging tandem PSC/Si system is showing very promising power densities and efficiencies due to the deposition of perovskite on the Si layers. However, the efficiency of the tandem PSC/Si system is strongly affected by spectral and temperature changes as well as by the composition of the halide. To date, no effort has been made to mitigate the effects of spectral changes, temperature, and halide composition on the efficiency of tandem PSC/Si systems.

This work is novel in many ways. First, a thermodynamic model of a concentrated tandem PSC/Si system combined with a spectrum-splitting TEG having trapezoidal leg geometry is presented. The spectrum-splitting element used is a dichroic beam splitter, which allows the TEG to utilize the infrared region that is detrimental to the efficiency of the hybrid PSC/Si system.
Second, in a bid to modify and increase the utilizable spectrum, the hybrid PSC/Si system is coupled to a TEC to reduce the thermal heating of the system for improved power generation and to demonstrate the influence of the halide proportion. Finally, the effects of varying the halogen composition are studied so as to better understand the effect of its tunable band gap energy.

| METHODOLOGY

In this section, a detailed description of the hybrid PSC/Si tandem system operated with TE devices is presented in Section 2.1. Section 2.2 then presents the model equations used to describe the performance of the various components making up the spectral-splitting hybrid device.

| Description of the conceptualized system

The proposed hybrid system, shown in Figure 1, consists of a dichroic spectrum splitter, a concentrator, perovskite/Si tandem solar cells, a trapezoidal-leg TEG, and a TEC. These components are added to improve the performance of the overall system. A TEG can directly convert thermal energy into electric power: radiation energy near the band gap is converted directly to electricity by the PV cells and, simultaneously, the infrared energy is utilized by the TEG to convert heat to electricity. Consequently, more electricity can be produced by the hybrid system than by a single PV or TE system. The tandem PV device is sandwiched with the thermoelectric cooler and is a co-receiver of the concentrated split light beam together with the trapezoidal-leg generator. As a certain fraction of the incoming solar irradiance is incident on the surfaces of both solid-state converters, heat is lost to the atmosphere through convection and radiation from both the top and back surfaces.

| Modelling equation formulation

For a wavelength range of 280-4000 nm and a cutoff (split) wavelength λ_s, the total concentrated irradiance into the entire system is given by Equation (1) [31], where all terms in Equation (1) are defined in the nomenclature.

| Analysis of the trapezoidal TEG

The footprints of the TEG are known to be of great importance in its modeling and performance: geometric parameters of the TEG (e.g., thermoelement leg length and cross-sectional area) can influence the performance of a hybrid system. For the trapezoidal geometry under study, the leg length, which depends on the cross-sectional area of the TEG shown in Figure 2, is expressed by Equation (2) [20]. The one-dimensional energy balance of the trapezoidal leg for an infinitesimal element yields Equations (3) and (4) for the p- and n-type legs, respectively [32,33], where the parameters in all equations are defined in the nomenclature. The thermal conductivity can be replaced with a mean value within the junction temperature range and treated as a constant; differentiating and rearranging Equations (2)-(4) then gives the differential Equations (5) and (6), which can be solved by applying the principle of superposition, using the Euler-Cauchy and Wronskian approaches for the complementary and particular solutions.
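Since the closed-form expressions derived for the trapezoidal leg are not reproduced in this text, a numerical sketch of the two purely geometric quantities they encode, the leg's electrical resistance R = ∫ ρ/A dx and thermal conductance K = [∫ dx/(kA)]^-1 for a linearly tapered cross-section, is given below. The property values in the example are illustrative placeholders, not the Table 1 skutterudite data.

```python
import numpy as np

def leg_RK(rho, k, L, A_hot, A_cold, n=400):
    """Electrical resistance and thermal conductance of one trapezoidal leg
    whose cross-section varies linearly from A_hot to A_cold over length L.
    rho (electrical resistivity) and k (thermal conductivity) may be floats
    or callables of position, allowing temperature-dependent mean profiles."""
    x = np.linspace(0.0, L, n)
    A = A_hot + (A_cold - A_hot) * x / L
    rho_x = rho(x) if callable(rho) else np.full_like(x, rho)
    k_x = k(x) if callable(k) else np.full_like(x, k)
    R = np.trapz(rho_x / A, x)              # series sum of slice resistances
    K = 1.0 / np.trapz(1.0 / (k_x * A), x)  # series sum of thermal resistances
    return R, K

# Illustrative leg: 2 mm long, cross-section tapering from 2 mm^2 to 1 mm^2
R, K = leg_RK(rho=1.2e-5, k=3.0, L=2e-3, A_hot=2e-6, A_cold=1e-6)
print(f"R = {R * 1e3:.2f} mOhm, K = {K * 1e3:.2f} mW/K")
```

For a uni-couple, the n- and p-leg contributions simply add in series for R and in parallel for K, consistent with the assumption in the text that both legs share the same geometry.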
These two equations yield the solutions in Equations (7) and (8). The constants c_1, c_2, c_3, and c_4 in Equations (7) and (8) can be evaluated using the boundary conditions at the leg ends. For the trapezoidal leg, considering that the geometrical parameters of both the n- and p-type legs are the same, the internal resistance of a uni-couple (a pair of legs) is derived accordingly. Since the process occurs at steady state, isolating a TEG leg, the conductive heat transfer rate in the x-direction of Figure 2 is written as Equation (11); the negative sign indicates that the heat flows opposite to the x-axis. Rearranging Equation (11) gives Equation (12); substituting, integrating, and incorporating the mean thermal conductivities of the pair, Equation (12) yields Equation (13), from which the thermal conductance of the trapezoidal leg follows.

Figure 2 Cross section of the trapezoidal TEG leg. TEG, thermoelectric generator.

The overall energy balance for the entire solar TEG in Figure 1 can then be written, as can the energy balance for the selective absorber (Equation (17)). The heat absorbed at the TEG hot junction is given by Equation (18); the radiation and convective heat terms can be replaced with an equivalent heat quantity. Substituting Equation (18) into Equation (17) gives the expression in Equation (19). The electrical current of the TEG, with the Thomson influence considered, is given by Equation (20). Substituting Equation (20) into Equation (19) and simplifying gives the current in a closed form in which the auxiliary quantities ζ''', ζ'', and ζ' are defined accordingly. The component efficiency of the trapezoidal TEG is given by Equation (25).

The thermal conductivity and electrical conductivity of the mid/high-temperature semiconductor (skutterudite) used to model the TE legs are provided in Table 1, while Figure 3 shows the variation of the Seebeck coefficient with operating temperature.

| Analysis of the hybrid tandem PV-TEC

In the analysis of the hybrid tandem PV-TEC, the irradiance from 280 nm up to the cutoff wavelength is incident on the perovskite/Si cells, which power the TEC; this keeps the multi-junction cells at a reasonable operating temperature. There are thermal losses through radiation, reflection, and convection to the atmosphere, while the system delivers a net power output (after accounting for the power necessary to drive the TEC). Isolating the hybrid system and taking an energy balance over it, the balance can be written down; the radiation and convection heat losses can be expressed as in [36], and, empirically, the combined heat transfer coefficient is represented by h_0 = 5.7 + 3.8v. The quantity of heat rejected at the TEC hot junction is given by Equation (28).

In this study, TECs are defined by the specified characteristics I_max, V_max, Q_max, and ΔT_max. I_max is the input current that produces the maximum temperature difference ΔT_max across the TEC module, V_max is the DC voltage at the maximum temperature difference ΔT_max, and Q_max is the maximum amount of heat absorbed at the TEC cold side at I = I_max and ΔT = 0. Utilizing these parameters, the module properties can be expressed accordingly (see the sketch after this passage), and from Equation (28) the hot junction temperature follows. The electrical characteristics and geometric dimensions of the TEC used in this work are provided in Table 2.

The net power output of the PV-TEC, with the TEC powered by the tandem module, is given by Equation (33) [38]. To improve the conversion efficiency and enable the sub-cells to respond to a range of the spectrum band, the multi-junction cells are interconnected in series, as shown in Figure 4.
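The module-property expressions referred to above are not reproduced in the extracted text; the sketch below therefore uses the standard manufacturer-datasheet relations (an assumption, not necessarily the paper's own equations) to recover the module Seebeck coefficient, electrical resistance, and thermal conductance from I_max, V_max, ΔT_max, and the hot-side temperature, together with the usual cold-side heat balance.

```python
def tec_module_properties(I_max, V_max, dT_max, T_h):
    """Assumed datasheet relations for a TEC module: Seebeck coefficient,
    electrical resistance and thermal conductance from the rated quantities."""
    alpha_m = V_max / T_h
    R_m = V_max * (T_h - dT_max) / (I_max * T_h)
    K_m = V_max * I_max * (T_h - dT_max) / (2.0 * T_h * dT_max)
    return alpha_m, R_m, K_m

def tec_cold_side_heat(I, T_c, dT, alpha_m, R_m, K_m):
    """Heat pumped at the cold junction: Q_c = alpha*I*T_c - I^2*R/2 - K*dT."""
    return alpha_m * I * T_c - 0.5 * I**2 * R_m - K_m * dT

# Example with hypothetical (not Table 2) ratings
a_m, R_m, K_m = tec_module_properties(I_max=6.0, V_max=14.4, dT_max=68.0, T_h=300.0)
print(tec_cold_side_heat(I=1.2, T_c=293.0, dT=7.0, alpha_m=a_m, R_m=R_m, K_m=K_m))
```

Running the TEC at a fraction of I_max, as done in the results below, trades pumping capacity against the I²R input power the tandem module must supply, which is why the optimum input current is well below the rated maximum.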
As a result of this series interconnection, the open-circuit voltage and short-circuit current density of the multi-junction device are given by Equations (34) and (35) [39], where the spectral response is given by Equation (36). Substituting Equations (27), (32), (34), (35), and the last term of (28) gives the expression in Equation (37). The fill factor is expressed in terms of β = qV_oc/(n k_B T_pv); from a series expansion, the natural logarithm of β + 1 can be written as a power series. The combination of these two relations can be simplified by applying numerical and analytical approaches: the fill factor can be fitted to a polynomial function using the expression in Equation (40), where α, γ, and ℘ are constants. Substituting the expression in Equation (40) and β = qV_oc/(n k_B T_pv) into Equation (37), and assuming the cold junction temperature of the TEC to equal the tandem module temperature (justified because the module thickness is only nanometric), the expression can be rearranged and simplified into a form in which the auxiliary quantities ϑ''', ϑ'', and ϑ' are defined accordingly.

The overall efficiency of the entire system, comprising the multi-junction perovskite/Si, the TEG, and the cooler, follows directly.

| Analysis of the TE fin

For evaluating the TE fin thermal parameters, the fin thermal resistance is expressed through h_cv and A_eff, defined respectively as the convective heat transfer coefficient and the effective heat transfer area of the fin. h_cv can be estimated from the Prandtl number Pr_f of the fluid and the Reynolds number Re of the flow, which are evaluated from μ_f, c_pf, k_f, v_ef, and ν_f: the dynamic viscosity, the specific heat at constant pressure, the thermal conductivity, the flow velocity, and the kinematic viscosity of the fluid, respectively. The effective heat transfer area involves the fin efficiency η_f. The fin geometrical properties are depicted in Figure 5, while the geometrical and thermal parameters used to define the fin in the numerical modeling equations are provided in Table 3.

| Modeling the halide composition variation

It is essential to note that the maximum wavelength λ_max of the solar spectrum which can be used for the photoelectric effect changes as the split wavelength λ_s varies [34]. For the perovskite CH3NH3PbI3(1-x)Br3x (0 ≤ x ≤ 1), the band gap energy depends on the halide composition x and is expressed as E_g = 1.55 + 0.75x (in eV); in this case, the halide is bromine.

| RESULTS AND DISCUSSION

The thermodynamic modeling of a tandem perovskite/Si-TEC hybridized with a TEG module, using a dichroic beam splitter as the spectrum splitter, was carried out. The study used a blend of analytical and numerical approaches combining the MATLAB, Python, and Excel spreadsheet environments. The computation was performed under an AM 1.5 solar spectral irradiance, solar concentrations ranging from 1 to 3, and fractions of the TEC maximum current as input, namely 0.2I_max, 0.4I_max, 0.6I_max, and 0.8I_max. Since perovskite cells are still undergoing material development, this work emphasizes what their performance would be when applied in a complex system. Through numerical computations and fittings, the fill factor constants α, γ, and ℘ were found to be -0.002, 0.0706, and 0.2051, respectively. To utilize the wavelength dependence of the spectral irradiance in the mathematical computation, Simpson's 3/8 rule was used to numerically integrate the AM 1.5 spectral irradiance; the result is presented in Figure 6.
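The halide band-gap relation and the Simpson 3/8 integration used here can be sketched together as follows; the spectral irradiance array is assumed to be loaded from tabulated AM 1.5 data, which is not reproduced here.

```python
import numpy as np

H_C = 6.62607015e-34 * 2.99792458e8   # h*c [J m]
EV = 1.602176634e-19                  # 1 eV in J

def bandgap_eV(x):
    """CH3NH3PbI3(1-x)Br3x band gap versus bromine fraction, 0 <= x <= 1."""
    return 1.55 + 0.75 * x

def lambda_max_nm(x):
    """Longest wavelength usable for the photoelectric effect at composition x."""
    return H_C / (bandgap_eV(x) * EV) * 1e9

def simpson38(y, x):
    """Composite Simpson 3/8 rule on a uniform grid; the number of
    intervals, len(x) - 1, must be a multiple of 3."""
    n = len(x) - 1
    assert n % 3 == 0
    h = (x[-1] - x[0]) / n
    w = np.ones(n + 1)
    w[1::3] = 3.0
    w[2::3] = 3.0
    w[3:-1:3] = 2.0                   # interior points at multiples of 3
    return 3.0 * h / 8.0 * np.sum(w * y)

print(f"lambda_max at x = 0: {lambda_max_nm(0.0):.0f} nm")  # ~800 nm
```

Note that for x = 0 the relation returns λ_max ≈ 800 nm, which coincides with the split wavelength at which the highest system efficiency is reported below; integrating the tabulated spectrum with simpson38 up to this cutoff gives the irradiance available to the PV band.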
However, for this hybrid system thermodynamic numerical analysis, the bandgap energy of the perovskite (CH 3 NH 3 Pb 3(1Àx) Br 3x (for 0 ≤ x ≤ 1)) is dependent on the halide composition, x, which in this case is Bromine. The result of influence of the halide composition (Bromine) to the output current density of the perovskite/Si-TEC at different cell temperatures of 30 C, 35 C, 40 C, 45 C, and 50 C, concentration ratio, C = 3 and 0.2 of the TEC maximum current (I c ¼ 0:2I max ) is shown in Table 4. From Table 4, it was observed that the Bromine composition affects the output current density of the system. For all temperatures used (303K, 308K, 313K, 318K, and 323K), current density, J sc increases as the temperature increases and reduces as the Bromine composition, x decreases. For 0.5 ≤ x ≤ 1.0 (1.93 ≤ Eg ≤ 2.3), the value of the tandem module current density seems to be constant except for 303K, 318K, and 323K which showed equal absolute variation of 0.001. The system performance for current density variation with solar irradiance and concentration factor for different TEC input current (a) I c ¼ 0:2I max ; (b) I c ¼ 0:4I max ; (c) I c ¼ 0:6I max ; and (d) For all values of G and I c (0:2I max , 0:4I max , 0:6I max , and 0:8I max ) with the tandem module held at a temperature of 293K with a split wavelength of 800 nm Figure 7ad, the current density J sc , progressively decreases as the concentration ratio increases through a nonlinear polynomial path. The plot of current density variation with solar irradiance within a short temperature interval for different TEC input current and concentration factor I c ¼ 0:2I max , I c ¼ 0:4I max , I c ¼ 0:6I max , and I c ¼ 0:8I max is shown in Figure 8a-d. Unlike the plots in Figure 7a-d, the group of plots shows minute variation in the temperature of the cell. Regardless of the variation in temperature, the unit value concentration still remains in the plot with the highest perovskites/Si current density values for all values of IC utilized in the computation (ie, 0.2Imax, 0.4Imax, 0.6Imax, and 0.8Imax). The plot however is quadratic and the difference in values for the different concentration plot is small. For clarity of this statement, a plot with wider temperature range is needed as shown in Figure 9, for 0:8I max . The combined system efficiency dependence on the perovskite/Si-TEC current density and voltage is shown in Figure 10. From the 3D graphical plot, the plot with concentration ratio C = 1 has the highest efficiency amidst the three-concentration ratio (C = 1-3). From conventional PV module design, it is at least expected that the module efficiency increases after certain concentration ratio values before it begins to drop. But in this case, the graphical nature is contrary to the conventional and the reason is that beyond C = 1 much electrical power is required by the TEC to bring the perovskite/Si to a temperature of 293K. Carefully observing the plot, it is also seen that the planes of J sc and V oc have an edge trajectory that is akin to an exponential function. Viewing the plot through the efficiency and J sc plane, the graph is seen to be twisted F I G U R E 6 1.5AM integrated spectrum T A B L E 3 Geometrical and thermal properties of the fin 40 faintly and the efficiency drops as the current density increases across 6 V. perovskite/Si module to run the TEC to sustain the PV cell temperature at 293K. 
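The bandgap relation underlying Table 4 can be made concrete as follows. The hc/E_g conversion to a cutoff wavelength is the standard photoelectric limit rather than a formula quoted from the paper (whose λ_max expression is garbled); note that it reproduces the 1.93-2.3 eV range cited for 0.5 ≤ x ≤ 1.

```python
# Sketch: bandgap of the mixed-halide perovskite as a function of the Br
# fraction x (Eg = 1.55 + 0.75*x eV, as stated in the text) and the longest
# wavelength usable for the photoelectric effect, lambda_max = h*c/Eg.
H_C_EV_NM = 1239.84   # h*c in eV*nm (standard constant, not a paper value)

def bandgap_eV(x):
    return 1.55 + 0.75 * x

def lambda_max_nm(x):
    return H_C_EV_NM / bandgap_eV(x)

for x in (0.0, 0.2, 0.5, 1.0):
    print(f"x = {x:.1f}: Eg = {bandgap_eV(x):.2f} eV, "
          f"lambda_max = {lambda_max_nm(x):.0f} nm")
```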
Viewing the graphical plot from the η sys À G plane, it is seen that the efficiency drops asymptotically without converging to zero as the irradiance increases. The reason for this is that at high irradiance much heat needs to be removed from the PV module to bring its temperature to 293K. For this reason, much power must be extracted and utilized by the TEC. The V oc also increased progressively with a maximum combined system efficiency of about 42%. However, beyond 400 W m À2 , the plots were all asymptotic for all V oc values. This is because the beam was split into UV and IR, and the range under consideration on the perovskite/Si, is the UV. The UV not having thermal effect but light will have temperature at the module surface within very close and low range. Therefore, F I G U R E 8 (A) Current density variation with solar irradiance within a short temperature interval for different TEC input current and concentration factor I c ¼ 0:2I max . (B) Current density variation with solar irradiance within a short temperature interval for different TEC input current and concentration factor I c ¼ 0:4I max . (C) Current density variation with solar irradiance within a short temperature interval for different TEC input current and concentration factor I c ¼ 0:6I max . (D) Current density variation with solar irradiance within a short temperature interval for different TEC input current and concentration factor I c ¼ 0:8I max . TEC, thermoelectric cooler F I G U R E 9 Wider temperature plot for clarity of curve nature of Figure 8 small amount of electrical power would be responsible for bringing the cells under the required operating condition. Figure 12 presents a 2D graphical plot of the perovskite/Si-TEC current density against the split/cutoff wavelength of the dichroic spectrum splitter. The plot analysis was evaluated with the perovskite/Si kept at room temperature and the TEC operating with 20% less of its maximum design current. The plot demonstrated that the perovskite/Si-TEC current density for concentration factor of 1 is higher than that of 2 and 3 with maximum values of 5.1545, 4.9458, and 4.8122 A/cm 2 , respectively. Uniquely, all plots show a sharp downward slope from values range of 400 nm to 526 nm of the split wavelength. This slope emanated from the sharp increase in the solar spectrum within wavelength range of 400-526 nm (see Figure 6) and manifested downwardly because of the electric power expended by the TEC to maintain the cell at operating temperature. In comparison with previous studies that focused on monolithic standalone perovskite/silicon solar cells that recorded maximum efficiency of 23.6%, 35 it is noticed that the proposed tandem system hybridized with TE device had a maximum efficiency of 42%. This signified that the maximum efficiency of the proposed system was 1.81 times higher than that of the previous standalone system due to the inclusion of TE devices in the hybrid system design. | CONCLUSION The thermodynamic modeling and analysis of a spectrum splitting concentrated perovskite/silicon tandem solar cell integrated with TE devices was carried out in this work to further improve the performance of the standalone perovskite/silicon solar cell. A dichroic beam splitter was used as the solar concentrator to enable the broad utilization of the entire solar spectrum. The light component from the Sun was utilized by the tandem perovskite/silicon solar cell for energy conversion while the heat component was channeled to a TEG for power generation. 
A TEC was directly lapped to the backplate of the tandem solar cell for temperature regulation; and consequent efficiency enhancement. The accuracy of the numerical model was ensured by accounting for temperature dependency in the variable cross-sectional area TE legs and all convective and radiative losses were fully accounted for in the developed equations. The analysis was made comprehensive enough to show the effects of F I G U R E 1 0 The system efficiency variation of the current density and voltage for concentration factor of C, 1-3 F I G U R E 1 1 The system efficiency variation of the PV voltage for concentration factor of 1-3 F I G U R E 1 2 The variation of current density with cutoff wavelength varying the halide composition on the combined system's efficiency. Based on the results obtained, the following conclusions can be made: • A combined system efficiency of 42% was obtained from the proposed tandem perovskite/silicon solar cell integrated with TE devices when a unity solar concentration ratio was used at a split wavelength of 800 nm. This was 1.8 times higher than the efficiencies reported for standalone tandem perovskite/ silicon solar cells. • For the temperature range considered, the Bromine composition exerted a significant effect on the hybrid system's performance only for Bromine compositions varying from 0.2 to 0.5. • For a Bromine composition of 0.2, it was observed that varying the halide composition only yielded a higher system efficiency for a wavelength range of 300-400 nm. It was also forecasted that when the hybrid system is exposed to higher concentrated solar irradiances, the significance of varying the Bromine composition becomes very notable. • Maintaining the system's backplate temperature at 293K using a TEC was disadvantageous to the system's efficiency when a dichroic beam splitting concentrator was used. Furthermore, the power extracted from the solar cell to power the TEC was disadvantageous to the system's performance. Hence, it becomes unnecessary to use a TEC if the solar cell is operating within a functional working temperature range.
6,713
2022-08-14T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Combining Methods of Lyapunov for Exponential Stability of Linear Dynamic Systems on Time Scales Consider the linear dynamic equation on time scales ( ) ( ) ( ) ( ) ( ) [ ) 0 0 0 , ; , , , T x t A t x t f t x x t x t t ∆ = + = ∈ ∞ (1) where ( ) . n x R ∈ , ( ) [ ) ( ) ( ) 0 . , , rd n T A C t M R ∈ ∞ , [ ) 0 : , n n T f t R R ∞ × → is a rd-continuous function, T is a time scales. In this paper, we shall investigate some results for the exponential stability of the dynamic Equation (1) by combinating the first approximate method and the second method of Lyapunov. T x t A t x t f t x x t x t t where ( ) , , → is a rd-continuous function, T Introduction Let n R be a n-dimension Euclidean space, T be a time scales (a nonempty closed subset of R).We denote For convenience, we shall use the notions which appear in the book by Bohner and Peterson (see [1] [2]).The notions related to the Lyapunov function that we use follow the results of B. Kaymakcalan (see [3]).For necessary, we recall them in this process. We consider a dynamic equation ,0 0 F t = .We suppose that F satisfies all conditions such that (2) has a unique solution ( ) 0 0 , , x t t x with ( ) 0 0 x t x = .In this paper, we define the stable notions of the trivial solution ( ) 0 x t = of (2) as the followings: Definition 1.The trivial solution ( ) 0 x t = of (2) is stable on 0 t T + forall 0 ε > , there exists ( ) Definition 2. The trivial solution ( ) 0 x t = of (2) is asymptotically stable if it is stable and there exists ( ) In these definitions, if the numbers δ and 0 δ do not depend on 0 t , we say that the trivial solution of ( 2) is uniformly stable (uniformly asymptotically stable). Definition 3. The trivial solution ( ) In the simple case (see [2]), consider the dynamic equation The solution of ( 3) is exponential function ∫ . We recall some properties of the exponential function which are used later.Assume S T is set of exponential stability of ( ) , e t s λ (see [4]).Theory of stability of dynamic equation on time scales is an area of mathematics that has recently received a lot of attention (see [1] [2] [4]- [7]).And almost of the results which involve the methods of Lyapunov to investigate the stability, have been developed and obtained the interesting results to expand for dynamic equation on time scales.Besides that the criterions and sufficient conditions were given, there were short of some particular examples.We know that the calculus for functions on general time scales is complex and difficult to implement.In order to overcome obstacles, in some cases we can combine the different methods of Lyapunov to investigate the stability of the solution.The content of this paper contains two parts: the first part presents the sufficient conditions following the first approximate method for the exponential stability of the solution of the linear dynamic Equation ( 1) on time scales.The second one gives some specific examples for applications.Besides the part two we add a theorem about the stability of the solution following the second method of Lyapunov.This theorem can be seen as a corollary of the stable criterion which was presented in [3]. The Stability of Linear Dynamic Equation under Perturbation on Time Scales Consider the dynamic equation . , In proportion to the system (4), we consider where ( ) , We assume that ( ) . 
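The symbols of Equation (1) and of the scalar test equation are scattered by the text extraction; a plausible LaTeX rendering, reconstructed from the abstract and standard time-scale notation (an interpretation, not a verbatim restoration), is:

```latex
% Equation (1): perturbed linear dynamic equation on a time scale T
x^{\Delta}(t) = A(t)\,x(t) + f\bigl(t, x(t)\bigr), \qquad
x(t_0) = x_0, \qquad t \in [t_0, \infty)_{\mathbb{T}},
% with x(t) \in \mathbb{R}^{n},
% A \in C_{\mathrm{rd}}\bigl([t_0,\infty)_{\mathbb{T}},\, M_{n}(\mathbb{R})\bigr),
% and f \colon [t_0,\infty)_{\mathbb{T}} \times \mathbb{R}^{n} \to \mathbb{R}^{n}
% rd-continuous.

% Equation (3): the scalar test equation solved by the time-scale
% exponential function
x^{\Delta}(t) = \lambda(t)\, x(t), \qquad x(t) = e_{\lambda}(t, t_0)\, x_0 .
```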
We denote ( ) ( ) = is exponential matrix of ( 5) with ( ) 0 0 x t x = .We easily verify that ( ) ( ) We assume that the trivial solution of (5) is exponentially stable, there exists 0 K > , ( ) then the trivial solution of ( 4) is exponentially stable if one of these conditions is satisfied i) ii) There exists a function Proof.We assume that ( ) x t is the solution of ( 4) with ( ) ( ) With 0 q < , we can choose 0 ε > , which is sufficiently small and 0 q εγ + < .So that the trivial solution of ( 4) is exponentially stable on 0 t T + .For ii), by argument similarly as in i), the proof is completed. The Stability of Scalar Dynamic Equation on Time Scales For convenience, the first we consider the scalar dynamic equation where Theorem 5. We assume that ( ) Then the trivial solution of ( 6) is exponentially stable if one of these conditions is satisfied i) ii) There exists a function where is the solution of ( 6) with ( ) 0 0 x t x = , we have ( x t e t t x e t s f s x s s By argument similarly as the proof in theorem 4, we obtain results. In the next part, for convenience to investigate the stability in specific examples, we represent a theorem about the sufficient condition for the exponential stability of the trivial solution of system (2).This result can be seen as a corollary of the stable criterion B. Kaymakcalan (see [3]). We assume 0 : the solution of ( 2) with ( ) 0 0 x t x = .Then derivative of ( ) , V t x following the trajectory of ( ) x t is defined by . x with above properties is a Lyapunov function.Theorem 6.We assume that there exists function 0 : × → is a Lyapunov function which satisfies the following conditions , where 1 0 λ > and 1 α ≥ are positive real numbers, ( ) is exponentially stable then the trivial solution of ( 2) is also exponentially stable. Proof.By the assumption the trivial solution of ( 7) is exponentially stable, then the maximal solution ( ) r t of ( 7) with ( ) .By theorem 2.1 (see [3]) we obtain ( ) ( ) Using the assumption, we have ( ) Therefore By the assumption 1 α ≥ implies the trivial solution of ( 2) is exponentially stable. Applications In this part, we represent some examples of applications.Example 1. Assume that , α β are positive constants.These functions ( ) satisfy one of the conditions i) or ii) of theorem 4. Consider system , , , , We assume that ( ) ∀ ∈ in order that system (8) has the trivial solution.We consider ( ) ( ) In order to investigate the stability of (9), we choose Lyapunov function ( Therefore the derivative of right-hand side of (9) is , , V x y z x y z = + + , we obtain system (10), we investigate the stability of the trivial solution of system ) , By using the results of theorem 6, the trivial solution of (9) is exponentially stable.Therefore following theorem 4, the trivial solution of (8) is exponentially stable.
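Since the worked examples above are difficult to follow in the extracted text, the following small numerical sketch (not taken from the paper) illustrates the perturbation idea of the theorems above on the uniform time scale T = hZ, where the delta derivative reduces to a forward difference; all parameter values are placeholders.

```python
import numpy as np

# On T = h*Z the delta derivative is x^Delta(t) = (x(t+h) - x(t)) / h, so the
# perturbed scalar equation x^Delta = lam*x + f(t, x) steps as
#   x(t+h) = x(t) + h*(lam*x(t) + f(t, x(t))).
# The unperturbed equation is exponentially stable when |1 + h*lam| < 1, and a
# small perturbation bound |f(t,x)| <= gamma*|x| preserves the decay.
h, lam, gamma = 0.1, -5.0, 0.5                # placeholder parameters
f = lambda t, x: gamma * np.sin(t) * x        # perturbation with |f| <= gamma*|x|

x, history = 1.0, []
for k in range(200):
    t = k * h
    x = x + h * (lam * x + f(t, x))           # one delta step on T = h*Z
    history.append(abs(x))

print("|1 + h*lam| =", abs(1 + h * lam))              # 0.5 < 1: stable base case
print("max |x| over last 50 steps:", max(history[-50:]))  # decays toward 0
```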
1,744.4
2014-12-01T00:00:00.000
[ "Mathematics" ]
Uncertainty quantification for impact location and force estimation in composite structures Structural health monitoring of impact location and severity using Lamb waves has been proven to be a reliable method under laboratory conditions. However, real-life operational and environmental conditions (vibration noise, temperature changes, different impact scenarios, etc.) and measurement errors are known to generate variation in Lamb wave features which may significantly affect the accuracy of these estimates. Therefore, these uncertainties should be considered, as a deterministic approach may lead to erroneous decisions. In this article, a novel data-driven stochastic Kriging-based method for impact location and maximum force estimation, that is able to reliably quantify the output uncertainty is presented. The method utilises a novel modification of the kriging technique (normally used for spatial interpolation of geostatistical data) for statistical pattern matching and uncertainty quantification using Lamb wave features to estimate the location and maximum force of impacts. The data was experimentally obtained from a composite panel equipped with piezoelectric sensors. Comparison with a deterministic benchmark method developed in prior studies shows that the proposed method gives a more reliable estimate for experimental impacts under various simulated environmental and operational conditions by estimating the uncertainty. The developed method highlights the suitability of data-driven methods for uncertainty quantification, by taking advantage of the relationship between data points in the reference database that is a mandatory component of these methods (and is often seen as a disadvantage). By quantifying the uncertainty, there is more information for operators to reliably locate impacts and estimate the severity, leading to robust maintenance decisions. Introduction Composite structures are susceptible to barely visual impact damage (BVID) which may result in a significant reduction in residual strength and necessitates either more complex (e.g. non-visual) routine inspection methods or increased design safety margins, all of which lead to increased operational costs. 1 Thus, there is interest in developing impact damage monitoring systems which can reliably inform operators of the occurrence and location of impact damage, allowing a condition-based maintenance and thus reduction in costs. 1 In thin-walled structures (as is usually found in composite airframe), impact events generate Lamb waves which propagate across the structure and can be recorded using optimally positioned (to provide best coverage and information with minimum sensors 2,3 ) algorithms (as no algorithm is ever exact), [5][6][7][8]13,14,16 results in uncertainty in the final estimations. So far, most methods developed for impact location and severity estimation are deterministic. 1,6,9,13,[16][17][18][19] This poses a problem for operators, as an estimate with an unknown degree of uncertainty is not reliable information for assessing the condition of a structure or locating damage that needs to be repaired. Thus, it is necessary to develop impact location and severity estimation algorithms with quantified uncertainty. 7,10,12,14 For this purpose, uncertainty estimates must be reliable (e.g. wide enough), such that the actual value reliably falls within the predicted range. 
However, the predicted range must be as narrow as possible to not be vague and give enough certainty for assessing the impact location and severity. For impact location estimation, the most basic form of uncertainty quantification can be seen in imaging methods. [20][21][22] These methods generate contour maps that highlight regions with the highest likelihood of the impact to be located. Other methods involve strategies for multiple sampling (e.g. Monte Carlo simulations) to approximate the distribution of uncertainty within measurements of signal features and estimation algorithms. 7,12,14,23 Morse et al. 8 trained multiple localisation neural networks on the same dataset to capture the uncertainty of neural network estimates. This is then followed by the use of Kalman filters or Bayesian updating to combine the estimates of these neural networks and form the final uncertainty distribution of the impact location coordinates. Niri et al. 10 and Sarego et al. 12 captured measurement uncertainties of impact localisation features which were then propagated through localisation algorithms (through Monte Carlo simulations or Kalman filters) to obtain multiple location estimations that form the final uncertainty distribution of the estimated impact location coordinates. Similarly, for impact severity estimation, previous studies have propagated measurement uncertainties through impact force estimation algorithms (e.g. with Monte Carlo simulations) 15,23 or iterative sampling to generate multiple estimations which are combined using Bayesian updating. 7,14 These studies, however, have focused on the reconstruction of the complete impact force history 7,14,15,23 which is not completely necessary for impact damage monitoring as the main parameter for force-based damage assessment is the maximum/ peak impact force which is evaluated against the threshold force of damage formation. 13,24 In previous studies, authors have developed datadriven deterministic impact location 5,6,16 and maximum impact force 13 estimation algorithms that are accurate and robust for various simulated environmental and operational conditions (temperature, vibration noise, and different impact configurations). Here, these methods are extended to include uncertainty quantification/estimation with a novel application of the kriging technique. 25 The proposed method takes advantage of the relationship between the data points 25 used in the reference database of the data-driven methods to quantify the uncertainty without the need for expensive Monte Carlo simulations. In this article, the robustness and accuracy of the estimates on a simple (flat panel) and a complex (stiffened panel) composite panel under simulated environmental and operational conditions are investigated. For benchmarking, a comparison is made with the previously developed methods for impact location and force estimation with the proposed modified methods to assess the improvement in accuracy and robustness. Experimental setup and signal features for impact location and maximum force estimation Experimental impact data were collected using a handheld instrumented impact hammer (PCB Piezotronics 086C03) that is able to record impact forces (Figure 1(a)). The hammer has two tips, steel and plastic, to simulate impacts from different stiffness objects. The back of the hammer head also has space for a 100 g weight attachment to generate impacts with differing weight. 
Handheld impacts (with random angle and energy, ranging from 50 to 250 N maximum force) were imparted on two sensorised composite specimens, a simple flat (M21 T800s, (0/ + 45/245/90/0/ + 45/245/ 90)s layup) panel and a more complex stiffened (M21 T800s, (45/245/0/0/90/0)s layup) panel, following the layout shown in Figures 1 and 2. The flat panel has a silicone heating mat underneath and a temperature control system to simulate increased temperatures ( Figure 1). To simulate vibration noise, artificial 1 kHz random noise at 20% of the maximum recorded signal amplitude was generated through band filtering of white noise 5,13 and superimposed to the impact signals of the reference case (Figure 1(c)). Impact-induced Lamb wave signals were recorded through the attached PZT sensors on each plate (Figures 1 and 2), which are connected to an 8-channel PXI-5105 oscilloscope through 10x attenuation probes. Signals were recorded at 2 MS/s with a length of 100,000 sample points. As the force measurement from the impact hammer requires one oscilloscope channel, thus not all eight PZT sensors are used on each panel. For the flat panel, six sensors were used as shown in Figure 1, while only five sensors were used ( Figure 2) on the stiffened panel due to damaged sensors from previous testing campaigns. All data processing was done using MathWorks MATLAB. In total, data from six impact cases (F1-F6) representing parametric variations (reflecting simulated environmental and operational conditions) from a reference case (F1) were collected from the flat panel, while one case (S1) was collected from the stiffened panel as listed in Table 1. For each case, impacts were conducted at each location as shown in Figures 1 and 2 with four repetitions each, resulting in the total number of impacts shown in Table 1. None of the impacts generated damage on the panels. For the reference case of each panel (F1 and S1), an impact from each location (35 impacts) was used to build the reference database 5 of each plate for the impact location and force estimation algorithms, while the rest of the impact data was used as test data. For the purpose of impact localisation, the ToA difference of the Lamb wave signals from each sensor towards the earliest signal from the sensor set (as the actual impact event time is unknown) was chosen as the input feature. To mitigate the effect of the simulated vibration noise (case F5) in masking the start of the signal (Figure 1(c)), all signals were highpass filtered with a 2 kHz Butterworth IIR filter prior to ToA extraction. The ToA was then extracted using the normalised smoothed envelope threshold (NSET) method developed in Seno et al. 6 First, the signal was converted into its absolute values, then it was lowpass filtered with a 250 Hz IIR Butterworth filter. Afterwards, the amplitude of the signals from each sensor is normalised with respect to the largest amplitude in the set of sensors. The ToA was then taken when the resulting envelope passes a predetermined threshold (0.05 used here). For maximum impact force estimation, the maximum absolute (MA) signal amplitude was used as the input feature. No highpass filtering was done beforehand even in the presence of simulated random vibration noise (case F5), as the maximum impact force estimation method developed in Seno and Aliabadi 13 has been shown to be robust towards the change in amplitude. 
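A sketch of the NSET ToA-difference extraction described above; this is a hypothetical implementation in which the Butterworth filter orders and the synthetic signals are assumptions, and scipy stands in for the MATLAB processing used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2_000_000  # 2 MS/s sampling rate, as in the experiments

def nset_toa_differences(signals, hp_hz=2_000, lp_hz=250, thresh=0.05):
    """NSET feature extraction: highpass to suppress low-frequency vibration
    noise, rectify, lowpass to a smooth envelope, normalise to the largest
    envelope in the sensor set, take the first 0.05-threshold crossing per
    sensor, and return ToA differences relative to the earliest sensor.
    `signals` is an (n_sensors, n_samples) array."""
    sos_hp = butter(4, hp_hz, btype="highpass", fs=FS, output="sos")
    sos_lp = butter(2, lp_hz, btype="lowpass", fs=FS, output="sos")
    env = np.abs(sosfiltfilt(sos_hp, signals, axis=1))
    env = sosfiltfilt(sos_lp, env, axis=1)
    env /= env.max()                                        # normalise over the set
    toa = np.array([np.argmax(row >= thresh) for row in env]) / FS
    return toa - toa.min()

# Synthetic usage: six channels with staggered 5 kHz bursts in noise
rng = np.random.default_rng(0)
sig = rng.normal(scale=1e-3, size=(6, 100_000))
for i in range(6):
    start = 20_000 + 1_000 * i
    t = np.arange(100_000 - start) / FS
    sig[i, start:] += np.sin(2 * np.pi * 5_000 * t) * np.exp(-t / 5e-3)
print(nset_toa_differences(sig))
```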
The maximum measured impact force together with the MA signal amplitude at the reference database points (cases F1 and S1, Table 1) were used to calculate the force gradient for the maximum impact force estimation as will be explained in section 'Force gradient method for deterministic maximum impact force estimation'. To estimate impact location and maximum force, the features ToA and G ma are extracted from an impact on each location (35 3 1 impacts, Table 1) of the reference cases (F1 for the flat panel and S1 for the stiffened panel, Table 1) and used as the reference database of the algorithms (each panel has its own specific database). The features ToA and A ma from the other impacts of the reference cases (F1 and S1, 35 3 3, see Table 1) and the other cases (F2-F6, see Table 1) were used as inputs for testing. Benchmark impact location and maximum force estimation methods The benchmark data-driven impact location method (reference database method 5 ) and maximum impact force estimation method (force gradient method 13 ) were taken from previous studies which have proven to be accurate and robust for simple structures under simulated environmental and operational conditions using minimum initial data. Reference database method for deterministic impact location estimation The reference database method is essentially a pattern matching method where the impact location estimate is obtained based on the similarity of features (ToA differences used in this study) of the incoming signal to that of signal features known for specific locations in a reference database (obtained from the reference impact cases F1 and S1). 5,9 To do so, first, the absolute difference (Df i ) between the incoming signal's features (f in ) and the known features in the reference database (f ref ) is calculated, as shown in equation (1), for all sensors (Ns) and all reference database points (Nr). The location estimate is then taken as the location of the database point with minimum difference (Df) Using the above definition, the location estimation is limited to points that exist in the reference database. To achieve a more thorough coverage of locations without requiring too many initial data points to build the reference database, cubic spline interpolation is used (selected from MATLAB library of interpolation functions that can handle non-uniform grids as used in the stiffened panel) to approximate the features at locations between the reference database points (two points between reference points are interpolated in this study). To increase the robustness of the final estimate, rather than calculating a single difference value for each reference point (Df i ), all the features available are used at once. Multiple values are generated by creating different combinations of features used, for example, if there are six sensors, then using combinations of five yields 12345, 12346, 12456, and so on, resulting in six feature sets and location estimates in the end. The location estimates are subsequently averaged to obtain the final estimate. Here, combinations of five (out of six) ToA difference values for the flat panel and four (out of five) for the stiffened panel were used. 
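A sketch of the reference-database matching with sensor-feature combinations described above; the data are synthetic and the cubic-spline densification of the database is omitted for brevity.

```python
import numpy as np
from itertools import combinations

def reference_db_locate(toa_in, toa_ref, xy_ref, n_use=5):
    """Benchmark reference-database localisation: for each leave-one-out
    sensor combination, pick the database point whose ToA-difference features
    are closest (summed absolute difference, Equation (1)) to the incoming
    features, then average the picked coordinates.
    toa_ref: (n_points, n_sensors), xy_ref: (n_points, 2)."""
    n_sensors = toa_ref.shape[1]
    picks = []
    for combo in combinations(range(n_sensors), n_use):
        idx = list(combo)
        diff = np.abs(toa_ref[:, idx] - toa_in[idx]).sum(axis=1)
        picks.append(xy_ref[np.argmin(diff)])
    return np.mean(picks, axis=0)

# Toy usage with a 35-point grid database (synthetic feature values)
rng = np.random.default_rng(1)
xy_ref = np.array([(x, y) for x in range(5) for y in range(7)], dtype=float)
toa_ref = rng.random((35, 6)) * 1e-4
toa_in = toa_ref[12] + rng.normal(scale=2e-6, size=6)    # noisy copy of point 12
print(reference_db_locate(toa_in, toa_ref, xy_ref), "vs actual", xy_ref[12])
```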
Force gradient method for deterministic maximum impact force estimation The force gradient method is based on the linear relation between the maximum impact force (F max ) and the MA signal amplitude (A ma ), as shown in equation (2), which is a faster and simpler way of estimating the maximum impact force than reconstructing the whole impact force history. 13 Once the gradient (G ma ) is known (i.e. calculated from the pair of F max and A ma from the impact location of the reference cases, F1 and S1, Table 1), it is possible to estimate the maximum impact force from signals with varying amplitude. The gradient of this linear relation has been found to be constant for most simulated environmental and operational conditions (except very soft impactor materials, e.g. silicone) and low (non-damaging impacts) or high (damaging) energy impacts, but it is location (x, y) and temperature (T) dependent 13 Lamb wave amplitude is known to have an inverse linear relation with temperature (T), 13,26 thus it is necessary to apply a compensation factor (a) relative to a certain reference temperature (T ref ). For the panel used in this study, the linear relation has been determined in a previous study (a = 20.0026, T ref = 24°C 13 ) and the results were used here, mostly for the increased temperature case (case F3). As the gradient also depends on location, for the reference database, the measured maximum impact force and MA signal amplitude from the reference case (F1 and S1) were used to obtain the gradient value at each location. As each sensor can give an estimate, the final force estimate was taken from the average from all sensors. Similar to the localisation method described in section 'Reference database method for deterministic impact location estimation', cubic spline interpolation is used to approximate the gradient values at locations outside of the original reference points. The gradient values are location-dependent; therefore, to obtain the appropriate value, the impact location must be known beforehand. Here, the location estimate results from the localisation algorithm described in section 'Reference database method for deterministic impact location estimation' were used. This approach has a disadvantage, as the error of the location estimation is carried over to the estimated maximum force gradient values. Figure 3 illustrates how the deterministic localisation and force estimation methods are combined. Uncertainty quantification method for impact location and maximum force estimation To allow for estimates of the uncertainties obtained using the benchmark location and maximum force estimation algorithms be incorporated in the analysis, a novel application of the kriging technique 25 is presented in this section. Originally, kriging was used for spatial interpolation of geostatistical data; however, here we propose to modify it for statistical pattern matching between the incoming impact features and the features of the reference database points to estimate the location and maximum force of impacts. Using the proposed method, it is possible to not only obtain an estimate but also the uncertainty of the estimate based on the relationship between data points used in the reference database. 
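Before turning to the kriging formulation, the benchmark force-gradient estimate of Equation (2) can be sketched as follows. The temperature-correction form below (scaling the amplitude by 1/(1 + a(T - T_ref))) is an assumed implementation of the stated linear amplitude-temperature relation, since the paper only gives a = -0.0026 and T_ref = 24 C; the gradient and amplitude values are placeholders.

```python
import numpy as np

def estimate_fmax(A_ma, G_ma, T=24.0, a=-0.0026, T_ref=24.0):
    """Force-gradient estimate of the peak impact force, F_max = G_ma * A_ma,
    averaged over sensors, with an assumed temperature compensation."""
    A_corr = np.asarray(A_ma, dtype=float) / (1.0 + a * (T - T_ref))
    return float(np.mean(np.asarray(G_ma, dtype=float) * A_corr))

# Toy usage: six sensors, location-dependent gradients taken from the
# reference database, amplitudes measured with the panel at 40 C
G_ma = np.array([55.0, 48.0, 60.0, 52.0, 57.0, 50.0])   # N per unit amplitude
A_ma = np.array([2.1, 2.4, 1.9, 2.2, 2.0, 2.3])
print(f"F_max estimate: {estimate_fmax(A_ma, G_ma, T=40.0):.1f} N")
```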
25 Kriging method for spatial interpolation Kriging was originally developed for spatial interpolation of geostatistical data, using the spatial correlation between existing reference data points to determine weights for use in a linear combination of the available data points to determine the interpolated value. 25 The spatial correlation between data points may be expressed in many forms, 25 but one of the most widely used is the semivariance (g(h ref )) between data points, as shown in equation (3) ) and are averaged based on the number of data pairs that share the same distance (N h ). This is done for all the reference data points (Nr) in their possible pair combinations. It quantifies how the variance between data points changes (usually increasing) as their respective distance (h ref ) increases (or how the correlation is higher between closer data points). As the distance (h ref ) between data pairs may not always be exactly the same, they are usually grouped into bins (20 bins used in this study) to allow averaging. 25 Afterwards, a semivariance function (g fc (h)) is fitted to the discreet empirical semivariance values (g(h ref )) to allow approximation of the semivariance at a continuous distance range (h) 25 To estimate the value of an interpolated point, first, the distance between the point and the existing reference data points (h in ) is calculated. Afterwards, using the previously obtained semivariance function (g fc (h)), the approximate semivariance values for the reference (g fc (h ref )) and interpolated (g fc (h in )) data point pairs are calculated based on their respective distances (h ref and h in ). Equation (4) is then solved (I j is a column vector of 1's with j number of rows) to obtain the kriging weights (w i ) and the Lagrange parameter (m) With the obtained values, the interpolated value (z est ) and uncertainty (in the form of a standard deviation, s est , from the interpolated value) can be calculated from equation (5). As the semivariance values are always positive, to ensure that the estimated uncertainty is real-valued, the kriging weights and Lagrange parameter need to be positive-valued. Thus, a non-negative constrained linear least square solver (MATLAB lsqnonneg) is used to solve equation (4) Kriging application for impact location and maximum force estimation In section 'Benchmark impact location and maximum force estimation methods', it was shown that the impact location is determined by simply choosing the point in the reference database that has the minimum difference in features compared to the incoming impact signal. 5 In turn, the maximum impact force gradient is then taken as the value known for the said reference point. This way, there is a significant amount of information discarded as only the information from a single database point is used at the end. Here, we propose to substitute this criterion with the kriging technique which can interpolate the location and also the maximum impact force gradient using the information from all of the reference database points while also estimating the uncertainty. Contrary to the original application of kriging as shown in section 'Kriging method for spatial interpolation', in our case, the value at the point of interest (in the form of the incoming impact signal features) is already known, and what is to be estimated instead is the location of the point. 
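A compact sketch of the ordinary-kriging estimate and uncertainty of Equations (3)-(5), with the weights and Lagrange parameter obtained from a non-negative least-squares solver as described above (scipy's nnls standing in for MATLAB's lsqnonneg); the reference data and semivariance model are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def ordinary_kriging(z_ref, gamma_ref, gamma_in):
    """Kriging estimate and standard deviation.
    gamma_ref: (Nr, Nr) semivariances between reference points,
    gamma_in:  (Nr,)    semivariances between reference points and the query."""
    n = len(z_ref)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma_ref
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(gamma_in, 1.0)
    sol, _ = nnls(A, b)                      # non-negative weights + Lagrange term
    w, mu = sol[:n], sol[n]
    z_est = float(w @ z_ref)                 # Equation (5): weighted estimate
    var = float(w @ gamma_in + mu)           # Equation (5): kriging variance
    return z_est, np.sqrt(max(var, 0.0))

# Toy usage with a linear semivariance model gamma(h) = G_g * h in one dimension
x_ref = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
z_ref = np.array([0.1, 0.9, 2.1, 2.9, 4.2])
G_g = 0.5
gamma_ref = G_g * np.abs(x_ref[:, None] - x_ref[None, :])
gamma_in = G_g * np.abs(x_ref - 2.5)         # query point at x = 2.5
print(ordinary_kriging(z_ref, gamma_ref, gamma_in))
```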
Here, we propose to substitute the actual spatial distance between data points, (which is the difference in spatial coordinates) with the difference in feature values (calculated by the L2 norm of the difference vector). Thus, here data points are grouped not based on spatial closeness but by the similarity of their features. The values to be interpolated are then the actual spatial coordinates (x, y) of the data point locations and the respective maximum impact force gradients (G ma (x, y)). Thus, using the features of the incoming impact signal, it is possible to estimate the location of the impact event and the maximum impact force. Here, the L2 norm of the vector of ToA difference between the reference and interpolated data points was used to calculate their respective 'distances' (between reference data points, h ref , and also between reference and input points, h in ). The semivariance (g(h ref )) of the reference database points was calculated for the estimated variables (z: x, y, G ma (x, y)) of each sensor using equation (3) as shown in Figure 4. A linear function (g fc (h) = G g h) was then fit for all estimated variables to obtain the approximate semivariance values for equation (4). Then, equation (4) was solved to obtain the kriging weights and Lagrange parameter to calculate the final estimate and uncertainty of the impact location coordinates and maximum impact force gradients for each sensor using equation (5). A normal probability distribution function (P) was then formed using the estimate (z est , as mean) and standard deviation (s est ) for all estimated variables. Following the benchmark localisation method in section 'Reference database method for deterministic impact location estimation', the ToA difference values were also used in combinations rather than all at once, resulting in multiple estimate distributions of the impact location coordinates and the maximum impact force gradients. For each of these combinations, there is a further set of maximum impact force gradients from each sensor. The maximum impact force estimate distributions are obtained by multiplying the obtained gradient distributions with the signal amplitude for each sensor, following equation (2). It must be noted that using this approach, the maximum impact force gradients (G ma (x, y)) are determined independently from the estimated location unlike in the benchmark method (section 'Benchmark impact location and maximum force estimation methods', Figure 3), thus the localisation error is not imposed on the maximum impact force estimation. In addition, the kriging method is able to interpolate values between the reference data points due to the fitted semivariance functions, thus there is no need for the cubic interpolation done in the benchmark method. Fusion of impact location and maximum force distributions As the kriging method given in section 'Kriging application for impact location and maximum force estimation' results in multiple estimate distributions for each output variable, here methods to fuse them together are outlined. The first alternative is to simply take the mean of the estimates (as is done with the benchmark method) to obtain a wider and more conservative final estimate distribution as shown in Figure 5. To ensure that the total probability of the distribution is unity, it is divided by the area under the mean probability density function. Another method is to apply Bayesian updating, 8 treating multiple estimates as new information that can update prior estimates. 
It starts with an initial/prior assumption of the estimated variables (P(O)), which in this study was taken as a uniform distribution as there was no initial knowledge of their values. 8 The estimates from kriging were then used to 'update' (P(N|O)) our prior assumption to obtain a more 'informed' estimate (P(O|N)) as shown in equation (6). Here P(N) is a normalising factor (obtained from the area under the distribution of P(N|O) P(O) to ensure that the total probability of P(O|N) is unity. The informed estimate (P(O|N)) then becomes the new prior (P(O)) and is updated with the next available estimate (P(N|O)). This goes on until all of the available estimate distributions (Ne) have been used to obtain the final estimate distribution. This method results in a more narrow final estimate distribution compared to the original data ( Figure 5) as the confidence of the estimation increases when updated with new information As stated in section 'Introduction', the estimated range must be wide enough to ensure that the actual value reliably lies within the bounds of the distribution while at the same time must be narrow enough to give an estimate that is as precise as possible. Both the mean and Bayesian approaches have different characteristics and are thus both tested to compare their performance with the test data. For the maximum impact force, as there are multiple estimates not only from multiple sensors but also from the multiple feature combinations, there are two stages of fusion. Here both the mean and Bayesian approach were tested for both stages, and a third combination where the estimates from all sensors are first fused using the mean approach and the resulting estimates is finally fused using the Bayesian approach. Figure 6 shows the outline of the proposed kriging method accompanied by uncertainty fusion for both impact location and maximum force estimation. Once the final distribution has been obtained, the upper and lower bound of the estimate range is determined within the 95% confidence interval of the distribution. For the impact location, an oval area is drawn around the estimated coordinates (x est , y est ) using the estimate range of the x and y coordinates as the semi- major axes to mark the area at which there is 95% confidence that the impact location is located within. Performance evaluation and comparison of benchmark and proposed methods To evaluate and compare the performance of the proposed methods against the benchmark methods, the metrics used to measure the accuracy of the estimates are defined here. To compare the deterministic accuracy of both methods directly, the error of the estimated values from the benchmark and proposed methods towards the actual values for each test impact in Table 1 were compared. For maximum impact force, the error in estimation (F max.error ) is the difference between the estimate (F max, est ) from either algorithm towards the actual value (F max,act ). As the maximum impact force is random (ranging from 50 to 250 N), the error was taken as percentage towards the actual value as shown in equation (7). To summarise the error for a whole impact case, the mean (F max, mean ) and standard deviation (F max,std ) of the maximum force error for all impacts in the said case were calculated 13 For impact location, the error of the estimated location (x est ,y est ) towards the actual location (x act ,y act ) was calculated from the root square error (RSE) as shown in equation (8). 
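For illustration, the two fusion rules described above (PDF averaging and the successive Bayesian updating of Equation (6)) can be sketched on a common grid as follows; the per-combination estimates are invented for the example.

```python
import numpy as np
from scipy.stats import norm

grid = np.linspace(0.0, 400.0, 4001)          # e.g. x-coordinate in mm
dx = grid[1] - grid[0]

def mean_fusion(pdfs):
    """Average the per-combination PDFs, then renormalise to unit area."""
    fused = np.mean(pdfs, axis=0)
    return fused / (fused.sum() * dx)

def bayesian_fusion(pdfs):
    """Successive Bayesian updating from a uniform prior; each new
    estimate narrows the posterior."""
    post = np.full_like(grid, 1.0 / (grid[-1] - grid[0]))   # uniform prior P(O)
    for p in pdfs:
        post = post * p                                      # P(N|O) * P(O)
        post /= post.sum() * dx                              # divide by P(N)
    return post

# Toy per-combination estimates (mean, std) of an impact x-coordinate
estimates = [(182, 12), (190, 15), (175, 10), (201, 18), (188, 14), (196, 11)]
pdfs = np.array([norm.pdf(grid, m, s) for m, s in estimates])

for name, fused in (("mean", mean_fusion(pdfs)), ("Bayes", bayesian_fusion(pdfs))):
    cdf = np.cumsum(fused) * dx
    lo, hi = grid[np.searchsorted(cdf, 0.025)], grid[np.searchsorted(cdf, 0.975)]
    print(f"{name:5s}: mode = {grid[np.argmax(fused)]:.0f} mm, "
          f"95% range = [{lo:.0f}, {hi:.0f}] mm")
```

As the text notes, the Bayesian rule yields a much narrower 95% range than the mean rule for the same inputs, which is exactly the reliability-versus-precision trade-off discussed in the Results.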
To summarise the localisation errors (RSE) for an impact case (Table 1), a gamma distribution was then fit on the RSE values and the 90th percentile of the cumulative distribution function (RSE 90th ) was then taken as the range from the actual location that encompasses 90% of the localisation errors 5,6 To evaluate the reliability of the uncertainty estimates, the error of the estimate range (upper and lower bounds based on 95% confidence interval) for both the maximum impact force and location was calculated. For the maximum impact force range (F max, range (2) to F max, range( + ) ), the estimate range error (F max,Rerror ) is defined to be 0 when the actual force ( Fmax,act ) is within the estimated range. However, when the actual force is outside of the estimated range, then the error is calculated similar to the deterministic error in equation (7), but relative to the nearest bound (upper or lower) as shown in equation (9). To summarise the estimated range error for a whole impact case, the mean (F max, Rmean ) and standard deviation (F max,Rstd ) for all the impacts in the said case were calculated as is done with the deterministic errors F max, Rerror = (F max, range(�) �F max, act ) F max, act 100%, F max, act <F max, range(�) 0, F max, range(�) <F max, Ract <F max, range( + ) (F max, range( + ) �F max, act ) F max, act 100%, F max, range( + ) <F max, Ract For impact location, the location estimate range was defined as an oval area around the estimated location (x est ,y est ) with the upper (x range( + ) , y range( + ) ) and lower bounds (x range (2) , y range (2) ) of the coordinates as the semi-major axes (x sm ,y sm as the distance from x est ,y est to bounds). The radius of the oval area (RSE lim ) at any angle (u) is given by equation (10). The coordinates of the actual impact location relative to the estimated location (in other words the location error, x err , y err ) can also be defined in polar coordinates (RSE, u). When the radius of the location error (RSE) is smaller than the radius of the oval area limit (RSE lim ) for a given angle, the actual location (x, y) is located inside the estimated oval range and thus the error is 0. When the radius of the location error is larger than the radius of the oval area limit, the actual impact location is outside the estimate range and the error of the oval range is calculated as RSE range = RSE 2 RSE lim . To summarise the localisation error for an impact case, a gamma distribution is fitted to the range error and the 90th percentile value (RSE R90th ) of the cumulative distribution function is taken as done for the deterministic location error Results Table 2 shows the comparison of the deterministic impact location estimation for both the benchmark and the proposed kriging method (using mean (K + M) and Bayesian (K + B) fusion methods). It can be seen that all three methods are considerably accurate for all cases tested (F1-F6 and S1) and there is no significant difference between the accuracy of the deterministic location estimates between these methods. Similarly, comparing the deterministic maximum impact force estimation (Table 3) of the benchmark and proposed kriging method (with mean-mean (K + M + M), Bayesian-Bayesian (K + B + B) and mean-Bayesian (K + M + B) fusion methods), it was found that all methods were considerably accurate for all cases tested (case F1-F6) apart from the stiffened panel (S1). 
This is due to the stiffened panel having more spatial variation of the maximum impact force gradient (and thus higher sensitivity) due to the stiffness difference from each stiffener zone (Figure 2). It is worth noting that as the benchmark method utilises the estimate of the impact location, the localisation errors can introduce inaccuracies in the force estimation. This is apparent in the lower accuracy of the benchmark force estimates compared to the proposed kriging method (which determines the maximum force gradient independent of localisation) for the stiffened panel (S1). Although the deterministic estimates for both impact location and maximum force are mostly accurate, some level of error exists from the various sources of uncertainty that cannot be captured deterministically, especially as shown by the maximum impact force estimates of the stiffened panel (S1, Table 3). Tables 4 and 5 show the estimated accuracy for the stochastic approach using the proposed kriging method to define estimates in terms of a range (instead of a deterministic point) encompassing the estimated degree of uncertainty. For impact location (Table 4), it can be seen that both kriging methods (with mean (K + M) and Bayesian (K + B) fusion methods) have very high reliability, with all actual impact locations being situated within the estimated uncertainty range (cases F1-F6) or being only slightly outside (case S1). Looking at the uncertainty range, it can be seen that using the Bayesian updating method, the estimated location coordinates (x, y) had significantly smaller uncertainty ranges (more than 2x) compared to the mean fusion method, with only a small trade-off in accuracy. Thus, the Bayesian method is the better fusion method for location estimation as it allows for reliable and precise location estimation. For maximum impact force estimation, Table 5 shows the estimation accuracy using the proposed kriging method accompanied by three different fusion methods (Bayesian-Bayesian (K + B + B), mean-mean (K + M + M) and mean-Bayesian (K + M + B)). Using the Bayesian-Bayesian fusion method, the estimated uncertainty range is very narrow up to the level where it becomes inaccurate as some of the actual maximum force values are not encompassed in the estimated range. This is similar to what was encountered by Morse et al. 8 when using successive Bayesian updating or Kalman filters for estimating impact location. This is due to the fact that what is fused are estimates and not actual measurements, thus it will not necessarily converge to the actual measurement (location or force). However, using the mean-mean fusion method results in a very reliable (most actual values encompassed in estimated range) but very wide (up to 6x wider than the Bayesian-Bayesian method) and imprecise uncertainty range which may not be useful for impact severity assessment. This is similar to what is found from other 'conservative' methods, such as the bounded uncertainty approach, where the uncertainty distribution is reduced to a bounded range. 15 Using a combined mean-Bayesian method, a balance of good reliability and uncertainty range precision is achieved, making it more reliable than the Bayesian-Bayesian approach and more precise than the mean-mean approach. The proposed stochastic approach gives a much higher reliability of maximum impact force estimation compared to the deterministic method, especially for cases, such as the stiffened panel (S1) which is sensitive and difficult to estimate deterministically. Figure 8. 
Example of location and maximum impact force estimation for impacts (case S1) on the stiffened panel using the benchmark deterministic method (top) and the proposed kriging stochastic method (bottom). Figures 7 and 8 illustrate an example of the impact location and maximum force estimation using the benchmark and proposed kriging method (K + B for location and K + M + B for impact force). It can be seen that although the deterministic method is quite accurate, there is some degree of inaccuracy stemming from various sources of uncertainty, more so in some cases (such as in location 7 (Figure 8), bottom left of the stiffened panel (S1)) than others. However, the proposed kriging method is able to reliably estimate the uncertainty of the impact location and maximum force estimation (all actual locations and maximum impact force encompassed in estimated range), giving a more robust output for users to locate and assess the severity of an impact event. As mentioned in section 'Introduction', there are two main sources of uncertainty: measurement errors and algorithm inaccuracy. 7,14 The measurement errors are encoded in the reference database points, while the algorithm inaccuracy is represented in the multiple outputs given by feature combinations (section 'Uncertainty quantification method for impact location and maximum force estimation') which are merged with the proposed fusion methods. Because of this, the uncertainty range can be accurate and reliable, even though it is estimated only from the reference database points and not based on actual error measurements of the output. This also allows data-driven uncertainty quantification with a very low amount of reference database points (35) compared to other data-driven methods. An example is location uncertainty estimation with neural networks, as demonstrated by Sarego et al., 12 which used 404 data points that were perturbed into 10,000 using the known uncertainty parameters (Monte Carlo approach). This highlights the suitability of the proposed data-driven method for uncertainty quantification by taking advantage of the information contained in their reference database without the need for expensive methods often used for this purpose (e.g. Monte Carlo simulations). 10,12 Conclusion A novel data-driven stochastic kriging-based method for impact location and maximum force estimation was developed. Comparison with a deterministic benchmark method developed in prior studies 5,6,13 indicates that the proposed method gives more reliable estimates for experimental impacts under various simulated environmental and operational conditions by estimating the uncertainty. The developed method highlights the suitability of data-driven methods for uncertainty quantification by taking advantage of the relationship between data points in the reference database that is a mandatory component of these methods (and is often seen as a disadvantage). By quantifying the uncertainty, it gives more information for operators to reliably locate impacts and estimate the severity for making maintenance decisions.
8,275.4
2021-06-17T00:00:00.000
[ "Engineering", "Materials Science" ]
Transcriptome-wide map of m6A circRNAs identified in a rat model of hypoxia mediated pulmonary hypertension Background Hypoxia mediated pulmonary hypertension (HPH) is a lethal disease and lacks effective therapy. CircRNAs play significant roles in physiological process. Recently, circRNAs are found to be m6A-modified. The abundance of circRNAs was influenced by m6A. Furthermore, the significance of m6A circRNAs has not been elucidated in HPH yet. Here we aim to investigate the transcriptome-wide map of m6A circRNAs in HPH. Results Differentially expressed m6A abundance was detected in lungs of HPH rats. M6A abundance in circRNAs was significantly reduced in hypoxia in vitro. M6A circRNAs were mainly from protein-coding genes spanned single exons in control and HPH groups. Moreover, m6A influenced the circRNA–miRNA–mRNA co-expression network in hypoxia. M6A circXpo6 and m6A circTmtc3 were firstly identified to be downregulated in HPH. Conclusion Our study firstly identified the transcriptome-wide map of m6A circRNAs in HPH. M6A can influence circRNA–miRNA–mRNA network. Furthermore, we firstly identified two HPH-associated m6A circRNAs: circXpo6 and circTmtc3. However, the clinical significance of m6A circRNAs for HPH should be further validated. Background Pulmonary hypertension (PH) is a lethal disease and defined as an increase in the mean pulmonary arterial pressure ≥ 25 mmHg at rest, as measured by right heart catheterization [1]. Hypoxia mediated pulmonary hypertension (HPH) belongs to group III PH according to the comprehensive clinical classification of PH, normally accompanied by severe chronic obstructive pulmonary disease (COPD) and interstitial lung diseases [2]. HPH is a progressive disease induced by chronic hypoxia [1]. Chronic hypoxia triggers over-proliferation of pulmonary artery endothelial cells (PAECs) and pulmonary artery smooth muscle cells (PASMCs), and activation of quiescent fibroblasts, the hallmark of HPH [1,3]. The pathological characteristics of HPH are pulmonary vascular remolding, pulmonary hypertension, and right ventricular hypertrophy (RVH) [4]. So far there is no effective therapy for HPH [2]. More effective therapeutic targets are needed to be discovered. Circular RNAs (circRNAs) were firstly found abundant in eukaryotes using RNA-seq approach [5][6][7]. Pre-mRNA is spliced with the 5′ and 3′ ends, forming a 'head-to-tail' splice junction, then circRNAs are occurred [5]. According to the genome origin, circRNAs may be classified into four different subtypes: exonic circRNA, intronic circRNA, exon-intron circRNA and tRNA introns circRNA [5]. Cir-cRNAs are reported to play crucial roles in miRNA binding, protein binding, regulation of transcription, and posttranscription [5,8]. Recent reports indicated that circRNAs can translate to proteins [8,9]. Moreover, circRNAs are widely expressed in human umbilical venous endothelial cells when stimulated by hypoxia [10,11]. Up to date, only a few reports mentioned PH-associated circRNAs. Cir-cRNAs expression profile is demonstrated in HPH and chronic thromboembolic pulmonary hypertension [12]. However, the post-transcript modification of circRNAs in HPH is still unknown. N 6 -methyladenosine (m 6 A) is regarded as one part of "epitranscriptomics" and identified as the most universal modification on mRNAs and noncoding RNAs (ncRNAs) in eukaryotes [13,14]. DRm 6 ACH (D denotes A, U or G; R denotes A, G; H denotes A, C, or U) is a consensus motif occurred in m 6 A modified RNAs [15][16][17]. 
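A minimal sketch of locating the DRACH consensus just defined in an RNA sequence; the example sequence is synthetic, and the regular expression simply encodes D = A/G/U, R = A/G, H = A/C/U.

```python
import re

# DRACH motif with a lookahead so overlapping sites are also reported
DRACH = re.compile(r"(?=([AGU][AG]AC[ACU]))")

def drach_sites(seq):
    """Return 0-based start positions of every DRACH motif in an RNA
    sequence (DNA input is converted by replacing T with U)."""
    seq = seq.upper().replace("T", "U")
    return [m.start() for m in DRACH.finditer(seq)]

seq = "GGACUAAGGACAUCUGGACUUUAAAC"     # synthetic example, not a study transcript
hits = drach_sites(seq)
print(hits, [seq[i:i + 5] for i in hits])
```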
M 6 A modification is mainly enriched around the stop codons, at 3'untranslated regions and within internal long exons [17][18][19]. Several catalyzed molecules act as "writers", "readers", and "erasers" to regulate the m 6 A modification status [14]. The methyltransferase complex is known as writers, including methyltransferase-like-3, − 14 and − 16 (METTL3/ METTL14/METTL16), Wilms tumor 1-associated protein (WTAP), RNA binding motif protein 15 (RBM15), vir like m 6 A methyltransferase associated (KIAA1429) and zinc finger CCCH-type containing 13 (ZC3H13), appending m 6 A on DRACH [17,20,21]. METTL3 is regarded as the core catalytically active subunit, while METTL14 and WTAP play a structural role in METTL3's catalytic activity [18,22]. The "erasers", fat mass and obesity related protein (FTO) and alkylation repair homolog 5 (ALKBH5), catalyze the Nalkylated nucleic acid bases oxidatively demethylated [22]. The "readers", the YT521-B homology (YTH) domaincontaining proteins family includes YTHDF (YTHDF1, YTHDF2, YTHDF3), YTHDC1, and YTHDC2, specifically recognizes m 6 A and regulates splicing, localization, degradation and translation of RNAs [14,22,23]. The YTHDF1 and YTHDF2 crystal structures forms an aromatic cage to recognize m 6 A sites in cytoplasm [24]. YTHDC1 is the nuclear reader and YTHDC2 binds m 6 A under specific circumstances or cell types [24]. Hypoxia may alter the balance of writers-erasers-readers and induce tumor growth, angiogenesis, and progression [25,26]. Interestingly, circRNAs can be m 6 A-modified. M 6 A circRNAs displayed cell-type-specific methylation patterns in human embryonic stem cells and HeLa cells [14]. CircRNAs contained m 6 A modifications are likely to promote protein translation in a cap-independent pattern [9]. However, m 6 A circRNAs has not been elucidated in HPH yet. Here we are the first to identify the expression profiling of m 6 A circRNAs in HPH. Results M 6 A level of circRNAs was reduced in HPH rats and most circRNAs contained one m 6 A peak Three weeks treatment by hypoxia resulted in right ventricular systolic pressure (RVSP) elevating to 42.23 ± 1.96 mmHg compared with 27.73 ± 1.71 mmHg in the control (P < 0.001, Fig. 1a and b). The ratio of the right ventricle (RV), left ventricular plus ventricular septum (LV + S) [RV/ (LV + S)] was used as an index of RVH. RVH was indicated by the increase of RV/ (LV + S) compared with the control (0.25 ± 0.03 vs. 0.44 ± 0.04, P = 0.001, Fig. 1c). The medial wall of the pulmonary small arteries was also significantly thickened (19.28 ± 2.19% vs. 39.26 ± 5.83%, P < 0.001, Fig. 1d and e). Moreover, in the normoxia group, 53.82 ± 3.27% of the arterioles were non-muscularized (NM) vessels, and 25.13 ± 1.83% were fully muscularized (FM) vessels. In contrast, partially muscularized vessels (PM) and FM vessels showed a greater proportion (32.88 ± 3.15% and 41.41 ± 3.35%) in HPH rats, while NM vessels occupied a lower proportion (25.71 ± 2.55%) (Fig. 1f). Figure 1g displayed the heatmap of m 6 A circRNAs expression profiling in N and HPH. M 6 A abundance in 166 circRNAs was significantly upregulated. Meanwhile, m 6 A abundance in 191 circRNAs was significantly downregulated (Additional file 1: Data S1, filtered by fold change ≥4 and P ≤ 0.00001). Lungs of N and HPH rats were selected to measure m 6 A abundance in purified circRNAs. The m 6 A level in total circRNAs isolated from lungs of HPH rats was lower than that from controls (Fig. 1h). 
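A hypothetical sketch of the peak-level filtering quoted above (fold change ≥ 4, P ≤ 0.00001); the column names, the symmetric treatment of down-regulated peaks as fold change ≤ 0.25, and the numeric values are all assumptions for illustration, not the content of Data S1.

```python
import pandas as pd

# Toy peak table: methylation fold change (HPH / N) and P value per circRNA.
# circXpo6 and circTmtc3 are listed because the study reports them as
# down-regulated; their values here are placeholders.
peaks = pd.DataFrame({
    "circRNA":     ["circXpo6", "circTmtc3", "circA", "circB"],
    "fold_change": [0.18, 0.22, 1.30, 6.10],
    "p_value":     [2e-7, 8e-7, 3e-2, 4e-6],
})

sig = peaks[(peaks["p_value"] <= 1e-5) &
            ((peaks["fold_change"] >= 4) | (peaks["fold_change"] <= 0.25))]
up = sig[sig["fold_change"] >= 4]
down = sig[sig["fold_change"] <= 0.25]
print(f"{len(up)} hypermethylated and {len(down)} hypomethylated circRNAs pass the filter")
```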
Moreover, over 50% circRNAs contained only one m 6 A peak either in lungs of N or HPH rats (Fig. 1i). M 6 A circRNAs were mainly from protein-coding genes spanned single exons in N and HPH groups We analyzed the distribution of the parent genes of total circRNAs, m 6 A-circRNAs, and non-m 6 A cir-cRNAs in N and HPH, respectively. N and HPH groups showed a similar genomic distribution of m 6 A circRNAs and non-m 6 A circRNAs ( Fig. 2a and b). Moreover, about 80% of m 6 A circRNAs and nonm 6 A circRNAs were derived from protein-coding genes in both groups. A previous report indicated that most circRNAs originated from protein-coding genes spanned two or three exons [14]. While in our study, over 50 and 40% of total circRNAs from protein-coding genes spanned one exon in N and HPH groups, respectively ( Fig. 2c and d). Similarly, (See figure on previous page.) Fig. 1 M 6 A level of circRNAs in HPH rats and the number of m 6 A peak in circRNAs Rats were maintained in a normobaric normoxic (N, FiO 2 21%) or hypoxic (HPH, FiO 2 10%) chamber for 3 weeks, then RVSP was detected (a, b). c The ratio of RV/ (LV + S). d H&E staining and immunohistochemical staining of α-SMA were performed in the lung sections. Representative images of pulmonary small arteries. Scale bar = 50 μm. Quantification of wall thickness (e) and vessel muscularization (f). g Heatmap depicting hierarchical clustering of altered m 6 A circRNAs in lungs of N and HPH rats. Red represents higher expression and yellow represents lower expression level. h Box-plot for m 6 A peaks enrichment in circRNAs in N and HPH. i Distribution of the number of circRNAs (y axis) was plotted based on the number of m 6 A peaks in circRNAs (x axis) in N and HPH. Values are presented as means ± SD (n = 6 in each group). Only vessels with diameter between 30 and 90 μm were analyzed. NM, nonmuscularized vessels; PM, partially muscularized vessels; FM, fully muscularized vessels. **0.001 ≤ P ≤ 0.009 (different from N); ***P < 0.001 (different from N) m 6 A circRNAs and non-m 6 A circRNAs were mostly encoded by single exons. Therefore, it was indicated that m 6 A methylation was abundant in circRNAs originated from single exons in N and HPH groups. The distribution and functional analysis for host genes of circRNAs with differentially expressed m 6 A peaks The length of differentially-expressed m 6 A circRNAs was mostly enriched in 1-10,000 bps (Fig. 3a). The host genes of upregulated m 6 A circRNAs were located in chromosome 1, 2 and 10, while the downregulated parts were mostly located in chromosome 1, 2 and 14 ( Fig. 3b). Gene ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis were performed to explore the host genes of circRNAs with differentially-expressed m 6 A peaks. In the GO analysis ( Fig. 3c, left), the parent genes of circRNAs with upregulated m 6 A peaks were enriched in the protein modification by small protein conjugation or removal and macromolecule modification process in the biological process (BP). Organelle and membrane-bounded organelle were also the two largest parts in the cellular component (CC) analysis. Binding and ion binding were the two main molecular functions (MF) analysis. The top 10 pathways from KEGG pathway analysis were selected in the bubble chart ( Fig. 3c, right). Among them, the oxytocin signaling pathway, protein processing in endoplasmic reticulum and cGMP-PKG signaling pathway were the top 3 pathways involved. 
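For intuition, enrichment statistics of the kind reported in these GO and KEGG analyses are commonly based on the hypergeometric (Fisher exact) distribution; the numbers below are invented and the sketch is not the DAVID/KEGG procedure actually used in the study.

```r
# Hypergeometric enrichment test for one pathway (toy numbers, not study data).
# N: annotated background genes; K: genes in the pathway; n: host genes of the
# differentially methylated circRNAs; k: overlap between the two sets.
N <- 15000
K <- 120
n <- 300
k <- 12

# P(overlap >= k) under random sampling without replacement.
p_enrich <- phyper(k - 1, K, N - K, n, lower.tail = FALSE)
fold_enrichment <- (k / n) / (K / N)   # observed over expected proportion

c(p_value = p_enrich, fold_enrichment = fold_enrichment)
```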
In addition, the vascular smooth muscle contraction pathway was the pathway most closely associated with PH progression [27]. In Fig. 3d (left), the parent genes of circRNAs with downregulated m6A peaks were mainly enriched in the cellular protein modification process and the protein modification process in BP. Organelle and membrane-bounded organelle made up the largest proportion of the CC classification, and the MF analysis centred on receptor signaling protein activity and protein binding. In the KEGG pathway analysis, the parent genes of circRNAs with decreased m6A peaks were mainly involved in tight junction and lysine degradation (Fig. 3d, right). Hypoxia can influence the m6A level of circRNAs and circRNA abundance A total of 360 m6A circRNAs were shared between the N and HPH groups; 49% of the m6A circRNAs detected in the N group were not detected in the HPH group, and 54% of those detected in the HPH group were not detected in the N group (Fig. 4a). To explore whether m6A methylation influences circRNA expression levels, the expression of the 360 shared m6A circRNAs was examined. More circRNAs tended to decrease in HPH than in N (Fig. 4b). Moreover, the expression of m6A circRNAs was significantly downregulated compared with non-m6A circRNAs in hypoxia, suggesting that m6A may downregulate circRNA expression under hypoxia (Fig. 4c, P = 0.0465). Construction of a circRNA-miRNA-mRNA co-expression network in HPH We found 76 upregulated circRNAs with increased m6A abundance and 107 downregulated circRNAs with decreased m6A abundance (Fig. 5a, Additional file 2: Data S2, Additional files 3 and 4). CircRNAs are generally regarded as sponges for miRNAs and thereby regulate the expression of the corresponding miRNA target genes [28]. To explore whether circRNAs with differentially expressed m6A abundance influence the availability of miRNAs to their target genes, we selected the differentially expressed circRNAs with increased or decreased m6A abundance and performed GO enrichment and KEGG pathway analysis on their target mRNAs. The target mRNAs displayed similar GO enrichment in the two groups (Fig. 5b and c). Two main functions were identified in the BP analysis: positive regulation of biological process and localization. Intracellular and intracellular part made up the largest proportion of the CC category, and the target mRNAs were mostly involved in protein binding and binding in the MF category. In the KEGG pathway analysis, the top 10 most enriched pathways were selected (Fig. 5d and e). The Wnt and FoxO signaling pathways have been reported to be involved in PH progression [29][30][31], so we analyzed the target genes involved in these two pathways. SMAD4 was associated with PH and involved in Wnt signaling, while MAPK3, SMAD4, TGFBR1, and CDKN1B were involved in FoxO signaling. To explore the influence of circRNA-miRNA regulation on PH-associated gene expression, we constructed a circRNA-miRNA-mRNA network integrating the matched expression profiles of circRNAs, miRNAs and mRNAs (Fig. 5f and g). The microRNAs sponged by the target genes of interest were analyzed; miR-125a-3p, miR-23a-5p, miR-98-5p, let-7b-5p, let-7a-5p, let-7g-5p, and miR-205 were selected because they have been reported to be associated with PH [32,33]. We then filtered the key mRNAs and miRNAs of interest. 
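The group comparison behind Fig. 4c (m6A versus non-m6A circRNA expression change, assessed with a two-sided Wilcoxon–Mann–Whitney test) can be sketched as follows; the log2 fold changes are simulated placeholders, not the study's data.

```r
set.seed(1)

# Simulated log2(HPH / N) expression changes; in the real analysis these
# would come from the RNA-seq quantification of the shared circRNAs.
lfc_m6a     <- rnorm(100, mean = -0.4, sd = 1)  # m6A-modified circRNAs
lfc_non_m6a <- rnorm(100, mean =  0.0, sd = 1)  # non-m6A circRNAs

# Two-sided Wilcoxon-Mann-Whitney test, as used for Fig. 4c.
wilcox.test(lfc_m6a, lfc_non_m6a, alternative = "two.sided")

# Cumulative distribution curves corresponding to the published figure.
plot(ecdf(lfc_m6a), col = "red", main = "circRNA expression change (HPH vs N)",
     xlab = "log2 fold change", ylab = "cumulative fraction")
plot(ecdf(lfc_non_m6a), add = TRUE, col = "blue")
legend("topleft", legend = c("m6A circRNAs", "non-m6A circRNAs"),
       col = c("red", "blue"), lty = 1)
```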
(See figure on previous page.) Fig. 2 The genomic origins of m6A circRNAs. The distribution of genomic origins of total circRNAs (input, left), m6A circRNAs (eluate, center), and non-m6A circRNAs (supernatant, right) in N (a) and HPH (b). The percentage of circRNAs (y axis) was calculated according to the number of exons (x axis) spanned by each circRNA for the input circRNAs (left), m6A circRNAs (red, right) and non-m6A circRNAs (blue, right) in N (c) and HPH (d). Up to seven exons are shown. M6A circXpo6 and m6A circTmtc3 were downregulated in PASMCs and PAECs in hypoxia M6A abundance was significantly reduced in PASMCs and PAECs exposed to hypoxia (0.107% ± 0.007 vs. 0.054% ± 0.118, P = 0.023 in PASMCs; 0.114% ± 0.011 vs. 0.059% ± 0.008, P = 0.031 in PAECs, Fig. 6a). M6A abundance in circRNAs was lower than that in mRNAs (0.1-0.4%) [17,18]. Next, we confirmed the back-splicing of circXpo6 and circTmtc3 with the CIRI software and analyzed the sequences of the linear Xpo6 and Tmtc3 mRNAs. CircXpo6 was spliced from exons 7, 8, and 9 of Xpo6, and circTmtc3 from exons 8, 9, 10, and 11 of Tmtc3 (Fig. 6b). Using cDNA and genomic DNA (gDNA) from PASMCs and PAECs as templates, circXpo6 and circTmtc3 were amplified by divergent primers only from cDNA, while no product was detected from gDNA (Fig. 6c). To verify that circXpo6 and circTmtc3 are modified by m6A, we performed m6A RNA immunoprecipitation (MeRIP)-RT-PCR and MeRIP-quantitative RT-PCR (MeRIP-qRT-PCR) to detect their expression (Fig. 6d and e). M6A circXpo6 and m6A circTmtc3 were significantly decreased in PASMCs and PAECs exposed to hypoxia (P = 0.002 and P = 0.015 in PASMCs; P = 0.02 and P = 0.047 in PAECs). Discussion In this study, we identified the transcriptome-wide map of m6A circRNAs in hypoxia mediated pulmonary hypertension. Overall, we found that the m6A level of circRNAs in lung was reduced on exposure to hypoxia, that m6A circRNAs were mainly derived from single exons of protein-coding genes in both N and HPH, and that m6A abundance in circRNAs was likewise downregulated by hypoxia in vitro. M6A influenced the circRNA-miRNA-mRNA co-expression network in hypoxia. Moreover, circXpo6 and circTmtc3 were newly identified m6A-modified circRNAs in hypoxia mediated pulmonary hypertension. M6A plays important roles in various biological processes. It is associated with cancer progression, promoting the proliferation of cancer cells and contributing to cancer stem cell self-renewal [18,21]. Lipid accumulation was reduced in hepatic cells when m6A abundance on peroxisome proliferator-activated receptor (PPAR) transcripts was decreased [34], an enhanced m6A level of mRNA contributed to compensated cardiac hypertrophy [35], and m6A modification of large intergenic noncoding RNA 1281 was necessary for mouse embryonic stem cell differentiation [36]. Although m6A mRNAs have been reported to be influenced by hypoxia, there has been no report on m6A circRNAs in HPH, and no consistent conclusion has been reached about the link between m6A and hypoxia. Previous reports found that m6A abundance in mRNA increased under hypoxic stress in HEK293T cells and cardiomyocytes [37,38]. 
The increased m 6 A level stabilized the mRNAs of Glucose Transporter 1 (Glut1), Myc proto-oncogene bHLH transcription factor (Myc), Dual Specificity Protein Phosphatase 1 (Dusp1), Hairy and Enhancer of Split 1 (Hes1), and Jun Proto-Oncogene AP-1 Transcription Factor Subunit (Jun) without influencing their protein level [37]. In contrast, another reported that m 6 A level of total mRNA was decreased when human breast cancer cell lines were exposed to 1% O 2 [26]. Hypoxia increased demethylation by stimulating hypoxia-inducible factor (HIF)-1α-and HIF-2α-dependent over-expression of ALKBH5 [26]. In addition, transcription factor EB activates the transcription of ALKBH5 and downregulates the stability of METTL3 mRNA in hypoxia/reoxygenation-induced autophagy in ischemic diseases [38]. Our study found that m 6 A abundance in total circRNAs was decreased in hypoxia exposure. Moreover, our study indicated that cir-cXpo6 and circTmtc3 were the novel identified circRNAs modified by m 6 A in HPH. M 6 A abundance in circXpo6 and circTmtc3 was decreased in hypoxia. It is probably because of HIF-dependent and ALKBH5mediated m 6 A demethylation [26]. Previous reports indicated that m 6 A methylation close to 3'UTR and stop codon of mRNA is inversely correlated with gene expression [14,39]. Low m 6 A level is negatively associated with circRNAs expression, while high m 6 A level is not linked to circRNAs expression in human embryonic stem cells and HeLa cells [14]. Consistent with the previous reports [14,39], our study found that m 6 A reduced the total circRNAs abundance in hypoxia. The association between m 6 A level and specific gene abundance is remained as an open question. Some previous reports indicated that m 6 A level was positively associated with long non-coding RNA (See figure on previous page.) Fig. 3 The distribution and functional analysis for host genes of circRNAs with differentially expressed (DE) m 6 A peaks (a) Length of DE m 6 A circRNAs. b The chromosomes origins for host genes of DE m 6 A circRNAs. GO enrichment and KEGG signaling pathway analysis for host genes of upregulated (c) and downregulated (d) m 6 A circRNAs. GO enrichment analysis include biological process (BP) analysis, cellular component (CC) analysis, and molecular function (MF) analysis. P values are calculated by DAVID tool Fig. 4 The relationship of m 6 A level and circRNAs abundance in hypoxia (a) Venn diagram depicting the overlap of m 6 A circRNAs between N and HPH. b Two-dimensional histograms comparing the expression of m 6 A circRNAs in lungs of N and HPH rats. It showed that m 6 A circRNAs levels for all shared circRNAs in both groups. CircRNAs counts were indicated on the scale to the right. c Cumulative distribution of circRNAs expression between N and HPH for m 6 A circRNAs (red) and non-m 6 A circRNAs (blue). P value was calculated using two-sided Wilcoxon-Mann-Whiteney test (lncRNA) or mRNA expression [40,41]. M 6 A was positively associated with RP11-138 J23.1 (RP11) expression when ALKBH5 was overexpressed in colorectal cancer [40]. mRNAs were downregulated after METTL14 deletion in β-cells [41]. On the contrary, another reports insisted that m 6 A level was negatively associated with RNA expression [42][43][44]. the mRNA lifetime of Family with Sequence Similarity 134, Member B (FAM134B) was prolonged when the m 6 A site was mutant [42]. The decreased m 6 A level resulted in the increased expression of N-methyl-D-aspartate receptor 1 (NMDAR1) in Parkinson's disease [43]. 
Forkhead Box protein M1 (FOXM1) abundance was increased when ALKBH5 was upregulated in glioblastoma [44]. Our study showed that the expression of circXpo6 and circTmtc3 decreased along with their m6A level. The association between m6A level and circRNA abundance has not yet been settled; we suspect that m6A may influence the expression of circXpo6 and circTmtc3 in a manner similar to the reports above [40,41], but this needs further validation. The competing endogenous RNA (ceRNA) hypothesis proposes that mRNAs, pseudogenes, lncRNAs and circRNAs interact with each other by competing for binding to miRNA response elements (MREs) [45,46]. Since m6A acts as a post-transcriptional regulator of circRNAs and influences circRNA expression, we reasoned that m6A could also shape the circRNA-miRNA-mRNA co-expression network. When the circRNAs were classified, we found that the downstream targets regulated by the circRNA-miRNA pairs of interest were mostly enriched in the PH-associated Wnt and FoxO signaling pathways [30,31]. The Wnt/β-catenin (bC) and Wnt/planar cell polarity (PCP) pathways are the two most critical Wnt signaling pathways in PH [30]. The two cell types most relevant to HPH are PASMCs and PAECs [1,3]. PASMC growth is increased when the Wnt/bC and Wnt/PCP pathways are activated by platelet-derived growth factor-BB (PDGF-BB) [30,47], and PAEC proliferation is enhanced when these pathways are activated by bone morphogenetic protein 2 (BMP2). Furthermore, the FoxO signaling pathway is associated with the apoptosis-resistant and hyper-proliferative phenotype of PASMCs [31]: reactive oxygen species increase under hypoxia and activate AMPK-dependent regulation of FoxO1 expression, resulting in increased expression of catalase in PASMCs [48]. Our study is the first to suggest that m6A influences the stability of circRNAs, thereby affecting the binding between circRNAs and miRNAs and resulting in activation of the Wnt and FoxO signaling pathways. Conclusion In conclusion, our study provides the first transcriptome-wide map of m6A circRNAs in HPH. The m6A level of circRNAs was decreased in the lungs of HPH rats and in PASMCs and PAECs exposed to hypoxia, and m6A influenced the circRNA-miRNA-mRNA co-expression network in HPH. Moreover, we identified two downregulated m6A circRNAs in HPH, circXpo6 and circTmtc3. CircRNAs may serve as biomarkers because they are differentially enriched in specific cell types or tissues and are not easily degraded [6]; in addition, aberrant m6A methylation may contribute to tumor formation, and m6A RNAs may be potential therapeutic targets in cancer [17]. This study has several limitations. First, we did not compare the m6A level between circRNAs and their host genes. Second, the exact mechanism by which hypoxia influences m6A was not demonstrated. Third, the function of circXpo6 and circTmtc3 in HPH was not investigated. Lastly, besides hypoxia mediated pulmonary hypertension, other important PH models, such as monocrotaline-induced PH and monocrotaline plus pneumonectomy-induced PH, were not examined; profiling m6A circRNAs only in hypoxia mediated pulmonary hypertension is insufficient, and we plan to extend the analysis to monocrotaline-induced PH and other PH models. Moreover, the clinical significance of m6A circRNAs for HPH should be further validated. 
Methods Hypoxia mediated PH rat model and measurement of RVSP and RVH Sprague-Dawley rats (SPF, male, 180-200 g, 4 weeks old) were obtained from the Animal Experimental Center of Zhejiang University, China. Rats were maintained in a normobaric normoxic (FiO2 21%, n = 6) or hypoxic (FiO2 10%, n = 6) chamber for 3 weeks [3,49]. Rats were then anesthetized by intraperitoneal injection of 1% sodium pentobarbital (130 mg/kg) [50] and fixed in a supine position on the operating board; all operations were performed after the rats were anesthetized and unconscious. (See figure on previous page.) Fig. 5 Construction of a circRNA-miRNA-mRNA co-expression network in HPH (a) Comparison of the relationship between m6A level and circRNA expression between N and HPH. A fold change ≥2.0 in the m6A abundance of HPH relative to N was considered significant. Red dots represent circRNAs with upregulated m6A level and blue dots represent circRNAs with downregulated m6A level. IP/Input refers to the m6A abundance in circRNAs detected by MeRIP-Seq (IP) normalized to that detected in the input. b and c GO enrichment analysis, including BP, CC, and MF analysis; P values were calculated with the DAVID tool. d and e KEGG signaling pathway analysis for the downstream mRNAs predicted to act as ceRNAs of DE circRNAs. Methy. down & exp. down represents downregulated circRNAs with decreased m6A level; methy. up & exp. up represents upregulated circRNAs with increased m6A level. f and g CeRNA analysis for DE circRNAs: network map of circRNA-miRNA-mRNA interactions. Green V-shaped node: miRNA; yellow circular node: DE circRNAs; blue hexagon node: target genes of miRNAs; red hexagon node: PH-related genes. RVSP was measured as follows: right ventricle catheterization was performed through the right jugular vein using a pressure-volume loop catheter (Millar), as in previous reports [49,51]. After measurement of RVSP, all rats were placed in a confined, transparent euthanasia device (so that it could be observed whether the rats had been euthanized), and 100% CO2 was released into the device continuously until all rats were sacrificed. The criteria for sacrifice were absence of spontaneous breathing for 2-3 min and loss of the blink reflex. Heart tissues were then removed and dissected, and the ratio [RV/(LV + S)] was used as an index of RVH. Lungs were removed and immediately frozen in liquid nitrogen or fixed in 4% buffered paraformaldehyde solution. All experimental procedures were conducted in line with the principles approved by the Institutional Animal Care and Use Committee of Zhejiang University. RNA isolation and RNA-seq analysis of circRNAs Total RNA (10 mg) was obtained with TRIzol reagent (Invitrogen, Carlsbad, CA, USA) from lungs (1 g) of control and HPH rats. The extracted RNA was digested with RNase R (RNR07250, Epicentre) to remove linear transcripts. Paired-end reads were obtained from an Illumina HiSeq sequencer after quality filtering. The reads were aligned to the reference genome (UCSC RN5) with the STAR software, and circRNAs were detected and annotated with the CIRI software [55]. Raw junction reads were normalized to reads per million reads mapped to the genome and log2 scaled. MeRIP and library preparation Total RNA was extracted as described above. Fig. 6 The expression profiling of m6A circXpo6 and m6A circTmtc3 in pulmonary artery smooth muscle cells (PASMCs) and pulmonary artery endothelial cells (PAECs) in hypoxia. 
a M 6 A levels of total circRNAs were determined based on colorimetric method in vitro. PASMCs and PAECs were exposed to 21% O 2 and 1% O 2 for 48 h, respectively. Total RNA was extracted and treated by RNase R. M 6 A levels were determined as a percentage of total circRNAs. b Schematic representation of exons of the Xpo6 and Tmtc3 circularization forming circXpo6 and circTmtc3 (black arrow). c RT-PCR validation of circXpo6 and circTmtc3 in PASMCs and PAECs exposed to 21% O 2. Divergent primers amplified circRNAs in cDNA, but not in genomic DNA (gDNA). The size of the DNA marker is indicated on the left of the gel. d and e RT-PCR and qRT-PCR was performed after m 6 A RIP in PASMCs and PAECs exposed to 21% (N) and 1% O 2 (H) for 48 h, respectively. Input was used as a control (d). IgG was used as a negative control (d and e). Values are presented as means ± SD. *P ≤ 0.05 (different from 21% O 2 or the N-anti-m 6 A); **0.001 ≤ P ≤ 0.009 (different from the N-anti-m 6 A), (n = 3 each) Construction of circRNA-miRNA-mRNA co-expression network The circRNA-miRNA-mRNA co-expression network was based on the ceRNA theory that circRNA and mRNA shared the same MREs [45,46]. Cytoscape was used to visualize the circRNA-miRNA-mRNA interactions based on the RNA-seq data. The circRNA-miRNA interaction and miRNA-mRNA interaction of interest were predicted by TargetScan and miRanda. Measurement of Total m 6 A, MeRIP-RT-PCR and MeRIP-qRT-PCR Total m 6 A content was measured in 200 ng aliquots of total RNA extracted from PASMCs and PAECs exposed to 21% O 2 and 1% O 2 for 48 h using an m 6 A RNA methylation quantification kit (P-9005, Epigentek). MeRIP (17-701, Millipore) was performed according to the manufacturer's instruction. A 1.5 g aliquot of anti-m 6 A antibody (ABE572, Millipore) or anti-IgG (PP64B, Millipore) was conjugated to protein A/G magnetic beads overnight at 4°C. A 100 ng aliquot of total RNA was then incubated with the antibody in IP buffer supplemented with RNase inhibitor and protease inhibitor. The RNA complexes were isolated through phenol-chloroform extraction (P1025, Solarbio) and analyzed via RT-PCR or qRT-PCR assays. Primers sequences are listed in Table 1. Data analysis 3′ adaptor-trimming and low quality reads were removed by cutadapt software (v1.9.3). Differentially methylated sites were identified by the R MeTDiff package. The read alignments on genome could be visualized using the tool IGV. Differentially expressed circRNAs were identified by Student's t-test. GO and KEGG pathway enrichment analysis were performed for the corresponding parental mRNAs of the DE circRNAs. GO enrichment analysis was performed using the R topGO package. KEGG pathway enrichment analysis was performed according to a previous report [56]. GO analysis included BP analysis, CC analysis, and MF analysis. MicroRNAs sponged by the target genes were predicted by TargetScan and microRNA websites. P values are calculated by DAVID tool for GO and KEGG pathway analysis. The rest statistical analyses were performed with SPSS 19.0 (Chicago, IL, USA) and GraphPad Prism 5 software (La Jolla, CA). N refers to number of samples in figure legends. The statistical significance was determined by Student's t-test (two-tailed) or two-sided Wilcoxon-Mann-Whiteney test. P < 0.05 was considered statistically significant. All experiments were independently repeated at least three times.
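MeRIP-qRT-PCR enrichment is often summarised as percent of input from Ct values; the function below is a generic sketch under an assumed 10% input fraction and made-up Ct values, and is not the exact quantification scheme used by the authors.

```r
# Generic %input calculation for MeRIP-qRT-PCR (assumed scheme).
# input_fraction is the share of RNA that was kept aside as input.
percent_input <- function(ct_ip, ct_input, input_fraction = 0.10) {
  # Adjust the input Ct for the fraction of material used, then compare to IP.
  ct_input_adj <- ct_input - log2(1 / input_fraction)
  100 * 2 ^ (ct_input_adj - ct_ip)
}

# Made-up Ct values for circXpo6 in normoxia vs hypoxia.
percent_input(ct_ip = 27.1, ct_input = 24.0)  # normoxia
percent_input(ct_ip = 29.3, ct_input = 24.2)  # hypoxia: lower enrichment
```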
7,032.2
2019-12-11T00:00:00.000
[ "Biology" ]
SPEAQeasy: a scalable pipeline for expression analysis and quantification for R/bioconductor-powered RNA-seq analyses Background RNA sequencing (RNA-seq) is a common and widespread biological assay, and an increasing amount of data is generated with it. In practice, there are a large number of individual steps a researcher must perform before raw RNA-seq reads yield directly valuable information, such as differential gene expression data. Existing software tools are typically specialized, only performing one step–such as alignment of reads to a reference genome–of a larger workflow. The demand for a more comprehensive and reproducible workflow has led to the production of a number of publicly available RNA-seq pipelines. However, we have found that most require computational expertise to set up or share among several users, are not actively maintained, or lack features we have found to be important in our own analyses. Results In response to these concerns, we have developed a Scalable Pipeline for Expression Analysis and Quantification (SPEAQeasy), which is easy to install and share, and provides a bridge towards R/Bioconductor downstream analysis solutions. SPEAQeasy is portable across computational frameworks (SGE, SLURM, local, docker integration) and different configuration files are provided (http://research.libd.org/SPEAQeasy/). Conclusions SPEAQeasy is user-friendly and lowers the computational-domain entry barrier for biologists and clinicians to RNA-seq data processing as the main input file is a table with sample names and their corresponding FASTQ files. The goal is to provide a flexible pipeline that is immediately usable by researchers, regardless of their technical background or computing environment. Supplementary Information The online version contains supplementary material available at 10.1186/s12859-021-04142-3. length and coverage depth of a given experiment. Before doing any statistical analyses on this data such as differential expression [8,9], researchers need to process the gigabytes or even terabytes of data to compress it and extract the desired information. Doing so requires computationally demanding steps such as RNA-seq alignment [10][11][12] and read quantification [13,14]. Since the emergence of RNA-seq, a diverse set of bioinformatics software has been designed to solve specific steps of the RNA-seq processing [15][16][17]. Several RNA-seq processing bioinformatics pipelines have been developed to tie these required processing steps together [18][19][20][21][22][23]. The common goal of these approaches involves helping biologists and researchers weave together these bioinformatics solutions to uniformly process samples from RNA-seq projects with different characteristics; for example, single-end versus paired-end. RNA-seq processing pipelines have different characteristics such as the RNA-seq aligner of choice and the quality control steps they use. The design choices of each RNA-seq processing pipeline can have an impact on which analyses researchers can perform with the processed data. Furthermore, the ease of software installation, portability, and level of support can affect the usability of these pipelines. In recent years we have worked on several RNA-seq projects [24][25][26] and designed an RNA-seq processing pipeline that satisfied our needs to generate quality checked and uniformly processed data with several quality control metrics we could then use in our statistical analyses. 
We then improved the usability and portability of this pipeline thanks to the Nextflow framework [27]. Our solution, SPEAQeasy, ultimately generates RangedSummarizedExperiment R objects [28] that are the foundation block for many Bioconductor R packages and the statistical methods they provide [8,9,29,30]. Other key features of SPEAQeasy are that it produces the information that coupled with DNA genotyping information can be used for detecting and fixing sample swaps [31][32][33], RNA-seq processing quality metrics that are helpful for statistically adjusting for quality differences across samples [5], data that powers the exploration of the unannotated transcriptome, and that it can be used in several computational frameworks thanks to Nextflow's configuration flexibility [27]. Overview We have developed a portable RNA sequencing (RNA-seq) processing pipeline, SPEAQeasy, that provides analysis-ready gene expression files (Fig. 1). SPEAQeasy is a Nextflow-powered [27] pipeline that starts from a set of FASTQ files [7], performs quality assessment and other processing steps (Implementation: overview), and produces easy-to-use R objects [29]. SPEAQeasy facilitates both traditional RNA-seq downstream analyses such as gene differential expression, but also the exploration of the annotated transcriptome [34,35] by quantifying reads that span exon-exon junctions and providing bigWig base-pair coverage files [36]. Input RNA-seq reads are aligned using HISAT2 [37] to a reference genome and pseudo-aligned to a reference transcriptome using kallisto [38] or Salmon [39]. Genes, exons, and exon-exon junctions are then quantified using featureCounts [14] and regtools [40]. The resulting quality metrics and read quantification outputs are then arranged to create SummarizedExperiment [28] R objects that combine the read quantification, expression feature information, and processing and quality metrics. These SummarizedExperiment objects can then be used with a wide variety of Bioconductor [29] R packages to perform downstream analyses such as differential expression [8,9,30], identification of differentially expressed regions (DER) [41], and exploratory data analysis [42,43]. For human samples, SPEAQeasy can also perform RNA-based genotype calling with BCFtools [44] which can be coupled with DNA-based genotype data to identify and resolve sample swaps ( Fig. 2: downstream). Additionally, for experiments involving ERCC spike-ins [45], SPEAQeasy generates plots by sample to quickly visualize expected versus measured concentrations for each of 92 ERCC transcripts (Additional file 1: Figure S1). Thus SPEAQeasy simplifies any RNA-seq based projects from human, mouse and rat-derived data and provides a bridge to the Bioconductor universe. Furthermore, the Nextflow-based implementation allows for more experienced developers to quickly add additional steps or switch out software, creating a flexible and scalable RNA-seq processing pipeline. We document a step-by-step example demonstrating how to replace the trimming tool Trimmomatic [46] with Trim Galore [47], to serve as a guide for those interested in modifying components of SPEAQeasy (http:// resea rch. libd. org/ SPEAQ easy/ softw are. html# using-custom-softw are). Configuring SPEAQeasy SPEAQeasy, through Nextflow [27], can be deployed in a variety of high-throughput computational environments such as: local machines, Sun/Son Grid Engine (SGE) compute clusters, servers that enable Docker [48], and cloud computing environments [49] such as Amazon AWS. 
Nextflow provides the ability to run the same code using configuration files that are specific to the computing environment at hand. To facilitate using SPEAQeasy we provide Docker containers for both the software and annotation files and a SPEAQeasy configuration file for such environments. For SGE or other clusters, SPEAQeasy can also use lmod [50] software modules such as the one we provide for the JHPCE SGE cluster (https://jhpce.jhu.edu/). In order to use SPEAQeasy in a particular computing environment, identify the example configuration file (Implementation: configuration; Additional file 3: Table S1) that most resembles the setup, make a copy and edit accordingly. Our JHPCE lmod files and docker setup files provide installation instructions for researchers who wish to manually set up the software dependencies (http://research.libd.org/SPEAQeasy). To test SPEAQeasy on a particular computer setup, first identify the "main" script appropriate for the environment. Scripts exist for execution at JHPCE or within SLURM, SGE, or local environments. A user of a SLURM-managed cluster would launch a test run of SPEAQeasy with: sbatch run_pipeline_slurm.sh. SPEAQeasy provides test samples for each combination of reference organism and strandness. These test samples are also used if the user does not remove the --small_test option and doesn't specify a directory containing the samples.manifest file with the --input option (Implementation: test samples). While a typical test run may complete in about 15 min, the first execution will take significantly longer, as reference and annotation-related files must be downloaded and built for a given organism and annotation version. After successful completion, the log file SPEAQeasy_output.log will indicate this success at the bottom, along with details such as total run time. One can examine and become familiar with the output files from SPEAQeasy (Results: outputs), which by default are placed inside the original repository in a subfolder named results. Our documentation provides further details (http://research.libd.org/SPEAQeasy/). A simplified workflow diagram for each pipeline execution: the red box indicates the FASTQ files are inputs to the pipeline; green coloring denotes major output files from the pipeline; the remaining boxes represent computational steps. Yellow-colored steps are optional or not always performed; for example, preparing a particular set of annotation files occurs once and uses a cache for further runs. Finally, blue-colored steps are ordinary processes which occur on every pipeline execution. The workflow proceeds downward, and each row in the diagram implicitly represents the ability for several computation steps to execute in parallel. Fig. 1 An example samples.manifest. The samples.manifest file for paired-end samples is composed of five tab-separated columns: (1) path to the first FASTQ file in the pair, (2) optional md5 signature for the first FASTQ file in the pair, (3) path to the second FASTQ file in the pair, (4) optional md5 signature for the second FASTQ file in the pair, (5) sample ID. The first two entries use the same sample ID, which is useful when a biological sample was sequenced in multiple lanes and thus generated multiple FASTQ files; the first two pairs of FASTQ files will be merged. Common SPEAQeasy options Once SPEAQeasy is installed, a researcher must create a manifest file with the information about the RNA-seq samples to be processed (Implementation: inputs). 
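Following the five-column layout described in the Fig. 1 legend above, a paired-end samples.manifest can be sanity-checked from R before launching a run; the file name and the checks below are an illustrative sketch, not part of SPEAQeasy itself.

```r
# Read a paired-end samples.manifest (tab-delimited, no header); the file
# name is a placeholder. Columns per the Fig. 1 legend: R1 path, optional md5,
# R2 path, optional md5, sample ID.
manifest <- read.delim("samples.manifest", header = FALSE, sep = "\t",
                       stringsAsFactors = FALSE)
stopifnot(ncol(manifest) == 5)
colnames(manifest) <- c("read1", "md5_1", "read2", "md5_2", "sample_id")

# Basic checks: the FASTQ files exist and carry supported extensions.
fastqs <- c(manifest$read1, manifest$read2)
stopifnot(all(file.exists(fastqs)))
stopifnot(all(grepl("\\.(fq|fastq)(\\.gz)?$", fastqs)))

# Repeated sample IDs are allowed (those FASTQ files get merged); report them.
table(manifest$sample_id)
```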
Next, select the "main" script written to work with the job scheduler available, if any (Implementation: use cases). Within this script, a researcher may modify command options for the particular analysis. Specifically, an appropriate reference genome must be specified with the option --reference, which may take the values "hg19", "hg38", "mm10", or "rn6". Specify whether reads are single or paired-end with the option --sample, which takes the values "single" or "paired". Finally, the researcher would indicate the strandness pattern they expect all samples to obey with the option --strand, which may be "forward", "reverse", or "unstranded". SPEAQeasy infers the actual strandness pattern present in each sample as a quality control measure (Implementation: configuration; Fig. 3: main options). Fig. 3b shows an example of a full command; in this case, a test run on an SGE scheduler without docker is also specified. See the documentation at http://research.libd.org/SPEAQeasy for further detailed options (Implementation: configuration). SPEAQeasy output files Each execution of SPEAQeasy generates a number of output files (Implementation: outputs, Additional file 3: Table S2). Among the primary outputs of interest are the RangedSummarizedExperiment R objects [28], which contain information about the sequence ranges, counts, and additional annotation for each feature. SPEAQeasy produces separate files for each feature type, including genes, exons, and exon-exon junctions. Because the data is packaged into RangedSummarizedExperiment objects, a number of Bioconductor packages can immediately be utilized to perform further analysis appropriate for a number of common use cases, starting with interactively exploring the data using tools like iSEE [42]. A collection of quality metrics is also gathered for each sample and saved in both an R data frame and a comma-separated values file (Additional file 3: Table S3). Users can thus assess metrics of interest at a glance, or utilize the information to control for covariates of interest in further analysis. Metrics include fractions of concordant, mapped, and unmapped reads during alignment, fraction of reads assigned to genes, and similar quantities. SPEAQeasy also optionally generates bigWig coverage files for each sample, and one mean coverage file for each strand [36]. To enable comparison between samples, coverage is normalized to 40 million mapped reads of 100 base pairs. While bigWig files may be used directly, SPEAQeasy performs an additional step to quantify coverage at genomic regions of interest. RData files are produced to describe the expressed regions [41], which provides a foundation for analyses involving finding differentially expressed regions. For human samples, variant calling is performed to ultimately produce a single file for the experiment in Variant Call Format (VCF) [51]. This file contains genotype information at a list of 740 single nucleotide variant (SNV) missense coding sites with MAF > 30% (Additional file 4: Supplementary File 1). Each individual typically has a unique genotype profile after variant calling, and this can be leveraged to identify mislabelled samples in conjunction with a table of identity information generated prior to sequencing, typically using a subset of the high-coverage variants in the RNA-seq data (Results: example use case involving sample swaps). 
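As a sketch of the Bioconductor hand-off described above, the gene-level RangedSummarizedExperiment can be fed directly into an edgeR/voom/limma differential expression analysis; the file and object names, the phenotype columns (Dx, BrainRegion) and the coefficient position are assumptions for illustration, not the exact names SPEAQeasy or its vignette uses.

```r
library(SummarizedExperiment)
library(edgeR)
library(limma)

# Assumed file/object names; check the results directory of your own run.
load("rse_gene_example.Rdata")            # assumed to provide an object `rse_gene`

pheno <- as.data.frame(colData(rse_gene)) # sample phenotype data and QC metrics

dge  <- DGEList(counts = assay(rse_gene)) # raw gene counts from featureCounts
keep <- filterByExpr(dge)                 # drop lowly expressed genes
dge  <- calcNormFactors(dge[keep, ])

# Hypothetical phenotype columns: diagnosis and brain region.
design <- model.matrix(~ Dx + BrainRegion, data = pheno)

v   <- voom(dge, design)                  # model the mean-variance trend
fit <- eBayes(lmFit(v, design))           # linear model + empirical Bayes
topTable(fit, coef = 2, number = 10)      # top genes for the diagnosis effect
```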
Example use case involving sample swaps We provide a vignette to demonstrate how SPEAQeasy outputs can be utilized to resolve sample identity issues and perform differential expression analysis (http:// resea rch. libd. org/ SPEAQ easy-examp le/) using data from the BipSeq PsychENCODE project [52] which includes bulk RNA-seq data from bipolar disorder affected individuals and neurotypical controls from the amygdala and the subgenual anterior cingulate cortex (sACC). For reproducibility, the vignette walks through how to download the example data and run SPEAQeasy before performing the follow-up analysis. First, we show how a self-correlation matrix can be constructed from user-provided genotype calls made before sequencing. The particular calls at each SNP are represented as numeric values so that an overall correlation can be computed between any two samples. The same matrix is generated from genotype calls made by SPEAQeasy (Fig. 4a). User-provided metadata can then be leveraged to determine if samples correlate to those of the same labelled donor, and ultimately to resolve conclusive sample swaps or drop samples with more complex identity problems. Finally, the RangedSummarizedExperiment objects from SPEAQeasy can be updated with these findings and example metadata. Next, we explore the sources of variability in gene expression visually. First, principal component analysis is performed to assess the impact of variables such as total number of reads mapped and concordant map rate on expression (Fig. 4b). We also plot the first ten principal components for each individual, splitting by sex and then brain region to understand the influence of these variables on expression. Afterward, we perform a differential expression analysis (Additional file 3: Table S4 A, Fig. 4c). This involves normalizing counts with edgeR [8], forming a design matrix of interest, and controlling for heteroscedasticity in counts with voom [53]. limma [30] is used to construct a linear model of expression, from which an empirical bayesian calculation can determine genes which are significantly differentially expressed. We then select genes above a particular significance threshold, in this case p < 0.2, and plot expression against variables of interest. We show how to construct an expression heatmap with pheatmap [54] for top genes, with clusters labelled with covariates of interest-in this case, sex, brain region, and diagnosis status (Fig. 4d). At the end, we perform a gene ontology analysis using the package clusterProfiler [43]. The goal is to associate significantly differentially expressed genes with known functionality and biological processes. We show how to form example queries with the compareCluster function, and write the results to a CSV format (Additional file 3: Table S4 B). Fig. 4 Main output files from SPEAQeasy. SPEAQeasy produces the files described in the blue boxes, as the final products of interest. Counts of genes, exons, and exon-exon junctions are aggregated into three respective R objects of the familiar RangedSummarizedExperiment class. This allows users to immediately follow up with a number of Bioconductor tools to perform any desired differential expression analyses. If the --coverage option is provided, RData files are produced to provide expression information over regions in the genome. This allows users to compute differentially expressed regions using any of a number of Bioconductor packages as appropriate for the experiment. 
Finally, for experiments on human samples, variants are called to ultimately produce a single VCF file of genotype calls at 740 particular SNVs. Together with genotype data recorded before sequencing the samples, one can resolve mislabellings and other identity issues which inevitably occur during the sequencing process (http:// resea rch. libd. org/ SPEAQ easy-examp le) Discussion A number of "end-to-end" pipelines for RNA-seq are already publicly available [19][20][21][22]. However, the majority are difficult to install or configure, require manual handling of annotation-related files, or generally lack the degree of features we have developed in SPEAQeasy (Table S5). A common pipeline installation pattern involves the use of conda [55], where users activate and load environments where the software dependencies are installed. If conda is already available on the system, the installation process itself is typically straightforward and well-documented. However, sharing pipeline access among multiple users (e.g. in a research group/laboratory) is often nontrivial for inexperienced users, may require every individual to separately install, and this common use-case is not always documented. In contrast, SPEAQeasy provides more than one installation option, and multiple users can share a single installation instance with a single copy command: copying the main script and optionally the configuration file, which can then be modified for the individual use-case. The preferred installation method relies on Nextflow [27] to automatically pull pre-specified docker images at run-time unless they were previously downloaded; this approach is used in some currently-available pipelines [19]. One of the goals of SPEAQeasy was to provide a straightforward installation method that required neither knowledge of a software/environment management tool (e.g. conda, docker, singularity, etc.) nor root access permissions. Consequently, we also provide an alternative method for Linux users performed via a single command (Implementation: software management): Another major focus in SPEAQeasy involves minimizing users needing to configure the pipeline for the execution environment. While many existing pipelines-in theorysupport execution on a number of resource managers/ job scheduling platforms, few are pre-configured to truly leverage individual setups. For example, snakemake-based [56] pipelines [21,22] allow specification of the total number of CPU cores to allocate, behaving identically on a local machine as on an arbitrary computing cluster. However, in practice, cluster users often must consider several other hardware resources, such as memory or disk space usage. Most notably, users of SLURM-based clusters may be charged based on specified run times of individual jobs. In the case of Nextflow and snakemake-based workflows, individual jobs are internally submitted for each pipeline component, and typically it is implicitly left to the user to worry about time specification for every component. To address this common use-case, we have written and tested configuration files for a number of environments (local execution, SGE-based clusters, SLURM-based clusters), establishing sensible defaults for variables like job run-time, memory, and disk usage. SPEAQeasy provides other miscellaneous features we have not seen frequently or at all in other available pipelines (Additional file 3: Table S5). The first involves being able to automatically handle input FASTQ samples split across more than one file. 
Each line in the samples.manifest file (Implementation: inputs) specifies the path for one read or pair of reads for a sample, followed by an associated ID; for samples split across input FASTQ files just repeat the same ID for each set (line) of input files. Another feature is bash install_software.sh "local" custom per-sample logging, which traces the exact series of commands executed, along with some additional context helpful for debugging such as relevant working directories, exit statuses, and other logging information for each process (Additional file 2: Figure S2). We were motivated to implement this feature after observing how as the pipeline grew in complexity, it became increasingly necessary to understand Nextflow's implementation details to debug execution errors. Because even a correctly written software pipeline can encounter errors when the input for a processing step is unexpectedly different or the software has a bug, we believe pipelines without specialized debugging tools become inaccessible to most users upon errors. Software pipelines are sometimes not actively maintained. Given our interest in using SPEAQeasy ourselves [24][25][26]57], we are actively maintaining SPEAQeasy by adapting it as new software is released for different processing steps or bugs are resolved in newer versions of the SPEAQeasy dependencies. SPEAQeasy includes an example dataset which we internally use for testing the execution as we make updates to SPEAQeasy. Given the open-source nature of SPEAQeasy and Nextflow, the SPEAQeasy code can be adapted if users are interested in switching processing tools or want to expand support to other genome references beyond mm10, rn6, hg19 and hg38. The SPEAQeasy code is available on GitHub (https:// github. com/ Liebe rInst itute/ SPEAQ easy and https:// github. com/ Liebe rInst itute/ SPEAQ easy-examp le), and can be expanded through interactions with users. We anticipate that SPEAQeasy will be useful for exploring gene expression at a finer resolution, such as using exon and exon-exon junction data. The latter is powerful for exploring the un-annotated transcriptome along with base-pair coverage data [35]. SPEAQeasy will benefit from the development of statistical and bioinformatics methods that integrate results across multiple levels of expression. Overview Pipeline execution begins with a preliminary gauge of read quality and other quality metrics, via FastQC 0.11.8 [15]. Reads are then optionally trimmed using Trimmomatic 0.39 [46], and a post-trimming quality assessment is performed again with FastQC. Alignment to a reference genome is performed with HISAT2 2.1.0 [37], along with pseudoalignment to the transcriptome with kallisto 0.46.1 [38] or Salmon 1.2.1 [39]. A combination of regtools 0.5.1 [40] and featureCounts (Subread 2.0.0) [14] is used to quantify genes, exons, and exon-exon junctions. At the same time, expressed regions (ERs) are optionally computed with the Bioconductor [29] R package derfinder [41]. The result is a RangedSummarizedExperiment [29] object with counts information, RData files with ER information, and plots visualizing the associated data. Variant calling is also performed for human samples, using bcftools 1.10.2 [44] to produce a VCF file [51] for the experiment. SPEAQeasy is flexible and allows for newer versions of software to be used in place of the ones listed above. Configuration Usage of SPEAQeasy involves configuring two files: the "main" script and a configuration file. 
The "main" script contains the command which runs the pipeline, along with options specific to the input data, and fundamental choices about how the pipeline should behave. In this script, the researcher must specify if reads are paired-end or single-end, the reference species/genome (i.e. hg38, hg19, mm10, or rn6), and the expected strandness pattern to see in all samples (e.g. "reverse"). Strandness is automatically inferred using pseudoalignment rates with kallisto [38], and the pipeline can be configured to either halt upon any disagreement between asserted and inferred strand, or simply warn and continue. In particular, we perform pseudoalignment to the reference transcriptome using a subset of reads from each sample, trying both the rf-stranded and fr-stranded command-line options accepted by kallisto. The number of successfully aligned reads for each option is used to deduce the actual strandness for each sample. For example, an approximately equal number (40-60%) of aligned reads for each option suggests the reads lack strand-specificity and are thus "unstranded"; a large enough folddifference between the two indicates either "reverse" or "forward"-strandness. Specifically, greater than 80% of total reads aligned must have aligned using the rf-stranded option to deduce a sample is "reverse"-stranded, and less than 20% to infer "forward"strandness. We have found these cutoffs to reliably identify inaccurate --strand specification from the user, while not being so strict as to mistakenly disagree with correct specification. Another example command option in the "main" script controls whether to trim samples based on adapter content metrics from FastQC [15], trim all samples, or not perform trimming at all. The configuration file allows for fine-tuning of pipeline settings and hardware resource demands for each pipeline component. Ease of use is a core focus in SPEAQeasy, and configuration files for SLURM, SGE, and local linux environments are pre-built with sensible defaults. The user is not required to modify the configuration file at all to appropriately run SPEAQeasy; however, a great degree of control and customization exists for those users who desire it. Advanced users can tweak simple configuration variables to pass arbitrary command-line arguments directly to each of the software tools invoked by SPEAQeasy. For example, when creating wiggle coverage files from BAM alignment files, the default is to normalize counts to 40 million mapped reads of 100 base pairs. This is achieved by the default value for the following variable in each configuration file: Suppose a researcher were instead interested in normalizing to 40 million mapped reads of 150 base pairs, and wanted to skip duplicate hit reads. The above variable could be adjusted to pass the appropriate command arguments to bam2wig.py [58]: The same procedure can be used to fine-tune any other software tool used in SPEAQeasy, allowing a level of control similar to directly running each step. At the same time, settings related to variables such as strandness, possible pairing of reads, and file naming choices are automatically accounted for. bam2wig_args = " − t 4000000000" bam2wig_args = " − t 6000000000 − u" Inputs A single file, called samples.manifest, is used to point SPEAQeasy to the input FASTQ files, and associate samples with particular IDs. It is a table saved as a tab-delimited text file, containing the path to each read (or pair of reads), optional MD5 sums, and a sample ID. 
Sample IDs can be repeated, which allows samples initially split across multiple files to be merged automatically (Fig. 5). Input files must be in FASTQ format, with ".fq" or ".fastq" extensions supported, and possibly with the additional ".gz" extension for gzip-compressed files. Outputs SPEAQeasy produces several output files, some of which are produced by the processing tools themselves (Additional file 3: Table S2) and others by SPEAQeasy for facilitating downstream analyses (Fig. 2). The main SPEAQeasy output files, relative to the specified --output directory, are: RangedSummarizedExperiment objects [29] that contain the raw expression counts (gene & exon: featureCounts; exon-exon junctions: from regtools; transcript: either kallisto or Salmon counts), the quality metrics as the sample phenotype data (Additional file 3: Table S3), and the expression feature information that depends on the reference genome used. Under the merged_variants/ directory for human samples, mergedVariants.vcf.gz: this is a Variant Call Format (VCF) file [51] with the information for 740 common variants that can be used to identify sample swaps. For example, if two or more brain regions were sequenced from a given donor, the inferred genotypes at these variants can be used to verify that samples are correctly grouped. If external DNA genotype information exists from a DNA genotyping chip, one can then verify that the RNA sample indeed matches the expected donor, to ensure that downstream expression quantitative trait locus (eQTL) analyses will use the correct RNA and DNA paired data. Under the coverage/bigWigs/ directory when SPEAQeasy is run with the --coverage option, [sample_name].bw for unstranded samples or [sample_name].forward.bw and [sample_name].reverse.bw for stranded samples: these are base-pair coverage bigWig files standardized to 40 million 100 base-pair reads per bigWig file. They can be used for identification of expressed regions in an annotation-agnostic way [41], for quantification of regions associated with degradation such as in the qSVA algorithm [59], and for visualization on a genome browser [60], among other uses. Software management SPEAQeasy provides two options for managing software dependencies. If docker [61] is available on the system where the user intends to run the pipeline, software can be managed in a truly reproducible and effortless manner. As a pipeline based on Nextflow, SPEAQeasy can isolate individual components of the workflow, called processes, inside docker containers. Containers describe the entire environment and set of software versions required to run a pipeline command (such as hisat2-align), eliminating common problems that may occur when a set of software tools (such as SPEAQeasy) is installed on a different system than the one it was developed on. Each docker image is pulled automatically at runtime if not already downloaded (on the first pipeline run), and otherwise the locally downloaded image is used. Because docker is not always available, or permissions are not trivial to configure, Linux users may alternatively install software dependencies locally. From within the repository directory, the user would run the command bash install_software.sh "local". This installs each software utility from source, where available, and as a pre-compiled binary otherwise. Because installation is performed within a subdirectory of the repository, the user need not have root access for the majority of tools. However, we require that Java and Python3 be globally installed. 
The motivation for this requirement is that we expect most users to have these tools already installed globally, and local installation of these tools is generally advised against because of potential conflicts with other installations on the system. Though docker and local software installation are the officially supported and recommended methods for managing software, other alternatives exist for interested users. SPEAQeasy includes a file called conf/command_paths_long.config, containing the long paths for each software utility to be called during pipeline execution. Users can substitute in the paths to already-installed software versions for any utility, in this file. Those familiar with Lmod environment modules [50] can also trivially specify in their configuration file module names to use for a particular SPEAQeasy process. However, this tends to only be a viable option for those with a diverse set of bioinformatics modules already installed. Annotation SPEAQeasy is intended to be greatly flexible with annotation and reference files. By default, annotation files (the reference genome, reference transcriptome, and transcript annotation) are pulled from GENCODE [62] for human and mouse samples, or Ensembl [63] for rat samples. The choice of species is controlled by the command flag "--reference" in the "main" script, which can hold values "hg38", "hg19", "mm10", or "rn6". In the configuration file, simple variables control the GENCODE release or ensembl version to be used. When the pipeline run is executed, SPEAQeasy checks if the specified annotation files have already been downloaded. If so, the download is not performed again for the current or future runs. This reflects a general feature of SPEAQeasy, provided by its Nextflow base-processes are never "repeated" if their outputs already exist. The outputs are simply cached and the associated processes are skipped. SPEAQeasy also offers easy control over the particular sequences included in the analysis-a feature we have not seen in other publically-available RNA-seq pipelines utilizing databases such as GENCODE or Ensembl. In particular, researchers are sometimes only interested in alignments/results associated with the canonical reference chromosomes (e.g. chr1-chr22, chrX, chrY, and chrMT for homo sapiens). Alternatively, sometimes extra contigs (sequences commonly beginning with "GL" or "KI") are a desired part of the analysis as well. RNA-seq workflows commonly overlook subtle disagreement between the sequences aligned against, and sequences included in downstream analysis. SPEAQeasy provides a single configuration variable, called anno_build, to avoid this issue, and capture the majority of use cases. Setting the variable to "main'' uses only the canonical reference sequences for the entire pipeline; a value of "primary" includes additional contigs seen in GENCODE [62] annotation files having the "primary" designation in their names (e.g. GRCh38.primary_assembly.genome.fa). Users are not limited to using GENCODE/Ensembl annotation, however. Instead, one can optionally point to a directory containing the required annotation files with the main command option "--annotation [directory path]". To specify this directory contains custom annotation files, rather than the location to place GENCODE/Ensembl files, one uses the option "--custom_anno [label]". The label associates internallyproduced files with a name for the particular annotation used. 
The required annotation files include a genome assembly fasta, a reference transcriptome fasta, and a transcriptome annotation GTF. For human samples, a list of sites in VCF format [51] at which to call variants is also required. Finally, if ERCC quantification is to be performed, an ERCC index for kallisto must be provided [45]. Use cases We expect that the majority of users will have access to cloud computing resources or a local computing cluster, managing computational resources across a potentially large set of members with a scheduler such as Simple Linux Utility for Resource Management (SLURM) or Sun Grid Engine / Son of Grid Engine (SGE). However, SPEAQeasy can also be run locally on a Linux-based machine. For each of these situations, a "main" script and associated configuration file are pre-configured for out-of-the-box compatibility. For example, a SLURM user would open run_pipeline_slurm.sh to set options for his/her experiment, and optionally adjust settings in conf/slurm.config (or conf/docker_slurm.config for docker users). In the configuration file, simple variables such as "memory" and "cpus" transparently control hardware resource specification for each process (such as main memory and number of CPU cores to use). These syntaxes come from Nextflow, which manages how to translate these simple user-defined options into a syntax recognized by the cluster (if applicable). However, Nextflow also makes it simple to explicitly specify cluster-specific options. Suppose, for example, that a particular user intends to use SPEAQeasy on an SGE-based computing cluster, but knows his/her cluster limits the default maximum file size that can be written during a job. If a SPEAQeasy process happens to exceed this limit, the user can find the process name in the appropriate config file (Additional file 3: Table S1), and add the line "clusterOptions = '-l h_fsize = 100G'" (this is the SGE syntax for raising the mentioned file size limit to 100G per file, a likely more liberal constraint). We also expect a common use case would involve sharing a single installation of SPEAQeasy among a number of users (e.g. a research lab). A new user wishing to run SPEAQeasy on his/her own dataset simply must copy the appropriate "main" script (e.g. run_pipeline_slurm.sh) to a desired directory, and modify it for the experiment. All users then benefit from automatic access to any annotation files which have been pulled or built by the pipeline in the past, and by default share configuration (potentially reducing work in optimizing setup specific to one's cluster). However, user-specific annotation locations or configuration settings can be chosen by simple command-line options, if preferred. Test samples The test samples were downloaded from the Sequence Read Archive (SRA) or simulated using polyester [64], depending on the organism, strandness, and pairing of the samples. Each was then subsetted to 100,000 reads.
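To make the per-process resource configuration described under Use cases above more concrete, the following is a minimal, hypothetical excerpt of the kind of settings a user might add to the appropriate config file. The process name is a placeholder (real process names are listed in Additional file 3: Table S1) and the values shown are illustrative rather than defaults.

process {
    withName: 'SomeAlignmentProcess' {
        cpus = 8
        memory = '16.GB'
        // SGE example from the text: raise the per-file size limit to 100G
        clusterOptions = '-l h_fsize=100G'
    }
}

Nextflow translates the cpus and memory settings into the scheduler's own resource requests, while clusterOptions passes scheduler-specific flags through unchanged.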
7,834.4
2020-12-11T00:00:00.000
[ "Computer Science", "Biology" ]
Assessment of the Quality of Newly Formed Bone around Titanium Alloy Implants by Using X-Ray Photoelectron Spectroscopy The aim of this study was to evaluate differences in bone quality between newly formed bone around titanium alloy implants and cortical bone by using X-ray photoelectron spectroscopy. In the narrow scan measurements at 4 weeks, the C1s, P2p, O1s, and Ca2p peaks of newly formed bone were observed at different peak ranges and strengths compared with those of cortical bone. At 8 weeks, the peak range and strength of newly formed bone were similar to those of cortical bone at C1s, P2p, and Ca2p, but not O1s. The results from this analysis indicate that the peaks and quantities of each element of newly formed bone were similar to those of cortical bone at 8 weeks, suggestive of a strong physicochemical resemblance. Introduction Dental implantation is a treatment method in which fixtures are implanted in the jawbone, followed by prosthetic implantation after a resting period of approximately 3-6 months, at which time new bone is formed around the fixtures [1]. This process sets the cortical bone as the primary anchorage unit. Bone modeling and remodeling processes are important (1) to induce long-term stability of the implants; (2) to develop osseointegration between implant materials and the bone; and (3) to allow the maturation of new bone around the implants. It has been reported that the maximum occlusal force in adults with a natural dentition is 430 N [2], and similar loads are likely to be applied to implants as well as normal prostheses. Therefore, to achieve long-term retention and stability of implants under such conditions, the quality of newly formed bone around implants is important. Many studies have reported that newly formed bone around implants is spongy bone [3]. However, although the morphology of newly formed bone is reportedly like spongy bone, it is difficult to discriminate whether the bone quality is mature like cortical bone, or immature like spongy, osteoid, or cartilaginous bone; therefore, evaluation of the bone quality is required. The quality of bone forming around implants has been investigated by various groups. Nakano et al. [4] evaluated bone density and alignment of biological apatite (BAp), and Boskey and Pleshko [5] used Fourier-transform infrared (FTIR) imaging to assess bone and cartilage quality and composition. In addition, our group has reported on the use of polarized microscopy [6] and scanning electron microscopy (SEM) [7] to evaluate new bone and cortical bone quality, as well as microscopic Raman spectroscopy to analyze phosphate peaks of bone apatite [6], and microcomputed tomography (micro-CT) to assess trabecular microarchitecture and bone mineral density (BMD) [8][9][10]. However, basic research on new bone formation has been sparse, and molecular and elemental characterization of BAp, the basic building block of bone, has generally been overlooked. Moreover, responses to early dynamic loading in implants, with analysis of changes in new bone quality associated with mineralization, remain an important issue for further research. In this study, differences in bone quality between newly formed bone around titanium (Ti) alloy implants and cortical bone were investigated using X-ray photoelectron spectroscopy (XPS). Experimental Animals. Six 18-week-old New Zealand White rabbits (Sankyo Labo Service Co., Tokyo, Japan) were used in experiments.
The rabbits were housed in individual metal cages at a room temperature of 23 ± 1 °C and humidity of 50 ± 1%, with ad libitum access to food and water. The experimental protocol was approved by an animal experimentation ethics committee (approval number ECA 07-0016). All experiments were conducted according to the Guidelines for the Treatment of Animals, Nihon University, Chiba, Japan. Implantation. Rabbits underwent general anesthesia with 2.0 mg/kg of intravenous Ketalar (Daiichi Sankyo, Tokyo, Japan). Implant cavities were surgically created in the tibia 10 mm distal to the knee joint, one each bilaterally, by using 1.0 mm and 3.0 mm diameter round bars, while irrigating the area with sterile saline. Implants were inserted into the right tibia, with each rabbit receiving one implant (8 implants were used in total). After surgery, the areas were disinfected with tincture of iodine for 3 days. Rabbits were sacrificed by anesthesia overdose at 4 or 8 weeks after implantation, and the tibias were resected. XPS Analysis. For XPS analysis, newly formed bone in close proximity to implants and cortical bone (control), which was not in close proximity to implants, were analyzed using thin-cut, nondecalcified histological specimens. Since both bone resorption by osteoclasts and bone formation by osteoblasts progress concurrently, differences in measurement of bone tissue results are apt to occur; therefore, it is difficult to specify the measurement area. In the present study, more than 3 areas of newly formed bone close to implants and of cortical bone were measured, and sites within which appropriate average values were obtained were determined as the measurement areas (Figure 2). The XPS analysis was performed under the following conditions: X-ray source: monochromatic Al Kα (1,486.6 eV); detection region: 20 μm in diameter; and detection depth: approximately 4-5 nm (take-off angle: 45°). Qualitative analysis was performed using wide scan measurement, and the chemical bonding conditions of detected elements were analyzed using narrow scan measurement. Hydroxyapatite powders were used as the standard specimens, and measurements were performed with a correction for relative sensitivity factors (RSFs). Results Qualitative analysis of the peak strengths of newly formed bone around implants and cortical bone at 4 and 8 weeks is shown in Figure 3. Although peaks of Ca, O, P, C, Mg, N, and Na were detected, a marked unevenness in these peaks was observed at 4 weeks, in contrast to 8 weeks. Based on the results of wide scan measurement, narrow scan measurement of elements related to Ca10(PO4)6(OH)2 was performed. The resulting overlaid C1s, O1s, Ca2p, and P2p spectra of newly formed and cortical bone at 4 weeks are shown in Figure 4; the results at 8 weeks are shown in Figure 5. The results of narrow scan measurement at 4 weeks (Figure 4) indicate that, although the peak strengths of newly formed and cortical bone were almost equal at O1s, the half-width became smaller in newly formed bone. At P2p and C1s, although a shifting of the peak in newly formed bone was observed, the half-width became smaller in newly formed bone; the peak strength of newly formed bone at C1s was lower than that of cortical bone. At Ca2p (Ca2p3/2), although the peak strength of newly formed bone was lower than that of cortical bone, the half-width became smaller in newly formed bone, and a shifting of the peak was observed.
The chemical bonding condition observed for each element in newly formed bone differed from that of cortical bone. As a result of narrow scan measurement at 8 weeks (Figure 5), the half-width and strength of newly formed bone were almost equal to those of cortical bone at C1s, P2p, and Ca2p. At O1s, the half-width became smaller in newly formed bone, and a shifting of the peak was observed. The chemical bonding condition observed for each element of newly formed bone was similar to that of cortical bone. Table 2 shows the results of the quantitative analysis of each element and the Ca/P ratio. The results for newly formed bone were Ca: 15.07 ± 2.83 weight percent and P: 7.83 ± 1.56 weight percent at 4 weeks; Ca: 17.33 ± 2.393 weight percent and P: 8.90 ± 0.80 weight percent at 8 weeks. These values gradually became similar to those of cortical bone. Discussion XPS analysis, as employed in the present experiment, has been used in industry since the 1960s and is now being applied clinically to the qualitative analysis of nanometer-sized areas on material surfaces. Elemental analysis showed complex spectra based on chemical bond arrangement and elemental compositional changes. Changes in spectra reveal alterations in chemical bonds based on changes in interatomic transition and valence-band state density. In the present study, newly formed bone at 4 weeks showed differences in the Ca/P ratio (Table 2), and in peak position, peak height, and half-width on the narrow scan measurement of each element (Figures 4 and 5). Many mineral elements were present in newly formed bone at 4 weeks, suggesting that the arrangement of chemical bonding and element composition in newly formed bone differed from those in cortical bone. Furthermore, the peak position, peak height, and half-width on narrow scan measurement at 8 weeks in newly formed and cortical bone showed similar traces, and the quantitative analysis of new and cortical bone at 8 weeks (Table 2) showed similar results. These results suggest that newly formed bone at 8 weeks showed a bone quality similar to that of cortical bone, due to progressive bone maturation and bone metabolism of minerals. Each element showed a complex spectrum reflecting changes in the arrangement of chemical bonding and in element composition. The spectrum is related to changes in the chemical bonding of elements caused by interatomic transition and changes in the density or state of the valence band. In the atomic arrangement of BAp crystals in immature new bone, various minor and trace elements, including CO3^2−, Na^+, and Mg^2+, were substituted for Ca^2+, PO4^3−, and OH^−. Such substitutions have been shown to affect the properties of BAp crystals [16]. For example, substitution of CO3^2− for PO4^3− in the apatite lattice can cause changes in chemical bonding that lead to strain and reduced crystallite size in BAp [16,17]. Such changes in chemical bonds can result in changes in binding energy on the order of several electron volts (eV). In chemical bonds, trace element peaks shift to higher energy as differences in the electronegativity of bond-forming elements and the electron valence of elements increase. Peak shifts are easily influenced by the surrounding molecular environment, and peak shift and half-width variations signify the presence of compounds with different molecular weights and atomic arrangements. The peak values associated with particular chemical bonds are well established in many reports [18][19][20][21].
The main bonding states of each element and the corresponding peaks are shown in Table 3. In the present study, differences were observed in XPS spectra between new bone at 4 weeks postimplantation and cortical bone, indicating immature bone quality due to BAp imperfection. In the chemical bonds of each spectrum, changes on the order of 0.1 eV affected BAp crystallinity [22]. The XPS spectra of new bone at 8 weeks postimplantation resembled those of cortical bone, and quantitative analysis showed higher Ca and P in newly formed bone compared to 4 weeks. This indicated that the composition of newly formed bone was closer to that of cortical bone at 8 weeks. Osseointegration around implants occurs as mineralization progresses, involving the accumulation of mineral components or BAp nanocrystals [23]. However, the present study was unable to analyze in detail the chemical bonding present in BAp nanocrystals, a subject for future analysis.
2,580.2
2012-06-18T00:00:00.000
[ "Materials Science", "Medicine" ]
Learning to Teach: How a Simulated Learning Environment Can Connect Theory to Practice in General and Special Education Educator Preparation Programs Educator preparation programs have moved away from offering interest-based courses that prepare a teacher candidate on a more surface level and have opted to integrate more authentic experiences with technology that are infused into coursework. This research study focused on redesigning key courses in both the general and special education graduate-level educator preparation programs (EPPs) to infuse learning experiences through a simulated learning environment (Mursion) to help bridge teacher candidates' coursework and field experiences, offering them robust experience with high leverage practices and technology that increases their own competency. Data from this study demonstrated that preservice teacher candidate work within the Mursion simulated learning environment increased use of high leverage practices related to strategic teaching, collaboration, differentiation, and providing feedback. Implications for instructional coaching, microteaching, repeated practice, and closing the research to practice gap are discussed. Introduction Calls to enhance technology initiatives in teacher education programs have increased exponentially. Today's schools, both K-12 and beyond, require students to have advanced digital skills as they continue to integrate technology into their curriculum. Thus, it is imperative that teacher candidates are prepared to work with students in this modality. Educator preparation programs work at a unique crossroads in this task [1]. Many educator preparation program faculty often fall under the "digital immigrant" category while working with teacher candidates designated as "digital natives" who are placed in schools that serve students who live in a technologically advanced world, but in environments whose technology offerings may differ significantly from one zip code to the next [2,3]. These complexities must be acknowledged in designing Educator Preparation Programs (EPPs). As a result, EPPs continue to make ongoing adjustments in the technology courses offered, as the options for technology in schools grow exponentially. Most educator preparation programs have moved away from offering interest-based courses that prepare a teacher candidate on a more surface level and have opted to integrate more authentic experiences with technology that are infused into coursework to provide a dispositional model for students of curiosity and confidence [4]. Though beneficial, the shift from adding technology courses into a program to infusing programs with rich technological offerings, expertise, and dispositions has precipitated a change: first, in the way we prepare ourselves as teacher educators, and second, in the way we design our courses to authentically integrate technological experiences for our teacher candidates [5]. This research study focused on redesigning key courses in both the general and special education graduate-level EPPs to infuse learning experiences through a technologically rich simulated learning environment, Mursion. Our goal was to provide a platform for teacher candidates to take what they were learning in their coursework on research, theory, and evidence-based practices and immediately apply it in the Mursion simulated learning environment.
This redesign allowed students to experience technology integration in their coursework and to witness the benefits of using key technologies for their development as teachers. Further, the redesign of the courses provided a model in the general and special education EPPs that demonstrated the benefits of the intentional infusion of technology in courses that were not traditionally designated as technologically robust. Based on the push for authentic integration of technology in EPPs alongside the gaps identified in the following literature review on the disconnect between coursework and field experiences, this study was designed to explore the following research questions: 1. How can Mursion support teacher candidates in improving their teaching skills? 2. How can Mursion support EPPs in practicing specific pedagogical teaching skills/strategies that can be disconnected from practicum experiences? Literature Review Educator preparation programs are tasked with two major outcomes: (1) helping teacher candidates prepare to be effective teachers through coursework that emphasizes their growth in understanding theory, content, and the context of the teaching profession, and (2) providing teacher candidates with field experiences that allow them to apply what they are learning in their coursework with students. Unfortunately, research in teacher education has acknowledged a disconnect between teachers' knowledge and their application of essential instruction and management skills gained through coursework in preparation programs [6]. Programs have consistently attempted to refine their preparation experiences to close this gap, but it remains a persistent problem of practice [7,8]. Moreover, many teacher educators are disconnected from the practicum experience because coursework and field experiences are often siloed, with coursework delivered by faculty who are no longer working in K-12 classrooms, and fieldwork overseen by supervisors who have affiliations with the school community but may not know the coursework of the educator preparation program (EPP) [9]. Field placements are beneficial because they give preservice teachers an opportunity to interact with students, colleagues, and administrators and provide teacher candidates opportunities to apply academic and behavioral skills in actual classrooms [10,11]. Research exists to document the impact of field experiences on beginning teachers' beliefs about teaching and learning, yet there is little research on how field experiences affect instructional practice [11,12]. One reason for this is that it is often difficult for teacher educators to align the conceptual understandings of practice with the range of complex situations that arise in actual classrooms [12]. Further, candidate readiness for the complexity of what they encounter in field placements may limit the benefits they gain from that setting [13]. Girod and Girod [14,15] explained that the complexities of real classrooms often require teachers to pick and choose what to focus on, which can be difficult considering their newness at coordinating multiple instruction and classroom management skills. Research to Practice Gaps Though attempts have been made to fill these gaps, educator preparation programs face a continual challenge to sufficiently prepare high-quality teachers to work effectively with students of all ability levels while simultaneously raising student achievement and ensuring their success in a multitude of classroom experiences [16].
Inadequate emphasis on providing preservice training on complex pedagogical and classroom management practices beyond traditional coursework and field activities may cause teacher candidates to complete EPPs without sufficient implementation knowledge of effective instructional practices and with limited classroom-ready skills [17]. Further, with their beginning knowledge of concepts from their coursework and short timeframes dedicated to field experiences, teacher candidates may not receive enough opportunities to practice and refine their teaching skills in a way that builds efficacy. Often, they will teach a lesson once without the opportunity to revisit, refine, and improve their practice upon reflection. They must move on to the next lesson, so they may never get the repeated practice that leads to advanced skills. Importantly, while research documents a positive connection between teachers' subject matter knowledge and their performance in the classroom, it has also been established that teachers with advanced preparation (in addition to typical coursework and fieldwork experiences) in teaching methods and strategies have a greater chance of successful longevity in the classroom [18]. Consequently, it is crucial that EPPs provide teacher candidates with early and intentional opportunities to practice teaching methods, implement strategies, receive focused feedback on teaching practices, and refine their teaching, with repeated practice built on this feedback. When teachers are well-prepared in both content and pedagogy, it makes an enormous difference not only to their effectiveness in the classroom but also whether they are likely to enter and stay in teaching [19]. Aligning Coursework and Field Experiences In efforts to improve the alignment of coursework and field experiences, Thomassen and Rive [20] suggested that it may be necessary to create simplified contexts where novice teachers can initially gain proficiency with target skills. Some common approaches used in EPPs to simplify the initial acquisition of target skills include case-based methods of instruction (e.g., [21][22][23][24]), video analysis (e.g., [25]) and role-playing/microteaching (e.g., [26][27][28]). These approaches augment traditional didactic instruction by exposing teachers to typical classroom scenarios and teaching strategies but are limited in authenticity and complexity because they do not require teachers to realistically respond to the range of behavior and academic challenges they will face in a typical classroom. Importantly, transfer from these simplified situations to actual classrooms depends on the extent to which practice opportunities match the authentic situation in which the learner applies the information [29]. As the effort to improve EPPs continues and evidence of field experience effectiveness increases, so does the need for innovative ways to incorporate these experiences into program coursework [30]. Therefore, EPPs must examine a variety of outcome variables associated with effective teacher performance and assess preservice teachers' knowledge and instructional experiences in order to broaden and enhance their teaching skills [31]. One response to this need is the innovative use of multimedia platforms such as simulated learning environments within EPPs. A simulated learning environment allows for combined learning in content knowledge, teaching pedagogy, and problem-solving strategies [32,33]. 
According to the theory of situated learning [29], training in this type of environment should readily transfer to actual classrooms. A simulation is a person, device, or set of conditions that attempts to present an authentic problem that must be responded to as one would under natural circumstances [34][35][36]. Simulations allow individuals to have repeated trials involving high-stakes situations without risk or loss of valuable resources [37]. The capacity for a simulation to be an effective learning approach is based largely on the ability of the simulation to represent the targeted scenarios in a manner that allows the transference of learning to real-time practice [38,39]. Historical Context of Simulated Learning Environments In the early 1990s, studies of learning and cognition in the field of educational psychology became heavily influenced by examination of the effects of the social environment and its context on the learner and their learning [29,40]. As educators built on these concepts in the early 2000s, the idea of 'situated learning' and explorations of the implications of the social environment became prominent in the field of teacher education [41]. Central to the intersection of learning theory and its application in a classroom setting are the notions of apprenticeship learning [42] and legitimate peripheral participation [43]. As a novice teacher is apprenticed to an expert through observations or in practice through apprenticeship activities, their learning is scaffolded into increasingly more central professional activities. How the apprenticeship of teaching occurs for teacher candidates has been the subject of much debate and revision. Models of apprenticeship vary: on one side, candidates can be involved in observations over time before ever teaching; on the other, candidates are placed full time in classrooms, teaching on emergency licenses without a mentor, before ever entering an educator preparation program. There are models in between these two extremes as well. Despite this variation, the majority of teacher education programs align around two major components: coursework and field experiences. Coursework is where preservice educators are exposed to the basic theories of teaching and learning, and field placements are where they apply related strategies [44]. Field placements are beneficial because they give preservice teachers an opportunity to interact with students, colleagues, and administrators. These field experiences help preservice educators understand how factors such as school culture, district policies, and state legislation influence daily classroom functions. Further, field placements give preservice teachers opportunities to balance academic, differentiation-related, theoretical, and managerial demands in actual classrooms. Techniques for concurrently managing these many teaching responsibilities in the classroom cannot be represented with adequate complexity through didactic instruction alone. In spite of the many benefits of field experiences, there are limitations that must be considered as well. It is difficult for teacher educators to align field experience with the intended purposes of the placement because many classroom and school factors cannot be controlled (e.g., curriculum, diversity of the students, school culture, quality of administration [45]). In addition, it is nearly impossible for teacher educators to match the many variables encountered in field placements with the performance levels of the teacher.
For many teacher candidates, placement in the field can be a harrowing experience that has them rely on survival instincts more often than the application of concepts they are learning in their coursework. For example, teachers may be so overwhelmed with keeping students on the task that they are unable to differentiate instruction to adapt to individual student needs. Despite these difficulties, Grossman [46] maintains that a crucial element of EPPs is the opportunity to practice complex teaching skills in classroom situations that successively resemble actual practice. Traditional field placements may be too complicated for beginning teachers to learn new skills. Thus, novice educators may need additional simplified contexts for practicing essential teaching skills. The complexities of the knowledge and skills candidates are expected to learn can be daunting, and many faculty seek better ways to provide opportunities for their students to practice and receive feedback prior to stepping into a classroom. Teacher educators also desire timely experiences that allow for student failure, reflection on this failure, and a chance to try again. In response, teacher educators often utilize strategies that simulate real classroom scenarios. Historically, simulated teaching experiences involved either roleplaying with colleagues, watching films, playing card games, or engaging in case-based problems presented via print. Early years of simulation favored film-and print-based content mixed with role-playing. In the mid-1970s, technology-based simulations emerged in EPPs [45], allowing teachers to interact with virtual students and solve problems presented via computers. As computers became more accessible to the general population, technology-based simulations have become increasingly accessible in EPPs. Most virtual classrooms, including game-like environments and Second Life simulations, are accessed from a personal computer. The Sim School, TeachME (Teaching in a Mixed Reality Environment) and Cook School District simulation are examples of using virtual reality for the development of teaching skills [14,47,48]. Game-like simulators allow teachers to instruct students and make ongoing instructional decisions [49], usually by selecting from a menu of options, which triggers a range of preprogrammed student responses. Probably the most sophisticated type of virtual classroom available is a full-immersion simulation (e.g., TeachLivE and Mursion). Interactions during full-immersion simulations differ from game-like classrooms and Second Life environments because preservice teachers enter a physical classroom with student avatars projected on a screen and can instruct in a realistic classroom setting, physically moving and interacting with students as the "human-in-the-loop" produces authentic student responses in real-time which is more like real teaching. Benefits of Simulated Learning Environments Simulated learning environments have the opportunity to provide preservice teacher candidates opportunities to improve pedagogical and conversational skills (e.g., academic strategies, behavior management strategies, parent meetings, etc.) through rehearsal and reflection. Additionally, experiences in simulated learning environments enable teacher candidates to transfer knowledge learned from college coursework and apply it in the context of the simulated learning environment, thereby deepening understanding of skills and providing early, contextualized professional development [50]. 
Within simulated learning environments, repeated teaching trials allow candidates to practice high-stakes situations without risking the loss of resources (e.g., money, time, and people). Experiences like these can create early opportunities for teacher candidates to construct and solidify evidence-based practices that are grounded in authentic and constructive teaching experiences. A simulated learning environment can be defined as the combination of real and virtual worlds that provides users with a sense of presence. The Mursion simulated learning environment, specifically discussed in this manuscript, is powered by a blend of artificial and human intelligence driven by simulation specialists ("interactors"), trained professionals who orchestrate the interactions between avatar-based characters and trainees. This approach provides a realistic experience that involves intense human-to-human interactions that can become increasingly impactful. The simulation also allows learners to fully immerse themselves in an experience that has the capacity to produce significant and lasting changes in practice [51]. The Mursion simulated learning environments use avatars that embody specific characteristics typified by personalities that would exist within any classroom environment and represent an array of demographics and personalities [50]. During a simulation, the avatars and participants engage in interactions to practice various strategies by providing real-time verbalization of teaching or other practice-based interactions (e.g., parent discussions) in a classroom or other appropriate setting with proportionally sized avatars that provide immediate responses [37,52]. Figures 1-3 show examples of some of the avatars that can be utilized within the Mursion simulated environment. For the purpose of educator preparation, various levels of complexity can be controlled depending on the year of the educator preparation program that the teacher candidate is in, allowing the levels of behavior and response rates to be adjusted. Additionally, the way in which an avatar responds can also be modified [53]. This variability affords EPPs the flexibility needed to individualize practice teaching opportunities specific to the needs of teacher candidates, thus the premise of this study. Participants The researchers studied one class of graduate-level initial licensure special education teacher candidates (n = 19) and one class of graduate-level initial licensure general education secondary level candidates (n = 17), both completing their respective programs' required 'reading instruction' course.
Thirteen (nine general education, four special education) of the total participants identified as male and twenty-three (ten general education, thirteen special education) identified as female. The age of participants ranged from 22-55, and all participants were in their first year of a two-year educator preparation program. Three of the general education teacher candidates had prior experience in working in a general education classroom (e.g., instruction assistant) and fourteen of the special education teacher candidates had prior experience working in a special education classroom (e.g., hired on an emergency-teaching license or as an instructional assistant). Sixteen of the general education teacher candidates had no prior teaching experience in any classroom setting, whereas four of the special education teacher candidates had no prior teaching experience in any classroom setting. Procedures The researchers coordinated their term of class lectures and assignments to disseminate information specific to instruction and taken from general and special education high-leverage practices (e.g., [54,55]). This sought to build a foundation for students across general and special education programs in an effort to bridge the gap between knowledge and practices for teaching in an inclusive classroom (see Table 1). These high leverage practices (HLPs) complimented one another and could be used by both general and special education teacher candidates. Prior to beginning sessions in the Mursion simulated learning environment, all participants were administered the Teacher Self Efficacy Scale (TSES) (long-form) [56], to gather pre-study perceived self-efficacy of their teaching practices. The TSES long form includes a 24-item questionnaire that includes common question stems known to be difficult for teachers in school settings and/or activities. Specifically, the 'instructional strategies' related question stems as determined by Tschannen-Moran and Woolfolk Hoy [56] were of interest to the researchers for the purpose of this study and included those found in Table 2 below. Participants rated each question on a scale from 1 (Nothing) to 9 (A Great Deal). This allowed the researchers to gain insight into teacher candidates' current perceptions and self-efficacy of their teaching. Five Mursion sessions were scheduled for each group of general and special education teacher candidates across the fall and winter terms. Participants in each respective class were assigned a teaching partner so that they could co-teach a lesson in the simulated learning environment with a middle school class of avatars with diverse learning abilities, one of which specifically has characteristics of a student with a reading learning disability. The co-teaching partners planned instruction for the middle school avatars on how to use informational texts and content-specific vocabulary. Participants taught the same lesson across all Mursion sessions and started the lesson at the beginning each time. The researchers decided to do this because recommendations for teaching with technology maintain that teacher educators should ensure sufficient repetition during coaching-centered learning [57]. Teacher candidates in this study worked with a small group of five, diverse, middle-school-aged avatars (see Figure 1) who displayed a variety of learning strengths and needs that teacher candidates could ascertain through conversation and interaction. 
Coaching guidance during each Mursion session was provided for the special education students by the general education professor, and coaching for the general education students was provided by the special education professor. Each professor observed each of the simulations and took anecdotal notes on each of the participants, specifically on their use of high-leverage practices, including explaining and modeling content, practices, and strategies; coordinating and adjusting instruction during a lesson; and checking student understanding during and at the conclusion of lessons. High leverage practices were a focus in observations, as well as in planning for the class sessions attended by the students when they were not teaching in Mursion. High leverage practices demonstrate the potential of bridging the research to practice gap and create shared understandings of effective teaching across programs [54]. Following each Mursion session, all participants completed an open-ended self-reflection and responded individually to the following questions: (1) What do you think went well during your session today? (2) What would you change for your next session? and (3) What goals do you have for you and your teaching partner for your next session? After the final Mursion session, all participants completed the TSES scale again to gather post-study perceived self-efficacy of their individual teaching practices so that the researchers could determine changes in perceived self-efficacy between pre- and post-administrations. Data Analysis The researchers used a qualitative phenomenological approach to analyze the data collected, aiming to develop a clear and articulate description of participants' experiences in the Mursion simulated learning environment. As part of the procedures for phenomenological analysis, the researchers did not have any preconceived ideas or themes when coding the data. The researchers looked at participant responses both individually and as a whole group to horizontally derive specific topics and/or themes of meaning through a scaffolded approach [58]. The pre- and post-self-efficacy responses and open-ended self-reflection responses were entered into an Excel spreadsheet. Pre- and post-TSES responses [56], participant open-ended self-reflections, and the researchers' anecdotal observation notes from each Mursion session were analyzed, noting when specific behaviors or interactions occurred [59]. Participants' self-reflection responses were qualitatively coded using the constant comparison method, noting specific keywords and phrases, including strategies, self-efficacy, collaboration, explicit instruction, high leverage practices, and coaching [60]. After all pre- and post-self-reflection responses were coded by theme, the researchers collectively reviewed all of the reflections and double-coded the responses to determine and establish inter-rater reliability. Inter-rater reliability was calculated by counting the number of ratings in agreement and the total number of ratings; the number in agreement was divided by the total and converted to a percentage, indicating an inter-rater reliability of 95%. Pre- and post-TSES scores were analyzed through descriptive analysis, which supported the phenomenological methodology in that it assists in the recognition of a socially meaningful phenomenon through the identification of salient features, relevant constructs, and available measures.
Descriptive data also noted when patterns in the data are observed and subsequently communicated in a format that is well suited to depict the phenomenon [61]. Research Question 1 The first research question investigated was how can Mursion support teacher candidates in improving their teaching skills? Data from the open-ended student reflections indicated that across all participants (n = 37), a majority increased their perceived self-efficacy in explicitly explaining and modeling content between initial and final sessions. Half of all participants demonstrated improvement in integrating strategies for teaching a concept, and 60% of all participants became more mindful of individual student participation in their lessons, checking in with greater frequency on student understanding and engagement. However, 100% of participants increased their efficacy in coordinating and adjusting instruction during a lesson. Participant self-reflection data aligned with the high leverage practices noted in the professors' observations: explaining and modeling content, practices, and strategies; coordinating and adjusting instruction during a lesson; and checking student understanding during and at the conclusion of lessons. Special Education Preservice Teachers Descriptive statistics from the TSES pre-assessment special education preservice teachers resulted in an overall mean of 6.21 and a standard deviation (SD) of 1.74 across all TSES instructional strategy questions. Post-assessment results indicated an overall mean of 7.29 and an SD of 1.35. When investigating the TSES instructional strategy question responses individually, the largest difference in means (1.79) between pre-/post-was found in question 23: How well can you implement alternative strategies in your classroom? The smallest difference in means (0.69) between pre-/post-was found in question 20: To what extent can you provide an alternative explanation and/or an example when students are confused? (see Table 3). General Education Preservice Teachers Descriptive statistics from the TSES pre-assessment general education preservice teachers resulted in an overall mean of 5.98 and a standard deviation (SD) of 1.36 across all TSES instructional strategy questions. Post-assessment results indicated an overall mean of 6.85 and an SD of 1.34. When investigating the TSES instructional strategy question responses individually, the largest difference in means (1.41) between pre-/post-was found in question 10: How much can you gauge student comprehension on what you have taught? The smallest difference in means (0.53) between pre-/post-was found in question 7: How well can you respond to difficult questions from your students? (see Table 4). This descriptive data indicates growth across both groups (general and special education) of teacher candidates perceived self-efficacy in instructional strategy related questions. Differences between groups can be explained by aligning self-efficacy measures with both topics and timing in the students' coursework. Both course syllabus calendars highlighted specific instruction related to high leverage practices as topics for class sessions and these aligned with the dates of students' Mursion sessions and their teaching reflections. This allowed the researchers to triangulate growth not only in the pre-and post-test TSES measure and post-Mursion self-reflection responses, but also by what was covered in class. 
The special education preservice teachers were in their first year (first term) of a two-year program and were taking their first reading course. This reading course provided them with new strategies for teaching reading specifically to students with disabilities each week. The data indicated that their repertoire of strategies grew, offering them many alternative strategies for teaching vocabulary and reading, as indicated by the large difference in means in TSES Question 23. However, because they were at the beginning of their program, their knowledge of differentiating ways to explain concepts and strategies had not yet been fully covered, as indicated by the smallest difference in means in TSES Question 20. The general education students' scores indicated similar insights. This study occurred during a degree required content literacy course, which focused heavily on reading comprehension. Class time was split between the content for the course and participation in Mursion sessions. Consequently, the largest difference in means for their self-efficacy scores appears in TSES Question 10 reflecting on their ability to measure reading comprehension. Relatedly, the lower difference in mean scores for TSES Question 7, as they reflected on their efficacy in responding to difficult questions, indicated a disconnect between the teaching they were doing for Mursion-vocabulary and reading comprehension, and a major focus of their other coursework, which was advanced pedagogy in their respective content areas (e.g., social studies, math, science). Many of their comments in class implied an assumption that teaching reading was still very much a basic skill and higher-order teaching occurred in their content areas. Changing this perception was a major focus of the class, and final exam scores indicated it had been met, but the focus area of vocabulary instruction in the Mursion sessions limited their conception of growth in their response to Question 7. Importantly, measures of growth across all TSES instructional strategy questions from preto post-measures demonstrate the impact of integrating Mursion sessions into their coursework. Immediate application of research (strategies learned in class) to practice (repeated teaching in Mursion to practice learned strategies) afforded an increase in the teacher candidates' perceptions of their self-efficacy in using high leverage practices in their teaching as outlined in Table 1. It is apparent that this growth was facilitated by both the Mursion technology and the purposeful alignment and adjustment of the courses to integrate technology for learning. Qualitative Data Analysis Qualitative data was taken from the students' reflections after each session. Student statements lend strength to the patterns indicated in the descriptive statistics included above. Qualitative data is organized below with headings of the high leverage practices that were used for coaching and observations. These are followed subheadings organized by themes from the coding of open-ended responses to the reflection questions to demonstrate their connection to the HLPs (see Table 1). Strategies For each of the five sessions, student pairs submitted new lesson plans, refining them each time. Over time, we noticed that lessons were refined to include more explicit descriptions of modeling content and strategies. 
One participant stated, "I want to make sure that I use various modalities and strategies to help the students make connections with the text through pictures, games, keywords, and using their own words." Other student reflections demonstrated attention to key strategies that had been emphasized in class the week before. For example, after learning about being explicit about expectations, one student wrote, "Add a target to the agenda so students know what is expected of them; Begin quicker as not to waste time." After a class session focused on visual aids and other scaffolds for supporting learning, one pair incorporated more visual aids in their teaching. One partner reflected, "The visual aids really added more interaction with the students this time and gave us a chance to go into expressions that are familiar to them by using emojis." All student lesson reflections demonstrated their growing understanding of class concepts and increased efforts to apply them in their teaching. The immediate time they were given to adjust their lessons and then try to teach the same lesson again with their planned adjustments demonstrated increased specificity in their reflections on strategies and their benefits for learning. These themes were directly correlated to the following HLPs: Explaining and modeling content, practices, and strategies (#2) and Using explicit instruction (#16) (see Table 1). Self-Efficacy Self-efficacy was demonstrated by the application of their classroom learning into their lesson plans each time, as well as through statements indicating their increased self-efficacy toward teaching content using high-leverage practices. Their statements demonstrated a marked shift from more abstract and general statements in the beginning, such as "I brought a positive attitude to my teaching" to statements that were more specific about how they would teach the next time, such as the Gen Ed student who said, "I will decrease the intro/relationship-building process in the beginning and focus on building relationships through the lesson activities; I will give students more time to practice using the words and definitions." A special education preservice teacher recounted after their fourth session, "I am improving by having a good base and options for lessons and activities; an understanding it does not have to be perfect. I make sure that I utilize people and resources that are available through my program and I'm realizing it's okay to practice." Collaboration Students in both groups demonstrated an increased awareness of partner interactions in teaching in their reflections. One special education/preservice teaching pair reflected, "We collaborated very well today. We made adjustments quickly and effectively and our teaching plan was effective. We used explicit instruction and had great student engagement." When reflecting on areas of improvement, they stated, "I would try to talk less and give my teaching partner more space to speak spontaneously; I would attempt to be more time conscious and I would stick to the lesson plan better." One general education preservice teacher wrote, "My partner and I problem solved our pattern of flaws and identified a solution to change our co-teaching method." Action Based Goals for Future Teaching In each reflection, students were encouraged to set goals for their future teaching. Over time, reflections demonstrated an increased specificity in action toward goals for future teaching. One participant reflected, "I was very proud of this lesson. 
We got straight to the point of the lesson which was vocabulary and had equal amounts of input and control as teachers; we were also able to incorporate a visual activity which was a goal from the last session." Another participant wrote, "Our last lesson felt streamlined and focused; The goal was to review vocabulary words from the article, practice strategies for understanding, and make connections to the text; I feel like we accomplished that tonight." Student goals also demonstrated increased efficacy by suggesting actions aligned with the goal. A general education preservice teacher wrote, "We should not spend as much time in the warmup. Time management is something I would change for next time; I would wear a watch so I can manage time better." Rather than writing a goal and reflecting that they weren't sure how to improve as appeared in many beginning reflections, later reflections continued to indicate a better sense of not only what to improve, but how, such as this reflection from a special education/preservice teaching pair, "Today's sessions did not flow as well as I hoped; Since we attempted a new approach, maybe we could have done more prep and troubleshooting; I want to discuss making a more meaningful connection with both vocab and pictures. " These two themes correlated directly with the following HLPs: Coordinating and adjusting instruction during a lesson (#6) and Adapt curriculum tasks and materials for specific learning goals (#13) (see Table 1). Differentiation Student reflections exhibited growth in authentic understandings of individual student needs. Reflecting on attention to individual student needs, participants set goals to increase their attention and teaching toward student needs. From example reflections, "Next time, we would encourage students to make more personal connections with the vocabulary words; see if students can come up with their own definitions and more creative sentences", "We could use a visual; Harrison had a hard time finding the word assess and it may have been easier if he could have seen it", and, "In our next lesson, we could integrate activities students can do independently or with a partner to practice the skills; integrate more differentiation specific to Harrison." These insights that focused specifically on Harrison (a Mursion avatar who exhibits characteristics of a learning disability) demonstrated student attention to differentiation, an inclusive classroom that provides personalized instruction for a student who may have a learning disability, and increased attention to student needs rather than teacher needs. This theme correlated directly with the following HLPs: Checking Student Understanding During and at the Conclusion of Lessons (#15) and Provide positive and constructive feedback to guide students' learning and behavior (#22) (see Table 1). Instructional Coaching Data from this study suggest that sessions in the Mursion simulated learning environment, in addition to instructional coaching, were effective tools that allowed for increased reflection and improved teacher efficacy. Continued Mursion sessions that combined instructional coaching with self-reflection offered teacher candidates the opportunity to continue to make changes that increase their self-efficacy in teaching, heighten their ability to work with students who have differing learning needs, structure their instruction to be more direct, explicit, and strategic, and practice intentional improvement aligned with their self-identified teaching goals. 
Research Question 2 The second research question investigated: How can Mursion support EPPs in practicing specific pedagogical teaching skills/strategies that can be disconnected from practicum experiences? This study provided a cross-programmatic impact. As students were able to experience microteaching and repeated practice through their participation in the five Mursion sessions, other professors who were not part of this study noted the students' improved attention to applying specific strategies in their planning in pragmatic and effective ways, especially while teaching vocabulary which was the focus of these sessions. One example of this was from a science content pedagogy instructor stating that the student was "speaking up with more confidence about how to teach vocabulary in peer planning sessions in class". This evidence of transfer between the Mursion experience and other coursework is a positive development across content strands. Finally, as these researchers continued to collaborate, it became evident that there was much about our students' field experiences that we did not know. As we shared coaching observations after the sessions and discussed the teaching we were witnessing from our preservice teachers, we acknowledged the disconnect we had from their development as teachers by not supervising them in the field. We realized that there were many changes we could make within our own programs that would help us to align our coursework more closely with the field experiences of our preservice teachers. Knowing that teacher candidates are more likely to be effective and to stay in the profession when their preparation experiences are connected to classroom practice [62], we have built-in more experiences where our students participate in case studies, video analysis, role-playing, and ultimately, Mursion. Consequently, finding methods for bridging the gap between siloed areas of study, such as special education and general education, can lead to an increased shared understanding of both disciplines, the importance of collaboration, and more authentic integration of high leverage practices for inclusive classrooms. Limitations Limitations to this study include data sampling from one institution of higher education as well as the convenience sampling of the students enrolled in the researcher's respective reading courses in the general and special education preparation programs. Additionally, the researchers had access to Mursion and funding that supported the hourly costs associated to run the simulated sessions. Mursion is not accessible unless a site has a contractual agreement and funding to pay for the related sessions and services. Directions for Future Research Educator preparation programs need to provide early opportunities for preservice teacher candidates to repeatedly perform the same or similar teaching tasks designed with their current pedagogical skills and content knowledge in mind. During repeated practice, preservice teachers should strive for incremental improvement by closely monitoring performance, looking for clues, examining performance data, and asking questions that prompt reflection. Simulated learning environments hold immense potential for fieldwork in EPPs which can be facilitated through the combined use of instructional coaching. 
Additional studies examining the combined use of a simulated learning environment and instructional coaching with preservice teachers would further verify the impact of their increased use in EPPs. Conclusions Learning how to effectively teach to the diverse needs of students is an art and takes a variety of opportunities for teacher candidates to understand the multifaceted aspects associated with the daily functions of a classroom. "With increased expectations for inclusive models of K-12 education for students with disabilities, there has been an emphasis on effective collaboration among general and special education teachers" [63]. EPPs need to consider the importance of training teacher candidates to work with typically and atypically developing students both academically and behaviorally. "Specifically, when working with students with disabilities, collaboration is more than just working together and takes effort, diligence, and training" [64] (p. 84). Further, research has long recommended the integration of technology for improved outcomes in educating preservice teacher candidates, demonstrating strong benefits. Dieker, Straub, Hughes, Hynes and Hardin [37] discussed the benefits of a simulated learning environment, stating, "We've found that just four 10-min simulator sessions on specific teaching practice-such as how to give targeted feedback or how to ask open-ended questions-can change at least one crucial teaching behavior" (p. 56). Our research affirms this conclusion. With the landscape of today's K-12 classrooms changing, preparing all teachers to effectively support and teach a diverse group of students with and without disabilities is imperative for today's inclusive classrooms. "Collaboration should take into account that all team members should demonstrate strong pedagogical and communication skills, the ability to share knowledge, and willingness to find the time to support teamwork where all members are responsible and accountable" [65] (p. 105). Increased opportunities for early professional collaboration in EPPs will allow teacher candidates to model and be coached on effective academic and behavioral pedagogical strategies to best support the differentiated needs of learners in the classroom. As efforts to improve EPPs continue and evidence of the benefits of experiential learning grows, so does the need for innovative ways to incorporate such aspects into higher education courses. The need for such environments in EPPs is growing because of the changing demographics of both students and teachers in today's schools. Therefore, virtual simulations can potentially change the way in which preservice educators are trained. These environments also benefit teacher educators who may not otherwise have the opportunity to observe their preservice candidates in fieldwork. By working with candidates in this environment, teacher educators can refine coursework to better match the reality of field experiences. Two observations from our work underscore why this matters. First, teacher candidates often lack the readiness to learn the difficult and complex skills they are to master. Rather than being eager to learn about instructional design, many students are more interested in classroom management and discipline when they first begin education coursework. They are simply too new to be prepared to discuss differentiation, alignment, and adaptations to their planning and instruction.
Second, teacher educators need to provide instructional settings that are authentic in their emphasis on candidate learning. Often, traditional field placements are high-stakes settings that don't allow our candidates to experiment and grow with ongoing feedback. These settings don't allow them to try out their ideas because candidates are immediately put on stage with no chance for rehearsal. They can quickly fail by losing control of the class, by losing face in front of their more knowledgeable mentor teacher, or by teaching a lesson only once without being able to adjust before moving on to the next one. Simulated learning environments remedy these issues while offering EPPs robust options for integrating high leverage practices into authentic teaching experiences that connect coursework and fieldwork. Conflicts of Interest: The authors declare no conflict of interest.
9,879.8
2020-07-01T00:00:00.000
[ "Education", "Computer Science" ]
Breaking Out: The Dynamics of Immigrant Owned Businesses As far as integration policy in a Danish context is concerned, a phenomenon has been observed during the last decade: Immigrant businesses are spreading rapidly in the country, dominating certain business lines in deprived inner city areas. According to registry and survey data, most immigrant businesses, and particularly those owned by immigrants from less developed countries in Asia, the Middle East and Africa, are tiny self-employment units in which profits are low and working hours long for the owners. They occupy mainly traditional small-firm dominated business lines, which the majority population tend to abandon anyway. Only seldom do they grow into larger firms and shift to more advanced and profitable business fields. This pattern, however, seems to be slowly changing in that some well-educated first and second generation immigrant groups (among them particularly women) have the potential to start and run more advanced and profitable businesses outside the traditional ethnic business lines and outside minority dominated inner city areas. Key determinants in this process seem to be owner qualifications, network patterns, financial resources and cross border business relations. INTRODUCTION Migration leads, often with a significant time lag, to the formation of immigrant businesses in recipient countries. In its narrowest sense, an immigrant business may simply be defined as "a firm owned by an immigrant". As such, the phenomenon is interesting only from the point of view of immigrants themselves, opening an alternative route into the economic life of their new societies. It also has interest from a quantitative point of view, as immigrants may influence the venture creation process in their new economies due to differences in the entrepreneurship levels of immigrants and the majority population. Immigrant businesses are, however, also interesting from a qualitative point of view. They are usually heavily infused with cultural-ethnic elements influencing what they produce, how they are managed, the composition of the staff, how they relate to other businesses, and how they build their international relationships. In other words, they add variation and international outlook to the economy of the recipient country, as a consequence of the opportunity structure created by a combination of several factors such as the specific welfare state regime, the specific structure of the market, the character and the scope of the relation between immigrant populations and the institutions (be they formal or informal) of the host country, and the dynamics of the specific immigrant populations [1]. Immigrant businesses are not distributed randomly in the economy of the recipient country. They are predominantly small-scale family firms, clustered in specific business lines and urban areas [2]. This reflects the competitive advantages they enjoy in certain business fields compared to businesses owned by the majority population. Competitive advantages for immigrant businesses vis-à-vis the market are significant for goods with a significant ethnic component such as clothes and food. This is particularly true in immigrant-dense areas where the "home market" provides immigrants with better business opportunities than entrepreneurs from the majority population due to co-ethnic trust and communication mechanisms.
However, it also applies to immigrant businesses in culture-loaded fields that deal predominantly with the majority population, because these customers may find that they are more convincing and competent producers or traders of such products [3]. Immigrant businesses may also benefit from co-ethnic solidarity and resource mobilization, which influence how they get started, with whom they do business, and the way employment patterns are shaped. Immigrant groups often choose to employ and do business with co-ethnics because trust relations are easier to build up with those of shared cultural backgrounds and because they are, as a group, under pressure from their new society and in need of in-group solidarity in order to cope with that pressure [4]. The focus in this paper is on breaking-out mechanisms vis-à-vis business lines, using the Danish situation as its empirical basis. We want to document (which can be done more accurately in Denmark than in most other countries through registry analysis) that immigrant businesses are generally less profitable than the personally owned businesses of the majority population and not a route to growing and prospering businesses. Further, through a comparison of five selected immigrant groups, studied through registry analysis as well as survey data, we want to trace their different strategies with respect to self-employment and business development, and their ability to make firms grow, restructure and relocate. Finally, we want to identify determinants that influence breaking-out patterns and potentials. As our focus is on breaking-out mechanisms we apply the immigrant business concept rather than the narrower ethnic business one. Our focus is not only on immigrant businesses which succeed in growing on the basis of ethnic characteristics and relationships, but also on businesses that assimilate into the mainstream economy by abandoning ethnic traits apart from that of immigrant ownership. A MODEL OF IMMIGRANT BUSINESS BREAKING OUT FROM MARGINAL BUSINESS FIELDS Taken together, the fragments of evidence from the USA, Britain and East Asia [5] illustrate that immigrant businesses sometimes do start to grow, change strategies, and break out of enclaved immigrant areas. Such change seems to take place at the expense of the ethnic character of these businesses. Their rooting in the ethnic community is relaxed, but immigrant ownership is retained, as are often the close trading relationships with immigrant businesses, even across borders. Ram and Jones have elaborated a model which attempts to capture the market break-out process, suggesting two basic dimensions: local vs. non-local, and ethnic vs. non-ethnic [6]. These dimensions have been mapped into four quadrants: (A) Local & Ethnic: enclosed immigrant businesses, mainly in the retail and service lines, trapped in crowded immigrant dominated areas, serving predominantly immigrants; (B) Non-Ethnic & Local: growing immigrant businesses in low order retail and service lines, serving the needs of the majority population and locating inside immigrant dominated areas; (C) Ethnic & Non-Local: businesses serving mainly immigrants but operating in a wider territory (the city, the region, the nation), such as wholesalers and manufacturers that distribute and produce ethnic goods or goods for immigrant firms; (D) Non-Ethnic & Non-Local: the ultimate breaking-out form, confined neither by customer ethnicity nor by locality, encompassing manufacturers and high-order retail, wholesale and services for the open market.
The bulk of UK immigrant businesses are in the A and B positions; relatively few are found in the C and D positions, where profit levels generally are higher. This model seems to capture important sides of the breaking-out process, but it is in need of some elaboration. "Ethnic" remains a rather unclear concept in that ethnic identity depends on ongoing social construction processes and in that some people find that they have two or more ethnic identities (e.g. a religious identity combined with the identity stemming from their country of origin, cf. Light's study of the business activities of different religious Iranian groups in Los Angeles [7]). Moreover, the concept combines individualistic (identity) and social dimensions (relationship and social action) as well as different market aspects such as the ethnic character of firms in the supply chain and the ethnic content of consumer goods (which also change in the course of time: pizzas are, for instance, no longer seen as Italian food but rather as non-ethnic food of Italian origin). The "Non-Local" category is also ambiguous in that it seems to refer both to the location of businesses and to markets, and in that the authors talk only about levels within the British national economy while ethnic businesses may well reach out to the international level. But, taken together, the model is a useful point of departure for studies of breaking-out processes. It points to the need to study the local vs. non-local dimension as well as business line structuring and the shaping of business relationships along ethnic lines. Moreover, there is an in-built firm-size dimension in the model, in that position A is predominantly small-scale while position D is predominantly medium- and large-scale. In the subsequent empirical sections on immigrant businesses in Denmark, we shall therefore look in particular at four dimensions: 1) business line agglomeration and dynamics, 2) firm-size structure and dynamics, 3) location of immigrant businesses, and 4) the shaping of immigrant business owners' relationships. EMPIRICAL EVIDENCE: COMBINING REGISTRY AND SURVEY DATA The results are based on two data sources: registry data and a survey among 279 immigrant business owners in the greater Copenhagen area. The registry data consists of micro-data on the socio-economic characteristics of the entire population living in Denmark in 1982, 1989, 1996 and 1997-2002 and of all privately owned firms in Denmark in 1992-1996 and 1997-2002. The database, which is maintained and available at Statistics Denmark, links information from various official statistical registers. The data includes information on individuals by socio-economic status, place of birth, place of work, age, gender, education, income, source of income, employment, periods of unemployment, tax payments, ownership of house or business, citizenship, date of immigration, date of emigration, date of death, marital status, number of children and a wealth of other variables. The same type of information can also be obtained on parents, spouses and in-laws, with the possibility of cross-generation information where parents or spouses live, or have lived, in Denmark.
The data also includes registry information about firms, for example, owners' place of birth, citizenship status, date of a firm's establishment, date of immigration (if immigrant), number of employees and their place of birth and citizenship status, turnover, exports, tax payments, business line, number of businesses, level of education, socio-economic status before starting up as self-employed, date of shutdown (when relevant), socio-economic status following a shutdown, and a great many other variables. In this study we have used a cross-national sample of all immigrants and their descendants (between the ages of 18-59 years) living in Denmark in 1982, 1989, 1996 or 2002, a 5 per cent control group of native Danes for the first three years and a 10 per cent control group of native Danes for the years 1997-2002. The registry analysis of firms is based on all immigrant owned ones (with owners in the age group 18-59 years) and a 5 per cent control group of firms owned by native Danes for the period 1992-1996. For calculation purposes, individual observations taken from the 5 per cent control group have been weighted in order to represent the true distribution across the total population. A questionnaire survey was carried out to supplement the registry based analysis, e.g. on intra- and inter-ethnic business owner networks, which cannot be studied satisfactorily by registry data. The questionnaire survey response rate was 40.9 per cent (279 respondents out of a sample of 682 business owners, interviewed between November 1998 and May 1999). The sample population was drawn from a total number of 2,329 business owners in 1998, who originated from five selected countries of origin and were living in Copenhagen and its surrounding suburbs (using individual-based ID numbers). The ID numbers were then combined with the firms' registration at Statistics Denmark. The five countries in question are (in alphabetical order): China (PRC, Taiwan and Vietnam), Iran, Pakistan, Turkey and the former Yugoslavia. The background for the selection of the five groups was as follows. Pakistanis, Turks and Ex-Yugoslavians were selected because they are the three major groups of immigrants from the late 1960s, when labour immigration to Denmark was still possible. An additional reason is the significant difference in the self-employment rate for the three groups, particularly between Pakistanis and Ex-Yugoslavians, who have almost the same immigration history and population size. The Chinese were selected because they were reputed for having a high self-employment rate combined with a strong specialisation in catering. Iranians were selected because they seemed to have a higher educational level than the other groups and because their self-employment rate increased significantly during the 1990s (from 10,6% in 1989 to 29,3% in 1996). All five groups encompass all immigrants from the selected countries whether or not they have become Danish citizens. Selection of the 682 respondents was based on a stratification methodology. Initially, 10 business owners were chosen from each combination of business line and national origin. A sample size of 20 represented Turkey, as many immigrants from Turkey are of Kurdish origin and have different motivations/backgrounds for setting up a business. The selection was furthermore stratified into twelve business lines, structured in such a way that business lines with a high share of immigrant business owners were exposed, e.g.
splitting service firms into cleaning ones and others. Finally, the survey population was limited geographically to the greater Copenhagen area. This was done for practical reasons as well as the fact that the explorative study had shown that the only significant difference between the capital and provincial regions of Denmark consists of a time lag in the development of the business structure. An analysis of the response/non-response levels for the different immigrant groups and their lines of business, based on background data available for both groups through registry data, only revealed insignificant variation, which could be explained as a random, non-systematic deviation. All tables and figures in the paper have been produced by the authors, and are based on either data provided by Statistics Denmark or survey data. ARE IMMIGRANT BUSINESSES IN DENMARK CAPTURED IN MARGINAL BUSINESS FIELDS? In the following passage we will take a glance, on the basis of empirical data, at which business fields the different groups of immigrants have placed their businesses in. We will demonstrate the differences in gross income between the different immigrant groups and the natives, a comparison of business lines between natives and the immigrant groups respectively, the impact of different variables on change in business line, a comparison of firm sizes, the distribution of turnover, the extent to which the businesses are placed in areas with a low or high level of migrants, what financial and social networks immigrant entrepreneurs participate in, and finally to what extent different determinants have importance for the possibility to change business line from typical immigrant business lines to mainstream business lines. Being an entrepreneur can be expected to involve long working hours, at least in the first years. Thus there must be advantages that compensate for this. The question arises: why do some immigrants become self-employed in a society like the Danish one, where most immigrants can receive welfare benefits without it having any impact on their possibility to uphold their residence permit? One should expect income aspirations and the desire to be independent of employers to be the key motives, combined with blocked opportunities in the general labour market. In that case it seems reasonable to expect the self-employed to earn at least as much as welfare beneficiaries and normally also more than wage earners. The following table shows that there is a gap between the incomes of the self-employed and wage earners. Table 1 shows the differences in gross income per capita for the five immigrant groups compared to native Danes. The three columns of the table represent three different socio-economic categories. The table demonstrates that the expectation of a relatively high income for the self-employed is met for the native Danes, but not for the five selected immigrant groups. On average, wage earners from the immigrant groups earn considerably less than the native Danes, mainly because a larger part of them are in lower-paid jobs. Self-employed immigrants also earn less than native wage earners. Taken together, the table demonstrates that it is not, on average, economically advantageous for immigrants to become self-employed. This impression is confirmed by a comparison of the Ex-Yugoslavian and Pakistani minorities, which have followed quite different income strategies since about 1980.
Pakistanis have increasingly shifted from being wage-earners to becoming self-employed, while only few Ex-Yugoslavians have followed this pattern. This has not resulted in higher incomes for the Pakistani group. On the contrary, Ex-Yugoslavians earn more than Pakistanis in all three socio-economic categories. Looking at the income difference between individuals of Iranian national background and Turks, their incomes across the different socio-economic groups are almost alike in spite of the big difference in educational merits, the two groups representing two ends of a continuum. Without a clear economic incentive to run a business in Denmark, we must expect immigrants to be very keen on their possibilities to break out of the business lines with low yield. COMPARISON OF IRANIAN, PAKISTANI, TURKISH, EX-YUGOSLAVIAN AND CHINESE BUSINESSES IN DENMARK In the following we will take a glance at the differences among the five Danish immigrant groups with respect to characteristics of their business lines, employment structure, location of businesses, networks, etc. The purpose is to find resemblances and differences between the groups that can lead to an understanding of which determinants are vital for breaking out, both for all groups and for specific groups. Table 2 shows the number of businesses in the selected business lines for the five immigrant groups. The table shows that in each group there is a tendency towards the concentration of firms in certain business lines. Furthermore, the table indicates a complete absence of Chinese business owners in two of the business lines: "Transportation" and "Cleaning". When looking at the business line "Service", a difference between Pakistanis and Ex-Yugoslavians appears. The latter are over-represented here. The Pakistani group, however, has the highest representation in the low-profit-margin business line of "Supermarkets, kiosks, etc.", which is known for its hard working environment and long working hours. This contributes to the explanation of the income differences between the two groups shown in the previous table. Table 4 shows the distribution of turnover by the country of origin of the business owners. The main result here is that there is significant variation between the five groups. While only 39,1% of the Pakistani owned firms and 48,9% of the firms owned by people from the former Yugoslavia have a turnover below 1.000.000 DKK, close to 70% of the Iranian, Turkish and Chinese owned firms fall into this category. The same can be seen at the opposite end of the scale. More than 30% of the Pakistanis and Ex-Yugoslavians have a turnover above 2.000.000 DKK. BUSINESS LINE AND FIRM SIZE Iranians have low levels of turnover as well as incomes. One important reason could be their much shorter duration of stay in Denmark. As argued in the second passage, the actual location of the firm has importance for break-out opportunities. In the following we will cast a glance at this in a Danish context. GEOGRAPHICAL LOCATION OF IMMIGRANT BUSINESSES Immigrant businesses are located throughout Denmark, with concentrations around the big cities. There are significant variations between the groups of immigrant businesses. Iranians follow the general picture, with businesses located throughout Denmark and a concentration around the big cities. For Pakistanis, the picture is quite different. Their businesses are almost entirely located in the greater Copenhagen area.
The Ex-Yugoslavians differ from the other groups by locating their businesses in the greater Copenhagen area and in the municipality of Helsingør in North Zealand. Turks follow the general picture with representation throughout the country, but with a tendency towards higher concentrations around the island of Zealand. Table 5 shows some differences for immigrant businesses located in Central Copenhagen, its suburbs and the province. According to Table 5, the share of businesses in traditional business lines is slightly lower in Central Copenhagen than in the suburbs and the province (19,9% compared to 25,8% and 31%). The province is characterised by a high share of businesses in some of the traditional business lines, particularly restaurants and cafeterias, barbeques, etc., but also by a low share in the retailing business lines. These differences may be understood as the result of a weak tendency towards breaking-out in the capital. Table 6 is based on the survey question: "How did you finance the purchase of your business?" Multiple answers are allowed for this question, which means that the column percentages do not add up to 100 per cent. 13 different categories could be chosen; here only the three most common categories are presented. The three selected categories are: 1. Own saving, 2. Loan in Bank and/or Financial Institutions and 3. Loan or Gift from Family or Friends. Table 6 shows that while "Own saving" is the most frequent source of financing the purchase of businesses owned by these five immigrant groups, "Loan in Bank and/or Financial Institutions" is the least used source (unlike for native Danish business owners). One of the reasons could be the unwillingness of established banks and financial institutions to take risks in the approval of loans. A possible obstacle for immigrant entrepreneurs seeking to break out is their limited use of banks and other financial institutions when it comes to financing their businesses. Family and other close relations who lend money to entrepreneurs might have specific wishes for which business line the entrepreneur should start up in, in order to secure the repayment of the loan. In practice this might mean that the most innovative entrepreneurs are confined to specific business lines. BUSINESS OWNERS' FINANCIAL NETWORKS Immigrants' lack of financial resources to form businesses can possibly be overcome through Rotating Credit Associations [8]. Using the Rotating Credit Association (RCA) model enables immigrants to achieve a large sum for investment through collective pooling. Geertz describes RCAs as follows: "A lump sum fund composed of fixed contributions from each member of the association is distributed, at fixed intervals and as a whole, to each member in turn. Whether the fund is in kind or cash; whether the order the members receive the fund is fixed by lot, by agreement, or by bidding; whether the time period over which the society runs is many years or a few weeks; whether the sums involved are minute or rather large; whether the members are few or many; and whether the association is composed of urban traders or rural peasants, of men or women, the general structure of the institution is the same" [9]. Such a model naturally presupposes the existence of trust between the members [10] and some kind of sanctions and back-up system in case one or more members fail to make their contributions. Such trust is often present within families, but the model is also used in a broader way, e.g.
by immigrants from the same locality or region in their home countries [11]. Our data indicates that awareness of the RCA system is high and increasing among immigrants. A different financial system grounded on different priorities can motivate or de-motivate certain types of financial transactions and investment, given that the scope for such activities is often much broader than that offered by the financial institutions of the host country. BUSINESS OWNERS' SOCIAL NETWORKS The social networks immigrant entrepreneurs take part in are of vital importance for their chances to break out. Previous research [12] has shown that access to mono-ethnic social networks is essential when it comes to the possibilities to borrow money for starting a firm. On the other hand, access to natives provides an opportunity to break out, for example when it comes to sharing information on the mainstream market. According to Light: "Strong ties and start-ups depend upon ethnic background, but information retrieval and break-out require class resources. A mixture of class and ethnic resources may be the best overall endowment, but, of course, neither start-up nor break-out is an overall process. Both happen at discrete moments in time -- so it matters which resources are available when." [13] Thus the network relationships of the business owners were a central aspect of the survey investigation. The questionnaire contained several questions covering different kinds of business owner network relationships. Results from the answers to two of these questions are presented in this section. This and similar questions were constructed in such a way that the answers were sub-divided into two parts. One part gives the names of individuals. The other part reveals the individual's relationship to the respondent. The respondent is allowed to refer to a maximum of five named individuals. The method used for generating the basis data for Figure 1 consisted of computing the average number of individuals within five categories of relationships. This average can theoretically range from 0 to 5. The maximum observed average is 0.83 (for the Iranian/Relatives cell in the cross-tab of Figure 1). The maximum scale value of all diagrams inside Figure 1 is 1.00. This scale maximum value has been chosen for graphical presentation purposes only. A regular pentagon-shaped area indicates a diverse network where no single category of relationships is more important than the others. If the point of one angle of the shape is sharply skewed in one direction, it means that the category pointed at plays a more significant role. (Figure 1 source: Statistics Denmark, own calculations; *** p < 0.001, ** p < 0.01, * p < 0.05.) An example of such a skewed distribution is illustrated in Figure 1 for Iranian business owners, where two categories of relationships ("Family/Relatives" and "Native Danes") proved significantly more important than the remaining three when these respondents make business decisions. Out of the five groups, Iranians have the most balanced pattern of relationships, approximating the pentagon shape. Even though all groups have high averages when it comes to including natives in their business networks - probably because all immigrant groups frequently obtain advice from native Danes (e.g. accountants) when they make decisions - it seems that especially the Ex-Yugoslavians have an advantage when it comes to business contact with Danes. This implies that it should be easier for the Ex-Yugoslavians to achieve break-out than for other immigrant groups.
BREAKING-OUT DETERMINANTS To investigate which determinants have had an impact on the breaking-out possibilities of firms that were in business in 1997 and were still active in 2002, Table 7 shows the results of a logistic regression analysis of business line changes based on quantitative longitudinal registry micro-data. The model is operationalised so that the dependent variable is a binary one: the business line of a business owner's firm in 1997 is compared with the business line of that same person's firm in 2002. If a change in business line has occurred over this period, the value of the dependent variable is set to 1. If no change has taken place, the value is set to 0. The estimates shown in Table 7 refer to the probability of a business line change having taken place, given 1) the national origin of the business owner, 2) the concentration level of immigrants and descendants in the business owner's residential area in 2002, 3) the increase or decrease in certain economic key figures from 1997 to 2002, 4) his/her citizenship status in 1997, his/her highest degree of education and, finally, whether his/her education was obtained in Denmark or abroad. Studying the figures in Table 7 more closely reveals that immigrants and descendants from Turkey are more likely to change from a marginalized business line to another than immigrants from other countries. Another look at Table 7 shows that immigrants and descendants who owned a firm that had had an increase in turnover are less likely to change business line than firm owners who had a decrease. The opposite is the case for business owners who had an increase in the sum of assets or in profit. An obtained Danish citizenship does not seem to have much importance for the possibility to change business line, but the degree of education does seem to matter. It can be seen that business owners with a medium-long education show a marked positive difference when it comes to changing from a typical immigrant business line to a mainstream business line. It might seem surprising that this is not the case for respondents with a Long Higher Education, but a fair explanation for this is the preference for employment instead of entrepreneurship in this group. All in all, the table shows that, among the available variables, one of the most important factors for break-out for ethnic businesses is the business owner's level of education. It must be kept in mind, though, that important factors - such as the social and financial networks the business owners interact in - were not among the available variables for the model shown. THE DENSITY OF INTER-ETHNIC RELATIONSHIPS RELATIVE TO INTRA-ETHNIC ONES Close intra-ethnic relationships are the key characteristic of immigrant businesses in deprived immigrant areas as well as in prospering business enclave areas. In such areas, immigrant business owners tend to give priority to transactions with co-ethnics, as customers, employees and business partners, because of intense information flows, easy communication and trust-building background institutions within these groups. This makes negotiating a business contract and joint action easier with a co-ethnic than with other people. Strong intra-ethnic relationships usually correlate with weak ties to other ethnic groups, including the majority population.
Upon arrival in their new countries, all immigrants suffer from an information and knowledge gap, but some groups catch up more quickly than others through search, interaction and learning processes. Running a business is one way of catching up, usually leading immigrants into action-learning processes through which they discover the secrets of their new society. However, even business owners may sometimes have only limited interaction with the majority population. This is quite understandable in local areas with a dominant ethnic group, but it also occurs in some immigrant businesses in majority dominated areas where the need for communication and interaction with the majority population is marginal. Other types of businesses, however, simply cannot be run without intense interaction with the majority population. The shaping of intra-ethnic and inter-ethnic relationships does not correlate unambiguously with breaking-out processes. Ethnic business enclaves illustrate that firms may start growing and prospering based on a dominance of intra-ethnic networks, but this seems to be the exception rather than the rule. In Denmark and other European countries, where immigrants gather in multi-ethnic inner city areas rather than mono-ethnic zones, relating to other ethnic groups, including the majority population, seems to be a safer road to breaking-out than the intra-ethnic strategy. The Chinese and Iranian business owners in Denmark illustrate this point. Chinese business owners seem characterised by limited communication and interaction with other ethnic groups. They have, like the Chinese in most other European countries [14], specialised strongly in the catering and restaurant businesses and hence are running businesses all over the country. Nevertheless, they remain quite isolated from the majority population and other immigrant groups. This may explain why this group, which in other parts of the world is known to be entrepreneurial, was not able to use its first-mover advantage in the ethnic restaurant field and the resultant high profits in the 1960s and 1970s to establish larger and more profitable businesses in other sectors, in spite of increased competitive pressure and a consequent income squeeze during the 1980s and 1990s. THE IMPORTANCE OF COMPETENCIES In order to run a business in a foreign country, an immigrant needs general competencies achieved through formal education, business competencies, which are both of a general nature and sector-specific, and cultural competencies to interact and negotiate with the majority population and its businesses and institutions. The impact of the three types of qualifications (general, business and cultural) on immigrant businesses and breaking-out processes is ambiguous. In business areas with low demands for general qualifications, an advanced educational level may be a hindrance to success rather than the opposite, in that owners in such cases may be running businesses in which they are not truly interested and committed because the businesses do not correspond to their educational level and aspirations. In knowledge intensive business areas a high educational level is a must. While formal education is only imperative for some businesses, business experience and talent, as well as cultural competencies, are important in all business areas.
The ambiguous impact of qualifications on immigrant business was revealed in the Danish data on Iranian business owners, who are by far the most well-educated of the five immigrant groups, usually educated in Denmark and speaking Danish fluently, but who at the same time are also a group who themselves perceive a wide gap between their current business competencies and the needed ones, and who had the lowest per capita income in 1996 (see Table 1). One possible explanation for this paradox is their limited business experience, but weak motivation may also be part of the explanatory pattern. In cases where they never aspired to become self-employed, choosing this path because they could not find employment that corresponded to their social capital, they may well "break out" by becoming employees rather than by attempting to establish new businesses outside the traditional immigrant business sectors. The overall pattern of qualifications amongst immigrant business owners in Denmark indicates a limited potential for breaking-out for the huge majority of the present business owners, who suffer from low qualification levels. They work hard, but they do not have the potential for breaking-out in a society in which qualification standards increase rapidly. The exception to this pattern is the well-educated first generation immigrants and the growing number of well-educated second generation immigrants who are searching for a role in economic life as business owners. A major process in which businesses change hands from parents to children started during the 1990s, particularly for business owners of Turkish and Pakistani descent. This is likely to lead to the setting up of more businesses outside the traditional immigrant business sectors. FINANCIAL RESOURCES It seems reasonable to assume that there is a close relationship between financial resources and breaking-out potential. One of the main reasons why immigrant businesses tend to cluster in certain areas and business lines is low entrance barriers. Immigrant business owners seeking to break out should therefore normally expect higher entrance barriers in other fields, which usually implies the need for more financial resources. An additional reason why some Iranian first generation immigrants and Turkish and Pakistani second generation immigrants may succeed in breaking out is the financial resources they have access to through their families. Many Iranian immigrants come from wealthy families, and second generation Turkish and Pakistani immigrants may benefit from the resources their parents have succeeded in accumulating during their careers as business owners. CROSS BORDER BUSINESS RELATIONS Cross border business relations are often an inherent characteristic of immigrant businesses. Immigrant business owners frequently run business activities in their countries of origin, but they may also link up with co-ethnic immigrants in other countries, or their businesses may be sufficiently strong to expand to other countries as with any ordinary business. The latter option is still rare in countries like Denmark with a short history of immigrant businesses, but it is increasingly found in countries with a richer experience such as the USA and Britain. The dominant form of cross border relations is small-scale, person-driven bi-country business activity, but some larger units are also found.
For some ethnic commodities there is room for immigrant owned businesses of a medium or large size as key players, such as importers and wholesalers, in ethnic commodity chains in a number of countries. Moreover, the growing number of immigrant business firms opens up business options for specialised service institutions such as accounting and financial ones, even at the international level. CONCLUSIONS The focus of this paper has been on breaking-out processes from immigrant dominated business lines, using the Danish situation as its empirical basis. Most immigrant businesses in Denmark and elsewhere are small family owned firms, of which the huge majority do not grow, restructure and relocate. But some do, and the paper aims at improving our understanding of this process, taking its point of departure in the model of immigrant business breaking-out by Ram and Jones. The empirical section was based on longitudinal registry data and a survey among immigrant business owners from 1999, both of which were structured in a way that allowed comparison between five immigrant business owner groups, coming from Pakistan, the former Yugoslavia, Turkey, Iran and China. The empirical data demonstrated that for all the compared immigrant groups the average gross income is lower for the self-employed than for wage earners, and that the bulk of immigrant firms are, and remain, small family units within traditional immigrant business lines such as small retail shops, restaurants and fast food outlets. Only about 12 per cent of the firms have more than 5 employees (of which about 80 per cent were from the Danish majority). It also showed that all the groups took advice from native Danes when taking important business decisions, a figure that must be treated with care as all businesses in Denmark are obliged to undergo a yearly audit by an authorised accountant. A statistical regression analysis of the position of firms within business lines in the period 1997-2002 indicated some breaking-out tendencies: an increasing number of firms are being established outside the traditional immigrant business lines. It also showed that the most important factor among the available variables was the owners' educational level. Of the five groups, Iranian business owners seem to have the highest potential for breaking-out, as their level of education is far above that of the other groups. Moreover, they have more pluralistic networks than the other groups, with close contact to other ethnic groups, and relatively frequent investments in their country of origin. An explanation of why Iranians nevertheless show a lower degree of change in business line than the other groups is possibly their preference for employment corresponding to their level of education. Also, the growing number of second generation business owners, mainly of Pakistani and Turkish descent, seems to have a potential for breaking-out, as they generally are well-educated and often have funds for investment from their parents. It should be stressed, however, that breaking-out is still a rare phenomenon in Denmark. This empirical evidence and the lessons from some other countries such as the USA and Britain [15] led us to suggest four breaking-out determinants: (1) the degree of density in inter-ethnic relationships relative to intra-ethnic ones, (2) the level and composition of competencies (general, business and cultural), (3) financial resources, and (4) cross border business relations.
In other words, immigrant business owners with close contacts to other ethnic groups (including the majority population), with an advanced and broad competence profile, with financial resources (often derived from family sources), and with cross border business relations are the ones who are most likely to develop firms that grow, restructure and relocate, i.e. breaking-out firms.
8,974
2007-06-30T00:00:00.000
[ "Economics" ]
Construction Optimal Combination Test Suite Based on Ethnic Group Evolution Algorithm The optimal test case suite construction problem is defined thus: given a set of test requirements and a test suite that satisfies all test requirements, find a subset of the test suite containing a minimum number of test cases that still satisfies all test requirements. Existing methods for solving the test case suite generation problem do not guarantee that the obtained test suite is optimal. In this study, we propose a global optimization and generation method to construct optimal combinatorial testing data. Firstly, an encoding mechanism is used to map the combinatorial testing problem domain to a binary coding space. After that, an improved ethnic group evolution algorithm is used to search the binary coding space in order to find the optimal code schema. Finally, a decoding mechanism is used to read out the composition information of the combinatorial testing data from the optimal code schema and to construct the optimal test case suite according to it. The simulation results show that this method is simple and effective, and that it produces fewer test data with less time consumption. INTRODUCTION Combinatorial software testing is a method for designing a test suite for the Software Under Test (SUT), which generates test cases based on a certain combinatorial covering criterion. According to the covering strength, combinatorial software testing methods can be classified into single-factor covering methods, pairwise combinatorial covering methods and multiple combinatorial covering methods. All of the above test methods try to use as few test cases as possible to cover as many combinations as possible. Since the costs of executing test cases and managing test suites may often be quite significant, a test suite subset that can still satisfy all requirements is desirable. Such a subset is known as a representative set. Assuming that the cost of executing and managing each test case is the same, a representative set with a minimum number of test cases is desirable and is called an optimal test case suite. As mentioned in Harrold et al. (1993), the optimal test case suite generation problem is NP-complete, and as mentioned in Yan and Zhang (2009) it is equivalent to solving the set-covering problem. Cohen et al. (2003) give the definitions of the Covering Array (CA) and the Mixed level Covering Array (MCA). The difference between a CA and an MCA is that each factor of a CA has the same value range, while in an MCA the ranges can differ. So, a CA can be looked upon as a special case of an MCA, and the processing method for the two is the same.
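To make the covering criterion concrete, here is a minimal Python sketch (our own illustration rather than code from the paper): it enumerates all strength-t combinations of a CA with k factors of v levels each and checks whether a candidate suite covers them all. The helper names all_t_way_combinations and covers_all are hypothetical and are introduced only for this illustration.

from itertools import combinations, product

def all_t_way_combinations(k, v, t):
    # Every strength-t combination: t factor positions plus one concrete
    # value for each chosen factor.
    combos = set()
    for positions in combinations(range(k), t):
        for values in product(range(v), repeat=t):
            combos.add((positions, values))
    return combos

def covers_all(test_suite, k, v, t):
    # True if the suite (a list of k-tuples of factor values) covers every
    # strength-t combination at least once.
    covered = set()
    for case in test_suite:
        for positions in combinations(range(k), t):
            covered.add((positions, tuple(case[p] for p in positions)))
    return covered == all_t_way_combinations(k, v, t)

# Example: the full set of 3^4 = 81 test cases of the Table 1 CA trivially
# covers all strength-2 combinations.
full_suite = list(product(range(3), repeat=4))
print(covers_all(full_suite, k=4, v=3, t=2))  # True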
The conventional construction mechanism makes use of mathematical methods, such as orthogonal arrays (Yan and Zhang, 2008), and of heuristic algorithms to generate an approximate test suite. Since heuristic algorithms can generate less test data than mathematical construction methods, many researchers have focused on using heuristic algorithms to generate combinatorial test suites. In such studies, the one-test-at-a-time mechanism has been widely applied to help a heuristic algorithm generate test data. Based on this mechanism, in each computation cycle a heuristic algorithm selects a best test case t_i, which covers the most strength-t combinations in the uncovered Combination Set (CS), and makes it join the Test Suite (TS). Then the strength-t combinations covered by t_i are deleted from CS. This process repeats until all strength-t combinations are covered. Two studies (Shiba et al., 2004; Zha et al., 2010) provide heuristic methods based on the one-test-at-a-time mechanism to construct combinatorial test data. The main steps of this mechanism are as follows (a minimal runnable sketch of this greedy loop is given at the end of this section): Algorithm 1: The one-test-at-a-time mechanism: 01: Initialize test suite TS = Ø 02: Initialize combination set CS according to the CA 03: While CS ≠ Ø 04: Select the test case t_i that covers the most strength-t combinations in CS 05: TS = TS ∪ {t_i}; delete the combinations covered by t_i from CS 06: End While Because the one-test-at-a-time mechanism can only generate one test case per computation cycle and spends a lot of calculation on matrix transformation operations, it is likely to take a long time to generate the whole representative set. Moreover, even for a small-scale CA, the one-test-at-a-time mechanism does not have the ability to dynamically adjust the composition of TS during the whole construction process. Therefore, these heuristic methods can generally only generate an approximate representative set, and it is difficult for them to generate an optimal combinatorial test suite. For example, Table 1 shows a CA with four factors and Table 2 shows an optimal pairwise combinatorial covering test suite for it. If we use the one-test-at-a-time method to generate the combinatorial test data, it is likely to select a0b0c0d0, a1b1c1d1 and a2b2c2d2 to join TS one by one, because each of them covers 6 new strength-2 combinations, which is the maximum possible and thus satisfies the greedy selection criterion. Under this condition, no matter which of the remaining 78 test cases is chosen to join TS in the next computation cycle, at least one of its combinations repeats one of the 18 strength-2 combinations already covered by the above three test cases. The scale of the final combinatorial test suite will then be more than 9, and there is no way to generate an optimal combinatorial test suite.
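The following short Python sketch (our own hedged illustration, not the authors' code) implements the greedy one-test-at-a-time loop of Algorithm 1 for a CA with k factors of v levels each; greedy_suite and t_way_sets are hypothetical helper names introduced here.

from itertools import combinations, product

def t_way_sets(case, t):
    # All strength-t (positions, values) combinations covered by one test case.
    return {(pos, tuple(case[p] for p in pos))
            for pos in combinations(range(len(case)), t)}

def greedy_suite(k, v, t):
    # One-test-at-a-time: repeatedly add the candidate that covers the most
    # still-uncovered strength-t combinations until none remain.
    candidates = list(product(range(v), repeat=k))
    uncovered = set().union(*(t_way_sets(c, t) for c in candidates))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda c: len(t_way_sets(c, t) & uncovered))
        suite.append(best)
        uncovered -= t_way_sets(best, t)
    return suite

# For the CA of Table 1 (k = 4 factors, v = 3 levels, strength t = 2) this
# greedy loop typically returns a suite larger than the optimal size of 9.
print(len(greedy_suite(4, 3, 2)))

Because it enumerates all v^k candidates, this sketch is only practical for small CAs; the cited heuristic methods use more refined candidate generation, so this is illustrative rather than a faithful reimplementation.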
THE GLOBAL OPTIMIZATION AND GENERATION MECHANISM FOR TEST DATA In this study, we translate this problem into a code optimization problem and use an Evolution Algorithm (EA) to optimize the structure of the test data. Firstly, we map the combinatorial test problem domain to a binary coding space by an encoding process. Then an EA is used to search the coding space in order to find the optimal individual. After that, the binary code of the optimal individual is decoded to generate the combinatorial test suite by a decoding process. The core steps of this method are as follows: Algorithm 2: The EA based combinatorial test data global optimization and generation mechanism: 01: Map the combinatorial test problem domain to a binary code space based on the encoding process 02: Initialize the population 03: While (the termination criteria are not reached) 04: Use an EA to search the coding space 05: End While 06: Decode the code of the optimal individual in the population and generate a combinatorial test suite As we can see, the encoding process, the decoding process and the global searching process are the three main processes in this mechanism. In this section we discuss the encoding mechanism, the decoding mechanism and the fitness function. Encoding mechanism: The aim of encoding is to set up the mapping relationship between the combinatorial test problem domain and the binary coding. Firstly, we give some basic definitions that are used throughout. Definition 1: A population of the EA consists of an n-tuple of strings A_i (i = 0, 1, ..., n-1) of length L, where the genes Γ_j ∈ {0, 1}, j = 0, 1, ..., L-1. Definition 3: Assign a serial number to all test cases in Φ (the full set of candidate test cases of the CA) in ascending order, with the serial numbers running from 0 to |Φ|-1. Then we can reference a test case in Φ by its serial number, that is t_j ∈ Φ, j = 0, 1, ..., |Φ|-1. If we set L = |Φ|, the test case t_j ∈ Φ corresponds to the gene Γ_j in the binary code via the serial number j. Moreover, we adopt the rule that if Γ_j = 1 then t_j is chosen to join the test suite. According to these definitions, the gene structure of A_i can be translated into a subset of test cases. For example, there are 81 test cases in the CA of Table 1. The test cases of this CA are a0b0c0d0, a0b0c0d1, ... and a2b2c2d2, and their serial numbers are 0, 1, ... and 80. If the values of genes 2, 16, 41 and 77 in an individual are 1, then the corresponding test cases are t_2, t_16, t_41 and t_77, and their details are a0b0c0d2, a0b1c2d1, a1b1c1d2 and a2b2c1d2. Decoding mechanism: The role of the decoding mechanism is to parse the coding structure and construct the combinatorial test data according to it. In order to facilitate the processing, we use a base-v number to represent the details of a test case. Definition 4: For a covering array CA(N; t, k, v), a test case can be expressed as a base-v number in which each digit corresponds to a factor of the CA; the value of the j-th factor is k_j and the range of each factor is from 0 to v-1. The function translating the base-v number of test case t_j into its serial number is obtained by reading the factor values as the digits of a base-v number; for the CA in Table 1, we set the order of the digits, from the top digit to the low digit, to correspond to the factors from A to D.
Then the corresponding relation between the details of each test case, written as a base-3 number, and its serial number, written in base ten, is shown in Table 3. The translating function is serial(t_j) = Σ_{m=0}^{k-1} z_{j,m} · v^m, where z_{j,m} is the value of the factor occupying the m-th lowest digit of t_j (so, for the CA of Table 1, a0b0c0d2 has serial number 2 and a2b2c2d2 has serial number 80). So, we can make use of the inverse process of formula (1) to get the details of each test case from its serial number. The decoding process for a CA with k factors is: Algorithm 3: Decoding mechanism: 01: Get the coding information of A_i 02: For j = 0 to L-1 03: If (Γ_j == 1) // decode and get the details of t_j 04: dnum = j, m = 0 05: While (dnum > 0) 06: x = dnum % v; dnum /= v 07: z_{j,m} = x // x is the value of the m-th lowest factor in t_j; factors not reached by the loop keep the value 0 08: m++ 09: End While 10: End If 11: End For Fitness function: Generally, we can evaluate the quality of A_i from two aspects during the iterative search process: the first is the covering degree for the strength-t combinations and the second is the scale of the combinatorial test suite. In this study, the fitness of A_i is determined by a combination of ω and θ, where ω = the number of covered combinations in CS and θ = the scale of the combinatorial test suite. Obviously, 0 < ω ≤ v^t · C(k, t) and 0 < θ ≤ |Φ|, and µ > 1 is a control parameter which is set to make sure that the fitness remains positive (an illustrative sketch of computing ω and θ appears below). CONSTRUCTING TEST SUITE BASED ON EGEA/PAD From Algorithm 2 we can see that the quality of the combinatorial test data depends on the optimization performance of the EA. In this section, we propose an improved Ethnic Group Evolution Algorithm (EGEA) to search the binary code space and find high quality solutions. As we know, the population structure of an EA has a heavy influence on its searching efficiency. So, we propose a novel population searching mechanism, EGEA, which makes use of a clustering process to analyze the population structure and builds up an ordered ethnic group organization to control the population searching process (Hao et al., 2010). Experiments have shown that this is helpful in avoiding the premature convergence phenomenon while greatly increasing the convergence speed of the population. In the ethnic group clustering process, individuals are assigned to ethnic groups so that they have a high degree of similarity within an ethnic group and so that the ethnic groups are distinct from one another. The clustering model consists of two parts: a technique for calculating the distance between the binary codes of two individuals and a grouping technique to minimize the distance between the individuals of each ethnic group. The objective here, as in any clustering method, is to minimize the distance between individuals in each ethnic group while maximizing the distance between ethnic groups. In order to design a suitable clustering method, we need to analyze the bound characteristics of the set covering problem. Basic inequalities on CAN(t, k, v), namely symbol-fusing, row-deleting and the lower and upper bounds holding for any v ≥ 2, t ≥ 2 (with k ≤ 2^n, where n is the smallest integer such that v ≤ 2^n), can be found in Chateauneuf and Kreher (2002) and Martirosyan and Tran Van (2004). From these inequalities we can see that the genes whose value is 1 occupy only a small proportion of the optimal code schema. In order to emphasize the importance of these genes, we propose a novel hierarchical ethnic group clustering method based on the Positive Attribute Distance (PAD).
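As a concrete illustration of the decoding mechanism and of how ω and θ can be evaluated, the following Python sketch (our own hedged illustration, not the authors' implementation) decodes a binary individual into its test cases and counts the covered strength-t combinations. The helper names decode_case, decode_individual and fitness are hypothetical, and the combining rule mu * omega - theta is only an assumed form, since the paper's exact fitness formula is not reproduced above.

from itertools import combinations

def decode_case(serial, k, v):
    # Inverse of the base-v translating function: serial number -> k factor
    # values. Digits are read from the lowest factor upward; factors not
    # reached by the loop keep the value 0.
    values = [0] * k
    m = 0
    while serial > 0:
        values[m] = serial % v
        serial //= v
        m += 1
    return tuple(reversed(values))  # factor A first, as written in Table 1

def decode_individual(genes, k, v):
    # Genes with value 1 select the test cases that join the suite.
    return [decode_case(j, k, v) for j, g in enumerate(genes) if g == 1]

def fitness(genes, k, v, t, mu=2.0):
    # omega = number of covered strength-t combinations, theta = suite size.
    # The combination mu * omega - theta is an assumed, illustrative form only.
    suite = decode_individual(genes, k, v)
    covered = {(pos, tuple(case[p] for p in pos))
               for case in suite for pos in combinations(range(k), t)}
    omega, theta = len(covered), len(suite)
    return mu * omega - theta

# Example: genes selecting test cases 2, 16, 41 and 77 of the Table 1 CA,
# i.e. a0b0c0d2, a0b1c2d1, a1b1c1d2 and a2b2c1d2.
genes = [0] * 81
for j in (2, 16, 41, 77):
    genes[j] = 1
print(decode_individual(genes, k=4, v=3))
print(fitness(genes, k=4, v=3, t=2))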
Calculating the similarity of individuals by PAD: In EGEA, the binary code distance between two individuals during the ethnic group clustering process is calculated with the Weighted Hamming Distance (WHD). The WHD between A_i and A_j is a gene-by-gene count of disagreements in which each gene k carries a weight η_{wk}. From the WHD a similarity index between the two individuals is obtained; the result is a value between 0 and 1, where 1 designates absolute similarity between the two binary codes of the individuals and 0 designates absolute diversity.

Gelbard and Spiegler (2000) and Gelbard et al. (2007) propose using PAD to calculate the distance between objects. As with the Hamming distance, PAD yields a clustering method by calculating the degree of similarity between objects whose features are represented in binary form. Unlike the Hamming distance, however, PAD follows the lesson of Hempel's Raven paradox regarding the use of positive predicates. In PAD, the similarity between two binary sequences is computed from:

ψ_i = the number of 1s in the i-th binary sequence
ψ_j = the number of 1s in the j-th binary sequence
ψ_ij = the number of 1s common to both the i-th and the j-th binary sequences

The definition of PAD is simpler than that of WHD, it is easy to calculate, and its result lies in the interval between 0 and 1, where 1 expresses absolute similarity and 0 expresses absolute diversity. These properties allow PAD to replace WHD easily, so we use PAD as the similarity index between individuals for ethnic group clustering.

Ethnic group hierarchical clustering based on PAD: In this study we propose a new ethnic group hierarchical clustering mechanism based on PAD, outlined in Algorithm 4. Individuals are first grouped into clusters; each cluster O_i is then transformed into an ethnic group (O_i → E_i), with the center individual of O_i becoming the center individual of E_i. The ethnic groups are then queued according to the race exponent of their center individuals; if the sequence number of E_i in this queue is j, then the weight of E_i is assigned accordingly.

The parameter θ ∈ (0, 1) is introduced to control the clustering granularity. Because it directly affects the structure of the ethnic groups, we want θ to adjust its value dynamically according to the status of the population, so an adaptive mechanism updates θ as a function of γ, the diversity parameter of the population, and a constant ε ∈ (0, 1).

SIMULATION RESULTS

In this section, four series comprising 19 CA problems are selected for the simulation tests. Based on Algorithm 2, we use EGEA/PAD and CGA to search the binary code space. Both programs are implemented in VC++ 6.0 and run on a 2.1 GHz AMD Phenom PC with 1 GB of memory under Windows 2003.

The experimental statistics of EGEA/PAD and CGA are obtained over 20 independent trials. In the first round of experiments we set t = 2, v = 2 and increase k gradually from 3 to 9; the statistics are shown in Table 4. In the second round we set t = 2, v = 3 and increase k from 3 to 7; the statistics are shown in Table 5. In the third round we set t = 3, v = 2 and increase k from 4 to 8; the statistics are shown in Table 6. In the final round we set t = 3, v = 3 and increase k from 4 to 7; the statistics are shown in Table 7. The best known results for these problems can be found on Colbourn's web site (Colbourn, 2009).

Meanwhile, the comparison between EGEA/PAD and CTS on the minimum scale of test data when t = 3 is shown in Fig. 1.
As can be seen, EGEA/PAD significantly outperforms the other two methods for 10 of the 12 CA problems when t = 2, and it also obtains the best results for 7 of the 9 problems when t = 3. For t = 2, EGEA/PAD and CGA can find the best result when v is up to 3 and k is less than 7 and 6, respectively. When t = 3, EGEA/PAD and CGA can find the best result when v = 2 and k is less than 8 and 6, but only EGEA/PAD can find the best result when v = 3 and k is less than 6. The results show that an EA-based combinatorial test data global optimization and generation mechanism is valid: it finds most of the optimal results among the 19 CA problems. The performance of the searching algorithm has a heavy influence on the quality of the solution, and the comparison of statistical results between EGEA/PAD and CGA shows that EGEA/PAD improves the quality of the solution greatly. Hartman uses the Combinatorial Test Services (CTS) package to solve CA problems (Alan and Leonid, 2004), while Williams translates the construction of an optimal combinatorial test suite into an integer programming problem and lists the minimum scale of test data (Williams and Probert, 2002). The comparison with these two methods is shown in Table 8, where the run times are also compared.

CONCLUSION

In this study, we propose a combinatorial test data global optimization and generation method, which includes the encoding and decoding mechanisms and an improved ethnic group evolution algorithm, EGEA/PAD. The experimental results show that this mechanism performs well on most of the test problems. However, the problem scale of a CA grows exponentially, which heavily restrains the searching ability of this method. In future work, we will focus on designing a more succinct coding mechanism, or a coding compression mechanism, so that the method can solve larger and more complex CA problems.

Fig. 1: Comparison between EGEA/PAD and CTS on the minimum scale of test data with t = 3
Table 1: A system with four parameters
Table 3: The corresponding relation between the details of each test case and its serial number
Table 4: The statistic results of EGEA/PAD and CGA on the scale of test data when t = 2 and v = 2
Table 5: The statistic results of EGEA/PAD and CGA on the scale of test data when t = 2 and v = 3
Table 6: The statistic results of EGEA/PAD and CGA on the scale of test data when t = 3 and v = 2
Table 7: The statistic results of EGEA/PAD and CGA on the scale of test data when t = 3 and v = 3
Table 8: Comparison between EGEA/PAD and another two methods on the minimum scale of test data when t = 2
4,419.8
2013-06-10T00:00:00.000
[ "Computer Science" ]
Surprising results on phylogenetic tree building methods based on molecular sequences

Background: We analyze phylogenetic tree building methods from molecular sequences (PTMS). These are methods which base their construction solely on sequences, coding DNA or amino acids.

Results: Our first result is a statistically significant evaluation of 176 PTMSs done by comparing trees derived from 193138 orthologous groups of proteins using a new measure of quality between trees. This new measure, called the Intra measure, is very consistent between different groups of species and strong in the sense that it separates the methods with high confidence. The second result is the comparison of the trees against trees derived from accepted taxonomies, the Taxon measure. We consider the NCBI taxonomic classification and its derived topologies as the most accepted biological consensus on phylogenies, which are also available in electronic form. The correlation between the two measures is remarkably high, which supports both measures simultaneously.

Conclusions: The big surprise of the evaluation is that the maximum likelihood methods do not score well; minimal evolution distance methods over MSA-induced alignments score consistently better. This comparison also allows us to rank different components of the tree building methods, like MSAs, substitution matrices, ML tree builders, distance methods, etc. It is also clear that there is a difference between Metazoa and the rest, which points to evolution leaving different molecular traces. We also think that these measures of quality of trees will motivate the design of new PTMSs, as it is now easier to evaluate them with certainty.

These techniques are useful, but limited. Specifically, simulations are excellent for discovering errors and for finding the variability that we may expect from the methods. Yet simulations usually rely on a model of evolution (e.g. Markovian evolution). It is then expected that a method which uses the same model will perform best. Measures of quality include bootstrapping, branch support confidence and indices on trees (like the least squares error in distance trees or the likelihood in maximum likelihood (ML) trees). These measures also rely on some statistical model which is essentially an approximation of reality. Bootstrapping values have suffered from over-confidence and/or misinterpretation and are sensitive to model violations [13][14][15][16]. Furthermore, these techniques are directed towards assessing a particular tree rather than assessing the methods. Small scale comparisons are valuable but usually lack the sample size to make the results statistically strong. We consider any evidence which is in numbers less than 100 to be "anecdotal". Any study where a subset of cases is selected is a candidate to suffer from the bias arising from an author trying to show the best examples for his/her method. Finally, intuitions are very valuable, but cannot stand scientific scrutiny. We refer as intuitions to decisions which are not based on strict optimality criteria, e.g. character weights in traditional parsimony methods; using global or local alignments; various methods for MSA computation; various measures of distances, etc. The main problem is that there is no "gold standard" against which methods can be evaluated. Hopefully this paper will provide two such standards. Computing phylogenetic trees consumes millions of hours in computers around the world.
Because some of these computations are so expensive and not reliable, biologists are tempted to use faster, lower-quality methods. This evaluation (which itself consumed hundreds of thousands of hours) will help bioinformaticians extract the most from their computations. In particular, as we show, some of the best PTMSs are remarkably fast to compute. We measure the quality of the PTMSs in two ways: by their average difference on trees which have followed the same evolution and by their average distance to taxonomic trees. This allows us to find the best methods and, by averaging in different ways, the best components of the methods. There is no single method that is best in all circumstances. Some of the classes of species show a preference for a particular method. This should not come as a surprise: different organisms may leave different molecular imprints of their evolution.

Results

We now introduce the two measures on PTMSs.

The Intra measure

For a given PTMS and several orthologous groups (OGs) we can construct a tree for every OG. The trees should all follow the same evolutionary history, hence the trees should all be compatible (Figure 1, shaded yellow). The average distance between trees built from different OGs is thus a measure of quality of the method (the smaller the distance, the better the method). We call this measure the Intra measure. Since the PTMS does not get any information about the species of the input sequences, the only way for it to produce a smaller distance between trees is by extracting information from the sequences. In this sense, the best algorithm is the algorithm which extracts the most relevant information from the sequences to derive the phylogeny, which is exactly what we want. In mathematical terms, the Intra measure of a PTMS M is the expected value

Intra(M) = E[ d(M(g_i), M(g_j)) ],

where g_i and g_j are two different orthologous groups. The distance d(., .) is the Robinson-Foulds distance [17] between two trees built with the same PTMS over different OGs. It is computed only over the species appearing in both OGs (Figure 2). We estimate this expected value from all the available pairs of OGs. The measure will be incorrect for the cases of lateral gene transfer (LGT), where sequences do not follow the same evolution. LGT events will be few, and since all methods will be affected we do not expect a bias from them.

The Taxon measure

This measures how far the computed tree is from the true taxonomic tree. A smaller distance, averaged over a large number of OGs, means a better method. For a given PTMS and several orthologous groups (OGs) we compute the distance between the tree built on each OG and the true taxonomic tree (or its approximation from NCBI, Figure 1, shaded blue). We call this average distance the Taxon measure. The trees derived from the taxonomy represent the consensus and summary of many scientific papers, databases and experts and could be described as the "state of the art". Errors in the taxonomy should affect all methods equally and will act like random noise. (Biases derived from the use of these methods for building the taxonomy are discussed in the Caveats (iv) section.) In mathematical terms, the Taxon measure of a PTMS M is the expected value

Taxon(M) = E[ d(M(g), T_g) ],

where d(., .) is the RF distance between two trees, g is an orthologous group, M(g) is the tree produced by M applied to the sequences in g and T_g is the taxonomic tree for the species in the group g.
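To make the two measures concrete, here is a minimal Python sketch; it is our illustration, not the Darwin code used in the study. Trees are represented simply as sets of leaf bipartitions (one frozenset per internal edge) together with their leaf sets. The Robinson-Foulds distance restricted to the shared species is taken here as half the size of the symmetric difference of the induced split sets, so a single wrong quartet scores 1; the multifurcation correction mentioned in the Methods is not implemented, and the two measures are plain averages.

```python
from itertools import combinations

def induced_splits(splits, common):
    """Non-trivial bipartitions induced on the leaf subset 'common'.
    Each split is given as a frozenset holding one side of an internal edge."""
    ref = min(common)                      # fix one leaf to canonicalise sides
    induced = set()
    for side in splits:
        side = frozenset(side) & common
        other = common - side
        if len(side) >= 2 and len(other) >= 2:
            induced.add(other if ref in side else side)
    return induced

def rf_distance(tree_a, tree_b):
    """RF distance between two trees, computed only over the shared species."""
    splits_a, leaves_a = tree_a
    splits_b, leaves_b = tree_b
    common = frozenset(leaves_a) & frozenset(leaves_b)
    a = induced_splits(splits_a, common)
    b = induced_splits(splits_b, common)
    return len(a ^ b) / 2                  # at most n - 3 for fully resolved trees

def intra_measure(trees):
    """Average RF distance over all pairs of per-OG trees from one method,
    skipping pairs that share fewer than 4 species."""
    dists = [rf_distance(t1, t2) for t1, t2 in combinations(trees, 2)
             if len(frozenset(t1[1]) & frozenset(t2[1])) >= 4]
    return sum(dists) / len(dists)

def taxon_measure(trees, taxonomy):
    """Average RF distance between each per-OG tree and the taxonomic tree."""
    return sum(rf_distance(t, taxonomy) for t in trees) / len(trees)

if __name__ == "__main__":
    # Two quartets on {A,B,C,D}: ((A,B),(C,D)) vs ((A,C),(B,D)) differ by 1.
    t1 = ({frozenset({"A", "B"})}, {"A", "B", "C", "D"})
    t2 = ({frozenset({"A", "C"})}, {"A", "B", "C", "D"})
    print(rf_distance(t1, t2))             # 1.0
```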
We estimate the expected value by the average over all the orthologous groups available to us. Notice that while the taxonomic tree is a single tree, we will be sampling tens of thousands of different subsets of this single tree (and many hundreds of totally independent subsets). See Methods, Table 1, for full results.

Figure 2: An example of the evolution of several species recovered by two proteins, and the basis for the Intra measure (the trees are reduced to their common leaves before the Robinson-Foulds distance is taken).

In [18,19] a similar idea is used, that of comparing the trees against a small, indisputable topology. To achieve statistical significance we consider complete genomes and apply the methods to all the OGs possible (with at least 4 species) according to the OMA database [20,21]. This gives us very large sample sizes and an unbiased sample, as almost nothing is excluded (see Methods for details). To describe the PTMSs unambiguously we need to use a descriptive name for each one. The convention that we use describes the steps which are used to build the tree. For example, ClustalW InducDist BioNJ stands for the procedure which starts by making a multiple sequence alignment (MSA) using ClustalW, then derives the distances from the pairwise alignments induced by the MSA and finally builds a tree from these distances using the BioNJ algorithm. A method is then a sequence of components which starts from the molecular sequences and ends with a phylogenetic tree. The components of the tree building methods used here are listed in Table 2. Most of the possible compatible combinations were tried. Notice that the total number of methods can grow very quickly; for this study 176 PTMSs were tested. Our main results are: the introduction of the Intra and Taxon measures to evaluate PTMSs; the excellent correlation between them; the top rated PTMSs for Metazoa and non-Metazoa; and the results on best components, i.e. best MSA methods, best tree building methods and best pairwise alignment methods.

Figure 3 shows a plot of the PTMSs on their Intra vs Taxon measures. It can be seen that the two measures are extremely well correlated. Table 3 shows the same correlation in numerical form and for each species class. Here a "class" means a convenient group of related species, explained in more detail in the Methods section. It should be noted that, for a given class or set of classes, the numerical values of the Intra measure for all the PTMSs are comparable (lower values mean better methods). So are the values of the Taxon measure. But, for a given class, the Intra and Taxon measures are not numerically comparable, as they are taken over different sets: in one case over all the pairs of OGs which intersect on 4 or more leaves, in the second case over all the OGs. This is why we compare the orderings (usually by computing Pearson's correlation coefficient) of the PTMSs by each measure, but not the corresponding numerical values. Tables 4 and 5 show the best (Taxon) scoring PTMSs. The first table shows the top 3 methods for Metazoa and for non-Metazoa. The results group well into two sets: Metazoa favors codon-based methods whereas the rest favor induced distance methods. In terms of sample sizes this division is quite even; the number of OGs is in a 1:2 relation, but since Metazoa has larger groups and longer sequences, the total amino acids involved are close to 1:1 (Table 6).
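The agreement between the two measures (Figure 3, Table 3), and between the classes discussed next (Table 7), is quantified with Pearson's correlation coefficient over per-method scores. A minimal sketch, with made-up numbers purely to illustrate the computation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

if __name__ == "__main__":
    # Illustrative only: Intra and Taxon scores for a handful of fictitious PTMSs.
    intra = [0.92, 1.10, 1.31, 1.48, 1.05]
    taxon = [1.42, 1.55, 1.80, 1.95, 1.51]
    print(round(pearson(intra, taxon), 4))
```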
The symbol "≫" stands for a method which is better than another with statistical significance better than 1 in a million (p-value < 1e-6). The symbol ">" stands for a p-value < 0.05 and the symbol "≥" means the p-value is > 0.05. To justify the grouping of the classes we have computed the correlations between the classes. Table 7 shows Pearson's correlation coefficients of the Intra measures for all the classes against each other. The average correlation for non-Metazoa is 0.9867, in a tight range from 0.9696 to 0.9964. Notice also that OtherEukaryota share the same preferences for the methods as Archaea and Bacteria, away from Metazoa. All the correlations with Metazoa are much lower. The natural grouping of the classes is to have one group with Metazoa and another group with the rest. The very high correlations of the different non-Metazoa classes are the main argument supporting the quality of the Taxon measure. The measure is strong enough to replicate the rankings on several groups. This is a form of bootstrapping, as the results are replicated from independent, different samples.

Tables 8, 9, 10, 11, 12 and 13 show results over component methods for the Taxon measure. We are working under the assumption that better trees derived from variants of the components (e.g. MSAs) mean better components (e.g. better MSAs). While this may be controversial, it is very difficult to argue the opposite, see [19]. These results are aggregations of various classes and various methods. In all cases care is taken to include the same companion methods for each comparison. The numerical value shows the difference of the Taxon measures (and its 95% confidence margin) between the methods. It measures the average difference of RF distances, or wrong splits; e.g. Δ = 1 means that on average one method makes one additional mistake per tree. n indicates the number of OGs which have been used to measure this difference (in some cases the OGs end up being used more than once, for example for different MSAs when comparing ML methods). See Methods, Table 14.

Averaging over the component methods, PhyML is the best tree builder using MSAs (Table 8). The results are consistent across classes except for a significant worsening of Parsimony and Gap for Metazoa. Global alignments [22] dominate the pairwise alignment methods (Table 9). The most significant difference between Metazoa and non-Metazoa is that CodonPAM is propelled to the front by a significant margin in Metazoa. It should be noted that the CodonPAM mutation matrix is an empirical mutation matrix based on data from vertebrates [23]. The genomes included in Metazoa have diverged more recently than for other classes, like Archaea, which also explains the better performance of codon-level information. Regardless of the reason, the advantage of codon-based methods is an order of magnitude larger than the differences between the other methods. Hence codon-based methods appear unavoidable for Metazoa. Table 10 confirms the same difference at the level of MSA-induced alignments. For the distance methods (Table 11), LST changes from last position in non-Metazoa to first position in Metazoa; in this case the absolute differences are relatively small. Table 12 shows the comparisons of empirical substitution matrices. The differences between the best and the worst matrix are statistically significant but very minor. Table 13 shows the results for MSAs.
PartialOrder, which is an algorithm designed to deal with alternative splicings, works better for Metazoa. ClustalW, from a middle ranking in non-Metazoa, drops to a clear last for Metazoa. The rest of the rankings remain quite consistent for all species. The most important message coming out of these results is that the best methods are minimal evolution (distance) methods over pairwise alignments induced by MSAs. A method like Mafft InducDist BioME is 2-3 orders of magnitude faster than ML methods and outperforms them all by a good margin.

Discussion

It may appear surprising that the best method for non-Metazoa starts by using Mafft, which is not the best MSA (Table 13). In general, the best PTMS may not include the best components and, vice versa, the best individual components may not give the best PTMS. Components may combine/exploit their abilities/weaknesses. For example, an MSA method which does a very good job with amino acids but a mediocre job with gaps may compose very well with ML methods but poorly with Gap trees. We have to remember that the analysis of components, Tables 8, 9, 10, 11, 12 and 13, is done over an average of many situations.

The statistical significance of the difference between methods is one aspect; the magnitude of the difference is also important. The testing was done over such large samples that often minor differences are still statistically significant. We consider that a difference of less than Δ = 0.01 (that is, in 100 trees, on average, we get one less error) is without practical significance. A Δ = 0.05 difference, on the other hand, means that one method will produce one better branch every 20 trees, which can be considered significant. Mafft InducDist BioME is the top method for non-Metazoa under the Taxon measure and is ahead of the top ML method, Prank PhyML, by Δ = 0.048 correct branches per tree. The numbers of incorrect branches per tree of the two methods are 1.425 and 1.473 respectively. (For Metazoa the best methods are PartialOrder CodonDist BioNJ and Prank RAxMLG.)

Caveats, what can go wrong?

Here we describe some problems that may affect the power/correctness of the PTMS evaluation (for dependencies on absolute/relative distances, number of leaves and sequence length see Methods).

i The OGs should follow the same evolutionary history. This is normally the case, except when we have lateral gene transfers (LGT) or OGs which do not follow Fitch's definition of orthology [24] precisely. For the purpose of testing the methods, it is much better to skip dubious OGs. The OMA orthologous database fits our needs best [25,26], as it sacrifices recall in favor of precision.

ii The Intra test measures the ability to recover a phylogenetic signal from sequences. Other reasons for mutation of the sequences may leave their trace in the conclusions. For example, it is known that the environmental temperature affects the GC content of the sequences due to DNA stability [27]. Consequently, we will expect a bias at the codon level that will tend to group together organisms that live in a high-temperature environment.

iii The methods should produce trees with complete structure, i.e. no multifurcations; all nodes must be binary. A method which produces a tree with multifurcations will have an advantage, as it will normally make fewer mistakes. In the extreme, a star tree is always correct.
iv Since PTMSs have been in use for many years, the preferred methods of the community may show an undeserved good performance under the Taxon measure (but not under the Intra measure!). This is not unreasonable, since for bacteria many classical phylogenetic methods do not apply (e.g. bacterial paleontology has very few results), and taxonomies may have been constructed with some of these methods. A method which has been the favourite of the taxonomists will be displaced to the left of the main line in Figure 3 (it reduces the Taxon measure and leaves the Intra unchanged). We can see that parsimony, RAxML and PhyML show a small shift to the left and hence it is possible that these methods have biased the building of the taxonomies. This shift is noticeable but quite small, so we can conclude that this is not a major bias.

v Finally, the Intra measure, by being a consistency measure, may be insensitive to systematic mistakes of the tree building. This would be something that affects the Intra measure but not the Taxon measure. Stephane Guindon suggested that long branch attraction (LBA) could mislead the Intra measure for methods that suffer from it, by systematically computing one of the incorrect trees. To study this properly we generated a new class called LBAExamples, which is composed of a quartet ((A,C),(B,D)) where the branches leading to the leaves C and D are much longer than the other branches. This quartet is sometimes called the "Felsenstein example" and is used to demonstrate how some methods, like parsimony, will systematically reconstruct the wrong tree (and hence the name LBA). A wrong quartet is at an RF distance of 1, hence for quartets the values of the Taxon measure coincide with the number of incorrect trees. The gap that separates the non-LBA from the LBA methods is large in absolute and in relative values. So we can conclude that LBA is successfully detected by both measures.

b Methods based on k-mer statistics (not reported here, but also evaluated in our computations) fare much worse than all the other methods in general. These are methods which count the number of, for example, tri-mers, and use as distances a statistical test (like chi-squared) on the tri-mer frequencies. For example, a method based on DNA tetramers scores 0.952 for the Taxon measure (it gets 95.2% of the quartets wrong!) but scores 0.0917 on the Intra measure. This is quite extreme (the method is very poor in every context), but it supports the observation that the Intra measure is a consistency measure: if the method systematically fails and there is only one way of failing, then the consistency is good. In terms of the plot of Figure 3, these cases will be points displaced to the right of the main line. This extreme case reinforces our recommendation of using both measures to conclude the performance of methods.

c This side study showed additional surprises. The guide tree produced by Prank is usually quite good, but unfortunately it suffers severely from LBA (Taxon = 0.532). We were also unaware that the SynPAM methods, which are maximum likelihood methods, also suffer from LBA.

The above caveats indicate that the problems are relatively few and seldom apply to both measures. Consequently, a method which does well under both measures is a very strong candidate.

Conclusions

We show, through a comparison of methods against trees involving tens of millions of data points, which are the most effective PTMSs.
This uncovers a big surprise as one of the favorite methods among the community, the ML methods, score poorly. Methods based on MSA induced pairwise alignments and minimal evolution not only produce better trees, but are 100 to 1000 times faster to compute. This should revolutionize this niche of bioinformatics. We also show that a new measure of quality, the Intra measure, is highly correlated with the Taxon measure (closeness to taxonomic trees) and it does not suffer from the biases of the practice. These new measures are likely to be extremely helpful in the development of new and better algorithms. Methods We cannot show all the computations and results in this publication because of their size (about 57Gb). We have developed a web site which allows the exploration of all the data and all the results to their most minute detail. We intend to maintain this website for as long as possible, and to upgrade it periodically both with new genomes and with new methods. It contains very useful information in our view. This can be found in: http://www.cbrg.ethz.ch/services/ toolcompare Source data The study was done over complete genomes for three reasons: species coverage is quite ample, 755 complete genomes were used, we can obtain a large number of very reliable OGs and since all OGs from entire genomes are used, no selection bias is possible. A complete description of the classes can be found in the OMA1000 database which is accessible at: http://www.biorecipes.com/Orthologues/ status.html Sample output showing the selection of which methods to compare when summarizing results, Table 14. The difference of the Taxon measures is taken over corresponding pairs of trees. These corresponding pairs differ only in the components we want to compare. Furthermore, they will be computed over exactly the same population of OGs. To do the analysis, the genomes were grouped in the classes shown in Table 6 (in this work we will call any of these groups a "class"). The column "OGs kept" shows the number of groups which had 4 or more acceptable sequences. The study includes all the publicly available genomes as of Nov/2010, the release of OMA1000 [21]. For proteobacteria and firmicutes, which are relatively overrepresented and many species sequenced multiple times (e.g., there are 26 genomes of different strains of E.coli), only 265 genomes of 452 were chosen for proteobacteria (89 of 177 for firmicutes) as follows: For each pair of genomes an average evolutionary distance was computed. Iteratively, one of the members of the closest pair of genomes was discarded. The discarded one was the one with lower "quality index" (a simple ad-hoc measure of quality of complete genomes). In this way we retained the "best", most diverse, 265 proteobacteria and the most diverse 89 firmicutes. All the major versions of model organisms ended up in these classes. As a control, we also computed the same trees over all the genomes of firmicutes (177). The correlation coefficient of the Taxon measure between the full class of all firmicutes and the class with 89 genomes is 0.994058. Knowing this value we are confident that the results are not affected by removing very similar (or repeated) genomes. Comparing within classes is better than grouping the classes together for the following reasons: -The classes are more uniform and may reveal biases (as they do) specific to the classes. -The missing relations -between classes -are usually so obvious that almost no method will get them wrong. 
It is the fine grain differences that matter. -The computation time would be out of reach for some methods. -The problems of some well documented LGTs, like proteins of mitochondrial origin, are avoided. The selection of the OMA [20] database of OGs was done because OMA is particularly careful about removing paralogous sequences at the expense of sometimes splitting groups (precision at the expense of recall). A split OG is a minor loss of data, of little consequence given our sample size, whereas the inclusion of paralogous sequences breaks the basic assumption for the correctness of the Taxon and Intra measures. The main assumption that links the Intra measure with the quality of the methods is that any pair of groups represents the same evolution path. If one group contains an orthologous pair and the corresponding pair in another group is paralogous, these will correspond to different evolutionary histories and the comparison is wrong. Sequence/group cleanup Only the OGs with 4 or more sequences are used (2 or 3 sequences will never show different unrooted topologies). We also removed all but one copy of identical sequences. A method which is given a few identical sequences will most likely place them all in a single subtree. The shape of this subtree will be unrelated to the phylogeny of the sequences (there is no information available to make a decision). Since this adds noise to the results (not necessarily bias), we remove all identical sequences but one from the OGs. Additionally we remove sequences for which more than 5% of their amino acids or codons are unknown ("X") as this is a sign of poor quality of the sequence. Both policies together remove about 3.5% of the sequences. Bayesian methods Bayesian methods for tree building have not been included in this study because they do not follow the PTMS definition. In principle a bayesian tree building method produces a probability distribution over all trees given the corresponding priors. If the priors are ignored and only the tree with highest probability is selected, then this is ML, not bayesian. Approaches which build consensus trees from several of the most probable trees produce multifurcating trees which contain less information and hence are not comparable to fully determined trees. Any prior which contains information about the tree which is not extracted from the sequences themselves will violate our assumptions for PTMS. Tree building methods We have computed 176 trees per OG, that is a total of 176 × 193138 trees or about 34 million trees. The tree building methods are a combination of several components, for example Mafft PhyML represents the method composed of building an MSA with Mafft and then using the PhyML program. The component methods are only a subset of the existing methods. The ones chosen are the ones that we perceive as the most popular and effective in the community plus the ones which have been written locally. We welcome suggestions of promising new components to test. Pairwise alignment methods which compute a distance and variance matrix from the sequences in an MSA: -CodonDist -estimate the ML CodonPAM distance from pairwise alignments induced by an MSA. The MSA is over amino acids, and the corresponding codons from the protein are used to replace the amino acids. [23] -InducDist -estimate the ML distance from pairwise alignments induced by an MSA with the GCB rate matrices. Distance methods which produce a tree from a distance/variance matrix: -BioNJ -an improved version of the NJ algorithm, [49]. 
- FastME - build a tree using the minimum evolution principle, version 2.07, [50].
- BioME - a version of FastME with iterative improvements.
- LST - build a tree using the least squares principle, with the distances weighted by the inverse of their variance [36,51].
- NJ - the neighbor joining method, [52].

Taxonomies database

We chose the NCBI [53,54] taxonomies to build the taxonomic trees, the basis of our Taxon measure. The NCBI database is detailed and extensive and covers all the species that were included in OMA1000. The ITIS database [55], another well known taxonomic database, is not as complete, in particular for bacteria, where many of the entries we need are absent.

Computation

The computations were carried out in our own cluster of Linux machines, about 300 cores. These were done using Darwin [36] as a host system. Additionally we used the Brutus cluster, a central service of ETH. We estimate that we used about 646,000 hours of individual CPUs. Table 15 shows the most time-consuming tasks. Most of the classes took time proportional to their number of OGs, the exception being Metazoa.

Distances between trees

We use the Robinson-Foulds (RF) [17] distance to measure distances between trees. The RF distance basically counts how many internal branches of the unrooted trees do not have a corresponding branch which divides the leaves into the same two sets. For trees with n leaves, the RF distance may be as high as n - 3. When the taxonomic tree is not completely determined (that is, some nodes have more than two descendants), we have to correct the computation of the RF distance. This is relatively straightforward to fix. The maximum distance in these cases is less than n - 3.

Absolute vs relative distances vs 0-1 distances

There are arguments to use the absolute RF distance and arguments to use relative RF distances (the absolute distance divided by n - 3). Fortunately, the results are remarkably consistent for the absolute and relative RF distance. Table 16 shows Pearson's and Spearman's correlation coefficients between the absolute and relative measures, per class, for all PTMSs. Clearly the rankings are not affected by this choice. There are also arguments that the RF distance may not reflect evolutionary distance; that is, sometimes a small evolutionary change produces a tree which has a large RF distance to the original and other times a large evolutionary change produces a tree which has a small RF distance. This has been recently discussed in [56]. To take this concern into consideration we also computed the 0-1 distances, a Hamming-style distance: 0 if the trees are equal, 1 otherwise. The 0-1 distance may not be as sensitive as other distances, since it collapses all wrong trees into a single case; in particular it will give very little information about large trees, where there is almost always some error. The correlation coefficients between the 0-1 distance and the RF distance for the Taxon measure are 0.9905 (non-Metazoa) and 0.9004 (Metazoa). The correlation is excellent for non-Metazoa, and the rankings for the Taxon or 0-1 distances have relatively insignificant differences. The correlation for Metazoa is good but lower, and some methods, notably ML methods, move ahead. If we take as an example Prank RAxMLG in Metazoa, we find that its 0-1 distance is 0.5417. Excluding the trees with 10 or fewer leaves, it is 0.9614, and excluding the trees with 20 or fewer leaves it is 0.9824.
Even medium size trees are mostly wrong. Clearly the 0-1 measure loses too much information for large trees and reflects the quality of small trees alone. This has motivated us to study the impact of the distance functions used for the Taxon and Intra measures in depth, which will be reported in a future work. The 0-1 distances are shown as a separate column in the full Tables 1 and 14. To make safer conclusions about comparisons of methods, we should use the Taxon, Intra and 0-1 measures. Large trees vs small trees It may be argued that small trees are too simple and bigger trees are the important ones. To analyze this effect we divide the OGs into two groups, the ones with 15 or fewer leaves and the ones with more than 15 leaves. We then compute the correlation coefficient for the measures on all PTMS for these two groups. Table 17 shows the results for each of the classes. All the correlations are high, and those for the non Metazoa are remarkably high. From these correlations we can conclude that the number of leaves used for the quality analysis does not influence the results. Long sequences vs short sequences In a similar way it may be argued that groups with long sequences behave differently than groups with short sequences. To analyze this effect we again divide the OGs into two groups, the ones with average sequence length less or equal to the median and those with average above the median. We then compute the correlation coefficient for the measures on all PTMS for these two groups. Table 18 shows the results for each of the classes. As above, the correlations are high and even higher for non Metazoa. From these correlations we can conclude that the average length of the sequences used for the quality analysis does not influence the results. In these last two comparisons, where we select the groups based on properties of the OG (like number of sequences and average length), we have to use the Taxon measure which is based on distances of a single group. The Intra measure is based on pairs of groups, and hence not suitable for these splits. Variance reduction techniques To compare two building methods in the Taxon measure, we can use the average distances to the taxonomic tree over all the OGs. These averages will have a relatively large variance and the difference may not be statistically significant. To refine the comparison of two particular methods, we study the difference of distances of the two methods for each OG. The expected value of the difference coincides with the difference of the averages, but the confidence margins are much better because the variance of the difference is normally smaller. This is a well known variance reduction technique [57].
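A minimal sketch of this variance reduction (ours, not the study's Darwin code): rather than comparing the two methods' average Taxon distances directly, take the per-OG difference of distances and report its mean with a normal-approximation 95% confidence margin.

```python
from math import sqrt

def paired_difference(dist_a, dist_b):
    """Mean per-OG difference in RF distance between two methods, with an
    approximate 95% confidence margin (normal approximation, large n)."""
    diffs = [a - b for a, b in zip(dist_a, dist_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    margin = 1.96 * sqrt(var / n)
    return mean, margin

if __name__ == "__main__":
    # Illustrative numbers only: per-OG Taxon distances for two methods.
    method_a = [2, 1, 3, 2, 1, 4, 2, 2, 3, 1]
    method_b = [2, 2, 3, 3, 1, 4, 3, 2, 3, 2]
    delta, margin = paired_difference(method_a, method_b)
    print("Delta = {:.2f} +/- {:.2f} wrong splits per tree".format(delta, margin))
```

Because the two distances for the same OG are positively correlated, the variance of their difference is smaller than the sum of the individual variances, which is exactly why the paired comparison gives tighter confidence margins.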
7,826
2012-06-27T00:00:00.000
[ "Biology" ]
#aintnobodygottimeforthat: cultural appropriation, stylization and the social life of hashtag interjectionality

This paper will discuss a particular hashtag meme as one example of a potential new manifestation of interjectionality, engendered and fostered in the written online context of social media. The case derives from a video meme and hashtag from the United States which 'went viral' in 2012. We will ask to what extent hashtags might perform interjectional-type functions over and above their referential functions, thereby having links to other, more prototypically interjectional elements. The case will also be discussed from multiple sociolinguistic perspectives: as an example of the (indirect) signifying of 'whiteness' through 'black' discourse, as cultural appropriation in the context of potential policing of these racial divides in the United States, and as a case of performative stylization which highlights grammatical markers while simultaneously downplaying phonological markers of African American English. We will end by speculating as to the implications of the rise of (variant forms of) hashtags for processes of creative language use in the future.

1 Introduction

This paper is concerned theoretically with one example of the kinds of linguistic creativity afforded by new technological developments, based on the idea that a social media tool such as Twitter has opened up the arena of what it is possible for language users to do in the realm of interjectionality. I use the term "hashtag interjectionality" specifically for an innovative use of the hashtag. This interjectionality is a creative use that moves the hashtag beyond being a simple sorting tool, which was its original designed purpose. Many writers have noted the rise of enregistered 'internet language' forms (Squires 2010). The hashtag has also recently become a way of referencing social movements and trends (#MeToo, for example), and, I argue here, moving into the realm of interjectionality and the forceful expression of feeling is also a natural step.

We will base this discussion around the idea of a continuum of interjectionality (following Ameka 1992 and Stange 2009, 2016), the idea that expressions can be more or less 'interjectional', fluctuating between levels of 'emotiveness' vis-à-vis levels of 'cognitive content' (see further in section 3 below). In the era of the hashtag, and contra Burdick above, we will claim that the hashtag does of course 'mean' something, indeed can mean and do many things, and has demonstrated its potential as a linguistic device. The paper will focus empirically on a family of hashtags we characterize as the #aintnobody… family, which is at time of writing a meme with about six years of history behind it. As such, it is not particularly special in itself: it has lived a mundane life and has not been part of a spectacular social movement the way #MeToo, #TimesUp and #BlackLivesMatter have been. It fits into a middle range, neither being one of the very first instances of hashtags (the type emerged in 2007), nor is it brand new. As a meme in 'middle age' it seems to have proven to have a certain amount of staying power and has thus attained a kind of 'every-day-ness', which makes it suitable to serve as an example of the kinds of broadly sociolinguistic processes this paper will illustrate as being at play in social media usages of hashtags.
The present paper will therefore discuss this hashtag (and its family of variants) from several perspectives. To begin with, I will present relevant theoretical approaches to interjectionality (following Stange 2009, 2016 and Ameka 1992), a key theoretical concept here that emphasizes gradability rather than absolutist definitions of what makes interjections the special part of speech they are (as discussed in Ameka 1992). The essence of my theoretical claim in this paper is that we can find similarities and echoes between hashtags and the structural and, especially, functional classes of interjections and interjectional phrases. The particular example presented here shows tantalizing relations to several features of interjections as a pragmatic class. Although the parallel is not perfect, it does reinforce the view that there is a place in the literature for a foregrounding of degrees of interjectionality affecting various pieces of linguistic form in the (relatively) new 'written language' arena of social media (see also the other papers in this journal issue).

After these theoretical reflections, I will consider other more sociolinguistic aspects of this particular hashtag. There are several that I find particularly interesting, so the case will be discussed from multiple perspectives, as noted above. For example, the tag itself mediates and semiotizes race politics in the United States in a particular way. We will thus consider the tag as an example of (indirect) indexicality of 'whiteness' through 'black' discourse (in the manner identified by Jane Hill in 1999). Secondly, it can be seen as an instance of cultural appropriation within this context of the potential policing of racial diversity and race in the United States, as several commentators have noted. The other interesting angle is that a specific phonetic stylization process (Coupland 2007), using varying degrees of 'bleaching' of the written form, is a central part of this semiosis, even in the face of the possibility of the repeated performance of the original clip which is enabled by the video meme technology that permeates social media. To conclude, I will urge that, in line with many other streams of scholarship, we need to focus on degrees rather than absolutes, moving the analytical focus from interjections to shades of interjectionality. Combining that particular analytical move with a sensitivity to the social semiotic levels that any instance of novel language use can construct will enable innovative types of linguistic performance such as these data examples to be understood more fully.

2 The data

The example case derives from a video meme and hashtag originating in the United States, which subsequently 'went viral' in 2012. The original news clip from which it derives is from an interview with Kimberley 'Sweet Brown' Wilkins, the resident of an apartment building that had caught fire. The interview was originally broadcast on KFOR-TV in Oklahoma City, on April 8, 2012.
Wilkins' own dramatic escape from the burning building is related to the news interviewer as a short narrative. Kimberley Wilkins performs this monologue in a basilectal form of AAVE, with many linguistic features of that type of speech. The monologue in full is transcribed here:

Voiceover: One resident describes her horrifying experience when she first realised the complex was on fire
Kimberley Wilkins: Well, I woke up to go get me a cold pop and then I thought somebody was barbecuin'. I said "Oh Lord Jesus it's a fire" and then I ran out, I didn't grab no shoes or nothin' Jesus I ran for my life. And then the smoke got me, I got bronchitis. Ain't nobody got time for that.
Source: Web1.

The interviewee's final utterance, often transcribed as "ain't nobody got time fo' dat", became a much-quoted stereotyped fragment from the episode and, as commonly happened at that time, an auto-tune version of the interview by the Parody Factory emerged soon after, also in April 2012 (Web2).

Various versions of the written tag can now be found on Twitter, as well as the form used in the title here: #aintnobodygottimeforthat. There is also an existing acronym form #angtft (as well as #angtfd), which seems to have emerged at the same time as the longer form of the hashtag, according to Twitter archives online.

In terms of rates of usage, a simple Google search on 26th February 2018 revealed that the various forms yielded different numbers of hits.

Table 1. Forms and Google 'hits' for the #aintnobody… family of hashtags, February 2018.

In terms of its subsequent quotation and re-usage, at time of writing the hashtag seems to be used most commonly to express a range of emotional responses. It appears most commonly as an expression of frustration at time wasted, or of effort wasted, on the part of self and other people. Annoyances of other kinds can also feature. It seems also to be used commonly in connection with illness on the part of the speaker/writer, such as influenza or bronchitis (which was part of the original interview, so this may be a case of more extensive quotation), for example during the winter months. Note that in the Urbandictionary.com definition of the acronym form, it is glossed as "especially useful when time is short" (Web3).

In other cases, the hashtag seems to express reflective bemusement at, or a general sense of detachment from or unwillingness to engage with, the topic of the tweet. It is also commonly used to express anger at and rejection of other people's actions. The examples given below are all from different tweeters. Note however that example (d) also seems more 'contentful', because it spells out the implicitness of the (…) after "the building of IKEA furniture".

2 a. It's funny how I can look at dogs and be all "omg I want one!" but when it comes to kids, I may think they're cute but never do I say that I want one. #AintNobodyGotTimeForThat
b. You know you're old when you ask someone what 'starting a streak' means and what the point of it was. #AintNobodyGotTimeForThat
c. I don't get how some people can be two-faced hypocrites #AintNobodyGotTimeForThat
d. Love IKEA furniture. However, the building of IKEA furniture.... #AintNobodyGotTimeForThat
e.
Life is too short to eat olives with pips in #AintNobodyGotTimeForTh at Th us, we can see in the meanings of the emotional responses expressed by this hashtag a sense of the perception of something negative, either within oneself or in one's own experience, or deriving from other people, that is emphatically rejected by the use of the hashtag.Th ese and similar emotive usages provided the primary impetus for looking at this hashtag as a potential interjectionlike utterance, exhibiting some degrees of interjectionality.Th is terminological and theoretical discussion will be the focus of the next section. Interjections and interjectionality Ameka's classic 1992 paper on interjections classifi cation distinguishes between interjections, which are single bounded utterances, and interjectional phrases, which are more complex utterances made up of items that can also appear in other contexts.His classifi cation scheme would put the #aintnobody… family into the class of interjectional phrases, since they fulfi l the criterion of being "multi-word expressions, phrases, which can be free utterance units and refer to mental acts" (Ameka 1992: 111).Th ey are most clearly examples of "completive or exclamatory utterances" (1992: 104).Like Ameka, we do not label these hashtag forms 'interjections' proper, but reserve that term for shorter, less complex utterances.We will prefer to suggest that these hashtags are 'interjectional' and 'display interjectionality' to some degree. Anne H. Fabricius Scandinavian Studies in Language, 10(1), 2019 (85-97) We can note that longer syntactic structures shortened to acronyms are also interesting in themselves, although there is not scope in this paper to delve deeply into this.#angtft can be said to function as a pointer to the full form, and it is an example of the mode of internet writing that is now commonplace (cf.'#MAGA' , Trump's campaign slogan from 2016 and other older 'leet speak' abbreviations such as 'ff s' = 'for fuck's sake' , 'fwiw' = 'for what it's worth' , 'wtf ' = 'what the fuck' and so on.See also Squires 2010 for further discussion of this mode of writing and its enregisterment). Interjectionality, as a complex of features and functions that an utterance can have more or less of, or be used with more or less of, was introduced in Nübling ( 2004) and has been explicated by Nübling and Stange in a series of publications.Stange (2009: 31) for instance refers to 'parameters of interjectionality' -and in her 2016 study, Stange (2016:17) explicates her approach to determining the degree of interjectionality of an utterance: "an interjection is said to exhibit a high degree of interjectionality if 1.It is primarily emotive By means of this model, she sets up a continuum (in her fi gure 2.2, based on Nübling (2004:18)) between the highest and lowest degrees of interjectionality, where a scale from emotive to cognitive to conative to phatic interjections moves from the highest to the lowest degree of interjectionality, as the interjections vary along the parameters described in 1 through 4 above. 
Our data examples (a, b, c, d, e above) seem to most clearly fulfi l the criteria for interjectionality in the fi rst three cases.Th e #aintnobody tags are clearly expressive of emotion and exclamatory.Th e tags seem to be just as much used as a reply to other tweets on Twitter as they are used in solo tweets that are not addressed to anyone else (although no large quantitative study of this has been carried out).In these cases the hashtags simply accompany personal private musings on a topic, and so they are not dependent on a specifi c addressee but, in the way of social media at large, utterances can be simply "put out there", addressed to the public audience at large.Th e degree to which the tags can be regarded as spontaneous outbursts and thus 'semi-automatic' in Stange and Nübling's terms is more diffi cult to ascertain in a written medium, simply because of the asynchronous nature of tweeting as a language practice.Certainly, the tag's usages could be said, prima facie, to be based on diff erent degrees of refl ection.We can ask, given the content they report on, is the use Anne H. Fabricius Scandinavian Studies in Language, 10(1), 2019 (85)(86)(87)(88)(89)(90)(91)(92)(93)(94)(95)(96)(97) in (a.) above more 'refl ective' and less semi-automatic than (e.) or (c.), where refl ection on a person's attitude to having children could be seen as a more considered emotion than the immediate frustration of eating olives with pips, and reaction to that experience more or less "in the moment" as expressed in the tweet or spontaneous anger at other people's hypocrisy?While we cannot at all determine with accuracy how "automatic" the reaction expressed in any one tweet was, there can also be more or less surprising tweet connections made between topics and the hashtag, but exploring this in more detail is outside the scope of this paper. Other linguists have started to investigate hashtags in use and tried to defi ne their semantic and pragmatic, as well as formal and functional properties.Hashtags seems to cover many functions, but the interjectional function is defi nitely there in those accounts.Scott (2015), for instance, takes a relevance theory-perspective and claims that "the role of hashtags has developed beyond their original purpose (as metadata tags, ed.), and […] they now also function to guide readers' interpretations" (Scott 2015: 8).Hashtags, she argues, give contextually relevant information that helps readers fi nd their way to the intended interpretation in a discourse context (such as social media) that is large and dynamic and to some extent discursively unpredictable, also in terms of scalability, with the potential of individual tweets to reach millions of readers.Norrick's (2009) approach to interjections also gives a promising perspective for this paper.His claim that interjections are a large and open-ended functional class of utterances seems particularly appealing here, where we are seeing pieces of linguistic form take on a role as emotion-expressers, with greater and lesser degrees of the kinds of characteristics that make up 'interjectionality' as we have been using the term here.It opens the door for hashtags to fulfi l this particular interactional role, and that is what we are claiming has happened with the #aintnobody… hashtags. 
#Aintnobody… and race politics in the United States

Two overriding political themes can be taken up in connection to the #aintnobody… hashtags. The first concerns its semiosis of race and racial divides in the United States, and the second concerns aspects of cultural appropriation. These two are ultimately intertwined and have echoes to other racial diversity problematics in the same context, as will be discussed below. Two commentators that I am aware of have raised issues of uncomfortableness around the spread and continued usage of the #aintnobody… tags, and these issues were raised quite quickly after the meme's initial dissemination in April 2012.3

In 2013, Sesali Bowen wrote a blog post condemning the cultural appropriation by the white majority in the United States of certain elements of underprivileged black women's experience (often suitably sanitized) under the rubric of 'ratchet culture'. The 'Sweet Brown' #aintnobody… memes were placed in this category. Bowen's argument is that these examples (including the Miley Cyrus 'twerk' phenomenon that was the original impetus for her commentary) are an expression of cultural appropriation of aspects of (female) black lived experience into the white world, essentially for the purposes of selling something. As she writes, "It is super easy to borrow from the experiences of others as a way to be "fun," or stretch boundaries on what is "acceptable," without any acknowledgement of context or framework." She claims that "ratchet works to simultaneously police and defy gender, class, sexuality, and respectability norms. Folks with certain privilege are willing and able to float in and out of ratchet at will". This move between association and disassociation with features of black women's experience is however not available to all, but, the writer claims, only to certain types of (non-black) privilege. Black people on the other hand (and perhaps especially black women) will be read through the lens of their race and its affordances and limitations at all times. The author argues that using a hashtag meme that originates from dramatic negative circumstances, transforming it with humorous or mocking intent, therefore constitutes an illegitimate act of appropriation (Bowen 2013).

In a slightly different vein, in October 2013, Charles E. Williams wrote on the topic for the Huffington Post (Williams 2013). He writes that he had himself admired the honesty and unguarded language in the original interview, but had also speculated as to whether reacting with humour to the depiction of tragic events (such as a house fire) using such unmonitored black community language constituted a racist stance on the part of white observers. Williams argues that it is best if "quite frankly this kind of dialect and the humour it sometimes encourages stays in the confines of my cultural community" (Williams is an African-American Baptist pastor). He quotes W.E.B.
du Bois, who wrote in "The Souls of Black Folk" from 1903: "It is a peculiar sensation, this double-consciousness, this sense of always looking at one's self through the eyes of others, of measuring one's soul by the tape of a world that looks on in amused contempt and pity". This consciousness of othering, Williams argues, leads him to be nervous about revealing his own (basilectal) African American Vernacular English in public contexts, knowing as he does the judgmental attitudes and consequences it can lead to. The empowered conclusion he ends with, however, is that this language and its authenticity ought to be an object of pride, even in the face of mainstream condemnation or rejection, a point that has been very familiar to sociolinguists at least since, for example, the publication of Labov 1969.

The fact remains that the #aintnobody… hashtags provoke different responses for different readers, and the political backdrop to what is for some a seemingly innocuous internet meme is by no means straightforward. As Susan Gal has written (2018: 9), the use of a register of language (such as African American Vernacular English) is always "a response to other ways of naming the phenomenon: essentially dialectic". The hashtag bears something of its original context, and of the phonetic details of the way it was uttered, with it on its travels, particularly when snippets of the original video and audio can be circulated endlessly on the internet. Moreover, as Gal points out, "participants enact speaker types by using register fragments conventionally linked to such person typifications" (Gal 2018: 5), so as the enactment of a speaker type who provokes mockery and humour, the "Sweet Brown" typification carries on in each discursive iteration, making its usage, as Bowen also claims, a political act marking and policing a racial divide. Jane Hill, in a series of publications (e.g. Hill 1999), has indeed shed light on the tendency of white mainstream culture in the United States to appropriate the language of others. She claims that "white public space is constructed partly through intense monitoring of the speech of racialized populations… for signs of linguistic disorder". Her cases mainly concerned Latino/Latina language forms, which are commonly appropriated and reframed within white discourses. Hill's condemnation of the practice centres on the fact that such stylizations carry with them an assumption of "white public space" being the normatively unmarked order, in a dialectic juxtaposition, as Gal would also claim. This mechanism of contrast, I maintain, can be seen at work in the #aintnobody… hashtags as well. We will see further the ways in which this racial political context is worked with in the following section, which will discuss phonetic stylization and variations in orthographic 'mainstreaming'.

Phonetic Stylization in #aintnobody tags

In section two above, we showed the various forms in which the hashtag occurs online:4 The details of the stylizations involved in the spelling of these tags are complex: 'ain't nobody' occurs in all instances, and its implied contrast is with standard 'nobody has', without the negative concord of 'ain't nobody'.
'Ain't' in itself is a much-stigmatized verbal auxiliary, standing grammatically in place of 'isn't', 'aren't' or 'hasn't'/'haven't' (depending on grammatical context). But 'ain't' and the negative concord with 'nobody' remain in all the manifestations of the hashtag, apart from the acronymic form.

The other variations at the end of the hashtag show evidence of a gradual 'bleaching out' of the spelling of the particular AAVE/Southern phonological features that are present in the original performance. 'Fo dat' shows both non-rhoticity ('fo' for 'for'), typical both of AAVE and traditional Southern US English, and DH-stopping, common in AAVE and, for instance, New York traditional speech ('d' for the voiced interdental fricative). '...Fordat' retains DH-stopping and retains rhoticity, and '...fothat' does the reverse: maintaining the non-rhotic 'fo' and using standard 'that' instead of the stopped version, 'dat'. '...Forthat', the form which as we noted in section two garners the highest numbers of Google hits and thus seems to be the most commonly used form, has fully standardized the phonological variation while keeping the two signs of grammatical variation, 'aint' and negative concord. The most common form is now the most phonologically 'mainstreamed' form. The final form in our list, a completely reduced acronym, in the style of many internet terms which likewise have been reduced to initials, carries with it no visible signs of phonetic stylization (although there is also a marked form #angtfd which seems to be mostly used by the black twitter community, at first glance).

Following Nikolas Coupland (2007), we can see these written forms as carrying with them various degrees and types of linguistic stylization. Coupland writes extensively on this phenomenon, which involves either oral or written performance of a style which sticks out, which marks itself literally as the 'marked' item of a dialectical pair that contrasts an upholding of and a breaking of language norms. Coupland claims that "speakers design their talk in the awareness – at some level of consciousness and with some level of autonomous control – of alternative possibilities and of likely outcomes (2007: 146)." His further elaboration of stylization makes the following points (2007: 154):
• Stylisation is therefore fundamentally metaphorical. It brings into play stereotyped semiotic and ideological values associated with other groups, situations or times. It dislocates a speaker and utterances from the immediate speaking context.
• It is reflexive, mannered and knowing. It is a metacommunicative mode that attends and invites attention to its own modality, and radically mediates understanding of the ideational, identificational and relational meanings of its own utterances.
• It requires an acculturated audience able to read and predisposed to judge the semiotic value of a projected persona or genre. It is therefore especially tightly linked to the normative interpretations of speech and non-verbal styles entertained by specific discourse communities.
• It instigates, in and with listeners, processes of social comparison and re-evaluation (aesthetic and moral), focused on the real and metaphorical identities of speakers, their strategies and goals, but spilling over into re-evaluation of listeners' identities, orientations and values.

Especially interesting are the comments on the fact that stylisation as a metacommunicative mode requires a 'knowing audience'. Interestingly, the reproduction possibilities of video and audio memes mean that global knowing audiences can constantly be created anew on the internet as it functions today, without the need for such pre-existing cultural (or sub-cultural) knowledge.
If we combine Coupland's theorizing of stylization with Gal's theory of register in a linguistic anthropological light and Hill's discussion of language appropriations that position speakers and performers as more or less mainstream, we have a powerful set of tools to understand the internet hashtag #aintnobodygottimeforthat and its other family members. After the initial positioning of this instance of speech as nonstandard, non-mainstream and stigmatized, it has become normalized into mainstream discourse through orthographic bleachings (ultimately, all the way to an acronym) that remove evidence of non-standard phonology. The internet being what it is, the visual and phonetic qualities of the original clip still lurk in the background, and are indeed revived from time to time, bringing back the racial and social marking of the tag and tying it to its original performance. But in its new position, which has brought it to the edge of the 'interjectionality' universe, as the examples in section two showed, spelling variations have made it more mainstream, more 'non-AAVE' and more used in the social media universe as an informal emotive interjection, more mundane and, potentially, more lasting and mainstream.

Conclusions

This paper has dealt with the horizon of new linguistic openings that technological developments afford in social media contexts such as Twitter. Interjectionality as a shaded area, a case of degree rather than absolute, has been a major focus here. The #aintnobody… family is not particularly special in itself, but it has attained a kind of 'every-day-ness'. At the same time, as an instance of sociolinguistically interesting semiosis, it opens up a discussion of
6,206.2
2019-05-31T00:00:00.000
[ "Linguistics" ]
COMPUTATION OF CHEMICAL POTENTIAL AND FERMI-DIRAC INTEGRALS APPLIED TO STUDY THE TRANSPORT PHENOMENA OF SEMICONDUCTORS In the given paper, two methods for calculating, with high precision, the chemical potential and the Fermi–Dirac-type integrals of different indices are presented. Our calculations are consistent with already existing data. These data are essential not only in the study of the theory of solids but also in the explanation of the experimental results of investigated transport phenomena in solids, namely, in semiconductors. Introduction The physical laws of systems consisting of a huge number of particles are of a statistical nature. A statistical distribution determines the probability of finding the particles of the system in one or another state defined by its parameters.1 One example of systems with a huge number of particles that require a statistical approach is the solid state, in particular, semiconductors. The distribution of electrons over energies, namely, the Fermi-Dirac distribution, is the most important. The Boltzmann distribution, which is valid for particles obeying classical mechanics, is the limiting case of the Fermi-Dirac distribution. In the state of statistical equilibrium, the ideal electron gas in a solid obeys the Fermi-Dirac statistics.2,3 Different physical phenomena are differently sensitive to the type of statistical distribution. When considering the transport properties of a solid, we inevitably encounter integrals of the type called the Fermi-Dirac integrals.2,3 These are certain integrals often encountered not only in the study of the theory of semiconductors but also in the explanation of the experimental results of investigated transport phenomena in semiconductors. In this case, we have to use not only the tables of values of the Fermi integral but also their approximate formulas.4 However, the calculation of the integrals according to approximate formulas and tables gives a rather high error, especially for large and small values of the reduced Fermi level. The error can reach approximately 25%. Using approximations is inconvenient and inaccurate. Therefore, the goal of our paper is to find an appropriate method for the calculation of the Fermi-Dirac integrals, for application in the study of semiconductor properties, with a high accuracy (error <<1%). The presented paper also allows one to determine a normalization constant included in the Fermi-Dirac statistics, the chemical potential, with reasonably high accuracy. Solution of integrals by the Gauss-Legendre method According to the Gauss-Legendre method, the integral of a function is presented as the sum of (n-1) coefficients: (1) Suppose that, in evaluating the integral, we use only 2 coefficients; then it follows from (1): (2) This expression contains 4 unknown coefficients (C0, C1, x0, x1) and, consequently, we need 4 boundary conditions. The value of the integral may be taken over an arbitrary interval [a,b]; in our case, we take the interval [-1,1] and const=1. From (3), the boundary conditions follow. The solution of equations (4) gives the following values for the C and x coefficients. Taking into account the values of the two coefficients, the magnitude of the integral follows. Taking into account 4 coefficients, we have to add 4 boundary conditions: f=x^4, f=x^5, f=x^6, f=x^7. The coefficients calculated for these boundary conditions need to be inserted into (1) for the calculation of the numerical value of the integral.
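As a rough illustration of the quadrature rule described above (the paper's own calculations are done in MATLAB), the following Python sketch uses NumPy's built-in Legendre routines to obtain the nodes xi and weights Ci and to integrate over an arbitrary interval [a,b]; the function name and the example integrand are ours, not the paper's.

```python
# Minimal sketch (not the authors' MATLAB code): Gauss-Legendre nodes/weights
# and integration over an arbitrary interval [a, b], assuming the standard rule
#   ∫_a^b f(x) dx ≈ (b-a)/2 * Σ_i C_i f( (b-a)/2 * x_i + (a+b)/2 ).
import numpy as np

def gauss_legendre_integral(f, a, b, n):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    x, c = np.polynomial.legendre.leggauss(n)   # nodes x_i in [-1, 1], weights C_i
    t = 0.5 * (b - a) * x + 0.5 * (a + b)       # map the nodes onto [a, b]
    return 0.5 * (b - a) * np.sum(c * f(t))

# Example: the 2-point rule already integrates cubics exactly on [-1, 1].
print(gauss_legendre_integral(lambda x: x**3 + x**2, -1.0, 1.0, 2))  # ≈ 2/3
```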
If we take into account n coefficients, we will need 2n boundary conditions. In general, our task is to evaluate the integral over arbitrary boundaries. For the calculation of the integral, it is necessary to map the interval [a,b] onto [-1,1]. Suppose the value of the new argument is given by the substitution below. Finally, for the integral evaluated with two coefficients we obtain formula (9). The more terms we take into account in formula (1), the more accurate the value of the integral will be. In general, taking into account n coefficients, we obtain formula (10), where f(x) is an arbitrary function continuous on the interval [a,b], and Ci and xi are coefficients found from the boundary conditions. The second way to calculate the Ci and xi coefficients is to solve the Legendre polynomial equations (11).5,6 To obtain the Ci and xi coefficients, we first have to generate the Legendre polynomials and solve them. We can use MATLAB's built-in functions to generate the polynomials and solve them 7 (legendrePolynomials_1.m), or we can generate them manually. Method for the solution of improper integrals Suppose the integral is given not on a finite interval [a,b], but on the range [0,+∞). We can decompose the integral into two parts, as in (14). In the second part of the integral, we substitute the variable. We can then apply formula (10) to the two parts of this integral and solve any integral which is defined on this range. Taking into account formulas (14) and (15), we obtain (16). Calculation of the integrals by formulas (16) and their summation give the final value of integral (14). The Fermi integral and its derivative The general form of the Fermi integrals is given by formula (17), where ξ is the chemical potential. Many authors do not take into account the Γ(k + 1) member and introduce the Fermi integral as in (18). The formula for the derivative of the Fermi integrals is also given. For the gamma function appearing in formula (17), the standard relations can be written; in particular, if n is a natural number, Γ(n + 1) = n!; other standard values are also known. The graphs of the integrand function in formula (18) for different values of k and ξ are given in Fig. 1. Simpson's integral calculation method Another way to calculate integrals of a function f(x) defined on the range [a,b] is Simpson's integration method, the general formula of which is given in (20).7 Results for Simpson's rule are in good agreement with the Gauss-Legendre method. Error estimation Finally, we show the advantages of the method presented in the article by calculating the error of the implemented calculations. The error estimation for the Gauss-Legendre method has been done using formula (21). The values of the E parameter estimated by formula (22) are given for different n in Table 2. It is clear that increasing the value of n decreases the value of E, and the error goes to nearly zero. Conclusion The chemical potential and the Fermi-Dirac integrals are essential for a basic understanding of semiconductor properties. In this paper, the Fermi-Dirac integrals have been calculated in two different ways: by the Gauss-Legendre and Simpson methods. Both methods are in good agreement with each other. Our approach reduces the error in the calculation of the Fermi-Dirac integrals to well below 1%.
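To connect the pieces, the sketch below evaluates a Fermi-Dirac integral by the split-range strategy described above. Since the paper's equations (14)-(18) are not reproduced here, the normalization 1/Γ(k+1) and the tail substitution x = a/t are assumptions made for this illustration, and the cross-check against the non-degenerate limit exp(ξ) for ξ << 0 is ours. A Simpson's-rule evaluation on a dense grid (e.g., scipy.integrate.simpson) can serve as the cross-check the paper describes.

```python
# Hedged sketch: Fermi-Dirac integral
#   F_k(ξ) = 1/Γ(k+1) * ∫_0^∞ x^k / (1 + exp(x - ξ)) dx   (normalization assumed)
# The infinite range is split at x = a; the tail ∫_a^∞ is mapped to (0, 1]
# by x = a/t, dx = a/t^2 dt, which is one possible substitution, not necessarily
# the one used in the paper.
import numpy as np
from math import gamma

def fermi_dirac(k, xi, a=20.0, n=64):
    x, w = np.polynomial.legendre.leggauss(n)

    def gl(f, lo, hi):  # n-point Gauss-Legendre rule on [lo, hi]
        t = 0.5 * (hi - lo) * x + 0.5 * (hi + lo)
        return 0.5 * (hi - lo) * np.sum(w * f(t))

    # 1/(1+e^z) written as exp(-log(1+e^z)) to avoid overflow for large z
    integrand = lambda u: u**k * np.exp(-np.logaddexp(0.0, u - xi))
    head = gl(integrand, 0.0, a)                               # ∫_0^a
    tail = gl(lambda t: integrand(a / t) * a / t**2, 0.0, 1.0)  # ∫_a^∞ via x = a/t
    return (head + tail) / gamma(k + 1.0)

# Non-degenerate limit check: F_k(ξ) ≈ exp(ξ) for strongly negative ξ.
print(fermi_dirac(0.5, -5.0), np.exp(-5.0))
```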
1,458.8
2019-09-07T00:00:00.000
[ "Chemistry", "Physics" ]
Eosinophils and IgE Receptors: A Continuing Controversy

From the Departments of Immunology and Medicine, Mayo Clinic and Mayo Foundation, Rochester, MN. Address reprint requests to Hirohito Kita, MD, Department of Immunology, Mayo Clinic, 200 First St SW, Rochester, MN 55905. © 1997 by The American Society of Hematology. 0006-4971/97/8910-0042$3.00/0

DURING THE PAST 2 decades, considerable new information has been obtained about the functions of the eosinophil and its roles in human disease. Presently, the eosinophil is recognized as a proinflammatory granulocyte implicated in protection against parasitic infections and believed to play a major role in allergic diseases, such as bronchial asthma, allergic rhinitis, and atopic dermatitis.1 The eosinophil is an important source of cytotoxic cationic proteins, such as major basic protein (MBP), eosinophil peroxidase, and eosinophil cationic protein. These proteins are potentially two-edged swords; on the one hand, they protect the host against overwhelming helminth infections, but on the other hand, they damage the host's tissues.2 Eosinophils also induce inflammation by releasing lipid mediators, oxygen metabolites, and cytokines.2 Numerous studies have shown the association of eosinophils and various human parasitic and allergic diseases. For example, at present, the most common worldwide cause of eosinophilia is probably infection with helminths, and high eosinophil counts correlate with lack of reinfection after treatment of Schistosoma haematobium infections.3 Analyses of patients infected with Onchocerca volvulus have shown striking deposition of the eosinophil granule MBP around degenerating microfilaria.4 In allergic asthma, eosinophilic and lymphocytic infiltration in the epithelium and lamina propria of the airways are consistently found even in mild and stable asthma.5 Indeed, correlations have been observed between the numbers of infiltrating eosinophils and asthma disease severity.5 Pulmonary segmental allergen challenge in allergic individuals causes eosinophil recruitment into the airways; this is associated with the release of eosinophil granule proteins and the increase in vascular permeability.6,7 Despite the strong associations among eosinophils, their cytotoxic granule proteins, and human diseases, the mechanism(s) responsible for eosinophil activation in vivo is largely unknown.

Helminth infections and allergic diseases are characteristically associated not only with peripheral blood and tissue eosinophilia, but also with high levels of both total and antigen-specific IgE antibodies. IgE antibodies may be involved in disease in three ways. First, the central feature in anaphylactic and immediate hypersensitivity reactions is IgE-dependent activation of mast cells and basophils leading to the release of histamine and other inflammatory mediators, such as prostaglandins and leukotrienes. Furthermore, upon activation through IgE receptors, human mast cells and basophils produce cytokines, such as interleukin-4 (IL-4) and IL-5, which are potentially important in the recruitment of eosinophils, thus causing chronic allergic inflammation.9 Second, IgE bound to receptors on antigen-presenting cells, such as CD23 on B cells and to high-affinity IgE receptors (FcεRI) on Langerhans' cells and monocytes, can enhance antigen internalization and presentation to T cells, resulting in continuous activation of the immune system.10,11 Finally, IgE may mediate killing of the invading helminth and host cell damage by acting as a ligand for antibody-dependent cell-mediated cytotoxicity (ADCC) by macrophages and other immune cells.12 In fact, immunoepidemiological studies showed a significant correlation between the production of antischistosome IgE antibodies and the acquisition of immunity against reinfection to S haematobium.13 In allergic diseases such as bronchial asthma, there is a close correlation between serum IgE levels and the prevalence and severity of the diseases.14 Thus, there is now converging evidence to support roles for IgE in resistance to helminthic infections and in the pathophysiology of allergic diseases in humans. Therefore, it is reasonable to speculate that IgE is involved in the activation of eosinophils in these diseases.

Early studies on the killing of schistosomula in vitro by human eosinophils used cells purified from normal or slightly eosinophilic individuals, together with heat-inactivated sera from individuals with schistosome infection.15 The results of these studies suggested that killing requires IgG and is independent of complement. IgG1 and IgG3 subclasses were effective in mediating ADCC by human eosinophils, whereas IgM, IgG2, and IgG4 were not only inactive, but blocked the effects of the active subclasses.16 A quite separate phenomenon was observed with low density, so-called hypodense eosinophils that can be isolated from individuals with very high eosinophil counts. Receptors for IgE were identified on both rat and human hypodense eosinophils,17 and hypodense human eosinophils were shown to kill schistosomula in the presence of IgE.18 Subsequently, this human eosinophil IgE receptor was shown to be similar,
2Numerous studies have diseases such as bronchial asthma, there is a close correlation shown the association of eosinophils and various human between serum IgE levels and the prevalence and severity parasitic and allergic diseases.For example, at present, the of the diseases. 14Thus, there is now converging evidence to most common worldwide cause of eosinophilia is probably support roles for IgE in resistance to helminthic infections infection with helminths, and high eosinophil counts correand in the pathophysiology of allergic diseases in humans.late with lack of reinfection after treatment of Schistosoma Therefore, it is reasonable to speculate that IgE is involved haematobium infections. 3Analyses of patients infected with in the activation of eosinophils in these diseases. Onchocerca volvulus have shown striking deposition of the Early studies on the killing of schistosomula in vitro by eosinophil granule MBP around degenerating microfilaria. 4uman eosinophils used cells purified from normal or In allergic asthma, eosinophilic and lymphocytic infiltration slightly eosinophilic individuals, together with heat-inactiin the epithelium and lamina propria of the airways are convated sera from individuals with schistosome infection. 15The sistently found even in mild and stable asthma. 5Indeed, results of these studies suggested that killing requires IgG correlations have been observed between the numbers of and is independent of complement.IgG1 and IgG3 subinfiltrating eosinophils and asthma disease severity. 5Pulmoclasses were effective in mediating ADCC by human eosinonary segmental allergen challenge in allergic individuals phils, whereas IgM, IgG2, and IgG4 were not only inactive, causes eosinophil recruitment into the airways; this is associbut blocked the effects of the active subclasses. 16A quite ated with the release of eosinophil granule proteins and the separate phenomenon was observed with low density, soincrease in vascular permeability. 6,7Despite the strong assocalled hypodense eosinophils that can be isolated from indiciations among eosinophils, their cytotoxic granule proteins, viduals with very high eosinophil counts.Receptors for IgE and human diseases, the mechanism(s) responsible for eosinwere identified on both rat and human hypodense eosinoophil activation in vivo is largely unknown.phils, 17 and hypodense human eosinophils were shown to Helminth infections and allergic diseases are characteristikill schistosomula in the presence of IgE. 18Subsequently, cally associated not only with peripheral blood and tissue this human eosinophil IgE receptor was shown to be similar, eosinophilia, but also with high levels of both total and antigen-specific IgE antibodies.IgE antibodies may be involved in disease in three ways.First, the central feature in but not identical, to the low-affinity IgE receptor (Fc e RII) with the cestode, Mesocestoides corti.0][21][22][23] Eosinophils from patients with eosinophilia express another low-affinity IgE-binding to human eosinophils, expressed the type II Fc receptor for IgG (Fc g RII), but they were unable to detect any IgE binding molecule belonging to the S-type lectin family, called Mac-2/e-binding protein. 24The cytotoxic function of eosinophils to mouse eosinophils.More recently, Jones et al 33 thoroughly examined eosinophils obtained by bronchoalveolar lavage was abolished by the antibody against this molecule. 
24More recently, in 1994, Gounni et al 25 described Fc e RI on eosino-(BAL) from the lungs of CBA/J mice infected with Toxocara cani by flow cytometry.They found that murine eosinophils phils from patients with marked eosinophilia.The evidence to support this claim was comprehensive and included the are negative for surface IgM (sIgM), sIgA, sIgE, and Fc e RII, but are positive for sIgG1 and Fc g RII.Furthermore, they following: inhibition of [ 125 I] IgE binding to eosinophils by anti-Fc e RI a-chain monoclonal antibody (clone 15.1); sur-showed that culturing eosinophils for 24 or 48 hours with exogenous IgE and/or IL-4 did not induce IgE binding capac-face expression of Fc e RI a-chain by flow cytometry; immunostaining of tissue eosinophils with 15.1; the demonstration ity or Fc e RII expression.In this issue of Blood, de Andres et al 34 expand these studies considerably by including Fc e RI of Fc e RI a-, b-, and g-chain transcripts; release of eosinophil granule proteins after stimulation of eosinophils with 15.1; and Mac-2 and by examining mRNA transcripts and receptor-mediated cellular functions.de Andres et al 34 examine and inhibition of IgE-dependent eosinophil ADCC against schistosome targets by 15.1.Altogether, these findings sug-murine eosinophils from two sources, namely eosinophils isolated from liver granuloma of CBA mice infected with S gest that human eosinophils express three receptors for IgE, namely Fc e RI, Fc e RII, and Mac-2, and that IgE induces eo-mansoni and bone marrow cells isolated from BALB/c mice and cultured with a combination of eosinophil growth fac-sinophil mediator release and ADCC through these receptors.Thus, on the basis of these reports, IgE-mediated acti-tors.The results from these two different cell sources are virtually identical.Murine eosinophils lack IgE receptor ex-vation of eosinophils was implicated as an important mechanism for host defense and in the pathophysiology of human pression; neither surface expression of Fc e RII or Mac-2 nor binding of murine IgE to the cells could be detected.Reverse disease. However, the seemingly strong association between the transcription polymerase chain reaction (RT-PCR) analyses did not detect mRNA transcripts for the a-chain of Fc e RI or eosinophil and disease becomes confusing and controversial in mice.For example, in helminth-infected animals, antibod-Fc e RII, but did detect Mac-2 mRNA.In vitro culture of granuloma eosinophils did not induce IgE-binding or expres-ies to IL-5 suppressed blood eosinophilia and eosinophil infiltration into the tissues.However, ablation of eosinophilia sion of IgE receptors.In contrast to the lack of IgE receptors, functioning IgG receptors, including Fc g RIIb and Fc g RIII, by anti-IL-5 was not associated with a diminution of resistance in mice infected with S mansoni 26 or with Trichinella were detected on granuloma eosinophils, consistent with previous observations by others.spiralis. 27Similarly, anti-IL-4 depletion of IgE responses failed to interfere with protective immunity to S mansoni. 
26tudies of receptor expression have a number of potential pitfalls.For example, transcription of mRNA or even the These findings suggest that neither eosinophils nor IgE are critical for immunity to these parasites in the mouse.In presence of synthesized receptor protein within the cell does not necessarily indicate the expression of the receptor on contrast, mice infected with Trichuris muris showed the exact opposite: their resistance to infection was associated with cell surface. 35Another question concerns the potential discrepancies between receptor expression and actual function-the production of Th2 cytokines, such as IL-4 and IL-5, tissue eosinophilia, and intestinal IgA production. 28In mu-ing of the receptor.An antibody raised against an IgE receptor expressed on one cell type may not recognize the IgE rine models of asthma using BALB/c mice sensitized and challenged with ovalbumin (OVA), one study showed that receptor on another cell type, the best example being lack of binding of the antibody against human B-cell Fc e RII neither IL-5 nor eosinophils are required for airway hyperresponsiveness. 29 In contrast, another study in which the (CD23) to human eosinophils. 23Finally, precautions are needed to minimize or eliminate contamination by other cell C57BL/6 mice rendered IL-5 deficient by homologous gene recombination were sensitized and exposed to OVA, the types, particularly in highly sensitive RT-PCR analyses.The careful and well-designed study by de Andres et al 34 pub-animals failed to develop eosinophil infiltration into the lungs, airway hyperresponsiveness, and lung damage; their lished in this issue of Blood examined transcription of mRNA, surface receptor expression, IgE-binding capacity, littermate controls showed all these responses. 30Reconstitution of IL-5 production with recombinant vaccinia viruses and receptor-mediated cellular function; thus, they seem to address all of these potential problems.Their observations, completely restored antigen-induced eosinophilia and airway dysfunction in these IL-5-deficient mice, suggesting a cen-together with previous reports by others, provide convincing evidence that murine eosinophils, seemingly unlike human tral role for IL-5 and eosinophilia in the pathogenesis of allergic lung disease.The inconsistencies among these mu-eosinophils, lack IgE receptors.Therefore, murine eosinophils may use the Fc g R, complement receptors, and/or possi-rine disease models, as well as the inconsistencies between the human and mice findings, allows recognition of potential bly Fc a R in their antigen-dependent cellular functions; these may explain the differences between mouse and human ob-difference(s) between human and mouse eosinophilic leukocytes. servations and the discrepancies among findings made in mice.Previously, Lopez et al 31,32 and human eosinophils deserve some comments and cau-individuals. 35In addition, transcription of mRNA for Fc e RI a-chain was enhanced by IL-4 in these human eosinophils, 35 tions.First, expression of IgE receptors on human eosinophils and their IgE-dependent functions are not phenomena similar to results with human mast cells. 43The expression of Fc e RII is tightly regulated in a tissue-specific manner, and commonly seen in eosinophils from all sources.Almost all the studies that showed the presence of IgE receptors on IL-4 again is the most potent inducer of Fc e RII expression for various types of cells. 
44Therefore, those blood or tissue human eosinophils have used eosinophils from patients with marked eosinophilia, including the hypereosinophilic syn-conditions with abundant IL-4 may favor the expression of IgE receptors on eosinophils.Because they used eosinophils drome and diseases associated with skin disorders and lymphomas. 18,19,24,25There are no data to support expression isolated from an almost ideal source, namely liver granuloma of mice infected with S mansoni, the findings by de Andres of IgE receptors on eosinophilia from healthy donors and/ or subjects with mild to moderate eosinophilia due to more et al 34 are particularly informative.In addition to tissue and disease specificity, mice may also display another level of common conditions, such as allergy and helminth infection.Three studies that sought IgE receptors in these conditions complexity.Because strains of mice differ greatly in their capacities to produce IL-4 and high levels of serum IgE, 45 did not detect Fc e RII on peripheral blood eosinophils 36,37 and detected only minimal levels of Fc e RI expression on expression of IgE receptors on eosinophils may differ, depending on the strain.By using mice selected for heightened eosinophils infiltrating into the bronchial tissues of patients with asthma 38 and on blood eosinophils from patients with production of IgE, Eum et al 46 concluded that the recruitment of eosinophils to the airways and high IgE titers are both allergic rhinitis. 37Furthermore, IgE-dependent functions of eosinophils were not observed in blood eosinophils from required for lung pathology of allergic BP2 mice, and they speculated that IgE activation of eosinophils is important in normal individuals, whereas these eosinophils did respond vigorously to IgG1 and IgG3 through Fc g RII. 16,39,40Another this mouse model.It would be interesting if eosinophils from these animals were subjected to the rigorous analyses used level of complexity is added by the discrepancy between receptor expression and IgE-mediated cellular functions of by de Andres et al. 34 Furthermore, van der Vorst et al 47 reported the in situ localization of IgE cytophilic antibodies eosinophils.Normal eosinophils, which fail to mediate IgEdependent ADCC, do mediate such killing after activation on murine eosinophils in the gut tissues of Swiss albino mice infected with Hymenlepis diminuta, although this study with platelet-activating factor (PAF) even in the absence of the expression of IgE receptors. 41eeds to be interpreted with caution due to the ubiquitous expression of IgE binding proteins in murine tissues. 48Thus, Second, in association with the activation status issue described above, expression of IgE receptors on eosinophils as summarized in Table 1, IgE receptor expression may be disease, tissue, species, and/or strain specific, and the obser-may be tightly regulated by various environmental factors.For example, in humans the expression of Fc e RII was limited vations obtained in a certain condition should not be generalized.9 or Mac-2. 24Hence, earlier studies showed evidence 4, -5, and -6 and tumor necrosis factor-a in normal and asthmatic for the expression of the low-affinity IgE receptor without airways: Evidence for the human mast cell as a source of these any evidence for the high-affinity IgE receptor, Fc e RI.Howcytokines.Am J Respir Cell Mol Biol 10:471, 1994 ever, later in 1994, the same investigators reported the pres- 10. 
Mudde GC, Bheekha R, Bruijnzeel-Koomen CAFM: IgEence of Fc e RI on human eosinophils, and all of the IgEmediated antigen presentation.Allergy 50:193, 1995 dependent functions of human eosinophils were explained 11.Maurer D, Fiebiger E, Ebner C, Reininger B, Fischer GF, Wichlas S, Jouvin MH, Schmittegenolf M, Kraft D, Kinet JP, Stingl by this receptor. 25Again, IgE-dependent ADCC for schisto-G: Peripheral blood dendritic cells express Fc e RI as a complex comsomula was essentially completely inhibited by antibody to posed of Fc e RIa-and Fc e RIg-chains and can use this receptor for Fc e RI, raising an obvious question as to how the IgE binding IgE-mediated allergen presentation.J Immunol 157:607, 1996 and ADCC can be totally inhibited by antibodies against 12. Capron A, Dessaint JP, Haque A, Capron M: Antibody-dethree distinct IgE receptors.Thus, more confirmatory work pendent cell-mediated cytotoxicity against parasites.Prog Allergy is needed to finally resolve the presence or absence of IgE 31:234, 1982 receptors on human eosinophils, as well as on mouse eosino-13.Hagan P, Blumenthal UJ, Dunn D, Simpson AJG, Wilkins phils.HA: Human IgE, IgG4 and resistance to reinfection with Schisto-In conclusion, with the recent development of techniques soma haematobium.Nature 349:243, 1991 to manipulate genes and with the increased availability of a 14.Sears MR, Burrows B, Flannery EM, Herbison GP, Hewitt CJ, Holdaway MD: Relation between airway responsiveness and wide variety of immunologic reagents for mice, the numbers serum IgE in children with asthma and in apparently normal children. of murine models of human immunity and disease have strik-N Engl J Med 325:1067, 1991 ingly increased.The studies by de Andres et al 34 in this issue 15.Butterworth AE, Sturrock RF, Houba V, Mahmoud AA, Sher of Blood warn us of potential cellular differences between A, Rees PH: Eosinophils as mediators of antibody-dependent dammouse and human immunologic responses and encourage us age to schistosomula.Nature 256:727, 1975 to reexamine the suitability of mouse models for human 16.Khalife J, Dunne DW, Richardson BA, Mazza G, Thorne diseases.At the same time, their report raises unanswered KJI, Capron A, Butterworth AE: Functional role of human IgG questions regarding the expression of IgE receptors on husubclasses in eosinophil-mediated killing of schistosomula of Schisman eosinophils.Further studies on IgE receptors on mouse tosoma mansoni.J Immunol 142:4422, 1989 and human eosinophils may solve the existing controversies 17.Capron M, Capron A, Dessaint JP, Torpier G, Johansson SGO, Prin L: Fc receptors for IgE on human and rat eosinophils.J and help to interpret and to understand the pathophysiologic Immunol 126:2087, 1981 mechanisms of human eosinophilic disorders, the roles of 18. Capron M, Spiegelberg HL, Prin L, Bennich H, Butterworth eosinophils in human immunity, and their murine models.AE, Pierce RJ, Ouaissi MA, Capron A: Role of IgE receptors in effector function of human eosinophils.J Immunol 132:462, 1984 Table 1 . Reports on the Expression of IgE Receptors by Eosinophils have described the distribution of Fc receptors on eosinophils isolated from mice infected The issues regarding expression of IgE receptors on mouse AID Blood 0059 / 5H35$$1162 04-25-97 17:21:13 blda WBS: Blood * IgE-dependent function after stimulation with PAF. 
Expression of the mRNA transcript for Fc e RI achain was detected in peripheral blood eosinophils from pa-ciled.In 1988, the eosinophil IgE receptor was originally found to have a low affinity (K D of 10 07 mol/L) and to tients with allergic rhinitis, but not in those from normal .Galli SJ, Gordon JR, Wershil BK: Cytokine production by possess a molecular weight corresponding to that of mast cells and basophils.Curr OpinImmunol 3:865, 1991Fc e RII.20,42IgE-binding and IgE-dependent ADCC to schis-9.Bradding P, Roberts JA, Britten KM, Montefort S, Djukanovic tosomula were completely inhibited by antibody to either R, Mueller R, Heusser CH, Howarth PH, Holgate ST: Interleukin-Fc e RII 8
4,965.8
1997-01-01T00:00:00.000
[ "Biology", "Medicine" ]
Prediction of dynamic behaviour of Pt/SiNx suspended beams for their use in harsh environment This paper reports a mechanical study of double clamped micro beams of an Accelerometer based on Thermal Convection (ATC). One of the prime applications of the ATC is shock measurement, where the magnitude of acceleration can exceed 20000g [1-2]. The ATC is formed of 3 suspended resistors. Under shock, the ATC is subjected to a broad frequency spectrum of high magnitude. The eigenfrequencies have to be found because the resonant effect could destroy the suspended resistors. Analytic results describe the heater resistor as a tensile string [4]. Finite element method (FEM) and measurement of resonant frequencies with an interferometric microscope validate the tensile string model [5]. Also, by experimental measurement, it is shown that an increase in temperature reduces the tensile stress of the beam and hence the values of the resonance frequencies. Introduction Thermal micro machined accelerometers have been intensively studied because of their high shock reliability. This reliability is due to the lack of a seismic mass [1][2]. The main application of this kind of accelerometer is high magnitude shock detection in the automotive industry, such as air bag crash detection, or defence applications [3]. The goal of this paper is to demonstrate that the behaviour of the suspended resistors is the same as that of a tensile string [5]. This comparison is done by FEM and by experiments with an interferometric microscope. The innovation of this paper consists in the measurement of resonant frequencies while heating the thermal resistors. The final objective is to study the influence of temperature on the resonance frequencies. Physical principle and microstructure design The ATC is based on free convection in a sealed chamber containing N2 [1][2][3]. As shown in figure 1, the heater resistor is placed between two detectors. When the heating resistor is powered, it creates a symmetrical temperature profile. Consequently, the detectors are at the same temperature (continuous line). However, in the presence of acceleration, the thermal gradient tends to shift in the same direction as the acceleration (dotted line). By knowing the coefficient of thermal resistance (CTR) and the resistance at 0 °C (R0) of the platinum resistor, the difference of electric voltage collected between the two detection resistors is converted into a difference of temperature. The shift in temperature is proportional to the acceleration [2]. The heater resistance temperature can exceed 150 °C, so it is important to study the influence of temperature on the resonant frequencies. Analytical results and FEM modelling The beam is described by a clamped-clamped beam model, with one degree of freedom, given by the Euler-Bernoulli equation [5][6]. However, in the case of high tensile stress (L >> h, where L is the length and h is the thickness of the heater), the resonance frequencies are integer multiples of the first mode. Equation (1) below describes the frequency of the n-th mode, with n = 1, 2, 3, as a function of the tensile stress σ, the mass density ρ and the length of the beam L: This model is suitable for beams composed of only one material. However, FEM needs to be performed in order to determine the resonance frequencies and the mode shapes for a SiNx/Pt beam. The FEM is carried out with the ANSYS software. Analysis for two different lengths and widths of the beam is performed: the first is 2000 µm by 50 µm and the second is 500 µm by 10 µm.
These dimensions correspond to different device designs [2]. The thickness of the SiNx beams is around 500 nm; for the SiNx/Pt beams, it is 500 nm for the SiNx layer and 300 nm for the Pt layer. The results of the FEM are given in Table 1. The results are classified by composition and dimensions of the beams. In agreement with equation (1), the resonance frequencies are integer multiples of the first mode and the mode shapes are the same as those of a tensile string. This result is valid for every size and composition of the beams studied. The next part presents the experimental values of the eigenfrequencies and mode shapes found with an interferometric microscope. Mode shapes and resonant frequencies Experimental measurement of frequencies and mode shapes is done with the Fogale Photomap 3D optical profiling system. This system is combined with an ultrasonic transducer, which allows scanning a frequency band in order to find the eigenfrequencies and mode shapes. The mode shapes at the resonant frequencies are found by vibrometry in stroboscopic light [7] and are shown in Figure 2. The measurements have been done on a SiNx/Pt resistor of 500 µm by 10 µm. In order to get a clear view of the flexural displacement, the Y-axis to X-axis ratio was chosen to be around 1:5000. The behaviour of the SiNx and SiNx/Pt beams was the same as that of a tensile string. The values in Table 3 show very good agreement between the experimental results (Table 3) and the FEM results (Table 1): FEM gives a good prediction of the resonant frequency and of the shape of the deformation. Influence of the temperature on resonant frequencies The operating heater resistor temperature was around 200 °C. An electric current through this resistor was set with a DC generator. Knowledge of the CTR and R0 allows us to deduce the temperature as a function of the electric current. The results for the SiNx/Pt 500 by 10 µm beam at five different temperatures are gathered in Table 4. The eigenfrequency of the first mode and the equivalent tensile stress were calculated with equation (1). The increase in temperature shifts the resonant frequency to lower frequencies. Unfortunately, it is difficult to measure the resonant frequency at a temperature higher than 150 °C, because of mechanical buckling of the beam [5]. In figure 3, the measured f(T) is compared with equation (2); the experiment and the theoretical model are well matched. By extrapolation, for a SiNx/Pt beam sample of 500 by 10 µm at 200 °C, the resonance frequency is 18,500 kHz. Conclusion The FEM and the experiments allow us to conclude that the Euler-Bernoulli model in the case of high tensile stress is suitable to describe the suspended resistors of the ATC. By experimental measurements, the out-of-plane resonant frequency of the 500 by 10 µm beams is around 154 kHz at ambient temperature. Nevertheless, an increase in temperature reduces the resonant frequency by reducing the tensile stress. The next step of our work is to find a link between the amplitude of acceleration and the deflection of the suspended resistor. In order to estimate the critical acceleration before damaging the device, a combined experimental-analytical approach is possible by measuring the quality factor and the resonant frequency [10]. Finally, an FEM approach could be carried out to determine the maximum acceleration as a function of frequency.
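Although equation (1) itself is not reproduced above, the text describes the heater as a tensile string whose mode frequencies are integer multiples of the first mode and depend on σ, ρ and L. As a hedged numerical illustration only, the sketch below assumes the standard string relation f_n = (n / 2L)·sqrt(σ/ρ); the effective density and stress values are illustrative guesses chosen to land near the reported ~154 kHz, not the authors' measured parameters.

```python
# Hedged sketch: mode frequencies of a doubly clamped beam treated as a
# tensile string, assuming f_n = (n / 2L) * sqrt(sigma / rho).
# The relation's exact form and the numbers below are assumptions, not the
# paper's equation (1) or its measured values.
from math import sqrt

def string_mode_frequency(n, length_m, stress_pa, density_kg_m3):
    """Frequency (Hz) of the n-th mode of a string of length L under stress sigma."""
    return (n / (2.0 * length_m)) * sqrt(stress_pa / density_kg_m3)

L = 500e-6       # beam length: 500 µm (one of the two geometries studied)
rho = 1.0e4      # rough effective density for a Pt-heavy SiNx/Pt stack (assumption)
sigma = 2.4e8    # tensile stress in Pa (illustrative value)

for n in (1, 2, 3):
    print(n, round(string_mode_frequency(n, L, sigma, rho) * 1e-3, 1), "kHz")
# With these illustrative inputs the first mode comes out near 155 kHz,
# i.e. the same order as the ~154 kHz reported at ambient temperature.
```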
1,524.6
2017-11-01T00:00:00.000
[ "Engineering", "Physics" ]
YOLOv8-MU: An Improved YOLOv8 Underwater Detector Based on a Large Kernel Block and a Multi-Branch Reparameterization Module Underwater visual detection technology is crucial for marine exploration and monitoring. Given the growing demand for accurate underwater target recognition, this study introduces an innovative architecture, YOLOv8-MU, which significantly enhances the detection accuracy. This model incorporates the large kernel block (LarK block) from UniRepLKNet to optimize the backbone network, achieving a broader receptive field without increasing the model's depth. Additionally, the integration of C2fSTR, which combines the Swin transformer with the C2f module, and the SPPFCSPC_EMA module, which blends Cross-Stage Partial Fast Spatial Pyramid Pooling (SPPFCSPC) with attention mechanisms, notably improves the detection accuracy and robustness for various biological targets. A fusion block from DAMO-YOLO further enhances the multi-scale feature extraction capabilities in the model's neck. Moreover, the adoption of the MPDIoU loss function, designed around the vertex distance, effectively addresses the challenges of localization accuracy and boundary clarity in underwater organism detection. The experimental results on the URPC2019 dataset indicate that YOLOv8-MU achieves an mAP@0.5 of 78.4%, showing an improvement of 4.0% over the original YOLOv8 model. Additionally, on the URPC2020 dataset, it achieves 80.9%, and, on the Aquarium dataset, it reaches 75.5%, surpassing other models, including YOLOv5 and YOLOv8n, thus confirming the wide applicability and generalization capabilities of our proposed improved model architecture. Furthermore, an evaluation on the improved URPC2019 dataset demonstrates leading performance (SOTA), with an mAP@0.5 of 88.1%, further verifying its superiority on this dataset. These results highlight the model's broad applicability and generalization capabilities across various underwater datasets. Introduction In the sustainable management of marine resources, the accurate detection and localization of underwater resources are crucial. Remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) play an irreplaceable role in locating marine life, mapping the seabed, and other underwater tasks. The scope of these applications extends from monitoring marine species [1] to underwater archaeology [2] and aquaculture [3]. However, designing a fully functional AUV requires the integration of advanced technologies such as target detection [4,5], tracking [5][6][7][8], grasping [9], human-machine interaction [10], autonomous control [8], and multimodal sensor integration [11]. ROVs and AUVs play a central role in the development of underwater target detection technology. They assist in mapping the seabed and locating potential obstacles by identifying the terrain and biological categories of the seabed, and they are also used to inspect underwater facilities. Although existing target detection technologies perform well in extracting low-level features such as shapes, outlines, and textures [12][13][14], the recognition of these features often lacks precision and is slow in complex underwater environments.
Underwater target detection faces numerous challenges.Firstly, the absorption and scattering of light by water cause unstable lighting conditions, significantly reducing the contrast between targets and their backgrounds [15].Secondly, factors such as water currents, suspended particles, and foam can cause image blurring and distortion, thereby reducing the recognition accuracy [16].Moreover, the diversity of the targets in underwater environments, with significant differences in appearance, size, and shape among marine organisms, adds complexity to detection tasks [17].Finally, various noise and disturbances such as water waves, bubbles, and floating debris further interfere with the detection and identification processes [18].To address these challenges, researchers have proposed several improvement strategies, including expanding the receptive field, enhancing the feature expression capabilities, implementing multi-scale information fusion, and facilitating comprehensive information interaction [19][20][21][22][23][24][25][26][27][28][29]. Firstly, increasing the receptive field helps to better capture the contextual information and environmental characteristics of targets, which is crucial in making accurate predictions in complex underwater environments.Deep convolutional neural networks (CNNs) have demonstrated their exceptional processing capabilities across various domains in recent years.Numerous studies have shown that by adjusting the depth of the CNN and the size of the convolutional kernels, the network's receptive field can be effectively expanded [19].These strategies are particularly beneficial in tasks requiring dense predictions, such as semantic image segmentation [20,21], stereo vision [22], and optical flow estimation [23].Ensuring that each output pixel is influenced by an adequate receptive field enhances the accuracy and robustness of the algorithm. Additionally, the adoption of nonlinear activation functions, the integration of attention mechanisms, and the application of data augmentation techniques [24] can significantly enhance the network's ability to process the input data, thereby improving the accuracy in recognition, classification, or localization tasks.Techniques such as feature pyramid networks [25], multi-scale fusion modules [19], and Atrous Spatial Pyramid Pooling (ASPP) [26] enable the generation of feature maps at various resolutions, effectively integrating feature information from different scales to enhance the system's recognition capabilities.Advanced architectures such as standard Transformers and their variants [27,28] and DenseNet [29] further boost the model's performance and adaptability by managing complex data structures. 
In summary, in the field of underwater target detection, existing research has been conducted on the aforementioned improvement strategies. However, there are still significant shortcomings in considering the complex underwater environment comprehensively and achieving higher precision. To address this, this paper introduces the improved YOLOv8-MU model, which integrates advanced computer vision technologies such as large kernel blocks (LarK blocks) [30], C2fSTR, and Spatial Pyramid Pooling Fully Connected Spatial Pyramid Convolution (SPPFCSPC) [31] with attention mechanisms to enhance the model's receptive field, multi-scale fusion capabilities, and feature expression abilities. Furthermore, by incorporating a fusion block [32], we have further enhanced the model's performance in multi-scale feature fusion, optimizing the feature aggregation process and thus improving the flow of gradient information and network performance at various levels. Additionally, the model has been optimized to accommodate resource-limited edge devices, with an improved loss function (MPDIoU) [33] that enhances the precision of localization for targets with unclear boundaries. The experimental results on the URPC2019 dataset demonstrate that the YOLOv8-MU model achieved an mAP@0.5 of 78.4%, which represents a 4.0% improvement over the original YOLOv8 model. Additionally, the model reached an mAP@0.5 of 80.9% on the URPC2020 dataset and 75.5% on the Aquarium dataset, surpassing other models such as YOLOv5 and YOLOv8n, thereby confirming the broad applicability and generalization capabilities of our proposed improved model architecture. Additionally, evaluations on the refined URPC2019 dataset demonstrated leading performance, achieving an mAP@0.5 of 88.1%, which further confirms its superior performance on this dataset. These results highlight the model's extensive applicability and generalization across various underwater datasets and provide valuable insights and contributions to future research in underwater target detection. The structure of this document is as follows. Section 2 provides a review of the literature relevant to this field. The YOLOv8-MU model proposed in this study, along with the experimental analysis, is detailed in Sections 3 and 4, respectively. Finally, Section 5 summarizes the contributions of this paper and outlines areas for future research. Object Detection Object detection technology is mainly divided into two types: one-stage and two-stage object detection. Two-stage object detection first generates candidate region boxes and then classifies and regresses these boxes to determine the location, size, and category of the target. Common two-stage object detection algorithms include the R-CNN family, such as R-CNN [34] and Faster R-CNN [35]. Current research is focused on improving the models in the R-CNN family to make them more efficient and accurate. For example, Zeng et al. [36] proposed an underwater object detection algorithm based on Faster R-CNN and adversarial networks, enhancing the robustness and rapid detection capability of the detector. Song et al. [37] proposed an underwater object detection method based on an enhanced R-CNN detection framework to address challenges such as uneven illumination, low contrast, occlusion, and the camouflage of aquatic organisms in underwater environments. Hsia et al.
[38] combined Mask R-CNN, data augmentation (DA), and discrete wavelet transform (DWT) to propose an intelligent retail product detection algorithm, improving the detection of overlooked objects. One-stage object detection directly processes the entire image and simultaneously predicts the location, size, and category of the target through regression methods to improve the detection efficiency.Common one-stage object detection algorithms include the YOLO family, SSD, and RetinaNet.For example, the YOLO series of algorithms achieve rapid detection by dividing the image into grids and predicting the bounding boxes and classification confidence for each grid.The YOLO series has undergone multiple iterations and improvements.YOLOv1 [39] addressed the shortcomings of two-stage detection networks.YOLOv2 [40] added batch normalization layers after each convolutional layer and eliminated the use of dropout.YOLOv3 [41] introduced the residual module Darknet-53 and the feature pyramid network (FPN), resulting in significant improvements.The backbone network of YOLOv4 [42] is based on CSPDarknet53, using cross-stage partial connections (CSPs) to facilitate the information flow between different layers.YOLOv5 [43] introduced multi-scale prediction, automated hyperparameter optimization, and a more efficient model structure, leading to improvements in both speed and accuracy.YOLOv6 [44], YOLOv7 [45], and YOLOv8 [46] added many technologies on the basis of previous versions.There are also many improvements to the YOLO series to achieve more efficient detection performance.For example, Li et al. [47] proposed an improved YOLOv8 algorithm that integrates innovative modules from the real-time detection transformer (RT-DETR) to address the occlusion problem in underwater fish target detection.The algorithm, trained on an occlusion dataset using an exclusion loss function specifically designed for occlusion scenarios, significantly improves the detection accuracy.Additionally, SSD [48] uses a pyramid structure to classify and regress locations on multiple feature maps, making it more suitable for handling objects of different sizes.RetinaNet [49] introduces focal loss and a feature pyramid network to address the disparity between foreground and background classes, achieving higher accuracy. In summary, two-stage object detection performs better in terms of accuracy but is slower in speed, whereas one-stage object detection has an advantage in speed but may lack accuracy.In practical applications, the choice between these methods depends on the specific requirements regarding the detection speed and accuracy. Transformer In the field of natural language processing (NLP), the Transformer model has become a mainstream technology that is widely recognized for its capabilities in understanding and generating text.Over time, researchers have begun to explore the application of Transformer architectures in the field of computer vision (CV), aiming to enhance the efficiency and accuracy of image-related tasks.In early attempts, Transformers were employed as enhanced decoders to optimize the model performance.For instance, Yang et al. [50] developed the TransPose model, which directly processed features extracted by convolutional neural networks (CNNs), to model the global relationships in images and effectively capture the dependencies between key points.On the other hand, Mao et al. 
[51] designed the Poseur method, utilizing lightweight Transformer decoders to achieve higher detection accuracy and computational efficiency. Furthermore, Transformers have also been successfully applied to a broader range of image processing tasks.For example, the Vision Transformer (ViT) is a groundbreaking example that directly applies Transformer architectures to tasks such as image classification.Xu et al. [52] demonstrated the transferability of knowledge between different models and the flexibility of models through the ViTPose project.Recent research advances indicate that combining attention mechanisms from Transformers with object detection networks can lead to significant performance improvements.For instance, Wang et al. [53] integrated the SimAM attention module into the YOLO-BS network to improve the accuracy in detecting large coal blocks, helping to reduce congestion in underground conveyor systems.Similarly, BoTNet [54] introduced the BoT module with a self-attention mechanism, which optimizes and accelerates the training process of small networks by simulating the behavior of large networks, thereby effectively extracting and integrating features at different scales. Based on these advanced observations and innovations, this study aimed to integrate attention mechanisms and Transformer modules into the YOLOv8 network architecture to further enhance the model's performance in various object detection tasks.This introduction aimed to leverage the powerful global information modeling capabilities of Transformers to improve the efficiency and accuracy of image recognition and processing tasks. SPP In the research of machine vision and object recognition, the Spatial Pyramid Pooling (SPP) module and its improved versions, such as Spatial Pyramid Pooling Fast (SPPF), Simplified SPPF (SimSPPF), Atrous Spatial Pyramid Pooling (ASPP), Spatial Pyramid Pooling, Cross-Stage Partial Channel (SPPCSPC), and SPPFCSPC, have been widely utilized to improve the accuracy of object detection.These modules effectively address the problems caused by differences in input image sizes, avoiding image distortion.The initial concept of the SPP module was proposed by He et al. [55], aiming to overcome the challenge of inconsistent sizes.Subsequently, to further improve the processing speed, the SPPF [43] and SimSPPF [44] modules were developed successively.Additionally, Chen et al. introduced the ASPP module [56] in the DeepLabv2 semantic segmentation model, which enhances the recognition capability for multi-scale objects by capturing information at different scales through parallel dilated convolutions.The SPPCSPC module [45] achieves a performance improvement by optimizing the parameters and reducing the computational complexity, without expanding the receptive field. In recent years, attention mechanisms have been introduced into object detection networks to enhance the models' ability to detect small objects in complex scenes.For example, Wu et al. [57] proposed an effective multi-scale attention (EMA) mechanism based on multi-scale feature fusion, which automatically adjusts the weight distribution in the feature maps to focus more on key areas of the image.This is particularly effective in accurately identifying small objects in complex environments.Given this, this study aimed to integrate these improved SPP modules and attention mechanisms into the YOLOv8 network architecture, aiming to further optimize the performance of the network in various object detection tasks. 
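To make the pooling structure concrete, below is a minimal sketch of the SPPF idea referenced above, assuming PyTorch: three chained max-pooling operations reproduce the effect of parallel 5 × 5, 9 × 9, and 13 × 13 pools at lower cost. This follows the commonly published YOLOv5-style SPPF layout and is not the exact module used in YOLOv8-MU.

```python
import torch
import torch.nn as nn

class SPPF(nn.Module):
    """SPPF-style block: one shared max-pool applied three times in sequence
    emulates parallel 5x5/9x9/13x13 pools at lower cost; the pooled maps are
    concatenated with the input branch and fused by a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 5):
        super().__init__()
        hidden = in_ch // 2
        self.cv1 = nn.Sequential(nn.Conv2d(in_ch, hidden, 1, bias=False),
                                 nn.BatchNorm2d(hidden), nn.SiLU())
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.cv2 = nn.Sequential(nn.Conv2d(hidden * 4, out_ch, 1, bias=False),
                                 nn.BatchNorm2d(out_ch), nn.SiLU())

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)    # ~5x5 receptive field
        y2 = self.pool(y1)   # ~9x9
        y3 = self.pool(y2)   # ~13x13
        return self.cv2(torch.cat([x, y1, y2, y3], dim=1))

print(SPPF(256, 256)(torch.randn(1, 256, 20, 20)).shape)  # torch.Size([1, 256, 20, 20])
```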
IoU Loss In the research field of object detection, localization, and tracking, the precise regression of bounding boxes is crucial.In recent years, localization loss functions, represented by the intersection over union (IoU) loss [58] and its derivative versions [59][60][61][62], have played a central role in improving the accuracy of bounding box regression.These types of loss functions optimize the model by evaluating the overlap between the predicted bounding boxes and actual bounding boxes, effectively mitigating the impact of variations in the aspect ratio on the detection performance.However, the IoU loss has certain limitations.For instance, when the predicted box and the ground truth box do not overlap, the IoU value remains zero, failing to reflect the actual distance between them.Additionally, in cases where the IoU is the same, it cannot distinguish between positional differences. To address these challenges, several studies have proposed various improvements to the IoU loss, including the Generalized IoU (GIoU), Distance IoU (DIoU), CIoU, Efficient IoU (EIoU), and Wise IoU (WIoU).GIoU loss overcomes the issue of traditional IoU calculation resulting in zero by introducing the concept of the minimum enclosing rectangle, although it may lead to smaller gradients and slower convergence in some scenarios [59].DIoU loss enhances the model's sensitivity to the position by considering the distance between the center points of the predicted and ground truth boxes, but it does not involve shape matching [60].CIoU loss builds upon this by incorporating the difference in aspect ratio, although it may cause training instability in certain circumstances despite improving the shape-matching accuracy.EIoU loss balances the relationship between simple and hard samples by introducing separate consistency and focal losses, thereby enhancing the stability and efficiency of the model [61].WIoU loss further enhances the model's performance and robustness through a dynamic non-monotonic static focus mechanism (FM) [62]. In general, these variants of the IoU loss effectively improve the accuracy of bounding box regression and the robustness of the models by introducing mechanisms in loss calculation that consider the distance between the predicted and ground truth boxes, differences in the position center points, the consistency of the aspect ratios, and the handling of samples with varying difficulty levels.In practice, selecting the appropriate variant of the loss function tailored to specific object detection tasks is a key strategy in optimizing the detection performance. Methodology While the YOLOv8 model has achieved significant progress in the field of object detection, it still exhibits certain limitations.Firstly, it adopts a larger network architecture, resulting in slower processing speeds compared to other models within the YOLO family.Secondly, for objects with limited feature information, the localization accuracy may not be sufficiently high.Furthermore, the absence of the consideration of inter-object relationships during the prediction process may lead to issues such as overlapping bounding boxes.Additionally, the utilization of fixed-scale anchor boxes may struggle to accommodate objects with varying aspect ratios, potentially resulting in object deformation.To address these issues, we designed YOLOv8-MU, as shown in Figure 1. 
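Before turning to the individual modules, the IoU-family losses reviewed in the previous subsection can be made concrete with a short sketch, assuming PyTorch and boxes in (x1, y1, x2, y2) format; this is illustrative and not the implementation of any cited variant.

```python
import torch

def iou_giou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """pred, target: (..., 4) boxes as (x1, y1, x2, y2). Returns (IoU, GIoU)."""
    # Intersection
    x1 = torch.max(pred[..., 0], target[..., 0])
    y1 = torch.max(pred[..., 1], target[..., 1])
    x2 = torch.min(pred[..., 2], target[..., 2])
    y2 = torch.min(pred[..., 3], target[..., 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    # Union
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    union = area_p + area_t - inter + eps
    iou = inter / union
    # Smallest enclosing box (GIoU term): penalizes non-overlapping boxes
    cx1 = torch.min(pred[..., 0], target[..., 0])
    cy1 = torch.min(pred[..., 1], target[..., 1])
    cx2 = torch.max(pred[..., 2], target[..., 2])
    cy2 = torch.max(pred[..., 3], target[..., 3])
    enclose = (cx2 - cx1) * (cy2 - cy1) + eps
    giou = iou - (enclose - union) / enclose
    return iou, giou

p = torch.tensor([[10., 10., 50., 50.]])
t = torch.tensor([[30., 30., 70., 70.]])
iou, giou = iou_giou(p, t)
loss_iou, loss_giou = 1 - iou, 1 - giou  # the corresponding regression losses
```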
LarK Block The convolutional neural network (ConvNet) with large kernels has shown remarkable performance in capturing sparse patterns and generating high-quality features, but there is still considerable room for exploration in its architectural design.While the Transformer model has demonstrated powerful versatility across multiple domains, it still faces some challenges and limitations in terms of computational efficiency, memory requirements, interpretability, and optimization.To address these limitations, we introduce the LarK block from UniRepLKNet into our model [30], as depicted in Figure 2.This block leverages the advantages of large kernel convolution to achieve a wider receptive field.By employing larger convolutional kernels, the LarK block can capture more contextual information without necessitating additional network layers.This represents a key advantage of large kernel convolution, enabling the network to capture richer features. As illustrated in Figure 2, the block utilizing the dilated reparam block is referred to as the large kernel block (LarK block), while the block employing DWconv 3 × 3 is termed the small kernel block (SmaK block).The dilated reparam block is proposed based on equivalent transformations, with its core idea being the utilization of a non-sparse large kernel block (kernel size K = 9), combined with multiple sparse small kernel blocks (kernel sizes k are 5, 3, 3, 3), to enhance the feature extraction effectiveness.The sparsity rate r determines the distribution of non-zero elements within the convolution kernel, where a higher sparsity rate implies more zero elements within the kernel, aiding in reducing the computational complexity while maintaining the performance.For instance, to accommodate larger input sizes, when the large kernel K is increased to 13, the corresponding adjustment of the small kernel sizes and sparsity rates is made to be k = (5, 7, 3, 3, 3) and r = (1,2,3,4,5).This adjustment allows us to simulate an equivalent large convolutional layer with a kernel size of (5,13,7,9,11), effectively enhancing the feature extraction by integrating large kernel layers in this manner.We observe that, apart from capturing small-scale patterns, the ability to enhance a large kernel capturing sparse patterns may yield higher-quality features, aligning perfectly with the mechanism of dilated convolution [30].From the perspective of sliding windows, dilated convolution layers with a dilation rate of d scan the input channels to capture spatial patterns, where the distance between each pixel of interest and its neighboring pixels is d − 1.Therefore, we adopt dilated convolution layers parallel to the large kernels and sum their outputs.The large kernel block is primarily integrated into the middle and upper layers of the model to enhance the depth and expressive capability of the model when using large kernel convolutional layers.This enhancement is achieved by stacking multiple SE blocks to deepen the model.The squeeze-and-excitation (SE) block compresses all channels of the feature map into a single vector through a global compression operation, which contains global contextual information about the features.Then, this vector is activated through a fully connected layer and a sigmoid activation function to restore the number of channels to match the input features.This activation vector is multiplied element-wise with the original feature map, thereby enhancing or suppressing certain channels in the feature map.The SE block can enhance the model's 
feature expression capability, especially in the early stages, when there is a lack of sufficient contextual information. C2fSTR The proposed C2fSTR modifies the C2f module of the original YOLOv8 architecture using the Swin Transformer block [28]. Compared to the original C2f module, the modified C2fSTR module facilitates better interactions between strong feature maps and fully utilizes the target background information, thereby enhancing the accuracy and robustness of object detection under complex background conditions. Figure 3a illustrates the structure of the C2fSTR. The C2fSTR consists of two modules. One is the Conv module, which consists of a Conv2d with a kernel size of 1 × 1 and a stride of 1, followed by batch normalization and the SiLU activation function. The role of the convolution module is to reduce the length and width of the feature map while expanding the dimensionality. The other module is the Swin Transformer block, which comprises layer normalization (LN), shifted window multi-head self-attention (SW-MSA), and a feed-forward MLP; several such Swin Transformer modules are stacked. The function of the Swin Transformer block is to expand the scope of the information interaction without increasing the number of parameters by restricting the attention computations to be within each window. Its structure is illustrated in Figure 3b. W-MSA and SW-MSA are multi-head self-attention modules, employing regular and shifted window configurations, respectively [28]. Traditional Transformers typically compute the attention globally, leading to high computational complexity; the complexity of the multi-head attention mechanism is proportional to the square of the size of the feature map. To reduce this complexity and expand the range of the information interaction, the Swin Transformer divides the feature map into windows. Each window undergoes window-based multi-head self-attention computation followed by shifted window-based multi-head self-attention computation, enabling mutual communication between windows [65]. The computation of two consecutive Swin Transformer blocks is shown in Equation (1): ẑ^l = W-MSA(LN(z^(l−1))) + z^(l−1), z^l = MLP(LN(ẑ^l)) + ẑ^l, ẑ^(l+1) = SW-MSA(LN(z^l)) + z^l, z^(l+1) = MLP(LN(ẑ^(l+1))) + ẑ^(l+1), where ẑ^l and z^l represent the output features of the (S)W-MSA and MLP modules of block l, respectively, and W-MSA and SW-MSA represent window-based multi-head self-attention using regular and shifted window partitioning configurations, respectively. When employing the window-based multi-head self-attention (W-MSA) module, self-attention calculations are conducted solely within individual windows, thereby preventing information exchange between separate windows. To address this limitation, the model incorporates the shifted window multi-head self-attention (SW-MSA) module, which is an offset adaptation of the W-MSA. However, the shifted window partitioning approach introduces another issue: it results in the proliferation of windows, and some of these windows are smaller than standard windows. For instance, a partition of 2 × 2 windows may expand to 3 × 3 windows, more than doubling the number of windows and consequently increasing the computation. To resolve this issue, a cyclic shift along the top-left direction is proposed. This method involves cyclically shifting the input features, enabling the windows within a batch to consist of discontinuous sub-windows, thereby maintaining a constant number of windows. Thus, although the shifted window strategy intrinsically increases the
number of windows, the cyclic shift approach effectively mitigates this issue by ensuring the stability of the window count. In this way, by confining the attention computations to each window, the Swin Transformer enhances the model's focus on local features, thereby augmenting its ability to model local details.However, object recognition and localization in images depend on the feature information of the global background.The information interaction in the Swin Transformer is limited to individual windows and shifted windows, capturing only local details of the target, while global background information is difficult to obtain [66].To achieve a more extensive information interaction and simultaneously obtain both global background and local detail information, we apply the Swin Transformer block to C2f, replacing the Darknet bottleneck and forming the C2fSTR feature backbone system.This combined strategy enables a comprehensive information interaction, effectively capturing rich spatial details and significantly improving the model's accuracy in object detection in complex backgrounds. SPPFCSPC_EMA As shown in Figure 4, YOLOv8-MU replaces the SPPF module in YOLOv8 with the SPPFCSPC module and introduces multiple convolutions and concatenation techniques to extract and fuse features at different scales, expanding the receptive field of the model and thereby improving the model's accuracy.Additionally, we have introduced the EMA module, whose parallel processing and self-attention strategy significantly improve the model's performance and optimize the feature representation [67].By combining the SPPFCSPC and EMA modules to form the SPPFCSPC_EMA module, not only are the model's accuracy, efficiency, and robustness enhanced, but the model's performance is further improved while maintaining its efficiency.The SPPFCSPC module integrates two submodules: SPP and fully connected spatial pyramid convolution (FCSPC) [69].SPP, as a pooling layer, can handle input feature maps of different scales, effectively detecting both small and large targets.FCSPC is an improved convolutional layer aimed at optimizing the representation of the feature maps to enhance the detection performance.By performing multi-scale spatial pyramid pooling on the input feature map, the SPP module captures information about targets and scenes at different scales [55].Subsequently, the FCSPC module convolves the feature maps of different scales output by the SPP module and divides the input feature map into blocks.These blocks are pooled and concatenated, followed by convolution operations, to enhance the model's receptive field and retain key feature information, thereby improving the model's accuracy [69].The SPPFCSPC module is an optimization of SPPCSPC based on the SPPF concept, reducing the computational requirements for the pooling layer's output by connecting three independent pooling operations and improving the speed and detection accuracy of dense targets without changing the receptive field [68].The results produced using this pooling method are comparable to those obtained using larger pooling kernels, thus optimizing the training and inference speeds of the model.The calculation formula for the pooling part is shown in Equation (2): where R represents the input feature layer, S 1 represents the pooling layer result of the smallest pooling kernel, S 2 represents the pooling layer result of the medium-sized pooling kernel, S 3 represents the pooling layer result of the largest pooling kernel, S 4 represents the 
final output result, and ⊛ represents tensor concatenation. The EMA [67] mechanism employs three parallel pathways, including two 1 × 1 branches and one 3 × 3 branch, to enhance the processing capability for spatial information.In the 1 × 1 branches, global spatial information is extracted through two-dimensional global average pooling, and the softmax function is utilized to ensure computational efficiency.The output of the 3 × 3 branch is directly adjusted to align with the corresponding dimensional structure before the joint activation mechanism, which combines channel features, as shown in Equation ( 3).An initial spatial attention map is generated through matrix dot product operations, integrating spatial information of different scales within the same processing stage.Furthermore, the 2D global average pooling embeds global spatial information into the 3 × 3 branch, producing a second spatial attention map that preserves precise information on the spatial location.Finally, the output feature maps within each group are further processed through the sigmoid function [70].As illustrated in Figure 5, the design of EMA aims to assist the model in capturing the interactions between features at different scales, thereby enhancing the performance of the model. Here, z c represents the output related to the c-th channel.The primary purpose of this output is to encode global information, thereby capturing and modeling long-range dependencies. Therefore, the overall formula for the SPPFCSPC_EMA module is as shown in Equation ( 4): Fusion Block DAMO-YOLO has improved the efficiency of node stacking operations and optimized feature fusion by introducing a specially designed fusion block.Inspired by this, we replaced the C2f module in the neck network with the fusion block to enhance the fusion capability for multi-scale features.As illustrated in Figure 6, the architecture of the fusion block commences with channel number adjustment on two parallel branches through 1 × 1 CBS, followed by the incorporation of the concept of feature aggregation from the efficient layer aggregation network (ELAN) [71] into the subsequent branch, composed of multiple RepBlocks and 3 × 3 CBS.This design leverages strategies such as CSPNet [72], the reparameterization mechanism, and multi-layer aggregation to effectively promote rich gradient flow information at various levels.Furthermore, the introduction of the reparameterized convolutional module significantly enhances the performance. 
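To make the reparameterization idea concrete before the RepBlock is described in detail, the following is a minimal RepVGG-style sketch, assuming PyTorch: a training-time block with parallel 3 × 3 and 1 × 1 convolutions (each followed by BN) is folded into a single, numerically equivalent 3 × 3 convolution for inference. The class name and branch set are illustrative and not the authors' exact RepBlock.

```python
import torch
import torch.nn as nn

class RepConv(nn.Module):
    """Training-time block: parallel 3x3 and 1x1 convolutions (each with BN)
    whose outputs are summed; at inference the branches are folded into a
    single 3x3 convolution with identical outputs."""
    def __init__(self, ch: int):
        super().__init__()
        self.conv3 = nn.Conv2d(ch, ch, 3, padding=1, bias=False)
        self.bn3 = nn.BatchNorm2d(ch)
        self.conv1 = nn.Conv2d(ch, ch, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(ch)

    def forward(self, x):
        return self.bn3(self.conv3(x)) + self.bn1(self.conv1(x))

    def fuse(self) -> nn.Conv2d:
        def fold(conv, bn, pad):
            w = conv.weight
            if pad:  # zero-pad the 1x1 kernel to 3x3 so the kernels can be added
                w = nn.functional.pad(w, [1, 1, 1, 1])
            std = (bn.running_var + bn.eps).sqrt()
            return (w * (bn.weight / std).reshape(-1, 1, 1, 1),
                    bn.bias - bn.running_mean * bn.weight / std)

        w3, b3 = fold(self.conv3, self.bn3, pad=False)
        w1, b1 = fold(self.conv1, self.bn1, pad=True)
        fused = nn.Conv2d(w3.shape[1], w3.shape[0], 3, padding=1)
        fused.weight.data, fused.bias.data = w3 + w1, b3 + b1
        return fused

block = RepConv(32).eval()
x = torch.randn(1, 32, 16, 16)
print(torch.allclose(block(x), block.fuse()(x), atol=1e-5))  # True
```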
Four gradient-path fusion blocks are utilized in the model, each splitting the input feature map into two streams.One stream is directly connected to the output, while the other undergoes channel reduction, cross-level edge processing, and convolutional reparameterization before further dividing into three gradient paths from this stream.Ultimately, all paths are merged into the output feature map.This design segments the gradient flow paths, introducing variability in the gradient information as it moves through the network, effectively facilitating a richer flow of gradient information.As for Figure 6, the RepBlock is designed to employ different network structures during the training and inference phases through the use of reparameterization techniques, thereby achieving efficient model training and rapid inference speeds [73].Following the recommendations of RepVGG, we optimized the parameter structure, clearly segregating the multi-branch used during the training phase from the single branch used during the inference phase.During the training process, the RepBlock adopts a complex structure containing multiple parallel branches, which extract features through 3 × 3 convolutions, 1 × 1 convolutions, and batch normalization (BN).This design is intended to enhance the representational capacity of the model.During inference, these multi-branch structures are converted into a single, more streamlined 3 × 3 convolutional layer through structural reparameterization, eliminating the branch structure to increase the inference speed and reduce the memory consumption of the model. The conversion from a multi-branch to a single-branch architecture is primarily motivated by three considerations.Firstly, from the perspective of speed, the models reparameterized for inference demonstrate a significant acceleration in inference speed.This not only expedites the model inference process but also enhances the practicality of model deployment.Secondly, regarding memory consumption, the multi-branch model necessitates the allocation of memory individually for each branch to store its computational results, leading to substantial memory usage.Adopting a single-path model significantly reduces the demand for memory.Lastly, in terms of model flexibility, the multi-branch model is constrained by the requirement that the input and output channels for each branch remain consistent, posing challenges to model modification and optimization.In contrast, the single-path model is not subject to such limitations, thereby increasing the flexibility of model adjustments. MPDIOU Although they consider multiple factors, existing boundary box regression loss functions, such as CIoU, may still exhibit inaccurate localization and blurred boundary issues when dealing with complex scenarios where the target boundary information is unclear, affecting the regression accuracy.Given the intricate underwater environment and limited lighting conditions, the boundary information of target objects is often inadequate, posing challenges that prevent traditional loss functions from adapting effectively.Inspired by the geometric properties of a horizontal rectangle, Ma et al. 
[33] designed a novel bounding box regression loss function, L_MPDIoU, based on the minimum point distance. We incorporated this function, referred to as MPDIoU, into our model to evaluate the similarity between the predicted and ground truth bounding boxes. Compared to existing loss functions, MPDIoU not only better accommodates blurred boundary scenarios and enhances the object detection accuracy but also accelerates the model's convergence and reduces the redundant computational overhead, thereby improving the localization and boundary precision for underwater organism detection. The calculation process of MPDIoU is as follows. Assume that (x_1^gt, y_1^gt) and (x_2^gt, y_2^gt) represent the coordinates of the top-left and bottom-right points of the ground truth box, respectively, and that (x_1^pd, y_1^pd) and (x_2^pd, y_2^pd) represent the coordinates of the top-left and bottom-right points of the predicted box, respectively; w and h denote the width and height of the input image, and d_1 and d_2 denote the distances between the corresponding top-left points and bottom-right points of the two boxes. The final L_MPDIoU can then be calculated using Equations (5) and (6) based on d_1 and d_2. The MPDIoU loss function optimizes the similarity measurement between two bounding boxes, enabling it to adapt to scenarios involving both overlapping and non-overlapping bounding box regression. Moreover, all components of the existing bounding box regression loss functions can be represented using four-point coordinates, as shown in Equations (7)-(9), where the center coordinates of the ground truth and predicted boxes are denoted by (x_c^gt, y_c^gt) and (x_c^pd, y_c^pd), their widths and heights are expressed from the same corner points, and |C| represents the area of the smallest bounding rectangle encompassing both boxes. Through Equations (7)-(9), we can calculate the non-overlapping area, the distance between the center points, and the deviation in width and height. This method not only ensures comprehensiveness but also simplifies the computational process. Therefore, in the localization loss part of the YOLOv8-MU model, we choose to use the MPDIoU function to calculate the loss, to enhance the model's localization accuracy and efficiency. In this study, the dataset used to validate the effectiveness of our optimized model was URPC2019 (http://www.urpc.org.cn/index.html, accessed on 15 June 2023), a publicly available dataset for underwater object detection. It includes five different categories of aquatic life, Echinus, starfish, Holothurian, scallops, and waterweeds, with a total of 3765 training samples and 942 validation samples. Examples of the dataset's images are shown in the first row of Figure 7. Simultaneously, we conducted experiments on the URPC2019 dataset (in the absence of waterweeds) and the refined URPC2019 dataset to further demonstrate the superior detection accuracy of our proposed improved model. Additionally, we performed detection experiments on the URPC2020 (http://www.urpc.org.cn/index.html, accessed on 15 June 2023) dataset. Similar to URPC2019, URPC2020 is an underwater dataset, but it differs in that it contains only four categories, Echinus, starfish, Holothurian, and scallops, with a total of 4200 training samples and 800 validation samples. The second row of Figure 7 displays examples of images from this dataset. Finally, we conducted experiments on the Aquarium dataset, which differs from the URPC series in terms of the types of seabed substrates. The Aquarium dataset, provided by Roboflow (https://universe.roboflow.com/brad-dwyer/aquarium-combined/3, accessed on 20 April 2024), encompasses various categories, such as fish, jellyfish, penguins, puffins, sharks, starfish, and minks. Additionally, the dataset includes augmented versions, incorporating rotations and flips, totaling 4670 images, comprising 4480 training images, 63 testing images, and 127 validation images. Examples of images from this dataset are illustrated in the third row of Figure 7. Through experiments conducted on these three datasets, we aimed to validate the feasibility and extensive applicability of our model.
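Returning briefly to the localization loss described above, a minimal sketch of an MPDIoU-style loss is given below, assuming PyTorch and boxes in (x1, y1, x2, y2) format; following the published formulation, the IoU is penalized by the squared distances between corresponding top-left and bottom-right corners, normalized by the squared image diagonal. This is illustrative and not necessarily identical to the implementation used in YOLOv8-MU.

```python
import torch

def mpdiou_loss(pred, target, img_w: float, img_h: float, eps: float = 1e-7):
    """MPDIoU-style loss for boxes in (x1, y1, x2, y2) format.
    IoU is reduced by the squared top-left and bottom-right corner distances,
    each normalized by the squared image diagonal; the loss is 1 - MPDIoU."""
    # Standard IoU
    x1 = torch.max(pred[..., 0], target[..., 0])
    y1 = torch.max(pred[..., 1], target[..., 1])
    x2 = torch.min(pred[..., 2], target[..., 2])
    y2 = torch.min(pred[..., 3], target[..., 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Squared distances between corresponding corners (d1: top-left, d2: bottom-right)
    d1 = (pred[..., 0] - target[..., 0]) ** 2 + (pred[..., 1] - target[..., 1]) ** 2
    d2 = (pred[..., 2] - target[..., 2]) ** 2 + (pred[..., 3] - target[..., 3]) ** 2
    diag2 = img_w ** 2 + img_h ** 2
    mpdiou = iou - d1 / diag2 - d2 / diag2
    return 1.0 - mpdiou

p = torch.tensor([[100., 100., 200., 220.]])
t = torch.tensor([[110., 105., 210., 230.]])
print(mpdiou_loss(p, t, img_w=640, img_h=640))
```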
Environment Configuration and Parameter Settings The experiments in this study were conducted on Ubuntu 20.04, utilizing the PyTorch 1.11.0 deep learning framework. The experimental setup included the parallel computing platform and programming model developed by NVIDIA (Santa Clara, CA, USA), the Python 3.8 programming language, and server processors released by Intel. The performance of different GPUs and the size of the RAM significantly impact the experimental results; therefore, we maintained a consistent experimental environment throughout the entire experimental process. The specific configuration is shown in Table 1. To enhance the persuasiveness of the experiments, we conducted experiments based on the original YOLOv8 model, during which a series of parameter adjustments were made and multiple experimental tests were conducted. Ultimately, we determined that the main hyperparameters for all experiments would adopt the same settings, as shown in Table 1. A larger batch size can speed up training, so we set it to 16. In terms of loss calculation, we continued YOLOv8's approach of combining the classification loss, bounding box regression loss, and distribution focal loss, with the weights of the three losses being 7.5, 0.5, and 1.5, respectively, to optimize the model. In addition, momentum and weight decay were important hyperparameters for the optimization of the model, with the detailed settings available in Table 2. Evaluating the quality of YOLO models requires a comprehensive consideration of speed, accuracy, applicability, robustness, and cost, among other factors, with varying focus points in different use scenarios. For the URPC series datasets, this study primarily focuses on the accuracy of the improved YOLOv8 model. We assess the model's accuracy by calculating and comparing the average precision (AP) for each class and the mean average precision (mAP). Additionally, we examine the impact of floating point operations (FLOPs) and the number of parameters (Para) on the model accuracy to verify the superiority of our improved YOLOv8 model. The calculation of the AP value is related to the calculation and integration of the precision-recall curve. First, it is necessary to calculate the precision and recall values using Equations (10) and (11), Precision = TP / (TP + FP) and Recall = TP / (TP + FN), where TP, FP, and FN represent true positives, false positives, and false negatives. A true positive is a positive sample predicted as positive by the model; a false positive is a negative sample predicted as positive by the model; a false negative is a positive sample predicted as negative by the model. Subsequently, the average precision for each category is calculated according to Equation (12) as the area under the precision-recall curve. To reflect the performance of the model on the entire dataset, the mAP is calculated according to Equation (13) as the mean of the per-category AP values. In the calculation of the mAP, we take the value at an IoU threshold of 0.5 and write it as mAP@0.5, which means that a detection is considered successful only when the intersection between the true box and the predicted box exceeds 50% of their union. Comparative Experiments 4.2.1.
Experiments on URPC2019 We first surveyed the literature and conducted experiments on the performance of various models on the URPC2019 dataset, including the Boosting R-CNN [37] model, which introduces the idea of reinforcement learning to improve Faster R-CNN [35], the YOLOv3 model, the YOLOv5 series models, the improved YOLOv5 [73], the YOLOv7 model, the YOLOv8 series models, and our optimized YOLOv8 model. The experimental data are shown in Table 3. We also plotted a bar graph, as shown in Figure 8, to provide a more intuitive comparison of the performance of each model. After our observation and analysis, we find that the optimized model performs better than the other models, and the improvement in the AP values of the individual categories is particularly evident. In the detection of the waterweeds category, the performance is notably good, with an AP value increase of 25.2% compared to the traditional Boosting R-CNN model. Among the YOLO series models, the AP value is only slightly lower than that of the YOLOv3 model, and there is an increase of nearly 20% compared to the baseline model, YOLOv8n. This indicates that the improved YOLOv8 model has overcome the difficulties faced by other models in detecting the waterweeds category, demonstrating a unique advantage in enhancing the AP value for this individual category. Furthermore, upon analyzing the mAP@0.5 values, we find that the YOLOv8-MU model also demonstrates superior performance in terms of the overall dataset detection accuracy. The mAP@0.5 of YOLOv8-MU is the highest in Table 3, namely 78.4%, which is 8.2% higher than that of the traditional Boosting R-CNN model, 2.7% higher than that of the improved YOLOv5 [73], and 4% higher than that of the baseline model, YOLOv8n. It is closest to the YOLOv3 model but still shows an improvement. The main reason is that although the AP value of YOLOv8-MU in the waterweeds category is lower than that of YOLOv3, YOLOv8-MU has higher detection accuracy in the remaining four categories compared to YOLOv3. This also verifies the effectiveness of YOLOv8-MU in improving the overall detection accuracy on the URPC2019 dataset. In deep learning models, a relatively low number of parameters and FLOPs can reduce the model's computational complexity and size, enhancing its performance and applicability in practical applications. For this reason, we plotted the bar graphs shown in Figures 9 and 10 based on Table 3 to compare the number of parameters and FLOPs among the various models. It can be seen that although the number of parameters and FLOPs of our optimized model, YOLOv8-MU, has increased compared to the baseline model, YOLOv8n, they are still lower than those of the other models. This shows that our model remains lightweight. Additionally, experiments were conducted on both the URPC2019 dataset (without waterweeds) and the refined URPC2019 dataset to compare our proposed YOLOv8-MU model with several recent underwater object detection methods, as shown in Table 4. We also drew bar charts based on the mAP values of the different models, as shown in Figure 11, to provide a more intuitive comparison of our model with the other models. The refined URPC2019 dataset comprises a total of 4757 images, while the URPC2019 dataset (without waterweeds) contains 4707 images. Although the total number of images differs, our model demonstrates superior detection accuracy even with fewer images, further highlighting the superiority of our model in terms of detection precision.
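For reference, the precision, recall, AP, and mAP quantities used throughout these comparisons (Equations (10)-(13)) can be computed as in the sketch below, assuming NumPy; the all-point interpolation shown is one common convention and not necessarily the exact evaluation script used here.

```python
import numpy as np

def precision_recall(tp: int, fp: int, fn: int):
    """Equations (10) and (11): precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    return tp / (tp + fp + 1e-12), tp / (tp + fn + 1e-12)

def average_precision(recall: np.ndarray, precision: np.ndarray) -> float:
    """Equation (12): area under the precision-recall curve
    (all-point interpolation), for one class at a fixed IoU threshold of 0.5."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([1.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]   # make precision monotonically decreasing
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Equation (13): mAP@0.5 is the mean of the per-class AP values.
recall = np.array([0.2, 0.4, 0.6, 0.8])
precision = np.array([1.0, 0.9, 0.75, 0.6])
ap_per_class = [average_precision(recall, precision)]  # one AP per class in practice
print("mAP@0.5 =", np.mean(ap_per_class))
```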
To more intuitively demonstrate the superiority of our optimized YOLOv8 model's detection performance, we extracted and compared the detection results of different models on the URPC2019 dataset, as shown in Figure 12. Our model outperformed the other models in both precision and recall. As can be seen clearly in rows 1 to 4, our optimized model did not detect any targets beyond the ground truth, indicating that our model has high precision. In the results for the images in rows 5 to 8, both YOLOv5s and YOLOv8n exhibit the same issue, failing to detect all targets in the ground truth and missing some targets, while our model exhibits high recall. This sufficiently demonstrates the effectiveness of our optimized YOLOv8 model in detection on the URPC2019 dataset. Experiments on URPC2020 On the URPC2020 dataset, which is part of the same series as URPC2019, we also conducted a series of experiments. The results are presented in Table 5; based on these results, we plotted bar graphs with different horizontal axes, as shown in Figures 13 and 14. We observed that the URPC2020 dataset, unlike URPC2019, has only four biological categories and is missing the waterweeds category, which leads to high AP values for each individual category; as a result, the improvement in detection performance relative to other models is small, but it is sufficient to reflect the advantages of our model. We compared the experimental results of the improved YOLOv5 [73], SA-SPPN [79], and YOLOv8n with our model, Our_n, and found that the mAP@0.5 score of our improved model was higher than those of the other models. Additionally, we compared YOLOv8s with ours to demonstrate the high efficiency of our improved model in terms of detection accuracy. Experiments on Aquarium To validate the extensive applicability of our enhanced model across various seabed substrates and diverse datasets, we conducted supplementary experiments on the Aquarium dataset. The results are detailed in Table 6. Additionally, we visualized the performance metrics using bar charts with distinct horizontal axes, as depicted in Figure 15. Notably, on the Aquarium dataset, we observed a lack of robust performance in the AP values for the puffin category, resulting in a relatively minor enhancement in the detection performance for this specific category. Nonetheless, in comparison to the alternative models, our proposed YOLOv8-MU not only achieves higher AP values in detecting the puffin category but also outperforms other models in the overall mAP@0.5 scores. We performed comparative analyses of our model against YOLOv5s, YOLOv5m, YOLOv5n, YOLOv8s, and YOLOv8n and consistently found higher mAP@0.5 scores with our improved model. Furthermore, we evaluated the efficiency of our enhanced model in terms of detection accuracy compared to YOLOv8s, confirming the superiority of our approach. Ablation Study Comparison of the Effectiveness of the LarK Block at Different Positions Table 7 compares the impact of using the LarK block to replace the C2f at different positions in the backbone on the accuracy, the number of parameters, and the computational complexity of the model on the URPC2019 dataset, for the various categories of marine life. Among them, the model with the middle two C2fs in the backbone replaced by the LarK block performed the best, achieving an mAP@0.5 of 75.5%, with the smallest number of parameters, similar to the model in which only the last C2f was modified, and with FLOPs at a medium level. In contrast, the model in which only the last C2f was modified had the smallest number of parameters and the lowest computational complexity but experienced a decrease in accuracy compared to the original YOLOv8n. The accuracy of the other models with different modification positions was also lower than that of the original YOLOv8n. Therefore, in the subsequent research, we adopted the model that replaced the middle two C2fs with the LarK block, as it ensured higher accuracy while improving the speed of the object detection model, with a smaller modification to the network. Comparison of the Effectiveness of the C2fSTR at Different Positions Table 8 compares the impact of using the C2fSTR to replace the C2f at different positions in the backbone on the accuracy, the number of parameters, and the computational complexity of the model on the URPC2019 dataset, for the various categories of marine life. Among them, the model with the last C2f in the backbone replaced by C2fSTR performed the best, achieving an mAP@0.5 of 75.2%, with the smallest computational load and the fastest speed. In contrast, the model that replaced all C2fs had a decrease in accuracy, with an mAP@0.5 of only 73.8%. Other models with different modification positions, although all having an mAP@0.5 higher than YOLOv8n, did not perform as well as the model in which only the last C2f was modified regarding the computational load and speed. Therefore, in our subsequent research, we adopted the model that replaced the last C2f with the C2fSTR, as it ensured the highest accuracy while also achieving the best computational efficiency and speed. Comparison of the Effectiveness of the Fusion Block at Different Positions Table 9 shows the impact of using the fusion block to replace the C2f at different positions in the neck on the accuracy, the number of parameters, and the computational complexity of the model on the URPC2019 dataset, for the various categories of marine life. Among them, the model with all C2fs in the neck replaced by the fusion block performed the best, achieving an mAP@0.5 of 74.7%; although its number of parameters and computational complexity were not the lowest, its accuracy was the highest. In comparison, the models in which we modified the last three C2fs and the middle two C2fs had smaller parameter counts and lower computational complexity but mAP@0.5 values of only 74.1% and 73.5%, respectively, which were 0.3% and 0.9% lower than those of YOLOv8n. Modifications at other positions also failed to improve the model's accuracy compared to the modification of all C2fs. Therefore, in our subsequent research, we adopted the model that replaced all C2fs with the fusion block, as it achieved higher target detection accuracy. In this section, we took the original YOLOv8 as the base and gradually added or removed the components included in our model to explore the contribution of each component to the overall performance of the system, thereby demonstrating their effectiveness in improving YOLOv8. We conducted multiple ablation experiments, and, by analyzing Table 10, we can see that different combinations of modules had varying effects on the performance of the YOLOv8 model. The best results are obtained when the LarK block, SPPFCSPC_EMA, C2fSTR, fusion block, and MPDIoU are used simultaneously, achieving the highest mAP@0.5 of 78.4%, which is an increase of 4.0% compared to the original YOLOv8. In summary, based on the experimental results, the simultaneous use of the LarK block, SPPFCSPC_EMA, C2fSTR, fusion block, and MPDIoU produces the best performance improvement. These results provide guidance for the design and configuration of optimized object detection systems.
Analysis of Results To further prove the effectiveness of each module of YOLOv8-MU, we summarized and compared the experimental results of each ablation experiment on the URPC2019 dataset. We took the all-class mAP@0.5 precision-recall (PR) curve from each experiment and plotted them in the same coordinate system, as shown in Figure 16. From the figure, we can see that after each of our modules is added to the original YOLOv8 model, the PR curve as a whole moves closer to the upper-right corner, which indicates that, with the addition of each module, the performance of the improved YOLOv8 changes in a positive direction; this further proves the effectiveness of each improved module. Conclusions and Future Work In this study, we have successfully developed and validated an advanced underwater organism detection framework named YOLOv8-MU, which significantly improves the detection accuracy. By replacing the original backbone network structure with the LarK block proposed in UniRepLKNet, we obtain a larger receptive field without increasing the model's depth. Integrating the Swin Transformer module into the C2f module further enhances the model's capability to learn and generalize to various underwater biological features. Combining the multi-scale attention module EMA with SPPFCSPC significantly improves the detection accuracy and robustness for multi-scale targets. Introducing a fusion block into the neck network enhances the model's feature extraction and integration capabilities across different scales. By utilizing the MPDIoU loss function, which is optimized based on the vertex distance, we effectively address target localization and boundary precision issues, thereby enhancing the detection accuracy. Validation on the URPC2019 and URPC2020 datasets demonstrates that the YOLOv8-MU model achieves mAP@0.5 scores of 78.4% and 80.9%, respectively, representing improvements of 4.0% and 0.9% over the YOLOv8n model. These achievements not only validate the effectiveness of our proposed method but also provide new research directions and practical foundations for the development of target detection technology in complex environments. Additionally, the evaluation on the refined URPC2019 dataset demonstrates leading (SOTA) performance, with an mAP@0.5 of 88.1%, further confirming the superiority of the model on this dataset.
Figure 1. The structure of YOLOv8-MU. It consists of the backbone, neck, and head, including detailed structures of C2f and Detect [46].
Figure 2. The structural design of UniRepLKNet. The LarK block consists of a dilated reparam block, SE block [63], feed-forward network (FFN), and batch normalization (BN) [64] layers. The only difference between the SmaK block and the LarK block is that the former uses a depth-wise 3 × 3 convolutional layer to replace the dilated reparam layer of the latter. Stages are connected by downsampling blocks, which are implemented by stride-2 dense 3 × 3 convolutional layers [30].
Figure 4. The structure of SPPFCSPC_EMA. SPPFCSPC performs a series of convolutions on the feature map, followed by max-pooling and fusion over four receptive fields (one 3 × 3 and three 7 × 7). After further convolution, it is fused with the original feature map and finally combined with EMA to form the SPPFCSPC_EMA module (Conv: convolution; MaxPool2d: max pooling) [68].
Figure 6. Structural diagram of the fusion block, which includes a schematic diagram of the RepBlock. (a) represents the model structure used during training, and (b) represents the model structure used during inference [73].
Figure 8. Performance comparison of various models on the URPC2019 dataset. The red line represents the scores of YOLOv8-MU.
Figure 9. Bar graph comparison of FLOPs for various models on the URPC2019 dataset. The red line represents the FLOPs of YOLOv8-MU.
Figure 10. Bar graph comparison of the number of parameters for various models on the URPC2019 dataset.
Figure 11. mAP values of different models on the URPC2019 dataset with the waterweeds category removed.
Figure 12. Comparison of target detection results between different models.
Figure 13. Performance comparison of various models on the URPC2020 dataset. The red line represents the scores of Our_n.
Figure 14. Performance comparison of various models on the URPC2020 dataset.
Figure 15. Performance comparison of various models on the Aquarium dataset.
Table 2. Settings of some hyperparameters during training.
Table 3. Performance comparison of the YOLOv8-MU model and other models on the URPC2019 dataset.
Table 4. The mAP@0.5 comparison of four object classes on the URPC2019 dataset.
Table 5. Performance comparison of the YOLOv8-MU model and other models on the URPC2020 dataset.
Table 6. Performance comparison of the YOLOv8-MU model and other models on the Aquarium dataset.
Table 7. Parameter comparison when replacing the C2f with the LarK block at different positions in the backbone.
Table 8. Parameter comparison when replacing the C2f with the C2fSTR at different positions in the backbone.
Table 9. Parameter comparison when replacing the C2f with the fusion block at different positions in the neck.
Table 10. Demonstration of the effectiveness of each module in YOLOv8-MU. '√' indicates that the module is used.
11,760
2024-05-01T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
A Survey on Intelligent and Effective Intrusion Detection System Using Machine Learning Algorithms A Network Intrusion Detection System (NIDS) helps the system administrator to identify network security breaches in their organization. Nonetheless, numerous difficulties arise while building an intelligent and effective NIDS that can handle unexpected and unpredictable attacks. In recent years, one of the foremost focuses of NIDS research has been the application of machine learning techniques. The proposed work presents a novel deep learning model to enable NIDS operation within modern networks. The model is a combination of deep and shallow learning, capable of correctly analyzing a wide range of network traffic. Moreover, it additionally proposes a novel deep learning classification model built utilizing feature extraction techniques. The performance is evaluated on a network intrusion detection benchmark, particularly the KDD Cup dataset. INTRODUCTION One of the major challenges in network security is the provision of a robust and effective Network Intrusion Detection System (NIDS). Despite the considerable advances in NIDS technology, the majority of solutions still operate using less successful signature-based techniques rather than anomaly detection strategies. The current issue is that existing techniques lead to ineffective and inaccurate detection of attacks. There are three main limitations: the volume of network data; the in-depth monitoring and granularity required to improve effectiveness and accuracy; and the number of different protocols and the diversity of data traversing the network. The main focus of NIDS research has been the application of machine learning and shallow learning techniques. Initial deep learning research has demonstrated that its superior layer-wise feature learning can surpass, or at least match, the performance of shallow learning techniques. It is able to facilitate a deeper evaluation of network data and faster identification of any anomalies. In this paper, we propose a novel deep learning model to enable NIDS operation inside modern networks. Despite increasing awareness of network security, the existing solutions remain incapable of fully protecting internet applications and computer networks against the threats from ever-advancing cyber-attack techniques such as DoS attacks and computer malware. Developing effective and adaptive security approaches has therefore become more critical than ever before. The traditional security techniques, as the first line of security defense, such as user authentication, firewalls, and data encryption, are insufficient to cover the whole landscape of network security while facing challenges from ever-evolving intrusion techniques and methods [1]. Hence, a second line of security defense is recommended, such as an Intrusion Detection System (IDS). Currently, an IDS alongside anti-virus software has become an important complement to the security infrastructure of most organizations. The combination of these two lines provides a more comprehensive defense against those threats and enhances network security. A significant amount of research has been conducted to develop intelligent intrusion detection techniques, which help achieve better network security. Bagged boosting based on C5 decision trees [2] and Kernel Miner [3] are two of the earliest attempts to build intrusion detection schemes.
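As a concrete point of reference for the machine-learning-based detection schemes discussed in this introduction, the following is a minimal sketch of training a shallow Random Forest classifier on KDD-style tabular features, assuming scikit-learn; the synthetic feature matrix and label grouping are purely illustrative stand-ins for the preprocessed KDD Cup data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Illustrative stand-in for preprocessed KDD Cup features: in practice the 41
# KDD attributes are loaded from file, categorical fields (protocol, service,
# flag) are one-hot encoded, and labels are grouped into normal/DoS/probe/U2R/R2L.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 41))
y = rng.integers(0, 5, size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```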
Methods proposed in [4] and [5] have successfully applied machine learning techniques to classify network traffic patterns that do not match normal network traffic. Both systems were equipped with five distinct classifiers to detect normal traffic and four different types of attacks (i.e., DoS, probing, U2R and R2L). However, current network traffic data, which are often huge in size, present a major challenge to IDSs [9]. These big data slow down the entire detection process and may lead to unsatisfactory classification accuracy due to the computational difficulties in handling such data. Classifying a huge amount of data usually causes many mathematical difficulties which then lead to higher computational complexity. As a well-known intrusion calculation dataset, KDD Cup 99 dataset is a typical example of more-scale datasets. This dataset contains of more than five million of training samples and two million of testing samples respectively. Such a large scale dataset check the building and testing procedure of a classifier, or form the classifier unable to do due to framework failures caused by low memory. Furthermore, large-scale datasets usually contain noisy, redundant, or uninformative features which present critical challenges to knowledge discovery and information modelling. 2. RELATED WORK The paper [1] focuses on deep learning methods which are inspired by the structure depth of human brain learn from lower level characteristic to higher levels concept. It is because of abstraction from multiple levels, the Deep Belief Network (DBN) helps to learn functions which are mapping from input to the output. The process of learning does not dependent on human-crafted features. DBN uses an unsupervised learning algorithm, a Restricted Boltzmann Machine (RBM) for each layer. Advantages are: Deep coding is its ability to adapt to changing contexts concerning data that ensures the technique conducts exhaustive data analysis. Detects abnormalities in the system that includes anomaly detection, traffic identification. Disadvantages are: Demand for faster and efficient data assessment. The main purpose of [2] paper is to review and summarize the work of deep learning on machine health monitoring. The applications of deep learning in machine health monitoring systems are reviewed mainly from the following aspects: Autoencoder (AE) and its variants, Proposes the use of a stacked denoising autoencoder (SdA), which is a deep learning algorithm, to establish an FDC model for simultaneous feature extraction and classification. The SdA model [3] can identify global and invariant features in the sensor signals for fault monitoring and is robust against measurement noise. An SdA is consisting of denoising autoencoders that are stacked layer by layer. This multilayered architecture is capable of learning global features from complex input data, such as multivariate time-series datasets and high-resolution images. Advantages are: SdA model is useful in real applications. The SdA model proposes effectively learn normal and fault-related features from sensor signals without preprocessing. Disadvantages are: Need to investigate a trained SdA to identify the process parameters that most significantly impact the classification results. Proposes a novel deep learning-based recurrent neural networks (RNNs) model [4] for automatic security audit of short messages from prisons, which can classify short messages(secure and non-insecure). 
A novel deep learning-based recurrent neural network (RNN) model [4] is proposed for the automatic security audit of short messages from prisons, classifying short messages as secure or insecure. In this work, the features of short messages are extracted by word2vec, which captures word-order information, and each sentence is mapped to a feature vector. In particular, words with similar meanings are mapped to nearby positions in the vector space, and the vectors are then classified by RNNs. Advantages are: the RNN model achieves an average accuracy of 92.7%, which is higher than an SVM; it takes advantage of ensemble frameworks for integrating different feature extraction and classification algorithms to boost the overall performance. Disadvantages are: it applies only to short messages, not to large-scale messages. A signature-based feature technique using a deep convolutional neural network [5] on a cloud platform is proposed for plate localization, character detection and segmentation. Extracting significant features allows the license plate recognition system (LPRS) to adequately recognize the license plate in challenging situations such as: i) congested traffic with multiple plates in the image; ii) plate orientation towards brightness; iii) extra information on the plate; iv) distortion due to wear and tear; and v) distortion of captured images in bad weather, such as hazy images. Advantages are: the superiority of the proposed algorithm in the accuracy of recognizing license plates compared with other traditional LPRSs. Disadvantages are: some images remain unrecognized or are misdetected. In [6], a deep learning approach for anomaly detection using a Restricted Boltzmann Machine (RBM) and a deep belief network is implemented. This method uses a one-hidden-layer RBM to perform unsupervised feature reduction. The resultant weights from this RBM are passed to another RBM, producing a deep belief network. The pretrained weights are passed into a fine-tuning layer consisting of a Logistic Regression (LR) classifier with multi-class softmax. Advantages are: it achieves 97.9% accuracy and produces a low false negative rate of 2.47%. Disadvantages are: the need to improve the method to maximize the feature reduction process in the deep learning network and to improve the dataset. The paper [7] proposes a deep learning based approach for developing an efficient and flexible NIDS. A sparse autoencoder and softmax regression based NIDS was implemented, using Self-taught Learning (STL), a deep learning based technique, on NSL-KDD, a benchmark dataset for network intrusion. Advantages are: STL achieved a classification accuracy rate of more than 98% for all types of classification. Disadvantages are: the need to implement a real-time NIDS for actual networks using deep learning techniques. The paper [8] chooses multi-core CPUs as well as GPUs to evaluate the performance of a DNN-based IDS for handling huge volumes of network data. The parallel computing capabilities of the neural network enable the Deep Neural Network (DNN) to look through the network traffic effectively with accelerated performance. Advantages are: the DNN-based IDS is reliable and efficient in intrusion detection, identifying the specific attack classes with the required number of training samples; the multi-core CPU implementation was faster than the serial training mechanism. Disadvantages are: the need to improve the detection accuracies of the DNN-based IDS. The paper [9] proposes a mechanism for detecting large-scale network-wide attacks using Replicator Neural Networks (RNNs) to create anomaly detection models. The approach is unsupervised and requires no labeled data. It also accurately detects network-wide anomalies without presuming that the training data are completely free of attacks.
Advantages are: the proposed methodology is able to successfully discover all prominent DDoS attacks and SYN port scans injected; it is resilient against learning in the presence of attacks, something that related work lacks. Disadvantages are: the need to improve the methodology using stacked autoencoder deep learning techniques. Based on the flow-based nature of SDN, a flow-based anomaly detection system using deep learning is proposed. The paper [10] applies a deep learning approach for flow-based anomaly detection in an SDN environment. Advantages are: it finds optimal hyper-parameters for the DNN and confirms the detection rate and false alarm rate; the model achieves an accuracy of 75.75%, which is quite reasonable given that it uses just six basic network features. Disadvantages are: it has not been demonstrated in a real SDN environment. Overview A lot of work has been done on intrusion detection systems, as they are the basic building block for the detection of various network attacks. A variety of machine learning and deep learning algorithms have been implemented to develop efficient and useful IDS systems. 3. EXISTING SYSTEM Current network traffic data, which are often huge in size, present a major challenge to IDSs. These "big data" slow down the entire detection process and may lead to unsatisfactory classification accuracy due to the computational difficulties in handling such data. Machine learning technologies have been widely used in IDSs. However, most of the traditional machine learning technologies involve shallow learning; they cannot effectively solve the enormous intrusion data classification problem that arises in a real network application environment. Additionally, shallow learning is ill-suited to the intelligent analysis and prediction requirements of high-dimensional learning with enormous data. Disadvantages: computer systems and the internet have become a major part of critical infrastructure; the huge volume of network traffic slows down the entire detection process and may lead to unsatisfactory classification accuracy; classifying a huge amount of data usually causes many mathematical difficulties which then lead to higher computational complexity. 4. SYSTEM OVERVIEW This paper proposes a novel deep learning model to enable NIDS operation within modern networks. The proposed model is a combination of deep and shallow learning, capable of correctly analyzing a wide range of network traffic. More specifically, we combine the power of stacking our proposed Non-symmetric Deep Auto-Encoder (NDAE) (deep learning) with the accuracy and speed of Random Forest (RF) (shallow learning). This paper introduces the NDAE, which is an auto-encoder featuring non-symmetrical multiple hidden layers. The NDAE can be used as a hierarchical unsupervised feature extractor that scales well to accommodate high-dimensional inputs. It learns non-trivial features using a training strategy similar to that of a typical auto-encoder. Stacking the NDAEs offers a layer-wise unsupervised representation learning algorithm, which allows the model to learn the complex relationships between different features. It also has feature extraction capabilities, so it is able to refine the model by prioritizing the most descriptive features. A minimal illustrative sketch of this stacking idea is given below.
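The following minimal sketch illustrates the deep-plus-shallow stacking idea: an auto-encoder with non-symmetric (encoder-heavy) hidden layers learns a compressed representation of the traffic features, and a Random Forest is then trained on those learned features. It is an illustrative approximation rather than the exact NDAE architecture; the layer sizes, the Keras implementation and the placeholder data are assumptions introduced for the example.

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.ensemble import RandomForestClassifier

# Hypothetical placeholder data: scaled KDD-style records with 41 features.
rng = np.random.default_rng(0)
X = rng.random((2000, 41)).astype("float32")
y = rng.integers(0, 5, size=2000)  # e.g. normal, DoS, probe, U2R, R2L

# Auto-encoder with non-symmetric hidden layers: two encoding layers,
# one decoding layer reconstructing the input.
inp = layers.Input(shape=(41,))
h = layers.Dense(32, activation="relu")(inp)
h = layers.Dense(16, activation="relu")(h)       # bottleneck features
out = layers.Dense(41, activation="sigmoid")(h)
autoencoder = models.Model(inp, out)
encoder = models.Model(inp, h)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=64, verbose=0)  # unsupervised stage

# Shallow learning stage: Random Forest classifies the learned deep features.
features = encoder.predict(X, verbose=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(features, y)
print("training accuracy:", rf.score(features, y))
```

In a stacked configuration, a second auto-encoder would be trained on the first encoder's output before the Random Forest stage, mirroring the layer-wise representation learning described above.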
Advantages are: • Due to the deep learning technique, it improves the accuracy of the intrusion detection system. • The network or computer is constantly monitored for any invasion or attack. • The system can be modified and changed according to the needs of a specific client and can help counter external as well as internal threats to the system and network. • It effectively prevents any damage to the network. • It provides a user-friendly interface which allows easy management of the security system. • Any alterations to files and directories on the system can be easily detected and reported. CONCLUSION In this paper, we have discussed the problems faced by existing NIDS techniques. In response, we have proposed our novel NDAE method for unsupervised feature learning. We have then built upon this by proposing a novel classification model constructed from stacked NDAEs and the RF classification algorithm. We have also implemented the intrusion prevention system. The results show that our approach offers high levels of accuracy, precision and recall, together with reduced training time. The proposed NIDS improves accuracy by only about 5%, so further improvement in accuracy is needed, as is further work on real-time network traffic and on handling zero-day attacks. Future Scope • In future work, the first avenue of exploration for improvement will be to assess and extend the capability of our model to handle zero-day attacks. • We will expand upon our existing evaluations by utilizing real-world backbone network traffic to demonstrate the merits of the extended model.
3,180.8
2020-01-18T00:00:00.000
[ "Computer Science", "Engineering" ]
Effective actions for dual massive (super) p-forms Abstract: In d dimensions, the model for a massless p-form in curved space is known to be a reducible gauge theory for p > 1, and therefore its covariant quantisation cannot be carried out using the standard Faddeev-Popov scheme. However, adding a mass term and also introducing a Stueckelberg reformulation of the resulting p-form model, one ends up with an irreducible gauge theory which can be quantised à la Faddeev and Popov. We derive a compact expression for the massive p-form effective action, Γ_p^(m), in terms of the functional determinants of Hodge-de Rham operators. We then show that the effective actions Γ_p^(m) and Γ_{d−p−1}^(m) differ by a topological invariant. This is a generalisation of the known result in the massless case that the effective actions Γ_p and Γ_{d−p−2} coincide modulo a topological term. Finally, our analysis is extended to the case of massive super p-forms coupled to background N = 1 supergravity in four dimensions.
Specifically, we study the quantum dynamics of the following massive super p-forms: (i) vector multiplet; (ii) tensor multiplet; and (iii) three-form multiplet. It is demonstrated that the effective actions of the massive vector and tensor multiplets coincide. The effective action of the massive threeform is shown to be a sum of those corresponding to two massive scalar multiplets, modulo a topological term. Introduction The model for a massless gauge two-form in four dimensions was introduced in the mid-1960s by Ogievetsky and Polubarinov [1] who showed that it describes a spin-zero particle. Unfortunately, their work remained largely unknown for a decade. The same model was rediscovered, and generalised, twice in 1974 in the context of dual resonance models [2,3]. However, active studies of gauge p-forms in diverse dimensions began only in the late 1970s when it was recognised that such fields naturally occur in supergravity theories, see, e.g., [4][5][6] for early publications and [7][8][9] for reviews. Gauge p-forms are also of special interest in string theory where they appear in the low-energy effective actions see, e.g., [10][11][12][13] for reviews. JHEP01(2021)040 There are two important themes in modern quantum field theory that originated by studying the quantum dynamics of massless gauge p-forms: (i) reducible gauge theories; and (ii) quantum equivalence of dual theories. It is appropriate here to briefly recall these developments. For p > 1, all massless p-form actions are examples of the so-called reducible gauge theories (following the terminology of the Batalin-Vilkovisky formalism [14]). In the framework of covariant Lagrangian quantisation, reducibility means that the generators of gauge transformations are linearly dependent. This fact has a number of non-trivial implications, which are: (i) gauge-fixing functions are constrained; (ii) ghosts for ghosts are required; and (iii) a naive application of the Faddeev-Popov quantisation scheme leads to incorrect results. Several consistent quantisation procedures have been developed to quantise reducible Abelian gauge theories such as gauge p-forms [15][16][17][18][19], including the formulations of [17,19] which apply in the supersymmetric case. These quantisation schemes are much easier to deal with than the Batalin-Vilkovisky formalism [14]. 1 In d dimensions, two massless field theories describing a p-form and a (d − p − 2)-form are known to be classically equivalent, see, e.g., [9,23] for reviews. These theories are dual in the sense that the corresponding actions are related through the use of a firstorder (or parent) action, see e.g. [24]. The issue of quantum equivalence of such classically equivalent theories was raised, building on the results of [25], in 1980 by Duff and van Nieuwenhuizen [26,27]. They showed, in particular, that (i) a massless two-form and a non-conformal scalar in four dimensions give rise to different trace anomalies; and (ii) the corresponding one-loop divergences differ by a topological term. These results were interpreted in [26] as a possible quantum non-equivalence of these dual field realisations. The issue was resolved in several publications [19,24,28,29] in which it was shown that the effective actions of dual massless theories in four dimensions differ only by a topological invariant being independent of the spacetime metric. As a result, the dual theories are characterised by the same quantum energy-momentum tensor, T ab , which proves their quantum equivalence. 
2 Analogous results hold in higher dimensions [24,30], as well as for dual supersymmetric field theories in four dimensions [19,29] (see also [31] for a review). It is worth discussing the supersymmetric story in some more detail. Several important massless N = 1 supermultiplets in four dimensions can be realised in terms of super p-forms [32] (see also [33]), with the cases p = 0, 2 and 3 corresponding to the chiral, tensor and three-form multiplets, respectively. The corresponding supersymmetric theories are related either by a duality transformation or by a superfield reparametrisation. 1 One of the earliest applications of the Batalin-Vilkovisky formalism [14] was the Lagrangian quantisation [20,21] of the Freedman-Townsend model [22]. Ref. [20] was accepted for publication in Sov. J. Nucl. Phys. in 1987. It was subsequently withdrawn shortly before publication, after the authors had been informed by a colleague that the same problem had already been solved elsewhere. Due to a limited access to the journals, at the time it was not possible to verify this information, which in fact turned out to be false. 2 In the four-dimensional case, the dual two-form and zero-form theories are classically non-conformal. As emphasised in [29], the quantum operator T a a in such theories "contains the effects of both classical and quantum breaking and is not equal to the trace anomaly." In other words, there is no point to compare trace anomalies in classically non-conformal theories. JHEP01(2021)040 The simplest model for the tensor multiplet [34] in a supergravity background is given by the action 3 where Ψ α is a covariantly chiral spinor,DβΨ α = 0. Its dual version [34] describes the non-conformal scalar multiplet. Let us represent the dynamical variables in (1.2) as Φ = P + V andΦ = P − V , where V is a real scalar and the operators P + and P − have the form 4 Then we end up with the three-form multiplet realisation [32] of the non-conformal scalar multiplet. The corresponding action is This theory was studied in [19], see also [31] for a review. The models (1.1) and (1.2) are dually equivalent [34]. Their quantum equivalence was established in [29] in the case of an on-shell supergravity background, and in [19] for an arbitrary supergravity background. Since the three-form multiplet action (1.4) is obtained from (1.2) by setting Φ = P + V , the physical fields can be chosen to coincide in both models. The main difference between the models (1.2) and (1.4) at the component level is that one of the two real auxiliary scalars in (1.2) is replaced by (the Hodge dual of) the field strength of a three-form in the case of (1.4). Being non-dynamical, the three-form is known to generate a positive contribution to the cosmological constant [26,[37][38][39][40][41]. In order to achieve a better understanding of the three-form multiplet model (1.4), we describe its dual version. It is obtained by starting with the first-order action [19] where V and L are unconstrained real scalars. Varying S[V, L] with respect to L leads to the three-form multiplet action (1.4). On the other hand, varying V gives Our consideration below can readily be extended to the nonlinear theories which were introduced in [34] and are obtained by replacement G 2 → f (G). However, such theories are non-renormalisable in general and will not be studied in what follows. It should be pointed out that the duality transformations for the nonlinear f (G) models were described in [35]. 
The special choice of f (G) ∝ G ln G corresponds to the so-called improved (superconformal) tensor multiplet [36]. 4 For any scalar superfield U , P+U is covariantly chiral, and P−U antichiral. JHEP01(2021)040 This constraint defines a deformed tensor multiplet, in accordance with the terminology of [42]. The dynamics of this multiplet is described by the action At the component level, the main manifestation of the deformation parameter µ in (1.7) is the emergence of a positive cosmological constant. Unlike (1.7), no parameter µ is present in the action (1.4). However, µ gets generated dynamically, since the general solution of the equation of motion for (1.4) contains such a parameter, with µ a real parameter. On the mass shell, we can identify P + +P − V = L. The effective actions corresponding to different values of µ differ by a cosmological term. The authors of [19] made use of the choice µ = 0 and demonstrated that the effective actions Γ chiral and Γ 3-form , which correspond to the locally supersymmetric models (1.2) and (1.4), differ by a topological invariant. It should be pointed out that general duality transformations with three-form multiplets and their applications were studied in [43][44][45]. So far we have discussed the models for massless p-forms and their supersymmetric extensions. Massive antisymmetric tensor fields were discussed in the physics literature even earlier than the massless ones. Kemmer in 1960 [46], and independently Takahashi and Palmer in 1970 [47], showed that the massive spin-1 particle can be described using a 2-form field. Further publications on massive antisymmetric fields [3,[48][49][50][51][52][53] revealed, in particular, that a massive p-form in d dimensions is dual to a massive (d−p−1)-form. 5 This raised the issue of quantum equivalence of dual models. Some quantum aspects of massive p-forms were studied using the worldline approach in [56,57]. In the important work by Buchbinder, Kirillova and Pletnev [58], the quantum equivalence of classically equivalent massive p-forms in four dimensions was established. In the present work we extend the results of [58] to d dimensions. Our proof of the quantum equivalence of dual theories in d = 4 differs from the one given in [58]. Our approach is also extended to the case of massive super p-forms coupled to background N = 1 supergravity in four dimensions. Specifically, we study the quantum dynamics of the following massive super p-forms: (i) vector multiplet; (ii) tensor multiplet; and (iii) three-form multiplet. In particular, we demonstrate that the effective actions of the massive vector and tensor multiplets coincide. Massive super p-forms have recently found numerous applications, including the effective description of gaugino condensation [59][60][61][62], inflationary cosmology [63], and effective field theories from string flux compactifications [64] (see also [65] for a review). Here we do not attempt to give a complete list of works on massive super p-forms and their applications. However it is worth mentioning those publications in which such supermultiplets were introduced in the case of four dimensional N = 1 supersymmetry. Massive tensor and vector multiplets coupled to supergravity were studied in [34,53,66]. Tensor multiplets JHEP01(2021)040 with complex masses were studied in [69][70][71]. 
To the best of our knowledge, a massive three-form multiplet was first discussed in [31], although a massive three-form is contained at the component level in one of the models introduced by Gates and Siegel [72]. This paper is organised as follows. In section 2 we derive effective actions Γ (m) p for massive p-form models in d-dimensional curved spacetime. We then demonstrate that, for 0 ≤ p ≤ d − 1, the effective actions Γ (m) p and Γ (m) d−p−1 differ by a topological invariant. Section 3 is devoted to alternative proofs of some of the results of section 2 specifically for the d = 4 case. Effective actions for massive super p-forms in four dimensions are studied in section 4. In section 5 we discuss the obtained results and sketch several generalisations. Four technical appendices are included. Appendix A collects the properties of the Hodge-de Rham operator. Appendix B gives a summary of the results concerning massless p-forms in d dimensions. The effective action of a massless three-form in d = 4 is discussed in appendix C. Finally, appendix D describes dual formulations in the presence of a topological mass term. We make use of the Grimm-Wess-Zumino geometry [73] which underlies the Wess-Zumino formulation [74] for old minimal supergravity (see [75] for a review) discovered independently in [76][77][78][79][80][81]. Our two-component spinor notation and conventions follow [31]. The algebra of the supergravity covariant derivatives, which we use, is given in section 5.5.3 of [31]. In order to have a uniform notation for non-supersymmetric and supersymmetric theories, in this paper we make use of the vielbein formulation for gravity. The background gravitational field is described by a vielbein e a = dx m e m a (x), such that e = det(e m a ) = 0, and the metric is a composite field defined by g mn = e m a e n b η ab , with η ab the Minkowski metric. All p-form fields in d dimensions carry Lorentz indices. We make use of the torsion-free covariant derivatives Massive p-forms in d dimensions In this section we derive effective actions Γ (m) p for massive p-form models in curved space and demonstrate that Γ Classical dynamics The dynamics of a massive p-form is described by the action is the field strength, and m the mass. It is assumed in this section that m = 0. The Euler-Lagrange equation corresponding to (2.1) is It implies that and therefore the equation of motion turns into where p is the covariant d'Alembertian (A.5). The symmetric energy-momentum tensor corresponding to the model (2.1) is with η ab the Minkowski metric. It is conserved, on the mass shell. Duality equivalence It is known that the massless models for a p-form and (d − p − 2)-form are classically equivalent, see appendix B. In the massive case, however, a p-form is dual to a (d − p − 1)form, see, e.g., [51,53]. Here we recall the proof of this result. To demonstrate that the massive theories with actions S (m) and introduce the first-order action JHEP01(2021)040 Here the variables L q and A q are unconstrained (d − p − 1)-forms. Varying with respect to A q returns the original action, eq. (2.1). On the other hand, varying with respect to L q and B p leads to the dual action S . The equations of motion corresponding to (2.8) are Making use of these equations, one may show that the energy-momentum tensors in the theories S (m) (2.10) Quantisation Associated with the massive p-form To obtain a useful expression for Γ (m) p , we introduce a Stueckelberg reformulation of the theory. 
It is obtained from (2.1) by replacing 14) The gauge freedom allows us to choose the gauge condition V a(p−1) = 0 and then we are back to the original model. The compensating field V a(p−1) appears in the action (2.13) only via the field strength This gauge freedom is characterised by linearly dependent generators, which makes it tempting to conclude that the gauge theory under consideration is reducible. Nevertheless, (2.13) is an irreducible gauge theory and can be quantised à la Faddeev and Popov. JHEP01(2021)040 The point is that (2.15) is a special case of the transformation (2.14) with ζ a( To quantise the gauge theory with action (2.13), we choose the gauge fixing with ρ a(p−1) an external field. The gauge variation of χ a(p−1) is Here O is the kinetic operator in the massive p-form model (2.1). Making use of (2.11), we conclude that the Faddeev-Popov determinant ∆ FP is Now, in accordance with the Faddeev-Popov procedure, the effective action is Averaging the right-hand side over ρ a(p−1) with weight leads to As a result, for the effective action we obtain This is a recurrence relation. It leads to a simple expression for the effective action In the d = 4 case, this result agrees with [58]. The representation (2.23) is formal since each term on the right-hand side contains UV divergences. This issue is addressed by introducing a regularisation for the effective action, (Γ (m) p ) reg . We will use the following prescription: with ω, ε → +0. Here the right-hand side involves the (heat) kernel of the evolution operator U k (s) = exp(is k ) acting on the space of k-forms. The kernel of U k (s) is defined by JHEP01(2021)040 where the delta-function is for any k-form ω. In accordance with the definition of the delta-function, the trace over Lorentz indices in (2.24) is (2.28) Quantum equivalence In d dimensions, the model for a massive p-form is classically equivalent to that for a massive (d − p − 1)-form. Let us analyse whether this equivalence extends to the quantum theory. Our analysis will be based on the fact that the spaces of p-forms and (d − p)-forms are isomorphic, and the corresponding Hodge d'Alembertians are related to each other as follows where ω is an arbitrary p-form. Making use of the relations (2.23) and (2.29), one may show that There are two distinct cases. If the dimension of space-time is odd, d = 2n + 1, the functional X (m) can be seen to vanish identically, In the even-dimensional case, d = 2n, X (m) can be rewritten in the form: This functional is no longer identically zero. However, it turns out to be a topological invariant in the sense that where we have introduced the functional Giving the gravitational field a small disturbance, the functional Υ(s) varies as This variation may be rearranged by making use of the Ward identities in conjunction with the relations where the double vertical bar means setting x = x and a = a . Then one obtains which is equivalent to (2.32). Similar arguments may be used to show that Υ(s) is actually s-independent, For small values of s, it is well known that the diagonal heat kernel has the asymptotic expansion with a n (x, x) the Seeley-DeWitt coefficients. As a result, the topological invariant (2.33b) takes the form which is the heat kernel expression for the Euler characteristic, see, e.g., [83]. The above analysis is a variant of the famous heat kernel proofs of the Chern-Gauss-Bonnet theorem, see [83] for a review. 
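To summarise the structure of the results of this section in one place: schematically, and with the overall signs, normalisation and placement of the mass term being assumptions whose precise form is fixed by eq. (2.23) and by the definition of X^(m) above, the massive p-form effective action and the duality mismatch take the form

```latex
% Schematic form only; conventions (signs, factors of i, mass-term placement)
% are assumptions here and are fixed by the paper's eq. (2.23).
\[
  \Gamma^{(m)}_p \;\simeq\; \frac{i}{2}\sum_{k=0}^{p}(-1)^{p-k}\,
  \ln\operatorname{Det}\bigl(\Delta_k + m^2\bigr),
  \qquad
  X^{(m)} \;:=\; \Gamma^{(m)}_p \;-\; \Gamma^{(m)}_{d-p-1},
\]
% where \Delta_k is the Hodge-de Rham operator acting on k-forms.
% X^{(m)} vanishes identically for odd d and reduces to a metric-independent
% multiple of the Euler characteristic for even d.
```

The alternating pattern over the lower-degree forms mirrors the recurrence relation produced by the iterated Stueckelberg quantisation, and the statement that X^(m) is purely topological is what establishes the quantum equivalence of the dual massive p-form and (d − p − 1)-form models.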
JHEP01(2021)040 3 Massive p-forms in four dimensions In this section we will present alternative proofs of some results from the previous section in the d = 4 case. The topological mismatch X (m) in (2.30) will be ignored. Two-form field The model for a massive two-form in curved space is described by the action where we have denoted This theory is classically equivalent to the model with action S (m) , which describes the massive vector field in curved space. We are going to show that exp iΓ For this we consider the following change of variables 6 Its Jacobian proves to be We perform the change of variables (3.4) in the action Making use of (3.5) leads to which is equivalent to (3.3). 6 Given an arbitrary p-form ωp on a compact Riemannian manifold (M, g), the Hodge decomposition theorem states that ωp = dϕp−1 + d † Ψp+1 + hp, where hp is harmonic, php = 0. It is assumed in (3.4a) that p has no normalised zero modes. Three-form field The model for a massive three-form in curved space is described by the action In terms of the field strength H = ∇ a V a , the equation of motion is This shows that the three-form model (3.9) is equivalent to the massive scalar model Classical equivalence of the theories (3.9) and (3.11) is established by considering a firstorder model with Lagrangian The effective action for the massive three-form model is We are going to show that exp iΓ (3.14) For this we consider the following change of variables [19] V a = ∇ a Φ + 1 2 The corresponding Jacobian is see [19] for the derivation. We perform the above change of variables in the path integral exp 2iΓ For the action S (m) Then, taking into account (3.16) leads to (3.14). JHEP01(2021)040 4 Massive super p-forms in four dimensions In this section we study effective actions of the following massive locally supersymmetric theories in four dimensions: (i) vector multiplet; (ii) tensor multiplet; and (iii) three-form multiplet. In the massless case, these multiplets are naturally described in terms of super p-forms, with p = 1, 2 and 3, respectively. The models for massive vector and tensor multiplets are classically equivalent. Here we will demonstrate their quantum equivalence. At the component level, the locally supersymmetric models of our interest contain the massive p-form models we have studied in the previous section. Setup The massive vector multiplet in a supergravity background [34,66] is described in terms of a real scalar prepotential V . The action is The massive tensor multiplet [34] is described in terms of a covariantly chiral spinor superfield Ψ α ,DβΨ α = 0, and its conjugateΨα. The action is where we have introduced the real superfield which is covariantly linear, (D 2 − 4R)G = 0. Similar to the vector multiplet, the massive three-form multiplet is formulated in terms of a real scalar prepotential V . The corresponding action is obtained from (1.4) by adding a mass term, where the operators P + and P − are defined in (1.3). We recall that P + U and P − U are covariantly chiral and antichiral, respectively, for any scalar superfield U . Associated with the above massive models are their effective actions defined by Quantisation of the massive vector multiplet model The Stueckelberg reformulation of the massive vector multiplet model is obtained by replacing in the action (4.1). The resulting action is invariant under gauge transformations To quantise the gauge theory with action (4.7), we introduce the gauge fixing with F a background real superfield. 
The gauge variation of χ is and therefore the Faddeev-Popov determinant is Averaging the right-hand side over F with weight we obtain exp iΓ where we have introduced the operator 7 [31,82] Our final result (4.14) relates the effective actions (4.5a) and (4.5c). 7 The d'Alembertian Quantisation of the massive tensor multiplet model The Stueckelberg reformulation of the massive tensor multiplet model, eq. (4.2), is obtained by replacing in the action (4.2). This leads to the action where we have introduced the covariantly linear superfield The action is invariant under gauge transformations To quantise the gauge theory with action (4.17), we introduce the gauge fixing where U is a background real superfield. The gauge variation of χ is Here O is exactly the operator which determines the vector multiplet action (4.1). This means that the Faddeev-Popov determinant is vector . Since the right-hand side of (4.23) is independent of U, we can average it over U with weight This leads to exp iΓ where the d'Alembertian c acts on the space of covariantly chiral spinors [19,29] c Ψ α := Our final result (4.25) relates the effective actions (4.5a) and (4.5b). Quantisation of the massive three-form multiplet model The Stueckelberg reformulation of the massive tensor multiplet model, eq. (4.4), is obtained by replacing The resulting action is invariant under gauge transformations To quantise the gauge theory with action (4.28), we introduce the gauge condition where ξ α is a background chiral spinor. The gauge variation of χ α is Here O is the operator which determines the massive tensor multiplet model (4.2). This means that the Faddeev-Popov super-determinant is tensor . (4.32) Therefore, the effective action is given by the path integral Since the right-hand side is independent of the chiral spinor ξ α and its conjugateξα, we can average over these superfields with weight This will lead to the relation exp iΓ which connects the effective actions (4.5b) and (4.5c). Analysis of the results We have derived three different relations which connect the three effective actions defined in (4.5). They are given by the equations (4.14), (4.25) and (4.35). These results have nontrivial implications. Firstly, it follows from (4.14) and (4.35) that tensor . (4.36) Therefore, the classically equivalent theories (4.1) and (4.2) remain equivalent at the quantum level. Secondly, making use of (4.25) and (4.36) leads to Thirdly, from (4.35) and (4.37) we deduce The superfield heat kernels corresponding to the operators appearing in (4.37) and (4.38) were studied in [19,31,82,84,85]. As follows from (4.37), the effective actions Γ tensor coincide, without any topological mismatch. This is due to the use of the Stueckelberg formulation defined by eqs. (4.6) and (4.7). A topological mismatch will emerge if we consider a slightly different Stueckelberg reformulation, which is obtained by replacing the dynamical superfield in (4.1) by the rule This leads to the action which possesses the gauge freedom Modulo a purely topological contribution, the functional (4.38) proves to be twice the effective action of a scalar multiplet. To justify this claim, let us consider the following dynamical system where Φ is a chiral scalar. This model proves to be dual to the massive three-form theory (4.4). The action (4.42) is invariant under gauge transformations JHEP01(2021)040 corresponding to the massless three-form multiplet. Quantisation of the reducible gauge theory can be carried out using the method described in [19]. 
Next, we represent the chiral scalar Φ in (4.42) as Finally, we introduce new variables K ± = 1 √ 2 (V ± U ). Then the action turns into This is the three-form counterpart of the theory which describes two decoupled massive scalar multiplets in a supergravity background. The quantum effective action for this theory is where H (ψ) denotes the following operator [31,82] By definition, the operator H (ψ) acts on the space of chiral-antichiral column-vectors A useful expression for Det H (ψ) in terms of the functional determinants of covariant d'Alembertians is derived in [31,82]. Since the effective actions (4.38) and (4.47) should differ only by a topological term, we conclude that is a topological invariant. It is a generalisation of the invariant introduced in [19,29]. Our analysis in this section provides the supersymmetric completion of the results obtained in section 3. Discussion and generalisations In this paper we derived compact expressions for the massive p-form effective actions for 0 ≤ p ≤ d − 1, where d is the dimension of curved spacetime. We then demonstrated that the effective actions Γ (m) p and Γ (m) d−p−1 differ by a topological invariant. These results were extended to the case of massive super p-forms coupled to background N = 1 supergravity in four dimensions. There are several interesting p-form models which we have not considered in this work and which deserve further studies. Here we briefly discuss such models. As a natural generalisation of the Cremmer-Scherk model for massive spin-1 in d = 4 [3], the dynamics of a massive p-form in d dimensions can be described in terms of a gauge-invariant action involving two fields B p and A q , with q = d − p − 1, and a topological (B ∧ F ) mass term. The action is where I (m) stands for the topological mass term As is well known, this model is dual to the massive theories S The corresponding generators are linearly dependent, and therefore the gauge theory (5.1) should be quantised using the Batalin-Vilkovisky formalism [14] or the simpler quantisation schemes [17][18][19], which are specifically designed to quantise Abelian gauge theories. It would be interesting to show that the effective action for the gauge theory (5.1) coincides with (2.23) modulo a topological invariant. In four dimensions, a supersymmetric generalisation of the Cremmer-Scherk model was given by Siegel [34] where the mass term is given by JHEP01(2021)040 This is a dual formulation for the models (4.1) and (4.2). The action (5.4) is invariant under combined gauge transformations corresponding to the massless vector and tensor multiplets. This reducible massive gauge theory can be quantised using the method described in [19]. The mass term (5.5) is locally superconformal [53]. For the supergravity formulation used in the present paper, this means that (5.5) is super-Weyl invariant. We recall that a super-Weyl transformation of the covariant derivatives [89,90] is where the parameter Σ is chiral,DαΣ = 0, and M αβ andMαβ are the Lorentz generators defined as in [31]. Such a transformation acts on the prepotentials V and Ψ α as see [31] for the technical details. The mass term (5.5) is the supersymmetric version of the d = 4 Green-Schwarz anomaly cancellation term. Another supersymmetric analogue of the Cremmer-Scherk model is described by the action (4.42). If d is even, d = 2n, one can introduce massive n-form models with two types of mass terms [67][68][69], with m and e constant parameters. 
Here the second mass term vanishes if n is odd (however, it is non-zero in the case of several n-forms [69].) The model ( Supersymmetric extensions of (5.8) have been discussed in several publications including [69][70][71]. In particular, the massive tensor multiplet model (4.2) possesses the following generalisation: Quantisation of this model can be carried out using the approach developed in section 4. In conclusion, we would like to come back to the important work by Duff and van Nieuwenhuizen [26]. Their argument concerning the quantum non-equivalence of the dual two-form and zero-form models in d = 4 was based on the different trace anomalies. However, these theories are non-conformal and, therefore, the quantum operator T a a "contains the effects of both classical and quantum breaking and is not equal to the trace JHEP01(2021)040 anomaly" [29]. Nevertheless, the argument given in [26] can be refined within a Weylinvariant formulation for general gravity-matter systems [86,87]. We recall that a Weyl transformation acts on the covariant derivative as ∇ a → ∇ a = e σ ∇ a + ∇ b σM ba , (5.10) with the parameter σ(x) being arbitrary. Such a transformation is induced by that of the gravitational field e a m → e σ e a m =⇒ g mn → e −2σ g mn . (5.11) In the Weyl-invariant formulation for gravity in d = 2 dimensions, the gravitational field is described in terms of two gauge fields. One of them is the vielbein e m a (x) and the other is a conformal compensator ϕ(x). The latter is a nowhere vanishing scalar field with the Weyl transformation law Any dynamical system is required to be invariant under these transformations. In particular, the Weyl-invariant extension of the Einstein-Hilbert gravity action is 9 The field φ = −∆p ln ϕ was interpreted in [30] as the dilaton. JHEP01(2021)040 where E d is the Euler invariant, Relation (5.16) is a generalisation of (B.8). The expression on the right-hand side of (5.16) is a local functional and can be removed by adding a local counterterm. This proves the quantum equivalence of the theories. In a similar manner supergravity in diverse dimensions can be formulated as conformal supergravity coupled to certain compensating supermultiplet(s) [91]. The super-Weylinvariant extensions of the models (1.1) and (1.2) are given (see, e.g., [53]) by where S 0 is the chiral compensator,DαS 0 = 0, corresponding to the old minimal formulation for N = 1 supergravity, see [89,[92][93][94][95]. By definition, S 0 is nowhere vanishing and possesses the super-Weyl transformation δ Σ S 0 = ΣS 0 . The matter chiral scalar in (5.18b) is super-Weyl neutral. The models (5.18a) and (5.18b) are classically equivalent. On general grounds, these models should be equivalent at the quantum level. It would be interesting to carry out explicit calculations to check this. It should be pointed out that the actions (5.18a) and (5.18b) lead to non-minimal operators for which the standard superfield heat kernel techniques [31,82,84,85] for computing effective actions do not work. Quantum supersymmetric theories with non-minimal operators were studied in [96,97]. Our analysis in this paper was restricted to those systems in which the classical action is quadratic in the dynamical fields and therefore the corresponding effective action admits a closed-form expression in terms of the functional determinants of certain operators. 
In the case of nonlinear theories, such as the following model [35,53] (5.19) and its duals, it is not possible to obtain simple expressions for the effective action. Nevertheless, the issue of quantum equivalence can still be addressed using the path integral considerations described by Fradkin and Tseytlin [24]. This approach was used in [20] to prove quantum equivalence of the Freedman-Townsend model [22] and the principal chiral σ-model. A Hodge-de Rham operator Given a non-negative integer p ≤ d, the so-called Hodge-de Rham operator (also known as the covariant d'Alembertian) is defined to act on the space of p-forms. We recall that the operators of exterior derivative d and co-derivative d † are defined to act on a p-form ω as B Massless p-forms in d dimensions Setting m = 0 in (2.1) gives the massless p-form field theory JHEP01(2021)040 where H := ∇ a V a is the field strength being invariant under gauge transformations The second term in the action is a boundary term; it was introduced in [40,43]. To obtain a consistent variation problem, one demands [40] that This shows that the model under consideration has no local degrees of freedom. Different values of c correspond to different vacua in the quantum theory. When computing the path integral, for a given c we make use of the background-quantum splitting such that the classical action becomes Here the first contribution on the right is the cosmological term. Evaluating the path integral, for the effective action one gets The functional X is the four-dimensional version of the topological invariant (B.7). D Duality with topological mass term To construct a dual formulation for (5.1), we introduce the first-order action +L a(q) mA a(q) + F a(q) (C) , (D.1) JHEP01(2021)040 where L a(q) and C a(q−1) are unconstrained antisymmetric tensor fields. The equation of motion for C a(q−1) implies that L a(q) = 1 (p+1)! ε a(q)b(p+1) F b(p+1) (B), and then the action (D.1) turns into (5.1). On the other hand, we can eliminate L a(q) from the action (D.1) using the corresponding equation of motion. This leads to This is the Stueckelberg formulation for the massive (d − p − 1)-form model, see eq. (2.13). Thus we have shown that the massive q-form model (D.2) is dual to (5.1). There is an alternative dual formulation for (5.1), which is obtained by making use of the first-order action where L a(p) and V a(p−1) are unconstrained antisymmetric tensor fields. The equation of motion for V a(p−1) implies that L a(p) = 1 (q+1)! ε a(p)b(q+1) F b(q+1) (A), and then the action (D.3) turns into (5.1). On the other hand, integrating out L a(p) leads to the massive p-form model (2.13). Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
8,472.8
2020-09-17T00:00:00.000
[ "Materials Science" ]
The Decline of Literature: A Public Perspective After centuries of dominance, literature has not been in robust health for the last few decades. Several scholars have addressed the decline of literature in a number of books and articles, attributing it to institutional and economic reasons. However, a major factor has not been taken into account: the larger audience who receives and absorbs literature. In this paper, I argue that the decline of literature emanates from the lack of appreciation of literature among the public, who have deserted this field of the humanities in the present day. I will investigate the causes of this desertion and explore its consequences for the field of literature. Using a questionnaire, this paper examines and evaluates the experiences and perspectives of the public. It is expected that the findings will contribute to a better understanding of the decline. Introduction Literature's decline and its lack of power to persuade critics of its continued viability in the academic setting is a matter that remains under scrutiny. However, this paper argues that the decline of literature emanates not only from the problems that reside within the profession of literature itself, but also from the lack of appreciation and understanding of literature among the public, who have abandoned this field of the humanities at present. It examines the causes of this abandonment and explores its consequences for the field of literature. Furthermore, this paper explores scholars' investigations of the decline--or "death", as some scholars refer to it--and its attribution to the institutionalization and compartmentalization of the profession. This paper involves the use of a questionnaire. Nevertheless, my goal is not to quantify or establish percentages; it is to broaden the range of possibilities well beyond my own reactions. The questionnaire has been thoroughly phrased to elicit the experiences and perspectives of the public. Therefore, this paper deliberately describes and quotes what the public consumers of literature have expressed and discusses their responses to the question: "why do you think most people do not read literature?" For the past few decades, numerous critics and scholars have studied at length the decline in the humanities in general and literature in particular. At first, there was a tendency to use the phrase "the death of literature" instead of "the decline of literature". It was first used in the sixties to refer to the lack of interest in literature. Although the word death was used in that era specifically to draw an intentional comparison with the phrase "death of God" coined by the German philosopher Friedrich Nietzsche, it still suggested, in a stronger sense, a sign of weakness and inefficiency in a field that had been influencing people's lives for many centuries. In a book that carries the same notion, The Death of Literature, Alvin Kernan mentions some major factors that led to that "death". He lists: the death of the author, the drastic change of scholars' perspectives towards classic works or masterpieces, the independence of criticism from functioning as a servant of literature, the increase of television watchers at the expense of literature, and the attacks on literature by several radicals, such as Terry Eagleton.
However, Kernan does not take into consideration the public--the vast consumers of literature outside the academic walls. But if literature has died, how can we explain the continuation of exposure to literature? People do still buy works of fiction, read poems, attend literary conferences or symposiums, or otherwise absorb one kind of literary form or another. Literature, to some extent, is still commonly cherished not only among literary scholars, but also among the considerable number of people who appreciate literature in this day and age. Valid evidence of this is the number of literary works being published every year. For example, over 100 million copies of Fifty Shades of Grey have been sold worldwide--and it is still being sold to this day. Notwithstanding this promising news, this conviction disregards, to some degree, the dissension that exists amongst the larger audience in the literature arena, who have, unfortunately, begun deserting this field of the humanities. Literature: Its Rise and Fall The word literature has been historically modified to carry different, yet interconnected meanings. At first, it was used to refer to poetry in all its kinds, owing to the stereotypical conception people had of valuing poetry and treasuring poets. Then, the meaning changed slightly to signify "polite letters". Other meanings of "anything written" or "all serious writing" emerged later, attributing to literature a much broader sense. The modern use of the term to refer to all genres of literature was adopted in the later eighteenth century. Since then, literature has gone beyond referring to older works to encompass performed plays, recited poems, novel-based movies, and so on. The works of the greatest writers such as Shakespeare, John Milton, Jane Austen, the Brontë sisters, and many others--labeled as classics--have remained mostly, and exclusively, inside the academic walls. However, modern works are now being widely acknowledged in academic settings. Literature in its broader sense was admitted to universities as an enriching form of knowledge, and it is taught to students to this day. Students of literature are exposed to a wide variety of literary works; they read them and interpret them, and their research relies upon them. They keep literature flourishing and keep it from dying. The admission was, and remains to a large extent, institutionally and systematically established. The systemization and institutionalization of literature studies in English departments have placed faculty members into isolated sections--a notion referred to as compartmentalization--which has confused college students owing to the lack of communication between these sections, as Gerald Graff argues in his historical investigation of the profession of literature in his enriching book Professing Literature: An Institutional History (1987). Approaching literature systematically makes it fall inevitably and precisely under a series of "acclaimed" works without giving the students or the audience the option to explore other works--especially upon recalling that the ancient purpose of reading literature is seeking pleasure or profit of any sort. According to Graff, "…the idea that literature could or should be taught-rather than simply enjoyed or absorbed as part of the normal upbringing of gentlefolk-was a novel one, and no precedents existed for organizing such an enterprise."
For a long time, literature has been considered a treasure by which universities have been privileged. It has played a dynamic role in the humanities, and in English departments particularly. But prior to that, the discipline of English appeared as two separate fields: composition and literature. These two fields represented two distinct kinds of students. The scholars David Shumway and Craig Dionne state that "composition has always had the most students; literature has had the most of the prestige (all the prestige for several decades, although composition has gained a bit in the last three)". However, these two fields have both been employed in the service of enriching humanity. Because literature has been a valued subject in universities for almost a century, it attracted many students who came to pursue their master's and doctoral degrees. However, this attraction has unquestionably changed. The number of students majoring in English as a whole dropped by almost one third. Andrew Delbanco notes, "a trend consistent with the contraction of the humanities (literature, language, philosophy, music, and art) as a whole, which fell as a percentage of all Ph.Ds. from 13.8 percent to 9.1 percent between 1966 and 1993." There are various reasons for this regression in the number of students. One reason is that it came as a result of the postwar expansion that, according to Delbanco, occurred specifically in the formerly understated science and technology fields. Delbanco states, "with increased access to college for many students whose social and economic circumstances would once have excluded them, vocational fields such as business, economics, engineering, and, most recently, computer programming have also flourished." The growing prosperity of these fields came at the expense of the humanities. Literature started to lose some of its value not only because of the increasing number of students who refrained from pursuing their studies in its fields, but also because of the waves of criticism that attacked it. The attack was pointed at high culture and universities in general in the sixties--an energetic era that held not only civil rights movements but also radical changes in many disciplines and policies. Kernan notes, "the attack on literature first began to be noticeable in the universities in the disturbances connected with the Vietnam War…" Moreover, in his five-chapter book The Rise and Fall of English, the renowned scholar Robert Scholes gives a thorough critique of the nature of English studies in the United States today, calling for a critical change for the future. By providing the examples of two early American colleges, Yale and Brown, Scholes gives a substantiated analysis of their rise at the end of the nineteenth and beginning of the twentieth century. Although Scholes addresses the dark situation of English in general, literature falls under his analysis as an academic field. According to Scholes, the decline of English is attributed to the canon of theories, texts, and political issues that has circled the humanities extensively. He calls, at the end of his book, for an immediate and essential reconstruction of the discipline toward a methodology of composition and rhetoric. While he extensively addresses the situation of English departments, Scholes fails to encompass the public in this malaise of English. It is true that inside the academic walls English has deteriorated, but it has been neglected outside the academic walls as well.
Similarly, Alvin Kernan's book What Happened to the Humanities offers the opinions of some of the most notable American academic commentators on the critical changes that happened in the humanities in the latter part of the 20th century. The essays in the volume ascribe the decline to factors such as demographic shifts, lack of financial support, and shifting communication technology. They also investigate the impact of these factors on books, libraries and the phenomenology of reading in the age of images. What the Public Say What I have explored so far has indeed played an important role in the decline of literature--an internal role. Over the last few decades, the scholars I have mentioned, and many others, have scrutinized the decline of literature as a result of fundamental internal problems. Compellingly, this attribution appears valid. Nevertheless, it would be more conducive to interpreting this decline to consider the public's views rather than relying solely on the profession itself. Therefore, this paper serves to establish a new perception of the decline of literature through examining the public's voice. Forty people participated in the questionnaire (20 men, 20 women). For the sake of anonymity and authenticity, neither the name of the participant nor the name of the institution that the respondent represents was recorded. The language in the questionnaire was simple and straightforward, taking into account that participants may have a variety of backgrounds and different levels of understanding. Complicated concepts and terms were avoided. Although the questionnaire consisted of eight simple questions, a lot of people showed no sign of interest in filling it out. Since the questionnaire was not appealing to most people with whom I communicated, I could anticipate that reading literature would not be any more intriguing to them. The first question was: how many books do you read monthly? Although the question incorporated all kinds of books--fiction and non-fiction--and all genres as well, and did not specify literature, some participants chose magazines as alternatives to books instead of saying "none". Most people read 1-3 books per month, which shows a surge of interest in reading not only literature but also other kinds of books, including self-help books. The purpose of asking such a question is to identify from the very beginning whether the lack of reading literature is attributed to reasons pertinent to literature only, or to all books in general. Furthermore, it is shown that women are likely to read books more than men, almost twice as much. The number of women who read more than three books per month outweighed that of men; only two men said they read more than three books.
The second question was specific: what genre of literature would interest you most? The result was unpredictably surprising. The majority of men chose non-fiction as the most interesting kind of literature, while all of the female respondents chose fiction, particularly novels and short stories. Only one man chose poetry, and the rest were not interested in fiction. The fact that men are not usually tempted by fictional stories implies the possibility of finding a larger portion of the actual population who do not consider fiction intriguing. The tendency to read nonfiction emanates from the concept many people have of not looking at fiction as a learning tool--a story that can teach us a valuable lesson in an esthetic and creative way. Most men, according to the questionnaire, did not believe in or understand this concept in the way that women did.

In order to see whether or not people's desire to read literature is driven by someone else's recommendation or advice, the third question emerged: have you purchased a book based on someone's recommendation? There were some variations in the results among men and women. The only commonality amidst these responses is that readers at a young age rarely purchase a recommended book, unlike older readers, who said that they often read books just because they were suggested.

What is your favorite book? Who is your favorite author? These questions, four and five, were proposed to understand whether the public was intrigued by the world's masterpieces--classics--or whether the most celebrated authors captivated the public. Unpredictably, only one person chose a classic--John Milton's Paradise Lost. The choices of favorite books ranged from Sylvia Plath's The Bell Jar (1963), John Irving's The Cider House Rules (1985), Frank McCourt's Teacher Man (2005), Cormac McCarthy's The Road (2006), and Ted Dekker's The Circle (2011) to many others. It is shown here that the public tends not to be attracted to classics--works that have been abundantly celebrated among literature scholars for a long time. Their choices include mostly works from the 20th and 21st centuries.

As for the authors, and aside from Edgar Allan Poe, the list lacks highly celebrated writers. They chose Hunter S. Thompson, Ted Dekker, Kurt Vonnegut, Pat Conroy, and some writers who are not widely known. The absence of the world's major names in literature in these responses--such as America's Hemingway, Russia's Dostoyevsky or Tolstoy, France's Hugo or Voltaire, Ireland's Joyce or Bernard Shaw, England's Shakespeare or Austen--a seemingly endless supply of well-known authors--appears to come from one of the following two accounts: (1) they have not had the opportunity to expose themselves to these classics, or (2) they have read one or another of these works, but the works do not constitute a landmark on their journey of reading. Both accounts indicate a shocking fact: what has been labeled a masterpiece or classic is widely unappreciated by the majority of the public.

The sixth question asked the participants if they agreed with the statement: modern authors are as good as writers from the 18th and 19th centuries. The statement was meant to measure the respondents' ability to distinguish the different writing styles of authors from various literary periods. The majority did not agree with the statement. The responses resonate with the above conviction--the public does not treasure classics. The several who opposed were all women, who believed that there was a significant difference, as writers from the 18th and 19th centuries were exceptionally better.
To detect if the complexity of some works impedes the public's understanding and to see if certain skills are required to be a reader of literature, the seventh question arose: reading a literary work requires some skills such as perfect grasp, flawless comprehension, and vast vocabulary.The majority believed that literature ought to be approached with these skills.However, most of the female readers, aged over 30, had an opposing view.According to them, reading literature does not necessitate acquiring vast vocabulary or having impeccable comprehension-from the context, one can understand. Literature encompasses different degrees of complexity in all of its genres.Some writers tend to use complexity in their writings whether or not they intended so.These writings challenge the apprehension of many readers who feel undermined unlike professional readers known as intellectuals who find them uncomplicated and interesting.The idea of describing a specific novel as a page-turner shows how readers like to resort to easy-to-read books and forsake the most complicated ones such as Virginia Woolf's To The Lighthouse or Thomas Pynchon's Gravity's Rainbow.The reader's intelligence is often challenged when reading such books.While the understanding of the hard-to-read books gives the reader the pleasure of competence and mastery, the failure to understand generates feeling of inadequacy and inferiority among those who find those books challenging. The last question specifically addresses the problem of the decline: what do you think are the main reasons that some people do not read literature?By asking such a question and, more importantly, by concluding the questionnaire in a brief essay question was an attempt to interpret the decline of literature through the lenses of the public, the common readers.The respondents' reactions varied drastically, but there is an agreement over several factors by which, the public believes, why most people have abandoned reading literature. Lack of time is the main cause in most of the responses.This response is vague and does not provide an accurate justification.Most people tend to use this reason to defend their failure or sluggishness so they do not perform a certain action or develop a hobby.The concept of "lack of time" is not reality; it is more conviction and perception.There is time for personal pleasure, exercise, work, social communications and house chores.What one lacks is not time; it's time management, our ability to control and use it properly.For example, the amount of time spent on watching TV per day can be saved for other things.According to a recent New York Times article, "an average of four hours and 39 minutes consumed by every person every day."This massive amount of time equals reading a minimum of twenty books per month.So, limiting TV viewing to only two hours per day saves each person nearly 80 hours (almost fours days) of free time each month that can be used for profitable experiences. 
The impact of technology on people's lives is what the public states as another cause of the decline.It is true that technology has dominated our lives--ironically was thought of as a means to make life easier.As a matter of fact, it has been used mostly to expedite our time--or to kill out time in a common sense--without providing the expected benefits or even any kind of social values.Cell phones, computers, email, and Internet, all distract most people and negatively impact them to a larger extent.The excessive use of these devices and means of technology result in: failure at multitasking, lack of family time, and incapability of enhancing brain skills--thinking, interpreting, and analyzing, which often comes from reading.The neuroscientist Susan Greenfield addressed the undesirable impact of this digital age on people's identity emphasizing the threat on the human brain.The influence of media--primarily TV--has been a major factor providing the alternatives for pleasure and easy access to information, as some people from the public believe.According to one response, " media has resulted in an effect known as the 'dumbing down of America'".Watching takes less brainpower and people, as one man states, "are always looking for the easy way out."As noted earlier, the amount of consumed time when watching TV, or even videos, constitutes a threat to people's intellectuality. With the growing number of TV channels and movies produced every year, it has become very challenging nowadays to plant inside people the passion for reading.And it has become even more difficult to develop reading literature as a habit for the younger generations.One response indicated that when a person is not interested in literature, this shows that reading has not become a habit when that person was a child.In other words, parents sometimes do not encourage their children to be readers of literature, or at least general readers, which would likely to result in a lack of interest when they become adults. The lack of advertising is another reason that some people believe would be a major cause of this decline.The success of the Harry Potter series is attributed to the effective advertisements.According to one response, "the Harry Potter book series being so successful and popular to read in time in which printed media is not the most widely used source of entertainment".The case of the success of the fantasy series of Harry Potter can be perfectly used to prove this claim.However, Harry Potter was also created as a product, which explains the main goal of advertising the series. Advertisement is what insinuates people to a certain product.It is very difficult to apply the same concept to classics or most of the modern works for which publishers would not consider profitable.The advertisement can be indirectly education-based, where students are exposed to a wide variety of literary works, even simplified versions or picture books of world classics.This way or another, we will have a future generation who will receive and appreciate literature as much as they appreciate technology. 
Conclusion
There is a decline in literature and in the number of people who receive it. The problem is not recent; however, it has not been discussed inclusively. If we continue to address this issue institutionally and neglect to adopt the public's perspectives, we will fail to come up with effective and applicable solutions to the problem. It has been a few decades since the problem was identified; it is time now to examine it thoroughly by including diverse views from common people--both readers and nonreaders. This paper has presented some of the public's points of view. But more research is needed to explore this issue fully. Also, more empirical studies ought to be done in order to validate any claims that may arise when tackling this issue. For a field that has offered a forceful manifestation of expressions, emotions, and thoughts through the use of aesthetic language, there is an urgent need to reconsider its decline in an age when literature can be used not only for pleasure but also as a rich source of information.
5,084.8
2015-06-01T00:00:00.000
[ "Education", "Economics" ]
Reliable Colour Detection for Matching Repaint Application

This paper presents the design and development of a system that creates the optimal conditions to sense colour for matching repaint application. The effectiveness of eliminating, and then recreating, the lighting conditions to a desired level in a controlled enclosure is explored. The desired artificial light is introduced into the enclosure, providing optimal, constant conditions to measure surface reflectance. The tests conducted showed that the proposed system worked well. The testing results were good and showed that the relative results between samples taken at exactly the same location were an accurate match and consistent.

Introduction
Paint is commonly used to enhance the appearance of cars and buildings, both internally and externally. During its lifetime, paint is exposed to a number of situations that can alter its original appearance; these include weather, scratching and graffiti. Scratching and graffiti are typically localised damage. Therefore, in theory, to restore the affected area to its original appearance a repaint is required with the same paint mixture originally used. However, damage caused by weathering affects the entire painted surface, and its appearance is globally altered, requiring the whole surface to be repainted in order to restore its consistent appearance; this can get expensive depending on the area needing a repaint. Alternatively, an undamaged area of the surface to be repaired can be analysed to obtain its current appearance and a custom paint mixed accordingly. Currently, the industry best practice for doing so is manual analysis by a trained artisan who visually matches the colour to a pre-specified range of paint. There are automatic means of analysing the appearance, but they are complex and consequently expensive.

During this research work, light was studied, i.e. lighting conditions and their effects on colour. Light is the key source of energy that feeds all life on Earth, and we use it to get information about our surroundings and what we are doing. Light is a type of energy and is thought of as being carried by photons, the elementary particles that, from a basic point of view, are responsible for electromagnetic phenomena [1]. These photons are the transporters of all rays of radiation. The energy carried by one photon is given by the Planck relation E = hν = hc/λ, where h is Planck's constant, ν the frequency, c the speed of light and λ the wavelength. The energy in a wave is proportional to the intensity, frequency and speed of the light. This energy can be measured with cameras, spectrometers and other commercially available sensors. Sensors usually have different sensitivities to the different wavelengths of light, which are usually described in their datasheets.
Al-Bahadly and Berndt [1] reported using LEDs emitting wavelengths that sufficiently overlapped, with the intensity of the reflectance from a test piece, subjected to each different wavelength, measured by an LDR. It was found that the system worked and the reflected wavelengths could be sufficiently sampled; however, the system was only a prototype and requires more calibration, testing and elimination of noise to become viable. Al-Bahadly and Darhmaoui [2] proposed a camera-based method for colour detection and associated software to convert the RGB colour space to the CMYK colour space. It was found that the system gave fairly consistent results at varying viewing angles, but they note that some factors would influence the results, such as light, temperature and cleanness of the test piece. They recommend that optimal conditions for image capture be found to improve accuracy and that it could be treated as a colour constancy problem, in which illumination is kept constant. S. Li and Pandharipande [3] reported using an RGB LED as both the light source and the light sensor: an array of RGB LEDs connected in parallel was used to sense the colour while simultaneously illuminating without perceivable flicker, and to relay the colour to a destination RGB LED. Included in the experiment was an algorithm to detect arbitrary colours, showing that this is possible. Although not directly related to colour detection for the purposes of repainting, this experiment highlighted the fact that a single RGB LED could be used to both illuminate and detect colour; for the concept to be viable for the purpose of repainting it needs to give reliable results, which could only be achieved if environmental illumination is kept constant. Polzer, Gaberl, Davidovic, and Zimmermann [4] proposed a system where a BiCMOS colour sensor is used without colour filters to detect colour illuminating from an RGB LED. The system relies on the fact that silicon has wavelength-dependent penetration depths. By taking photocurrent readings at the appropriate depths, one is able to determine the intensity of red, green and blue light cast on the sensor. Again, although this is not directly related to colour detection for repainting, with slight modification it could be applied to the application, and it would also greatly benefit if external illumination were kept constant.

The paper focused on devising a method for eliminating the ambient light and recreating the desired lighting environment. The method was realised in a variety of different ways and configurations. Each configuration was an improvement on the last, leading to an optimal final design that was then used for testing.

Research has been conducted to determine what advancements have been made with respect to colour detection and what applicable technologies are available. Apart from the expensive spectrophotometers currently on sale, the research showed that cheaper alternatives, such as those using cameras [2], were unreliable. The reason for this is that they have no control over the lighting conditions in which they analyse the colour. The designed device must solve this problem.
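For reference, the RGB-to-CMYK conversion mentioned in connection with [2] is not spelled out in the text. A common naive formulation is sketched below in Python; this is only an illustration of the general idea, not necessarily the exact conversion used in [2], and the example input values are made up.

```python
def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB values to CMYK fractions using the common naive formula."""
    r_n, g_n, b_n = r / 255.0, g / 255.0, b / 255.0   # normalise to [0, 1]
    k = 1.0 - max(r_n, g_n, b_n)                       # black component
    if k == 1.0:                                       # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r_n - k) / (1.0 - k)
    m = (1.0 - g_n - k) / (1.0 - k)
    y = (1.0 - b_n - k) / (1.0 - k)
    return c, m, y, k

print(rgb_to_cmyk(30, 60, 150))   # illustrative blue-ish surface colour
```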
The tests conducted showed that the device/method worked. Samples within tests had little to no colour variation. There were, however, issues with the consistency of the measurement device: the settings in manual mode were not always the same even though they were set to the exact same level, making it difficult to compare between tests run at different times.

Light and Colour
Light is responsible for the colour we see. Light's electromagnetic wavelength falls between 380 nm (violet) and 780 nm (red) [5]. White light is made up of approximately equal proportions of all visible wavelengths. When these shine on or through an object, some wavelengths are absorbed while others are reflected or transmitted. If the white light source is deficient in the wavelength that a surface reflects, the colour of the surface will appear different from what it should be. This can be clearly seen in Figure 1, where the reflected spectral power distribution changes as the incident spectral power distribution changes. The reflected spectral power distribution is only the same as the percent reflectance if the incident spectrum is evenly distributed [6].

Selecting a Source of White Light
For this problem the main concern is the reproduction of colour. Over the past 80 years, the sources of choice for colour reproduction have been sunlight and tungsten-based light [7]. Both of these lighting sources have a relatively even distribution of energy across the spectrum, with no large troughs or peaks in output at any wavelength in the visual band. This makes them great sources of illumination if colorimetric accuracy is desired, since they both have a colour rendering index (CRI) of 100 [8], as shown in Table 1. According to [8] and Figure 2, a tungsten bulb is the common choice for colorimetric accuracy. Tungsten bulbs generally have a lower spectral distribution at the shorter wavelengths (blue light), which gives the yellow appearance of most tungsten filament bulbs. This is not true for all tungsten-based lights: there is a series of tungsten halogen bulbs currently made by Tailored Lighting, Inc. called SoLux that has a spectral distribution closely matching that of the sun, as demonstrated in Figure 3, while having a CRI of 99+ [9].

System Design
From the research, it was evident that there was a lack of control over the lighting conditions during the detection of surface reflectance. The literature highlighted that digital techniques for determining colour consistently were only suitable for object detection, where it is acceptable for the colour to be distinguishable as a known colour, but would not be able to return the colour exactly under all lighting conditions [10]. It also showed there had been success in the medical field in detecting low-energy luminance by eliminating unknown lighting from the detection area [11]. This led to a method of doing just this, but to measure reflectance.

Mechanical Design
An enclosure with artificially introduced lighting will be used to consistently analyse colour. There are many aspects that must be designed and tested for it to give the desired reliable results.
The focus of this research work is to verify the use of an enclosure that eliminates light and introduces new light as a reliable environment for colour detection. Detection within the environment should be able to give the same results on the same surface and location, irrespective of when it is tested and the external lighting conditions it is tested in. Therefore, the lighting used was not the optimal lighting for rendering exact colour, nor was the use of an iPhone camera intended as the analysis device if exact colour were to be detected.

The lighting used allowed effective testing of the effect of lighting location on the surface reflectance. The lighting conditions could be repeated, giving the same results, and it was easy to change the location of the bulbs and test what effect this had.

The iPhone camera was used with its settings in manual mode to analyse the reflectance of the surface at a certain location within the enclosure. It was, as with the lighting, able to give accurate results relative to the previous samples in the same test, but not of the actual colour without further calibration, which is outside the scope of this project. The use of an iPhone camera introduced many design elements into the enclosures designed. The height, width and depth were all determined by the camera's field of view and focal length, respectively. If a simple RGB sensor were used as the detection device, the dimensions of the enclosure would have been drastically reduced.

Design Testing
To test the qualitative performance of the designs, three surfaces were chosen with different properties. A textile surface in the form of a purple t-shirt (Figure 4(a)) was chosen to determine the performance on a matt surface, where little direct light reflectance is expected. An artificial wood vinyl surface (Figure 4(b)) was chosen to determine the device's performance on a medium-gloss surface, where some direct light reflection is expected. A shiny blue folder (Figure 4(c)) was chosen to determine the performance of the device on a high-gloss surface, where a lot of direct light reflection is expected. This high-gloss surface was chosen due to its similarity to the gloss level of a typical car's paint job.

Prototyping and Final Design
In order to get to the final design, there were three prior prototypes, each of which improved on the last. The initial enclosure design stayed throughout the prototypes. The main issue that the first prototypes experienced was point lighting, as shown in Figure 5, which shows the point lighting from the first prototype, where the iPhone flash was used as the light source, and represents the worst point lighting experienced across all prototypes. Prototypes two and three used three tungsten filament bulbs mounted at the top of the enclosure, with the light passed through a diffuser. Prototype two still had point lighting, and even though it was not as bad as in the first prototype, it presented no clear sampling area where the surface reflectance stayed constant. Prototype three improved on prototype two by incorporating reflectors behind the diffuser to distribute the lighting better. While this did distribute the lighting better, it resulted in a large area of direct light reflectance and only left a small area appropriate for sampling, as shown in Figure 6. The final design came after the third prototype and is called the final design as it gave the best results.
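Since the enclosure's height, width and depth were set by the camera's field of view and focal length, the relationship between working distance and visible sample area can be sketched with simple geometry. The Python snippet below is only a rough illustration under assumed field-of-view angles; the paper does not give the actual camera parameters or enclosure dimensions.

```python
import math

def visible_area(distance_mm, hfov_deg, vfov_deg):
    """Approximate width and height of the region a camera sees at a given distance."""
    width = 2.0 * distance_mm * math.tan(math.radians(hfov_deg) / 2.0)
    height = 2.0 * distance_mm * math.tan(math.radians(vfov_deg) / 2.0)
    return width, height

# Assumed values: a phone camera with roughly a 63 x 47 degree field of view,
# mounted 150 mm above the test surface.
w, h = visible_area(150.0, 63.0, 47.0)
print(f"Visible area: {w:.0f} mm x {h:.0f} mm")
```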
The purpose of this design was to improve on design three by moving the lighting closer to the sides, thus moving the direct light reflection points closer to the sides and creating a larger area of constant lighting for colour detection. For the t-shirt and wood surfaces (Figure 8(a) and Figure 8(b)), the area of constant illumination is effectively the same as the defined sampling area. However, Figure 8(c) shows that there is now a larger, square area where the lighting does not change. This area would be large enough to average over and get an accurate colour reading from.

Results and Discussion
To get quantitative results for the chosen design, MATLAB was used. A program was written that takes a JPEG image and extracts the red, green and blue channels into their respective arrays. The arrays are then iterated through at a desired location and the red, green and blue values averaged to return their respective averaged results. The averaged area is shown as the red square in Figure 9. The best design from the results was selected for testing. The test piece for these results was the blue folder used during the design stage. Pictures were taken and the average colour data from the sample area calculated, compared and displayed. The difference in RGB values of picture samples taken within tests determined the reliability of the results.

The iPhone's built-in camera app sets white balance, shutter speed and ISO speed automatically to what it thinks they should be to get an optimal photo. This is a problem if consistent results are wanted, because these settings will change from one surface to another. Therefore, two third-party apps were used on the iPhone to control the white balance, shutter speed and ISO speed. Unfortunately, neither app was reliably able to take photos with the same colour balance, even with the settings set identically for all photos. If the camera had been looking at a bright object in automatic mode before it was set to manual mode and the settings changed to the desired level, the resultant image would be brighter than if it had been looking at a dark object in automatic mode before being set to manual mode with the settings taken to the same desired levels. Once the camera was set to manual mode, all the images taken during this time were found to have the same characteristics and could be compared with one another to determine relative colour accuracy.

Blue Folder Different Locations
Figure 10 shows the average colours of the samples taken at different locations on the blue folder, one after another, with the lights in the room on. The results from the samples vary slightly. However, the variation is almost indistinguishable by the human eye. The human eye can distinguish about 10 million different colours [12], whereas an 8-bit JPEG image can distinguish between 256 x 256 x 256 = 16,777,216 possible RGB values.

Blue Folder Same Location Lights-On
Figure 11 shows the averages of the samples taken at the same location. Consecutive samples were taken at the location with the room lights on and then with the room lights off. Each sample was a minute apart. The results vary quite a bit from those of the samples taken at different locations. This is due to the manual control issues with the camera, as explained in Section 4. Looking at the results themselves, they were really good: there was no variation between samples.

Blue Folder Same Location Lights-Off
The result with the lights off differs from that with the lights on but, again, there was no variation between samples, as shown in Figure 12.
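The channel-averaging step described above was implemented in MATLAB; an equivalent minimal sketch in Python is shown below. The file name and sample-region coordinates are illustrative placeholders, not values from the paper.

```python
from PIL import Image
import numpy as np

def average_rgb(path, x0, y0, x1, y1):
    """Average the red, green and blue channels over a rectangular sample region."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    region = img[y0:y1, x0:x1, :]     # rows correspond to y, columns to x
    return tuple(region[..., ch].mean() for ch in range(3))

# Illustrative usage: average a 200 x 200 pixel square of the photographed sample area.
r_avg, g_avg, b_avg = average_rgb("blue_folder_sample.jpg", 900, 600, 1100, 800)
print(f"Average RGB: ({r_avg:.1f}, {g_avg:.1f}, {b_avg:.1f})")
```

Comparing these averages between repeated photographs of the same spot is what determines the relative colour accuracy discussed in the results.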
Overall Discussion
The results of all the tests were good. The tests have shown that the relative results between samples taken at the same location were a 100% match. Since both the lights-on and lights-off samples at the same location had no variation between them, the variation between samples in the varying-locations test can be put down to differences in the actual surface reflectance at those different locations. The reason for the difference between the lights-on and lights-off values is unknown. If the enclosure were letting in external light, one would expect turning off the lights to decrease some or all of the RGB values; however, the red and green channels were each one value higher. It may be that the enclosure moved slightly after the lights were turned off, so that a slightly different location was sampled.

Conclusion
The design and development of an enclosure that creates the optimal conditions for colour detection has been described in this paper. The enclosure stops all external light from entering it and artificially recreates the lighting conditions to a desired level for colour detection. An iPhone camera is then used to capture an image of the surface, and MATLAB is used to process the image and find the average RGB values. The results were good between tests as long as the camera was not turned off between them. The results were always stable between samples within tests, which showed that the enclosure provided a constant lighting environment for colour detection. To further verify the effectiveness of the enclosure, it is recommended that full control over the measurement conditions be achieved so that the reliability between tests can be established.

Figure 1. Effect of incident spectra on the reflected spectra.
Figure 2. Spectral distribution of common light sources.
Figure 5. Bad point lighting from the first prototype.
Figure 7. (a) Top and side of the final design; (b) bottom of the design with the lighting turned off; (c) the lights moved to the sides, with a fourth light added to distribute the lighting more evenly.
Figure 8. Final design test results; (a) purple t-shirt test, (b) wooden desk test, (c) blue folder test.
Figure 9. Colour sample for test pieces.
Figure 10. Blue folder colour samples taken at different locations with the room lights on.
Figure 11. Blue folder colour samples taken at the same location with the room lights on.
Figure 12. Blue folder colour samples taken at the same location with the room lights off.
Table 1. CRI of common light sources. *Efficacy reduces with increasing power.
4,256
2019-01-31T00:00:00.000
[ "Mathematics" ]
Experimental Study to Performance Improvement of Vapor Compression Cooling System Integrated Direct Evaporative Cooler and Condenser

For areas with very hot and humid weather conditions, increased latent and sensible loads are a major problem in cooling systems: they increase compressor work, so electricity consumption also increases. Combining the condenser with direct evaporative cooling improves the heat removal process by using the evaporative cooling effect, which increases the efficiency of energy use. This paper presents a study of the combined use of an evaporative cooler and condenser. It mainly considers energy consumption in vapor compression cooling systems and related problems. From the results of this study, with the use of a condenser with evaporative cooling, power consumption can be reduced to 46% and the coefficient of performance (COP) can be increased by about 12%, at a cooling capacity of 1.2 kW.

Introduction
Energy is a very important factor in encouraging strong economic development and growth in any country, especially with the rising prices of fossil fuels [1]. Savings and reductions in energy consumption also help reduce global warming. The increase in living standards and the demand for human comfort have led to an increase in energy consumption. The amount of energy consumed by air conditioners, refrigerators and water heaters is increasing rapidly and occupies about 30% of total power consumption [3]. Electricity consumption for air conditioning systems has been estimated at about 45% for residential and commercial buildings [4]. Due to the rapid growth in world population and economy, total world energy consumption is projected to increase by about 71% from 2003 to 2030 [5]. Therefore, any attempt to reduce the overall cooling system energy consumption will contribute to large-scale energy savings at the international level. Reduced cooling unit energy consumption can be achieved by improving performance. This can be done by lowering the compressor power consumption, increasing the heat dissipation capacity of the condenser, or reducing the difference between condenser and evaporator pressure.

Higher condensation temperatures lead to an increase in the pressure ratio across the compressor, increasing the compressor's work and thereby reducing compressor life and the coefficient of performance (COP). High outdoor temperatures above 35°C in summer are one of the reasons, leading to a decrease in the coefficient of performance (COP) of most air-cooled units to the range of 2.2 to 2.4 [6]. In addition, if this temperature stays above 45°C for long periods of time, the AC will trip due to excessive condenser working pressure. Chow and Cengel [7,8] mentioned that the air conditioning performance coefficient decreases by about 2-4% for every 1°C increase in condenser temperature.
In Middle Eastern countries, atmospheric temperatures during summer approach 40-45°C, sometimes higher. Under these conditions, the AC compressor works continuously, consumes more electrical power, and the COP decreases [9]. Therefore, it is necessary to lower the ambient air temperature before it passes through the condenser coil, in order to lower the temperature and pressure of the condenser. This can be achieved by using evaporative cooling, which lowers the condensation temperature from the outdoor dry-bulb temperature to near the wet-bulb temperature [10]. The efficiency of evaporative cooling is essentially unaffected by high environmental temperatures in dry climates. Evaporative cooling delivers its most significant benefit during utility peak periods, when the difference between the dry-bulb and wet-bulb temperatures is largest [11]. This can result in significant overall energy and demand savings, because any small reduction in electricity consumption in the housing sector can save significant amounts of energy [12,13].

Experimental
The new air conditioning system, with the direct evaporative cooler integrated with the condenser, is shown schematically in Figure 1. It mainly consists of an evaporative cooler, a vapor compression cooling system, measuring instruments and control devices.

Measurement
The water flow rate of the shell-and-tube evaporative cooler is measured using a volume flow meter with an accuracy of ±0.5%, and the total airflow rate of the evaporative fan is controlled by a frequency converter. An anemometer is used to measure the air flow rate (maximum 4.8 m/s). Relative humidity was measured using a psychrometer with an accuracy of 1%. A thermo-gun is used to measure the various temperatures, and the data collected are analyzed.

Theoretical Analysis
The cooling system components, comprising one compressor, the evaporative cooling water pump and a single-phase fan, are measured separately. The cooling capacity can be calculated by applying the energy balance equations, Equations (1) and (2). Furthermore, the compressor input power and heating capacity are calculated in Equations (3) and (4). The overall performance of the air conditioning system is evaluated by the coefficient of performance (COP), which can be obtained from Equation (5). As for the condensing coil on the condenser, the heat exchange from the condenser is affected by the overall heat transfer coefficient (Ko), which can be calculated from Equation (6). According to Cooling Principles and Tools (Zhang, 1987), the heat transfer coefficient (k) of the refrigerant in the condensation coil is calculated in Equation (7); the values of the corresponding coefficients as a function of the temperature difference can be seen in Table 1 (Zhang, 1987). The heat transfer coefficient for water outside the condenser coil can be calculated from Equation (8), following Parker and Treybal (1962), and the heat transfer coefficient for air outside the condenser coil is calculated from Equation (9) according to Zhang (1987). The values of the parameters C and m as functions of the temperature difference are shown in Table 2.
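The referenced equations are not reproduced in the extracted text. As a minimal sketch, the standard vapor-compression relations that Equations (1)-(5) presumably correspond to are written out in Python below; the mass flow rate and state-point enthalpies are illustrative values, not measurements from this study, and the condenser-side correlations of Equations (6)-(9) are not reconstructed here.

```python
# Standard vapor-compression cycle relations (assumed forms of Equations (1)-(5)).
# State points: 1 = compressor inlet, 2 = compressor outlet,
#               3 = condenser outlet, 4 = evaporator inlet (h4 = h3 for isenthalpic expansion).

def cycle_performance(m_dot, h1, h2, h3, h4):
    """Return cooling capacity, compressor power, condenser heat rejection and COP."""
    q_evap = m_dot * (h1 - h4)   # cooling capacity [kW]
    w_comp = m_dot * (h2 - h1)   # compressor input power [kW]
    q_cond = m_dot * (h2 - h3)   # heat rejected by the condenser [kW]
    cop = q_evap / w_comp        # coefficient of performance
    return q_evap, w_comp, q_cond, cop

# Example with made-up values (mass flow in kg/s, enthalpies in kJ/kg):
q_e, w, q_c, cop = cycle_performance(m_dot=0.008, h1=410.0, h2=445.0, h3=255.0, h4=255.0)
print(f"Cooling capacity: {q_e:.2f} kW, compressor power: {w:.2f} kW, COP: {cop:.2f}")
```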
3 Results and Discussion
Effect of dry air inlet temperature
Many factors influence the cooling performance of air conditioning systems with evaporative cooling. To limit the number of tests, only four key parameters were tested: the water temperature in the evaporative cooler, the dry-bulb air temperature, the air velocity and the water spray rate. Experimental testing was therefore conducted under different conditions. Based on the experimental results, the thermodynamic properties of the refrigerant at the different points of the cooling cycle are obtained (Figure 1), and parameters such as mass flow rate, cooling capacity, compressor input power and the COP of the system are also calculated.

For dry-bulb temperatures of 30°C to 32°C, a relative humidity of 80%, spray water rates varied from 0.03 kg/m-s to 0.05 kg/m-s, air velocities from 4 m/s to 4.8 m/s, and a compressor frequency of 50 Hz, the variation of the cooling capacity with the evaporator inlet water temperature is plotted in Figures 2 and 3. It can be seen that the cooling capacity and COP of the air cooling system with the evaporative cooler increase as the air speed increases from 4 m/s up to 4.8 m/s, but the cooling capacity and COP of the evaporative cooling system decrease when the dry-bulb temperature increases from 30°C to 32°C. This is because the condensation temperature and pressure increase with increasing dry-bulb temperature, so the specific cooling capacity decreases and the specific compressor power increases.

Effect of air velocity and spray water rate
These tests were run at a compressor frequency of 70 Hz, 80% relative humidity, a dry air temperature of 32°C and an evaporative cooler water inlet temperature of 27°C. The variations of cooling capacity and COP with water spray rate and air velocity are plotted in Figures 4 and 5. Both the cooling capacity and the COP increase with the water spray rate. The main reason is that the cooling water soaks the condensation coil better as the spray water rate increases from 0.03 to 0.05 kg/m-s, and more moisture evaporates. Therefore, the heat and mass transfer coefficients increase at the higher spray rates. Furthermore, the temperature difference between the cooling water and the refrigerant decreases with increasing spray water rate, and the condensation temperature also decreases. Thus, cooling capacity and COP increase with increasing water spray.

On the other hand, the cooling capacity and COP of the air conditioning system with the evaporative cooler-condenser increase as the air velocity rises from 4 m/s to 4.8 m/s. With a water spray rate of 0.04 kg/m-s, the cooling capacity and COP increased from 0.23 to 0.8 kW and from 4.2 to 4.8, respectively. This is mainly because the higher air velocity increases the heat exchange coefficient and lowers the condensation temperature and pressure.

Conclusions
From the research that has been done, the improvement of air conditioning system performance with the combined evaporative cooling condenser has been analyzed, together with the influence of factors such as the evaporative cooler water temperature, the dry-bulb temperature, the air velocity and the water spray rate on cooling capacity and COP. The main conclusions are as follows.
1) The experimental tests show that the cooling capacity and COP of the air conditioning system with the combined evaporative cooling-condenser increased significantly with increasing evaporator inlet temperature, air velocity and water spray rate. The COP increased by 12% as the air velocity increased from 4.0 to 4.8 m/s and the spray rate increased from 0.03 to 0.05 kg/m-s.

2) It was also found that an increase in the dry-bulb air temperature decreases the COP.

Fig. 2. Variation of cooling capacity with dry-bulb temperature and air velocity.
Fig. 4. Variation of cooling capacity with spray water rate and air velocity.
Table 2. Values of parameters C and m.
2,105.6
2018-01-01T00:00:00.000
[ "Engineering" ]
Sustainable Development Strategy for Russian Mineral Resources Extracting Economy The immaturity of strategic and conceptual documents in the sphere of sustainable development of the Russian economy had a negative impact on long-term strategic forecasting of its neo-industrialization. At the present stage, the problems of overcoming the mineral and raw material dependence, the negative structural shift of the Russian economy, the acceleration of the rates of economic growth, the reduction of technological gap from the developed countries become strategically in demand. The modern structure of the Russian economy, developed within the framework of the proposed market model, does not generate a sustainable type of development. It became obvious that in conditions of the market processes’ entropy, without neo-industrial changes, the reconstruction of industry on a new convergence-technological basis and without increasing the share of high technology production the instability of macroeconomic system, the risks of environmental and economic security of Russia are growing. Therefore, today we need a transition from forming one industry development strategy to the national one that will take into account both the social and economic and environmental challenges facing Russia as a mineral resources extracting country. Introduction The strategy of sustainable development is considered to be the program of long-term changes in the structure of the Russian economy, integrating the mechanism of interaction between the state, business, ecologically significant subjects of the economy and the population.The implementation of the sustainable development strategy should include a number of steps that must be followed by certain strategic directions -the components of Russia's social and economic development associated with the neoindustrialization of the country's economy. The stages of programming and implementation of sustainable development strategy include the following: At the first stage, the implementation of the sustainable development strategy should completely change the raw material production of the 4th technological mode.This implies the automation and robotization of coal, oil and gas, metal ores extraction and an increase in labor productivity in these industries by 1.5-2 times (up to the level of the US and EU companies), the transfer of these industries to the full cycle of production of modern products of organic chemistry, modern alloys and carbon plastics, full supply of the latest materials of domestic engineering, construction industry.Technological base of the first stage should become available technological platforms, and investors -high-tech holdings.With regard to strategic changes in the structure of property relations and institutions, at the first stage, the state needs to initiate vertical integration of the subjects of the raw, processing and sectoral sectors, legislatively establish the most attractive conditions for investing innovations that increase the technological level of raw material extraction and processing, creating conditions for the start of environmentally friendly production. 
At the second stage, it is necessary to ensure the modernization of the processing enterprises of the 5th technological mode: auto, aircraft, shipbuilding, and energy.It is these sectors that should ensure the improvement of the ecological level of deep processing of raw materials (the 4th technological mode), which will expand the scope of application of environmentally friendly technologies and enhance their international competitiveness.We believe that it is impossible to do without imports of modern environmentally friendly technologies and their adaptation in existing technological platforms and innovative clusters, with investment support for existing in Russia commodity holdings, state guarantees and insurance.That is, the role of the state at the second stage is to stimulate the transition from the export of raw materials to its processing, large-scale investment of environmentally friendly technologies in innovative clusters, and the formation of a powerful environmental lobby. At the third stage, the development of the 6th technological mode processes is connected with convergent technologies and the emergence of new industries, in which generated environmental innovations can find demand in the raw and processing industries.These are such convergent branches of the 6th technological mode as bioenergy and the production of environmentally friendly fuel, industrial distributed computing programming, and nanomaterial engineering.The technological basis of the third stage should be formed on a cross-platform principle, in network clusters, with large-scale government support.The state at the third stage of implementation of the neo-industrial strategy should concentrate on the development of inter-cluster and cross-platform interactions, maximum benefits and investment in the development and implementation of environmentally friendly technologies, in the creation of network clusters for the production of environmentally friendly materials. It should be noted that, without government support, high risks of investing in R & D and industries in the sphere of environmentally friendly technologies discourage large Russian and international investors.At the same time, in Russia there is no real support for the neo-industrial maneuver necessary for sustainable development either in the industrial sector or in the strategic documents of the Government.The share of high-tech products of the 6th technological mode in gross Russian exports does not exceed 1.5%, while the share of innovative enterprises is below 10%.The task of creating 25 million high-tech jobs by 2020 and a general increase in labor productivity in one and a half times that of 2010 (due to the growth of investments and technological renewal of industry, state support for R & D and the upgrading of employees' skills) is a demonstration of ambitions that are not backed by any strategic documents.Therefore, if in 2012 the share of products of the 6th technological mode was about 5% in the US GDP, in Russia it was 0.3%, which corresponds to the contribution of fundamental science to the production of the final product of civil industries. 
Results and Discussion In order for the structural transformation of the Russian economy to take place in the form of neo-industrialization, rather than the consolidation of raw material specialization, it is necessary to develop a strategic union of the state and business.Today, they cannot separately initiate the sustainable development, because the state does not have corresponding institutions that determine the conditions for the inter-sectoral redistribution of financial, labor, intellectual, production resources.In turn, Russian business does not show interest in mass investment in environmentally friendly technologies. Therefore, the neo-industrial strategic union is understood, first of all, as an institutional solution to the problem of interaction between business and the state, that is, as a definite long-term plan.On the other hand, such an alliance is a combination of forms of interaction between the state and business -integration, investment, credit, emission, and social.The following strategic directions of the structural policy of sustainable development can be identified: 1.The emphasis on the use of basic (natural, energy, accumulated intellectual capital and educational level of the population) and domestic (funds of the state budget and extrabudgetary funds, capital of investors and banks, market potential of special economic zones, technological parks, interregional sales networks), but not external resources (funds of international funds, foreign capital, international marketing).This means involving the financial capital of the banking system, the stock market, the largest national corporations in the process of investing innovations, manufacturing the products of the 6th technological mode industries for the domestic market and its promotion abroad, by creating appropriate tax, credit incentives for the state, promoting the integration of R & D and high-tech processing, and also raw material productions, by using the market potential of IT, biochemical, nuclear-energy clusters. 2. Replacing the established after the "return of the state" in the 2000s term of long-term effectiveness of Russian business "participation in authority" on "participation in innovation".To do this, it is necessary to actively apply neo-dirigisme that is the full-scale use of indirect economic regulators by the state, not only in the financial sphere (in the sense of monetarists), but also to stimulate the inter-sectoral flow of investment, labor, intellectual capital, technology transfer, the development of regional clusters of productions 6th technological mode and small innovative entrepreneurship.To do this, it is important, first of all, to create the required institutional environment, which includes both state institutions that fully encourage innovation and the priority of breakthrough technologies in state support of industry, as well as business institutions that can reduce its transaction costs when investing innovations. 3. 
The creation of long-term conditions for sustainable growth of wellbeing in the process of formation of environmentally friendly production in processing industries on a new technological basis, and active support of the state for environmental entrepreneurship.As a key, long-term goal of structural reforms in the Russian economy, the growth of wellbeing is impossible without reducing the number of loss-making industrial enterprises (by 2014up to 40%), expanding employment in manufacturing and high-tech industries.At the same time, raw materials extraction remains the most profitable type of business in the Russian industry.The narrow layer of innovative firms is unstable, and most of these business entities -enterprises, banks, and venture funds -are in fact not engaged in innovations, but import of finished products and shallow processing of raw materials, violating the ecological balance. Therefore, the union of the state and business, aimed at changing the most stable elements of the structure of the national economy -formal and informal institutions, means of production, mechanisms for inter-sectoral and inter-regional distribution of factors of production and resources -should become the first step towards sustainable development in the process of neo-industrial structural transformations.That is, by influencing on the most stable elements of the structure of the national economy, the state can initiate a reduction of obstacles to sustainable development by stimulating innovative activity in manufacturing industries, involving large businesses in investing innovations.To do this, it is important to develop legislation concerning public-private partnerships, high-tech holdings, tax incentives, government loans and guarantees for innovative firms, to initiate a neo-industrial lobby.This will give impetus to the change in technological, cluster structure, due to redistribution of resources between environmentally hazardous and safe production, between raw materials and innovative clusters. 4. The development of own scientific potential, import and adaptation of nature-saving technologies instead of import of production equipment.It is necessary to take into account the experience of Argentina in the implementation of the strategy of "neo-nationalism" (comprehensive support of the national industry and domestic investors).Due to the fact that after the economic crisis in the late 1990s, the manufacturing industry was developing at a faster pace, the Argentine economy experienced a positive structural shift (the share of the modern 5th technological mode in 1997-2007 increased from 15 to 35% of GDP).This was preceded by a five-year economic policy of the state, including the return of the natural rent to the state, the creation of state-owned scientific and industrial enterprises (mainly in agriculture), large tax breaks to R & D organizations, partial control over raw materials prices, the use of state investments to increase the capitalization of the largest manufacturing companies, transfer of the latest technologies as a prerequisite for multinational corporations' foreign direct investment.As a result, Argentina overcame the dominance of agriculture and became one of the industrial countries that came close to the beginning of sustainable development. 
Realization of the potential for sustainable development of the Russian economy is impossible without the development of technological platforms and public-private partnerships in innovative clusters.Public-private partnerships in Russia play the role of attracting business entities to maintenance, operation, reconstruction, modernization or new construction of public infrastructure facilities on terms of sharing risks and responsibilities.But the state does not take responsibility for fulfilling the main condition of sustainable development -changing the institutional and reproductive structure of the economy in such a way as to ensure a massive inflow of investments in environmentally friendly technologies and speed up the replacement of existing production facilities with new ones that take into account the growing requirements of environmental safety. Neo-industrial structural transformations require an early overcoming of the existing disintegration of science, production and finance in the conditions of reforms.Today, scientific institutions, small innovative enterprises, development companies practically have neither a financial base nor an opportunity to produce an innovative product, since their connections with large industrial enterprises were not established during the reform period. Considering this, for the development of resource-technical and technological factors of sustainable development of the Russian economy, the following tools can be offered: 1. Neo-industrial state order, which should become a means of changing the trajectory of the development of relations between innovation and production business and the state towards sustainable development.We understand it as the development of market demand for environment friendly technologies, means of production and ready-to-introduce technologies for environmental protection by the state.The economic mechanism of such state order should consist of three parts: a) an order for R & D projects on environmentally friendly technologies in the framework of the appropriate sustainable development strategy, which the Russian government should adopt; b) an order for the production of products capable of reducing environmental damage from the activities of extractive enterprises dominating in the Russian industry; c) state guarantees and state export crediting not less than 15% of environmentally friendly technologies produced within the framework of the neo-industrial state order, which should serve as a guarantee of their high international competitiveness. As a promising organizational form of neo-industrial state order, a public-private partnership is considered.Within it an investment fund and venture capital funds, specialized tax and customs privileges should be formed. 2. 
Selective investment support for innovative projects of environmentally friendly technologies. When examining them, it is necessary to take into account the technological level of the products being replaced, in order to concentrate government investments on the projects most promising for the sustainable development of the Russian economy. To do this, it is proposed to classify enterprises -- potential beneficiaries of state investments -- in the following way: users of foreign technologies (not implementing the full cycle of R & D), converters of environmental technologies (adapting them to Russian production conditions using their own scientific capabilities), effective innovators (using R & D sporadically for individual projects) and breakthrough innovators, for which the creation of environmentally friendly technologies is the main factor of competitiveness.

For the manufacturing and high-tech industries least developed in Russia, such as radio electronics, instrument making, computer building, production of ultra-light and super-strong materials, microelectronics and robotics (in which the share of imports reaches 95-100%), it is advisable to provide state orders and state investments both to breakthrough and effective innovators, as well as to firms converting environmentally friendly technologies. In turn, in industries with a 60-80% import share (chemistry of semiconductors, aircraft construction, electro-instrument engineering, machine-tool building, satellite engineering, etc.), state investment support is appropriate for effective and breakthrough innovators. Finally, for industries with an import share of 40-60% (organic chemistry, vehicles), the state order and state investments are needed only for breakthrough innovators.

3. The import of environmentally friendly technologies and highly qualified specialists, which should eventually replace the import of the finished product. Along with this, it should be emphasized that there is a need for the comprehensive use of investment, organizational, institutional, financial and credit instruments of structural transformation of the Russian economy for the development of neo-industrial import substitution.

Conclusion
The mechanism of development of neo-industrial structural transformations in the Russian economy, aimed at entering the path of sustainable development, reflects a multiplicative effect. It consists in the consistent modernization of the extractive and processing industries based on environmentally friendly technologies, in systematic interaction between the state and business in the development of the necessary institutions, and in the formation of a neo-industrial state order, involving technological platforms and innovation clusters and implementing interest rate policies, state guarantees and investments.
3,568.6
2017-01-01T00:00:00.000
[ "Economics" ]
MAPK Cascades and Transcriptional Factors: Regulation of Heavy Metal Tolerance in Plants In nature, heavy metal (HM) stress is one of the most destructive abiotic stresses for plants. Heavy metals produce toxicity by targeting key molecules and important processes in plant cells. The mitogen-activated protein kinase (MAPK) cascade transfers the signals perceived by cell membrane surface receptors to cells through phosphorylation and dephosphorylation and targets various effector proteins or transcriptional factors so as to result in the stress response. Signal molecules such as plant hormones, reactive oxygen species (ROS), and nitric oxide (NO) can activate the MAPK cascade through differentially expressed genes, the activation of the antioxidant system and synergistic crosstalk between different signal molecules in order to regulate plant responses to HMs. Transcriptional factors, located downstream of MAPK, are key factors in regulating plant responses to heavy metals and improving plant heavy metal tolerance and accumulation. Thus, understanding how HMs activate the expression of the genes related to the MAPK cascade pathway and then phosphorylate those transcriptional factors may allow us to develop a regulation network to increase our knowledge of HMs tolerance and accumulation. This review highlighted MAPK pathway activation and responses under HMs and mainly focused on the specificity of MAPK activation mediated by ROS, NO and plant hormones. Here, we also described the signaling pathways and their interactions under heavy metal stresses. Moreover, the process of MAPK phosphorylation and the response of downstream transcriptional factors exhibited the importance of regulating targets. It was conducive to analyzing the molecular mechanisms underlying heavy metal accumulation and tolerance. Introduction Among a variety of soil pollutants, heavy metal pollution is a key worldwide environmental issue. Heavy metals (HMs) cannot be decomposed and will be available in the soil permanently. Heavy metals usually exist in the environment and interact with plants and human systems. The toxicity of HMs in the environment is the product of natural and human actions [1]. HMs are inorganic and non-biodegradable pollutants which are not easy to metabolize. Therefore, their concentrations in soil are increasing significantly [2]. In the surrounding environment, accidental biomagnification and bioaccumulation of heavy metals have become a predicament for all organisms, including plants [3]. A variety of heavy metals in the soil are absorbed by plants. These heavy metals may or may not be necessary for the normal growth of plants [4]. When the growth and development of plants are constantly stressed by heavy metals, their biological systems are irreversibly damaged, resulting in reductions in plant yield and productivity [5]. Therefore, plants respond to and adapt to these environmental challenges through a series of physiological and biochemical changes. This process involves a series of complex signal pathways [6,7]. Essentially, stress sources can induce intracellular signal perception through external signals (calcium or miRNA) [8]. On this basis, the signals sensed by cell membrane surface receptors are amplified step-by-step through phosphorylation and dephosphorylation and transmitted to cells [9]. In cells, signals can activate specific effector proteins, such as kinases, enzymes or transcriptional factors, in the cytoplasm or nucleus and regulate the expression of specific genes [10]. 
Heavy metal stress signals of different intermediate molecules activate different transcriptional factors, resulting in the expression of different antioxidant enzymes [11]. Among them, protein phosphorylation is an important signal transduction mode in plants, which is catalyzed by mitogen-activated protein kinases (MAPKs). MAPKs are just some of the most principal and highly conserved signaling molecules in eukaryotes. They function downstream of the sensor/receptor to coordinate cellular responses for normal growth and development of the organism [12]. MAPKs can be easily identified based on sequence similarity and the signature TXY activation motif and enable the transmission of signals generated by ligand-receptor interactions with downstream substrates [13]. There are various MAPK pathways in cells. Each pathway is interrelated and independent and plays a major role in cell signal transmission. In plants, the MAPK cascade is intertwined with other signal transduction pathways to form a molecular interaction network [14]. Diverse cellular functions in plants, including growth, development, biological and abiotic stress responses, are regulated by this network. For instance, MAPK in plants can target and regulate bZIP, MYB, MYC and WRKY transcriptional factors under stress conditions [15]. However, considerable progress has been achieved in understanding the mechanism of action of the MAPK cascade in plant innate immunity. Upon Rhizoctonia cerealis infection, strongly upregulated TaMKK5 activates TaMPK3, and the phosphorylated TaMPK3 interacts with and phosphorylates TaERF3 [16]. However, it is seldom reported that the MAPK signal is induced and activated by downstream transcriptional factors under heavy metal stress. The roots are the plant's main organ for uptake of heavy metal from the soil [17]. When the roots perceive heavy metal stress, they immediately trigger the signal transduction system and mediate the transcriptional regulation of related genes in the plants. Conserved MAPK signaling pathways are known to regulate cell growth and death, differentiation, the cell cycle and stress responses. MAPK is a series of phosphorylation steps from MAPKKKs (MKK kinases), via MAPKKs (MAPK kinases), to MAPK [18]. Its main function is to phosphorylate related transcriptional factors. Subsequently, transcriptional factors can induce metal response gene expression by binding specific cis-regulatory elements. Each hierarchy of the MAPK cascade is encoded by a small gene family, and multiple members can function redundantly in an MAPK cascade [19]. Plant MAPKs are usually located in the cytoplasm and/or nucleus, although they may also be transferred from the cytoplasm to the nucleus in some cases [20]. It is essential to understand the progress of signal transduction and regulation pathways in plants under heavy metal stress. Thus, we focused on the molecular mechanisms of heavy metals entering plant cells to produce reactive oxygen species, nitric oxide and plant hormones and activate the MAPK cascade signal pathway. In addition, the downstream transcriptional factors responding to genes of the MAPK cascade are also summarized ( Figure 1). This review provides more comprehensive background knowledge of plants under heavy metal stress and new insights into the molecular mechanisms of transcriptional factors in heavy metal tolerance and accumulation. Heavy metal exposure triggers multiple signaling pathways such as NO, ROS and phytohormones. These signals interact and activate the MAPK cascade. 
Subsequently, MAPK cascade phosphorylates and activates related transcriptional factors including bZIP, WRKY, MYB, HSF and other transcriptional factors, which further induce the expression of defense genes, metal transporter genes, PCs, MTs, antioxidant related genes, etc. Finally, heavy metal tolerance or accumulation is enhanced in plants. MAPK Was Directly Activated under Heavy Metal Stress As a cell signaling enzyme, MAPK regulates a variety of biological processes in eukaryotes [21]. MAPK pathways are very developed and complex and are usually induced to deal with biological and abiotic stress. In plants, heavy metal stress initiates a variety of signal pathways, including the MAPK cascade (Table 1). In Broussonetia papyrifera roots, under cadmium (Cd) stress over time, MAPK transcripts were downregulated at 3 hours but upregulated at 6 h [22]. Moreover, OsMPK3 and OsMPK6 overexpression lines increased the transcription level of the stress response genes encoding superoxide dismutase, ascorbate peroxidase, glutamine synthase and aldehyde oxidase under arsenic stress and drought stress [23]. Furthermore, the SlMAPK3 gene of the tomato was significantly induced under Cd 2+ treatment. The overexpression of SlMAPK3 significantly increased leaf chlorophyll content, root biomass accumulation and root activity in transgenic plants, demonstrating that SlMAPK3 enhanced Cd tolerance [24]. Different Signal Molecules Activate MAPK Pathway under Heavy Metal Stresses The MAPK cascade can interact with signal molecules such as plant hormones, active ROS and NO. The crosstalk between ABA, auxin, MAPK signaling and the cell cycle in Cd-stressed rice seedlings has also been described [38]. In Arabidopsis thaliana, exposure to excess Cd or copper (Cu) led to the activation of NADPH oxidases, hydrogen peroxide (H 2 O 2 ) overproduction and MAPK cascades [39]. In addition, the roots of soybean seedlings treated with 25 mg·L −1 Cd showed increased NO production and the upregulation of the MAKPK2 transcription level [40]. ROS It is well known that reactive oxygen species in plants are induced by heavy metals; thereby, ROS, as signal molecules, lead to the activation of MAPK kinases [41]. Two important MAPK cascades (MEKK1-MKK4/5-MPK3/6 and MEKK1-MKK2-MPK4/6) act downstream of ROS, which were found to participate in both abiotic and biotic stresses [25,26]. As a redox active metal, a certain concentration of Cu 2+ directly induces the formation of ROS. When Alfalfa seedlings were exposed to excessive Cu 2+ , ROS accumulated and activated four different mitogen-activated protein kinases (MAPKs): SIMK, MMK2, MMK3 and SAMK [27]. Except for Cu 2+ , Cd stress was able to activate ZmMPK3-1 and ZmMPK6-1 via ROS induction in maize roots [29]. Moreover, the activities of MPK3 and MPK6 increased significantly in Cd-treated Arabidopsis seedlings, whereas this increase disappeared in the plants pretreated with the ROS scavenger glutathione (GSH). The above results fully indicate that Cu 2+ -or Cd 2+ -induced ROS accumulation in plants activate MAPK cascade [28]. H 2 O 2 , as a product of oxidative stress, is involved in amplifying the functions of signal molecules. MAPK can also be activated by H 2 O 2 to maintain intracellular homeostasis [42]. Furthermore, the overexpression of downstream MAPK may also be a signaling transmission mechanism after sensing H 2 O 2 [43]. These signaling components contain at least three specific phosphorylated kinases (MAPK2, MAPK3 and MAPK6), which can be observed in all living cells. 
Moreover, excessive Cu led to the activation of NADPH and the excessive production of H 2 O 2 , thereby inducing the MAPK cascade in Arabidopsis roots [39]. All these results indicate that the MAPK cascade activated by ROS molecules can play an important role under different metal stresses. NO NO is involved in plant growth and development and regulates heavy metal responses in plants [31]. In HM treated plants, the interaction between the NO signal and the MAPK cascade has long been known [32]. Application of NO to Arabidopsis roots can rapidly activate protein kinases with MAPK properties [30]. When two-week-old Arabidopsis was exposed to 100 µM CdCl 2 for 24 h, Cd 2+ -induced NO production was investigated with the NO-sensitive fluorescent probe DAF-FM diacetate. Moreover, Cd 2+ -induced MAPK and caspase-3-like activities were inhibited in the presence of the NO-specific scavenger (cPTIO). These results prove that NO can quickly activate protein kinases with MAPK characteristics in Arabidopsis roots [33]. On the contrary, the caspase-3-like activity was significantly inhibited in mpk6 mutants after Cd 2+ treatment, and the tolerance of Arabidopsis mpk6 mutants to Cd 2+ and NO concentrations was also reduced [34]. In addition, this seriously affects the growth of rice seedlings and promotes the production of ROS and NO in rice roots after excessive Ar exposure. Subsequently, MAPK and MPK were activated in rice leaves and roots, respectively [35]. Plant Hormones A number of phytohormones such as salicylic acid (SA), abscisic acid (ABA), auxin (IAA) and ethylene (ET) participated in important stress-related and developmental plant processes. The MAPKs homolog AtMPK3 and AtMPK6 of Arabidopsis are mainly involved in some environmental and hormonal responses [36]. As a homolog of AtMPK6, SIPK of tobacco has been proved to be a protein kinase induced by SA, which can be activated under environmental stresses [37]. Upon exposure to Cd, ABA could partially compensate the inhibitory effect of Cd on rice root growth, reduce auxin accumulation and affect the distribution of auxin. Moreover, the key genes of auxin signal transduction, including YUCCA, PIN, ARF and IAA, are negatively regulated by MAPK [38]. Moreover, ET and MAPK signal pathway-related genes were induced in soybean seedlings with Cd treatment. Subsequently, promoter sequence analysis showed that multiple regulatory motifs sensitive to ET and other plant hormones were found in MAPKK2 [32]. Transcriptional Factors Regulate Heavy Metal Tolerance MAPKs can phosphorylate various transcriptional factors in different abiotic stresses [44,45]. Transcriptional factors contain many phosphorylation sites and can regulate heavy metal stress by controlling the expression of downstream genes ( Table 2). They also function as a central component in the regulatory networks of heavy metal detoxification and tolerance. Currently, many transcriptional factors with regard to heavy metal detoxification and tolerance have been found in plants. Among them, transcriptional factors such as basic leucine zipper (bZIP), heat shock transcription factor (HSF), WRKY, myeloblastosis protein (MYB) and ethylene-responsive transcription factor (ERF) have been known to play important roles in regulating heavy metal detoxification and tolerance in plants ( Figure 1). AtMYB4 Cd MYB4 regulates Cd-tolerance via the coordinated activity of improved anti-oxidant defense systems and through the enhanced expression of PCS1 and MT1C under Cd-stress in Arabidopsis. 
The remaining entries of Table 2 list the regulator, the metal concerned, its function and the reference:
DwMYB2 (Fe): the translocation of iron from root to shoot is affected by DwMYB2 [63].
RsWRKY (Cd, Pb): RsWRKY transcripts were significantly elevated under Cd and Pb treatments [64].
AtWRKY12 (Cd): WRKY12 represses GSH1 expression and negatively regulates cadmium tolerance in Arabidopsis [66].
AtWRKY13 (Cd): WRKY13 activates DCD during cadmium stress [67].
GmWRKY142 (Cd): GmWRKY142 confers cadmium resistance by upregulating the cadmium tolerance 1-like genes [68].
AtWRKY47 (Al): a WRKY transcription factor confers aluminum tolerance via regulation of cell wall modifying genes [69].
AtWRKY6 (As): the WRKY6 transcription factor restricts arsenate uptake and transposon activation in Arabidopsis [70].
SaHsfA4c (Cd): the expression of SaHsfA4c was induced by cadmium and enhanced Cd tolerance through ROS-scavenging activities and heat shock protein expression.
TaHsfA4a/OsHsfA4a (Cd): HsfA4a of wheat and rice confers Cd tolerance by upregulating MT gene expression [73].
PvBip1 (Cd): HSF/HSP participates in the reconstruction of protein conformation and improves intracellular homeostasis to increase cadmium tolerance [74].
HsfA1a (Cd): HsfA1a upregulates melatonin biosynthesis to confer cadmium tolerance in tomato plants [75].
PuHSFA4a (Zn): PuHSFA4a activates the antioxidant system and root development-related genes and directly targets PuGSTU17 and PuPLA.
AemNAC2 (Cd): overexpression of AemNAC2 led to reduced cadmium concentration [77].
VuNAR1 (Al): VuNAR1 regulates Al resistance by regulating cell wall pectin metabolism via directly binding to the promoter of WAK1 and inducing its expression [78].
ZAT6 (Cd): ZAT6 coordinately activates PC synthesis-related gene expression and directly targets GSH1 to positively regulate Cd accumulation and tolerance in Arabidopsis [76].
PvERF15 (Cd): PvERF15 and PvMTF-1 form a cadmium-stress transcriptional pathway [82].
4.1. bZIP bZIPs, a large family of transcriptional factors in plants, are involved in a variety of biological processes and environmental challenges. A total of 135 bZIP-encoding genes were discovered by analyzing the whole genome and transcriptome of radish (Raphanus sativus). Specifically, RsbZIP010 exhibited downregulated expression under a variety of heavy metal stresses, such as Cd, Cr and lead (Pb) stresses [46]. In the Glycyrrhiza uralensis genome, 66 members of the GubZIP gene family were identified using a series of bioinformatics methods based on the hidden Markov model (HMM). Among them, 45 and 51 GubZIP genes were differentially expressed in roots and leaves, respectively, under 0.02 g·kg −1 Cd stress [47]. Recently, LOC_Os02g52780, an ABA-dependent stress-related gene that belongs to the bZIP transcription factor family, was found by mapping QTLs using 120 rice recombinant inbred lines and was further linked to Cd accumulation in rice grains. The significant difference in the expression of the LOC_Os02g52780 gene between the parents indicates that it is related to the tolerance of rice to Cd stress and may affect Cd accumulation in rice grains [48]. The TGA (TGACG motif-binding factor) factor in Arabidopsis is a member of a subfamily of bZIP transcription regulators and is involved in the induction of pathogenesis- and resistance-related genes [49].
When Arabidopsis responded to chromium (Cr 6+ ) stress, bZIP transcription factor TGA3 enhanced transcription of L-cysteine desulfhydrase (LCD) through a calcium (Ca 2+ )/calmodulin2 (CaM2)-mediated pathway and then promoted the generation of hydrogen sulfide (H 2 S), whereas H 2 S can trigger various defense responses and help reduce accumulation of HMs in plants [50]. In heavy metal accumulator Brassica juncea, the TGA3 homologous gene BjCdR15 is upregulated in plants with Cd treatment for 6 h, indicating that BjCdR15 transcription factor plays an irreplaceable role in regulating the absorption and long-distance transport of Cd. Western analysis showed that the abundance of AtPCS1 protein increased significantly in Cd-treated plant shoots. Moreover, its overexpression confers tolerance and accumulation of Cd in A. thaliana and tobacco due to the regulation of the synthesis of phytochelatin synthase and the expression of several metal transporters [51]. BnbZIP3 from Boehmeria nivea positively regulates heavy metal tolerance. On the contrary, BnbZIP2 shows higher sensitivity to drought and heavy metal Cd stress during seed germination [52]. Additionally, bZIP transcription factor also interacts with other transcriptional factors (TFs) or mediates the downstream TFs to regulate Cd uptake. For example, bZIP transcription factor ABSCISIC ACID-INSENSITIVE5 (ABI5) interacts with MYB49 and represses its function by preventing its binding to the downstream genes bHLH38, bHLH101, HIPP22 and HIPP44, resulting in the inactivation of IRT1 and a reduction in Cd uptake in A. thaliana [53]. In the zinc deficiency reaction in A. thaliana, two members of group F in the bZIP transcriptional factors, bZIP19 and bZIP23, can bind zinc (Zn) ions to the zinc sensor motif and play the function of a central regulator [54,55]. MYB MYB proteins are key regulators controlling development, metabolism and responses to biotic and abiotic stresses [83]. In rice, OsMYB45 positively regulates Cd stress, and its mutant exhibited lower catalase (CAT) activity and higher concentrations of H 2 O 2 in the leaves compared with the wild-type [56]. SbMYB15, from a succulent halophyte Salicornia brachiata Roxb, is an important heavy metal response gene. Overexpression of SbMYB15 in transgenic tobacco can reduce the absorption of heavy metal ions Cd and nickel (Ni) and improve the scavenging activities of the antioxidative enzymes (CAT and SOD) [57]. AtMYB4 regulates Cd tolerance by enhancing protection against oxidative damage and increases expression of PCS1 and MT1C [58]. Furthermore, JrMYB2 from Juglans regia is considered to be an upstream regulator of JrVHAG1 that improves CdCl 2 tolerance in plants. Under Cd treatment, the heterologous overexpression of JrVHAG1 in A. thaliana showed a significant increase in fresh weight and primary root length and higher activities of SOD and POD compared with the wild-type [59]. As is a toxic metalloid in plants, usually in combination with sulfur and metals, and can be found in two inorganic forms, arsenite [As (III)] and arsenate [As (V)] [84]. Rice R2R3 MYB transcription factor OsARM1 (ARSENITE-RESPONSIVE MYB1) regulates the absorption of As (III) and root-to-stem transport by regulating the As-associated transporters (OsLsi1, OsLsi2 and OsLsi6) [61]. 
In Arabidopsis, AtMYB40 negatively regulated the expression of PHT1;1 (Pi transporter) and positively regulated the expression of PCS1, ABCC1 and ABCC2, which acts as a central regulator conferring plant As (V) tolerance and reducing As (V) uptake [62]. In addition to Cd and As, MYB transcriptional factors are also involved in the homeostasis or absorption of essential elements such as Zn and iron (Fe). MYB72 is involved in metal homeostasis in Arabidopsis, and its knockout mutant was more sensitive to excess Zn or Fe deficiency compared to the wild-type [60]. Moreover, DwMYB2 from the orchid can enhance Fe absorption as a regulator. In DwMYB2-overexpressing Arabidopsis plants, the Fe content in roots is two-fold higher compared to that in wild-type roots, while the reverse is true in shoots. This difference in Fe content between roots and shoots indicated that the translocation of iron from root to shoot in transgenic plants was regulated by DwMYB2 [63]. WRKY WRKY proteins, composed of a WRKY domain (WRKYGQK) and a zinc finger motif, can generally recognize the cis-acting W-box elements (TTGACC/T) of downstream genes. WRKY genes were found in many plant genome databases. A total of 126 WRKY genes have been found in the radish genome database. RT-qPCR analysis showed that 36 RsWRKY genes changed significantly under one or more heavy metal stresses. Specifically, 24 and 20 RsWRKY transcripts were induced under Cd and Pb treatments, respectively [64]. In soybean, 29 Cd-responsive WRKY genes were retrieved through the comprehensive transcriptome analysis of soybean under Cd stress. The overexpression of GmWRKY142 in A. thaliana and soybean decreased Cd uptake and positively regulated Cd tolerance. Further analysis indicated GmWRKY142 activated the transcription of AtCDT1 (Digitaria ciliaris cadmium tolerance 1), GmCDT1-1 and GmCDT1-2 by directly binding to the W-box element in their promoters; however, CDT1 rich in cysteine (Cys) proteins are important chelators of Cd [68]. Besides, AtWRKY6 controls As (V) uptake through the regulation of Pi transporters while simultaneously restricting arsenate-induced transposon activation [70]. WRKY can enhance plant Cd tolerance or maintain the balance of metal ions by regulating downstream functional genes. AtWRKY12 negatively regulates Cd tolerance in Arabidopsis though directly binding to the W-box of the promoter in GSH1 and indirectly repressing phytochelatin synthesis-related gene expression [65]. Another WRKY transcription factor AtWRKY13 enhances plant Cd tolerance by directly upregulating an ABC transporter PDR8 [66] and promoting D-cysteine desulfhydrase and hydrogen sulfide production in Arabidopsis [67]. Additionally, AtWRKY47 regulates genes responsible for cell wall modification (e.g., XTH17, ELP), which can maintain aluminum (Al) balance in ectoplasts and symplasts and improves Al tolerance [69]. HSF Heat shock transcription factor is well known for responding to external heat stress. The member of class A has also been reported to be involved in the heavy metal stress response. A total of 22 Hsf members were identified in Cd/Zn/Pb hyperaccumulator Sedum alfredii and phylogenetically clustered into three classes, SaHsfA, SaHsfB and SaHsfC. In detail, 18 SaHsfs were responsive to Cd stress [71]. The expression levels of SaHsfA4c transcripts and proteins in all tissues were induced by Cd. 
Concurrently, it can upregulate ROSrelated genes and HSPs, resulting in lower levels of ROS accumulation after Cd stress in transgenic Arabidopsis and non-hyperaccumulation ecotype S. alfredii [72]. HsfA4a in wheat and rice, all belonging to class A4a Hsfs, can confer Cd tolerance by upregulating metallothionine gene expression [73]. Transcriptome analysis of Cd-treated switchgrass roots showed that HSF/HSP was involved in the process of normal protein conformation reconstruction and intracellular homeostasis under Cd stress. Overexpression of an HSP gene in Arabidopsis significantly improved the tolerance of plants to Cd [74]. Transcription factor heat shock factor A1a (HsfA1a) can induce melatonin biosynthesis to some extent and endow tomato plants with Cd tolerance [75]. Moreover, PuHSFA4a from Populus ussuriensis regulates the target genes PuGSTU17 and PuPLA to activate the antioxidant system and root development, thereby promoting excess-Zn tolerance in roots [76]. Therefore, the members of class HsfA enhance heavy metal tolerance by regulating the expression of key genes such as heavy metal chelators or antioxidants. Other TFs In addition to bZIP, MYB, WRKY and HSF, other transcription factor families also regulate the heavy metal response. In Aegilops markgrafii, overexpression of AemNAC2 in wheat led to reduced Cd concentrations, thus contributing to Cd tolerance [77]. Vigna umbellata NAC-type TF, VuNAR1, confers Al resistance by regulating cell wall pectin metabolism [78]. Cd induces the expression of a C 2 H 2 zinc-finger transcription factor, ZAT6, which could directly target GSH1 expression, thereby triggering Cd-activated PC synthesis in Arabidopsis [79]. Basic helix-loop-helix (bHLH) transcriptional factors AtbHLH104, AtbHLH38 and AtbHLH39 positively regulate genes involved in heavy metal absorption and detoxification [80,81]. There is also a complex regulation network between these transcriptional factors. For example, the ABA-mediated ABI5-MYB49-HIPP regulatory network repressed Cd uptake in Arabidopsis [76]. In Phaseolus vulgaris, ethylene responsive factors PvERF15 and metal response element-binding transcription factor (MTF) PvMTF-1 form a Cd-stress transcriptional pathway [82]. Conclusions The MAPK cascade pathway is known to play an important role in plant growth, development and resistance to stress. For example, drought stress activates the MAPK cascade, phosphorylates selected targets and controls the activities of phospholipase, microtubule associated protein, cytoskeleton protein, kinase and other transcriptional factors in response to drought stress. A novel GhMAP3K15-GhMKK4-GhMPK6-GhWRKY59 phosphorylation loop that regulates the GhDREB2-mediated and ABA-independent drought responses in cotton has been identified [85,86]. However, in comparison with other abiotic stresses, there is little information about the MAPK cascade phosphorylating transcriptional factors in plant responses to heavy metals. Heavy metals have the characteristics of strong biological toxicity and rapid migration. They can lead to plant nutritional defects, inhibition of chlorophyll synthesis, reduction of photosynthesis, oxidative stress and, finally, inhibit plant growth and even result in death [87]. Plant roots sense heavy metal stress, trigger signal transduction and then cause a series of changes in physiological state and microstructure. 
Plant responses to HMs are regulated by the differential expression of genes, the enhancement of antioxidant-system activity and by the synergistic crosstalk between signal molecules. In depth understanding of the plants' heavy metal stress perception, signal transduction and response processes are the prerequisites for plants to maintain stability under stress conditions [88]. The perception of heavy metal stress can trigger a variety of signal molecules in plants, such as NO, hormones, ROS, etc. These signaling molecules may activate the MAPK cascade. The kinase signal from upstream transmits to the downstream receptor and activates transcriptional factors, such as bZIP, HSF, MYB, WRKY, etc. These transcriptional factors promote the absorption, transport, isolation and detoxification of HMs by regulating downstream functional genes. These cascade responses involve complex and ordered mechanisms of the synergistic intracellular and extracellular regulation of homeostasis which is designed to translate extracellular stimuli into intracellular responses. Improving the chances of plant survival in heavy metal environments requires the activation of multiple defense responses. At present, the research on plant response to heavy metal stress has been carried out continuously. Although many signaling molecules are involved in plant responses to HM exposure, the exact nature of signal transduction is still unclear, as are the interactions between signal molecules and the functions of target proteins. In addition, there are still some gaps in our knowledge regarding the regulatory circuits of stress responses required for the protection of plant reproductive development. Therefore, it is necessary to explore a variety of ways to understand the tolerance and accumulation mechanisms of plants to HMs. In future studies, the key genes involved in HM accumulation should be further determined. At the molecular level, it is of great significance to clarify the interactions of signal transduction and signal cascades in plants with heavy metal exposure.
6,063.4
2022-04-01T00:00:00.000
[ "Environmental Science", "Biology" ]
The Multitude of Molecular Hydrogen Knots in the Helix Nebula We present HST/NICMOS imaging of the H_2 2.12 \mu m emission in 5 fields in the Helix Nebula ranging in radial distance from 250-450" from the central star. The images reveal arcuate structures with their apexes pointing towards the central star. Comparison of these images with comparable resolution ground based images reveals that the molecular gas is more highly clumped than the ionized gas line tracers. From our images, we determine an average number density of knots in the molecular gas ranging from 162 knots/arcmin^2 in the denser regions to 18 knots/arcmin^2 in the lower density outer regions. Using this new number density, we estimate the total number of knots in the Helix to be ~23,000, which is a factor of 6.5 larger than previous estimates. The total neutral gas mass in the Helix is 0.35 M_\odot assuming a mass of ~1.5x10^{-5} M_\odot for the individual knots. The H_2 intensity, 5-9x10^{-5} erg s^{-1} cm^{-2} sr^{-1}, remains relatively constant with projected distance from the central star, suggesting a heating mechanism for the molecular gas that is distributed almost uniformly in the knots throughout the nebula. The temperature and H_2 2.12 \mu m intensity of the knots can be approximately explained by photodissociation regions (PDRs) in the individual knots; however, theoretical PDR models of PNs under-predict the intensities of some knots by a factor of 10. INTRODUCTION Approximately 50 planetary nebulae (PNs) are presently known to have "small scale" heterogeneities located inside or outside the main ionized nebulae (Gonçalves et al. 2001). Cometary knots are a subcategory of small scale structures found commonly in nearby, evolved PNs (O'Dell et al. 2002). Because of its proximity (213 parsecs; Harris et al. 1997), the Helix Nebula (NGC 7293) is the best case for studying the structure and excitation conditions of cometary knots. The nature of the cometary knots in the Helix was first established by Meaburn et al. (1992). The detailed structure of the cometary knots has been resolved in ionized gas lines in the optical by O'Dell & Handron (1996), with further detailed analysis by O'Dell (1998) and O'Dell et al. (2000). The emerging optical picture of the cometary knots reveals that they are neutral gas condensations that appear as comet-like structures with rims bright in Hα and tails that appear as shadows in [OIII] and that point away from the central star. The rim of low-excitation ionized gas has a steep temperature gradient indicating that the knots are photo-evaporating and that ionization fronts are advancing into the knots (O'Dell et al. 2000). A recent analysis of knots over the whole Helix nebula by O'Dell et al. (2004) revealed a new 3-D picture for the main ring of the Helix: it is composed of a disk structure and an outer ring tilted almost perpendicularly with respect to the disk. Within each of these components, they observed a similar, progressive evolution in the structure of the knots. The knots closest to the central star and clearly inside of the ionization front were elegantly carved with the brightest rims. The knots furthest from the central star appeared slightly more amorphous in their structure with less well defined rims.
The culmination of these optical observations appear to support the theory that these knots were initially formed earlier by instabilities at the ionization front or perhaps by the interaction of the fast stellar wind and then have been sculpted by interaction with the harsh radiation field of the central star (Capriotti 1973). In contrast to the high angular resolution (∼0.01 ′′ ) optical studies of the ionized gas lines in the cometary knots, the molecular gas observations have had lower angular resolution (4 ′′ -41 ′′ ) and sensitivity making it difficult to determine the detailed structure and excitation of the main gas component of the cometary knots. These low resolution studies have revealed that the Helix has retained a significant amount of molecular gas (Young et al. 1999;Huggins & Healy 1986;Speck et al. 2002) and that the molecular gas appears to be very clumpy and is probably confined to cometary knot structures (Speck et al. 2002;Huggins et al. 2002). The only detailed study of an isolated cometary knot, which is close to the central star, shows no evidence for large velocities in the molecular gas, ruling out a stellar wind shaping the knot, and reveals a stratified structure for the ionized and molecular gas emissions that is expected in a photodissociation region (PDR) (Huggins et al. 2002). However, since recent optical studies show an evolution of the knot structure with radial distance from the central star , it is not clear that this single knot study is representative of all the knots in the nebula. In order to determine the structure and excitation of the H 2 emission in the cometary knots at comparable resolution to optical images across the Helix, we pursued high angular resolution (∼0.2 ′′ ) NICMOS/NIC3 F212N H 2 images at several locations in the nebula, in parallel with the HST/ACS program recently published by O'Dell et al. (2004). The remainder of this paper is organized as follows. In section 2, we report the observation and data processing procedures. In section 3, we discuss the major observational results and how these relate to the optical ionized gas line emissions imaged by O'Dell et al. (2004). In section 4, we interpret the observations in the context of current understanding of the Helix's 3-D structure and discuss the number density, mass, evolution and excitation of the knots as revealed by our H 2 images. We summarize our conclusions in section 5. OBSERVATIONS The Hubble Helix project (GO program 9700; PI: M. Meixner) imaged the Helix nebula during the 2002 Leonids meteor shower that presented a risk to the HST. The imaging involved a 9-panel mosaic of the Helix using the ACS WFC instrument in the F658N filter (transmitting equally well both the Hα 6563Å and [N II] 6584Å lines) and the F502N filter (dominated by the [O III] 5007Å line). In parallel with the ACS imaging, we used NICMOS (Thompson et al. 1998) to image 7 of the possible 9 field positions, 5 of which landed on the nebula (positions 1, 2, 3, 4 and 5) and 2 of which were off the nebula (positions 7 and 9) and used for background measurements for the 5 fields on the nebula. Figure 1 shows the location of these fields on the Helix and the RA and Dec of the field centers for field positions 1, 2, 3, 4, and 5 are listed in Table 1. These parallel NICMOS field positions had insignificant overlap with the the ACS images. Because we wanted maximum field of view and our target was a diffuse nebula, we used the NIC3 camera, 0. 
′′ 2 pixel −1 , with the F212N filter to image the H 2 2.12 µm line emission in the nebula. For field positions 1 and 2, half the time was spent in the Paα filter F187N that is sufficiently low signal-to-noise as to be useless and is not discussed further. For each field position, the two dither positions for ACS resulted in two slightly overlapping NICMOS/NIC3 images. The NIC3 MULTIACCUM, FAST readout mode was used. The Hubble Helix project and its results (McCullough & Hubble Helix Team 2002) immediately went into the public domain. The ACS images were analyzed in combination with ground based CTIO images in similar filters and have been published by O'Dell et al. (2004). In this work we analyze and discuss the NICMOS H 2 2.12 µm emission and its relation to the ionized gas at high spatial resolution. The NICMOS/NIC3 images were reduced and calibrated using the standard set of NIC-MOS calibration programs provided in the latest version (Version 3.1) of IRAF/STSDAS 2 . The CALNICA calibration routines in STSDAS perform zero-read signal correction, bias subtraction, dark subtraction, detector non-linearity correction, flat-field correction, and flux calibration. The pedestal effect was removed by first manually inserting the STSDAS task biaseq in the middle of the CALNICA processes (before flat-fielding) and then employing the STADAS task pedsub after the CALNICA processes. Cosmic rays were identified and replaced by the median filtered pixel value. The four dither positions for field positions 7 and 9 were combined to make a "sky" image that is completely attributed to the telescope emission. This "sky" was subtracted from each of the dither frames for field positions 1-5 resulting in a H 2 dominated emission frame. The continuum emission from the Helix nebula in the F212N filter is negligible in the sky-subtracted images as demonstrated by Speck et al. (2002). The two dither positions for each field position were combined using drizzle which magnifies the images by a factor of 2. The final drizzled images have a plate scale of 0. ′′ 10 pixel −1 and have been rotated so that north is up and east is to the left. Total integration times for the final, drizzled F212N images ranged from 768 seconds for field positions 1 and 2 to 1792 seconds for field position 3. In order to discern the relative distribution of the H 2 line emission with the ionized gas line tracers, we compare our results with comparable resolution optical images. The overlap between the NICMOS fields and the ACS fields is insignificant. Fortunately, O'Dell et al. (2004) presented ground based CTIO images of Hα/[NII], [OIII], Hβ and [SII] of the entire Helix Nebula at comparable resolution to our NICMOS images. We registered these CTIO images to the WCS of the NICMOS images. The initial comparison, using just the absolute coordinates of the NICMOS and CTIO images, permitted a close enough alignment to identify at least 1 star in common between the NICMOS and CTIO images that was used for translational alignment. The four CTIO images had 6-7 stars in common with each other and our detailed comparison revealed small rotational errors between them up to 0.08 • relative angular rotations. Using the star in common with the NICMOS image as the "origin," we improved the relative rotational alignment of the CTIO images to better than 0.005 • . The CTIO images were then translated to the NICMOS WCS position, by aligning the star in common using the task "register" in IRAF. 
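The background ("sky") subtraction described above is a generic image-arithmetic step; a minimal numpy sketch of that logic is given below. The file names, the choice of a median combine for the off-nebula dithers, and the 3x3 median filter with a 5-sigma threshold for cosmic-ray replacement are illustrative assumptions, not the actual CALNICA/STSDAS parameters used in the reduction.

```python
# Minimal sketch of the "sky" subtraction and cosmic-ray cleaning described above.
# File names, the median combine, and the rejection threshold are illustrative
# assumptions, not the actual pipeline parameters.
import numpy as np
from astropy.io import fits
from scipy.ndimage import median_filter

def build_sky(off_nebula_files):
    """Combine the off-nebula dithers (e.g. field positions 7 and 9) into a
    single 'sky' frame attributed entirely to telescope emission."""
    frames = np.array([fits.getdata(f).astype(float) for f in off_nebula_files])
    return np.median(frames, axis=0)

def clean_and_subtract(on_nebula_file, sky, nsigma=5.0):
    """Subtract the sky frame and replace cosmic-ray hits by the local
    median-filtered value (a simple stand-in for the rejection actually used)."""
    img = fits.getdata(on_nebula_file).astype(float) - sky
    smooth = median_filter(img, size=3)
    resid = img - smooth
    bad = np.abs(resid) > nsigma * np.std(resid)   # crude cosmic-ray mask
    img[bad] = smooth[bad]
    return img

# Hypothetical usage:
# sky = build_sky(["pos7_d1.fits", "pos7_d2.fits", "pos9_d1.fits", "pos9_d2.fits"])
# h2_frame = clean_and_subtract("pos1_d1.fits", sky)
```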
Only the parts of the CTIO images that overlap with the NICMOS field positions are shown. For each field position, we selected a prominent H 2 knot, labeled with a cross in the figures, for surface brightness measurements of the H 2 emission and made a cross-cut along the radial direction from the central star in order to quantify the relation between H 2 emission and distance from the central star. These positions and cross-cuts are labeled on Figs. 2-6, and the locations of the H 2 knots with respect to the field centers are listed in Table 1. Table 2 lists the surface brightness measurements for H 2 and the optical line tracers at the knot positions. The average surface brightness of the H 2 knot emission (DN/pixel) was determined for a circular aperture enclosing the brightest part of the H 2 emission (∼3 pixel radius centered on the +). We then converted this average into physical units by multiplying by 2.44929 × 10 −18 erg s −1 cm −2 Å −1 DN −1 (DN = counts per second), which is the photometric conversion keyword derived for the NICMOS NIC3 camera, filter F212N, for the 77.1 K detector, appropriate for observations taken after January 2002. Finally, to arrive at the units in Table 2, we multiplied by 212.1 Å, which is the bandwidth of the F212N filter, and divided by 2.2350443 × 10 −13 sr, which is the solid angle of the 0. ′′ 10 pseudo-pixel. For each of the measured H 2 knots in the NICMOS fields, we measured the surface brightness of the ionized gas emission lines, Hα [NII], [OIII], Hβ and [SII], in the same way and list the results in Table 2. A conversion factor of 1.66 × 10 10 was used to convert ADUs/pixel to photons s −1 cm −2 sr −1 for all the CTIO images, and then each image was multiplied by its photon energy, hc/λ, to determine the surface brightness in units of erg s −1 cm −2 sr −1 , the same as the NICMOS H 2 line measurements. For the cross-cuts shown in Figure 7, we applied the same conversion factors. Results Figures 2-6 compare the NICMOS H 2 images with the corresponding CTIO images (O'Dell et al. 2004) for the respective field positions 1, 2, 3, 4, and 5. Our NIC3 images have better sensitivity and angular resolution than the image in Speck et al. (2002), and we observe H 2 line emission at much larger distances than seen in their large scale, ground based mosaic. The NICMOS field positions 1 and 2 lie closer to the central star, at approximately the same projected distance. Field position 3 follows next in radial distance, with positions 5 and 4 overlapping at the farthest radial distances (Fig. 7). In all of the NICMOS field positions the H 2 emission is highly structured, revealing arcs and pillars of emission that point towards the central star (Figures 2-6). This highly structured appearance contrasts with the smoother, less structured appearance of the ionized lines. This difference indicates that the H 2 line emission is confined to the high density neutral gas of the cometary knots (O'Dell & Handron 1996). On the other hand, the ionized gas emission arises from both the more diffuse nebula (50 cm −3 ) and the cometary knots. Closer inspection of all 5 positions reveals that the Hβ emission structures correlate very well with the structures observed in H 2 . The [SII] emission appears to correlate with the H 2 emission in positions 1 and 2, but does not show the structure as well as the Hβ. This suggests that the [SII] emission is more extended, diffuse or less defined by the cometary knots.
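For reference, the surface-brightness calibration described above, both the NICMOS DN-per-second conversion and the CTIO ADU conversion, can be collected into a short script. The numerical constants are the ones quoted in the text; the function names and the example Hβ wavelength are illustrative, not part of the original reduction.

```python
# Sketch of the surface-brightness conversions described above.
# Constants are taken from the text; names and the example wavelength
# for the CTIO conversion are illustrative assumptions.
PHOTFLAM_F212N = 2.44929e-18     # erg s^-1 cm^-2 A^-1 per DN (77.1 K detector)
BANDWIDTH_F212N = 212.1          # A, bandwidth of the F212N filter
PIXEL_SR = 2.2350443e-13         # sr, solid angle of the 0."10 pseudo-pixel

def nicmos_dn_to_surface_brightness(dn_per_pixel):
    """Convert a NICMOS F212N count rate (DN/pixel) to erg s^-1 cm^-2 sr^-1."""
    return dn_per_pixel * PHOTFLAM_F212N * BANDWIDTH_F212N / PIXEL_SR

CTIO_ADU_TO_PHOTONS = 1.66e10    # photons s^-1 cm^-2 sr^-1 per (ADU/pixel)
H_C = 1.986e-8                   # erg A, h*c, so h*c/lambda is the photon energy

def ctio_adu_to_surface_brightness(adu_per_pixel, wavelength_angstrom):
    """Convert a CTIO count rate (ADU/pixel) to erg s^-1 cm^-2 sr^-1 for a line
    at the given wavelength."""
    photon_energy = H_C / wavelength_angstrom          # erg per photon
    return adu_per_pixel * CTIO_ADU_TO_PHOTONS * photon_energy

# Example with hypothetical count rates:
# print(nicmos_dn_to_surface_brightness(1.0))          # ~2.3e-3 erg s^-1 cm^-2 sr^-1 per DN
# print(ctio_adu_to_surface_brightness(1.0, 4861.0))   # Hbeta
```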
The Hα [NII], which appears like a combination of the Hβ and [SII] emission, follows the H 2 emission but is much more diffuse in appearance than the H 2 or Hβ. In positions 1 and 2, the uniformity of the [OIII] emission is punctuated by dark shadows of cometary knots that appear clearly in H 2 emission. The H 2 knots must lie in front of some of the highly ionized emitting gas in order to cause the extinction of the [OIII] emission. The fact that most of the H 2 knots do not appear as [OIII] shadows indicates that most of the H 2 knots are confined within the neutral disk and below the highly ionized, diffuse gas apparent in [OIII]. The combined cross-cuts (Fig. 7) dramatically show the difference between the H 2 emission and the ionized gas line tracers. The ionized gas tracers appear to drop smoothly with increasing radius. Differences in the intensity vs. radial distance for these ionized gas tracers are due to the photo-ionization structure of the Helix (O'Dell 1998; O'Dell et al. 2004). Of all the ionized gas tracers shown, the Hβ profiles reveal some clumped structure at the 5-10% level on their basically smooth profiles. In contrast, the H 2 emission appears to fluctuate almost randomly with radial distance because of the highly clumped, knot structure of the H 2 emission. The H 2 intensity does not appear to decrease significantly with radial distance because the individual knots have a small range of intensities. Positions that overlap in radial distance, 1 & 2 and 4 & 5, are offset in their optical line emissions because they are located at very different azimuthal positions and the nebula has a distinct variation with azimuthal angle. Interestingly, the H 2 emission does not appear to have offsets in the overlap regions, further supporting that the H 2 emission has an almost constant level with radial distance. Detailed comparison of the line intensities of the brighter knots in each field position also supports this contrast between the optical ionized gas emission and the H 2 emission. Table 2 lists the brightnesses of a small region in each field position, shown as a white cross in Figures 2-6. These regions were selected as bright rims of the H 2 knots and hence measure the brightest H 2 emission in the fields. Our NICMOS images were taken farther out in the Helix nebula, into regions that were below the detection limit of Speck et al. (2002), ∼ 10 −4 erg s −1 cm −2 sr −1 , and hence complement their picture. The H 2 emission from the bright H 2 clumps decreases by a factor of two between the nearest (pos1) and most distant (pos5) clumps. In comparison, the ionized line emission drops more steeply, with the Hβ emission dropping by a factor of 4 and the [OIII] line dropping by a factor of 5. The H 2 nebular structure Speck et al. (2002) imaged the entire Helix nebula in H 2 , showing that it followed the rest of the gas mass tracers, and interpreted the H 2 as arising in knots in the main disk of the nebula. However, in their recent analysis of the ACS and CTIO data in combination with velocity data from the literature, O'Dell et al. (2004) revealed an entirely new 3-D structure of the Helix nebula in which the main ring is broken into an inner disk and an outer ring that are almost perpendicular with respect to each other (∼78°). It is the superposition of these two rings that gives the Helix its helical appearance. Similar multiple axis structures are more easily observed in edge-on poly-polar planetary nebulae such as NGC 2440 (Lopez et al.
1998); however, the almost pole-on view of the Helix has made it more difficult to define its geometry. How does the full-nebula H 2 image of Speck et al. (2002) fit into this new paradigm for the Helix? Speck et al. (2002). The H 2 near-IR emission of the inner disk is more cleanly separated from the outer ring structure than is found in the optical line tracers and supports the conclusion that the inner disk is a separate structure than the outer ring ). This cleaner separation occurs because the H 2 emission arises only from the knots and thus there is better contrast than for the optical line tracers that are diluted by the diffuse emission. The H 2 emission arises at the outer edge of the inner disk that is filled with higher excitation ionized gas, as traced by [OIII], in the center . The H 2 emission arising in the inner disk is also much fainter than the H 2 emission in the outer ring. The opposite is true for the ionized gas tracers: they are much brighter in the inner disk than the outer ring. This reversal suggests that the molecular gas in the inner disk has been largely photo-dissociated in comparison to the outer ring and, hence, the cometary knots are more evolved in the inner disk. The outer ring does not appear very ring-like in the H 2 emission, but more like a ring with southern and northern arc-extensions at the eastern and western edges that surround the northwest and southeast "plumes" mentioned by O'Dell et al. (2004). In fact, the H 2 emission appears brightest in these plumes. The CO kinematics suggest that these plumes are coherent structures (Young et al. 1999). One possible explanation is that the outer ring defined by O'Dell et al. (2004) is really part of an outer bipolar structure and the southern and northern arcs are the limb-brightened edges of the bicones in the outer structure. Such an interpretation is in line with the Helix nebula being a poly-polar planetary nebula being viewed pole-on. The edge-on poly-polar PN, NGC 2440, has an inner torus of H 2 emission and an outer bipolar nebula of H 2 emission with axes of symmetry that are tilted with respect to one another (Latter et al. 1995). Our NICMOS field positions appear to lie in some of the fainter or non-existent regions of H 2 emission as observed by Speck et al. (2002) which had less sensitivity than our study. Positions 1 and 2 are located outside the southern edge of the inner disk in what appears to be a gap between the inner disk and outer structures. Position 3 lies to the southwest of the outer structure where no apparent H 2 emission appears in the H 2 image by Speck et al. (2002). Positions 4 and 5 are located even further away from previously detectable H 2 emission south of the nebula. Positions 3, 4 and 5 also lie in regions where no HI 21 cm line emission was detected by Rodríguez et al. (2002) nor CO emission detected by Young et al. (1999). The number density of H 2 knots and area filling factor of H 2 emission decreases with increasing radius and is the lowest in positions 3, 4 and 5 (Figs. 4-6). In the large beams of these radio line observations (∼31 ′′ FWHM for CO and ∼42 ′′ FWHM for HI 21 cm) , the intensity of the neutral gas emission from these knots is beam diluted and falls below the detection limit of the radio observations. The multitude of molecular knots in the Helix Our observations clearly show that the morphology of the molecular hydrogen emission is highly clumped in comparison to the ionized gas tracers. 
Figure 9 shows a multi-color image comparing the [OIII], the Hα/[NII] and the H 2 emission for position 1. The structure observed in this image is primarily due to the H 2 emission clumps. In fact, the H 2 emission images are striking by their lack of diffuse H 2 emission. Close inspection of the more intense regions shows they are composed of overlapping knots of H 2 emission. Hence, we confirm the conclusion of Speck et al. (2002) that the molecular hydrogen is confined to the high density knots such as seen in the optical by O' Dell & Handron (1996). A similar conclusion was reached by Speck et al. (2003) for the Ring Nebula based on comparison of high resolution ground-based H 2 emission images to the optical HST images. Thus in two evolved planetary nebulae, the Helix and the Ring Nebulae, the H 2 line emission is highly structured and confined to knots. The near-IR H 2 emission provides us with the highest angular resolution map of the neutral gas knots in the Helix nebula. Previous work has suggested that the knots contain all the neutral gas detected at substantially lower angular resolution in the CO emission (∼31 ′′ FWHM) by Young et al. (1999), in the CI emission (∼15 ′′ FWHM) by Young et al. (1997), the HI emission (∼42 ′′ FWHM) by Rodríguez et al. (2002) and the H 2 line emission (∼4 ′′ FWHM) by Speck et al. (2002). However, all previous neutral gas studies have had insufficient resolution and sensitivity to separate and determine the structure and number density of these neutral gas knots. The optical study of the knots by O' Dell & Handron (1996) provided an initial, lower limit for the total number of cometary knots to be 3500 in the entire nebula. They base this estimate by extrapolating the number density of knots they can identify in their WFPC2 optical images vs. radius to the entire nebula. Our H 2 images reveal that many more molecular knots exist as defined by the H 2 emission arcs than can be identified in the optical images (Figures 2-6). For example, in the NICMOS field position 1 the number of knots that appear as [OIII] shadows are less than 10; however, the number of arc-shaped H 2 emission structures is ∼150. In Table 3, we list the number of knots, which we identify by arcs of H 2 emission, and the FOV of the image. The number density of knots is simply the total divided by the FOV. The area filling factor of H 2 emission is the percentage of the FOV that contains H 2 emission structures above a 1σ threshold intensity (∼1-2 × 10 −5 erg s −1 cm −2 sr −1 ). The total number of knots, the number density of knots and the area filling factors are the highest for positions 1 and 2, and decrease with larger radial distance from the star as seen in positions 4 and 5. Interestingly, the peak surface brightness of the knots does not decrease significantly with radial distance as we see in Figure 7 and Table 2. We estimate the total number of molecular knots in the Helix by scaling the number of knots we observe in our NICMOS images to the total angular size of the Helix. If we look at the H 2 image of Speck et al. (2002) we find that our NICMOS field positions 1 and 2 land in a region of average or slightly below average H 2 intensity. So, we base a conservative estimate of the total number of knots by using the average knot number density of positions 1 and 2, 0.041 knots/arcsec 2 . The H 2 emission region is an annulus with an inner radius of ∼170 ′′ and an outer radius of 450 ′′ covering a total angular area of 5.5 × 10 5 arcsec 2 . 
Multiplying the average knot number density by the total angular area yields ∼23,000 molecular hydrogen knots in the Helix nebula, a factor of 6.5 larger than previous estimates based on optical images (O'Dell & Handron 1996). The estimated mass of a single Helix knot is ∼1.5 × 10 −5 M ⊙ (O'Dell & Handron 1996) or 10 −4 M ⊙ (Young et al. 1997). However, the Young et al. (1997) CI study defined a "knot" to be 30 ′′ × 10 ′′ in size, which would include ∼10 H 2 knots if we assume the knot density of position 1. Because our H 2 knots appear similar in shape and size to the optical knots observed by O'Dell & Handron (1996), we adopt their mass estimate for individual knots. If, for simplicity, we assume that all H 2 knots have similar properties with an average mass of ∼1.5 × 10 −5 M ⊙ (O'Dell & Handron 1996), then the total neutral gas mass of the Helix nebula is ∼0.35 M ⊙ . Our neutral gas mass estimate is substantially larger than the 0.01 M ⊙ estimated by O'Dell & Handron (1996), who underestimated the total number of knots. It is also higher than the 0.18 M ⊙ estimated from the CO observations that were corrected for the atomic CI emitting gas by Young et al. (1999), who underestimated the mass because the CO observations do not detect all of the molecular gas. Our estimate of the neutral gas mass is comparable to the ionized gas mass of 0.3 M ⊙ (Henry et al. 1999), and the total gas mass in the nebula is >0.65 M ⊙ in the main part of the disk. The mass loss rate that created the main disk over ∼28,000 years is Ṁ > 2.3 × 10 −5 M ⊙ yr −1 . Such a large mass loss rate supports the independent conclusion that the Helix's progenitor star was massive, 6.5 M ⊙ , resulting in a present day core mass of 0.93 M ⊙ (Gorny et al. 1997). Figure 10 shows a close-up of the bright knot in NICMOS field position 1 in three tracers, [OIII], H 2 and Hβ. The knot extinguishes the [OIII] emission and its long shadow runs off to the bottom left corner. The arc-shaped head of the knot is clearly observed in the H 2 emission and is apparent, at lower contrast, in the Hβ emission. Close comparison of the molecular hydrogen emission shows that it almost coincides with the Hβ and is displaced towards the star from the shadowed regions of the [OIII] absorption of the knot (Fig. 10). This structure suggests that the H 2 emission arises in mini-PDRs on the clump surfaces that point toward the central star. The intensities measured for the H 2 emission are consistent with the PN-PDR models of Natta & Hollenbach (1998). Comparison of this knot structure (Fig. 10) with the detailed study of an optically bright knot in the innermost regions of the Helix (Huggins et al. 2002) reveals a change in the knot's structure. The most inward knot studied in H 2 by Huggins et al. (2002) is pillar-like with a well developed crown structure. The Hα emission in this knot lies closer to the central star than the H 2 emission, and the H 2 emission lies closer to the central star than the CO emission of the knot. This stratified structure is what we expect for a PDR. However, we do not see a separated stratification of the H 2 and Hβ emission in our example knot from Position 1. This difference in layered vs. non-layered structure of the knots' PDRs suggests that the innermost knot has a more evolved PDR front than the Position 1 knot. The evolution of H 2 knots A comparison of the different field positions shows a progression in the morphology of the H 2 knots with radial distance from the central star.
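Before turning to the knot morphology in detail, the chain of estimates above (annulus area, knot count, neutral gas mass and mass-loss rate) can be checked with a few lines of arithmetic. This is only a sketch using the numbers quoted in the text; small differences from the quoted values are due to rounding.

```python
# Back-of-the-envelope check of the knot census and mass estimates above,
# using the numbers quoted in the text.
import math

knot_density = 0.041            # knots / arcsec^2 (average of positions 1 and 2)
r_in, r_out = 170.0, 450.0      # arcsec, inner/outer radii of the H2 annulus

area = math.pi * (r_out**2 - r_in**2)        # ~5.5e5 arcsec^2
n_knots = knot_density * area                # ~2.2e4 knots (quoted as ~23,000)

m_knot = 1.5e-5                 # Msun per knot (O'Dell & Handron 1996)
m_neutral = n_knots * m_knot    # ~0.34 Msun (quoted as ~0.35 Msun)

m_ionized = 0.3                 # Msun, ionized gas mass (Henry et al. 1999)
m_total = m_neutral + m_ionized # >0.65 Msun in the main part of the disk

age = 28000.0                   # yr, time over which the main disk was created
mdot = m_total / age            # ~2.3e-5 Msun/yr

print(f"area = {area:.3g} arcsec^2, N_knots = {n_knots:.3g}")
print(f"M_neutral = {m_neutral:.2f} Msun, Mdot > {mdot:.2g} Msun/yr")
```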
The positions closest to the central star (positions 1 and 2) have numerous knots with cometary structures, i.e., arcs at the tops of spires. The farthest positions (4 and 5) have substantially fewer knots and less structure to the knots. Position 3, which lies in between the two extremes, has an intermediate density of knots, which are aligned in continuous rows with fewer spire structures. A similar trend from highly structured to almost amorphous was observed in the Hα structure of the knots by O'Dell et al. (2004). The H 2 emission traces this morphological change even further out into the nebula, showing that the neutral gas clumps have even less structure at the outer edge of the nebula. This progression supports the idea that the initial stages of structure formation are caused by instabilities at the interacting-winds front or by the ionization front, and that the structures are later refined by photo-excitation. 4.4. The H 2 knots as mini-PDRs When molecular hydrogen was first imaged in PNs it was found that the brightnesses were too high for the molecular emission to have originated in photo-dissociation regions (PDRs), and the emission was thus attributed to shock excitation (Beckwith et al. 1978; Zuckerman & Gatley 1988). However, the PDR models used for these comparisons were designed for interstellar molecular clouds, rather than circumstellar nebulae around rapidly evolving stars. As such, these models only included far-UV photons and failed to include the soft X-ray emission inherent to very hot (>100,000 K) white dwarfs. The temperature of the white dwarf also changes as the star evolves during the lifetime of the PN. Furthermore, the original models assumed that the cloud was homogeneous, while the gas around PNs is clearly very clumpy in structure. In addition, interstellar clouds are not expanding, whereas the gas around PNs is; the expansion changes the optical depth of the gas and therefore allows photons to penetrate the gas more easily. The models of Natta & Hollenbach (1998) included three of these factors (evolving central star, expanding gas, X-ray photons) in their PDR model for PNs, and showed that the molecular hydrogen emission was approximately consistent with excitation of H 2 in PDRs in these environments. The Natta & Hollenbach (1998) model has been applied to three PNs, NGC 2346 (Vicini et al. 1999), the Helix nebula (Speck et al. 2002), and the Ring Nebula (Speck et al. 2003). Our new observations of the Helix molecular knots confirm that the PDR gas in the Helix resides in a multitude of mini-PDRs, not in a diffuse molecular component (Speck et al. 2002; Huggins et al. 2002). The other, perhaps more surprising, fact is that the intensity of individual knots remains fairly constant with distance from the central star, in the range 5-9 × 10 −5 erg s −1 cm −2 sr −1 (Table 2; Fig. 7). The apparent, almost random variation in the H 2 line intensity with respect to radius (Fig. 7) occurs because the number density of knots, not the intensity of the knots themselves, varies along the cross cut. The relative consistency of the H 2 line intensity of the individual knots with respect to distance from the central star further supports the picture that the PDRs are distributed as mini-PDRs: there is not one PDR front for the Helix but a multitude of them. The higher rotational lines of H 2 , observed with ISOCam by Cox et al. (1998), indicate a thermalized gas temperature of ∼900 K that appears to be independent of distance from the central star.
The gas density within the knots, ∼10 4 -10 5 cm −3 , is high enough to thermalize the fluorescently excited H 2 gas. The observed H 2 line intensities of the Helix and the derived molecular gas temperature are approximately consistent with the PN/PDR theory of Natta & Hollenbach (1998). For the evolution of the most massive stars, e.g. core mass 0.836 M ⊙ , at the age of the Helix, ∼16,000 years, Natta & Hollenbach (1998) predict H 2 gas temperatures of ∼1000 K and H 2 2.12 µm line intensities of ∼5 × 10 −5 erg s −1 cm −2 sr −1 , in good agreement with the observed temperature and the individual H 2 line intensities of the majority of knots. At this stage of the PN PDR evolution, the H 2 line intensity decreases only gradually with time and the heating of the molecular gas is dominated by the soft X-ray emission of its 123,000 K central star (Bohlin et al. 1982). Despite this approximate success, the PN/PDR models of Natta & Hollenbach (1998) fall short of complete agreement. The brightest H 2 intensity is almost a factor of 10 larger than the Natta & Hollenbach (1998) prediction. Recent model calculations of the H 2 intensity of knots in radiative equilibrium with the stellar radiation field by O'Dell et al. (2005) also under-predict the H 2 line intensities. The solution to the underprediction by both models may be to combine the time evolution aspects of the Natta & Hollenbach (1998) models with the inclusion of knots, as modelled by O'Dell et al. (2005). For example, the time-dependent process of photo-evaporation causes an advection of the H 2 from the surface of the knot and a propagation of the PDR front into the knot, which may boost the H 2 emission because of the constant photodissociation of fresh molecular gas (Natta & Hollenbach 1998). Evidence for photoevaporation has been found in the ionized gas studies of the innermost knots (O'Dell et al. 2000), which are knots that are directly exposed to the central star light. Thus, the brighter H 2 knots could be those that are directly exposed to the central starlight and experiencing photoevaporation. However, most knots experience a softer starlight that has been attenuated by intervening knots of molecular gas and dust. Secondly, the brightest H 2 intensity detected by Speck et al. (2002), ∼3 × 10 −4 erg s −1 cm −2 sr −1 , may be the result of multiple knots along the line of sight, i.e., filling factors greater than 1, which have a multiplying effect on the intensity. This raises an interesting point: the spatial distribution of H 2 over the entire nebula varies not because of substantial variation in the intrinsic H 2 intensity, as one might expect for a PDR front, but because the apparent H 2 surface brightness is proportional to the number density of H 2 knots. Conclusions New observations of the H 2 2.12 µm line reveal several new aspects of the molecular knots of the Helix nebula. The H 2 images reveal that the knots have arcuate structures with the apex pointing towards the central star. These molecular hydrogen knots are most highly structured in the field positions closest to the central star and become increasingly less structured with increasing radius. All of the H 2 emission is confined to knots. In contrast, the ionized gas tracers have a significant component of diffuse ionized gas emission. Using the number density of molecular hydrogen knots in the 5 NICMOS field positions, we estimate the total number of knots to be ∼23,000, a factor of 6.5 more than previous estimates based on optical images.
The total neutral gas mass in the Helix based on these new knot estimates is ∼0.35 M ⊙ , assuming an average mass of ∼1.5 × 10 −5 M ⊙ for the individual knots based on previous work by O'Dell & Handron (1996). The H 2 emission structure of the entire Helix nebula supports the recent interpretation of the Helix as a nearly pole-on poly-polar planetary nebula. The average intensity, 5-9 × 10 −5 erg s −1 cm −2 sr −1 , remains relatively constant with projected distance from the central star. The temperature and H 2 2.12 µm intensity of the knots suggest an origin in the photodissociation regions (PDRs) of the individual knots; however, theoretical models for the PDRs in planetary nebulae do not adequately reproduce the H 2 intensity. The brightest knots appear in regions of more numerous knots and may be exposed to direct starlight that may cause rapid photoevaporation in comparison to the more embedded knots of the disk. We gratefully acknowledge the work of many STScI colleagues who contributed to the observational setup of this project, in particular Zoltan Levay, who superposed the NICMOS fields on the combined CTIO and ACS image. This work was supported in part by STScI grant GO 01041 and by internal STScI funds, DDRF D0001.82319. [Figure caption fragment: fields in which the data of Table 2 were taken; the white line shows the location of the crosscut shown in Fig. 7, and the approximate direction of the central star is labeled.]
CancerGPT: Few-shot Drug Pair Synergy Prediction using Large Pre-trained Language Models Large pre-trained language models (LLMs) have been shown to have significant potential in few-shot learning across various fields, even with minimal training data. However, their ability to generalize to unseen tasks in more complex fields, such as biology, has yet to be fully evaluated. LLMs can offer a promising alternative approach for biological inference, particularly in cases where structured data and sample size are limited, by extracting prior knowledge from text corpora. Our proposed few-shot learning approach uses LLMs to predict the synergy of drug pairs in rare tissues that lack structured data and features. Our experiments, which involved seven rare tissues from different cancer types, demonstrated that the LLM-based prediction model achieved significant accuracy with very few or zero samples. Our proposed model, CancerGPT (with ~124M parameters), was even comparable to the larger fine-tuned GPT-3 model (with ~175B parameters). Our research is the first to tackle drug pair synergy prediction in rare tissues with limited data. We are also the first to utilize an LLM-based prediction model for biological reaction prediction tasks. Introduction Foundation models have become the latest generation of artificial intelligence (AI) (Moor et al. (2023)). Instead of designing AI models that solve specific tasks one at a time, such foundation or "generalist" models can be applied to many downstream tasks without specific training. For example, large pre-trained language models (LLMs), such as GPT-3 (Brown et al. (2020)) and GPT-4 (OpenAI (2023)), have been game changers among foundation AI models (Mitchell and Krakauer (2023)). An LLM can apply its skills to unfamiliar tasks that it has never been trained for, which is few-shot or zero-shot learning. This is due in part to multitask learning, which enables LLMs to unintentionally gain knowledge from implicit tasks in their training corpus (Radford et al. (2018)). Although LLMs have shown proficiency in few-shot learning in various fields (Brown et al. (2020)), including natural language processing, robotics, and computer vision (Veit et al. (2017); Brown et al. (2020); Wertheimer and Hariharan (2019)), the generalizability of LLMs to unseen tasks in more complex fields such as biology has yet to be fully tested. In order to infer unseen biological reactions, knowledge of participating entities (e.g., genes, cells) and underlying biological mechanisms (e.g., pathways, genetic background, cellular environment) is required. While structured databases encode only a small portion of this knowledge, the vast majority is stored in free-text literature, which could be used to train LLMs. Thus, we envision that, when there are limited structured data and limited sample sizes, LLMs can serve as an innovative approach for biological prediction tasks by extracting prior knowledge from unstructured literature. One such few-shot biological prediction task with a pressing need is drug pair synergy prediction in understudied cancer types. Drug combination therapy has become a widely accepted strategy for treating complex diseases such as cancer, infectious diseases, and neurological disorders. In many cases, combination therapy can provide better treatment outcomes than single-drug therapy. Predicting drug pair synergy has become an important area of research in drug discovery and development.
Drug pair synergy refers to the enhancement of the therapeutic effects of two (or more) drugs when used together compared to when each drug is used alone. The prediction of drug pair synergy can be challenging due to the large number of possible combinations and the complexity of the underlying biological mechanisms (Zagidullin et al. (2019)). Several computational methods have been developed to predict drug pair synergy, particularly using machine learning. Machine learning algorithms can be trained on large datasets of in vitro experiment results of drug pairs to identify patterns and predict the likelihood of synergy for a new drug pair. However, most of the data available comes from common cancer types in certain tissues, such as breast and lung cancer; very limited experiment data are available on certain types of tissues, such as bone and soft tissues (Fig. 1). Obtaining cell lines from these tissues can be physically difficult and expensive, which limits the amount of training data available for drug pair synergy prediction. This can make it challenging to train machine learning models that rely on large datasets. Early studies in this area have relied on relational or contextual information to extrapolate the synergy score to cell lines in other tissues (Chen and Li (2018); Sun et al. (2020); Kuru et al. (2022)), ignoring the biological and cellular differences in these tissues. Another line of studies has sought to overcome the discrepancy between tissues by utilizing diverse and high-dimensional features, including genomic (e.g., gene expression of cell lines) or chemical profiles (e.g., drug structure) (Preuer et al. (2018); Liu and Xie (2021); Kuru et al. (2022); Hosseini and Zhou (2023); Kim et al. (2021)). Despite the promising results in some tissues (with abundant data), these approaches cannot be applied to tissues whose data are too limited to adapt a model with the large number of parameters required by those high-dimensional features. In this work, we aim to overcome the above challenge with LLMs. We hypothesize that cancer types with limited structured data and discrepant features are still well described in the scientific literature. Manually extracting predictive information on such biological entities from literature is a complex task. Our innovative approach is to leverage prior knowledge in scientific literature encoded in LLMs. We built a few-shot drug pair synergy prediction model that transforms the prediction task into a natural language inference task and generates answers based on prior knowledge encoded in LLMs. Our experimental results demonstrate that our LLM-based few-shot prediction model achieved significant accuracy even in the zero-shot setting (i.e., no training data) and outperformed strong tabular prediction models in most cases. This remarkable few-shot prediction performance in one of the most challenging biological prediction tasks has critical and timely implications for the broad biomedical community because it shows strong promise for "generalist" biomedical artificial intelligence (Moor et al. (2023)). Drug pair synergy prediction Many methods have been proposed to predict drug pair synergy in recent years. Based on the data type used, these methods can be classified either as multi-way relational methods or as context-aware methods. Multi-way relational methods (Chen and Li (2018); Sun et al.
(2020)) use drugs' and cell lines' relational information, without any further chemical or gene information as input, to predict a drug pair's synergy. Context-aware methods (Preuer et al. (2018); Liu and Xie (2021); Kuru et al. (2022); Hosseini and Zhou (2023)) further utilize chemical and gene information from drugs and cell lines to predict a drug pair's synergy, which usually includes drug-drug, drug-gene, and gene-gene interactions as well as the cellular environment. These methods usually achieve good performance with rich features on common tissues. However, neither approach applies to cell lines in rare tissues, where data and cellular information are limited. Kim et al. (2021) uses transfer learning to extend the prediction model trained in common tissues to some of the rare tissues with relatively rich data and cellular features. However, it cannot be utilized for rare tissues with extremely limited data and cellular information. Figure 1: Few-shot prediction in biology. A. Unlike task-specific approaches, a large pre-trained language model can perform new tasks for which it has not been explicitly trained. B. Drug pair synergy prediction in rare tissues is an important example of the numerous few-shot prediction tasks in biology. C. A large pre-trained language model can be an innovative approach for few-shot prediction in biology thanks to the prior knowledge encoded in its weights. Few-shot learning on tabular data Traditional supervised learning algorithms can struggle due to the difficulty of obtaining enough labeled data for classification. Few-shot learning is an emerging field that aims to address this issue by enabling machines to learn from a few examples rather than requiring a large amount of labeled data. Meta-learning (Finn et al. (2017); Wang et al. (2023); Gao et al. (2023)) is one technique for few-shot learning. It trains a model on a set of tasks in a way that allows it to quickly learn to solve new, unseen tasks with a few examples. Another technique is data augmentation (Nam et al. (2023); Yang et al. (2022)), which generates new examples by transforming existing data. One promising but less explored direction is to leverage LLMs, particularly when prior knowledge encoded in a corpus of text can serve as a predictive feature. TabLLM (Hegselmann et al. (2023)) is one such framework. It serializes the tabular input into natural language text and prompts an LLM to generate predictions. Leveraging TabLLM, we investigated the effectiveness of LLMs in few-shot learning tasks in biology. Language models for biomolecular sequence analysis There has been a growing interest in using language models for biomolecular sequence analysis, and one approach involves training language models with biomolecular data (Madani et al. (2023); NVIDIA (2023)). These models learn the language of biomolecules, such as DNA, RNA, and protein sequences, similar to how GPT-2 (Radford et al. (2018)) or GPT-3 (Brown et al. (2020)) learns human language. However, our study takes a different approach. Rather than training a language model specifically for biomolecular data, we use a language model that has been pre-trained on a corpus of human language text. This pre-trained model is used as a few-shot prediction model for drug pair synergy data, allowing us to make accurate predictions with minimal training data.
By leveraging the power of pre-trained language models, we are able to make use of existing resources and obtain generalizability to diverse biological prediction tasks beyond biomolecule sequence analysis. Results We developed CancerGPT, a few-shot drug pair synergy prediction model for rare tissues. Leveraging an LLM-based tabular data prediction framework (Hegselmann et al. (2023)), we first converted the prediction task into a natural language inference task and generated answers using prior knowledge from the scientific literature encoded in the LLM's pre-trained weights (Section 5.3, Fig. 2). We present our strategy to adapt the LLM to our task with only a few shots of training data in each rare tissue in Section 5.5 and Fig. 3. To evaluate the performance of our proposed CancerGPT model and other LLM-based models, we conducted a series of experiments in which we compared the models with various other tabular models (Section 6). We measured accuracy using the area under the precision-recall curve (AUPRC) and the area under the receiver operating characteristic curve (AUROC) under the different settings. We considered different few-shot learning scenarios, where the model is provided with a limited number k of training samples to learn from (k = 0 to 128). By varying the number of shots, we can examine the model's ability to adapt and generalize with minimal training data. Next, we investigated the performance of CancerGPT and other LLM-based models across different tissue types. Since cancer is a highly heterogeneous disease with various subtypes, it is crucial for the model to be able to accurately predict outcomes in diverse tissue contexts. We then investigated whether the LLM's reasoning for its prediction is valid by checking its arguments against the scientific literature. Figure 2: Study workflow. The figure shows an example prompt ("Drug combination and cell line: The first drug is AZD1775. The second drug is AZACITIDINE. The cell line is EW-8. Tissue is bone. The first drug's sensitivity using relative inhibition is 25.687. The second drug's sensitivity using relative inhibition is 1.752. Synergy:"), a follow-up "why?" prompt, and a fact check of the LLM's reasoning (CancerGPT, GPT-2, GPT-3). We first converted the tabular input to natural text and created a task-specific prompt (Section 5.2). The prompt was designed to generate binary class predictions (e.g., "Positive", "Not positive"). We fine-tuned the LLMs (GPT-2 and GPT-3) with k shots of data in rare tissues (Section 5.5). We further tailored GPT-2 by fine-tuning it with a large amount of common tissue data, in order to adjust GPT-2 to the context of drug pair synergy prediction (CancerGPT, Section 5.4). We evaluated and compared the prediction models with different numbers of shots and tissues (Section 6). We investigated the LLM's reasoning based on factual evidence. Table 1: AUPRC of k-shot learning on seven tissue sets (rows: methods; columns: number of shots k = 0, 2, 4, 8, 16, 32, 64, 128; one panel per tissue, e.g., pancreas). n 0 := total number of non-synergistic samples (not positive), n 1 := total number of synergistic samples (positive). We used 20% of the data as a test set in each rare tissue, while ensuring the binary labels were equally represented. Table 2: AUROC of k-shot learning on seven tissue sets. We evaluated the accuracy of our synergy prediction models. We calculated the AUPRC and AUROC of the LLM-based models (CancerGPT, GPT-2, GPT-3) and baseline models (XGBoost, TabTransformer) (Tables 1, 2). Due to an imbalance in positive and non-positive labels, we reported both AUPRC and AUROC.
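For reference, both metrics can be computed from binary labels and predicted synergy probabilities with scikit-learn; this is a generic sketch with illustrative numbers, not the authors' evaluation code:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

# y_true: 1 = positive synergy, 0 = not positive; y_score: model's predicted probability
y_true = np.array([1, 0, 0, 1, 0, 1])
y_score = np.array([0.8, 0.3, 0.4, 0.6, 0.2, 0.9])

auprc = average_precision_score(y_true, y_score)   # area under the precision-recall curve
auroc = roc_auc_score(y_true, y_score)              # area under the ROC curve

print(f"AUPRC = {auprc:.3f}, AUROC = {auroc:.3f}")
```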
Details on the classification task and threshold of synergy are discussed in Section 6.3. Number of training data and accuracy Overall, the LLM-based models (CancerGPT, GPT-2, GPT-3) achieved comparable or better accuracy in most of the cases compared to the baselines. In the zero-shot scenario, the LLM-based models generally had higher accuracy than the baseline models in all experiments except stomach and bone. As the number of shots increased, we observed mixed patterns across various tissues and models. TabTransformer consistently exhibited an increase in accuracy with more shots. CancerGPT showed higher accuracy with more shots in the endometrium and soft tissue, and GPT-3 showed higher accuracy with more shots in the liver, soft tissues, and bone, indicating that the information gained from a few shots of data complements the prior knowledge encoded in CancerGPT and GPT-3. However, the LLM-based models sometimes did not show significant improvements in accuracy in certain tissues, such as the stomach and urinary tract, suggesting that the additional training data do not always improve the LLM-based models' performance. With the maximum number of shots (k=128), the LLM-based model GPT-3 was on par with TabTransformer, achieving the highest accuracy with the pancreas, liver, soft tissue, and bone, while TabTransformer achieved the best accuracy with the endometrium, stomach, and urinary tract. Tissue types and accuracy The accuracy of the models varied depending on the tissue types, as each tissue possessed unique characteristics and had a different data size. In pancreas and endometrium tissues, GPT-3 showed high accuracy with only a few shots (k=0 or 2). Cell lines from these two tissues are generally difficult to obtain, and only a limited number of well-established cell lines exist, which makes them less investigated. For example, the pancreas is located deep within the abdomen, making it difficult to access and isolate cells without damaging them. The endometrium is a complex tissue that undergoes cyclic changes during the menstrual cycle, and this dynamic process complicates the cell culturing process. Due to this limited training data, few-shot drug pair synergy prediction in these tissues required even higher generalizability. In the liver, soft tissue, and bone, GPT-3 again achieved higher accuracy than any other model, including those trained with common tissues (TabTransformer, CancerGPT). This may be because these tissues have unique cellular characteristics specific to their tissue of origin, which training with common tissues may not help predict accurately. For example, hepatic cell lines (originating from liver tissue) are often used in research on drug metabolism and toxicity and have unique drug response characteristics due to high expression of drug-metabolizing enzymes such as cytochrome P450s (Guo et al. (2011)). Bone cell lines have bone-specific signaling pathways that can affect drug responses, and the extracellular matrix composition and structure in bone tissue can also impact drug delivery and efficacy (Lin et al. (2020)). On the other hand, models trained with common tissues (TabTransformer, CancerGPT) achieved the best accuracy in the stomach and urinary tract tissues for all k, indicating that the prediction learned from common tissues can be extrapolated to these tissues. In particular, CancerGPT achieved the highest accuracy with no training sample (k=0) in the stomach.
Comparing LLM-based models When comparing LLM-based models, CancerGPT and GPT-3 demonstrated superior accuracy compared to GPT-2 in most tissues. GPT-3 exhibited higher accuracy than CancerGPT in tissues with limited data or unique characteristics, while CancerGPT performed better than GPT-3 in tissues with less distinctive characteristics, such as the stomach and urinary tract. The higher accuracy of CancerGPT compared to GPT-2 highlights that well-balanced adjustment to specific tasks can increase accuracy while maintaining generalizability. However, the benefits of such adjustments may diminish with larger LLM models, such as GPT-3 (175B parameters), in situations where more generalizability is required. The fact that CancerGPT, with far fewer parameters (124M), achieved accuracy comparable to GPT-3, with 175B parameters, implies that further fine-tuning of GPT-3 could achieve even higher accuracy. Fact check LLM's reasoning We evaluated whether the LLM can provide the biological reasoning behind its prediction. In this experiment, we used zero-shot GPT-3 because the other, fine-tuned LLM-based models compromised their language generation performance during fine-tuning and were not able to provide coherent responses. To do this, we randomly selected one true positive prediction and examined whether its biological rationale was based on factual evidence or mere hallucination. Our example was the drug pair AZD4877 and AZD1208 at cell line T24 for urinary tract tissue. We prompted the LLM with "Could you provide details why are the drug1 and drug2 synergistic in the cell line for a given cancer type?". Details on prompt generation are discussed in Supplementary 1. We evaluated the generated answer by comparing it with existing scientific literature. We found that the LLM provided mostly accurate arguments, except for two cases (Table 3) in which no scientific literature exists. By combining these individual scientific facts, the LLM inferred the unseen synergistic effect. Generally, drugs targeting non-overlapping proteins in similar pathways are more likely to be synergistic (Cheng et al. (2019); Tang and Gottlieb (2022)). In this case, both AZD4877 and AZD1208 target similar pathways that inhibit tumor cell division without overlapping protein targets. The Loewe synergy score of this pair at T24 was 46.82, indicating a strong positive synergistic effect. Example of prediction results As an example, we list predicted synergistic drug pairs for stomach and soft tissue using CancerGPT (Tables S3.1, S3.2) and bone and liver tissue using GPT-3 (Tables S3.3, S3.4). We randomly selected two true positive, false positive, true negative, and false negative prediction examples. We discovered that the Loewe synergy scores of the true negative or false negative prediction examples were close to the threshold we used to categorize the label (i.e., Loewe score >5). This suggests that accuracy may vary significantly with different thresholds for determining positive synergy. Setting more extreme thresholds (e.g., >10, >30), like previous models (Kim et al. (2021), Kuru et al. (2022), Hosseini and Zhou (2023)), may increase the prediction accuracy. Discussion Summary Our study investigates the potential of LLMs as a widely applicable few-shot prediction model in the field of biology. Specifically, we propose a new few-shot model for predicting drug pair synergy, which can be used in rare tissues with few or no training samples available.
We transformed tabular data prediction into natural language inference tasks and fine-tuned LLMs (GPT-2, GPT-3) with very few samples in each tissue. The CancerGPT model, which was further tuned with a large amount of common tissue data, showed comparable accuracy to the few-shot tuned GPT-3 model, indicating that tailoring GPT-3 to specific tasks could further improve prediction accuracy. The LLM's reasoning for its prediction revealed that it implicitly infers unseen synergistic effects by combining several independent scientific facts.
Table 3: Example of the generated answer when the LLM was asked to provide the reasoning for its prediction. Each row pairs an excerpt of the generated answer with a fact check and reference.
Excerpt: "The combination of AZD-4877 and AZD1208 has been studied in T24 cells...to be synergistic in reducing bladder cancer cell growth and metastasis" — Fact check: False. No study has been conducted on this drug pair.
Excerpt: "The combination was also found to target multiple pathways involved in the growth and spread of bladder cancer cells." — Fact check: True. AZD1208 is a PIM1 inhibitor, and PIM1 is overexpressed in bladder cancer initiation and progression (Guo et al. (2010)). AZD4877 is a drug designed to target bladder cancer (Jones et al. (2013)).
Excerpt: "...Specifically, AZD-4877 was found to inhibit the activation of proteins involved in the promotion of tumor growth..." — Fact check: True. AZD4877 is a drug designed to target bladder cancer (Jones et al. (2013)).
Excerpt: "...AZD1208 was found to inhibit proteins associated with the inhibition of tumor growth." — Fact check: True. AZD1208 inhibits cell growth by suppressing p70S6K, 4EBP1 phosphorylation, and messenger RNA translation (in acute myeloid leukemia) (Cortes et al. (2018)).
Excerpt: "This combination was also effective at reducing the production of inflammatory mediators such as cytokines, which are known to contribute to tumor progression." — Fact check: False. AZD1208 is a pan-PIM kinase inhibitor, and PIM kinases are downstream effectors of cytokines (National Cancer Institute (2011)); however, there is no evidence that AZD4877 reduces inflammatory mediators.
Excerpt: "...these two drugs have been shown to reduce levels of apoptosis inhibitors, which can also play a role in tumor progression." — Fact check: True. AZD1208 induces cell apoptosis (Cervantes-Gomez et al. (2019)). AZD4877 is an inhibitor of Eg5, which promotes cell apoptosis (Borthakur et al. (2009)).
Why drug pair synergy prediction to evaluate LLMs The prediction of drug pair synergy in uncommon tissues serves as an excellent benchmark task for evaluating LLMs in few-shot learning within the field of biology. This prediction requires incorporating multiple pieces of information, such as the drugs and cell line, as well as the sensitivity of the drugs to the cell lines, in order to infer the synergistic effects. While detailed information on these entities can be found in scientific papers, the interaction effect, or synergistic effect, is primarily available through biological experiments. To effectively assess LLMs' inference capabilities, one must employ a prediction task where the ground truth is not explicitly available in text format but can be determined through alternative sources for model evaluation. Typically, drug pair synergy scores are obtained through high-throughput testing facilities involving robot arms (He et al. (2018)). Therefore, individual records of the experiments are seldom recorded in academic literature, decreasing the likelihood of their use as training data for LLMs.
Additionally, few studies have been conducted on rare tissues regarding their synergy prediction models, and their synergy prediction outcomes are not explicitly stated in text format. Another similar task is predicting the sensitivity of a single drug in a cell line; however, since the sensitivity of individual drugs is extensively researched and well-documented in publications, the LLM may merely recall information from the text rather than infer an unseen task. Comparison to existing drug pair synergy prediction models It should be noted that it was not possible to compare our LLM-based models with previous drug pair synergy prediction models. The majority of such models require high-dimensional features of drugs and cells (e.g., genomic or chemical profiles), along with a substantial amount of training data, even the one specifically designed for rare tissues (Kim et al. (2021)). This kind of data is not easily accessible in rare tissues, which makes it challenging to carry out a meaningful comparison. Our model is designed to address a common but often overlooked situation where we have limited features and data. Thus, we compared the LLM-based models with other tabular models that share the same set of inputs. Contribution The contribution of our study can be summarized as follows. In the area of drug pair synergy prediction in rare tissues, our study is the first to predict drug pair synergy in tissues with very limited data and features, which previous prediction models have neglected. This breakthrough in drug pair synergy prediction could have significant implications for drug development in these cancer types. By accurately predicting which drug pairs will have a synergistic effect in these tissues, whose cell lines are expensive to obtain, biologists can focus directly on the most probable drug pairs and perform in vitro experiments in a cost-effective manner. Our study also delivers generalizable insights about LLMs in the broader context of biology. To the best of our knowledge, our study is the first to investigate the use of LLMs as a few-shot inference tool based on prior knowledge in the field of biology, where much of the latest information is presented in unstructured free text (such as scientific literature). This innovative approach could have significant implications for advancing computational biology, where obtaining abundant training data is not readily possible. By leveraging the vast amounts of unstructured data available in the field, LLMs can help researchers bypass the challenge of limited training data when building data-driven computational models. Furthermore, this LLM-based few-shot prediction approach could be applied to a wide range of diseases beyond cancer, where research is currently limited by the scarcity of available data. For instance, this approach could be used in infectious diseases, where the prompt identification of new treatments and diagnostic tools is crucial. LLMs could help researchers quickly identify potential drug targets and biomarkers for these diseases, resulting in faster and more effective treatment development. Limitations The present study, while aiming to showcase the potential of LLMs as a few-shot prediction model in the field of biology, is not without its limitations. To fully establish the generalizability of LLMs as "generalist" artificial intelligence, a wider range of biological prediction tasks must be undertaken to validate them.
Additionally, it is crucial to investigate how the information gleaned from LLMs complements the existing genomic or chemical features that have traditionally been the primary source of predictive information. In future research, we plan to delve deeper into this aspect and develop an ensemble method that effectively utilizes both existing structured features and new prior knowledge encoded in LLMs. Furthermore, while we observed that GPT-3's reasoning was similar to our own when fact-checking its argument with scientific literature in one example, it is important to note that the accuracy of its arguments cannot always be verified and may be susceptible to hallucination. It is reported that LLMs can also contain biases that humans have (Schramowski et al. (2022)). Therefore, further research is necessary to ensure that the LLM's reasoning is grounded in factual evidence. Despite these limitations, our study provides valuable insights into the potential of LLMs as a few-shot prediction model in biology and lays the groundwork for future research in this area. Problem Formulation Objective Our objective is to predict whether a drug pair in a certain cell line has a synergistic effect, particularly focusing on rare tissues with limited training samples. Given an input x = {d 1 , d 2 , c, t, ri 1 , ri 2 } of drug pair (d 1 , d 2 ), cell line c, tissue t, and the sensitivities ri 1 , ri 2 of the two drugs measured by relative inhibition, the prediction model is y = f(x), where y is the binary synergy class (1 if positive synergy; 0 otherwise). Prior research (Hosseini and Zhou (2023)) has employed three different scenarios for predicting drug pair synergy (random split, stratified by cell lines, stratified by drug combinations). Our task is to predict synergy when the data are stratified by tissue, which corresponds to a subset of cell lines. Why tabular input As discussed in Section 2, relationships learned in one tissue cannot be readily generalized to other tissues that have different cellular environments. This biological difference poses a challenge in predicting drug pair synergy in tissues with a limited number of samples. The limited sample size makes it even more difficult to incorporate typical cell line features, such as gene expression levels, which have large dimensionality (e.g., ∼20,000 genes). Due to this data challenge, the drug pair synergy prediction problem reduces to building a prediction model with limited samples (few- or zero-shot learning) and only limited tabular input feature types. The specific input features are described in Section 6. Synergy prediction models based on large pre-trained language models Converting tabular input to natural text To use an LLM for tabular data, the tabular input and prediction task must be transformed into natural text. For each instance of tabular data (Fig. 2), we converted the structured features into text. For example, given the feature strings (e.g., "drug1", "drug2", "cell line", "tissue", "sensitivity1", "sensitivity2") and their values (e.g., "lonidamine", "717906-29-1", "A-673", "bone", "0.568", "28.871"), we converted the instance as "The first drug is lonidamine. The second drug is 717906-29-1. The cell line is A-673. Tissue is bone. The first drug's sensitivity using relative inhibition is 0.568. The second drug's sensitivity using relative inhibition is 28.871." Other ways to convert a tabular instance into natural text are discussed in previous papers (Li et al. (2020); Narayan et al. (2022)).
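A minimal sketch of this serialization step, following the template quoted above (the function and field names are our own illustration, not the authors' code):

```python
def serialize_row(row: dict) -> str:
    """Convert one tabular synergy record into the natural-language template used above."""
    return (
        f"The first drug is {row['drug1']}. "
        f"The second drug is {row['drug2']}. "
        f"The cell line is {row['cell_line']}. "
        f"Tissue is {row['tissue']}. "
        f"The first drug's sensitivity using relative inhibition is {row['sensitivity1']}. "
        f"The second drug's sensitivity using relative inhibition is {row['sensitivity2']}."
    )

row = {"drug1": "lonidamine", "drug2": "717906-29-1", "cell_line": "A-673",
       "tissue": "bone", "sensitivity1": 0.568, "sensitivity2": 28.871}
print(serialize_row(row))
```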
Converting prediction task into natural text We created a prompt that specifies our task and guides the LLM to generate a label of our interest. We experimented with multiple prompts. One example of the prompts we created was "Determine cancer drug combination synergy for the following drugs. Allowed synergies: Positive, Not positive. {{ Tabular Input }}. Synergy:". As our task is a binary classification, we created the prompt to only generate binary answers ("Positive", "Not positive"). Comparing these multiple prompts (Supplementary 1), the final prompt we used in this work was "Decide in a single word if the synergy of the drug combination in the cell line is positive or not. {{ Tabular Input }}. Synergy:". LLM-based prediction model Large pre-trained language models We built our prediction models by tuning GPT-2 and GPT-3 for our tasks (Fig. 2). GPT-2 is a Transformer-based large language model which was pre-trained on a very large corpus of English data without human supervision. It achieved state-of-the-art results on several language modeling datasets in a zero-shot setting when it was released, and it is the predecessor of GPT-3 and GPT-4. GPT-2 (Radford et al. (2018)) has several versions with different parameter sizes: GPT-2, GPT-2 Medium, GPT-2 Large, and GPT-2 XL. We used GPT-2 with the smallest number of parameters (regular GPT-2, 124 million) in this work to make the model trainable on our server. To adjust the model for a binary classification task, we added a linear layer as a sequence classification head on top of GPT-2, which uses the last token of the output of GPT-2 to classify the input. The cross-entropy loss was used to optimize the model during the fine-tuning process (discussed below). GPT-3 (Brown et al. (2020)) is a Transformer-based autoregressive language model with 175 billion parameters, which achieved state-of-the-art performance on many zero-shot and few-shot tasks when it was released. GPT-3.5 is an improved version of GPT-3, and ChatGPT (OpenAI (2022)) is a well-known model fine-tuned from GPT-3.5. However, the GPT-3 model and its parameters are not publicly available. Although the weights of the GPT-3 model are undisclosed, OpenAI offers an API (OpenAI (2021)) to fine-tune the model and evaluate its performance. We utilized this API to build drug pair synergy prediction models through k-shot fine-tuning. There are four models provided by OpenAI for fine-tuning, Davinci, Curie, Babbage, and Ada, of which Ada is the fastest and has performance comparable to the larger models for classification tasks. For that reason, we used GPT-3 Ada as our classification model. After uploading the training data, the API adjusted the learning rate, set to 0.05, 0.1, or 0.2 multiplied by the original learning rate based on the size of the data, and fine-tuned the model for four epochs. The model from the last epoch was used for further evaluation. CancerGPT We further tailored GPT-2 by fine-tuning it with a large amount of common tissue data, in order to adjust GPT-2 to the context of drug pair synergy prediction. We named this model CancerGPT. CancerGPT used the same structure as the modified GPT-2 mentioned above. A linear layer was added on top of GPT-2, which uses the last token of the GPT-2 output to predict the label. To use the pre-trained GPT-2 model, the same tokenizer as GPT-2 was used. Left padding was used to ensure the last token was from the prompt sentence. The cross-entropy loss was used to optimize the model.
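A sketch of the GPT-2 classification setup described above, using the Hugging Face Transformers library; the example text and variable names are illustrative, and batching details are omitted:

```python
from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification

# GPT-2 has no pad token by default; reuse EOS and pad on the left so that the
# last token of every sequence comes from the prompt itself.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token

# Regular GPT-2 (~124M parameters) with a linear binary-classification head on top;
# the head reads the hidden state of the last (non-padding) token.
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

serialized_row = ("The first drug is lonidamine. The second drug is 717906-29-1. "
                  "The cell line is A-673. Tissue is bone. "
                  "The first drug's sensitivity using relative inhibition is 0.568. "
                  "The second drug's sensitivity using relative inhibition is 28.871.")
prompt = ("Decide in a single word if the synergy of the drug combination in the "
          "cell line is positive or not. " + serialized_row + " Synergy:")

inputs = tokenizer(prompt, return_tensors="pt", padding=True)
logits = model(**inputs).logits   # shape (1, 2); cross-entropy loss is computed when labels are supplied
```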
CancerGPT was first fine-tuned to learn the relational information between drug pairs from common tissues, similar to collaborative filtering (Suphavilai et al. (2018)) (Fig. 3). This approach was based on the assumption that certain drug pairs exhibit synergy regardless of the cellular context, and therefore, the relational information between drug pairs in common tissues can be used to predict synergy in new cell lines in different tissues (Hosseini and Zhou (2023)). Additionally, we incorporated information on the sensitivity of each individual drug to the given cell line, using the relative inhibition score as a measure of sensitivity. By doing so, we were able to gather a more detailed and nuanced understanding of the relationship between drugs and cell lines. Subsequently, we utilized CancerGPT as one of the pre-trained LLMs and fine-tuned it with k shots of data in each rare tissue (as discussed in the following section). All the LLM models use the tabular input that was converted to natural text and share the same prompt. k-shot fine-tuning strategy The LLM-based models had different training and fine-tuning strategies (Fig. 3). Samples of common tissues were split into 80% training data and 20% validation data for CancerGPT. The models were trained on the training data and evaluated on the validation data to select the model and hyperparameters used for further fine-tuning on rare tissues. For the GPT-2-based and GPT-3-based prediction models, we directly used pre-trained parameters from GPT-2 (Radford et al. (2018)), via Hugging Face's Transformers library (Wolf et al. (2020)), and GPT-3 Ada from OpenAI (Brown et al. (2020)), respectively. Figure 3: Training strategy of baseline and proposed LLM-based models. General tabular models and CancerGPT were first trained with samples from common tissues and then k-shot fine-tuned with each tissue of interest. GPT-2 and GPT-3 are pre-trained models, and we fine-tuned them with k shots of data in each tissue. All these models were then fine-tuned with k shots of data in each of the rare tissues. For bone, urinary tract, stomach, soft tissues, and liver, we performed experiments with k from [0,2,4,8,16,32,64,128]. For the endometrium and pancreas, because of the limited amount of data, we ran experiments with k from [0,2,4,8,16,32] for the endometrium and only zero-shot (k = 0) for the pancreas. With the limited number of shots, a careful balance of binary labels in the training and test sets was critical. We partitioned the data into 80% for training and 20% for testing in each rare tissue, while ensuring the binary labels were equally represented in both sets. We randomly selected k shots from the training set for fine-tuning, keeping previously selected shots and adding new ones. Specifically, we retained the previously selected k shots in the training set and added k additional shots to create a 2 × k shot set. The binary label distribution in each k-shot set followed that of the original data, with at least one positive and one negative sample included in each set. For evaluation stability, the test data were kept consistent across different shots for each tissue. Dataset We utilized a publicly accessible, extensive database of drug synergy, the DrugComb Portal (Zagidullin et al. (2019)), an open-access data portal where the results of drug combination screening studies for a large variety of cancer cell lines are accumulated, standardized, and harmonized.
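Before turning to the dataset details, a minimal sketch of the label-balanced, nested k-shot sampling described above (our own code; it assumes a pandas DataFrame of synergy rows with a binary "label" column, and the rounding makes the set size approximate):

```python
import pandas as pd

def sample_k_shots(train_df: pd.DataFrame, k: int, previous: pd.DataFrame = None, seed: int = 0) -> pd.DataFrame:
    """Return a k-shot training set that keeps previously selected shots, roughly
    matches the label ratio of the full data, and contains >= 1 sample per class."""
    previous = previous if previous is not None else train_df.iloc[0:0]
    pool = train_df.drop(previous.index)          # never resample earlier shots
    n_new = k - len(previous)                     # e.g. going from k shots to 2k shots
    pos_frac = train_df["label"].mean()           # fraction of positive (synergistic) rows
    n_pos = max(1, round(n_new * pos_frac))
    n_neg = max(1, n_new - n_pos)
    new = pd.concat([
        pool[pool["label"] == 1].sample(n_pos, random_state=seed),
        pool[pool["label"] == 0].sample(n_neg, random_state=seed),
    ])
    return pd.concat([previous, new])

# Usage: shots_2 = sample_k_shots(df, 2); shots_4 = sample_k_shots(df, 4, previous=shots_2); ...
```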
The database contains both drug sensitivity rows and drug pair synergy rows. After filtering the available drug pair synergy rows, the data contain 4,226 unique drugs and 288 cell lines, with a total of 718,002 drug pair synergy rows. We employed the Loewe synergy score, which ranges from -100 (antagonistic effect) to 75 (strong synergistic effect), for drug combination synergy (Greco et al. (1995)). The Loewe synergy score quantifies the excess over the expected response if the two drugs were the same compound (Ianevski et al. (2017); Yadav et al. (2015)). In this paper, we focused on cell lines from rare tissues. We defined rare tissues as those with fewer than 4,000 samples, which include the pancreas (n=39), endometrium (n=68), liver (n=213), soft tissues (n=352), stomach (n=1,190), urinary tract (n=2,458), and bone (n=3,985). We tested our models with each of the rare tissues. Baseline models We compared the LLM-based prediction model with two other tabular models that take the same set of inputs. We specifically used XGBoost (Chen and Guestrin (2016)) and TabTransformer (Huang et al. (2020)). XGBoost is a gradient-boosting, tree-ensemble algorithm for supervised learning on structured or tabular data. It is widely used on large-scale drug synergy data (Sidorov et al. (2019); Celebi et al. (2019)). TabTransformer is a self-attention-based supervised learning model for tabular data. TabTransformer applies a sequence of multi-head attention-based Transformer layers on parametric embeddings to transform them into contextual embeddings, in which highly correlated features will be close to each other in the embedding space. Considering the highly correlated nature of drugs in our data, TabTransformer can be a very strong baseline in this work. To train the two baseline models, we first converted the drugs and cell lines in the tabular data into indicators using one-hot encoding. Tissue information was not used in training because the models are tested on a specific rare tissue that is not seen during training. Neither XGBoost nor TabTransformer is a pre-trained LLM; thus, no further contextual information can be inferred from an unseen tissue indicator. For XGBoost, all the variables (drugs, cell lines, and sensitivities) were used as input to predict the drug pair synergy. For TabTransformer, we first trained an embedding layer from scratch on the categorical variables (drugs and cell lines) and passed them through stacked multi-headed attention layers; the resulting embeddings were then combined with the continuous variables (sensitivities). This combination then passes through feed-forward layers with a classification head. Hyperparameter Setting The predicted output was a binary label indicating the presence of a synergistic effect, with a Loewe score greater than 5 indicating a positive result. We used AUROC and AUPRC to evaluate the accuracy of classification. Regression tasks were not possible with our LLM-based models because they can only generate text-based answers ("positive" or "not positive") and cannot precisely quantify the synergy value. XGBoost was used with a boosting learning rate of 0.3. The number of gradient-boosted trees was set to 1,000 with a maximum tree depth of 20 for the base learners. TabTransformer was used with a learning rate of 0.0001 and a weight decay of 0.01. The model was trained for 50 epochs on common tissues.
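A sketch of this baseline setup: Loewe scores binarized at the >5 threshold and an XGBoost classifier trained on one-hot encoded drugs and cell lines plus the two sensitivity values. The hyperparameters follow the text; the file and column names are hypothetical:

```python
import pandas as pd
from xgboost import XGBClassifier

# Drug pair synergy rows with columns drug1, drug2, cell_line, sensitivity1, sensitivity2, loewe
df = pd.read_csv("drugcomb_common_tissues.csv")   # hypothetical file name
df["label"] = (df["loewe"] > 5).astype(int)        # positive synergy if Loewe score > 5

X = pd.concat([
    pd.get_dummies(df[["drug1", "drug2", "cell_line"]]),   # one-hot indicator features
    df[["sensitivity1", "sensitivity2"]],                  # continuous relative-inhibition values
], axis=1)

model = XGBClassifier(learning_rate=0.3, n_estimators=1000, max_depth=20)
model.fit(X, df["label"])
p_synergy = model.predict_proba(X)[:, 1]                   # predicted probability of positive synergy
```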
During the training, the model with the best validation performance was selected for further fine-tuning on rare tissues. For each k shot in each tissue, the model was fine-tuned using the same learning rate and weight decay for 1 epoch and tested with AUPRC and AUROC. Details of the hyperparameter settings are discussed in Supplementary 2. CancerGPT was first fine-tuned from pre-trained regular GPT-2 for 4 epochs on common tissues. The learning rate was set to 5e-5 and the weight decay to 0.01. Then the model was fine-tuned with k shots in rare tissues. The same hyperparameters were used in training. The model was finally tested with AUPRC and AUROC. GPT-2 and GPT-3 were directly fine-tuned on rare tissues with pre-trained parameters from regular GPT-2 and GPT-3 Ada. For each k shot in each tissue, GPT-2 was fine-tuned for 4 epochs using a learning rate of 5e-5 and a weight decay of 0.01. The hyperparameters of GPT-3 were adjusted by the OpenAI API based on the data size. That model was also fine-tuned for 4 epochs. The GPT-2 and GPT-3 fine-tuned models were finally tested with AUPRC and AUROC.
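For concreteness, the GPT-2/CancerGPT fine-tuning settings quoted above (learning rate 5e-5, weight decay 0.01, 4 epochs, cross-entropy loss) map onto a minimal training loop roughly as follows; this is a sketch with toy inputs, not the authors' training script:

```python
import torch
from torch.optim import AdamW
from transformers import GPT2TokenizerFast, GPT2ForSequenceClassification

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Two toy k-shot prompts with binary synergy labels (1 = positive); real prompts follow Sections 5.2-5.3
prompts = ["... Synergy:", "... Synergy:"]
labels = torch.tensor([1, 0])
batch = tokenizer(prompts, return_tensors="pt", padding=True)

optimizer = AdamW(model.parameters(), lr=5e-5, weight_decay=0.01)   # settings quoted in the text
model.train()
for epoch in range(4):                             # four fine-tuning epochs
    out = model(**batch, labels=labels)            # cross-entropy loss on the binary label
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```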
SARS-CoV-2 infects human neural progenitor cells and brain organoids Dear Editor, Coronavirus disease 2019 (COVID-19) caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has resulted in over 13 million confirmed cases and more than 580,045 deaths across 218 countries and geographical regions as of July 16, 2020. This novel coronavirus primarily causes respiratory illness with clinical manifestations largely resembling those of SARS. However, neurological symptoms including headache, anosmia, ageusia, confusion, seizure, and encephalopathy have also been frequently reported in COVID-19 patients. In a study of 214 hospitalized COVID-19 patients in Wuhan, China, neurologic findings were reported in 36.4% of patients, and were more commonly observed in patients with severe infections (45.5%). Similarly, a study from France reported neurologic findings in 84.5% (49/58) of COVID-19 patients admitted to hospital. Importantly, a recent study in Germany demonstrated that SARS-CoV-2 RNA could be detected in brain biopsies in 36.4% (8/22) of fatal COVID-19 cases, which highlights the potential for viral infection of the human brain. To date, there has been no direct experimental evidence of SARS-CoV-2 infection in the human central nervous system (CNS). We recently demonstrated that SARS-CoV-2 could infect and replicate in cells of neuronal origin. In line with this finding, we showed that SARS-CoV-2 could infect and damage the olfactory sensory neurons of hamsters. In addition, angiotensin-converting enzyme 2 (ACE2), the entry receptor of SARS-CoV-2, is widely detected in the brain and is highly concentrated in a number of locations including the substantia nigra, middle temporal gyrus, and posterior cingulate cortex. Together, these findings suggest that the human brain might be an extra-pulmonary target of SARS-CoV-2 infection. To explore the direct involvement of SARS-CoV-2 in the CNS in physiologically relevant models, we assessed SARS-CoV-2 infection in induced pluripotent stem cell (iPSC)-derived human neural progenitor cells (hNPCs), neurospheres, and brain organoids. We first evaluated the expression of ACE2 and key coronavirus entry-associated proteases in hNPCs. Our data suggested that ACE2, TMPRSS2, cathepsin L, and furin were readily detected in the hNPCs (Supplementary information, Fig. S1). Next, we challenged iPSC-derived hNPCs with SARS-CoV-2 at a multiplicity of infection (MOI) of 10, and with SARS-CoV as a control. Supernatant was harvested at 0, 24, and 48 h post infection (hpi) for virus replication assessment. Interestingly, our data suggested that SARS-CoV-2, but not SARS-CoV, could replicate in hNPCs (Fig. 1a; Supplementary information, Fig. S2). In addition, we quantified the cell viability of SARS-CoV-2-infected hNPCs. Importantly, SARS-CoV-2 infection significantly reduced the viability of hNPCs to 4.7% (P < 0.0001) and 2.5% (P < 0.0001) of that of the mock-infected hNPCs at 72 and 120 hpi, respectively (Fig. 1a). In contrast to the substantial cytotoxicity induced by SARS-CoV-2 in the infected hNPCs, SARS-CoV-2 infection did not significantly upregulate interferon (Supplementary information, Fig. S3) or pro-inflammatory (Supplementary information, Fig. S4) responses in the infected hNPCs. Next, we challenged 3D neurospheres with SARS-CoV-2 and harvested supernatant samples from the infected neurospheres at 0, 24, 48, and 72 hpi for virus replication assessment.
We found that the SARS-CoV-2 RNA-dependent RNA polymerase (RdRp) copy number significantly increased in a time-dependent manner (Fig. 1b, left). In addition, a significant amount of infectious virus particles was released from the infected neurospheres as determined by plaque assays (Fig. 1b, right). In parallel, SARS-CoV-2-infected neurospheres were cryosectioned and immunostained for viral antigen assessment. Importantly, SARS-CoV-2 nucleocapsid (N) protein was readily detected across the infected neurospheres, but no positive signals were detected in the mock-infected neurospheres (Fig. 1c). Furthermore, electron microscopy detected extensive viral particles in vacuoles within the double-membrane structures, which may represent sites of viral particle formation (Fig. 1d). These findings indicate that neurospheres were permissive to SARS-CoV-2 infection and supported productive virus replication. Next, we examined whether SARS-CoV-2 could infect 3D human brain organoids. We generated iPSC-derived human brain organoids using previously described protocols. The 35-day-old brain organoids showed self-organizing internal morphology with fluid-filled ventricular-like structures resembling those of the developing cerebral cortex (Fig. 1e). Cryosectioning and immunostaining were performed to determine the expression and distribution of neuronal markers in 35-day-old brain organoids. Pan-neuronal, early forebrain, and hNPC markers were identified by TUJ1, PAX6, and NESTIN staining, respectively. The TUJ1 staining identified a primitive cortical plate with early neurons (Fig. 1e), whereas PAX6 staining represented the radial glia in the cerebral cortex (Fig. 1e). In addition, NESTIN staining identified actively proliferating NPCs in the brain organoids (Fig. 1e). Overall, these results indicated that telencephalon development and cerebral neurogenesis could be modelled by our organoid system. To investigate whether the brain organoids were permissive to SARS-CoV-2 infection, human iPSC-derived 35-day-old brain organoids were challenged with SARS-CoV-2. Importantly, extensive SARS-CoV-2 antigen was detected in the infected samples at 72 hpi (Fig. 1f), indicating that SARS-CoV-2 directly infected the brain organoids. Immunofluorescence staining and confocal microscopy revealed SARS-CoV-2-N signals in the peripheral regions (Fig. 1f, arrows) and in deeper regions of the brain organoids (Fig. 1f, white arrowheads). In addition, cell-cell fusion was readily detected in regions with robust SARS-CoV-2 infection (Fig. 1f, yellow arrowheads). No SARS-CoV-2-N signals were detected in the mock-infected brain organoids (Fig. 1f). We next analyzed supernatant samples from infected brain organoids to evaluate SARS-CoV-2 virus particle release. The results demonstrated that the SARS-CoV-2 RdRp gene copy number increased in a time-dependent manner, suggesting active release of progeny virus particles from infected brain organoids (Fig. 1g, left). Generation of hNPCs and neurospheres Cells were plated on low-attachment 96-well plates to form embryoid bodies (EBs). After 7 days, the EBs were plated to form rosettes expressing neural progenitors using a defined medium, DMEM/F-12 supplemented with 20 ng/mL FGF2 and Gem21 NeuroPlex (Gemini Bio-Products). For neurosphere generation, 4,000 neural progenitor cells were seeded on low-attachment plates under rotation without FGF2. Formation of brain organoids Human brain organoids were generated from iPSCs as previously described 2,3 .
After a further week in Media III, the organoids were transferred to Media II and incubated until day 35, with the media refreshed every 3 days. Viruses and biosafety The SARS-CoV-2 HKU-001a (GenBank accession number: MT230904) and SARS-CoV GZ50 (GenBank accession number: AY304495) strains were propagated as previously described 4 . Both viruses were titered in Vero E6 cells by plaque assay. All experiments involving live SARS-CoV-2 and SARS-CoV followed the approved standard operating procedures of our Biosafety Level 3 facility 4 . Infection of neurospheres or organoids with SARS-CoV-2 To evaluate whether neurospheres or organoids were permissive to SARS-CoV-2, neurospheres or brain organoids were inoculated with 6 × 10^6 PFU/mL SARS-CoV-2 in organoid culture medium and incubated at 37 °C for 24 hours. The inoculum was aspirated at 24 hours post virus challenge. Organoids were washed with culture medium three times and then further incubated until harvest at the indicated time points. Infection of hNPCs with SARS-CoV-2 or SARS-CoV To evaluate whether hNPCs were permissive to SARS-CoV-2 infection, differentiated hNPCs were challenged with SARS-CoV-2 or SARS-CoV at an MOI of 10. Supernatant samples were harvested at 0, 24, and 48 hpi. Samples were lysed with AVL buffer (Qiagen) and virus replication was determined by qRT-PCR. Immunostaining and confocal microscopy Immunofluorescence staining and confocal microscopy were performed as previously described 5 with slight modifications. Briefly, organoids were fixed in fresh paraformaldehyde (4% PFA, Sigma-Aldrich) at pH 7.4 and 4 °C overnight. Organoids were then transferred to 30% (wt/vol) sucrose solution in PBS at 4 °C overnight. Plaque assays Supernatants from infected organoids and neurospheres were harvested at 0, 24, 48, and 72 hpi and titrated on Vero E6 cells. After incubation at 37 °C for 72 hours, cells were fixed with 10% neutral-buffered formalin. For plaque-forming unit (PFU) visualization, fixed samples were stained with 0.5% crystal violet in 25% ethanol/distilled water for 10 minutes. Cytotoxicity assay Cell viability was determined with the CellTiter-Glo luminescent cell viability assay (Promega), which reports adenosine triphosphate (ATP) levels as a readout of cell viability and was used according to the manufacturer's specifications.
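For concreteness, the arithmetic relating viral titer, MOI, and inoculum volume in these infection protocols can be sketched as follows; the cell number in the example is a placeholder rather than a value reported in this study.

```python
def pfu_required(n_cells, moi):
    """Total plaque-forming units (PFU) needed to challenge n_cells at the given MOI."""
    return n_cells * moi

def inoculum_volume_ml(n_cells, moi, stock_titer_pfu_per_ml):
    """Volume of virus stock (mL) delivering the required PFU."""
    return pfu_required(n_cells, moi) / stock_titer_pfu_per_ml

# Example: challenging 2e5 hNPCs at an MOI of 10 from a 6e6 PFU/mL stock
# (the cell number is illustrative only).
print(inoculum_volume_ml(2e5, 10, 6e6))   # ~0.33 mL
```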
1,825.6
2020-08-04T00:00:00.000
[ "Medicine", "Biology" ]
Superimposed gratings induce diverse response patterns of gamma oscillations in primary visual cortex Stimulus-dependence of gamma oscillations (GAMMA, 30–90 Hz) has not been fully understood, but it is important for revealing neural mechanisms and functions of GAMMA. Here, we recorded spiking activity (MUA) and the local field potential (LFP), driven by a variety of plaids (generated by two superimposed gratings orthogonal to each other and with different contrast combinations), in the primary visual cortex of anesthetized cats. We found two distinct narrow-band GAMMAs in the LFPs and a variety of response patterns to plaids. Similar to MUA, most response patterns showed that the second grating suppressed GAMMAs driven by the first one. However, there is only a weak site-by-site correlation between cross-orientation interactions in GAMMAs and those in MUAs. We developed a normalization model that could unify the response patterns of both GAMMAs and MUAs. Interestingly, compared with MUAs, the GAMMAs demonstrated a wider range of model parameters and more diverse response patterns to plaids. Further analysis revealed that normalization parameters for high GAMMA, but not those for low GAMMA, were significantly correlated with the discrepancy of spatial frequency between stimulus and sites’ preferences. Consistent with these findings, normalization parameters and diversity of high GAMMA exhibited a clear transition trend and region difference between area 17 to 18. Our results show that GAMMAs are also regulated in the form of normalization, but that the neural mechanisms for these normalizations might differ from those of spiking activity. Normalizations in different brain signals could be due to interactions of excitation and inhibitions at multiple stages in the visual system. Visual stimulation. Visual stimuli were generated with a PC containing a Leadtek GeForce 6800 video cardand were presented monocularly on a CRT monitor (Dell P1230, refresh rate 100 Hz, mean luminance 32 cd/m 2 ) placed 40 cm away from the cats' eyes. We first mapped the receptive fields (RFs) of the whole recording sites for each experiment through a sparse noise experiment. Afterward, all stimuli were presented full-field to cover the RFs of all recording sites and were viewed monocularly. Typically, the size of the stimulus was chosen so that it formed the largest circle (38°) possible using the whole screen. The contrast of the gratings was modulated sinusoidally. Spatial frequency was adjusted to optimally drive gamma oscillations in LFP for most sites in the electrode array. Spatial frequencies of 0.03, 0.05, 0.07, 0.1, and 0.2 cycles/° were chosen for different cats (referred to as LM7, LM2, LM9, FM5, and EM7 respectively), to induce strong gamma oscillation. The time durations for the visual experiment were: pre-stimulus 0.4 s, on-stimulus 2 or 4 s, and off-stimulus 0.4 s. We used a stimulus setup that consisted of various plaids with varying contrasts. Grating contrasts of 0%, 7%, 10%, 14%, 21%, 31%, and 45% were used, and plaids were obtained by summing two gratings. For each experiment, the angle between component orientations in the plaid was fixed to 90°. All the stimuli were presented with a temporal frequency of 2 Hz. The stimuli were shown in random order in blocks presented at least ten times. The spatial frequency for visual stimulus in plaid experiments was determined based on drifting grating experiments with different spatial frequencies. 
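As an aside, the plaid construction described above (two orthogonal sinusoidal gratings with independently chosen contrasts, summed and shown on a mean-luminance background) can be sketched as follows. The pixel grid, the degrees-per-pixel scale, and the clipping to a normalized luminance range are assumptions of this sketch, not parameters of the experiments.

```python
import numpy as np

def grating(size_px, deg_per_px, sf_cpd, ori_deg, contrast, phase=0.0):
    """Sinusoidal grating with amplitude `contrast` around a zero mean."""
    x = np.arange(size_px) * deg_per_px
    xx, yy = np.meshgrid(x, x)
    theta = np.deg2rad(ori_deg)
    pos = xx * np.cos(theta) + yy * np.sin(theta)   # position along the drift axis
    return contrast * np.sin(2.0 * np.pi * sf_cpd * pos + phase)

def plaid_frame(size_px, deg_per_px, sf_cpd, ori_deg, c1, c2, t=0.0, tf_hz=2.0):
    """One frame of a plaid: the sum of two orthogonal gratings drifting at tf_hz."""
    phase = 2.0 * np.pi * tf_hz * t
    g1 = grating(size_px, deg_per_px, sf_cpd, ori_deg, c1, phase)
    g2 = grating(size_px, deg_per_px, sf_cpd, ori_deg + 90.0, c2, phase)
    return np.clip(0.5 + 0.5 * (g1 + g2), 0.0, 1.0)  # normalized luminance in [0, 1]

# e.g. a 38-degree plaid at 0.05 cycles/deg with both component contrasts at 45%
frame = plaid_frame(size_px=512, deg_per_px=38.0 / 512, sf_cpd=0.05, ori_deg=0.0, c1=0.45, c2=0.45)
```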
We on-line down-sampled and analyzed the LFP from 10 sites (out of 96 sites in total) from the array recording to get a rough estimation of the average preferred spatial frequency of MUA and that of gamma oscillation (between 60 and 100 Hz). The spatial frequency is chosen between these two mean values to activate most sites with strong gamma oscillation and MUA. Data analysis. Spectrum fitting. The power spectrum of the LFP response over 300-2000 ms (the same as in the previous work 25 ) after the stimulus onset was estimated with multi-taper techniques 33 . This method was implemented on a Chronux toolbox (http://chron ux.org/). A typical parameter setup (time-bandwidth product, 3; tapers, 5) was used to calculate the LFP spectrum. To quantify the responses, we separated the LFP spectrum www.nature.com/scientificreports/ into baseline power and narrow-band gamma oscillations. We used the following equations to capture the characteristics of these oscillations: where, With 30 Hz < u 1 < 100 Hz and 30 Hz < u 2 < 100 Hz. The variable f represents the frequency bin in the spectrum. F(f ) is fitted to the broad-band oscillation that obeys a power law, and G(f ) is used for the narrow-band gamma oscillation that is the Gaussian function. w b and w g denote the weight of the baseline power and narrowband oscillations respectively. u k u k and σ k represent the peak frequency and bandwidth of the gamma oscillation. f 0 is the initial peak frequency, and other parameters ( n 0 , b 0 , and k 0 ) are constant values. Note that in this paper, we used two Gaussian functions to capture the two gamma oscillations: we constrained fitting parameters u k (peak frequency) with restricted ranges for two gamma oscillations in each animal. Thus, the frequency range for low gamma would cover all sites' lower peak frequencies under all conditions and the frequency range for high gamma will cover all sites' higher peak frequencies under all conditions. Furthermore, the frequency ranges for high gamma and low gamma have no overlapping for each animal. Criteria for choosing good gamma and MUA. The good MUA responses from the recording sites were selected according to their signal-to-noise ratio (SNR). The SNR was defined as the standard deviation of the stimulus MUA (the epoch from 200 ms after the stimulus onset to the end of the stimulus) divided by the standard deviation of the pre-stimulus MUA (epoch between the recording onset and stimulus onset). An MUA site was considered a good choice if the SNR of at least one stimulus condition at the site was greater than 2.5. Strong narrow-band gamma oscillation sites were chosen through three steps. (1) Significant gamma oscillation sites were first identified. The z-score of the power spectrum was computed relative to the spontaneous activity. For each frequency bin and stimulus condition, the power spectrum of the baseline activity (the trials during the blank screen) was subtracted from that of the induced activity and divided by the standard deviation of the baseline activity (considering trials of all stimulus conditions). A recording site was considered to have significant gamma oscillation if at least one bin in the frequency range between 30 and 98 Hz showed a z-score value greater than 1.96 (95% threshold) for the situation where the drifting grating evoked the highest firing rate. (2) For the significant gamma sites, we then fitted the spectrum using Eqs. (1)-(3). 
We evaluated the fitting spectrum using the goodness of fit, and only the sites where the goodness of fit was greater than 0.8 were selected as good fitting sites. (3) For these good fitting sites, the relative fitting gamma power (Gamma SNR) was calculated as the narrow-band oscillation weight w g under stimulus condition divided by w g under blank condition. A site was considered a good choice of gamma oscillation if the Gamma SNR of at least one stimulus condition at the site was greater than 2. However, the response matrix consisted of 49 visual responses induced by the corresponding stimuli. The site selections introduced above could not guarantee that the response matrix had a good pattern. Therefore, before the pattern analysis, we introduced a method to determine whether a site had a good response pattern. We assumed that if a site had a good response pattern, its responses to drifting gratings at different contrasts must be well explained by a contrast response function described by Eq. (4); and if a site's contrast response tuning couldn't be well fitted by Eq. (4), we excluded the signal from the site for further analysis. The contrast response was fitted by the following model: where c denotes the contrast value of the grating, c 50 is half-maximal contrast; n 1 and R 2 n 2 are constant exponents related to response normalization. R max and R 0 are the estimations of the maximum and initial responses respectively. The goodness of fit for the result was defined as Eq. (5). Only the site had goodness of fit larger than 0.6 was chosen for further analysis. In summary, sites with good MUA were selected by satisfying both the MUA SNR criteria and the good response pattern criteria; sites with good gamma were selected by satisfying both the strong gamma oscillation criteria and the good response pattern criteria; the sites with all good signals refer to those sites with both good MUA and good gamma (the chosen data set was summarized in Table 1). Response pattern analysis. To depict the response pattern, we proposed an index to capture the characteristics of the patterns of gamma oscillation and MUA. The interaction index is defined as Eqs. (6) and (7), which can describe the level of cross-orientation interaction. (1) www.nature.com/scientificreports/ The c 1 and c 2 denote the contrast values of the two orthogonal gratings. Considering the generation of a plaid that is the summation of two-component gratings orthogonal to each other, a prediction matrix (M 2 ) is proposed. The interaction index is then defined as the difference between the prediction matrix and the raw response matrix (M 1 ). If the index is negative, the plaid part of the raw input matrix is smaller than the linear prediction matrix, and the changing pattern of the raw response matrix tends to behave like the cross-orientation suppression phenomenon. Statistical analysis. A non-parametric test (Bootstrap method 34 ) was used to do the statistical analysis for the results in this paper. The bootstrap method was implemented by the following steps: (1) we randomly selected N samples with replacement from the raw data set (N is the number of trial in the dataset); (2) we then calculated the value we want to test (mean value or correlation coefficient for example) for the selected data; (3) we repeated step (1) and (2) for 1000 times and counted the number (K) of these mean values that meet the test. We then calculated 1-K/B as the p-value. 
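The resampling scheme just described can be written out as a short sketch; the choice of statistic, the number of resamples B, and the synthetic example data are illustrative.

```python
import numpy as np

def bootstrap_p_value(data, statistic=np.mean, null_value=0.0, n_boot=1000, seed=0):
    """Bootstrap test of whether statistic(data) differs from null_value.

    Resample the trials with replacement, recompute the statistic, count the
    resamples K that fall on the observed side of the null value, and report
    p = 1 - K/B, with B = n_boot, following the scheme described in the text.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    side = 1.0 if statistic(data) >= null_value else -1.0
    k = 0
    for _ in range(n_boot):
        resample = rng.choice(data, size=data.size, replace=True)
        if side * (statistic(resample) - null_value) > 0:
            k += 1
    return 1.0 - k / n_boot

# e.g. testing whether the mean interaction index over a set of sites differs from zero
rng = np.random.default_rng(1)
print(bootstrap_p_value(rng.normal(-0.1, 0.2, size=136)))   # synthetic data, illustration only
```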
Only if 1-K/B is lower than 0.05, we considered that the mean value of the raw dataset is significantly different from the expectation of the test. The equivalent p-value for the bootstrap method is calculated as 1-K/B. The statistical differences between the mean value of the interaction index (for gamma oscillations and MUA) and zero were calculated through this method. Similarly, the bootstrap method was also applied to the delta interaction index (fit response minus real response). We used the bootstrap method to compare the fitting parameters of low gamma and high gamma (or other pairs of signals: low gamma and MUA; high gamma and MUA) in the normalization model. Pearson's correlation was used to quantify the linear correlation between the interaction index of one signal (low gamma, high gamma, and MUA) and another signal (low gamma, high gamma, and MUA) different from the first one. Robust regression analysis for the correlation measurements was also examined. To evaluate the performance of the modified normalization model for a given signal, the linear correlation between the fit and real response was calculated through Pearson's correlation. Note that if Pearson's correlation is not suitable, Spearman's rank correlation coefficient was used instead. Results MUA and LFPs were acquired from 96-electrode arrays or two 48-electrode arrays implanted in five anesthetized cats (named EM7, FM5, LM2, LM7, and LM9). The arrays covered an area of 16 mm 2 , which contained brain areas 17 and 18. Large-sized (38°) visual stimuli (blank, two drift gratings, and plaid) were presented (drifting at 2 Hz per cycle and lasting for 2 s) on the screen to cover the receptive fields of all the recording sites of these cats (Fig. 1a). The stimulus orientation and spatial frequency of the drifting gratings were chosen to be optimal for most sites in each array. In total, 480 sites were recorded. We obtained 183 sites with good MUA and 396sites that had strong gamma oscillations in the LFPs; And 138 sites have both good MUA and good gamma based on selection criteria (detailed in "Materials and methods"). The chosen stimulus parameters for the five cats are listed in Table 1. Two distinct narrow-band gamma oscillations coexist in the cat visual cortex. Strong MUA and LFP responses were activated after the onset of stimuli and remained relatively high amplitude compared with those under the blank condition (Fig. 1b,c). Apart from the transient response, sustained oscillatory components were observed in the LFP during the later stimulus presentation periods (0.3-2 s). Interestingly, two salient narrow-band gamma oscillations were found in the visual cortex of all five animals under stimulus conditions (at high contrast; see Fig. 1d for an example site and Fig. 2 for population results). A spectrum fitting procedure (detailed in "Materials and methods") was utilized to estimate the gamma components in the LFPs to capture the Table 1. Fundamental information for the five cats. The first 3 rows represent the main stimulus parameters (size, spatial frequency, and orientation) used in plaid experiments. Row 1: We used large size (38° represents the largest circle on our screen) stimulus to activate good gamma and MUA response; Row 2: The spatial frequency used in the plaid experiment for 5 animals; Row 3: Two orthogonal orientations used in the plaid experiment are listed for each animal. The other 3 rows show the numbers of sites with good MUA, good gamma, and good signals for both respectively. 
www.nature.com/scientificreports/ narrow-band gamma oscillation (Fig. 2). This fitting procedure 35 had an excellent performance for explaining the power spectrums under all conditions (the average goodness of fit was 0.938 ± 0.028). We defined the two narrow-band gamma oscillations (blue curves in Fig. 2b,c) as low gamma (LG) and high gamma (HG) based on their relative frequency range and the broad-band component as baseline power (black curves in Fig. 2b,c). To confirm the generality of the coexistence of two distinct gamma oscillations, we calculated the LFP spectrums of all 480 recording sites. The gamma components for most recording sites (460/480) can be well captured by the Spectrum Fitting method, and 53% of the recorded sites (254/480) showed two distinct gamma oscillations. These gamma oscillations were induced by drift grating and at least twice as large as their baseline power (Fig. 2d,e). For the later sections, only the sites that had low/high gamma oscillations and MUA responses (n = 138) were selected for further analysis. Interestingly, there was an individual difference for peak frequency of low and high gamma among animals (Fig. 2f,i). The peak frequencies of high gamma or low gamma from different electrodes were highly consistent under different stimulus conditions in the same animal, but peak frequencies for high/low gamma were rather variable among different animals (see five clusters in Fig. 2f,i). However, the cross-animal variability doesn't affect the definition for low and high gamma oscillations and we put low gamma from individual animals together and term them as low gamma for the rest of the results. Results for high gamma were also presented in this way. Moreover, the plaid also induced two distinct narrow-band oscillations for the majority (73%, 348/480) of the recording sites ( Fig. 2g-i). Overall, we found that two distinct narrow-band gamma oscillations coexisted in the cat visual cortex. Gamma oscillations are mostly suppressed by an additional grating. For recording sites with all signals good (n = 138), low gamma power induced by the drifting grating at high contrast (45%) was significantly suppressed (p < 0.001, Figure S1a) when the orthogonal grating (contrast = 45%) was added (Table S1). However, on average, MUA was significantly enhanced (p < 0.001, Figure S1b) for superimposed gratings, and there was no significant difference (p = 0.23, Figure S1c) for high gamma power induced by single gratings and superimposed gratings (Table S1). We also got similar results with more data points by using qualified sites for each of three signals (LG, n = 325; HG, n = 375; MUA, n = 183) (Figure S1d-f). These results imply that superimposed gratings had different modulation on low gamma, high gamma, and MUA. www.nature.com/scientificreports/ To get a further understanding of how these three signals (LG, HG, and MUA) change with superimposed gratings, we measured spiking activity and gamma oscillations activated by various plaids. Plaids were formed by the summation of two orthogonal drift gratings with independently varying contrast (c 1 , c 2 ). Both gratings had seven contrast levels, allowing 49 plaids to be attained (Fig. 3a). LFP spectrums were calculated for each plaid during the sustained time course (Fig. 3b), and the corresponding MUA responses were also recorded (Fig. 3c). 
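To make the spectral decomposition concrete, the sketch below separates a per-condition LFP spectrum into a broad-band baseline and two narrow-band gamma components, in the spirit of Eqs. (1)–(3). The exact form of the paper's power-law baseline (its constants f 0, n 0, b 0, k 0) is not reproduced here, so a generic offset power law stands in for it, and all numerical values and the low/high-gamma frequency ranges are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(f, u, sigma):
    return np.exp(-0.5 * ((f - u) / sigma) ** 2)

def spectrum_model(f, wb, n0, k0, wg1, u1, s1, wg2, u2, s2):
    """Broad-band power-law baseline plus two narrow-band (Gaussian) gamma components.

    A generic offset power law wb * (f + k0)**(-n0) stands in for the paper's F(f).
    """
    return wb * (f + k0) ** (-n0) + wg1 * gauss(f, u1, s1) + wg2 * gauss(f, u2, s2)

# f, power: a multi-taper spectrum estimated over the sustained response window.
# Synthetic values are used here purely to make the sketch runnable.
f = np.linspace(10.0, 120.0, 220)
power = spectrum_model(f, 50.0, 1.5, 1.0, 2.0, 45.0, 4.0, 1.5, 80.0, 5.0) + 0.05 * np.random.rand(f.size)

# Constrain the two peak frequencies to non-overlapping low- and high-gamma ranges,
# as done per animal in the text (the 30-60 / 60-100 Hz split below is illustrative).
p0 = [10.0, 1.0, 1.0, 1.0, 45.0, 5.0, 1.0, 80.0, 5.0]
bounds = ([0, 0, 0, 0, 30, 1, 0, 60, 1], [np.inf, 5, 50, np.inf, 60, 20, np.inf, 100, 20])
popt, _ = curve_fit(spectrum_model, f, power, p0=p0, bounds=bounds)
low_gamma_weight, high_gamma_weight = popt[3], popt[6]   # fitted narrow-band weights w_g
```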
The stimulus effects on gamma oscillations were investigated by fixing the contrast (c 1 ) of the base grating and obtaining the contrast (c 2 ) tuning of the mask grating for LG and HG. The gamma power was enhanced with increasing contrast (c 2 ) when only the drift grating was presented (c 1 = 0), whereas an additional grating (c 1 > 0) led to the suppression of gamma power (Fig. 3d). The responses for the superimposed gratings were then calculated as a response matrix for gamma oscillations and MUA (Fig. 3e). To quantify the interaction effect of the two superimposed gratings, we defined an interaction index as the difference between the prediction matrix (the linear summation of responses induced by the two component gratings) and the raw response matrix (Fig. 4a). If the gamma oscillation was suppressed when the additional grating was added, the interaction index was negative, while a positive interaction index meant facilitation. The index of most sites was significantly lower (p < 0.001) than 0 (n = 136 for LG; n = 132 for HG; n = 128 for MUA in Fig. 4b). This result implies that for the majority of sites, the gamma oscillations and MUA showed cross-orientation suppression 8,29 . Some sites also showed cross-orientation facilitation 32,36 for both gamma oscillations and MUA (Fig. 4b, n = 2 for LG; n = 6 for HG; n = 10 for MUA). Surprisingly, there was no significant correlation between the interaction index of the low gamma power and high gamma power (r = 0.02, p = 0.78 in Fig. 4c); the same was true for the interaction index of MUA and that of high gamma oscillation (r = 0.15, p = 0.07 in Fig. 4c). MUA and low gamma had a weak correlation (r = 0.28, p < 0.001 for LG and MUA) between their interaction indices. Taken together, for most sites, gamma oscillations were suppressed by the additional grating in the plaid stimuli. However, the interaction index of MUA is only weakly correlated with that of low gamma, and no significant correlation was found between the interaction index of high gamma and that of MUA. [Figure 2 caption: The size and spatial frequency of these stimuli were 38° and 0.05 cycles/°. In (a), (b), and (c), gray lines indicate the raw LFP spectrums. Blue curves and black curves indicate the estimated narrow-band gamma power and baseline power through the spectrum fitting procedure. (d) shows the peak frequency and relative gamma power of the low gamma oscillation (narrow-band) induced by the drifting grating for all the recording sites. The horizontal error bar represents the mean and standard deviation of the relative gamma power and the vertical error bar represents that of peak frequency. Similar to (d), (e) shows the peak frequency and relative gamma power of the high gamma oscillation (narrow-band) induced by the drifting grating for all the recording sites. (f) presents a comparison between the peak frequency of the low gamma and high gamma oscillations for sites with all signals good (n = 138: LG, HG, and MUA). (g)-(i), the same as (d)-(f), but for gamma oscillations induced by a plaid stimulus. Note that the relative gamma power is defined as the ratio between the stimulus-driven gamma power and the baseline. The threshold (black dotted line) for selecting salient gamma oscillations is labeled in subfigures (a), (e), (g), and (h).] Response patterns of gamma oscillations are different from those of MUAs. We further checked whether the response patterns of gamma oscillations were similar to the MUA response patterns.
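As a concrete illustration, the interaction index introduced above can be computed from a site's 7 × 7 response matrix roughly as follows; the exact normalization used in Eqs. (6)–(7) may differ from this sketch.

```python
import numpy as np

def interaction_index(resp):
    """Rough sketch of the interaction index for a 7 x 7 response matrix resp[i, j],
    where rows index the contrast of grating 1 and columns the contrast of grating 2
    (index 0 = 0% contrast).

    The prediction matrix is the linear sum of the two component-grating responses
    (blank response subtracted so the single-grating conditions are reproduced exactly);
    the index is the mean difference between raw and predicted responses over the
    true plaid conditions.  The exact normalization of Eqs. (6)-(7) may differ.
    """
    resp = np.asarray(resp, dtype=float)
    comp1 = resp[:, 0]                          # grating 1 alone (c2 = 0)
    comp2 = resp[0, :]                          # grating 2 alone (c1 = 0)
    pred = comp1[:, None] + comp2[None, :] - resp[0, 0]
    mask = np.ones_like(resp, dtype=bool)
    mask[0, :] = False                          # drop single-grating and blank conditions
    mask[:, 0] = False
    return np.mean((resp - pred)[mask])         # negative => cross-orientation suppression
```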
Gamma response patterns were estimated through a linear fit from the MUA response matrix (Fig. 5a), allowing the residual and goodness of fit for a single site to be acquired (Fig. 5a,b). The goodness of fit for most sites was less LG and HG, the correlation between LG and MUA, and the correlation between HG and MUA). Pearson's correlation was used to test the significance of the relationship in each pair of comparisons (significant correlation was marked as a red font). Linear regression (black lines in c) was also calculated for correlation measurements. www.nature.com/scientificreports/ than 0.8 (0.50 ± 0.28, mean and standard deviation for LG; 0.57 ± 0.21 for HG; Fig. 5c). In short, the gamma oscillations and MUA differ not only in their interaction index but also in their response patterns. A normalization model for both gamma oscillations and MUA. Previously, we demonstrated the differences between the response patterns of gamma oscillations and MUA. Interestingly, they share a common feature known as cross-orientation suppression, which has been explained by applying the normalization model to spiking activity 28,29,31 . To test whether the normalization model can also regulate the response patterns of gamma oscillations, we derived a modified normalization model with the following form (Eqs. 8-10): where, R 1 and R 2 in Eqs. (8) and (9) are the normalization process in the visual system, c 50 is half-maximal contrast, m 1 and m 2 are constant exponents, and R max estimates the maximum stimulus-driven responses. The recorded response is defined as the weighted sum of R 1 and R 2 , and parameter b denotes the weight. Note that c 50 , m 1 , and m 2 are related to the process of normalization, while b is related to the orientation tuning of the individual site. The normalization model accounted well for low gamma, high gamma, and MUA (Fig. 6a). For the data where all three signals were good (n = 138), the mean values of the goodness of fit were 0.83 ± 0.07 for LG, 0.84 ± 0.14 for HG, and 0.82 ± 0.17 for MUA (Fig. 6b). For data where a single signal was good (n = 325 for LG; n = 375 for HG; n = 183 for MUA), the mean values of the goodness of fit were also high ( Figure S5a; 0.8 ± 0.09 for LG, 0.81 ± 0.16 for HG, and 0.81 ± 0.17 for MUA). Furthermore, for sites with all signals good (n = 138), the interaction index of the model was also comparable with those of the real responses for low gamma, high gamma and MUA (LG, r = 0.93, p < 0.001; HG, r = 0.72, p < 0.001; MUA, r = 0.72, p < 0.001; Fig. 7a). There was a significant difference among interaction index from different signals (p = 0.002 for LG; mean difference = 0.03, p = 0.009 for HG; p = 0.006 for MUA; Fig. 7b), but the difference was very small and negligible (mean difference = 0.03 for LG, mean difference = 0.03 for HG, There was only small difference between the interaction index for fit and real responses (mean difference = 0.02, p = 0.13 for LG; mean difference = 0.03, p < 0.001 for HG; mean difference = 0.07, p < 0.001 for MUA; Figure S5c www.nature.com/scientificreports/ among the three signals (r = 0.82, p < 0.001 for LG and HG; r = 0.63, p < 0.001 for LG and MUA; r = 0.53, p < 0.001 for HG and MUA; Fig. 8). The parameter m 1 for low gamma was weakly correlated with that for high gamma (r = 0.32, p < 0.001) and not significantly correlated with that for MUA (r = 0.05, p = 0.53). And the parameter m 1 for high gamma was weak correlated with that for MUA (r = 0.37, p < 0.001). 
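To illustrate how the modified normalization model can be fit to a site's 7 × 7 response matrix, a sketch follows. Because the exact placement of the exponents in Eqs. (8)–(9) is not fully legible here, the functional form below (m 1 in the numerator, m 2 in the denominator terms) and all numerical values are assumptions of the sketch, not the published equations.

```python
import numpy as np
from scipy.optimize import curve_fit

def norm_model(C, r_max, c50, m1, m2, b):
    """Weighted-sum normalization model in the spirit of Eqs. (8)-(10).

    C is a (2, N) array holding the two grating contrasts for each condition.
    """
    c1, c2 = C
    r1 = r_max * c1 ** m1 / (c50 ** m2 + c1 ** m2 + c2 ** m2)
    r2 = r_max * c2 ** m1 / (c50 ** m2 + c1 ** m2 + c2 ** m2)
    return b * r1 + (1.0 - b) * r2

contrasts = np.array([0.0, 0.07, 0.10, 0.14, 0.21, 0.31, 0.45])
c1, c2 = np.meshgrid(contrasts, contrasts, indexing="ij")
C = np.vstack([c1.ravel(), c2.ravel()])

# resp: a measured 7x7 response matrix (gamma power or MUA), flattened to match C.
# Synthetic responses are generated here only to make the sketch runnable.
resp = norm_model(C, 5.0, 0.2, 2.0, 1.5, 0.6) + 0.05 * np.random.randn(C.shape[1])

p0 = [5.0, 0.2, 2.0, 2.0, 0.5]
bounds = ([0, 0.01, 0.1, 0.1, 0.0], [50, 1.0, 6.0, 6.0, 1.0])
popt, _ = curve_fit(norm_model, C, resp, p0=p0, bounds=bounds)
r_max, c50, m1, m2, b = popt   # per-site parameters, e.g. m1 - m2 indexes suppression vs facilitation
```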
For parameter m 2 , the correlation between low gamma and high gamma (r = − 0.25, p < 0.001) was significant, but the correlation between low gamma and MUA (r = 0.16, p = 0.06) or between high gamma and MUA (r = − 0.07, p = 0.4) was not significant. These results indicate different mechanisms behind the response patterns of the gamma oscillations and MUA to superimposed gratings. To obtain further insight into the difference between the response patterns of gamma oscillations and those of MUA, population fitting parameters for the three signals were compared. There was no significant difference (p = 0.1 for LG and MUA; p = 0.31 for HG and MUA; p = 0.16 for LG and HG) between gamma oscillations and MUA for parameter b (0.52 ± 0.28 for LG; 0.5 ± 0.2 for HG; 0.48 ± 0.27 for MUA; Fig. 9a-c). However, there were significant differences (p < 0.001 for LG and MUA; p < 0.001 for HG and MUA) between the gamma oscillations and MUA for the other fitting parameters (c 50 , m 1 , and m 2 ), with gamma oscillations having higher mean values and variance than MUA (for LG, c 50 : 0.37 ± 0.31, m 1 : 2 ± 0.56, m 2 : 2.11 ± 1.76, Fig. 9a; for HG, c 50 : 0.36 ± 0.33, m 1 : 2.14 ± 0.83, m 2 : 1.91 ± 2.23, Fig. 9b; for MUA, c 50 : 0.14 ± 0.19, m 1 : 1.59 ± 0.93, m 2 : 0.71 ± 0.89; Fig. 9c). Note that the major differences between gamma oscillations and MUA were caused by fitting parameters m 1 and m 2 . We then compared m 1 and m 2 site-by-site in the two gamma oscillations and MUA (Fig. 9d). Many sites had m 2 higher than m 1 for both high gamma oscillations and low gamma oscillations, but m 2 was lower than m 1 for MUA at almost all sites (p < 0.001). The fitting parameter domain (m 1 -m 2 ) of low gamma and high gamma was more widely distributed than that of MUA (Fig. 9e). The diversity in fitting parameters for gamma oscillations arises mainly because m 1 -m 2 of low gamma and high gamma was smaller than 0 for many recording sites (44% of recording sites for high gamma; 35% of recording sites for low gamma), whereas m 1 -m 2 was higher than 0 for most recording sites of MUA (95% of recording sites) (see consistent results in figure S6 for data where a single signal was good, n = 325 for LG; n = 375 for HG; n = 183 for MUA). [Figure caption fragment: Linear regression was calculated to confirm the linear correlation (the black solid line). (b) presents histograms of the differences between the fit responses and real responses. Note that Pearson's correlation was used to test the significance of the relationship between the two indices, and the bootstrap method was used to test whether the distribution of the delta interaction index was significantly different from 0 (black line in subfigure b). All the significant correlations were labeled as red fonts.] Relationship between model parameters and sites' response properties. In the previous section, we found that the parameters (m 1 -m 2 , m 1 , m 2 ) in the normalization model for gamma oscillations were different from those for MUA. Two possibilities might explain such parameter differences for gamma. One possibility is that the parameter (m 1 -m 2 ) is mainly determined by the noise level of a signal, and the noise level of gamma oscillations might be more variable than that of MUA due to either recording properties or response properties of individual sites.
If this possibility is true, then we should expect that (m 1 -m 2 ) for the three different brain signals (HG, LG, and MUA) are all correlated with their signal-noise level (SNR) in a similar way, and sites with high SNRs will have similar (m 1 -m 2 ) for all three brain signals. The other possibility is that parameters (m 1 -m 2 ) in the normalization model are mainly determined by neural circuitry underlying each of the three signals. If the second possibility is true, then we might also see a correlation between SNRs and (m 1 -m 2 ) for the three signals but in different correlation trends due to their different underlying mechanisms. In the experiment, we used a single set of stimulus parameters (two orthogonal stimulus orientations and one spatial frequency) to test a large population of simultaneously recorded neurons with diverse tuning properties (different orientation preferences and spatial frequency preferences). The mismatch of stimulus parameters and some sites' preferences for orientation or spatial frequency will activate neural mechanisms at different levels and lead to different relative noise levels for the three brain signals at different recording channels. To test the above two possibilities, we correlated m 1 -m 2 with three factors, SNR of the three signals, the ratio between the stimulus spatial frequency and preferred spatial frequency for individual sites (Delta SF), and the difference between the stimulus orientation and preferred orientation for individual sites (Delta Ori). To capture the trend of m 1 -m 2 with different factors, we divided our data into six groups, each of which had a similar number of recording sites based on the factor's value order, and calculated the mean and standard deviation for m 1 -m 2 in each group (as shown in Fig. 10). For low gamma, no significant correlation was found between m 1 -m 2 and three factors (SNR: r = − 0.04, p = 0.66, Fig. 10a; Delta SF: r = − 0.16, p = 0.07, Fig. 10b; Delta Ori: r = − 0.11, p = 0.2, Fig. 10c). More importantly, www.nature.com/scientificreports/ there always exist some recording sites with (m 1 -m 2 ) less than 0 in all bins of SNR, Delta SF, and Delta Ori, which suggests that the three factors do not strongly affect the m 1 -m 2 value for low gamma. Interestingly, the relationship between m 1 -m 2 and the three factors for high gamma oscillations was rather different from that for low gamma. For high gamma, m 1 -m 2 was significantly correlated with SNR and Delta SF (r = − 0.37, p < 0.001, for SNR; r = 0.45, p < 0.001, for Delta SF), and no significant correlation was found between m 1 -m 2 and Delta Ori (r = 0.13, p = 0.14). The recording sites with negative m 1 -m 2 mostly had high SNR (Fig. 10d). More interestingly, m 1 -m 2 was clearly related to Delta SF (Fig. 10e). When the chosen SF for visual stimulus was lower than a site's preferred SF (Delta SF < 0 in Fig. 10e), more sites had negative m 1 -m 2 , but when Delta SF was larger than 0 (chosen SF for visual stimulus was higher than a site's preferred SF), more sites had positive m 1 -m 2 ( Fig. 10e; also see consistent results in figure S7 for data where a single signal was good, n = 325 for LG; n = 375 for HG; n = 183 for MUA). Overall, besides the relationship between m 1 -m 2 and the three factors, negative (m 1 -m 2 ) existed in most bins of different factors for gamma oscillations (except in the bins with low Delta SF). On the contrary, m 1 -m 2 for MUA was higher than 0 for most recording sites under all conditions (Fig. 10g-i). 
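The six-group summaries and correlations used in this analysis can be sketched as follows; the synthetic site-wise values are placeholders for the per-site model parameters and stimulus factors.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def binned_trend(factor, values, n_bins=6):
    """Split sites into n_bins groups of (nearly) equal size by the factor's rank and
    return each group's mean factor value and the mean and SD of the parameter,
    mirroring the six-group summaries described for Fig. 10."""
    order = np.argsort(factor)
    groups = np.array_split(order, n_bins)
    centers = np.array([factor[g].mean() for g in groups])
    means = np.array([values[g].mean() for g in groups])
    stds = np.array([values[g].std() for g in groups])
    return centers, means, stds

# Synthetic site-wise values, for illustration only: Delta SF per site and m1 - m2 per site.
rng = np.random.default_rng(2)
delta_sf = rng.normal(0.0, 1.0, 138)
m1_minus_m2 = 0.4 * delta_sf + rng.normal(0.0, 0.5, 138)

r, p = pearsonr(delta_sf, m1_minus_m2)            # linear correlation, as reported in the text
rho, p_rank = spearmanr(delta_sf, m1_minus_m2)    # rank correlation, used where Pearson is unsuitable
centers, means, stds = binned_trend(delta_sf, m1_minus_m2)
```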
Interestingly, there also existed a negative correlation between m 1 -m 2 and Delta SF (r = − 0.35, p < 0.001) for MUA. Taken together, m 1 -m 2 of gamma oscillations are more diverse than that of MUA under all factors. What's more, this diversity for high gamma is mainly related to the specific spatial frequency (a site's m 1 -m 2 is likely to be negative when stimulus SF is lower than the site's SF preference) but less related to a specific orientation. The different relationship between (m 1 -m 2 ) and the three factors for LG, HG, and MUA rules out the possibility that model parameters are mainly determined by the noise level of recording signals (possibility one), but support the idea that normalization of the three brain signals is due to different neural mechanisms. To get a more comprehensive understanding of model parameters m 1 and m 2 as well as (m 1 -m 2 ), we correlated all the four model parameters (c 50 , m 1 , m 2 , and b) with Delta SF and Delta Ori. Compared with previous results for Delta SF and (m 1 -m 2 ) (Fig. 10e,h), the correlation between m 1 -m 2 and Delta SF for high gamma was www.nature.com/scientificreports/ determined by m 2 , but that for MUA was mainly determined by m 1 : Normalization parameter m 2 was correlated with Delta SF for high gamma (m 1 : r = − 0.07, p = 0.44, m 2 : r = − 0.42, p < 0.001; Fig. 11b), while m 1 was correlated with Delta SF for MUA (m 1 : r = − 0.41, p < 0.001, m 2 : r = 0.14, p = 0.11; Fig. 11c). The insignificant correlation results for low gamma (Fig. 10b) can be explained as the balance between m 1 and m 2 (m 1 : R = 0.22, p = 0.008; m2: r = 0.22, p = 0.011; Fig. 11a). Consistent with correlation result for Delta Ori and (m 1 -m 2 ) (Fig. 10c,f,i), no significant correlation was found between Delta Ori and m 1 or m 2 for all three signals: low gamma (m 1 : r = 0.03, p = 0.73, m 2 : r = 0.1, p = 0.24; Fig. 11d), high gamma (m 1 : r = 0.02, p = 0.81, m 2 : r = − 0.16, p = 0.06; Fig. 11e) and MUA (m 1 : r = − 0.12, p = 0.17, m 2 : r = − 0.03, p = 0.7; Fig. 11f). Parameter c 50 had correlation with Delta SF for two signals (r = 0.18, p = 0.04 for low gamma; r = − 0.41, p < 0.001 for high gamma; Fig. 11a,b), but no significant correlations were found between Delta SF and b for three signals (r = − 0.03, p = 0.74 for low gamma; r = − 0.16, p = 0.06 for high gamma; r = − 0.06, p = 0.49 for MUA; Fig. 11a-c). Similar results can be found with more data points by using qualified sites for each of three signals (LG, n = 325; HG, n = 375; MUA, n = 183) ( Figure S8). It is worth noticing that model parameter m 2 has a significant correlation with Delta SF and it has a range wider than that of parameter m 1 for HG (Fig. 11b), which implies that SF-related m 2 is the main factor regulating the diversity of normalization for HG. However, parameter m 1 is more correlated to Delta SF than parameter m 2 is for MUA (Fig. 11a,c), which suggests that SF-related m 1 is the main factor regulating the diversity of normalization for MUA. Interestingly, parameter m 1 had similar trends with m 2 for LG. Taken together, these correlation results further suggest that low gamma, high gamma, and MUA are regulated by the different manner of normalization; and the relationship between stimulus parameters and model parameters (m 1 , m 2 , and m 1 -m 2 ) consistently shows that normalization in high gamma is related to sites' tunings to spatial frequency. Figure 11. The relationship between fitting parameters and stimulus factors (Delta SF and Delta Ori). 
Subfigure (a) presents the relationship between Delta SF and the four fitting parameters (from left to right: c 50 , m 1 , m 2 , (b) for low gamma (LG, n = 138 sites). Subfigures (b) and (c) are similar to (a), but for high gamma (HG, n = 138 sites) and MUA (n = 138) respectively. Subfigure (d) shows the scatter plot of Delta Ori and the four fitting parameters (from left to right: c 50 , m 1 , m 2 , (b). Note that Spearman's correlation was used to account for the relationship between the stimulus factors and the fitting parameters. Subfigures (e) and (f) are similar to (d), but for high gamma and MUA respectively. All the significant correlations were labeled as red fonts. www.nature.com/scientificreports/ The comparison of model parameters between A17 and A18. The main finding in the previous section is that normalization parameters (m 1 , m 2 , and m 1 -m 2 ) for high gamma are highly related to the discrepancy of spatial frequency between stimulus and sites' preferences ( Figs. 10e and 11b). This finding might be due to the fact that recording sites in A17 and A18 have different preferred spatial frequencies (preferred SF is lower for A18 than for A17 37 ). We further checked whether this is the case. Based on the shift of the receptive field center of each recording site, we can determine whether a recorded site belongs to A17 or A18 38 . We measured the distance (in the unit of millimeter) of each site toA17/A18 border (A17: negative value; A18: positive value). Then the fitting parameters in the normalization model were compared with this distance index. For low gamma, there was no significant correlation between fitting parameters related to normalization and distance index (c 50 : r = − 0.15, p = 0.08; m 1 : r = − 0.07, p = 0.39; m 2 : r = − 0.05, p = 0.56; Fig. 12a-c), except that parameter b, which was related to the orientation selectivity, was significantly correlated with distance index (r = 0.28, p = 0.001; Fig. 12d). As we predicted, for high gamma, fitting parameters (c 50 , m 1 , and m 2 ) were all significantly correlated with distance index (c 50 Taken together, the model parameters of low gamma, high gamma, and MUA have very different distribution patterns in brain areas 17 and 18. This result again implies that distinct neural mechanisms are behind low gamma, high gamma, and MUA, and normalization is region-specific, pathway-specific, and stimulus-specific. Figure 12. The correlation between brain areas (A17 vs. A18) and fitting parameters. The distance from the border between Area 17 and Area 18 was used to represent the spatial position of the individual recording site. The A17/A18 border was determined through the previous work 38 . Subfigure a shows the relationship between the distance from A17/A18 border (distance index) and fitting parameter c50 for three signals (from top to bottom: low gamma, high gamma, and MUA). Subfigures (b-d) are similar to (a), but they account for the relationship between distance index and fitting parameters m 1 , m 2 , b, and m 1 -m 2 respectively. Spearman's correlation was used to account for correlation measurement. Note that the sites with negative distance index are located in A17, while the positive distance index means A18 (zero is marked by red dot line). The p-values (T-test) located above red dashed lines in each subplot represent the statistical significances for whether parameters in brain Area 17 are significantly different from Area 18. 
The data points in each subfigure were divided into six groups based on their ranking order for values of each variable. The error bar in each group denotes the mean value (black circle) and standard deviation (black bar) of normalization parameter. Discussion In this work, we have systematically studied the response patterns of MUA and gamma oscillations induced by plaid stimuli with a descriptive model to unify normalization effects in both spiking activity and gamma oscillations. We then quantitatively compared normalizations in different neural signals. We demonstrated that gamma oscillations in cat visual cortex are mostly regulated by cross-orientation suppression but in a manner different from the regulation of spiking activity (MUA). Furthermore, we found that the model parameters for gamma oscillations were more diverse than those for MUA. Model parameters that capture diversities of low/ high gamma and MUA showed distinct relationships with discrepancy of spatial frequency between stimulus and preference of sites. As for spatial transition on brain mapping from cat area 17 to 18, model parameters of low/ high gamma and MUA also showed three distinct patterns. Overall, our results imply that the neural mechanisms for normalizations of gamma oscillations and MUA differ from each other. Comparisons between existing studies and the current work. Our study demonstrates that crossorientation suppression of gamma oscillation is a common phenomenon in anesthetized cats. This finding of cross-orientation suppression of gamma oscillation is consistent with previous studies implemented in awake monkeys 21,22 and humans 24 . However, no explanation or descriptive model has been used to account for this phenomenon on gamma oscillation. Compared with previous studies, we have made progress in several directions. (1) We derived a spectrumfitting procedure to capture the signature of the narrow-band gamma oscillation, which is quite different from studies on the broad-band component of the LFP [39][40][41] . Only a few studies have considered the separation of the narrow-band and broad-band oscillations 35,41 , which are quite important and necessary. (2) An interaction index was derived to quantify interaction effects between the responses induced by the two-component gratings used to generate the plaids 22 . Through this method, we were able to compare the response patterns among gamma oscillations and MUA. (3) We proposed a modified normalization model adapted to our data by adding and adjusting fitting parameters based on previous models 28,29 . Our study revealed several interesting findings: first, the response patterns of the gamma oscillations for some sites showed cross-orientation facilitation instead of suppression. Second, although the response patterns of both gamma oscillations and MUA were different, they could be depicted by a unified normalization model. Third, the normalization parameters for gamma oscillations had a broader range than that for MUA. Comparison of the modified normalization model with other models. Normalization models 28,29 have previously been provided to explain the cross-orientation suppression of spiking activity. We are curious about whether the existing normalization models 27,31 could also explain the response patterns of gamma oscillations to plaids. The existing normalization model 29 is expressed as: We directly applied this model to our data but found that it was not suitable for explaining our data (Figure S2a in supplementary information). 
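For reference, the contrast-gain normalization model in question, as it is written further below in this section, is of the form

$$R(c_1, c_2) \;=\; R_{\max}\,\frac{c_1^{\,n}}{c_{50}^{\,n} + c_1^{\,n} + (k\,c_2)^{\,n}},$$

with c 1 the contrast of the driving grating, c 2 the contrast of the orthogonal mask, and k the weight of the cross-orientation term in the denominator.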
This result implies that the gamma oscillations induced by the superimposed gratings were more complicated than assumed by the normalization model based on contrast gain control 42 . Based on the assumption that the cross-orientation suppression of gamma oscillation may be due to the competition between the two-component gratings of the plaid 22 , we modified the previous normalization model (Modified model denotes as M 1 ) by introducing a weighting variable (b) to combine the responses driven by the two-component gratings. Furthermore, according to the observation of our response patterns, we utilized two independent exponent variables (m 1 and m 2 ), rather than a common one 28 (n), for the numerator and denominator in the normalization model. According to this way, another normalization model used in previous studies 27,31 was also modified (M 2 ). We applied the above two modified normalization models (M 1 , M 2 ) to our response patterns of gamma oscillations and MUA, and compared the performance of M 1 and M 2 with our model (M 0 ). The equations for these three models are: The goodness of fit of these three models was calculated for all the recording sites ( Figure S2b-e), and Model M 2 showed the lowest performance. We also compared the fit response with the real response for each site concerning the interaction index (details provided in the methods) for these three models ( Figure S3). A bootstrap procedure was used to compare the correlation coefficients between the fit and real responses. The coefficient was higher with our model (M 0 ) than with the other two models ( Figure S4). In summary, our model (M 0 ) showed the best performance of all three normalization models. Despite the good performance of our normalization R = R max · c 1 n c 50 n + c 1 n +(k · c 2 ) n . M0: M1: R 1 = R max · c 1 n c 50 n + c 1 n +(k · c 2 ) n , R 2 = R max · c 2 n c 50 n + (k · c 1 ) n +c 2 n , R = b·R 1 +(1−b)·R 2 . M2: www.nature.com/scientificreports/ model, it lacks a description of the neural circuit, and therefore cannot interpret the neural mechanism behind the response patterns of the gamma oscillation. In future work, more mechanistic models such as a model of dynamic systems based on the interactions between excitatory and inhibitory neurons should be created to obtain a full understanding of the changing effects of plaids on gamma oscillations. The characteristics of two distinct narrow-band gamma oscillations. Gamma oscillation is an outstanding feature of local field potentials (LFP) in the visual cortex, but the neural mechanisms and functions are still under debate. Early studies supported that gamma plays important roles in binding visual information 4,9 ; however, growing evidence showed that gamma oscillation is stimulus dependent 2,43 and not always visible 41,44,45 , which implies that gamma is a by-product of network activity and it may not be essential for cognitive functions. Consistent with early work 46,47 , we found two distinct gammas in the visual cortex. This is an interesting result for both hypotheses about the functional roles of gamma oscillations. For the hypothesis that supports gamma is a by-product of the neural network, how the neural network generates two narrow-band gamma oscillations is a challenging question. Moreover, for the hypothesis that support gamma is important for cognitive functions, the challenging question is to answer the different roles that the two gamma oscillations have. 
This paper is not intended to decide which sides are right, but the fact that gamma oscillations are regulated by normalization in a way different from MUA suggests that gamma oscillations are important signals for us to understand the neural network. Unlike previous studies that reported the existence of two gamma oscillations in the spiking activity of cat visual cortex 47 , the gamma oscillations measured in our study were acquired by carefully separating the narrowband components from the broad-band components of the LFPs. There are several reasons why we emphasize the importance of narrow-band gamma oscillations. The broad-band component of an LFP may imply spectral properties of the excitatory postsynaptic potentials or leakage of the high frequency components of the spiking activity into the LFP [48][49][50] . In contrast, the narrow-band oscillation in an LFP may reflect the interaction between the fast-spiking inter neurons and pyramidal cells 51 . The narrow-band gamma oscillation is quite different from the broad-band component of LFPs. Recent work 46 demonstrates that two narrow-band gamma oscillations coexist in the LFP of primates, with two distinct frequency bands (slow gamma, 25-45 Hz, and fast gamma, 45-70 Hz). However, the frequency ranges of our low gamma (30-70 Hz) and high gamma oscillations (70-100 Hz) were higher than reported in the previous study. It has been demonstrated that the peak frequency of gamma oscillations depends on the anesthetization level 52,53 . Whether or not the difference between the frequency ranges was inherited from different mechanisms needs to be further investigated. Our results showed that the characteristics of these two narrow-band gamma oscillations are distinct from each other. First, there was no significant correlation (r = 0.02, p = 0.78) between low gamma (LG) and high gamma (HG) in respect to the interaction index (Fig. 4c). This is an interesting result that plaid can induce two distinct response patterns for LG and HG simultaneously. Second, the fitting parameter (m 1 -m 2 ) for low gamma is not significantly correlated with Delta SF, while (m 1 -m 2 ) for high gamma is affected by specific spatial frequency. Finally, the diversity of (m 1 -m 2 ) for high gamma (not low gamma) is region-specific for brain areas (A17/A18). These correlation analyses imply that low gamma and high gamma are regulated by different rules of normalization. Interestingly, the model parameters (m 1 -m 2 ) and m 2 are widely distributed and strongly correlated with the Delta SF for high gamma. Our current study cannot fully reveal the neural mechanisms underlying the results of correlation and diversity for HG responses. Neural normalization might involve multiple neural circuitries, including feedforward signals, local recurrent connection, and feedback connections, similar to neural oscillations 43,45,54 . Characteristics of both normalization and oscillations are combined effects of these neural circuits 37 . We speculate that high gamma and its normalization is generated pre-cortically 29,47 . In the pre-cortical stage (retina or LGN), the X and Y cells generate their own high gamma with distinct spatial frequency preferences. Therefore, we see distinct values of (m 1 -m 2 ) and m 2 between A17 and A18, which leads to a strong correlation between Delta SF and (m 1 -m 2 ). However, low gamma is generated in the cortical network, which combines the inputs of X and Y cells from LGN. 
The combination of multiple feed-forward inputs will weaken the relationship between normalization and spatial frequency preferences. Recent modeling work also implies that low and high gamma oscillations are distinct signatures of neural connections at different spatial scales 55 . In future work, in order to dissect the different circuitry, it will be worth investigating normalizations and oscillations with plaids at different stimulus sizes and spatial frequencies 27,28 . It is also worth building a dynamic model 56 to further understand the parameters of the descriptive normalization model behind these two gamma oscillations and their normalizations. Data availability The data and code used in our current study have not been deposited in a public repository because the electrophysiological data of cats were stored in a self-customized format. Requests for the data should be addressed to the corresponding author.
11,832.8
2021-03-02T00:00:00.000
[ "Biology", "Physics" ]
Biological Signal Processing with a Genetic Toggle Switch Complex gene regulation requires responses that depend not only on the current levels of input signals but also on signals received in the past. In digital electronics, logic circuits with this property are referred to as sequential logic, in contrast to the simpler combinatorial logic without such internal memory. In molecular biology, memory is implemented in various forms such as biochemical modification of proteins or multistable gene circuits, but the design of the regulatory interface, which processes the input signals and the memory content, is often not well understood. Here, we explore design constraints for such regulatory interfaces using coarse-grained nonlinear models and stochastic simulations of detailed biochemical reaction networks. We test different designs for biological analogs of the most versatile memory element in digital electronics, the JK-latch. Our analysis shows that simple protein-protein interactions and protein-DNA binding are sufficient, in principle, to implement genetic circuits with the capabilities of a JK-latch. However, it also exposes fundamental limitations to its reliability, due to the fact that biological signal processing is asynchronous, in contrast to most digital electronics systems that feature a central clock to orchestrate the timing of all operations. We describe a seemingly natural way to improve the reliability by invoking the master-slave concept from digital electronics design. This concept could be useful to interpret the design of natural regulatory circuits, and for the design of synthetic biological systems. 1 Derivation of a reduced model for the JK-latch with independent heterodimer binding sites The reactions describing the genetic JK-latch, presented in Table S1 and S2 can be classified into three different categories: protein-protein reactions (including protein degradation), protein-DNA reactions and gene expression (including transcription and degradation of mRNA and translation into proteins). A full deterministic model can be derived by setting up a rate equation for the mean concentration of each biochemical species in the reactions. A first step towards a reduced model is to assume the occupation states of the promoters (that is the protein-DNA reactions) to be in equilibrium with the current transcription factor concentrations at all times. Since all TF's act as repressors, we assume that the only promoter state contributing to mRNA production is the unoccupied one. Therefore the rate equation for mRNA of, for instance, gene A is, where ν m A is the maximal transcription rate, λ m the mRNA degradation rate and P A (t) denotes the promoter activity function, that is, the probability of the promoter to be unoccupied and free to bind RNA polymerase at a certain point in time. Hence, assuming all protein-DNA reactions to be in equilibrium, P A (t) describes the equilibrium probability to find the promoter unoccupied as a function of current transcription factor concentration. This probability can also be calculated by thermodynamic models, corresponding to [1,2]. Their specific form for the genetic JK-latch is defined in the boxes of Figs. 2 and 3. As a further simplification, we assume all protein-protein reactions to rapidly reach equilibrium with the current total concentration of reacting proteins. 
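For concreteness, with the definitions above, the mRNA rate equation for gene A takes the form

$$\frac{dm_A}{dt} \;=\; \nu^m_A\, P_A(t) \;-\; \lambda_m\, m_A,$$

a reconstruction based on the stated roles of ν^m_A, λ_m, and P_A(t). The quasi-equilibrium treatment of the protein–protein reactions is then applied as follows.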
For instance, for the concentrations of all protein species containing A and K this leads to the following set of equations 1 where we can additionally set K = K tot − KA. After solving this system for the concentrations of homoand heterodimers all promoter activity functions in Figs. 2 and 3 of the main text can then be expressed in terms of total concentrations, i.e., P A (B 2 , KA) = P A (A tot , B tot , K tot , J tot ). Altogether, we are left with two equations left to describe the total concentration of each gene product. For instance, the equations for gene product A are As a last step, we set Eq. (6) to zero thereby assuming mRNA translation to be fast with respect to transcription (we will discuss this assumption in the next section). Solving the resulting equation for m A and substituting it in Eq. (5) leads to with an effective maximal expression rate ν A = ν m A ν p A /λ m . Thus, each gene product in our models can be described by a single effective equation. 2 Derivation of a reduced model for the genetic J-K latch with mutually exclusive heterodimer binding sites Making the same quasi-equilibrium assumptions for protein-protein and protein-DNA as in Section I, the reduced model of the genetic J-K latch with mutually exclusive binding sites for the heterodimers KA and JB reads where only the operator occupancy functions of the heterodimers now also depend on the respective other heterodimer: (9c) However, in order to introduce a delay required for a successful toggle operation, the unbinding kinetics of the overlapping operator sites needs to be slow (see main text), such that the quasi-equilibrium assumption that lead to Eqs. (9a) and (9b) is not strictly valid. To describe the slow dynamics of the overlapping operator complex we set up the master equation for the occupancy states of the overlapping operators. There are three states to be accounted for: (i) the binding site for KA is occupied, (ii) the binding site for JB is occupied and (iii) both binding sites are unoccupied. We denote the probabilities of these states as q KA , q JB and q 0 , respectively. The probabilities equivalent to Eqs. (9b) and (9c) to find the operator in an unoccupied state are then given by O KA = 1 − q KA and O JB = 1 − q JB . Taking k on to be the on rate of both operators and k off the respective off-rate, the master equations read: These equations are controlled by the external variables KA and JB . For given values of these variables, the system has a unique fixed point, which we denote by (q * KA ,q * JB ,q * 0 ). In the fixed point, the right hand side of Eq. (10) becomes zero, thus we can rewrite Eqs. (10a) and (10b) aṡ Furthermore, we want to investigate Eqs. (10) under a constant toggle signal. Therefore, it is reasonable to assume that both heterodimers KA or JB are abundant in the system at all times. Then, the effective on-rates k on KA and k on JB are much faster than the off-rate k off and thus the probability q 0 to find the overlapping operator sites unoccupied is approximately zero. Additionally, substituting q KA = 1 − O KA and q JB = 1 − O JB , the approximated equations can be written aṡ We can identify the fixed points O * KA and O * JB with the quasi-equilibrium operator occupancy functions defined in Eqs. (9b) and (9c), which are functions of the time dependent heterodimer concentration. These equations can then be solved by variation of the constant: with "memory kernel" g k (τ ) = k off e −k off τ . 
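A small numerical sketch of this statement — that relaxation of the operator occupancy toward its quasi-equilibrium value at rate k_off is equivalent to a distributed delay with the exponential kernel g_k(τ) = k_off e^(−k_off τ) — is given below. The step-like target O*(t) and the value of k_off are illustrative choices, not parameters from the model in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_off = 0.2                                    # slow off-rate -> mean delay 1/k_off = 5 time units

def O_star(t):
    """Quasi-equilibrium occupancy target (an illustrative step input)."""
    return 1.0 if t > 10.0 else 0.2

def relaxation(t, y):
    # dO/dt = k_off * (O*(t) - O): relaxation toward the quasi-equilibrium value
    return [k_off * (O_star(t) - y[0])]

t_eval = np.linspace(0.0, 60.0, 121)
sol = solve_ivp(relaxation, (0.0, 60.0), [O_star(0.0)], t_eval=t_eval, max_step=0.1)

# Equivalent distributed-delay form: O(t) = integral over tau >= 0 of g_k(tau) * O*(t - tau),
# with the history held at the initial value for negative times.
tau = np.linspace(0.0, 200.0, 4001)
kernel = k_off * np.exp(-k_off * tau)
o_star_vec = lambda s: np.where(s > 10.0, 1.0, 0.2)
O_conv = np.array([np.trapz(kernel * o_star_vec(t - tau), tau) for t in sol.t])

print(np.max(np.abs(sol.y[0] - O_conv)))       # the two descriptions agree up to discretization error
```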
Instead of accounting for the initial conditions in a separate term, we formally assume that for all negative times the system is held fixed at the initial point and integrate infinitely far into the past. This auxiliary assumption also simplifies the initial-condition problem for the delayed dynamical system: for a distributed delay, instead of a single initial value, strictly the entire history prior to t = 0 has to be specified. Since we are interested in the qualitative long-term behavior of the solution, we take this history to be a single point from which the system is released at t = 0. Putting this result back into Eq. (8) leads to the delayed differential equations with exponentially distributed delay introduced in Fig. 3 (Model 2) of the main text.

Additional delays in the reduced model. For the following stability analysis, we include the additional delay caused by the translation process. While this is done for the sake of completeness, this additional delay is about ten times smaller than the delay caused by the overlapping heterodimer operators and is therefore not considered in the discussion of the main text. The additional delay arises from the reaction chain between the beginning of mRNA transcription and the completion of protein translation. In principle, these kinds of delays alone can already lead to oscillations [4,5]. Consider the rate equation (14) for an arbitrary protein concentration A, with regulated transcription, linear degradation and an explicit equation (15) for its mRNA, denoted by m_A. Here, P_A(t) denotes an arbitrary promoter activity function of gene A. Solving Eq. (15), treating the initial conditions as above, and putting the result back into Eq. (14) leads to an equation in which the promoter activity enters through a distributed delay, with an effective expression rate ν_A = ν_p ν_m / λ_m and a delay kernel g_λ(τ) = λ_m e^{−λ_m τ}. Thus, taking the dynamics of mRNA into account leads to an additional distributed delay acting on the entire promoter activity function. In a reduced model the protein concentration therefore responds to a change in promoter activity on a timescale given by the mRNA degradation rate. This is generally valid for any gene controlled by regulated recruitment. There are, however, mechanisms of gene regulation involving active degradation of mRNA [6,7]; in that case, the dynamics of mRNA have to be accounted for explicitly.
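The exponential delay kernels used here have a convenient numerical property: a distributed delay with kernel λ_m e^{−λ_m τ} is equivalent to carrying one auxiliary ODE (the "linear chain trick"), so the reduced model can be integrated without storing any history. The sketch below checks, for an arbitrary step in promoter activity and made-up rate constants, that the explicit mRNA model and its distributed-delay reduction give the same protein trajectory.

```python
import numpy as np

# Compare the explicit mRNA model (Eqs. 14-15) with the reduced distributed-delay form
# for a step change in promoter activity. All rate constants are illustrative (1/min).
nu_m, nu_p, lam_m, lam_p = 1.0, 10.0, 0.2, 0.05
nu_eff = nu_p * nu_m / lam_m        # effective expression rate of the reduced model

def P_A(t):
    # arbitrary test input: promoter activity steps up at t = 50 min
    return 1.0 if t > 50.0 else 0.1

dt, T = 0.01, 400.0
m = A_full = A_delay = D = 0.0
full, delayed = [], []
for t in np.arange(0.0, T, dt):
    # explicit two-variable model
    m      += dt * (nu_m * P_A(t) - lam_m * m)
    A_full += dt * (nu_p * m - lam_p * A_full)
    # reduced model: D is the exponentially delayed promoter activity
    # (linear chain trick: dD/dt = lam_m * (P_A - D) reproduces the kernel g_lambda)
    D       += dt * (lam_m * (P_A(t) - D))
    A_delay += dt * (nu_eff * D - lam_p * A_delay)
    full.append(A_full)
    delayed.append(A_delay)

print("max |difference| =", np.max(np.abs(np.array(full) - np.array(delayed))))
```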
Linear stability analysis on a system with distributed delays

Here we discuss linear stability analysis for nonlinear systems with multiple cascaded, exponentially distributed delays, closely following the book of [8]. The results presented here are generally applicable to any system of that kind. The model system considered here is the simple JK-latch with overlapping heterodimer operators, Eqs. (8) and (9). As discussed in the previous sections, it contains delays on the promoter activity functions P_A and P_B and, within those, on the operator occupancy functions O_KA and O_JB. The memory kernels g_k(τ) and g_λ(τ) of these delays are exponential distributions with rate parameters k_off and λ_m, respectively. To perform a linear stability analysis on our model system, it is first important to note that while delays change the stability of a fixed point, they do not alter its position with respect to the undelayed system. This is easy to see: consider a fixed point FP = (A*, B*) of the undelayed system and its delayed counterpart, in which every occurrence of the dynamic variables is replaced by a delayed term, where D_{g_k} abbreviates a distributed delay with memory kernel g_k. We assume that the delayed system has stayed in this fixed point for a very long time, such that the delayed variable can be written as D_{g_k}[A](t) = ∫_0^∞ g_k(τ) A(t − τ) dτ = A* for any normalized kernel g_k(τ). Therefore, a fixed point of the undelayed system is a fixed point of the delayed system as well. To perform a linear stability analysis on the model system Eq. (8), we must take into account that the dynamic variables A and B occur within different delay integrals and hence need to be treated separately in the linearization around a fixed point FP. Therefore, we categorize A and B by the delays acting on them: the model system, Eq. (8), can be written more formally in terms of delayed variables carrying the subscripts g_λ and g_λ,k. Here, variables with subscript g_λ occur in functions which are delayed only by mRNA degradation, whereas variables with subscript g_λ,k are delayed by mRNA degradation and by the off-rate of the overlapping operators. We can now linearize the system around a fixed point FP = (A*, B*) and obtain a linear delay system in which D_{g_λ} denotes a delay with kernel λ_m e^{−λ_m τ} and D_{g_k} a delay with kernel k_off e^{−k_off τ}. Since the position of the fixed point does not change for delayed variables, all partial derivatives are evaluated at the same point FP. Taking into account that the functionals D are linear, the system can be written in terms of the delayed perturbations. We make the usual ansatz for a linear system, Ã = c_A e^{zt} and B̃ = c_B e^{zt}, with z ∈ C to be determined. With this ansatz, a delayed variable becomes D_{g_λ}[Ã](t) = L(g_λ(τ); z) c_A e^{zt}, where L(g_λ(τ); z) denotes the Laplace transform of the delay kernel; this holds for any normalized delay kernel. Since g_λ(τ) is an exponential distribution, its Laplace transform takes a particularly simple form, L(g_λ(τ); z) = λ_m / (λ_m + z). Variables with two delays can be evaluated in a similar way; the Laplace transforms of the two kernels simply multiply. Putting these results back into Eqs. (25) and (26) yields a linear equation for the coefficients c_A and c_B. Note that all partial derivatives have to be evaluated at the fixed point under consideration. Subtracting the left-hand side from the diagonal elements of the right-hand side leads to the familiar form of an eigenvalue problem. Here, in contrast to a common linear stability analysis, additional polynomial terms are incorporated, representing the delays. In order for the system to have a nontrivial solution, its determinant has to vanish; this yields a characteristic equation for the exponents z. Evaluating the determinant, we obtain a polynomial of 6th order in z. In order for the considered fixed point to be stable, each solution of Eq. (33) must have a negative real part. The timescales of the system considered here are given by τ_λ = 1/λ_m and τ_k = 1/k_off, which are the mean values of the memory kernels. These characteristic times are usually referred to as the mean delay [8]. Written in terms of the mean delay τ, the Laplace transform of an exponential distribution becomes 1/(1 + τ z). Thus, if the delay is small, its contributing terms in the characteristic equation are close to one and therefore do not change the stability of a fixed point.
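A sketch of how such a stability check can be carried out numerically is given below. The 2×2 coupling structure, the rates and the mean delays are invented for illustration (they are not the actual partial derivatives of the JK-latch model); with both exponential delays on the cross-couplings, clearing denominators again yields a 6th-order polynomial in z whose rightmost root decides stability.

```python
import numpy as np

# Stability check for an assumed two-gene linearization in which the cross-couplings
# carry both exponential delays, L_l(z) = 1/(1 + tau_l*z) and L_k(z) = 1/(1 + tau_k*z).
# Clearing denominators in det(M(z) - z*I) = 0 gives a 6th-order polynomial in z.
# All numbers below are illustrative placeholders, not the JK-latch partial derivatives.
lam_p = 0.05                 # protein degradation rate (1/min)
J_ab, J_ba = -0.4, 0.4       # off-diagonal couplings evaluated at the fixed point
tau_l = 5.0                  # mean mRNA delay 1/lam_m (min)

def rightmost_root(tau_k):
    # (z + lam_p)^2 (1 + tau_l*z)^2 (1 + tau_k*z)^2 - J_ab*J_ba = 0
    base = np.poly1d([1.0, lam_p]) * np.poly1d([tau_l, 1.0]) * np.poly1d([tau_k, 1.0])
    char_poly = base * base - J_ab * J_ba
    roots = np.roots(char_poly.coeffs)
    return roots[np.argmax(roots.real)]

for tau_k in (1.0, 10.0, 30.0, 60.0):   # mean operator dwell times 1/k_off (min)
    z = rightmost_root(tau_k)
    verdict = "stable" if z.real < 0 else "oscillatory/unstable"
    print(f"tau_k = {tau_k:5.1f} min: rightmost root {z.real:+.4f} {z.imag:+.4f}i ({verdict})")
```

Scanning a model parameter in this way and tracking the sign of the real part of the rightmost root is one way to locate the bifurcations discussed next.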
In the following we employ the derived method of linear stability analysis to investigate the genetic JK-latch with overlapping heterodimer operators, characterizing the system's qualitative behavior (e.g. oscillatory) by solving its equations numerically where necessary. Employing this scheme for different parameter values, we identify two important bifurcations. First, the system undergoes a Hopf bifurcation [9,10] if the concentration of input proteins J_tot and K_tot is increased (see Fig. S2); it is then oscillatory over a certain concentration range until it becomes stable again. Additionally, the system exhibits another Hopf bifurcation if the mean delay time is increased (i.e. the parameter k_off is decreased) while a constant toggle signal is applied. This is illustrated in Fig. S3A: at a critical operator dwell time τ_crit ≈ 32 min the system becomes oscillatory, with a period that increases with the dwell time. In the form in which we set up the reduced model for the genetic JK-latch with overlapping heterodimer operators, we couple the dynamics of protein concentrations with the intrinsically fluctuating switching of operator states. Therefore, although the deterministic model provides valuable insights into the working principles of the circuit, it does not quantitatively reproduce the period of the stochastic system. This is shown in Fig. S3B, where it is apparent that the average stochastic period is much shorter than the period of the deterministic model. The stochastic system also displays oscillations in a regime of short heterodimer dwell times, in which the deterministic system does not oscillate at all. In fact, it is known that a small number of reactant molecules together with negative feedback and time delay in gene expression can lead to delay-induced instabilities, such that a system turns oscillatory even when its deterministic counterpart does not [4]. If the delay (dwell time) is long, the strength of the toggle signal is chosen appropriately and the signal duration is well timed, then high probabilities of a correct toggle response can be achieved, as can be seen in Fig. S4A and B.

Genetic master-slave latch

We extended the reaction system of the JK-latch to a master-slave latch by adding further regulation of the signal genes J and K (see Table S4). This was done, firstly, by including homodimerization of the signal proteins: J_2 and K_2 bind two operators on the promoter of the respective other gene to repress it. Additionally, gene J is repressed by heterodimers KA binding to a third operator in its promoter region, while gene K is repressed by JB in the same way. Notably, the restrictive promoter layout of the overlapping heterodimer operators on genes A and B with very slow unbinding rates is no longer necessary in the master-slave latch and is changed to independent operators with normal unbinding rates. With this additional regulation, the signal genes J and K become bistable when their maximal transcription rate is increased externally (by the toggle signal) and thereby form the master toggle switch. The master toggle switch then settles into a state that is determined by the current concentrations of heterodimers and therefore by the current state of the slave latch. Depending on the operator strengths for homodimers and for heterodimers, two kinds of erroneous responses are possible upon a toggle signal. If homodimers bind too strongly relative to heterodimers, the master switch is prone to stochastic fluctuations at the onset of the toggle signal and is biased towards whichever signal protein is first abundant enough to form homodimers. If, on the other hand, heterodimer binding is too strong, random single bursts of the repressed signal gene can be sufficient to switch the state of the master latch. Fig. S4C shows the toggle probability as a function of these key parameters for a toggle signal duration of 60 min.
It can be seen that the master-slave latch is robust in the sense that it responds correctly for a broad range of these parameters.

Table S1. All reactions involved in the genetic JK-latch with overlapping heterodimer binding sites. A graphical representation of the reaction network is shown in Fig. S1. Proteins and their dimers are denoted by capital letters; transcripts of a gene X are denoted by m_X. A gene X is represented by its promoter P_X, which can be occupied by transcription factors. Each occupation state of a promoter is represented by its own species of reactants, for which an empty operator is indicated by · and an occupied operator by the name of the respective transcription factor. In this notation the different binding sites are separated by the symbol |. To reduce the number of occupation-state combinations, the operator complex for heterodimers is separated from the other promoter states and denoted by O. To make transcription nevertheless conditional on heterodimers, we include the respective species for an empty binding site as reactant and product in the corresponding transcription reaction. This ensures that transcription requires an empty heterodimer operator to proceed but does not change its concentration. All corresponding parameter values are listed in Table S3.
JK-latch with overlapping heterodimer operators
Promoter and operator states
Additional reactions in the JK-latch without overlapping heterodimer operators

Table S2. With independent heterodimer operators, heterodimers can bind simultaneously to their binding sites. All other reactions are the same as in the JK-latch with overlapping heterodimer operators, listed in Table S1.

Parameter; Value; Description and References
Transcription, ν^m_J, ν^m_K; (0.05 − 5) min^−1; strong inducible promoter [11]; induction can be achieved e.g. by upstream binding activators [12] or via small non-coding RNAs [13]
Dimerization, s^on_J, s^on_K; 0.2 nM^−1 min^−1; assumed to be diffusion limited [17]
s^off_J, s^off

Table S5. Additional parameters used in the genetic master-slave latch. Instead of the symbol k used for rates in the slave circuit, the symbol s is used for the rates of all additional processes (dimerization and protein-DNA binding of signal proteins) in the master circuit. Since a delay in heterodimer binding is no longer needed, the assumption of very slow unbinding kinetics of heterodimers from the promoters of genes A and B has been relaxed.

Figure S2. (A) The root is critical in the sense that it is the only solution of the characteristic equation that becomes positive for certain values of J_tot/K_tot. Starting from a low concentration of input proteins, the system initially has two stable (solid line) and one unstable (dotted line) fixed point. The upper stable fixed point represents the ON state, whereas the lower one represents the OFF state. As the concentration of input proteins is increased, the two stable fixed points lose their stability at a critical concentration c_1, after which delay-induced oscillations commence (orange lines). At input protein concentration c_2 the three fixed points collapse to one; that, however, does not alter the system's oscillatory behavior. At concentration c_3 the system's single fixed point becomes stable again. This is due to the depletion of homodimers by increasingly forming heterodimers, which eventually abolishes the switch-like behavior of genes A and B.

Figure S3. (A)
At a critical dwell time τ crit ≈ 32 min the system undergoes a Hopf bifurcation and thereafter oscillates with a period that is approximately linear in τ . (B) Average period of the stochastic race condition as a function of the mean dwell time τ . In comparison to (A) the stochastic system oscillates even for small dwell times (although with very low amplitude) and has a shorter period than the deterministic model. The latter is due to the fact that the deterministic model couples the dynamics of protein concentrations with probabilities of operator switching, which has a distorting effect on the oscillatory dynamics. J-K latch (exclusive heterodimer binding) Master-Slave latch Figure S4. Toggle probability p toggle as a function of key parameters. Contours indicate equal probability to toggle successfully into the complementary state. Each data point (a grid of 30 by 30 per plot) is estimated by testing the final state of 5000 simulation runs of the respective full stochastic model. (A) Dependence of p toggle on the duration T and the strength of the toggle signal in the JK-latch. Here the strength of the toggle signal is tuned by a concerted variation of the transcription rates of genes J and K, i.e., ν J = ν K = ν. (B) Dependence of p toggle on the duration T of the toggle signal and the off-rate k off for unbinding the overlapping operator sites. As expected, p toggle increases as the delay (dwell time of heterodimers on the binding site) is increased. (C) Dependence of p toggle in the master-slave latch on the additional rates s off J2 , s off K2 for unbinding the homodimer operators and s off JB , s off KA for unbinding the heterodimer operators in the additional toggle switch (master latch). The master-slave latch is robust in that sense that p toggle is high for a broad range of parameters. Figure S5 Dynamics of the deterministic model of the master-slave latch. (A) Phase diagram of the toggle switch as a function of the effective maximal transcription ratesν A andν B . The parameters are chosen such that the system is in the bistable regime in the absence of input signals (point O) and the circuit is set to the ON state initially. The curves indicate dynamic changes ofν A andν B , incurred by applying the toggle signal (simultaneous expression of both input genes J and K) for 200 min (green solid curve) and then releasing it (red curve). In contrast to the JK-latch, even if the toggle signal is applied a long time, the system enters the correct monostable regime (A low, B high), switches to the OFF state and returns to the bistable region without approaching the other monostable regime. In particular, the master-slave latch does not oscillate, even under a continuous toggle signal.
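The toggle probabilities shown in Fig. S4 are estimated from the final states of 5000 stochastic simulation runs per parameter point. A minimal sketch of that estimate and its statistical uncertainty is given below; the stochastic simulation itself is stubbed out with a Bernoulli draw and would in practice be a Gillespie-type run of the reaction networks in Tables S1, S2 or S4.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_latch_once(signal_duration, signal_strength):
    """Placeholder for one full stochastic run of the latch; returns True if the
    circuit ends up in the complementary state after the toggle signal.
    Stubbed here with a Bernoulli draw instead of a real Gillespie simulation."""
    p_true = 0.93   # hidden 'true' toggle probability used only by this stub
    return rng.random() < p_true

def estimate_p_toggle(signal_duration, signal_strength, n_runs=5000):
    successes = sum(run_latch_once(signal_duration, signal_strength) for _ in range(n_runs))
    p = successes / n_runs
    stderr = np.sqrt(p * (1.0 - p) / n_runs)   # binomial standard error
    return p, stderr

p, err = estimate_p_toggle(signal_duration=60.0, signal_strength=1.0)
print(f"p_toggle = {p:.3f} +/- {err:.3f}")
```

With 5000 runs the binomial standard error on each estimate is at most about 0.007 (at p = 0.5), small compared with the contour spacing in Fig. S4.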
Metal-organic ion transport systems

The design of synthetic membrane transporters for ions is a rapidly growing field of research, motivated by the potential medicinal applications of novel systems. Metal-organic coordination complexes provide access to a wide range of geometries, structures and properties, and their facile synthesis and tunability offer advantages in the design of ion transport systems. In this review, the application of metal-organic constructs as membrane-spanning channels and ion carriers is explored, and the roles of the metal coordination complexes within these functional assemblies are highlighted.

Introduction

The transport of ions through biological membranes is vital to life, and ion transport has diverse roles that encompass the storage of cellular energy in the form of electrochemical gradients, cell signalling and communication, fluid balance, and pH regulation [1]. Consequently, cells contain several unique ionic environments that are compartmentalised within the plasma membrane, membrane-bound organelles, endosomes and vesicles. Because lipid membranes are impermeable to ions, ion transport must be mediated via the highly regulated action of membrane-bound proteins, such as ion pumps, which transport ions against an electrochemical gradient using active processes, and ion channels, which provide a membrane-spanning pore through which ions can diffuse [2]. While most examples of ion transport in biological systems occur via the action of proteins, ion transport can also be mediated by synthetic constructs. Such synthetic ion transporters can operate via two main mechanisms: mobile carriers bind to and increase the hydrophobicity of ions to allow them to passively diffuse through the lipid bilayer, whilst channels insert into the membrane and provide a pore through which ions can move. Because carriers and channels must interact with both the lipid bilayer and the ion of interest, they need to combine lipophilic and polar properties. A combination of several techniques is currently used to predict the structure of an active transporter within a lipid bilayer; however, it is important to note that the absolute conformation of molecules within the membrane is difficult to determine. Structural studies in solution and crystallography in the solid state can only indicate potential structures of transporters in a membrane. Therefore, authors often propose the structures of the active ionophores based on studies outside bilayers and on indirect evidence from transport assays. These transport assays are often conducted in synthetic vesicles, and the concentration of a given ion can be monitored via a fluorescent or colorimetric indicator, e.g., 8-hydroxypyrene-1,3,6-trisulfonic acid (HPTS) for H+ and lucigenin for Cl−, or ions can be monitored via ion-selective electrode readings. Additionally, patch clamp measurements or phospholipid bilayer conductance (PBC) measurements can be used to observe channel opening and closing events in patched sections of a curved bilayer, such as a cell membrane or vesicle, or in a planar lipid bilayer, respectively. These analytical techniques have been reviewed elsewhere [3]. Overall, as transporters become more structurally and functionally diverse, there is also a need for analytical techniques to evolve alongside them to shed light on complex processes such as self-assembly, aggregation, and stimuli-response.
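As a concrete illustration of how such vesicle assays are typically quantified, the sketch below reduces a ratiometric HPTS trace to a single rate constant by normalising between the initial value and the value after detergent lysis and then fitting a single-exponential model. The data points, the normalisation end point and the exponential form are illustrative assumptions rather than results from any of the studies discussed here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative reduction of an HPTS transport trace to a rate constant. The ratiometric
# HPTS signal (emission for two excitation wavelengths) is normalised between the
# baseline at t = 0 and the value after lysing the vesicles with detergent (taken as
# 100 % transport), then fitted with a single-exponential model. All numbers are made up.
t = np.array([0, 30, 60, 120, 180, 240, 300], dtype=float)    # time / s
ratio = np.array([0.10, 0.28, 0.41, 0.58, 0.68, 0.74, 0.78])  # HPTS excitation ratio
ratio_lysis = 0.90                                            # ratio after detergent addition

fraction = (ratio - ratio[0]) / (ratio_lysis - ratio[0])      # normalised transport, 0..1

def model(t, k, ymax):
    return ymax * (1.0 - np.exp(-k * t))

(k, ymax), _ = curve_fit(model, t, fraction, p0=(0.01, 1.0))
print(f"rate constant k = {k:.4f} /s, plateau = {ymax:.2f}, initial rate ~ {k * ymax:.4f} /s")
```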
There has been tremendous progress in the field of synthetic ion transport in recent years, and the biological effects of ion transporters are beginning to be investigated. Transporters can activate cell signalling pathways such as apoptosis, and cause changes in biological micro-environments such as lysosomal pH, although there is still much to learn about the biological impact of different ion transport processes [4][5][6]. End applications include discovering agents that produce either a therapeutic effect by replacing the function of faulty biological channels, or a toxic effect that can arise by disrupting vital biological ion gradients in target cells [7,8]. Furthermore, ion transporters themselves are becoming more diverse, as a wider variety of supramolecular structures are being investigated. Recent advances have seen the development of stimuli-responsive ion transporters, whose activity can be controlled using triggers such as membrane potential [9][10][11][12], light [13][14][15][16][17], temperature [18], pH [19], reduction [20] and enzymes [21,22]. The majority of synthetic transport systems reported to date are based on purely organic scaffolds. However, there is increasing interest in the development of active ion transporters based on metal-ligand coordination constructs. The incorporation of charged metal ions into a necessarily lipophilic membrane transporter might seem initially counterintuitive -however, researchers have developed synthetic strategies to balance the solubility properties of their constructs to enable membrane partitioning and function. Coordination chemistry offers a versatile strategy to create diverse, highly tailored structures, as ligands can be modified to tune properties such as size, lipophilicity, flexibility, and function, whilst the metals provide different coordination geometries and a convenient attachment point for several functional groups. It is possible to introduce a variety of functional groups both pre-and post-assembly, and these can include groups to enable gating, lipid chains to promote membrane interaction, and reactive handles for bio-conjugation. The metal itself can also play a role in ion-binding, as coordination can have an electronwithdrawing effect, thereby polarising nearby groups for anion binding [23][24][25]. Additionally, certain metal ions can introduce desirable properties such as luminescence to the complexes formed. The solubility of metal-organic architectures can be readily tuned [26], and stimuli-responsive behaviour can be engineered [21,27] to produce additional useful function. Encouragingly, metal-organic constructs have already found application as therapeutics [28,29], diagnostics [30] and delivery vehicles [31,32], which paves the way for the further application of metal-organic transport systems as therapeutics. In this article, we will review the development and application of metal-organic architectures as ion transport systems. We will highlight the advantages of incorporating metal-ligand coordination chemistry into the design of membrane transporters, alongside key design strategies that have emerged from existing work in this field, and the exciting potential for further advancement. Ion channels and membrane-spanning constructs Ions bear an electrostatic charge, and therefore they are often too hydrophilic to pass through biological membranes. In biology, ions commonly move through membrane-spanning protein channels which provide a pore through which ions can move. 
Depending on the size and shape of these pores, certain ions can be screened out selectively while others pass through (e.g., the potassium channel) [33]. Synthetic channels must therefore resemble natural channels in some aspects, and commonly, they have a membrane-spanning lipophilic domain. They can either have an intrinsic pore or self-assemble to form a pore through which ions can pass. Due to the requirement for large molecular structures to span membranes, self-assembly has been explored as a promising building strategy to large, complex structures, and metal coordination in particular can facilitate the self-assembly of ligands to create functional structures. In this section, the coordination-drive self-assembly of functional ion channels is reviewed based on the metal chosen to template the formation of the membranespanning structure. Palladium Palladium(II) is a popular metal ion of choice for building synthetic ion channels, as palladium-ligand bonds have highly predictable and rigid square planar coordination geometries, which allow for the predictable assembly of ligands into larger architectures. Thus, the coordination between palladium and hydrophobic ligands is often explored as the driving force for the self-assembly of small molecules into functional supramolecular ion transporters. The earliest example of a membrane spanning Pd(II) complex was published by Fyles and Tong in 2007 [34]. The macrocyclic complex 1 was based on a structure initially reported by Ogura and co-workers [35] and was subsequently modified to bear lipophilic membrane-spanning anchors. The structure in Fig. 1a) builds upon a tetrameric square opening comprised of four 4,4-bipyridine ligands constructed via coordination to Pd(II) ions in each corner. Additionally, each square planar palladium corner is coordinated to a chelating ethylene diamine ligand appended to a C16 alkyl chain which was intended to span the length of the phospholipid membrane (Fig. 1b)). Interestingly, ion conductance experiments lead to the observation of three types of behaviour; initial erratic behaviour followed by occasional, short channel-like openings, followed by frequent very long-lived and highly conducting channel openings. These long-lived channels were presumed to involve an extended aggregate of the palladium complex and lipids surrounding a toroidal pore in the membrane. Ultimately, these results highlight that self-assembled systems may not be static, and that aggregation and dynamic structural interconversion are important factors that must be considered. The work of Tong and co-workers [34] established that Pd(II) can play a structural role in assembling active ion channel components. Building on this concept, Wilson and Webb were able to use the dynamic assembly/disassembly of Pd(II) complexes to construct gated ion transporters [36]. In this system, the addition and removal of palladium was used to reversibly control the assembly of membrane-spanning channels based on ligand L1, a pyridyl cholate lipid (Fig. 2). In order to achieve palladium-gated ion transport, the authors utilised the HPTS assay [37] (based on the pH responsive fluorescent probe 8-hydroxypyrene-1,3,6-trisul fonic acid) to assess sodium transport. They observed no ion transport using L1 alone; however, upon addition of PdCl 2 to vesicles coassembled with the ligand, ion transport was observed due to the formation of a membrane-spanning complex. 
Ion transport was also observed upon addition of the pre-formed Pd(II)-(L1) 2 com-plex to vesicles. Furthermore, the reversibility of the ion conductance was demonstrated after two rounds of palladium addition followed by palladium removal using the chelator, 18S6, which scavenged and removed the palladium from the assembly. Thus, the activity of the self-assembled channel could be controlled through dynamic assembly and disassembly, providing a route to bio-reminiscent gating behaviour. To further explore novel mechanisms for the gating of channels, Webb and co-workers utilized binding interactions between channel components and biomolecules to control the spacing and geometry of ligand L2, which assembled to form Pd(II)-pyridyl cholate-based ion channels (Fig. 3) [38]. In a similar mechanism to the system reported by Wilson and Webb [36], the addition of palladium functions to cross link two pyridyl cholate ligands to yield a membrane-spanning structure capable of ion conductance (Fig. 2c) and 3c)). The ligands were also functionalized with biotin, which was designed to be presented on the membrane surface. Upon addition of multivalent avidin, a decrease in ion flow was observed via the HPTS assay and conductance measurements. This suggests that avidin was able to bind to the biotin moieties and block the channels or alter the spacing between the membrane spanning complexes to be non-ideal for transport. Additionally, aggregation of the biotin-bearing vesicles was observed, as binding to multivalent avidin binding can potentially bridge the membrane-spanning complexes on different vesicles (Fig. 3d)). Overall, this paper reflects the importance of spatially appropriate pre-organization when generating active supramolecular assemblies and the potential for biopolymers to regulate the activity of synthetic constructs. While the spatial organization of channel components can be altered by external interactions, multi-metallic architectures can form a variety of interesting geometries. Therefore, coordination to metals can instigate the arrangement of large, planar ligands into active, 3-dimensional, porous structures. This strategy was explored in 2011 by Webb and co-workers, wherein membrane spanning bis(meso-3-pyridyl) porphyrin ligands (L3) oligomerized to form pores upon the addition of Pd(II) in solution and in the lipid bilayer membrane (Fig. 4a)) [39]. A trimeric cyclic structure was proposed to be the active species, forming a triangular pore with the porphyrins constituting the faces of the structure whilst Pd (II) ions connected the edges (2, Fig. 4b)). However, in solution phase studies, linear oligomers appeared at higher concentrations, and the identity of the trimeric pore was inferred at concentrations below 2 mM. The membrane-active species was shown to transport anionic 5/6-carboxyfluorescein. The authors proposed that this dye could be transported through the triangular openings of the trimeric pore. Metal coordination complexes have the potential to assemble into aggregated porous structures, and in some cases, these aggregated metal complexes have been investigated as long-lived stable pores. Similarly, inherently porous metal-organic assemblies may form ion conducting channels, and this strategy was investigated by Kempf and Schmitzer in 2017 [40]. In this report, two types of transporters were developed based on the Pd(II) mediated selfassembly of functionalized 2,4,7-triphenylbenzimidazole ligands (Fig. 5). 
Although the structure of the active species is unknown, computational modelling suggests that individual Pd(II)-L 2 complexes were found to further associate either via pi-stacking interactions for complex 3, or direct coordination for complex 4 to form a porous aggregate. For complex 3, it was proposed that 4 units were required to span the length of the lipid bilayer, whilst for complex 4, 3 units were required. This aggregation behaviour is supported by gel formation in DMSO. While conducting studies on the antibacterial properties of these metal-organic assemblies, the authors found that complex 4 had a minimum inhibitory concentration (MIC) of approximately 40 lM against B. thuringiensis. However, addition of PdCl 2 to B. thuringiensis-incubated complex 4 (1 h) reduced the MIC value to approximately 20 lM, indicating the ability of Pd(4) to alter bacterial membrane permeability. The permeability of these assemblies was tested in a B. thuringiensis tetracycline-resistant strain. Here, the authors found that both 4 and Pd(4) decreased the tolerance of the bacteria to tetracycline, but that Pd(4) was more effective. Overall, chloride transport was observed, and the treatment of bacterial membranes with the transporter increased their permeability to antibiotics. Copper Copper-ligand complexes also have several advantages for use in constructing ion transport systems. Copper complexes are highly stable in biological contexts [41], and, in small concentrations, copper ions have a low toxicity in the body [42]. Although copper paddlewheels are sensitive to water [43], inside the lipid bilayer, they have proven to be useful scaffolds in the design of metal-organic architectures. One example of its use as an ion transporter is given in the section below. The first example of a copper-based ion channel was published by Matile and co-workers in 1999 [44]. The authors designed the channel with a ligand (L4) containing two main functionalitiesa Cu(II) ion binding site, and a rod-shaped pi-slide (Fig. 6a)). The ligand-binding site consisted of an iminodiacetate (IDA) group, which allowed for ligand aggregation, and subsequent channel formation through polyhistidine (pHis) binding to copper (II) ions ( Fig. 6b) and 6c)). This aggregation in turn created a cation binding site between two rod-shaped pi-slides, and potassium transport was confirmed via the HPTS assay. Matile and co-workers found that it was possible to block channel activity with tetraethylammonium (TEA) cations -hypothesising that this large cation binds to the pi-slides. Kim and co-workers published another example of a copperbased ion channel (complex 5, Fig. 7) in 2008 [45]. The channel itself was based on a metal-organic polyhedron MOP-18, with a large internal cavity studied extensively by Yaghi and co-workers (Fig. 7a)) [46,47]. In this work, Kim and co-workers introduced a hydrophobic shell to allow for facile insertion into bilayers. Using an HPTS assay, this channel was determined to transport protons and alkali metal ions, as shown in Fig. 7c). Here, the Cu(II)paddlewheels (Fig. 7b)) play a structural role in the construction of the polyhedron that defines the cavity of the channel. Another example of the use of copper within ion channel formation was reported by Gokel and co-workers in 2009 [48], shown in Fig. 
8 -the authors synthesised nanocapsules appended with hydrophobic chains, one capsule which assembled via the coordination of Cu(II) ions to ligand L5 (complex 6), and a similar capsule which assembled via hydrogen bonding interactions. Using planar bilayer conductance (PBC) measurements, the authors determined that channel-like behaviour was only observed in the copperseamed capsules. This implies that the copper played a structural role, which in turn, allows for the channel-like properties of these molecules. Fig. 4. a) The structure of ligand L3 reported by Webb and co-workers [39]; b) the proposed structure of the active trimeric pore, through which anionic dye molecules could be transported. Fig. 5. a) The structure of complex 3 reported to assemble into membranespanning channels by Schmitzer and co-workers [40]. b) The structure of complex 4 reported by the same authors. Colour scheme: hydrophobic parts (red), pi-stacking benzyl groups (green) and coordinating ligands to palladium (blue). In 2020, Webb and co-workers reported the synthesis of a switchable ion channel [49]. Much like the example reported by Matile [44], metal-coordination was used as a tool to drive the assembly of multiple membrane-spanning units (L6) to 'switch on' ionic conductance. In this case, the channel was composed of a chloro-bridged copper dimer complex. Each ligand consisted of a metal chelating unit and a large hydrophobic foldamer (Fig. 9a)). One foldamer was not able to interact with and span the phospholipid membrane. However, upon the addition of CuCl 2 to the membrane, two ligands were able to aggregate and bind the copper(II) ions, which formed the active species. The removal of copper(II) ions via addition of EDTA was subsequently used to 'switch off' channel activity (Fig. 9b) and 9c)). Low cation selectivity was shown via the HPTS assay, while greater anion selectivity was observed, and the transport of chloride was confirmed via assays using the halide sensitive dye lucigenin. Channel-like behaviour was confirmed via PBC measurements. The authors then measured the antibiotic activity of L6 (where R = t Bu and CH 2 CH 2 SiMe 3 ) against a strain of B. megaterium bacteria. The MIC of the Cu-L6 complexes were significantly lower than L6 only. In fact, these complexes show similar activity to alamethicin, a channel-forming peptide antibiotic; however, these complexes Fig. 6. a) Copper-based ion channels have a hydrophobic pi slide (red), and a copper-coordinating iminodiacetate head group (blue) [44]. b) The ligand is able to insert into the membrane, and c) aggregation caused by polyhistidine binding is proposed to bring the ligands into close proximity to form a channel. [46,47]. b) The copper paddlewheel structural motif. c) The functionalisation of MOP-18 with hydrophobic alkyl chains yielded complex 5, which can partition into a membrane and enable ion transport through the porous structure [45]. Fig. 8. a) Pyrogallol ligands L5 contain copper binding sites (dark blue) and hydrophobic alkyl chains (red). b) Six ligands are proposed to self-assemble to create a copper-seamed capsule 6, and c) these porous capsules are thought to insert into the membrane and enable the transport of protons or group 1 metal ions -two proposed channel formation mechanisms are shown, involving ion passage through either one or two capsules [48]. are not suitable for use in clinic due to their low haemolytic activity. Another example of a copper(II) gated system was reported in 2021 by Ohba and co-workers. 
In this example, amphotericin B served as the membrane-spanning unit which was functionalised with bipyridine to bind Copper(II). The complex displayed pHdependent and Cu(II) gated permeability to Ca 2+ , which was assessed via the use of POPC vesicles loaded with Asenazo III, a colourimetric indicator for calcium [50]. When Cu(II) is present at pH 9, a dinuclear Cu(II) 2 (l-OH) 2 (L7) 2 complex is proposed to provide parallel orientation of two ligands with a distance of about 8 Å as suggested by the crystal structure (Fig. 10). Aggregation of the ligand in POPC liposomes is suggested due to the observed hypsochromic shift in the heptaene absorbance, whilst circular dichroism revealed a weakly absorbing couplet when copper was added at pH 9.0, suggesting channel assembly via parallel association of the amphotericin moiety. Zinc Zinc(II) is another popular choice of metal ion for the construction of metal-organic cages and frameworks. Zinc ions are also essential for various biological processes, like growth, reproduction, and immune function. This may be advantageous for use in biological contexts due to the low toxicity of the component metal ions. Kobuke and co-workers [51] designed a molecular nanopore composed of self-assembled zinc porphyrin-based macrocycles with the aim of synthesising a structure that resembles photosynthetic light-harvesting antenna. Three zinc porphyrin-containing units (7, Fig. 11a) self-assembled to create a macrocycle (7 3 ) which was cross linked via metathesis (Fig. 11b)), and the authors propose that two macrocycles interact to span the bilayer (Fig. 11c)). In this case, the Zn(II) porphyrins provide a rigid skeleton upon which large, stable macrocycles can be built. Here, the macrocycles contain six carboxylate groups that point up and down to aid with insertion into, and interactions with, the hydrophilic part of the lipid bilayer membrane -using PBC experiments, these groups were found to be crucial for channel activity, as analogues without carboxylate groups were shown to be inactive. The carboxylate ions also play a role in switching the channel on and off. The authors found that electrostatic interactions with cationic amino groups on poly(amidoamine) (PAMAM) dendrimers block the channel opening, and when removed, the channel turns on again. Generally, such large pores are unlikely to show any selectivity -Kobuke and co-workers showed that even large tetrabutylammonium cations can pass through. In the presence of TMACl, the authors observed a moderate cation selectivity. In 2017, Haynes, Keyser and Nitschke [52] reported the ion channel activity of a Zn 10 (L8) 15 prismatic assembly 8 (Fig. 12ab)). The active channel species (8) was formed via subcomponent self-assembly -a technique that utilises amines and aldehydes, as well as a metal-ion template, to synthesise imine-based metalorganic architectures. Due to its unique structure the pentagonal prismatic complex can bind three types of anions in distinct pockets -perchlorate anions bind within peripheral binding Fig. 10. a) Structure of an amphotericin B-bi-pyridine conjugate b) Schematic representation of L7 interacting with the lipid membrane before and after Copper (II) complexation at pH 9. This leads to the formation of hydroxide bridged dinuclear complexes. c) The dimerised complex is proposed to aggregate to form pores [50]. 
Three ligands can self-assemble to create a metallocycle, which is subsequently cross-linked through alkene metathesis and c) two cross-linked metallocycles are thought to self-assemble to span the membrane. pockets, halide ions bind within the toroidal pore of the prism, and sulfonate ions bind above and below the prism. The initial observation of halide ion transport led to investigation into larger anions being transported through the central cavity, however, only selectivity for halides was seen. The transport was not observed to be cation dependent. Given the finding that that sulfonate ions could bind above and below the pore, the authors hypothesised that sulfonate ions could be used to block channel activity (Fig. 12c)). Correspondingly, they found that the addition of sodium dodecyl sulfate (SDS) could effectively block the ion channel. It is possible to achieve precise spatial organisation of ligands via metal-ligand coordination, and in a recent publication, Liu and co-workers synthesised three Zn 4 L 4 metallocycles 9 with functionalised sub-nanometre apertures (Fig. 13) [53]. 1,1 0 -bi-2naphthol-based ligands were functionalised with hydroxyl, ethoxyl, and pentaethylene glycol groups facing towards the centre of the pore. UV-vis titrations were used to determine the binding constants of the 3 complexes towards halides, and all complexes displayed the trend I À >Br À >Cl À from strongest to weakest binding respectively. The HPTS assay was used to assess halide transport and revealed that the ethoxyl substituted metallocycle displayed selectivity of up to 38x for I À compared to other halides (Cl À and Br À ) in line with the Hofmeister series, whilst the pentaethylene glycol substituted complex showed little selectivity, but had a lower EC 50 . The HPTS assay conducted with different cations revealed that transport is not cation dependent whilst an assay using the halide responsive dye lucigenin confirmed that the channel itself can mediate Cl À /NO 3 À antiport. Channel-like behaviour was confirmed via patch clamp measurements. Other metals Ongoing research is seeking to expand the toolbox of metal complexes that can be applied in the design of functional ion transporters. Diversifying the metal building-blocks used enables access to a wider variety of geometries that can be applied to build increasingly complex structures such as catenanes, capsules and polyhedra, whilst polyvalent metal-ligand bonds provide an anchoring point for a variety of functional moieties such a lipophilic chain, and structural motifs. Furthermore, metal-ligand bonds can have varying stabilities and reactivities, and this can be used to achieve reaction-based control of essential properties such as self-assembly, valency and conformation. Lastly, the inclusion of metals into structures such as porphyrins, can greatly change the physical properties of the transporter such as lipophilicity, solubility, and charge, and these are essential in tuning the activity and function of ion transporters. Inspired by Kobuke and co-workers [51], Tecilla and co-workers reported a zinc porphyrin-based nanopore in 2012 [54]. The authors chose to incorporate the zinc into the porphyrin ligands (L9, Fig. 14a)) to improve the solubility of the overall channel. Meanwhile, the metallocycle 10 was assembled via interactions between pyridyl donors and Re(I) ions (Fig. 14b)). 
The authors incorporated carboxylate groups into the scaffold to enable interactions with the hydrophilic portion of the lipid membrane, to mediate metallocycle stacking and to allow for on/off activity through electrostatic interactions with amino groups on PAMAM dendrimers, as previously shown by Kobuke [48]. Channel activity was inferred via a HPTS assay in 100 nm liposomes. In 2014, Winterhalter and co-workers synthesised an amphiphilic cobalt-based structure capable of pore formation (11, Fig. 15a)) [55]. The Co(III) ion plays a crucial role in formation of the positively-charged amphiphile headgroup, and the interaction of the complex with negatively-charged phospholipid head groups allows for high biological activity within lipid bilayers. The 'tail' portion of the amphiphile is composed of a long alkyl chain, containing a crown ether moiety able to coordinate cations and aid ion transport. The authors proposed pore formation occurring via 13. a) 1,1 0 -bi-2-naphthol-based ligands were functionalised with hydroxyl, ethoxyl, and pentaethylene glycol groups, and were incorporated into Zn 4 L 4 metallocycles 9. b) It was proposed that the metallocycles could pack to form a nanotube and thus a pathway for ionic conductance through a lipid bilayer [53]. two amphiphile units coming together and inducing curvature within the bilayer leaflet -this curvature strain causes ion pore formation and eventual cell rupture (Fig. 15 b-e)). Barboiu and co-workers reported porous, molybdenum-based metal-oxide capsules that could function as ion channels in 2015 (Fig. 16a)) [56]. The capsules (12) were surface-functionalised with an organic surfactant to enhance their lipophilicity and allow for incorporation into the lipid bilayer (Fig. 16b)). The capsule was formed of different molybdenum units, all with different oxidation states -but ultimately, the role of the metal was, once again, structural. Using HPTS assays, the authors found that the channel can mediate cation transport and does show some cation selectivity within the alkali metals, depending on the desolvation rates of the cation. As shown by Kim and co-workers (2008) [45], the cuboctahedral geometry is useful for synthesising synthetic ion channelsits unique geometry provides cavities and apertures for ion to pass through. Furukawa and co-workers utilised complex 13, a cuboctahedral Rh(I) metal-organic polyhedron (MOP), as an ion channel in 2017 [57]. The authors incorporated long alkyl chains into the MOP scaffold to facilitate incorporation into the lipid bilayer -however, interestingly they found that the length of the chain was proportional to the opening time of the channel. They reasoned that the MOP structure can rotate within the lipid bilayer, with ionic conductance occurring through either the triangular or square apertures, and hence shorter chains allowed for faster rotation and thus a shorter channel opening time (Fig. 17a), and 17b)). Li and co-workers used Cd(II) ions to direct the formation of discrete, nested, hexagonal structures called Kandinsky circles [58]. The corners of the Kandinsky circles were comprised of bent, multi-layered bis-terpyridine ligands L10 (Fig. 18a). Depending on the degree of cross-linking in the ligand, a range of structures of increasing diameter comprised of 1-4 nested hexagons were produced (Fig. 18b) and 18c)). Through PBC measurements, the authors found that the complexes containing 2, 3, or 4 nested [54]. 
b) The complexation is thought to form square metallocycles consisting of four ligands held by Re(I)-based corners. c) Two metallocycles are thought to interact to span the bilayer. Fig. 15. a) Amphiphilic ligand 11 is composed of a cationic cobalt head group (blue) and a lipophilic tail composed of crown ether cation binding group (green) and a lipophilic chain (red) [55]. The authors propose that b) the lipid inserts into the membrane, c) binds a cation, and d-e) can dimerise to cause membrane curvature and the formation of pores. Fig. 16. a) A representation of a molybdenum oxide capsule 12, which was subsequently modified to bear alkyl chains via surfactant attachment [56]. b) The alkyl chains promote interaction with the membrane whilst cations can move through the porous capsule. hexagons could incorporate into lipid bilayers to form a web-like structure, able to form a transmembrane channel within bacteria-like lipid membranes (Fig. 18d)). This in turn imparted antimicrobial effects against MRSA. Patch clamp analysis in KCl showed stepwise conductivity, confirming ion transport via a channel. Although not as common, iron compounds have been reported in the construction of ion transporters. One such example was published recently by Hamada and co-workers [59], in which modified Prussian blue nanoparticles (PB NPs 15) were investigated as ion transporters and were found to be good ionic conductors in liposomal membranes (Fig. 19a)). To increase the hydrophobicity of the nanoparticle, the authors modified the surface of the PB NPs with hydrophobic oleyamines (Fig. 19b)). In these nanoparticles, the role of the metal ions varies -they provide structural support to the nanoparticle, while aiding with ionic conductivity too. The authors found that, via HPTS assays, it is possible to transport hydroxide ions across the lipid membrane using PB NPs. Leigh and co-workers have also exploited the coordination chemistry of Fe(II) in the construction of synthetic ion transporters [60] as they explored use of interlocked structures including an (Fe II ) 5 pentafoil knot, an (Fe II ) 6 -Star of David (Fig. 20a)) and a metal-free Star of David catenane -as ion channels. These structures each contain a central cavity, which could serve as a binding site and passage for ion conductance. The authors found that the metallated Star of David structure 16 could function as an anion channel (Fig. 20b)), while the smaller (Fe II ) 5 structure showed reduced activity, showing that the size of the architecture can play a role in determining transport efficiency. Interestingly, two structural analogues of the active (Fe II ) 6 species -a de-metallated Star of David [2]Catenane and a closely related (Fe II ) 6 cyclic helicateshowed no transport activity (Fig. 20c) and d)), showing that both the presence of the metal and the final ring closing step is vital for ion transport to occur (Fig. 20c) and 20d)). While metal ions can impart structural stability and direct assembly, it is also important to consider the functionality of the ligands. In a recent example published by Liu and co-workers, six chiral BINOL-derived ligands were used as spacers between four n-Bu 3 -Cp 3 Zr 3 clusters as vertices to yield a tetrahedral cage assembly (Fig. 21a) and b)) [61]. The intrinsically chiral cavity was decorated with a range of binding sites such as phenol, phenol-ether, and crown ether groups (R groups, Fig. 21c). 
This chiral cavity decorated with multiple binding sites allowed for the enantio-specific recognition of amino acids, as confirmed by fluorescence titrations with labelled amino acids. These cages were able to insert into lipid bilayers and the cage decorated with hydroxyl groups preferentially transported L-asparagine over D-asparagine, whilst the cage decorated with crown ether groups showed a preference for Darginine over L-arginine (Fig. 21c). The cages were successfully Fig. 17. a) The cuboctahedral complex 13 can transport cations through either square apertures or b) triangular apertures, and rotation of the complex causes switching between the two pathways [57]. Fig. 18. a) The general structure of ligands L10. Ligands are increasing in size and can be cross-linked in a nested arrangement. The smallest inner-most ligand is appended with short alkyl chains (R1) and is cross-linked to larger ligands via pyridinium groups (R3), and the largest outer-most ligand is capped with a tertbutyl group (R2). b) Nested ligands can contain 1-4 units of increasing size crosslinked together. c) These ligands give rise to self-assembled Kandinsky circles of different sizes in combination with Cd(II) ions. In each case, the inner pore is 4.5 nm, whereas the outer diameter for structures containing 2, 3, and 4 nested units are 7.9 nm, 9.8 nm, and 11.4 nm respectively. d) Kandinsky circles containing 2-4 nested units are proposed to self-assemble into tubes that span the membrane [58]. incorporated into the bilayer without causing lysis, as shown via a carboxyfluorescein dye release assay. Enantioselective amino acid transport was confirmed via dye-release assays via diffusion of transported amino acids through a dialysis membrane and subsequent labelling with fluorescamine (Fig. 21d). Ion carriers Ion carriers (also known as ionophores) offer an alternative strategy for ion transport across biological lipid membranes. While channels form a membrane spanning structure which allows ions to cross the bilayer, carriers can bind individual ions and provide a hydrophobic coating which allows them to passively diffuse into the phospholipid bilayer. Thus, carriers often have 2 structural components; ion binding sites, and hydrophobic moieties that enable effective membrane partitioning. All the carriers discussed here are anion transporters which bind either via metal-anion coordination and/or NAHAanion hydrogen bonding interactions. A 2014 study by Tecilla and co-workers demonstrates the importance of having both an ion binding site and a hydrophobic moiety for ionophoric activity [62]. The ionophores 18 and 19 (Fig. 22) were built upon a square planar palladium complex bearing hydrophobic 1,3-Bis(diphenylphosphino)propane (dppp) ligands, which were inert to ligand exchange. The anion binding was achieved via the incorporation of exchangeable ligands (OTf À or Cl À ) which can be displaced by a guest anion. Interestingly, ionophore complex 18 demonstrated cation-independent and aniondependent OH À /X À antiport via the HPTS assay. The hydrophobic dppp ligand or PdCl 2 alone did not demonstrate activity, confirming that both the lipophilic ligand and the ion-binding site was necessary for activity. Furthermore, 18 bearing the more labile OTf À ligands was more effective at promoting pH-gradient collapse compared to 19 bearing Cl À ligands. Further anion transport activ- Fig. 19. a) The crystal structure of Prussian Blue. 
b) Prussian Blue nanoparticles were modified with a lipophilic coating to allow interaction with the lipid bilayer, and subsequent ion transport [59]. ity was confirmed by a lucigenin fluorescence quenching assay. The proposed transport mechanism involves the monomer at low concentrations, or a bridged l-OHdimer at high concentrations, and binding occurs via halide exchange with OH À . This mechanism is supported by U-tube experiments which demonstrates that 18 can transport Cl-across a bulk chloroform membrane. Metal coordination has the potential to play a multi-faceted role, as a structural component which connects hydrophobic and guest binding sites, and as an optical label, as many metal complexes are emissive (such as those based on Ir, Re, Ru, and Eu just to name a few). Thus, the emissive properties of certain metal complexes allow them to be visualised in biological systems. One such fluorescent anion carrier published in 2019 by Mao and co-workers reports an octahedral cyclometalated iridium complex bearing hydrophobic 2-phenylpyridine ligands, and an imidazole-based anion binding site [5]. Complexes 20 and 21 differ based on the presence of either 1 or 2 imidazole NAH binding sites respectively (Fig. 23). Cation independent Cl À /HCO 3 À and Cl À /NO 3 À antiport was observed via chloride selective electrode studies, and 20 was more active than 21. The transport activity greatly increased when the pH outside the vesicles was raised from 4 to 7.2, demonstrating that deprotonation can have a profound effect on activity, and both complexes were determined to be carriers rather than channels as their activity decreased in vesicles containing cholesterol. Due to their potent Cl À transport activity, further biological studies were carried out, and treatment with the complexes led to cellular chloride influx, which was observed via the quenching of a chloride-sensitive fluorescent indicator N-(ethoxy carbonylmethyl)-6-methoxyquinolinium bromide (MQAE). The complexes were cytotoxic against a range of cancer cell lines, and the activity was found to be caused by reactive oxygen species (ROS) induced apoptosis. Furthermore, the weakly basic complexes accumulated in the lysosomes via the ion trapping effect, and the Cl À /HCO 3 À antiport activity was shown to raise lysosomal pH in cells, leading to lysosome dysfunction. This rise in lysosomal pH subsequently blocked autophagic pathways by inactivating hydro-lases, and the presence of resulting autophagosomes with undigested cellular components was observed using transmission electron microscopy. Ultimately, both complexes also demonstrated in vivo anticancer activity as they inhibited tumour growth in mice, and due to these promising results, there are ongoing efforts to create next generation anti-cancer agents by integrating spatial targeting of tumours. The previous examples show that hydrophobic metal complexes can have potent ion transport activity. Therefore, understanding how to fine-tune the ion binding site can potentially lead to more tailored applications. A report by Wright and coworkers published in 2020 investigated whether anion binding strength can be modulated via coordination chemistry [63]. In this study, metal coordination was applied to alter the conformation of a ligand and lock it in an active conformation with convergent NAH groups for anion chelation, as well as polarizing the NAH groups, thereby making them more potent anion binding sites. 
Phosphazane ligands were coordinated to a variety of chelating (Rh(I), Mo(0)) and non-chelating (Au(I)) metals, whilst the effect of including an electron-withdrawing CF3 moiety in the organic ligand scaffold was also explored (Fig. 24). Overall, the most potent ion transporter of the series was compound 23 shown in Fig. 24, in which the N–H bonds are polarized due to the coordinated Rh(I) cations and the presence of electron-withdrawing CF3 groups. The chloride transport activity of the complexes was investigated via the lucigenin quenching assay, which revealed that Rh(I) complexes were more active than Mo(0) complexes due to the greater polarization of the N–H bond by the more highly charged cation. Furthermore, the chelating metals (Mo(0) and Rh(I)) bound to both the phosphorus and pyridyl-nitrogen atoms of the ligand, effectively locking the ligand in the active conformation, whilst the non-chelated gold complexes were less active, as the metal only coordinated to the phosphorus, allowing free rotation of the P–N bond. The exception to this rule was compound 22 shown in Fig. 24, a gold complex with surprising activity due to anion capture and release within the coordination sphere of gold, as seen via NMR.

Fig. 23. Fluorescent complexes 20 and 21 contain an imidazole-based anion binding site (green) and hydrophobic ligands (red) [5]. Hydrogen bonds to chloride anions are shown as green dotted lines.

Fig. 24. Ligands coordinated to chelating (Rh(I), Mo(0)) and non-chelating (Au(I)) metals, bearing either H or electron-withdrawing CF3 groups on the pyridine ring. Anion binding groups (green) can either point towards each other or away from each other. Hydrophobic groups are represented in red [63]. Hydrogen bonds to chloride anions are shown as green dotted lines.

The metals in the previous examples were used for structural reasons, or to polarise or pre-organise the binding site; however, metals can also act as responsive modulators of transport activity. As many metals are redox-active, the development of redox-responsive ion carriers is a possible avenue of exploration. One such system was reported by Gale and co-workers in 2020 [20]. In this example, the active anion transporter was based on an organic 1,3-bis(benzimidazol-2-yl)pyrimidine (BisBzImPy) structure, which was rendered inactive via coordination to gold chloride or a gold N-heterocyclic carbene, the latter being a more sterically bulky group (BG) which blocks the binding site of the ligand (complex 24, Fig. 25). The gold complex 24 was designed to be reduced by glutathione (GSH), a biologically relevant reductant, with subsequent liberation of the organic ligand enabling transmembrane chloride transport. A series of complexes were synthesised bearing different electron-withdrawing groups at position R in Fig. 25 to polarise the N–H chloride binding groups, whilst two different blocking groups (BGs) were explored. Gold chloride was found to be less labile to reduction than the bulkier gold N-heterocyclic carbene. These complexes were investigated as cytotoxic agents against cancerous (human colon adenocarcinoma SW620) and non-cancerous (human embryonic kidney HEK293, and human mammary epithelial MCF10A) cell lines, and higher cytotoxic activity was observed in cancer cell lines, which is thought to be due to their higher levels of GSH. As demonstrated by Tecilla and co-workers [62] and Wright and co-workers [63], the coordination sphere of metals can be utilised for the capture and release of anions.
Consequently, coordination of target ligands to a metal centre can enable their transport across a bilayer, particularly when binding alters physical properties such as lipophilicity. Gale and co-workers explored this concept further, reporting a carrier that can bind and transport OH− ions via coordination to Pt(II) [64]. The carriers are based on square planar platinum triflate complexes (Fig. 26). It is proposed that the labile triflate anion in compound 27 can be displaced by water, and the subsequent deprotonation gives rise to a highly lipophilic platinum hydroxido complex, which is able to partition into the bilayer. Subsequent hydroxide release into the vesicle lumen causes an increase in intravesicular pH, which was monitored via the HPTS assay. Interestingly, this carrier can carry out an uphill transport process and establish a transmembrane pH gradient in vesicles (pHin > pHout). Additionally, transport was observed against a concentration gradient whereby the external pH was 2 units lower than the internal pH. The OH− transport activity was also found to be gated by the addition of competing ligands including halides, dihydrogen phosphate, acetate, and nitrate. The authors propose that these anions work by interfering with the hydrolysis of the Pt(II) compounds through ligand exchange reactions.

Conclusions

Metal-organic structures offer unparalleled structural diversity, synthetic facility, and functional flexibility. The field of metal-organic ion transport has great potential to expand and to overcome the challenges that lie between design and biological application. This review demonstrates that ion transport can be achieved via the action of a diverse range of metal-organic structures encompassing small carriers, metallocycles, capsules and porous framework materials. To further the field, future work may include exploring more structurally diverse complexes and expanding the functional roles that metals can facilitate. Metals have multifaceted roles in this review as structural components which bring together functional ligands, initiators of self-assembly/disassembly, modulators of transport activity (e.g., redox-active modulators), fluorescent complexes for biological imaging applications, and modulators of reactivity. Future prospects for development in this field are varied and exciting, and in this regard the modularity and tunability of metal-organic architectures can be exploited in numerous ways. The structural design of architectures may enable the fine-tuning of interactions with the target guest, whilst screening out others. Similar structural tuning is seen in biology, whereby the potassium channel has a pore just wide enough for potassium, whilst sodium is too small to interact strongly with the channel and cannot be dehydrated [65]. Strides have also been made towards the transport of complex guests such as amino acids via the use of chiral cages [61]. Spatial biological targeting may be achieved via post-synthetic conjugation with targeting groups such as localizing peptides, antibodies and small-molecule targeting groups [24,32]. This will allow better accumulation at the desired site of action. Stimuli responsiveness has been explored in this review, and ultimately, targeted therapeutic effects may be achieved by activating the transporter within disease-specific microenvironments.
Alternatively, therapeutic effects may aim to replace the function of a highly regulated biological channel, in which case the activity of the synthetic transporter should ideally be regulated by the same factors that regulate the native channel. The incorporation of fluorescent metals has enabled imaging of transporters, and such innovations could one day lead to the possible combination of imaging and therapy, termed theranostics [25]. It will also be important to understand how to tune biologically relevant parameters in complex models, such as bio-distribution, bio-stability, solubility, and off-target toxicity (derived both from the complexes and from their component ligands and metal ions).

Fig. 25. Gold complexes 24 can be reduced by glutathione to release an active chloride ionophore [20]. The N–H chloride binding groups are represented in green whilst the hydrophobic groups are represented in red. Hydrogen bonds to chloride anions are shown as green dotted lines.

Fig. 26. Square planar platinum complexes containing a labile ligand (shown in green) are proposed to hydrolyse in water and bind OH− via direct coordination to Pt(II), giving lipophilic platinum hydroxido complexes [64].

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. The funding source(s) had no involvement in the study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.
10,501.4
2022-11-01T00:00:00.000
[ "Chemistry" ]
Smallholder Coffee Productivity as Affected by Socioeconomic Factors and Technology Adoption

Despite the increase in area under coffee in Kenya in the last decade, productivity has been on the decline. Numerous production technologies have been developed through on-station research but there has been limited on-farm research to assess the impact of these technologies at the farm level. On the other hand, smallholder farmers are endowed differently and this would positively or negatively affect the adoption of recommended technologies and hence coffee productivity. This study was carried out to evaluate the effects of socioeconomic factors and technology adoption on smallholder coffee productivity at the farm level. The study employed stratified random sampling where 376 farmers were randomly sampled from six cooperative societies which had been preselected using the probability proportional to size sampling technique. The effects of socioeconomic factors and technology adoption on coffee productivity were analyzed using the stochastic Cobb-Douglas production function. The study revealed that off-farm income, access to credit, type of land tenure, and land size had significant positive effects on coffee productivity. Therefore, coffee farmers should be encouraged to diversify their income sources and to embrace credit financing, as the government reviews land use policies to avail adequate agricultural land. The study further revealed that the adoption of recommended application rates of manure, fungicides, and pesticides had significant positive effects on coffee productivity. The adoption of these technologies should therefore be enhanced among small-scale farmers to improve coffee productivity at the farm level.

Introduction

The economic prosperity of some advanced as well as less developed countries is dependent on agriculture, which is a major source of food, income, and employment [1]. In Kenya, agriculture accounts for 18% of the total formal employment and directly contributes approximately 26% of the annual gross domestic product (GDP) [2]. The sector supplies 65% of Kenya's total exports and is thus the largest contributor of foreign exchange earnings in the country through the export of horticulture, tea, and coffee [3]. Coffee is among the most traded commodities and also among the most consumed beverages in the world [4]. In Kenya, coffee is the fourth leading foreign exchange earner after tourism, tea, and horticulture and contributes about 8% of the total agricultural output in the country [5]. Despite the increase in area under coffee in Kenya in the last decade, productivity has been on the decline and this has attracted a lot of policy concern in the country. This has generated research interest towards the development of production technologies aimed at improving coffee yields and quality. However, according to Muzari et al. [6], there is a large gap between the actual productivity of a smallholder farmer and the feasible productivity that is achievable with the available technology. The introduction of innovations has been proposed as one of the many solutions that can potentially improve agricultural performance and boost agricultural productivity [7]. Unfortunately, agricultural productivity may be hindered by the inability of farmers to adopt innovations such as improved production practices, inputs, and varieties [8].
This may result from the inability of the farmers to comply with the conditions associated with the adoption of such technologies, such as the recommended method and period of application. Coffee productivity can reportedly be increased through expansion of the total acreage under the crop, adoption of scientifically generated technologies, and increased efficiency of the allocation of scarce production resources for output maximization [9]. For example, various studies reported a positive and significant effect of the adoption of Coffee Berry Disease (CBD) and Coffee Leaf Rust (CLR) resistant cultivars on coffee productivity [10][11][12]. Deviations from the optimal frontier production function in coffee can be due to under- or overutilization of the factors of production and random exogenous shocks such as climate change, uncertainty of factor prices, and market variations [6]. Cheserek and Gichimu [13] noted that climate change had rendered significant proportions of traditional coffee growing zones less suitable for coffee production, thus causing a shift from optimal to suboptimal coffee growing zones, which affects crop yields and quality. The Kenyan Government has established a research institute known as the Coffee Research Institute (CRI) which has conducted extensive on-station research on coffee production and management in the country and recommended various production technologies. These include improved coffee varieties which are released together with their agronomic recommendations including plant spacing, fertilizer application, pest and disease control, and canopy management [5]. However, CRI has conducted limited on-farm research to assess the impact of research recommendations on coffee productivity at the farm level. This is important considering that it is at the farm level where the recommended technologies interact with the prevailing socioeconomic environment. In addition, research centers examine the impacts of one agricultural technology focusing on a single technology adoption with less regard to interdependence among technologies [14]. Moreover, the trials are carried out in either the research stations or demonstration plots with little consideration of the farmer characteristics and dynamics at the farm level. Therefore, there is limited quantitative information on farmer-related or on-farm production constraints and yield gaps. The objective of this study was to determine the effect of socioeconomic factors and technology adoption on coffee productivity among the smallholder coffee farmers in Embu County, Kenya. The study conducted an in-depth analysis of total factor productivity through optimum input allocation in combination with the socioeconomic characteristics among the smallholder coffee farmers. In order to determine the impact of the released technologies on productivity, the study examined the effect of these recommended technologies and socioeconomic characteristics on coffee yields at the farm level. The findings provide a clear understanding of how the technical factors interact with socioeconomic factors at the farm level and their ultimate effects on coffee productivity. This will guide the policymakers to put in place sustainable intervention measures to ensure that released technologies will have the desired impact on productivity. The study first sought to assess the socioeconomic factors and levels of adoption of recommended technologies among smallholder coffee farmers.
This was followed by an empirical analysis of how socioeconomic factors and technology adoption affect coffee productivity.

Description of the Study Site. The study was conducted in Manyatta and Runyenjes subcounties in Embu County, Kenya, where most of the marketed coffee is produced by smallholder farmers. The area is located in the Upper Midland (UM) 2-3 agroecological zones, within an altitude range from 1600 to 1800 m above sea level [15]. The rainfall pattern in the study area is bimodal and the annual quantity ranges between 1120 and 1495 mm. The average temperatures range from a minimum of 12°C in July to a maximum of 30°C in March and September [5].

Sampling Procedure and Sample Size. The study applied multistage stratified random sampling to select the farmers to be interviewed. Two cooperative societies were randomly selected from Runyenjes subcounty and four from Manyatta subcounty. Probability proportional to size sampling criteria were employed to sample the respondents from the farmers who were members of the selected cooperative societies. The sample size for the study was 376 smallholder coffee farmers, selected using the formula of Cochran [16], n0 = Z²pq/e², where n0 is the required sample size, Z is the t value at the 95% confidence level from the normal table (1.96), p is the probability that a respondent has the characteristic being measured, q = (1 − p) is the probability that a respondent does not have the characteristic being measured, and e is the 5% level of significance. A 50% probability that the respondent had the characteristic being measured was assumed. Since the estimated target population was only 20,000 farmers, the sample size was adjusted using the finite population correction recommended [16] for small populations, n = n0/(1 + (n0 − 1)/N). The number of farmers from each cooperative society was determined using the proportional allocation formula applied by [15], k = (p/M)n, where k is the number of farmers to be interviewed from a society, p is the number of members in that cooperative society, M is the total number of smallholder coffee farmers in the selected cooperative societies, and n is the overall sample size.

Data Collection. The primary data were obtained from sampled farmers using structured questionnaires covering one crop season (one production year). The key primary variables collected were coffee output (yield) versus the level of adoption of recommended technologies, including improved varieties, proper tree spacing, types and rates of fertilizers, pesticides and fungicides applied, and the method of canopy management adopted. Data were also collected on the socioeconomic factors of the respondents such as gender, age, family size, farming experience, and availability of off-farm income. The study was based on the firm theory that the main objective of smallholder farmers is profit maximization through cost minimization and improved production.

Data Analysis. Descriptive statistics were used to analyze the demographic characteristics of the farmers that were hypothesized to influence coffee productivity. To determine the effect of recommended technologies and socioeconomic factors on coffee productivity, the stochastic Cobb-Douglas production function was used as proposed by Aigner et al. [17] and Seyoum et al. [18].
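Referring back to the Sampling Procedure and Sample Size subsection above, the short Python sketch below reproduces the standard Cochran formula, the finite population correction, and the proportional allocation used to arrive at the sample. It is only an illustration of that arithmetic: the Z, p, e, and N values are the ones quoted in the text, while the cooperative-society membership figures used in the allocation step are hypothetical placeholders rather than data from the study.

# Worked example of the sample-size calculation described above (Cochran's formula
# with finite population correction and proportional allocation). The per-society
# membership figures below are illustrative placeholders, not values from the study.

def cochran_n0(z=1.96, p=0.5, e=0.05):
    """Initial sample size for a large population: n0 = Z^2 * p * q / e^2."""
    q = 1.0 - p
    return (z ** 2) * p * q / (e ** 2)

def finite_population_correction(n0, N):
    """Adjusted sample size for a finite target population of size N."""
    return n0 / (1.0 + (n0 - 1.0) / N)

def proportional_allocation(n, memberships):
    """Allocate the total sample n across societies in proportion to membership: k = (p/M) * n."""
    M = sum(memberships.values())
    return {name: round(n * p / M) for name, p in memberships.items()}

n0 = cochran_n0()                               # ~384.16
n = finite_population_correction(n0, 20000)     # ~376.9, close to the 376 farmers reported
allocation = proportional_allocation(376, {"Society A": 4000, "Society B": 2500, "Society C": 3500,
                                           "Society D": 3000, "Society E": 4500, "Society F": 2500})
print(round(n0, 2), round(n), allocation)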
In this study, the Cobb-Douglas production model was specified as

ln Y = β0 + Σ βi ln Xi + Σ βj Zj + Σ βk Dk + ε,

where ln is the natural logarithm; Y is the observed coffee output; β0, βi, βj, and βk are the vector parameters to be estimated; Xi are the quantities of production inputs; Zj are the values of socioeconomic factors; Dk are the dummy variables for the adoption of recommended technologies (1 denotes adoption; 0 denotes non-adoption); and ε is an error term. The Cobb-Douglas production model was preferred because it has the advantage of allowing for statistical inference and the estimated coefficients are easy to interpret.

Demographic Characteristics of the Respondents. The social characteristics of the respondents are summarized in Table 1. Out of the 376 farmers that were interviewed, 74.7% were male, while 25.3% were female. The majority (84.6%) of the farmers were aged between 41 and 60 years, with farmers below 30 years constituting only 1.1%. The household size ranged from 1 to 9 family members, with the majority (69.7%) having 4-6 members. Most of the farmers (75.8%) had attained secondary education and above, with only 3.2% lacking formal education. The majority (90.1%) of the respondents had 10 years and above of experience in coffee farming, with only 9.8% having less than 10 years of farming experience (Table 1). Table 2 shows descriptive statistics of the selected economic characteristics of the respondents. The majority (77.6%) of the sampled farmers owned one acre of land or below, and most of them (83%) had allocated half an acre or below to coffee production. Only 14.1% of the sampled respondents were purely farmers, with the rest (85.9%) earning some off-farm income from other economic activities. Less than half (43.4%) of the sampled respondents had land ownership rights in the form of a land title deed, implying that the majority of the respondents did not have security of land tenure (Table 2). The majority of the respondents (86.4%) received extension services either from their cooperative society (77.7%) or from research institutions (8.8%). The number of extension visits ranged from none to three, but most of the respondents (49.5%) were visited twice. Credit facilities for financing coffee farming were accessible to 70.5% of the sampled farmers, but 29.5% of the farmers did not access credit due to various reasons (Table 2). Most of those who could not access credit cited uncertainty in coffee returns as the major hindrance.

Appropriation of Production Inputs. Input use was found to vary with the scale of production and farm size and among different farmers (Table 3). These variations were in both the type of input used and the application rates. Only 38% of the farmers used the recommended rate of NPK fertilizer per tree, while 36% used more than the recommended rate. Most of the farmers (41.2%) applied more than the recommended rate of CAN fertilizer per tree, while 35.4% applied the recommended rate of CAN. The majority of the farmers (90%) applied manure at the recommended rate of 20-40 kg per tree. Only 33.5% of the farmers who were growing traditional varieties that are susceptible to CBD and CLR applied fungicides at the recommended rate of 2-3 liters or kg per acre. The majority of them (51.6%) exceeded the recommended application rate of fungicides. However, this study established that the majority of the farmers (52.1%) used the recommended pesticide rate of 1-2 liters per acre (Table 3). Table 4 shows the descriptive statistics of agronomic practices of coffee management.
The recommended coffee varieties for the study area were Ruiru 11 and Batian due to their production potential and resistance to major coffee diseases. However, the majority of the farmers (67.2%) were still dependent on the traditional varieties (SL28, SL34, and K7) that are relatively lower yielding and susceptible to the major coffee diseases. A high proportion of farmers (59.3%) had practiced a change of cycle on their coffee trees in the last ten years. Annual coffee pruning was a popular practice adopted by 98.4% of the farmers, but tree capping was not a common practice as only 15.4% of the farmers were practicing it. The majority of the farmers (94.4%) maintained 2 heads per stem as recommended.

Coffee Production. Descriptive statistics for coffee production across the sampled farms are shown in Table 5. The number of coffee trees among the sampled farmers ranged from 30 to 2000 trees, averaging 262 trees per farmer. Cherry production ranged from 100 to 12,000 kg, with an average of 1106.24 kg, while buni (dried cherry) production ranged from 1 to 430 kg, with an average of 27.85 kg per farm. The results revealed variations in coffee productivity, with cherry output per tree ranging from 0.62 to 20 kg and averaging 4.66 kg. On the other hand, cherry output per acre ranged from 82 to 21,368 kg with a mean of 2433.75 kg (Table 5).

Effect of Socioeconomic Factors and Technology Adoption on Coffee Productivity. A stochastic Cobb-Douglas production function was used to show the effect of socioeconomic factors and technology adoption on coffee productivity (Table 6). Multiple regression analysis revealed that the coefficient for farmers' engagement in off-farm income-earning activities was positive (0.080) and significant at the 5% level (P = 0.028), implying that engagement in off-farm activities increases coffee yield by about 8%. Land ownership had a positive (0.091) and significant coefficient at the 5% level (P = 0.011), implying that owning land increases coffee output by 9.1%. The coefficient for land size under coffee was positive (0.353) and highly significant at the 1% level (P = 0.001), implying that increasing the land size by 10 percent increases coffee output by 3.5%. Access to credit had a positive (0.074) and significant coefficient at the 5% level (P = 0.030), implying that availability of credit increases coffee output by 7.4%. On technology adoption, the coefficient for the recommended manure rate was positive (0.196) and significant at the 1% level (P = 0.001), implying that the coffee yield for adopters of the recommended manure rate was 19.6 percent higher than that of non-adopters. The coefficient for the recommended rate of fungicides was also positive (0.118) and significant at the 5% level (P = 0.010), implying that the yield obtained by adopters of the recommended fungicide rate was 11.8% higher than that obtained by non-adopters. In addition, the recommended rate of pesticides had a positive (0.088) and significant coefficient at the 5% level (P = 0.012), indicating that the yield obtained by adopters of the recommended pesticide rate was 8.8% higher than that obtained by non-adopters. The absolute t values for the significant variables were relatively high, indicating a large departure of these variables from the null hypothesis. The Variance Inflation Factor (VIF) was lower than 5 for all predictor variables, implying that there was no serious multicollinearity among the independent variables used in the model (Table 6).
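As a rough illustration of how a log-linear Cobb-Douglas-type yield model with adoption dummies and the VIF check reported above could be estimated, the following Python sketch uses statsmodels on synthetic data. It is not the authors' code: the variable names and data are invented, and for simplicity it fits ordinary least squares rather than the maximum-likelihood stochastic frontier of Aigner et al. [17] used in the study.

# Minimal sketch (not the authors' code) of a log-linear yield regression with
# adoption dummies, plus a VIF check. Synthetic, illustrative data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 376
df = pd.DataFrame({
    "land_acres":    rng.uniform(0.25, 2.0, n),   # X_i: input quantities (illustrative)
    "manure_rate":   rng.uniform(10, 50, n),
    "off_farm":      rng.integers(0, 2, n),        # Z_j / D_k: socioeconomic factors and dummies
    "credit":        rng.integers(0, 2, n),
    "rec_fungicide": rng.integers(0, 2, n),
})
df["ln_yield"] = (0.35 * np.log(df["land_acres"]) + 0.2 * np.log(df["manure_rate"])
                  + 0.08 * df["off_farm"] + 0.07 * df["credit"] + 0.12 * df["rec_fungicide"]
                  + rng.normal(0, 0.2, n) + 5.0)

X = pd.DataFrame({
    "ln_land":   np.log(df["land_acres"]),
    "ln_manure": np.log(df["manure_rate"]),
    "off_farm":  df["off_farm"],
    "credit":    df["credit"],
    "rec_fungicide": df["rec_fungicide"],
})
X = sm.add_constant(X)
fit = sm.OLS(df["ln_yield"], X).fit()
print(fit.summary())

# A VIF below 5 for every predictor is usually read as no serious multicollinearity.
vifs = {col: variance_inflation_factor(X.values, i) for i, col in enumerate(X.columns) if col != "const"}
print(vifs)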
Discussion

Most socioeconomic factors, including gender and age of the farmer, education level, coffee farming experience, household size, and extension services, were not found to significantly influence coffee productivity. However, off-farm income, access to credit, land ownership (tenure), and land size had a significant effect on coffee productivity. The study revealed that 85.9% of the farmers were earning some off-farm income and 70.5% were accessing credit facilities. These two factors may have played a significant role in cushioning farmers against financial constraints and in enabling them to mitigate the costs and risks associated with the adoption of new technologies, thus reducing crop failures and increasing productivity. Access to credit and off-farm income also increase the ability of the farmer to access key farm inputs at the required time and increase household risk-bearing ability against production and market risks. The significant effect of off-farm income on productivity has also been reported in other studies [19][20][21]. Access to credit had a positive and significant elasticity in explaining variations in coffee yield. The significant effect of credit accessibility on farm productivity was also reported in previous studies [19,22,23]. Access to credit and availability of off-farm income would also finance investment in capital-intensive technologies for increased production efficiency and productivity per unit area to avoid diseconomies of scale. Land size is an issue of concern in the study area as the majority of the sampled farmers have less than one acre of land. Land size is assumed to have a direct influence on the size of the coffee farm, as farmers with bigger land sizes are able to allocate more land to coffee. An increase in land size increases the scale of production and motivates the farmers to adopt new technologies, hence increasing coffee productivity. Bigger land sizes would also allow some trials of new technologies without affecting the scale of the main crop, hence promoting adoption. Studies by Gebeyehu [10] and Senkondo et al. [24] also reported a significant effect of land size on coffee productivity. Chepng'etich et al. [25] and Ng'ombe [26] also made a similar observation on sorghum and maize, respectively. However, the studies by Minai et al. [20] and Musaba and Bwacha [23] found land size to be negatively related to coffee and maize productivity, respectively. Apart from land size, ownership was also found to have a positive significant influence on coffee productivity. Challa and Tilahun [21], Musaba and Bwacha [23], and Cherukut et al. [27] also reported significant effects of land tenure on farm productivity. Land ownership allows the farmers to make long-term production decisions and facilitates access to credit by offering the land as collateral. Although most farmers had adopted the recommended or higher rates for all the inputs that were sampled, most of these inputs had no significant effect on coffee productivity apart from manure, fungicides, and pesticides. This was attributed to the fact that farmers are driven by output maximization and would be motivated to use high-yielding methodologies that would guarantee high productivity but at a relatively lower cost. The significant effect of organic manure on coffee productivity was also reported by Gebeyehu [10], Chemura et al. [28,29], and Dzung et al. [30]. Increased usage of organic manure increases plant height and stem thickening, resulting in increased production [30].
Organic soil fertility management is among the important attributes of sustainable and climate-smart agriculture through the improvement of the physicochemical and biological soil properties [10]. The use of organic manure improves soil organic matter, which enhances moisture retention as a climate change adaptation strategy in water-stressed areas [29]. In addition, organic manure increases microbial activity and the availability of macronutrients in the soil [29]. However, Mignouna et al. [22] reported a negative effect of manure use on maize productivity, which they attributed to inappropriate application rates. This study established that the majority of the farmers (62.7%) in the study area were still growing traditional varieties that are susceptible to fungal diseases such as CBD and CLR. This necessitated the use of fungicides to control these diseases. The application of fungicides at the recommended rates had a positive and significant effect on coffee yield. A positive interaction of fungicides and coffee yields has been reported in several previous studies [10,24,31]. Apart from controlling the fungal diseases, copper-based fungicides enhance leaf retention, thus promoting the chlorophyll process and nutrient uptake, resulting in healthy tree growth and high marketable yields [32]. The amount of pesticide used was found to be positively related to coffee yields in the study area. A similar observation was made by Gebeyehu [10], Lechenet et al. [31], Popp et al. [33], and Ngeywo et al. [34]. Coffee is susceptible to more than 850 species of insect pests including leaf miners, berry moths, berry borers, stem borers, thrips, and aphids [35]. Some of them, like the berry borer, can cause bean yield losses of 30-35%. Others, like the stem borers, may lead to the death of the coffee tree, but most of them only weaken the plant, resulting in yield reduction [35]. Some insect pests also act as vectors for the transmission of coffee diseases. The appropriate use of pesticides in coffee production is therefore expected to increase the yields and quality of coffee, thus improving productivity per unit area. Cherry yields in the study area averaged 4.67 kg per tree, which was slightly higher than the current national average of 4.26 kg per tree for traditional varieties but below the 10 kg per tree for the improved varieties [36]. Coffee production was hypothesized to be a function of factor inputs, recommended technologies, and socioeconomic characteristics in the farm environment. The study established that the majority of the farmers had adopted the recommended or higher rates of application for all the farm inputs that were sampled. The low yields can therefore be attributed to the low adoption of the improved varieties, since only 29.3% and 3.5% of the sampled farmers were growing the Ruiru 11 and Batian varieties, respectively. Low yields can also be caused by the untimely application of farm inputs, poor method of application, or wrong choice of inputs, especially fungicides and pesticides. For effective and sustainable management of CBD and CLR, Alworah and Gichuru [32] recommended timely fungicide application, proper choice of chemicals, and adoption of good agricultural practices. The low yields may also be attributed to climate change effects, which include changes in rainfall patterns and increases in diurnal temperatures, both of which have a significant influence on the dynamics of pests and diseases [37].
Conclusions

This study demonstrated variations in technology adoption and socioeconomic factors of the sampled farmers. Consequently, there were significant variations in coffee productivity among small-scale farmers. Adoption of the recommended application rates of manure, fungicides, and pesticides had a significant positive effect on coffee productivity. The study therefore recommends the adoption of these and other recommended technologies for improved coffee productivity at the farm level. The policymakers should therefore put the necessary measures in place to persuade the farmers to adopt recommended technologies for improved coffee productivity. The study established that the availability of off-farm income, access to credit, adequate land size, and confidence of land ownership have a pivotal role in enhancing the adoption of recommended technologies at the farm level. Coffee farmers should therefore be encouraged to diversify their income sources and to embrace credit financing. On the other hand, the government should review land use policies and land tenure systems to avail adequate agricultural land and for long-term investments.

Data Availability

Most of the data used to support the findings of this study are included in the paper. Additional data are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
5,496.4
2021-02-27T00:00:00.000
[ "Agricultural and Food Sciences", "Economics" ]
Porcine Circovirus Type 2 Rep Enhances IL-10 Production in Macrophages via Activation of p38-MAPK Pathway Porcine circovirus type 2 (PCV2) is one of the major threats to pig farms worldwide. Although PCV2 has been identified to promote IL-10 production, the detailed regulatory roles of PCV2 Rep for IL-10 production remain unclear. Herein, we first found that PCV2 Rep, rather than PCV1 Rep, enhanced IL-10 expression at the later phase of PCV2 infection in porcine alveolar macrophages (PAMs). Furthermore, we found that PCV2 Rep directly activated the p38-MAPK pathway to promote transcription factors NF-κB p50 and Sp1 binding to the il10 promoter, but PCV1 Rep did not. During PCV2 infection, however, PCV2 Rep promoted the binding activities of NF-κB p50 and Sp1 with the il10 promoter only at the later phase of PCV2 infection, since Rep proteins only expressed at the later phase of the infection. Moreover, silence of the thymine DNA glycosylase (TDG), a Rep-binding protein, significantly reduced the binding activities of NF-κB p50 and Sp1 with il10 promoter, resulting in the reduction of IL-10 production in PCV2-inoculated PAMs at the later phase of infection. Taken together, our results demonstrate that Rep proteins enhance IL-10 production during PCV2 infection of PAMs via activation of p38-MAPK pathways, in which host TDG is a critical mediator. Introduction Porcine circovirus type 2 (PCV2) aggressively spreads throughout the world and seriously hinders the economic development of the pig industry worldwide [1,2]. As an immunosuppressive pathogen, PCV2 infection can increase the risk of porcine reproductive and respiratory syndrome virus (PRRSV), porcine parvovirus (PPV), and other viruses or bacteria infection, leading to porcine circovirus associated diseases (PCVAD) [3]. So far, among the potential 11 open reading frames (ORFs) of the PCV2 genome sequence, only four ORFs have been studied in-depth, whose encoding proteins are currently recognized as the major functional and structural proteins [4]. As one of the two largest genes in the PCV genome sequence, ORF1 encodes a 35.8 kDa virus replication-associated protein (Rep), which is considered to be necessary for viral replication and plays a vital role in cell-mediated immunity [5,6]. Although ORF1 is highly conserved between PCV1 and PCV2, they may play different roles in the virus-induced immune response. Interleukin-10 (IL-10), originally identified as an inhibitor of interferon-gamma (IFN-γ) and Interleukin-2 synthesis in Th2 cells [7], efficiently inhibits proliferative and cytokine responses in T cells and has shown to mediate both immunological unresponsiveness and the suppression of immune reactions [8,9]. As a key regulatory anti-inflammatory cytokine of multiple immune cells, IL-10 plays a pivotal role in limiting excessive inflammatory responses [10]. In the context of infectious disease, studies have reported an increase in pathogen clearance in the absence of IL-10 [11,12]. Our previous study has revealed that PCV2 infection promotes IL-10 expression, and mitogen-activated protein kinases (MAPKs) and phosphoinositide 3 kinase PI3K/Akt signaling pathways are involved in the regulation of IL-10 production in porcine alveolar macrophages (PAMs) during PCV2 infection [13]. Meanwhile, we observed that the Rep has also played roles in IL-10 expression during PCV2 infection. Up to date, the exact role that Rep plays in PCV2-induced IL-10 secretion is still unclear. 
Herein, we first confirmed that PCV2 Rep enhances the production of IL-10 in PCV2-infected-PAMs at the later phase. Then, we explored the roles of the NF-κB, PI3K/Akt, ERK, and p38-MAPK signaling pathways in IL-10 expression induced by PCV2 Rep in PAMs, and further identified the regulatory roles of transcription factors NF-κB p50 and Sp1 in promoting the IL-10 transcription. Then, we figured out the function of Rep binding protein TDG in the regulation of transcription factor (NF-κB p50 and Sp1) activities and IL-10 expression. These results provide new insight for understanding how PCV2 Rep enhances IL-10 expression during PCV2 infection. Cells and Viruses The porcine alveolar macrophage cell line (CRL-2843) and human embryonic kidney 293A cell line (CRL-1573) were purchased from American Type Tissue Culture (ATCC, Manassas, VA, USA); PK-15 (Porcine kidney 15 cell line) were donated from the Innovative team of animal pathogen surveillance and epidemiology in Harbin Veterinary Research Institute, CAAS [14]. HEK-293A cells and PK-15 cells were maintained in Dulbecco's minimum essential medium (12100046; Invitrogen Carlsbad, CA, USA) supplemented with 10% heat-inactivated fetal bovine serum (FBS). PAMs were maintained in RPMI 1640 medium (31800022; Invitrogen) with 10% heat-inactivated FBS (13011-8611; Tianhang Biotechnology, Huzhou, China), sodium pyruvate, nonessential amino acids, 100 U/mL penicillin, and 0.1 mg/mL streptomycin. All cell lines were plated in a fully humidified atmosphere containing 5% CO 2 at 37 • C. Cells in the exponential phase of growth were used in our study. PCV2-Rep1 and PCV1-Rep2 were generated from PCV1 (AY193712) and PCV2 (MH492006), which were isolated and stocked in our laboratory. Based on the study of Fenaux et al. [15], full-length PCV2 or PCV1 DNA sequences were amplified using primers F-PCVSAC and R-PCVSAC. The amplified sequences were cloned into pGEM-T Easy vector to construct the full-length PCV2 plasmid (T-PCV2) and full-length PCV1 plasmid (T-PCV1). Then, we constructed T-PCV2-Rep1 in the T-PCV2 plasmid by replacing its Rep encoding sequence by the PCV1 Rep encoding sequence using homologous recombination. Similarly, we constructed T-PCV1-Rep2 in the T-PCV1 plasmid through replacing its Rep encoding sequence by the PCV2 Rep encoding sequence. In detail, for the construction of T-PCV2-Rep1, the two fragments of the PCV2 Rep gene (Rep2A and Rep2B) in the T-PCV2 plasmid were replaced by the two fragments from PCV1 Rep gene (Rep1A and Rep1B) by homologous recombination using the ClonExpress II One Step Cloning Kit (C112-01, Vazyme, Nanjing, China); for the construction of T-PCV1-Rep2, the two fragments of the PCV1 Rep gene (Rep1A and Rep1B) in the T-PCV1 plasmid were replaced by the two fragments from the PCV2 Rep gene (Rep2A and Rep2B). The fragments Rep1A, Rep1B, Rep2A, and Rep2B were amplified by Polymerase Chain Reaction (PCR). Primer sequences were: To gain recombinant PCV2-Rep1 and PCV1-Rep2 viruses, the T-PCV2-Rep1 and T-PCV1-Rep2 plasmids were digested with restriction endonuclease Sac II, the fragments were collected and then re-cyclized overnight using the T4 ligase, respectively. Subsequently, cyclized PCV2-Rep1 DNA or cyclized PCV1-Rep2 DNA were transfected into PK-15 cells using lipofectamine 2000 (Invitrogen) according to the manufacturer's instructions. 
The transfected cells were cultured for three days, frozen and thawed three times, and centrifuged; the supernatants were then collected to infect other cells and continuously propagated in PK-15 cells for at least five passages. The recombinant viruses PCV2-Rep1 and PCV1-Rep2 were obtained from the culture and purified by density gradient ultracentrifugation, and the details of the transfection, infection, and viral purification procedures were similar to those previously reported [16,17]. The copy numbers of the viruses were measured by the method previously described [18].

Construction of Recombinant Adenoviruses

Rep1 from the PCV1 genome sequence and Rep2 from the PCV2 genome sequence were amplified and cloned into the recombinant adenovirus vector pShuttle-CMV. The pShuttle-ORF plasmids were recombined with the backbone vector pAdeasy-1 in E. coli BJ5183 and then transfected into HEK-293A cells after linearization to generate recombinant adenoviruses, according to the manufacturer's instructions.

Enzyme Linked Immunosorbent Assay (ELISA)

Porcine alveolar macrophages (PAMs) were adhered to six-well plates, and the cells were then infected with PCVs at an MOI of 5 or rAds at an MOI of 100. To detect IL-10 secretion in the PCV1-, PCV2-, PCV2-Rep1-, or PCV1-Rep2-infected cells, we harvested the culture supernatants at 24 h, 48 h, and 72 h p.i. and replaced them with fresh media at each time point; to detect IL-10 secretion in the rAd-Blank-, rAd-Rep1-, and rAd-Rep2-infected cells, cells were infected with these recombinant adenoviruses for 12 h, 24 h, or 48 h, and the culture supernatants were collected for ELISA detection at the indicated time points without media refreshment. The levels of IL-10 secretion were measured using a commercial ELISA kit (P1000; R&D, Minneapolis, MN, USA), according to the manufacturer's instructions.

Western Blotting

The total protein of the cells was isolated in Radio-Immunoprecipitation Assay (RIPA) buffer with phenylmethanesulfonyl fluoride (PMSF), according to the manufacturer's instructions (Thermo, Rockford, IL, USA). Equivalent amounts of protein were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to polyvinylidene fluoride (PVDF) membranes (Millipore Corp, Billerica, MA, USA). After blocking with 5% non-fat milk in Tris-Buffered Saline with Tween 20 (TBST) buffer for 1 h, the membranes were incubated with primary antibodies at 4 °C overnight; these included anti-phospho-Akt (Wuhan Boster Biotech, Wuhan, China), among others. The corresponding secondary antibodies, including anti-rabbit IgG (BA1058; Wuhan Boster Biotech), were then incubated for 1 h. Western Enhanced Chemiluminescence Substrate (Bio-Rad, Hercules, CA, USA) was used for enhanced chemiluminescence detection, according to the manufacturer's instructions.

Quantitative Polymerase Chain Reaction (Q-PCR)

mRNA of the cells was isolated by TRIzol reagent, according to the manufacturer's instructions. RNA concentration and purity were measured using a NanoDrop spectrophotometer (Thermo). Reverse transcription of mRNA was performed using M-MLV reverse transcriptase (Invitrogen). mRNA levels were analyzed on a Bio-Rad IQ5 Real-Time PCR System using SYBR Green-based Q-PCR with specific primers. The relative quantification of mRNA was undertaken through the ∆∆Ct method [19]. Primer sequences were IL-10-F: AATCTGCTCCAAGGTTCCCG; IL-10-R: TGAACACCATAGGGCACACC; β-actin-F: GGACTTCGAGCAGGAGATGG; β-actin-R: AGGAAGGAGGGCTGGAAGAG.
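For readers unfamiliar with the ∆∆Ct method cited above [19], the minimal Python sketch below shows the relative-quantification arithmetic; the Ct values are made-up numbers for illustration, not measurements from this study.

# Minimal sketch of the 2^(-ΔΔCt) relative-quantification step cited above [19];
# the Ct values are invented for illustration only.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene (e.g. IL-10) normalised to a reference gene
    (e.g. beta-actin) and to an untreated control sample."""
    delta_ct_sample = ct_target - ct_ref              # ΔCt of the treated/infected sample
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt of the control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Example: IL-10 Ct drops from 28.0 to 25.5 while beta-actin stays near 18,
# giving roughly a six-fold induction.
print(relative_expression(25.5, 18.1, 28.0, 18.0))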
Luciferase Reporter Assay The porcine il10 promoter sequence was amplified and cloned into a pGL3 basic vector (Promega, Madison, WI, USA), according to the Banday assay [20]. PAMs were transfected with a mixture of pGL-IL-10 activity reporter plasmid and pRL-TK renilla luciferase plasmid using lipofectamine 2000 (Invitrogen). Luciferase activities were measured 24 h later using the Dual-Luciferase reporter assay (Promega), according to the manufacturer's instructions. Transfection of siRNAs Cells were seeded overnight before transfection, allowed to reach 50% confluency by the time of transfection, and transfected with Akt siRNA, p38 siRNA, ERK siRNA, p50 siRNA, VG5Q siRNA, TDG siRNA, ZNF265 siRNA, and c-Myc siRNA (Table 1) (Sangon Biotech, Shanghai, China) using lipofectamine 2000 (Invitrogen), respectively. At 24 h post-transfection, the cells were infected with PCV2 or recombinant adenoviruses for the indicated times in each experiment. Table 1. siRNA sequences for target genes used in this study. Chromatin Immunoprecipitation (ChIP) The ChIP assays proceeded following the Cold Spring Harbor Protocols. Briefly, the cells were cross-linked by formaldehyde before lysis for the nuclear. The nuclear were then further lysed by nuclei lysis buffer for the chromatin and proceeded with sonication. The chromatin was quantified and divided into 100 µg per antibody. p50, Sp1, and c-jun monoclonal antibodies and irrelevant antibodies were added to the chromatin samples overnight at 4 • C, followed by protein A(G)-agarose/salmon sperm DNA beads. Protein A(G)-agarose/salmon sperm DNA beads were added to the samples again to bind to the antibodies-chromatin compound. Then, the compound was digested by proteinase K, and extracted by phenol:chloroform to purify the nucleic acids. The nucleic acids were resuspended in H 2 O and analyzed by PCR. The specific primers for PCR are AP1-F: CCAGCTGTGGAAGCTCACAA; AP1-R: GGAACAACGGGCCATGCTTA; p50-F: TTGGAGAGGTCTAGGGAAGGG; p50-R: AGAGCTGTGCCTTCTTCGTT; Sp1-F: ACACGTGAATGGAACCCACA; Sp1-R: GAGGCTACCTCTCTCCCCTT. Statistical Analysis The data were presented as means ± SEM or means ± SD, and the results are representative of three independent experiments. Data of multiple groups were analyzed by Analysis of Variance (ANOVA) and Bonferroni post-hoc test, while comparisons between the two groups were performed by unpaired t tests. p values of <0.05 or <0.01 were considered as statistically significant for all analyses. PCV2 Rep Enhances the Production of IL-10 in PAMs Porcine Alveolar Macrophages Previous studies have revealed that PCV2 infection induces IL-10 production in vivo and in vitro, whereas PCV1 does not show the same effect on IL-10 expression [21]. In the process of inducing IL-10 expression, PCV2 Cap plays a pivotal role in the early phase of infection [13], whether PCV2 Rep protein is also involved in the regulation of IL-10 expression during PCV2 infection remains unknown. Herein, to indicate the roles of Rep protein in regulating IL-10 expression, we used the recombinant viruses PCV1-Rep2 (a PCV1 mutant that replaced ORF1 in PCV1 backbone by PCV2 ORF1) and PCV2-Rep1 (a PCV2 mutant that replaced ORF1 in PCV2 backbone by PCV1 ORF1) to infect PAMs ( Figure 1A). In 0-24 h post-inoculation (h p.i.) of these viruses, the secretion of IL-10 was barely detectable in either the PCV1-or PCV1-Rep2-infected PAMs, but was able to be detected in either PCV2-or PCV2-Rep1-infected PAMs and the IL-10 secretion showed similar levels in PCV2and PCV2-Rep1-infected PAMs. 
In 24-48 h p.i., PCV1 still could not significantly induce IL-10 secretion, while PCV1-Rep2 could moderately induce IL-10 production, and the secretion of IL-10 in PCV2-Rep1-infected PAMs began to show a reduction relative to that in PCV2-infected PAMs. In 48-72 h p.i., PCV1 still could not markedly induce IL-10 production, while PCV1-Rep2-induced IL-10 secretion further increased, whereas PCV2-Rep1-induced IL-10 secretion further decreased relative to PCV2 infection ( Figure 1B). Furthermore, we examined the kinetics of IL-10 mRNA in PAMs response to the infection of PCV1, PCV2, and recombinant viruses (PCV1-Rep2, PCV2-Rep1). A detailed time-course showed that starting at 24 h, the transcription of IL-10 significantly increased in the PCV2-and PCV2-Rep1-infected PAMs when compared to PCV1-infected PAMs, while the IL-10 mRNA level of PCV1-Rep2-infected PAMs began to upregulate relative to PCV1-infected PAMs from 48 h p.i. ( Figure 1C). Notably, the IL-10 mRNA level of PCV2-Rep1-infected PAMs began to decrease slowly relative to PCV2-infected PAMs, after 48 h p.i. (Figure 1C). These results indicated that PCV2 Cap induces IL-10 production at the early phase of infection, and PCV2 Rep can further enhance IL-10 production at the later phase of infection in PAMs, whereas PCV1 does not induce IL-10. PCV2 Rep Rather Than PCV1 Rep Directly Promotes IL-10 Production in PAMs To verify whether PCV2 Rep (Rep2) or PCV1 Rep (Rep1) could directly induce IL-10 production in PAMs, we measured the levels of IL-10 production in PAMs that were infected with adenovirus expressing Rep2 (rAd-Rep2), Rep1 (rAd-Rep1), or control blank adenovirus (rAd-Blank). The results of western blotting showed that the Rep expression increased throughout the time course of recombinant adenovirus infection and had no significant difference between rAd-Rep1-and rAd-Rep2-infected cells (Figure 2A). Compared to the uninfected cells (Mock), rAd-Blank, rAd-Rep1, and rAd-Rep2 all significantly induced IL-10 production. Compared to rAd-Blank, rAd-Rep1 did not markedly increase IL-10 production, but rAd-Rep2 dramatically increased IL-10 expression at both protein and mRNA levels ( Figure 2B,C). To make clear the effects of Rep2 and Rep1 proteins on the transcription of IL-10, we detected the promoter activities of IL-10 by luciferase reporter assays in PAMs, which were separately transfected with the corresponding Rep2 or Rep1 expression plasmids. Similarly, il10 promoter activities in Rep2-expressed PAMs were markedly higher when compared to that in Rep1-expressed PAMs ( Figure 2D). These data suggest that Rep2, rather than Rep1, directly enhances the promoter activity of IL-10 to increase the IL-10 transcription. Previous studies have indicated that the MAPK signaling pathways participate in the regulation of IL-10 production in macrophages [22]. To confirm whether PCV2 Rep could also activate the MAPK signaling pathways, PAMs were infected with rAd-Rep2 for 24 h and 48 h. Results showed that rAd-Blank infection could basically activate the Akt, p38-MAPK, and ERK signaling pathways relative to Mock infection. Interestingly, compared to rAd-Blank, rAd-Rep2 could significantly enhance the phosphorylation of p38-MAPK, but did not increase the phosphorylation levels of Akt or ERK ( Figure 3A). 
Consequently, in the cells transfected with p38-MAPK-specific siRNA, rAd-Rep2 infection-induced IL-10 production was markedly downregulated, whereas transfection of negative control siRNA (NC), Akt-specific siRNA, or ERK-specific siRNA did not markedly change IL-10 production in rAd-Rep2-infected PAMs (Figure 3B). However, Akt-specific siRNA, p38-MAPK-specific siRNA, and ERK-specific siRNA could all downregulate IL-10 expression in PCV2-infected cells at 24 h p.i., as per our previous report (Figure 3C). These results suggest that PCV2 Rep can upregulate IL-10 expression through enhancing the activity of the p38-MAPK signaling pathway.

PCV2 Rep Activates p38-MAPK Signaling to Promote NF-κB p50 and Sp1 Binding to the il10 Promoter

To further investigate which transcription factors are involved in Rep2-induced IL-10 transcription, we detected the binding levels of the transcription factors NF-κB p50, Sp1, and AP1 to the il10 promoter in rAd-Blank-, rAd-Rep1-, or rAd-Rep2-infected PAMs by ChIP assays. The ChIP results showed that rAd-Blank, rAd-Rep1, and rAd-Rep2 could all promote the binding of p50, Sp1, and AP1 to the il10 promoter (Figure 4A-C). Compared to rAd-Blank, rAd-Rep2 could further increase the binding levels of p50 and Sp1 to the il10 promoter, while rAd-Rep1 could not (Figure 4A,B). However, neither rAd-Rep2 nor rAd-Rep1 could alter the binding level of AP1 to the il10 promoter (Figure 4C). Next, we investigated which transcription factors were regulated by p38-MAPK signaling in PCV2 Rep-induced IL-10 transcription. Inhibition of p38-MAPK with specific siRNA decreased the binding levels of both p50 and Sp1 with the il10 promoter in rAd-Rep2-infected PAMs, whereas inhibition of NF-κB activation with p50-specific siRNA only decreased the binding level of NF-κB p50 with the il10 promoter (Figure 4D,E). These data suggest that PCV2 Rep activates p38-MAPK signaling to promote NF-κB p50 and Sp1 binding to the il10 promoter.

Rep Protein Activates p38-MAPK at the Later Phase of PCV2 Infection

To identify the characteristics of PCV2 Rep in the activation of the p38-MAPK signaling pathway, we examined the phosphorylation of p38-MAPK in cells infected with PCV1, PCV2, PCV2-Rep1, or PCV1-Rep2 at 0 h, 12 h, 24 h, and 48 h. At the earlier stage of infection (12 h and 24 h p.i.), the phosphorylation of p38-MAPK was detected in PCV2- and PCV2-Rep1-infected cells, but not in the PCV1- and PCV1-Rep2-infected cells. The levels of p-p38-MAPK did not show a significant difference between PCV2- and PCV2-Rep1-infected cells (Figure 5A,B). At the later stage of infection (48 h p.i.), p-p38-MAPK was detected in the PCV1-Rep2-infected cells, but its level was lower in PCV1-Rep2-infected cells than in PCV2- and PCV2-Rep1-infected cells, and the level of p-p38-MAPK in the PCV2-Rep1-infected cells was lower than that in the PCV2-infected cells (Figure 5A,B). Meanwhile, detection of Rep protein expression showed that neither PCV1 Rep nor PCV2 Rep was detectable until 48 h p.i., and both Rep1 and Rep2 expression were higher in the PCV2 backbone than in the PCV1 backbone (Figure 5A,C,D). These results suggest that the PCV2 Rep protein is involved in the regulation of p38-MAPK signaling at the later phase of PCV2 infection, since Rep is expressed at the later phase of PCV2 infection.
Rep Protein Enhances the Binding Activities of p50 and Sp1 with il10 Promoter via p38-MAPK at the Later Phase of PCV2 Infection Furthermore, we investigated the characteristics of binding activities between transcriptional factors (Sp1, p50) and the il10 promoter in cells infected with PCV1, PCV2, PCV2-Rep1, or PCV1-Rep2 at 12 h, 24 h, and 48 h. At the earlier stage of infection (12 h and 24 h p.i.), the binding activity of NF-κB p50 with the il10 promoter in PCV2-Rep1-infected cells was similar to that in the PCV2-infected cells, but was not detected in PCV1-or PCV1-Rep2-infected cells ( Figure 6A,C); the binding activity of Sp1 with the il10 promoter showed the same results ( Figure 6B,D). At the later stage of infection (48 h p.i.), the binding activities of NF-κB p50 and Sp1 with the il10 promoter in PCV2-Rep1-infected cells were lower than that in the PCV2-infected cells, whereas the binding activities of NF-κB p50 and Sp1 with the il10 promoter in PCV1-Rep2-infected cells were beginning to be detected and appeared to be higher than that in PCV1-infected cells ( Figure 6A-D). However, the binding activities of NF-κB p50 and Sp1 with the il10 promoter in PCV1-Rep2-infected cells were still significantly lower than that in the PCV2-Rep1-infected cells and PCV2-infected cells ( Figure 6A-D). These results suggest that PCV2 Rep cannot promote the binding activities of NF-κB p50 and Sp1 with the il10 promoter at the earlier phase of PCV2 infection, but enhances the binding activities of NF-κB p50 and Sp1 with the il10 promoter at the later phase of PCV2 infection. Previous studies have found four binding proteins of PCV2 Rep including the transcriptional regulator c-Myc, the zinc finger protein 265 (ZNF265), thymine DNA glycosylase (TDG), and the angiogenic factor VG5Q [23]. To make clear which of them are involved in Rep mediating IL-10 induction, specific siRNAs of c-Myc, ZNF265, TDG, and VG5Q were respectively transfected into cells before rAd-Rep2 infection. The gene silencing of the targets were identified by western blotting ( Figure 7A). The results showed that only the silencing of TDG could significantly decrease the production of IL-10 at 48 h post rAd-Rep2 infection, while the silencing of other Rep-binding proteins did not significantly affect IL-10 production in rAd-Rep2-infected PAMs ( Figure 7B). Notably, transfection of TDG specific siRNA did not significantly decrease IL-10 production induced by PCV2 relative to the negative control siRNA in 0-24 h p.i., whereas TDG specific siRNA could decrease IL-10 secretion in 24-48 h p.i., and decrease approximately half of IL-10 secretion in PCV2-infected cells compared with NC in 48-72 h p.i. (Figure 7C). Consistently, the levels of IL-10 mRNA were also obviously decreased in the TDG-knockdown cells at the later phase of PCV2 infection (48 h and 72 h p.i.), but were not affected at the earlier phase of PCV2 infection (24 h p.i.) ( Figure 7D). Furthermore, we assessed the roles of TDG in promoting the binding of NF-κB p50 and Sp1 to the il10 promoter in PCV2-infected cells. Results showed that the binding levels of NF-κB p50 and Sp1 with the il10 promoter in TDG-knockdown cells were significantly lower than that in negative control siRNA-transfected cells at 48 h p.i. (Figure 7E,F). These results indicate that the Rep protein interacts with host TDG to promote the binding activities of NF-κB p50 and Sp1 with the il10 promoter that enhances IL-10 production at the later phase of infection. 
Discussion PCV2, as one of the most important swine viruses, seriously affects the development of the global swine industry [24]. Previous studies have proven that IL-10 contributes to the development of immunosuppression in hosts infected with certain viruses [25,26]. Both in vivo and in vitro, IL-10 expression is significantly upregulated and associated with the development of immunosuppression during PCV2 infection [27,28]. In our previous studies, we identified that the PCV2 Cap protein induces IL-10 production in PAMs through the activation of the PI3K/Akt, ERK, p38-MAPK, and NF-κB p50 signaling pathways at the earlier phase of infection [13]. In this study, we investigated the regulatory roles of the PCV2 Rep protein in the induction of IL-10 in PAMs. The results demonstrate that PCV2, PCV2-Rep1, and PCV1-Rep2, but not non-pathogenic PCV1, significantly induced IL-10 production in PAMs, suggesting that PCV2 Rep, but not PCV1 Rep, can induce IL-10 expression. Further exploration revealed that PCV2 Rep, by interacting with TDG, activates the p38-MAPK signaling pathway to promote NF-κB p50 and Sp1 binding to the il10 promoter, which further increases the production of IL-10 at the later phase of PCV2 infection. All these results indicate that Rep is another critical regulator in enhancing IL-10 production during PCV2 persistent infection. As a key anti-inflammatory cytokine, IL-10 participates in the immune response and is regulated by multiple signaling pathways in activated macrophages [29][30][31]. Previous studies have shown that PCV2 infection activates the PI3K/Akt pathway to suppress premature apoptosis for improved virus growth, with effects on autophagy and PCV2 replication [32,33]. In addition, it has been reported that the p38-MAPK and ERK pathways are involved in regulating IL-10 expression [34][35][36]. Previous studies have also shown that the p38-MAPK pathway not only plays important roles in PCV2 replication and contributes to virus-mediated changes in PK-15 cells, but is also engaged via gC1qR to suppress IL-12p40 expression, increasing the risk of other pathogenic infections after PCV2 infection [37,38]. Simultaneously, it has been reported that the ERK signaling pathway is involved in PCV2 infection and is beneficial to PCV2 replication in cultured cells [39]. In the present study, we found that the PCV2 Cap could enhance the expression of p-p38 at the earlier phase of PCV2 infection. PCV2 Rep was detected in PCV-infected cells at 48 h and induced the phosphorylation of p38. These results indicate that PCV2 Rep activated the p38-MAPK signaling pathway, upregulating the production of IL-10 at the later phase of infection. Simultaneously, only the p38-MAPK signaling pathway was found to be activated by Rep to enhance IL-10 expression at the later phase of PCV2 infection. Moreover, the PCV2 Rep protein could enhance the binding of the transcription factors NF-κB p50 and Sp1 to the il10 promoter via p38-MAPK signaling activation at the later phase of PCV2 infection, but not at the earlier phase. Furthermore, we verified that specific siRNA against p38-MAPK could inhibit the upregulation of IL-10 at the later phase by inhibiting the binding activities of p50 and Sp1 to the il10 promoter, while the specific siRNAs against ERK and PI3K/Akt did not have any influence on the production of IL-10 induced by the PCV2 Rep protein.
Our results further confirmed that the PI3K/Akt, ERK, and p38-MAPK signaling pathways are also involved in regulating IL-10 expression at the earlier phase of infection, as per our previous report [13]. These results demonstrate that PCV2 Rep and Cap can activate different pathways to regulate IL-10 expression at different stages. Taken together, our results demonstrate that the p38-MAPK signaling pathway is activated by PCV2 Rep to participate in the production of IL-10 at the later phase of infection. PCV2 is the primary causative agent of naturally occurring PMWS [40]. Previous studies have proven that PCV2 components play important roles in impairing the immune system and in concurrent co-infection with other viruses [41,42]. In this study, we found that PCV2 Rep could markedly promote IL-10 production, while PCV1 Rep could not. PCV2 Rep specifically activated the p38-MAPK signaling pathway to promote IL-10 production in PAMs, while PCV1 Rep could not. These data demonstrate that PCV2 Rep is another major PCV2 component that enhances the production of IL-10 in PCV2-inoculated cells. Meanwhile, these data also suggest that the difference in the Rep protein between PCV1 and PCV2 might be another major reason why PCV2 can induce immune suppression and cause PMWS whereas PCV1 cannot. c-Myc, ZNF265, TDG, and VG5Q were identified to interact with Rep in PCV2-infected cells in previous studies [23,43], but the roles of these proteins in the infection and pathogenic processes of PCV2 are still unclear. In the present study, TDG was identified as an important regulator in promoting IL-10 expression. Knockdown of TDG significantly decreased the binding activities of NF-κB p50 and Sp1 to the il10 promoter at the later phase of PCV2 infection, resulting in a dramatic reduction of IL-10 expression at both the mRNA and protein levels. These results demonstrate that the interaction of PCV2 Rep with TDG is critical for the enhancement of IL-10 expression at the later phase of PCV2 infection. However, we are still far from unveiling the complete mechanisms by which PCV2 Rep regulates inflammatory cytokines, which are being addressed in our ongoing work. Conclusions In summary, this study provides evidence that PCV2 Rep can activate the p38-MAPK signaling pathway to promote NF-κB p50 and Sp1 binding to the il10 promoter and thereby enhance IL-10 expression at the later phase of PCV2 infection, in which host TDG is a critical regulator. These findings may help us to understand the regulatory roles and mechanisms of the PCV2 Rep protein in boosting the production of IL-10 in porcine macrophages.
6,378.4
2019-12-01T00:00:00.000
[ "Biology", "Chemistry" ]
Molecular Links Between Angiogenesis and Neuroendocrine Phenotypes in Prostate Cancer Progression As a common therapy for prostate cancer, androgen deprivation therapy (ADT) is effective for the majority of patients. However, prolonged ADT promotes drug resistance and progression to an aggressive variant with reduced androgen receptor signaling, so-called neuroendocrine prostate cancer (NEPC). To date, NEPC remains poorly understood and lethal, with no effective treatments. Elevated expression of neuroendocrine-related markers and increased angiogenesis are two prominent phenotypes of NEPC, and both are positively associated with cancer progression. However, direct molecular links between the two phenotypes in NEPC and their mechanisms remain largely unclear. Their elucidation should substantially expand our knowledge of NEPC. This knowledge, in turn, would facilitate the development of effective NEPC treatments. We recently showed that a single critical pathway regulates both ADT-enhanced angiogenesis and elevated expression of neuroendocrine markers. This pathway consists of CREB1, EZH2, and TSP1. Here, we seek new insights to identify molecules common to pathways promoting angiogenesis and neuroendocrine phenotypes in prostate cancer. To this end, our focus is to summarize the literature on proteins reported to regulate both neuroendocrine marker expression and angiogenesis as potential molecular links. These proteins, often described in separate biological contexts or diseases, include AURKA and AURKB, CHGA, CREB1, EZH2, FOXA2, GRK3, HIF1, IL-6, MYCN, ONECUT2, p53, RET, and RB1. We also present the current efforts in prostate cancer or other diseases to target some of these proteins, which warrants testing for NEPC, given the urgent unmet need in treating this aggressive variant of prostate cancer. INTRODUCTION In the United States, prostate cancer is responsible for the second most cancer deaths in men, behind lung cancer. It is estimated that about 31,620 deaths in the USA in 2019 were caused by prostate cancer (www.cancer.org). Androgen deprivation therapies (ADT) that target the androgen receptor (AR) are the main treatment for prostate cancer (1)(2)(3)(4). ADT is effective initially. However, the majority of tumors invariably relapse and progress, becoming castration-resistant prostate cancer (CRPC) (1)(2)(3)(4). Frequently associated with ADT resistance is the emergence of neuroendocrine prostate cancers (NEPC), which have a poor prognosis with no effective treatment (5)(6)(7)(8). With the widespread clinical use of potent new-generation ADT, the incidence of NEPC is rising (6,(9)(10)(11)(12). NEPC tumors are highly vascularized (13,14). Increased angiogenesis and expression of NE markers are two prominent phenotypes of NEPC (13)(14)(15)(16) and are expected to be molecularly linked. However, direct molecular connections between these two phenotypes in NEPC remain largely unclear. The main purpose of this review is to summarize the reported and potential connections between the regulation of increased angiogenesis and expression of NE markers. Further, we analyze the implications of these connections for prostate cancer. Our goal is to identify key regulators of both characteristics as potential targets for NEPC, with the hope of hitting two birds with one stone to achieve better therapeutic efficacy and fewer side effects. Angiogenesis is involved in prostate cancer survival, progression, and metastasis (61). Its importance in prostate cancer has been established (62,63).
Higher microvessel density is associated with worse prognosis in prostate cancer (64,65). VEGF as well as some neurosecretory peptides, e.g., serotonin, bombesin, and gastrin, have been shown to boost angiogenesis in NEPC (15). We recently reported that ADT repression of thrombospondin 1 (TSP1 or THBS1), a potent endogenous angiogenesis inhibitor, contributes to the angiogenic phenotype in NEPC (66). Several reviews have already described the current knowledge and therapeutic development targeting angiogenesis in prostate cancer (61,67,68). One study found that only tumors with strong expression of both VEGF and NE markers showed significantly poor clinical characteristics such as higher microvessel density, T stage, dedifferentiation, and shorter disease-specific survival (62). PROTEINS AND PATHWAYS REGULATING BOTH NE PHENOTYPE AND ANGIOGENESIS It remains largely unclear whether neuroendocrine differentiation and angiogenesis regulate each other in NEPC. It is also unclear what proteins directly link these two prominent characteristics of NEPC. Our literature search did not yield reports showing direct involvement of the pro-angiogenic factor VEGF and neurosecretory peptides (serotonin, bombesin, and gastrin) in promoting NE marker expression. On the other hand, among the NE marker proteins, only CHGA (73,74) has been shown to directly contribute to angiogenesis. As summarized below and depicted in Figure 1, several signaling proteins have been reported to regulate both angiogenesis and NE marker expression, often in separate diseases or biological contexts. FIGURE 1 | Targeting molecules common to pathways promoting angiogenesis and neuroendocrine phenotype in prostate cancer. Androgen deprivation therapy (ADT) elevates the cAMP level, which activates PKA, resulting in phosphorylation and activation of CREB1. Activated CREB1 directly induces transcription of several genes involved in neuroendocrine differentiation (NED) and angiogenesis, such as VEGF, ENO2, GRK3, and HDAC2. VEGF is a potent pro-angiogenic factor, while ENO2 is a neuroendocrine marker. GRK3 promotes angiogenesis, NE marker expression, and prostate cancer progression. HDAC2 is critical for prostate cancer progression induced by chronic bio-behavioral stress and signals from beta adrenergic receptors (ADRBs). GRK3 and HDAC2 promote angiogenesis, at least in part through downregulating TSP1. TSP1 is a well-established anti-angiogenic factor. Through unclear mechanisms, CREB1 activation enhances the PRC2 function of EZH2, which is critical for NED and angiogenesis induced by ADT. In endothelial cells, VEGF induces EZH2 expression and activity, which contributes to VEGF's action in promoting angiogenesis. Loss of p53 and RB1, alone or in cooperation, promotes angiogenesis and the NE phenotype through multiple mechanisms (detailed in text). IL-6 pathway activation enhances angiogenesis (through inducing VEGF) and the NE phenotype (through inducing CHGA). AURKA interacts with N-Myc and regulates the stability of the latter, which promotes NED. AURKA and AURKB regulate angiogenesis in endothelial and neuroblastoma cells. HIF1A promotes angiogenesis through inducing VEGF. Moreover, it also cooperates with FoxA2 to promote NED and tumorigenesis. ONECUT2 has recently emerged as a master regulator of NED. Recent studies have also implicated the receptor tyrosine kinase RET in regulating NED and angiogenesis. Novel strategies targeting the proteins and pathways that regulate both prominent phenotypes may be effective for treating NEPC (detailed in text).
These proteins are potential molecular links between the two important characteristics of NEPC. CHGA CHGA is one of the classic markers for NEPC. It is a secreted glycoprotein that shows paradoxical properties in angiogenesis (71,(73)(74)(75). Recent studies showed that CHGA can be proteolytically cleaved into active peptides by thrombin. This cleavage shifts its function from anti- to pro-angiogenic under pathophysiologic conditions, which can be observed in prothrombin activation or multiple myeloma (73,74). Its function in angiogenesis in NEPC is still unclear. p53 and RB1 p53 and RB1, two of the most prominent tumor suppressors, have been implicated in both angiogenesis and NE marker expression in separate studies. Mutations and loss of p53 or RB1 are common alterations in prostate cancer patients (76). Tumors containing p53 mutations are usually more vascularized than tumors harboring wild type p53. This pattern has been observed in several independent clinical studies on prostate, colon, and breast cancers (77)(78)(79)(80). Some basic mechanisms underlying p53's inhibition of angiogenesis have been detailed. Ravi et al. found that, under hypoxic conditions, p53 inhibits the HIF1A activity that is required for VEGF transcription (81). Besides VEGF, p53 also inhibits other pro-angiogenic factors, such as bFGF-BP (bFGF-binding protein) and COX-2 (cyclooxygenase-2). In addition, p53 also induces anti-angiogenic factors, such as TSP-1 and EPHA2 (ephrin receptor A2) (82). However, it is not clear whether or how p53 itself plays a role in regulating the NE phenotype. RB1 has also been reported to regulate tumor angiogenesis (83)(84)(85). For example, Lasorella et al. reported that Id2 (inhibitor of differentiation 2), a target of RB1, mediates angiogenesis of pituitary tumors from Rb1 +/− mice (86). RB1 loss is one of the most critical events in neuroendocrine carcinoma (12, 87, 88), but the mechanism by which RB1 contributes to the NE phenotype is largely unclear. A recent study reported that RB1 takes part in regulating both angiogenesis and NE phenotypes. Labrecque et al. found that, under hypoxic conditions, RB1 loss deregulates the expression of genes that govern angiogenesis, metastasis, and NE differentiation. These effects led to a more invasive phenotype as well as expression of NE protein markers in human prostate cancer cells (40). Growing evidence implies a cooperative function of p53 and RB1 in tumor angiogenesis. Martinez-Cruz et al. found that combinatorial deletion of p53 and RB1 augmented tumor angiogenesis in a spontaneous squamous cell carcinoma mouse model, compared with loss of p53 alone (89). Similarly, inactivation of p53 and RB1 leads to a pro-angiogenic transcriptional response in keratinocytes (90). In a xenograft model of retinoblastoma, p53 was shown to increase VEGF expression and promote angiogenesis in cells deficient for the p21/RB1 pathway (91). All these observations underline the possibility that loss of p53 and RB1 cooperates in promoting prostate cancer angiogenesis. Interestingly, p53 and RB1 are also both connected to NE marker expression in prostate cancer. In the NEPC xenograft model LTL-331R, which relapsed upon castration resistance of the prostate adenocarcinoma patient-derived xenograft LTL-331, genomic alterations of both p53 and RB1 were observed (39). Of note, Beltran et al. showed (25) that "concurrent loss of RB1 and p53 was present in 53.3% of NEPC patient tumors vs. 13.7% of CRPC-adenocarcinoma samples (P < 0.0004, proportion test)."
In a classic NEPC genetically engineered mouse (GEM) model called TRAMP, p53 and RB1 are both inactivated in the prostate by the SV40 large T antigen oncoprotein, which induces the development of prostate cancers that subsequently progress to NEPC (92). Using GEM and human cell models, loss of p53 and RB1 has been shown to promote lineage plasticity and a phenotypic shift from AR-dependent luminal epithelial cells to AR-independent NEPC with resistance to enzalutamide (an antiandrogen drug) (26,36). PKA-CREB1 Axis Both angiogenesis and NE marker expression can be induced by an increased cellular cAMP level (93)(94)(95). Androgen deprivation therapy (ADT) increases the cAMP level in prostate cancer cells, which activates the PKA-CREB1 pathway that in turn regulates both phenotypes. VEGF and ENO2 have been identified as targets of CREB1 and regulate angiogenesis and NE marker expression, respectively (96)(97)(98). However, targets of CREB1 that regulate both phenotypes were largely unknown. We recently reported two direct targets of CREB1, GRK3 (G protein coupled receptor kinase 3) and HDAC2 (histone deacetylase 2). GRK3 was shown to promote both angiogenesis and NE marker expression in separate studies (detailed below). Induction of HDAC2 by CREB1 is critical for prostate cancer progression promoted by chronic bio-behavioral stress, which activates the PKA-CREB1 pathway through beta adrenergic signaling (99). It is still unknown whether HDAC2 is involved in NE phenotype regulation in prostate cancer. In another study, we found that PKA-CREB1 signaling enhances the epigenetic repressive activity of EZH2 (enhancer of zeste homolog 2), which in turn induces the NE phenotype and angiogenesis (detailed below). In short, the PKA-CREB1 axis seems to be a master upstream regulator of both NE phenotype and angiogenesis in prostate cancer. GRK3 We initially uncovered GRK3 as a key regulator of the progression of prostate cancer through unbiased shRNA and focused cDNA screening of human kinases (100). GRK3 is essential for metastatic prostate cancer cells in culture and in mouse xenografts. Further, its overexpression promotes orthotopic prostate tumor growth in mouse xenografts. Mechanistically, GRK3 promotes prostate cancer progression in part through repressing two anti-angiogenic factors, TSP1 and PAI2, thus inducing angiogenesis in prostate cancer cells (100). Genomic profiling and immunohistochemical staining of human prostate cancers showed that GRK3 is upregulated in advanced prostate cancers (100,101). Of note, we found a strong trend linking GRK3 protein level and glomeruloid microvascular proliferation, a marker of VEGFA-driven angiogenesis, in prostate cancer patient samples. This result further supports a role of GRK3 in stimulating angiogenesis. We recently reported that GRK3 promotes ADT resistance and NE marker expression in prostate adenocarcinoma cells (101). A kinase-dead form of GRK3 abolished these phenotypes, indicating a requirement for GRK3's kinase activity (100,101). Moreover, GRK3 is positively associated with NE marker expression in human cancer cell lines and patient tumors. Upon GRK3 silencing, expression of NE markers induced by ADT was reduced. These results suggest that GRK3 is a key regulator of both NE phenotype and angiogenesis in prostate cancer. It is worth further investigating the molecular mechanisms of GRK3 and the potential of inhibiting GRK3 as a novel strategy to block NEPC.
EZH2 Polycomb repressive complex 2 (PRC2) is another important regulator of both angiogenesis and NEPC. PRC2 usually renders transcriptional repression by tri-methylating lysine 27 of histone H3 (H3K27me3) on target genes (102,103). As the key catalytic subunit of PRC2, EZH2 is widely overexpressed in many tumors, including prostate cancer (102). Overexpression of EZH2 and elevated PRC2 activity promote prostate cancer cell proliferation and migration (103). Clermont et al. found that EZH2 is one of the most upregulated epigenetic regulators in NEPC across multiple datasets from clinical to xenograft tissues (104). Dardenne et al. reported that high catalytic activity of EZH2 promotes N-Myc-AR-PRC2 complex formation and promotes the NE phenotype (37). Ku et al. emphasized that overexpressed EZH2 in prostate-specific Pten-Rb1-p53 triple knockout mice plays a pivotal role in promoting prostate cancer lineage plasticity, antiandrogen resistance, and the neuroendocrine phenotype (26). We recently demonstrated that EZH2 represents a critical molecular link between NE phenotype and angiogenesis, downstream of ADT-activated PKA-CREB1 signaling (66). EZH2 is activated by ADT and PKA-CREB1 signaling, which in turn induces NE markers and reduces TSP1 in prostate cancer cells. Our study also fills a gap in knowledge of how EZH2 overexpression in cancer cells directly contributes to tumor angiogenesis. Lu et al. have shown that EZH2 is induced by VEGF in endothelial cells, which contributes to angiogenesis (105). TSP1 TSP1 has various specific biological activities in different tumor environments. The role, regulation, and expression patterns of TSP1 in human malignancies are highly context dependent and complicated. For general knowledge of TSP1 in urological cancers, please refer to an outstanding review (106). TSP1 is the first identified endogenous inhibitor of angiogenesis. It suppresses endothelial cell proliferation, migration, and tube formation, and induces endothelial apoptosis (107)(108)(109). While TSP1's role in angiogenesis is well-known, we recently established its role and regulation in NEPC (66). As expected, TSP1 inhibits angiogenesis induced by NEPC cells. Furthermore, the expression of TSP1 in NEPC is significantly lower than that in CRPC-adenocarcinoma, and NE markers negatively correlate with TSP1 in several prostate cancer datasets (66). Interestingly, TSP1 silencing increases NE marker expression in PC3 prostate cancer cells, which suggests that TSP1 may directly regulate the NE phenotype. This intriguing observation supports an intimate relation between NE phenotype and angiogenesis in prostate cancer cells (66). The molecular mechanisms underlying TSP1's role in the NE phenotype warrant further investigation. IL-6 As a pro-inflammatory cytokine, interleukin-6 (IL-6) is expressed in both prostate tumors and the stromal tumor microenvironment. IL-6 is well-known to participate in cellular angiogenesis. Recently, Culig and Puhr have elegantly reviewed the role and regulation of IL-6 in prostate cancer (110). Several signaling pathways downstream of IL-6 orchestrate angiogenesis and the NE phenotype in prostate cancer. For example, Ishii et al. showed that IL-6 promotes angiogenesis by up-regulating VEGF through the PI3K/AKT pathway (111). On the other hand, IL-6 boosts the NE phenotype by inducing CHGA and ENO2 expression through the JAK/STAT3 and MAPK pathways (112,113), as well as AMPK activation and autophagy induction (114).
Detailed molecular mechanisms that connect IL-6-induced angiogenesis and the NE phenotype need to be further elucidated. MYCN As a key oncogenic driver in neuroblastoma, MYCN (N-Myc) is also a critical regulator of NEPC and SCLC (small cell lung cancer, a poorly differentiated neuroendocrine lung cancer) (21,37,71,115). While convincing evidence supporting a direct role of N-Myc in regulating angiogenesis is scarce, NDRG1 (N-Myc downstream-regulated gene 1) has demonstrated pleiotropic roles in angiogenesis and cancer progression, depending on cancer type (71,116). Aurora Kinases A and B Aurora kinases A and B (AURKA/B) phosphorylate and stabilize the N-Myc protein, which sustains N-Myc function in promoting NE phenotypes in neuroblastoma (117). AURKA and AURKB have been shown to regulate VEGF production and angiogenesis directly in endothelial cells and in neuroblastoma cells (118,119). It is postulated that AURKA and/or AURKB may regulate angiogenesis in NEPC, although direct evidence is needed. HIF1A-FOXA2 Axis HIF1 and HIF2 are well-known key regulators of angiogenesis (48,50,120). Recent studies have also implicated them, especially HIF1A, in regulating the neuroendocrine phenotype in prostate cancer. HIF1A cooperates with FOXA2, a transcription factor expressed in NE tissue, to induce several HIF1A target genes that are required for the hypoxia-mediated NE phenotype and metastasis in prostate cancer (41,43). ONECUT2 (OC2) According to recent reports by Rotinen et al. and Guo et al., ONECUT2 plays a critical role in poorly differentiated neuroendocrine prostate tumors as a master transcriptional regulator (41,121). As a survival factor in mCRPC models, ONECUT2 represses the AR transcriptional program and activates NE differentiation genes, driving progression to lethal disease (121). Besides, overexpression of ONECUT2 in prostate adenocarcinoma under hypoxic conditions is able to inhibit AR signaling and induce the NE phenotype (41). Given the crucial role of hypoxia in angiogenesis, we postulate that ONECUT2 may also contribute to the angiogenic phenotype of NEPC, which warrants further study. One study in ovarian cancer demonstrated that silencing ONECUT2 reduces VEGF expression and vascularization in xenograft tumors (122). RET RET mutations are enriched in lung adenocarcinomas with NE differentiation (123,124). Knockdown of RET inhibits prostate tumor growth in vivo (125). A recent study from Justin Drake's lab has shown that RET phosphopeptide and mRNA levels are higher in NEPC than in prostate adenocarcinoma, while the RET inhibitor AD80 blocks NEPC cell growth in culture and in mouse xenografts (126). Further experiments on gain and loss of function of the RET protein will need to be carried out in NEPC cell models. While a role of RET in angiogenesis is well-established in medullary thyroid cancers (127), it is still unclear whether it is critical for the angiogenic phenotype in poorly differentiated neuroendocrine tumors, such as NEPC. TARGETING THE MOLECULAR LINKS BETWEEN ANGIOGENESIS AND NE PHENOTYPE FOR DEVELOPING NEW THERAPIES As summarized above, elevated angiogenesis and NE marker expression are two important interconnected phenotypes. Targeting key molecules linking these two phenotypes may be an effective therapeutic strategy for neuroendocrine prostate cancers. Potential therapeutic agents targeting some of these molecules include beta blockers inhibiting PKA-CREB1 signaling, TSP1 mimetic peptides, inhibitors of EZH2 and the HIF1 pathway, and IL-6 pathway blockade.
It is paramount to evaluate these and other related agents, alone and in combination, for NEPC, given the reported contributions of their targets in this lethal variant of prostate cancer that has no effective treatment. Beta Blockers Beta blockers, which inhibit beta adrenergic signaling and PKA-CREB1 activation, have been used to treat patients with cardiovascular diseases for decades. According to epidemiology studies, cancer patients who have used beta blockers for cardiovascular diseases have better clinical outcomes than matched patients who have not, in multiple cancer types, including melanoma, prostate, lung, and breast cancers (128)(129)(130). Results from these retrospective investigations are consistent with emerging evidence supporting anti-tumor effects of beta blockers in cancer cells in vitro and in mouse xenografts (99,(131)(132)(133). Because beta blockers have already been used for hypertension and heart diseases for years, they may also become efficient and safe therapies for NEPC. The beta blockers propranolol and carvedilol are being tested in several cancer clinical trials (clinicaltrials.gov). However, major obstacles for beta blockers in clinical studies include an incomplete understanding of their mechanisms of action in cancers, as well as a shortage of biomarkers for patient selection and efficacy monitoring (129,134). We recently reported that propranolol downregulates NE marker expression and inhibits angiogenesis and growth of NEPC cell-derived xenografts by blocking a critical pathway, CREB1-EZH2-TSP1 (66). This finding suggests that this pathway's activity level may serve as a biomarker for future cancer clinical trials of beta blockers. The therapeutic value of propranolol and other PKA-CREB1 signaling inhibitors in prostate cancer treatment should be further tested. EZH2 Inhibitors Based on the driving role and significant overexpression of EZH2 in many tumors, several inhibitors targeting EZH2 have been developed, such as GSK126, GSK343, GSK503, EPZ6438, CPI-1205, PF-06821497, and DZNeP. Some of these EZH2 inhibitors have demonstrated anti-tumor activity against NEPC in vitro and in vivo. Beltran et al. found that GSK343 preferentially inhibited cell viability of NEPC cells, while minimally affecting non-NEPC cells (25). Ku et al. reported that GSK503 restored enzalutamide sensitivity of prostate tumors from castrated Pten-Rb1 double knockout mice (26). DZNeP has also shown some anti-tumor activity in preclinical studies of several cancer types, including prostate cancer (135,136). We recently demonstrated that conditioned media from prostate cancer cells expressing EZH2 shRNA or treated with GSK126 or EPZ6438 inhibit in vitro angiogenesis of endothelial cells (66). In addition, GSK126 and DZNeP were shown to decrease NE marker expression (66). Several EZH2 inhibitors are currently in clinical trials for multiple types of lymphoma, synovial sarcoma, and solid epithelial tumors: NCT03010982 and NCT01897571 (EPZ6438), NCT03480646 (CPI-1205), and NCT03460977 (PF-06821497). It is conceivable that the existing EZH2 inhibitors or other new drugs under development may show efficacy against NEPC. HIF Pathway Inhibitors Pathways of hypoxia-inducible factors (HIF) play key roles in the development of resistance to different treatment modalities. Therefore, HIF pathway inhibitors targeting advanced cancers warrant further clinical study, either as single agents or in combination with other therapeutic agents (137).
Specifically for prostate cancer, two mCRPC clinical trials of HIF pathway inhibitors, including 2ME2 nanocrystal dispersion (Panzem) and 17-AAG (tanespimycin), have been reported, which unfortunately showed little efficacy (138,139). However, given the critical roles of HIF in controlling both angiogenesis and neuroendocrine phenotypes in NEPC, future testing of other inhibitors of the HIF pathway, alone or in combination, is still justified for NEPC. Interestingly, Guo et al. recently showed that TH-302 (evofosfamide), a prodrug activated by hypoxia, significantly inhibits NEPC tumor growth (41). An ongoing immunotherapy study combines ipilimumab (targeting CTLA-4) and evofosfamide for the treatment of several solid tumor types, including confirmed metastatic or locally advanced prostate cancers (NCT03098160). Aurora Kinase Inhibitor A phase II trial of alisertib (MLN8237), an Aurora kinase A inhibitor, for castration-resistant and neuroendocrine prostate cancers was recently completed (140). Although the trial did not meet its primary endpoint of significantly extending 6-month radiographic progression-free survival (rPFS), a subset of advanced prostate cancer patients with AURKA and N-Myc activation achieved significant clinical benefit. TSP-1 Mimetic Peptides ABT-510, a TSP-1 mimetic peptide, has been tested in phase I and II clinical trials for many cancer types, including soft tissue sarcoma, metastatic melanoma, renal cell carcinoma, and advanced solid tumors (141)(142)(143)(144). ABT-510 failed to show significant clinical benefit as a single agent, suggesting that a combinatory strategy is needed. Combination of ABT-510 and Cytoxan led to a delay in progression of PC-3 tumor xenografts (145). Notably, in a phase I study of glioblastoma, combination of ABT-510 with temozolomide and radiotherapy moderately extended overall survival time (146). These findings suggest that combination of ABT-510 with other standard anti-tumor therapies may be an effective strategy to yield better clinical efficacy. Recently, a new TSP-1 mimetic peptide, ABT-898, with greatly increased potency over ABT-510, has been generated. Its efficacy has been tested in rodents and dogs (147)(148)(149), showing more notable antiangiogenic efficacy than ABT-510 (147). The therapeutic potential of ABT-510 and ABT-898 in prostate cancers, especially in NEPC, warrants additional study. IL-6 Pathway Blockade Given its critical contributions to cancer progression, the IL-6 signaling pathway (IL-6/IL-6R/JAK/STAT3) is being actively pursued for novel cancer therapies. Recent progress and obstacles in targeting IL-6 to treat cancers have been well summarized (150)(151)(152). Agents blocking IL-6/IL-6R or inhibiting JAK/STAT3 to block tumor progression have been or are being tested in clinical trials, such as siltuximab (an anti-IL-6 mAb), tocilizumab (an anti-IL-6R mAb), and ruxolitinib (a JAK signaling inhibitor). Although much evidence supports a key role of IL-6 cascades in regulating the growth of malignant cells in preclinical studies, anti-IL-6 or anti-IL-6R mAbs have not demonstrated clinical efficacy in several cancer types. The lack of efficacy of IL-6 pathway blockade in cancer is likely due to tumor cell plasticity, with different tumor clones present in tumor samples in vivo (153). Testing IL-6 pathway inhibitors in combination with standard or other targeted therapies is still favored for NEPC.
FUTURE DIRECTIONS Besides the knowledge gaps and future directions mentioned above for individual regulators or therapeutic developments, we believe that the following three directions warrant further investigation to fully understand and target the molecules common to pathways promoting angiogenesis and neuroendocrine differentiation of prostate cancer. Do Neuroendocrine Differentiation and Angiogenesis Promote Each Other? We have described several genes reported to regulate both neuroendocrine and angiogenic phenotypes. Much of the knowledge for both phenotypes comes from different biological contexts or cancer types. It is largely unclear whether induction of one phenotype leads to an increase in the other phenotype in the same biological system, such as in NEPC. It is conceivable that induction of the neuroendocrine phenotype may promote angiogenesis, in part due to secretion of pro-angiogenic factors by neuroendocrine cells, such as VEGF and the neuropeptides bombesin and gastrin, although the roles of these factors in the neuroendocrine phenotype are still unclear (15). Do Critical Regulators Established in One Phenotype Contribute to the Other Phenotype? This review mainly focuses on genes that have been implicated in regulating both angiogenesis and neuroendocrine differentiation, although in separate contexts for many genes. To better understand these two phenotypes and to facilitate the development of effective treatments for NEPC, a systematic investigation is necessary to define the roles of these regulators in a shared context. Moreover, studies have characterized the function of several other proteins in regulating either neuroendocrine differentiation (such as BRN2, PEG10, SRRM4, REST, and DEK) or angiogenesis (such as FGF, TGF, EGFL6, PDGF, MMPs, and CCL2). Given the intimate links between the two characteristics as we have summarized, it is worthwhile to investigate the roles of critical regulators of neuroendocrine differentiation in regulating angiogenesis, and vice versa. Anti-angiogenesis Therapy and Combination Treatments for NEPC? Positive results of anti-angiogenic therapy were observed in pancreatic neuroendocrine tumors (PNET), another type of neuroendocrine tumor that is well-differentiated, with a better prognosis than SCLC and NEPC. Sunitinib is a multi-targeted tyrosine kinase receptor inhibitor of VEGFR1-3, PDGFR, c-kit, RET, CSF-1R, and FLT3. It has demonstrated direct antitumor and antiangiogenic effects, and has received FDA approval for the treatment of locally advanced or metastatic PNETs (154,155). In SCLC, it was demonstrated that higher VEGF is associated with poor prognosis, which makes blocking the VEGF pathway a reasonable strategy for inhibiting angiogenesis and tumor progression. However, only limited clinical benefits were observed in this attempt (156). As far as we know, no results have been reported from clinical trials of anti-angiogenic therapy for NEPC. Due to the striking pathological similarity between SCLC and NEPC, it is likely that finding the right combinations of anti-angiogenic and other therapies will be key to achieving significant efficacy for NEPC. Several strategies for combining anti-angiogenic regimens with targeted/chemo/immune therapies have been or are being tested clinically in several cancer types (59). These strategies include combining different anti-angiogenic regimens, simultaneously inhibiting angiogenesis and driver oncogenes, or combining anti-angiogenic regimens with immunotherapy.
It is conceivable that similar combinatorial strategies are applicable to NEPC. Another strategy for NEPC is to target key regulators of both NEPC phenotypes that we have discussed, i.e., neuroendocrine differentiation and angiogenesis, hitting two birds with one stone. In section Targeting the Molecular Links Between Angiogenesis and NE Phenotype for Developing New Therapies, we have summarized some opportunities for developing therapeutics to target pathways involved in both angiogenesis and neuroendocrine phenotypes. It may be necessary to co-target multiple key regulators of both phenotypes to simultaneously block alternative pathways that NEPC cells may use to escape. CONCLUSION NEPC is lethal, lacks effective treatment, and remains poorly understood. NEPC tumors often have both elevated neuroendocrine marker expression and increased angiogenesis, the mechanisms of which remain largely elusive. Here, we summarize the literature on several proteins and pathways that regulate both angiogenesis and the neuroendocrine phenotype in prostate cancer and other contexts. Bridging the mechanistic gaps between the regulation of angiogenesis and the neuroendocrine phenotype will facilitate a better understanding of NEPC progression. We also discuss the opportunities of targeting some of these key regulators to inhibit both angiogenesis and the neuroendocrine phenotype for the treatment of patients with NEPC. Furthermore, many of the molecular mechanisms that we discuss here for NEPC are also dysregulated in small cell lung cancer (SCLC), a poorly differentiated aggressive neuroendocrine lung carcinoma. Therefore, we expect that much of the current knowledge and new therapeutic potential summarized here for NEPC is relevant to SCLC. AUTHOR CONTRIBUTIONS ZW, YZ, ZA, and WL wrote the paper. FUNDING This work was supported by awards from the American Cancer Society (RSG-17-062-01) and the Cancer Prevention and Research Institute of Texas (CPRIT, RP170330) to WL. It was also supported by CPRIT (RP150551) and the Welch Foundation (AU-0042-20030616) to ZA.
6,673.8
2020-01-21T00:00:00.000
[ "Biology" ]
Optimal Deep Learning Driven Smart Sugarcane Crop Monitoring on Remote Sensing Images : Crop monitoring traditionally involves regular field visits, which is difficult since it requires a huge amount of time and manpower. In modern agriculture, however, an extensive range of satellite data, such as Landsat, Sentinel-2, MODIS, and PALSAR, is readily available. Sugarcane is a tall perennial grass belonging to the genus Saccharum, utilized for producing sugar. These plants are generally 2–6 m tall, with fibrous, stout, jointed stalks rich in sucrose, which accumulates in the stalk internodes. Sugarcane has a different growth pattern and phenology than many other crops; thus, the spectral and temporal features of satellite data are examined by utilizing statistical and machine learning (ML) techniques for optimal discrimination of sugarcane fields from other crops. In this study, we propose an Optimal Deep Learning Driven Smart Sugarcane Crop Monitoring (ODLD-SSCM) model on Remote Sensing Images. The presented ODLD-SSCM model mainly intends to estimate the crop yield of sugarcane using RSIs. In the presented ODLD-SSCM technique, the sugarcane yield mapping is derived by the use of the self-attentive deep learning (SADL) model. Besides, an oppositional spider colony optimization (OSCO) algorithm is used for the hyperparameter tuning of the ODLD-SSCM model. A detailed set of experiments was performed to demonstrate the enhanced outcomes of the ODLD-SSCM model. A comprehensive comparison study pointed out the enhancements of the ODLD-SSCM model over other recent approaches. Introduction Remote sensing (RS) is an effective source of data for site-specific crop monitoring, offering spatial and temporal information [1]. Orbital imagery is typically utilized in the agricultural sector to find spectral differences resulting from crop and soil features at a large scale, to support diagnostics for agronomic crop variables, and to help farmers make better management decisions [2]. For example, over the years, orbital images have been employed to demarcate management areas for annual crops, observe in-field yield variability for several crops like cotton and corn, develop crop growth models, map vineyard variability, map grassland biomass, and plan the wheat harvest, among other uses [3]. The main limitations of orbital images are the lack of measurement accuracy and of ground-truth data for the agronomic variables. Additionally, empirical methods for predicting agronomic variables from spectral data might have spatial and temporal limitations when implemented over various seasons and fields [4]. For sugarcane, the evaluation of spatial variability is difficult because of the limited number of adapted solutions, mainly for yield mapping.
Owing to its distinctive phenological stages, sugarcane has a distinct growing pattern, as reflected in the time series of data, which typically spans the whole year [5]. In the germination stage, tiny sprouts appear, whereas in the tillering stage, the plants advance quickly without elongation. In the third stage, named the grand growth stage, stem elongation occurs, followed by significant leaf extension [6]. The last stage is the maturity stage, in which the crop loses its chlorophyll content and vigour and becomes appropriate for harvest. Sugarcane is a commercial crop whose harvest is scheduled according to market demand and sugar mill capacity [7]. This dynamic harvesting period makes sugarcane classification difficult, as the crop cycle includes large variability. Similarly, randomness in the time series pattern might appear at the initial harvest, since the farmer might cultivate other crops in the same fields. Yield maps are necessary to better understand in-field variability, delimit management regions, and enhance site-specific management techniques [8]. Sugarcane yield maps are generally obtained from data collected directly by monitors on harvesters, which have certain limitations. Another typical feature of prior studies is the dependence on a single image at a given crop stage for investigating the relation between yield and spectral vegetation indices (VIs) [9]. Given the comparatively high temporal resolution of available images (for instance, the revisit frequency of the Sentinel-2 constellation is 5 days) and the potential of new analytical tools for managing several predictor variables, a time series method for sugarcane yield prediction is valuable to examine. Machine learning (ML) methods have been shown to offer higher forecasting accuracy than conventional statistical analyses, along with the ability to identify patterns in the dataset [10]. Such methods are typically subjected to training and testing procedures, depending on the dataset complexity. In this study, we propose an Optimal Deep Learning Driven Smart Sugarcane Crop Monitoring (ODLD-SSCM) model on Remote Sensing Images. The presented ODLD-SSCM model mainly intends to estimate the crop yield of sugarcane using RSIs. In the presented ODLD-SSCM technique, the sugarcane yield mapping is derived by the use of the self-attentive deep learning (SADL) model. Besides, an oppositional spider colony optimization (OSCO) algorithm is used for the hyperparameter tuning of the ODLD-SSCM model. A detailed set of experiments was performed to demonstrate the enhanced outcomes of the ODLD-SSCM model. Related Works Kumpala et al.
[11] applied a DL technique based on the CNN model YOLO, which creates a simulation for recognizing images. The technique was utilized for recognizing sugarcane diseases from specific images. The Sugarcane-Leaf Disease Diagnosis Scheme was developed and designed to assist the user in automatically recognizing sugarcane diseases, and it contained two parts: disease diagnosis and detection. The authors in [12] applied DL and computer vision technologies for selecting and planting healthier billets, which improved the plant population and the yield per hectare of sugarcane planting. The researchers applied a popular CNN architecture for processing large image datasets and transfer learning (TL) approaches for extending the outcomes to distinct varieties of sugarcane. It is time consuming to collect and label enormous datasets for every variety of sugarcane for which quality inspection is required prior to planting. The authors applied a two-step TL approach to extend the trained model to a new variety. In [13], an object detection technique based on DL was proposed for sugarcane stem node detection in complicated environments, and the generalization and robustness capability were enhanced using a dataset expansion model to simulate distinct illumination conditions. The effect of data expansion and lighting conditions at distinct times on the outcomes of sugarcane stem node recognition was discussed, and YOLO v4 was confirmed to perform best in the study by comparing four distinct DL approaches: YOLO v3, Fast RCNN, SSD300, and RetinaNet. Picoli et al. [14] examined the potential of Landsat images for sugarcane drought detection by measuring the relationships among the normalized difference water index (NDWI), vegetation condition index (VCI), normalized difference infrared index (NDII), global vegetation moisture index (GVMI), and agricultural and vegetation drought indices such as the normalized difference vegetation index (NDVI). The study presented two novel indices combining short-wave infrared (SWIR) and near-infrared (NIR) bands designed for detecting sugarcane water deficiency. Each index was collectively and individually compared to soil water surplus and water deficit, based on the climatological soil-water balance (CSWB) technique. A new DL architecture was developed in [15] for detecting whether a sugarcane plant is infected or not by examining its stem, color, leaves, and so on. The study encompasses three scenarios based on distinct feature extractors, namely VGG-19, Inception v3, and VGG-16, which are pretrained models on which distinct classifiers are trained.
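Several of the band-ratio indices mentioned in these related works (NDVI, NDWI, GVMI) also recur, together with GNDVI, NDRE, and WDRVI, as inputs in the experiments reported later in this paper. Purely as a point of reference, the sketch below computes the standard forms of a few of these indices from reflectance bands; the Sentinel-2 band assignments noted in the comments are illustrative assumptions, since the paper does not state which sensor bands feed each index.

```python
import numpy as np

def normalized_difference(a, b):
    """(a - b) / (a + b), returning 0 where the denominator is zero."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    denom = a + b
    return np.divide(a - b, denom, out=np.zeros_like(denom), where=denom != 0)

def spectral_indices(red, green, red_edge, nir, swir, alpha=0.2):
    """Common vegetation/water indices from reflectance bands.

    Assumed band sources (illustrative only): red=B4, green=B3, red_edge=B5,
    nir=B8, swir=B11 of Sentinel-2 surface reflectance.
    """
    return {
        "NDVI":  normalized_difference(nir, red),
        "GNDVI": normalized_difference(nir, green),
        "NDRE":  normalized_difference(nir, red_edge),
        "NDWI":  normalized_difference(nir, swir),         # Gao's NIR-SWIR water index
        "WDRVI": normalized_difference(alpha * nir, red),  # wide dynamic range VI
    }
```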
Johansen et al. [16] used high-spatial-resolution GeoEye-1 satellite imagery and geographic object-based image analysis (GEOBIA) over 3 years to map canegrub damage and developed two mapping techniques appropriate for risk mapping. These involved: (1) early separation of sugarcane block limits; (2) subsequent omission and classification of harvested or fallow tracks, fields, and other non-sugarcane features within the block limits; (3) detection of possible canegrub-damaged regions with lower NDVI values and higher levels of image texture within all blocks; (4) additional filtering of canegrub-damaged areas into low, medium, and high probability; and (5) classification of risk. In this study, an effective ODLD-SSCM model has been developed for smart sugarcane crop monitoring on remote sensing images. The presented ODLD-SSCM model mainly intends to estimate the crop yield of sugarcane using RSIs. In the presented ODLD-SSCM technique, the sugarcane yield mapping is derived by the use of the SADL model. Besides, the OSCO algorithm is used for the hyperparameter tuning of the ODLD-SSCM model. Fig. 1 depicts the block diagram of the ODLD-SSCM approach. Yield Mapping Using SADL Model In the presented ODLD-SSCM technique, the sugarcane yield mapping is derived by the use of the SADL model. The SADL integrates a self-attention module with a CNN for addressing variable-length signal data. Initially, convolution is applied to the sensor signal for feature extraction [17]. The sensor information is in the form of a matrix with a sensor axis and a variable-length time axis. The matrix dataset of the $i$-th wafer ($i = 1, \cdots, N$) is denoted by $X_i = (x^{(1)}, \cdots, x^{(T_i)}) \in \mathbb{R}^{p \times T_i}$, where $p$ indicates the sensor count and $T_i$ indicates the timestamp count for the $i$-th wafer. For extracting correlations among the sensors, a convolution filter in the form of a non-square rectangle moves along the time axis. The convolutional filter is applied to the raw sensor dataset with a window of successive time steps to produce a feature map $H$. The resulting vector is a fixed-size encoding of the variable-size $H$ that summarizes the series with attention over time. Next, a feedforward layer on top of the encoder categorizes the wafer stream into fault or normal classes. The attention weights have a real benefit: they provide insight for determining the crop yield mapping of sugarcane. Fig. 2 illustrates the framework of the SADL technique.
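A minimal sketch of the kind of self-attentive network described above is given below, assuming a PyTorch implementation: a 1-D convolution slides along the time axis of the multichannel signal, a learned scoring layer produces per-time-step attention weights, and the attention-weighted sum yields a fixed-size encoding that a feed-forward head maps to the output. Layer sizes, the regression head, and variable names are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SelfAttentiveCNN(nn.Module):
    """Minimal sketch of a self-attentive CNN for variable-length multichannel series."""
    def __init__(self, n_channels: int, n_filters: int = 64, kernel_size: int = 5):
        super().__init__()
        # Convolution slides along the time axis and mixes all input channels.
        self.conv = nn.Conv1d(n_channels, n_filters, kernel_size, padding=kernel_size // 2)
        self.score = nn.Linear(n_filters, 1)   # one attention score per time step
        self.head = nn.Linear(n_filters, 1)    # regression head (e.g., a yield estimate)

    def forward(self, x):                            # x: (batch, n_channels, T); T may vary across batches
        h = torch.relu(self.conv(x))                 # (batch, n_filters, T)
        h = h.transpose(1, 2)                        # (batch, T, n_filters)
        attn = torch.softmax(self.score(h), dim=1)   # (batch, T, 1), attention weights over time
        z = (attn * h).sum(dim=1)                    # fixed-size encoding, (batch, n_filters)
        return self.head(z).squeeze(-1), attn.squeeze(-1)

# Example: a batch of 8 series with 10 channels and 120 time steps.
model = SelfAttentiveCNN(n_channels=10)
y_hat, weights = model(torch.randn(8, 10, 120))
```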
Parameter Tuning Using OSCO Algorithm Next, the OSCO algorithm is used for the hyperparameter tuning of the ODLD-SSCM model. The SCO algorithm sets the overall number of spiders, randomly assigns individuals to the male spider (MSP) and female spider (FSP) groups, and mates them within the search space to generate new spider individuals; this process is repeated so that the whole spider population iterates and continuously moves closer to the optimum solution [18]. Finally, the value of the optimum solution is obtained, as described below. Initial value setting The algorithm first sets the numbers of FSP and MSP and produces the FSP and MSP populations: of the overall number of spiders $N$, 65%-90% of the population is chosen as FSP and the remainder as MSP, where $N_f$ represents the number of female spiders; the female and male spider sets are generated at random. The mating radius of the spiders is defined as $r = \frac{1}{2n}\sum_{j=1}^{n}\left(p_j^{high} - p_j^{low}\right)$, where $p_j^{high}$ and $p_j^{low}$ indicate the upper and lower limits of the $j$-th dimension of the parameter space. Resolving the individual weight of the spider The weight of an individual spider signifies its quality with respect to the optimum solution: if it is greater, the spider individual is closer to the optimum solution; if it is smaller, the spider individual is farther from the optimum solution. $$w_i = 1 - \frac{f(s_i) - f_{best}}{f_{worst} - f_{best}} \qquad (7)$$ Here $w_i$ is the weight of individual spider $s_i$, $f(s_i)$ is its fitness value, and $f_{best}$ and $f_{worst}$ indicate the fitness values of the best and worst individuals. Female spider movement The FSP gets closer to individuals with larger weight. If the random number drawn for a female exceeds the threshold probability, the FSP iteration moves the individual farther from the neighborhood optimum individual and the global optimum individual, with a random component, to ensure search coverage; if the random number is less than or equal to the threshold, the FSP iteration moves the individual closer to the neighborhood optimum individual and the global optimum individual, again with a random component. Male spider movement Male spiders move closer to the middle MSP; the individual spider with the middle weight value in the population is the middle MSP. Let $w_{N_f+m}$ denote the weight of the middle MSP. If $w_{N_f+i} < w_{N_f+m}$, the MSP iteration moves individual $i$ closer to the middle male individual; if $w_{N_f+i} \geq w_{N_f+m}$, the MSP iteration moves it closer to the optimum female individual, with a random component in the neighborhood. Male and female spider mating For the $i$-th male individual, if female individuals exist within its mating radius, mating behavior occurs. A new spider is generated, forming a new spider population after mating. In the new spider population, the distribution probability of every individual is evaluated as $$P_i = \frac{w_i}{\sum_{j} w_j} \qquad (11)$$ The optimum individual is then chosen based on the roulette-wheel model. Once the location of the optimum individual is determined, the optimum individual replaces the worst one. The above operations are iterated until every male and female has completed mating. The ODLD-SSCM model is derived by the integration of SCO with the oppositional based learning (OBL) concept. This technique has been demonstrated to be an effective means of improving the search pattern of metaheuristics [19]. The method stems from the simultaneous evaluation of a base agent and its opposite to increase the probability of meeting better agents. The opposite of a real number $x \in [a, b]$ is given as $\breve{x}$ in the following: $$\breve{x} = a + b - x \qquad (12)$$ In Eq. (12), $a$ and $b$ indicate the lower and upper bounds. In a multi-dimensional space, $x$ is formulated as $x = \{x_1, x_2, x_3, \ldots, x_D\}$ with $x_j \in [a_j, b_j]$, where $j = 1, 2, 3, \ldots, D$, and the opposite point is given by $$\breve{x}_j = a_j + b_j - x_j \qquad (13)$$ In the process of optimization, the opposite point replaces the solution if it has a better fitness value. In other words, the location of the population is updated according to $\breve{x}$ and the best fitness values.
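Only two ingredients of the optimizer are specified precisely enough above to sketch directly: the normalized spider weights and the opposition-based learning step, which pairs every candidate with its opposite point and keeps whichever has the better fitness. The sketch below illustrates these two pieces in Python under the stated bounds; the full population-update rules of the spider colony algorithm are only described verbally in the text and are therefore not reproduced, and the objective function used here is a placeholder.

```python
import numpy as np

def spider_weights(fitness: np.ndarray) -> np.ndarray:
    """Normalized weights: best individual -> 1, worst -> 0 (minimization assumed)."""
    best, worst = fitness.min(), fitness.max()
    if best == worst:                        # degenerate population, all equal
        return np.ones_like(fitness)
    return 1.0 - (fitness - best) / (worst - best)

def opposition_step(pop: np.ndarray, lo: np.ndarray, hi: np.ndarray, fitness_fn):
    """Opposition-based learning: x_opp = lo + hi - x, keep the fitter of each pair."""
    opp = lo + hi - pop
    f_pop = np.apply_along_axis(fitness_fn, 1, pop)
    f_opp = np.apply_along_axis(fitness_fn, 1, opp)
    keep_opp = f_opp < f_pop                 # minimization
    return np.where(keep_opp[:, None], opp, pop)

# Toy usage with a placeholder objective (sum of squares) on a 5-dimensional search space.
rng = np.random.default_rng(0)
lo, hi = np.full(5, -1.0), np.full(5, 1.0)
population = rng.uniform(lo, hi, size=(20, 5))
population = opposition_step(population, lo, hi, lambda x: float(np.sum(x ** 2)))
weights = spider_weights(np.array([float(np.sum(x ** 2)) for x in population]))
```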
Results and Discussion This section investigates the experimental validation of the ODLD-SSCM model under diverse aspects. Tab. 1 provides the overall yield mapping outcomes of the ODLD-SSCM model. Fig. 3 offers a brief sugarcane yield mapping performance of the ODLD-SSCM model on the spectral bands. On training data, the ODLD-SSCM model has offered RMSE of 1.52, R2 of 0.975, and MAE of 1.034. In addition, on testing data, the ODLD-SSCM method has presented RMSE of 4.56, R2 of 0.680, and MAE of 3.019. Moreover, on the entire data, the ODLD-SSCM technique has presented RMSE of 2.93, R2 of 0.909, and MAE of 1.607. Fig. 4 provides a brief sugarcane yield mapping performance of the ODLD-SSCM approach on GNDVI. On training data, the ODLD-SSCM technique has provided RMSE of 2.17, R2 of 0.927, and MAE of 1.385. Likewise, on testing data, the ODLD-SSCM method has rendered RMSE of 5.36, R2 of 0.592, and MAE of 3.909. Also, on the entire data, the ODLD-SSCM model has provided RMSE of 3.63, R2 of 0.833, and MAE of 2.290. Fig. 5 provides a brief sugarcane yield mapping performance of the ODLD-SSCM approach on NDRE. On training data, the ODLD-SSCM algorithm has presented RMSE of 1.83, R2 of 0.925, and MAE of 1.455. Similarly, on testing data, the ODLD-SSCM method has offered RMSE of 4.61, R2 of 0.591, and MAE of 3.667. Additionally, on the entire data, the ODLD-SSCM algorithm has rendered RMSE of 3.41, R2 of 0.814, and MAE of 2.068. Fig. 6 displays the detailed sugarcane yield mapping performance of the ODLD-SSCM model on NDVI. On training data, the ODLD-SSCM approach has offered RMSE of 1.79, R2 of 0.952, and MAE of 1.387. Also, on testing data, the ODLD-SSCM method has offered RMSE of 4.73, R2 of 0.620, and MAE of 3.726. Also, on the entire data, the ODLD-SSCM approach has offered RMSE of 3.32, R2 of 0.810, and MAE of 2.266. Fig. 7 exemplifies a comprehensive sugarcane yield mapping performance of the ODLD-SSCM model on WDRVI. On training data, the ODLD-SSCM technique has provided RMSE of 2.26, R2 of 0.976, and MAE of 1.346. Also, on testing data, the ODLD-SSCM model has offered RMSE of 4.86, R2 of 0.588, and MAE of 3.867. Moreover, on the entire data, the ODLD-SSCM algorithm has rendered RMSE of 3.68, R2 of 0.812, and MAE of 2.280. Tab. 2 and Fig. 8 provide a detailed RMSE examination of the ODLD-SSCM model with other models on different bands [4]. The results indicated that the ODLD-SSCM model has offered enhanced performance with the least RMSE values over other models. For instance, on the entire spectral band dataset, the ODLD-SSCM model has offered a reduced RMSE of 2.93 whereas the RF and MLR models have gained increased RMSE of 3.35 and 6.34 respectively. Meanwhile, on the entire GNDVI dataset, the ODLD-SSCM technique has presented a reduced RMSE of 3.63 whereas the RF and MLR algorithms have attained increased RMSE of 3.96 and 6.08 correspondingly. Finally, on the entire NDRE dataset, the ODLD-SSCM algorithm has granted a reduced RMSE of 3.41 whereas the RF and MLR approaches have gained increased RMSE of 3.72 and 6.3 correspondingly. At last, on the entire NDVI dataset, the ODLD-SSCM model has provided a reduced RMSE of 3.32 whereas the RF and MLR approaches have gained increased RMSE of 3.63 and 6.29 correspondingly. An extensive R2 examination of the ODLD-SSCM model with recent approaches is provided in Fig. 9 and Tab. 3. The experimentation outcomes depicted that the ODLD-SSCM model has shown effectual outcomes with maximum R2 values. For instance, on the entire spectral band dataset, the ODLD-SSCM model has accomplished a higher R2 value of 0.909 whereas the RF and MLR models have demonstrated lower R2 values of 0.895 and 0.461 respectively. Furthermore, on the entire GNDVI dataset, the ODLD-SSCM method has accomplished a higher R2 value of 0.833 whereas the RF and MLR approaches have demonstrated lower R2 values of 0.819 and 0.408 correspondingly. Additionally, on the entire NDRE dataset, the ODLD-SSCM technique has accomplished a higher R2 value of 0.814 whereas the RF and MLR methods have demonstrated lower R2 values of 0.798 and 0.429 correspondingly. Tab. 4 and Fig. 10 deliver a detailed MAE inspection of the ODLD-SSCM method with other approaches on different bands. The results denote that the ODLD-SSCM approach has gained enhanced performance with the least MAE values over other models. For example, on the entire spectral band dataset, the ODLD-SSCM model has offered a reduced MAE of 1.607 whereas the RF and MLR approaches have obtained increased MAE of 2.077 and 4.692 respectively. Meanwhile, on the entire GNDVI dataset, the ODLD-SSCM method has presented a reduced MAE of 2.290 whereas the RF and MLR approaches have obtained increased MAE of 4.838 and 2.660 correspondingly. Eventually, on the entire NDRE dataset, the ODLD-SSCM model has offered a reduced MAE of 2.068 whereas the RF and MLR techniques have obtained increased MAE of 2.568 and 4.916 correspondingly. Finally, on the entire NDVI dataset, the ODLD-SSCM approach has provided a reduced MAE of 2.266 whereas the RF and MLR techniques have attained increased MAE of 2.606 and 4.893 correspondingly. After examining these experimentation outcomes, the ODLD-SSCM model ensured its betterment in sugarcane yield mapping performance over other models.
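For reference, the three error measures reported throughout this comparison (RMSE, R2, and MAE) can be computed as in the short sketch below; the observed and predicted yield arrays shown are illustrative values, not the paper's data.

```python
import numpy as np

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """RMSE, coefficient of determination (R2), and MAE for a set of yield predictions."""
    resid = y_true - y_pred
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    mae = float(np.mean(np.abs(resid)))
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else float("nan")
    return {"RMSE": rmse, "R2": r2, "MAE": mae}

# Illustrative values only, not taken from the paper.
y_obs = np.array([78.0, 92.5, 85.3, 101.2])
y_hat = np.array([80.1, 90.0, 88.0, 98.7])
print(regression_metrics(y_obs, y_hat))
```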
5 Conclusion
In this study, an effective ODLD-SSCM model has been developed for smart sugarcane crop monitoring on remote sensing images. The presented ODLD-SSCM model mainly intends to estimate the crop yield of sugarcane using RSIs. In the presented ODLD-SSCM technique, the sugarcane yield mapping is derived by the use of the SADL model. Besides, the OSCO algorithm, which is derived by the integration of SCO with the OBL concept, is used for the hyperparameter tuning of the ODLD-SSCM model. A detailed set of experimentations was performed to demonstrate the enhanced outcomes of the ODLD-SSCM model. A comprehensive comparison study pointed out the enhancements of the ODLD-SSCM model over other recent approaches.

Figure 2: Structure of SADL

Fig. 3 offers a brief sugarcane yield mapping performance of the ODLD-SSCM model on the spectral band. On training data, the ODLD-SSCM model has offered RMSE of 1.52, R2 of 0.975, and MAE of 1.034. In addition, on testing data, the ODLD-SSCM method has presented RMSE of 4.56, R2 of 0.680, and MAE of 3.019. Moreover, on the entire data, the ODLD-SSCM technique has presented RMSE of 2.93, R2 of 0.909, and MAE of 1.607.

Figure 3: Result analysis of ODLD-SSCM approach under Spectral bands

Fig. 4 provides a brief sugarcane yield mapping performance of the ODLD-SSCM approach on GNDVI. On training data, the ODLD-SSCM technique has provided RMSE of 2.17, R2 of 0.927, and MAE of 1.385. Likewise, on testing data, the ODLD-SSCM method has rendered RMSE of 5.36, R2 of 0.592, and MAE of 3.909. Also, on the entire data, the ODLD-SSCM model has provided RMSE of 3.63, R2 of 0.833, and MAE of 2.290.

Figure 4: Result analysis of ODLD-SSCM approach under GNDVI

Fig. 5 provides a brief sugarcane yield mapping performance of the ODLD-SSCM approach on NDRE. On training data, the ODLD-SSCM algorithm has presented RMSE of 1.83, R2 of 0.925, and MAE of 1.455. Similarly, on testing data, the ODLD-SSCM method has offered RMSE of 4.61, R2 of 0.591, and MAE of 3.667. Additionally, on the entire data, the ODLD-SSCM algorithm has rendered RMSE of 3.41, R2 of 0.814, and MAE of 2.068.

Fig. 7 exemplifies a comprehensive sugarcane yield mapping performance of the ODLD-SSCM model on WDRVI. On training data, the ODLD-SSCM technique has provided RMSE of 2.26, R2 of 0.976, and MAE of 1.346. Also, on testing data, the ODLD-SSCM model has offered RMSE of 4.86, R2 of 0.588, and MAE of 3.867. Moreover, on the entire data, the ODLD-SSCM algorithm has rendered RMSE of 3.68, R2 of 0.812, and MAE of 2.280.

Figure 7: Result analysis of ODLD-SSCM approach under WDRVI
Figure 10: MAE analysis of ODLD-SSCM approach (a) Training, (b) Testing, and (c) Entire
Table 1: Result analysis of ODLD-SSCM approach with distinct measures
Table 2: RMSE analysis of ODLD-SSCM approach with existing algorithms under distinct bands
Table 3: R2 analysis of ODLD-SSCM approach with existing algorithms under distinct bands
Table 4: MAE analysis of ODLD-SSCM approach with existing algorithms under distinct bands
4,695.6
2022-12-01T00:00:00.000
[ "Agricultural and Food Sciences", "Computer Science", "Environmental Science" ]
From the Body with the Body: Performing with a Genome-Based Musical Instrument INTRODUCTION: In this paper we present Silico, a new Digital Musical Instrument which ideally represents the performer themselves. This instrument is composed of two parts: an interface (a sensor glove), which relies on the movements of the performer's hand, and a computational engine (a set of patches developed in Max 7), which generates sound events based on the genomic data of the performer. OBJECTIVES: We want to propose a new reflection on the relation between the body and musical instruments. Moreover, we aim to investigate the voluntary and involuntary aspects of our body, intended as a starting point for a musical performance. As a metaphor for these two layers, we use here the hand and the genome of the performer. METHODS: We have investigated our objectives through the whole design process of a Digital Musical Instrument, using a practice-based approach. RESULTS: Our system is a multilayered composed instrument which maps its computational part and its interface onto the performer's body. Silico can be used as a standalone musical instrument to generate music in real time. CONCLUSION: Our work shows a new path for the use of genomic data in a musical way, and a new perspective on human-computer interaction in performative contexts. Introduction The human genome is the genetic material in each cell of a human body. Since its sequencing began in the 90s, musicians and sonic artists have been attracted to using it as a means to generate sound [1,2,3]. Different authors have developed different methods to sonify the genome, but they all tend to rely on direct mapping between some genome data and the sonic output. In this paper we propose a fresh look at using the genome for musical purposes, by presenting Silico, a DMI whose computational engine is based on the genome of the performer, while the interface is based on the hand of the performer. Our proposal is to use the genome as a basis to design a new instrument rather than to map it directly to sound parameters. To do this, we rely on the typical computational engine/interface distinction that characterises Digital Musical Instruments (DMIs) [4]. In particular, we propose to use the genome to define limits in the computational engine of a DMI that can then be played through an interface. The two elements of the instrument constitute a multilayered representation of the dichotomy between the body as what we inherited, which we cannot change, and the body as the tool that allows us to act: the engine represents the genome, the inherited material that constitutes the body, and the interface is controlled by a hand, the part of the body we use to manipulate tools. Silico contributes to the electronic music debate by presenting a new way of using the genome for musical expression. Moreover, along with this reflection, it offers a new perspective on the role of the body in music performance. The rest of this paper is organised as follows: in section 2. we present related works in the field of DMI design, genome sonification, and the use of the body for musical expression; in section 3. we describe the design of Silico, pinpointing the distinction between the computing engine, with its relation to the genome, and the interface, based on hand ergonomics; we then discuss our work against the literature described in section 2., and conclude by outlining possible future directions. 
Background In this section we present 1) literature related to the design and compositional strategies for DMIs, 2) related works on body-inspired design of musical interfaces, and 3) the genome structure and how it has been used for sonification purposes. Designing digital instruments DMIs are normally composed of two main distinct elements: a computing part that comprises sound synthesis and automations algorithms and an interface that usually comprises hardware and software technology. As opposed to an acoustic instrument, where the gestural interface is also part of the sound production unit, in a DMI the interface is usually completely separated [5]. With DMIs, a musician has the possibility to design arbitrary mapping between his/her gesture and the music parameters [6]. Magnusson introduced the concept of ergomimesis to describe how new digital instruments emerge from a process of transport (transduction) from one domain to another [7]. If we think of an acoustic instrument, in fact, we can establish a direct relationship between gesture and produced sound, or combine a certain type of action with a sound result consistent with it (for example, the correlation between picking a string and the sound produced by itself). However, considering the arbitrariness of its mapping, in a DMI it is not possible to draw such direct relationships. Wanderley also recognised that mapping is a crucial issue in a DMI, as the relationship between gesture and sound can be completely arbitrary according to the idiosyncratic needs of any specific situation [5,8]. Therefore, the intrinsic characteristic of DMIs offers the possibility to compose by inventing the interaction strategies. Many works [9,10,11] have investigated the inherent meaning of the digital instrument, from which we have extrapolated some main ideas which we will briefly illustrate here. In 2002, Schnell and Battier introduced the concept of composed instrument [9]. A composed instrument is a tool that shares features of an instrument (it can be played), a machine (it is composed of algorithms), and of notation (it represents the function of a score). Similarly, Cook claims that the music created by the new instruments is strongly influenced by the choices made during the design process, which in turn are influenced by various artistic, human and technological factors [10]. Building upon these characteristics, Magnusson proposed the idea that DMIs are epistemic tools, as they represent both the function of an instrument and carry a notion of how the music can be thought, composed, and performed. In fact, we can actually say that the process of design and mapping strategies has to be considered as part of the compositional process [11]. DMIs as design objects have also been analysed borrowing tool from HCI [12]. In this sense the concept of affordances and constraints have been widely explored [13,14,15,16]. Particularly relevant is the proposal by Magnusson, who, starting from a phrase by Boden ("Far from being the antithesis of creativity, constraints on thinking are what make it possible" [17]), defines expressive limitations that face the thinking, creative, performing human as "subjective constraints" [15]. Body and sound Research on the relation between the human body and music has a long tradition [18], in relation to the growing of DMIs studies. 
Studies on performative gestures contributed to the investigation about the relation between body and movement, defining the gesture itself as an element that contributes to the formation of the musical meaning as well as the auditory features [5,19,20], and contributes to increase the strength of the performance itself in the audience's perception [21]. With this in mind, Iazzetta, for example, defines the body as "instrument through which the gesture becomes actual" [22]. Moreover, the body has taken on an important role in the study and design of DMIs, also thanks to the proliferation of low-cost sensors that have favoured their spread [5,4]. As an example, we cite here the works by Tanaka who used electromyography and muscle sensors to control electronic musical instruments [23], and extended the gestural boundaries through the use of sensor-based instruments [24]. The performances by Donnarumma "Music for Flesh I & II", offer another important contribution to the development of the debate about music and body [25,26]. In these performances the Author relied on wearable hardware sensors device for capturing biological body sounds called the "Xth Sense". From an holistic perspective, we mention as an example the works of Donato et al., which have extended gesture control systems also in the manipulation of light projection [27], From the Body with the Body: Performing with a Genome-Based Musical Instrument and the Cyber Composer by Ip et al., a system which generates music according to the hand gestures of the user [28]. Particularly relevant for the research about gestures and music are those instruments that relied on gloved hands. The hand has always received special attention thanks to its intrinsic characteristics not only in the field of research but also in pop culture (the famous Cyberglove, for example [29]). It is not difficult to come across gloves equipped with various types of sensors, from the most common flex sensors, accelerometers and gyroscopes to more elaborate position recognition systems [30]. Already in 1984 at STEIM (Studio for Electro-Instrumental Music), Waisvisz had developed and presented his instrument, "The Hands", equipped with ultrasonic sensors, buttons, switches, and accelerometers [31]. A more recent example is presented by Laetitia Sonami, who subsequently developed a complex control system with strong gestural features [32], comparable to the modern MiMuGlove. Genome and Sonification The genome is the genetic material in each cell of an organism. It is encoded into the deoxyribonucleic acid (DNA), a molecule composed of two chains that coil around each other, that carries the genetic instructions for the development of our body. The human genome is present in every single cell of a human being, and is composed of over 3 billion nitrogen bases (Adenine (A), Cytosine (C), Guanine (G) and Thymine (T)) divided over 22 pairs of autosomes and 2 sex chromosomes. Regions of this DNA contain sequences which, when read by the cell's molecular machinery, are capable of producing proteins ("coding regions"). The nitrogen bases in these regions are read into triplets called codons, each of which identifies a specific amino acid of the future protein (see [33,34] for an extensive description of the genome). Researchers in genetics and bioinformatics have developed various methods to analyse and organise human genome resources for identifying features of DNA sequence [35,36]. 
Moreover, since the launch of the Human Genome Project [34], a vast amount of data about the human genome has been made available on the internet, attracting attention from various fields [37]. Many experiments with genome and music have been made, particulary using sonification. Sonification is a non-speech audio rendering process used to convey information about data and/or interactions [38,39]. Sonification is primarily applied in the area of Auditory Display [40], in this context three main techniques emerged: 1) Audification, that is direct playback of data samples, 2) Parameter Mapping Sonification, or PMSon, that associates multidimensional/multivariate information with auditory parameters, and 3) Model-Based Sonification, or MBS, the creation of processes that support interaction, by involving the data in a systematic way [41]. Gena and colleagues pioneered the sonification of DNA in the mid-90s, with a system that converts the genome information to MIDI events [42,1]. Other examples are represented by Won, who sonified the chromosome 21 [3], and by Temple, who developed six algorithms to sonify the human DNA, comparing them according to which one is more informative [2]. Moreover, Grond et al. developed an interactive sonification technique to explore ribonucleic acid using a combined auditory and visual interface [43]. The commonality among these scholars is the exploration of a single specific element of the genome. The aforementioned works mainly focused on sonification algorithms, but the idea to use a human genome to define the algorithm of a musical instrument is currently overlooked. Silico Silico aims at representing the body both at the computational level and at the interface level. The genome of the performer is used to shape the sound synthesis algorithms, while the physical body (a hand) is used for the controller. Not only are these two levels shaped on the actual body of the performer, but also they represent two different constituents of the body itself: the genome, an inherited element that is intrinsically unchangeable by the individual, and the hand, that embodies the human capability to manipulate objects and operate tools. As we have seen in 2.1 a DMI is constituted of two main elements, a computational engine (that combines a number of musical algorithms), and a controller interface. Neither of the two, if analysed in isolation, represents the DMI. In Silico, we designed both of these parts based on constituents of the body of its main creator and performer (first author of this manuscript). The computing element is a real-time algorithm that pilots four sound synthesis engines. The genome of the performer is used to model the range of each sound engine. As compared to other traditional sonification system, Silico maps genomic data at a meta layer, determining the performative possibilities and not directly the sound. The control interface is a glove that relies upon the ergonomic ability of the performer. Silico is composed of two main elements: 1) Computational engine that mainly relies on genomic data to define sound synthesis constraints 2) and an Interface composed of an augmented glove. The actual music generated by Silico is the results of the combination of the genomic instrument constraints and the performative parameters. The values range of the sound engines is determined by the weight of each group of amino acids and multiplied from time to time by the current state of the corresponding sensor on the control interface. 
In this way, we create a double level of control: an involuntary/structural one that determines the extension of our instrument (the genome), and a voluntary/formal one that moves within these extensions (the interface). We summarized the whole structure of Silico in fig. 1. The resulting instrument is a multilayer non-deterministic representation of the body of its performer. The engine represents the genome, that is not controllable, while the interface is a representation of the actions, that can be controlled. Every sound event is thus a combination of both the uncontrollable element of the body (the sonification of the genome), and the controllable element (the movement of the hand). As we focus primarily on the relations between the performance and the machine, we try to keep the audio system as simple as possible. Computational engine Genome data mining In section 2.3, we described how many authors have relied on single elements of the genome in their sonification work [42,3,43]. In this study we followed a similar approach, trying to find a greater data specificity to be used as a starting point. The data mining step focused on the four groups of ordinary amino acids: apolar, polar, basic, acid [33]. The 20 ordinary amino acids, in fact, can be thus grouped according to their biochemical behaviour and their effect in the final polypeptide chain. Below is a simplified subdivision of codons into groups: Since about 99% of the human genome is common to all individuals, we decided to investigate the remainder, which represents personal differences and defines the uniqueness of each one of us. Our attention has therefore shifted to the so-called mutations, which are the alterations of the nucleotide sequence compared to a physiological standard. A reference genome, representative of the whole human species [34], is employed in genomics as a standard to analyse the differences (variants) between different genetic heritages and individuals. The most recent version (GRCh38), released in 2013 by the Genome Reference Consortium, has been used in this paper. In our design we proceeded to the sequencing and analysis of a personal genome, obtaining a collection of DNA mutations. After the sequencing process with Illumina SBS Technology [44], the data (reads) were aligned by using BWA [45] and base variations were annotated in an Annovar text file [46]. In a file of this type, each string is formed by a numerical value, which represents the position of the variant in the reference protein, and two letters placed before/after that number, which respectively represent the codon of the standard genome and the mutated codon. For example, the string "p.L815T" means that in position 815 a codon L (Leucine) has been replaced by a codon T (Threonine). The length of the file obviously depends on the number of substitutions present in the analysed genome, and it generally ranges from a few dozen to a few hundred. The Annovar file is then imported into a simple list operators and counters system, developed in Max, ver. 7.1.0. The counter system which first calculates the number of codons actually replaced, and secondly groups them by type, calculating their incidence (weight) in relation to the variants type. The values thus obtained are normalised 0. -1., assigning by default the value 1. to the greater and accordingly rescaling the others. Finally, the patch automatically generates five text files in the main folder of the application, labelled "Weights", "Apolar", "Polar", "Basic" and "Acid". 
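As a rough illustration of the data-mining step just described, the sketch below parses Annovar-style variant strings such as "p.L815T", tallies the replaced residues by biochemical group and normalises the group weights so that the largest equals 1 (the five output files that these values would feed are described next). The amino-acid grouping shown is a common textbook subdivision used here only as an assumption, and classifying each variant by its mutated residue is likewise our own simplification of the Max patch described in the text.

import re
from collections import Counter

# Assumed grouping of the 20 ordinary amino acids (one-letter codes); illustrative only.
GROUPS = {
    "Apolar": set("GAVLIPFMW"),
    "Polar":  set("STCYNQ"),
    "Basic":  set("KRH"),
    "Acid":   set("DE"),
}

VARIANT = re.compile(r"p\.([A-Z])(\d+)([A-Z])")   # e.g. "p.L815T"

def group_weights(variant_strings):
    counts = Counter()
    per_group_residues = {g: Counter() for g in GROUPS}
    for s in variant_strings:
        m = VARIANT.search(s)
        if not m:
            continue
        _ref, _pos, alt = m.groups()
        for name, members in GROUPS.items():
            if alt in members:                     # classify by the mutated residue (assumption)
                counts[name] += 1
                per_group_residues[name][alt] += 1
    top = max(counts.values()) if counts else 1
    weights = {g: counts.get(g, 0) / top for g in GROUPS}   # normalised 0.-1., largest -> 1.
    return weights, per_group_residues

weights, residues = group_weights(["p.L815T", "p.G12D", "p.R273H"])
print(weights)   # would feed the "Weights" file; residues would feed the per-group files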
"Weights" contains the incidence of each group of amino acids, while "Apolar", "Polar", "Basic" and "Acid" contain the number of individual codons belonging to the reference group. Sound synthesis engines Silico's audio engine is made up of four synthesis engines, one for each group of amino acids. The choice of certain types of audio synthesis was aesthetically determined by the characteristics of each group of amino acids. The motivation behind this choice is not grounded in the semantic content of the sound rather on its acoustic features. Indeed, the main purpose that motivated the choice of the four sounds was to obtain sound materials that were simple enough to be clearly differentiated, and at the same time sufficiently complex to be somehow musical. Moreover, the sound should have a sufficient number of parameters, whose range could be mapped with the difference between the specific genome of the performer and the reference genome. Therefore, it follows that four types of sounds and four types of synthesis coexist in our system, each representing one specific group of amino acids. Codon management is also differentiated: each group of amino acids, as said already, contains a different number of codons, which will determine a different number of choices within the synthesis engine itself: • The Apolar group is represented by an engine which operates multiband subtractive synthesis on concrete samples of voices and applies timed delay on the signal. • The Polar engine generates clouds of grains from concrete samples of noises. • The Basic group operates additive synthesis with selectable waveform using 5 harmonic oscillators. • Finally, the Acid group is represented by a simple/complex frequency modulation (FM) synth. Each engine is capable of holding up to 20 voices in polyphony. At every trigger received, the system probabilistically selects a different amino acid by reading the pre-loaded correspondent files. In the Apolar and Polar cases, different codons are matched to different 2seconds samples, stored into the main folder, to be processed. In the Basic and Acid cases, different codons determine respectively the oscillator's waveform and the FM complexity. These matching are summarised in table 1. From a poetic point of view, a system of this type allows us not only to be able to use our instrument to build performance in real time, but also to determine, through our inherent characteristics, the number and types of instruments of our small synthetic orchestra. Trigger automations The choice of each sound event is entirely determined by a probabilistic system that refers to the files derived from genomic analysis. At each event trigger, the system reads the probabilities from the "Weights" file and chooses the respective synthesis engine to be activated. A message is then generated to be sent to the corresponding polyphonic synth, which contains a series of values in a specific order, depending on the type of synthesis corresponding. To start a performance, we just need to specify a global triggering speed and activate the "Karl" toggle. Then the system starts playing immediately thanks to Karl, a very simple automatic triggering subpatch. Moreover, as it is basically generative, Silico is able to play itself without the need for external control by randomly setting the parameter values, in the absence of a significant variation of the input signal (> 0.05) for more than 1 minute. 
It is therefore possible to disconnect the interface at any time and let the system ring, or reconnect it and start interfacing with it again. Additional sound treatments Finally, we use gigaverb~ for adding some reverb by default and send every synthesis engine to a different speaker in a quadraphonic environment. Therefore, it is possible to configure the system to automatically switch to a stereo environment with three different presets: 1) Stable-Unstable configuration: grouping Polar and Acid engines on the left (L) channel and Apolar and Basic engines on the right (R) channel; 2) Mixed configuration: grouping Apolar and Polar engines on the L channel and Basic and Acid engines on the R channel; 3) Dual-Mono configuration: all engines to L-R. Cyberglove We proceeded to breadboard a prototype of our glove interface. We used five flex sensor 2.2", five 10kΩ resistors, electric cable, heat shrink tube and an Arduino Nano equipped with ATmega328P processor, which is powered by USB cable. While the electric circuit is actually very simple and does not require further explanation, we need to discuss the code a little more in depth. In fact, we notice that the resistance value at 0 and 90 degrees was not always the same. The flex sensors, notoriously, are not very precise, so we first needed to fix this problem. After a few tries, we used a constant voltage value of 4.98V and limited the resistance values between 37.3kΩ at 0 degrees and 90kΩ at 90 degrees. These resistance values are obtained from an empiric calculation: we recorded a collection of 50 full-flat and full-flex values, and operated an approximated mean between the higher 5 values in flat position and the 5 lower value in flex position. In this way we ensure that we From the Body with the Body: Performing with a Genome-Based Musical Instrument can always track and approximate precisely at least the limit positions. What is in the middle does not need to be so precise: the use of this kind of controllers introduces by force a certain degree of inaccuracy, so we operate a further mean every 10 values in order to avoid glitches and smoothen the movements. After these calculations, the final value is rescaled between 0. and 1., appended to an index representing the correspondent finger and sent out at every cycle (10ms each). After the circuit was made, we sewed some little pieces of cloth in the upper part of the glove fingers, where the sensors need to be lodged, and fixed the board on a velcro strap. Notice that we used a left-handed glove because the guitar-background of the first author guaranteed him greater finger independence on the left hand. The complete prototype is shown in fig. 2. Mapping In the first phase of this work, we started by isolating interesting parameters for each synth. Apart from dynamic (volume) and density of events, which can be considered as high-level controls (we refer to them as "global parameters"), every kind of digital synthesis has by its nature a different number and a different type of parameters, which also depend on its complexity. Despite the overall simplicity and neatness of each synthesis engine, we figured out that we still had to deal with a large number of parameters. So, in order to make the system easily controllable, we made sure that not only every synth had the same number of parameters, but also that these parameters were comparable to the same domain categories. 
We therefore chose three domains, in which we grouped the characteristic parameters of each engine (we refer to them as "specific parameters"). A comprehensive summary of all the five domains and the corresponding parameters for each synthesis engine is shown in table 2. We used a one-to-one mapping strategy [43] in order to enhance the clearness of use, and assigned each domain to a different finger, considering the ergonomics of the glove. The objective of this process was to grant a sharper control over the parameters requiring more precise manipulation. After a few trials, we identified the following mapping as the best solution: • 1° finger: dynamic; • 2° finger: frequency; • 3° finger: time; • 4° finger: spectrum; • 5° finger: density. Moreover, in this mapping the global parameters (dynamic and density) are controlled by the external fingers, allowing a more precise control over the other three domains. Graphical User Interface In this work, we can consider the Graphical User Interface (GUI) as a simple visual feedback tool to help the performer interpret what is going on during the performance. Every synth engine is visually represented by a big coloured button, which blinks whenever a trigger is received. An approximate percentage, calculated by the system after loading the "Weight" file, is also shown over these buttons and indicates the probability for every synth to be activated. Finally, we inserted five sliders representing the realtime status of each sensor on the glove controller, a clock to control the overall duration of the performance and two pop-up windows for loading boot files and configure the system ( fig. 3). Practical use We present a practical use of the system in a solo performance context. This section is also accompanied by example audio files, which are freely available at this link: https://soundcloud.com/silico_1_0 The genome we used (the genome of the first author of this paper) contains a fairly even distribution of variants for each group of amino acids, respectively 20%, 25%, 29%, 24% (the percentages are approximate to the whole). The first four audio files in the folder (Apolar, Polar, Basic, Acid) are a representation of the range of each synthesis engine individually. We obtain this by muting the other three engines, for a demonstration purpose. This is not a real performative possibility, as the choice of each engine is assigned by Karl. Each of the four files explores the full extension range of the specific parameters (in the frequency, time and spectrum domains), for each of the four synthesis engines. As described in the Mapping section, these parameters are manipulated with the glove, in the following order: frequency min-max, spectrum min-max, time min-max. In order to make them clearer, we have chosen only one amino acid per group, and the files have been converted to mono. We also highlighted the different cues with comments directly on the SoundCloud page. The fifth file (Silico_1) contains an example of performance lasting about 7 minutes. This performance unfolds following an extremely simple and linear development, which reaches a climax around the two thirds of the execution itself. The sixth file (Silico_alt) contains a performance version created to highlight the different expressive possibilities and clarify the constraints set by the genome. We then manually entered different boot files into the system, "inventing" an example-genome with more extreme features. 
In this case, we made sure that two groups of amino acids clearly prevailed over the others, assigning a percentage of 40% to the Polar and Basic groups and a percentage of 10% to the Apolar and Acid groups. A similar degree of imbalance has also been maintained within each group by changing the variant content on the individual codon. To allow a direct comparison, the hand gesture performance used in the previous file (Silico_1) was recorded and reused to generate this file. The hand performance is identical in both files, the only element that changes is the distribution of genomic data, and is therefore possible to appreciate how much different genomes determine the overall result. Discussion Every cell in our body is built from the information stored in our genome, thus being our genetic material the quintessential determinant of many bodily characteristics and specifications. This notwithstanding, growth and development of ourselves is strongly impacted and, to a certain extent, controlled by external environmental features and our will. Our existence is continuously confronted by both the inner specifics of our "physical instrument", dictated by the genome, and the external determinants trying to morph it. The unprecedented challenge we tried to face in this work is to constitute a new metaphoric representation of ourselves through different layers of a DMI. In Silico the genome is not simply "performed" or sonified: similarly to what happens in living bodies, it is used to constitute the building blocks of the instrument. Our genetic alterations are the determinants of the different instrument features, being the genome rather a pure expressive tool than the object of the musical performance. The final composed instrument that the musician can use in a performance is based on the genomic data, created from the body, and therefore not modifiable or adaptable to each performance. This represents the constraints which the performer must deal with; this is also a metaphor of human existence, where a human being can not modify his/her own body. He/she can hence express degrees of freedom by operating the DMI with what, by definition, expresses the human evolution and the willful action: the hand. Through manipulation the artist can subsequently determine the composition and partially control what happens on stage. Discussing Silico against existing literature on genome sonification, we introduce the idea of a meta layer between the genomic data and the sound. Indeed, as we have seen in section 2.3, [42,3,43] have created direct mapping between data and sound, while we developed a different approach, using the genome to design a new instrument. We argue that this approach was fundamental to obtain the musical expressivity discussed in the previous paragraph. Our approach also presents a new metaphor to use the body of a performer in an electronic music performance. As described in section 2.2, [23,24,25,26,31] successfully used the actual body as part of the performance, and the element that mainly leads the design of the musical interface. Our approach can complete these strategies, by offering the possibility to use the genome, From the Body with the Body: Performing with a Genome-Based Musical Instrument the inherent and hidden element of the body, as part of the musical interfaces. 
Conclusion and future work In this work we presented a new DMI in which we wanted to renegotiate the hierarchy between the user and the machine and the relationship of mutual responsibility in creating a musical performance. The use of the genome to determine the constraints of the system has allowed us to delineate a new control layer which is fixed and dependent on the user himself, which we basically define as "structural". Controlling through the voluntary gestures of the tangible body an algorithmic system whose properties / possibilities are closely linked to the nontangible characteristics of the body itself, represents on the poetic level the same patterns of everyday life, that is, acting consciously with and through a predetermined body, according to its inner properties. This perspective introduces interesting developments to be addressed in the future, which we can ideally divide into two parallel lines of investigation: the user's point of view and the performance implications. In the first case, we first want to implement additional control systems using the vast amount of gestures that the body makes available to us, through the use of more complex sensors such as the muscle sensors we already mentioned in section 2.2. Furthermore, it is possible to consider the inclusion of a non-voluntary control level in real time, which can be effectively provided through the use of new generation EEG sensors. The computational engine can be further developed, both in terms of general settings, including in the first instance the possibility of choosing between various types of synthesis, and in terms of their complexity, in order to make Silico even more customisable and immersive. The data mining process on genomic data will also be further enhanced through the addition of other biological layers, such as frameshifts, premature stop gain of a protein or the disruptive event represented by chromosomal rearrangements. From the performative point of view, we want to introduce a visual support for the audience, in order to clarify the interpretation of the gesture by showing the system status in real time, and to enhance the global experience. The aforementioned tasks will ideally be conducted through case studies and specific tests in the presence of a selected audience, whose impressions will be collected through questionnaires. Similarly, we intend to proceed by inserting the system into instrumental contexts, from soloists to small ensembles, in order to evaluate the collaborative potential. Finally, a final evaluation will take place by testing the instrument and the users of different musical extraction, inserting different genomic data from time to time, to determine and evaluate the actual possibilities of interaction that arise from the differences, voluntary and involuntary, between different users.
7,565.6
2020-04-20T00:00:00.000
[ "Art", "Biology", "Computer Science" ]
Validation of an Assistance System for Merging Maneuvers in Highways in Real Driving Conditions In the latest study conducted by the National Highway Traffic Safety Administration in 2018, it was published that human error is still considered the major factor in traffic accidents, 94 %, compared with other causes such as vehicles, environment and unknown critical reasons. Some driving scenarios are especially complex, such as highways merging lanes, where the driver obtains information from the environment while making decisions on how to proceed to perform the maneuver smoothly and safely. Ignorance of the intentions of the drivers around him leads to risky situations between them caused by misunderstandings or erroneous assumptions or perceptions. For this reason, Advanced Driver Assistance Systems could provide information to obtain safer maneuvers in these critical environments. In previous works, the behavior of the driver by means of a visual tracking system while merging in a highway was studied, observing a cognitive load in those instants due to the high attentional load that the maneuver requires. For this reason, a driver assistance system for merging situations is proposed. This system uses V2V communications technology and suggests to the driver how to modify his speed in order to perform the merging manoeuver in a safe way considering the available gap and the relative speeds between vehicles. The paper presents the results of the validation of this system for assisting in the merging maneuver. For this purpose, the interface previously designed and validated in terms of usability, has been integrated into an application for a mobile device, located inside the vehicle and tests has been carried out in real driving conditions. Introduction Reducing the number of traffic accidents is an issue that has been a social concern for quite some time.Today, thanks to the new technologies implemented in the automotive sector, it has been possible to alleviate the number of fatal accidents on the roads, but traffic accidents are still considered one of the leading causes of death worldwide according to recent studies by the World Health Organization [1]. According to the latest study carried out by the National Highway Traffic Safety Administration in 2018 [2], 94 % of serious accidents are due to human errors related to the decision, such as performing illegal maneuvers, driving too fast, overconfidence or misjudgment of another car. The increasing development of Advanced Driver Assistance Systems (ADAS) technologies helps to improve these situations because they assist the driver in making decisions in risky situations and suggest actions that favor not only the safety of the driver himself but also that of those around him. However, some complex scenarios such as highway merging, which is the subject of this study, involve a high mental load for the user due to the large amount of information that must be processed while performing the merging maneuver.Timely decision making is crucial in this context since if it takes too long to enter the main road, the driver will reach the end of the acceleration lane without speed, assuming a risk of entering at a low speed on the highway.That is why in previous studies [3,4] a driver assistance system was proposed for the merging maneuver on highways.The following study is a continuation of the same one, in which the system design and its validation in real driving are presented. 
State-of-the-art ADAS offers great potential for further improving road safety, in particular by reducing driver error.Examples include adaptive cruise control (ACC), which permits maintaining a constant speed in accordance with road conditions and keeping a predetermined safety distance, pedestrian protection systems (PPS), capable of alerting the driver and acting autonomously to reduce the hit speed, and blind-spot detection (BSD), which indicates the existence of another vehicle or object in a blind spot in the rear detection area. However, there are still many complex scenarios where human error is present due to the high amount of information that the driver has to process while making the right decision for the maneuver.Merging situation is one of the most critical scenarios that occur on the road because to perform a safe maneuver, the driver depends not only on the variables of his vehicle and the environment but also on the relative speed and position of adjacent vehicles. Several authors have dedicated their studies to this situation, such as [5] which studied accidents in situations of lane change and merging, [6] which analyzed the time needed to perform a merging maneuver between young and elderly drivers or [7], which performed a realistic multi-driver merging simulation, where several driving simulators were connected to each other in order to have a more naturalistic behavior.This last study compares an ADAS cooperative system for merging situations in two conditions, single-driver simulation and multi-driver simulation, in which all drivers are warned of the maneuver that the ego vehicle is going to perform.However, most of the studies carried out are in the field of simulation and very few in real driving, due to the cost and risk involved in performing tests, especially if there is dense traffic.In the study carried out by [8], real driving tests were carried out with 10 subjects, analyzing the effect of traffic density on the state of the driver's eye.From the results, he supported the need for a driving assistant that could suggest to the driver to accelerate or decelerate the vehicle depending on the gap necessary for merging.A study that serves as a precedent to ours is [9], which developed a merging system on highways, which provided a visual warning on a Google map on a smartphone, verified in real driving.Unlike ours, this article had three vehicles connected, indicating the need to accelerate, brake or enter the gap by means of three sentences according to the calculations of the algorithm until the end of the lane, in addition to being a more complex algorithm than ours. This paper validates an application of merging assistance in real driving, based on cooperative systems, which have already been used in previous studies applied to the merging maneuver as for example [10] and [11].Thanks to this technology the vehicles share internal variables of position and speed, proposing a more affable and safe environment. 
Previous work
In previous studies [3,4] the influence of the merging situation on the cognitive behavior of the driver, depending on traffic density, was analyzed. The tests, carried out with several subjects in real driving using an eye-tracking system, confirmed that pupil diameter is a sensitive indicator of this type of situation. The fixations were also analyzed; their duration was affected during the maneuver due to the amount of information to be processed in a single glance at the rear-view mirror. The frequency of mirror glances also increased by 30 % with respect to the baseline in normal driving. In [12] the fixations were also analyzed by means of heat maps, noting that there was a common hot zone in both rear-view mirrors when the merging maneuver was performed in most driving tests. This area, located in the upper-inner part of the rear-view mirror, is considered adequate for the placement of the assistance system developed later.

System development
The proposed assistance system uses cooperative systems (C-ITS) based on vehicle-to-vehicle (V2V) communication, where vehicles share information on speed and position. This technology, used in several experiments such as [13,14], makes the environment safer and less hostile to adjacent vehicles, thanks to knowledge of the internal variables of the vehicles. The speed and position variables are the inputs of the mobile application that supports the assistance system. For the system interface, a simple bar design has been chosen (Fig. 1), based on previous ADAS development studies [15] in which an intelligent speed adaptation system was developed. The bars show, in qualitative terms, how much to brake or accelerate once the vehicle starts to merge onto the main road, guiding it to acquire an optimal speed. The main premises of the algorithm are: 1) the safety margin between vehicles must not be less than two seconds; 2) the maximum speed of the road in the acceleration lane must not be exceeded in any case; 3) the assumed acceleration and deceleration limits are 2 and 4 m/s² respectively [16,17]. The safety margin is defined in terms of time because a higher speed requires a longer braking distance. Numerous studies report that a driver reacts, in the worst case, with a reaction time of 1.5 s to a surprise event, such as an object that moves suddenly into the driver's path [18]. This is why, conservatively, two seconds is chosen as the safety margin; being time-based, this variable can be applied to any scenario because it depends on both speed and space. The algorithm contains two main conditionals: the vehicle merges in front of the vehicle that is already on the main road, if safety conditions permit, or it merges behind that vehicle, either because merging in front would exceed the maximum speed of the road or because the required acceleration is excessive. The algorithm used, which is based on the equations of motion, is more intuitive and simple than the one presented in [19], where a decentralized algorithm for a highway merging system was developed that only indicated the need to accelerate, brake or enter. The code, written generically with the variables, is given below, together with a flowchart (Fig. 2).
System validation in real driving
Three merging maneuvers were carried out along the M-45 highway in Madrid, Spain. Two On-Board Unit (OBU) G5 communication modules, one embedded in each of the vehicles, send information to the application through the wireless network, which generates only visual warnings based on the speed information and the positioning collected by the Global Navigation Satellite System (GNSS) on a digital map (Fig. 3). The GNSS is an integrated GLONASS + GPS + GALILEO + SBAS satellite navigation receiver, which samples the position and speed values of each vehicle every 200 ms; these values are the inputs to the control algorithm of the merging assistance interface (Fig. 4).

Fig. 4. Communications module and GNSS

The communication modules are each connected to two external antennas located on the roof of the vehicle, with 2.4 and 5.0 GHz bandwidth (Fig. 5). The GNSS receivers are also fixed to the vehicle roof in order to obtain the best signal possible. The driver performs the three merging maneuvers supported by the visual warnings provided by the assistance system. The cognitive load of the task is studied by examining the pupil diameter and the fixations by means of an ocular tracking system.

Results
Considering the environmental difficulties involved in performing a real driving test, the results obtained in the merging maneuvers have been satisfactory. The position, velocity and acceleration values for each maneuver were analyzed, as well as the levels shown in the application. A total of 13 subjects between the ages of 25 and 45, instrumented with an eye-tracking system, performed the circuit. The fixations during a merging maneuver can be seen in the heat map below (Fig. 6). As can be seen, the driver looks mainly at the road and the merging assistance application, as well as at the vehicle and control panels. Also shown in the following graphs, as an example, are the internal data with which the application works and the warning levels it generates: a positive level indicates the need to accelerate and a negative level the need to brake. In the graphs, the velocities of each vehicle are shown on the left and the distances to the lane end on the right. The level is a dimensionless measure that indicates, in its positive part, the need to accelerate and, in its negative part, the need to brake. As can be seen in Fig. 7, vehicle 1 is warned of the need to accelerate because, although it is closer to the lane end than vehicle 2, it initially travels at a lower speed than vehicle 2, and the two vehicles would otherwise probably coincide at the lane end. In this case vehicle 1 passes in front of vehicle 2 in the merging. In Fig. 8, the merging maneuver is very similar to the maneuver in Fig. 7; the application again warns of the need to accelerate at first, as there is time to pass in front of vehicle 2. There is a momentary downward (negative) peak at a particular instant because the speeds of the vehicles are equalized and vehicle 1 must accelerate if it wants to maintain its position with respect to vehicle 2. In Fig. 9, the warnings to accelerate appear at first, but due to acceleration by vehicle 2 the application then suggests passing behind the vehicle. In Figs. 7-9 the curves plotted are the level, v1, v2, d1 and d2 against time (s) for each merging maneuver.
In Tab. 1, the times and distances at which the application gives negative warnings, that is, warnings to brake and therefore to pass behind the vehicle already on the main road, have been summarized. These values are considered important because, if the driver does not react quickly, a situation could arise in which there is not enough time to brake safely. As can be seen from the results, there are no sharp level peaks; rather, the application suggests starting to brake gradually from level 20. Time and distance have values of similar range in all merging maneuvers, which indicates that the system would not show warnings to accelerate when the driver is in a critical situation near the lane end.

CONCLUSION
In this paper, a merging assistance system based on V2V communications has been developed with the aim of making the maneuver safer for the driver. In view of the results obtained, it can be concluded that the application performed well in the real driving tests. In the heat maps, it has been observed that the system is the second point with the most fixations when the maneuver is carried out, behind the fixations on the driver's own lane. This result is very coherent, given that for the driver the final point of the lane is the most important, and he has to arrive at this point with sufficient foresight to be able to merge. On the other hand, from the data shown in the graphs above and the operation of the application, the merging assistance system is validated in real driving conditions. It has been proven that the system in no case would suggest that the driver accelerate near the lane end, which makes it a conservative and reliable system in terms of safety.

Fig. 1. Bar interface design

The generic code, written with the variables to improve understanding:

    if a1 < amax & vf1 <= vmax        # acceleration loop
        if a1 > 0
            level = round(a1/amax)
    else                              # brake loop
        if t1 > t2 + T
            if v1 < v2
                level = 2
            else
                level = round(dec1/decmax)

where:
    d1 = distance from vehicle 1 (merging) to the lane end point
    d2 = distance of vehicle 2 (on the main road) to the end of the lane
    t2 = time it will take for vehicle 2 to reach the end of the lane
    tfront = time limit for vehicle 1 to pass in front
    tbehind = time limit for vehicle 1 to pass behind
    T = safety time, set to 2 s
    v1, v2 = velocities of both vehicles
    level = amount of acceleration or braking
    vmax = maximum track speed in m/s
    amax = maximum acceleration, set to 2 m/s²
    decmax = maximum deceleration, set to 4 m/s²
    a1 = instantaneous acceleration of vehicle 1
    dec1 = instantaneous deceleration of vehicle 1
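For illustration, the sketch below turns the generic code above into runnable Python under simple constant-acceleration assumptions. The way a1, dec1 and the arrival times are derived from the motion equations is our own guess at one plausible reading of the algorithm (the paper only lists the variables), and the level scale of 0-100 with positive meaning accelerate and negative meaning brake is likewise an assumption based on the bar interface description.

import math

# constants stated in the paper: 2 s safety margin, 2 m/s^2 accel limit, 4 m/s^2 decel limit
T, AMAX, DECMAX = 2.0, 2.0, 4.0

def advice_level(d1, v1, d2, v2, vmax, scale=100):
    """Return a qualitative level: positive = accelerate, negative = brake.

    d1, v1: distance to the lane end and speed of the merging vehicle (vehicle 1)
    d2, v2: distance to the lane end and speed of the main-road vehicle (vehicle 2)
    The kinematic reconstruction below is an assumption, not the authors' exact code.
    """
    t2 = d2 / max(v2, 0.1)                 # time for vehicle 2 to reach the lane end
    t_front = t2 - T                       # vehicle 1 should arrive at least T s earlier
    if t_front > 0:
        a1 = 2.0 * (d1 - v1 * t_front) / t_front ** 2   # constant-acceleration solution
        vf1 = v1 + a1 * t_front
        if a1 < AMAX and vf1 <= vmax:
            return round(scale * max(a1, 0.0) / AMAX)    # accelerate and pass in front
    # otherwise pass behind: slow down so as to arrive at least T s after vehicle 2
    t_behind = t2 + T
    dec1 = 2.0 * (v1 * t_behind - d1) / t_behind ** 2
    return -round(scale * min(max(dec1, 0.0), DECMAX) / DECMAX)

# toy usage: merging vehicle 150 m from the lane end at 20 m/s, main-road vehicle 180 m away at 30 m/s
print(advice_level(d1=150, v1=20, d2=180, v2=30, vmax=33))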
3,589.6
2019-12-05T00:00:00.000
[ "Computer Science", "Engineering" ]
High-throughput transformation of Saccharomyces cerevisiae using liquid handling robots Saccharomyces cerevisiae (budding yeast) is a powerful eukaryotic model organism ideally suited to high-throughput genetic analyses, which time and again has yielded insights that further our understanding of cell biology processes conserved in humans. Lithium Acetate (LiAc) transformation of yeast with DNA for the purposes of exogenous protein expression (e.g., plasmids) or genome mutation (e.g., gene mutation, deletion, epitope tagging) is a useful and long established method. However, a reliable and optimized high throughput transformation protocol that runs almost no risk of human error has not been described in the literature. Here, we describe such a method that is broadly transferable to most liquid handling high-throughput robotic platforms, which are now commonplace in academic and industry settings. Using our optimized method, we are able to comfortably transform approximately 1200 individual strains per day, allowing complete transformation of typical genomic yeast libraries within 6 days. In addition, use of our protocol for gene knockout purposes also provides a potentially quicker, easier and more cost-effective approach to generating collections of double mutants than the popular and elegant synthetic genetic array methodology. In summary, our methodology will be of significant use to anyone interested in high throughput molecular and/or genetic analysis of yeast. Introduction Saccharomyces cerevisiae (Yeast) is a widely studied and highly utilized eukaryotic model organism, ideally suited to high throughput genetic analysis. Key reasons for this include yeast's ability to exist in a haploid state, allowing more direct genotype-phenotype analyses, and efficient homologous recombination that allows easy editing of genomic sequence. Given this, many yeast genomic libraries are currently available, including epitope tagged ORF collections (GFP, TAP-tag etc.), as well as gene deletion, conditional expression, and over-expression libraries [1][2][3][4][5]. Among the approximate 6000 genes in the yeast genome, >31% are conserved between yeast and human species, most of which function in core aspects of a1111111111 a1111111111 a1111111111 a1111111111 a1111111111 eukaryotic cell function [6,7]. For example, Nobel prize-winning studies in yeast have been fundamental to our understanding of the mechanism and regulation of the cell cycle [8,9], secretion [10,11] telomere biology [12] and autophagy [13]. Yeast is therefore a powerful model for understanding basic cell biology. Yeast has also proven itself a valuable model in the study of human disease. For instance, screening for suppressors or enhancers of toxicity associated with heterologous expression of mutant forms of human disease genes is increasingly common [14,15]. For example, expression in yeast of the human RNA binding protein TDP-43, which is associated with amyotrophic lateral sclerosis (ALS) and frontal temporal dementias, recapitulates the pathological phenotype of cytoplasmic aggregates and cellular toxicity, and lead to the identification of Ataxin-2 as a risk factor in human patients [16][17][18][19]. Furthermore, >20% of human disease genes have yeast homologs [20]. Yeast is therefore an attractive model to perform initial genetic and mechanistic analyses of various human diseases, particularly those that impact core cellular processes. 
Synthetic Genetic Array analysis (SGA) is an elegant high-throughput method that allows synthetic genetic interactions to be assessed by generation of combinatorial mutant yeast strains via a systematic mating approach, followed by analyses of growth phenotypes [21,22]. Screening for enhancers or suppressors of toxicity associated with expression of human disease proteins, including TDP-43, has also utilized SGA methods [18,19]. In a typical SGA experiment, a query strain is systematically mated to a library of yeast gene deletion strains to generate an array of combinatorial mutant strains for analysis. Although a powerful method, SGA involves numerous selection steps, prior generation of a query mutant strain compatible with the SGA selection steps and takes approximately 18 days from start to finish. An alternative and potentially quicker means to generate libraries of yeast combinatorial mutants would be to transform strains from the same yeast gene deletion library with a plasmid or PCR product to directly express, delete or modify the query gene of interest in yeast gene deletion strains of interest. Description of such a method is currently lacking. Lithium acetate based transformation [23,24] is a core yeast methodology, given its use in the expression of exogenous DNA via plasmid vectors, and homologous recombination-based modification of the genome with PCR-amplified DNA. Notably, a yeast transformation protocol for microtiter plate based transformation has been previously reported [23][24][25], but this still required significant manual handling steps. In this study, we expand on and further optimize these methods for use with liquid handling robotic systems now common in university and industrial settings. Our method is highly efficient, allowing 1200 individual strains to be transformed per day per robot; thus a whole yeast genomic library can be transformed in 6 days. In addition, our method provides an alternative approach to SGA methods for generation of combinatorial mutant yeast libraries. Materials and methods Liquid handling robot setup (S1-S5 Videos) Our protocol was implemented using a Biomek FX liquid handling robot. A detailed mechanistic description of the pipetting, agitation and transfer operations undertaken by the robot, and associated programming is provided in the S1 File. The majority of brands of liquid handling robots available on the market (Hamilton, Tecan) can implement and operate the program described in the S1 File. Yeast strain and growth conditions for liquid-handling robot manipulation (S1 Video). Day 1: Up to 12 96-well yeast gene knockout library plates [2] were thawed and pinned into standard sterile 96-well plates with 200 μl YPD liquid medium by a Singer ROTOR robot or sterilized prongs for the purpose of growing starter cultures. Yeast were cultured on a rotary shaker at 200 rpm, 30˚C for 24 hours. In addition, a number of 2.5 ml deepwell plates (VWR, Cat. No. 37001-520) equal to the number of starter culture plates were filled with one stainless steel bead (Biospec, Cat. No. 11079132ss) per well, and autoclaved for the following day. Day 2: All plates including deep-well plates, read plates, selective media plates and the correlated tips are loaded at this step as well. Inoculation and measurement of the OD 600 (S2 Video). 800 μl of YPD medium was added to the deep-well plates using a multichannel pipette, liquid handling robot or a microplate dispenser such as a Matrix WellMate (Thermo scientific). 
150 μl of cultured yeast cells were then inoculated into the deep-well plates using a liquid handling robot. We utilized a Biomek FX, however other liquid handling robots by manufacturers such as Tecan and Hamilton are also suitable (see S1 File). Plates were briefly vortexed (600-1000 rpm, 1 min) before the OD 600 of each yeast culture was measured using a plate reader (e.g. BioTek Synergy 2). If the value was too low in some wells, more cells were inoculated to bring the OD 600 to between 0.4 and 1.2. Note that the starting OD 600 value for the growth step is fully customizable. For instance, the liquid handling robotic software we developed (described in our supporting information) can normalize all wells to the same OD 600 value (an illustrative volume calculation is sketched at the end of this section). Additionally, cells used to measure the OD 600 value can be re-transferred back to the deep-well plate using the liquid handling robotic protocol (S1 File, S1-S5 Videos) in order to achieve a desired specific volume/cell density. The deep-well yeast plate was then transferred to the shaking incubator (Liconic, Cat. No. STX44-SA), which is connected to the Biomek FX, to culture for another 2½-4 hours (1200 rpm, 30˚C). High-throughput transformation preparation. During the 2½-4 hour incubation time, the following was prepared: 1. 96-well plates with 200 μl selective medium in each well for the transformants. Plate number should equal the number of yeast plates being cultured for transformation. Medium was added by a liquid handling robot, Matrix WellMate microplate dispenser (Thermo Scientific) or multichannel pipette. 2. PEG (50% w/v) was aliquoted into 2.5 ml deep-well plates, at 100 μl per transformation plate used in the transformation process. For example, for 12 transformation plates in total, a volume of at least 1200 μl is required (a 100 μl excess was commonly added to allow for minor pipetting errors). 3. Transformation mix (Table 1): As with PEG, these volumes would be scaled according to the number of plates being transformed (50 μl of transformation mix per plate). A suitable amount of plasmid or PCR product for each transformation reaction was added (e.g. plasmid: ~400 ng/well; PCR product: 4500 ng/well; see optimization in the Results section). As a validation of our method in generating gene knockout strains, we utilized the following primers to delete and verify the efficiency of deletion for the gene VPS38 (Table 2). Read the OD of a plate after incubation (S3 Video). After 2½-4 hours the deep-well yeast culture plates are removed from the shaking incubator, and the OD 600 value is read. A range of OD 600 0.5-1.5 is optimal (see Results). Switch to transformation (S4 Video). When all the cells are ready to transform, all the deep-well plates from the incubator are transferred to the microplate "hotel" (Thermo Scientific Cat. No. 51021435). The temperature of the incubator is then shifted to 42˚C. Transformation (S5 Video). The deep-well plates are centrifuged at 1500g for 5 min and then placed back on the liquid handling robot, previously loaded with racks of filter tips (200 μl filter tips: Axygen, Cat. No. FXF-180-R-S; 50 μl filter tips: Beckman Coulter, Cat. No. A21586), plus the PEG and transformation mix solution plates (see the high-throughput transformation preparation section above). The liquid handling robot then removes the YPD medium from the yeast culture plates, adds 50 μl transformation mix, vortexes the plates at 600 rpm for 1 minute, adds 100 μl PEG to each well and vortexes for 1 minute.
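The OD 600 normalization mentioned above is simple enough to reproduce outside the robot software. The short Python sketch below illustrates one way the required transfer volumes could be computed; it is only an illustration under stated assumptions, since the function names, the default well volume of 950 μl (150 μl inoculum plus 800 μl YPD) and the choice to top wells up with overnight starter culture are ours, not taken from the S1 File method.

# Illustrative sketch (not the authors' actual Biomek method file): estimate how much
# additional overnight starter culture to transfer into each deep well in order to
# normalize every well to a common OD600. All names and defaults are hypothetical.

def top_up_volume_ul(well_od, well_volume_ul, starter_od, target_od):
    """Volume of overnight starter culture (ul) to add so the well reaches target_od.

    Solves (well_od*V + starter_od*x) / (V + x) = target_od for x.
    Returns 0 if the well is already at or above the target.
    """
    if well_od >= target_od:
        return 0.0
    if starter_od <= target_od:
        raise ValueError("Starter culture must be denser than the target OD600.")
    v = well_volume_ul
    return v * (target_od - well_od) / (starter_od - target_od)


def normalization_plan(plate_ods, well_volume_ul=950.0, starter_od=3.0, target_od=None):
    """Return {well: volume to transfer} for a dict of measured OD600 values.

    By default the target is the highest OD on the plate (one of the two options
    offered by the robot software); a fixed operator-chosen target can be passed instead.
    """
    if target_od is None:
        target_od = max(plate_ods.values())
    return {well: round(top_up_volume_ul(od, well_volume_ul, starter_od, target_od), 1)
            for well, od in plate_ods.items()}


if __name__ == "__main__":
    # Example: three wells read after inoculation (150 ul culture + 800 ul YPD).
    print(normalization_plan({"A1": 0.35, "A2": 0.80, "A3": 0.55}))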
In total, each 96-well transformation plate well utilizes two unique tips (one 50 μl and one 200 μl) for the whole process. In contrast, only one set of tips for dispensing the PEG solution and transformation mix is required, since no direct contact with the yeast culture is made during dispensing. Together, this protocol essentially eliminates the possibility of contamination between wells, and the possibility of human error. 96-well plates now containing yeast, PEG and the transformation mix were returned to the shaking incubator at 42˚C for 1-6 hours. Following completion of these steps for a single 96-well plate, the process was repeated for any additional yeast culture plates, which were all transferred automatically from the shaking incubator to the microplate "hotel" after incubation. All tips are also loaded onto the liquid handling robot from the connected microplate hotel. In total, 15 minutes of time on the liquid handling robot is required per plate transformation prior to the heat shock step. The number of plates transformed in a day will thus affect the minimum period of heat shock; for example, transforming 8 plates will necessitate a minimum 2 hour heat shock period (15 min × 8). Transformation plates are removed from the incubator after heat shock, and centrifuged at 1500g for 5 minutes. Plates are then returned to the Biomek FX, which removes excess media, then transfers 30 μl of concentrated transformed cells into a 96-well plate with appropriate selective media. In total, these post-heat shock steps require 15 minutes to complete, thus 12 plates could be finished in about 3 hours. In summary, 6 hours of liquid handling robot time is utilized for 12 96-well plate transformations (1152 strains). Adding in the initial growth time (2½-4 hours), the entire process takes between 8-10 hours (i.e. 1 day). Selective media plates are incubated at 30˚C for 2-4 days on a shaker (200 rpm). To prevent well evaporation, especially at the edge of the plates, parafilm wrapping and incubation in a humid box are recommended. The cells are then spotted on selective media plates, ideally by using the Singer ROTOR robot to pin the cells (Singer Instruments); multichannel pipettes and prongs also work but risk human error. Yeast microscopy and analysis These methods have been described previously [26]. Briefly, stationary phase cells (OD 600 >3.0) expressing the pRB1 plasmid (Table 3) were examined using a Deltavision Elite microscope via a 100x objective. The data were analyzed with Fiji [28]. Canavanine mutation frequency assay To determine the mutagenic potential of differing lengths of 42˚C heat shock, a canavanine resistance assay was utilized [29,30]. Briefly, wild type (WT) BY4741 cells were cultured to an OD 600 of ~0.6 and then subjected to our transformation method with differing 42˚C incubation times (0, 1, 3, 6 hours). For each reaction, 1/200th of the total cells in a transformation reaction were plated onto YPD plates for colony counting (dilution avoids formation of lawns). The rest of the cells were plated onto canavanine plates (60 mg/L, Sigma-Aldrich, Cat. No. C-9758). Both plate sets were incubated for 2 days at 30˚C. The colony number was counted on both plate types, and the relative mutation frequencies of yeast subjected to differing 42˚C heat shocks were calculated.
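The colony counts from this assay translate into mutation frequencies with a small amount of arithmetic. The Python sketch below shows one plausible way to do the calculation; the exact normalization used in the study is not spelled out in the text, so the per-viable-cell frequency and the choice of the 0 hour sample as the baseline are assumptions made purely for illustration, and the colony counts in the example are invented.

# Hedged sketch of the mutation-frequency arithmetic implied by the canavanine assay
# described above; the formula (Can-resistant colonies per viable cell, expressed
# relative to the 0 h control) is an assumption, not a statement of the authors' method.

YPD_DILUTION_FACTOR = 200            # 1/200th of each reaction is plated on YPD
CAN_FRACTION_OF_REACTION = 199 / 200  # the remainder goes onto canavanine plates

def mutation_frequency(canr_colonies, ypd_colonies):
    """Canavanine-resistant colonies per viable cell plated on canavanine."""
    viable_cells_in_reaction = ypd_colonies * YPD_DILUTION_FACTOR
    cells_on_canavanine = viable_cells_in_reaction * CAN_FRACTION_OF_REACTION
    return canr_colonies / cells_on_canavanine

def relative_frequencies(counts):
    """counts: {heat_shock_hours: (canR_colonies, ypd_colonies)} -> frequency relative to 0 h."""
    freqs = {hours: mutation_frequency(*c) for hours, c in counts.items()}
    baseline = freqs[0]
    return {hours: f / baseline for hours, f in freqs.items()}

# Example with made-up colony counts for the 0, 1, 3 and 6 hour heat shocks:
print(relative_frequencies({0: (12, 310), 1: (13, 305), 3: (11, 280), 6: (14, 260)}))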
Results ≥100 ng of plasmid is suitable for high-throughput transformation of yeast over a broad OD 600 range To optimize our high-throughput transformation methodology using a liquid handling robot, the effect of varying plasmid and cell concentration was tested. First, to assess the importance of plasmid concentration, pRB1, a single-copy URA3-based plasmid expressing Pab1-GFP and Edc3-mCh, was transformed into BY4741 WT yeast cells at differing concentrations, followed by selection for transformants on media lacking uracil. Pab1 is a poly(A)-binding protein that aids mRNA translation, while Edc3 is an mRNA decay factor. Pab1 and Edc3 are also cytoplasmic markers of conserved mRNA-protein (mRNP) foci called stress granules (Pab1) and P-bodies (Edc3), which are implicated in regulation of mRNA function [31]. As expected, at a cell density of OD 600 0.5, transformant numbers increased as plasmid concentration in the transformation reaction increased from 25 ng to 400 ng total (Fig 1A). To assess the importance of cell density in our methodology, 100 ng of pRB1 was transformed into yeast cells harvested at a range of cell densities (Fig 1B). At low cell densities (OD 600 <0.2), no transformants were observed. However, efficient transformation was observed over a broad OD 600 range, from 0.4 to 1.5 (Fig 1B). Harvesting cells at a high OD 600 (3.0) did not generate transformants, indicating that post-diauxic shift yeast are not suitable for high-throughput transformation. In summary, an OD 600 range of 0.4-1.5 and ≥100 ng plasmid DNA is optimal for our automated transformation method. We next examined whether high OD 600 transformed cells exhibited any difference in plasmid expression or general cellular phenotypes, which we assessed by any changes in cell morphology or induction of stress granules and P-bodies relative to low OD 600 transformed cells. Stress granules (Pab1-GFP foci) and P-bodies (Edc3-mCh) were induced by growth to stationary phase (OD 600 ~3.5), which results in robust induction of both granules, as well as their partial co-localization in the cytoplasm [27]. As shown in Fig 1C and 1D, there was no obvious difference in cell morphology. Additionally, no difference was observed in the co-localization between stress granules and P-bodies, or in the average size or number of stress granules and P-bodies per cell. In summary, this indicates that no obvious problems result from transformation at a range of cell densities. Increasing heat shock up to 6 hours increases transformation efficiency with minimal effects on mutational frequency The next variable we optimized for our automated transformation method was the 42˚C heat shock duration (Fig 2A). Surprisingly, a 30-minute incubation resulted in no transformants in our experimental conditions; this may reflect a shorter 30˚C or room-temperature incubation time of the yeast in the transformation mix relative to other described methods [32,33] where 30 minutes at 42˚C does generate transformants. However, significant transformant numbers were observed with extended incubation times (1-6 hours) at 42˚C, with maximal transformant numbers observed with a 6 hour heat shock. Consistent with Fig 1A, transformant number was also increased by increasing plasmid concentration for all transformation reactions with differing 42˚C periods. Since 6 hours is considerably longer than that associated with current yeast transformation methods [23,24], we were curious to see if this had any detrimental effects on yeast.
First, general cell morphology and pRB1 plasmid expression were again examined by microscopy. No obvious differences in cell morphology, Pab1 and Edc3 expression levels, and stress granule and P-body formation/co-localization were observed between cells subjected to different 42˚C transformation periods (Fig 2B and 2C). Second, we assessed the effects of extended heat shock on the accumulation of spontaneous genomic mutations, by using a canavanine resistance assay [29,30]. Briefly, CAN1 is a gene that encodes an arginine permease. Canavanine, a toxic arginine analog, cannot be taken up by cells harboring a deficient CAN1. Thus, cells harboring a WT CAN1 cannot grow on canavanine plates, whereas yeast that have acquired inactivating mutations in CAN1 do grow. We measured CAN1 mutation rates and found there was no statistically significant difference in relative mutation rates for cells incubated at 42˚C for between 1 and 6 hours (Fig 2D). As expected, cell viability decreased with longer heat shock periods in BY4741 cells (data not shown), but this did not preclude obtaining transformants (Fig 2A). Taken together, these data demonstrate that a wide range of 42˚C heat shock durations can lead to efficient transformation rates, thus offering considerable flexibility in the implementation of our protocol. Automated transformation protocol is a viable method for generation of combinatorial yeast mutant libraries Transformation of a linear DNA construct and manipulation of genomic sequence via homologous recombination is a well-established yeast methodology [34,35]. We were thus curious if our method could be utilized for semi-high-throughput construction of combinatorial yeast gene deletion libraries via transformation of a specific gene deletion PCR product, akin to the output of synthetic genetic array methodologies. To test this, we generated via PCR a LEU2 cassette (Fig 3A) with 45 nucleotide ends homologous to sequence flanking the VPS38 ORF, which is a non-essential gene. We then tested transformation of different concentrations of the LEU2 cassette into BY4741 cells using our automated transformation method. After 3 days of growth in the selective medium, cells were plated on -Leucine selective media. As expected, more transformants were observed with increasing amounts of the LEU2 cassette (Fig 3B). Wild type cells and cells from three separate colonies transformed with different amounts of the LEU2 cassette (30, 60 and 120 μl of 150 ng/μl PCR product, concentrated to a final volume of 10 μl in a SpeedVac) were cultured, genomic DNA isolated, and the VPS38 locus assessed for the presence or successful deletion of the VPS38 gene using primers flanking the homologous recombination sites (Fig 3C). All three colonies examined showed successful deletion of VPS38. Finally, we demonstrated the applicability of our method to high-throughput transformation of gene deletion library plates, wherein each well harbors a unique yeast gene deletion strain, some of which exhibit lower growth rates than WT/other gene deletion strains, thus making transformations potentially more challenging. Cells in the A1 well were removed prior to initial culture to act as a negative control. Importantly, we obtained 100% successful transformation of all strains, following growth to OD 600 ~0.8, transformation with 4500 ng total LEU2 cassette and incubation at 42˚C for 3 hours. This demonstrates the effectiveness of our method in generating combinatorial gene mutant libraries in yeast.
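As a quick sanity check of the throughput figures quoted in the protocol above (15 minutes of robot time per plate before and after heat shock, roughly 1200 strains per day, and a genome-scale library in about 6 days), the short Python sketch below reproduces the arithmetic. The scheduling model (a single robot working in strictly serial 15 minute blocks, with heat shock overlapping robot work on other plates) and the round library size of 6000 strains are simplifying assumptions of ours, not exact values from the paper.

# Back-of-the-envelope throughput check; all timing constants are copied from the
# protocol text, while the overlap assumptions and library size are our own.
import math

STRAINS_PER_PLATE = 96
ROBOT_MIN_PER_PLATE_PRE = 15   # setting up one plate (transformation mix, PEG) before heat shock
ROBOT_MIN_PER_PLATE_POST = 15  # media removal and transfer to selective media afterwards

def daily_summary(n_plates, growth_hours=(2.5, 4.0)):
    pre_h = n_plates * ROBOT_MIN_PER_PLATE_PRE / 60
    post_h = n_plates * ROBOT_MIN_PER_PLATE_POST / 60
    robot_h = pre_h + post_h
    # the first plate keeps incubating while the remaining plates are set up,
    # so the minimum heat shock grows with the number of plates (15 min x n)
    min_heat_shock_h = pre_h
    # heat shock overlaps with robot work on the other plates, so a day is
    # roughly growth time plus total robot time
    day_hours = tuple(g + robot_h for g in growth_hours)
    return {"strains": n_plates * STRAINS_PER_PLATE,
            "robot_hours": robot_h,
            "min_heat_shock_hours": min_heat_shock_h,
            "approx_day_hours": day_hours}

print(daily_summary(12))                           # 1152 strains, 6 h robot time, ~8.5-10 h day
print(daily_summary(8)["min_heat_shock_hours"])    # 2.0 h, as quoted in the Methods

# A genome-scale library of ~6000 strains at 12 plates per day:
plates_needed = math.ceil(6000 / STRAINS_PER_PLATE)
print(plates_needed, "plates ->", math.ceil(plates_needed / 12), "days")  # 63 plates -> 6 days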
[Displaced figure legend, Fig 3C-D: colony PCR using primers (Table 2) from four single colonies (wild type; #1: 30 μl spot; #2: 60 μl spot; #3: 120 μl spot). Expected PCR product size for wild type: 1701 bp; VPS38 knockout: 2810 bp. (D) Plate 1 from the non-essential yeast knockout library was cultured and transformed with 4500 ng of the VPS38 knockout LEU2 cassette using our methodology. Transformed cells were cultured in -Leucine SD media for 3 days and spotted on -Leucine media using prongs.]
Discussion Based on its simple genome and high degree of conservation with human cells, yeast is an excellent model to study fundamental biological processes, and to rapidly screen for genetic modifiers of pathology observed in various human diseases [7,15]. High-throughput transformation is an important tool for genetic screening in yeast, and here we describe a high-throughput automated method that efficiently transforms plasmid or linear DNA constructs into yeast. Here, we have taken a previously published microtiter plate transformation method [23,24], and adapted it to a liquid handling robotic platform. In the process, we optimized key factors affecting transformation efficiency, such as plasmid amount, cell concentration and 42˚C incubation time. Additionally, we demonstrated its efficiency in plasmid transformation and gene deletion, making this method highly useful for genome-wide genetic analyses (Fig 4, S1-S5 Videos). We believe this is a useful advancement for the yeast community for the following reasons. First, use of a liquid handling robot essentially eliminates the possibility of human error inherent in doing microtiter plate transformations by hand. Second, less time is spent doing a transformation reaction using the automated approach. Third, the throughput of the automated transformation method is far greater than that of a manual approach, with 12 plate transformations per day a realistic goal per liquid handling robot. These and other basic transformation advantages are highlighted in Table 4. In addition, our method offers a potential alternative to SGA approaches for generating combinatorial yeast mutant libraries. Key advantages are summarized in Table 5. Additional benefits include not relying on the meiotic recombination events inherent in the SGA approach, which can lead to potential complications in obtaining double mutants when both genes are proximal on the same chromosome, or to the loss of selective markers upstream or downstream of a given modified gene of interest if a recombination event occurs between them [36]. However, a caveat of our method that limits its applicability to whole genome analysis is the need to verify that gene modification/knockout has occurred at the correct locus with a given selection cassette. One viable approach to rapidly isolate yeast that have been correctly deleted for a gene of interest is to spot and serially dilute transformed yeast cells following 1 day of growth in selective media, such that 1-5 single colonies grow on the agar plate. These individual colonies can then be inoculated into liquid media by the Singer ROTOR robot (or manually), and subjected to a colony PCR protocol. Finally, PCR verification, using primers that confirm correct integration of the KO cassette at the correct genomic locus, together with gel electrophoresis analysis, ideally using a high-throughput gel apparatus such as the GE Healthcare Ready-To-Run unit or the Thermo Scientific E-Gel 96 system, will allow fast confirmation of which double mutant strains have been successfully constructed.
Should PCR indicate a mixed population of correct and incorrect strains, streaking for single colonies and repeating the above process would be required. For this reason, we believe our method is best suited to creating targeted collections of double mutants in the 100-1000 strain range, rather than entire genomic libraries. To ensure maximal efficiency of correct insertion of any deletion cassette, it is good practice to utilize selectable markers for which no homologous sequence exists in yeast, and to maximize the nucleotide length of sequence homologous to the target of interest. Generally, 45 nt results in 80% insertion at the correct locus, whereas 60 nt results in 90% correct insertion [37]. Another significant benefit of our method is the reduced use of consumables. Every individual transformed strain uses two tips (one 200 μl, one 50 μl) for the whole process including inoculation, measuring of cell concentration and the transformation reaction itself. However, only a single rack of tips is utilized for addition of the transformation mix and PEG mix for all plates transformed. In contrast, a manual approach would utilize a new rack of tips for PEG and transformation mix for every transformed plate. Hence, 12 plates transformed by our method need 14 racks of P200 and 12 racks of P50 tips, while manual transformation needs 2-4 times as many tips, with a greater risk of human error.
[Displaced figure legend, Fig 4 workflow: Step 1, inoculate the overnight culture into the deep-well plate and grow for another 2½-4 hours to recover cells to mid-log phase. Step 2, prepare plasmid and transformation mix for transformation; normally, an OD 600 range of 0.4-1.5 and ≥100 ng plasmid or 4500 ng PCR product is optimal for our automated transformation method. Step 3, heat shock the transformants for 3-6 hours. Step 4, transfer the transformed cells to a liquid selective media plate to grow for another 2-4 days, then pin the transformed strains onto appropriate selective media to generate the new library.]
In addition, in comparison to an SGA methodology, which utilizes six different types of selective media plates, our methodology only uses one type of final selective plate to generate combinatorial mutants. There are many potential applications of our method, depending upon the library of strains transformed, and the nature of the foreign DNA (plasmid or genome-modifying cassettes) introduced. For example, genome-scale transformation of a plasmid bearing a tagged protein of interest into an alternatively tagged ORF genomic library (e.g. the TAP-tag collection [5]) could allow genome-wide interaction studies (e.g. Co-IP). Transformation of an RFP-tagged protein of interest into the yeast GFP library [38] could allow genome-wide co-localization studies. Analogous to SGA analysis, generation of combinatorial yeast mutant strains could also help elucidate complex genetic networks and identify enhancers or suppressors of a given gene of interest, including heterologous gene products implicated in human disease. In our method, a Biomek liquid handling robot from Beckman Coulter was used; however, we are keen to stress that a variety of liquid handling robots from different manufacturers, such as Hamilton, Tecan and others, can also be adapted to our method. We are happy to advise in this matter if necessary. In summary, our method provides an efficient and inexpensive way to conduct high-throughput genetic network and functional analyses in yeast. S1 File. Supplemental methods for robot setup and video information. (DOCX) S1 Video. Deckload.
This is the first module of the Transformation Method. It allows the operator to load new plate sets. Each set contains an overnight culture plate, a deep-well growth plate, a clear bottom plate to read the OD, a plate of selective media for the final transformed yeast, a set of 50 μl tips, a set of 200 μl tips, and a plasmid plate, unless 1 plate of plasmids is being used for all plates. The operator is required to direct the method to a CSV file with the names and barcodes of the overnight culture plate, selective media plate, and the plasmid plate if individual plasmid plates are being used. This information is used for barcode verification of these plates while they are being loaded so that there cannot be any mix-ups of which overnight culture plate will be transferred into which selective media plate at the end of the method. The sets are loaded into the robot, and accessed by the robot in a fashion that ensures that all of the members of the plate set are used together and that there is no risk of parts of one set being confused with another set. (MP4) S2 Video. Inoculation. This module of the Transformation Method allows the operator to inoculate the deep-well growth plate from the overnight culture. After inoculation the operator may read the OD of the growth plate and choose to inoculate again to reach a higher OD, or load the data from the OD into the method and have the robot normalize all of the wells on the plate to the highest OD or an OD of the operator's choosing. (MP4) S3 Video. Read OD. This module allows the operator to read the OD of any of the deep-well growth plates. The culture is transferred into a clear bottom plate for the reading and then transferred back after the reading is completed so that none of the volume is lost. This allows the operator to read the OD as many times as is necessary without concern of depleting the culture. (MP4) S4 Video. Switch to transformation. When the operator confirms that they wish to perform the transformation, the robot changes into Transformation mode, the growth plates are moved to the ambient hotel and the incubator is changed to 42˚C. (MP4) S5 Video. Transformation. This module of the Transformation Method allows the operator to transform plates. The operator places a deep-well plate of PEG, a deep-well plate of plasmid (unless different plasmid plates are being used for each transformation), and additional tips on the robot deck. The robot then brings out the deep-well plate of yeast and prompts the operator to centrifuge it. The robot then decants off the growth media, adds the PEG, adds the plasmid, and puts the plate in the incubator (at 42˚C). After the plate is in the incubator, the operator has the option to transform another plate while the first plate is incubating or to wait for the first plate to be ready. When each plate has incubated for the desired time, the robot alerts the operator. Once the operator confirms that they are ready to continue the robot brings out the plate and prompts the operator to centrifuge the plate. When the operator returns the centrifuged plate to the robot, the robot decants off the supernatant, mixes the remaining pellet, and transfers the transformed yeast into a plate of selective media.
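The barcode bookkeeping performed by the Deckload module lends itself to a compact illustration. The Python sketch below mimics the check described above, with plate names and barcodes read from a CSV file and each scanned plate compared against its expected entry so that plate sets cannot be mixed up; the column names and the idea of a per-plate scan step are our own assumptions, since the actual Biomek method is supplied in S1 File.

# Illustrative bookkeeping for the barcode checks of the Deckload module; the real
# implementation lives in the Biomek method file, and all names here are hypothetical.
import csv

def load_plate_sets(csv_path):
    """Read one plate set per row: overnight culture, selective media and (optionally) plasmid plate."""
    with open(csv_path, newline="") as fh:
        return list(csv.DictReader(fh))  # assumed columns: culture_name, culture_barcode,
                                         # selective_barcode, plasmid_barcode

def verify_plate_set(expected, scanned):
    """Compare scanned barcodes against the CSV entry for this set; raise on any mix-up."""
    for role in ("culture_barcode", "selective_barcode", "plasmid_barcode"):
        want = (expected.get(role) or "").strip()
        got = (scanned.get(role) or "").strip()
        if want and want != got:
            raise ValueError(f"{expected.get('culture_name', '?')}: {role} mismatch "
                             f"(expected {want}, scanned {got})")
    return True

# Usage sketch: scan each plate as it is loaded and refuse to continue on a mismatch,
# so an overnight culture plate can never end up paired with the wrong selective media plate.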
6,749.4
2017-03-20T00:00:00.000
[ "Biology" ]
Trees with power-like height dependent weight We consider planar rooted random trees whose distribution is even for fixed height h and size N and whose height dependence is given by a power function h α . Defining the total weight for such trees of fixed size to be Z N , a detailed analysis of the analyticity properties of the corresponding generating function is provided. Based on this, we determine the asymptotic form of Z N and show that the local limit at large size is identical to the Uniform Infinite Planar Tree, independent of the exponent α of the height distribution function Introduction The study of random geometric objects has been actively pursued in recent years.In particular, deep results have been obtained for various models of random planar maps (or surfaces) concerning local limits and scaling limits.Thus, the Uniform Infinite Planar Triangulation was constructed in [4] and the Uniform Infinite Planar Quadrangulation appeared in [8], see also [23] and [11,29], while initial studies of the scaling limit of planar maps in [10] have been continued in numerous papers leading to proofs of existence of the limiting measure as well as establishing important properties of the limit, see e.g.[27,24,25].While the subject is of natural interest in its own right within probability theory, important motivations and applications also arise from problems in theoretical physics, in particular statistical mechanics and quantum gravity, see e.g.[3] and references therein. A more classical topic in the same realm, and equally relevant from a physical point of view, is that of random trees, see [13] and references therein.Local limits and scaling limits have been constructed in this case as well and detailed information about the limiting measures can frequently be obtained using the highly developed theory of branching processes.We shall in this paper concentrate on local limit results, in particular relating to the Uniform Infinite Planar Tree (UIPT) that can be obtained as a local (or weak) limit of uniformly distributed finite rooted planar trees of fixed size tending to infinity [14].Indeed, this limit can be viewed as a particular instance of a more general local limit result for critical Bienaymé-Galton-Watson (BGW) trees conditioned on size, namely when the off-spring distribution is a geometric sequence, a result first established in [21] and [2].Earlier results on local limits of critical (and subcritical) BGW trees conditioned on height go back to H. Kesten [22] and turn out to yield the same limit distribution supported on one-ended trees.More recent results allowing more general conditionings can be found in [20] and [1].Properties of the limiting measures, e.g.relating to their Hausdorff dimension and spectral dimension, have been established in [6] and [15]. While the results just described depend heavily on the fact that the weights of individual trees are local, in the sense of being products of weights associated with the vertices of the tree, the case of non-local weight functions has been much less addressed in the literature.Thus, while the average height of various ensembles of planar trees has been of interest in many works, such as [12,28,18,9,30], it seems that properties of random planar trees with height-dependent weights have not been extensively explored, an exception being [26], where a model of depth weighted random recursive trees is studied, with branching probability depending on vertex height. 
In the present work we investigate a rather different case where the weight function f (h) depends on the height h of the tree and otherwise is constant for fixed size (see section 2 for a precise definition).It is easy to see that different limits can be obtained by a judicious choice of f .A detailed study of such cases will be given in [17].Here, we consider the case where f is a power function, f (h) = h α , making use of transfer theorems from analytic combinatorics.The main result, stated in Theorem 4.2, is that the local limit in this case equals the UIPT, independently of the value of the exponent α. The paper is organised as follows.Section 2 contains a precise definition of the metric and associated Borel algebra on the space of rooted planar trees to be considered.Furthermore, the finite size distributions, whose limits we aim at calculating, are defined.In section 3, the analytic behaviour of the generating function for the total weights of trees as a function of the size N is examined for fixed α, based on well known results for the corresponding generating functions for trees of fixed height.Some technical details of the calculations involved are deferred to an appendix.Applying transfer theorems, the asymptotic behaviour of the coefficients at large N is determined.These results are used in section 4 to prove existence of the weak limit and to identify it as the UIPT.Finally, section 5 contains some concluding remarks. Preliminaries In the following, T N will denote the set of rooted planar trees of size |T | = N with root vertex of degree 1.Here N ∈ N := {1, 2, 3, . . .} or N = ∞ (in which case T is assumed to be locally finite) and we set For T ∈ T the notation h(T ) will be used for the height of T , i.e. the maximal length of a simple path in T originating from the root.For fixed α ∈ R we consider the probability measures µ N , N ∈ N, on T given by µ where Z N is a normalisation factor, also called the finite size partition function, given by Thus, µ N is supported on T N and our goal is to study the weak limit of µ N as N → ∞ as a probability measure on T .Here T is considered as a metric space whose metric dist is defined as follows.Denoting the root of T ∈ T by v 0 , the ball B r (T ) of radius r in T around v 0 is by definition the subgraph of T spanned by vertices at distance at most r from v 0 , i.e. where d T designates the graph distance on T and V (G) denotes the vertex set of any given graph G.For T, T ′ ∈ T we then set It is easily verified that dist is a metric on T .In fact, it is an ultrametric in the sense that for any triple T, T ′ , T ′′ of trees in T we have The ball of radius s > 0 around a tree T will be denoted by B s (T ).If s = 1 r , where r ∈ N, it is seen that where T 0 = B r (T ).For additional properties of the metric space (T , dist), including the fact that it is a separable and complete metric space, see e.g.[14]. By definition, a sequence of probability measures (ν N ) N ∈N on T is weakly convergent to a probability measure ν on T T f dν N → T f dν as N → ∞ for all bounded continuous functions f on T .We refer to [7] for a detailed account of weak convergence of probability measures. As mentioned earlier, the main result of this paper is a proof that the weak limit of the sequence (µ N ) N ∈N exists and equals the UIPT, which will be described in more detail in section 4. 
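The displayed formulas in the Preliminaries above did not survive extraction. Consistent with the abstract, which assigns the weight f(h) = h^α to a tree of height h at fixed size N, the definitions of the measures μ_N and of the finite size partition function Z_N presumably read as in the reconstruction below; this is offered for readability and is not a verbatim quotation of the lost equations.

\[
\mu_N(T) \;=\; \frac{h(T)^{\alpha}}{Z_N}\,, \qquad T \in \mathcal{T}_N\,,
\qquad\text{with}\qquad
Z_N \;=\; \sum_{T \in \mathcal{T}_N} h(T)^{\alpha}\,,
\]

so that \mu_N is indeed supported on \mathcal{T}_N and reduces to the uniform distribution on trees of size N when \alpha = 0, in agreement with the Catalan-number case discussed later in the text.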
A basic ingredient in the proof is the generating function W α for the Z N , defined by where g is a real or complex variable.It is known, and will also be shown below, that the sum is convergent for |g| < 1 4 and divergent for |g| > 1 4 .It will be convenient for the following discussion to define It is a known fact, and easy to verify, that X m fulfills the recursion relation , m ≥ 1, and X 1 (g) = g . This equation can be rewritten in linear form and hence solved explicitly.The result is given by (see e.g.[13]) Defining X 0 = 0 and z = 1 − 4g , this gives Noting that for z = x + iy ∈ C we have x > 0, and hence, by (6), X m is an analytic function of z away from the imaginary axis for all m.Note also that for fixed such z the right-hand side of (8) decays exponentially with m and hence the series in (5) converges.Taking z to be real we get, in particular, that W α is well-defined for 0 ≤ g < 1 4 .On the other hand, considering the denominator in (6) we see that it is an odd polynomial in z of degree m + 1 if m is odd and otherwise of degree m, and it is straightforward to see that its roots are given by Hence, it follows from (6) that these are precisely the (simple) poles of X m apart from z = 0 corresponding to p = 0 where both numerator and denominator have simple zeroes.Therefore, X m is analytic for |z| < |z 1 | and hence for |g| < g m where with a singularity at g = g m .Since g m → 1 4 as m → ∞ it follows from (5) that the sum defining W α (g) is divergent for g > 1 4 , as claimed above. 3 Generating functions Singular behaviour In order to establish the existence of the limit of (µ N ) we shall determine the asymptotic behaviour of Z N as N → ∞.This will be done by analysing the singularity of W α at g = 1 4 in more detail and using so-called transfer theorems [19].When considering W α as a function of z we will use the notation W α (z) for W α (g). Before stating the main result of this section, we note a few elementary facts that will be needed, formulated in the following three lemmas.Lemma 3.1.For |z| < 1 and m ≥ 1 we have where the coefficients for all k.Moreover, Proof.First, we note that the term of order k in the series expansion is identical to the one in the polynomial From this expression we see that the coefficient of (2mz) k is given by where the first inequality follows by replacing the prefactors by 1 and lifting the restriction r 1 + 2r 2 + ... + (k − 1)r k−1 = k − r, while the second inequality results from estimating the finite sum in parenthesis by the corresponding infinite sum.This proves the first part of the lemma. To show (11), we rewrite (12) for k ≤ m in the form Here, the latter sum can be estimated as above by while expanding the first term yields an expression of the form where the coefficient c m k is easily seen to fulfill the bound Combining the two estimates yields the claimed bound. The next lemma concerns properties of the denominator function appearing in (8), for small values of z.We use the notation A k for the Taylor coefficients of as well as where K and L are positive constants.b) For |z| ≤ tan( π m+1 ) we have where the coefficients and there exist positive numbers B k , k = 1, 2, . . ., independent of m, such that Proof.With notation as in Lemma 3.1 we have where it has been used that a m 1 = 1 and a m 2 = 1 2 .Hence, and ( 14) follows immediately from (10) with K = 2e 2 .Similarly, the bound (15) follows easily from (10) and (11) with L = 2(e 3 + 1).This proves part a) of the lemma. 
For the second part we note that z 2 Dm(z) is a meromorphic function of z which is analytic for |z| < tan( π m+1 ) as shown in section 2. The power series for z 2 /D m (z) in this disc is obtained by inverting that of Dm(z) z 2 , i.e. the coefficients c m 2k are determined by where c m 0 ≡ 1 .Using the the bound ( 14) just proven, we obtain which implies ( 16) by a simple induction argument.Using that Using the bounds of part a), this gives Since c 0 = A 0 = 1, the inequality ( 17) now follows by induction with B k defined recursively as B 0 = 0 and This concludes the proof. For fixed a > 0 we denote in the remainder of this paper by V a the wedge in the right half-plane given by Lemma 3.3.Let a > 0 and 0 < ǫ < 1 be given.Then there exist positive constants K 0 , K 1 , m 0 depending on a, and δ 0 depending on a and ǫ, such that the following statements hold. b) For z = x + iy ∈ V a , |z| ≤ δ 0 , and m ≥ m 0 , Proof.a) For any z ∈ C we have On the other hand, for z ∈ V a we have Combining these two estimates yields (19) with where O(z) is analytic and fulfills for z in a suitably small disc around 0, where c is some constant independent of m.Therefore, choosing δ 0 small enough, it follows that 2mz(1 Introducing the shorthand we get Consider first the last expression in the case where mx ′ ≥ 1.In the second term inside square brackets, the factor multiplying |z| is numerically bounded by a constant depending only on a.Hence, choosing δ 0 sufficiently small, the term in square brackets is bounded from below, say by1 2 , for |z| < δ 0 .On the other hand, if mx ′ ≤ 1 we observe that the factor inside round brackets is bounded while the prefactor |z| sinh(mx ′ ) can be estimated as follows.First, choosing δ 0 sufficiently small such that Then, choosing m 0 sufficiently large, the term in square brackets in ( 22) is bounded from below by 1 2 if m ≥ m 0 .Thus, we have shown that for m 0 sufficiently large and δ 0 > 0 small enough it holds that Given ǫ, choose δ 0 small enough such that |zO(z)| < |z| 1+a ǫ for |z| < δ 0 .For such z in V a we then have where from which the lower bound in (20) follows in view of (23).By a slight modification of the arguments above, the upper bound follows similarly. We are now ready to prove the first main result of this section. for z small in V a . Proof.We claim that the polynomial W α is given by replacing the summand in the definition (5) of W α by its Taylor polynomial of degree 2n, i.e. and that c α is given by where L n (t) stands for the Laurent polynomial of order 2(n − 1) for Setting we start by rewriting where in the last line we converted the integral over the positive real axis to a line integral along the half line ℓ z : t → tz inside the wedge V a , by using Cauchy's theorem and the fact that α + 2k − 2 < −1 for k ∈ {0, 1, .., n} implies that the integrand decays fast enough and uniformly to 0 at infinity inside V a , such that the integral along the circular part of the contour at infinity vanishes. It is convenient to split the sum and integral in the last expression into three regions given by 3) r −1 < m|z| , r −1 < t|z| , respectively 2 .Here, r will ultimately be chosen as a suitable function of |z| tending to 0 as z tends to 0, but for the moment we merely assume 0 < r < 1. 
Denoting the corresponding contributions to the series and integral in (28) by S 1 , S 2 , S 3 and I 1 , I 2 , I 3 , respectively, we have and we shall proceed by successively estimating the numerical values of S 1 , I 1 first, then S 2 − I 2 , and finally of S 3 , I 3 .In the remainder of this proof, "cst" will denote a generic positive constant independent of z and r within the stated ranges, and likewise O(w) will denote a generic function of order w for |w| small, i.e. |O(w)| ≤ cst • |w|. 2 More precisely, t is restricted according to 0 < t ≤ r |z| , r |z| < t ≤ 1 r|z| and 1 r|z| < t, respectively, where ⌊x⌋ denotes the integer part of x ∈ R. For the sake of simplicity, we shall in the following apply the notation in (29) and omit writing explicitly the relevant integer parts. S 1 and I 1 : It follows from Lemma 3.2 that which we henceforth assume to hold.It follows that For I 1 , on the other hand, we get S 2 − I 2 : We further decompose this expression as and proceed to establish bounds on A, B and C. In order to deal with A, we note that with the notation z ′ introduced earlier in (21) we have By Lemma 3.3 with, say, ǫ = 1 2 the denominator fulfills To estimate the numerator, we note the inequalities valid if m|z| 2 is bounded, which is the case if (39) holds and m|z| < 1 r .For m|z| ≤ 1 we hence get cosh(2mz and together with (37) and (38) this implies provided (39) holds, which we assume in the following. On the other hand, in the range 1 < m|z| < r −1 the inequalities (40) imply Combining this with the bound which follows from Lemma 3.3 a) and b) with ǫ = 1 2 , we get Thus, using (41) and (42), we obtain the bound valid for any (fixed) value of α ∈ R. To derive an upper bound for |B| it is useful to rewrite It is straight-forward to estimate the integrand using (19) and recalling that α < 1.One finds which yields the bound In order to estimate |C|, we first use the mean value theorem to write where and recalling (17), which holds for all m in the summation range provided z fulfills the condition we see that (46) implies for such values of z. S 3 and I 3 : Using Lemma 3.3 a) we have and similarly Lemma 3.3 b) yields Furthermore, using (16) we have Taking into account also the obvious bound it follows that Combining the estimates (32), ( 33), ( 49) and (54), we conclude that provided z, r and n fulfill conditions (31), ( 39) and (47).Choosing r = |z| β where 0 < β < 1, these conditions are evidently satisfied for |z| small enough.Noting that 1 − α − 2n > 0, it follows from (55) that if β is chosen such that This concludes the proof of Theorem 3.4. The next theorem is similar to the previous one and covers the case α > 1. Theorem 3.5.Assume α > 1 and let a > 0. Then W α is analytic in the right half-plane C + and there exists ∆ > 0 such that for z ∈ V a small, where Proof.Applying Cauchy's theorem as in (28) gives As previously, we split the sum and integral in (57) into three regions as in (29) and denote the corresponding contributions by S 1 , S 2 , S 3 and I 1 , I 2 , I 3 , respectively.The relevant estimates can then be obtained by suitably modifying the arguments of the previous proof as follows. The result for the remaining values of α, i.e. α = −(2n − 1), n ∈ N 0 , is stated in the following theorem whose detailed proof is deferred to Appendix A. Theorem 3.6.Assume α = −(2n − 1), n ∈ N 0 , and let a > 0. 
Then W α (z) is analytic in the right half-plane and there exists a polynomial P n (z) of degree 2n and a constant ∆ > 0 such that for z small in V a , where Coefficient asymptotics Recalling that z = √ 1 − 4g, it follows from Theorems 3.4, 3.5 and 3.6 that W α is an analytic function of g in the slit-plane C \ [ 1 4 , +∞).The asymptotic behaviour of the coefficients Z N in its power series expansion (4) around g = 1 4 , that will be needed in the next section, can be deduced by applying transfer theorems yielding the following result.Proposition 3.7.For fixed α ∈ R it holds for large N that where the constant C α is given by Proof.Consider first the case α = −(2n − 1) , n ∈ N 0 .With notation as in section VI.3 of [19], it follows from Theorems 3.4 and 3.5 with a > 1 that there exists a ∆-domain where φ a = π − 2 tan −1 (a) < π 2 and η > 0 can be chosen arbitrarily.Applying Corollary VI.1 in [19] then gives the result. For α = −2n + 1 we recall the well known fact (see e.g.section VI.2 in [19]) that This immediately implies that Applying Theorem VI.3 of [19] to the remainder term O(|z| 2n+∆ ) in (65) then us to conclude that as N → ∞, as well as that (−1) n A n = |A n |, since Z N by definition is positive.This completes the proof of the theorem. In the particular case, α = 0, the partition function can be calculated in closed form (see e.g.[14] for details) and is given by Moreover, its Taylor coefficients Z N are given by the Catalan numbers, For later use, we also note that Finally, we shall also need the asymptotic behaviour for large N of the Taylor coefficients Z N,M of the function i.e. the contribution to W α from trees of height at most M .Since each X m is a rational function of g by ( 6), the same holds for W α,M , and it has a unique pole closest to g = 0 which is simple and located at g M given by (9).Denoting its residue by r M , it follows that for N large. Clearly, the sets A M i , i = 1, . . ., R, are disjoint subsets of A, if N > |T 0 | + RM , and hence Moreover, for such where the modified partition function ZK,M is given by ZK,M = Here, the last inequality is a consequence of (73), assuming that M > M 0 .With notation as in section 3.2, the last sum in (76) equals Z K − Z K,M , and according to (66) and (69) the bound holds for fixed M and for N large enough, where we have also taken into account that g M ≤ 1 and , the right-hand side of (77) decays exponentially with N .It thus follows from (66), for fixed M and fixed N i , i = i 0 , fulfilling the summation constraints of (75), that Using this together with (76) and (74), we get for any M > M 0 and N sufficiently large that which is equivalent to (72). For ǫ > 0 and M > M 0 , let us denote the large-N limit of the right hand side of (72) by Λ(T 0 , M, ǫ), i.e. 
Λ(T and let Λ(T 0 ) := lim Lemma 4.2.For any r ∈ N it holds that Proof.We use an inductive argument.For r = 1 the statement trivially holds, so let r ≥ 2 be arbitrary and assume (80) holds for r−1.Recall that the R-factor in (79) originates from summing over the position i 0 of the long branch out of R branches.This branch has an ancestor j in T 0 at height r − 1, which is the root of the branch T i 0 .Letting R ′ denote the number of vertices at height r − 1, it follows that summing over trees T 0 of height r with T ′ 0 := B r−1 (T 0 ) fixed and with a marked root edge of the large branch amounts to summing over all possible choices of the remaining R − 1 edges at maximal height.Since the number of such choices equals Since the last expression in (81) equals Λ(T ′ 0 ), this completes the proof. Corollary 4.3.The sequence (µ N ) N ∈N of measures on T given by (1) is tight. Proof.It is easy to verify (see e.g.[14] for details) that sets of the form where (K r ) r∈N is any sequence of positive numbers, are compact.In order to establish (71) for sets of this form, it is sufficient to show that for every δ > 0 and r ∈ N there exists K r > 0, such that Indeed, choosing δ to be r-dependent of the form δ r = ε 2 r and defining C by (82) for the corresponding values of K r determined by (83), we obtain (84) Concluding remarks We note that, despite the widely varying singular behaviour of the generating function W α (g): it is finite at the critical point for α < 1, has a logarithmic divergence when α = 1 and a power-like divergence for α > 1, we have found that the local limit of the distributions (1) is independent of the exponent α ∈ R. Whether more general BGW trees respond in a similar way to a powerlike height coupling, we do not know, since our approach relies on knowing the explicit form of the generating function for fixedheight partition functions, given in (8), the analogue of which is not generally available.It is, however, conceivable that more general techniques based on recursion relations alone could be developed.Another way of extending the results of this paper would be allowing different forms of height couplings.In [17], we consider weights of exponential form, k h , at fixed size, partly motivated by findings in [16], where an analysis of certain statistical mechanical models of loops on random so-called causal triangulations is carried out.Via a bijective correspondence between causal triangulations and rooted planar trees, it turns out that some of those models can be related to models of planar random trees with exponential height coupling with k > 1.As shown in [17], it turns out that the local limits exhibit qualitatively different behaviours, depending on whether 0 < k < 1, k = 1, or k > 1. Recalling the definition (25) of W Here, the first term inside round parenthesis is harmonic and yields the contribution Next, we proceed to estimate |S − I| as before by splitting the summation and integration domains corresponding to r < m|z| ≤ 1 r and m|z| > 1 r and similarly for s.Calling the corresponding sums and integrals S 2 , S 3 and I 2 , I 3 , respectively, we first rewrite 1 is defined by(27).Consider first the contribution S 1 to this sum from m ≤ r |z| , where r satisfies (31).With notation as in Lemma 3.2 b) we rewrite
6,398
2021-12-14T00:00:00.000
[ "Mathematics", "Physics" ]
Myoblast 3D bioprinting to burst in vitro skeletal muscle differentiation Abstract Skeletal muscle regeneration is one of the major areas of interest in sports medicine as well as trauma centers. Three-dimensional (3D) bioprinting (BioP) is nowadays widely adopted to manufacture 3D constructs for regenerative medicine, but a comparison between the available biomaterial-based inks (bioinks) is missing. The present study aims to assess the impact of different hydrogels on the viability, proliferation, and differentiation of murine myoblasts (C2C12) encapsulated in 3D bioprinted constructs aimed at muscle regeneration. We tested three different commercially available hydrogel bioinks based on: (1) gelatin methacrylate and alginate crosslinked by UV light; (2) gelatin methacrylate, xanthan gum, and alginate-fibrinogen; (3) nanofibrillated cellulose (NFC)/alginate-fibrinogen crosslinked with calcium chloride and thrombin. Constructs embedding the cells were manufactured by extrusion-based BioP and C2C12 viability, proliferation, and differentiation were assessed after 24 h, 7, 14, 21, and 28 days in culture. Although viability, proliferation, and differentiation were observed in all the constructs, among the investigated bioinks the best results were obtained using the NFC/alginate-fibrinogen-based hydrogel from 7 to 14 days in culture, when the embedded myoblasts started fusing, forming multinucleated myotubes within the 3D bioprinted structures at day 21 and day 28. The results revealed extensive myotube alignment along the linear structure of the hydrogel, demonstrating cell maturation and enhanced myogenesis. The bioprinting strategies that we describe here represent a strong and well-supported approach for the creation of in vitro artificial muscle to improve skeletal muscle tissue engineering for future therapeutic applications. | INTRODUCTION Skeletal muscle has a great capacity to self-repair and regenerate in response to common acute injuries, such as exercise-induced damage (Giarratana et al., 2020; Ronzoni et al., 2021). This is principally due to a resident stem cell population that is mainly involved in skeletal muscle homeostasis and regeneration. It has been demonstrated that these muscular progenitor cells are able to fuse, forming myotubes even when treated with recombinant proteins (Agosti et al., 2020; Perini et al., 2015; Ronzoni et al., 2011, 2017). However, when muscle loss becomes irreversible (e.g., in case of severe trauma, invasive surgeries, degenerative diseases, or because of aging), lesions are so critical that they impair muscle functionality (Young, 1964). In this scenario, muscle regenerative medicine can provide solutions (Langridge et al., 2021; Ronzoni et al., 2020). Several studies have focused on the production of an ideal structure to induce muscle tissue regeneration, including biochemical components to ensure efficient myogenic differentiation and maturation, resulting in thick and elongated myotube formation (Kang et al., 2016). However, the current challenge is to ensure the uniform growth of muscle cells inside the biomaterial and to induce a contractile syncytium similar to the native skeletal muscle structure (Chen, 1993), even though, over the years, different biomaterials and scaffold designs have been experimentally and/or clinically evaluated for the repair of skeletal muscle tissue.
In particular, porous three-dimensional (3D) scaffolds have been manufactured using natural or synthetic polymers (Melchels et al., 2012), hydrogels (Baar et al., 2005; Fedorovich et al., 2008; L'Heureux et al., 2006; Stevens et al., 2009; Visser et al., 2013), decellularized extracellular matrix (dECM), and their composites (Ott et al., 2008). Several advantages emerge from the use of such natural hydrogels, such as mimicking the skeletal muscle environment, providing bioactive signaling for muscle differentiation, and resorption of the biomaterial to allow the in vivo interaction of myofibers (Lev & Seliktar, 2018). Also advantageous is the use of dECMs, which preserve the native tissue architecture, facilitate the adhesion of muscle cells and promote the regeneration of the damaged tissue area (Lev & Seliktar, 2018; Wolfe & Sell, 2011). Nevertheless, there are some limitations associated with the use of such natural materials; for instance, the inadequate supply of nutrients to the cells in the central portion of the bioconstruct or, in the case of dECMs, the long incubation times required to observe effective functional recovery of the damaged tissue (Smoak & Mikos, 2020). As for synthetic polymeric matrices, they do not guarantee good cell adhesion, they are poorly absorbable, and there is a greater risk of activating an immune response in patients; therefore, they are not considered biocompatible (Lev & Seliktar, 2018). Costantini et al. (2017) encapsulated C2C12 murine myoblasts into gelatin methacryloyl hydrogel (CELLINK ® GelMA - CELLINK AB, Gothenburg, Sweden) using a 3D mold to evaluate 3D cell culture in terms of in vitro myogenesis; moreover, they demonstrated that both hydrogel stiffness and geometrical confinement play a crucial role in the differentiation of myogenic precursors in a three-dimensional environment. Seyedmahmoud et al. (2019), on the other hand, encapsulated C2C12 not only in CELLINK ® GelMA, but also in CELLINK ® GelMA mixed with different percentages of alginate (6% and 8%). They demonstrated that the alginate percentage can provide a more favorable mechanical microenvironment for murine myoblast (C2C12) proliferation and an optimal niche to induce muscle tissue formation. Bauer and colleagues (Costantini et al., 2018) demonstrated that spreading and proliferation of C2C12 cells encapsulated into alginate-based hydrogel were impacted by both the stiffness and the stress relaxation behavior of the substrates created by 3D molding. In addition, Matthias et al. (Costantini et al., 2018) Furthermore, thanks to the advancement of additive manufacturing, three-dimensional bioprinting is nowadays a widely adopted technique for both manufacturing 3D scaffolds and constructs in various tissue engineering approaches (Nikolova & Chavali, 2019). In fact, BioP not only allows the production of scaffolds whose geometry can be controlled thanks to the use of specific software, but it can also be exploited for the manufacturing of different scaffolds based on different biomaterials in which different cell types can be encapsulated (Derby, 2012; Leong et al., 2003; Murphy & Atala, 2014). The outcome of BioP, which is a complex process defined by several steps, is conditioned by the printing technology and the biomaterial adopted, which, when combined with cells, defines the so-called bioink (Groll et al., 2019; Matai et al., 2020; Ng et al., 2019).
Bioprinting techniques can be classified according to the printing method; in particular, it is possible to distinguish three main BioP techniques: inkjet, extrusion, and vat polymerization (AmerDababneh & Bioprinting Technology, 2014). These techniques vary in precision and accuracy in the deposition of the material, stability, and cell survival. Inkjet-based BioP was the first technique to be implemented. The bioink solution is manipulated by generating droplets, which are deposited on a substrate using a small nozzle. The jet delivered can be of three types: continuous, on demand (drop-on-demand) and electrodynamic (Gudapati et al., 2016). This technique offers many advantages thanks to its simplicity, versatility, and control over bioink deposition, allowing the volume of bioink to be deposited to be controlled. The disadvantage is that the inkjet technique does not allow high-viscosity bioinks to be processed. Extrusion-based BioP is a combination of a pneumatic or mechanical fluid dispensing system and an automatic robotic system for extrusion and 3D printing (Jiang et al., 2019). The bioink is dispensed by a deposition system onto a substrate on which, by means of light, chemical solutions or thermal transistors, crosslinking of the bioink takes place, thus obtaining the deposition of cells encapsulated in cylindrical filaments and allowing the creation of 3D structures. Mechanical extrusion of the bioink solution involves the use of a piston or a screw, while pneumatic extrusion involves the use of compressed air. Although extrusion BioP is the most widely used technique in this field, there are some limitations to the realization of the desired structure, such as shear stress and the limited selection of materials due to the need to encapsulate the cells inside the bioink and its rapid gelling. Vat polymerization-based bioprinting uses different photoinitiators and UV light during the bioprinting process for crosslinking the hydrogel (Ng et al., 2020). Although this technique allows for the creation of high-resolution 3D constructs, the UV light used for crosslinking can damage the cells, with a consequent reduction in the ability of cells to proliferate and differentiate. Given these premises, in the case of BioP for muscle regeneration as well, the selection of appropriate biomaterials and of the resulting bioink is vital to obtain the desired biological outcomes. Among the various solutions proposed in the literature, and thanks to their features, hydrogels combined with MP cells (C2C12) are commonly used as bioinks for skeletal muscle regeneration (Langridge et al., 2021; Malda et al., 2013). In fact, hydrogels are known to be materials with high biocompatibility and biodegradability. In addition, their mechanical properties can be modulated by the degree of chemical, thermal, or photo-crosslinking, to tune the elastic modulus to be as similar as possible to that of skeletal muscle tissue (Fischer et al., 2020). Hydrogel-based bioinks interact with cells in vitro and in vivo, so their viscosity may be optimized to maintain cell integrity and viability during the printing process. For this purpose, it is possible to use natural (chitosan, alginate, collagen, fibrin, etc.) and synthetic polymers. | MATERIALS AND METHODS Murine myoblasts were mixed with three commercial hydrogels (CELLINK ® GelMA A, CELLINK ® GelXA FIBRIN, CELLINK ® FIBRIN) and extruded with a pneumatic extrusion-based bioprinter (INKREDIBLE+ ® ).
In the resulting constructs, C2C12 proliferation and differentiation were analyzed at different time points (24 h, 7, 14, 21, and 28 days) using morphological tests (Live/Dead staining and immunofluorescence [IF]). Molecular biology tests were also performed to quantify the gene expression of specific myogenic markers involved in muscle fiber maturation. | Hydrogels and crosslinkers The experiments were performed using a commercially available gelatin-based and alginate hydrogel (CELLINK ® GelMA A), a xanthan gum and fibrinogen hydrogel (CELLINK ® GelXA FIBRIN), and a nanofibrillated cellulose (NFC)/alginate-fibrinogen-based hydrogel (CELLINK ® FIBRIN). Gelatin-based and alginate hydrogel (CELLINK ® GelMA A). The chemical composition of this hydrogel is a blend of CELLINK ® GelMA and alginate, offering a higher printability compared to pure CELLINK ® GelMA hydrogels. This is due to the softening provided by the alginate and to essential properties of the native ECM that allow cells to proliferate and spread. CELLINK ® GelMA A 3D constructs were crosslinked by photopolymerization or through the addition of the ionic crosslinking solution (50 mM CaCl2). Xanthan gum and fibrinogen hydrogel (CELLINK ® GelXA FIBRIN). This hydrogel incorporates a GelMA base, xanthan gum, and alginate to enhance the printability and stability of the 3D constructs, while fibrin improves muscle cell proliferation and differentiation. A combination of photoinitiator-assisted and ionic crosslinking was applied. Table 1 summarizes the hydrogels and the corresponding crosslinkers used for each round of 3D printing experiments. In addition, rheological tests were carried out directly by CELLINK (CELLINK AB) for each hydrogel (Figure S1). | Bioprinting process Before starting the printing process, the bioprinter was placed under a sterile hood and the UV light was turned on for 1 h to sterilize all the materials and surfaces. Hydrogel was mixed with C2C12 cells (10:1 ratio). The cartridge was filled with bioink, the nozzle (inner diameter 0.25 mm) was then connected, and the cartridge was finally placed into the printhead. The axes were homed, the z-axis was calibrated, and the pressure and printing speed were set according to standard guidelines (10-15 kPa and 1000 mm/min, respectively, for all bioinks tested). The 3D constructs were bioprinted on a Petri dish, then the crosslinking process was performed as follows. For chemical crosslinking, CaCl 2 droplets were applied to cover the whole 3D structure and, immediately after, the samples were incubated for 5 min at room temperature (RT). The crosslinking solution was subsequently removed from the constructs and DMEM complete culture medium was added. Dishes were then incubated at 37°C and 5% CO 2 . Only the chemical crosslinking process was repeated weekly before medium refreshment to keep the three-dimensional structure unchanged and avoid degradation. For UV crosslinking, 3D constructs were exposed once to UV light at 365 nm for approximately 3-5 s. | 3D structure To mimic the morpho-physiological muscle fiber structure, 3D lines formed by a single layer were bioprinted. Line length was set at 20 mm, while the line thickness is determined by the combination of pressure and printing speed; in this case, it is equal to 0.35 mm (Figure 1). Given the simplicity of the structure considered, we directly implemented the G-code of the 3D virtual model (a minimal sketch is given after this section). | Cell culture of 3D constructs 3D bioprinted constructs were cultured up to 28 days in DMEM complete medium at 37°C and 5% CO 2 . The culture medium was refreshed every 3 days. 
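The single-layer line geometry described in the 3D structure section above is simple enough that its toolpath can be written directly. The following is a minimal, hypothetical Python sketch that emits such a G-code program for a 20 mm single-layer line; the feed rate matches the 1000 mm/min quoted in the bioprinting process, but the exact command set accepted by the INKREDIBLE+ firmware, as well as the start and end routine, are assumptions rather than the authors' actual G-code.

```python
def line_gcode(length_mm=20.0, layer_height_mm=0.35, feed_mm_per_min=1000):
    """Emit a minimal G-code program for a single-layer printed line.

    Assumptions (not taken from the paper): the printer accepts standard
    RepRap-style commands (G21/G90/G28/G0/G1); extrusion is driven
    pneumatically, so no E-axis values are issued here.
    """
    commands = [
        "G21                ; units in millimetres",
        "G90                ; absolute positioning",
        "G28                ; home all axes",
        f"G0 Z{layer_height_mm:.2f}            ; move to the single-layer printing height",
        "G0 X0 Y0           ; move to the start of the line",
        f"G1 X{length_mm:.1f} Y0 F{feed_mm_per_min}  ; print a straight 20 mm line",
        "G0 Z10             ; lift the nozzle clear of the construct",
    ]
    return "\n".join(commands)


if __name__ == "__main__":
    print(line_gcode())
```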
3D constructs were crosslinked every 3 days for 5 min. Four days after BioP, the differentiation process of the C2C12-laden bioink was induced by using a differentiation medium (DM) composed of DMEM supplemented with 2% fetal bovine serum. | Live/dead staining To evaluate cell viability, we used the Live/Dead staining (Invitrogen); 500 μL of a solution consisting of 1.5 mL of phosphate buffered saline (PBS), 3 μL of EthD-1, and 1.5 μL of calcein was added to the 3D constructs. Samples were incubated for 45 min in the dark, then the solution was removed, and cell nuclei were counterstained with 500 μL of 4′,6-diamidino-2-phenylindole (DAPI) for 10 min according to the protocol. Fluorescent image acquisition was carried out with a semi-confocal microscope (ViCo confocal, Nikon). Viability and differentiation tests were performed, as well as morphological and gene expression analyses, at six different time points (1, 4, 7, 14, 21, and 28 days in culture). | Total RNA extraction and quantitative real-time PCR Expression levels of myogenic genes were analyzed on 3D bioprinted constructs by quantitative real-time PCR (RT-qPCR). Total RNA derived from each sample was extracted and isolated at different time points using 300 μL of lysis buffer (TRIzol Reagent). Total RNA extraction was performed by using Direct-zol RNA Miniprep reagents following the manufacturer's protocol (Zymo Research). Total RNA was then quantified by NanoDrop™ (Thermo Fisher Scientific). cDNAs obtained from 350 ng of RNA were reverse transcribed using the iScript™ cDNA Synthesis Kit (Bio-Rad) and quantitative PCR analysis was performed using oligonucleotide primers | Immunofluorescence assay Immunofluorescence assay on in vitro 3D constructs was performed (1:40). Samples were counterstained with DAPI to detect nuclei, washed three times with a washing buffer, and ultimately mounted. Finally, sections were observed with a semi-confocal microscope (ViCo confocal, Nikon), supported by the ImageJ PRO 6.2 software. Cells had not merged to form myotubes. This is probably due to a non-homogeneous diffusion of the crosslinking solution or to lower oxygen and nutrient levels within the 3D constructs (Figure 2i). | Live/dead staining Finally, at 21 and 28 days, C2C12 cells merged to form myotubes even in the most central part of the 3D structure, and the alignment was promoted by the linear shape of the printed construct (Figures 2l,m). Regarding C2C12 cells laden in CELLINK ® GelXA hydrogel, 94% viability was observed at all the time points analyzed (Figure 2n-o). Nevertheless, at 7 and 14 days in culture, cells kept a round shape and started to elongate slowly only at day 21, especially at the borders of the constructs (Figure 2o-q). Live/Dead staining in proliferative conditions was also performed on CELLINK ® FIBRIN 3D constructs, crosslinked with CaCl 2 and thrombin. We observed no advantages in terms of cell viability, adhesion, spreading, and differentiation (data not shown). | Gene expression analysis of cell-laden structures by quantitative real-time PCR Gene expression analyses were performed to evaluate and validate the observed differentiation rate of C2C12 cells laden into CELLINK ® FIBRIN hydrogel at 7, 14, 21, and 28 days and into CELLINK ® GelXA FIBRIN hydrogel at 7, 14, and 21 days in culture, in proliferative and differentiative conditions (Figure 5 and Figure 6). The expression levels of myogenic genes such as MyoD and MCK in the 3D structures were detected by RT-qPCR and normalized to the PGK gene. 
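The RT-qPCR section above states that MyoD and MCK expression was normalized to the PGK gene but does not spell out the quantification formula. A common choice for this kind of analysis is the Livak 2^(-ΔΔCt) method; whether that is the method actually used here is an assumption. The sketch below shows the calculation with placeholder Ct values that are purely illustrative and are not the paper's data.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative quantification by the Livak 2^(-ddCt) method (assumed, not stated in the paper).

    ct_target, ct_ref:         Ct of the gene of interest and of the reference gene
                               (e.g. PGK) in the sample of interest.
    ct_target_cal, ct_ref_cal: the same quantities in the calibrator sample.
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)


# Placeholder Ct values (illustrative only): a target gene vs PGK,
# in a day-7 sample relative to a day-1 calibrator.
fold_change = relative_expression(ct_target=24.1, ct_ref=18.3,
                                  ct_target_cal=26.0, ct_ref_cal=18.5)
print(f"relative expression: {fold_change:.2f}-fold")
```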
Regarding the CELLINK ® FIBRIN hydrogel, after 7 days of culture in proliferative conditions, MyoD and MCK were expressed 1.2-fold higher in 2D than in 3D cultures (Figure 5a). Similarly, after 7 days in DM, the expression of both genes was 1.8-fold higher in 2D than in 3D (Figure 5a, p > 0.05). Gene expression was also evaluated for the other hydrogel (CELLINK ® GelMA A), but no statistical differences were highlighted among the samples (data not shown). In conclusion, the CELLINK ® FIBRIN hydrogel, as also indicated by Live/Dead staining, improves the myogenic gene signature and proves to be the best bioink to promote myoblast alignment along the printed filament. | DISCUSSION In this study, we demonstrated the impact of different types of hydrogels on the viability, proliferation, and differentiation of murine myoblasts encapsulated in 3D constructs manufactured by pneumatic extrusion-based BioP. Skeletal muscle tissue engineering represents a revolutionary branch of regenerative medicine which aims to recreate muscles in vitro to be studied ex vivo and, ultimately, to substitute diseased or damaged muscle tissue. Up to the present time, many different strategies have been proposed, even if not fully suitable for a potential therapeutic application (Fuoco et al., 2016; Levenberg et al., 2005; Shadrin et al., 2016; Sicari et al., 2014). One of the main issues encountered was the identification of the best hydrogel to achieve sarcomerogenesis and the parallel-oriented myofiber organization resembling the correct skeletal muscle structure. Therefore, to improve skeletal muscle tissue engineering, innovative techniques are required to produce engineered constructs with precise 3D structures. To date, pioneering technologies are revolutionizing many different manufacturing fields, including tissue engineering (Costantini et al., 2018). In particular, 3D BioP techniques have shown prodigious potential for the rapid and cost-effective fabrication of cellularized structures, up to the building of human-sized myo-constructs (Agosti et al., 2020; Mozetic et al., 2017; Ott et al., 2008). Although in this study we focused on extrusion-based bioprinting, there are other works that use different techniques such as inkjet and vat polymerization. For example, inkjet-based bioprinting was used to fabricate biocompatible substrates for an electrostimulation device to guide cell alignment and enhance myotube differentiation (Fortunato et al., 2018). In this study, we tested multiple commercially available hydrogels characterized by specific compositions and rheological capabilities to understand which biomaterial best promotes the formation of a functionalized myo-construct. We did not perform a thorough rheological characterization of the different bioinks used to understand the shear stress experienced by the cells during the printing process (Lucas et al., 2020). [Figure caption: statistically significant values are indicated as *0.01 < P < 0.05 and **P < 0.01; an analysis of variance test was performed to evaluate data significance.] This could be related to the specific formulation and structure of each biomaterial. The internal structure of the hydrogels is crucial to metabolite transport inside the 3D constructs. 
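The statistical note above indicates that an analysis of variance test was used to evaluate significance between groups. As a minimal illustration of such a test, the sketch below runs a one-way ANOVA with SciPy on placeholder relative-expression values; the group names and numbers are hypothetical and do not reproduce the paper's measurements.

```python
import numpy as np
from scipy import stats

# Placeholder relative-expression values (hypothetical, not the paper's data),
# one array per hydrogel group at a single time point.
gelma_a      = np.array([1.0, 1.1, 0.9, 1.2])
gelxa_fibrin = np.array([1.6, 1.4, 1.7, 1.5])
fibrin       = np.array([2.1, 2.4, 2.0, 2.2])

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(gelma_a, gelxa_fibrin, fibrin)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
# Following the significance convention quoted above: * for 0.01 < P < 0.05, ** for P < 0.01.
```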
Nutrient, oxygen, and protein spreading, as well as cell migration and differentiation, are supported by diffusion within any matrix with embedded cells. These findings denote a remarkable improvement, as it has been shown that fibrinogen-related biomaterials stimulate cell adhesion, spreading, and differentiation of multiple cell sources, including myogenic progenitor cells, especially due to their biodegradable and non-immunogenic features (Almany & Seliktar, 2005; Centola et al., 2013; Fuoco et al., 2012, 2014, 2015). These characteristics, combined with hydrogels composed of fibrinogen/gelatin, allowed the creation of myo-constructs containing myogenic progenitors (C2C12) in precisely defined structures, promoting myotube formation and alignment (Figures 2 and 3). Even if recent studies investigated 3D printing techniques for skeletal muscle tissue engineering (Karande et al., 2004; Mironov et al., 2008), the results achieved were still poor, highlighting an unsatisfactory structural organization both in vitro and in vivo. Conversely, in this paper we showed a significant morphological organization of the myotubes, resembling mature sarcomerogenesis (Figure 4). Finally, while the use of any of these hydrogels requires further optimization to maximize their functional and myogenic properties, the obtained results provide a knowledge advance in the field and a promising tool for skeletal muscle tissue engineering. | CONCLUSION We performed a comparative study of hydrogel behavior, testing the myogenic properties of the hydrogels over a long time course (28 days) to analyze how the biomaterial matrix could improve muscle precursor cell (C2C12) viability and differentiation. The linear 3D printed structures were tested in vitro to assess their ability to stimulate myogenesis. Our results clearly showed that the CELLINK ® FIBRIN and, to a slightly lesser extent, the CELLINK ® GelXA FIBRIN hydrogels demonstrated the best potential to support the in vitro long-term differentiation of skeletal muscle cells in 3D constructs. After 21-28 days in culture, myogenic cells were able to fuse together, forming structurally aligned myotubes with high expression levels of specific skeletal muscle markers such as the Myogenic Differentiation 1 and MCK genes. Taken together, the results reported herein represent a significant step toward improving skeletal muscle tissue engineering.
4,751.6
2022-03-04T00:00:00.000
[ "Engineering", "Medicine", "Materials Science" ]
Scattering bound states in AdS We initiate the study of bound state scattering in AdS space at the level of Witten diagrams. For concreteness, we focus on the case with only scalar fields and analyze several basic diagrams which more general diagrams reduce to. We obtain closed form expressions for their Mellin amplitudes with arbitrary conformal dimensions, which exhibit interesting behavior. In particular, we observe that certain tree-level bound state Witten diagrams have the same structure as loop diagrams in AdS. Introduction Holographic correlators play a central role in checking and exploiting the AdS/CFT correspondence. Thanks to the recent breakthroughs of the bootstrap methods, holographic four-point functions of 1 2 -BPS operators with arbitrary Kaluza-Klein masses have been systematically computed at tree level in a plethora of string theory/M-theory models [1][2][3][4][5][6]. 1 These bootstrap methods rely only on symmetries and basic consistency conditions, and circumvent the enormous difficulties related to the traditional method which stalled progress in this field for many years. Note that according to the standard recipe of AdS/CFT, Kaluza-Klein modes of the AdS supergravity fields are dual to "single-trace" operators in the CFT. 2 In the bulk, they are mapped to states which are "single-particle". However, in the dual CFT there are also "double-trace" (or more generally, "multi-trace") operators which are normal ordered products of single-trace operators. Correlation functions involving such operators can be viewed in the bulk as scattering processes where some of the scattering states are multiple-particle "bound states". In principle, such correlators are already contained in the set of all "single-trace" correlators because we can produce "double-trace" operators from taking the OPE limit. In practice, however, computing these bound state correlators via such a detour through higher-point functions seems rather inefficient. Already computing five-point functions is a highly nontrivial task even equipped with bootstrap techniques [14,15], and going beyond that to higher multiplicities presents serious challenges for the current technology. Therefore, it will be of great interest to develop a more straightforward approach that allows us to directly apply the bootstrap strategy to such correlators with bound state operators. In this paper, we make progress in this direction by initiating a study of the underlying Witten diagrams. This is necessary because the properties of these diagrams related to bound state scattering processes have not been explored in the literature. 3 In particular, there is currently no knowledge of their analytic structure in Mellin space, which will become important if we want to adapt the bootstrap methods of [1,2,[4][5][6] to this case. Another motivation for looking into these diagrams comes from the recent work [18]. The series of papers [19][20][21][22][23] developed an alternative approach to the bootstrap methods to compute holographic correlators in the AdS 3 × S 3 background. This approach starts from a "heavy-heavy-light-light" (HHLL) limit of the four-point function. The correlator in this limit can be computed semi-classically as the fluctuation dual to the light operators in a supergravity background created by the heavy operators. By taking a formal limit where the heavy operators become light, the HHLL correlator can produce four-point functions with all light operators. 
Extending this method, [18] managed to compute all light four-point correlators at tree level with two single-particle states and two n-particle bound states. Interestingly, [18] found that for n ≥ 2 the correlators necessarily contain higher order polylogarithms while in the single-particle case at most dilogarithms appear. Curiously, these higher order polylogarithms also show up in loop-level correlators of singleparticle operators [11,[24][25][26][27][28][29]. This seems to imply that certain tree-level diagrams with external bound states might share structural similarity with AdS loop diagrams. 4 In this paper, we will provide strong evidence that there is indeed such a connection. As a first step towards a more systematic exploration, we will limit ourselves to studying diagrams of scalar fields in this paper. More precisely, we will mostly focus on the three diagrams depicted in Fig. 1 which contain up to two bound states and only one bulk-to-bulk propagator. The bound states are of the "bi-particle" type and correspond to double-trace operators in the CFT. This might seem a very small set of diagrams. However, we will explain how a vast array of bound state tree-level diagrams with more bulk-to-bulk propagators can be reduced to these basic diagrams. Moreover, even with just these three diagrams, we find that there is already a rich spectrum of behavior. We will see that the first two diagrams are similar in structure to tree-level exchange diagrams with single-particle external states while the last diagram resembles a one-loop diagram in AdS. Due to the technical nature of this paper, we offer below a brief summary of the sections to help the reader to navigate it, and highlight some of our main results. In Section 2, we review the basic technical ingredients which will be used in this paper. This includes the Mellin representation which recasts correlators in a form similar to flat-space amplitudes and manifests their analytic structure. We will also review two important properties of AdS propagators. One is an integrated vertex identity that allows us to integrate out an internal line of the diagram connected to two external lines via a cubic vertex. The other is the so-called split representation of the bulk-to-bulk propagator. These two properties of the bulk-to-bulk propagator are used in Section 3 in a warmup example where we compute the tree-level exchange Witten diagram with single-particle states as its external states. We reproduce the well known result in the literature and the Mellin amplitude takes the following form m are constants and ∆ is the conformal dimension of the exchanged scalar field. We used • to denote an external single-particle operator while later we will also use • to denote a two-particle bound state. We begin to consider diagrams with bound states in Section 4. We will compute Fig. 1a using two methods. The first method is based on the integrated vertex identity, and generalizes the single-particle case considered in the warm-up section. The second method computes the diagram by taking a coincidence limit of a five-point single-particle diagram. Both approaches lead to the same answer for the Mellin amplitude which has the following schematic form Here C (1) m are constants and ∆ 5 is the conformal dimension of the additional scalar line that makes the single-particle exchange diagram the bound state diagram Fig. 1a. 
From the expression, we find that the Mellin amplitude is quite similar to the exchange amplitude M •••• , except that the poles are now at shifted locations. In Section 5 we study the diagram in Fig. 1b which has two bound states and was dubbed "Type I". The method based on the integrated vertex identity can also be applied to this case and leads to the following Mellin amplitude Here the numerators C m do not depend on Mandelstam variables and ∆ 5 , ∆ 6 are the conformal dimensions of the two additional scalar lines. The amplitude again has a "treelike" analytic structure. We will also confirm this result by reproducing it from taking a coincidence limit of a six-point function. We consider the "Type II" two bound state diagram (Fig. 1c) in Section 6, which turns out to have a drastically different structure in Mellin space. The method using the integrated vertex identity no longer applies here because the all the vertices are quartic. However, we can still compute this diagram by taking the coincidence limit. We find that its Mellin amplitude has the form of a sum of simultaneous poles Remarkably, this is the same structure of one-loop correlators found in AdS 5 × S 5 IIB supergravity and AdS 5 × S 3 SYM [12,30,31]. This connection is further sharpened in Section 6.2 where we look at a family of examples of Fig. 1c with special conformal dimensions. We will show that in the flat-space limit the Mellin amplitudes of these diagrams reduce to the massless one-loop box diagram in flat space. Finally, in Section 7 we discuss how we can use the three diagrams in Fig. 1 as building blocks to obtain other more complicated diagrams. We conclude in Section 8 with an outline of future research directions. The paper also contains several appendices where we relegate additional technical details and collect useful formulae. Mellin representation To discuss holographic correlators, a convenient language is the Mellin representation [32,33]. In this formalism, holographic correlators in general display simple analytic structure. In particular, tree-level correlators with external single-particle states have Mellin amplitudes similar to flat-space scattering amplitudes. An n-point function of scalar operators is represented as a multi-fold inverse Mellin transformation Here we have defined x 2 ij ≡ ( x i − x j ) 2 , and we can set The variables γ ij then satisfy the same set of constraints as the flat-space Mandelstam variables, except that the external squared masses are now replaced by the conformal dimensions m 2 i = ∆ i . The function M(γ ij ) is defined to be the Mellin amplitude which contains all the nontrivial dynamic information. Let us also write down the case of n = 4 explicitly. We can write the correlator as are the conformal cross ratios. The function G(U, V ) is represented by 6) and the Mandelstam variables satisfy s + t + u = 4 i=1 ∆ i . Basics of AdS diagrams The Witten diagrams which we will consider in this paper are built from AdS propagators, following rules that are similar to the flat-space position space Feynman rules. These propagators are Green's functions in AdS and can further be divided into bulk-to-bulk or bulk-to-boundary depending on the points of insertions. To write down these propagators it is useful to introduce the so-called embedding space formalism. In this formalism, a point x µ in R d is represented by a null ray P A in a d + 2 dimensional embedding space The nonlinear conformal transformations are linearized in the embedding space as rotations in R d+1,1 . 
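Referring back to the Mellin representation introduced at the start of this section: in one common convention (cf. [32,33]), consistent with the constraint s + t + u = Σ_i Δ_i quoted above, the four-point function is written in terms of the cross ratios as in the block below. Whether the paper's normalization agrees with this convention in every detail is an assumption.

```latex
% Cross ratios and one standard form of the four-point Mellin representation;
% the overall normalization used in the paper may differ.
U = \frac{x_{12}^2 x_{34}^2}{x_{13}^2 x_{24}^2}\,, \qquad
V = \frac{x_{14}^2 x_{23}^2}{x_{13}^2 x_{24}^2}\,, \qquad
u \equiv \sum_{i=1}^{4}\Delta_i - s - t\,,
\\[4pt]
G(U,V) = \int \frac{ds\,dt}{(4\pi i)^2}\;
U^{\frac{s}{2}}\, V^{\frac{t-\Delta_2-\Delta_3}{2}}\, M(s,t)\,
\Gamma\!\Big[\tfrac{\Delta_1+\Delta_2-s}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_3+\Delta_4-s}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_2+\Delta_3-t}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_1+\Delta_4-t}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_1+\Delta_3-u}{2}\Big]
\Gamma\!\Big[\tfrac{\Delta_2+\Delta_4-u}{2}\Big]\,.
```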
To make connection with the coordinates x µ , we can fix the rescaling degree of freedom and parameterize the null ray as where the signature is (−, +, +, . . . , +). The distance in R d is represented in the embedding space as The AdS space can also be conveniently represented by the embedding space. A point with Poincaré coordinates z µ = (z 0 , z) becomes a point Z in R d+1,1 Using this formalism, the bulk-to-boundary propagator of a scalar field with dimension ∆ reads G ∆ B∂ (P, Z) = 1 (−2Z · P ) ∆ . (2.11) The scalar bulk-to-bulk propagator is given by and (2.14) The bulk-to-bulk propagator satisfies the equation of motion The two simplest tree-level diagrams we can construct from these propagators are the exchange Witten diagram (Fig. 2a) Here we have used the notation W •••• , where the symbol • denotes a single-particle state. This is in anticipation of later discussions of diagrams with external bound states which are denoted by •. The contact Witten diagram W contact is commonly known in the literature as the D-function and is denoted by D ∆ 1 ∆ 2 ∆ 3 ∆ 4 . Before we proceed, let us point out two useful properties of these propagators which we will use in this paper. The integrated vertex identity The first useful property is an identity about the following three-point integral which involves two bulk-to-boundary propagators and one bulk-to-bulk propagator. The bulk-to-bulk propagator can be integrated out and the integral reduces to a sum of products of bulk-to-boundary propagators [34,35] and ] . (2.21) We will refer to this identity as the integrated vertex identity and it is diagrammatically depicted in Fig. 3. Using this identity we can, for example, write the exchange Witten diagram (Fig. 2a) as the sum of infinitely many contact Witten diagrams (Fig. 2b). The split representation The second useful property is the so-called split representation for the bulk-to-bulk propagator [36] and is illustrated in Fig. 4. The bulk-to-bulk to propagator can be written as a product of a pair of bulk-to-boundary propagators with dimension d 2 + c and d 2 − c along the principal series, and is further integrated over the boundary point and the parameter c. More precisely, we have where we have defined Warm-up: No bound states Let us warm up in this section with the simple case of an exchange Witten diagram (2.16) where all the external operators are dual to single-particle states. This is a standard example in the literature and the answer has been known for a long time. Our purpose of revisiting this example is to demonstrate the techniques reviewed in Section 2, which will later be applied to more complicated examples. Using the integrated vertex identity Let us first compute this diagram using the integrated vertex identity. Using (2.19), we can write the exchange Witten diagram (2.16) as This form of the answer is not particularly illuminating, therefore we now translate it into Mellin space. The Mellin amplitude of a D-function is just a constant [33] Simple manipulations then give the Mellin amplitude for the following type of functions which appear in the RHS of (3.1) with n = 4. 
Here we require the parameters n ij and ∆ n i to satisfy n ij = n ji , n ii = 0 , so that the external dimensions of each D Using the Mellin representation (3.2) on the RHS of (3.4), we find that, after appropriately shifting the variables, the Mellin representation of D Specifying to our current case, we arrive at the following expression for the Mellin amplitude of the exchange Witten diagram On the other hand, it is known that the Mellin amplitude has the following analytic structure This structure is anticipated from the large N expansion analysis [33] and can be rigorously derived by using the Casimir equation (equation of motion identity) in Mellin space. 5 Therefore, we can just focus on the poles at s = ∆ + 2m in (3.8), and we get Using the split representation In this subsection, we will use split representation to compute the Mellin amplitudes of W •••• . Using the split representation for bulk-to-bulk propagator G ∆ BB (Y, Z) in the definition (2.16), W •••• can be written as and (3.14) 5 More precisely, the identity is given by where Cas = − 1 2 (L AB 1 + L AB 2 )(L1,AB + L2,AB) is the bi-particle quadratic conformal Casimir built from the conformal generators L AB 1,2 acting on operators 1 and 2. This identity follows from (2.15) which is the equation of motion for the AdS scalar field, and translates into a difference equation for the Mellin amplitude in the Mellin space. For more details, see for instance Appendix C of the review [7]. The left and right three-point amplitudes can be recast in the Mellin representation through Here the integration measure [dγ] L [dl] l satisfies The integral over the boundary is conformal because 4 i=1 l i = d and can be evaluated through the Symanzik formula [37] dP 0 where the measure is constrained by After that, one can shift γ ij by γ ij → γ ij −γ ij for 1 ≤ i < j ≤ 2 as well as 3 ≤ i < j ≤ 4. This gives the correct coordinate dependence factor in the definition (2.1) and allows one to easily read off the Mellin amplitudes M •••• , which is given by By solving the constraints, we compute the integral over , where we defined and . (3.24) By pinching the c-contour between two colliding poles in c, we can find poles in s, given by As a result, M •••• can be written as and we find the same residues C (0) m as given in (3.11). Four-point function with one bound state We now proceed to compute the diagram with one bound state, as is depicted in Fig. 5. 7 We will use two approaches. The first approach (Section 4.1) uses the integrated vertex identity and is a straightforward generalization of the calculation presented in Section 3. The second approach (Section 4.2) obtains the bound state diagram from taking a coincidence limit of a five-point diagram with single-particle external states. propagators. For example, one may consider a diagram where the propagator with ∆5 starts from the same bulk point but ends on 4. However, this case is trivial because the bulk-to-boundary propagators satisfy the relation (P, Z) and the diagram reduces to the diagram (2.16) without bound states. Therefore, the only nontrivial bound state Witten diagram is Fig. 5 up to permutations. Using the integrated vertex identity The computation of this diagram is similar to that of Section 3.1. We apply the integrated vertex identity to the cubic vertex integral involving the three propagators with dimensions ∆ 1 , ∆ 2 and ∆. This again turns the diagram into infinite sums of D-functions Translating this result into Mellin space, we get an expression similar to (3.8). 
The expression has poles at s = ∆ + ∆ 5 + 2m for m ∈ Z ≥0 . Therefore, we can rewrite the amplitude as a sum over these simple poles. However, we can no longer use the equation of motion identity to rule out additional regular terms because the bi-particle Casimir operator for x 1,2 necessarily acts on the bulk-to-boundary propagator with dimension ∆ 5 as well. On the other hand, for special dimensions satisfying ∆ 1 + ∆ 2 − ∆ ∈ 2Z ≥0 , we can verify that regular terms from the two sums cancel and there is no regular part in the Mellin amplitude. 8 Therefore, we will assume in this subsection that the absence of the regular term is a general feature and the Mellin amplitude has the form But in the ensuing subsection, we will reproduce this result using a complementary method which also allows us to prove that the regular term is absent. Having determined the analytic structure (4.2), computing C (1) m is straightforward. These coefficients can be extracted from the residues of (4.1) in Mellin space and read where Note that setting ∆ 5 = 0 reduces the Witten diagram to the exchange Witten diagram in Section 3. We find reproducing the expression (3.9). From the coincidence limit In this subsection, we will rederive the Mellin amplitude M •••• by taking the coincidence limit of P 1 → P 5 in Fig. 6. Specifically, the four-point diagram where the integration measure [dγ ij ] 5 is constrained by for 1 ≤ i ≤ 5, and we defined s by The first step of the calculation is to compute the five-point Mellin amplitude M ••••• , which can be obtained by using the split representation. Because the calculation is very similar to the four-point case presented in Section 3.2, we will omit the details and just write down the result. The Mellin amplitude reads where . (4.10) We now start to take the coincidence limit. Let us first perform a shift γ 1j for 2 ≤ j ≤ 4 by γ 1j → γ 1j − γ j5 , which gives (4.11) Here the integration measure [dγ ij ] 5 is constrained by (4.12) In the limit that P 5 approaches P 1 , we can close the integration contour of γ 15 to the left in the γ 15 -complex plane. Due to the existence of Γ[γ 15 ], the leading contribution is given by the residue of pole at γ 15 = 0 and it is the only contribution which we need to keep. Physically, this corresponds to the fact that the limit x 2 15 → 0 is regular. Thus evaluating the integral over γ 15 leads to where the integration measure [dγ ij ] 4 is constrained by (4.14) The Mellin amplitude M •••• is thus given by Performing the integrals over γ 35 and γ 25 leads to an expression for M •••• , which agrees with (4.2). We will show this explicitly in Appendix A. Moreover, the approach of taking the coincidence limit also enables us to prove that the regular term vanishes when ∆ 1 > 0. For the sake of readability, we will only outline the proof here and leave the details to Appendix A. The starting point of the proof is to write M •••• as a sum of two parts by performing the remaining integral. Each part can be rewritten as a sum over poles, up to a regular term which we wish to show to be absent. The sum over all the poles can be performed and leads to a generalized hypergeometric function 3 F 2 . Thanks to a hypergeometric function identity, which is valid when ∆ 1 > 0, the summation over the poles turns out to be already the same as the original expression. This leads to the conclusion that the regular term must be absent. Four-point function with two bound states: Type I We now consider the Witten diagram with two bound states of Type I (Fig. 
7). As in the previous section, we can also evaluate the diagram using two methods and we find the result is structurally similar to the one bound state case. Using the integrated vertex identity Because the diagram contains a cubic vertex, the method based on the integrated vertex identity can also be applied here. Using the identity on the cubic vertex connecting propagators with dimensions ∆ 1 , ∆ 2 , ∆, the diagram is reduced to D-functions With the help of (3.7) we can translate the result into Mellin space and obtain an expression for its Mellin amplitude which is similar to (3.8). It is not difficult to see that in Mellin space this diagram has poles at s = ∆ + ∆ 5 + ∆ 6 + 2m for m ∈ Z ≥0 . Like the diagram considered in Section 4, we also cannot use the equation of motion identity argument to rule out the regular term. But based on the explicit results with ∆ 1 + ∆ 2 − ∆ ∈ 2Z ≥0 , we will assume that the regular term vanishes in general and the Mellin amplitude has the form Later in Section 5.2, we will explain how the regular term can be shown to be absent by using the other method based on the coincidence limit. We can compute the residues and find where (5.4) Note that setting ∆ 6 = 0 reduces to the case considered in Section 4. Furthermore, there is a symmetry of exchanging (∆ 1 , ∆ 6 ) with (∆ 2 , ∆ 5 ), which is manifest in Fig. 7. From the coincidence limit Here s is defined as 6) and the residues are constants With M I •••••• in hand, we can proceed with taking the coincidence limit. Let us first identify P 6 and P 1 . This gives a five-point Witten diagram with one bound state where the integration measure [dγ ij ] 6 is constrained by for 1 ≤ i ≤ 6 and we defined s by Following the same steps as in Section 4.2, we can shift the Mellin parameters and evaluate the integral over γ 16 . After that, we compute the integral over γ 36 and γ 46 by using the first Barnes' lemma, leading to ] . where we have assumed ∆ 1 > 0 and the residue K (1) m is given by To get M I •••• , we need to further identify P 5 and P 2 , i.e., where we redefined s by Repeating the steps in Section 4.2, we arrive at an integral representation for M I •••• , which is given by ] . (5.16) Performing the integral over γ 15 finally leads to an expression for M I where we assumed ∆ 1 , ∆ 2 > 0 and the residue C (2) is given by In this case, one can also show that the regular term vanishes by using a similar argument based on the identity (C.2). Details of the computation can be found in Appendix B.1. Note that to compare the above C The two bound state diagram of Type II (Fig. 9) no longer contains cubic vertices and therefore can not be computed using the method based on the integrated vertex identity. However, the method using the coincidence limit can still be applied to this case. In this section, we will obtain its Mellin amplitude M II •••• from the six-point Mellin amplitudes Fig. 10, by taking the coincidence limit P 6 → P 1 together with P 5 → P 3 . The six-point Mellin amplitudes M II •••••• can be computed by using the split representation, giving Here s is defined as and the residues are constants . 
where the integration measure [dγ ij ] 6 is again constrained by (5.9) and we defined s by Similar to the previous sections, one can change variables and evaluate the integral over γ 36 , γ 46 and γ 56 , leading to Here M 2 ••••• (s , t ) and M 3 ••••• (s , t ) contain only single poles and are given by By contrast, M 1 ••••• (s , t ) contains simultaneous poles in s and t , and is given by , (6.10) with the residue The four-point Mellin amplitudes in Fig. 9 can be obtained by further identifying P 5 and P 3 in Fig. 11 W II •••• = lim Due to the fact that the expression (6.7) for M II ••••• contains three parts, M II •••• can be correspondingly written as where we have defined s and t as Performing the remaining integral finally leads an expression for M 1 •••• (s, t). The computation is technical and tedious. We leave the details in the Appendix B.2 and only write down the final result here The coefficients C m 1 m 2 in the second line are constants given by where K (1) n 1 n 2 has already been defined in (6.11). Substituting (6.17) and (6.14) into (6.13) leads to the final expression for M II •••• (s, t). Relation with AdS one-loop box diagrams The appearance of simultaneous poles in (6.17) is reminiscent of the Mellin amplitudes for one-loop box diagrams in AdS [12,30,31]. However, a direct comparison that pinpoints the precise AdS diagram is difficult. On the one hand, a closed form expression for one-loop box diagrams with generic conformal dimensions in any spacetime dimension is still absent. On the other, in the supersymmetric cases where there are explicit results [12,30,31] one works with the reduced correlator 9 . The one-loop correction of the reduced correlator does not admit a clear interpretation as a collection of one-loop Witten diagrams when the AdS radius is finite. The one-loop correction sees not only the AdS factor of the background but the internal space as well. Therefore, these diagrams are extended into the internal dimensions and are not pure AdS diagrams. In this subsection we will not attempt to find the exact AdS loop diagrams. Instead, we will content ourselves with confirming that the bound state amplitudes M II •••• (s, t) with certain conformal dimensions reduce to the flatspace massless box diagrams in the flat-space limit [33]. We leave the precise identification with AdS loop diagrams at a finite radius for the future. We consider a special class of bound state amplitudes M II •••• (s, t) with conformal dimensions ∆ 1 = ∆ 3 = ∆ 5 = ∆ 6 = 1 and ∆ 2 = ∆ 4 = ∆. The four-point function therefore has external dimensions 2, 2, ∆, ∆. In this case, we find that the Mellin amplitude with residues . (6.20) Note that the poles of the Mellin amplitude are precisely those corresponding to the doubletrace operators. This is the same situation as in the one-loop case [12,30,31]. Let us also mention that the sums in (6.19) can be performed and gives ] . (6.21) Here γ is the Euler constant and H x and ψ (n) [x] are the Harmonic number and polygamma function with order n respectively. Let us now examine the flat-space limit of this bound state Mellin amplitude. The flat-space limit corresponds to the high energy limit where both s, t become large [33]. The leading contribution in the sum (6.19) arises from the region with large m 1 , m 2 ∼ s, t. From the explicit expression, we find that C (2) m 1 m 2 has the following large m 1 , m 2 behavior This is a special case of the one-loop diagrams considered in [31] where the Mellin amplitudes have the form . 
The coefficients c m 1 m 2 are assumed to have the asymptotic behavior (6.24) in the large m 1 , m 2 limit. It was shown in [31] that in the flat-space limit the Mellin amplitude reduces to the massless one-loop box diagram in a D-dimensional flat spacetime. The behavior (6.22) implies D = 6. Therefore, we find that the bound state Mellin amplitude (6.19) becomes the 6D one-loop box diagram in the flat-space limit. A special case: The two-loop four-mass ladder diagram (a) The four-mass diagram (b) The fully massive diagram Figure 12: Flat-space two-loop ladder diagrams in momentum space (in black) and their dual diagrams (in green). The diagram (a) is a special limit of (b) obtained by taking Finally, as a consistency check of our results, let us consider a special case of (6.19) with ∆ = 1 and reproduce the result of an important conformal integral in the literature. This special case should correspond to the two-loop four-mass diagram in flat space which is depicted in Fig. 12a. In terms of the dual coordinates, the diagram 12a is defined by the integral (6.25) and can be obtained from the fully massive diagram 12b (6.26) by taking the massless limit x 1 → x 6 , x 3 → x 5 . To see why the diagram 12a should match M II •••• (s, t) ∆ i =∆=1 , let us note that the Mellin amplitude of the fully massive diagram was computed in [38] and turned out to be This is precisely the six-point exchange Witten diagram Fig. 10 with ∆ i = ∆ = 1. 10 Therefore, by further taking the coincidence limit the four-mass diagram 12a should be identical to the two bound state Witten diagram W II •••• . The result for the four-mass two-loop ladder diagram is well known in the literature and is given by [39,40] (6.29) with λ and ρ given by Here U and V are the conformal cross ratios defined in (2.5). Plugging the Mellin amplitude (6.19) with ∆ = 1 in the Mellin representation (2.6) and closing the contours for s and t to pick up the residues, we obtain an expansion in small U and V . It is not difficult to verify that up to an overall normalization the expansion matches precisely with the small U , V expansion of (6.29). Therefore, we can conclude that we have reproduced the four-mass two-loop ladder diagram in flat space as a special case of bound state Witten diagrams. More general diagrams From the basic diagrams we computed in the previous sections, we can construct a bevy of tree-level diagrams with one or two bound states by using the integrated vertex identity. In this section, we briefly explain how this works. Let us start with case with only one bound state. In Fig. 5 there is only one bulk-tobulk propagator. We can consider the more complicated diagrams with two bulk-to-bulk propagators by moving the bulk point of the bulk-to-boundary propagator with dimension ∆ 5 away from the quartic vertex to end on other propagators. The new diagrams contain only cubic vertices. There are three inequivalent possibilities which are depicted in Fig. 13. Using the integrated vertex identity on the green vertices eliminates the bulk-to-bulk propagator, and we reduce the three diagrams to the basic diagram W •••• . We now move on to the tree-level diagrams with two bound states. Let us first consider the case where the diagrams only have cubic vertices. These diagrams have three bulk-tobulk propagators. They can be obtained from the one bound state diagrams in Fig. 13 by further attaching another bulk-to-boundary propagator which starts from 2, 3 or 4 and terminates on the existing propagators. 
There are now many more diagrams. However, one can show that using integrated vertex identities twice allows us to reduce all these diagrams to W I •••• and W II •••• . Some examples of these diagrams are included in Fig. 14, and the integrated vertex identity is applied to the vertices in green. Similarly, one can consider tree-level diagrams with two bound states and two bulk-to-bulk propagators. This requires the diagrams to have one quartic vertex. One can use the integrated vertex identity to show that these diagrams reduce to the basic diagrams considered in this paper. In fact, they arise from the aforementioned case with three bulk-to-bulk propagators after using once the integrated vertex identity. Outlook In this paper, we initiated the study of tree-level Witten diagrams with external bound states. We considered the case with only scalar fields and focused on three basic diagrams using which we can construct an array of more complicated diagrams. We showed that these diagrams have simple analytic structures in Mellin space and obtained explicit expressions for their Mellin amplitudes for generic conformal dimensions. These explorations lead to many interesting directions for future research. • One immediate generalization is to include diagrams in which the internal propagators carry Lorentz spins. Such diagrams with spins up to two appear in correlators in fullfledged supergravity theories. The techniques which we developed in this paper are also useful in this more complicated scenario. In particular, the method based on the integrated vertex identity generalize straightforwardly to the spinning case. We expect that the Mellin amplitudes of these diagrams will be structurally similar to the ones studied here. • Once these diagrams with internal spinning operators have been computed, we can use them in the bootstrap calculation of four-point functions in 4d N = 4 SYM at strong coupling with two 1 4 -BPS operators and two 1 2 -BPS operators. The 1 2 -BPS operators are dual to single-particle states while the 1 4 -BPS operators are dual to bi-particle bound states. The starting point of such a calculation is an ansatz for the four-point function in terms of all possible diagrams with unfixed coefficients. We then use superconformal constraints to solve these unknowns. The superconformal kinematics of such "bound state" four-point functions has recently been analyzed in [41]. Such a bootstrap strategy is similar in spirit to the one first devised in [1,2]. • Relatedly, it would be interesting to see if our results in Mellin space, for special operator spectra appearing in top-down holographic models, can be re-expressed in position space in terms of the generalized Bloch-Wigner-Ramakrishnan functions found in the AdS 3 × S 3 case [18]. A good starting point is to consider the double-discontinuities [42,43], which are simpler objects but contain all the essential information. Being able to find an efficient algorithm to rewrite the Mellin results in terms of these building block functions will be useful for implementing the bootstrap strategy in position space. • Our analysis of the type II tree-level bound state diagram in Fig. 9 showed that it has intimate connections with one-loop diagrams in AdS. The resemblance between the two is manifested in the Mellin representation where they can be both represented as a sum of simultaneous poles. Moreover, we showed that the flat-space limits of certain bound state diagrams coincide with those of the AdS one-loop diagrams. 
It would be very interesting to develop a more systematic understanding in the future to establish a precise connection that remains valid at finite AdS radius. For example, a possible route is to consider generalizations of the integrated vertex identities [44]. It might be possible to use these identities to transform the diagrams and directly prove the equivalence between the bound state tree diagrams and certain one-loop diagrams. • It would also be of great interest to explore other methods for computing bound state processes in AdS. For example, a powerful technique is the AdS unitarity method initiated in [45], which mirrors the unitarity method in flat space. One can cut an AdS diagram into tree-level diagrams and "glue" them together in a sense that can be rigorously defined in the CFT language. For example, both diagrams Fig. 1b and 1c with two bound states can be viewed as the results of gluing together two five-point functions. The application of this method to AdS loop diagrams has already been streamlined in the literature. It would be very interesting to extend this technique to calculate bound state Witten diagrams and reproduce our results. Moreover, this perspective might also offer a more intuitive explanation of why these two diagrams have drastically different analytic structures. Another exciting direction to explore is to make connections with the Schwinger-Dyson equation in AdS, which will allow us to resum the 1/N corrections. Related works include [46][47][48]. • Note that in this paper we only considered correlators with at most two bi-particle bound states. Clearly, an important generalization of the analysis in the future is to include more bound state operators. It would also be interesting to look at correlators with multi-particle bound states which are obtained from taking the OPE limit of more than two single-particle operators. • Finally, it would also be interesting to consider bound state Witten diagrams in other backgrounds such as those providing holographic duals for boundary CFTs (or interface CFTs). The simplest example for holographic interface CFTs is the so called probe brane setup where there are localized degrees of freedom living on an AdS d subspace inside AdS d+1 [49,50]. Witten diagrams in this background with single-particle external states were systematically studied in [51,52], and the Mellin formalism for BCFT correlators was developed in [51]. The techniques developed in this paper will be useful for studying bound state scattering in these systems. A More details of M •••• In this appendix, we will show how to obtain (4.2) from (4.15). We first note that the γ 35 -integral can be computed by using the first Barnes' lemma: where the following identities have been used We then perform the integral over γ 25 through enclosing the contour to the left, leading to , (A.5) and . (A.6) We first focus on A 1 n . When ∆ 5 = 0, Γ[∆ 5 ] in the denominator forces n in Γ[∆ 5 + n] to be zero. Therefore, A 1 n reduces to When ∆ 5 > 0, as a function of s, A 1 n can be expanded as where F n (s) is the regular term and On the other hand, we note that where we wrote the sum over r as a generalized hypergeometric function 3 F 2 . Using the identity (C.2) for generalized hypergeometric function, we can replace the above 3 F 2 function by We thus conclude that the regular term F n (s) vanishes and A 1 n can be written as A 2 n can be obtained analogously. Actually, one can directly deduce A 2 n by switching ∆ 1 and ∆ 5 as well as ∆ 2 and s in A 1 n . 
This gives , (A.14) where we used the fact that A nr ∆ 1 ↔∆ 5 ,s↔∆ 2 = A rn . Substituting (A.13) and (A.14) into (A.4) thus leads to where we assumed ∆ 1 > 0. The sum over r in the first term can be computed by virtue of (C.2). This gives where the residue C (1) m is given by We note the second term in the above equality can be absorbed into the first term. After that, we use the identity (C.1) for generalized hypergeometric function, translating C B.1 Type I We start with the integral representation (5.16). After performing the γ 15 -integral, M I where we define A 1 n and A 2 n as , (B.2) and . (B.6) B 1 n and B 2 n can be obtained by following the steps in the Appendix A. Specifically, for ∆ 6 = 0 (∆ 1 = 0) we find that B 1 n = δ n,0 , while for ∆ 6 > 0 (∆ 1 > 0) one can expand B 1 n (B 2 n ) around its poles and show the regular term vanishes by using (C.2). Substituting the expression for B 1 n and B 2 n into (B.4) and shifting m by m → m − n then reproduce the expression (6.7). To get (6.17) for M 1 •••• , we evaluate the γ 45 -and γ 25 -integral in (6.15) by using the residue theorem, leading to where I 1 , I 2 and I 3 are given by , (B.8) and respectively. Let us focus on I 1 first. After defining A k 1 k 2 as I 1 can be written as . Following the steps in the Appendix A, A k 1 k 2 can be expanded around its poles at s = ∆ 3 + ∆ 4 − ∆ 5 − 2k 1 − 2k 2 + 2m. The absence of regular term can be verified by using (C.2). As a result, I 1 is expressible as (B.14) Here I 1 | ∆ 5 >0 is expressible as . In a similar way, one can derive an expression for I 2 , given by where all of the summation variables n 1 , n 2 , k 1 and k 2 run from zero to infinity. The computation of I 3 is more involved. We first note that I 3 vanishes when ∆ 5 = 0. Thus we only need to deal with the case when ∆ 5 > 0. In that case, we can expand I 3 around its poles at s = ∆ 3 +∆ 6 +∆+2k 1 +2n 1 and s = ∆ 3 +∆ 4 −∆ 5 +2k 1 −2k 2 −2r. One can directly check that the term obtained by the expansion of I 3 around s = ∆ 3 +∆ 4 −∆ 5 +2k 1 −2k 2 −2r is exactly I 1 | ∆ 5 >0 up to a minus sign. This leads to where F (s, t) is a possible regular term and the residue B 1 n 1 n 2 k 1 k 2 of poles at s = ∆ 3 + ∆ 6 + ∆ + 2k 1 + 2n 1 is given by (B.20) In the following, we will show that the regular term F (s, t) vanishes. Equivalently, we will prove that ∞ n 1 ,n 2 ,k 1 ,k 2 =0 To do this, we first rewrite the sum over k 1 as a generalized hypergeometric function ∞ n 1 ,n 2 ,k 1 ,k 2 =0 where we defined B n 1 n 2 k 2 as B n 1 n 2 k 2 = (−1) k 2 K (1) ] . (B.23) Using the identity (C.9), we can rewrite the generalized hypergeometric function, leading to ∞ n 1 ,n 2 ,k 1 ,k 2 =0 where A 1 and A 2 are given by (−1) k 1 +k 2 K (1) C Hypergeometric function identities In this appendix, we collect some useful identities for generalized hypergeometric functions 3 F 2 which were used in this paper. A useful identity, which has been used to translate (5.18) into (5.3), is Another useful identity, given by with n a non-negative integer, enables us to rewrite certain generalized hypergeometric functions 3 F 2 as Gamma functions [53]. This identity was used to show that the regular terms in M •••• (s, t) and M I •••• (s, t) vanish. To prove the absence of regular terms in M II •••• (s, t) we need to combine the above identity with the following three identities. 
The first two identities are [54] where we have defined Let us mention that an analytic continuation from the region Re(c) > 0 to the region c = 0, −1, · · · has to be implemented to make these two identities hold. To get the third identity, let us start with the following identity [55] 3 F 2 a 1 , a 2 , a 3 b 1 , b 2 1 = F 1 + F 2 , (C. 6) where F 1 and F 2 are given by and where F 3 and F 4 are given by and × 3 F 2 a 3 , a 3 − b 1 + 1, a 3 − b 2 + 1 a 3 − a 1 + 1, a 3 − a 2 + 1 1 , respectively. This identity holds when Re(c) > 0 and Re(b 2 − a 1 ) > 0.
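Two classical results that are frequently used in manipulations of exactly this type are the first Barnes lemma,

```latex
\frac{1}{2\pi i}\int_{-i\infty}^{i\infty}\Gamma(a+z)\Gamma(b+z)\Gamma(c-z)\Gamma(d-z)\,dz
  = \frac{\Gamma(a+c)\Gamma(a+d)\Gamma(b+c)\Gamma(b+d)}{\Gamma(a+b+c+d)}\,,
```

and the Pfaff-Saalschütz theorem for a terminating Saalschützian 3F2,

```latex
{}_3F_2\!\left[\begin{matrix}a,\;b,\;-n\\ c,\;1+a+b-c-n\end{matrix};1\right]
  = \frac{(c-a)_n\,(c-b)_n}{(c)_n\,(c-a-b)_n}\,, \qquad n\in\mathbb{Z}_{\ge 0}\,.
```

Whether these coincide exactly with the identities labeled (C.1) and (C.2) above is an assumption, but both can be checked numerically, for instance with mpmath:

```python
import mpmath as mp

mp.mp.dps = 30  # working precision


def barnes_first_lemma(a, b, c, d):
    """Return (lhs, rhs) of the first Barnes lemma for real positive parameters."""
    # Parameterize the contour as z = i*t, so dz/(2*pi*i) = dt/(2*pi).
    integrand = lambda t: (mp.gamma(a + 1j*t) * mp.gamma(b + 1j*t)
                           * mp.gamma(c - 1j*t) * mp.gamma(d - 1j*t))
    lhs = mp.quad(integrand, [-mp.inf, mp.inf]) / (2 * mp.pi)
    rhs = (mp.gamma(a+c) * mp.gamma(a+d) * mp.gamma(b+c) * mp.gamma(b+d)) / mp.gamma(a+b+c+d)
    return lhs, rhs


def saalschutz(a, b, c, n):
    """Return (lhs, rhs) of the Pfaff-Saalschuetz theorem for a terminating 3F2."""
    lhs = mp.hyp3f2(a, b, -n, c, 1 + a + b - c - n, 1)
    rhs = (mp.rf(c - a, n) * mp.rf(c - b, n)) / (mp.rf(c, n) * mp.rf(c - a - b, n))
    return lhs, rhs


print(barnes_first_lemma(0.3, 0.7, 1.1, 0.9))   # the two values should agree
print(saalschutz(0.4, 1.3, 2.2, 5))             # the two values should agree
```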
10,593.8
2022-04-28T00:00:00.000
[ "Mathematics" ]
Natural Convection Flow of a Nanofluid in an Inclined Square Enclosure Partially Filled with a Porous Medium This work analyses free convection flow of a nanofluid in an inclined square enclosure consisting of a porous layer and a nanofluid layer using the finite difference methodology. Sinusoidal temperature boundary conditions are imposed on the two opposing vertical walls. Nanofluids with water as base and Ag or Cu or Al2O3 or TiO2 nanoparticles are considered for the problem. The related parameters of this study are the Darcy number, nanoparticle volume fraction, phase deviation, amplitude ratio, porous layer thickness and the inclination angle of the cavity. A comparison with previously published work is performed and the results are in good agreement. Detailed numerical data for the fluid flow and thermal distributions inside the square enclosure, and the Nusselt numbers are presented. The obtained results show that the heat transfer is considerably affected by the porous layer increment. Several nanoparticles depicted a diversity improvement on the convection heat transfer. Studies about free convective fluid flow and heat transfer in porous media domains have received considerable attention over the past few years and their findings are gaining significant importance. This is due to their ability to resolve a wide range of environmental situations or industrial applications such as, geothermal systems, thermal insulation, filtration processes, ground water pollution, storage of nuclear waste, drying processes, solidification of castings, storage of liquefied gases, biofilm growth, fuel cells. The problem of dealing with the fluid motions in the clear region and the porous medium has been studied for many years. Beavers and Joseph 1 presented the simple situation of the boundary conditions between a porous media and a homogeneous fluid. Poulikakos et al. 2 considered high value of Rayleigh in free convection in a fluid overlaying a porous bed using the Darcy model. Meanwhile, Beckermann et al. 3 studied free convective flow and heat transfer between a fluid layer and a porous layer inside a rectangular cavity. Free convective heat and mass transfer in solidification was studied by Beckermann et al. 4 . On the other hand, Chen and Chen 5 investigated convective stability in a superposed fluid and porous layer when heated from below. Heat transfer and fluid flow through fibrous insulation was studied by Le Breton et al. 6 . Singh and Thorpe 7 presented a comparative study of different models of free convection in a confined fluid and overlying porous layer. The problem with studying the solute exchange by convective within estuarine sediments had been considered by Webster et al. 8 . Goyeau et al. 9 discussed the problem of using one-or two domain formulations for the conservation equations. Meanwhile, Gobin et al. 10 analyzed the specified subclass of such problems where free convection occupies venue in a closed cavity with a partially-saturated porous medium. Nessrine et al. 11 have applied the non-Darcy model to study the flow and heat transfer characteristics in a pipe saturated porous medium. Sui et al. 12 used analytically the homotopy analysis method to study the convection heat transfer and boundary layer in a power-law fluid through a moving conveyor and inclined plate. Their results indicated that increasing the inclination angle clearly improved the rate of heat transfer. 
Bhattacharya and Das 13 numerically considered different values of the Rayleigh and Nusselt numbers for natural convective flow within a square cavity. Thermal fluids are very important for heat transfer in many industrial applications. The poor thermal conductivity of classical heat transfer fluids such as water and oils is the fundamental restriction on improving the performance and compactness of several engineering applications. A solid commonly has a greater conductivity than a fluid; for instance, the thermal conductivity of copper (Cu) is about 700 times higher than that of water and about 3000 times higher than that of engine oil. A novel mechanism for enhancing heat transfer is to suspend solid particles in the base fluid (i.e. nanofluids), with particle sizes in the range 10-50 nm 14 . Due to the small sizes and large specific surface areas of the nanoparticles, nanofluids possess attractive characteristics such as increased thermal conductivity, less blockage of the fluid flow, longer stability and better homogeneity 15 . Consequently, nanofluids have an enormous range of possible applications, such as in automotive, electronics, and nuclear systems where enhanced heat transfer and effective heat dissipation are desired. Ramiar et al. 16 studied the influences of axial conduction and variable properties on conjugate heat transfer of a nanofluid in a microchannel. Sundar et al. 17 investigated the enhancement of thermal conductivity and viscosity of nanodiamond-nickel nanocomposite nanofluids. Arani et al. 18 investigated free convection in a square cavity filled with a nanofluid. Chamkha and Ismael 19 conducted a numerical study of free convection in a differentially heated, partially layered vertical porous cavity filled with a nanofluid, for the first time using the Darcy-Brinkman model. Sui et al. 20 gave an experimental study of a multilevel equivalent agglomeration model for heat conduction enhancement in nanofluids. Zaraki et al. 21 applied the finite-difference method to investigate theoretically the effects of the size, shape and type of nanoparticles, the type of base fluid, and the working temperature on free convective boundary layer heat and mass transfer of nanofluids. Lin et al. 22 conducted a numerical study of magnetohydrodynamic non-Newtonian nanofluid flow and heat transfer within a finite thin film including the effect of heat generation. They used four different kinds of nanoparticles and concluded that the local Nusselt number decreased with increasing solid volume fraction. Hamid et al. 23 used Buongiorno's model to investigate non-aligned stagnation-point flow of a nanofluid past a permeable stretching/shrinking sheet. Based on the Darcy law, Zhang et al. 24 numerically considered chemical effects on boundary layer flow and heat transfer over a plate embedded in a porous medium saturated with a fluid containing three kinds of nanoparticles. Nevertheless, the study of free convective flow of a nanofluid in a square cavity partially filled with a porous medium based on the Darcy model has not been undertaken yet. Lately, the problem of free convection in enclosures under different temperature conditions has received considerable interest from diverse investigations. Sarris et al. 25 examined free convection in a closed cavity in which the top wall has a sinusoidal temperature condition whilst the remaining walls are kept insulated.
Saeid and Yaacob 26 studied free convective heat transfer in an enclosure with a variable hot left wall temperature and a constant cold right wall temperature. Bilgen and Yedder 27 investigated free convective heat transfer in a rectangular enclosure in which the left vertical wall has a sinusoidal temperature distribution and the remaining walls are adiabatic. Deng and Chang 28 numerically discussed convective heat transfer in a rectangular cavity whose vertical walls are heated with variable sinusoidal temperatures. Sathiyamoorthy and Chamkha 29-31 investigated convection flow in an enclosure with linearly heated vertical walls. Bhuvaneswari et al. 32 considered MHD convection in a square enclosure with sinusoidal temperature fields on both vertical walls. Chamkha et al. 33 examined magnetohydrodynamic convection in a rectangular enclosure with linearly heated vertical walls and a linearly imposed concentration. Cheong et al. 34 carried out a study on the effects of aspect ratio on free convection in an inclined rectangular enclosure with a sinusoidal temperature on the left vertical wall. Kefayati et al. 35 applied the Lattice Boltzmann methodology to study magneto-convection flow of a nanofluid in a cavity. Ben-Cheikh et al. 36 analyzed free convection flow of a nanofluid in a square enclosure heated by a variable temperature field on the bottom horizontal wall. Bouhalleb and Abbassi 37 employed the finite-volume element technique to solve the problem of free convective flow of a nanofluid in an inclined rectangular enclosure with a sinusoidal thermal condition on the right vertical boundary. However, free convective fluid flow and heat transfer in an enclosure partially filled with a saturated porous medium and with sinusoidally heated walls has not been considered yet. The goal of this work is to study Darcian natural convective flow of a nanofluid and its heat transfer characteristics in an inclined square enclosure with a partly saturated porous layer under sinusoidal boundary conditions. Mathematical formulation We consider the steady 2D free convective fluid flow and heat transfer in a square enclosure of length L; the left part of the cavity is filled with a porous layer of width W, while the remainder of the cavity (L − W) is filled with a nanofluid, as depicted in Fig. 1. The vertical walls of the enclosure are heated non-uniformly (sinusoidal temperature), whilst the upper and bottom horizontal walls are adiabatic. The outer boundaries are considered impermeable, whilst the interface of the nanofluid layer is assumed permeable. The pores are filled with a fluid composed of Ag, Cu, Al 2 O 3 or TiO 2 nanoparticles in water as the base fluid. According to the Boussinesq approximation, the physical properties of the fluid are taken as constant except for the density. With the above assumptions, the conservation equations for mass, Darcy momentum and energy for steady free convection in the porous layer, and the conservation equations for mass, momentum and energy for the nanofluid layer, are written separately 38 , where u and v are the velocity components in the x-direction and y-direction, and the subscripts p, bf and nf refer to the solid matrix of the porous medium, the clear fluid in the porous medium (water) and the nanofluid, respectively.
Here p denotes the pressure, ν is the kinematic viscosity, ϕ is the inclination angle of the cavity, T represents the dimensional temperature, K p is the permeability of the porous medium, g is the acceleration due to gravity, φ denotes the solid volume fraction, k nf represents the effective nanofluid thermal conductivity, ρ nf denotes the effective nanofluid density, and μ nf is the effective nanofluid dynamic viscosity; these quantities are defined below. The heat capacitance of the nanofluid is determined by the mixture rule, and the nanofluid thermal expansion coefficient is given analogously. The nanofluid thermal conductivity is evaluated according to the Maxwell-Garnett (MG) model. The stream function ψ and the vorticity ω are defined in the usual way, and the following non-dimensional variables are introduced. The dimensionless governing equations for the porous layer and for the nanofluid layer are then written in these variables, where Ra bf represents the Rayleigh number of water, Da = K/L 2 is the Darcy number of the porous layer and Pr = ν f /α f denotes the Prandtl number of water. The dimensionless boundary conditions for solving equations (16)-(20) are imposed accordingly. In our study the value of α is fixed at 1, and the subscripts + and − indicate that the respective quantities are evaluated while approaching the interface from the nanofluid and porous layers, respectively. The local Nusselt numbers on the two vertical walls are defined from the wall temperature gradients. The fluid gains heat from the half-heated part of a vertical sidewall, which yields Nu > 0, whereas it loses heat to the half-cooled part, giving Nu < 0. Adding the average Nusselt numbers of the two heated parts of the vertical boundaries gives the heat transfer rate of the cavity 39 . The present results are first validated against previously published data, as listed in Table 1. Apart from that, we also compared the present results with those provided by Singh and Thorpe 7 for Ra = 10 6 , Da = 10 −5 , φ = 0, S = 0.5 and ϕ = 0, as depicted in Fig. 2. In addition, we compared the current results with those presented by Deng and Chang 28 for a pure fluid (Pr = 0.71), as shown in Fig. 3. Figure 3 compares the results of this paper with those of Deng and Chang 28 for sinusoidal boundary conditions on both sidewalls at Ra = 10 5 , γ = π/4, ε = 1, φ = 0, S = 0 and ϕ = 0. These comparisons provide confidence in the accuracy of the current numerical methodology. Results and Discussion In this section we provide numerical results for the streamlines and isotherms of the porous and nanofluid layers for various values of the Darcy number (10 −5 ≤ Da ≤ 10 −3 ), nanoparticle volume fraction (0 ≤ φ ≤ 0.2), phase deviation (0 ≤ γ ≤ π), amplitude ratio (0 ≤ ε ≤ 1), porous layer thickness (0.1 ≤ S ≤ 0.9), inclination angle of the cavity (0° ≤ ϕ ≤ 90°), Rayleigh number (Ra bf = 10 4 , 10 5 ) and Prandtl number (Pr bf = 6.2). The values of the average Nusselt number are calculated for various values of ϕ and S. Table 2 lists the thermo-physical properties of the water base fluid (Pr bf = 6.2) and of the considered nanoparticles. The two streamline cells in the symmetric pattern tend to expand and move towards the porous layer as the Darcy number is increased; this behavior appears more clearly for the nanofluid than for the pure fluid. Consequently, the strength of the flow circulation increases as the Darcy number is varied.
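As a quick numerical illustration of the effective-property relations described in the formulation above, the following Python sketch evaluates the Maxwell-Garnett conductivity, the mixture-rule density and heat capacitance, and the resulting Rayleigh, Darcy and Prandtl numbers. The Brinkman viscosity correlation and the specific property values used here are common choices and are assumptions on our part, not necessarily the exact correlations or data of this paper.

```python
# Illustrative sketch (not the authors' code): effective nanofluid properties
# via the Maxwell-Garnett model and simple mixture rules, plus the
# dimensionless groups Ra, Da and Pr. Property values are assumed/typical.

def maxwell_garnett_conductivity(k_f, k_s, phi):
    """Effective thermal conductivity of a nanofluid (MG model)."""
    return k_f * (k_s + 2*k_f - 2*phi*(k_f - k_s)) / (k_s + 2*k_f + phi*(k_f - k_s))

def nanofluid_properties(phi, base, particle):
    """Mixture-rule density, heat capacitance and thermal expansion;
    Brinkman correlation for viscosity (an assumed, commonly used choice)."""
    rho_nf = (1 - phi)*base["rho"] + phi*particle["rho"]
    rho_cp_nf = (1 - phi)*base["rho"]*base["cp"] + phi*particle["rho"]*particle["cp"]
    rho_beta_nf = (1 - phi)*base["rho"]*base["beta"] + phi*particle["rho"]*particle["beta"]
    mu_nf = base["mu"] / (1 - phi)**2.5          # Brinkman model (assumption)
    k_nf = maxwell_garnett_conductivity(base["k"], particle["k"], phi)
    return {"rho": rho_nf, "rho_cp": rho_cp_nf, "rho_beta": rho_beta_nf,
            "mu": mu_nf, "k": k_nf}

# Typical room-temperature values for water and Cu (assumed for illustration)
water = {"rho": 997.1, "cp": 4179.0, "k": 0.613, "beta": 2.1e-4, "mu": 8.9e-4}
copper = {"rho": 8933.0, "cp": 385.0, "k": 401.0, "beta": 1.67e-5}

g, L, K, dT = 9.81, 0.1, 1e-10, 10.0   # gravity, cavity size, permeability, wall delta-T (assumed)
alpha_f = water["k"] / (water["rho"]*water["cp"])
nu_f = water["mu"] / water["rho"]

Ra_bf = g*water["beta"]*dT*L**3 / (nu_f*alpha_f)   # Rayleigh number of the base fluid
Da = K / L**2                                      # Darcy number
Pr_bf = nu_f / alpha_f                             # Prandtl number of water (about 6)

nf = nanofluid_properties(phi=0.1, base=water, particle=copper)
print(f"k_nf/k_f = {nf['k']/water['k']:.3f},  Ra_bf = {Ra_bf:.3e},  Da = {Da:.1e},  Pr_bf = {Pr_bf:.2f}")
```

With φ = 0.1 the MG model gives roughly a 30% conductivity enhancement for water-Cu, which is consistent in spirit with the heat transfer improvements discussed in the following results.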
Affected by the constant temperature on the right wall, the isotherm distributions appear denser near the left wall of the cavity. The flow behavior is clearly affected when a low value of the amplitude ratio (ε = 0.3) and a low Darcy number are applied, as the streamlines then form three distinct cells within the cavity. At the center of the cavity the major clockwise circulation cell appears clearly, while two secondary anti-clockwise circulation cells take place closer to the adiabatic walls. When Da = 10 −3 , the strength of the flow circulation increases with the addition of nanoparticles (see the Ψ min values). The isotherm patterns are clearly influenced by increasing the phase deviation, owing to the accompanying improvement in heat transfer. We observe significant changes in the flow structure when a lower inclination angle (ϕ = 30°) is applied: the streamlines appear as a single clockwise rotating cell within the nanofluid layer. In other words, imposing a lower inclination angle forces the streamlines into a single rotating cell, similar to the case of a constant temperature distribution. At a higher phase deviation (γ = π), the flow behavior is significantly affected and the circulation is characterized by a single anti-clockwise rotating cell. Increasing the inclination angle to the higher value (ϕ = 90°) leads to a redistribution of the streamlines and isotherms. Figure 10(a) (ε = 1, S = 0.5 and ϕ = 0°) clearly indicates that the absolute value of the minimum of the stream function, i.e. the strength of the flow circulation, is enhanced by the addition of the nanofluid. This is owing to the increase in the viscous and inertial forces. The strong enhancement of the streamline strength obtained with the Ag nanoparticles is due to the greater thermal conductivity of Ag, and this behavior becomes clearly visible at larger nanoparticle concentrations (φ ≥ 0.1). However, the nanoparticle volume fraction has a different effect on the absolute maximum values of the strength of the flow circulation, as presented in Fig. 10(b): this figure shows that the corresponding circulation strength significantly decreases with the addition of nanofluid for all nanoparticle kinds. Figure 11(a) presents the effect of the phase deviation on the local Nusselt number along the left and right vertical walls. Obviously, the heat transfer enhancement on the right wall appears stronger than that on the left wall due to the change in the phase deviation. The local Nusselt number curves on the left vertical wall are only weakly enhanced by changes in the phase deviation. Increasing the phase deviation from 0 to π causes the heating domain on the right wall to shift towards the top, while the cooling domain shifts towards the bottom, which forces the local Nusselt number to take a sinusoidal form. Figure 11(b) illustrates the effect of various values of the amplitude ratio on Nu along the Y coordinate for the left and right vertical walls, respectively. On the right vertical wall there is no change in the heat transfer when ε = 0. Increasing the amplitude ratio leads to a remarkable enhancement in the heat transfer; furthermore, a higher amplitude ratio (ε = 1) substantially promotes the heat transfer, for which the highest value of the local Nusselt number is obtained. Figure 12 demonstrates the effects of various values of the porous layer thickness and the inclination angle, respectively, on the local Nusselt number along the Y coordinate for water-Cu, Ra bf = 10 5 , Da = 10 −4 , γ = π/2, ε = 1 and φ = 0.1.
As seen in Figure 12, the convective heat transfer is clearly increased by heating the lower part and cooling the upper part of the left wall. In other words, the heat transfer is enhanced by heating the lower half of the left vertical wall, while cooling the upper half of that wall tends to reduce the heat transfer enhancement. On the right vertical wall, the curves of the local Nusselt number show the significant effect of the imposed temperature distribution on the heat transfer. Figure 12 also shows the effect of the inclination angle for ε = 1, φ = 0.1 and S = 0.5. Due to the non-uniform temperature on the vertical walls, the enhancement of the heat distribution causes the heating domain on the right wall to shift towards the top, while the cooling domain shifts towards the bottom, for all ϕ values. On the right wall, the Nusselt number curves tend to form a V shape for all ϕ values, affected by the velocity variation. The effects of various parameter values on Nu as a function of the inclination angle of the cavity are displayed in Fig. 13 for water-Cu, Ra bf = 10 5 , Da = 10 −4 and ε = 1. The convective heat transfer is significantly influenced by increasing the inclination angle through the accompanying velocity variation, and a strong enhancement of the heat transfer rate is obtained by changing the phase deviation. At a fixed inclination angle, both increases and reductions of the average Nusselt number are observed. The smallest reduction in the convective heat transfer rate appears in the absence of the phase deviation (γ = 0), while the maximum increment occurs when γ = π/4. In addition, this graph shows that the best heat transfer enhancement is obtained with the smallest porous layer thickness (S = 0.2). Furthermore, a higher concentration of solid volume fraction (φ = 0.2) together with the smallest porous layer thickness influences the heat transfer distribution so as to yield the highest average Nusselt number. Figure 14 illustrates the effects of various parameter values on the average Nusselt number as a function of the porous layer thickness for water-Cu, Ra bf = 10 5 , Da = 10 −4 , γ = π/2 and φ = 0.1. The convective heat transfer is systematically reduced as the porous layer thickness increases, due to the hydrodynamic resistance of the porous layer. As the amplitude ratio increases, the heat transfer rate is enhanced, a consequence of the non-uniform temperature distribution; in particular, the higher amplitude ratio (ε = 1) strongly enhances the heat transfer rate and leads to the maximum average Nusselt number values. This figure also indicates that the heat transfer rate behaves differently as the inclination angle is increased, owing to the changes in the velocity. Figure 14 also compares the different nanoparticle types for ε = 1, φ = 0.1 and ϕ = 0°, and a very interesting result can be observed there. At low S values (0.1 ≤ S ≤ 0.3), Ag gives a higher enhancement in the heat transfer rate than the other nanoparticles. However, as the thickness of the porous layer increases, the Al 2 O 3 nanoparticles tend to transport more heat within the cavity. In other words, adding Al 2 O 3 nanoparticles to the water in a cavity with a larger porous layer thickness helps to transport more heat, due to the lower thermal expansion of the Al 2 O 3 nanoparticles.
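The Nusselt-number trends discussed above can be reproduced as a post-processing step on a computed temperature field. The sketch below is illustrative only: the wall-gradient definition of Nu, the one-sided finite-difference stencil, the grid, and the toy temperature field are our assumptions, not the paper's actual expressions or data.

```python
# Illustrative post-processing sketch (not the authors' code): local and
# average Nusselt numbers on the left vertical wall from a dimensionless
# temperature field theta(X, Y) on a uniform grid. The definition
# Nu(Y) = -(k_nf/k_f) * d(theta)/dX at X = 0 is a standard choice and is
# assumed here; the paper's exact expression may differ.
import numpy as np

def local_nusselt_left_wall(theta, dX, k_ratio=1.0):
    """One-sided second-order finite difference of theta at X = 0."""
    dtheta_dX = (-3.0*theta[0, :] + 4.0*theta[1, :] - theta[2, :]) / (2.0*dX)
    return -k_ratio * dtheta_dX

def average_nusselt(nu_local, dY):
    """Trapezoidal integration of the local Nusselt number along the wall."""
    return np.trapz(nu_local, dx=dY)

# Toy example: a smooth temperature field on a 41 x 41 grid of the unit square
N = 41
X, Y = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N), indexing="ij")
theta = np.sin(np.pi*Y) * (1.0 - X)        # hypothetical field, hot left wall
nu = local_nusselt_left_wall(theta, dX=1.0/(N - 1), k_ratio=1.33)
print(f"Nu ranges over [{nu.min():.2f}, {nu.max():.2f}]; average Nu = {average_nusselt(nu, 1.0/(N - 1)):.3f}")
```

With a sinusoidal wall temperature the computed Nu(Y) changes sign along the wall, matching the Nu > 0 / Nu < 0 behavior of the heated and cooled halves described in the text.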
Conclusions This work considers the problem of natural convection flow of a nanofluid in an inclined square enclosure with a partially saturated porous layer and variable sinusoidal temperatures on two opposing sidewalls, based on Darcy's law and the Boussinesq approximation. The finite difference methodology is utilized to solve the non-dimensional governing equations with the related boundary conditions. Detailed numerical data for the fluid flow and thermal distributions within the enclosure, and for the local and average Nusselt numbers, are exhibited graphically. The main conclusions of the study are as follows: 1. A significant change in the flow structure appears when a lower inclination angle is applied, resulting in the streamlines forming a clockwise rotating cell within the nanofluid layer. In other words, imposing a lower inclination angle forces the streamlines into a single rotating cell, similar to the case of a constant temperature distribution. 2. The absolute minimum values of the strength of the flow circulation increase with the addition of solid volume fraction, while a significant reduction of the absolute maximum values of the circulation strength appears for all nanoparticle types. This happens due to the changes in the viscous and inertial forces. 3. The local Nusselt number curves on the left vertical wall are only weakly enhanced by changing the phase deviation. Meanwhile, the heating domain on the right wall moves upwards and the cooling domain moves downwards, which causes the local Nusselt number to take a sinusoidal form as the phase deviation is increased. 4. The rate of heat transfer is considerably influenced by increasing the inclination angle through the accompanying velocity variation, and a large rise in the heat transfer is gained by changing the phase deviation. At a fixed inclination angle, both increases and reductions of the average Nusselt number are observed. 5. At low porous layer thickness, Ag gives a higher enhancement in the heat transfer rate than the other nanoparticles. However, as the porous layer thickness increases, the Al 2 O 3 nanoparticles tend to transport more heat within the cavity. This is due to the lower thermal expansion of the Al 2 O 3 nanoparticles.
5,237.4
2017-05-24T00:00:00.000
[ "Engineering", "Materials Science", "Environmental Science", "Physics" ]
A Mini-Review: Clinical Development and Potential of Aptamers for Thrombotic Events Treatment and Monitoring The unique opportunity for aptamer uses in thrombotic events has sparked a considerable amount of research in the area. The short half-lives of unmodified aptamers in vivo remain one of the major challenges in therapeutic aptamers. Much of the incremental successful therapeutic aptamer stories were due to modifications in the aptamer bases. This mini-review briefly summarizes the successes and challenges in the clinical development of aptamers for thrombotic events, and highlights some of the most recent developments in using aptamers for anticoagulation monitoring. Introduction Antibodies and their variants, such as antigen-bind fragments (Fab) and single-chain variable fragments (scFv), have received considerable attention in the area of biomedicine. In the past decade, there has been a modest increase in the number of biological drugs approved by the US FDA [1]. Almost all of the approved biologics are antibodies and antibody-drug conjugates, with a few additional enzyme products [1]. A largely under-explored area of biologics is nucleic acid-based aptamer therapies. The purpose of this review is to provide a summary and perspective of the clinical success and failure of aptamer-based therapeutic agents in the treatment and monitoring of thrombotic events. A Brief History of Aptamers Aptamers are molecular recognition elements (MREs) that include peptides and nucleic acids (RNA and ssDNA). Nucleic acid aptamers have received high interest in their use as therapeutic agents, targeting agents, and biosensing elements. Aptamers, by definition, can bind to various user-defined targets with high affinity and specificity [2]. Tuerk and Gold first described the process of discovering oligonucleotide binding elements closed to three decades ago [3]. Tuerk conducted an in vitro selection experiment, which was first described as the Systematic Evolution of Ligands by Exponential Enrichment (SELEX), to further understand mutations in the gene 43 protein in bacteria. Tuerk decided to mutate a hairpin loop that contained eight nucleotides. The project produced two RNA binding elements (aptamers) that had an equal affinity to gene 43 [4]. Ellington and Szostak developed a similar process and named their products aptamers, which means "fitting part" [5]. After these initial experiments, the length of aptamers was expanded to include up to 50 nucleotides to gain further insight into the tertiary structure and folding of aptamers [4]. In 1992, NeXstar started using aptamer technology to develop therapeutic agents similar to antibodies. The first aptamer that underwent a clinical trial phase was NX1838. This compound was approved by the FDA in 2005, and Clinical Advantages and Disadvantages of Aptamers Nucleic acid aptamers have several unique properties that give them many advantages for clinical applications. The first quality that distinguished them from small molecule therapeutics is that they have high affinity and specificity for their targets due to the SELEX process [3]. Nucleic acid aptamers generally have a low potential for systemic toxicity due to their composition of natural molecules [3]. Additionally, aptamers can be synthesized chemically without the use of live systems. Chemically produced aptamers generally are non-immunogenic, which is a significant advantage over the immunoglobulin complexes currently on the market [3]. 
Aptamers are also more thermally stable than immunoglobulins, which allows them to be transported and stored more conveniently for practical purposes. Once an aptamer is selected, it is inexpensive to produce, which may give aptamers a cost advantage for therapeutic purposes in comparison to immunoglobulins [9,14]. The primary clinical disadvantage of an aptamer is the short half-life in the body [3]. It is mainly due to nucleases degradations and high renal clearance. Nucleases within the bloodstream quickly degrade aptamers, which significantly reduces concentrations in the body, and thus prevents them from reaching their target sites [15]. This phenomenon is particularly significant with RNA aptamers. Ni et al. have recently reviewed the topic of chemical modifications of nucleic acid aptamers [16]. Briefly, the in vivo resistance to nucleases can be enhanced by modifying the main areas of susceptibility, the 5' and 3' terminal ends, and the phosphodiesterase backbone [3]. Modification to the sugar ring of the nucleoside and creating mirror images of the nucleic acid aptamer (Spiegelmers) can also enhance resistance to nuclease degradation. Additions of long bulky chains of biocompatible molecules, such as cholesterol and polyethylene glycol, can increase aptamers resistance to renal clearance [16]. However, modifying these areas after selection may risk alterations in binding affinity or specificity [17]. More recently, both academia and the industry have reported the introduction of unnatural or modified nucleotides to the library during aptamer selection. SomaLogic Inc. reported using five-position modified dC and dU to select SOMamer for proprotein convertase subtilisin/kexin type 9 (PCSK9) protein. The Hirao group reported using two highly hydrophobic unnatural bases, 7-(2-thienyl)imidazo [4,5-b]pyridine (Ds) and 2-nitro-4-propynylpyrrole (Px), to expand the genetic alphabet of their library to select high-affinity DNA aptamers. They termed this new SELEX variant ExSELEX [18]. In both cases, the modification produced aptamers with high affinity and enhanced nuclease resistance. Since the discovery, aptamers have been investigated for a wide array of therapeutic and biosensing applications. Many disease states have been researched [9]. The greatest success has been recognized in the treatment of age-related macular degeneration (ARMD), leading to the first approved aptamer drug, Macugen ® [5]. At the time of this review, ARMD is still the only FDA-approved aptamer for therapeutic applications. This may be because the clinical delivery of aptamer drugs to the localized area affected by ARMD has fewer pharmacokinetics challenges, i.e., adsorption, distribution, metabolism, and excretion. Nucleic acid aptamers that target various cancer cell surface proteins and biomarkers is a growing interest in both academia and the industry [19,20]. Aptamers have been shown to be used in conjunction with nanomaterials, such as carbon nanodots or gold nanoparticles, for applications in biosensing, payload delivery, or providing a direct therapeutic benefit by binding to their specific targets [21]. One additional disease state where aptamers may offer a unique opportunity is anticoagulation/antiplatelet therapy and monitoring. Summary of the Coagulation and Clotting Cascades In brief, anticoagulation agents work by interrupting the coagulation cascade. This cascade is triggered when the endothelium is damaged, and the tissue factor (TF) is exposed ( Figure 2). 
Then, factor VIIa responds and converts factor IX to IXa and factor X to Xa. Factor IXa complexes with factor VIIIa to further convert factor X to Xa. Factor Xa complexes with factor Va to convert factor II (prothrombin) into factor IIa (thrombin). In turn, thrombin converts factor I (fibrinogen) to factor Ia (fibrin). Calcium is necessary for the activity of many of the above clotting factors. Finally, the fibrin forms a mesh to promote clotting, and fibrin and fibrinogen interact with platelet receptors to promote adhesion and form the clot. Many current drugs on the market work by inhibiting thrombin or factor X, or by inhibiting vitamin-K reductase, which is needed to carboxylate several factors [22]. In addition to anticoagulation, some drugs act on the clotting cascade. The clotting cascade is closely related to the coagulation cascade ( Figure 3). In the clotting cascade, platelets are activated by the von Willebrand factor (VWF) and collagen fibers exposed by the initial endothelial injury. The activated platelet then upregulates GPIIb/IIIa receptors and releases thromboxane A2 (TxA2) and ADP to promote activation of additional platelets. Fibrin produced by the coagulation cascade and fibrinogen bind to GPIIb/IIIa receptors on platelets to promote adhesion to other platelets. Ultimately, this leads to the aggregation of platelets and clot formation at the site of injury [22]. Since there is a relatively large number of proteins that are either directly or indirectly involved in the coagulation and clotting cascades, many of them have been investigated as therapeutic targets for inhibition by nucleic acid aptamers. The following sub-section summarizes the different anticoagulation aptamers that have progressed into various stages of pre-clinical testing and clinical trials (Table 1). Von Willebrand Factor Inhibitors ARC-1779 is a DNA aptamer developed by the Archemix Corporation to target the human von Willebrand factor (vWF) [23]. vWF interacts with GP1b on platelets to affect the P2Y12 receptor, which leads to platelet activation. By inhibiting the von Willebrand factor, ARC-1779 blocks platelet activation and platelet aggregation, as well as thrombin generation. ARC-1779 is a 40-nucleotide aptamer that underwent PEGylation and methylation to increase its duration of action in vivo [23]. The clinical utility of ARC-1779 was investigated for cerebral embolism and thrombotic thrombocytopenic purpura (TTP). The high dose completely blocked the target domain on the vWF, whereas the low dose (0.5 µg/mL) did not, suggesting that the relationship between ARC-1779 and vWF inhibition may be dose-dependent. Also, ARC-1779 did not resolve all the features of TTP during the trials [41]. During this study, the aptamer did not cause any significant bleeding events and was generally well tolerated. In 2009, phase II studies for clinical use in TTP were completed; however, the sponsor decided to withdraw the study for further investigation. Matsunaga et al. developed several DNA aptamers that bind to the von Willebrand factor. Nimjee et al. recently developed a new anti-VWF RNA aptamer named DTRI-031 [24]. This aptamer was truncated to a 30-mer from the original 60-mer candidate aptamer. A 5-uracil tail was added at the 3' position for antidote oligonucleotide binding. DTRI-031 was able to inhibit platelet aggregation in whole blood and prevent thrombosis in mouse models in a dose-dependent manner.
Additionally, DTRI-031 was able to induce vascular recanalization in mouse and dog carotid arteries. The antidote oligonucleotide was also able to reverse the antiplatelet and anticoagulation effects of DTRI-031 [24]. Factor II/IIa Inhibitors ARC2172, also known as NU172, is a DNA aptamer produced by ARCA Biopharma and Nuvelo [25]. This aptamer was designed as a direct thrombin (factor II) inhibitor. Thrombin is the activated form of prothrombin and is responsible for converting fibrinogen into fibrin. By decreasing the production of fibrin, the fibrin mesh does not form, so the platelet plug cannot be stabilized into a clot. NU172 is a 26-nucleotide unmodified DNA aptamer [26]. Due to its unmodified nature, NU172 has a relatively short duration of action, with a half-life of 10 min in vivo [22]. The reported short half-life is consistent with the literature [17]. The anticoagulation effect of this aptamer was determined by the activated clotting time (ACT). A dose-dependent anticoagulating effect of this aptamer was seen during phase 1b trials [25]. No adverse events were reported in either of the phase I trials. An open-label, small-size phase II study was started, but the current status of the study is unknown (U.S. National Library of Medicine, ClinicalTrials.gov Identifier: NCT00808964). Muller et al. investigated two DNA aptamers (HD1 and HD22) that bind to the exosite 1 and exosite 2 domains of thrombin, respectively [27]. In combination, the bivalent aptamer (HD1-22) binds to thrombin with a very high affinity of Kd = 0.65 nM. They were also able to confirm that the bivalent aptamer's anticoagulant activity is close to that of bivalirudin and superior to that of argatroban. An antidote oligodeoxynucleotide was also developed to reverse the anticoagulation effect of HD1-22 in in vitro testing [27]. Bompiani et al. developed an optimized RNA aptamer, termed R9d14t, which can bind to both prothrombin (factor II) and thrombin (factor IIa) with high affinity (10 nM and 1 nM, respectively) [31]. It inhibited both thrombin formation and coagulation activities mediated by thrombin exosite I. The research group also developed a complementary oligonucleotide antidote to reverse the anticoagulation activities of R9d14t in in vitro settings [31]. No clinical studies have been conducted on R9d14t at this point. Kotkowiak et al. recently modified a previously identified thrombin-binding aptamer, RE31, with unlocked nucleic acid (UNA), locked nucleic acid (LNA), and β-L-RNA [28,42]. Modifications with UNA at the T15 residue and with LNAs altered the melting temperature of RE31 and generated twofold higher anticoagulation activity. None of the tested base modifications changed the G-quadruplex structure, and they also increased the aptamer's stability in human serum [28]. Wakui et al. reported using a variant of CE-SELEX, microbead-assisted capillary electrophoresis (MACE) SELEX, to identify several thrombin-binding DNA aptamers with nanomolar affinity [29]. The reported aptamer, M08, also demonstrated 10- to 20-fold higher anticoagulation activity than earlier thrombin-binding aptamers in in vitro studies. The authors also developed an antidote system for the M08 aptamer [29]. Most recently, Zhou et al. reported using a predetermined DNA nanoscaffold to identify a bivalent DNA aptamer (145-mer) that can bind to both exosite I and exosite II of thrombin with femtomolar affinity [30].
The bivalent aptamer was able to produce an anticoagulation effect at as low as 5-nM aptamer concentration in human plasma samples. In addition to identifying novel thrombin-binding aptamers, several research groups recently reported using novel strategies to investigate the anticoagulation activities of HD1 and HD22 under various conditions. Derszniak et al. compared the effects of HD1 and HD22 under flow conditions. The author concluded that HD1 demonstrated stronger anti-thrombotic agents, while HD22 demonstrated weaker antiplatelet and anticoagulation activities due to its binding to exosite II [43]. Lin et al. utilized graphene oxide (GO) to adsorb HD1 and HD22 aptamers via π-π stacking [44]. The GO-aptamer assembly was tested to show superior anticoagulation activity to that of traditional anticoagulant drugs in vitro and in the animal model. The author also reported the high biocompatibility of the assembly in in vitro cytotoxicity and hemolysis assays [44]. Krissanaprasit et al. reported using an in silico design to construct RNA origami containing multiple thrombin-binding RNA aptamers [45]. After introducing the 2'-fluoro modification in C and U nucleotides, the RNA origami structure showed increased resistance to nuclease degradation and also greater anticoagulant activity than free aptamers, which can also be reversed by using ssDNA antidotes [45]. Amato et al. investigated the effect of the different linkers in constructing a dimeric HD1 aptamer [46]. Adenosine or thymidine residues and glycerol moiety were used as the linker in joining the two aptamers. The authors reported the linkers, although they did not alter the global conformation of the aptamers; the binding affinities were different among linker constructs. The authors concluded that the highest affinity dimeric construct demonstrated the highest anticoagulant activity and the most resistance to degradation in biological samples [46]. Factor IXa Inhibitors REG1, also known as pegnivacogin, is an RNA aptamer that acts as a direct factor IXa inhibitor [32]. A factor IXa inhibitor will prevent the formation of the IXa/VIIa complex, which is an activator of factor Xa. Factor Xa is the rate-limiting enzyme for coagulation, and by inhibiting the VIIa/IXa complex, the process is considerably slower. Factor IXa inhibitors have been investigated for their efficacy within anticoagulation and the simultaneous decreased risk of bleeding. Pegnivacogin is a PEGylated RNA aptamer composed of 31 nucleotides [32]. After administration, pegnivacogin reaches a maximum concentration in the plasma within 5 min or less, and can last for over 24 hours at doses of 0.7 mg/kg. An activated partial thromboplastin time (aPTT) is used to determine the efficacy of pegnivacogin. At doses of 0.7 mg/kg or greater, the aptamer resulted in a 2.5-times increase in aPTT from baseline. In addition to pegnivacogin, its reversal, anivamersen, was developed. Anivamersen is a 15-nucleotide RNA sequence, and it binds to its target using Watson-Crick base pairing [32]. The relationship between anivamersen and the reversal of pegnivacogin is dose-dependent, and the anticoagulation effect is reversed within 5 min of administration of anivamersen in a 1:1 ratio. Pegnivacogin has already undergone phase Ia, Ib, Ic, IIa, and IIb clinical trials. The phase IIb clinical trial studied patients undergoing PCIs or CABGS, and was deemed the RADAR trial. However, enrollment was stopped early after three patients had a severe allergic reaction in the REG1 arm [34]. 
The investigators later concluded that the allergic reaction was due to patients' pre-existing anti-PEG antibodies and was not due to the aptamer itself [47,48]. The RADAR trial determined that pegnivacogin, in combination with its reversal agent, may lower the risk of bleeds in comparison to unfractionated heparin during artery sheath removal. Additionally, a phase 3 trial comparing the efficacy of REG1 to bivalirudin in preventing periprocedural ischemic events was terminated due to formulation-induced allergic reactions and a lack of evidence of efficacy [33]. During the REG1 study phases, the subcutaneous delivery of pegnivacogin in combination with the intravenous delivery of its reversing agent (REG2) was also studied in healthy individuals and showed no toxicities [35]. More recently, an in vitro study was published that showed a reduction in platelet aggregation as well as its reversal when anivamersen was introduced [49]. A follow-up post-clinical trial study showed that anti-PEG antibodies interacted with the PEGylated RNA aptamer and directly inhibited the anticoagulation properties of the aptamer both in vivo and in vitro [50]. Factor Xa Inhibitors The RNA aptamer 11F7t was first reported in 2010 as a factor Xa inhibitor [36]. It binds to factor Xa with high affinity (K D = 1.1 nM) and blocks Xa/Va assembly. More recently, Gunaratne et al. investigated the anticoagulation ability of 11F7t in combination with other factor Xa inhibitors currently on the market, i.e., rivaroxaban (oral), apixaban (oral), edoxaban (oral), or fondaparinux (subcutaneous), in in vitro assays [51]. The combination of 11F7t and Xa inhibitors achieved anticoagulation potency equal to that of unfractionated heparin, which is not achievable by any of the agents alone. Also, the administration of an inactive decoy Xa protein was able to neutralize the anticoagulation effects. 11F7t, alone and in combination, was also not affected by the immunoglobulins that result from heparin-induced thrombocytopenia. All of these findings suggest the potential for 11F7t in conjunction with other Xa inhibitors to be an alternative to unfractionated heparin [51]. Factor XIa Inhibitors The Factor Eleven Inhibitory Aptamer, FELIAP, is a DNA aptamer that was developed by the Canadian Blood Services at the Thrombosis and Atherosclerosis Research Institute, and McMaster University [37]. FELIAP acts as a factor XIa inhibitor. Factor XIa helps to convert factor IX to IXa; inhibiting it therefore prevents the activation of factor X, which in turn prevents the activation of thrombin. This ssDNA aptamer is 74 nucleotides in length (K D = 1.8 nM) [37]. It was hypothesized that FELIAP binds near the active site of factor XIa, since it competitively inhibited the cleavage of a substrate by factor XIa [37]. This theory was further supported because FELIAP did not prevent the activation of factor XI and inhibited two reactions mediated by factor XIa in in vitro experiments. A thrombin generation assay showed that FELIAP reduced the endogenous thrombin potential, while it increased the mean time to peak of thrombin [37]. No in vivo tests have been performed on FELIAP at this point. Additionally, Woodruff et al. identified two additional aptamers, 11.16 and 12.7, that noncompetitively inhibit factor XIa. 11.16 is an RNA aptamer with a 29-nucleotide random region sequence, and 12.7 is an RNA aptamer with a 49-nucleotide random region sequence [38]. Aptamer 12.7 inhibits FXI and FXIa, whereas 11.16 was found to inhibit FXIa.
Both aptamers significantly inhibited the factor FXIa-mediated activation of factor IX. It was determined that the aptamer bound to an anion binding and serpin bindings sites on the factor XIa catalytic domain. However, the control aptamers that were used showed a small amount of inhibition. The author suspected that the negative charge of aptamers might contribute to the inhibition of this target. Aptamer 12.7 increased the aPTT time by approximately 15 seconds as compared to the control library during an aPTT assay. Aptamer 11.16 showed a similar increase in aPTT time compared to the control library, indicating it may interact with other components within the test [32]. At this point, no further analysis of either aptamer has been conducted. Factor XII/ XIIa Inhibitors Woodruff et al. reported an RNA aptamer, R4cXII-1 aptamer (K D = 8.9 nM/0.5 nM), that can inhibit the autoactivation of factor XIIa and the factor XIIa-mediated activation of factor XI [39]. It prolonged fibrin formation and thrombin generation in coagulation assays. However, this aptamer was not able to stop the activation of the plasma kallikrein through factor XII-mediated action [39]. No clinical testing data is available at this time. Kallikrein Inhibitors Kall1-T4 is an RNA aptamer developed by the Duke Medical Center [40]. Kall1-T4 targets both kallikrein (K D = 0.88 nM) and prekallikrein (K D = 0.28 nM). Kallikrein is responsible for factor VIIa amplification, which is one of the first components of the coagulation cascade. Prekallikrein is the precursor to kallikrein, and is cleaved by the factor VIIa and kininogen complex. Kallikrein plays a role in the inflammatory response. Kallikrein helps to activate bradykinin, which is responsible for vasodilation and inflammation. This RNA aptamer is 54 nucleotides in length [40]. An activated partial thromboplastin time (APTT) was performed to determine the anticoagulation effects of this aptamer. The results showed a dose-dependent increase in clotting time. During a kallikrein-mediated bradykinin assay, Kall1-T4 was determined to reduce the release of bradykinin significantly. No in vivo trials have been performed with Kall1-T4 thus far. Aptamer in Anticoagulation Monitoring The role of aptamer in anticoagulation monitoring has been investigated heavily in the past two decades. The biosensing application of aptamers to detect thrombin has been extensively explored since the identification of the first thrombin-binding DNA aptamer (TBA) in 1992 [52]. One of the most intriguing features of the TBA was the antiparallel G-quadruplex structure formation upon binding to thrombin ( Figure 4) [53]. Although other aptamers have been shown to have a G-quadruplex structure, only TBA has the induced G-quadruplex formation upon target recognition [54,55]. It was reported that the G-quadruplex structure in TBA was essential in targeting recognition and its inhibitory effect [53,56]. The uniqueness of the TBA leads to extensive research on its utility in biosensors utilizing various optical, electrical, electrochemical, and mass transferred principles. This topic was reviewed by Deng et al. and Ong et al. recently [57,58]. In contrast to the original research of NU172, HD1, and HD2 as previously discussed therapeutic agents, Trapaidze et al. interrogated their selectivity using surface plasmon resonance measurement [26]. HD22 fit a 1:1 aptamer to thrombin interaction, whereas HD1 and NU172 interacted with thrombin at multiple sites and fit a heterogeneous analyte binding model. 
It was reported that each aptamer could detect thrombin in the nanomolar range. Individually tested aptamers were introduced to albumin, and both HD1 and NU172 produced a signal in the presence of albumin alone. All three aptamers were able to detect thrombin in the presence of albumin during this experiment, thus confirming their selectivity of thrombin over albumin. HD1 and HD22 were added to the diluted mouse plasma to determine each aptamer's sensing ability in a more complex environment. HD1 provided similar signals on the SPR sensorgram with or without thrombin, but HD22 showed signal peaks corresponding to the concentration of thrombin. The author concluded that HD22 had the highest potential to be explored as a biosensor for thrombin [26]. In addition to the detection of clotting factors for thrombotic events, recently researchers have also begun investigating using aptamers to measure levels of direct oral anticoagulants. Direct oral anticoagulants (DOACs), such as direct thrombin inhibitors (dabigatran) and direct factor Xa inhibitors (e.g., apixaban) were developed as an alternative to the traditional vitamin-K antagonist (warfarin). Most patients do not require therapeutic monitoring for DOACs. However, it can be useful in managing surgical patients and patients with multiple comorbidities [59]. Currently, dabigatran can be monitored with thrombin time (TT) with high sensitivity. On the other hand, direct factor Xa inhibitors can only be measured with surrogate tests, such as anti-Xa level and thrombin generation assays. These assays are not routinely used due to a lack of extensive clinical studies [59,60]. Traditional liquid chromatography coupled with mass spectroscopy may also be used to precisely measure the drug level in human plasma, although it is labor-intensive, time-consuming, and requires costly equipment [61]. In 2018, Alhojani et al. used the SELEX process to identify four single-stranded DNA aptamers-DGB-1, DBG-2, DBG-4, and DBG-5-that could bind to dabigatran [62]. DBG-1 had the highest binding affinity to dabigatran, followed closely by DGB-5. DBG-1 underwent modification before it was immobilized on a gold-covered electrode that detected current. Dabigatran was added to the probe solution in varying concentrations. The change in target concentration showed a corresponding reduction of the electric current. Additionally, a conformational change was suspected when the dabigatran molecule bound to the aptamer. The author concluded that this aptamer has the potential for therapeutic monitoring of dabigatran as well as being a potential reversal agent with further exploration and research [62]. Conclusions and Future Perspective Since the first description of the thrombin-binding DNA aptamer, multiple aptamers have been isolated to target key clotting factors and co-factors for the treatment and monitoring of thrombotic events. [53]. Researchers have utilized various techniques, such as base modification and unnatural bases library expansion to modify the basic building blocks of nucleic acid aptamers, resulting in increased resistance to nuclease degradation and body retention. However, the lack of approved aptamer-based anticoagulation agents suggested that challenges remain in therapeutic uses of aptamers in the field. Likewise, although aptamers can act as capturing elements in different biosensing platforms, the road to fully commercialized aptamer-based rapid drug monitoring technologies is still somewhat distant. 
One crucial point to be noted is that aptamer screening and application is a highly multidisciplinary field of research. Collaborations between biologists, chemists, and engineers are essential to the success of aptamer-based technologies.
5,896.8
2019-07-26T00:00:00.000
[ "Biology" ]
Disseminated Cryptococcosis in an HIV-Negative Patient With Liver Cirrhosis and Asplenia: A Rare but Dreadful Disease Cryptococcosis (cryptococcal infection) is a severe life-threatening fungal infection. It is seen worldwide, specifically in immunocompromised, mainly in human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS)-infected individuals. Cryptococcal infection can present with meningitis, pneumonia, peritonitis, disseminated cryptococcosis, and cryptococcal fungemia. Here, we report the case of an HIV-negative Caucasian male in his early 50s with liver cirrhosis and asplenia who presented to our hospital with bilateral foot cellulitis and pneumonia. He was eventually diagnosed with disseminated cryptococcosis. Even with appropriate treatment, he developed multiorgan failure and finally expired. The disseminated cryptococcal infection has a very high mortality rate in patients with liver cirrhosis and asplenia. Liver cirrhosis is an independent risk factor, and asplenia is a comorbid condition for cryptococcal infection in HIV-negative patients. Healthcare providers should have a high suspicion of cryptococcosis in these patients. Early testing with cryptococcal antigen assay and initiation of an appropriate antimicrobial regimen can help minimize bad outcomes. Introduction Cryptococcus neoformans (CN) and Cryptococcus gattii (CG) are ubiquitous invasive fungi transmitted through the inhalation of microscopic spores and cause cryptococcosis [1]. Despite the lung being the site from where cryptococcus enters the body, meningoencephalitis is the most common clinical manifestation. CN infections are more common than CG infections and can cause meningitis, peritonitis, pneumonia, urinary tract infection, cellulitis, osteomyelitis, and even disseminated infection, causing multiorgan failure [2]. The most common symptoms in patients with cryptococcal infection are fever, nausea, vomiting, headache, neck stiffness, abdominal pain, difficulty breathing, and dizziness [1,2]. Worldwide, over a million cases of cryptococcosis are reported each year, with approximately 625,000 deaths [1]. The estimated incidence of cryptococcosis in the United States is about 0.4-1.3 cases per 100,000. In people with acquired immunodeficiency syndrome (AIDS), the incidence is 2-7 cases per 100,000, with a case fatality rate of about 12% [2]. The global incidence of cryptococcal infections in human immunodeficiency virus (HIV) has declined drastically over the last two decades owing to advances in antiretroviral therapy [2]. HIV/AIDS, decompensated liver disease, cell-mediated immunosuppressive regimen without calcineurin inhibitors, long-term steroid use, and autoimmune diseases are independent risk factors for invasive cryptococcosis [3]. Decompensated cirrhosis patients with Child-Pugh class B and C are more likely to have extrapulmonary cryptococcosis which is associated with increased mortality [4]. Invasive procedures, longterm steroid use, antibiotics use, malnutrition, asplenia, active cancer, diabetes mellitus, and solid organ transplantation are comorbid conditions for cryptococcosis in HIV-negative patients [5]. With increased awareness and testing, cryptococcosis is increasingly reported in patients with cirrhosis, accounting for 6-21% of all systemic fungal infections with a mortality rate of up to 76% [6,7]. Many cases of disseminated cryptococcal infection with liver cirrhosis are reported in medical literature and only two cases with asplenia. 
The disseminated cryptococcal infection was not previously reported in patients with combined asplenia and liver cirrhosis. This case report highlights the increased probability of invasive cryptococcal infection in asplenia with liver cirrhosis. We encourage healthcare providers to identify this dreadful infection early to improve patient outcomes. Case Presentation A Caucasian male in his early 50s was sent from his podiatrist's office to the emergency room for nonhealing bilateral foot ulceration, cellulitis, productive cough, and fever. His past medical history was significant for alcoholic liver cirrhosis, no ascites or paracentesis, and non-healing bilateral chronic foot ulcers requiring multiple rounds of oral and intravenous (IV) antibiotics. He was currently not on any medications. His previous wound cultures grew methicillin-resistant Staphylococcus aureus. He had quit alcohol four years ago and had never smoked. On examination, his temperature was 38.9°C (102.1°F), pulse rate was 117 beats/minute, respiratory rate was 20 breaths/minute, and blood pressure was 146/85 mmHg. He appeared in mild respiratory distress. Lungs were clear to auscultation with slightly decreased breath sounds in the left base and regular S1-S2 with tachycardia. His abdomen was non-distended and non-tender with normal bowel sounds. Bilateral extremities skin showed multiple superficial wounds on the dorsum of the feet, with erythema and purulent drainage. A basic workup in the emergency room showed an elevated white blood cell count (WBC) (18,600/µL) and lactic acid (5 mmol/L). Other laboratory values were normal, including liver function tests, creatinine, electrolytes, hemoglobin, platelets, coagulation panel, and urinalysis. He was admitted to the medical floor. X-rays of both feet showed no osteomyelitis, and a chest X-ray showed a small left lower lobe consolidation and a trace left pleural effusion. He was admitted for sepsis from left-sided pneumonia and possible recurrent cellulitis of bilateral foot ulcerations secondary to chronic venous stasis. Blood culture and wound culture were obtained, and he received goal-directed IV fluid resuscitation for sepsis and was started on IV vancomycin and piperacillintazobactam. The following day he complained of worsening shortness of breath, requiring high-flow oxygen and a transfer to the intensive care unit (ICU). To better identify the cause of his symptoms, a computerized tomogram (CT) of the chest, abdomen, and pelvis without IV contrast was performed. CT showed bilateral lung consolidation (more on the left), with moderate left and small right pleural effusions and slight ascites ( Figure 1). FIGURE 1: CT of the abdomen without contrast. CT of the abdomen without contrast showing cirrhosis (blue arrow), ascites (yellow star), and absent spleen (red arrow). A CT of the bilateral lower extremities without contrast showed subcutaneous edema suggestive of cellulitis. He underwent emergent left-sided thoracentesis to relieve his hypoxia and respiratory distress, and pleural fluid analysis showed elevated WBC, lactate dehydrogenase, and neutrophils, as shown in Table 1. Follow-up lactic acid was 7.8 mmol/L and 11 mmol/L. Ultrasound of the abdomen did not show any ascitic fluid amenable for paracentesis. He had a negative HIV antibody (enzyme-linked immunoassay) test and polymerase chain reaction. On hospital day four, his mentation deteriorated. He became hypotensive and anuric and developed respiratory failure requiring intubation. 
Vancomycin trough levels were normal, and we thought that the ongoing sepsis might be the reason for the overall decline in the clinical situation. Admission blood cultures grew CN. We were convinced he might have disseminated cryptococcal infection and immediately started him on liposomal amphotericin B. We could not administer flucytosine because of his end-stage liver disease. Serum cryptococcal antigen measured by indirect enzyme immunoassay (EIA) was 1:32. Follow-up blood and pleural fluid cultures grew CN. Because of fungemia and altered sensorium before intubation, a lumbar puncture was performed on day five, which showed an elevated opening pressure with significant cerebrospinal fluid (CSF) cryptococcal antigen titer, as shown in Table 2. He underwent repeated spinal taps with a slight improvement in opening pressures, and persistent yeast forms were seen on microscopic examination. He was finally diagnosed with a disseminated cryptococcal infection, likely secondary to liver cirrhosis, asplenia, and a recent course of prolonged antibiotics. His wound culture only grew Klebsiella and Pseudomonas. Unfortunately, his hospital course deteriorated. He developed shock and acute renal injury and required vasopressors and continuous renal replacement therapy. He developed disseminated intravascular coagulation and multiorgan failure with no meaningful outcome and progressively deteriorating health. His family decided to provide comfort care. The patient passed peacefully on day 14 of hospital admission. Discussion Disseminated cryptococcosis is defined by either a positive blood culture (fungemia) or a single positive culture from at least two organ systems [8]. Cirrhotic patients have compromised immunity, making them more prone to opportunistic infections, particularly CN. Asplenic patients are more prone to encapsulated bacterial infections, but some case reports suggest the prevalence of fungal infections, specifically CN. Although the respiratory tract is the usual port of entry for cryptococcus, the gastrointestinal tract can also serve as another potential entry site in immunocompromised patients with HIV, liver cirrhosis, and a history of renal disease [9]. In a retrospective study by Chuang et al., in HIV-uninfected patients with cryptococcosis, 36% had liver cirrhosis, 33% had diabetes mellitus, and 27% had autoimmune diseases. All patients with liver cirrhosis and disseminated cryptococcosis died within the first month [10]. A retrospective study by Zhou et al. showed that increased activated partial thromboplastin time and Child-Pugh class B or C were associated with increased mortality with cryptococcosis in liver cirrhosis. The Model for End-stage Liver Disease sodium score was significant for predicting 30-day mortality, and the Child-Pugh score was more helpful in predicting 90-day mortality [11]. The risk factors for cryptococcus in our patient were a Child-Pugh score of 9 (class B), asplenia, and multiple rounds of antibiotics before this hospital admission. The host immune response to cryptococcal infection includes cell-mediated (T cell and natural killer cell) and humoral (antibody) immunity. In cirrhosis, innate and adaptive immunity failure leads to altered intracellular signaling pathways, damaging gastrointestinal tract lymphoid tissues, and circulating immune cells. This phenomenon, called cirrhosis-associated immune dysfunction syndrome, is thought to contribute to the increased incidence of systemic fungal infections in cirrhosis [12]. 
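For readers unfamiliar with the Child-Pugh classification used above to stratify this patient (score 9, class B), a minimal scoring sketch follows. The cut-offs encoded here are the standard published ones; the helper function, its parameter names, and the example laboratory values are ours for illustration and are not taken from the case report.

```python
# Minimal sketch of the standard Child-Pugh score (not from the case report).
# Each of five parameters contributes 1-3 points; total 5-6 = class A,
# 7-9 = class B, 10-15 = class C.

def child_pugh(bilirubin_mg_dl, albumin_g_dl, inr, ascites, encephalopathy):
    """ascites: 'none'|'mild'|'moderate'; encephalopathy: 'none'|'grade1-2'|'grade3-4'."""
    points = 0
    points += 1 if bilirubin_mg_dl < 2 else 2 if bilirubin_mg_dl <= 3 else 3
    points += 1 if albumin_g_dl > 3.5 else 2 if albumin_g_dl >= 2.8 else 3
    points += 1 if inr < 1.7 else 2 if inr <= 2.3 else 3
    points += {"none": 1, "mild": 2, "moderate": 3}[ascites]
    points += {"none": 1, "grade1-2": 2, "grade3-4": 3}[encephalopathy]
    grade = "A" if points <= 6 else "B" if points <= 9 else "C"
    return points, grade

# Hypothetical values consistent with a score of 9 (class B), as in this patient
print(child_pugh(bilirubin_mg_dl=2.5, albumin_g_dl=2.9, inr=1.8, ascites="mild", encephalopathy="none"))
```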
The precise mechanism for the increased mortality from fungal infections in asplenic humans is not known, but animal models suggest that abnormal antimicrobial function of peritoneal macrophages (PMφ) impairs intracellular destruction of fungi (Candida) and increases mortality [13]. Cryptococcal infection can present as meningitis, pneumonia, peritonitis, or fungemia, and pleural effusion sometimes accompanies the pneumonia. Both pleural and ascitic fluid are usually exudative, with a lymphocyte predominance and cultures negative for bacterial growth. Testing for cryptococcal disease has evolved over the last few years, with particular improvements in cryptococcal antigen (CrAg) testing. The sensitivity of the latex agglutination CrAg test is 97.5%, and its specificity is 85-100%. A newer point-of-care lateral flow immunoassay is now in use, with a sensitivity and specificity of 99% and a rapid turnaround time of as little as 12 hours [14]. In a retrospective, observational study by Cheng et al., the most common cryptococcal infections in liver cirrhosis were meningitis, pneumonia, fungemia, and skin and bone infection [4]. Cryptococcal antigen assays on blood, CSF, pleural fluid, and peritoneal fluid rapidly identify cryptococcal infection in high-risk populations. Microscopic examination and cultures are valuable but can delay the diagnosis, thus missing the opportunity for early treatment [15]. The Infectious Diseases Society of America (IDSA) recommends that all patients with pulmonary cryptococcosis and fungemia be evaluated for cryptococcal meningitis regardless of risk factors. In a retrospective study by Bradley et al., around 40% of patients without HIV who had pulmonary cryptococcosis had disseminated disease, including meningitis [16]. Our patient had pulmonary, meningeal, and bloodstream cryptococcal infections. All patients with cryptococcosis should also be tested for HIV. The current standard of therapy for cryptococcal meningitis and disseminated cryptococcal infection consists of induction therapy with amphotericin B plus flucytosine for two weeks, followed by consolidation with fluconazole 400 mg daily for eight weeks and maintenance with fluconazole 200 mg/day for six months (IDSA recommendation). Despite its dose-limiting toxicity, amphotericin B remains the standard treatment for disseminated cryptococcal infection because it is fungicidal. Using lipid carriers (liposomal formulation, lipid complex formulation, and colloidal dispersion) reduces the side effects of amphotericin B, and resistance to amphotericin B is extremely rare. Flucytosine should be combined with amphotericin B, but it must be used cautiously because it is highly hepatotoxic and myelotoxic [17]. A study by Tariq et al. showed that antifungal therapy is underutilized in non-HIV immunocompromised populations [18]. A retrospective study published in July 2022 by Liu et al. showed that amphotericin B and flucytosine combined with voriconazole rapidly improve clinical manifestations, decrease CSF opening pressure, clear cryptococcus from the CSF in the early phase, and substantially shorten hospitalization time in non-HIV, non-transplant-associated cryptococcal meningitis [19]. Screening for cryptococcal antigen is of limited use in asymptomatic HIV-negative patients: a prospective study by Suh et al. showed that serum cryptococcal antigen positivity is very low in hospitalized liver cirrhosis patients without infection [20]. Further studies are needed to evaluate this recommendation. 
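The quoted sensitivity and specificity figures only translate into useful predictive values once the pre-test probability is taken into account, which is one way to see why CrAg screening performs poorly in low-prevalence, asymptomatic HIV-negative patients. The sketch below is a generic Bayes calculation; the 10% prevalence is an arbitrary illustrative assumption, not a figure from the cited studies.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive/negative predictive value for a test applied to a
    population with the given disease prevalence (all as fractions)."""
    tp = sensitivity * prevalence
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

# CrAg EIA figures quoted in the text; the prevalence is a made-up example.
for spec in (0.85, 1.00):
    ppv, npv = predictive_values(sensitivity=0.975, specificity=spec, prevalence=0.10)
    print(f"specificity {spec:.2f}: PPV {ppv:.1%}, NPV {npv:.1%}")
```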
However, in people with HIV/AIDS, the CrAg test can positively detect the cryptococcal antigen in serum 22 days before symptoms of meningitis develop and helps save lives [21]. Our case demonstrates the importance of considering cryptococcal infection even in well-compensated liver cirrhosis (Child-Pugh class B) and other comorbid factors such as asplenia and prolonged antibiotics use. Our case was challenging because the patient presented with sepsis from both cellulitis and pneumonia, which later became a disseminated cryptococcal infection. Despite appropriate antimicrobial treatment, his hospital course worsened rapidly and was ultimately fatal. Unfortunately, we did not perform the serum CrAg test in our emergency room. Performing a serum CrAg test in the emergency room is helpful in either ruling in or ruling out cryptococcus infection early. It facilitates the early initiation of appropriate antifungal treatment in patients with risk factors. Conclusions CN is a common fungal infection in HIV and AIDS patients. Still, this dreadful infection can also affect other immunocompromised people (liver cirrhosis, diabetes mellitus, etc.). A high index of suspicion should be kept for cryptococcal infection in high-risk individuals as it has very non-specific symptoms such as fever, nausea, headache, and dizziness. Serum, CSF, and peritoneal fluid CrAg testing should be done along with blood and CSF cultures, and CrAg tests have a rapid turnaround time. Indian Ink CSF fluid analysis can also be used but has low sensitivity. Disseminated cryptococcal infections typically have poor outcomes even with aggressive treatment, and early identification might play a role in improving outcomes. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
3,221.4
2023-04-01T00:00:00.000
[ "Medicine", "Biology" ]
A photoelectron imaging study of the deprotonated GFP chromophore anion and RNA fluorescent tags † Green fluorescent protein (GFP), together with its family of variants, is the most widely used fluorescent protein for in vivo imaging. Numerous spectroscopic studies of the isolated GFP chromophore have been aimed at understanding the electronic properties of GFP. Here, we build on earlier work [A. V. Bochenkova, C. Mooney, M. A. Parkes, J. Woodhouse, L. Zhang, R. Lewin, J. M. Ward, H. Hailes, L. H. Andersen and H. H. Fielding, Chem. Sci. , 2017, 8 , 3154] investigating the impact of fluorine and methoxy substituents that have been employed to tune the electronic structure of the GFP chromophore for use as fluorescent RNA tags. We present photoelectron spectra following photoexcitation over a broad range of wavelengths (364– 230 nm) together with photoelectron angular distributions following photoexcitation at 364 nm, which are interpreted with the aid of quantum chemistry calculations. The results support the earlier high-level quantum chemistry calculations that predicted how fluorine and methoxy substituents tune the electronic structure and we find evidence to suggest that the methoxy substituents enhance internal conversion, most likely from the 2 pp * state which has predominantly Feshbach resonance character, to the 1 pp * state. Introduction Green fluorescent protein (GFP) is a fluorescent protein found in coelenterates, such as the jellyfish Aequorea victoria, which emits green light with a high quantum yield. 1 GFP is often employed as a molecular marker in biology as it may be genetically tagged, non-perturbatively, onto a protein of interest. 2 The deprotonated GFP chromophore, p-hydroxybenzylidene-2,3-dimethyl-4-imidazolinone (p-HBDI À ), has been the focus of numerous spectroscopic studies in the gas-phase [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20] and in solution. [21][22][23][24][25][26][27][28][29][30] In 2001, the gas-phase absorption spectrum of p-HBDI À was reported to be very similar to that of the deprotonated form of GFP. 31 This observation triggered numerous investigations into the electronic structure and dynamics of the isolated chromophore anion aimed at shedding light on the roles of the chromophore and the protein environment in tuning the fluorescence properties. [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]32,33 Gas-phase studies have also allowed the higher lying electronically excited states to be characterised, [6][7][8][9][11][12][13][14][15][16]18 which is difficult in the protein because the UV absorption of the chromophore overlaps with the UV absorption of amino acid residues. Characterising the higher lying electronically excited states is particularly important for understanding their role in UV photoinitiated reactions of GFP, such as decarboxylation and cis-trans isomerisation. [34][35][36][37][38][39] Substituting methoxy groups and fluorine atoms at the ortho-positions of the phenolate building block of the chromophore ( Fig. 1) has been found to tune the pK a , the ability of the chromophore to bind to specific RNA sequences and the emission wavelength. 
40,41 There is only one previous gas-phase study of these biomimetic chromophores, which compared photoelectron spectra of p-HBDI À with those of 3,5-difluoro-p-HBDI À (DF-HBDI À ) and 3,5-dimethoxy-p-HBDI (DM-HBDI À ) at 346 nm and 328 nm, and presented XMCQDPT2/aug-cc-pVTZ calculations of their vertical excitation energies (VEEs) and vertical detachment energies (VDEs). 14 From this study it was concluded that the UV photoelectron spectra of p-HBDI À should be interpreted in terms of direct S 0 -D 0 photodetachment and indirect detachment via the 3pp* state which has, predominantly, excited shape resonance character with respect to the D 0 continuum. It was found that the electron withdrawing effect of the fluorine atoms resulted in the VDE being raised from around 2.7 eV for p-HBDI À to 2.95 eV for DF-HBDI À . The VDE of DM-HBDI À was found to be similar to that of p-HBDI À , which was rationalised in terms of opposing inductive and mesomeric effects of the methoxy groups. In both biomimetic chromophores, the bright excited shape resonance was found to be shifted higher in energy and resonant autodetachment processes evident in the spectra were attributed to np*-D 0 and np*-D 1n detachment processes. Here, we build on this work and present significantly improved quality photoelectron spectra and photoelectron angular distributions (PADs) of the deprotonated anions of p-HBDI À , DF-HBDI À and DM-HBDI À following photodetachment over the wavelength range 364-230 nm. This allows us to establish that resonant detachment from the 2pp* electronically excited state and internal conversion from the 2pp* state to a lower lying electronically excited state, or S 0 , play a role in the electronic relaxation mechanism following UV photoexcitation. Experimental p-HBDI, DF-HBDI and DM-HBDI were synthesised using reported procedures. 40,42 Anion photoelectron spectra and photoelectron angular distributions were recorded using our electrospray ionisation (ESI) velocity-map imaging (VMI) instrument that has been described in detail elsewhere. 43 For the negative ion electrospray, 1 mM solutions of p-HBDI À or DF-HBDI À , in methanol, or DM-HBDI À , in 9/1 (v/v) methanol/water, were treated with a few drops of ammonia as a base to shift the equilibrium towards the deprotonated anion, enhancing the observed anion signal. Singly-charged anions produced by ESI were massselected in a quadrupole mass filter, accumulated in a hexapole ion trap and thermalised using Ar or He gas before being focused into the source region of a VMI photoelectron spectrometer. Wavelengths in the range 364-315 nm were generated by second harmonic generation of the output of a dye laser pumped by a frequency-doubled nanosecond Nd:YAG laser operating at 20 Hz. 230 nm was generated using the third harmonic of the output of the dye laser. The resulting photoelectrons were imaged on a 2D CCD detector coupled to a phosphor screen. Background counts arising from collisions with the detector or from ionization of background gas by the laser were also recorded and subtracted and the resulting images were inverted using the pBASEX method. 44 The eKE was calibrated using the photodetachment spectrum of I À and the eKE resolution was determined to be r5% for the measurements presented here. 
For a one-photon detachment process with linearly polarised light, the PAD can be expressed as where I(y) is the probability of photoelectron emission at an angle y with respect to the laser polarisation, P 2 (cos y) is the second-order Legendre polynomial and b 2 is the asymmetry parameter. 45 The two limiting values of b 2 are +2 and À1, corresponding to photoelectron emission predominantly parallel and perpendicular to the laser polarisation, respectively. Computational Anion geometries were optimised using density functional theory 46 (DFT) with the B3LYP functional 47,48 and the 6-311++G(3df,3pd) basis set 49 within the Gaussian09 program suite. 50 Vertical detachment energies (VDEs) were calculated using the equation-of-motion coupled-cluster method with single and double excitations for the calculation of ionisation potentials 51 (EOM-IP-CCSD) within the Q-Chem program package. 52 For p-HBDI À and DF-HBDI À , the EOM-IP-CCSD calculations were run with the aug-cc-pVDZ basis set, 53 while the smaller 6-311++G(d,p) basis set was used for the rotamers of DM-HBDI À to reduce computational expense. b 2 parameters were calculated over the relevant eKE range using ezDyson with the analytical averaging method to provide lab-frame angular distributions from the molecular frame PADs; 54 the Dyson orbitals for the S 0 -D 0 transition were obtained from the EOM-IP-CCSD calculations. The ezDyson calculations presented here use a plane wave instead of a Coulomb wave for the photoelectron wavefunction, which has been demonstrated to yield accurate trends in b 2 parameter for several molecular anions. [55][56][57][58][59] The 6-311++G(d,p) basis set was benchmarked against the previously used aug-cc-pVDZ basis set for the EOM-IP-CCSD and ezDyson calculations (Fig. S5 in the ESI †) for phenolate and p-HBDI À . For p-HBDI À , the calculated VDEs were lower by B0.05 eV and good agreement was found in the ezDyson predicted trends in the photoelectron angular distributions as a function of eKE. Our observation that reducing the number of polarisation functions, whilst maintaining the same number of diffuse functions, has a minimal impact on the calculated angular distributions for these closed-shell anions is in agreement with work reported by Anstöter et al. 58 3 Results Computational The B3LYP/6-311++G(3df,3pd) structures, EOM-IP-CCSD calculated VDEs and corresponding D 0 hole orbitals are presented in Table 1 for p-HBDI À , DF-HBDI À and the rotamers of DM-HBDI À . Of the four rotamers of DM-HBDI À , the most stable are the syn and anti forms, which are isoenergetic and stabilised relative to the others by two hydrogen-bonding interactions between the phenolate oxygen and hydrogen atoms on the methyl groups, with each hydrogen-bonding interaction stabilising the negative charge by B0.15 eV, in excellent agreement with previous EOM-IP-CCSD/aug-cc-pVDZ calculations of the VDEs of the four rotamers of 2,6dimethoxyphenolate. 57 The structure of the syn rotamer of DM-HBDI À is in agreement with the MP2/aug-cc-pVTZ optimised geometry obtained in ref. 14. Given that the barriers to interconversion between the rotamers of DM-HBDI À are expected to be small (being largely determined by the strength of the H-bonding interactions) we expect the populations of these rotamers to be determined by the temperature upon thermalisation to 300 K. 
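A quick Boltzmann-factor estimate (a sketch assuming only an energy gap, with no degeneracy or vibrational partition-function corrections, which the text does not specify) shows how strongly a modest energy difference suppresses the higher-lying rotamers at 300 K:

```python
import numpy as np

K_B_EV = 8.617333e-5          # Boltzmann constant in eV/K
delta_e = 0.06                # eV, energy of a higher rotamer above the minimum
T = 300.0                     # K, thermalisation temperature quoted in the text

ratio = np.exp(-delta_e / (K_B_EV * T))
print(f"population ratio = {ratio:.3f}  (~1:{1/ratio:.0f})")   # ~0.098, i.e. ~1:10
```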
Therefore, the higher energy rotamers are not expected to contribute strongly to the observed signal as these states are less thermally accessible at 300 K; for example, the population ratio of the 0.06 eV rotamer to the minimum energy rotamer is expected to be 1 : 10 at 300 K. EOM-IP-CCSD VDEs and orbital holes for the D 1 neutral radical states of p-HBDI À , DF-HBDI À and the four rotamers of DM-HBDI À are presented in Table S1 in the ESI † and show that, as reported previously for p-HBDI À , o-HBDI À and substituted p-HBDI À anions, the D 1 state corresponds to detachment of an electron from a non-bonding orbital localised on the C-O bond of the phenolate moiety and will from now on be referred to as the D 1n state. 14,15 In the case of the planar DMHBDI À rotamer, the D 1n threshold is isoenergetic with the D 1 state, which corresponds to direct detachment from the HOMOÀ1 p-orbital, with EOM-IP-CCSD/6-311++G(d,p) calculated VDEs of 4.63 eV and 4.64 eV to the D 1 and D 1n states, respectively. The VDEs calculated for p-HBDI À , DF-HBDI À and DM-HBDI À (syn and anti rotamers) are in good agreement with the EPT/ 6-311++G(3df,3pd) and XMCQPDT2 values reported in ref. 14. Photoelectron spectra The 364-230 nm photoelectron spectra of p-HBDI À , DF-HBDI À and DM-HBDI À are presented in Fig. 2 as a function of electron binding energy, eBE = hn À eKE, and in Fig. 3 as a function of eKE. The red lines in Fig. 3 mark the high eKE edges of any features with constant eKE in the spectra. Experimental VDEs are determined from maxima in the photoelectron spectra plotted as a function of eBE (marked with blue lines in Fig. 2). It is important to note that, experimentally, the VDE refers to the binding energy of the most intense vibronic band, which depends on the Franck-Condon overlap of the vibrational wavefunctions within the initial and final electronic states. This does not equate directly with the calculated VDE, which is the separation of the electronic potential energy surfaces at the minimum point of the ground state potential. However, in practise, the value of the computational VDE is often similar enough to the experimental VDE for these values to be compared, e.g. for identification of different isomers in a mixture. Previous high-resolution photoelectron spectra of cryogenically cooled p-HBDI À have shown that the ADE E VDE = 2.73 AE 0.01 eV. 10 There are peaks in the p-HBDI À spectra presented here at 2.74 AE 0.04 eV eBE (Fig. 2a), which is consistent with the high resolution measurement and other measurements of the S 0 -D 0 VDE. [5][6][7]11,13 There are two additional features at 4.3 AE 0.1 eV and 4.9 AE 0.1 eV, consistent with previous measurements of the S 0 -D 1n and S 0 -D 1 VDEs 6,15 and XMCQDPT2/aug-cc-pVTZ and EOM-IP-CCSD/aug-cc-pVDZ calculations. 14,15 As the wavelength decreases in the range 364-315 nm (3.41-3.94 eV), the S 0 -D 0 direct detachment feature broadens on the high eBE side. From the eKE distribution (Fig. 3a), this can mostly be attributed to the presence of an additional feature with constant eKE whose maximum intensity lies at around 0.8 eV eKE, consistent with previous experimental observations. 11,13 Electrons resulting from autodetachment processes from electronically excited states above the detachment threshold commonly give rise to features with constant eKE as a function of wavelength in the photoelectron spectra of anions, and results from the propensity for vibrational energy to be retained on autodetachment. 
5,7,11,[13][14][15]18,57,[60][61][62][63][64][65][66][67][68] To identify the presence of weaker features in the spectra, we subtracted the 364 nm spectrum (which contains predominantly direct detachment signal) from the spectra at all other wavelengths, to produce difference spectra which reveal all features resulting from indirect detachment processes; these are presented in Fig. S1 in the ESI. † Table 1 B3LYP/6-311++G(3df,3pd) optimised structures, EOM-IP-CCSD VDEs and corresponding D 0 hole orbitals for p-HBDI À , DF-HBDI À and all possible rotamers of DM-HBDI À . For p-HBDI À and DF-HBDI À , the EOM-IP-CCSD calculations were carried out using the aug-cc-pVDZ basis set whereas a smaller 6-311++G(d,p) basis set was used for DM-HBDI À . The VDEs of the rotamers of DM-HBDI À in Table 1 are given relative to the corresponding local minimum, rather than the global minimum. The D 0 hole orbitals correspond to the Fock orbitals which contribute most strongly (weight 40.95) to the S 0 ÀD 0 detachment transition These spectra appear to contain an additional weak feature with maximum intensity around 0.7 eV eKE; this feature is visible in the 346 nm difference spectrum (where the intense feature peaking at 0.8 eV eKE is not present). To investigate this observation, vibrationally resolved spectra in the range 364-315 nm were recorded using He collision gas for improved cooling, and are presented in Fig. 4. In the He-thermalised spectrum at 315 nm, there is a clear peak at 1.03 eV which is not obvious in the broader, Ar-thermalised spectra. Difference spectra were also generated for the He-thermalised data set and are presented in Fig. S2 in the ESI. † In place of the broad, almost Gaussian features observed in the difference spectra for the Ar-thermalised data, the difference spectra using He collision gas show clear rising edges for two distinct states, peaking at 0.87 AE 0.05 eV and 1.03 AE 0.05 eV. The slight shift in the maximum of the resonant signal when comparing these two data sets (B0.87 eV in He compared with B0.80 eV in Ar) is attributed to the Ar-thermalised spectra being broadened to higher eBEs (lower eKEs) due to the increased population of higher vibrational levels, as expected at higher temperatures. We note that a recent gas-phase action spectroscopy study by Bieske and coworkers showed that a small fraction of the p-HBDI À E-isomer can be formed when using high collision energies with N 2 collision gas, so we cannot rule out the possibility that a contribution from the E-isomer is responsible for this small shift in the spectra of the Ar-thermalised anions. 17 As a result of the propensity for conserving vibrational energy during autodetachment, indirect photodetachment following resonant photoexcitation of an excited electronic state, S n , with excess vibrational energy, E v = hn À E(S n ), where E(S n ) is the adiabatic excitation energy (AEE) of S n , will result in the emission of electrons with eKE B hn À E(D 0 ) À E v B E(S n ) À E(D 0 ), where E(D 0 ) is the ADE. Thus, the photoelectrons are emitted with eKE corresponding to the S n -D 0 energy difference (Fig. 5). Using this approximation, the additional features with maxima at 0.87 eV eKE and 1.03 eV eKE can be attributed to detachment from resonances with AEEs around 3.60 eV (B344 nm) and 3.76 eV (B330 nm), respectively. 
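Because vibrational energy tends to be conserved on autodetachment, the adiabatic excitation energy of the autodetaching resonance can be estimated simply as the ADE plus the eKE of the constant-eKE feature, as used above. A minimal sketch reproducing the quoted numbers for p-HBDI− (ADE of about 2.73 eV; features at 0.87 and 1.03 eV):

```python
HC_EV_NM = 1239.84            # h*c in eV*nm

def aee_from_feature(ade_ev, eke_ev):
    """Adiabatic excitation energy of the autodetaching resonance,
    assuming vibrational energy is conserved on autodetachment."""
    aee = ade_ev + eke_ev
    return aee, HC_EV_NM / aee   # (eV, equivalent wavelength in nm)

ade = 2.73                       # eV, ADE of p-HBDI- quoted in the text
for eke in (0.87, 1.03):
    aee, wavelength = aee_from_feature(ade, eke)
    print(f"eKE {eke:.2f} eV -> AEE {aee:.2f} eV (~{wavelength:.0f} nm)")
# -> 3.60 eV (~344 nm) and 3.76 eV (~330 nm), matching the assignments above
```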
Excited state calculations have been previously carried out for p-HBDI À at the CAM-B3LYP/6-311++G(3df,3pd), 11 ADC(2)/aug-cc-pVDZ 15 and XMCQDPT2/aug-cc-pVTZ 12,14 levels of theory, all of which identified a bright pp* transition (here labelled 3pp*, see Fig. S9 in the ESI †) between the HOMO and a p*-orbital localised on the phenolate ring (LUMO+1), which has excited shape resonance character with respect to the D 0 continuum, in the energy range of the presented measurements. Experimental observations of indirect detachment signal at wavelengths in the 355-300 nm range has therefore predominantly been assigned to autodetachment from the 3pp* state. 9,[11][12][13][14][15][16]18 Excited state calculations at the XMCQDPT2/aug-cc-pVTZ 12,14 level for p-HBDI À and m-HBDI À also identified another pp* transition (here labelled 2pp*, see Fig. S9 in the ESI †) between a p-orbital localised predominantly on the imidizolinone moiety (HOMOÀ1) to a p*-orbital which is delocalised across the p-system (the LUMO orbital), which has Feshbach resonance character with respect to the D 0 continuum. 12,14 The results of these quantum chemistry calculations are summarised in Fig. 6. Action spectroscopy measurements of m-HBDI À have observed signal attributable to autodetachment from the 2pp* state; 12 however, while it has previously been suggested that autodetachment from the 2pp* state contributes to the broadness of the photoelectron spectra of p-HBDI À in the range 335-302 nm, 13 the proximity of the 2pp* and 3pp* states in p-HBDI À has so far prevented the explicit identification of signal arising from autodetachment from the 2pp* state. Both AEEs measured here (3.60 eV and 3.76 eV) are in excellent agreement with the XMCQDPT2/aug-cc-pVTZ quantum chemistry calculations that determined the VEE of the 2pp* Feshbach resonance to be 3.74 eV (332 nm) and the VEE of the 3pp* excited shape resonance to be 3.78 eV (328 nm). 12,14 Together with anion photoelectron spectroscopy measurements, these quantum chemistry calculations of p-HBDI À demonstrated that the electronically excited state that would make the most intense contribution to the photoelectron spectra in the 346-315 nm range would be the 3pp* state, given that the 3pp* state has excited shape resonance character with respect to the D 0 continuum and a significant oscillator strength (0.08). 14 This work also showed that the full widths of the photoelectron spectra in the 346-315 nm range could be reproduced using calculated Franck-Condon profiles of the S 0 -D 0 and 3pp*-D 0 transitions convolved with a Gaussian with a 40 meV HWHM, if the AEE of the S 0 -3pp* transition was taken to lie at 348 nm (3.56 eV). We therefore assign the features with maxima at 0.87 eV eKE and 1.03 eV eKE to the 3pp* excited shape resonance and the 2pp* Feshbach resonance, respectively. As noted in the earlier work, 14 the excited shape resonance is considerably brighter than the Feshbach resonance, whose oscillator strength is only 0.02, and autodetachment from the 3pp* excited shape resonance is the dominant decay channel in this region. Nonetheless, the improved quality of our new spectra allow us to identify the contribution from the Feshbach resonance at 1.03 eV. In the spectra of p-HBDI À at 320 nm and 315 nm, there is an extremely weak additional feature at very low eKE (B0.05 eV). 
Such a low eKE feature is characteristic of indirect detachment from an electronically excited state lying close to a detachment threshold or thermionic emission from S 0 following internal conversion. Several previous experimental and computational studies of p-HBDI À in vacuo have demonstrated that the 1pp* state (which has shape resonance character with respect to D 0 ) lies just below the D 0 detachment threshold at B2.48 eV (500 nm), with many of the experimental studies finding evidence of vibrational autodetachment (VAD) following direct photoexcitation into the 1pp* state. 4,5,12,14,15,17,[69][70][71][72] However, a recent ion-mobility action spectroscopy study of p-HBDI À in the range 415-550 nm found that VAD is a minor channel with respect to isomerisation and internal conversion to the ground state, suggesting that the low eKE signal observed here could result from a combination of both VAD from the 1pp* state and thermionic emission processes. 17 For DF-HBDI À , there is a peak in all the spectra plotted as a function of eBE (Fig. 2b) around 2.98 AE 0.04 eV attributed to direct S 0 -D 0 detachment. This is close to the EOM-IP-CCSD/ aug-cc-pVDZ VDE (2.93 eV, Table 1) and values obtained from earlier measurements and EPT calculations. 14 There is an additional peak in the 230 nm spectrum at 4.5 AE 0.1 eV eBE that can be attributed to direct detachment to the D 1n threshold, which is predicted by our EOM-IP-CCSD/aug-cc-pVDZ calculations to lie at 5.06 eV; this value is also in agreement with earlier EPT and XMCQDPT2/aug-cc-pVTZ calculations which place this transition at 4.47 eV. 14 In DFHBDI À , the D 1 threshold, which corresponds to the product of direct detachment from the HOMOÀ1 p-orbital, is predicted to lie very close in energy to D 1n (at 5.18 eV at the EOM-IP-CCSD/aug-cc-pVDZ level of theory and at 4.66 eV according to previous EPT and XMCQDPT2/aug-cc-pVTZ calculations) and therefore likely also contributes to the feature at 4.5 eV. 14 Similar to p-HBDI À , the photoelectron spectra are broadened as the wavelength is decreased in the range 364-315 nm. This broadening can be attributed to features with maximum eKEs around 0.4 eV and 0.8 eV (Fig. 3b and Fig. S3, ESI †), corresponding to detachment from resonances with AEEs around 3.4 eV and 3.8 eV, respectively. Given that the electronwithdrawing fluorine substituents are expected to shift the energies of the electronic excited states of p-HBDI À , but do not alter the molecular symmetry, it is likely that these resonances are analogous to the 2pp* and 3pp* resonances that we observe in the spectra of p-HBDI À , with the 2pp* state lowered in energy as a result of greater stabilisation of the LUMO (which is delocalised Fig. 6 XMCQDPT2/aug-cc-pVTZ vertical excitation energies and vertical detachment energies for p-HBDI À , DF-HBDI À and DM-HBDI À from ref. 14. The energy region corresponding to the 364-315 nm spectra is shaded in blue. Detachment thresholds are shown in purple and the optically dark 1np* states are shown in grey. The 2pp* states have predominantly Feshbach resonance character with respect to their D 0 continua. The bright excited shape resonance in p-HBDI À is the 3pp* state but becomes the 4pp* state in DF-HBDI À and DM-HBDI À . across the p-system) relative to the HOMOÀ1 (which is predominantly localised on the imidazolinone moiety). This is consistent with earlier XMCQDPT2/aug-cc-pVTZ calculations of the VEEs of pp* excited states lying in the range 3.58-4.20 eV, see Fig. 6. 
14 For DM-HBDI À , there is a peak in all the spectra plotted as a function of eBE (Fig. 2c) around 2.7 AE 0.04 eV attributed to direct S 0 -D 0 detachment. This is close to the EOM-IP-CCSD/ aug-cc-pVDZ VDEs for the syn and anti rotamer (2.61 eV and 2.62 eV, Table 1) and significantly higher than the VDEs for the rotamers with fewer hydrogen-bonds, suggesting that the spectra are dominated by contributions from the two lowest energy syn and anti rotamers. This value for the S 0 -D 0 VDE is also consistent with those obtained from earlier measurements and EPT and XMCQDPT2/aug-cc-pVTZ calculations, which corresponds to the VDE of the syn rotamer. 14 Despite the potential for population of the singly H-bonded rotamer at 300 K, there is no signal peaking at 2.5 eV eBE which could correspond to direct detachment from this rotamer. The rising edges of the DM-HBDI À spectra are shallower than those of p-HBDI À and DF-HBDI À . This could be a result of the spectra containing contributions from both the syn and anti rotamers. However, if there is a large conformational change upon ionisation, this could result in a shift of the maximum Franck-Condon overlap away from the S 0 -D 0 origin. Similar observations and interpretations were made for photoelectron spectra of DMPhO À compared with spectra of PhO À and DFPhO À . 57 There is an additional peak in the 230 nm spectrum at 4.3 AE 0.1 eV eBE which is consistent with the EOM-IP-CCSD S 0 -D 1n VDEs (Table S1 in the ESI †) (4.48 eV and 4.52 eV for the syn and anti rotamers, respectively); this value is also in agreement with earlier calculations. 14 Similar to p-HBDI À and DF-HBDI À , the photoelectron spectra are broadened as the wavelength is decreased in the range 364-315 nm. This broadening can be attributed to features with maximum eKEs around 0.7 eV and 1.0 eV ( Fig. 3c and Fig. S4, ESI †), corresponding to detachment from resonances with AEEs around 3.4 eV and 3.7 eV, respectively. Due to the electron withdrawing nature of the methoxysubstituents, the resonances at 3.4 eV and 3.7 eV are likely to correspond to the 2pp* and 3pp* states, respectively. This is consistent with earlier XMCQDPT2/aug-cc-pVTZ calculations of the VEEs of pp* excited states lying in the range 3.60-4.05 eV, see Fig. 6. 14 While our spectra agree with the excited state calculations from ref. 14, which use an optimised geometry similar to the syn rotamer, it is possible that there are small differences between the AEEs of the syn and anti rotamers which likely contributes to the overall broad appearance of the experimental spectra of DMHBDI À . Interestingly, the spectra of DM-HBDI À also have a fairly distinct low eKE feature centred around 0.05 eV eKE that is more pronounced in the 346-315 nm spectra than the 364 nm spectrum. Similarly to the equivalent feature observed in the 315 nm and 320 nm spectra of p-HBDI À , this low eKE feature is characteristic of indirect detachment from an electronically excited state lying close to a detachment threshold or thermionic emission from S 0 . Experimental studies of 2,6dimethoxyphenolate and DMHBDI À have demonstrated that the methoxy substituents have a relatively weak effect on the electronic structure of the chromophore due to the competition between the opposing inductive and mesomeric effects, so it is likely that the 1pp* electronic excited state of DMHBDI À lies close to the detachment threshold as observed for p-HBDI À and as suggested by XMCQDPT2/aug-cc-pVTZ VEE calculations for DMHBDI À . 
14,57 Given that the shape of the feature at 0.05 eV eKE does not appear exponential (as is typical for thermionic emission 73,74 ) it seems likely that the feature is a result of VAD from 1pp*, however, this assignment is tentative as this signal corresponds to the centremost pixels of the photoelectron image which are the most susceptible to the amplification of noise by the pBASEX inversion method. Photoelectron angular distributions The 364 nm photoelectron images of p-HBDI À , DF-HBDI À and DM-HBDI À are presented in Fig. 7 together with the photoelectron spectra and b 2 parameters, plotted as a function of eBE and eKE. ezDyson calculations of the b 2 parameters that characterise direct S 0 -D 0 detachment in p-HBDI À , DFHBDI À and syn and anti DM-HBDI À are plotted in Fig. 8 as a function of eKE. The two limiting cases of b 2 are +2 and À1, corresponding to photoelectron emission predominantly parallel and perpendicular to the laser polarisation, respectively. In the absence of resonances in the continuum, b 2 o 0 corresponds to photodetachment from an orbital with p or p character and b 2 4 0 corresponds to photodetachment from an orbital with s or s character. For p-HBDI À , the experimental b 2 value is reasonably constant across the peak in the photoelectron spectrum and has an intensity-weighted average value of À0.15 AE 0.05, consistent with the calculated value of b 2 = À0. 35. The anisotropy can be understood qualitatively by considering partial waves of s and p character. 75 The orientation of the p partial waves with respect to the electric field vector of the light can be calculated by taking the direct products of the irreducible representation of the HOMO with those of the x, y and z axes in the molecular frame. For p-HBDI À (C s point group), the HOMO and z axis have A 00 symmetry, resulting in p partial waves having A 0 symmetry, i.e. perpendicular to the electric field vector of the laser light (b 2 o 0), in agreement with previous measurements. 13 The PAD of DF-HBDI À is strikingly different to that of p-HBDI À . b 2 B 0 on the low eBE edge of the photoelectron spectrum and rises to around 0.4 on the high eBE edge, i.e. the PAD is isotropic on the low eBE edge of the spectrum and becomes increasingly parallel to the electric field vector of the laser light with increasing eBE. This contrasts with our ezDyson calculations which predict the PAD of the direct detachment process to be perpendicular (b 2 o 0) for eKE 4 0.1 eV (Fig. 8). The increase in b 2 with increasing eBE can be rationalised in terms of an increasing contribution from autodetachment from a resonance, which is consistent with our observation of contributions to the photoelectron spectra from a resonance with AEE around 3.4 eV (365 nm). This is similar to a previous study of p-HBDI À following photoexcitation in the range 516-282 nm in which resonant autodetachment from the 2pp* state was found to give rise to a parallel PAD. 13 Similar PADs are observed for all the photoelectron images of DF-HBDI À following photoexcitation in the range 346-315 nm (Fig. S6, ESI †). At the shortest wavelengths it becomes apparent that the b 2 values become strongly positive at eKEs r 0.5 eV. For DM-HBDI À , the experimental b 2 value is reasonably constant across the peak in the photoelectron spectrum and has an intensity-weighted average value of 0.05 AE 0.05. Isotropic PADs are observed for all the photoelectron images of DM-HBDI À following photoexcitation in the range 346-315 nm (Fig. S7 in the ESI †). 
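The intensity-weighted average β2 values quoted here (for example −0.15 ± 0.05 for p-HBDI− and 0.05 ± 0.05 for DM-HBDI−) are simply the sum of I_i * β2,i divided by the sum of I_i over the eKE bins spanning the detachment peak. A minimal sketch with invented bin values (not the measured ones):

```python
import numpy as np

def weighted_beta2(intensity, beta2):
    """Intensity-weighted average anisotropy parameter across a peak."""
    intensity = np.asarray(intensity, dtype=float)
    beta2 = np.asarray(beta2, dtype=float)
    return np.sum(intensity * beta2) / np.sum(intensity)

# Illustrative (not measured) bin-by-bin values across a direct-detachment peak.
I  = [0.2, 0.8, 1.0, 0.7, 0.3]
b2 = [-0.05, -0.12, -0.17, -0.18, -0.20]
print(f"<beta2> = {weighted_beta2(I, b2):+.2f}")
```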
The calculated values for the syn and anti rotamers have values of 0.57 and À0.49, respectively, at this wavelength and an average value of 0.04. This suggests that the experimentally observed PAD could be predominantly the result of direct detachment. Nonetheless, given that there is still a weak feature at 0.05 eV which may be attributed to vibrational autodetachment from a vibrationally hot 1pp* state, or thermionic emission from the ground electronic state, it is possible that there is a weak contribution from autodetachment from the electronic excited state at 3.4 eV in the 364 nm spectrum which may also influence the observed PAD. ezDyson calculations of the b 2 parameters that characterise direct S 0 -D 0 detachment in the higher energy rotamers of DM-HBDI À , which do not appear to contribute to the presented spectra, are plotted as a function of eKE in S8 in the ESI. † It is interesting to note the opposing signs of the ezDyson PADs of the syn and anti rotamers of DM-HBDI À , which mirrors those for the syn and anti rotamers of 2,6-dimethoxyphenolate anion. 57 Similar effects have also been observed in parasubstituted phenolates. 16,58,76 4 Discussion From Fig. 6, it is clear that adding methoxy and fluorine substituents has the effect of removing the near degeneracy of the 2pp* and 3pp* states, which is reflected in our observation of two clearly distinct constant eKE features, attributed to resonances, in the photoelectron spectra of DF-HBDI À and DM-HBDI À , whereas in the photoelectron spectra of p-HBDI À the contribution from the feature at 1.03 eV eKE can only be seen clearly in the He-thermalised data at 315 nm. Our estimates of the AEEs of these features are consistent with the calculated VEEs from ref. 14 giving confidence in the high level calculations. Thus, we assign the features in the DF-HBDI À photoelectron spectra centered around eKEs of 0.4 eV and 0.8 eV, corresponding to AEEs of 3.4 eV and 3.8 eV, as resonant detachment from the 2pp* and 3pp* or 4pp* states (with calculated VEEs of 3.58 eV and 4.02 eV or 4.20 eV). For DM-HBDI À , we suggest that the features in the photoelectron spectra centered around eKEs of 0.7 eV and 1.0 eV, corresponding to AEEs of 3.4 eV and 3.7 eV, arise from resonant detachment from the 2pp* and 3pp* or 4pp* states (with calculated VEEs of 3.60 eV and 4.02 eV or 4.08 eV). However, given that the calculated oscillator strengths for the 2pp* state are zero in the substituted chromophores (compared with 0.02 in p-HBDI À ), we cannot rule out that the lower eKE resonant signal may arise from 1np*-D 0 autodetachment. Comparing the PADs recorded at 364 nm (3.41 eV) with the ezDyson calculations shows that, for the planar anions, direct detachment from S 0 to the D 0 continuum generates photoelectrons with a distribution predominantly perpendicular to the electric field vector of the laser light and that resonant autodetachment from the 2pp* state generates photoelectrons with a distribution that is predominantly parallel to the electric field vector of the laser light. Previous work has suggested that the 2pp* state in p-HBDI À , which has predominantly Feshbach resonance character with respect to the D 0 continuum, generates photoelectrons with a distribution that is predominantly parallel to the electric field vector of the laser light. 13 Our observation of positive b 2 values at eKEs r 0.5 eV in the photoelectron spectra of DF-HBDI À supports this suggestion. 
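The one-photon PAD expression referred to earlier appears to have lost its equation in this copy; the standard form for linearly polarised light is I(θ) ∝ 1 + β2 P2(cos θ), with P2(x) = (3x² − 1)/2. A short numerical sketch of the limiting cases makes the perpendicular-versus-parallel language used in this discussion concrete:

```python
import numpy as np

def pad(theta, beta2):
    """I(theta) ∝ 1 + beta2 * P2(cos theta); overall sigma/4pi factor omitted."""
    p2 = 0.5 * (3.0 * np.cos(theta) ** 2 - 1.0)
    return 1.0 + beta2 * p2

theta = np.array([0.0, np.pi / 2])          # parallel / perpendicular to the polarisation
for beta2 in (2.0, 0.0, -1.0):
    parallel, perpendicular = pad(theta, beta2)
    print(f"beta2={beta2:+.1f}: I(0)={parallel:.2f}, I(90 deg)={perpendicular:.2f}")
# beta2=+2: all intensity parallel (I(90 deg)=0); beta2=-1: none parallel (I(0)=0)
```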
The photoelectron spectra of DM-HBDI À all have a low eKE feature that is characteristic of autodetachment from high lying vibrational states of a lower lying electronically excited state or thermionic emission from S 0 . This low eKE feature was previously assigned to detachment from the 1np* electronic excited state to the D 1n continuum; 14 however, since the work presented here shows that this feature is also observed in spectra at photon energies significantly below the calculated S 0 -D 1n VDE (4.05 eV), it seems likely that it arises from vibrational autodetachment from the 1pp* state following internal conversion from a higher lying state, or from thermionic emission from S 0 . Although this low eKE feature is quite pronounced in the photoelectron spectra of DM-HBDI À , there are also low eKE electrons observed in photoelectron spectra of DF-HBDI À and p-HBDI À , most notably in the shorter wavelength spectra. Since the photon energies employed in these experiments are far higher than the VEEs of the 1pp* state, it is unlikely to be populated directly. Thus, it is most likely that these low eKE electrons arise from internal conversion from the 2pp* state which has, predominantly, Feshbach resonance character with respect to the D 0 continuum and thus a potentially long enough lifetime for internal conversion to the 1pp* state to compete with autodetachment. It seems that addition of the methoxy substituents enhances the internal conversion process. This could result from a change in molecular geometry or symmetry caused by internal rotations of the methoxy groups of the anion upon electronic excitation. Such a change in geometry and/or point group could change the relative energies and/or symmetries of the electronic states in a way which enables more efficient population transfer to the lower states. The involvement of the 2pp* state in the electronic relaxation process is consistent with an earlier time-resolved study of the relaxation dynamics of p-HBDI À following UV photoexcitation. 13 Conclusions Photoelectron spectra and photoelectron angular distributions (PADs) of the deprotonated anions of p-HBDI À , DF-HBDI À and DM-HBDI À , following photodetachment in the 364-230 nm wavelength range have been presented and interpreted using a combination of previously published 14 and new quantum chemistry calculations. By examining the photoelectron spectra of DF-HBDI À and DM-HBDI À over a wider range of wavelengths an additional resonant photodetachment process via the 2pp* electronic excited state has been revealed. Analysis of the PADs suggest that indirect detachment via the 2pp* state, which has Feshbach resonance character with respect to the D 0 continuum, is likely to contribute to the spectra of p-HBDI À , DF-HBDI À and DM-HBDI À following UV photoexcitation in the 364-315 nm range. Furthermore, a low eKE feature that is most pronounced in the DM-HBDI À spectra and was previously assigned as 1np*-D 1n detachment has been reassigned as detachment from a vibrationally hot 1pp* state or thermionic emission from S 0 , following internal conversion from the 2pp* state. Although there are low eKE electrons observed in the spectra of p-HBDI À , DF-HBDI À and DM-HBDI À , this feature is most pronounced in the DM-HBDI À spectra, suggesting that the methoxy substituents enhance the rate of internal conversion from the 2pp* state relative to electron detachment. 
Improving our understanding of the competition between the various electronic relaxation processes is important for the rational design of chromophores and fluorescent proteins for new imaging applications, particularly for small molecular chromophores such as these, which are used as fluorescent tags for biological macromolecules such as RNA without impeding their function. Fig. 8 Plots of the calculated β2 anisotropy parameter as a function of eKE for the p-HBDI− anion (left), the DF-HBDI− anion (middle) and the DM-HBDI− anion (right), calculated using ezDyson with EOM-IP-CCSD/aug-cc-pVDZ Dyson orbitals in the case of p-HBDI− and DF-HBDI−, and EOM-IP-CCSD/6-311++G(d,p) Dyson orbitals in the case of DM-HBDI−. The shaded regions indicate the eKEs accessible in the 364-315 nm wavelength range. Conflicts of interest There are no conflicts to declare.
8,797.6
2021-09-02T00:00:00.000
[ "Chemistry", "Physics" ]
First Observation of Top Quark Production in the Forward Region Top quark production in the forward region in proton-proton collisions is observed for the first time. The W + b final state with W → μν is reconstructed using muons with a transverse momentum, pT, larger than 25 GeV in the pseudorapidity range 2.0 < η < 4.5. The b jets are required to have 50 < pT < 100 GeV and 2.2 < η < 4.2, while the transverse component of the sum of the muon and b-jet momenta must satisfy pT > 20 GeV. The results are based on data corresponding to integrated luminosities of 1.0 and 2.0 fb⁻¹ collected at center-of-mass energies of 7 and 8 TeV by LHCb. The inclusive top quark production cross sections in the fiducial region are σ(top)[7 TeV] = 239 ± 53 (stat) ± 33 (syst) ± 24 (theory) fb, σ(top)[8 TeV] = 289 ± 43 (stat) ± 40 (syst) ± 29 (theory) fb. These results, along with the observed differential yields and charge asymmetries, are in agreement with next-to-leading order standard model predictions. The production of top quarks (t) from proton-proton (pp) collisions in the forward region is of considerable experimental and theoretical interest. In the standard model (SM), four processes make significant contributions to top quark production: tt̄ pair production, single-top production via processes mediated by a W boson in the t channel (qb → q′t) or in the s channel (qq̄′ → tb̄), and single top produced in association with a W boson (gb → tW). The initial-state b quarks arise from gluon splitting to bb̄ pairs or from the intrinsic b quark content in the proton. Top quarks decay almost entirely via t → Wb. The SM predicts that about 75% of t → Wb decays in the forward region are due to tt̄ pair production. The remaining 25% are mostly due to t-channel single-top production, with s-channel and associated single-top production making percent-level contributions. The enhancement at forward rapidities of tt̄ production via qq̄ and qg scattering, relative to gg fusion, can result in larger charge asymmetries, which may be sensitive to physics beyond the SM [1,2]. Forward tt̄ events can be used to constrain the gluon parton distribution function (PDF) at large momentum fraction, resulting in reduced theoretical uncertainty for many SM predictions [3]. Furthermore, both single-top and … with hadronization simulated by Pythia. The theoretical uncertainty on the cross-section predictions is a combination of PDF, scale, and strong-coupling (αs) uncertainties. The PDF and scale uncertainties are evaluated following Refs. [16] and [18], respectively. The αs uncertainty is evaluated as the envelope obtained using αs(MZ) ∈ [0.117, 0.118, 0.119] in the theory calculations. The event selection is the same as that in Ref. [19] but a reduced fiducial region is used to enhance the top quark contribution relative to direct W+b production. The signature for W+jet events is an isolated high-pT muon and a well-separated jet originating from the same pp interaction. Signal events are selected by requiring a high-pT muon candidate and at least one jet with ΔR(µ, j) > 0.5. For each event, the highest-pT muon candidate that satisfies the trigger requirements is selected, along with the highest-pT jet from the same pp collision. The primary background to top quark production is direct W+b production; however, Z+b events, with one muon undetected in the decay Z → µµ, and di-b-jet events also contribute to the µ+b-jet final state. 
The anti-k T clustering algorithm is used as implemented in FastJet [20]. Information from all the detector subsystems is used to create charged and neutral particle inputs to the jet-clustering algorithm using a particle flow approach [21]. The reconstructed jets must fall within the pseudorapidity range 2.2 < η(j) < 4.2. The reduced η(j) acceptance ensures nearly uniform jet reconstruction and heavy-flavor tagging efficiencies. The momentum of a reconstructed jet is corrected to obtain an unbiased estimate of the true jet momentum. The correction factor, typically between 0.9 and 1.1, is determined from simulation and depends on the jet p T and η, the fraction of the jet p T measured with the tracking system, and the number of pp interactions in the event. The high-p T muon candidate is not removed from the anti-k T inputs and so is clustered into a jet. This jet, referred to as the muon jet and denoted as j µ , is used to discriminate between W +jet and dijet events [19]. No correction is applied to the momentum of the muon jet. The requirement p T (j µ + j) > 20 GeV is made to suppress dijet backgrounds, which are well balanced in p T , unlike W +jet events, where there is undetected energy from the neutrino. Events with a second, oppositely charged, high-p T muon candidate from the same pp collision are vetoed. However, when the dimuon invariant mass is in the range 60 < M (µ + µ − ) < 120 GeV, such events are selected as Z(µµ)+jet candidates, which are used to determine the Z+jet background. The jets are identified (tagged) as originating from the hadronization of a b or c quark by the presence of a secondary vertex (SV) with ∆R < 0.5 between the jet axis and the SV direction of flight, defined by the vector from the pp interaction point to the SV position. Two boosted decision trees (BDTs) [22,23], trained on the characteristics of the SV and the jet, are used to separate heavy-flavor jets from light-parton jets, and to separate b jets from c jets. The two-dimensional distribution of the BDT responses observed in data is fitted to obtain the SV-tagged b, c and light-parton jet yields. The SV-tagger algorithm is described in Ref. [24], where the heavy-flavor tagging efficiencies and light-parton mistag probabilities are measured in data. The data samples used in Ref. [24] are too small to validate the performance of the SV-tagger algorithm in the p T (j) > 100 GeV region. Furthermore, the mistag probability of light-parton jets increases with jet p T . Therefore, only jets with p T < 100 GeV are considered in the fiducial region, which according to simulation retains about 80% of all top quark events. Inclusive W +jet production, i.e. where no SV-tag requirement is made on the jet, is only contaminated at the percent level by processes other than direct W +jet production. Therefore, W +jet production is used to validate both the theory predictions and the modeling of the detector response. Furthermore, the SM prediction for σ(W b)/σ(W j) has a smaller relative uncertainty than σ(W b) alone, since the theory uncertainties partially cancel in the ratio. The analysis strategy is to first measure the W +jet yields, and then to obtain predictions for the yields of direct W +b production using the prediction for σ(W b)/σ(W j). To an excellent approximation, many experimental effects, e.g. the muon reconstruction efficiency, are expected to be the same for both samples and do not need to be considered in the direct W +b yield prediction. 
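To make the muon-jet pairing and fiducial cuts described above concrete, here is a schematic sketch in plain NumPy. It is not the LHCb analysis code; the thresholds are those quoted in the text, and the toy event values are invented.

```python
import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation with the azimuthal difference wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + np.pi) % (2 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

def passes_fiducial(mu, jet, mujet):
    """mu, jet, mujet are dicts with pt [GeV], eta, phi.
    mujet is the jet containing the muon (j_mu in the text)."""
    # transverse component of the vector sum of the muon jet and the selected jet
    px = mujet["pt"] * np.cos(mujet["phi"]) + jet["pt"] * np.cos(jet["phi"])
    py = mujet["pt"] * np.sin(mujet["phi"]) + jet["pt"] * np.sin(jet["phi"])
    return (mu["pt"] > 25.0
            and 2.0 < mu["eta"] < 4.5
            and 50.0 < jet["pt"] < 100.0
            and 2.2 < jet["eta"] < 4.2
            and delta_r(mu["eta"], mu["phi"], jet["eta"], jet["phi"]) > 0.5
            and np.hypot(px, py) > 20.0)

# toy event, values invented for illustration
mu    = {"pt": 40.0, "eta": 3.0, "phi": 0.3}
jet   = {"pt": 60.0, "eta": 2.8, "phi": 2.9}
mujet = {"pt": 45.0, "eta": 3.0, "phi": 0.3}
print(passes_fiducial(mu, jet, mujet))   # True
```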
The W+jet yield is determined by performing a fit to the pT(µ)/pT(jµ) distribution with templates (histograms obtained from data), as described in Ref. [19]. The Z+jet contribution is fixed from the fully reconstructed Z(µµ)+jet yield, where the probability for one of the muons to escape detection is obtained using simulation. The contributions of b, c, and light-parton jets are each free to vary in the fit. Figure 1 shows the fit for all candidates in the data sample. Such a fit is performed for each muon charge separately in bins of pT(µ + j); the differential W+jet yield and charge asymmetry are shown in Fig. 2. To compare the data to theory predictions, the detector response must be taken into account. All significant aspects of the detector response are determined using data-driven techniques. The muon trigger, reconstruction, and selection efficiencies are determined using Z → µµ events [21,25]. The GEC efficiency is obtained following Ref. [21]: an alternative dimuon trigger requirement, which requires a looser GEC, is used to determine the fraction of events that are rejected. Contamination from W → τ → µ decays is estimated to be 2.5% using both simulated W+jet events and inclusive W data samples [26]. The fraction of muons that migrate out of the fiducial region due to final-state radiation is also taken into account. Migration of events in jet pT due to the detector response is studied with a data sample enriched in b jets using SV tagging. The pT(SV)/pT(j) distribution observed in data is compared to templates obtained from simulation in bins of jet pT. The resolution and scale for each jet pT bin are varied in simulation to find the best description of the data and to construct a detector response matrix. Figure 2 shows that the SM predictions, obtained with all detector response effects applied, agree with the inclusive W+jet data. The yields of W+c and W+b (the latter including t → Wb decays) are determined using the subset of candidates with an SV-tagged jet and binned according to pT(µ)/pT(jµ). In each pT(µ)/pT(jµ) bin, the two-dimensional SV-tagger BDT-response distributions are fitted to determine the yields of c-tagged and b-tagged jets, which are used to form the pT(µ)/pT(jµ) distributions for candidates with c-tagged and b-tagged jets. These pT(µ)/pT(jµ) distributions are fitted to determine the SV-tagged W+c and W+b yields. A fit to the pT(µ)/pT(jµ) distribution built from the c-tagged jets from the full data sample is provided as supplemental material to this Letter [27]. Figure 3 shows that the W+c yield versus pT(µ + c) agrees with the SM prediction. Since the W+c final state does not have any significant contributions from diboson or top quark production in the SM, this comparison validates the analysis procedures. Figure 4 shows a fit to the pT(µ)/pT(jµ) distribution built from the b-tagged jets from the full data sample. For pT(µ)/pT(jµ) > 0.9 the data are dominantly from W decays. Figure 5 shows the yield and charge asymmetry distributions obtained as a function of pT(µ + b). The direct W+b prediction is determined by scaling the inclusive W+jet distribution observed in data by the SM prediction for σ(W b)/σ(W j) and by the b-tagging efficiency measured in data [24]. As can be seen, the data cannot be described by the expected direct W+b contribution alone. 
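The charge asymmetry shown in Figs. 2 and 5 is built from the positively and negatively charged yields in each bin; its explicit definition did not survive extraction here, so the sketch below assumes the usual A = (N⁺ − N⁻)/(N⁺ + N⁻) form together with the Poisson-propagated statistical uncertainty. The bin yields are invented.

```python
import numpy as np

def charge_asymmetry(n_plus, n_minus):
    """A = (N+ - N-)/(N+ + N-) with the Poisson-propagated uncertainty
    sigma_A = sqrt((1 - A^2) / (N+ + N-))."""
    n_plus, n_minus = np.asarray(n_plus, float), np.asarray(n_minus, float)
    total = n_plus + n_minus
    a = (n_plus - n_minus) / total
    return a, np.sqrt((1.0 - a**2) / total)

# toy yields in three bins of p_T(mu + jet)
a, sig = charge_asymmetry([1200, 800, 300], [900, 650, 260])
for i, (ai, si) in enumerate(zip(a, sig)):
    print(f"bin {i}: A = {ai:+.3f} +/- {si:.3f}")
```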
The observed yield is about three times larger than the SM prediction without a top quark contribution, while the SM prediction including both tt and single-top production does describe the data well. In Ref. [19], W+b is studied in a larger fiducial region (p T (µ) > 20 GeV, p T (j) > 20 GeV), where the top quark contribution is expected to be about half as large as that of direct W+b production. The ratio [σ(W b)+σ(top)]/σ(W j) is measured in the larger fiducial region to be 1.17 ± 0.13 (stat) ± 0.18 (syst)% at √ s = 7 TeV and 1.29 ± 0.08 (stat) ± 0.19 (syst)% at √ s = 8 TeV. These results agree with SM predictions, that include top quark production, of 1.23 ± 0.24% and 1.38 ± 0.26%, respectively. This validates the direct W +b prediction, since direct W +b production is the dominant contribution to the larger fiducial region. Various sources of systematic uncertainties are considered and summarized in Table 1. The direct W +b prediction is normalized using the observed inclusive W +jet data yields. Therefore, most experimental systematic uncertainties cancel to a good approximation. Since the muon kinematic distributions in W +jet and W +b are similar, all muon-based uncertainties are negligible with the exception of the trigger GEC efficiency. The datadriven GEC study discussed above shows that the efficiencies are consistent for W +jet and W +b, with the statistical precision of this study assigned as the systematic uncertainty. Mismodeling of the p T (µ)/p T (j µ ) distributions largely cancels, since this shifts the inclusive W +jet and W +b final-state yields by the same amount, leaving the observed excess over the expected direct W +b yield unaffected. The one exception is possible mismodeling of the dijet templates, since the flavor content of the dijet background is not the same in the two samples. Variations of these templates are considered and a relative uncertainty of 5% is assigned on the W boson yields. The jet reconstruction efficiencies for heavy-flavor and light-parton jets in simulation are found to be consistent within 2%, which is assigned as the systematic uncertainty for flavor-dependencies in the jet-reconstruction efficiency. The SV-tagger BDT templates used in this analysis are two-dimensional histograms obtained from the data samples enriched in b and c jets used in Ref. [24]. Following Refs. [19,24], a 5% uncertainty on the b-tagged yields is assigned due to uncertainty in these templates. The precision of the b-tagging efficiency measurement (10%) in data [24] is assigned as an additional uncertainty. To determine the statistical significance of the top quark contribution, a binned profile likelihood test is performed. The top quark distribution and charge asymmetry versus p T (µ + b) are obtained from the SM predictions. The total top quark yield is allowed to vary freely. Systematic uncertainties, both theoretical and experimental, are handled as Gaussian constraints. The profile likelihood technique is used to compare the SM hypotheses with and without a top quark contribution. The significance obtained using Wilks theorem [28] is 5.4σ, confirming the observation of top quark production in the forward region. The yield and charge asymmetry distributions versus p T (µ + b) observed at √ s = 7 and 8 TeV are each consistent with the SM predictions. The excess of the observed yield relative to the direct W +b prediction at each √ s is attributed to top quark production, and used to measure the cross-sections. 
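The 5.4σ quoted above comes from a binned profile-likelihood fit with systematic nuisance parameters. As a much simpler illustration of the underlying asymptotic (Wilks-type) logic, the discovery significance for a single Poisson counting bin with a known background, and no systematic uncertainties, can be sketched as follows; the yields are toy numbers, not those of the analysis.

```python
import numpy as np

def discovery_significance(n_obs, b_expected):
    """Asymptotic significance Z = sqrt(q0) for a single Poisson bin, where
    q0 = 2 * (n*ln(n/b) - (n - b)) for n > b (standard asymptotic formula).
    Unlike the profile likelihood in the text, this ignores systematics."""
    n, b = float(n_obs), float(b_expected)
    if n <= b:
        return 0.0
    q0 = 2.0 * (n * np.log(n / b) - (n - b))
    return np.sqrt(q0)

# toy numbers only: an observed yield well above the background-only expectation
print(f"Z = {discovery_significance(n_obs=90, b_expected=50):.1f} sigma")   # ~5.1
```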
Some additional systematic uncertainties that apply to the cross-section measurements do not factor into the significance determination.
Table 1 (excerpt): source — uncertainty; Theory — 10%.
The uncertainties due to the muon trigger, reconstruction, and selection efficiencies are taken from the data-driven studies of Refs. [21,25]. The uncertainty due to the jet energy determination is obtained from the data-driven study used to obtain the detector response matrix. The uncertainty due to W → τ → µ contamination is taken as the difference between the contamination in simulation versus that of a data-driven study of inclusive W → µν production [26]. The luminosity uncertainty is described in detail in Ref. [29]. The total systematic uncertainty is obtained by adding the individual contributions in quadrature. The systematic uncertainties are nearly 100% correlated between the two measurements.
In summary, top quark production is observed for the first time in the forward region. The cross-section results are in agreement with the SM predictions of 180 +51 −41 fb (312 +83 −68 fb) at 7 (8) TeV obtained at NLO using MCFM. The differential distributions of the yield and charge asymmetry are also consistent with SM predictions.
[27] See Supplemental Material at the end of this Letter.
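As stated above, the total systematic uncertainty is the quadrature sum of the individual contributions. A minimal helper illustrating that combination is shown below; the entries are placeholder relative uncertainties, not the actual contents of Table 1.

import math

# Placeholder relative uncertainties (fractions), not the actual Table 1 values.
systematics = {"theory": 0.10, "dijet templates": 0.05, "SV-tagger templates": 0.05,
               "b-tagging efficiency": 0.10, "jet reco flavor dependence": 0.02}

total = math.sqrt(sum(v * v for v in systematics.values()))
print(f"total relative systematic uncertainty: {total:.1%}")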
3,647
2015-01-01T00:00:00.000
[ "Physics" ]
The atmosphere-ocean general circulation model EMAC-MPIOM Introduction Coupled atmosphere-ocean general circulation models (AO-GCMs) are essential tools in climate research. They are used to project the future climate and to study the actual state of our climate system (Houghton et al., 2001). An AO-GCM comprises an atmospheric general circulation model (A-GCM), also including a land-surface component, and an ocean model (an Ocean General Circulation Model, O-GCM), also including a sea-ice component. In addition, biogeochemical components can be added, for example, if constituent cycles, such as the carbon, sulfur or nitrogen cycle, are to be studied. Historically, the different model components have been mostly developed independently, and at a later stage they have been connected to create AO-GCMs (Valcke, 2006; Sausen and Voss, 1996). However, as indicated by the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4), no model used in the AR4 presented a complete and online calculation of atmospheric chemistry. The main motivation of this work is to provide such a model to the scientific community, which is indeed essential to effectively study the intricate feedbacks between atmospheric composition, element cycles and climate.

Here, a new coupling method between the ECHAM/MESSy Atmospheric Chemistry (EMAC) model (Roeckner et al., 2006; Jöckel et al., 2006, ECHAM5 version 5.3.02) and the ocean model MPIOM (Marsland et al., 2003, version 1.3.0) is presented, with the coupling based on the Modular Earth Submodel System (MESSy2, Jöckel et al., 2010). In the present study, only the dynamical coupling will be discussed. Hence EMAC is, so far, only used as an AO-GCM, i.e. all processes relevant for atmospheric chemistry included in EMAC are switched off. This first step towards including an explicit calculation of atmospheric chemistry in a climate model is needed to test the coupling, i.e. the option to exchange a large amount of data between the model components, and to maintain optimal performance of the coupled system.

In Sect. 2, different coupling methods are briefly reviewed, followed (Sect. 3) by a technical description of the method used in this study. A run-time performance analysis of the model system is presented in Sect. 4, and in Sect. 5, results from EMAC-MPIOM are shown in comparison to other models and observations.

As sketched in Fig. 1, at least two different methods exist to couple the components of an AO-GCM:
- internal coupling: the different components of the AO-GCM are part of the same executable and share the same parallel decomposition topology. In an operator splitting approach, the different components (processes) are calculated in sequence. This implies that each task collects the required information, and performs the interpolation between the grids.
- external coupling: the different components (generally an atmosphere GCM and an ocean GCM) of the AO-GCM are executed as separate tasks, at the same time, i.e. in parallel. An additional external coupler program synchronises the different component models (w.r.t. simulation time) and organises the exchange of data between the different component models. This involves the collection of data, the interpolation between different model grids, and the redistribution of data.

External coupling is the most widely used method, e.g.
by the OASIS coupler (Valcke et al., 2006; Valcke, 2006). The OASIS coupler is used, for example, in the ECHAM5/MPIOM coupled climate model of the Max Planck Institute for Meteorology (Jungclaus et al., 2007) and in the Hadley Centre Global Environment Model (Johns et al., 2006). Also the Community Climate System Model 3 (CCSM3, Collins et al., 2006) adopts a similar technique for information exchange between its different components. Internal coupling is instead largely used in the US, e.g. in the new version of the Community Climate System Model 4 (CCSM4, Gent et al., 2011) and in the Earth System Modeling Framework (ESMF, Collins et al., 2005).

Following the MESSy standard (Jöckel et al., 2005), and its modular structure, it is a natural choice to select the internal coupling method as the preferred technique to couple EMAC and MPIOM. In fact, the aim of the MESSy system is to implement the processes of the Earth System as submodels. Hence, the coupling routines have been developed as part of the MESSy infrastructure as a separate submodel (see A2O submodel below).

MPIOM as MESSy submodel According to the MESSy standard definition, a single time manager clocks all submodels (= processes) in an operator splitting approach. The MPIOM source code files are compiled and archived as a library. Minor modifications were required in the source code, and all were enclosed in preprocessor directives (#ifdef MESSY), which allow the legacy code to be reproduced if compiled without this definition. About 20 modifications in 11 different files were required. The majority of these modifications are to restrict write statements to one PE (processor), in order to reduce the output to the logfile. The main changes in the original source code modify the input of the initialisation fields (salinity and temperature from the Levitus climatology), with which the ocean model can now be initialised at any date. Another main modification is related to the selection of various parameters for coupled and non-coupled simulations. In the original MPIOM code, this selection was implemented with preprocessor directives, hence reducing the model flexibility at run-time. In the EMAC-MPIOM coupled system, the preprocessor directives have been substituted by a logical namelist parameter, and in one case (growth.f90) the routines for the coupled case were moved to a new file (growth_coupled.f90).

The main program (mpiom.f90) is eliminated and substituted by a MESSy submodel interface (SMIL) module (messy_mpiom_e5.f90). This file mimics the time loop of MPIOM with the calls to the main entry points to those subroutines which calculate the ocean dynamics. For the entry points, initialisation, time integration and a finalising phase are distinguished. The MPIOM library is linked to the model system, operating as the submodel core layer of the MPIOM submodel. Following the MESSy standard, a strict separation of the process formulations from the model infrastructure (e.g. time management, I/O, parallel decomposition, etc.)
was implemented.I/O units, for example, are generated dynamically at run-time.In addition, the two model components (EMAC and MPIOM) use the same high level API (application programmers interface) to the MPI (Message Passing Interface) library.This implies that the same subroutines (from mo mpi.f90) are used for the data exchange between the tasks in MPIOM and EMAC, respectively. The new MESSy interface (Jöckel et al., 2010) introduces the concept of "representations", which we make use of here.The "representation" is a basic entity of the submodel CHANNEL (Jöckel et al., 2010), and it allows an easy management of the memory, internal data exchange and output to files.New representations for the ocean variables (2-D and 3-D fields) have been introduced, consistent with the dimensioning of the original MPIOM arrays and compatible with the MPIOM parallel domain decomposition.Application of the CHANNEL submodel implies that no more specific output routines are required for the ocean model; the output files now have the same format and contain the same meta information for both the atmosphere and the ocean components.Furthermore, in the CHANNEL API, each "representation" is related to the high-level MPI API via a definition of the gathering (i.e.collecting a field from all tasks) and scattering (i.e.distributing a field to all tasks) subroutines.In case of the new MPIOM "representations", the original gathering and scattering subroutines from MPIOM are applied.As implication, the spatial coverage of each core is independently defined for the two AO-GCM components and constrained by the values of NPX and NPY set in the run-script, both for the atmosphere and for the ocean model.In fact, both models, EMAC and MPIOM, share the same horizontal domain decomposition topology for their grid-point-space representations, in which the global model grid is subdivided into NPX times NPY sub-domains (in North-South and East-West direction, respectively, for ECHAM5 and in East-West and North-South direction, respectively for MPIOM).Hence, the same task, which calculates a sub-domain in the atmosphere, also calculates a sub-domain in the ocean, and the two subdomains do not necessarily match geographically.An example is shown in Fig. 2, where possible parallel domain decompositions of EMAC and MPIOM are presented.A total of 16 tasks (specifically with NPX = 4 and NPY = 4) is used, and the color indicates the task number in the atmosphere and ocean model, respectively.Other decompositions are possible, depending on the values of NPX and NPY. The A2O submodel As described in Sect.3.1, the two components of the AO-GCM (EMAC and MPIOM) run within the MESSy structure, sharing the same time manager.To couple the two model components (EMAC and MPIOM) physically, some gridded information has to be exchanged (see Table 1).For this purpose, a new submodel, named A2O, was developed.In EMAC, a quadratic Gaussian grid (corresponding to the chosen triangular spectral truncation) is used, whereas MPIOM operates on a curvilinear rotated grid.The exchanged gridded information must therefore be transformed between the different grids. 
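In practice, a grid transformation of this kind reduces to applying a set of precomputed interpolation weights that map source-grid cells onto target-grid cells. The sketch below shows the idea with a sparse weight matrix and toy grid sizes; the array shapes, the random weights, and the helper function are illustrative assumptions, not the A2O/SCRIP implementation (which is written in Fortran).

import numpy as np
from scipy.sparse import csr_matrix

# Toy grid sizes (assumed): a coarse "atmosphere" grid and a finer "ocean" grid,
# both flattened to 1-D vectors of cell values.
n_atm, n_oce = 96 * 48, 120 * 101

# Precomputed remapping weights: row i of W holds the contributions of the
# atmosphere cells overlapping ocean cell i (computed once at initialisation,
# e.g. by SCRIP-style remapping). Here: a random sparse placeholder whose rows
# sum to one, so the remapping behaves like an area-weighted average.
rng = np.random.default_rng(0)
rows = np.repeat(np.arange(n_oce), 3)
cols = rng.integers(0, n_atm, size=rows.size)
vals = np.full(rows.size, 1.0 / 3.0)
W = csr_matrix((vals, (rows, cols)), shape=(n_oce, n_atm))

def remap_atm_to_oce(field_atm):
    """Transform a (time-averaged) field from the atmosphere grid to the ocean grid."""
    return W @ field_atm

heat_flux_atm = rng.normal(size=n_atm)   # stand-in for an exchanged field, e.g. heat flux
heat_flux_oce = remap_atm_to_oce(heat_flux_atm)
print(heat_flux_oce.shape)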
Additionally, because the period between two subsequent data exchange events is generally different from the GCMs time step, the variables needed for the coupling have to be accumulated and averaged before being transformed.The accumulation process is performed at each time step, by adding the particular instantaneous value, multiplied by the GCM time step length (in seconds), to the accumulated fields.The averaging is done at a coupling time step, by dividing the A. Pozzer et al.: The EMAC-MPIOM model accumulated fields by the coupling period (in seconds) and resetting the accumulated values to zero.This procedure also allows to change the GCMs time step and/or the coupling frequency during run-time. The submodel A2O (Atmosphere to Ocean, and vice versa) performs the required accumulation/averaging in time and the subsequent grid-transformation.The submodel implementation is such that three different setups are possible: -EMAC and MPIOM are completely decoupled, -EMAC or MPIOM are one-way forced, i.e. one component delivers the boundary conditions to the other, but not vice versa, -EMAC and MPIOM are fully coupled, i.e. the boundary conditions are mutually exchanged in both directions. The setup is controlled by the A2O CPL-namelist, which is described in detail in the Supplement.In Table 1 the variables required for the physical coupling are listed.The fields are interpolated between the grids with a bilinear remapping method for scalar fields, while a conservative remapping method is used for flux fields (see Sect. 3.3). For the interpolation the respective weights between the different model grid-points (atmosphere and ocean) are calculated during the initialisation phase of the model (see also Sect. 3.3).This allows that any combination of grids and/or parallel decompositions can be used without additional preprocessing. One of the main advantages of the coupling approach adopted in this study (internal coupling) is the implicit "partial" parallelisation of the coupling procedure.Generally, one problem of the coupling routines is that the required information must first be collected from the different tasks of one model component, then processed (e.g.interpolated) and finally re-distributed to the tasks of the other model component.This process requires a "gathering" of information from different tasks, a subsequent grid transformation, and a "scattering" of the results to the corresponding target tasks.This process is computationally expensive, in particular, if many fields need to be exchanged (as is the case for interactive atmosphere-ocean chemistry).In the internal coupling approach, only the "gathering" (or collection) and the grid-transformation steps are required.During the initialisation phase of the model system, each task (in any of the AO-GCM components) stores the locations (indices) and the corresponding weights required for the transformation from the global domain of the other AO-GCM component.These weights are calculated for the global domain of the other AO-GCM component, because the applied search algorithm (see Sect. 3.3) is sequential and in order to reduce the algorithm complexity in the storage process.Then, within the time integration phase, each task collects the required information from the global field of the other AO-GCM component.Due to this procedure, the interpolation is performed simultaneously by all tasks (without the need to scatter, i.e. to distribute information) and thus increasing the coupling performance (see Sect. 
4).It must, however, be noted that the new version of the OASIS coupler (Version 4; Redler et al., 2010) supports a fully parallel interpolation, which means the interpolation is performed in parallel for each intersection of source and target sub-domains.This will potentially increase the run-time performance of OASIS coupled parallel applications. Grid-transformation utilising the SCRIP library For the transformation of fields between the different grids (i.e. from the atmosphere grid to the ocean grid and vice versa), the SCRIP (Spherical Coordinate Remapping and Interpolation Package) routines (Jones, 1999) are used.These state-of-the-art transformation routines are widely used, for instance in the COSMOS model and the CCSM3 model.The SCRIP routines allow four types of transformations between two different grids: -first-and second-order conservative remapping (in the MESSy system, only the first order is used), -bilinear interpolation with local bilinear approximation, -bicubic interpolation, -inverse-distance-weighted averaging (with a userspecified number of nearest neighbour values). The library has been embedded into the MESSy2 interface-structure as independent generic module (messy main gridtrafo scrip.f90). For the coupling of EMAC and MPIOM presented here, this module is called by the submodel A2O.It can, however, also be used for grid-transformations by other MESSy submodels.According to the MESSy standard, the parameters used by A2O for the SCRIP library routines can be modified from their default values by changing the A2O submodel CPL-namelist (see the Supplement). In Fig. 3, an example of a grid transformation with conservative remapping from the atmosphere grid to the ocean grid is shown.The patterns are preserved and the fluxes are conserved, not only on the global scale but also on the local scale. Analysis of the run-time performance The run-time performance is a critical aspect for climate models and the coupling as such must not drastically decrease the AO-GCM execution speed.In order to evaluate the run-time performance, we compare the EMAC-MPIOM performance with that of the COSMOS-1.0.0 model.Since both models share the same components (ECHAM5 and MPIOM), differences in the achieved efficiency can be attributed to the different coupling methods.In fact, the efficiency of the AO-GCM depends on the efficiency of the component models and on the load balancing between them. For the comparison, we compiled and executed both model systems with the same setup on the same platform: a 64bit Linux cluster, with 24 nodes each equipped with 32 GB RAM and 2 Intel 5440 (2.83 GHz, 4 cores) processors, for a total of 8 cores per node.The Intel Fortran Compiler (version 11.1.046)together with the MPI-library mvapich2-1.2 has been used with the optimisation option -O1 to compile both model codes.The two climate models were run with no output for one month at T31L19 resolution for the atmosphere and at GR30L40 resolution for the ocean.The atmosphere and the ocean model used a 40 and 144 min time-step, respectively.In both cases (EMAC-MPIOM and COSMOS), the same convective and large scale cloud parameterisations were used for the atmosphere, and the same algorithms for advection and diffusion in the ocean, respectively.The radiation in the atmosphere was calculated every 2 simulation hours.In addition, the number of tasks requested in the simulation were coincident with the number of cores allocated (i.e. one task per core). 
Since in COSMOS the user can distribute a given number of tasks almost arbitrarily between ECHAM5 and MPIOM (one task is always reserved for OASIS), the wall-clock-time required for one simulation with a given number of tasks is not unambiguous.To investigate the distribution of tasks for the optimum load balance, a number of test simulations are usually required for any given setup.Here, we report only the times achieved with the optimal task distribution.In contrast, EMAC-MPIOM does not require any task distribution optimisation and the simulation is performed with the maximum possible computational speed. Three factors contribute to the differences in the model performance: -The MESSy interface decreases the performance of EMAC in the "GCM-only mode" compared to ECHAM5 by ∼ 3-5 %, and therefore, EMAC-MPIOM is expected to be at least ∼ 3-5 % slower than COS-MOS (see the link "ECHAM5/MESSy Performance" at http://www.messy-interface.org). -EMAC-MPIOM calculates the interpolation weights during its initialisation phase, whereas COSMOS reads pre-calculated values from files.This calculation is computationally expensive and depends on the AO-GCM component resolutions and on the number of tasks selected.In fact, as seen before in Sect.3.2, each task calculates the interpolation weights from the global domain of the other AO-GCM component, with the interpolation algorithm scanning the global domain for overlaps with the local domain.This calculation is performed only during the initialisation phase. -The OASIS coupler requires a dedicated task to perform the grid transformations.Hence, for a very low core number, the single core used by OASIS limits the overall performance of the COSMOS model. The total wall-clock-time required to complete the simulation of one month shows a constant bias of 58 s for EMAC-MPIOM compared to COSMOS.This bias is independent on the number of tasks used and results from non-parallel process in EMAC-MPIOM, mainly caused by the different initialisation phases of the two climate models.To analyse the performances of the models, this constant bias has been subtracted from the data, so that only the wall-clock times of the model integration phase are investigated.In Fig. 4, the wallclock times required to complete the integration phase of one month simulation are presented, dependent on the number of cores (= number of tasks) used.The wall-clock-times correlate very well between COSMOS and EMAC-MPIOM (see Fig. 4, R 2 = 0.998), showing that the model scalability is similar in both cases.Overall, the difference in the performances can be quantified by the slope of the regression line (see Fig. 4).This slope shows that EMAC-MPIOM has an approx.10 % better scalability (0.89 times) than COSMOS.In general, the improvement in the performance is due to a reduction of the gather/scatter operations between the different tasks.In fact, as described in Sect.3.2, the EMAC-MPIOM model does not perform the transformation as a separate task sequentially, but, instead, performs the interpolation simultaneously for all tasks in their part of the domain. It must be stressed that this analysis does not allow a general conclusion, which is valid for all model setups, resolutions, task numbers, etc.Most likely, the results obtained here are not even to be transferable to other machines/architectures or compilers.However, it is possible to conclude that the coupling method implemented here, does not deteriorate the performance of the coupled model. 
Evaluation of EMAC-MPIOM In order to test, if the chosen coupling method technically works and does not deteriorate the climate of the physically coupled atmosphere-ocean system, we performed a number of standard climate simulations with EMAC-MPIOM and analysed the results.This analysis is not presented in full detail, because the dynamical components of EMAC-MPIOM (i.e.ECHAM5 and MPIOM) are the same as in the COSMOS model.Therefore, we refer to Jungclaus et al. (2007) for a detailed overview of the model climatology. The model resolution applied here for the standard simulations is T31L19 for the atmosphere component EMAC and GR30L40 for the ocean component MPIOM.This resolution is coarser than the actual state-of-the-art resolution used in climate models.However, near future EMAC-MPIOM simulations with atmospheric and/or ocean chemistry included will be limited by the computational demands and therefore are required to be run at such lower resolutions.It is hence essential to obtain reasonable results at this rather coarse resolution, which has been yet widely used to couple ECHAM5 with MPIOM.Following the Coupled Model Intercomparison Project (CMIP3) recommendations, three simulations have been performed with different Greenhouse gas (GHG) forcings: -a "preindustrial control simulation" with constant preindustrial conditions (GHG of the year 1850), hereafter referred to as PI, -a "climate of the 20 century" simulation (varying GHG from 1850 to 2000) hereafter referred to as TRANS, and -a "1 % yr −1 CO 2 increase to doubling" simulation (with other GHG of the year 1850), hereafter referred to as CO2×2. These simulations have been chosen to allow some of the most important evaluations that can be conducted for climate models of this complexity.In addition, the output from a large variety of well tested and reliable climate models can be used to compare the results with.Because these models had been run at higher resolutions and with slightly different set-ups, some differences in the results are expected, nevertheless providing important benchmarks.The series of annual values of the GHG for the TRANS simulations have been obtained from the framework of the ENSEMBLES European project and include CO 2 (Etheridge et al., 1998), CH 4 (Etheridge et al., 2002), N 2 O (Machida et al., 1995) and CFCs (Walker et al., 2000). Surface temperature As shown by Jungclaus et al. (2007), the sea surface temperature (SST) and the sea ice are the most important variables for the determination of the atmosphere-to-ocean fluxes and of the correctness of the coupling processes. In Fig. 5, the SST of simulation TRANS is compared to the SST from the Atmospheric Model Intercomparison Project (AMIP, Taylor et al., 2000), compiled by Hurrell et al. (2008) based on monthly mean Hadley Centre sea ice and SST data (HadlSST, version 1) and weekly optimum interpolation (OI) SST analysis data (version 2) of the National Oceanic and Atmospheric Administration (NOAA).Both datasets are averaged over the years 1960-1990.The correlation between the two datasets is high (R 2 = 0.97), which confirms that the model is generally correctly reproducing the observed SST. Although the correlation is high, it is interesting to analyse the spatial differences between the AMIPII data and the TRANS simulation.In Fig. 6 the spatial distribution of the differences corresponding to the data shown in Fig. 
5 is presented. Although the deviation from the observed values is less than 1 K in most regions over the ocean, in some regions the deviation is larger. The largest biases (up to 6 K) are located in the North Atlantic and in the Irminger and Labrador Seas in the Northwestern Atlantic. Deviations of similar magnitude, but with opposite sign, are present in the Kuroshio region. Despite the low resolution applied for the simulations (T31L19 for the atmosphere model and GR30L40 for the ocean), these results are in line with what has been obtained by the coupled model COSMOS (Jungclaus et al., 2007), where biases of similar intensity are found in the same regions. Again, similarly to what has been obtained by Jungclaus et al. (2007), a warmer SST is observed at the west coasts of Africa and the Americas (see Fig. 6). This is probably due to an underestimation of stratocumulus cloud cover in the model atmosphere, which is also an issue with other models (e.g. Washington et al., 2000; Roberts et al., 2004), and possibly an underestimation of the coastal upwelling in that region. Additionally, the cold bias in the North Atlantic SST is related to a weak meridional overturning circulation and the associated heat transport. Finally, in the Southern Ocean, the too high SSTs near Antarctica and too low SSTs on the northern flank of the Antarctic Circumpolar Current (ACC) are mostly due to a positioning error of the ACC.

The surface temperature changes during the 20th century have been compared with model results provided for the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). In Fig. 7, the global average surface temperature increase with respect to the 1960-1990 average is shown for simulation TRANS in comparison to a series of simulations by other models, which participated in the third phase of the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project (CMIP3, Meehl et al., 2007). The overall increase of the surface temperature is in line with what has been obtained by other climate models of the same complexity. The global surface temperature is somewhat lower compared to those of other models of the CMIP3 database in the 1850-1880 period, while the trend observed during the 1960-1990 period is very similar for all models.

The tropical ocean seasonal mean inter-annual variability is shown in Fig. 8. It is known that ENSO (El Niño-Southern Oscillation) is the dominant signal of the variability in the Tropical Pacific Ocean region. Although in the East Pacific the simulated variability correlates well with the observed one (see Fig. 8), in the western Tropical Pacific the model generates a somewhat higher inter-annual variability, which is absent in the observations. The cause is most probably the low resolution of the models. The ocean model, as applied here, has a curvilinear rotated grid with the lowest resolution in the Pacific Ocean (see also AchutaRao and Sperber (2006, and references therein) for a review on ENSO simulations in climate models). Although the variability is generally higher in the model than in the observations, an ENSO signal is observed, as shown in Fig. 9. In this figure, the monthly variability of the SST is depicted for the so-called ENSO region 3.4 (i.e.
between 170 • and 120 • W and between 5 • S and 5 • N).The model variability is confirmed to be higher than the observed one; nevertheless, the model reproduces the correct seasonal phase of El Niño, with a peak of the SST anomaly in the boreal winter.Compared to the difficulties in representing the correct inter-annual variability in the Pacific Ocean, in the Indian Ocean the model reproduces the observed patterns with better agreement to the observations.During July, August and September the model reproduces (with a slight overestimation) the correct variability in the central Indian Ocean, while the patterns produced by the model are qualitatively similar to the observed one during April, May and June.The model is, however, strongly overestimating the variability during October, November and December in the Indian Ocean, especially in the Southern part, while in January, February and March the simulated open ocean shows a too high inter-annual variability over the central-south Indian Ocean and a too low variability near the Northern coasts. Ice coverage The correct simulation of the ice coverage is essential for climate models, due to the albedo feedback.As shown by Arzel et al. (2006) there are large differences w.r.t.sea ice coverage simulations between the models used for the IPCC AR4.Arzel et al. (2006) showed that, although the multimodel average sea ice extend may agree with the observations, differences by a factor of 2 can be found between individual model simulations.In Fig. 10 the polar sea ice coverage fractions for September and March are shown, calculated as a 1960-1990 average climatology from the TRANS simulation.In the same figure the observations are also shown (Rayner et al., 2003), averaged over the same period.In the Northern Hemisphere (NH) winter, the warm Norwegian Atlantic current is present, impeding the ice formation at the Norwegian coast.Nevertheless, the model is clearly predicting a too high ice coverage, especially over the Barent Shelf and at the west coast of Svalbard.At the same time the model overestimates the presence of ice around the coast of Greenland and at the coasts of Newfoundland and Labrador.The model reproduces, with better agreement, the retreat of the sea-ice during summer, with a strong reduction of the sea ice in the Barents and Kara Seas.Again, a somewhat higher ice coverage is present at the east coast of Greenland and northern Iceland.In the Antarctic, the eastern coast of the Antarctic peninsula (Weddel Sea) is ice covered throughout the year.The model reproduces the right magnitude of the retreat of the ice during summer, although with some overestimation in the Ross Sea.During the Southern Hemisphere (SH) winter, an underestimation of the ice coverage is present at 30 • E, while an overestimation occurs over the Amundsen Sea.To compare the changes of the sea ice coverage during the 20th century, the annual sea ice coverage area has been calculated from the simulations TRANS and PI and compared with the dataset by Rayner et al. (2003), which is based on observations (see Fig. 11).The simulated sea ice coverage agrees with the observations, although with an overestimation (up to 8 %).In addition, the simulated inter-annual variability is much larger than what is observed.Nevertheless the model is able to mimic the decrease in the sea ice area coverage observed after 1950, although with a general overestimation. 
Thermohaline circulation and meridional overturning circulation Deep water formation mainly takes place in the North Atlantic Ocean, and in the northern and southern parts of the Greenland Scotland Ridge.The correct representation of deep water formation is important for climate models, to maintain the stability of the climate over a long time period.Figure 12 presents the maximum depth of convection estimated as the deepest model layer, where the diffusive vertical velocity is greater than zero.In the North Atlantic Ocean convection is present between Greenland and Newfoundland (Labrador Sea), with convection deeper than 1500 m.Although the model simulation agrees with the observations in this region (Pickart et al., 2002), a deep convection feature (which is the main region of deep water formation in the model) is present at the east coast of Newfoundland, which is clearly in contrast to the observations.The reason is a weak MOC (Meridional Overturning Circulation) which, combined with the strong presence of ice during winter in the Labrador sea (see Fig. 10), forces the deep water formation in the model to be located further to the South than what is observed.Nevertheless, strong convective movement occurs in the Greenland and Norwegian Seas, reaching up the coast of Svalbard.This zone of deep water formation is well known and appears to be well simulated by the model.In the SH, convection occurs mainly outside the Weddel Sea and Ross Sea, with some small convective events all around the Southern Ocean and with the major events occurring between 0 and 45 • E. Jet streams The jet streams are strong air currents concentrated within a narrow region in the upper troposphere.The predominant one, the polar-front jet, is associated with synoptic weather systems at mid-latitudes. Hereafter, jet stream always refers to the polar-front jet.The adequate representation of the jet stream by a model indicates that the horizontal temperature gradient (the main cause of these thermal winds) is reproduced correctly.In Fig. 13, the results from simulation TRANS are compared with the NCEP/NCAR (National Centers for Environmental Prediction/ National Center for Atmospheric Research) Reanalysis (Kalnay et al., 1996).The maximum zonal wind speed is reproduced well by the model, with the SH jet stream somewhat stronger than the NH jet stream ( 30 and 22 m s −1 , respectively).The location of the maximum wind, however, is slightly shifted poleward by 5 • .The vertical position of the jet streams is also 50 hPa higher than the observed.The NH jet stream has a meridional extension which is in line with what is observed, while the simulated SH jet stream is narrower in the latitudinal direction compared to the re-analysis provided by NCEP.In fact, the averaged zonal wind speed higher than 26 m s −1 in the SH is located between 40-30 • S in the model results, while it is distributed on a larger latitudinal range ( 50-25 • S) in the NCEP re-analysis data.Finally, while the NCEP data show a change of direction between the tropical and extratropical zonal winds, the simulation TRANS reproduces such features only in the lower troposphere and in the stratosphere, while in the upper troposphere (at around 200 hPa) westerly winds still dominate.Although some differences arise from the comparison, the general features of thermal winds are reproduced correctly by the model, despite the low resolution used for the atmosphere model (T31L19). 
Precipitation The representation of precipitation, being a very important climate variable, is still challenging for coupled climate models (Dai, 2006). The data from the Global Precipitation Climatology Project (GPCP, Adler et al., 2003) are used to evaluate the capability of EMAC-MPIOM in reproducing this important quantity. As for many other climate models, the results from simulation TRANS show two zonal bands of high-biased precipitation in the tropics, separated by a dry bias directly at the equator (see Fig. 14). These zonal bands (located over the Pacific Ocean) are persistent throughout the year and their magnitude is independent of the season. In addition, the Northern Intertropical Convergence Zone (ITCZ) is located slightly too far north compared to the observations during summer and autumn (see Fig. 15, JJA and SON), and too far south during winter and spring (see Fig. 15, DJF and MAM). For boreal autumn and winter the simulation shows a distinct minimum at around 30° S, which is weaker in the observations. Finally, the model largely underestimates the precipitation over Antarctica throughout the year and in the storm track during the NH winter. This is associated with the underestimation of the sea surface temperature in these regions.
Fig. 14. Zonally averaged difference in the precipitation rate (in mm day −1 ) between climatologies derived from simulation TRANS and from observations (Global Precipitation Climatology Project, 1979-2009, Adler et al., 2003).

Climate sensitivity To estimate the climate sensitivity of the coupled model EMAC-MPIOM, the results from the CO2×2 simulation are analysed. The simulation yields a global average increase of the surface temperature of 2.8 K for a doubling of CO 2 . As mentioned in the IPCC AR4, the increase in the temperature for a CO 2 doubling "is likely to be in the range 2 to 4.5 °C with a best estimate of about 3 °C". The value obtained in this study is thus in line with results from the CMIP3 multi-model dataset. For the same experiment, for example, the models ECHAM5/MPIOM (with the OASIS coupler) and INGV-SX6 show an increase of the global mean surface temperature of 3.35 K and 1.86 K, respectively. To calculate the climate sensitivity of the model, the mean radiative forcing at the tropopause (simulation CO2×2) was calculated for the years 1960-1990 as 4.0 W m −2 . This implies a climate sensitivity of the model of 0.7 K W −1 m 2 , in line with what has been estimated by most models from the CMIP3 dataset (e.g. ECHAM5/MPIOM, INGV-SX6, INM-CM3 and IPSL-CM4 with 0.83, 0.78, 0.52 and 1.26 K W −1 m 2 , respectively). Despite the usage of the same dynamical components, EMAC-MPIOM and ECHAM5/MPIOM do not present the same climate sensitivity, because of the different resolution and boundary conditions (GHG vertical profiles) used in the model simulations considered here.
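Spelled out, the climate sensitivity quoted above is simply the simulated equilibrium warming divided by the diagnosed radiative forcing at the tropopause; the symbol λ and the ΔT/ΔF notation are introduced here for clarity, while the numbers are those given in the text:

\lambda = \frac{\Delta T_{2\times\mathrm{CO_2}}}{\Delta F_{2\times\mathrm{CO_2}}} = \frac{2.8\ \mathrm{K}}{4.0\ \mathrm{W\,m^{-2}}} = 0.7\ \mathrm{K\,W^{-1}\,m^{2}}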
Summary and outlook A new internal coupling method, based on the MESSy interface, between EMAC and MPIOM is presented. It shows a run-time performance comparable to the external COSMOS coupling approach using OASIS3 under comparable conditions and for the set-up tested here. Despite the fact that the effective performances of the model components are not deteriorated by the new approach, it is hardly possible to estimate in general which coupling method yields the best performance of the climate model, because this is determined by the number of available tasks, the achievable load balance, the model resolution and complexity, and the single-component scalability. Additionally, the scaling and load imbalance issues cannot be regarded separately, rendering a general statement about the performance and scaling features of the internal versus external coupling method hardly possible. The efforts for implementing either the internal or the external coupling approach primarily depend on the code structure of the legacy models to be coupled. In both cases, the legacy codes need to be equipped with additional infrastructure defining the interfaces. The external approach is by design potentially more favourable for less structured codes. Hence, in most cases, the external approach requires smaller coding effort to be implemented than the internal approach.

To evaluate the EMAC-MPIOM model system, we performed selected climate simulations to prove that the EMAC-MPIOM climate is neither deteriorated by the new approach, nor does the new model system produce results that differ from those of other climate models under similar conditions and forcings.

Following the MESSy philosophy, a new submodel (named A2O) was developed to control the exchange of information (coupling) between the AO-GCM components. However, since this submodel is flexibly controlled by a namelist, it can be used to convert any field present in one AO-GCM component to the other one and vice versa. Thanks to this capability, A2O can be used not only to control the physical coupling between the two AO-GCM components, but also to exchange additional information/fields between the two domains of the AO-GCM, including physical and chemical (e.g. tracer mixing ratios) data. Hence, as a future model development, the ocean biogeochemistry will be included via the MESSy interface and coupled to the air chemistry submodels of EMAC, using the previously developed AIRSEA submodel (Pozzer et al., 2006). This will allow a complete interaction between the two AO-GCM domains, exchanging not only quantities necessary for the physical coupling of EMAC and MPIOM (i.e. heat, mass and momentum as shown here) but also chemical species of atmospheric or oceanic interest, leading to a significant advancement towards a more detailed description of biogeochemical processes in the Earth system.

Supplementary material related to this article is available online at: http://www.geosci-model-dev.net/4/771/2011/gmd-4-771-2011-supplement.pdf.

Fig. 1. Coupling methods between the different model components (C1 and C2) of an AO-GCM (upper panel: "internal method", as implemented here; lower panel: "external method", as used for example in the OASIS coupler). The colours denote the different executables.
Fig. 2. Parallel (horizontal) "4 times 4" domain decomposition for a model setup with 16 tasks for the atmosphere model (upper panel) and the ocean model (lower panel). The color code denotes the task number.
Fig. 3. Example of a grid transformation with the SCRIP library routines embedded in the generic MESSy submodel MAIN GRIDTRAFO and called by A2O: the precipitation minus evaporation field on the EMAC grid (top) has been transformed to the MPIOM grid (bottom) using the conservative remapping.
Fig. 4. Scatter plot of the time (seconds wall-clock) required to simulate one month with the COSMOS-1.0.0 model (horizontal axis) and with the EMAC-MPIOM model with the same setup. The color code denotes the number of tasks used (for clarity the numbers of tasks used are also shown on top of the points). In these simulations one task per core has been used. The regression line is shown in red and the result of the linear regression is denoted at the top left side of the plot. The constant bias of 58 s has been subtracted from the data.
Fig. 5. Scatter plot of 1960-1990 average sea surface temperatures from the Taylor et al. (2000) dataset versus those resulting from simulation TRANS (in K).
Fig. 6. Surface temperature differences between the AMIP II (Taylor et al., 2000) dataset and the simulation TRANS (in K). Both datasets have been averaged over the years 1960-1990.
Fig. 8. Standard deviation of the seasonal mean inter-annual variability of the SST (in K). The left and right columns show results from the TRANS simulation and from the HadISST data (Rayner et al., 2003), respectively, both for the years 1900-1999 (not detrended).
Fig. 9. Standard deviation of the monthly mean inter-annual variability of the SST (in K) averaged over the NINO3.4 region. The black line shows results from the TRANS simulation, and the red line from the HadISST data (Rayner et al., 2003), both for the years 1900-1999 (not detrended).
Fig. 10. Simulated and observed polar ice coverage. The upper and lower rows show March and September, respectively. Observations and results from simulation TRANS are averaged for the years 1960-1990. Observations are from the HadISST (Rayner et al., 2003) data set.
Fig. 11. Global sea ice coverage (in 10 12 m 2 ). The black line shows the HadISST (Rayner et al., 2003) data, while the blue and the red lines represent the model results from simulations PI and TRANS, respectively. Dashed and solid lines represent annual and decadal running means, respectively.
Fig. 13. Climatologically averaged zonal wind. The color denotes the wind speed in m s −1 as calculated from simulation TRANS for the years 1968-1996, while the contour lines denote the wind speed calculated from the NCEP/NCAR Reanalysis 1 for the same years. The vertical axis is in hPa.
Fig. 15. Seasonal zonal average of the climatological precipitation rate (in mm day −1 ). The red lines show observations from the Global Precipitation Climatology Project (1979-2009 climatology), the black lines represent results from the simulation TRANS (1950-2000 climatology).
Table 1. Variables to be exchanged by A2O for a physical coupling between EMAC and MPIOM.
9,305.8
2011-09-09T00:00:00.000
[ "Environmental Science", "Physics" ]
Emoji and Self-Identity in Twitter Bios Emoji are widely used to express emotions and concepts on social media, and prior work has shown that users’ choice of emoji reflects the way that they wish to present themselves to the world. Emoji usage is typically studied in the context of posts made by users, and this view has provided important insights into phenomena such as emotional expression and self-representation. In addition to making posts, however, social media platforms like Twitter allow for users to provide a short bio, which is an opportunity to briefly describe their account as a whole. In this work, we focus on the use of emoji in these bio statements. We explore the ways in which users include emoji in these self-descriptions, finding different patterns than those observed around emoji usage in tweets. We examine the relationships between emoji used in bios and the content of users’ tweets, showing that the topics and even the average sentiment of tweets varies for users with different emoji in their bios. Lastly, we confirm that homophily effects exist with respect to the types of emoji that are included in bios of users and their followers. Introduction With the rise of social media usage and online textbased communication, emoji, a simple but powerfully expressive set of visual characters (Danesi, 2016), have become a hugely popular means to express emotions, moods, and feelings over computermediated communication (Kelly and Watts, 2015). In the era of big data, with more and more people engaging with social media, researchers have begun to study the ways in which social media users include emoji in their posts, finding that emoji usage is associated with things like personality (Li et al., 2018), culture (Guntuku et al., 2019), * Authors contributed equally. and socio-geographical differences (Barbieri et al., 2016). Prior work has typically focused on how people use emoji within the posts that they make online (Ljubešić and Fišer, 2016;Robertson et al., 2018), or the way that they can be used as reactions to other content (Tian et al., 2017). However, emoji are also commonly used within user's self-created profiles. In this work, we specifically examine the inclusion of emoji in Twitter bios, which are short (160 characters maximum) texts describing a Twitter account. These bios are featured prominently on a user's profile page, and given their limited length, users often use this space succinctly express the essential information about their accounts. Therefore, we expect that the choice of emoji used in these bios will have a strong connection to a user's online self-identity, or the way that they seek to portray themselves to others on a social media platform. The goal of this paper is to give an overview of how emoji are used in Twitter bios from a computational linguistics perspective, that is, we treat emoji as a special category of tokens and make use of natural language processing methods to understand the major trends in the ways that people use emoji in their bios and what this says about both the things they tweet about and their follower network. Our results provides insights into the variety of ways in which people choose to present themselves online in their Twitter bios that may be overlooked when only considering non-emoji word tokens or only considering the ways that people use emoji in the content of tweets. More specifically, we ask, and subsequently describe the work done to answer, the following research questions: RQ1. 
How are emoji used in Twitter bios? As a first step, we seek to characterize the ways in which users use emoji in their bios. We look at the types of emoji that are most commonly used in Twitter bios, and the positions within the bios at which emoji appear. We compare our findings to trends from the usage of emoji in tweets by the same set of users and note the differences. RQ2. What is the relationship between the emoji in a user's bio and the content that the user posts? Next, we explore the correlations that exist between the choice of emoji to be included in a user's bio and the content that that user tweets about. We consider this from the perspectives of word-level patterns, topic usage, and overall tweet sentiment. RQ3. Do users and their followers use emoji in their bios in a similar way? Last, we investigate the homophily of emoji usage within bios by studying the follower networks of our core set of users. We look at the similarities in both the absence or presence of emoji in users' bios as well as the particular choices of emoji used.

Background 2.1 Online Self-Identity Self-identity, or self-concept, is a collection of firm and noticeable beliefs about oneself (Sparks and Shepherd, 1992). From a general perspective, self-identity gives the answer to the question "Who am I?". Many components together make up self-identity. The self-categorization theory asserts that self-identity consists of at least two types of self-categorization: personal identity (what makes me unique?) and social identity (which groups do I belong to?) (Guimond et al., 2006). As social attributes are inherent, people reveal their self-identity when they communicate with others or interact with the outside world (Fisher et al., 2014). Expressing themselves is also a way for people to establish connections and bonds with the world. Therefore, social media provides a natural opportunity to study self-identity. Previous studies have shown that specific personality characteristics can be measured by analyzing linguistic behavior on social media using natural language processing techniques (Plank and Hovy, 2015). Other work analyzed the words, phrases, and topics collected from Facebook messages, and linked these to personality traits and demographics of users (Schwartz et al., 2013). Twitter bios have been shown to be particularly useful in discovering other aspects of self-identity such as political and religious affiliations (Rogers and Jones, 2019).

Self-representation in Emoji While many studies related to online self-identity are based on the analysis of textual features, others have turned to emoji as important signals of users' identities. In one study, researchers looked at Twitter names and bios, uncovering stark differences in the emoji use of groups supporting and opposed to white nationalism (Hagen et al., 2019). Graells-Garrido et al. (2020) found that in two South American countries, different colour variations of heart emoji indicated users' opinions about abortion: tweets containing the green heart emoji ' ' were more likely to convey support of women's rights, while the blue heart emoji ' ' was more associated with support for stronger restrictions on abortion. In another study, researchers explored differences in emoji usage across cultures, finding that users from western countries tend to use more emoji than users from eastern countries (Guntuku et al., 2019). Although there were specific emoji that were found to be culturally specific (e.g.
cooked rice ' '), it was suggested that many common emoji have similar meanings across cultures. It has been shown that the usage of some emoji is also correlated with aspects of identity such as personality traits (Völkel et al., 2019), and the use of skin-tone modifiers in emoji has been linked to greater feelings of self-representation online, with no evidence that the skin-tones in emoji correlated with the expression of racist views online (Robertson et al., 2018, 2020). Other work found gender stereotypes in the use of male and female emoji modifiers: male modifiers were more frequently used in emoji related to business and technology, while female modifiers were used more often in emoji related to love and makeup (Barbieri and Camacho-Collados, 2018).

Data For our study, we sampled users from Twitter who tweeted between April and July 2020. Using the Twitter streaming API, we began collecting tweets and storing all user-level information available for each tweet, including the bio. In order to filter out fake or less well-established accounts, we removed all accounts with fewer than 100 followers, and to remove celebrity or other widely popular accounts, we filtered out those with more than 1000 followers. From the remaining set of users, we randomly sampled 20,000 users who have at least one emoji in their bios, and collected their most recent 200 tweets, as available, labeling this dataset "emojiBio". We also collected 200 tweets each for a set of 2,000 users who did not use any emoji in their bio as a control group, which we label the "nonEmojiBio" dataset. Finally, the "Followers" dataset contains the user-level information and recent tweets of the followers of the users of both the emojiBio and nonEmojiBio datasets. Details about the size of the datasets are presented in Table 1.

As our dataset contains text written in many languages, we first used the pre-trained fastText language identification model (Joulin et al., 2016a,b) to detect the language that each tweet or bio was written in. The most common languages in our datasets were English, Japanese, Spanish, and Portuguese, followed by others. After identifying the languages, we tokenized the English-language texts using the NLTK (Loper and Bird, 2002) TweetTokenizer and the texts detected as being written in other languages using the Polyglot multilingual tokenizer.

Emoji Usage in Bios First, we sought to characterize the use of emoji in users' bios, so we turn to just the emojiBio dataset. We contrast the most commonly used emoji in bios and in tweets in Table 2, finding that facial expression emoji (' ', ' ', ' ', ' ', ' ', ' ') are more frequently used in tweets, while different variations of heart emoji (' ', ' ', ' ', ' ', ' ', ' ') are more frequently used in bios. Another emoji that is regularly used in bios is the rainbow emoji ' '. The sparkles emoji ' ' and the female sign emoji ' ' (not in the top 10) are frequently used in both bios and tweets. We also checked the average position of emoji within users' bios and tweets, and found that in both cases, most emoji appear at the end of the text. These emoji at the end commonly signify the overall meaning or sentiment of the text. However, we noticed that the emoji in bios are, on average, used closer to the middle of the text than emoji that are used in tweets. There is also a nontrivial number of emoji used at the start of texts, which happens more often in bios than in tweets.
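A minimal sketch of the language-identification and tokenization step described above is shown below. The model file name ("lid.176.ftz"), the fallback whitespace split standing in for the Polyglot tokenizer, and the helper function are illustrative assumptions, not the authors' code.

import fasttext                       # pip install fasttext
from nltk.tokenize import TweetTokenizer

lid_model = fasttext.load_model("lid.176.ftz")   # pre-trained fastText language ID model
tweet_tokenizer = TweetTokenizer()

def preprocess(text):
    # fastText expects single-line input; predict returns e.g. ("__label__en",), (0.98,)
    labels, _ = lid_model.predict(text.replace("\n", " "))
    lang = labels[0].replace("__label__", "")
    if lang == "en":
        tokens = tweet_tokenizer.tokenize(text)
    else:
        tokens = text.split()   # stand-in for the Polyglot multilingual tokenizer
    return lang, tokens

print(preprocess("Loving the new update 😍 #tech"))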
Additionally, we found that it is more common for users to use a single emoji as the entire content of a bio than as the entire content of a tweet (more details in Appendix B). Unicode Emoji 13.0 contains a total of 4,159 emoji in nine groups according to their categories. We carried out an analysis of emoji based on their predefined groups, and the results are shown in Table 3. We found that the number of unique emoji in a category is directly correlated with the number of unique emoji from that group that appear in users' bios. However, after calculating the proportion of the users who use at least one emoji from each group, we noticed that most users used emoji from the Smileys & Emotion group in bios, with a total of 44.3% of the 20,000 users, followed by the People & Body group with 20% of users including at least one emoji from that group. On the contrary, the number of users who used emoji from the Food & Drink group is the smallest, accounting for only 5% of the total users. This suggests that users choose to represent themselves with more facial expressions, people-centric emoji, and emotions, which are connected to aspects of self-identity. We also found that many users use their bios to present their interests to others - some users use these types of emoji to express their love for certain singers or sports clubs.

Next, we examine the relationships between sets of emoji that users include in their bios. We selected the top 20 emoji used in bios and computed the mutual information between the presence of these emoji in a user's bio and the presence of any other emoji. The emoji with the highest mutual information scores are presented in Table 4.
Table 4. Mutual information score rank of emoji in the bios, grouped by the top 20 emoji.
We found that high-frequency emoji also had high mutual information scores for many other emoji, such as heart emoji of various colors: ' ', ' ', ' '. This indicates that these high-frequency emoji are not used indiscriminately, but in particular ways, and have patterns in the ways that they co-occur with other emoji. Another finding is that emoji which are similar to the original emoji have high scores. This finding suggests that similar or the same types of emoji are more likely to be used together. For example, in row 10, four types of ball emoji: basketball ' ', baseball ' ', tennis ' ', and American football ' ', appear in the ten emoji that provide the most mutual information for the soccer ball emoji ' '. People who like football may also enjoy other ball sports, and using these ball emoji in the bios at the same time indicates that they are ball sports enthusiasts (either as players or spectators). Another example is that in the 14th row, there are eight national flag emoji out of the ten emoji that have the highest mutual information with the American flag emoji ' '. People may use multiple flags in their bios to imply their residence and national origin. Finally, we noticed that users tend to use emoji together that fit a specific context. For example, for the ring emoji ' ' in row 18, the most relevant emoji are kiss ' ', person with veil ' ', man in tuxedo ' ', and pregnant woman ' '. People may use these emoji in their bios to express their relationship status, potentially indicating whether they are engaged, married, or expecting a child. We also calculated the mutual information score of non-emoji tokens and the top 20 emoji, as shown in Table 5. Our dataset is multilingual, so the tokens obtained are also multilingual.
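A minimal sketch of the pairwise mutual-information computation described above, using binary presence indicators per user, is given below. The toy bios, the emoji list, and the use of scikit-learn's mutual_info_score are illustrative assumptions rather than the authors' implementation.

from itertools import combinations
from sklearn.metrics import mutual_info_score

# Toy bios (placeholder data). In the paper, one indicator vector per emoji is
# built over all 20,000 users in the emojiBio dataset.
bios = ["lover of dogs 🐶❤️", "⚽🏀 sports fan", "❤️🌈 be kind", "⚽ player ❤️"]
emoji_of_interest = ["❤️", "⚽", "🌈", "🐶", "🏀"]

# Binary presence indicator for each emoji across users' bios.
presence = {e: [int(e in b) for b in bios] for e in emoji_of_interest}

# Mutual information between the presence of each pair of emoji.
for e1, e2 in combinations(emoji_of_interest, 2):
    mi = mutual_info_score(presence[e1], presence[e2])
    print(f"MI({e1}, {e2}) = {mi:.3f}")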
We removed some tokens that do not capture any specific content information, such as some honorifics in Japanese. We found that the usage of emoji is related to words with meanings similar to the emoji, consistent with our previous finding that emoji with similar meanings had high mutual information. An example of this in the word-level results is row 10 of Table 5: the tokens most related to the soccer ball emoji ' ' are words in different languages with similar meanings, related to soccer and players. This finding further confirms that people prefer to use relevant emoji in a specific context. There are many other examples with similar trends, such as the rainbow flag emoji ' ' in row 8 and the American flag emoji ' ' in row 14. Further, we observed that the heart emoji used in bios are more related to showing love for celebrities or sports clubs; for example, "flamengo" (row 1) is a sports club (a shorthand name for Clube de Regatas do Flamengo), and "bts" (row 4) is a Korean male singing group. The Relationship between Emoji in Bios and Tweeted Content Next, we explore the relationship between emoji usage in bios and tweeted content. We start by comparing the overall trends in Twitter usage between the sets of users with and without emoji in their bios, in order to investigate whether there are notable differences in the volume of emoji, hashtags, and words (excluding emoji, hashtags, and stopwords) used by each group (Table 6). In terms of the quantity of words and hashtags, there are no significant differences between the emojiBio and nonEmojiBio datasets. In the emojiBio dataset, we noticed that there is increased usage of emoji in bios compared to tweets (3.05 emoji in bios compared to 0.73 in tweets). The fact that the character limit for tweets is more flexible than the limit for bios makes this result even more notable. In the nonEmojiBio dataset, the average number of emoji that appear in tweets drops to 0.39, which is roughly half the rate of emoji usage in tweets found in the emojiBio group. In terms of hashtags, there is again an increased usage in bios, which is similar between the two datasets. In terms of words, users who do not have emoji in their bios tend to use a slightly higher number of words in their bios and tweets. Specifically, the users in the nonEmojiBio group used roughly one more word, on average, than their emojiBio counterparts, in both tweets and bios. In addition to differences in the number of words, hashtags, and emoji used, we expect that aspects of a user's identity that are revealed through emoji in their bios will be reflected in measurable ways in the content that they choose to tweet about. We perform a case study in which we select two particularly interesting emoji that were common in users' bios, and compare the content of the tweets from users who had these emoji in their bios using both topic modeling and sentiment analysis. The emoji that we focus on for this case study are the rainbow emoji ' ' and the American flag emoji ' '. These emoji are both used with similar frequencies, but are rarely used together, and they represent distinct groups of users which we seek to understand through the lens of the Twitter content that they generate. In our emojiBio dataset, the numbers of users using these in their bios are similar, at 324 (' ') and 302 (' '), while only two users use both emoji in their bios at the same time, so these two emoji can distinguish users well.
These emoji also belong to different emoji subgroups within Unicode Emoji 13.0: the rainbow ' ' belongs to the sky & weather subgroup under the Travel & Places group, and the American flag ' ' belongs to the country-flag subgroup under the Flags group. Among the 324 users who use the rainbow emoji ' ', 155 users use English in their bios, 46 Japanese, 33 Portuguese, and 31 Spanish. For comparison, among the 302 users who use the American flag ' ', 245 use English as the language in their bios, 15 Spanish, 12 Japanese, and 9 Portuguese. The tweets involved are also multilingual, but are mostly written in English. For the analyses in this section, we first translated all non-English tweets into English using the Google Translate API. 5 Considering that the topic modeling and sentiment analysis methods that we use mostly rely on bag-of-words representations of the text, issues with the grammatical accuracy of translated tweets will not have as large an impact. After the translation, we have two sets of tweets corresponding to the two groups of users who used the emoji ' ' and ' '. The number of tweets for each group is 61,239 and 58,376, respectively. We performed topic modeling using Latent Dirichlet Allocation (Blei et al., 2003) on the tweets of users who used the emoji ' ' and ' ' in their bios. We used the coherence score provided by the gensim Python library 6 to select the number of topics. We train a separate topic model for each group of users, and select four topics for each model. In Figure 1, we visualize the process of inferring topics by zooming in on the most relevant tokens for each of the topics within the set of tweets written by each group of users. The weights of the topics are unequal, decreasing from top to bottom as presented in the figure. The topics of tweets from users who use the rainbow emoji ' ' in their bios include words related to concepts like life, community, entertainment, and society. We notice some topics that contain more pleasant words, some related to gender identity, others to life and pets. The fourth topic appears to be related to issues of police brutality. However, on the whole, the tweets posted by users who use the American flag emoji ' ' in their bios are heavier and more serious. They are more concerned with topics related to the police, the president, and current affairs. Because the massive surge in the #blacklivesmatter movement, triggered by the death of George Floyd in the United States, broke out at the end of May 2020, and we downloaded user tweets during this time, there is a clear topic for this current affair. Other current affairs discussed include Antifa and COVID, but these were part of the same topic. Comparing the two sets of topics, we found that the different emoji included by the users in their bios are related to distinct topics, which may also reflect the self-identities of the users who used these emoji. The rainbow emoji ' ' often represents gay pride, as well as happiness and peace in general, so the corresponding tweets also mostly reflect the love of these users for life and others. In contrast, users who use the American flag ' ' are more concerned with national politics and current affairs within the United States. We also conducted a sentiment analysis on these two sets of tweets, using the VADER sentiment analysis tool (Hutto and Gilbert, 2014), giving the results presented in Figure 2.
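A minimal sketch of the gensim-based topic-modeling step described above follows; the toy tweets and all hyperparameters other than the number of topics are illustrative assumptions, since they are not reported in the paper.

```python
# Fit an LDA model on tokenized (translated) tweets and report the c_v coherence
# score used to choose the number of topics; four topics were selected per group.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

def fit_lda(tokenized_tweets, num_topics):
    dictionary = Dictionary(tokenized_tweets)
    corpus = [dictionary.doc2bow(doc) for doc in tokenized_tweets]
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics, random_state=0)
    coherence = CoherenceModel(model=lda, texts=tokenized_tweets,
                               dictionary=dictionary, coherence="c_v").get_coherence()
    return lda, coherence

tweets = [["police", "protest", "justice"], ["dog", "walk", "park"],
          ["vote", "president", "election"], ["music", "concert", "life"]]
for k in range(2, 5):
    lda, coherence = fit_lda(tweets, k)
    print(k, round(coherence, 3))
```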
According to the figure, for the two datasets, the distribution of sentiment is fairly consistent overall, with more positive content than negative. While the amount of neutral sentiment in the two datasets is almost the same, the users with the rainbow emoji ' ' in their bios tweeted more positive content overall, compared to the users with the US flag emoji ' ' in their bios. Close to 40% of the tweets from users who use the rainbow emoji ' ' in bios are positive, and less than 25% are negative. In contrast, less than 35% of tweets sent by users using the American flag ' ' in bios are positive, and close to 30% are negative. These sentiment analysis results are mostly consistent with the results of the topic modeling. The tweets sent by users who use the rainbow emoji ' ' are happier and lighter than those sent by users who use the American flag emoji ' ' in their bios. This case study suggests that users using different emoji in bios can reflect aspects of both their national identity and their personality. More specifically, this analysis shows that groups using some emoji in their bios generate more positive content than groups using other emoji. Turning to emoji usage in followers' bios, there was a considerable difference between the emojiBio and nonEmojiBio datasets. The followers of users that have emoji in their bios (emojiBio) have emoji in their bios as well 32.47% of the time. For the followers of users that do not have emoji in their bios (nonEmojiBio), this average percentage drops to 23.23%. Next, we selected three representative emoji from the set of most frequently used emoji in the emojiBio dataset, namely the green heart emoji ' ', the soccer ball emoji ' ', and the American flag emoji ' '. Also, to eliminate bias caused by only considering high-frequency emoji, we selected the low-frequency dog face emoji ' ', used by a total of just 157 users in our emojiBio dataset. In Table 7, we list the ten most frequently used emoji in the bios of the followers (from our Followers dataset) of the users who use these four specific emoji, and mark the emoji that are the same as the users' in bold text. The green heart ' ' and the American flag ' ' are the emoji used most frequently by followers of users who also include these emoji. The soccer ball emoji ' ' ranks third, and the dog face emoji ' ' ranks fifth, with only several high-frequency emoji ahead of them. There is a strong homophily relationship indicating that users and their followers use the same emoji in their bios. Using the same emoji also shows that emoji in bios can reflect users' self-identity in terms of group belonging, or their social identity. As an illustration, users using the dog face emoji in their bios may want to signal that they are dog lovers, and they may also choose to make online connections with others who are similar, leading to many other dog lovers in their networks. We also take a particular look at the high-frequency emoji used by followers of users who use the American flag ' '. Prior work on emoji and American political movements on Twitter (Hagen et al., 2019) pointed out that the water ("blue") wave emoji ' ' is related to the US Democratic party, and that this emoji is frequently associated with the hashtag #resist to express anti-white-nationalist sentiments. We also observe the use of the red heart ' ' and blue heart ' ' emoji, two colors that are often associated with the US Republican and Democratic parties, respectively.
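A minimal sketch of the VADER labelling behind these distributions follows, assuming the standard compound-score thresholds (plus or minus 0.05) recommended by the VADER authors; the paper does not state its exact cutoffs.

```python
# Label a tweet as positive/neutral/negative from VADER's compound score.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def sentiment_label(tweet: str) -> str:
    compound = analyzer.polarity_scores(tweet)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(sentiment_label("Love my community, life is good!"))  # -> positive
```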
These followers may be expressing their political opinions: they use the American flag emoji along with other more specific emoji to express their particular views. Lastly, we notice several emoji related to religion in this column, indicating expressions of religious as well as political affiliations. In addition to the focused study on these four emoji, we also examined whether the emoji used in the bios of Twitter users are either the same as, or generally similar to, those used by their followers across the entire dataset. To assess similarity, we trained our own emoji embeddings with a skip-gram model (Mikolov et al., 2013) using the tweets and bios of the emojiBio dataset, and subsequently created a similarity lexicon of emoji based on the cosine similarity between the vectors, considering one emoji to be similar to another if it was within the top ten nearest neighbors in the learned embedding space. We found that the average percentages of common (i.e., exact match) and similar emoji appearances between the users of the emojiBio dataset and their followers are 3.45% and 13.30%, respectively. The respective distributions of the percentages of common and similar emoji appearances are presented in Figure 3. We used permutation tests to confirm that the difference between these two values was statistically significant, and therefore conclude that the followers of a given user seem to have a considerably high probability of using the same, or similar, emoji in their bios as the users they follow. Table 8 shows the five emoji for which followers used the same or similar emoji as the users that they follow. Discussion We now give answers to our original research questions based on our results: RQ1. How are emoji used in Twitter bios? Our results showed that emoji are used in unique ways within users' bios on Twitter, even compared to the ways in which they are used in tweets. In general, emoji are positioned earlier in bios than in tweets, and there is a higher percentage of bios that start with an emoji compared to tweets. Also, it is more common for an emoji to be the only content of a bio than the only content of a tweet. Moreover, facial expression emoji are the dominant type of emoji in tweets, while different variations of heart emoji are dominant in bios. Specifically, the most popular emoji in bios are from the Smileys & Emotion group, while the least frequently used emoji are from the Food & Drink group. Furthermore, we noted that the most frequently used emoji in bios have high mutual information with other emoji that are similar to them, from the same category (e.g. hearts, balls, flags), or related to the same concept (e.g. relationship status). In their bios, people tend to use emoji to show their support for musical groups or sports teams (or sports in general), as well as things like the countries that they come from or are currently living in. RQ2. What is the relationship between the emoji in a user's bio and the content that the user posts? Compared to users who do not have any emoji in their bios, users with emoji in their bios use about twice as many emoji in their tweets, on average. They also use fewer words in both their tweets and bios.
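A minimal sketch of the skip-gram emoji-embedding and similarity-lexicon step follows, using gensim's Word2Vec; the vector size, window, and toy token lists (word stand-ins for the emoji) are illustrative assumptions, since the paper does not report these hyperparameters.

```python
# Train skip-gram embeddings over tokenized tweets/bios and look up a token's
# top-n nearest neighbours by cosine similarity (the basis of the similarity lexicon).
from gensim.models import Word2Vec

sentences = [["love", "green_heart", "vegan"], ["soccer", "basketball", "sports"],
             ["green_heart", "plant", "garden"]]   # stand-ins for emoji tokens
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1, epochs=10)

def similar_emoji(token: str, topn: int = 10):
    """Nearest neighbours in the learned embedding space."""
    return model.wv.most_similar(token, topn=topn)

print(similar_emoji("green_heart", topn=2))
```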
In our case study, topic models built from the tweets of the users that use the rainbow ' ' and the American flag ' ' emoji in their bios showed that users who have the rainbow emoji ' ' in their bios tweet about life, community, entertainment, and society, whereas users who have the American flag emoji ' ' in their bios tweet about the police, the president, and current affairs. Also, it was shown that tweets of users that have the rainbow emoji ' ' in their bios convey a more positive sentiment on average compared to users that use the American flag ' ' in their bios. This is just one example showcasing the fact that the types of emoji that people choose to include in their bios reflect larger views, opinions, and sentiments that are expressed in the content of their tweets. RQ3. Do users and their followers use emoji in their bios in a similar way? The usage of emoji in bios also led us to some conclusions related to homophily effects on Twitter. First, our results indicate that followers of users who have emoji in their bios are more likely to have emoji in their bios as well. We also found that users tend to use the same, or similar, emoji in their bios as the users they follow. For example, followers of users with the green heart emoji in their bios also had other colored hearts in their bios, with the green heart being the most commonly used by the followers. These findings suggest that there are indeed similarities within user networks in the ways in which emoji are used in Twitter bios. Conclusion We have presented an overview of the ways in which Twitter users include emoji in their bios, and what kinds of things we can learn about those users from the particular emoji that they use. Using a range of approaches, we have shown that emoji are an important component to consider when examining the ways in which users present themselves to others in online settings like Twitter. The emoji that users choose to include reveal important aspects of their self-identities, such as the teams and musicians that they support, the activities they enjoy, and their national and political identities, and show their similarities with their followers in these same aspects. At the same time, we have only scratched the surface of the types of in-depth analyses that could be performed by considering specific sets of emoji and examining how these relate to the identities of the users who include them in their bios. This work can provide an important complementary view to other work on online self-identity that mainly focuses only on plain-text content. A Differences across languages A language identification analysis was conducted for the combined data of the emojiBio and nonEmojiBio datasets to identify the most frequently used languages in the tweets and bios. The analysis was conducted using the fastText language identification tool (Joulin et al., 2016a,b), and the language distribution is presented in Figure 4 (legend: English, Japanese, Spanish, Portuguese). B Positioning Analysis The positioning analysis distribution for bios and tweets is presented in Figure 5. The results of the positioning analysis indicate that emoji appear earlier in bios than in tweets. For each emoji, its positional value was calculated by computing its distance from the first character of the text and dividing it by the overall length of the text. Therefore, emoji that were used at the beginning of the text had a positional value of 0, whereas emoji that were used at the end of the text had a positional value of 1.
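A minimal sketch of the positional-value calculation described above follows; the emoji test used here (a simple membership check against a set of emoji characters) is an illustrative stand-in for the study's emoji detection.

```python
# Positional value of each emoji in a text: index of the emoji divided by the
# text length, so values near 0 mean the start and values near 1 mean the end.
def positional_values(text: str, emoji_set: set) -> list:
    length = len(text)
    return [i / length for i, ch in enumerate(text) if ch in emoji_set]

print(positional_values("dog lover 🐶", {"🐶"}))  # emoji near the end -> value close to 1
```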
C Group Analysis In the group analysis, we divided the bios into four groups according to the language used, and we calculated the mutual information score for all emoji that appeared. Table 9 shows the 25 emoji with the highest scores in each group. We observed that in all language groups, there were multiple national flag emoji amongst the results. In most cases, those flags belong to countries where the respective language is spoken as a first or second language by a considerable portion of the population. While emoji grouping is analyzed in Chapter 4, it is also important to consider that Unicode Emoji also provides standards for subgroups of emoji. Specifically, each emoji belongs to a group and also belongs to a subgroup under the group, which makes the classification more specific. For example, the grinning face emoji ' ' belongs to the face-smiling subgroup under the Smileys & Emotion group. Each group contains a different number of subgroups, and overall there are 98 subgroups. We counted the total number of times the emoji from each subgroup appeared in users' bios and sorted them in descending order. Table 10 demonstrates the ten most popular subgroups. The results suggest that the most frequently used subgroup is emotion while the face-smiling subgroup also belongs to the same category (Smileys & Emotion), showing that people are commonly using emoji to express their sentiments in bios. The second most frequently used subgroup is countryflag, which implies that users regularly use emoji in their bios to reveal their nationality or the countries where they have lived. Animal-mammal and plant-flower are also frequently used. These emoji are used to express the love of users for animals or plants, but also for decoration reasons, to make bios more attractive. Another interesting finding was that the zodiac subgroup ranks seventh. This finding shows that people like to use symbolic emoji to tell others about their zodiac, which they consider as a part of their self-identity. To confirm that the emoji could be accurately grouped in clusters, we conducted a statistical analysis based on the results of the mutual information scores for the 20 most frequently used emoji in bios. Specifically, we divided the 20 most frequently used emoji into groups and subgroups, and we plotted two heat maps which illustrate the categorization of emoji, as shown in the Figure 6. While calculating the mutual information scores between a group and a specific emoji, we did not consider that emoji as part of the group, to ensure normalization. The results show that each emoji achieved a higher mutual information score with the group or subgroup in which it belongs. This suggests that emoji in bios are more commonly used with other emoji from the same group or subgroup. D Topic Modeling We conducted supplementary experiments on the topic modeling analysis of Chapter 5. Specifically, we used the LDAvis tool to visualize the results, and the topic distributions are shown in Figure 7. The topic distributions visualize the weight of each topic and the connection between different topics. More precisely, the circles represent the topics, and the distance between the circle centers determine the connection between the topics. More prevalent topics are represented by larger circles. E Frequency Analysis The results of the frequency analysis showed that the popularity of words and hashtags varies greatly between bios and tweets. Table 11 presents the most frequently appearing English words and hash-
DO FOREIGN FUNDS AFFECT INDONESIAN CAPITAL MARKET LIQUIDITY? Trading volume (liquidity) is the natural complement to returns; therefore, research on volume is essential in the capital market. Factors influencing liquidity (trading volume, TV) include the bid-ask spread, interest rates, exchange rates, and foreign funds. The study uses the Amihud Illiquidity Ratio (AIR) as a proxy for liquidity. The data are daily observations of the Indonesian Composite Index for liquidity and all variables over the 2016-2019 period, divided into odd and even semesters and analyzed with a linear regression model. The results show that the bid-ask spread decreases trading volume, while the exchange rate and interest rates have a positive effect on AIR: the greater the depreciation and the higher the interest rates, the lower the liquidity. For 2019, the foreign fund variable has a negative coefficient, which shows that foreign funds increase capital market liquidity. In 2019, there is also a strengthening (interaction) effect between foreign funds and the exchange rate. INTRODUCTION In 2019, the Indonesian Capital Market, which has a market capitalization of around Rp 7,299 trillion, equivalent to $523 billion, consisted of 668 firms. The Composite Index at the end of 2019 was 6,329, up around 2.4% compared to the beginning of the year. At the end of 2016, the Composite Index was 5,297, up around 17% compared to the beginning of the year. Some shares have no trading transactions at all; in Indonesian terminology these are 'gocapan' [stocks priced at Rp50/share]. Such stocks are very detrimental to investors because they become a pseudo record of the value of their investments. For this reason, liquidity is an integral part of the progress of the capital market, especially for investors. Shares with high trading volume (HTV), known as the LQ 45 group, contribute around 70% of the market and serve as an indicator of the movement of Indonesian shares. Trading volume and price form a pair: the profits obtained by investors can be shown by changes in price (time-weighted returns), but the more realistic dollar-weighted return depends on volume as a critical component. The Indonesian Capital Market is an open capital market, where foreign investors account for around 30% of investors. In addition, foreigners also enter the capital market as brokers (securities firms), where ownership is allowed up to 99% (Regulation No. 20 / POJK.04 / 2016). This situation can give rise to 'foreign-foreign linkage' and 'domestic-domestic linkage' positions; although not forbidden, this seems to create two groups. In the bond sector, foreign ownership is even greater, at around 47%. Meanwhile, the number of retail investors in Indonesia is increasing; retail investors panic easily and tend to follow foreign investors. This is why foreign domination is interesting to study. Dvořák (2005) shows that foreign investors prefer to use foreign securities firms. There is majority foreign ownership in several firms, such as HMSP (tobacco industries) held by Philip Morris (98%); a leading mineral water firm (Aqua) went private because Danone owns 100%. Unilever Tbk has purchased several of the best local brand products (Sariwangi tea, Bango soy sauce, etc.). This situation shows foreign domination and makes Indonesia an open economy. Naturally, foreign transactions can be a leading indicator for individual investors' stock transactions. As an open country with a managed floating exchange rate regime, the rupiah exchange rate changes dynamically according to market demand and supply.
The depreciation of the rupiah causes a decrease in the value of the investment and also decreases the return obtained. In the international financial management literature, this is captured by the Effective Financing Rate (EFR). The EFR is an essential concept because it shows the real return obtained by investors. In this case, returns are determined by two components: (i) the nominal return on assets and (ii) exchange rate changes. The second component is the dominant one, as it can change quickly and can make total returns rise or fall sharply. However, if the currency moves sharply (primarily depreciation), it causes psychological effects, creates panic, and triggers panic selling in the capital market. Interest rates are a substitute product for the capital market. Bank Indonesia (the central bank of Indonesia) publishes JIBOR as a market benchmark rate, which can serve as an investment benchmark. With a linked financial system, it is easy to move funds from the capital market to the money market and vice versa, so that investors have attractive investment choices. On a daily basis, the strength of JIBOR is its small but certain positive return, which can compensate for the risk of fluctuations in the capital market. When interest rates are high and the market is bearish, this can increase transactions through selling. On the Indonesian stock exchange, the tick size is set based on price intervals. This tick size produces a bid-ask spread, which investors consider in their investment decisions. The width of the bid-ask spread shows the disparity in investors' expectations. If the bid-ask spread is wide, transactions will not occur because the two parties are waiting for each other. In contrast to Karpoff (1987), who states that disagreement among investors is the cause of transactions, a widening bid-ask spread, although it shows disparity, also signals asymmetric information, thereby reducing investment actions and subsequently reducing transactions. Existing research instead examines the opposite direction, namely the effect of trading volume on the bid-ask spread, and finds a negative effect of trading volume on the bid-ask spread (Chung & Charoenwong, 1998; Ahn et al., 1996; Frank & Garcia, 2011). The bid-ask spread is rarely used as an explanatory factor; in that research, trading volume is the explanatory variable for the bid-ask spread. In fact, the theoretical direction is the reverse: the existence of a bid-ask spread is the driving factor for the transaction. The bid-ask spread factor can explain two different things: (i) a wider bid-ask spread indicates disagreement between investors (heterogeneity in beliefs), which further encourages transactions; (ii) it reflects information asymmetry between the two parties to the transaction. The above data summary shows interesting facts about the Indonesian Capital Market and several variables. It is essential to research trading volume because volume makes a significant contribution to the market. Second, regarding the interesting interplay between the foreign-investor phenomenon and the exchange rate, Rhee & Wang (RW) show that foreign investors negatively affect future liquidity; one proxy for liquidity in RW is the change in the bid-ask spread. Ding et al. show that foreign institutional investors encourage liquidity both in SOEs (state-owned enterprises) and non-SOEs.
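For reference, the Effective Financing Rate invoked at the start of this passage has a standard textbook form in the international financial management literature; the equation below is background and is not one stated in this paper.

```latex
% Effective Financing Rate (EFR): the realized rate in the investor's home currency,
% combining the nominal return r on the asset and the percentage change e_f in the
% exchange rate over the holding period.
\[
  r_{\mathrm{eff}} = (1 + r)(1 + e_f) - 1
\]
% Example: a 10% nominal return with a 5% depreciation of the rupiah against the
% investor's currency (e_f = -0.05) gives r_eff = (1.10)(0.95) - 1 = 0.045, i.e. 4.5%.
```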
Lee & Chung found that foreign investors had a positive effect on Amihud illiquidity, but a negative effect on bid-ask spreads. Nguyen et al. (2017) show that foreign investors negatively affect liquidity on the Vietnamese stock market. Thanatawee also found that foreign investors negatively affected liquidity. These findings indicate that foreign investors are not catalysts for the capital market, as might be expected. Pavabutr & Yan (2007) examined the effect of foreign investors on trading volatility, finding that foreign flows positively affect volume; in this case, if a local investor sells, the foreigner buys. Vo (2016) also examines the effect of foreign investors on trading volume; Agudelo (2010) examined the effect of foreign investment ownership on liquidity (the bid-ask spread) and found negative results. Dvořák (2005) researched foreign investors in Indonesia, finding that local investors better understood the situation in the short term, but foreign brokers were better at picking long-term winners; thus, local investors who choose foreign brokers obtain higher returns than foreign investors. In this case, foreign influence can be explained through two channels: first, through trading activity, where foreign investors' transactions are then followed by domestic investors; second, through the information channel, where foreign investors hold information and serve as a reference for investing. Chkili & Nguyen (2014) examined exchange rate fluctuations and stock market returns in the BRICS countries (Brazil, Russia, India, China, and South Africa) by dividing the sample into two regimes (low volatility, high volatility) and using a VAR model. The results show that stock market returns affect the exchange rate more strongly in both regimes. Theoretically, in my opinion, this is not appropriate, since the flow of foreign funds enters the stock market through the exchange; changes in the stock markets of developing countries should not be able to create significant changes in exchange rates. Gong & Dai (2017) examine exchange rate fluctuations and the herding behaviour of foreign investors in the Chinese capital market; their research shows that rising interest rates and yuan depreciation cause herding, especially in a bearish market. Li et al. (2020) study the RMB Index and stock market liquidity in China. The results show that the two have a strong positive correlation, especially during periods of tightening monetary policy. Ng et al. (2016) examined the effect of Foreign Direct Investment (FDI) and Foreign Portfolio Investment (FPI) on Trading Volume (TV) in 39 countries; they find that FDI affects TV negatively, while FPI affects it positively. Existing research does not examine a moderating relation between the exchange rate and foreign funds, even though the two are closely linked: when one changes sharply, it is followed by a sharp change in the other. There is little research on the effect of the bid-ask spread on trading volume; research instead examines the opposite direction, namely the effect of trading volume on the bid-ask spread, finding a negative effect (Chung & Charoenwong, 1998; Ahn et al., 1996; Frank & Garcia, 2011). This means that the higher the trading volume (HTV), the smaller the information asymmetry, and hence the smaller the bid-ask spread.
Research on the effect of interest rates on the stock market finds: (i) negative results for the Jordanian, Colombo, and Pakistani capital markets; (ii) positive results in the short term and negative results in the long term in China; and (iii) no relationship between interest rates and stock returns in Malaysia (Ali Khrawish et al., 2010; Amarasinghe, 2015; Ahmad et al., 2012; Ye & Huang, 2018; Murthy et al., 2016). Callen & Fang (2015) examine the short-term interest rate as a predictor of stock prices in the following year. However, no literature examines the relationship between interest rates and trading volume, even though, theoretically, interest rates are a substitute product for the 'price' of stocks as an investment instrument. Various concepts regarding liquidity have been summarized by Kumar & Misra (2015). Liquidity is an interesting subject because it is a measure of stock exchange activity. If time-weighted returns refer to price changes, then dollar-weighted returns refer to rupiah value, which involves two components: price and transaction volume. Thus, price and volume are a pair. Amihud (2002) captures this relationship with the concept of illiquidity, defined as the ratio of absolute return to trading volume. METHOD Data relating to liquidity use the Indonesian Composite Index (CI). Data were obtained from IDX for the 2016-2019 study period. The analysis is divided into semesters, an economically meaningful time frame, so that results can be compared. This study uses the Amihud Illiquidity Ratio (AIR). There are three proxies of transactions (liquidity), namely: volume (number of shares traded), value (rupiah transaction value), and trading frequency. This study uses all of these proxies. Returns are used as absolute values. AIR is the ratio of return to trading transactions; thus, the higher the liquidity, the lower the AIR. I use AIR as a proxy for liquidity because: (i) as a ratio, AIR scales down the impact of trading volume fluctuations; (ii) it couples return with trading volume, instead of using trading volume alone. Foreign transaction data (in billions of Indonesian rupiah) are measured as net foreign transactions, where a positive value indicates that the purchase value is greater than the selling value, and vice versa. The higher the foreign transaction, the greater the impact on the capital market: the transaction volume is higher, but returns do not necessarily change, so the AIR will be smaller. Thus, foreign funds are expected to negatively affect the Illiquidity Ratio (AIR). Exchange rates are expressed as Rp/$, where an increase in the exchange rate indicates depreciation of the rupiah. The higher the exchange rate, the greater the weakening of the rupiah. The impact on the stock market is not only a (large, negative) change in returns but is also followed by changes in transactions, although the change in returns is expected to be greater than the change in transactions, and thus the AIR is higher. Thus, the exchange rate is expected to have a positive effect on the Illiquidity Ratio (AIR). The Jakarta Interbank Offered Rate (JIBOR) is the proxy for the interest rate. Interest-bearing instruments are a substitute for stocks; thus, the higher the interest rate, the lower the stock price, which implies a larger change in absolute return. This means that interest rates are expected to have a positive effect on the Illiquidity Ratio (AIR). The bid-ask spread is obtained from IDX; it shows the average difference between the purchase price (bid) and the asking price (ask).
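For reference, the Amihud-type measure described above can be written as follows; this follows Amihud (2002) and the description in the Method section, and the exact scaling used in the paper is not stated.

```latex
% Daily Amihud Illiquidity Ratio: absolute return divided by a trading-activity
% proxy (value, volume, or frequency, matching the three proxies used in the study).
\[
  \mathrm{AIR}_t = \frac{\lvert R_t \rvert}{\mathrm{TradingActivity}_t}
\]
% A higher AIR means a given return was produced by less trading, i.e. lower liquidity.
```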
The higher the bid-ask spread, the greater the 'information asymmetry' that occurs. Conceptually, a greater bid-ask spread will reduce trading transactions, so the illiquidity ratio will be even greater. Thus, the effect of the bid-ask spread on the illiquidity ratio (AIR) is expected to be positive. Whether the exchange rate is a driving factor for foreign transactions is measured by a multiplicative (interaction) term. If the exchange rate (rupiah depreciation) encourages foreign transactions, the effect on liquidity will be even greater; thus, the expected coefficient of this variable is negative and significant. However, if the exchange rate does not encourage foreign transactions, then the coefficient of this variable is positive, or negative but not significant. The contribution of this research is to introduce this interaction term as an explanation of trading. The analysis is divided per semester to compare the effects of these variables. Thus, the model can be given as follows: RESULT AND DISCUSSION As stated in the Method section, the research compares Semesters I and II to analyze changes in the data. Second, regression results are used to answer whether the chosen variables affect trading volume, especially the interrelation between foreign funds and exchange rates. The influence of the chosen variables on trading volume is reported in Table 1 (odd semesters) and Table 2 (even semesters). The bid-ask spread coefficients are positive and mostly significant, except for 2017 (odd semester, for all liquidity proxies: value, volume, frequency), where the coefficient is negative and significant. A positive coefficient indicates that as the bid-ask spread increases, AIR increases, that is, trading activity decreases. This situation does not reflect heterogeneity in beliefs (Karpoff), but rather the 'costs' of trading. This cost of trading causes both parties to postpone the transaction, each waiting for the other party to react. Psychologically, this waiting indicates the expectation of a reaction, which is common in transactions with speculative motives or expectations of a gain. A wider bid-ask spread reflects the asymmetric information of both parties. Unlike Karpoff's assumption, this situation shows that both parties are waiting, so they tend to reduce transactions in terms of value, volume, and frequency. It shows that information flow is not yet smooth, and further indicates an inefficient market. What can be done is to supply information throughout trading hours so that the accumulation of information (on both the bid and ask sides) is adequate and equal, and the equilibrium price is reached faster. I cannot compare this result with previous studies because they use the bid-ask spread in the reverse direction, as the dependent variable (Chung & Charoenwong, 1998; Ahn et al., 1996; Frank & Garcia, 2011). Concerning foreign influence, there is a change in the coefficient sign: positive in 2016-2017, becoming negative in 2018 and especially 2019. This means that in the last year (2019), foreign inflows were followed by an increase in transactions. This different pattern has several strategic implications, namely: (i) an increase in foreign transactions indicates a more liquid market situation; (ii) if foreigners are considered 'market leaders', then domestic transactions follow; (iii) transactions are carried out more frequently (frequency), with more shares (volume) and a higher value (value).
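The explicit regression equation referred to by "the model can be given as follows" does not appear in the text above; a plausible reconstruction from the variables described in the Method section is shown below and should be read as an assumption, not the authors' exact specification.

```latex
% Assumed form of the semester-level regression, with AIR as the dependent variable,
% the bid-ask spread, net foreign funds, the Rp/$ exchange rate (ER), and JIBOR as
% regressors, plus the Foreign x ER interaction used to test moderation.
\[
  \mathrm{AIR}_t = \beta_0 + \beta_1\,\mathrm{BidAsk}_t + \beta_2\,\mathrm{Foreign}_t
               + \beta_3\,\mathrm{ER}_t + \beta_4\,\mathrm{JIBOR}_t
               + \beta_5\,(\mathrm{Foreign}_t \times \mathrm{ER}_t) + \varepsilon_t
\]
```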
It acts as oil for the capital market, making it more liquid. This situation can be interpreted as good news, but it can also be interpreted as bad news: good news because it shows high trading volume (HTV), bad news because foreign funds act as a market 'signal', which makes the capital market more vulnerable to global issues. This result contrasts with Rhee & Wang's (2009) research from over ten years ago. Compared to Rhee & Wang's setting, Indonesia's current situation is different, as shown by (i) a more global economy, (ii) a smaller transaction size (a lot contains 100 shares instead of 500 shares), and (iii) more investors. Compared to other ASEAN capital markets, this result also contrasts with the Vietnamese capital market (Nguyen et al., 2017) and the Thai capital market (Thanatawee, 2019). Further research is needed to explain these differences, including the difference in investment scale. The 2019 regression results for the foreign variable are negative and significant (Tables 1 and 2, for all trading volume proxies), meaning that foreign funds influence trading volume directly. These results can be taken into consideration by investors, and especially by the SRO. We hope that the Indonesian stock market does not break easily because of negative foreign flows, even though foreign funds act as oil for transactions. Exchange rates have positive and significant coefficients, except for 2019 (odd semester) for the value proxy of AIR. For the even semesters, coefficients were also mostly positive but not significant. Negative exchange rate coefficients were found in several years, but they were not significant. Thus, in general, the exchange rate has a positive effect on AIR: depreciation generally causes trading transactions to decrease. This means that rupiah depreciation does not encourage foreigners to conduct transactions. Some possible explanations are: (1) no additional foreign hot money enters when the rupiah depreciates; (2) existing foreign investors treat the exchange rate as irrelevant and, as rational investors, conduct transactions based on rational considerations; (3) depreciation of the rupiah is not an incentive for foreign funds; instead, foreign parties consider the expected future exchange rate at which they will repatriate funds, in line with the concept of the Effective Financing Rate. On this account, uncontrolled depreciation has a negative impact not only on the economy but also on the capital market. The absence of hot money in a depreciation situation indicates concern that there will be even more significant depreciation in the future. In financial markets, expectations play an important role. Depreciation situations tend to be perceived poorly, as undesirable situations. For foreign investors, with high depreciation the real return obtained decreases. There is a possibility of negative net foreign flows, which push stock prices down; thus AIR goes down. In this case, depreciation is more appropriately linked to large (negative) returns than to changes in the volume of transactions themselves. By this argument, maintaining the exchange rate is not only 'a necessary condition' but also 'a sufficient condition', because a worsening exchange rate has a catastrophic impact. This result accords with Gong & Dai (2017) and Li et al. (2020). The interest rate coefficient is mostly positive and significant, except in 2018 and 2019 in the even semesters.
This shows that an increase in interest rates will reduce transactions in the capital market, and that interest rates are a 'substitute product' for the capital market. Several arguments can be given: (i) JIBOR rates are quite high and certain, so they become an attraction for investors; (ii) the money and capital markets are well connected, allowing funds to move quickly. The influence of interest rates can be a concern for the central bank. This result is similar to Ali Khrawish et al. (2010), Amarasinghe (2015), and Ye & Huang (2018). In the capital market, some stocks have high volatility, while others are 'sleep' stocks. The existence of a risk-free rate can serve as a reference for comparing investment returns. This situation can serve as a guideline for the SRO to coordinate better with the central bank and to better understand investor behaviour in the context of investment asset selection. Because interest rates and shares are substitutes, the SRO can view interest rates as a competitor to stocks. For this reason, high stock volatility should be communicated as a good thing, namely the potential for higher profits. On the other hand, sleep stocks can be a handicap for the capital market and a disincentive for investors, who may then consider investing in the money market (interest rates) instead. The interest rate offers a fairly high, risk-free return, acting as a sweetener that tempts investors to move (i) a part of their funds, (ii) temporarily, (iii) especially in a weak economy, like now. For the moderation variable, the exchange rate is said to moderate foreign funds if the coefficient is negative and significant. Such results appear only in 2018 (for all AIR proxies) and in semester II of 2018-2019 for all proxies. This shows that in the last year (2019) there tends to be an interaction between foreign funds and the exchange rate: if foreign funds flow in and the rupiah depreciates, trading volume increases. Earlier, it was stated that foreign inflows tended to increase transactions, while the expectation of depreciation encouraged panic selling. In this case, depreciation will only increase foreign transactions if the foreign funds are already in Indonesia. Thus, there is no evidence of the exchange rate moderating foreign investors. As expected, depreciation is a consideration for foreign investors in conducting transactions in the capital market. Depreciation will increase transactions, but in the form of net foreign selling. With many retail investors and less information, this situation can at worst turn into panic selling. For this reason, it is necessary to educate retail investors. Some messages that can be conveyed are: (i) if foreigners sell, it is time to become the owner; (ii) the capital market ultimately rests on company fundamentals; (iii) the investment perspective should be long term rather than short term. A communication strategy is needed by the SRO so that retail investors are not easily swayed. The fact is that foreign investors have become a 'signal' that should not be doubted, but one that needs to be dealt with so that the adverse effects can be eliminated or reduced. CONCLUSIONS AND SUGGESTION The results of the study provide several policy and investment implications. First, bid-ask spreads tend to reduce trading transactions. This means that high bid-ask spreads act as enormous transaction costs, unlike the Karpoff concept.
However, it is also not recommended to force the bid-ask spread down, considering that the bid-ask spread of the top group's shares is already relatively low, at a maximum of around 0.5%. In transactions, investors pay less attention to the bid-ask spread than to the acceleration of the price itself, and this price acceleration is influenced by various factors, not merely bid-ask spreads. Second, a depreciating rupiah has a negative impact on stock trading transactions; thus, the importance of managing the exchange rate should concern the relevant stakeholders. For the capital market, the impact of changes in exchange rates needs to be anticipated, for example by applying a lower limit (auto rejection) so that no sharp decline occurs. Rising interest rates act as a substitute for the capital market; thus, interest rate policy can be seen by investors as an alternative investment. Foreign funds are an indicator of transactions in the capital market; for that reason, it is necessary to watch out for large flows of foreign funds. Together, changes in the exchange rate and in foreign funds greatly affect the capital market. The Amihud Illiquidity Ratio uses the concept of absolute return; in this case, negative and positive returns cannot be distinguished. Also, if the value of the ratio is large, is it caused by a large change in return and/or by low trading volume? This can be a concern for further research. From the description above, several policy implications can be drawn, namely: (1) bid-ask spreads need not be minimized; (2) the government needs to manage the exchange rate; (3) the interest rate is a policy instrument that affects the capital market; (4) foreign funds are oil for the capital market, so regulators must anticipate large changes in them.
MOBFinder: a tool for mobilization typing of plasmid metagenomic fragments based on a language model Abstract Background Mobilization typing (MOB) is a classification scheme for plasmid genomes based on their relaxase gene. The host ranges of plasmids of different MOB categories are diverse, and MOB is crucial for investigating plasmid mobilization, especially the transmission of resistance genes and virulence factors. However, MOB typing of plasmid metagenomic data is challenging due to the highly fragmented characteristics of metagenomic contigs. Results We developed MOBFinder, an 11-class classifier, for categorizing plasmid fragments into 10 MOB types and a nonmobilizable category. We first performed MOB typing to classify complete plasmid genomes according to relaxase information and then constructed an artificial benchmark dataset of plasmid metagenomic fragments (PMFs) from those complete plasmid genomes whose MOB types are well annotated. Next, based on natural language models, we used word vectors to characterize the PMFs. Several random forest classification models were trained and integrated to predict fragments of different lengths. Evaluating the tool using the benchmark dataset, we found that MOBFinder outperforms previous tools such as MOBscan and MOB-suite, with an overall accuracy approximately 59% higher than that of MOB-suite. Moreover, the balanced accuracy, harmonic mean, and F1-score reached up to 99% for some MOB types. When applied to a cohort of patients with type 2 diabetes (T2D), MOBFinder offered insights suggesting that the MOBF type plasmid, which is widely present in Escherichia and Klebsiella, and the MOBQ type plasmid might accelerate antibiotic resistance transmission in patients with T2D. Conclusions To the best of our knowledge, MOBFinder is the first tool for MOB typing of PMFs. The tool is freely available at https://github.com/FengTaoSMU/MOBFinder. Introduction Plasmids are usually small, double-stranded, and circular DNA molecules found within bacterial cells [1].Being separate from the bacterial chromosome, plasmids have the ability to replicate independently and can be transferred between bacteria through conjugation [2].Bacteria, specifically pathogenic strains, can acquire antibiotic resistance genes or virulence factors via plasmid-mediated horizontal gene transfer, aiding their ability to adapt to various environments [3]. 
Plasmid classification is important for investigating multiple properties of plasmids, such as host range, replication patterns, and mobilization mechanisms [4]. Many classification schemes have been developed according to the distinct characteristics of plasmids, including taxonomic classification, replicon typing (Rep), incompatibility typing (Inc), mate-pair formation typing (MPF), and mobilization typing (MOB). In taxonomic classification, plasmids are categorized based on their host bacteria [5]. Rep typing classifies plasmids according to genes controlling their replication, known as replication initiation genes [4,6]. Inc typing takes advantage of the fact that plasmids with similar replication or partition systems are incompatible within the same cell, categorizing plasmids based on compatibility [6]. MPF typing is based on genes encoding the MPF system, which consists of proteins that mediate contact and DNA exchange between donor and recipient cells during conjugation [4,7]. Finally, MOB typing classifies plasmids based on the relaxase gene, which is present in all transmissible plasmids [8][9][10]. Plasmids with different relaxase types are categorized as different MOB types, each of which possesses a distinct transmission mechanism that determines its taxonomic host range [4,11]. This variation among MOB types is critical in researching the spread of virulence traits, the emergence of antibiotic resistance, and the adaptation and evolution of bacteria. Moreover, MOB typing has been found to be effective for identifying novel mobilizable plasmids that were previously unassigned to any Rep or Inc types, and for investigating the mobilization characteristics of plasmids with similar mobilization systems [12,13]. Recently, many experimental and computational schemes have been devised for plasmid typing, as well as to explore the diversity and functionality of plasmids (Table 1). For example, plasmid taxonomic PCR (PlasTax-PCR) [14], PCR-based replicon typing (PBRT) [15], and degenerate primer MOB typing (DPMT) [12] are multiplex PCR methods for identifying plasmids with analogous replication or mobilization systems. PlasTrans, based on deep learning, identifies mobilizable metagenomic plasmid fragments [16]. Web servers such as PlasmidFinder [6], pMLST, and oriTfinder [17] were established based on collected marker gene databases and alignment-based methods to facilitate Rep, Inc, and MOB typing. COPLA [5], based on average nucleotide identity, performs taxonomic classification of complete plasmid genomes with an overall accuracy of 41%. For MOB typing, MOBscan [18] uses HMMER models to annotate relaxase genes and classify plasmids accordingly. MOB-suite [19,20] performs plasmid typing for plasmid assemblies: first, it uses the Mash distance to group plasmid assemblies into clusters; then, it uses marker gene databases to annotate them. Table 1. Experimental and computational schemes developed for plasmid classification. Metagenomic sequencing makes it possible to obtain all plasmid DNA from microbial communities at once, and a number of computational tools for identifying plasmid fragments from metagenomic data have been developed, such as PlasFlow [21], PlasmidSeeker [22], PlasClass [23], PPR-Meta [24], and PlasForest [25]. As DNA fragments of plasmids and bacteria are intermingled in metagenomic data [26], recognizing the transmission mechanisms and host ranges of plasmids can be challenging. To this end, it is crucial to annotate the MOB types of metagenomic plasmid fragments.
However, this is difficult when plasmid assembly fragments are incomplete and essential genes for annotation are lacking.Therefore, it is worthwhile to consider alternative methods.Given that plasmids of the same MOB type have similar transmission mechanisms and host ranges, their genomic signatures (e.g., GC content and codon usage) tend to also be alike, not only relaxase [4,27].In this context, neural networks, which have demonstrated strong performance in the classification and identification of biological sequences [28,29] could be useful.Furthermore, language models [30,31] derived from such neural networks have also showcased their impressive ability to characterize sequence features [32,33].In this methodology, short sequences of nucleotides (referred to as k-mers) or amino acids are analogous to "words", and the longer sequences of DNA or proteins are analogous to "sentences".Through the application of unsupervised learning on large datasets, each "word" is linked to a feature vector that captures its context, offering a more sophisticated analysis than the traditional k-mer frequency method, which simply counts the occurrence of nucleotide sequences without acknowledging their biochemical characteristics.Unlike the conventional method, this language model-based approach assesses sequences based on their contextual importance across different genetic environments, positioning contextually similar sequences close together in a multidimensional space.This technique provides deeper insights into the biochemical complexities of nucleotide sequences, thereby furnishing a more comprehensive understanding of an organism's functional biology [34].To characterize the features of plasmids within the same MOB type, we employed language models to perform the MOB annotation.In addition to the relaxase-coding gene, language models exhibit the ability to capture more biological features and associations within comparable mobilization systems, making it possible to perform MOB annotation for metagenomic plasmid assemblies.(3) Classification model ensemble and optimization.Several classification models, specifically designed for different lengths, were trained and integrated to predict fragments of different lengths. Evaluations against a test dataset demonstrated that MOBFinder is a powerful tool for MOB typing of plasmid fragments and bins.Its application to a cohort of patients with type II diabetes (T2D) revealed a potential correlation between some MOB types and the spread of antibiotic resistance genes among T2D patients.This suggests that MOBFinder is an effective data analysis approach for investigating plasmid-mediated horizontal gene transfer within microbial communities. 
The workflow of MOBFinder To annotate the MOB type of plasmid fragments in metagenomics, we designed MOBFinder (Figure 1). As MOB-suite [19,20] did not offer a quantitative likelihood score for its outcomes, and some plasmids would be classified into multiple MOB types (Figure S1), we constructed a benchmark dataset using a high-resolution MOB typing strategy for categorizing complete plasmid genomes (Figure 1B, 1C). Then, based on a language model and random forest, we designed an algorithm to perform MOB typing for PMFs (Figure 1D, 1E). (Figure 1 caption, partial: ... from the NCBI were subjected to MOB typing. (D) Those complete genomes were also used to train a 4-mer language model using the skip-gram algorithm, allowing each 4-mer to be represented by a 100-dimensional word vector. For a DNA fragment, the average word vector of all 4-mers on its sequence serves as the feature vector for that DNA. (E) We constructed simulated metagenomic contigs from the complete genomes that had been MOB typed as a benchmark and encoded these contigs into word vectors. These word vectors were then used to train a random forest algorithm. The trained model, with metagenomic DNA fragments as input, was used to predict the MOB type of the corresponding DNA fragment based on its word vectors.) MOB typing of complete plasmid genomes Traditionally, MOB typing of complete plasmid genomes has been a bioinformatics task based on the analysis of relaxase sequence similarity. The practice of annotating MOB types through BLAST similarity searches using representative relaxase sequences of different MOB types has gradually evolved into the standard method for MOB typing [4,19,20]. In this work, we constructed a benchmark dataset of simulated metagenomic contigs based on complete plasmid genomes with known MOB types. Previous studies have included a relatively small number of plasmids in their analyses. To further expand the MOB typing training dataset, we annotated the newly collected complete plasmid genomes for MOB typing according to relaxase information. However, employing these criteria, we observed that some relaxases were annotated as belonging to multiple MOB types. To eliminate ambiguous annotations and construct a more reliable dataset for training MOBFinder, we imposed the stricter criteria mentioned above. After the expansion of protein sequences, local relaxase databases were built using the 'makeblastdb' command for MOB typing of plasmid genomes.
Plasmid genomes were retrieved from the NCBI nucleotide database using the keywords 'complete' and 'plasmid,' and incomplete fragments were removed manually for further analysis.The accession list of these plasmids is provided in Supplementary Table 1.For each plasmid genome, coding sequences were extracted from the genebank file, and blastp [37] was employed to search for the best alignment of local relaxase databases.Here, we defined the mob_score to measure the likelihood of homology: where qcov_max and bitscore_max represent the query coverage and bitscore corresponding to the match with the highest bit score, respectively.To identify plasmid genomes encoding known relaxase families, we set a mob_score threshold of 0.5, which was established in conjunction with a minimum query coverage of 50% and a minimum bitscore of 100.To further enhance the reliability of our classification, we introduced an e-value cutoff, conservatively set at 1e-10, to complete the plasmid genome classification (Figure 1C).In instances where plasmid genomes yielded no blast results or exhibited an e-value exceeding 0.01, we categorized them as non-MOB. Word embeddings using a language model To characterize the features and patterns within each MOB category and use numerical word vectors to represent them, we utilized a skip-gram language model [30,31] to learn from plasmid genomes. Using a sliding window, the model calculated the likelihood between segmented words and outputted a probability distribution over the context words.The training steps were as follows (Figure 1D): (1) Word generation.Since DNA sequences are composed of different nucleotide characters, we used a k-mer sliding window to generate overlapping input words.For example, with k=4, 'ATCGCTGA' would be segmented into 'ATCG,' 'TCGC,' 'CGCT,' 'GCTG,' and 'CTGA'.In this step, unique words were generated. (2) Word encoding initialization.Each word was initially assigned a random vector. (3) Skip-gram model.We employed a standard skip-gram model as described in previous studies [30,31] to generate word vectors through the dna2vec module [31].A two-layer neural network was used to construct the skip-gram model.The initialized vectors were used as input, and the output was a probability distribution over the input words.Layer 1 was a hidden layer to convert the initialized vectors into a 100-dimensional word vector representation as predefined by Ng [31]. Layer 2 was used to compute and maximize the probability of the correct context words using the negative sampling function, with the size of context words set to 20 (10 words for upstream and downstream, respectively) as pre-set by Ng [31]. (4) Model training.For each input plasmid genome, we used an optimization algorithm to minimize the loss function.Then, using the default settings, we used backpropagation to update the neural network parameters (word vectors) for 10 epochs. (5) Word vector extraction.After the training process, the word vectors in the hidden layer were extracted to characterize the plasmid fragments. Benchmark dataset construction Because there are no real metagenomic data to serve as a benchmark, using simulated data as a benchmark dataset is a common approach when developing bioinformatics tools [16,24].Therefore, in the development of MOBFinder, we artificially generated simulated datasets through the following steps: (1) For classified plasmid genomes in each MOB category, we randomly split them at a proportion of 70% and 30% to construct the training and test datasets. 
(2) Training dataset.To predict plasmid fragments with different lengths, we generated contigs of different length ranges: 100-400 bp, 401-800 bp, 801-1200 bp, and 1201-1600 bp.For each MOB class in each length range, we randomly generated 90000 artificial contigs.Plasmid fragments longer than 1600 bp were segmented into shorter contigs and predicted using models designed for the corresponding lengths. (3) Test dataset.Because some plasmid fragments in real metagenomics datasets were much longer, we generated four length groups to assess the performance of MOBFinder: Group A with a length range of 801-1200 bp, Group B with a length range of 1201-1600 bp, Group C with a length range of 3000-4000 bp, and Group D with a length range of 5000-10000 bp.For each MOB class in these four groups, 500 fragments were randomly extracted. Classification algorithm To efficiently handle the training dataset and improve the robustness of MOBFinder, we employed random forest to train four predictive models using the training dataset.The detailed steps are as follows (Figure 1E): (1) Word representation calculation.For each contig in the training dataset, we used a 4-mer sliding window to generate overlapping words and transformed them into numerical word vectors using trained word embeddings.To characterize the underlying features and patterns of the input contigs, we summed all the word vectors to compute their average as input of random forest. (2) Classification model training.To improve the performance of MOBFinder, we trained four classification models on different lengths in the training dataset: 100-400 bp, 401-800 bp, 801-1200 bp, and 1201-1600 bp.The number of trees was set to 500 to generate predictive models. (3) Model ensemble.The four trained models were ensembled into MOBFinder to make more accurate predictions.For fragments shorter than 100 bp, we used a model designed for 100-400 bp to predict the MOB type.For those longer than 1600 bp, we segmented them into short contigs and made predictions using the corresponding model.For example, a fragment with a length of 4000 bp would be segmented into three contigs: two with a length of 1600 bp and one of 800 bp.After predicting fragments with the corresponding models, we aggregated and calculated the weighted average scores for each MOB class, and the MOB type with the highest score was selected as the final prediction result for the input fragment. (4) Plasmid bin classification.Metagenomic binning is an essential step in the reconstruction of genomes from individual microorganisms.Thus, we designed MOBFinder to perform MOB typing on both plasmid contigs and plasmid bins.If the input is a plasmid bin, MOBFinder predicts the likelihood of each MOB class for fragments within the bin.For each MOB category, MOBFinder aggregates the scores of each sequence within the bin and calculates the weighted average scores based on the sequence length.The MOB category with the maximum score is selected as the prediction result. 
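The sketch below strings together the word-embedding and classification steps described above. It is only an approximation of the published pipeline: MOBFinder trains its 4-mer vectors with dna2vec, whereas this example uses gensim's Word2Vec in skip-gram mode as a stand-in, and the sequences and labels are toy placeholders. The hyperparameters mirror the text (100-dimensional vectors, a context of 10 words on each side, negative sampling, 10 epochs, and a random forest with 500 trees).

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier

def kmers(seq, k=4):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy placeholders standing in for the benchmark data described in the text.
plasmid_sequences = ["ATCGCTGATTACGGCTAAGC", "GGCTTACGATCGATCGGTAA"]
training_contigs = ["ATCGCTGATTACG", "GGCTTACGATCGA"]
training_labels = ["MOBF", "non-MOB"]
test_fragment = "ATCGCTGATTAC"

corpus = [kmers(seq) for seq in plasmid_sequences]
w2v = Word2Vec(sentences=corpus, vector_size=100, window=10, sg=1,
               negative=5, min_count=1, epochs=10)

def fragment_vector(fragment):
    """Average the word vectors of all 4-mers in a fragment (its feature vector)."""
    vecs = [w2v.wv[w] for w in kmers(fragment) if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

# One random forest per length range in MOBFinder; a single model is shown here.
X = np.array([fragment_vector(c) for c in training_contigs])
rf = RandomForestClassifier(n_estimators=500).fit(X, training_labels)

scores = rf.predict_proba(fragment_vector(test_fragment).reshape(1, -1))[0]
predicted_mob_type = rf.classes_[int(np.argmax(scores))]
print(predicted_mob_type, scores)
```

In the full tool, fragments longer than 1600 bp are first segmented, each segment is scored with the model matching its length, and the per-class scores are combined as a weighted average before the highest-scoring MOB type is reported.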
Performance validation A test dataset was used to assess the performance of MOBFinder and compare it to MOB-suite and MOBscan.Because MOBscan can only predict MOB type using plasmid protein sequences rather than DNA sequences, we first annotated the proteins in the plasmid fragments of the test set using Prokka (RRID:SCR_014732) [38] and then used MOBscan to predict the MOB type based on the annotated proteins.We calculated overall accuracy, kappa, and run time by comparing the predicted classes and true classes.We used the online server of MOBscan to perform the MOB annotation, and the calculation of run time for MOBScan was confined to the duration spent on preprocessing with Prokka locally.The overall accuracy was the proportion of accurate predictions.The kappa value was calculated to quantify the performance of MOBFinder.An AUC value between 0.5 and 1 indicates that the model performs better than random chance, and a higher AUC value indicates better prediction capability. Annotation and analysis of T2D metagenomic data Metagenomic sequencing data (SRA045646) were retrieved from the NCBI short read archive (SRA) database to investigate whether the plasmids within different MOB classes were associated with antibiotic resistance enrichment in T2D patients, as suggested by previous studies [39,40].All metagenomic data were preprocessed using the same protocols.PRINSEQ (RRID:SCR_005454) [41] was used to remove low-quality reads and bowtie2 (RRID:SCR_016368) [42] was used to remove host reads by aligning them to the human GRCH38 reference genome downloaded from the ENSEMBL database.We excluded metagenomic samples that did not pass quality control.Because the abundance of plasmids in metagenomes was much lower than that of bacteria, we only retained samples with more than 10,000,000 paired-end reads for downstream analysis (Supplementary Table 2). To improve the efficiency and accuracy of assembly, we used MEGAHIT (RRID:SCR_018551) [43] to generate metagenomic contigs.PPR-Meta (RRID:SCR_016915) [24] was utilized to identify and extract plasmid fragments from the assembled fragments while filtering out bacteria and phage sequences.COCACOLA [44] was employed to cluster plasmid fragments into bins based on sequence similarity and composition.This allowed us to investigate the plasmid fragments from same originate and enabled better annotation and analysis of their functions. MOBFinder was applied to annotate the MOB types in each plasmid bin.The average fragments per kilobase per million of each plasmid bin was calculated using bowtie2 to represent its abundance. Next, we analyzed the significance of differences in plasmid bins and various MOB types between healthy and T2D groups using the Wilcoxon rank-sum test.The calculation of p values was adjusted for multiple comparisons using the Benjamini-Hochberg method (denoted as p.adjust).ABRicate (RRID:SCR_021093) [45] was utilized to annotate antibiotic resistance genes (identity>50% and qcov>50%) in each plasmid bin, based on four antibiotic resistance gene databases [46][47][48][49].The Tukey's Honest Significant Difference test was performed to compare the identified resistance genes among different MOB classes.All statistical analyses were conducted using R. 
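The group comparison described above was performed in R; the sketch below reproduces the same two steps, a Wilcoxon rank-sum test followed by Benjamini-Hochberg adjustment, in Python with SciPy and statsmodels. The abundance values are toy placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import ranksums
from statsmodels.stats.multitest import multipletests

# Toy per-sample abundances for each MOB type: (T2D group, control group).
abundances = {
    "MOBF": (np.array([3.1, 2.8, 4.0, 3.5]), np.array([1.2, 0.9, 1.5, 1.1])),
    "MOBQ": (np.array([5.2, 4.7, 6.1, 5.5]), np.array([2.3, 2.0, 2.8, 2.4])),
}

names, pvals = [], []
for name, (t2d, control) in abundances.items():
    stat, p = ranksums(t2d, control)   # Wilcoxon rank-sum test
    names.append(name)
    pvals.append(p)

# Benjamini-Hochberg correction for multiple comparisons (p.adjust in the text).
reject, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
for name, p, q in zip(names, pvals, p_adj):
    print(f"{name}: p={p:.3g}, p.adjust={q:.3g}")
```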
MOB typing of plasmid genomes To construct the benchmark datasets, we obtained 90,395 complete plasmid genomes and categorized them into 11 MOB categories using BLAST (Table 2). Overall performance of MOBFinder We evaluated the overall performance of MOBFinder in terms of accuracy, kappa, and run time, and compared the tool to MOBscan and MOB-suite. MOBscan did not perform well, achieving low accuracy and kappa values across sequences of varying lengths, while MOB-suite exhibited marginally better performance than MOBscan when handling longer sequences (Figure 3A, 3B). In comparison, the accuracy of MOBFinder ranged from 70% to 77%, an improvement of at least 59% over MOB-suite (Figure 3A). The kappa of MOBFinder ranged between 67% and 75% and was approximately 65% higher than that of MOB-suite (Figure 3B). Moreover, MOBFinder exhibited a shorter run time on the test dataset, with a more gradual increase as sequence length grew (Figure 3C). In general, these results indicate that MOBFinder greatly outperformed the other tools, and its accuracy and consistency improved as the sequence length increased. Evaluation by MOB category Next, to evaluate the discrimination ability of MOBFinder for each MOB type, we calculated the balanced accuracy, harmonic mean, and F1-score using the test dataset (Figure 3D). MOBFinder demonstrated the highest performance for MOBB and MOBM, while its ability to identify non-MOB types was comparatively low. For MOBM, the balanced accuracy and harmonic mean reached up to 99%, and the F1-score exceeded 96% for all length groups. For non-MOB, the balanced accuracy was 65%, the harmonic mean was 49%, and the F1-score was 40%. Compared to MOB-suite, MOBFinder exhibited much better performance in predicting all MOB classes. Even for non-MOB, it showed an approximate 13% improvement over the other tools in terms of balanced accuracy, 34% in terms of harmonic mean, and 24% in terms of F1-score. In AUC analyses (Figure 4), all values were greater than 0.8, indicating that the tool effectively distinguished between positive and negative samples in each MOB class. In fact, most values were higher than 0.9, except for MOBT and non-MOB. The performance differences by MOB type might be attributable to the differences in host ranges and sequence features among types. Additionally, the imbalance in the training dataset for each MOB type may also be a primary factor contributing to the performance disparities. Application to T2D metagenomic data In a previous study, enrichment analysis of fecal samples identified antibiotic resistance pathways in patients with T2D [40]. The precise mechanism of this enrichment, however, remained elusive.
We used MOBFinder to analyze real T2D metagenomic data [39].After preprocessing and assembly, 2,217,064 metagenomic fragments were generated, and plasmid assemblies were identified using PPR-Meta.Subsequently, the plasmid fragments were clustered into 55 bins and annotated using MOBFinder.By employing MOBFinder, we assigned 2 bins to the MOBF class, 8 bins to MOBL, 17 bins to MOBQ, and identified 28 bins as non-MOB (Figure 5A).Furthermore, we detected 15 bins that exhibited significant differences between the T2D group and a control group.Among them, 1 bin was classified as MOBF, 2 as MOBL, 5 as MOBQ, and 7 as non-MOB (Figure S2).Among above MOB types, MOBQ contains the highest number of bins enriched in T2D, while MOBF is widely present in Escherichia and Klebsiella (Figure 2B), and that some strains of Klebsiella are resistant to multiple antibiotics, including carbapenems [53], these two MOB types might contribute to antibiotic resistance in T2D patients.Indeed, when we compared the average abundance of each MOB type between the T2D group and the control group (Figure 5B), the abundances of MOBF and MOBQ were significantly greater in the T2D group.In addition, these two MOB types can be transferred among multiple bacterial species.This suggests that an increase in these two MOB types could potentially raise the risk of bacterial infection among individuals with T2D.Subsequently, we used four databases [46][47][48][49] to detect drug resistance genes in four MOB types (Figure 6).The number of such genes was significantly higher in MOBF than in the other three MOB types.This suggests that MOBF plasmids may carry more drug resistance genes than the other MOB types.Furthermore, the increase in MOBF and MOBQ plasmids could result in more bacteria acquiring drug resistance genes, thereby leading to more antibiotic resistance pathways in T2D patients.In summary, our results demonstrate the utility of MOBFinder for annotating plasmid fragments in metagenomes, uncovering the potential mechanisms underlying the antibiotic resistance enrichment in metagenomic analysis. Discussion We developed MOBFinder based on a language model and the random forest algorithm to classify plasmid fragments and bins from metagenomics data into MOB types.First, using the relaxasealignment method, plasmid genomes were classified into distinct MOB categories.Analyses revealed substantial differences in parameters such as the number, average length, and GC content of plasmid genomes across MOB types.Additionally, there were noteworthy differences in the host ranges among different MOB classes.These results suggest the potential of utilizing sequence features from different MOB types for PMF MOB typing.To characterize the plasmids within each MOB type, we used the skip-gram model to generate word vectors.Our tool demonstrated superior overall performance compared to other tools.Specifically, for each MOB category, MOBFinder exhibited significant improvements in balanced accuracy, harmonic mean, and F1-score, with values reaching up to 99% for the first two measures in the MOBM category. Traditionally, k-mer frequency models and one-hot encoding have commonly been employed to digitize biological sequences, extensively applied across various machine learning algorithms [54]. 
However, both models simply mark or count the frequency of various characters in sequences, failing to reflect the biological significance underlying each character.These models may also encounter dimensionality issues [54].For instance, in the k-mer model, if k is set to 8, the dimensionality of the k-mer vector of each DNA sequence becomes 4 8 , which is problematic in metagenomics where most fragment lengths do not reach this magnitude.This would result in significant noise in the feature vector and cause overfitting.Similarly, in the one-hot model, for a sequence of length L using 4-mers as the base unit, it would require L one-hot vectors each with a dimensionality of 4 4 .In such instances, if the dataset for training is not sufficiently large, this representation method could also lead to overfitting due to high dimensionality.In contrast, word vector models offer a superior solution to these problems.Such models initially perform a random initialization of vectors for each "word."Taking the skip-gram algorithm utilized in this study as an example, the dimension of a random vector can be 1-of-n, where n represents the size of the vocabulary [30].Following unsupervised pre-training on large datasets, the algorithm maps characters with similar contexts to similar feature spaces.The dimensions of the coordinates (i.e., the word vectors) of these feature spaces will be lower than those of the initial random vectors.Thus, through unsupervised pre-training on large datasets, language models can compress highdimensional initial vectors into lower-dimensional word vectors (e.g., MOBFinder's word vectors have a dimensionality of 100), enabling the feature vectors to contain more character information while effectively avoiding dimensionality issues during supervised training. In a metagenomic sequences classification task, 4-mer is widely used as the basic unit in various bioinformatics tools [55], thus MOBFinder takes this as a "word."To assess the impact of training word vectors with different k-mer lengths on performance, we compared models with k-mer lengths of 2, 3, 4, 5, 6, 7, and 8 (Figure S3).We observed lower overall accuracy and kappa values for k=2. At k=4, the balanced accuracy, harmonic mean, F1-score, and AUC values stabilized across different MOB types.Subsequently, as the k-mer length increased, there was no significant improvement in accuracy or other metrics, while the run time gradually increased.Therefore, we chose a k-mer length of 4 for training word vectors and developing MOBFinder. Interestingly, in an analysis of T2D metagenomic sequencing data [39], we noted a significant increase in MOBF and MOBQ type plasmids in T2D patients.Moreover, we found more drug resistance genes in the MOBF class, whose dominant hosts are Klebsiella and Escherichia, which are associated with the spread of multidrug resistance.Although previous analyses of gut metagenomic data from patients with T2D have reported enrichment of drug resistance pathways [40], our results suggest a potential reason for it: the increased abundance of MOBF and MOBQ type plasmids in the guts of individuals with T2D may disseminate more antibiotic resistance genes, resulting in such enrichment. 
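To put concrete numbers on the dimensionality argument made earlier in this discussion, the tiny calculation below contrasts an 8-mer frequency vector and a per-position 4-mer one-hot encoding with the 100-dimensional word vectors used by MOBFinder; the fragment length of 1,000 bp is an arbitrary example.

```python
L = 1000                      # example fragment length in bp
kmer_freq_dims = 4 ** 8       # 8-mer frequency vector: 65,536 dimensions
one_hot_dims = L * 4 ** 4     # L one-hot vectors of 4^4 = 256 entries: 256,000 values
word_vector_dims = 100        # averaged skip-gram embedding used by MOBFinder

print(kmer_freq_dims, one_hot_dims, word_vector_dims)  # 65536 256000 100
```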
At present, databases contain a large amount of human metagenomic data derived from second-generation sequencing. However, understanding of the functions of numerous disease-linked microbial sequences remains limited, attributable to the incomplete nature of metagenomic fragments. The development of MOBFinder enables MOB annotation for plasmid fragments from metagenomics data and provides a powerful tool for investigating the transmission mechanisms of plasmid-mediated antibiotic resistance genes and virulence factors. Conclusions In summary, MOBFinder is a tool for MOB typing of plasmid fragments and bins from metagenomic data. Analyses of classified plasmid genomes unveiled notable differences in sequence characteristics and host ranges across MOB types. Hence, we employed a language model to extract the sequence features specific to each MOB type and represented them using word vectors. Additionally, we boosted prediction accuracy by training and integrating several random forest classification models. MOBFinder surpassed other tools in performance tests and successfully detected an increase in certain MOB type plasmids in T2D patients. Importantly, these MOB type plasmids harbor potential drug-resistance genes, thus offering an explanation for the observed antibiotic resistance in T2D individuals. This suggests that MOBFinder could potentially aid the formulation of specific medications to curb drug resistance transmission. We anticipate that MOBFinder will be a powerful tool for the analysis of plasmid-mediated transmission. Also, all the related scripts and data have been submitted to the GigaDB server. We apologize for the confusion between the Wilcoxon rank-sum test and the Wilcoxon signed-rank test. We have corrected this mistake in the revised manuscript. In Lines 368 and 472 of the revised manuscript, the statistical method has been corrected to "the Wilcoxon rank-sum test". Since MOBscan can only predict the MOB type with plasmid proteins, we annotated the plasmids in the test set with Prokka, then manually submitted them to the MOBscan website for MOB type annotation. Given that MOBScan operates as an online tool and cannot be executed locally, the calculation of MOBScan's run time was confined to the duration spent on preprocessing with Prokka locally." (Please refer to Lines 313-319 in the revised manuscript). -> Actually, it can be executed locally using the scripts included in https://github.com/santirdnd/COPLA/. It may not be necessary to run MOBscan locally (it may be okay that they manually submitted them to the MOBscan website), but I'll inform you regardless. We are very grateful to Reviewer 1 for reminding us that MOBscan can be run locally. In the revised manuscript, we have removed the statement "MOBscan can only predict the MOB type with plasmid proteins" and revised the corresponding description to "We used the online server of MOBscan to perform the MOB annotation, and…" (Please refer to Lines 312-313 in the revised manuscript).
Line 418-421 In the comparison, it was observed that MOBscan did not perform well, achieving low accuracy and kappa values across sequences of varying lengths, while MOB-suite exhibited marginally better performance than MOBscan when handling sequences of greater length (Figure 3A, 3B).(Please refer to Line 418-421 in the revised manuscript). -> Do the authors' results contradict the following general expectation?MOB-typer utilizes BLAST, whereas MOBscan utilizes hmmscan, and therefore, MOBscan is expected to retrieve more distantly related proteins than MOB-typer. We would like to thank Reviewer 1 for the discussion regarding BLAST and HMMscan.Firstly, we acknowledge that for more distantly related proteins, sequence searching based on HMM exhibits higher sensitivity than BLAST.However, we believe that our results do not contradict this theory.There are two reasons that might explain why, in this manuscript, the performance of tools based on HMM appears slightly inferior to those based on BLAST. (1) The number of reference sequences can impact the performance of the tools.In MOB-suite, a large number of reference sequences are used for BLAST sequence alignment, whereas in MOBscan, the number of relaxase sequences used to profile HMM files for some MOB types is not very large.For instance, for the MOBF type, MOBscan utilizes 146 relaxase sequences for configuring HMM files, while MOBsuite employs 396 sequences to construct the BLAST database.The difference in the number of reference sequences could potentially lead to MOBscan's performance being slightly inferior to that of MOB-suite. (2) The aim of this study is not to design new methods for identifying novel relaxases. In our test data, the relaxases all come from sequenced plasmids, so there is some homology with the relaxases in the database.When the query sequence and the database sequence have high homology, the performance of BLAST may not necessarily be worse than HMM.In fact, existing studies have shown that methods based on BLAST can sometimes outperform those based on HMM when the homology is high (Ref: PMID: 25140992). 4. MOB-suit and MOBscan are represented by blue lines, orange lines and gray lines respectively. -> should be "MOB-suite" We thank Reviewer 1 for the careful checking of the manuscript.In Line 427-428 of the revised manuscript, we have revised "MOB-suit" to "MOB-suite". 5. I suggest receiving English language editing before publishing the paper. "For the MOB typing, MOBscan [18] uses the HMMER model to annotated the relaxases and further perform MOB typing."-> should be "For the MOB typing, MOBscan [18] uses the HMMER model to annotate the relaxases and further perform MOB typing." We are sorry for the grammatical error.In Line 109 of the revised manuscript, we have revised the sentence as "For the MOB typing, MOBscan [18] uses the HMMER model to annotate relaxase genes and classify plasmids accordingly". In addition, we have used the copy editing service of the GigaScience journal to refine the language through the whole manuscript.Here, we would like to express our sincere gratitude to Reviewer 2 for the positive comments on our work, describing our revised manuscript: "The manuscript has been improved substantially and all the initial concerns have been addressed satisfactorily." To Reviewer #2: We are very thankful for Reviewer 2's suggestions during the first revision process, which greatly enhanced the clarity, depth, and academic value of our paper. 
We hope that the above revisions have clarified all the points raised by the two reviewers; having given a point-by-point response to all the concerns, we hereby submit our revised manuscript to the journal. We thank you for your kind consideration. Note: The initial version of the article has been made public on bioRxiv (https://doi.org/10.1101/2023.12.06.570414), and beyond that, it has not been published in any other traditional journals. Microbiome Medicine Center, Department of Laboratory Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China. Email: <EMAIL_ADDRESS>. Thus, we presented MOBFinder, a tool for annotating MOB types from plasmid metagenomic fragments (PMFs). MOBFinder can process single or multiple plasmid DNA sequences, and provides predicted MOB types for each input fragment, including MOBB, MOBC, MOBF, MOBH, MOBL, MOBM, MOBP, MOBQ, MOBT, MOBV and non-MOB. Moreover, it provides the option to annotate plasmid bins from metagenomics data. An overview of this work is shown in Figure 1A, and the development of MOBFinder involved the following steps: (1) Benchmark dataset construction. Plasmid complete genomes obtained from the National Center for Biotechnology Information (NCBI) were classified into different MOB types based on relaxase databases. Then, to simulate plasmid fragments in metagenomic data, an artificial benchmark dataset of varying lengths was generated. (2) Word embeddings. Numerical word vectors were generated using skip-gram to characterize the sequence features of different MOB categories. Figure 1. Flowchart of the technical approach utilized in this study. (A) General workflow of the development and testing of MOBFinder. (B) Using plasmid relaxases with known MOB types as reference sequences, we developed a database of relaxases from the non-redundant (NR) database representing different MOB types. (C) Utilizing the relaxase database, complete plasmid genomes obtained from the NCBI were subjected to MOB typing. The kappa value (a) was calculated to assess the overall consistency between the predictions and the true classes, taking into account the possibility of random prediction. Po represented the observed accuracy [Po = (A11 + A22 + ... + Ann) / N], where A11, A22, and Ann represented the values on the diagonal of the confusion matrix and n represented the number of MOB categories. N represented the total number of samples. Pe represented the expected accuracy [Pe = (E11 + E22 + ... + Enn) / N^2], where E11, E22, and Enn were the expected values in each cell of the confusion matrix, n was the number of MOB classes, and N was the total number of samples. The run time was recorded using the command 'time' in Linux.
kappa = (Po − Pe) / (1 − Pe) (a); balanced accuracy = (TPR + TNR) / 2 (b); harmonic mean = 2 × Sn × Sp / (Sn + Sp) (c); F1-score = 2 × precision × recall / (precision + recall) (d). For each MOB category, we also calculated the balanced accuracy (b), harmonic mean (c) and F1-score (d). Considering the class imbalance within the training dataset, balanced accuracy was used to measure the average accuracy of each MOB category, where TPR was the true positive rate [TPR = true positives / (true positives + false negatives)] and TNR was the true negative rate [TNR = true negatives / (true negatives + false positives)]. The harmonic mean provided an overall evaluation of the model's performance, where Sn and Sp represented sensitivity [Sn = true positives / (true positives + false negatives)] and specificity [Sp = true negatives / (true negatives + false positives)], respectively. The F1-score combined precision and recall, providing a balanced measure of the model's performance, where precision was the number of correct positive predictions out of all positive predictions [precision = true positives / (true positives + false positives)] and recall was the number of correct positive predictions out of all actual positives [recall = true positives / (true positives + false negatives)]. A receiver operating characteristic (ROC) curve was used to visualize the performance of MOBFinder in predicting each MOB category, where the x-axis and y-axis were the false positive rate (FPR) and true positive rate (TPR). Plots closer to the top left indicate higher TPR and lower FPR, which means better performance. For each MOB class, the area under the curve (AUC) was calculated from the ROC curve. Figure 2. Benchmark dataset construction using a high-resolution strategy. Figure 3. Overall performance of MOBFinder and comparison to MOB-suite and MOBscan. Evaluation and comparison in terms of (A) accuracy, (B) kappa, and (C) run time. The four fragment length groups in the test dataset were Group A (801-1200 bp), Group B (1201-1600 bp), Group C (3000-4000 bp), and Group D (5000-10000 bp). (D) For each MOB type, the balanced accuracy, harmonic mean, and F1-score were used to assess the performance of MOBFinder and compared to MOB-suite and MOBscan. Since MOB-suite and MOBscan do not include the prediction of MOBL, only the results of MOBL from MOBFinder are provided. MOBFinder, MOB-suite and MOBscan are represented by blue, orange and gray lines, respectively. Figure 4. ROC curves and AUC values for MOBFinder. The curves were plotted using the output scores of MOBFinder, and the AUC values were calculated to quantify the performance of the tool for each MOB class. Figure 5. Annotation of T2D-related plasmid bins using MOBFinder. (A) Heatmap of plasmid bins between T2D patients and controls. Each column represents a sample, and each row represents a plasmid bin. (B) Comparison of the abundance of the four identified MOB types between T2D patients and controls. The p-value was calculated using the Wilcoxon rank-sum test, adjusted using the Benjamini-Hochberg method for multiple comparisons (*p.adjust < 0.05, **p.adjust < 0.01, and ***p.adjust < 0.001). Figure 6. Comparison of resistance genes among different MOB types. Four databases were used to identify antibiotic resistance genes within each MOB type, and the p-value was calculated using Tukey's Honest Significant Difference test. The two groups without significance markings indicate no statistical difference (*p-value < 0.05, **p-value < 0.01 and ***p-value < 0.001).
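For completeness, the sketch below computes Cohen's kappa and the per-class metrics defined in equations (a)-(d) above from a confusion matrix; the matrix itself is a toy placeholder rather than a result from the study.

```python
import numpy as np

def kappa(cm):
    """Equation (a): kappa = (Po - Pe) / (1 - Pe) for a confusion matrix cm."""
    n = cm.sum()
    po = np.trace(cm) / n
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2
    return (po - pe) / (1 - pe)

def per_class_metrics(cm, i):
    """Equations (b)-(d) for class i (rows of cm: true classes, columns: predictions)."""
    tp = cm[i, i]
    fn = cm[i].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    tpr = sn = recall = tp / (tp + fn)
    tnr = sp = tn / (tn + fp)
    precision = tp / (tp + fp)
    return ((tpr + tnr) / 2,                                # balanced accuracy (b)
            2 * sn * sp / (sn + sp),                        # harmonic mean (c)
            2 * precision * recall / (precision + recall))  # F1-score (d)

cm = np.array([[45, 3, 2], [4, 40, 6], [5, 7, 38]])  # toy 3-class confusion matrix
print(kappa(cm), per_class_metrics(cm, 0))
```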
According to the journal requirement, we have registered the tool in the bio.tools and SciCrunch.org databases. In Lines 585-586 of the "Availability of Source Code and Requirements" section of the revised manuscript, we have added the corresponding information. Comments: I would like to commend you on the revisions made to your manuscript following the initial round of reviews. It is evident that considerable effort has been put into addressing the concerns and suggestions raised during the first review. The changes and additions you have implemented have significantly enhanced the clarity, depth, and scholarly value of your paper. The manuscript has been improved substantially and all the initial concerns have been addressed satisfactorily. I support the publication of this manuscript in GigaScience. Table 2. Number, average length, and GC content of plasmid genomes for each MOB type. We removed 22,470 plasmid genomes that were potentially classified into more than one MOB class, leaving 67,925 classified genomes for the training and optimization of MOBFinder (Figure 2A). Our analysis revealed significant differences in the number, average length, and GC content of plasmid genomes among MOB types. Notably, non-MOB types included the most genomes and the longest average length, whereas MOBB and MOBM had the fewest plasmid genomes and the shortest average length, respectively. In terms of GC content, MOBL had the lowest and MOBQ the highest values. Moreover, plasmids of different MOB types exhibited diverse host ranges at the genus level (Figure 2B). MOBB was predominantly found in Bacteroides, Hymenobacter, Parabacteroides, Phocaeicola and Spirosoma. In particular, Phocaeicola has been detected in the human gut and acquired the gene for porphyran degradation through horizontal gene transfer [50]. MOBC, MOBF, MOBH, and MOBP were all found in Escherichia and Klebsiella, and Klebsiella is a multidrug-resistant bacterium that has demonstrated resistance to multiple antibiotics [51]. MOBL, MOBT, and MOBV were mainly discovered in Bacillus and Enterococcus. Almost all MOBM type plasmid genomes were present in Clostridium and Enterocloster, and some species in Clostridium can cause various diseases [52]. MOBQ demonstrated a broader host range, including genera such as Acinetobacter and Agrobacterium. Table 1. Experimental and computational schemes developed for plasmid classification.
9,294.6
2024-01-02T00:00:00.000
[ "Computer Science", "Biology" ]
Processing Chain for Localization of Magnetoelectric Sensors in Real Time The knowledge of the exact position and orientation of a sensor with respect to a source (distribution) is essential for the correct solution of inverse problems. Especially when measuring with magnetic field sensors, the positions and orientations of the sensors are not always fixed during measurements. In this study, we present a processing chain for the localization of magnetic field sensors in real time. This includes preprocessing steps, such as equalizing and matched filtering, an iterative localization approach, and postprocessing steps for smoothing the localization outcomes over time. We show the efficiency of this localization pipeline using an exchange bias magnetoelectric sensor. For the proof of principle, the potential of the proposed algorithm performing the localization in the two-dimensional space is investigated. Nevertheless, the algorithm can be easily extended to the three-dimensional space. Using the proposed pipeline, we achieve average localization errors between 1.12 cm and 6.90 cm in a localization area of size 50cm×50cm. Introduction For the correct solution of inverse problems, such as source reconstruction of biomedical sources, it is essential to know the exact position and orientation of the measuring sensors with respect to the source besides measuring the biomedical signals. Especially in magnetic measurements the sensors do not necessarily have a fixed position and orientation. Thus, a determination of the position and orientation at the beginning of a measurement is not sufficient. Much more desirable is a continuous estimation of the sensor's position and orientation simultaneously with the measurement [1,2]. Magnetic tracking systems are used in many applications, e.g., in indoor positioning systems [3,4] or to locate medical devices inside the body [5,6]. Moreover, magnetic localization approaches are used to determine the position of the subject relative to the sensor array in biomagnetic measurements. A procedure for determining the subject relative to the measuring sensor array, either once at the beginning or also simultaneously with the measurement, was presented in [7]. In [1,2] a method for determining the positions of the individual sensors in a flexible on-scalp MEG system relative to the subject was investigated, which can also be applied during measurement. Until now, mainly SQUIDs (Super Conducting Quantum Interference Devices) [8] and recently OPMs (Optically Pumped Magnetometers) [9,10] are used for the measurement of biomagnetic signals. Unfortunately, these sensors require a magnetically well shielded environment and are therefore inconvenient in operation. Magnetoelectric sensors, on the other hand, do not require any shielding, no expensive cooling system, and are also very small in size, which makes them ideal for array applications. The sensors are composed of a magnetostrictive and a piezoelectric layer and use the resonant structure of a cantilever [11]. Detection limits in the sub-nT regime have been reached recently [12][13][14][15] using modulation techniques such as the ∆E-effect [16] for the detection of low-frequency magnetic fields. Figure 1. General overview of a medical system operating with magnetic sensors. The measurements are performed simultaneously with localizing the sensors. After transforming the signals into the digital domain, the signals are processed and analyzed. 
Since the analysis of the measured magnetic signals is not in the focus of this contribution, the corresponding box is depicted in gray. In Section 2, the magnetoelectric sensor used in this paper will be presented and characterized. After explaining the so-called forward problem in Section 3, the real-time localization approach will be presented in Section 4. The presented localization processing chain will be verified by measurements presented in Section 5. The paper closes with a conclusion and an outlook in Section 6. Magnetoelectric Sensor For the proof of principle an exchange bias ∆E-effect sensor as depicted in Figure 2a will be used in this contribution. A sensor of the same type has already been used in a previous work [15]. The sensor is based on a polysilicon cantilever with a size of 1 mm width, 3 mm length, and 50 µm thickness. The cantilever is covered by a 4 µm thick magnetic multilayer (20 × Ta/Cu/MnIr/FeCoSiB) and a 2 µm thick piezoelectric layer (AlN). Further details about the fabrication process of the sensor can be found in [15]. The magnetic multilayer consists of ferromagnetic and antiferromagnetic layers, which ensure the self-biasing of the sensor and thus lead to a shift of the magnetization curve of the sensor [18]. Hence, the sensor can be operated without applying an external bias field, which is especially favorable regarding array applications. The sensor is connected to a low-noise JFET charge amplifier [19] and placed on a printed circuit board. The whole sensor system is encapsulated in a 2.1 mm thick brass cylinder for electrical shielding. As shown in [15], the localization of the magnetoelectric sensor can be performed simultaneously with a measurement without loss of information or degradation of the signals. The first bending mode was used to localize the sensor, while an artificial heart signal was measured in the second mode using the ∆E-effect. Hence, also in this contribution only frequencies around the first bending mode will be used for the transmission of the localization signals. The amplitude and phase response around the first bending mode of the magnetoelectric sensor used are shown in Figure 2c. The characterization measurements have been performed in a magnetically, electrically, and acoustically shielded environment [11]. The magnitude and phase response of the sensor have been measured applying a magnetic field of b ac = 1 µT on the sensor's long axis. The performance of the sensor can be determined as described in [20]. The sensor has a resonance frequency of f r = 7.712 kHz and a −3 dB bandwidth of bw -3dB = 10.2 Hz. Since the brass cylinder acts as a low-pass filter with a cut-off frequency of approximately f c ≈ 1.5 kHz [15], the sensor's performance will be improved when removing the brass cylinder. Nevertheless, the encapsulation is necessary for electrical shielding and acts as a mechanical protection. Moreover, the limit-of-detection in the first bending mode with brass cylinder is still sufficient, because the coils can simply emit higher field amplitudes. The maximum sensitivity of the sensor is reached when a magnetic field is applied on the sensitive axis of the sensor. However, the sensitive axis of the sensor is not necessarily equal to the long axis of the sensor. There can be a tilt γ between these two axes [15,21], which is visualized in Figure 2b. (c) Figure 2. (a) Exchange bias ∆E-effect sensor used in this study. The sensor is based on a cantilever of size 3 mm × 1 mm. 
The cantilever is placed on a printed circuit board and connected to a low-noise JFET charge amplifier [19]. The sensor is encapsulated by a brass cylinder for electrical shielding and mechanical protection. (b) Visualization of the relationship between the sensitive and the long axis of the sensor. (c) Magnitude and phase response of the sensor in the first bending mode applying a magnetic field of b ac = 1 µT. The sensor has a resonance frequency of f r = 7.712 kHz and a bandwidth of bw -3dB = 10.2 Hz. Forward Problem For the localization of the magnetoelectric sensor, coils are placed outside the localization area as shown in Figure 3. If the distance between the sensor and the coil is large enough, the magnetic field of the coil i at the sensor position r s (t) can be approximated by the field of a magnetic dipole [22]: Here, µ 0 is the permeability of vacuum, m c,i (t) the magnetic dipole moment of the coil i, and r cs,i (t) = r s (t) − r c,i the distance vector between the sensor at position r s (t) and the coil i at position r c,i . The superscript T denotes the transpose of the vector. The positions of the coils r c,i are fixed during the measurement and therefore time independent. In this study, N c = 6 coils have been used. The coils have an effective diameter of 2.6 cm and consist of about 350 turns of enameled copper wire with a wire cross section of 0.13 mm 2 . The impedances of the six coils, separated into magnitude and phase, are shown in Figure 4. The signal measured by the sensor can be described as a voltage at the output of the sensor system. At the location of the sensor a superposition of the magnetic fields of the coils is present. Due to the directional characteristic of the sensor d s (t), only a part of the applied magnetic field is picked up. The conversion of magnetic field into voltage by the sensor system (including the charge amplifier) is described by the impulse response h s (t). Equation (2) is valid at least for the frequencies around the first bending mode [15]. For simplification, no noise sources are considered here. (a) (b) Figure 3. Real (a) and schematic (b) measurement setup for the localization of magnetoelectric sensors. The coils are placed outside of the localization area and transmit orthogonal signals, which are measured by the sensor. The localization area (box bounded by white stripes in (a)) is of size 50 cm × 50 cm. Figure 4. Coil impedances separated into magnitude and phase. In (a) the whole spectrum from 100 Hz up to 1 MHz is shown, so that the resonance of the coils can be seen. In (b) the frequency range is scaled to the frequency range of the excitation signals. Localization Processing Chain For the estimation of the sensor's position and orientation an inverse problem must be solved. The processing chain for solving this inverse problem is shown in Figure 5. For this purpose, the localization area is first divided into a discrete grid containing N p different position-orientation-pairs P = [p 1 , . . . , p j , . . . , p N p ], with p j = [ r T p,j , d T p,j ] T consisting of a position vector r p,j (containing x, y, and z components) and an orientation vector d p,j (directivity described by roll ϕ, pitch θ, and yaw ψ). The lead-field matrix describes the forward problem for the defined position-orientation-pairs in P. That means, the lead-field matrix entry of row i and column j is defined as and thus describes the influence of coil i on the sensor, if the sensor would have the position and orientation described by p j . 
The distance vector is defined as r cp,ij = r p,j − r c,i and the orientation vector d p,j can be described by the angles θ and ψ (ϕ is always zero here) using [23]: Equation (4) is a reduced magnetic dipole equation. The prefactor of Equation (1) is neglected and the magnetic dipole moment of the coil is reduced to the orientation of the coil. Signal Generation and Equalizer To separate the mixed signals received by the sensor, the signals of the coils must be orthogonal. Two signals x i (n) and x j (n) are orthogonal for n ∈ {L 1 , . . . , L 2 }, if the following condition is fulfilled [24]. Extending this equation to the constant E i being 1, the signals are called orthonormal [24]. This is necessary to extract the individual coil amplitudes from the sensor signal and make them comparable. Different approaches can be used for the generation of orthogonal signals, e.g., using a TDMA (Time Division Multiple Access), an FDMA (Frequency Division Multiple Access) or a CDMA (Code Division Multiple Access) approach [25]. Due to the small bandwidth of the sensor, a TDMA approach is used in this contribution. Thus, the excitation signals are cosine signals at the resonance of the magnetoelectric sensor [15]. The signals are weighted with a Hann window w(n) [26] of length L sig , so that a smoothed in-and outfading of the signals is ensured. Additionally, the condition L mf ≥ L sig must be fulfilled. If L mf > L sig , there is a pausing time between two consecutive coil signals. This is important when considering the impulse response of the sensor. The excitation signals are repeated every L r = N c L mf samples and transmitted by the coils after D/A conversion. It should be noted that L r = L mf if an FDMA or a CDMA approach is chosen, because the signals are transmitted simultaneously by all coils. As can be seen from Equation (1) the magnetic field of a coil is proportional to the driving current. Since the output of the D/A converter is proportional to a voltage, the excitation signals x ex (n) = [x ex,1 (n), . . . , x ex,N c (n)] T are linearly deformed in amplitude and phase. This can be described by the impulse response of the coil h c,i (n), denoting the relationship between the voltage and the current of the coil i. Additionally, the signals are modified by the impulse response of the magnetoelectric sensor h s (n), as can be seen from Equation (2). Thus, the signals x ex (n) must be equalized either prior to the deformation due to the coil and sensor impulse response or before they are forwarded to the matched filter. In this contribution, the matched filter impulse responses are adapted to match with the modified transmitted signals, so that they are again comparable to the coil signals measured by the sensor. Each coil excitation signal is adjusted individually by the equalizer withĥ c,i (n) andĥ s (n) denoting the approximated impulse responses of the coil i and the magnetoelectric sensor, respectively. The factorĝ c,i describes the influence of other components in the measurement setup. This includes for example different gains of the individual coil amplifier channels and different conversion factors of the coils from current to magnetic field. This is, e.g., due to variances in the number of windings. These values are approximated by a constant for the considered frequency range. The values for the six coils and amplifier channels used in this study are given in Table 1. The equalized signals are calculated according to and forwarded to the matched filter. 
The weighting factor g eq,i = 1 E i ensures that each equalized signal has the same correlation output value one between the signals x eq,i (n) andx eq,i (n) at lag zero. The factor E i is the auto-correlation output ofx eq,i (n) at lag zero. In Figure 6 the cross correlation of the signals x eq,i (n) andx eq,i (n) at time lag zero is visualized. It is obvious that adjacent coils signals have cross correlation values different than zero. This is due to the decay behavior of the sensor impulse response. Nevertheless, the shown values are still sufficient for separating the coil signals. Matched Filter As described in Equation (2), the input signal of the sensor is a superposition of the magnetic coil signals. Additionally, noise sources superimpose the signal. To obtain a high signal-to-noise ratio (SNR) the coil amplitudes can be increased, which leads to a high energy consumption. Alternatively, a matched filter [27] can be used, which increases the SNR and thus can perform well also with lower energy consumption. This additionally makes the algorithm more robust against distortions. Hence, to obtain the amplitudes of the coil signals measured by the magnetoelectric sensor, the sensor input signal x in (n) is matched filtered with the equalized coil excitation signals The matched filter output can be evaluated every L r samples, since all coil signals have then been completely transmitted once with the value m i (k) corresponding to the amplitude of the coil signal i at the sensor. These amplitudes are summarized in the vector m(k) = [m 1 (k), m 2 (k), . . . , m N c (k)] T and forwarded to the localization algorithm. Localization Algorithm For the estimation of the position and orientation of the sensor, the matched filter output vector m(k) is compared with the columns of the lead-field matrix A. As stated in Equation (4), each column a j describes the coil amplitudes that would be measured by the sensor (after being filtered by the matched filter), if the sensor would occupy the defined position and orientation pair described by p j . To be more robust against gain uncertainties and to ensure comparability between the measured coil amplitudes and the lead-field matrix columns, the vectors are normalized to the respective absolute maximum value beforehand. The values for the cost function c(k) = [c 1 (k), . . . , c j (k), . . . , c N p (k)] T are calculated by [15]: The estimated sensor position and orientation is then given by the position-orientation pair of the forward problem with the minimum cost function valuê p(k) = p l(k) with l(k) = argmin j c j (k) (14) and can also only be determined every L r samples. It is obvious that localization errors occur due to the discretization of the localization area. If the sensor is not directly located on a defined grid point, the localization error will be at least the distance between the closest grid point and the sensor's location. Unfortunately, even higher localization errors can occur in some forward model configurations, due to the shape of the cost function c(k). Further information can be found in Appendix A. To overcome this problem a higher resolution is required. However, this will increase the computational complexity dramatically and hence can endanger the real-time capability of the system. By increasing the resolution iteratively, the localization can be performed with high accuracy and a moderate increase in computational complexity. The flow chart for the iterative localization process is shown in Figure 7. 
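Before turning to the iterative refinement, the sketch below illustrates a single grid-search step in the spirit of Equations (13) and (14): the matched-filter output is compared with every lead-field column after normalizing both to their absolute maxima, and the position-orientation pair with the smallest cost is returned. Since the exact cost function is taken from earlier work, a plain Euclidean distance between the normalized vectors is used here as an assumed stand-in, and the lead-field matrix, grid, and measurement vector are random placeholders.

```python
import numpy as np

def localize(m, A, P):
    """Compare the measured coil amplitudes m (N_c values) with the columns of the
    lead-field matrix A (N_c x N_p) and return the best grid entry P[:, j]."""
    m_n = m / np.max(np.abs(m))                         # normalize the measurement
    A_n = A / np.max(np.abs(A), axis=0, keepdims=True)  # normalize each column
    cost = np.linalg.norm(A_n - m_n[:, None], axis=0)   # assumed cost (stand-in for Eq. (13))
    j = int(np.argmin(cost))                            # Eq. (14): entry with minimum cost
    return P[:, j], cost[j]

# Toy placeholders: 6 coils, 1000 position-orientation pairs (x, y, yaw).
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 1000))
P = rng.uniform(size=(3, 1000))
m = A[:, 123] * 0.8 + 0.01 * rng.normal(size=6)   # noisy measurement near grid entry 123
print(localize(m, A, P))
```

In the iterative variant described next, this comparison is simply repeated on successively finer grids built around the N_b best candidates of the previous iteration.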
The first iteration is calculated as described before. Instead of considering only one estimated position-orientation-pair as stated in Equation (14), the N b best position-orientation-pairs are taken into account. The minima and maxima of the included position-orientation-pairs plus a fraction (one step size) of the previous grid form the boundaries of the new grid. The new grid is again divided into N p position-orientation-pairs and the forward problem as described in Equation (4) is calculated. From then on, the steps are repeated as described in the upper part of this section. Grid refinement stops when either the resolution between two adjacent grid points is less than a specified resolution or the maximum number of iterations N it has been reached. The estimated position-orientation pair is given in the last iteration as described by Equation (14). Postprocessing To mitigate possible outliers in the localization results, a linear Kalman filter for smoothing the estimated localization outcomes is used. The measurement equation of the system can be described byp (k) = Hs(k) + n m (k), (15) with the state variables s(k), the observation model H transforming the states into the measurement variables, and supposing white Gaussian measurement noise n m (k) [28]. Assuming a linear system, the state variables are updated via the equation The matrix F is the state transition matrix and n p (k) is white noise with zero mean [28]. Due to the noise processes the measurement variables are subject to errors. The Kalman filter attempts to predict the states and thus reduces the influence of the noises stated in Equations (15) and (16). The calculations are performed according to the descriptions in [28]. Based on the N m = 6 measured variables summarized inp(k), there are N s = 3 · N m = 18 states available, when additionally considering the velocity and acceleration of the measured variables. The initialization of the variables and covariance matrices are described in Appendix B. The enhanced localization output is denoted asp enh (k). It is worth noting that the Kalman filter outputs should not lie outside the localization area and are thus restricted to the boundaries of the localization grid. Measurements and Results For the proof of principle, the measurements were performed in a two-dimensional space, i.e., only considering the x and y components of the sensor position and only considering the angle yaw ψ for the orientation of the sensor. The z component of the sensor's position, as well as the orientation angles roll ϕ and pitch θ are assumed to be zero. Nevertheless, the proposed method can easily be performed in the three-dimensional space without any restrictions. More coils should be used for this, positioned in the threedimensional space. Additionally, a smaller initial grid resolution might be used to keep the computational complexity low. The measurements were performed with a real-time system developed at the chair of Digital Signal Processing in Kiel [29]. A picture of the graphical user interface of the tool is shown in Figure 8. The parameter used for the measurements (as defined in the sections above) are listed in Table 2. The localization area is limited to values between 0 cm and 50 cm in x and y direction and between −90 • and 90 • for the yaw angle. Value 6 2048 28,672 192 10 10 49,419 The waiting time between two consecutive coil signals must be rather high due to the decay of the impulse response of the sensor. 
Consequently, a total time of L r f s = 896 ms, with f s being the sampling rate, is needed to completely transmit the signals and thus to generate one localization result. This time can be shortened tremendously when using a sensor with a higher bandwidth. The sensor is placed in fixed positions and with fixed orientations to determine the accuracy of the algorithm. The tested position-orientation pairs p s,j were chosen randomly and are shown in Figure 9. The arrow directions represent the sensor's long axis. The tilt between the sensor's sensitive and long axis has been approximately determined by manually rotating the sensor in a Helmholtz coil. The tilt was measured outside a shielded environment and results in γ ≈ −45 • . However, the tilt of the sensor depends on various factors, such as the strength of the bias field [30] (e.g., the earth's magnetic field) and is therefore only an approximation. The results of the localization for the position-orientation pair p s,3 over time are shown as an example in Figure 10. Here, x s,j , y s,j , and ψ s,j denote the x and y component and the yaw angle of the sensor at position p s,j , respectively. The localization output without the Kalman filter is described byx s,j (k),ŷ s,j (k), andψ s,j (k) and after smoothing by the Kalman filter byx enh s,j (k),ŷ enh s,j (k), andψ enh s,j (k). There are some variances of the localization result over time. This is mainly due to the presence of noise, which leads to slightly varying amplitudes at the sensor and thus to small variations in the localization outcomes. The offset error might be due to cross talk between the coil amplifier channels and coupling of the magnetic coil signals into the cables and electronics. Additionally, it can be seen that the Kalman filter smooths the estimation output over time, so that outliers in the localization results do not have such a high impact. To quantify the accuracy of the algorithm, a localization error is defined according tō for the position estimation and for the orientation estimation. The number of localization outcomes is set to N meas = 50, according to a measurement time of 44.8 s. The accuracy of the localization results for all tested position-orientation-pairs is shown in Figure 11. The localization error is lying between 1.12 cm and 6.90 cm for the position estimation and results in a mean error of about 3.44 cm. The error for the orientation estimation is between 3.02 • and 16.76 • . This results in a mean error of about 11.23 • . When considering fixed positions and orientations of the sensor, the localization output can be averaged over time and compared to the real sensor position/orientation afterwards. When doing so, the localization accuracy can be slightly improved and results in values between 0.46 cm and 6.52 cm for the position estimation and 1.54 • and 15.35 • for the orientation estimation. The average error reduces to 3.14 cm and 10.54 • , respectively. The high errors for the estimation of the sensor's position can result from the noise and the cross talk in the measurement system. Due to the different distances between the coils and the sensor the SNR is dependent on the sensor position/orientation. For example, looking at position p s,9 an average SNR of about 9.5 dB is obtained at the sensor. Higher coil currents would lead to an improved SNR. Nevertheless, the goal was to localize with a minimum amount of energy. To avoid cross talk, the cables as well as the amplifier channels must be shielded. 
Additionally, the remaining cross talk can already be considered when setting up the forward problem or with an appropriate initial calibration. However, since the focus of this work is on the real-time localization pipeline and the calibration will be very extensive, it is not the subject of this work, but will be taken into account in our future work. The high error variance of the orientation estimation can partly be due to the change of the sensor's sensitive axis with respect to the bias field. Even a rotation in the earth's magnetic field can tilt the sensitive axis [30]. This problem only occurs with the sensors presented here and not with other types of magnetic field sensors. Furthermore, possible calibration errors do not only influence the position estimation but also the orientation estimation. Conclusions and Outlook An algorithmic pipeline for localizing magnetic sensors in real time was presented in this contribution. Besides the localization algorithm itself, pre-and postprocessing steps for an enhanced estimation of the sensor's position and orientation have been described. The potential of the proposed algorithm was emphasized by measurements with a magnetoelectric exchange bias ∆E-effect sensor. Nevertheless, the proposed method can be applied to any type of magnetic field sensor. Only the coil excitation signals must be adapted to the properties (frequency range, dynamic range, etc.) of the magnetic sensor used. Using the magnetoelectric sensor, a mean localization error of 3.44 cm has been reached. For the proof of principle the localization of the sensor has been limited to the two-dimensional space. Nevertheless, the localization can be easily extended to the three-dimensional space. The achieved results are comparable with other magnetic position estimation approaches. In [4] a 3 × 3 m grid was used for localizing, achieving an accuracy of less than 10 cm. Localizing with a 3D sensor in a grid size of 8 × 7 cm an accuracy of 2.6 mm could be reached in [31]. In [2] a localization accuracy of ≤ 2 mm has been achieved, using coils for the localization of the sensors in a flexible MEG system. That shows that our localization method performs well, but can still be improved. However, the localization method investigated in this contribution can be performed in real time and in parallel to magnetic measurements without any degradation. Additionally, the robustness in noisy environments is increased by the usage of a matched filter and the smoothing of the localization outcomes via the linear Kalman filter. Moreover, the usage of a magnetoelectric sensor can be advantageous with respect to later medical applications due to its small size and the low production and operation costs. Until now, the biggest limiting factor is the hardware used. Moreover, only a simplified model of the magnetoelectric sensor has been used, where the sensor is reduced to a point model. A more detailed model of the sensor, which also considers the dimensions of the sensor as well as a bias field dependent tilting of the sensor's sensitive axis, could improve the localization accuracy. The results can be further improved using multiple sensors included in an array with fixed distances and orientations as described in [1]. To reduce the transmitting time of the coils and thus to increase the rate of localization outcomes, FDMA or CDMA approaches would be beneficial. This would require an adaption of the sensor hardware. . Exemplary cost function. The sensor is located at point A. 
Due to the relatively coarse grid (black lines), there will be a localization error of at least the distance between point B and point A. However, due to the shape of the cost function, the minimum along the grid lines, and thus the localization outcome, is at point C. Appendix B. Initialization of the Kalman Matrices As already stated in Section 4.4, the calculations of the Kalman filter are performed as described in [28]. The matrices are initialized as described for a discrete Wiener process acceleration model [28]. Here, I M 1 denotes the identity matrix of size M 1 × M 1 and 0 M 1 ×M 2 a matrix of zeros of size M 1 × M 2 . E{·} is the expected value operator and ∆t = L r /f s , with f s being the sampling rate. Considering the high measurement noise and assuming a slowly moving or fixed sensor (i.e., low process noise), the estimated variances are set to σ̂ 2 p = 0.001, σ̂ 2 m = 10, and σ̂ 2 s = 500. These values were chosen as exemplary settings.
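A minimal numpy sketch of such an initialization is given below. It follows the standard discrete Wiener process acceleration blocks of [28] with ∆t = L r /f s and the variances quoted above, but the exact stacking of the 18 states and the matrix layout used in this work are assumptions made only for illustration.

```python
import numpy as np

def dwpa_blocks(dt, n_m=6, var_p=0.001, var_m=10.0, var_s=500.0):
    """Per-variable DWPA blocks stacked for n_m measured variables.

    State per measured variable: [value, velocity, acceleration] (assumed
    ordering). dt = L_r / f_s; var_p, var_m, var_s are the process-noise,
    measurement-noise, and initial-state variances quoted in Appendix B.
    """
    f1 = np.array([[1.0, dt, 0.5 * dt**2],
                   [0.0, 1.0, dt],
                   [0.0, 0.0, 1.0]])
    q1 = var_p * np.array([[dt**4 / 4, dt**3 / 2, dt**2 / 2],
                           [dt**3 / 2, dt**2,     dt],
                           [dt**2 / 2, dt,        1.0]])
    F = np.kron(np.eye(n_m), f1)                            # (18, 18)
    Q = np.kron(np.eye(n_m), q1)
    H = np.kron(np.eye(n_m), np.array([[1.0, 0.0, 0.0]]))   # (6, 18)
    R = var_m * np.eye(n_m)
    P0 = var_s * np.eye(3 * n_m)
    return F, Q, H, R, P0

def kf_step(s, P, z, F, Q, H, R):
    """One predict/update cycle of the linear Kalman filter."""
    s = F @ s
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    s = s + K @ (z - H @ s)
    P = (np.eye(P.shape[0]) - K @ H) @ P
    return s, P
```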
6,861.4
2021-08-01T00:00:00.000
[ "Engineering", "Physics" ]
Partially hyperbolic diffeomorphisms with a trapping property We study partially hyperbolic diffeomorphisms satisfying a trapping property which makes them look as if they were Anosov at large scale. We show that, as expected, they share several properties with Anosov diffeomorphisms. We construct an expansive quotient of the dynamics and study some dynamical consequences related to this quotient. Introduction The purpose of this paper is twofold. On the one hand, we provide some mild contributions to the classification problem of partially hyperbolic diffeomorphisms in higher dimensions. On the other hand we study the dynamics of certain partially hyperbolic diffeomorphisms as well as give evidence of certain pathological phenomena which must be dealt with in order to understand better this kind of dynamics. We shall consider partially hyperbolic diffeomorphisms f : M → M admitting a splitting of the form T M = E cs ⊕ E u in the pointwise sense (see below for precise definitions). It is well known ( [HPS]) that the unstable bundle E u is uniquely integrable into a foliation called unstable foliation and denoted as W u . On the other hand, the integrability of the bundle E cs is a subtler issue (see [BuW]). As in other results aiming at the classification of partially hyperbolic diffeomorphisms in higher dimensions (for example [Bo, Carr, Go]) we shall ignore this issue for the moment by assuming that f is dynamically coherent. We say that a partially hyperbolic diffeomorphism is dynamically coherent if there exists an f -invariant foliation W cs tangent to E cs . In dimension 3, there are several classification type results which do not assume integrability to start with (see for example [BoW,BBI,HP,HP 2 ,P 3 ]). See also section 6 of this paper where we remove this hypothesis under a different assumption. This work will concern partially hyperbolic diffeomorphisms which verify a dynamical condition which makes them look, from far apart, as Anosov diffeomorphisms: We will say that a dynamically coherent partially hyperbolic diffeomorphism f : M → M with splitting T M = E cs ⊕ E u has a trapping property if there exists a continuous map D cs : M → Emb 1 (D cs , M ) such that D cs (x)(0) = x, the image of D cs (the closed unit ball of dimension dim E cs ) by D cs (x) is always contained in W cs (x) and they verify the The autor was partially supported by CSIC group 618, FCE-3-2011-6749 and the Palis Balzan project. The main point of this paper is to recover the same type of results that are valid for Anosov diffeomorphisms ( [Fr, Man, N]) in this setting. Several examples enjoy this trapping property and it is important in order to obtain dynamical consequences ( [M, Carv, BV, P, BF, FPS, Rol]), however, the point here is to avoid the usual assumption that the trapping property is seen at "small scale" (a similar approach is pursued in [P 4 ] in the 3-dimensional case and with one dimensional center). See also [CP] for a related notion of chain-hyperbolicity. A relevant point is to obtain results concerning partially hyperbolic diffeomorphisms without knowledge a priori on the global structure of the invariant foliations. To support this point of view we give in section 6 a weaker assumption which implies dynamical coherence and our trapping property. 1.1. Statement of results. In this paper, we will consider partially hyperbolic diffeomorphisms in one of the weakest forms (see [BDV, Appendix B] for a survey of possible definitions). 
We explicit the definition we shall use to avoid confusions with other references. A C 1 -diffeomorphism f : M → M is partially hyperbolic if there exists a Df -invariant continuous splitting T M = E cs ⊕ E u and N ≥ 1 such that for every x ∈ M and for every pair of unit vectors v cs ∈ E cs (x) and v u ∈ E u (x) one has that: Notice that with this definition f might be partially hyperbolic and f −1 not. This will not be a problem here, the results can be easily adapted to other settings. The definitions of dynamical coherence and the trapping property are the ones given in the introduction. Theorem 1.1. Let f : M → M be a dynamically coherent partially hyperbolic diffeomorphism with splitting T M = E cs ⊕ E u and satisfying the trapping property. Assume moreover that one of the following holds: An expansive quotient of the dynamics will be constructed under the hypothesis of the theorem. This will be enough to obtain that results of nonexistence of Anosov diffeomorphisms depending on the Lefschetz formula hold for partially hyperbolic diffeomorphisms in our setting (see for example [Sh, GH]). This is done in section 3 and the end of section 4. It might be that this quotient is of independent interest. Under certain assumptions (resembling those of the theory of Franks-Newhouse-Manning [Fr, N, Man]) we will see in section 4 that the quotient map is in fact transitive by translating the proofs in the Anosov setting to ours. In section 5 some mild dynamical consequences are derived and some questions which might will help to understand better the panorama are posed. Finally, in section 6 we give weaker conditions which ensure dynamical coherence. Appendix A presents an example which improves a decomposition constructed in [Rob] showing that the quotients might be quite wild while respecting (some of) the dynamical conditions. Acknowledgements: This is an improvement of part of my thesis [P 2 ,Section 5.4] under S. Crovisier and M. Sambarino, I thank both for several discussions and motivation. I also benefited from discussions with A. Gogolev, N. Gourmelon and M. Roldán. This work is dedicated to Jorge Lewowicz (1937Lewowicz ( -2014 who in particular has inspired several of the ideas here presented. Notation Along this paper M will denote a closed d-dimensional manifold and f : M → M a partially hyperbolic diffeomorphism. Except in section 6 we shall assume that f is dynamically coherent and verifies the trapping property. Along the paper we shall assume that d ≥ 3 (for the case of d = 2 stronger results can be obtained with easier proofs, see for example [P 2 , Section 4.A]). Given any foliation F on M we shall denote as F(x) to the leaf through x, F ε (x) to the ε-disk around x in the induced metric of the leaf andF will always denote the lift of F to the universal coverM of M . Here, foliation means a continuous foliation with C 1 -leafs tangent to a continuous distribution (foliations of class C 1,0+ according to [CC]). 3. An expansive quotient of the dynamics Denote as D cs x to D cs (x) (D cs ) and D cs x to D cs (x)(D cs ). We can define for each Some obvious properties satisfied by the sets A x are: • The set A x is a decreasing union of topological balls (it is a cellular set). In particular, A x is compact and connected. We would like to prove that the sets A x constitute a partition of M , so that we can quotient the dynamics. For this, the following lemma is of great use: Lemma 3.1. For every y ∈ W cs (x), there exists n y such that f ny (D cs y ) ⊂ D cs f ny (x) . 
The number n y varies semicontinuously on the point, that is, there exists U a small neighborhood of y such that for every z ∈ U we have that n z ≤ n y . Proof. Consider in W cs (x) the sets The sets E n are clearly open (by continuity of f and D cs ) and verify that E n ⊂ E n+1 because of the trapping property. Of course, x ∈ E n for every n ≥ 1. Using the trapping property and continuity of D cs again, it follows that n≥0 E n is closed. Indeed, let y belong to the closure of n≥0 E n , so, for z ∈ n≥0 E n close enough to y one has that f (D cs y ) ⊂ D cs f (z) . Since z ∈ E k for some k one deduces that y ∈ E k+1 showing that n E n is closed. Being non-empty, one deduces that n E n = W cs (x) as desired. The fact that the numbers n y varies semicontinuosly is a consequence of the fact that E n is open (n y is the first integer such that y ∈ E n ). One can show now that the sets A x constitute a partition of M . Proof. There exists a uniform n 0 > 0 such that for every x ∈ M and y ∈ f (D cs f −1 (x) ) we have that f n 0 (D cs y ) ⊂ D cs f n 0 (x) . To see this, notice that for each x ∈ M there exists k x with this property thanks to the previous Lemma and the fact that f (D cs f −1 (x) ) is compact. To prove the claim it is enough to show that the numbers k x vary semicontinuously with the point x. This follows with a similar argument as the one in Lemma 3.1 by using continuity of the disks, the map f and the trapping property. For z ∈ A x and n ≥ 0 we have that A z ⊂ f n (D cs f −n (x) ): Indeed, the existence of n 0 gives that f n 0 (D cs f −n 0 (f −n (z)) ) ⊂ D cs f −n (x) which gives the desired inclusion. This gives A z ⊂ A x . By symmetry we deduce that given x, y ∈ M the sets A x and A y are either disjoint or coincide. Consider two points x, y such that y ∈ W uu (x). We denote Π uu x,y : D ⊂ D cs x → D cs y as the unstable holonomy from a subset of D cs x into a subset of D cs y . An important useful property is the following: Lemma 3.3. The unstable holonomy preserves fibers, that is Π uu x,y (A x ) = A y . Proof. It is enough to show (by the symmetry of the problem) that Π uu (A x ) ⊂ A y . For n large enough we have that f −n (Π uu (A x )) is very close to a compact subset of D cs f −n (x) and thus f −n (Π uu (A x )) ⊂ D cs f −n (y) which concludes. Lemma 3.4. The equivalence classes vary semicontinuously, i.e. if x n → x then: Proof.Using the invariance under unstable holonomy, it is enough to show that the classes vary semicontinuously inside center-stable manifolds. The proof of this fact is quite similar to the previous ones, particularly, the proof of Proposition 3.2. We get thus a continuous projection by considering the relation x ∼ y ⇔ y ∈ A x . Since π is continuous and surjective, it is a semiconjugacy. Notice that a priori, the only knowledge of the topology of M/ ∼ one has is that it is the image by a cellular map of a manifold (some information on these maps can be found in the book [D] and references therein). For instance, we do not know a priori if the dimension of M/ ∼ is finite. This will follow from dynamical arguments after we prove Theorem 3.5 (combined with [M 2 Given a homeomorphism h : X → X of a compact metric space X, we denote the ε-stable (ε-unstable) set as We say that a homeomorphism has local product structure if there exists Expansive homeomorphisms verify that diam(h n (S ε (x))) → 0 uniformly on x for ε < α. Theorem 3.5. The homeomorphism g is expansive with local product structure. 
Moreover, π(W cs (x)) = W s (π(x)) and π is injective when restricted to the unstable manifold of any point. Proof.The last two claims are direct from Lemma 3.1 and the definition of the equivalence classes respectively. We must show the existence of a local product structure and that will establish expansivity also. First choose ε > 0 such that an unstable manifold of size 2ε cannot intersect the same center stable disk in more than one point. This is given by the continuity of the bundles E cs and E u . Consider x ∈ M and a neighborhood U of A x . Using Lemma 3.4 one knows that there is a neighborhood V of A x such that for every y ∈ V one has that A y is contained in U . One can choose U small enough so that for every y ∈ V one has that W u ε (y) ∩ D cs x is exactly one point. Moreover, by the continuous variation of the D cs -disks, one has that, maybe by choosing V smaller, it holds that for every y, z ∈ V one has that W u 2ε (y) ∩ D cs z is exactly one point. Since the image of V by π is open, one gets a covering of M/ ∼ by open sets where there is product structure. By compactness one deduces that there exists a local product structure for g and since the intersection point is unique one also obtains expansivity of g. 3.1. Some remarks on the topology of the quotient. This section shall not be used in the remaining of the paper. We shall cite some results from [D] which help understand the topology of M/ ∼ . Before, we remark that Mañe proved that a compact metric space admitting an expansive homeomorphism must have finite topological dimension ([M 2 ]). Corollary IV.20.3A of [D] implies that, since M/ ∼ is finite dimensional, we have that it is a locally compact ANR (i.e. absolute neighborhood retract). In particular, we get that dim(M/ ∼ ) ≤ dim M (see Theorem III.17.7). Also, using Proposition VI.26.1 (or Corollary VI.26.1A) we get that M/ ∼ is a d−dimensional homology manifold (since it is an ANR, it is a generalized manifold). More properties of these spaces can be found in section VI.26 of [D]. Also, in the cited book, one can find a statement of Moore's theorem (see section IV.25 of [D]) which states that a cellular decomposition of a surface is approximated by homeomorphisms. In particular, in our case, if dim E cs = 2, we get that M/ ∼ is a manifold (see also Theorem VI.31.5 and its Corolaries). Lateer, we shall see that if M = T d then the quotient M/ ∼ is also a manifold (indeed M/ ∼ is homeomorphic to T d ). The same should hold for infranilmanifolds but we have not checked this. Transitivity of the expansive homeomorphism In this section g : M/ ∼ → M/ ∼ will denote the expansive quotient map we have constructed in the previous section. The quotient map will be as before denoted by π : M → M/ ∼ . In general, it is not yet known if an Anosov diffeomorphism must be transitive. Since Anosov diffeomorphisms enter in our hypothesis, there is no hope of knowing if f or g will be transitive without solving this longstanding conjecture. We shall then work with similar hypothesis to the well known facts for Anosov diffeomorphisms, showing that those hypothesis that we know guaranty that Anosov diffeomorphisms are transitive imply transitivity of g as defined above. Remark 4.1. It is well known that transitivity of g amounts to showing some form of uniqueness of basic pieces. This is quite direct if one assumes some knowledge on the structure of the foliations of f , for example, if for every x, y ∈ M one has that D cs x ∩ W u (y) = ∅ then it follows that g is transitive. 
In this paper we rather concentrate on information which does not rely a priori on knowledge of the structure of the foliations. In particular, we shall prove in this section the following two results: Theorem 4.2. Assume f : M → M is a dynamically coherent partially hyperbolic diffeomorphism with the trapping property and dim E u = 1. Then M is covered by R d and homotopically equivalent to T d . Theorem 4.3. Assume f : M → M is a dynamically coherent partially hyperbolic diffeomorphism with the trapping property and M is covered by R d and homotopically equivalent to T d . Then f is isotopic to a linear Anosov automorphism L, the manifold M and the quotient M/ ∼ are homeomorphic to T d and g is topologically conjugate to L. Putting together Theorems 4.2 and 4.3 can be compared to Franks-Newhouse theory ([Fr, N]) on codimension one Anosov diffeomorphisms. It is possible to prove directly (with an argument similar to the one of Newhouse but taking care on the quotients) that g is transitive when dim E u = 1 without showing that M = T d (see [P 2 ,Section 5.4] for this approach). Theorem 4.3 is reminiscent of Franks-Manning theory ( [Fr, Man] see also [KH,Chapter 18.6]). It is natural to expect that property this result should hold if we consider M an infranilmanifold, but we have not checked this in detail. It is reasonable to extend the conjecture about transitivity of Anosov diffeomorphisms to expansive homeomorphisms in manifolds with local product structure. See the results in [V, ABP]. 4.1. Proof of Theorem 4.2. This proof is an adaptation of quite classical ideas (see for example the Appendix in [ABP]) with some arguments of [N]. The main point is to show that W cs is a foliation by leafs homeomorphic to R d−1 . This follows from the trapping property (and Lemma 3.1). Having this, one can lift the foliations W cs and W u to the universal cover and show that a leaf ofW u cannot intersect a leaf ofW cs more than once using Haefliger's argument ( [CC]) and the fact that all leafs are simply connected. To prove that the universal cover of M is R d one must show that given a leaf ofW u it intersects every leaf ofW cs . This follows with exactly the same proof of Lemma (5.2) of [Fr] once one knows that every leaf of W cs is dense. Lemma 4.4. Every leaf of W cs is dense in M . Proof.We use here some of the ideas of [N]. Consider a leaf L of W cs and let Λ = L which is a closed W cs -saturated set. We must show that Λ = M , for this, it is enough to show that every point . Under this assumption, and using the fact that f −1 contracts uniformly leafs of W u one deduces that Λ is also saturated by W u showing that Λ is open and closed. Being non-empty, one deduces that Λ = M as desired. So, to prove the lemma it is enough to show the following: Proof.First, notice that without loss of generality 1 , one can assume that f (Λ) = Λ. 1 A way to see this is to consider a spectral decomposition for g. If there is a leaf of W cs which is not dense, one can consider Λ to be the preimage by π of an attractor of g. Considering an iterate, one has the desired property. See [N]. We claim that if x ∈ Λ is a point such that one connected component of W u (x) \ {x} does not intersect Λ then A x is periodic (or equivalently, π(x) is a periodic point for g). Moreover, there are finitely many such periodic points. 
To see this, notice that there exists ε > 0 such that if three points of the past orbit of π(x) by g are at distance smaller than ε one has that the unstable manifold of one of the points in the orbit intersects Λ in both connected components. Since such points are invariant one deduces that π(x) must be periodic for g. Now assume there is a point x such that its unstable manifold does not intersect Λ on one side. Let Σ be the boundary of D cs x which is a topological sphere (of dimension ≥ 1 since d ≥ 3). As we mentioned, every point in Σ verifies that the unstable manifold in both sides intersect Λ, and by continuity and the intersection point, one obtains a continuous map from ϕ : Σ × [0, 1] → M which verifies that ϕ(z, 0) = z, ϕ(z, 1) is in Λ and maps {z} × [0, 1] to a compact part of W u (z). We can moreover assume that ϕ(Σ × {t 0 } is contained in a leaf of W cs for every t 0 using continuity (see [N]). Since ϕ(Σ × {t 0 }) separates W cs (ϕ(z, t 0 )) giving a compact region one can prove that the unstable manifold of x intersects W cs (ϕ(z, t 0 )) for every t 0 ∈ [0, 1] and therefore that it intersects Λ giving a contradiction. See [N] for more details. Now, we have a global product structure in the universal cover which implies thatM = R d and moreover, we get that the space of leafs of the foliationW cs is homeomorphic to the real line R (and can be identified with a single leaf ofW u ). The action by deck transformations induces an action on the space of leafs ofW cs which does not have fixed points since all leafs of W cs are simply connected. By Holder's theorem this implies that π 1 (M ) is free abelian and thus isomorphic to Z k . Since the universal cover is contractible, one deduces that M is homotopy equivalent to T d as desired. 4.2. Proof of Theorem 4.3. We shall follow the proof given in [KH] chapter 18.6. Before we start with the proof, we shall recall Theorem 18.5.5 of [KH] (the statement is modified in order to fit our needs, notice that for an expansive homeomorphism with local product structure the specification property is verified in each basic piece): Proposition 4.5 (Theorem 18.5.5 of [KH]). Let X a compact metric space and g : X → X an expansive homeomorphism with local product structure. Then, there exists h, c 1 , c 2 > 0 such that for n ∈ N we have: c 1 e nh ≤ P n (g) ≤ c 2 e nh where P n (g) is the number of fixed points of g n . We shall use several time the very well know Lefschetz formula which relates the homotopy type of a continuous function, with the index of its fixed points (see [Fr 2 ] Chapter 5). Definition. Let V ⊂ R k be an open set, and F : V ⊂ R k → R k a continuous map such that Γ ⊂ V the set of fixed points of F is a compact set, then, Remark 4.6. In general, if we have a map from a manifold, we can include the manifold in R k and extend the map in order to be in the hypothesis of the definition. The value of I Γ (F ) does not depend on how we embed the manifold in R k . For hyperbolic fixed points, it is very easy how to compute the index, it is exactly sgn(det(Id − D p f )). Since the definition is topological, any time we have a set which behaves locally as a hyperbolic fixed point, it is not hard to see that the index is the same. Lefshetz fixed point formula for the torus can be stated as follows: is the action of h in homology. The first thing we must show, is that the linear part of f , that is, the action L = f * : Lemma 4.8. The matrix L is hyperbolic. 
Proof.We can assume (maybe after considering a double covering and f 2 ) that E cs and E u are orientable and its orientations preserved by Df . So, it is not hard to show that for every fixed point p of g n , the index of π −1 (p) for f is of modulus one and always of the same sign. So, we know from the Lefshetz formula that This implies that L n is hyperbolic using Proposition 4.5 since the only way to have that estimate on the periodic orbits is that L is hyperbolic (see the argument in Lemma 18.6.2 of [KH]). It is standard to show the existence of a semiconjugacy h : Proof.It is enough to show that for every x ∈ T d / ∼ there exists y ∈ T d such that π −1 (x) ⊂ h −1 (y). For this, notice that any lifting of π −1 (x) (that is, a connected component of the preimage under the covering map) to the universal covering R d verifies that it's iterates remain of bounded size. This concludes by the remark above on h. Now, we shall prove that if f : R d → R d is any lift of f , then there is exactly one fixed fiber of π for f . Proof.Since f n is homotopic to L n which has exactly one fixed point and each fixed fiber of π contributes the same amount to the index of f n it must have exactly one fixed fiber. This allows us to show that g is transitive: Proposition 4.11. The homeomorphism g is transitive. Proof.First, we show that there exists a basic piece of g which projects byh to the whole T d . This is easy since otherwise, there would be a periodic point q in T d \h(Ω(g)) but clearly, the g−orbit ofh −1 (q) must contain non-wandering points (it is compact and invariant). This concludes, since considering a transitive point y of L and a point in Ω(g) ∩h −1 (y) we get the desired basic piece. Now, let Λ be the basic piece of g such thath(Λ) = T d . Assume that there existsΛ = Λ a different basic piece and z a periodic point ofΛ, naturally, we get thath −1 (h(z)) contains also a periodic point z ′ in Λ. By considering an iterate, we can assume that z and z ′ are fixed by g. With this in hand, we will continue to prove that the fibers of h coincide with those of π proving that g is conjugated to L (in particular, T d / ∼ ∼ = T d ). First, we show a global product structure for the lift of f . Notice that when we lift f to R d , we can also lift its center-stable and unstable foliation. It is clear that both foliations in R d are composed by leaves homeomorphic to R cs and R u respectively (the unstable one is direct, the other is an increasing union of balls, so the same holds). Lemma 4.12. Given x, y ∈ R d , the center stable leaf of x intersects the unstable leaf of y in exactly one point. Proof.The fact that they intersect in at most one point is given by the fact that otherwise, we could find a horseshoe for the lift, and thus many periodic points contradicting Lemma 4.10 (for more details, see Lemma 18.6.7 in [KH]). The proof that any two points have intersecting manifolds, is quite classical, and almost topological once we know that both foliations project into minimal foliations (see also Lemma 18.6.7 of [KH]). Now, we can conclude with the proof of Theorem 4.3. To do this, notice that the map h conjugating f with L is proper, so the preimage of compact sets is compact. Now, assume that A x , A y are lifts of fibers of π such that h(A x ) = h(A y ) we shall show they coincide. Consider K such that if two points have an iterate at distance bigger than K then their image by h is distinct. 
We fix x 0 ∈ A x and consider a box D n K of f n (x 0 ) consisting of the points It is not hard to show using Lemma 4.12 that there existsK independent of n such that every pair of points in D n K in the same unstable leaf of W u have distance along W u smaller thanK (this is a compactness argument). An analogous property holds for W cs . This implies that if f n (A y ) ⊂ D n K for every n ∈ Z then A y and A x must be contained in the same leaf of W cs . In fact we get that f −n (A y ) ⊂ W cs K (f −n (x 0 )) for every n ≥ 0 and so we conclude that A x = A y using Lemma 3.1. 4.3. Some manifolds which do not admit this kind of diffeomorphisms. The arguments used in the previous section also allow to show that certain manifolds (and even some isotopy classes in some manifolds) do not admit dynamically coherent partially hyperbolic diffeomorphisms satisfying the trapping property. A similar argument to the one used in the previous section yields the following result (see [GH] for sharper results in the same lines): This leads to a natural question: Is every dynamically coherent partially hyperbolic diffeomorphism with the trapping property homotopic to an Anosov diffeomorphism?. One should notice that expansive homeomorphisms admitting transverse stable and unstable foliations share many properties with Anosov diffeomorphisms (see for example [V, ABP]) but it is not known if every such homeomorphism is topologically conjugate to an Anosov diffeomorphism. Also, let us remark that there exist examples of dynamically coherent partially hyperbolic diffeomorphisms which are isotopic to Anosov and robustly transitve while not satisfying the trapping property: See [P 2 ,Section 3.3.4] Some dynamical consequences In this section we shall look at what type of dynamical properties can be recovered in the spirit of [P] (see also [Carv, BV]). We recall that a quasi-attractor Λ is a chain-recurrence class satisfying that it admits a decreasing basis of neighborhoods U n satisfying that f (U n ) ⊂ U n (see [P 2 , Chapter 1] and references therein). Since a quasi-attractor is saturated by unstable manifolds and the quotient we have defined which conjugates f to an expansive homeomorphism is injective on unstable manifolds, one expects that whenever the quotient map g is transitive (as it is ensured in some cases by Theorem A.1) there is a unique quasi-attractor. Unfortunately, showing this would involve showing that there are fibers of the semiconjugacy which are trivial and this is a subtle issue as the example presented in Appendix A shows. We are however able to show uniqueness of the quasi-attractors under a mild assumption resembling chain-hyperbolicity as defined in [CP]. Proposition 5.1. Let f : M → M be a dynamically coherent partially hyperbolic diffeomorphism satisfying the trapping property. Assume moreover that the quotient map g defined above is transitive and that there exists a point x ∈ M such that A x = {x}. Then, f has a unique quasi-attractor. Proof.Consider Λ a quasi-attractor for f and let π : M → M/ ∼ be the semiconjugacy to g : M/ ∼ → M/ ∼ constructed in section 3. Since π is injective along unstable manifolds, one obtains that π(Λ) contains the unstable set of any point z ∈ M/ ∼ such that z = π(y) with y ∈ Λ. Since g is expansive with local product structure and transitive, one know that the orbit of W u (z) is dense in M/ ∼ (see for example [KH,Chapter 18]). As a consequence, we get that every quasi-attractor must intersect A x = {x}. 
Since different quasi-attractors must be disjoint, this implies uniqueness of the quasi attractor under the assumptions of the proposition. Remark 5.2. In the case where E cs = E s ⊕ E c with dim E c = 1 and a trapping property is verified by leafs tangent to E c one can show that f satisfies a trapping property. From the construction of π one sees that A x is either a point or a closed interval. Using [J] one sees immediately that the conditions of the previous proposition are satisfied. Another property, related to [P] is the following: Proposition 5.3. Let f : M → M be a dynamically coherent partially hyperbolic diffeomorphism satisfying the trapping property. Assume moreover that the quotient map g defined above is transitive and the image of every open set of a center-stable leaf by π is either a point or has non-empty interior in the stable manifold of g. Then f has a unique quasi-attractor and every other chain-recurrence class of f is contained in a periodic disk of W cs . Proof.Consider a quasi-attractor Λ. As in the previous proposition, one has that π(Λ) = M/ ∼ and it is saturated by W u . One can easily show that for every x ∈ M the boundary of A x is contained in Λ: indeed, consider any y ∈ ∂A x and a neighborhood U of y in W cs (y). From our hypothesis one has that π(U ) has non-empty interior in the stable manifold of π(y). Iterating backwards and using the semiconjugacy and using the density of unstable sets for g one obtains that f −n (U ) intersects Λ for some n. Invariance of Λ and the fact that the choice of U was arbitrary gives that y ∈ Λ = Λ. The rest of the proposition follows by applying Proposition 2.1 of [P]. As we have explained, the hypothesis we demand in this section might follow directly from the fact that f has the trapping property but the example presented in Appendix A strongly suggests that counterexamples might exist. Question 1. Does there exists a dynamically coherent partially hyperbolic diffeomorphism of T 3 with splitting T T 3 = E cs ⊕ E u and with the trapping property such that it admits more than one quasi-attractor? Such that it has chain-recurrence classes (different than the quasi-attractor) which are not contained in periodic center-stable discs? See [P 4 ] for related discussions. A weaker trapping property and coherence In this section we shall present a weaker trapping property without requiring dynamical coherence a priori and show that it is enough to recover the initial proposition. One would hope that this property is shared by certain partially hyperbolic diffeomorphisms isotopic to Anosov though it is not so clear that it holds (see [P 3 ] for results in this direction). The proof is completely analogous to the one presented in section 3 of [BF] but in a slightly different context. One important point is the fact that we do not assume that the trapping property occurs in a small region and hope this might find applications. Let f : M → M be a partially hyperbolic diffeomorphism with splitting T M = E cs ⊕ E u . As before, we denote as cs = dim E cs and u = dim E u and D σ is the σ-dimensional open disk and D σ its closure. We will assume that f verifies the following property: ( * ) There exists a continuous map B : 0)) and the following trapping property is verified: The main result of this section is the following: Theorem 6.1. Let f : M → M be a partially hyperbolic diffeomorphism verifying property ( * ), then, f is dynamically coherent with a trapping property. 
Proof.First, we will denote as D cs x to the set of points y ∈ B(x) (D cs × D u ) such that f n (y) ∈ B(f n (x)) (D cs × D u ) We claim that D cs x is a manifold everywhere tangent to E cs and moreover one has that a trapping property f (D cs To show this, notice first that expansivity of the unstable manifolds implies that D cs x cannot intersect an unstable manifold more than once. Also, the trapping property verified by the maps B gives that every point a ∈ D cs verifies that B(x)({a}× D u ) intersects D cs x . The fact that it is a C 1 manifold everywhere tangent to E cs follows by classical graph transform arguments (see [HPS] or [KH,Chapter 6]). An important fact of the above is that one can view D cs x as a limit of disks D n x where D n x is any disk inside Image B x with the following property: Any family of such disks will converge to D cs x by the arguments sketched above (see the proof of Theorem 3.1 of [BF] for more details in a similar context). To finish the proof is then enough to show that the plaques D cs x are coherent in the sense that if y ∈ D cs x then D cs x ∩ D cs y is relatively open in D cs x . To see that this holds in general we shall argue in a similar way as in Lemma 3.1 to take advantage of the trapping property as well as the continuity of the map B. For each x ∈ M we consider the set W cs x defined as n f −n (D cs f n (x) ). Notice that W cs x is an immersed copy (in principle not injective) of R dim E cs in M . We shall use in W cs x the topology induced by this immersion (i.e. the intrinsic topology and not the one given as a subset of M ). To show that E cs is integrable, it is enough to show that W cs x is a partition of M and that each leaf is injectively immersed. Assume then that y ∈ W cs x , we must show that D cs y ⊂ W cs x . This will conclude since by local uniqueness this gives that W cs x is injectively immersed and that the sets W cs x are disjoint or coincide. Consider, for x ∈ M the sets E n = {y ∈ W cs x : f n (D cs y ) ⊂ D cs f n (x) }. If one shows that W cs x = n E n one completes the proof of the Theorem since f n (D cs y ) ⊂ D cs f n (x) implies that D cs y ⊂ W cs x which as argued above will imply that {W cs x } x∈M is an f -invariant foliation tangent to E cs . The proof that the union n E n is closed in W cs x is the same as in Lemma 3.1. The proof of openness is slightly more delicate that in that case since we do not know coherence in principle. However, coherence is easy to establish for points which are nearby and that this is exactly what we need to show to show openness. To see that E n is open consider z is close enough to y verifying that f n (D cs y ) ⊂ D cs f n (x) . We must show that z ∈ E n . To see this, it is enough to show that given y ∈ M one has that for z in a small neighborhood of y in D cs y it holds that f (D cs z ) ⊂ D cs f (y) . By continuity of B and the trapping property, it follows that for z in a neighborhood of y in D cs y the image by f of Image B z traverses the image of B f (y) . The characterization of D cs z as limits of disks as explained above implies that f (D cs z ) ⊂ D cs y as desired and concludes the proof. It is natural to expect that for a partially hyperbolic diffeomorphism f : T 3 → T 3 with splitting T T 3 = E cs ⊕ E u (with dim E cs = 2) isotopic to a linear Anosov automorphism with two-dimensional stable bundle, property ( * ) will be satisfied. 
To show this, one possibility would be to show injectivity of the semiconjugacy to the linear model along unstable manifolds but we have not succeed in doing so. A positive answer would improve the results of [P 3 ] in this context. Appendix A. A non-trivial decomposition of the plane admitting homotheties We shall denote as d 2 : R 2 → R 2 to the map The goal of this appendix is to prove the following Theorem: Theorem A.1. There exists a C ∞ -diffeomorphism f : R 2 → R 2 and a constant K > 0 such that the following properties are verified: -There exists a Holder continuous cellular map h : R 2 → R 2 such that The C ∞ norm of f and f −1 is smaller than K. A direct consequence of this Theorem is the existence of h : R 2 → R 2 whose fibers are all non trivial and cellular (decreasing intersection of topological disks), the existence of these decompositions of the plane had been shown by Roberts [Rob]. A.1. Construction of f . We start by considering a curve γ = {0}×[− 1 4 , 1 4 ]. Clearly, γ ⊂ B 0 = B 1 (0) the ball of radius one on the origin. Consider also the sets B n = B 2 n (0) for every n ≥ 0. It follows that We shall define f : R 2 → R 2 with the desired properties in an inductive manner, first in B 0 and then in the annulus B n \ B n−1 with arbitrary n ≥ 1. (c) f 0 (V 0 i ) ⊂ V 0 i for i = 1, 2. Assume now that for some sufficiently large constants K 1 and K 2 > 0 we have defined a C ∞ -diffeomorphism f n : B n → B n−1 and disjoint open connected sets V n 1 and V n 2 (homeomorphic to a band R × (0, 1)) such that: (I1) f n | B n−1 = f n−1 and V n−1 i ⊂ V n i for i = 1, 2. (I2) The C ∞ -distance between f n and d 2 in B n is smaller than K 1 . (I3) f n (V n i ) ⊂ V n−1 i for i = 1, 2 and f n n (V n i ) disconects B 0 . (I4) V n i contains balls of radius 1 10 in every ball of radius K 2 2 in B n . (I5) f n coincides with d 2 in a K 2 10 -neighborhood of ∂B n . We must now construct f n+1 and the sets V n+1 i assuming we had constructed f n and V n i . To construct f n+1 and V n+1 i we notice that in order to verify (I1), it is enough to define f n+1 in B n \B n−1 as well as to add to V n i an open set in B n+1 \B n which verifies the desired hypothesis. Consider d −1 2 (V n i ) ∩ B n+1 \B n . Since V n i satisfies property (I4) one has that d −1 2 (V n i ) contains a ball of radius 1 5 in every ball of radius K 2 of B n+1 \B n for i = 1, 2. Now, we consider a diffeomorphism ϕ n which is K 1 − C ∞ -close to the identity, coincides with the identity in the K 2 10 -neighborhoods of ∂B n+1 and ∂B n and such that ϕ n (V n i ) contains a ball of radius 1 10 in every ball of radius K 2 2 of B n+1 for i = 1, 2. The existence of such ϕ n is assured provided the value of K 1 is large enough with respect to K 2 . We define then f n+1 in B n+1 \B n as d 2 • ϕ −1 n which clearly glues together with f n and satisfies properties (I2) and (I5). To define V n+1 i we consider a very small ε > 0 (in order that ϕ n (V n i ) also verifies (I4)) and for each boundary component C of ϕ n (V n i ) (which is a curve) we consider a curve C ′ which is at distance less than ε of C inside ϕ n (V n i ) and such that each when it approaches C ∩ ∂B n the distance goes to zero and when it approaches C ∩ ∂B n+1 the distance goes to ε. This allows to define new V n+1 i as the open set delimited by these curves united with the initial V n i . It is not hard to see that it will satisfy (I3) and (I4). 
We have then constructed a C ∞ -diffeomorphism f : R 2 → R 2 which is at C ∞ distance K 1 of d 2 and such that there are two disjoint open connected sets V 1 and V 2 such that f (V i ) ⊂ V i . and such that both of them are K 2 2dense in R 2 . A.2. Proof of the Theorem. We first show the existence of a continuous function h : R 2 → R 2 conjugating f to d 2 which is close to the identity. This argument is quite classical: consider a point x ∈ R 2 , so, since d C 0 (f, d 2 ) < K 1 we get that the orbit {f n (x)} is in fact a K 1 −pseudo-orbit for d 2 . Since d 2 is infinitely expansive, there exists only one orbit {d n 2 (y)} which α(K 1 )-shadows {f n (x)} and we define h(x) = y (in fact, in this case, it suffices with the past pseudo-orbit to find the shadowing). We get that h is continuous since when x n → x then the pseudo-orbit which shadows must rest near for more and more time, and then, again by expansivity, one concludes. This implies also that h is onto since it is at bounded distance of the identity. Now, consider any ball B of radius 100α(K) in R 2 , it is easy to see that f (B) is contained in a ball of radius 50α(K) and then, we get a way to identify the preimage of points by h. Consider a point x ∈ R 2 , we get that h −1 (h(x)) = n>0 f n (B 100α(K) (f −n (x))) which implies that h is cellular. It only remains to show that the image under h of both V 1 and V 2 is the whole plane. Proof.We shall show that h(V i ) is dense. Since it is closed, this will imply that it is in fact the whole plane, and using the semiconjugacy and the fact that f (V i ) ⊂ V i we get the desired property. To prove that h(V i ) is dense, we consider an arbitrary open set U ⊂ R 2 . Now, choose n 0 such that d −n 0 2 (U ) contains a ball of radius 10α(K). We get that h −1 (d −n 0 2 (U )) contains a ball of radius 9α(K) and thus, since α(K) > K, we know that since V i is K/2-dense, we get that (U )) = ∅ which using the semiconjugacy gives us that h (V i This concludes. Holder continuity of h follows as in Theorem 19.2.1 of [KH] (see also [P 2 ]). Notice that the exponent of Holder continuity cannot be larger than 1 2 since the boundary of V i is sent as a space-filling curve.
11,424
2014-08-14T00:00:00.000
[ "Mathematics" ]
Modeling and Nonlinear Control of a Wind Turbine System Based on a Permanent Magnet Synchronous Generator Connected to the Three-phase Network Received Aug 26, 2017 Revised Dec 17, 2017 Accepted Jan 18, 2018 This article presents the nonlinear control of a grid-connected wind conversion chain based on a permanent magnet synchronous generator. The control objectives are threefold: i) forcing the generator speed to track a varying reference signal in order to extract the maximum power at different wind speeds (MPPT); ii) regulating the rectifier output capacitor voltage; iii) reducing the harmonic and reactive currents injected into the grid. This means that the inverter output current must be sinusoidal and in phase with the AC supply voltage (PFC). To this end, a nonlinear state-feedback control is developed, based on the average nonlinear model of the whole controlled system. This control strategy involves the backstepping approach, Lyapunov stability, and other tools from the theory of linear systems. The proposed state-feedback control strategy is tested by numerical simulation, which shows that the developed controller reaches its objectives. INTRODUCTION In this work we are interested in wind energy. To date, there are two types of wind turbines: fixed-speed turbines connected directly to the grid through the stator, and variable-speed turbines interfaced through the stator and the rotor by means of electronic power converters [1]. Thanks to its numerous advantages compared to other types of electrical machines (robustness, low maintenance, price), the permanent magnet synchronous machine is attractive for use as a generator coupled to a wind turbine [2], [3], [4]. In this article, we study the complete wind energy conversion chain shown in Figure 1, which includes a permanent magnet synchronous generator that converts wind energy into an output voltage whose amplitude and frequency vary with the wind speed, and AC/DC/AC converters which connect the generator to the network via an LCL filter. For nonlinear systems or systems with non-constant parameters, conventional control laws may be inadequate because they are not robust, especially when the demands on accuracy and other dynamic characteristics of the system are strict. One must use control laws that are insensitive to parameter variations, disturbances, and nonlinearities. For this purpose, several tools are available in the literature, such as proportional-integral (PI) controllers [5], [6], [7]. This mode of control is insufficient for such a complex system whose parameters are not constant over time, and other research has therefore turned to advanced methods [8], [9]; however, these techniques can exhibit a chattering phenomenon which has a negative effect on the mechanical part of the machine. A flatness-based approach is described in another article [10]. In this article, we present a backstepping design to control the power converters associated with the generator. The backstepping approach is a systematic and recursive design methodology for nonlinear feedback control [11]. The control is applied to the different parts of the system: the generator-side converter is mainly used to control the generator speed so as to extract the maximum output power at various wind speeds [12], [13], while the network-side converter is mainly used to control the reactive power, to maintain the DC bus capacitor voltage at a constant value, and to make the output current of the inverter in phase with the grid voltage.
The rest of the paper is organized d as follows: the model mathematical of the w hole system is presented by the equation of states in Section II; Section III describes in details the proposed control strategy based on backstepping approach for the system studied, while in Section IV gives simulation results demonstrating the good performance of the proposed method in MATLAB/Simulink, then a conclusion at the end of paper. Generator Modeling To simplify the system of equations with variable coefficients, a model in the Park reference of this machine will be used. The Park reference is simpler to manipulate because the electrical quantities evolve like continuous magnitudes. One can switch from one to the other marker using transit matrices. Since the voltages are the input variables, the output quantities (currents) can be expressed as a function of these, the state model of the synchronous machine, expressed in the rotating d-q reference frame linked to the rotor, is the following [14]: x an averaging value over a cutting period of a real signal x . Modeling the Inverter and Filter Association After presenting the model of the machine, we will expose the complete drive system where the controlled inverter is associated with the LCL filter connected to the network [15], [16]: x an averaging value over a cutting period of a real signal x . are the dq-components of the capacitor and grid voltage and in the output of inverter respectively. The inverter is featured by the fact that the grid d-and q-voltage can be controlled independently. In fact, one has: where: d u , q u represent the average d-and q-axis of the 3-phase duty ratio system ( DESIGN CONTROL 3.1 Control Objectives In this work, we are interested in ordering a wind energy conversion chain to produce electrical energy and inject it into the grid. To achieve this objective, the backstepping approach associated with the Lyapunov tools is used to guarantee the asymptotic stability of the closed loop system. From a control point of view, this is expressed into the following three objectives: i) Speed control: Force the speed of the generator to follow a reference signal varies. ii) PFC: the current injected into the network must be sinusoidal and in phase with the AC supply voltage. iii) Controlling the voltage dc v of the DC bus voltage to a reference given dcref v . This is usually set to a constant value equal to the nominal voltage of the inverter input. Regulator Design for Synchronous Generator The control strategy adopted uses cascaded loops, two internal loops for controlling the currents of axis d and q and an external loop for controlling the speed of the generator. Usually, the d-axis current must be regulated to zero in order to keep the flux constant in the air gap of the machine [17]. The q-axis current must track a reference signal from the external speed loop [18]. To solve this tracking problem, the following errors are defined: x x e   (12) Using the equations (1-3) we get the dynamics errors following: In order that the derivative of the test is still negative, it must take the derivative of the form ; is a positive design parameter introduced by the backstepping method, which must always be positive and non-zero to meet the stability criteria of Lyapunov function. 
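The cascaded design can be illustrated with a minimal numerical sketch: a simplified PMSG model with i d regulated to zero, an outer speed loop that produces the virtual q-axis current reference (playing the role of the virtual control in the text), and an inner current loop that yields v q so that the Lyapunov derivative is negative. All machine parameters, gains, and the numerical derivative of the current reference are illustrative assumptions and are not taken from this paper.

```python
import numpy as np

# Assumed PMSG parameters and gains (illustrative only, not from this paper).
Rs, Lq, psi_f, p = 0.5, 5e-3, 0.15, 4          # ohm, H, Wb, pole pairs
J, fv = 0.01, 1e-3                              # kg m^2, N m s/rad
kt = 1.5 * p * psi_f                            # torque constant: Tem = kt * iq
c1, c2 = 200.0, 2000.0                          # positive backstepping gains
dt, T_end = 1e-5, 0.2                           # s

omega, iq = 0.0, 0.0
iq_ref_prev = None
for k in range(int(T_end / dt)):
    t = k * dt
    omega_ref = 80.0 if t < 0.1 else 120.0      # speed reference (MPPT-like step)
    TL = 2.0                                     # turbine torque, assumed known

    # Outer loop: speed error and virtual q-current reference, chosen so that
    # e1_dot = -c1 * e1 when iq follows its reference.
    e1 = omega - omega_ref
    iq_ref = (TL + fv * omega - J * c1 * e1) / kt
    diq_ref = 0.0 if iq_ref_prev is None else (iq_ref - iq_ref_prev) / dt
    iq_ref_prev = iq_ref

    # Inner loop: current error and voltage vq chosen so that the Lyapunov
    # function V = (e1**2 + e2**2) / 2 gives Vdot = -c1*e1**2 - c2*e2**2.
    e2 = iq - iq_ref
    vq = Lq * (diq_ref - c2 * e2 - (kt / J) * e1) + Rs * iq + p * psi_f * omega

    # Simplified PMSG plant (id regulated to zero), explicit Euler step.
    domega = (kt * iq - TL - fv * omega) / J
    diq = (vq - Rs * iq - p * psi_f * omega) / Lq
    omega += dt * domega
    iq += dt * diq
```

The numerical derivative of the current reference stands in for the analytical expression that a full design would carry along; it is used here only to keep the example short.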
Following the backstepping approach and in order to ensure the tracking speed stability, the virtual control * 2 x is chosen as follow: To ensure the stability and convergence of component 2 x and its * 2 x Reference; we consider the second Lyapunov following function: (18) using (24), (30), we will select the law of command 1 u as follows: Id regulator: We will define the 3rd Lyapunov function to ensure the stability of 3 x : PFC and DC Voltage Controller A regulator will be designed in three steps Fig.2, so that the currents ) ( 4abc x injected into the must be sinusoidal and in phase with the system of grid voltage. By applying the backstepping control method, the control law established using equations (5)(6)(7)(8)   (21) where   * 4dq x is the reference signals (assumed bounded and sufficiently differentiable). Figure 2. Block Diagram Grid-side Converter Control System After choosing the two functions of lyapunov and following the method of backstepping we define the two virtual commands Where the design parameters quantity   Stabilization of Subsystem (18) The third design step consists in choosing the actual control signals, Using (7-23) following the backstepping approach, the control law for the system is given by: Figures 6 and 7 respectively, shows the q-component of the stator current iq and the electromagnetic torque. The curve of the current i sq tends to a constant value is of the same shape as that of the couple; it is deduced that the electromagnetic torque is directly proportional to the current isq presented in the Figure 7. Figures 8 and 9 respectively, shows represent the curve of the DC bus voltage vdc for a reference vdc_ref and the current i2 inject into the network with the supply voltage. In the Figure 8 the value of DC-link remains stable at a constant value to give as reference. Also, the current remains all time sinusoidal and in phase with the network voltage complying with the PFC requirement demonstrate by Figure 9 et 10. In the second 0.25 we have varied the reference of the voltage vdc, it is noted that the voltage of the DC bus is enslaved quickly to its new reference, which shows the robustness of our regulator. The Figure 11, show the reactive power injected into the three -phase network (equal to zero) and the electrical power P produced by the machine transferred to the grid by the three-phase inverter. From the simulation results obtained, for the wind energy system connected to the three-phase network, it can be noted from the first view, that the regulator based on the backstepping approach has a better performance than other regulator, such as directed flow control (FOC) [19], Linearization [20], Direct Torque Control (DTC) [21] and Sliding Mode approach [9], [22], especially in dynamic regime, when the controlled part is subjected to perturbations and variations of system parameters (our case), the algorithms of the classical [23] control using proportional, integral and derivative controllers remain incapable of control such nonlinear system. For so-called sophisticated control laws, there is a major problem which is the necessity of using a mechanical sensor (speed, load torque). This imposes an additional cost and increases the complexity of assemblies. 
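The PFC requirement discussed above can be stated compactly in the dq frame. The sketch below assumes an amplitude-invariant Park transform aligned with the grid voltage (so that v q = 0), in which case zero reactive power is equivalent to i q = 0 and the injected current is in phase with the grid voltage; the numerical values are illustrative only and not those of the simulated system.

```python
def grid_powers(vd, vq, id_, iq):
    """Active/reactive power from dq components (amplitude-invariant frame).

    With the dq frame aligned to the grid voltage (vq = 0), Q vanishes
    exactly when iq = 0, which is the PFC requirement.
    """
    P = 1.5 * (vd * id_ + vq * iq)
    Q = 1.5 * (vq * id_ - vd * iq)
    return P, Q

# Illustrative check: 325 V peak phase voltage, 10 A in-phase current.
P, Q = grid_powers(vd=325.0, vq=0.0, id_=10.0, iq=0.0)
print(P, Q)   # about 4.9 kW active power, zero reactive power
```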
For sliding-mode control, there are technological and physical limitations, such as switching delays or small-time constants at the actuator, which is why a Chattreing phenomenon appears on the sliding surface, this one characterizes by strong oscillations around the surface of commutation; Where we can observe in all the figures presented in our simulations basead on backstepping approach that the response time is very short so that the signals restored to the maximum values, so this rapid correction is of the speed or currents or voltages translated the continuation of our system and assured the performance of our regulator in the injection part the power to the network it is observed that our regulator respects the characteristics of our grid that it is the frequency or the phase, on the one hand, of on the other hand, the power injected into the network, we note that the reactive power tends towards a zero value and the active power reacts with the speed variation of the generator when the speed reaches the maximum the power is totally transmitted to the network. CONCLUSION In this paper, we have considered the problem of controlling the wind turbine synchronous generator connected to the power network through power electronic AC/DC/AC converters connected to the grid via LCL filter. The system dynamics have been described by the averaged 6 th order nonlinear state-space model (1)(2)(3), (5)(6)(7)(8). Based on such a model, the Lyapunov stability and averaging theory are used to design nonlinear controller defined by equations (19,20,27,28). the later guarantees quite interesting average performances; It is formally established that the control objectives are actually achieved in average with a quite satisfactory accuracy. Speed following his reference perfectly, the tracking quality is quite satisfactory as the response time is small after the change the wind speed, the current injected into the network remains sinusoidal all the time and in phase with the voltage of the network complying with the requirement PFC., the continuous bus DC-link follows a constant reference value. The formal results are confirmed by several simulations.
2,787.4
2018-06-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Pulse length effects on autoionizing states under the influence of intense SASE XUV fields The Fano absorption line shape of an autoionizing state encodes information on its internal atomic structure and dynamics. When driven near-resonantly with intense extreme ultraviolet (XUV) electric fields, the absorption profile can be deliberately modified, including observable changes of both the line-shape asymmetry and strength of the resonance, revealing information on the underlying dynamics of the system in response to such external driving. We report on the influence of the XUV pulse parameters at high intensity that can be achieved with a free-electron laser (FEL) with statistically broadened spectra based on self-amplified spontaneous emission (SASE). More specifically, the impact of the FEL pulse duration is studied for the example of the doubly excited 2s2p resonance in helium, where line-shape modifications have been measured with XUV transient absorption spectroscopy in Fraunhofer-type transmission geometry. A computational few-level-model provides insight into the impact of different average pulse durations of the stochastic FEL pulses. These findings are supported by measurements performed at the Free-Electron Laser in Hamburg (FLASH) and provide further insight into XUV strong-coupling dynamics of resonant transitions driven by intense high-frequency FEL sources. Introduction If one excites an atomic or molecular system with more energy than it needs to be ionized, phenomena like fluorescence, Auger decay or semi-stable resonance states embedded in the continuum-so-called autoionizing states-come into play. A special species of the latter with direct electron-electron inter- * Authors to whom any correspondence should be addressed. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. action are doubly excited states, which have been extensively studied in helium (for a review see e.g. [1]). The decay of such two-electron transitions is dominated by the interaction of the doubly excited state with the singly ionized continuum. In helium this typically occurs on the order of a few tens to hundreds of femtoseconds (fs) for the lower lying states [2,3], which eventually leads to the ejection of one of the two electrons. Autoionizing states hereby exhibit a characteristic asymmetric absorption line shape which has been measured with synchrotron extreme ultraviolet (XUV) light [2,[4][5][6]. The theoretical description was given by Fano [7], attesting the system a quantum-interferometric character, and explaining the asymmetric line shape through the phase-sensitive interaction of different electron configurations, nowadays well known as the Fano profile. The autoionizing states in helium represent a fundamental example of a correlated two-electron transition in a natural atomic system and can be used as a benchmark for the theoretical description of the Coulombic three-body problem [1,8]. Over the years a general interest has developed, how this correlated two-electron system changes when driven with strong electric fields. Initial theoretical work already predicted interesting effects in strong laser fields [9][10][11], also under the influence of FEL pulse shapes [12]. 
On the experimental side, the modification of autoionizing states in helium in the presence of strong DC electric fields have been reported [13]. More recently, AC strong-field effects have been experimentally realized through the time-resolved XUV interaction in the presence of strong near-infrared laser fields, where attosecond XUV pulses are generated with high-harmonic generation, and a precise time synchronization between both pulses is possible. These experiments thus enabled a direct time-domain view into the strong-field driven autoionization dynamics in helium, both through XUV attosecond transient absorption spectroscopy (TAS) [14][15][16][17][18][19][20] as well as with electron photoemission spectroscopy [3,21,22]. With the recent availability of intense and ultrashort XUV pulses generated by free electron lasers [23] it became possible to directly investigate strongly coupled resonant transitions with intense XUV pulses, and an absorption line-shape distortion in helium has been observed [24]. Further theoretical investigations of the influence of FEL pulse parameters on these autoionization dynamics have been performed [25][26][27]. Here we present a study on the FEL pulse-length-dependent modification of the 2s2p autoionizing absorption line shape in helium when driven directly by intense and partially coherent XUV fields which are generated through self-amplified spontaneous emission (SASE). This is performed within the TAS scheme, an all-optical method that works in transmission mode and is sensitive to the interference of the incoming and scattered light in forward direction, which recently has been experimentally realized with intense FEL pulses [24,28]. With a few-level model simulation, the impact of high-intensity XUV pulse parameters on the line-shape modification is investigated by a variation of the average pulse duration of stochastically structured FEL pulses. We find that with increasing FEL pulse duration, but fixed average bandwidth, the magnitude of observable line-shape asymmetry changes decreases significantly. These model predictions can be intuitively understood through the peak intensity distribution of individual temporal SASE spikes contained in each pulse, and are further confirmed by transmission-mode TAS measurements of the 2s2p resonance in helium at the Free-Electron Laser in Hamburg (FLASH). We would like to note that absorption line-shape modifications are not a unique feature of multi-electron resonances and can also be observed for one-electron transitions [15]. FEL pulse-length effects are expected to become important whenever the resonance lifetime and FEL pulse duration are comparable. Strongly coupled 1s 2 -2s2p transition in helium in intense SASE fields The measurement concept is illustrated in figure 1. In the case of the first dipole-allowed two-electron resonance in helium, a single photon is sufficient to excite both electrons simultaneously from the ground state (1s 2 ) into the doubly excited state 2s2p. With its energy (60.15 eV) above the single ionization threshold (24.6 eV), this is an autoionizing state, having a lifetime of 17 fs, after which the excitation almost exclusively decays into the single-ionization continuum of the groundstate helium ion He + [2,3]. Since this ionization can also happen directly, without involving the doubly excited state, the corresponding transition amplitudes interfere and the resonant absorption line shows the typical asymmetric Fano shape in the limit of weak (perturbative) XUV fields [7]. 
Dressing the system resonantly with a more intense XUV field, strong-coupling effects start to occur [10], and for the case of short pulses, the involved two-electron energy levels can be transiently shifted during the interaction with the external fields [15,24]. A SASE-based FEL pulse has a substructure containing coherent sub-spikes [29][30][31]. The minimum duration of these intensity spikes can be related to the averaged spectral bandwidth of the pulse ensemble according to the time-frequency Fourier relationship. For a 0.4 eV average spectral bandwidth at 60 eV, this corresponds to a temporal duration of about 5 fs, which is significantly shorter than the autoionization lifetime of the 2s2p excitation. The short-pulse dressing with an isolated and intense spike thus leads to an impulsive change in the initial phase Δϕ of the XUV-induced dipole response according to Δϕ ∼ ΔE · ΔT, where ΔE and ΔT are the energy level shift and pulse duration, respectively, due to the interaction with the spike. This phase shift Δϕ can be directly measured in a modified absorption line-shape q-parameter with ϕ = 2 · arg(q − i) [15], since its signature is measured in forward direction and imprinted on the transmitted FEL pulses. Thus, the XUV-induced strong-coupling regime can be quantified via the asymmetry of the measured absorption line shape, as parametrized by the well-known Fano q parameter [7]. Interestingly, the 17 fs lifetime of the 2s2p excitation in helium falls right between the typical intra-pulse spike (5 fs) and pulse-envelope duration (∼100 fs) at FLASH. The natural question arises: what is the influence of different pulseenvelope durations on this nonlinear excitation process driven by such statistical XUV pulses from a SASE FEL. Simulation A computational few-level model is employed to investigate the influence of SASE pulse duration onto the strongly driven double excitation, including only the involved resonant transitions and their coupling, which results in the Hamiltonian (1) Figure 1. Sketch of the measurement and physics scheme. (a) SASE-based FEL pulses of either short (I) or long (II) duration are focused into a moderately dense helium gas target, leading to a single-photon two-electron excitation of an autoionizing state. The transmitted XUV light contains a dipole response (illustrated with blue color) of the target, dominant for each strong feature of the SASE structure (illustrated with violet color). It is spectrally dispersed and recorded with an XUV-sensitive CCD camera. In combination with the reference spectra acquired before or without the target, the optical density (OD) is determined, shown here averaged for short (I) and long (II) pulses, respectively. (b) Energy-level scheme of helium. From the two-electron ground state (1s 2 ) the near-resonant photons couple either to the autoionizing two-electron resonance (2s2p, violet arrow), which decays via the configuration interaction (V CI , orange arrow) with a life time of 17 fs into the singly ionized continuum (1sEp), or it ionizes directly into the same continuum state (green arrow). As the final state of singly ionized helium is identical, both pathways interfere. Besides the ground state (E g = 0 eV) and the 2s2p doubly excited configuration state at energy E e , the continuum is modeled as an ultra-broad, thus fastly decaying state with a complex eigenenergy, being effectively the loss channel to this simulation. 
Its spectral position (E_c = 32.65 eV) and width (Γ/2 = 39.73 eV) as well as the dipole matrix element for direct ionization (d_gc = 0.6753 a.u.) are chosen such that the non-resonant absorption cross section is reproduced in the region of interest [32,33] for the limiting case of weak XUV pulses. The coupling of the doubly excited configuration state to the continuum state is described by the configuration interaction matrix element V_CI = 0.0373 a.u. This coupling, as well as the eigenenergy of the doubly excited configuration state (E_e = 60.12 eV) and its dipole matrix element to the ground state (d_ge = −0.04932 a.u.), are determined such that the model reproduces tabulated values for the 2s2p absorption line shape in the weak-field limit [2,34]. For the real-valued electric field E(t) to be representative of the XUV pulses of FLASH, which are generated by the SASE process, the partially-coherent pulse model [35] is utilized. The starting point is a flat-phased spectral amplitude of desired position and width corresponding to a typically averaged spectral intensity measured at a SASE FEL. This is combined with a fully random spectral phase and Fourier-transformed into the time domain. It is then windowed with a Gaussian envelope of a width corresponding to the required overall pulse duration. These pulses acquire a SASE-like stochastic structure in the time domain. The subspike features exhibit a duration on the order of 5 fs, corresponding to the bandwidth of the initial averaged spectrum of 0.4 eV full width at half maximum (FWHM). The Hamiltonian is inserted into the time-dependent Schrödinger equation (TDSE), which is solved numerically. After having solved the TDSE, the complex-valued time-dependent dipole moment d(t) is Fourier transformed back into the spectral domain, obtaining d̃(ω) = ∫ dt d⁺(t) e^(−iωt), and one evaluates the optical density OD(ω) after transmission through a moderately dense helium target with equation (2). Hereby the electric field spectrum Ẽ(ω) = ∫ dt E⁺(t) e^(−iωt) is also obtained from the Fourier transform of the complex-valued electric field E⁺(t). The interferometric superposition of the input spectrum Ẽ(ω) and the macroscopic polarization response η·d̃(ω) follows Maxwell's equations for electric fields propagating through the target medium [36], where the transmitted spectrum is measured in forward direction. The parameter η = 10⁻⁴ hereby represents the macroscopic particle density, interaction length and other fundamental constants. It is set sufficiently small to suppress propagation effects such that the resulting OD directly encodes the microscopic single-atom response. The ⟨. . .⟩ in equation (2) represents an average over an ensemble of 200 pulses of the same fluence but with different initial (randomized) phases to account for fluctuating SASE intra-pulse intensity spikes across the ensemble. Considering the stochastic nature of SASE pulses, the fluence, i.e. the temporally or spectrally integrated pulse intensity distribution, is a well-defined measure to compare different FEL pulses across the statistical ensemble. It is also the fluence which relates to directly accessible experimental parameters like the total pulse energy and the focal spot size. First the computational model is applied to perform a fluence scan for a short ensemble-averaged pulse duration of 30 fs (figure 2). The 2D plot shows significant line-shape changes toward a more symmetric line shape, validated by the lineouts (see figures 2(b)-(d)).
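A minimal numpy sketch of the partially-coherent pulse model described above (flat-phased average spectrum, fully random spectral phase, Gaussian temporal window), together with a quick check that a 0.4 eV average bandwidth corresponds to roughly 5 fs sub-spikes, might look as follows. The grid parameters and function name are assumptions, and the field is constructed in the rotating frame of the 60 eV carrier so that only the 0.4 eV-wide envelope needs to be sampled; fluence normalization across the ensemble is omitted for brevity.

import numpy as np

hbar = 0.6582                                         # eV fs
print(4 * np.log(2) * hbar / 0.4)                     # Fourier-limited Gaussian spike duration, ~4.6 fs

def sase_pulse(duration_fs, bw_ev=0.4, n=4096, dt=0.25, seed=None):
    """Partially-coherent SASE pulse (complex envelope around the XUV carrier)."""
    rng = np.random.default_rng(seed)
    t = (np.arange(n) - n // 2) * dt                  # time grid, fs
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)           # detuning grid, rad/fs
    sigma = bw_ev / hbar / (2 * np.sqrt(2 * np.log(2)))   # spectral std dev from the FWHM
    spec = np.exp(-w**2 / (2 * sigma**2))             # flat-phased averaged spectrum
    spec = spec * np.exp(2j * np.pi * rng.random(n))  # fully random spectral phase
    field = np.fft.fftshift(np.fft.ifft(spec))        # stochastic temporal sub-structure
    field = field * np.exp(-4 * np.log(2) * (t / duration_fs)**2)  # Gaussian envelope (FWHM)
    return t, field

t, e_short = sase_pulse(30.0, seed=1)                 # one member of a short-pulse ensemble
t, e_long = sase_pulse(150.0, seed=1)                 # one member of a long-pulse ensemble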
For the quantification of the absorption line-shape parameters, the shape is fitted with the normalized Fano profile [15]: with the reduced energy ε(ω) = 2(ω − ω r )/Γ , the asymmetry parameter q, the line width Γ , resonance position ω r , the amplitude a, and the offset b. The fit allows the quantitative extraction of the asymmetry q-parameter and the line strength a. By design the fit of the absorption line delivers the literature values in the weak-field limit (figure 2(d)), provided by synchrotron measurements of this transition with q = −2.75 and line width Γ = 37 meV [2]. With increasing fluence up to 15 J cm −2 the absorption line shape evolves toward a more symmetric profile up to q ≈ −7 (figures 2(b), (c) and (e)). This is in good agreement with previous experimental findings [24]. This fluence scan is repeated for a much longer average pulse duration of 150 fs (figure 3). In contrast to the case above, the line-shape asymmetry shows a significantly reduced variation with fluence (figures 3(b)-(d)). This is quantitatively confirmed by the fit of the Fano profile and the resulting q-parameter (figure 3(e)). Despite the markedly different behavior of the asymmetry of the absorption line, its amplitude shows an almost identical trend (compare figures 2(f ) and 3(f )). Hence, the strength of the absorption line appears to be independent of the pulse duration but rather correlates to the fluence of the pulses. This indicates that the total ionization yield, i.e., the reduced population of the ground state, which is independent of the SASE sub-structure of the pulse and only depends on the total pulse fluence, is the dominating contribution to the observed reduction in amplitude. Normalizing to the initial absorption amplitude in the weak-field limit, its decrease can thus serve as a fluence metric and as such enables a direct comparison with experimental results. In order to investigate the pulse-duration dependence of the line-shape asymmetry, a high fluence of 15 J cm −2 is selected. A set of different pulse durations between 20 fs and 180 fs are used for a systematic scan in the computational model. The resulting OD(ω) (see equation (2)) is fitted in the same way (see equation (3)) and the extracted asymmetry parameter q is plotted as a function of the averaged pulse duration ( figure 4). Moving toward shorter pulse duration we observe q to become increasingly negative. Non-trivial strong-coupling effects, which may change the balance between the transition into the doubly excited state and the ionization continuum, thus seem to be more relevant for short average SASE pulse durations. For a longer pulse duration we observe the asymmetry to change much more slowly, approaching the reported weakfield limit (q = −2.75), indicating that strong-coupling effects become less prominent here. It is to be noted that the fluence is the same here for all pulses and all durations. Since the overall pulse durations between 30 and 180 fs are all larger than the lifetime of the state, the explanation for the strong sensitivity of the q parameter on this duration can be found in the temporal (sub-)structure of the SASE pulses. As introduced above (see figure 1 and section 2), the individual spikes that are contained in the SASE pulse are much shorter than the excited-state lifetime (17 fs) as well as the average pulse duration. 
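For reference, a fit of the kind described above can be reproduced with scipy's curve_fit. The profile below uses the generic form a(q+ε)²/(1+ε²)+b with ε = 2(ω−ω_r)/Γ, which carries the same fit parameters (the exact normalization used in [15] may differ), and the synthetic data are only placeholders standing in for a measured OD trace:

import numpy as np
from scipy.optimize import curve_fit

def fano_od(w, q, gamma, wr, a, b):
    eps = 2 * (w - wr) / gamma
    return a * (q + eps)**2 / (1 + eps**2) + b

w = np.linspace(59.9, 60.4, 400)                      # photon energy, eV
od = fano_od(w, -2.75, 0.037, 60.15, 0.05, 0.2)       # synthetic line with the literature parameters
od = od + np.random.default_rng(0).normal(0, 0.002, w.size)

p0 = [-2.75, 0.037, 60.15, 0.05, 0.2]                 # literature values as the starting guess
popt, pcov = curve_fit(fano_od, w, od, p0=p0)
q_fit, gamma_fit = popt[0], popt[1]                   # asymmetry parameter and line width (eV)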
It is predominantly the interaction with a single spike that leads to a transient energy shift of the two-electron resonance, which is then encoded in a modified line-shape asymmetry. Therefore, a change in the average pulse duration effectively leads to a different number of temporal intrapulse spikes, the number of 'micro-experiments' so to speak. Keeping the fluence constant, the SASE pulse energy is then distributed over a larger number of individual spikes for increasing average pulse duration. This obviously causes a decrease of the peak intensity that is reached in each individual intra-pulse spike, thus explaining why the line-shape asymmetry is less affected in this regime. For a quantification of the bunched intra-pulse intensity distribution of the individual spikes, the histogram (figure 5) depicts the peak intensity probability comparing a short (30 fs) and long (150 fs) average pulse duration for an ensemble of 100 pulses each. This directly confirms the above explanation. For shorter pulses, peak intensities as high as 1.5 × 10 15 W cm −2 can be reached, while for longer pulses the peak intensity barely reaches above 5 × 10 14 W cm −2 and the fluence is distributed over many more low-intensity spikes. From previous work with 5 fs Gaussian pulses [24] we found that the upper 10 14 W cm −2 is the expected intensity regime where impulsive strong-coupling effects start to play a significant role. As we have shown here, SASE pulses with much longer average duration can contain several of such subspikes in that intensity regime. For a fixed fluence that will not cause ionization depletion before the peak of the pulse envelope is reached, strong-coupling XUV effects (e.g. as shown here, the modification of the q parameter) thus increase for shorter pulses. Experimental For an experimental demonstration of the computational findings above, a home-built XUV-TAS setup was connected to the beamline BL2 at FLASH. The FEL was operated in single-bunch mode at 10 Hz repetition rate. An ellipsoidal mirror focused the beam down to a spot size of a few ten micrometers onto a helium gas target with a path-length density on the order of 10 17 cm −2 . After transmission through this moderately dense target, the beam was spectrally dispersed with a variable-line-space grating onto a flat surface, where it was measured with a back-illuminated XUV-sensitive CCD camera. A parasitic reference spectrometer, installed in the FLASH beamlines [37], recorded online single-shot spectra before the target which are used for the experimental quantification of the OD. To control the fluence of the XUV pulses, the pulse energy was controlled with a facilitybased gas absorber filled with a tunable density of molecular nitrogen, which acts as a non-resonant absorber medium in the spectral region of interest. The resulting pulse energy was parasitically measureded for each individual pulse with a gas-monitor-detector (GMD) [38,39]. The pulse energy is distributed over several ten μJ and reaches an average of 75 μJ in the high pulse-energy setting, that is without any nitrogen absorption along the beam path. Considering losses of the XUV optics, the beamline transmission is estimated at 50%, leading to about 36 μJ pulse energy available on target. This corresponds to a fluence on the order of ∼10 J cm −2 . 
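The single-shot evaluation used in the experimental analysis described below amounts to a log-ratio of the transmitted and reference spectra followed by binning in GMD pulse energy; a numpy sketch with placeholder arrays (the array names, shapes and bin edges are assumptions) could read:

import numpy as np

rng = np.random.default_rng(0)
n_shots, n_pix = 1000, 512
S_ref = rng.uniform(0.5, 1.5, (n_shots, n_pix))            # online reference spectra (placeholder)
S_trans = S_ref * rng.uniform(0.3, 1.0, (n_shots, n_pix))  # transmitted spectra (placeholder)
E_p = rng.uniform(1.0, 75.0, n_shots)                      # GMD pulse energies, uJ (placeholder)

od = -np.log10(S_trans / S_ref)                            # optical density per shot
edges = np.linspace(0.0, 75.0, 9)                          # pulse-energy bins
idx = np.digitize(E_p, edges)
od_vs_energy = np.array([od[idx == b].mean(axis=0) for b in range(1, len(edges))])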
With an assumed constant focal spot size, which is reasonable since the nitrogen gas absorber has not been observed to modify the far-field beam parameters, fluence and pulse energy are directly proportional to each other. Since the pulse energy is the directly measured quantity during the experiment, it will be used as the equivalent parameter to the fluence in this section. The spectral position of the XUV pulse was tuned to 60 eV, energetically slightly below the 2s2p resonance, with a spectral bandwidth of 0.4 eV. Measuring the spectral intensity S_trans(ω, E_p) after transmission through the helium target, the OD is determined following Beer-Lambert's law as OD(ω, E_p) = −log10[S_trans(ω, E_p)/S_ref(ω, E_p)], where S_ref(ω, E_p) is the reference spectral intensity measured parasitically with the FLASH online spectrometer. All measured quantities are sorted by the pulse energy E_p according to the GMD reading. Hereby the gas absorber was used to increase the number of events for low pulse energy. The measured ensemble spectra are averaged over at least 200 events. Pulse-energy scans have been performed for two different sets of FEL machine parameters (figures 6 and 7) and show significantly different behavior. From typical FEL conditions at this pulse energy, the pulse duration of the first setting is estimated to be ∼75 fs [40]. The results of the OD over pulse energy (figure 6) show a clear modification of the line-shape asymmetry. With increasing pulse energy the Fano line-shape asymmetry changes from the weak-field limit q-parameter of q ≈ −2 toward a more symmetric shape around q ≈ −6 (figure 6(e)), while the normalized amplitude decreases to 0.3 (figure 6(f)). The second measurement (figure 7) was performed during a later campaign with a different set of machine parameters. By spectral analysis of the spectral intensity of individual FEL pulses, the pulse duration is estimated at 130 fs, whereas the other parameters (photon energy and on-target fluence) are comparable to the previous measurement. The recorded absorption lines show a similar decrease in line strength (amplitude a, compare figures 6(f) and 7(f)), but the asymmetry q-parameter (compare figures 6(e) and 7(e)) now follows a different trend. Indeed, the q-parameter remains approximately constant, which is compatible with the previous discussion of long average pulse duration. With the above-mentioned direct connection between amplitude a and fluence (i.e., the pulse energy), created by ground-state depletion, the on-target fluence is assumed equal for both experiments. Therefore, the influence of the average duration of the SASE pulse on the line-shape parameters can be confirmed also experimentally, and it is in qualitative agreement with the theoretical findings from above. Conclusion To summarize, we have demonstrated that the average pulse duration of SASE XUV pulses has a significant impact on XUV strong-coupling-induced line-shape asymmetry changes of the 2s2p double-excitation resonance in helium. The line strength (amplitude a) has been shown to mainly depend on the fluence and appears rather independent of the intrinsic temporal structure of the SASE pulses. The line-shape asymmetry q, by contrast, has been found to strongly depend on the average SASE pulse duration. We have explained this observation computationally with the stochastic intra-pulse spiked intensity structure, where the line-shape asymmetry is mainly impacted by the peak intensity of individual spikes.
Our findings are also in agreement with experiments utilizing different FEL parameters. In the future, the understanding of such XUV strong-coupling effects will be helpful for the interpretation of FEL-based spectroscopy and imaging studies at intermediate and up to very high intensities, and is not limited to autoionizing transitions. This understanding can also turn into tools for characterizing FEL pulses on one hand, and for understanding mechanisms in XUV/x-ray coherent control of atoms and molecules on the other hand.
5,332.6
2020-11-10T00:00:00.000
[ "Physics" ]
In vivo Anticoagulant Activity of Immediate Release Tablets of Dabigatran Etexilate Mesylate Cocrystals Dabigatran Etexilate Mesylate (DEM), a salt of prodrug dabigatran etexilate, is a potent, oral, reversible and direct thrombin inhibitor with low oral bioavailability. The present research investigation focused on the formulation of immediate release (IR) tablets of DEM cocrystals and evaluation of In vivo anticoagulant activity. The results of the study showed that the formulated IR tablets of DEM showed improved efficacy in comparison with the plain drug by enhancing the pre-compression parameters such as bulk density, tap density, Carr's index, angle of repose and Hausner's ratio and post-compression parameters like thickness and weight variation, hardness and friability, In vitro dissolution parameters. The improved efficacy was confirmed by improvement in the pharmacodynamic parameters such as cutaneous bleeding time and clotting time indicative of enhanced bioavailability of dabigatran. Thus, it can be concluded that the IR tablets of dabigatran cocrystals can be proven to be more effective in producing the anticoagulant effect in clinical practice as compared to the plain drug resulting in more patient compliance. Introduction Anticoagulant agents are used worldwide to regulate blood coagulation in both healthy and diseased conditions such as cancer, diabetes mellitus and cardiovascular diseases [1]. Despite the development, proven efficacy, and advancement of these drugs, most of them are associated with several undesirable adverse effects such as bleeding of mild or severe in vital organs, while some are associated with drug and food interactions [2]. Warfarin-based therapy requires constant drug plasma concentration monitoring and its indirect action mechanism affects various coagulation factors showing a high risk of bleeding [3]. Heparin and its fractional forms are not useful in the treatment of chronic thrombotic diseases due to its poor oral bioavailability and high molar mass [4]. Modern non-vitamin K antagonist oral anticoagulants achieve their effect through direct inhibition of key coagulation factors, such as thrombin or FXa [5]. They do not possess the aforementioned issues. Despite this, their medical use is associated with the risk of life-threatening bleeding events, which require urgent administration of a specific reversal agent [6]. Hence, research interest in the discovery of novel anticoagulant drugs from other chemical classes with less toxicity and fewer side effects has been increasing. 2 In vivo Anticoagulant Activity of Immediate Release Tablets of Dabigatran Etexilate Mesylate Cocrystals Dabigatran etexilate is a low-molecular-weight non-peptide prodrug that gets converted to its active form, dabigatran by ubiquitous esterases upon oral administration. It is a univalent, competitive, reversible and potent direct inhibitor of thrombin, the final effector in blood coagulation [7]. Thrombin molecule consists of one active site and two secondary binding exosites. Exosite 1 acts as a domain for binding of substrates such as fibrin to enhance orientation for binding to the active site, whereas, exosite 2 acts as a domain for binding of heparin. Dabigatran acts by binding to the active site of thrombin, thus inactivating both fibrin-bound and unbound thrombin [8,9]. The ability to inhibit fibrin-bound thrombin is an important theoretical advantage of dabigatran over the heparins because bound thrombin can continue to trigger thrombus expansion [10,11]. 
Other advantages of dabigatran in comparison with other anticoagulants include predictable anticoagulant effect, rapid onset of action, low potential for drug-drug and drug-food interactions and target-specificity towards coagulation enzyme. By inhibiting thrombin, dabigatran prevents the conversion of fibrinogen into fibrin, positive feedback amplification of coagulation activation, cross-linking of fibrin monomers, platelet activation and inhibition of fibrinolysis [12,13]. Dabigatran Etexilate Mesylate (DEM) is a salt form of the prodrug dabigatran etexilate. It belongs to BCS Class II drugs i.e., low solubility and high permeability class with absolute bioavailability of about 3-7 % followed by oral administration. It has an elimination half-life of 12-17 hours, with a dose of 75 mg, 110 mg, and 150 mg. It is prone to acid and in the presence of moisture, it undergoes degradation by hydrolytic pathways. DEM has strong pH-dependent solubility, in acid media, decreases with an increase in pH and is practically insoluble in basic media [14,15]. Hence, it is necessary to enhance the aqueous solubility and dissolution rate of DEM. It is also essential to maintain the stability of DEM in basic pH to achieve faster onset of action, minimize the variability in absorption and improve its overall oral bioavailability. A technique called "cocrystallization" has recently gained significant attention in drug delivery by improving the physicochemical properties of drugs such as melting point, solubility, dissolution rate, stability, and bioavailability without changing its chemical structure [16][17][18]. The attempt is made to formulate the immediate release tablet dosage form of DEM cocrystals to improve the rate of dissolution, absorption, bioavailability and efficacy of the DEM and to carry out its In vivo anticoagulant activity to confirm the same. Material Dabigatran Etexilate Mesylate was received as a gift sample from Microlab research lab Bangalore India. Dabigatran etixilate mesylate cocrystals are prepared in-house. Microcrystalline cellulose was received as a gift sample from Colorcorn Asia Pvt. Ltd., Goa, India. Crosspovidone was received as a gift sample from Concept Pharmaceutical Pvt. Ltd., Aurangabad, India. All other chemicals and reagents are of analytical grades. Animals For acute oral toxicity studies female Swiss albino mice (22-25 g) were used and for In vivo anticoagulant activity, Wistar rats (150-200 g) were used for In vivo anticoagulant activity. They were procured from the National Toxicology Centre, Pune. Animals were housed in polypropylene cages (38 cm × 23 cm × 10 cm) under standard laboratory conditions at 25 ± 2°C temperature, 60 ± 5% RH and 24 °C; 12:12 h dark/light cycle with free access to standard pelleted diet (Pranav Agro Industries Ltd., Sangli, India) and water ad libitum. Animals were fasted overnight on the day of study. Proper care and maintenance of the animals was undertaken following the guidelines of the Committee for Prevention, Control and Supervision of Experimental Animals, Govt. of India. Ethical Clearance All the studies involving animal experiments were carried out in accordance with the experimental protocols approved by the Institutional Animal Ethics Committee of School of Pharmacy, Dr. Vishwanath Karad, MIT WPU, M.S. India (Protocol No. MIP/IAEC/2019-20/M2/01). Preparation of DEM Cocrystals DEM co-crystals synthesis was performed using the solvent evaporation technique. 
Screening of formation of DEM co-crystals was performed by various coformers in an optimal molar ratio (1:1, 1:2 and 1:3). A mixture of 1:1 DEM and Tartaric acid was dissolved in ethanol and continually stirred at 40 -50 ºC for 15 min. The solvent evaporated at 60-65 ºC when stored for 3 h in a hot air oven. Cocrystals were triturated in mortar and pestle and stored at room temperature [18][19][20][21]. Formulation of IR Tablets of DEM Cocrystals IR tablets of DEM cocrystals were prepared by direct compression method using crospovidone (CP) as super-disintegrant to improve the dissolution of the drug. HPMC K4M was used as a binder. Microcrystalline cellulose (MCC) and lactose monohydrate were used as diluents, respectively. And Talc was used as a glidant. DEM cocrystals equivalent to 75 mg of DEM and all the excipients except magnesium stearate were taken in mortar. All the ingredients were co-ground in a pestle and motor and then aerosil and magnesium stearate were added and mixed for 10 minutes. Then powder blend was mixed well for 15 to 30 min. The blends were passed through the # 80 sieve. Lubrication was done using magnesium stearate. The final blend was compressed on a Remake Mini Press II D Tooling 8 station compression machine equipped with concave punches to a weight of 300 mg/tablet. The compressed tablets were evaluated for pre-and post-compression parameters [22]. Physical Evaluation of IR Tablets of DEM Cocrystals The prepared blends were evaluated for pre-compression parameters like bulk density, tap density, Carr's index, angle of repose, and Hausner's ratio and post-compression parameters like thickness and weight variation, hardness and friability, In vitro dissolution study [23]. Stability Studies of Tablets Studies were carried out for 45 days for the optimized batches of DEM cocrystals IR tablets at a temperature 40±2°C/ RH 75±5% [24] Acute Oral Toxicity Study of Tablets The acute oral toxicity test was performed using the Acute Toxicity Class (ATC) method according to the Organization of Economic Co-operation and Development (OECD) guideline 423. Female Swiss albino mice were weighed and randomly divided into two groups (6 mice /group). The first group served as the test group and was administered orally with the suspension of powdered DEM cocrystals IR tablets in distilled water at a single dose of 2000 mg/kg body weight. The second group served the control group and received only distilled water at a volume of 10 ml/kg body weight. Observations were noted at 1, 2, 4 and 6 h after administration of test substance and recorded systematically. The visual observations like changes in the skin and fur, eyes and mucous membranes were recorded. Further, respiratory, circulatory, autonomic and central nervous systems, as well as somatomotor activity and behavioral pattern, were observed. The number of survivors was recorded initially after 48 h and then a further 14 days with once-a-day observation [25]. Evaluation of In vivo Anticoagulant Activity of IR Tablets of Dabigatran Cocrystals Using Cutaneous Bleeding Time Model Wistar rats were divided into three groups (n=6). Group I was considered as control group and was administered with distilled water at a dose of 1 ml/kg. Group II was administered with the suspension of plain DEM in distilled water at a dose of 50 mg/kg orally while group III was administered with the suspension of powdered tablets from an optimized batch of IR tablets of DEM cocrystals at the dose of 50 mg/kg orally. 
After one hour, the rats of all groups were anesthetized by intraperitoneal administration of ketamine (100 mg/kg) and xylazine (10 mg/kg) and placed individually in a plastic rat holder with several openings from one of which the animal tail was taken out. The tail was cleaned properly with water-wetted cotton. Then incision (10 mm long and 1.5 mm deep) was made with a scalpel between 8 and 9 cm from the tip of the tail (Figure 1). The bleeding time was assessed at intervals of 15s [26]. Evaluation of In vivo Anticoagulant Activity of IR Tablets of Dabigatran Cocrystals Using Clotting Time Model Wistar rats were divided into three groups (n=6). Group I was considered as control group and was administered with distilled water at a dose of 1 ml/kg. Group II was administered with the suspension of plain DEM in distilled water at a dose of 50 mg/kg orally while group III was administered with the suspension of powdered tablets from the optimized batch of IR tablets of DEM cocrystals at the dose of 50 mg/kg orally. After one hour, the rats of all groups were anesthetized by intraperitoneal administration of ketamine (100 mg/kg) and xylazine (10 mg/kg) and blood was withdrawn into the capillary tube through retro-orbital route to fill 3/4 th of the capillary tube. Clot formation was checked after every 30 seconds by breaking a piece of a capillary tube and slightly stretching apart the two ends of the broken capillary tube. The time at which a thread-like structure called fibrin extends between the two ends of the capillary tube is noted down as clotting time [26]. Statistical ANALYSIS The results were expressed as Mean + SEM (n=6). Comparison between the groups was made by one-way analysis of variance (ANOVA) followed by Tukey's Kramer Multiple Comparison test using Instat Graph Pad software (version-3). Evaluation of Pre-Compression Parameters of IR Tablets of DEM Cocrystals Bulk density was found in the range of 0.291±0.012 to 0.363±0.014 g/cm 3 , tapped density between 0.346±0.014 and 0.514±0.023 g/cm 3 , using the above two density data, Hausner's Ratio and Compressibility Index (CI) were calculated. The powder blends of all formulations with Hausner's ratio <1.25 indicated better flow properties. The compressibility index was found to range between 0.01 and 0.22% and the compressibility and flowability data indicated an excellent flowability of all powder blends. The better flowability of all powder blends was also evidenced from the angle of repose (in the range of 25.66 ±2.16 to 30.45 ±1.49) which is below 40 θº, indicating good flowability. Evaluation of Post-Compression Parameters of IR Tablets of DEM Cocrystals Tablet weights varied between 282 ±0.06 and 310 ±0.06 mg, hardness between 5 and 7 kg/cm 2 (average 6 kg/cm 2 ), thickness between 3.20 and 3.60 mm (average 3.4mm) and friability ranged from 0.04% and 0.13% (average 0.40%). The results of drug content and physical examination of all formulations were found to be within the official limits. Among all the 13 batches of formulations Batch F8 showed all the results within specification and was considered to be the optimized batch. In-vitro Dissolution Study of IR Tablets of DEM Cocrystals The data showed that the optimized batch shows a good dissolution profile and the same was selected for further study for in vivo activity. The optimized batch showed the highest drug release 100.5 % at 60 minutes when compared to all other batches. 
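Two of the routine calculations described in this section, the flowability indices obtained from the bulk and tapped densities and the one-way ANOVA with Tukey's post-hoc comparison, can be reproduced with standard open-source tools. The pairing of the reported density endpoints and the individual animal values below are hypothetical (the group values are only placed near the means reported in the results); the snippet is a sketch of the analysis, not the GraphPad computation actually used.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# flowability indices from one pair of the reported density values (pairing assumed)
bulk, tapped = 0.291, 0.346                      # g/cm^3
hausner = tapped / bulk                          # ~1.19, below the 1.25 good-flow threshold
carr = (tapped - bulk) / tapped                  # ~0.16 compressibility index (as a fraction)

# hypothetical bleeding times (s), n = 6 per group
control = np.array([45, 48, 46, 49, 47, 47])
plain_dem = np.array([102, 108, 104, 106, 103, 107])
cocrystal_ir = np.array([155, 160, 156, 158, 154, 159])
f_stat, p_value = stats.f_oneway(control, plain_dem, cocrystal_ir)   # one-way ANOVA
values = np.concatenate([control, plain_dem, cocrystal_ir])
groups = ['control'] * 6 + ['plain DEM'] * 6 + ['DEM cocrystal IR'] * 6
print(round(hausner, 2), round(carr, 2), p_value)
print(pairwise_tukeyhsd(values, groups))                              # pairwise post-hoc test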
The obtained results of In vitro drug release showed a relationship between the binder and disintegrant concentration and the In vitro release of the DEM cocrystal IR tablets. Stability Study of IR Tablets of DEM Cocrystals All physical parameters of the optimized batch of tablets were found to be within the standard range when kept under accelerated stability conditions during the stability studies, indicating that the formulation showed good stability. Acute Oral Toxicity Test of IR Tablets of DEM Cocrystals The results showed that immediate release tablets of DEM cocrystals were found to be safe, without any mortality or morbidity, up to the dose of 300 mg/kg. Effect of IR Tablets of DEM Cocrystals on In vivo Anticoagulant Activity Using Cutaneous Bleeding Model Bleeding time is the time elapsed from the moment the tail is incised to the first arrest of bleeding. Compared with the control group, all treatments significantly (P<0.001) prolonged the cutaneous bleeding time. Untreated animals (control group) showed a mean bleeding time of 47 secs, while the animals treated with the test drug suspensions showed comparatively increased mean bleeding times. However, as compared to plain DEM (mean bleeding time 105 secs), the DEM tablets (mean bleeding time 157 secs) exhibited more significant (P<0.001) efficacy in enhancing the bleeding time, which correlated well with the pharmacokinetic data (Figure 2). Hence, we could conclude that the developed formulation exhibited better anticoagulation activity than the plain drug suspension by improving the oral bioavailability of DEM [8,18]. Effect of IR Tablets of DEM Cocrystals on In vivo Anticoagulant Activity Using Clotting Time Model Compared with the control group, all treatments significantly prolonged the clotting time (Figure 3). Hence, we could conclude that the developed formulation exhibited better anticoagulation activity than the plain drug suspension by improving the oral bioavailability of DEM [27,28]. Conclusion It can be concluded from the present study that the pH-dependent solubility and short absorption window of DEM were overcome by formulating it in cocrystal form and converting it into an IR tablet formulation. The formulation showed significant improvement in the rate of dissolution, absorption and bioavailability of DEM, which in turn improved its In vivo anticoagulant activity by virtue of enhanced cutaneous bleeding time and clotting time. Thus, the improvement in the pharmacokinetic and pharmacodynamic parameters of DEM would enhance the efficacy and increase patient compliance.
3,558.8
2022-01-01T00:00:00.000
[ "Materials Science" ]
The influence of twist angle on the electronic and phononic band of 2D twisted bilayer SiC Silicon carbide has a planar two-dimensional structure; therefore it is a potential material for constructing twisted bilayer systems for applications. In this study, DFT calculations were performed on four models with different twist angles. We chose angles of 21.8°, 17.9°, 13.2°, and 5.1° to estimate the dependence of the electronic and phononic properties on the twist angle. The results show that the band gap of bilayer SiC can be changed proportionally by changing the twist angle. However, there are only small variations in the band gaps, with an increment of 0.24 eV by changing the twist angle from 5.1° to 21.8°. At the four considered twist angles, the band gaps decrease significantly when fixing the structure of each layer and pressing the separation distance down to 3.5 Å, 3.0 Å, 2.7 Å, and 2.5 Å. A noteworthy point is that the pressing also makes the band gap linearly smaller at a certain rate regardless of the twist angle. Meanwhile, the phonon bands are not affected by the value of the twist angle. The optical bands are between 900 cm−1 and 1100 cm−1 and the acoustic bands are between 0 cm−1 and 650 cm−1 at all four twist angles. Introduction Thanks to the successful preparation of graphene, two-dimensional (2D) honeycomb structures have attracted a great deal of research interest in recent years.1,2 Over the past two decades, many research groups have investigated various honeycomb-structured materials, such as single-walled and multi-walled carbon nanotubes and fullerene spheres. 3][6][7][8][9][10] One of the most extraordinary discoveries about twisted bilayer graphene (TBLG) is the "magic angle" of 1.1°, where the TBLG becomes a superconductor with nearly flat bands.4,7,11,12 Based on this, many studies have attempted to investigate the twisted bilayer structure of other twisted 2D materials, such as black phosphorene,13 blue phosphorene,14 and molybdenum disulfide.15 This research has shown that twisted bilayer structures can alter the band structure, with different twist angles affecting the bandwidth and causing variations in the direct and indirect band gaps. Changing the band gap results in changing the sensitivity of the material to photons. Thus, the twisted bilayer systems are commonly used in optoelectrical devices such as photosensors/photodetectors, [16][17][18][19] transistors 20,21 and even solar cells. 22,235][26] The 2D SiC is a semiconductor with a wide band gap of 2.30-2.55 eV, but in the nanoribbon form, SiC behaves as a zigzag-type metal and an armchair-type semiconductor.27,28 Different from the buckled 2D materials, silicon carbide (SiC) has a planar structure with sp2-characterized bonding. 291][32] The graphene-like planar structure and the band gap flexibility in response to stress or strain 33 make SiC an ideal conventional material for multilayer twist-based applications.
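For orientation, the moiré length scale that these twist angles correspond to can be estimated with the standard small-angle relation λ = a/(2 sin(θ/2)); the lattice constant is derived here from the 1.79 Å Si-C bond length quoted in the methods, and the atom-count estimate is only approximate (exact commensurate cells can be larger):

import numpy as np

a = 1.79 * np.sqrt(3)                               # 2D SiC lattice constant, angstrom
for theta in (21.8, 17.9, 13.2, 5.1):
    lam = a / (2 * np.sin(np.radians(theta) / 2))   # moire period of the twisted bilayer, angstrom
    atoms = 4 * (lam / a)**2                        # rough atom count per moire cell (2 atoms/cell/layer)
    print(theta, round(lam, 1), int(round(atoms)))

The 5.1° estimate of roughly five hundred atoms per cell is consistent with the cell sizes discussed later in the paper.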
In general, these research results show that stacking and twisting bilayers is a promising route to improve the electronic, photonic and phononic properties of SiC. This motivates our team to investigate the twisted bilayer SiC (TBSC). We used the density functional theory (DFT) approach to study the fundamental properties of TBSC via the twist angle. We calculated the electronic and phonon bands with respect to four twist angles, 21.8°, 17.9°, 13.2° and 5.1°, and extracted the change tendency of the band gap and phonon band when the twist angle or interlayer distance changes. The results clearly show that twisting and pressing the bilayer can tune the electronic character of TBSC, but not the phonon band. General method We used first-principles calculations to study the effect of twist angle on the electronic and phonon properties of bilayer silicon carbide. We used the Spanish Initiative for Electronic Simulations with Thousands of Atoms (SIESTA) software 34,35 to perform our DFT calculations. First, we constructed a 704-atom SiC monolayer with a Si-C bond length of 1.79 Å 36,37 and obtained the relaxed coordinates of the lattice (Fig. 1). Then, we stacked two relaxed SiC layers in the AA-stacked configuration and rotated them by a certain relative angle. In all cases, we chose the Si-Si stacking position as the origin of twisting (see Fig. 2). We found some special twist angles that allow us to extract the respective unit cells of these systems with an appropriate number of atoms (see the following section for details). At each twist angle we analyzed the energy and phonon band structures of the relaxed and fixed unit cells. To estimate the influence of the interlayer distance on the electronic band structure, we performed DFT calculations on SiC bilayer systems with different interlayer distances. The initial separation distance of the two SiC layers is 2.22 Å. After relaxation and obtaining the optimized separation distance, we set up several unrelaxed unit cells with all atoms fixed and decreased the separation distances until the twisted SiC bilayer complex becomes a conductor. DFT parameters SIESTA is based on the linear combination of atomic orbitals.34,35 The exchange-correlation functional used in the DFT calculation is the generalized gradient approximation of Perdew, Burke, and Ernzerhof. 38][41] Higher-level theories, hybrid functionals, or the GW method are usually employed to obtain more precise band gap values.41 However, the band structure is only marginally affected by the dispersive interactions, and this study mainly focuses on the variation of the band gap. Thus, the GGA-PBE exchange-correlation functional is still suitable for comparing the band gaps between models. The pseudopotential used is generated by the improved Troullier-Martins scheme 42 with three core and four valence orbitals. We also employed a double-zeta polarized basis set with a split norm of 0.3. The k-point grid is generated from the Monkhorst-Pack (MP) scheme. In the SiC monolayer relaxation we used a k-point grid of (3 × 3 × 1), but in all twisted bilayer models the (5 × 5 × 1) grid was used. The grid cutoff energy is 200 Ry and the electronic temperature in the Fermi-Dirac function is 300 K.
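A small numpy helper reflecting the bilayer construction described above (AA stacking, rotation of the upper layer about the Si-Si stacking site, initial separation of 2.22 Å) might look as follows; the coordinate layout and the function name are illustrative only, and the real input would be the relaxed monolayer coordinates exported from SIESTA.

import numpy as np

def build_twisted_bilayer(layer_xyz, phi_deg, origin_xy=(0.0, 0.0), separation=2.22):
    """Return the bottom layer and a rotated, lifted top layer (coordinates in angstrom)."""
    phi = np.deg2rad(phi_deg)
    rot = np.array([[np.cos(phi), -np.sin(phi)],
                    [np.sin(phi),  np.cos(phi)]])
    top_xy = (layer_xyz[:, :2] - origin_xy) @ rot.T + origin_xy   # rotate about the Si-Si site
    top = np.column_stack([top_xy, layer_xyz[:, 2] + separation]) # lift by the interlayer distance
    return layer_xyz, top

# toy usage with two atoms; a realistic monolayer would contain hundreds of atoms
monolayer = np.array([[0.0, 0.0, 0.0], [1.79, 0.0, 0.0]])
bottom, top = build_twisted_bilayer(monolayer, 21.8)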
To obtain the optimized structure, we set the force tolerance to 0.02 eV Å−1 for the conjugate gradient run. To obtain the phonon band, we used the relaxed coordinates of each twisted model and set the atomic displacement to 0.04 Bohr radii. From the force-constant matrix, we deduced the phonon frequencies by using the package Vibra of SIESTA. Twist angle Obviously, the new lattice vectors must satisfy the inclusive lattice vectors of both layers and are chosen to be as small as possible. In other words, the new lattice vectors are the least common multiple of the inclusive lattice vectors. So, to find the unit cell of the twisted bilayer model, we started by calling the lattice vectors of the first layer (a1, b1) (the blue arrows in Fig. 2) and those of the upper layer (a2, b2) (the red arrows in Fig. 2). We determined the unit cell of both layers by finding the closest repeating points of the origin. Such a point would satisfy equation (1), where m and n are the smallest possible integers. Transforming eqn (1), we get the ratio in eqn (2), with φ as the twist angle. For each value of φ, we calculated the value of the right-hand side in eqn (2). Then we tried different values of m and n, but not more than 8, to limit the size of the unit cells. If the error between the right-hand side of eqn (2) and the m/n value is less than 0.001, we record this twist angle φ. By gradually increasing the twist angle, we repeated these steps and found certain values of φ, m, and n (listed in Table 1). These values are suitable not only for the SiC bilayer, but also for all materials with a hexagonal 2D structure. Because of the symmetry of the SiC layer, we only considered the φ values in the 0°-30° range. In this study, we investigated four twist angles, 5.1°, 13.2°, 17.9°, and 21.8°, which refer to the largest and three smallest unit cells shown in Table 1. Results and discussion The warp of relaxed twisted bilayer structure At all angles considered, the relaxed structures of the twisted SiC bilayer are warped like twisted 2D bilayer graphene 43 (see Fig. 3 and 4). The warps obtained at the twist angles of 21.8°, 13.2° and 5.1° are similar to each other. The top and bottom SiC layers have a symmetrical structure. At each corner of these unit cells, the upper layer has an upward peak, and the lower layer has a downward peak. In the upper layers, the atoms are gradually shifted downward as they move away from the peaks, creating two symmetrical concavities as shown in Fig. 3. According to the color shown in Fig. 3, the warp intensity is found to be similar in all three models. (Fig. 3: the height of atoms in the upper and lower layers of the 5.1° (a and b), 13.2° (c and d) and 21.8° (e and f) twisted models.) The situation is, however, different for the 17.9° twist angle model. In addition to the corners, there are two more peaks that emerge inside the unit cell (as shown in Fig. 4). It is worth noting that all peaks are paired and symmetrical. The peaks are always outward facing and are only found where the same sites of the SiC hexagon of both layers are stacked. Because we have chosen the Si atom as the origin of the rotation, the Si atoms in the four corners are directly stacked, forming four peaks each. In this model (i.e., the 17.9° twist angle model) the two peaks inside the unit cell are related to the stack of carbon atoms (at x = 32.5 and y = 19.7 in Fig. 4) and the centers of the hexagons (at x = 34.1 and y = 9.9 in Fig. 4).
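Returning to the commensuration search of the Twist angle subsection (m, n up to 8, tolerance 0.001), a minimal sketch is given below. Since eqn (2) is not reproduced in the text, the standard coincidence relation for hexagonal lattices, cos φ = (m² + 4mn + n²)/(2(m² + mn + n²)), is assumed here, and the search is organized directly over (m, n) instead of scanning the angle; it nevertheless recovers all four studied angles.

import numpy as np

angles = []
for m in range(1, 9):
    for n in range(m + 1, 9):
        cos_phi = (m*m + 4*m*n + n*n) / (2.0 * (m*m + m*n + n*n))
        phi = np.degrees(np.arccos(cos_phi))
        if 0.0 < phi < 30.0:                 # symmetry restricts phi to the 0-30 degree range
            angles.append((round(phi, 1), m, n))

print(sorted(angles))
# the list contains (5.1, 6, 7), (13.2, 2, 3), (17.9, 4, 7) and (21.8, 1, 2)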
In addition, the optimized bilayer separation distance increases as the twist angle decreases (see Table 2 and Fig. 6). This tendency is consistent with the case of twisted bilayer blue phosphorene,22 where the distance between two blue phosphorene layers decreases with the relative twist angle. Although that research 22 chose the AB-stacked configuration for twisting, the repeated AA stacking was also found in the moiré pattern. Thus, there is an equivalence between AA and AB stacking in the twisted bilayer systems. In addition, when the twist angle changes from 5.1° to the untwisted state, the interlayer distance slightly decreases by 0.11 Å. This could be the result of the disappearance of the moiré pattern, as the layers are no longer warped. The band structure of relaxed twisted bilayer SiC The electronic band structures shown in Fig. 5 and 6 and Table 3 illustrate the variation of energy levels with twist angle. As a result of the increasing number of atoms in the unit cell, the band structure is found to be densest in the 5.1° case and less dense in the 17.9°, 13.2°, and 21.8° cases, in that order. The bands just above and below the Fermi level are flat in all studied models, which has already been observed in other twisted materials.22,44 As a result, the direct and indirect band gaps are not so different from each other. The value is only slightly reduced as the twist angle decreases, as shown in Table 3 and Fig. 6. This indicates that twisting may not be an efficient method to tune the semiconducting property of the SiC bilayer. In addition, for a comprehensive commentary, we also considered the influence of the interlayer distance on the bandgap, which is shown in Table 4 and Fig. 7. When the interlayer distance is taken to be the same, the bandgap decreases as the twist angle decreases, just as in the case of the optimized distance. In addition, compressing the bilayer models also causes the band gap to decrease. When the interlayer distance decreases to 2.7 Å, the bandgap vanishes for the 5.1° case and is finite but less than 0.5 eV for the other cases, meaning that the twisted SiC bilayer complexes become conductors. This variation is also observed in the case of zero twist angle (untwisted). Because the zero twist angle structure has smaller band gap values, we calculated the gap at different interlayer distances to show the tendency, and the result is well approximated by a linear decrement (shown in Table 5 and Fig. 7). The increase in band gap is proportional to the increase in twist angle and interlayer distance. It may indicate that there is discernible charge transfer between the layers. Because the transfer of charge between the layers induces electrostatic interactions between them, the energy levels change: when charge from the lower layer transfers to the upper layer, the electrostatic potential of the upper layer increases and that of the lower layer decreases. This results in the energy level of the conduction band (upper layer) moving up and the energy level of the valence band moving down. Therefore, the band gap increases. As the twist angle and interlayer distance increase, the overlap between the layers decreases and the charge transfer increases, so the band gap increases further.
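The near-linear trend of the gap with interlayer distance can be quantified with a simple least-squares slope; the numbers below are placeholders standing in for the values of Table 5, not the published data:

import numpy as np

d = np.array([3.5, 3.0, 2.7, 2.5])          # interlayer distance, angstrom
gap = np.array([1.9, 1.2, 0.7, 0.3])        # hypothetical band gap values, eV
slope, intercept = np.polyfit(d, gap, 1)    # rate of gap change per angstrom of compression
print(round(slope, 2), round(intercept, 2))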
In general, this effect of the interlayer spacing squeeze on the band gap is more significant than the twist angle decrement. These results may suggest a theoretical way to tune the conductivity of the SiC bilayer by twisting and pressing. Furthermore, our results show an almost linear relationship between the band gap energy and the interlayer distance (shown in Fig. 7). A notable point is that the band gap decrement has a similar rate in all twist angle models. The phonon band Since the size of the considered models exceeds our computational capacity, we have used only the (1 × 1) unit cell system for the calculation of the phonon band and the comparison between them. Each twisted unit cell has a different number of atoms and a different symmetry, which results in a varying number of phonon branches in each configuration. In the case of the 5.1° twist angle structure, with 508 atoms in a unit cell, the phonon branches in Fig. 8 are so dense that it is difficult to distinguish individual branches. Overall, the phonon dispersion for all the twisted structures, except for the 5.1° twist angle, shows no imaginary branches, indicating that they are considered thermodynamically stable. In all cases of twist angles, these phonon bands are almost the same. The frequencies of the acoustic bands and lower optical bands are within the range of 0 cm−1 to 650 cm−1, while the higher optical bands range from 900 cm−1 to 1100 cm−1. Those four structures all exhibit a phonon band gap ranging from 650 cm−1 to 900 cm−1, created by the considerable mass difference between the two constituent atoms, Si and C. These results are in good agreement with other bare 2D-SiC phonon calculations, with the highest frequency of about 1100 cm−1 and a phonon band gap between 650 cm−1 and 900 cm−1.31,45 The unchanged phonon dispersion band indicates that the phononic property of the twisted bilayer SiC model is not affected by the twist angle. Conclusions We have chosen the angles of 21.8°, 17.9°, 13.2° and 5.1° to calculate the influence of the twist angle on the relaxed structure and the electronic and phonon bands of the twisted bilayer SiC model using DFT. The twist always causes a symmetric distortion of the optimized bilayer structures. Wherever two atoms of the same element form a line orthogonal to the lattice plane, two peaks appear. The interlayer distances of these peaks tend to decrease as the twist angle increases. Meanwhile, the values of the band gaps are proportional to the twist angles. When the unrelaxed bilayer models are pressed vertically, the band gaps decrease at the same rate regardless of the twist angle. Compared to the pressing effect, the bilayer twist shows less influence on the band gap decrement. However, the effect of the twist angle on the phonon property is trivial and negligible. Since this study can only deal with a limited number of cases, it is necessary to consider more twist angles and extend the model in the future. These would reveal more special and novel properties of this twisted bilayer SiC system.
Fig. 2 The closest repeated point in the bilayer model with the twist angle of 21.8°.
Fig. 4 The height of atoms in the upper (a) and lower layer (b) of the 17.9° twisted model.
Fig. 6 The values of direct and indirect band gaps and the optimized interlayer distance with respect to the twist angles.
Fig. 7 The relation between the band gap energy and the interlayer distance of twisted and untwisted SiC bilayer.
Table 1 The appropriate values of the twist angle and the respective m, n
Table 2 The optimized separation distances of the bilayer with respect to the twist angles. These values are represented by the Si-Si distance at the origin of rotation
Table 3 The direct and indirect band gap energies of the relaxed models with different twist angles
Table 5 The dependence of the direct band gap (eV) on the interlayer distance for the zero twist angle (untwisted). The dashes represent band gap energies which are smaller than 0.15 eV or do not exist, meaning that those structures are conductors
3,921.4
2023-10-31T00:00:00.000
[ "Materials Science", "Physics", "Engineering" ]
Detecting Community Structures in Networks by Label Propagation with Prediction of Percolation Transition

Though the label propagation algorithm (LPA) is one of the fastest algorithms for community detection in complex networks, the problem of trivial solutions frequently occurring in the algorithm affects its performance. We propose a label propagation algorithm with prediction of percolation transition (LPAp). After analyzing the reason for the multiple solutions of LPA, by transforming the process of community detection into a network construction process, a trivial solution in label propagation is considered as a giant component in the percolation transition. We add a prediction process of percolation transition in label propagation to delay the occurrence of trivial solutions, which makes small communities easier to find. We also give an incomplete update condition which considers both neighbor purity and the contribution of small-degree vertices to community detection to reduce the computation time of LPAp. Numerical tests are conducted. Experimental results on synthetic networks and real-world networks show that LPAp is more accurate, more sensitive to small communities, and has the ability to identify a single community structure. Moreover, LPAp with the incomplete update process can use less computation time than LPA, with nearly no modularity loss.

Introduction

A complex system from nature, society, or any other field can usually be represented as a complex network: a structure with vertices and edges between vertices [1][2][3][4][5][6][7][8][9]. Community structure is a very important property of complex networks, which is generally described as a group of vertices: the edges within a group are denser, and the edges between groups are sparser [2]. Even in weighted networks, though they may consist of a differentiated mass of connected vertices, there may still exist, as distinct communities, groups of vertices within which the edges are denser and between which the edges are sparser [3]. More and more algorithms have been proposed and developed to detect community structure, especially in recent years, such as the Girvan-Newman algorithm (GN) [2], spectral clustering [4], the spin-glass model [5], the algorithm proposed by Clauset, Newman, and Moore (CNM) [6,7], the partition method using integrating attributes of vertices [8], and extremal optimization [9]. The fastest of these algorithms for community detection is the label propagation algorithm [10]. Zhang et al. generalize LPA to weighted networks by calculating the probability of every label (WLPA) [11]. In a network, two vertices are called neighbors if they are connected by an edge. Suppose that every vertex has a label to denote the community it belongs to. If the largest number of a vertex's neighbors share the same label, that label is called the maximal label of the vertex. The LPA can be described as follows. Initially, assign each vertex a unique initial label. At each iterative step, each vertex updates its label to its current maximal label in a random order. When there are multiple maximal labels, the vertex will randomly pick one of them as its label. If the label of every vertex in the network is its maximal label, the algorithm will be terminated. This is LPA's asynchronous version. We will not consider the synchronous version because of the potential label oscillations as discussed in [10]. LPA has three remarkable features. The first feature of LPA is its near-linear time complexity.
For a network of n vertices and m edges, the time complexity of LPA is O(n + m). The second feature is that its capability of community detection is scale-independent. It is not affected by the resolution limit as the methods based on modularity are. The third famous feature of LPA is its randomness, which includes the random initial labels, the random order of label updates, and randomly picking one of the maximal labels as the vertex label when the maximal label is not unique. Due to the randomness in label propagation, when LPA is used to detect the communities in a network, no information about this network except its vertices and edges needs to be provided, and then multiple community structures are usually obtained. On one hand, the randomness makes those community structures which are hard to find with other fixed algorithms easy to detect by LPA; on the other hand, small communities are likely to be missed, and trivial solutions are more likely to be obtained [12]. To detect small communities, LPA must be run many times, which makes its first two advantages less apparent. After Newman introduced the modularity to measure the quality of network division, Barber and Clark proposed a modularity-specialized LPA (LPAm) to constrain the label propagation process [13][14][15]. Though it is prone to getting stuck in poor local maxima of modularity, LPAm is still a near-linear time algorithm [16]. Liu et al. introduce an advanced modularity-specialized label propagation algorithm by combining LPAm with a multistep greedy agglomerative algorithm, which is called LPAm+ [16,17]. LPAm+ does not cost near-linear time any more, but it is more stable than LPAm. However, due to the usage of modularity, the capability of the two algorithms will be affected by the resolution limit, though modularity is used as a key fitness indicator [18,19]. The community structure obtained using the LPA proposed by Leung et al. is still scale-independent because the algorithm does not involve modularity optimization [12]. The algorithm encourages stronger local communities by adding a decreasing score assignment for each label in the label propagation process, which successfully deters the occurrence of trivial solutions. Leung et al. also provide an idea to save running time by avoiding label updates of those vertices with high neighbor purity [12]. It does well in saving time but not in accuracy, because the neighbor purity condition ignores the contribution of small-degree vertices to community detection. In this paper, we propose LPAp by adding a prediction process of percolation transition, and introduce an incomplete update condition in the label propagation process to reduce the running time. According to the principle of percolation transition, when there are multiple maximal labels, we predict the sizes of the new communities and let the vertex choose the label which leads to the smallest community. LPAp is applicable to weighted networks. The incomplete update condition considers both the neighbor purity and the vertex degree. Then we apply it to community detection on synthetic networks and real-world networks. As we will show, LPAp is more accurate and can be faster than the original algorithm.
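For concreteness, a minimal sketch of the plain asynchronous LPA update loop described above (not the LPAp variant proposed in this paper) might look as follows; the dictionary-based graph representation and function name are illustrative assumptions, not code from the paper.

```python
import random
from collections import Counter

def label_propagation(adj):
    """Asynchronous LPA on a graph given as {vertex: {neighbor: weight}} (weight 1 if unweighted)."""
    labels = {v: v for v in adj}                 # unique initial label per vertex
    changed = True
    while changed:                               # stop when every label is already maximal
        changed = False
        order = list(adj)
        random.shuffle(order)                    # random update order at each iteration
        for v in order:
            votes = Counter()
            for u, w in adj[v].items():
                votes[labels[u]] += w            # neighbors vote for their labels
            if not votes:
                continue                         # isolated vertex keeps its label
            best = max(votes.values())
            maximal = [l for l, c in votes.items() if c == best]
            if labels[v] not in maximal:
                labels[v] = random.choice(maximal)
                changed = True
    return labels

# Toy example: two triangles joined by a single edge.
adj = {0: {1: 1, 2: 1}, 1: {0: 1, 2: 1}, 2: {0: 1, 1: 1, 3: 1},
       3: {2: 1, 4: 1, 5: 1}, 4: {3: 1, 5: 1}, 5: {3: 1, 4: 1}}
print(label_propagation(adj))
```

Because of the random tie-breaking, repeated runs of this sketch can return either two communities or a single "monster" community, which is exactly the behavior the paper sets out to control.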
Problem Description

An unweighted network of n vertices can be described by an n × n adjacency matrix A, whose element a_ij is 1 if vertices i and j are connected by an edge and 0 otherwise. For a weighted and undirected network of n vertices and m edges, denote the weight of the connection between i and j by w_ij; then the element of its n × n adjacency matrix is given by a_ij = w_ij if i and j are connected and a_ij = 0 otherwise. An unweighted network can be considered as a weighted network in which the weight of every edge is 1. For a weighted network, the degree of a vertex i can be given by k_i = Σ_j a_ij, and the number of edges can be represented by m = (1/2) Σ_i Σ_j a_ij. In most cases, the groups of vertices in a network identified by a community detection algorithm are assumed to be communities irrespective of whether these groups satisfy a specific definition or not, as mentioned in [6,8]. Then the quality of the network division is measured by modularity, whose value is in [0, 1] [13,14]. For a given network, the larger the modularity obtained by a method for network division, the better the quality of this division. Suppose a network is unweighted and it is divided into c communities. The number of edges in community s can be given by l_s = (1/2) Σ_{i,j ∈ s} a_ij, and the sum over all degrees of the vertices in community s can be given by d_s = Σ_{i ∈ s} k_i. The modularity can be calculated according to the sum of the contributions of every community [3,16]: Q = Σ_s [ l_s/m − (d_s/(2m))² ] (7). When the modularity is used to measure the quality of division for a weighted network, l_s should indicate the sum over all weights of the edges in community s. We can recalculate it as l_s = (1/2) Σ_{i,j ∈ s} w_ij, so that (7) can be applied for weighted networks [3]. Denote the label of vertex x by l_x. In LPA, after the random initial label association, for every vertex x, its label updates in a random order according to the following rule: l_x^new = arg max_l Σ_y a_xy δ(l_y, l) (9), where l_x^new indicates x's new label and δ(·, ·) is the Kronecker delta. The labels of all the vertices are updated iteratively until every vertex satisfies the following condition: l_x ∈ arg max_l Σ_y a_xy δ(l_y, l) (10), where l_x indicates x's current label. When a_xy is replaced by w_xy in (9) and (10), LPA becomes WLPA, which is applicable for weighted networks. Using LPA, a trivial solution is often obtained; that is, the whole network is divided into a single community structure. If a single community evolves to dominate the whole network in the community detection process, the community is called a "monster community", which eats small community structures [12]. If a monster community occurs in the label propagation process, a trivial solution is obtained in all probability. Sometimes the trivial solution indicates a network's indivisibility. There is a kind of homogeneous network whose edges are so dense that we cannot subdivide the network, and we call it a single community structure even though there is no community structure [14]. We cannot divide it into groups of vertices such that the edges in each group are denser than the edges in the network. An example of a single community network is a complete graph, any two vertices of which are neighbors [25]. A monster community obtained by LPA is meaningful in this case. The network should not be subdivided. In most cases, however, the trivial solution is meaningless for community detection; there are usually multiple communities in the network to be divided. For example, the ER random model is a kind of typical random network proposed by Erdős-Rényi in 1965 [26]. For an ER random model whose average degree is 4, when the network size is between 100 and 10000, LPA always identifies all the vertices in its giant connected component as a single community in [10]. ER random models are considered as homogeneous networks with no community [10]. The result of LPA seems to be right.
But in fact ER random models whose edges are dense tend to have a handful of large communities, while ER random models whose edges are sparse tend to have many small communities [5]. LPA does not find the real community structures in ER random models whose edges are sparse. For a network, when one of its real communities is not much smaller than any of the other real communities, a monster community can still occur and eat that real community in the label propagation process. Even when the sizes of the real communities are equal, a monster community can occur in the label propagation process. In Figure 1 we give a simple example of "monster community" occurrence in the label propagation process. For generality we give every edge the same weight 1. A letter in a circle indicates a vertex, and the number near the circle indicates the vertex's label. The network in Figure 1 is intuitively composed of two equal-size communities. According to (7), the modularity of the two-community division should be 0.36. The label update sequences of the vertices are a-e-b-c-d-f at iteration 1 and b-a-f-c-e-d at iteration 2. After vertex a updates its label to 4 at iteration 2, every vertex's label in the network is its current maximal label. The termination condition of the label propagation algorithm is satisfied, and the algorithm is terminated. It is noteworthy that vertex b chooses 4 as its label at iteration 2, which leads all the vertices to choose 4 as their labels in the end. The modularity can then be calculated from (7): since the whole network forms a single community, Q = 1 − 1 = 0. This modularity value is the minimum, so it is not a good division. But it is reasonable; the termination condition of LPA is satisfied. Due to the monster community, which occurs during the label propagation process, the right communities cannot be detected. We can infer that the probability of monster community occurrence can be decreased if we make a change in the label propagation process, so that the probability of obtaining trivial solutions is also decreased. There are two reasons why the community structures obtained by LPA are not unique: one is that the termination condition is a condition, not a measure to be optimized, and the other is its randomness. As mentioned in Section 1, the assignment of the initial labels is random, the sequence of label propagation at each iterative step is random, and when there are multiple maximal labels, the vertex picks one randomly. If we change the termination condition into a measure, or change the sequence of label propagation into a fixed sequence, the effect will be global. In other words, LPA may lose the capability of detecting multiple solutions. Thus we only make a local change to keep this precious feature of the label propagation process. Without changing the termination condition and the sequence of label propagation, we will give a priority list for the case of multiple maximal labels in the next section.

Prediction Process of Percolation Transition

When there is a path between two vertices, we call the two vertices "connected". Two connected vertices can be connected by one edge directly or by some other vertices and edges indirectly. If any two vertices in a graph are connected, the graph is called a connected graph. Achlioptas et al.
consider the percolation transition phenomenon in a random network construction process, which is known as the emergence of a giant connected component, and point out that "percolation transitions in random networks can be discontinuous" [26]. They start with isolated vertices and then add edges one by one. Which edge will be added is decided by a selection rule, and different rules will lead to different points in time when the giant connected component occurs. The percolation in network construction is successfully delayed by their nonrandom edge selection rule. According to the community detection process using LPA in the original network G0, we can construct a new network G1; G0 and G1 have the same vertices. If we consider two vertices with the same label in G0 as two connected vertices in G1, then we can transform the label propagation process in G0 into the network construction process in G1 as follows. The initial status of the new network G1 is also isolated vertices, because of the unique initial labels in the label propagation process. For any vertex x, denote its neighbor set in the original network G0 by N_x. In G1, vertex x chooses a vertex as its new neighbor from N_x according to the following rules. Rule 1. If there are no connected vertices in N_x, and there is no edge between x and any vertex from N_x, we add an edge between x and a vertex chosen from N_x randomly. Rule 2. If there are no connected vertices in N_x, and there is an edge between x and a vertex from N_x, we delete the edge between x and that vertex and add an edge between x and a vertex chosen from N_x randomly. Rule 3. If there exist one or more groups of connected vertices in N_x, and there is no edge between x and any vertex from N_x, we add an edge between x and a vertex randomly chosen from the largest one of these groups (the largest group). When the largest group is not unique, we first choose one randomly from these largest groups. Rule 4. If there exist one or more groups of connected vertices in N_x, and there are one or more edges between x and the vertices from the unique largest group, we do not add or delete any edge. Rule 5. If there exist one or more groups of connected vertices in N_x, and there is an edge between x and some isolated vertex from N_x, we delete the edge between x and that vertex and add an edge between x and a vertex chosen from the largest group randomly. When the largest group is not unique, we first choose one randomly from these largest groups. Rule 6. If there exist one or more groups of connected vertices in N_x, and there are one or more edges between x and the vertices of a group which is not the unique largest group, we delete the edges between x and this group. If this group is not connected after deleting those edges, we add edges between the vertices in this group to make it connected again, and then add an edge between x and a vertex chosen from the largest group randomly. According to these rules, the edges in G1 will be added, deleted, or kept in a random order iteratively until every vertex has a neighbor in G1. For a given G0 we can obtain multiple G1, but these G1 have the same connectivity. If G1 is a connected graph, all the vertices in G0 have the same label. Hence, a trivial solution obtained by LPA can be represented by a connected graph obtained by the network construction process, and a monster community in LPA can be represented by a giant connected component. The occurrence process of the monster community in Figure 1 can be transformed into the network construction process in Figure 3.
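To make the percolation picture concrete, the following small sketch (illustrative only, not code from the paper) tracks the size of the largest connected component with a union-find structure while edges are added one by one; this is exactly the quantity whose growth the edge selection rules try to delay.

```python
import random

def largest_component_history(n, edges):
    """Return the size of the largest connected component after each edge addition."""
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x

    history, largest = [], 1
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru                      # union by size
            size[ru] += size[rv]
            largest = max(largest, size[ru])
        history.append(largest)
    return history

# Random edge additions on 1000 vertices: the largest component typically stays
# small until roughly n/2 edges are present, then grows quickly.
n = 1000
edges = [(random.randrange(n), random.randrange(n)) for _ in range(n)]
print(largest_component_history(n, edges)[99::100])
```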
In the construction process of several different random graphs, the probability of the giant connected component occurring is nearly zero when the number of edges is less than n/2, but it increases rapidly when more edges are added [26,27]. The larger the current largest connected component, the earlier the giant connected component emerges and the earlier a connected graph forms. Accordingly, we need to avoid adding those edges which will lead to a larger connected component at earlier iterative steps. In the network construction process transformed from the label propagation process, the process of adding edges is not random, and the rule of adding edges encourages the formation of strong local communities. As we discussed in Section 2, changing the process greatly is not good for community detection. When there are multiple optimal edges according to the rule of LPA, local communities are not sufficiently encouraged in LPA, which is reflected in the random selection among multiple maximal labels. Thus we should strengthen local communities and constrain their expansion in LPAp. We show the edge selection rules in Figure 4. In the network shown there, the edge between a and b, the edge between b and c, the edge between d and e, the edge between b1 and b2, and the edge between b and b1 have been added. When vertex a chooses the vertex it will connect to, it can keep edge 1, or cancel edge 1 and then add edge 2, 3, or 4. When the rule of adding edges is that of LPA, vertex a will choose one of the edges 1, 2, 3, and 4 randomly, regardless of the label status on the outside of the circle. If vertex a chooses edge 1 or 2, a connected component of size 5 will form; if vertex a chooses edge 3 or 4, a connected component of size 3 will form. Intuitively, the probability of a giant component occurring when vertex a chooses edge 1 or 2 is greater than the probability when vertex a chooses edge 3 or 4. Accordingly, choosing edge 3 or 4 strengthens the local community and constrains its expansion. Hence, we add the prediction of percolation transition to delay the occurrence of monster communities in label propagation from iteration 2. When there are multiple maximal labels l_1, l_2, ..., l_k, we form a vector s = (s_1, s_2, ..., s_k) to record the sizes of the new communities that would result for every maximal label after propagating that label. Suppose that s_j is the minimum element of s; we then choose the label l_j as the label of vertex x. In Figure 5 we show how the added prediction process stops the occurrence of the monster community for the same update sequence b-a-f-c-e-d at iteration 2. As shown in Figure 5, due to the prediction process, when the label of vertex b is updated, the algorithm predicts the sizes of the potential communities after b's label update: 5 and 2. Then the algorithm chooses 3 as b's label. The same happens when the label of vertex a is updated. We cannot guarantee that the prediction process stops the occurrence of monster communities, but it does delay them.

Incomplete Update Condition

The added prediction process makes the running time longer, so we give an optional strategy to save time. A vertex is obviously within a community when many of its neighbors' labels are the same as its label. Leung et al. give the concept of neighbor purity for unweighted networks to measure this property of vertices [12]. Denote the neighbor purity of a vertex x by p_x = |{y ∈ N_x : l_y = l_x}| / |N_x|, where | ⋅ | is the measure (cardinality) of a set. According to the neighbor purity of vertices, Leung et al.
try to incompletely update the labels from iteration 2 to save running time [12]. They leave the labels of vertices whose neighbor purities are greater than a preset threshold un-updated. But the exchange for a shorter running time is a lower modularity. The modularity after incomplete update may decrease by 30% [12]. The reason is that, when only neighbor purity is used as the criterion for incomplete update, the labels of small-degree vertices may stop being updated from the earlier iterative steps. This incomplete update condition ignores the contribution of small-degree vertices to the whole community structure and performance. We first generalize the definition of neighbor purity to weighted networks. For a vertex x, its neighbor purity can be given by p_x = Σ_{y ∈ N_x, l_y = l_x} w_xy / Σ_{y ∈ N_x} w_xy. Now we consider the small-degree vertices in the incomplete update condition. Denote the degree of vertex x by k_x and the mean degree of the network by k_0; the incomplete update condition can be given by p_x · Sgn(k_x − k_0) ≥ γ (14), where γ is a preset threshold. For a given vertex x, if the incomplete update condition (14) is satisfied, then its label will not be updated. When x is a small-degree vertex, k_x − k_0 < 0, so its label will keep being updated until the global termination condition is satisfied.

Algorithm Description

For a given weighted and undirected network of n vertices with an adjacency matrix A, LPAp can be described in the following steps. (1) Initialize the labels of the vertices in the network: give every vertex a unique label randomly. (3) Generate a random permutation of the numbers from 1 to n without repetition and set an update mark to 1. (4) According to the sequence in the permutation, for every vertex x, find out the maximal label. (8) Calculate the size vector s and find out the minimum of s. (13) Run a breadth-first search to separate the communities which are disconnected but have the same label [10]. In LPAp, the termination condition is still a condition; thus the community structure may still not be unique. The ability to find communities which cannot be found by a fixed algorithm remains, due to its randomness. What we have done is to reduce the probability of trivial solutions while keeping the near-linear time complexity of the algorithm.

Experiments

In this section, we carry out community detection on synthetic networks and real-world networks to verify the performance of LPAp, using a tablet PC with 4 GB RAM and a 1.7 GHz dual-core processor running MATLAB 2012b. Synthetic Networks. LPAp is evaluated from three aspects: the sensitivity of detecting small communities, the ability to identify a single community structure, and the effect of incomplete update on the speed of community detection and the quality of network division [28]. The most commonly used synthetic networks for verifying the performance of community detection algorithms are computer-generated networks with 128 vertices, composed of 4 communities of 32 vertices each [6,29]. The degree of each vertex is 16 on average. Denote the mean number of edges between different communities by out. Set the threshold γ = 1. For each out, we first run LPAp on ten computer-generated networks and give the results of LPA and CNM for comparison as near-linear time algorithms. When the number of communities obtained by LPA or LPAp is 1, the algorithm has obtained a monster community. The fraction of vertices classified correctly (FVCC) is usually used as an accuracy measure of an algorithm on networks whose community structures are preset or known [6,13].
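As a reference for how this measure can be evaluated, a minimal sketch follows; the matching convention used here (each detected community is credited to the true community it overlaps most, so a subdivided true community still scores as correct) is one reasonable reading, not necessarily the exact definition of [6,13].

```python
from collections import Counter, defaultdict

def fvcc(true_labels, detected_labels):
    """Fraction of vertices classified correctly (FVCC), illustrative convention:
    each detected community is matched to the true community it shares most vertices
    with, and the vertices in that overlap count as correctly classified."""
    groups = defaultdict(list)
    for v, d in detected_labels.items():
        groups[d].append(v)
    correct = 0
    for members in groups.values():
        overlap = Counter(true_labels[v] for v in members)
        correct += overlap.most_common(1)[0][1]
    return correct / len(true_labels)

# Example: a detected split of one true community still gives FVCC = 1.0.
true = {1: "A", 2: "A", 3: "A", 4: "A", 5: "B", 6: "B"}
found = {1: 10, 2: 10, 3: 11, 4: 11, 5: 12, 6: 12}
print(fvcc(true, found))   # 1.0
```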
In Figure 6 we give FVCC by CNM, LPA, and LPAp, and the numbers of communities obtained by LPA and LPAp are given. Due to the randomness in the three algorithms, every algorithm will run ten times on each network, and each point we will show is an average. Error bars in Figure 6 indicate standard deviations. As shown in Figure 6, the community structure becomes fuzzy gradually until the whole network is identified as a single community by LPA when out increases. The process in LPAp is delayed and more communities are obtained, which makes LPAp more accurate than LPA and CNM. In Figure 7 we give a community detection process on an unweighted computer-generated network using LPAp. For the sake of clarity, the network in this case is a 32-vertex version, where out = 1, and the degree of each vertex is 4 on average. It is designed to consist of four 8-vertex communities. The vertices from the same community are given the same shape and the same color. As we can see, the network consists of four groups of vertices. The edges between the vertices 1-8, the edges between the vertices 9-16, the edges between the vertices 17-24, and the edges between the vertices 25-32 are denser than the edges between the four groups. We run LPAp on it to detect the four communities. After vertex 16's label is updated to 30 at iteration 3, the algorithm is terminated. LPAp identifies three communities exactly and subdivides the fourth 8-vertex community into two 4-vertex communities. The subdivision of a community is considered as right division using FVCC as an accuracy measure, so FVCC by LPAp is 100% in this case. The modularity of this result is 0.50, which is a little lower than the modularity of real division −0.52. Our algorithm is applicable to weighted networks; thus we give the weighted version of computer-generated networks by assigning each edge a random weight in [0, 1]. The weight is random for each edge. For each out , using this weighting method on the ten used unweighted networks, we obtain ten weighted computer-generated networks [30]. CNM is an algorithm which is applicable to weighted networks, and LPA has been generalized to weighted networks, so they can be used for comparison [6,11]. In Figure 8, we give FVCC by the three algorithms and the number of communities obtained by WLPA and LPAp. We run each algorithm ten times for each network, and each result is still an average. Error bars in Figure 8 indicate standard deviations. As shown in Figure 8, when out increases, the numbers of communities decrease gradually and FVCC accuracies of the three algorithms decrease. We get the similar results to the results of the unweighted case. When out = 7, the LPA has identified the network as a monster community. The random assignment of different weights breaks the preset community structures; thus the FVCC accuracies of these algorithms on weighted networks are usually lower than their results on unweighted networks, respectively. In Figure 9 we give a community detection process on a weighted computer-generated network using LPAp. Weigh every edge of the network in Figure 7 using 1, 2, 3, or 4 randomly to obtain its weighted version. To make it intuitive, the widths of lines indicate the weights of edges. The widths 1-4 represent the edge weights 1-4, respectively. For the sake of comparison, the initial label of every vertex and the label update sequences of vertices are the same as the labels and the sequences in Figure 7. 
The vertices from the same community are given the same shape and the same color. The number in a vertex indicates the number of the vertex, and the number near a vertex indicates its label. As shown in Figure 9, the community structures in the weighted network are not as clear as the community structures in the unweighted network. We run LPAp to detect the four preset communities. After vertex 16's label is updated to 28 at iteration 3, the algorithm is terminated. LPAp identifies two communities exactly and subdivides two 8-vertex communities into four smaller communities. FVCC by LPAp is still 100% in this case. The modularity of this result is 0.49, which is much lower than the modularity of the real division, 0.54. As mentioned in [11], different weights assigned randomly may break the equilibrium within a community or mix different community structures, which makes communities harder to find. LPA and WLPA tend to find a monster community, and this is the reason why their FVCC accuracies decrease so fast. By adding the prediction process in LPAp we can delay the occurrence of the monster community, and small communities are easier to detect. That is the reason why the result of LPAp is more stable, though the standard deviations of LPAp are a little larger than CNM's. An efficient algorithm should not divide a single community network into more communities, even though the algorithm may tend to find small community structures. We use three kinds of networks to verify the ability of LPAp to identify a single community structure. The first one is the complete graph. This network is a typical single community structure. The second one is the ER random graph (ER model) whose mean degree is 4. It is not connected, and we use its giant connected component [26]. The last one is the computer-generated network (CG network) with out = 8, which has four fuzzy communities. The number of communities detected by an algorithm on these networks can indicate its capability of identifying a single community. In order to make the results more intuitive, the sizes of the complete graph and the ER random models are preset as 128. In the second column and the third column of Table 1, we give the numbers of communities, respectively, using LPA and LPAp on one complete graph, ten ER random models, and ten computer-generated networks. In the fourth column and the fifth column of Table 1, we give the numbers of communities, respectively, using WLPA and LPAp on one weighted complete graph, ten weighted ER random models, and ten weighted computer-generated networks. The weighting method is assigning each edge a random weight in [0, 1], and the weight is random for each edge. The threshold is still preset as 1. We run every algorithm ten times for every network. The data in Table 1 are presented in the form μ ± σ, where μ is the mean number of communities and σ is the standard deviation of the number of communities. As shown in Table 1, the numbers of communities obtained by LPAp are much closer to the actual number than LPA's. On an ER random model the number of communities obtained by LPAp always varies strongly, because the communities are not strong enough and these random networks are different. As mentioned in [5], the ER random model whose edges are sparse tends to have many small communities. On the ER random model, LPAp tends to identify many small communities while LPA and WLPA always obtain a trivial solution.
Though LPAp is more sensitive in detecting small communities, the result of LPAp for complete graphs is still the real result of 1 community. This phenomenon shows that LPAp still has the ability to identify a single community, even when the network has been weighted. The communities of the weighted ER models and CG networks are slightly different from their unweighted versions. That is because the assignment of random weights changes the structures of the networks. To evaluate the effect of incomplete update on the speed of community detection and the quality of network division, we give three different versions of LPAp. (1) LPAp without the incomplete update condition: we set γ = 1 at (6) of LPAp in Section 5 to make the vertices update their labels completely, and denote this version of LPAp by LPAp-c. (2) LPAp with an incomplete update condition which only considers neighbor purity: we change the condition p_x · Sgn(k_x − k_0) ≥ γ into p_x ≥ γ at (6) of LPAp in Section 5 to make all the vertices update their labels incompletely irrespective of their degrees, and denote this version of LPAp by LPAp-N. We run LPA, LPAp-c, LPAp-N, and LPAp-k on the ten used unweighted computer-generated networks with out = 5, and then run WLPA, LPAp-c, LPAp-N, and LPAp-k on the ten used weighted computer-generated networks with out = 5. In Table 2 we list the running time (in seconds), the mean modularity value Q_avg, and the standard deviation of the modularity value by these algorithms, respectively, when γ = 0.5. As shown in Table 2, the added prediction process does use extra time, while the mean modularity increases significantly. The incomplete update condition which only considers neighbor purity brings a remarkable decrease of running time, with a lower modularity than LPA or WLPA. Denote the running time of LPA/WLPA and LPAp-k by t_0 and t, and the modularity of the network division using LPA/WLPA and LPAp-k by Q_0 and Q. We use t/t_0 and Q/Q_0 to measure the effect of the incomplete update condition on the running time and the quality of network division. In Figure 10 we give t/t_0 and Q/Q_0 for different incomplete update thresholds γ and k_0 = 16 on the ten used unweighted computer-generated networks and the ten used weighted computer-generated networks with out = 5. Each point we show is an average, and the error bars are smaller than the points. As shown in Figure 10, when γ > 0.5, LPAp-k is more accurate than LPA and WLPA. In the incomplete update process invalid propagation of labels is avoided, and that brings a significant decrease in running time. It is worth noting that when γ = 1, LPAp-k is LPAp-c, and it uses a little extra time to obtain a much higher mean modularity. Thus the incomplete update process is not something we have to do, because LPAp-c already does well in accuracy and speed. Certainly, if we need a faster algorithm, the incomplete update process will make LPAp-k less time-consuming.

Real-World Networks. There are two ways to verify the performance of a new algorithm for community detection. One is running it on synthetic networks such as those in Section 6.1, and the other is running it on real-world networks. In Table 3 we describe six commonly used real-world networks and their basic properties. Of the six real-world networks in Table 3, the club network, the football network, and the dolphin network are usually used for showing the concrete results of community detection, as networks whose community structures are known. The weighted versions of the club network and football network are commonly used too.
In Zachary's karate club network, 34 vertices represent the members of a university karate club, and 78 edges represent their social relations. There are two real communities in club network. In US college football network, the vertices represent college teams, and an edge between two vertices indicates there is one or more games between the two teams during the regular seasons in 2000 [2]. There are 11 conferences and 5 independent teams as real communities of football network. In the weighted football network, there are three edges whose weights are 2 in the 613 edges. At first, we run LPAp and LPA on unweighted club network ten times and run LPAp and WLPA on weighted club network ten times. In Table 4 we give the maximal modularity, the mean modularity, and the standard deviation of modularity obtained by these algorithms on unweighted version and weighted version of club network, respectively. As shown in Table 4, the modularity of solutions by LPAp is more stable than LPA's, and the mean modularity of LPAp's solutions is higher than LPA's. In Figure 11 we also give the graphical communities by LPAp on unweighted club network as a concrete result. The vertices which are divided into the same community are given the same shape and the same color in Figure 11. The dotted line in the middle provides a real division of the club network. Though the controversial vertex 10 is classified wrongly, the modularity of the solution in Figure 11 is 0.42, which is much higher than 0.37-the modularity of its real community structure. Next we will execute the same algorithms on unweighted football network and weighted football network ten times, and in Table 5 we give the maximal modularity, the mean modularity, and the standard deviation of modularity obtained by these algorithms on unweighted version and weighted version of football network, respectively. In Table 5 we get a similar result to Table 4, and the solutions by LPAp are more stable and accurate than LPA's. That is because the prediction process delays the occurrence of monster community which will lead to a low mean modularity. In Figure 12 we also show a concrete result of LPAp on unweighted football network. The vertices with the same color and same shape are from the same conference, and the vertices in the same box are divided into the same community by the algorithm. The modularity of the solution shown in Figure 12 is 0.60, which is higher than 0.55-the modularity of its real community structure. The graphical result of LPAp is the same as the result we get by WLPA in [11]. However, we can get the result by directly running LPAp several times without aggregate, while the same result is obtained by aggregate of multiple results using WLPA for many times. Finally, we compare FVCC with Girvan-Newman algorithm (GN), LPA, the algorithm using eigenvectors of matrices (EV), spin glass model (SG), LPAm, and LPAp on three real-world networks whose community structures are known in Table 6 and list the mean modularity and number of communities by these algorithms on six real-world networks for comparison in Table 7 [2,4,5,9,10,15,16]. Due to the randomness in LPA, LPAm, and LPAp, each result of these algorithms is an average over ten runs for each network. As shown in Tables 6 and 7, though the quality of network division by LPAp is a little shy of the modularity for SG, it is still higher than the modularity by most other algorithms, such as CNM, EV, and LPAm. Moreover, FVCC by LPAp is as high as SG. 
It is worth noting that SG is a complex algorithm whose time complexity is O(n³), while LPAp is a near-linear time algorithm.

Conclusions

In this paper we propose a label propagation algorithm with prediction of percolation transition and an incomplete update condition to detect community structures in complex networks. The algorithm keeps the near-linear time complexity. Compared with LPA, LPAp is more stable, more efficient, and, if necessary, faster. The following features of the algorithm can be demonstrated. (1) By considering the label status between two vertices as an edge status, the community detection process can be transformed into a network construction process. Accordingly, the occurrence of a monster community in community detection can be delayed by delaying the giant connected component in the network construction process. (2) Delaying the occurrence of an uninteresting monster community provides more opportunity for normal communities; thus adding the prediction process makes LPAp more sensitive to small communities, while the ability to identify a single community is maintained. (3) LPAp with the incomplete update condition, in which the contribution of small-degree vertices is considered, can shorten the computation time to less than 2/3 of that of the original label propagation algorithm, with nearly no modularity loss.
9,259.6
2014-07-07T00:00:00.000
[ "Computer Science" ]
Capacitive DC links in power electronic systems-reliability and circuit design : Capacitive DC links are an important part in voltage source power electronic converters, which contribute to cost, size and failure rate on a considerable scale. With more and more stringent constraints brought by industrial applications, the capacitive DC links encounter reliability aspect challenges. This paper presents a review on the reliability design and improvement of capacitive DC links from three aspects: 1) Quantitative reliability prediction for DC-link capacitors; 2) Reliability-oriented design of passive DC-link capacitor banks; and 3) Advanced active DC links to exceed the limits of passive DC-link capacitors. Key solutions for each aspect are highlighted and discussed with case studies. This review serves to provide a picture of state-of-the–art research on the reliability design and improvement of capacitive DC links, highlight the key milestones in this area, and identify the corresponding challenges and future research directions. Introduction Capacitive DC links are widely used in power electronic systems to filter the harmonic currents, buffer the instantaneous power difference between the input source and output load, and minimize the voltage variation in the DC link [1] . In three-phase applications such as Adjustable Speed Drive(ASD), and Wind Turbine(WT) systems, the instantaneous power is six times that of the fundamental frequency under grid voltage balanced condition, and two times that of fundamental frequency under unbalances [2][3] . In single-phase rectifier or inverter applications such as Photovoltaic(PV), and Fuel Cell(FC) system, the conversion between DC and AC power will typically introduce a double fundamental frequency pulsation power and ripple voltage harmonics at the DC link of the power conversion system [4] . The low-frequency voltage harmonics are detrimental to the DC side utility of the converter, deviate Maximum Power Point Tracking (MPPT) in a renewable energy system, and impact the power quality and reliability of the power grid [5][6] . In order to decouple the impact between the two stages connected through DC links, capacitive DC links are applied. The most commonly used passive DC links are capacitive ones, which are one of the highest failure rate components in power electronic systems and contributes to more than 20% failures in certain applications [7] . From the system-level aspect, capacitor is the bottleneck of the power electronic systems [8] . The failure probability contribution of a DC-DC converter system discussed in [8] is shown in Fig.1. It reveals that the DC-link capacitors contribute to the highest failure probability among other components and mostly determine the lifetime of the power electronic system. Fig.1 Failure probability of components and system in a DC-DC converter application [8] With more stringent reliability constraints brought by automotive, aerospace, and energy industries, the design of DC links encounters the following challenges [1,9] : ①Capacitors are one kind of the stand-out components in terms of failure rate in field operation of power electronic systems; ② Cost reduction pressure from global competition dictates minimum design margin of capacitors without undue risk; ③Capacitors are to be exposed to more harsh environments (e.g., high ambient temperature, high humidity, etc.) 
in emerging applications, and ④ Constraints on volume and thermal dissipation of capacitors with the trends for high power density power electronic systems. From the capacitor end-user perspective, the effort of overcoming the challenges can be divided into three categories, which are reviewed in this paper: ① Electro-thermal lifetime modeling to support model-based sizing of capacitors [10][11][12][13][14][15]; ② Multi-objective optimization of passive capacitor banks in terms of cost, size, efficiency, and reliability [16][17][18][19]; ③ New capacitor concepts based on active switching circuits [20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35]. The first effort is an analysis tool to predict the reliability performance of the capacitive DC links. Based on the reliability assessment, reliability-oriented design and optimization solutions for the applied passive capacitor bank can be provided. In the third effort, active DC links with power electronic circuits to exceed the physical limits of the passive DC-link capacitors are reviewed. The challenges and opportunities for future research directions are finally addressed. The structure of this paper is as follows: Section 2 presents the reliability prediction of capacitors with physics-of-failure understanding; Sections 3 and 4 present the advanced technologies for reliability improvement in terms of passive and active DC links, followed by the conclusions.

Lifetime prediction of DC-link capacitors

DC-link capacitors can fail due to intrinsic and extrinsic factors, such as design defects, material wear-out, operating temperature, voltage, current, moisture, mechanical stress, and so on. Generally, the failures can be divided into catastrophic failures due to single-event overstress and wear-out failures due to the long-term degradation of capacitors. The state-of-the-art methods for lifetime prediction of capacitors for wear-out failures can be divided into two categories: lifetime prediction for a constant operating condition and lifetime prediction for a long-term mission profile.

Lifetime prediction for constant operating condition

For a constant operating condition, lifetime prediction with a simplified lifetime model is commonly used in power electronic applications. The most widely used lifetime model for capacitors is shown in (1), which describes the influence of temperature and voltage stress [1]:

L = L0 × (V0/V)^n × exp[(Ea/KB)(1/T − 1/T0)]  (1)

where L and L0 are the lifetime under the use condition and the test condition, respectively, V and V0 are the voltage at the use condition and the test condition, respectively, and T and T0 are the temperature in Kelvin at the use condition and the test condition, respectively. Ea is the activation energy, KB is Boltzmann's constant 8.62×10⁻⁵ eV/K, and n is the voltage stress exponent. Therefore, the values of Ea and n are the key parameters to be determined in the above model. In [36], Ea and n are found to be 1.19 and 2.46, respectively, for high dielectric constant ceramic capacitors. In [37], the ranges of Ea and n for MLC-Caps are 1.3~1.5 and 1.5~1.7, respectively. The large discrepancies could be attributed to the ceramic materials, dielectric layer thickness, testing conditions, etc. With the trend for smaller size and thinner dielectric layers, MLC-Caps will be more sensitive to the voltage stress, implying a higher value of n. Moreover, under different testing voltages, the value of n might be different, as discussed in [38].
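A small numeric sketch of evaluating the lifetime model (1) is given below; the stress values and the chosen Ea and n are hypothetical illustrations, not parameters taken from [36][37][38].

```python
import math

K_B = 8.62e-5  # Boltzmann's constant in eV/K

def lifetime(L0, V, V0, T, T0, Ea, n):
    """Lifetime model (1): inverse power law in voltage combined with an Arrhenius
    temperature term. L0 is the tested lifetime at voltage V0 and temperature T0
    (temperatures in Kelvin)."""
    return L0 * (V0 / V) ** n * math.exp((Ea / K_B) * (1.0 / T - 1.0 / T0))

# Hypothetical example: 2000 h rating at 85 °C and rated voltage,
# operated at 55 °C and 80% of the rated voltage.
L = lifetime(L0=2000, V=0.8, V0=1.0, T=55 + 273.15, T0=85 + 273.15, Ea=1.2, n=1.6)
print(f"predicted lifetime ≈ {L:.0f} h")
```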
Lifetime prediction for long-term mission profile

In the lifetime prediction method shown in Section 2.1, the operating condition is assumed to be constant. However, in real power electronic applications, the conditions always change with the environment (e.g., ambient temperature, relative humidity and vibration), the user's behavior (e.g., loading conditions and input variations) and the status of the system itself (e.g., parameter variation and device degradation) [11]. In recent years, lifetime prediction for a long-term mission profile has been proposed [8][9][10][11]. Differing from the lifetime prediction under a constant operating condition, this method takes into account the long-term and variable loading conditions. Furthermore, various sources of uncertainty exist (e.g., tolerances in component parameters, modeling errors). Therefore, a statistical approach based on Monte Carlo simulation is applied [39]. Compared with the method in Section 2.1, the lifetime prediction for a long-term mission profile is more closely related to the real operating condition. The lifetime estimation procedure is shown in Fig.2 (mission profile based lifetime prediction procedure [11]). It includes three major steps: electro-thermal loading analysis, damage accumulation, and Monte Carlo simulation based variation analysis. A mission profile (i.e., ambient temperature, loading condition) is applied as the input. The output is the lifetime of the capacitor with a certain confidence level (e.g., 90%).

Electro-thermal loading analysis

Thermal stress is critical to capacitor wear-out. The ripple current and the ambient temperature are the contributors to the capacitor hot-spot temperature. For electrolytic capacitors, the dominant degradation mechanisms are the electrochemical reaction in the oxide layer and the electrolyte vaporization [40][41]. The thermal stress leads to an increase of the Equivalent Series Resistance (ESR) over time. In particular, the increase of capacitor power loss causes a higher operating temperature inside the capacitor. The hot-spot temperature of the capacitor, which is affected by the current stress and the ambient temperature, is given by [11]

T_h = T_a + R_ha × Σ_i ESR(f_i) × I_rms²(f_i)  (2)

where T_h is the hot-spot temperature, T_a is the ambient temperature, R_ha is the equivalent thermal resistance from hot spot to ambient, ESR(f_i) is the equivalent series resistance at frequency f_i, and I_rms(f_i) is the RMS value of the ripple current at frequency f_i.

Damage accumulation

Linear and nonlinear accumulated damage models have been developed to describe the real damage progress [11,13]. The wear-out of the capacitor is indicated by an increase of ESR. Damage is then defined as the ratio of the instantaneous to the final ESR growth. As an example, a nonlinear model can be formulated that accounts for the effects of these processes without a specific identification of them; in this model, the exponent q is a function of the lifetime L and material constants, and l_i and L_i are the instantaneous equivalent operating time and the total lifetime under the same loading condition, respectively. By accumulating the damage, the dynamic stresses are converted into static values for each type of temperature stress. Taking the accumulated damage into the lifetime model, the equivalent hot-spot temperature can be derived.

Monte Carlo analysis and lifetime prediction

The application of the lifetime model results in a fixed accumulated damage.
It is far from reality, since the capacitor parameter variations and the statistical properties of the lifetime model are ignored. In field operation, the time to end-of-life of the capacitors could vary within a range due to the tolerance in physical parameters and the difference in the experienced stresses. Therefore, a statistical approach based on Monte Carlo simulation is applied [8][9][10][11]. The sensitivity of the lifetime to temperature- and tolerance-related parameters can be evaluated individually or collectively. Finally, the distribution of the end-of-life of the capacitors can be obtained, allowing a lifetime analysis with a specified confidence level.

Closed-loop modelling process

Along with the damage accumulation, the capacitance reduction and ESR increase lead to an increase of the DC-link voltage ripple and a change of the DC-link current, which accelerate the degradation process of the capacitor. A feedback loop is considered in the lifetime prediction procedure to represent the accelerated degradation.

Advanced passive DC links - reliability-oriented design for capacitor banks

For applications where a single capacitor cannot fulfill the voltage rating or capacitance requirements, a capacitor bank is used as the energy buffer by connecting several capacitors in parallel for larger capacitance, or in series for a higher voltage rating. In ultra-compact converters with cost constraints, there are some design challenges for the capacitor banks: ① Uneven temperature distribution among the capacitors inside the bank due to thermal coupling and uneven boundary conditions, which leads to part of the capacitors aging more severely [17][18][19]; temperature estimation based on a single capacitor becomes over-simplified; ② Long-lifetime capacitor series can be used to improve the reliability of the capacitor bank, however, at the expense of compromised performance in cost, power density, etc. [40][41]. At the beginning of reliability-oriented design, the thermal loading distribution of the capacitor bank needs to be predicted, so as to acquire the lifetime of each individual capacitor. Considering the self-heating and thermal coupling effects, the lumped thermal model of the capacitor bank can be written in matrix form as [18]

T_i = Σ_{j=1..n} Z_{i,j} · P_{loss,j} + T_{a,i},  i = 1, 2, …, m  (4)

where T_i (i = 1, 2, …, m) is the monitoring point temperature, P_{loss,j} (j = 1, 2, …, n) is the power loss of each capacitor, T_{a,i} (i = 1, 2, …, m) is the ambient reference temperature at the monitoring points, and Z_{i,j} (i = 1, 2, …, m and j = 1, 2, …, n) is the coupling thermal impedance between the monitoring point and the reference point. In particular, Z_{i,j} (i = j) is the self-heating thermal impedance. From the lifetime model shown in (1), it can be seen that the rated lifetime, the voltage stress and the thermal stress are the key factors affecting the capacitor lifetime. It is worth noting that if the voltage stress is below the rated voltage, it introduces a negligible effect on the lifetime. Therefore, the design variables considered in this paper are the rated lifetime and the thermal stress of each capacitor [18].

Lifetime matching of individual capacitors

Capacitor manufacturers provide products with different classes of rated lifetime. The useful lifetime of a capacitor depends on both its rated lifetime and the actual stress conditions. In a capacitor bank with multiple capacitors, the thermal coupling among the capacitors varies with physical location. The electro-thermal stresses of the individual capacitors may therefore differ.
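To show how the lumped thermal model (4) feeds the per-capacitor lifetime evaluation, here is a minimal numerical sketch; the thermal impedances, losses, and ambient values are hypothetical and only chosen to mimic a hotter middle position in a small bank.

```python
import numpy as np

# Hypothetical steady-state values for a 3-capacitor bank (model (4): T = Z @ P_loss + T_a).
Z = np.array([[12.0, 2.0, 1.0],
              [2.0, 12.0, 2.0],
              [1.0, 2.0, 12.0]])        # self-heating / coupling thermal impedances, K/W
P_loss = np.array([0.8, 1.0, 0.8])      # power loss of each capacitor, W
T_a = np.array([45.0, 50.0, 45.0])      # local ambient reference temperatures, °C

T_hot = Z @ P_loss + T_a
print("hot-spot temperatures (°C):", T_hot)

# Lifetime relative to a 70 °C hot spot, using only the temperature term of model (1).
Ea, K_B = 1.2, 8.62e-5
L_rel = np.exp((Ea / K_B) * (1.0 / (T_hot + 273.15) - 1.0 / (70.0 + 273.15)))
print("relative lifetime vs. 70 °C reference:", np.round(L_rel, 2))
```

With these placeholder numbers the middle capacitor comes out hottest and hence has the shortest relative lifetime, which is the imbalance the matching methods below are meant to correct.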
One way to match the lifetimes of the individual capacitors is to select capacitors with different rated lifetimes to configure the capacitor bank. The optimized variables are the rated lifetimes of the individual capacitors L_rated,1, L_rated,2, ..., L_rated,m, and the optimization target is to minimize the lifetime difference among the capacitor cells; the mathematical model is given in [18], in which X is the temperature variance, L_m and L_rated,m are the lifetime and the rated lifetime of the individual capacitor m, respectively, and L_Target is the lifetime target of the capacitor bank. A capacitor bank with nine electrolytic capacitors connected in parallel is used as a case study [18]. For the conventional solution, 2000 hour rated lifetime series products are used for all individual capacitors. Based on the lifetime prediction shown in Section 2.1, the lifetime of each individual capacitor can be obtained: the capacitor at the middle of the bank has a lifetime of only 4.7 years, which cannot reach the 5 year lifetime target, as shown in Fig.3. Based on the lifetime matching method, a 3000 hour rated lifetime series product is used for the middle capacitor in the hybrid bank. Therefore, all the capacitors can satisfy the 5 year lifetime target with a slight rise in cost. In Fig.3(a), the conventional design: each individual capacitor uses a 2000 hour rated lifetime product, so that the capacitor at the middle has a 4.7 year lifetime, which is lower than the 5 year target. In Fig.3(b), the hybrid design: a 3000 hour series product is used for the middle capacitor and the others use 2000 hour series products, so that all the capacitors can reach the 5 year lifetime target [18].

Thermal stress matching

Power loss is the source of the thermal stress, and it is determined by the current spectrum and the ESR of each capacitor. The current spectrum in the low-frequency bandwidth depends on the capacitance (or the impedance at the specified frequency) of the capacitor, because of current sharing among the capacitors. The current of the specified capacitor can be obtained as

i_{C,k} = [C_k / (C_1 + C_2 + … + C_n)] · i_Bank  (7)

where n is the number of capacitors in the bank, k denotes the k-th capacitor in the bank, C_n is the capacitance, and i_Bank is the total current of the capacitor bank. The ESR is also related to the capacitance, which is given by [40]

ESR = tanδ / (2πfC)  (8)

where tanδ is the Dissipation Factor (DF) and δ is the loss angle. Based on (7) and (8), the power loss of the k-th capacitor can be derived, and it can be seen that the power loss of the capacitor is linear with the capacitance. Therefore, the hot-spot temperature can be obtained based on the power loss and the proposed lumped thermal model. It is worth noting that, within the same product series, the case size normally changes with the capacitance, which further affects the thermal impedance because of the variation of the heat spreading area. Therefore, the temperature redistribution method should combine different downsized product series, which provide different capacitances for the same case size, in one capacitor bank. In that case, the power loss is the only variable corresponding to the hot-spot temperature, as well as to the lifetime, of each individual capacitor. The optimization model is defined with the capacitance of each individual capacitor in the bank as the optimized variable. A case study is presented in Fig.4. With the same capacitance of 470 µF/450 V for each individual capacitor, the thermal loading distribution is uneven.
Based on the proposed thermal stress matching method, the capacitance of each individual capacitor can be optimized to balance the temperature; four 750 µF/450 V, four 620 µF/450 V and one 390 µF/450 V capacitors are used. The total capacitance of the optimized solution is the same as the design target, and the cost stays at a comparable level. Overview of active DC links To go beyond the limits of power density, capacitance, voltage rating, reliability, and cost, various new capacitor concepts with the aid of active switching circuits have been proposed. The majority of the applications are for DC links, i.e., active DC links. Active DC links are realized with switching devices and significantly reduced passive components (e.g., capacitors, inductors, or both) [20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35]. Their performance depends largely on the active switching circuits and less on the dielectric materials and manufacturing constraints that govern passive capacitors. This provides a new perspective for optimizing reliability, cost, or power density under less-compromised constraints compared to a conventional passive capacitor bank design. The typical active DC-link configurations, in which an auxiliary circuit (Aux) is connected with the main circuit (Main), are shown in Fig.5 [25]. AB is the DC terminal of the main circuit and CD is the AC terminal. A_aux B_aux is the input terminal of the auxiliary circuit and C_aux D_aux is the output terminal. An energy storage element is connected to the output terminal to balance the instantaneous power. Because the buffer capacitor is not directly connected to the DC link, which has a voltage-ripple constraint, the buffer capacitor can be reduced by allowing a large voltage ripple across it. Figs.5(a)~(c) show the solutions connected in series with the main circuit on the DC side. When a ripple current flows through the DC-link capacitor, the auxiliary circuit generates a compensating voltage ripple in order to keep the DC-link voltage ripple ratio within the system specification. Fig.5(d) shows the auxiliary circuit connected in parallel with the main circuit on the DC side. If there is a ripple current on the DC link, the auxiliary circuit can be implemented as a current source to compensate the current directly. Ideally, no current or voltage ripple is then observed on the DC link. Figs.5(e) and 5(f) present the auxiliary circuit connected with the main circuit in series and in parallel on the AC side. The instantaneous power can be compensated directly on the AC side by means of an instantaneous power calculation, mitigating the ripple on the DC side. Following the topology derivation method in [25], different topologies with active DC links can be obtained based on the general structures in Fig.5. A two-terminal active capacitor Although many active DC links have been proposed in recent decades, they are rarely used in commercial products because of issues with cost, efficiency, and complexity. In order to make an active DC link more practical, a two-terminal active capacitor has been proposed with the same level of convenience in use as passive capacitors and the feasibility of achieving better performance in cost, reliability, and/or power density [23].
The two-terminal active capacitor has the following features: ① it has two terminals only, without any additional connection, thanks to the proposed control method and the applied self-powered circuit, making it possible to package it as a conventional capacitor from the end-user's perspective; and ② it has impedance characteristics equivalent to a bulky capacitor, or to a variable capacitor within a certain frequency range, depending on the control and switching frequency of its active switches for the applications of interest. In principle, there are various choices of passive elements and active circuit architectures for the active capacitor. A cost benchmarking of different active DC-link solutions for a 2.2 kW single-phase inverter application is presented in [25,42,43]. The capacitors, inductors, and semiconductor switches used in the inverter are sized with the same design margins and to the same system-level specifications (e.g., lifetime, output current total harmonic distortion). The results reveal that a few types of active DC links can achieve a lower inverter design cost compared to a passive DC link in the scenario of a relatively high reliability requirement, which is relevant to many industry applications. In particular, the solutions having a series-connected auxiliary circuit [45,46] are the most cost-effective ones. The methods presented in [45] and [46] enable the lowest design cost since the auxiliary circuit processes only the ripple voltage of the capacitor connected in series with it and the ripple current of the DC link. However, none of these active capacitive DC links can be used as a plug-and-play active capacitor, since they have more terminals than a conventional capacitor, e.g., connections to an external power source for the gate drivers and controller, and/or external feedback signals from the main circuits. The circuit diagram of the two-terminal active capacitor is shown in Fig.6. v_AB and i_AB are the voltage and ripple current of the active capacitor, respectively. It consists of active switches, passive elements, a sampling and conditioning circuit, and a self-powered controller and gate drivers. There are only two power terminals, A and B, making it as convenient as a conventional passive capacitor from the application point of view. As shown in Fig.6, the full-bridge circuit processes the ripple voltage and ripple current of C_1 only, implying a low VA rating. A voltage control strategy is proposed based on the internal voltage signals v_C1 and v_C2 only, as shown in Fig.6, which does not require any current information from external circuits. It therefore enables fully independent operation of the active capacitor without any feedback signals from external circuits. The control objective is to shape the impedance seen from the AB terminals into that of an equivalent passive capacitor of interest. The experimental prototype of the two-terminal active capacitor is shown in Fig.7. Based on the specification of the case study, the impedance curves of the active capacitor and of the comparable passive capacitor are shown in Fig.8. For frequencies of 120 Hz and above, the impedance of the active capacitor is equivalent to or lower than that of a passive capacitor with 34.4 J of rated energy storage. This implies that the active capacitor can achieve the same or even better harmonic filtering with 16.9% of the energy storage of a passive capacitor.
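A rough back-of-the-envelope sketch of why such a large energy-storage reduction is plausible: the energy that actually has to be cycled each line period in a single-phase system is P/(2πf_line), whereas a passive DC-link capacitor must store far more because only a small voltage ripple is tolerated. The 60 Hz line frequency and 400 V DC-link voltage below are assumptions for illustration only and are not taken from the cited benchmark.

```python
import numpy as np

# Illustrative figures for a 2.2 kW single-phase DC-link application
P, f_line, V_dc = 2200.0, 60.0, 400.0     # W, Hz, V (f_line and V_dc assumed)

# Minimum energy that must be buffered each cycle (double-line-frequency ripple)
dE = P / (2 * np.pi * f_line)             # ~5.8 J

# A passive capacitor holds far more energy than it actually cycles,
# because only a small voltage ripple is allowed on the DC link.
E_passive = 34.4                           # J, rated energy storage of the passive solution
C_passive = 2 * E_passive / V_dc**2        # equivalent capacitance at 400 V

print(f"ripple energy per cycle : {dE:5.1f} J")
print(f"passive stored energy   : {E_passive:5.1f} J  ({dE/E_passive:.1%} is actually cycled)")
print(f"equivalent capacitance  : {C_passive*1e6:5.0f} uF at {V_dc:.0f} V")
```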
Fig.6 An implementation of the two-terminal active capacitor with a voltage control strategy [23] (v_C1, v_C2 and v_C3 are capacitor voltages; G_HPF(s) and G_LPF(s) are the high-pass filter (HPF) and the low-pass filter (LPF), respectively; G_2(s) is the voltage controller for stabilizing v_C2). Fig.7 Prototype of the two-terminal active capacitor. Fig.8 Bode diagram of the impedance of the active capacitor in the DC-link application [23]. Conclusions The scientific challenges and existing studies on capacitive DC links are discussed in this paper. Among other things, the reliability aspect is especially addressed. Electro-thermal and lifetime modeling of capacitors to support model-based sizing and optimization of capacitor banks is presented. Two ways of matching the useful lifetime of individual capacitors in a bank are briefly introduced with a case study. Besides passive solutions, the concepts and corresponding implementations of active capacitors are reviewed. A DC link with a two-terminal active capacitor, representing the state of the art in active capacitive DC-link solutions, is demonstrated in a case study. From the authors' perspective, further research is needed to address the following scientific challenges in the topics discussed in this paper: (1) Better mission profile data are needed to enable a better analysis of the actual stress levels of capacitors in power electronic applications. While extensive research has been done on the modeling of the ripple current stress and the internal temperature rise of capacitors, little work has addressed the modeling of the capacitor ambient temperature, which is affected by the heat dissipation of adjacent components, the cooling system, and the enclosure design, besides the environmental conditions. The capacitor bank thermal modeling discussed in this paper is an example of this effort. System-level thermal modeling and long-term mission profile data are essential to the analysis of the capacitor ambient temperature profile. (2) There is still a lack of studies on the impact of humidity and mechanical stresses on the wear-out of capacitors, and on the catastrophic failure of capacitors due to single-event extreme stress. Besides the thermal-related failure mechanisms presented in this paper, an understanding of the above failure mechanisms will help achieve a more comprehensive reliability analysis of capacitors. (3) Solid proof of the reliability performance of active capacitive DC links is absent in the literature, even though theoretical analysis shows the potential benefits. The two-terminal active capacitor concept not only enables the same level of convenience in use as passive capacitors, but can also be used to perform accelerated degradation testing, which provides an opportunity for an experimental comparative study of the reliability performance of active and passive DC links.
5,741.4
2018-09-01T00:00:00.000
[ "Engineering" ]
Pythagorean Theorem Reporting Category Geometry Topic Working with the Pythagorean Theorem Primary SOL 8.10 The student will a) verify the Pythagorean Theorem; and b) apply the Pythagorean Theorem. The Pythagorean Theorem This is not strictly algebra, but it's an interesting cross-reference between equation solving and geometry. On any right-angled triangle, we say that the hypotenuse is the edge facing the 90° corner (i.e. the longest edge). Problem 3: Determine all possible values (x, y) such that 2^(x+1) + 3^y = 3^(y+2) − 2^x. Rearranging, 2^(x+1) + 2^x = 3^(y+2) − 3^y, so 3·2^x = 8·3^y, which gives 2^(x−3) = 3^(y−1). The only time that a power of 2 is equal to a power of 3 is 2^0 = 1 = 3^0. That is, x − 3 = 0 and y − 1 = 0. So x = 3 and y = 1 is the only solution for x, y. The Pigeonhole Principle In its strict sense, the Pigeonhole Principle is a combinatorial result. The idea is very simple. If I have 9 pigeons and only 8 holes to hold them, then at least one of the holes must have more than 1 pigeon, right? i.e. the Pigeonhole Principle states that given n items and p holes to put them in, where n > p, at least one of the p holes has to contain more than 1 item. Problem 4: How many people do you need at a party, at minimum, to guarantee that 2 people were born in the same month? Answer: 13. Problem 5: If 25 students each earn a grade of A, B, or C, the most frequently occurring letter grade must be the grade of at least how many students? Answer: 9. Averages are Fun The average value of n numbers is the sum of the numbers divided by how many numbers there are. Digits and place value: a 2-digit number with tens digit b and units digit a can be written as 10b + a; for example, 10(7) + 4 = 74. How can you write a 3-digit integer this way? Problem 8: A 2-digit number minus 54 equals the 2-digit number but with the digits reversed. Find all possible such 2-digit numbers. We may rewrite this question as 10b + a − 54 = 10a + b. Notice that neither b nor a can be 0, since that would make either 10b + a or 10a + b a single-digit number. Simplifying gives 9b − 9a = 54, so we are looking for pairs of single digits a, b such that b − a = 6. The possible such pairs are (a = 1, b = 7), (a = 2, b = 8), and (a = 3, b = 9), i.e. the numbers 71, 82 and 93. Together We Are Strong! Construction problems usually refer to getting some set amount of work done by a working unit over some amount of time. Here is what to remember: Amount per unit × # of units = Total amount. Here is what you should NEVER do: Moooooo! If cow-1 is eating the grass, the grass will last cow-1 4 hours. If cow-2 is eating the same patch of grass, the grass will last cow-2 2 hours. If cow-1 and cow-2 both eat the grass, then the grass will last them 3 hours, because (4+2)/2 = 3, since cow-1 will eat some and cow-2 will eat some but neither will eat all, so the grass will last their average. This approach is INCORRECT, and there is no way to justify it other than your sheer intuition telling you that you should take their average. Problem 9: The cow-eating-grass problem in the previous scenario, with the question being: if cow-1 and cow-2 both eat the grass together, how long will the grass last them? Let the total amount of grass be n. Cow-1 can eat n ÷ 4 amount of grass per hour; cow-2 can eat n ÷ 2 grass per hour. Then cow-1 and cow-2 combined can eat n/4 + n/2 = 3n/4 amount of grass per hour. Then the patch of grass n will last cow-1 and cow-2, together, n ÷ (3n/4) = 4/3 hours, i.e. 1 hour and 20 minutes. We'll end with a miscellaneous problem: The digits 1, 2, 3, 4, 5 and 6 are each used once to compose a six-digit number abcdef such that the three-digit number abc is divisible by 4, bcd is divisible by 5, cde is divisible by 3 and def is divisible by 11. What is this number abcdef?
abcdef = 324561 (Hint: Look for where to start on the problem — which part is easiest to tackle first. Try to remember your divisibility rules, and in case you didn't know, a number is divisible by 11 if and only if the sum of its odd-positioned digits minus the sum of its even-positioned digits is divisible by 11. For example, 231 is divisible by 11 because 2 + 1 − 3 = 0 and 0 is divisible by 11; likewise 3927 is divisible by 11 because 7 + 9 − (2 + 3) = 11, which is divisible by 11.)
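A short brute-force check (Python) confirms both the six-digit answer and the earlier two-digit reversal problem; it simply enumerates the possibilities rather than using the divisibility reasoning from the hint.

```python
from itertools import permutations

# Six-digit puzzle: digits 1..6 used once; abc % 4 == 0, bcd % 5 == 0,
# cde % 3 == 0, def % 11 == 0.
for a, b, c, d, e, f in permutations(range(1, 7)):
    if (100*a + 10*b + c) % 4 == 0 and (100*b + 10*c + d) % 5 == 0 \
            and (100*c + 10*d + e) % 3 == 0 and (100*d + 10*e + f) % 11 == 0:
        print("abcdef =", a, b, c, d, e, f)        # prints 3 2 4 5 6 1

# Problem 8: two-digit numbers N such that N - 54 reverses the digits
print([10*b + a for a in range(1, 10) for b in range(1, 10)
       if 10*b + a - 54 == 10*a + b])              # [71, 82, 93]
```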
1,134
2020-06-01T00:00:00.000
[ "Mathematics" ]
DEM simulations of quasi-two-dimensional flow of spherical particles on a heap without sidewalls Surface flows of granular materials find several important applications in both nature and industry. The effect of sidewalls on such flows is known to be large. Here, we study the rheology of such flows on a quasi-two-dimensional heap without sidewalls, at different mass flow rates. It is seen that the surface angle of the heap, for all the mass flow rates, is the same and corresponds to the neutral angle. System variables such as the velocity, volume fraction and stresses are reported as a function of depth from the free surface of the heap. The friction coefficient and volume fraction are also studied as a function of the scaled local shear rate, and these are also found to be independent of the mass flow rate. The behaviour observed in the present work is different from that reported in previous studies of surface flows with sidewalls. Introduction In granular mechanics, a surface flow encompasses a shallow layer of material flowing over a 'fixed bed' composed of the same material. These flows are of paramount importance for the description of several natural phenomena such as the flow of sand dunes, avalanches, pyroclastic flows, etc. From an industrial viewpoint also, these special types of flow have applications in the transportation, storage, and mixing of materials such as grains, powders, pellets, etc. The erosion or deposition of material into the 'static' bed is postulated to be governed by the difference between the surface angle and the neutral angle (the surface angle of the heap when there is no erosion or deposition of material) of the heap [1,2]. However, it was later shown that the bed is actually not static. Rather, the bed region shows a 'creeping' motion with velocity profiles decaying exponentially within the heap [3]. Surface flows in channels with sidewalls have been extensively studied, and the stability of such flows has been attributed to the friction of the sidewalls [4]. In such confined systems, the surface angle has been shown to increase with increase in mass flow rate [5]. Similar flows have been studied in rotating cylinder systems, and the transition between the 'creeping' bed and 'flowing' layer has been stated to be analogous to a glass transition [6]. The development of constitutive relations to describe such flows is a challenge, and the µ(I) rheology, which relates the ratio of the stresses to a rescaled shear rate, has been successful in characterising the behaviour in the flowing part of such systems [7,8]. The role of sidewalls in such flows has been investigated by varying the channel width from 20 particle diameters to about 600 particle diameters, and it has been observed that, with increase in the channel width, flows become slower and thicker [9]. It has also been reported that frictional sidewalls aid the order-disorder transition of granular flows of monodisperse spheres down an incline with a bumpy base, whereby both the friction coefficient and the packing fraction are altered [10]. Recently, it has also been reported that, for surface flow over an unconfined asymmetric conical heap, the surface angle is significantly lower than the angle of repose as well as independent of flow rates [11]. Hence, the presence of sidewalls seems to affect the flow behaviour on heaps quite significantly.
Here, our aim is to study surface flow of granular materials in a system where the boundary effects in the spanwise direction are minimal. In the following sections, the simulation methodology and the rheological characterisation of the resulting system are discussed. The paper is organised into the following sections: section 2 covers the methodology used, significant results are highlighted in section 3 and section 4 summarises the findings. Simulation Methodology Flow of particles on the surface of a heap, composed of the same material as that of the flowing layer, is simulated by means of the soft-particle discrete element method (DEM), using the open source software LAMMPS (http://lammps.sandia.gov) [12]. The simulation domain (see fig. 1) consists of a rectangular box, having a rough base, formed by pouring particles of 1 mm diameter before the start of the heap formation process. Periodic boundary conditions are applied in the y direction. The length (L x ) of the simulation domain is 400d and the width (L y ) of the box is 15d where d is the diameter of the particles. Such a system mimics the behaviour of a real 3D heap. Particles having diameter d = 1 mm and density ρ p = 2.5 g/cm 3 are poured from a slit at the top left corner of the simulation box ( fig. 1).Results for four different mass flow rates viz. 57.2 g/s, 73.0 g/s, 88.3 g/s and 92.9 g/s are reported in section 3. Three different slit widths viz., 7.5d, 9.5d and 11.5d are used in order to achieve the mass flow rates of 57.2 g/s, 73.0 g/s and 88.34 g/s respectively. All components of the velocity of the particles being poured are kept zero for these cases. For the mass flow rate of 92.9 g/s, the slit width is kept fixed at 11.5 d while the z component of the velocity of the particles being poured is changed to -2 cm/s. In the simulations, the force between particles is modelled using the Hertzian model, i.e., the normal pushback force for two overlapping particles is proportional to the area of overlap of the two particles. The elastic constant for normal contact k n = 2570000 dynes/cm 2 , elastic constant for tangential contact k t = 2k n /7, viscoelastic damping constant for normal contact γ n = 5000 s −1 and the visco-elastic damping constant for tangential contact γ t =γ n /2 are used. The value for the coefficient of static friction is taken as 0.5. The values of all these parameters are the same as those for the H3 model in ref. [14]. The equations of motion are integrated by utilizing the velocity-Verlet integration scheme with a time step of ∆t = 10 −5 s. The quantities of interest (such as velocity in the flow direction, volume fraction, stresses etc.) are averaged over bins of dimension 100d × L y × 1d centred around x = 250d (shown by the green box in fig. 2). The shear rate (γ) is obtained by numerically differentiating the velocity profile. Results and Discussion For all four cases, the angle of inclination of the free surface was found to be 21.6 • and corresponds to the neutral angle [2], since the heap is steady and there is no erosion or deposition. A similar behaviour was observed in ref. [11], where the equilibrium neutral angle is nearly constant for almost a three-fold increase in the mass flow rate in an experimental study of surface flow over an asymmetric conical heap without side-walls. As can be seen from fig. 2, the flow in the heap without side-walls is comprised of a fluid-like zone, and a fixed bed, where the material flows relatively slowly. 
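To make the contact model and integrator more concrete, here is a minimal two-particle sketch in Python of a Hertz-type normal force with viscoelastic damping, advanced with velocity-Verlet at the same time step. It is only an illustration under stated assumptions: the sqrt(overlap) scaling and the use of the reduced mass follow the general form of Hertzian DEM force laws, but the exact prefactor conventions of the LAMMPS pair style, the tangential history force and the full heap geometry are not reproduced.

```python
import numpy as np

d = 0.1                      # cm, particle diameter (1 mm)
rho = 2.5                    # g/cm^3
m = rho * np.pi * d**3 / 6.0
kn, gamma_n = 2.57e6, 5000.0 # elastic and damping constants, as quoted in the text
dt = 1e-5                    # s, integration time step

def normal_force(delta, closing_speed):
    """Hertz-type repulsion with viscoelastic damping; returns the force magnitude."""
    if delta <= 0.0:
        return 0.0
    f = np.sqrt(delta) * (kn * delta + 0.5 * m * gamma_n * closing_speed)
    return max(f, 0.0)       # no cohesion if the damping term dominates on separation

# Head-on collision of two identical spheres along x (CGS units)
x = np.array([0.0, 0.12])
v = np.array([10.0, -10.0])  # cm/s, approaching each other
a = np.zeros(2)
for _ in range(2000):
    x += v * dt + 0.5 * a * dt**2                    # velocity-Verlet: positions
    delta = d - (x[1] - x[0])                        # overlap
    f = normal_force(delta, v[0] - v[1])             # old velocities approximate the damping
    a_new = np.array([-f, f]) / m                    # particle 0 pushed to -x, particle 1 to +x
    v += 0.5 * (a + a_new) * dt                      # velocity-Verlet: velocities
    a = a_new
print("post-collision velocities [cm/s]:", np.round(v, 2))
```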
The bottom wall consists of a bumpy base formed by randomly fixing a layer of particles to it. The right wall is kept open so that the particles can flow out from it. Due to the absence of any resistance to flow at the exit, as well as the steeper surface angles there, the particles accelerate near the end of the free surface. An inspection of the streamlines plotted in fig. 3 shows that the flow is a slightly converging flow, with the streamlines progressively coming closer to each other along the flow direction. However, in the central region (x = 20 cm to x = 30 cm), the flow may be approximated to be unidirectional and fully developed. Fig. 4 gives the variation of the system variables with depth (z_p) in the flowing region for the range of mass flow rates studied. For all these plots, the origin is fixed at the free surface of the heap. The coordinates have been suitably transformed, so that z_p is perpendicular to the free surface of the heap (as shown in fig. 2). We can also see that the profiles of the system parameters are qualitatively the same for all the flow rates. Hence, the inflow conditions do not affect the heap flow significantly. The velocity profiles are smooth and mostly linear in the flowing zone while showing an exponential decay deeper into the heap, as reported in previous studies [3]. The exponential decay of velocity with depth inside the heap can be clearly seen by plotting the velocity profile on a semi-log scale (fig. 4b). The local shear rate (γ̇) does not vary linearly with √z_p (fig. 4c), indicating that the velocity profiles do not follow a Bagnold scaling (v_x ∝ z^(3/2)) [15] and hence do not follow the µ(I) model of ref. [7]. Although at first glance the volume fraction seems to be constant (fig. 4d), a closer inspection reveals that, with increasing depth inside the heap, the volume fraction actually increases slightly. The normal stress and the shear stress profiles show a linearly increasing trend with depth from the free surface into the heap. The Cauchy equations for stresses in 2-D predict the slope of a plot of normal stress (σ_zz) versus depth (z_p) to be ρ_b g cosθ and that of shear stress (τ_xz) vs. z_p to be ρ_b g sinθ. These values have also been calculated assuming a φ value of 0.6 and are plotted as dashed lines in fig. 4(e) and (f), respectively. It can be clearly seen that the slopes of the profiles obtained from the simulations are the same as those predicted from the Cauchy equations. The ratio of the macroscopic time scale for bulk deformation to the microscopic time scale for grain rearrangements, also known as the inertial number (I), is a convenient way to classify granular flows into either slow flows (I < 0.1) or rapid flows (I > 0.1) [20]. Here, I is calculated as I = γ̇d/√(σ_zz/ρ_p). The ratio of the shear stress (τ_xz) to the normal stress (σ_zz) gives the effective friction coefficient (µ). Fig. 5 shows the variation of the effective coefficient of friction (µ) with the inertial number (I). It can be seen that, for most parts of the system, the value of µ is constant, around 0.36, with deviations occurring near the free surface of the heap; the corresponding cut-off, I = 0.09, is shown in fig. 4 a, b, c, and d as a dotted line. Data for layers about 10 particle diameters wide lie above the cut-off at I = 0.09. Fig. 6 shows the spatial distribution of the inertial number for different parts of the system. The span of I indicates that, for the most part, the system is in a regime of slow or quasi-static flow, with I < 0.1.
However, the velocities in the flowing region are quite high. Also, the data for different mass flow rates collapse onto a single curve for I < 0.1; hence, in this region, the µ(I) rheology is independent of the mass flow rate. For I > 0.03, the friction coefficient begins to decrease with increasing values of I. Such behaviour has been previously reported by [17] for I > 0.4. This decreasing trend was attributed to the rapid reduction in solid volume fraction at higher inertial numbers and the consequent transition to a kinetic regime. For flow in the quasi-static to inertial regime taking place in a split-bottom shear cell, a modified µ(I) rheology, in which the effective friction coefficient varies logarithmically with the inertial number, was suggested [16]. For the present system, when I < 0.1, the friction coefficient (µ) is proportional to log(I). We fit the data for I < 0.1 to a relation of this logarithmic form (equation 2), with fitting parameters µ_0 and α. From fig. 5, the values of µ_0 and α are 0.38 and 0.0012, respectively. So, for all practical purposes, α is negligible compared to µ_0 and hence the µ(I) rheology for the present system essentially reduces to the Coulomb yield condition (τ_xz ≤ σ_zz tanθ) for slow granular flows. Also, as shown recently in refs. [18] and [19], the granular temperature also plays a key role in determining the rheology of such systems; hence, along with µ and I, the granular temperature also needs to be considered, especially in the region where I > 0.03. Fig. 7 shows the variation of the solid volume fraction (φ) with the inertial number (I). As in the case of µ and I, for I < 0.1 there seems to be a reasonable collapse of all the data onto a single curve. We fit the data to an equation of the form given in (3); from fig. 7, it can be seen that this fits the data well for I < 0.1. The values of φ_0 and β are 0.598 and 0.0049, respectively. Summary In this work, the flow of granular materials on a quasi-two-dimensional heap without sidewalls was characterised. Results for four different mass flow rates, obtained by means of DEM simulations, were presented. For all the different mass flow rates studied, the surface angle of the heap was found to be 21.6°, which appears to be the lowest possible angle that can support flow in the absence of sidewall friction. A similar behaviour has been noted for surface flow over an asymmetric conical heap without sidewalls, where the equilibrium neutral angle is independent of the flow rate [11]. The velocity profiles decayed exponentially with depth inside the heap. In the flowing zone, the velocity profiles were nearly linear, as opposed to Bagnold [15] profiles. The data for the friction coefficient µ as well as the volume fraction φ collapse onto a single curve for all flow rates, and were found to be proportional to the logarithm of the inertial number I for I < 0.1.
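As an illustration of how the depth profiles are turned into µ(I) data, the sketch below post-processes a synthetic, exponentially decaying velocity profile: the shear rate is obtained by numerical differentiation, the stresses from the hydrostatic momentum balance (so µ is simply tanθ here, whereas in the simulations both stresses come from the contact data), and the inertial number from the standard definition I = γ̇d/√(σ_zz/ρ_p). The final lines perform a log-linear fit of the kind described for I < 0.1; the exact sign and base conventions of the paper's equations (2) and (3) are not reproduced, so the fit form is only indicative.

```python
import numpy as np

d, rho_p, phi, g = 0.1, 2.5, 0.6, 981.0          # cm, g/cm^3, -, cm/s^2 (CGS, as in the text)
theta = np.deg2rad(21.6)                          # surface angle of the heap

# Synthetic velocity profile decaying exponentially with depth below the free surface
z = np.linspace(0.05, 3.0, 200)                   # cm
vx = 40.0 * np.exp(-z / 0.8)                      # cm/s (illustrative amplitude and decay length)

gamma_dot = -np.gradient(vx, z)                   # shear rate by numerical differentiation
sigma_zz = phi * rho_p * g * z * np.cos(theta)    # normal stress from the momentum balance
tau_xz = phi * rho_p * g * z * np.sin(theta)      # shear stress (so mu = tan(theta) here;
mu = tau_xz / sigma_zz                            #  in the DEM data both come from contacts)
I = gamma_dot * d / np.sqrt(sigma_zz / rho_p)     # inertial number

# Log-linear fit mu ~ mu0 + alpha*log(I), restricted to the quasi-static range I < 0.1
mask = I < 0.1
alpha, mu0 = np.polyfit(np.log(I[mask]), mu[mask], 1)
print(f"mu0 = {mu0:.3f}, alpha = {alpha:.2e}")    # alpha ~ 0 for this synthetic profile
```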
3,259.2
2021-01-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
Transforming a Patient Registry Into a Customized Data Set for the Advanced Statistical Analysis of Health Risk Factors and for Medication-Related Hospitalization Research: Retrospective Hospital Patient Registry Study Background: Hospital patient registries provide substantial longitudinal data sets describing the clinical and medical health statuses of inpatients and their pharmacological prescriptions. Despite the multiple advantages of routinely collecting multidimensional longitudinal data, those data sets are rarely suitable for advanced statistical analysis and they require customization and synthesis. Objective: The aim of this study was to describe the methods used to transform and synthesize a raw, multidimensional, hospital patient registry data set into an exploitable database for the further investigation of risk profiles and predictive and survival health outcomes among polymorbid, polymedicated, older inpatients in relation to their medicine prescriptions at hospital discharge. Methods: A raw, multidimensional data set from a public hospital was extracted from the hospital registry in a CSV (.csv) file and imported into the R statistical package for cleaning, customization, and synthesis. Patients fulfilling the criteria for inclusion were home-dwelling, polymedicated, older adults with multiple chronic conditions aged ≥65 who became hospitalized. The patient data set covered 140 variables from 20,422 hospitalizations of polymedicated, home-dwelling older adults from 2015 to 2018. Each variable, according to type, was explored and computed to describe distributions, missing values, and associations. Different clustering methods, expert opinion, recoding, and missing-value techniques were used to customize and synthesize these multidimensional data sets. Results: Sociodemographic data showed no missing values. Average age, hospital length of stay, and frequency of hospitalization were computed. Discharge details were recoded and summarized. Clinical data were cleaned up and best practices for managing missing values were applied. Seven clusters of medical diagnoses, surgical interventions, somatic, cognitive, and medicines data were extracted using empirical and statistical best practices, with each presenting the health status of the patients included in it as accurately as possible. Medical, comorbidity, and drug data were recoded and summarized. Introduction The transition from paper-based patient records to electronic health records has provided unprecedented access to vast amounts of diverse clinical and health data at the point of care [1]. Undoubtedly, this transition offers a huge opportunity to exploit patient registries for scientific, clinical, and health-policy purposes. An electronic health record is the systematized collection of patients' digitally stored health information. The term patient registry is generally used to distinguish registries focused on health information from other data sets, but there is currently no consistent definition in use [2]. The World Health Organization (WHO) describes registries in health information systems as "a file of documents containing uniform health information about individual persons, collected in a systematic and comprehensive way, in order to serve a predetermined purpose" [3].
Properly designed and executed patient registries can provide a real-world view of clinical practice, patient outcomes, safety, and comparative effectiveness [4,5]. Several national registries (eg, the National Committee on Vital and Health Statistics, or the Agency for Healthcare Research and Quality, both in the United States) are used for a broad range of purposes in public health and medicine as part of "an organized system for the collection, storage, retrieval, analysis, and dissemination of information on individual persons who have either a particular disease, a condition (eg, a risk factor) that predisposes the occurrence of a health-related event, or prior exposure to substances (or circumstances) known or suspected to cause adverse health effects" [1]. Other terms used to refer to patient registries are clinical registries, clinical data registries, disease registries, and outcomes registries [5,6]. A patient registry can be a powerful tool for observing the course of a disease, understanding variations in treatment and outcomes, examining factors that influence prognosis, describing care patterns, including the appropriateness of care and disparities in its delivery, assessing effectiveness, monitoring safety and harm, and measuring some aspects of the quality of care [1,6]. National and international statistics document elevated rates of hospitalization and emergency department admissions among polymedicated, home-dwelling older adults with multiple chronic conditions, and these are often caused by medication-related problems (MRPs) [7][8][9][10]. However, the determining factors of medication-related hospitalizations are poorly understood and require more investigations based on existing patient data [11]. The associations between age, comorbidities, polypharmacy, and adverse effects on health outcomes and health care consumption have been reported in multiple studies of emergency departments and hospitals, but the underlying mechanisms have often been unclear [12][13][14]. Several studies have demonstrated that one-quarter of the emergency department admissions for polymedicated, home-dwelling older adults are related to the inappropriate prescription of medicines or unsatisfactory medication management [15,16]. Poor medication management, inappropriate medicine prescription, and drug-drug interactions are frequent causes of admission [17,18]. The risk of MRPs increases not only with old age and comorbidities but also with the number of medications prescribed and with certain classes of medicines, such as medicines for cardiovascular diseases and diabetes [9,19]. The mechanisms behind those high rates of hospitalization in relation to MRPs deserve more attention. More knowledge and understanding of the factors predisposing and precipitating hospitalization and MRPs among polymedicated, home-dwelling older adults are needed too. This paper aims to describe the method used to transform and synthesize a raw, multidimensional, patient registry data set to prepare it for exploitation as a database with which to examine predictive and survival analysis among hospitalized older inpatients. Study Design This multidimensional, retrospective, patient registry-based study explored the methods required to transform and synthesize a raw data set into a suitable database for further analysis of descriptive, predictive, and survival statistics to identify the risk factors that might induce MRPs among discharged, polymedicated older inpatients. 
Population and Sample The multidimensional patient registry included 140 variables routinely collected during hospital stays by older adult inpatients aged 65 years old or more, living at home before hospitalization, with at least five prescribed medicines at discharge from hospital. The extracted data set was composed of a sample of 20,422 hospitalizations from 2015 to 2018, with similar numbers of annual hospitalizations: 5134, 5095, 5125, and 5068, respectively. Medicines prescribed before hospital admission were not considered in the analysis due to a lack of data accuracy and validity. Indeed, information on medication at hospital admission is often collected from patients themselves, who may not accurately report their prescriptions, particularly in cases of unplanned hospitalization. Data Set Extraction and Importing The hospital data set was extracted from a public teaching hospital's registry, delivered to the investigators in a CSV (.csv) format file via an encrypted email and saved on a secure server. Finally, the data set was imported into the R statistical package for cleaning, data transformation, and synthesis [20]. Routinely collected data included information derived from patients' medical and clinical statuses (patient-reported data, clinical examination, medical diagnoses, or medicines prescribed). The data set had to be cleaned up and synthesized to be suitable for analyzing descriptive, predictive, and survival statistics. Data Cleaning and Transformation Clinical coding was carried out directly by health care professionals during routine daily care, using a pre-established drop-down menu. Official clinical coding of established medical (10th revision of the International Statistical Classification of Diseases and Related Health Problems [ICD-10]) and surgical diagnostics (CHOP) is mandatory under Swiss Federal Office of Public Health regulations. The variables represented by free text in the original database were excluded. The distributions of each variable in the data set were explored, according to type (categorical and continuous variables), in order to identify any extreme values and obtain a better view of missing values and associations. Our data cleaning and transformation were guided by a literature review on cleaning-up large data sets, the quantity of information available to us, and the study aim [21]. One major challenge was to find a way to select or summarize a significant volume of information so that further descriptive and predictive statistical analyses could be performed (ie, summarize as many variables as possible, while losing the least amount of information). The large number of variables describing an inpatient's somatic and cognitive status and medical diagnoses represents a significant challenge: we must find a balance between the variability of data and the essential, detailed information they provide without losing the ability to perform descriptive, predictive, and survival analyses [22]. Description of the Sociodemographic and Hospitalization Data Set The sociodemographic data set-almost exclusively composed of ordinal variables-included just 2 categorical variables (sex and place of discharge) and 1 continuous variable (age). There were no missing sociodemographic variables except among the place-of-discharge data. The hospitalization data set included 2 continuous variables (date of entry and discharge) and 1 categorical variable (the personal identification data number [PID]). 
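A minimal pandas sketch of how these three variables (entry date, discharge date, and PID) yield the length of stay and the number of hospitalizations per person is shown below; the column names and the few example rows are hypothetical, not taken from the registry.

```python
import pandas as pd

# Hypothetical column names; the registry export uses its own labels
df = pd.DataFrame({
    "PID":       [101, 101, 102, 103, 103, 103],
    "entry":     ["2015-03-01", "2016-07-10", "2015-05-02", "2017-01-05", "2017-03-20", "2018-11-11"],
    "discharge": ["2015-03-12", "2016-07-15", "2015-05-20", "2017-01-09", "2017-04-02", "2018-11-25"],
})
df["entry"] = pd.to_datetime(df["entry"])
df["discharge"] = pd.to_datetime(df["discharge"])

df["LOS_days"] = (df["discharge"] - df["entry"]).dt.days            # length of stay
df["n_hospitalizations"] = df.groupby("PID")["PID"].transform("size")
print(df[["PID", "LOS_days", "n_hospitalizations"]])
print("mean LOS:", round(df["LOS_days"].mean(), 1), "days")
```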
These 3 variables enabled us to compute the length of stay (LOS) and the frequency of hospitalization and rehospitalization, respectively. Rehospitalization rates were important health status indicators in relation to drug prescriptions. Many polymedicated, home-dwelling older adults were hospitalized more than once during the 4-year study period. Almost one-third (n=3678) of older inpatients were rehospitalized 3 times or more; a small fraction was hospitalized more than 9 times. We found 18 polymedicated, home-dwelling older adults who were rehospitalized 17 times and considered them as outliers. Besides computing the average age and hospital LOS, no other interventions were necessary to clean up this section of the data set. Our analyses found an almost equal distribution of men and women, with an average age close to 79 (SD 7.7). Most older inpatients were discharged home after an average LOS of about 10 days (Multimedia Appendix 1). Description of the Somatic Data Set Nurses routinely collect clinical data during hospitalization using a drop-down menu, and the data set was composed of 18 categorical variables: 16 measured as ordinal variables (mobility, changing position, falls in the last year, exhaustion, upper-and lower-body care, upper-and lower-body [un]dressing, eating, drinking, micturition and defecation-related movements, hearing, vision, verbal expression, and pain intensity) and 2 measured as nominal variables (altered gait and chronic pain). Missing values in the data set were resolved by recoding them as "not available" (NA; Multimedia Appendix 2). Description of the Cognitive Data Set Inpatients' cognitive status was measured at an ordinal level using 5 categorical variables. More than 72.60% (14,826/20,422) of adults showed no deterioration in their cognitive status (Multimedia Appendix 3). Description of the Medical Diagnoses and Surgical Interventions Data Set This data set of medical information was composed of patients' principal medical diagnosis and 4 secondary medical diagnoses (active or passive comorbidities), based on the WHO's ICD-10 adopted by Switzerland's health care system [23]. This was completed with the patient's principal surgical intervention and 4 additional surgical interventions, based on Switzerland's surgical classification system (named CHOP) [24]. This data set showed no missing values (Multimedia Appendix 4). The data set has no specific coding for MRPs (the corresponding ICD-10 is "Poisoning by drugs, medicaments and biological substances") [25]. Description of the Prescribed Medicines Data Set The hospital data set showed that discharged patients had been prescribed 2370 different medicines. This huge number of medicines and their heterogeneous therapeutic focus needed a structured classification built based on best practices (Multimedia Appendix 5). Based on expert opinion and a literature review on medicine classification systems, we chose the Anatomical Therapeutic Chemical (ATC) classification system's 14 top-level codes to structure the set of prescribed medicines [25,26] (Multimedia Appendix 6). Synthesizing the Raw Data Set Summarizing the data set was especially challenging because most of the variables documented different parts of inpatients' overall health status, with all the diverse dimensions of their somatic and cognitive conditions. Special attention was given to the large data set of prescribed medicinal treatments. 
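For the prescribed-medicines data just described, the reduction to the 14 top-level ATC groups can be sketched as follows: ATC level 1 is simply the first character of the code, so a per-hospitalization count of medicines per anatomical main group is a one-line aggregation. The codes and identifiers below are illustrative examples, not registry data.

```python
import pandas as pd

# Hypothetical prescription extract: one row per prescribed medicine at discharge
rx = pd.DataFrame({
    "hospitalization_id": [1, 1, 1, 2, 2],
    "atc_code": ["C09AA05", "B01AC06", "N02BE01", "A10BA02", "C07AB07"],
})

# ATC level 1 is the first character of the code (14 anatomical main groups)
rx["atc_level1"] = rx["atc_code"].str[0]

# Number of prescribed medicines per hospitalization within each top-level group
summary = (rx.groupby(["hospitalization_id", "atc_level1"]).size()
             .unstack(fill_value=0))
print(summary)
```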
In many fields, the most common means of coping with such difficulties is the use of statistical clustering, a technique which combines all the available information (all variables) to reveal one or several underlying dimensions or health concepts. In addition, the data set's large number of variables and dimensions made it extremely complex to investigate the relationships and interactions between the different somatic and cognitive variables. The data set should allow the analysis of the risks of adverse health outcomes and their relationships with the medicines prescribed. For this reason, computing every variable in the same model may not be the optimal modeling choice if we consider the multidimensional aspect and dependency between those variables. This is especially true if these variables are significant (P<.01) for the discrimination and discovery of mechanisms leading to rehospitalization and a nonreturn home due to medical conditions and MRPs. In the absence of any scientific models, this study used an empirical approach. Overview Little research to date has explored specific combinations or clusters of clinical data and health status. Our study's objective was to transform and synthesize valuable inpatient health information (health concepts such as mobility), rather than to reduce the dimensions of the data. It is, therefore, worth considering a larger number of principal components in the analysis to explain a larger part of the data variability. Almost all the studies which have examined specific comorbidities start from a specific disease rather than examining all the co-occurring clinical and medical conditions [27,28]. Nosology clusters groups of diseases, disorders, or syndromes with meaningful associations into a type of classification, so that diseases, for example, within a cluster, are very similar to one another, but are dissimilar to diseases in other clusters [29]. Among older inpatients, some associations are useful for identifying those at risk of in-hospital adverse clinical events and death in relation to those disease or health-syndrome clusters. A large variety of clustering methods exist in the literature. However, the majority are focused on either continuous or nominal data alone. Only a limited number of techniques and strategies manage to incorporate both variable types into the same clusters [30]. Distance Measurement This approach aims to create a measure of the distance between individuals or sequences that includes nominal and continuous variables. The Gower distance is the most widely used distance measure, and it can be used to calculate the distance between 2 entities whose shared attribute has a mixture of categorical and numerical values [31]. However, because it uses a range of continuous variables to determine the distance and assumes that nominal variables have a distance of either 0 or 1, the Gower distance may underestimate the impact of continuous variables because they are valued at 1 much less often than nominal variables are. Furthermore, weightings are selected arbitrarily. However, they define each data type's contribution to the overall distance. As with all distance measures, the Gower distance should be used as an input for clustering methods, such as k-means. K-Means Method The k-means algorithm is mainly used for continuous variables [32]. Several other applications, such as the R statistical package KAMILA [33], integrate different types of variables. 
In this case, it uses the probabilities of a multinomial distribution for the discrete variables. The continuous variable distribution is estimated using univariate kernel densities [34]. The probabilities resulting from both distribution types are added together to obtain a measure of how close an observation is to the center of each cluster. K-Medoids Method The k-medoids method is a more robust version of k-means [35]. The difference is that in k-medoids real data points are selected as cluster centers, whereas in k-means the centers are the computed averages. The PAM function in the R statistical package KAMILA is a popular application of this approach [33,34]. Multiple Correspondence Analysis The standard method for clustering factor variables is multiple correspondence analysis [36]. This model is implemented in the FactoMineR and PCAmixdata R packages. It splits all factors into multiple binary variables and applies a type of principal component analysis. The principal components obtained are then usually clustered using a k-means algorithm. Hierarchical Cluster Analysis Our data analysis strategy applied a hierarchical cluster analysis, using the ClustOfVar R package [37,38]. As with any statistical analysis, results of a hierarchical cluster should not be accepted as they first appear, but should be taken as suggestions or questioned instead. When the final set of groups of variables was defined, a statistical model to cluster the individuals within each group was applied. This created one new variable for each group, indicating the type of characteristics the individual displayed in his/her health status assessment. For example, if we separate the individuals into 3 groups according to their cognitive status, we might obtain a variable indicating that a person belongs to a group with significant, minor, or no cognitive impairment. This type of aggregated variable was used in our final analysis of risk factors. Our analysis explored several different clustering methods. However, the results displayed here most often used the following variable clustering procedure. First, a one-factor analysis model was typically used; second, the most important latent factors were selected. At this stage, it was essential to obtain accurate clustering rather than reduce the dimensionality, which takes place in the final cluster partition. Third, these factors were considered as variables and served as the input to a k-means clustering algorithm. Finally, the number of clusters was then selected using the Rousseeuw silhouette statistic, also with regard to the interpretability of the resulting partition [39]. Two-Step Clustering Framework In this approach, n and p denote the numbers of the patients and health conditions (indicators), respectively. The data can thus be represented by an n × p matrix, where the observed value for the ith column and the jth row of the data matrix is 1 or 0, indicating the presence or absence of the ith health condition for the jth respondent (i = 1,…, p; j = 1,…, n). In the 2-step clustering approach, step 1 involves clustering the p conditions into non-overlapping groups of clinical or health conditions. Based on individual patterns in these groups of clinical and medical conditions, step 2 involves clustering the n respondents into clusters which correspond to different patterns of clinical or health conditions. 
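As a compact illustration of the mixed-data distance measure discussed above, the following hand-rolled Gower computation averages range-normalized absolute differences for numeric variables with simple 0/1 mismatches for nominal ones, producing a dissimilarity matrix that could feed a PAM/k-medoids partition. The toy variables, values and equal weighting are assumptions for illustration; the study itself relied on the R packages cited.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":      [67, 82, 75, 90],                             # continuous
    "mobility": ["full", "impaired", "full", "impaired"],     # nominal
    "pain":     ["none", "moderate", "severe", "moderate"],   # nominal
})

num_cols = ["age"]
cat_cols = ["mobility", "pain"]
ranges = df[num_cols].max() - df[num_cols].min()

def gower(i, j):
    """Average of per-variable dissimilarities: |x-y|/range for numeric, 0/1 for nominal."""
    d_num = (df.loc[i, num_cols] - df.loc[j, num_cols]).abs() / ranges
    d_cat = (df.loc[i, cat_cols] != df.loc[j, cat_cols]).astype(float)
    return float(pd.concat([d_num, d_cat]).mean())

n = len(df)
D = np.array([[gower(i, j) for j in range(n)] for i in range(n)])
print(np.round(D, 2))      # symmetric dissimilarity matrix, usable as clustering input
```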
To thoroughly analyze the data and identify the MRPs leading to adverse health outcomes-such as rehospitalization, nonreturn home, and early death [40,41]-among older adult inpatients, a literature review was conducted [27]. Treatment of Missing Data As in every real-life data collection exercise, missing values are unavoidable, and it is important to define how these are integrated into the study. Four approaches were considered: ignoring all observations with 1 or more missing values; defining "NA" as a separate potential variable value; replacing every missing value by the mode of the corresponding variable; or performing multiple imputations on the data set. The first approach was obviously inappropriate, especially in cases where the number of missing data was significant (P<.01). Considering NA as a separate modality for each variable inflates the number of modalities, but it reduces the possibility of bias due to incorrect imputation methods. Nevertheless, for the sake of comparison, it was also tempting to consider the 2 latter approaches. Before choosing between simple replacement using the variable's mode value and multiple imputation, we had to test for the type of missing data. If data are missing completely at random, we can simply impute using the mode. However, if this possibility is rejected, multiple imputation is theoretically more appropriate. The Little test (1988) [42] examines the null hypothesis H0: the data are missing completely at random. This test was applied to all subclusters of variables and the null hypothesis was rejected for every data set. This indicated that multiple imputation could be performed as an optional solution for estimating missing values. Finally, defining NA values became our primary choice for the treatment of missing values. By creating an NA variable (an empty variable that does not influence the cluster result), all observations with an NA variable were still taken into account in the cluster analyses. This is why each cluster analysis contains every hospitalization (N=20,422). Ethical Considerations The hospital data set was coded and its use was contractually limited by the participating hospital center. Furthermore, because the data sets included highly sensitive electronic patient records from a hospital registry, ethical approval was sought before any synthesis or analysis. Data were stored on a dedicated secure data server, which included a log registry. Each access flow to the secure data environment was documented, and each change required approval. Only users working on the project and requiring access to the data were allowed to use the selected multifactor authentication mechanism in the secure environment. The Human Research Ethics Committee of the Canton of Vaud (CER-VD) (2018-02196) approved the study on February 1, 2019. Transformation of the Data Set The original data set required some adjustments before our plan of analysis could move forward. Four empty variables and 1 observation containing mostly 0 or unavailable values were removed from the data set. The labels for all variables were rewritten and clarified, and many medicine names in French had accents and unreadable symbols corrected. Missing Data Tests made using both the BaylorEdPsych and RBtest R packages confirmed that the missing-completely-at-random hypothesis could be rejected [42]. Observations within each subcluster of the data set that only contained missing values were recoded as NA. 
Their presence might have been due to incorrect inputs, human or software error, or unavailable parts of some questionnaires. Missing data had very little impact on the sample size, appeared to be random, and concerned the first 4300 observations, especially. After recoding these observations, the cognitive status variables showed no more separate missing observations, and we had a complete data set. Clustering of Clinical and Medical Data Most of the hospital variables were partially independent and gathered into several groups according to the dimension of the patient's measured/assessed clinical and medical status. We used an empirical approach suggested by health care experts (FP, HV, and AvG) in an attempt to present homogenous groups within the set of variables. In cases involving clear and meaningful clustering, we relied on expert recommendations or opinions taken from a comprehensive literature review [27,33]. However, when evidence was scarce, we clustered variables using statistical methods. The results from statistical methods were compared against those from expert opinion, which served as a validation tool for addressing any possible subjectivity in those expert opinions [27,33]. Seven groups of clusters were developed: somatic/physical health conditions (3 orange groups in Figure 1), cognitive health conditions (green textbox in Figure 1), total number of prescribed medications based on the ATC classification, diagnoses based on the ICD-10 (yellow textbox in Figure 1), and the surgical interventions based on CHOP (gray textbox in Figure 1). Besides these more apparent distinctions between variables, other underlying subclusters may be present within these groups. This point is beyond the scope of this paper, however, and will be documented elsewhere with a complementary, within-group analysis (the presence of an interpretable clustering of variables within a group before clustering individuals). An examination of the place of discharge variable confirms this: of 20,422 hospitalizations, only 131 patients (<1%) were documented to have died during hospitalization. Bearing in mind that there was no explicit variable indicating this worst outcome, we developed indicators that were suggestive of imminent death or a highly and irreversibly deteriorated health condition. Based on a literature review of polymorbidity, 6 clinical indicators from the data set were associated with a functional deterioration leading to progressive decline and poor health status [43]: (1) restricted mobility, (2) incapacity to change position, (3) altered alertness, (4) altered orientation, (5) altered gait, and (6) reduced or absent cognitive skills necessary to carry out the activities of daily living. Each of these variables indicated a deteriorating health status. To ensure that only severely deteriorating health problems were captured, we only considered patients to be endangered if they had multiple problems. We therefore created a variable indicating the number of problems present, with values ranging from 0 to 6 (Multimedia Appendix 7). More than half of the sample presented with at least one deteriorated health condition. However, only a small fraction of the older adult patients had 4 or more deteriorated health conditions at discharge. Overview The cognitive data cluster (green textbox in Figure 1) was composed of 5 variables indicating cognitive status level ( Table 1). 
As with many other variables in the total data set, cognitive data were considered nominal because they each had a small number of modalities. The first 400 observations in the data set were excluded from the cognitive status analysis because they contained only missing values and were excluded from other analyses for the same reason. These missing values were explained by the fact that new data variables were introduced into the hospital register during the first semester of 2015. Cognitive Status Clustering The R ClustOfVar package was used to perform a hierarchical clustering of the cognitive health variables to investigate any possible relationships and the presence of subclusters within these variables. The results did not suggest any clear interpretable structure within the variables included, as illustrated by the dendrogram (Figure 2). They indicated that only single-variable clusters (singletons) could be separated, one at a time, to form separate and not very distinct clusters. This information failed to provide any useful solution to our problem because it makes no sense to cluster individuals using a single variable. This result, combined with the small total number of 5 other data set clusters, led us to the conclusion that the 6 data set clusters illustrating different cognitive conditions should be considered together in the same clustering algorithm. Multiple correspondence analysis was used to cluster individuals according to their cognitive status because all the variables were categorical. Even though the first 2 principal components do not explain much of the data (5310/20,422, 26.00%), we were able to discern the 4 most discriminant variables for clustering (and the importance of their categories). For further analysis, we selected numerous principal components (n=9) because of their relatively low explanatory power (65% of the variance). We found multiple different clustering partitions with respect to the number of clusters. Some groups and features were found systematically in all the partitions. This enabled us to make the following generalizations about the results, regardless of the number of clusters: • The majority of observations indicated that cognitive status was not altered at the time of the assessment. We found a good solution and form in every cluster, including the largest cluster. • When increasing the number of clusters, observations with average or poor cognitive status were split and nuanced. • One group of individuals with mainly missing values was excluded from the analysis. The optimal number of clusters was determined using the silhouette statistic (Figure 3). For each number of clusters, this statistic measures how similar each observation is to its own cluster in comparison to all other clusters, that is, the extent to which observations are grouped together. The results indicated that the 3-cluster solution would be the most appropriate in terms of within-and between-cluster distances. However, a partition using 2 clusters provided greater simplicity and also had a statistically sustainable silhouette value. Two-Cluster Solution Hierarchical clustering using 2 classes created a dominant group of 18,339/20,422 (89.80%) older inpatients with full cognitive ability and a smaller group of 2083/20,422 (10.20%) inpatients with cognitive impairment. The 2-cluster solution was differently distributed over the 5 variables and according to the type of diagnoses (ICD-10; Table 1), and it was highly significant (P<.001). 
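The clustering workflow described above can be roughly mimicked in Python by indicator-coding the categorical variables, extracting several principal components (a crude stand-in for multiple correspondence analysis), and comparing k-means partitions by their average silhouette width. The toy variables and levels below are invented; the real analysis used MCA-style methods in R on the registry variables.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Toy categorical health-status table (variable names and levels are invented)
df = pd.DataFrame({
    "orientation": ["intact", "altered"] * 10,
    "memory":      ["intact", "reduced"] * 10,
    "alertness":   ["intact", "altered", "intact", "altered", "altered"] * 4,
})

X = pd.get_dummies(df).astype(float)             # indicator coding, as MCA uses
scores = PCA(n_components=3).fit_transform(X)    # keep several components, not only two

for k in (2, 3):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    print(k, round(silhouette_score(scores, labels), 3))
# The k with the best average silhouette width is retained, weighed against interpretability.
```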
Two other variables (number of medications prescribed and primary diagnosis) were added to the analysis for experimental purposes but were not included in the clustering model. A difference was observed in the average number of medications prescribed (9.63 vs 10.47; P<.001) between groups, and the primary diagnosis also appeared to be different (0.10 vs 0.08; P<.001; Table 1). Somatic Variables and Their Clustering Into Subclusters Multiple variables showed modalities that did not correspond exactly to those described in the list (Multimedia Appendices 1-6). The risk of falling variable in the list of somatic data (orange textbox, Figure 1) was continuous, and it was thus recoded into a 3-modality factor, as sketched below, with the modalities no risk (0 falls), moderate risk (1-4 falls), and high risk (≥5 falls in the last year). The somatic variables were numerous and heterogeneous, making the direct clustering of individuals challenging. We considered the hypothesis that there were probably dissimilarities in this whole set of somatic variables, and starting from this assumption, we split the variables into subclusters. In the absence of any validated techniques, tools, or evidence-based literature, we developed an empirical subcluster clustering strategy. The initial separation of the variables was guided by information retrieved from a literature review of communicable somatic diseases, complemented by the authors' experiences and expertise in patterns of somatic illness [27,28]. Four subclusters of somatic variables were constructed: mobility, health difficulties, capacities for the activities of daily living, and other health risks (orange textbox in Figure 1). The mobility subcluster was composed of the clinical variables of movement, changing position, altered gait, balance disorders, and past and recent falls. The general health status subcluster included exhaustion, hearing, vision, verbal expression, drowsiness, sleep rhythm, sleep impairment, pain intensity, and chronic pain. The capacities for the activities of daily living subcluster were composed of upper- and lower-body care, upper- and lower-body (un)dressing, eating, drinking, and micturition- and defecation-related movements. The other health risks subcluster was composed of clinical variables assessing the risks of sores, wounds, malnutrition, and falling during hospitalization. To reinforce the authors' opinions, a statistical validation model of the variable clustering was computed using the hierarchical clustering functions of the R ClustOfVar package (Figure 4). Findings showed some differences between the authors' opinions and the statistical model. To optimize the composition of somatic health status variable subclusters, an adapted version was selected for further data analysis following discussion and a consensus agreement. Three subclusters of somatic variables were considered. The mobility subcluster was composed of the movement, changing position, and altered gait variables. The general health impairments subcluster included exhaustion, hearing, vision, verbal expression, risk of falling, chronic pain, and pain intensity. The capacities for the activities of daily living subcluster included upper- and lower-body care, upper- and lower-body (un)dressing, eating, drinking, and micturition- and defecation-related movements.
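A minimal recoding sketch for the risk of falling variable, using the cut-offs stated above (0 falls, 1-4 falls, ≥5 falls in the last year), is given below; the column name and example values are hypothetical.

```python
import pandas as pd

# Hypothetical column holding the number of falls during the last year
falls = pd.Series([0, 2, 7, 1, 0, 5], name="falls_last_year")

# Recode into the 3 modalities used in the analysis
fall_risk = pd.cut(
    falls,
    bins=[-1, 0, 4, float("inf")],   # (-1, 0] -> 0 falls; (0, 4] -> 1-4 falls; (4, inf) -> >=5 falls
    labels=["no risk", "moderate risk", "high risk"],
)
print(pd.concat([falls, fall_risk.rename("fall_risk")], axis=1))
```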
Grouping Individuals Within the Somatic Health Status Subcluster After separating the variables, the somatic health status subclusters of mobility, health impairments, and capacities for the activities of daily living were themselves partitioned, with the aim of discovering any possible underlying groupings of inpatients. Mobility Subcluster The silhouette statistic failed to indicate a clear optimal number of subgroupings n (Figure 5). Our analysis demonstrated similar and increasing average silhouette widths as n increased. Consequently, we chose a 2-cluster partition, deciding that this best separated the observations in terms of interpretability of results and a clear implicit difference between the groups: a grouping of persons with mostly full mobility (n=12,540) and a grouping with an impaired mobility status (n=7,880). Roughly two-thirds of individuals had few or no mobility problems (Table 3). The remaining individuals exhibited problems in at least one of the three variables. That number is large but not surprising when considering the sample population's advanced age. The χ2 tests confirmed a clear difference between the groups across all variables (Table 3). Our analysis highlighted that the group with full mobility status was prescribed significantly fewer medications (P<.01) than the group with impaired mobility (9.07 vs 10.74). Health Impairments Subclusters Calculating the silhouette statistic suggested that the 4-cluster solution was optimal, even though the results appeared very surprising. However, we decided on the 2-grouping solution, mainly because it was easier to interpret (Figure 6 and Table 4). Capacities for the Activities of Daily Living Subcluster The 2-cluster solution appeared appropriate and was consistent with the silhouette statistic, which highlighted the 2-, 8-, and 10-cluster solutions (Figure 7). We distinguished 1 large cluster grouping of 17,836/20,422 (87.34%) individuals composed of mainly autonomous inpatients with almost full capacity to carry out the majority of the activities of daily living. The second cluster grouping of more dependent inpatients included 2573/20,422 (12.60%) individuals with at least one serious problem in handling their activities of daily living. Overall, the partitioning into 2 cluster groupings was relevant in light of our aim to demonstrate that the observations were significantly different between the groupings. Synthesizing ICD-10 and CHOP Diagnoses Clustering the large data set with more than 2000 different ICD-10 and 800 different CHOP diagnoses into general clusters did not yield interpretable results. To make it suitable for further analysis, the ICD-10 data set was recoded into 4 groups: physiological systems, mental illnesses, oncological diseases, and others. The CHOP diagnoses were also recoded into 4 groups: physiological systems, sensorial, other, and measurement instruments for diagnostics (Table 6). Summary of Synthesized Registry Data The different clustering and recoding methods resulted in the data set presented in Table 7. Principal Findings This paper describes the rationale and methods used to synthesize a large, routinely collected data set of clinical and medical information concerning polymedicated home-dwelling older adults during hospitalization. The electronic patient records from a hospital center provided a valuable data resource for researchers wishing to perform a variety of analyses to explore health risk determinants, medication prescribing, rehospitalization, and death rates.
Prospectively collecting research data is often time-consuming and expensive, resulting in biased samples of highly selected individuals, who are often unrepresentative of real-life patients [21]. Data that are already available for use in anonymized electronic patient records provide a valuable opportunity for a variety of different research designs and are particularly useful in the design of registries for evaluating patient outcomes [44]. In some situations, using population-based registries is even preferable to collecting primary data because selection bias due to nonresponders is not a problem [21]. However, large patient registries are sometimes also inconvenient as they frequently present raw data sets and, for several different reasons, they may not be immediately suitable for performing advanced statistical analyses [22]. Those large data sets usually need to be transformed, cleaned-up, and synthesized to be usable for advanced descriptive and predictive statistical analyses. Our 4-year population-based data set was composed of polymedicated home-dwelling older inpatients with multiple chronic conditions, hospitalized and perhaps rehospitalized in a hospital center in the French-speaking part of Switzerland. The data came from multiple data set sources and were not easily exploitable for advanced statistical analyses, forcing the research team to explore and develop a synthesizing strategy for a large set of variables so as to respond to our research question. Synthesizing a large number of heterogeneous variables in a finite set of specific medical, clinical, and medication data groups was carried out using the principles of cluster methodologies [30,32] and following Olsen's recommendations for best practices in the analysis of population-based registries [22]. Most of the variables documenting patients' health status fulfilled the criteria for clustering into different groups according to the dimensions of their health status. Despite the existence of a large number of clustering algorithms, we observed that clustering variables remains a challenge [37]. First, our data set covered a large number of different domains, and it is often the case that clustering algorithms must be applied to heterogeneous sets of variables, creating an acute need for robust, scalable clustering methods for mixed continuous and categorical-scale data [45]. Current clustering methods for mixed-type data are generally unable to equitably balance the contributions of continuous and categorical variables without strong parametric assumptions. Second, stable cluster analysis is strongly dependent on the data set, especially on how well separated and how homogeneous the clusters are. In the same clustering exercise, some clusters will be more or less stable than others [46]. To overcome this challenge, our study used a combined empirical and statistical approach. In the empirical approach, the variables in the clusters and subclusters were selected following expert opinion (FP, HV, and AvG), presenting the most homogeneous groups possible within the set of variables described in the literature [47]. In the statistical approach, we used the most appropriate clustering methods and compared the results with the experts' opinions, which served as a validation tool to address any possible subjectivity in those opinions. Both methods were implemented independently and compared. This approach was similar to that used in 2 recent studies exploring frailty and comorbidity patterns [27,28]. 
Although this study developed 6 clusters based on best practices and the previously mentioned empirical statistical approach, other underlying subclusters could also be present within them. This was also noted in the study by Newcomer et al [48], which used agglomerative hierarchical clustering methods to identify clinically relevant subclusters based on groupings of coexisting conditions in a large sample of hospitalized adults. The present study demonstrated that constructing subclusters should not rely solely on an explicit statement indicating the worst outcome, such as death. Clinical indicators documenting functional deterioration leading to a progressive decline and a poor health status were integrated into the 7 clustered data sets. A recent population-based registry study by Vuik et al [49] confirmed the utility of this kind of approach and concluded that health status could not only be based on sociodemographic characteristics and medical diagnoses such as age or morbidity, but should also consider specific assessments of clinical care and patient function. The procedure used in this study can be summarized as a 7-step approach to transforming and synthesizing a raw, multidimensional, hospital patient registry data set into an exploitable database: 1. Write a protocol including a problem statement, research questions or hypotheses, and data extraction methods incorporating inclusion and exclusion criteria. 2. Explore the hospital register's data catalog (content of administrative, clinical, medical, and drug data; frequency of assessment; types of measurement, such as health scores, structured observations, and free text; as well as the period of data available) in collaboration with the hospital's clinical data warehouse. 3. Request ethical approval from an ethics committee for the use/reuse of existing patient data. 4. Select the most appropriate data for responding to the research questions/hypotheses. 5. Prepare the data set for further analysis by extracting hospital register data into a CSV (.csv) or Excel (.xls) format, cleaning the data in that format's file, and importing the data set into a statistical package such as R, SPSS, or STATA. 6. Analyze missing data and define strategies to address missing values based on best practice. 7. Synthesize the data with regard to the research questions by recoding and clustering. Strengths and Limitations The strengths of our retrospective registry study lie in its huge sample, allowing us to explore the data's variability and homogeneity in depth. However, clustering data risks reducing their variability and the information that can be extracted from them, and some clinical variables showed a substantial number of missing values. This fact raises questions about the accuracy and quality of the clinical data assessed, which would require measures of interrater reliability among the health care professionals inputting data into the registry. However, because this was beyond the study's aims, we did not explore interrater scores of clinical assessments or health care professionals' scoring of routinely assessed clinical data. Another limitation of our study was that the sample was restricted to inpatients aged 65 years or older. Because this retrospective, register-based study was part of a larger project [50] focused on medication management among polymedicated, home-dwelling older adults with multiple chronic conditions, we did not have the ethics committee's approval to extend our extraction of data from the hospital register to all hospitalized adults.
Furthermore, our analysis did not consider medicines prescribed before hospital admissions due to a lack of data accuracy and validity. Finally, and surprisingly, our hospital data set revealed a low mortality rate. Considering the incidence of death in the region, our database appears to be limited in its representativeness of mortality. Older inpatients presenting with a severe functional decline or at the end of their life probably left the hospital early to die at home or in a nursing home/intermediate care clinic. Research Perspectives Transforming and synthesizing electronic health records is an intermediate stage in the process of subsequently investigating risk profiles and predictive and survival outcomes. Proceeding to these types of analyses requires that each patient has a personal identifier (PID) for computing survival, predictive risk factors, re-admission rates, unplanned institutionalization, and other clinical outcomes explored in cohort and case-control studies. In addition, survival analysis must be performed up to 18 months after discharge, which is beyond our data analysis cut-off point. Within the framework of a trajectory analysis of health care, all the longitudinal data on 1 patient should be on the same horizontal line in the spreadsheet used for calculations. To do this, each patient must have a unique code allowing data to be linked across multiple hospitalizations, as sketched below. Risk and predictive analyses could be organized using multiple linear or logistic regression models (generalized estimating equations [GEE]). In this study, the data synthesized to date will enable our research to be completed with additional longitudinal survival analyses. The construction of sequences of hospitalizations and rehospitalizations will allow us to better understand the impact of certain events from a longitudinal perspective. The registry data have some limitations because observations are not equally spaced in time, nor do they all start from the same point, as data collection began in 2015. However, this study promises to provide valid and robust results, because, despite the sample period, the next hospitalization may in fact be the best measure of treatment impact. For instance, the consequences of treatment decisions taken during one hospitalization (such as medications prescribed or surgical interventions) might only be measurable when the older inpatient needs to be rehospitalized. Yet those unequal periods between hospitalizations may actually prove to be advantageous because they provide a period of effect, that is, a period selected naturally by the evolving health status specific to each older inpatient (eg, inappropriate treatments make inpatients return to hospital at the exact moment their health worsens). A survival analysis would need to be performed to measure the impact of each important intervention (medical act or medication prescription).
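To make the proposed trajectory-oriented reshaping concrete, the sketch below links hospitalizations by a patient identifier, computes the naturally varying interval from one discharge to the next admission (the period of effect mentioned above), and lays each patient's episodes out on a single row. It is a minimal pandas sketch; the file and column names (pid, admission_date, discharge_date) are hypothetical.

```python
import pandas as pd

# Hypothetical long-format extract: one row per hospitalization, linked by patient identifier
hosp = pd.read_csv("hospitalizations.csv", parse_dates=["admission_date", "discharge_date"])
hosp = hosp.sort_values(["pid", "admission_date"])

# Interval from each discharge to the patient's next admission (NaN for the last episode)
hosp["days_to_next_admission"] = (
    hosp.groupby("pid")["admission_date"].shift(-1) - hosp["discharge_date"]
).dt.days

# Number the episodes per patient and lay them out on one row per patient
hosp["episode"] = hosp.groupby("pid").cumcount() + 1
wide = hosp.pivot(index="pid", columns="episode", values="days_to_next_admission")
print(wide.head())
```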
9,752.4
2020-09-09T00:00:00.000
[ "Medicine", "Computer Science" ]
Why Moist Dynamic Processes Matter for the Sub‐Seasonal Prediction of Atmospheric Blocking Over Europe In recent years, there has been growing evidence that latent heat release in midlatitude weather systems such as warm conveyor belts (WCBs) contributes significantly to the onset and maintenance of blocking anticyclones (blocked weather regimes). Still, numerical weather prediction (NWP) and climate models struggle to correctly predict and represent atmospheric blocking in particular over Europe. Here, we elucidate the representation of WCB activity in 20 years of extended winter (1997–2017) of European Centre for Medium‐Range Weather Forecast's IFS reforecasts around the onset of blocking over Europe (EuBL) employing different perspectives. First, we show that the model struggles to predict EuBL onsets already at 10–14 days lead time in line with a misrepresentation of WCB activity in the ensemble mean. However, we also find cases with accurate EuBL forecasts even in pentad 4 (15–19 days). This subset of successful forecasts at extended‐range lead times goes in line with accurate WCB forecasts over the North Atlantic several days prior to the blocking onset. Second, investigating the time‐lagged relationship of blocking onset and WCB activity, we find that WCB activity over the North Atlantic emerges well prior to the onset of the block and that different pathways into EuBL exist in the reforecasts compared to reanalysis. Finally, we find indication of predictability associated with a Rossby wave train emerging from the North Pacific. Although our study can not disentangle the roles of intrinsic predictability limits and model deficiencies, we show that correct predictions of EuBL go along with distinct patterns of WCB activity. Introduction Atmospheric blocking describes the formation of persistent, large-scale anticyclonic circulation anomalies that block the westerly flow and eastward propagation of synoptic eddies (Berggren et al., 1949;Rex, 1950).Blocking can be associated with extreme weather events such as flooding in adjacent regions, heat waves or thunderstorm episodes in summer (Mohr et al., 2019;Oertel, Pickl, et al., 2023;Pfahl & Wernli, 2012) and cold spells in winter (Buehler et al., 2011;Ferranti et al., 2018).Thus, the accurate prediction of blocking on sub-seasonal to seasonal (S2S) time scales (more than 10-15 days but less than a season ahead, depending on the definition) is desirable for decision makers to prepare for extreme weather events and to issue early warnings.Early blocking studies developed theories for the formation and maintenance of blocking by planetary waves or orographic forcing (Charney & DeVore, 1979;Hoskins & Karoly, 1981).However, they could not explain some observed characteristics such as the rapid onset (Nakamura & Huang, 2018) or the fluctuation in size and intensity during the blocking life cycle (Dole, 1986). These restrictions point to the importance of transient eddies and synoptic-scale processes for the formation and maintenance of atmospheric blocking (Shutts, 1983).Until recently, the evaluation of these processes has almost exclusively been done on the basis of dry dynamics (Colucci, 1985;Yamazaki & Itoh, 2013).However, Pfahl et al. 
(2015) showed that moist dynamic processes, in particular latent heat release (LHR) due to cloud formation, are of first-order importance for the onset and maintenance of atmospheric blocking.An overview on how midlatitude LHR affects the large-scale circulation is given in various studies (e.g., Grams & Archambault, 2016). Here we briefly summarize the main findings. Intense LHR occurs in poleward ascending air streams in the warm sector of extratropical cyclones, in so-called warm conveyor belts (WCBs) (Madonna et al., 2014;Wernli, 1997).In a WCB relatively warm and humid air from the lower troposphere starts ascending slantwise along the tilted (moist-)isentropes of a baroclinic zone.Condensation sets in and the associated LHR accelerates vertical motion and facilitates cross-isentropic ascent.This injects an air mass into the upper troposphere resulting in divergent outflow, a lifting of the tropopause, a poleward deflection of the upper-level jet (Grams et al., 2011;Pomroy & Thorpe, 2000) and ultimately the formation of an upper-level ridge.If sustained for a longer time period, the diabatically enhanced outflow can result in the onset of a block (e.g., Grams & Archambault, 2016;Maddison et al., 2019;Riboldi et al., 2019;Steinfeld & Pfahl, 2019).To quantify the effect of diabatically enhanced outflow for the onset and maintenance of blocks, Steinfeld et al. (2020) conducted numerical experiments for various case studies suppressing diabatically enhanced outflow through switching off LHR.Without LHR and its associated divergent outflow blocks were found to be considerably weaker or even absent.Using a quantitative diagnostic framework based on potential vorticity thinking, Teubler and Riemer (2021) and Hauser et al. (2023aHauser et al. ( , 2023b) ) confirmed the importance of moist dynamics for the amplification of upper-tropospheric ridges and blocking. WCBs are challenging to predict due to the small-scale cloud-microphysical processes associated with their air stream (Oertel et al., 2020;Oertel, Miltenberger, et al., 2023) thereby facing both intrinsic limits of predictability and problems in an accurate representation in models.In a previous study, we could show that skillful predictions of WCBs in current numerical weather prediction (NWP) models are possible only until around 8-10 days (Wandel et al., 2021).Despite apparent intrinsic limits of predictability for the WCB, for example, Maddison et al. (2020) showed that there is still room for improvements in the representation of diabatic processes in NWP models which again have the potential to improve the prediction of blocking. The representation of atmospheric blocking in NWP and climate models has been investigated in numerous studies in the last two decades.Many studies point to the underestimation of blocking frequency (negative bias) over the European region (d'Andrea et al., 1998;Masato et al., 2014).This bias increases with longer lead time (Jia et al., 2014;Quinting & Vitart, 2019) and can be reduced through higher horizontal and vertical resolution (Anstey et al., 2013;Davini et al., 2017;Dawson et al., 2012).Remarkably, comparing the forecast skill for different weather regimes, Büeler et al. 
(2021) found that the year-round forecast horizon for blocking over the Central European region is 3-5 days shorter compared to other large-scale flow regimes.NWP models particularly struggle predicting the onset of EuBL (Ferranti et al., 2015;Rodwell et al., 2013).These difficulties can partly be linked to its lower intrinsic predictability (Faranda et al., 2016;Hochman et al., 2021), but might also be a result of physical processes, such as LHR in WCBs, which are still difficult for the models to accurately capture (cf.Maddison et al., 2020). For an individual case study, Grams et al. (2018) highlighted the role of WCBs for the onset of blocking over Europe in one of the most severe forecast busts in the ECMWF's integrated forecasting system (IFS) in the last decade.They find that a misrepresentation of the WCB in the ensemble forecasting system amplified the initial condition error and triggered a nonlinear feedback mechanism.The WCB communicated the forecast error from small scales to the upper troposphere and downstream, which led to the missed onset of the block.Other studies also point to the amplification of errors in the WCBs (Pickl et al., 2023) or highlight the generation of errors in potential temperature and potential vorticity in the WCBs, which can lead to downstream errors in the Rossby wave pattern (Berman & Torn, 2019;Martínez-Alvarado et al., 2016).Moreover, teleconnections from the Caribbean and North Pacific region affect the occurrence of large-scale weather regimes, including blocking, in the European region (Michel & Rivière, 2011;Michel et al., 2012) and the teleconnection itself is again modulated by WCB activity (Quinting et al., 2024). However, so far a systematic investigation of the role of WCBs for the prediction of atmospheric blocking over Europe is still missing, primarily due to the lack of a diagnostic framework. Here, we employ different perspectives to investigate, for the first time, the systematic link between WCBs and blocking in ECMWF's IFS S2S reforecasts and reanalysis in the extended winter period from 1997 to 2017.We address the following three research questions: • What is the link between WCB activity and EuBL onset and how well is it represented in reforecasts at different forecast lead times?• Is there a link between WCB representation and correct forecasts of EuBL onset? • Do teleconnections from the North Pacific region play a role for the prediction of EuBL? We focus on EuBL onsets in different pentads (lead times of 0-4 days, 5-9 days, 10-14 days, and 15-19 days) since forecast skill for Atlantic-European weather regimes and WCBs on average vanishes in week 2 (7-14 days) (Büeler et al., 2021;Osman et al., 2023;Wandel et al., 2021).Onsets of large-scale and persistent flow regimes at lead times of 5-20 days are of particular interest from a sub-seasonal prediction perspective, because, due to their persistence, they strongly influence the circulation even beyond lead times of 20 days. The data, the definition of EuBL, and the Eulerian metric to identify WCBs are introduced in Section 2. 
We then elucidate the potential link of WCB activity and EuBL onset from four different angles.(a) In Section 3, we first investigate how well WCB activity around EuBL onset is represented in the ensemble mean of IFS reforecasts at different forecast lead times.(b) Section 4.1 then explores potential causalities between the representation of WCB activity in individual ensemble members and the prediction of EuBL onset.(c) Furthermore, we explore the representation of the time-lagged evolution of WCB activity prior to EuBL onset depending on the ability of the model to predict EuBL onset in Section 4.2.(d) Finally, the role of upstream precursors from the Pacific for the prediction of EuBL is analyzed in Section 5 and the study ends with concluding remarks in Section 6. Reforecasts and Reanalysis We use the ECMWF's IFS sub-seasonal ensemble reforecasts (Vitart, 2017) for the extended winter period (NDJFM) from 1997 to 2017 to analyze WCBs and 500-hPa geopotential height (Z500).The ensemble reforecasts contain in total 11 members, of which one member is an unperturbed control forecast.To increase our sample size, we use all reforecasts for IFS cycles CY43R1, CY43R3, and CY45R1, which were operational between 22.11.2016 and 11.6.2019.Reforecasts were computed twice-weekly during that period for the respective calendar day and the last 20 years.This yields a total of 1,641 initialization times in the study period 1997-2017.Consistently with the initial conditions of the reforecasts, we employ ERA-Interim reanalysis data (Dee et al., 2011a) for verification.Both data sets are retrieved on a regular 1.5°× 1.5°latitude-longitude grid and remapped to 1°× 1°grid spacing using linear interpolation.We calibrate the reforecasts by calculating WCB and Z500 anomalies relative to the 90-day running mean model climatology at a given lead time derived from the 20-year reforecast data using all cycles.Anomalies for ERA-Interim are computed against ERA-Interim climatology for 1997-2017.This approach eliminates the systematic bias between ERA-Interim and the reforecasts. Atlantic-European Weather Regimes To identify blocking over the European region, we use seven year-round Atlantic-European weather regimes based on 5-day low-pass-filtered geopotential height (Büeler et al., 2021;Grams et al., 2017).Thus, we refer to atmospheric blocking using the definition of blocked weather regimes.Weather regimes are quasi-stationary, persistent, and recurrent large-scale flow patterns in the midlatitudes (Michelangeli et al., 1995;Vautard, 1990) and reflect the variability of the large-scale extratropical circulation on sub-seasonal timescales.An accurate prediction of large-scale flow regimes is particularly important since it yields more useful information about different surface variables (e.g., temperature and precipitation) after forecast day 10-15 compared to the direct NWP model output (Bloomfield et al., 2021;Mastrantonas et al., 2022).Blocking over the European region (EuBL) is the dominant blocked regime in winter (compared to "Scandinavian Blocking" in summer) and occurs at around 11% of winter days.For the computation of the regime patterns the interested reader is referred to Büeler et al. (2021). 
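Returning to the calibration of the reforecasts described above, a minimal xarray sketch of the remapping and anomaly computation might look as follows. The file, variable, and dimension names (init_time, lead_time, member, lat, lon) are hypothetical, and the 90-day running mean does not wrap around the calendar year here; the sketch only illustrates the idea of removing a lead-time-dependent model climatology from the remapped fields.

```python
import numpy as np
import xarray as xr

# Hypothetical reforecast file with dims (init_time, lead_time, member, lat, lon) on a 1.5 deg grid
ds = xr.open_dataset("ifs_reforecast_z500.nc")

# Remap to 1 deg x 1 deg grid spacing using linear interpolation
z500 = ds["z500"].interp(
    lat=np.arange(-90, 90.1, 1.0), lon=np.arange(0, 360, 1.0), method="linear"
)

# Lead-time-dependent model climatology: mean over members and the 20 reforecast years
# for each initialization calendar day, smoothed with a 90-day running mean
clim = (
    z500.mean("member")
    .groupby("init_time.dayofyear")
    .mean("init_time")
    .rolling(dayofyear=90, center=True, min_periods=1)
    .mean()
)

# Calibrated anomalies: subtract the climatology valid for each initialization day and lead time
z500_anom = z500.groupby("init_time.dayofyear") - clim
```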
Warm Conveyor Belts The stages of WCB inflow, ascent, and outflow are identified using a novel framework of convolutional neural networks (CNNs) introduced by Quinting and Grams (2022a).The CNN-based metric (ELIAS2.0) is designed to evaluate WCBs in large data sets at low spatio-temporal resolution for which the original trajectory-based WCB definition (Wernli & Davies, 1997) is not applicable.The method now facilitates for the first time a systematic study of WCBs in a large data set.ELIAS2.0 takes five meteorological parameters as predictors, which are characteristic of each WCB stage.It then predicts conditional probabilities of occurrence of the respective WCB stage from which we derive two-dimensional binary WCB footprints by applying grid point specific decision thresholds.Four of the predictors are derived from temperature, geopotential height, specific humidity and the horizontal wind components at the different isobaric surfaces.For WCB ascent the fifth predictor is the climatological occurrence frequency according to the trajectory-based WCB data.For WCB inflow (outflow) the fifth predictor is the conditional probability of WCB ascent predicted by ELIAS2.0 24 hr before (after) the considered inflow (outflow) time. The CNN method successfully reproduces the climatological distribution of WCBs (Quinting & Grams, 2022a) which was first found with the trajectory-based approach (Madonna et al., 2014).Moreover, the CNN method skillfully identifies WCBs at instantaneous time steps (Quinting & Grams, 2022a). Significance Testing To corroborate our main findings, we test the ERA-Interim-based anomalies of WCB inflow, WCB outflow, and Z500 for statistical significance using a Monte Carlo approach.Considering all EuBL dates (see Section 2.5.1),we randomly generate 1,000 series of equal sample size.The dates in each random series are composed of a randomly chosen year from the 1997-2017 reforecast period and a randomly chosen day.In order to account for the seasonal variability, each day is chosen from a 14-day period around the corresponding date in the EuBL date series.We then take the average for each of the 1,000 series and determine the 5th and 95th percentiles of the distribution.Anomalies of the original EuBL date series that either fall below the 5th percentile or exceed the 95th percentile are considered to be significant. Forecast Perspective and Lead Times In our study, we focus on EuBL events in the extended winter period from 1997 to 2017.Following Grams et al. (2017) and Büeler et al. 
(2021), an EuBL onset is identified at the first time when the respective weather regime index I wr exceeds a threshold of 0.9 and remains above this threshold for at least five consecutive days.In total, there are 38 EuBL "unique events" in ERA-Interim in NDJFM during 1997-2017.In the following, we refer to onset in a given pentad, if the onset date lies within that pentad (see schematic in Figure 1).In order to directly compare ERA-Interim to the reforecasts, we treat ERA-Interim as a "perfect ensemble member" for each forecast initialization time and identify EuBL onset and life cycles in the same manner as for individual ensemble members of the reforecasts.Thus, we match ERA-Interim to each available reforecast initialization time and lead time.When using ERA-Interim as perfect ensemble member for all available forecasts, the number of EuBL onsets increases because the 38 unique events can be captured multiple times in consecutive forecasts.We call this the "forecast perspective."In the forecast perspective 98 EuBL onsets are found as the 38 ERA-Interim events are captured on average by 2.6 forecasts (cf.discussion of Table 1 in Section 2.5.2).This approach weights the ERA-Interim events according to the available initialization times of the reforecasts and allows for a direct comparison to the events in the reforecasts.Reforecasts are evaluated in pentad 2 (forecast day 5-9), pentad 3 (day 10-14) and pentad 4 (day 15-19), but the main focus of the study is on pentad 3. Different Approaches to Link WCBs and EuBL To assess the role of WCBs for the onset of EuBL, we analyze Z500 and the WCB activity using two different approaches: First, we calculate 5-day mean composites around ERA-Interim onsets to understand characteristics of EuBL in reanalyses and to evaluate the representation of the patterns in the reforecasts at lead times of pentad 2, pentad 3, and pentad 4. We focus on pentads for fixed lead times rather than centered composites around the actual onset date to avoid biases due to mixing different forecast lead times.Second, we investigate the 6 days prior to onset using lagged composites which are stratified according to the individual onset dates in reanalysis and reforecasts.This approach allows for a direct comparison of the time-lagged evolution of the fields while giving hints to potential precursors of the blocked regime.Furthermore, we distinguish between the ensemble mean of the reforecasts and individual ensemble members which are selected depending on their forecast skill.On the one hand, the ensemble mean of the reforecasts is used to evaluate their ability in representing ERA-Interim EuBL onsets across different lead times.On the other hand, single ensemble members from different initialization times are grouped together, depending on their ability to represent EuBL to explore potential deficiencies in the model related to the link of WCB activity and blocking onset.Ensemble members that correctly (within 2 days of the verifying date) capture an ERA-Interim EuBL onset are defined as "Hits," members which do not capture the onset as "Misses."Furthermore, we include all ensemble members in our study that predict an EuBL onset while no event is analyzed in ERA-Interim ("False Alarms"). 
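A minimal sketch of this onset criterion applied to a daily regime-index series is given below; the index series is assumed to be available as a pandas Series, and the function simply returns the first day of every run of at least five consecutive days with the index above 0.9.

```python
import pandas as pd

def eubl_onsets(iwr, threshold=0.9, min_days=5):
    """First dates of runs where the EuBL regime index stays above `threshold`
    for at least `min_days` consecutive days.

    iwr : pd.Series of the daily EuBL weather-regime index with a DatetimeIndex.
    """
    above = iwr > threshold
    run_id = (above != above.shift(fill_value=False)).cumsum()  # label consecutive runs
    onsets = []
    for _, run in iwr[above].groupby(run_id[above]):
        if len(run) >= min_days:
            onsets.append(run.index[0])
    return onsets
```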
As stated above, we find 38 "unique EuBL events" in ERA-Interim in the 20-year period 1997-2017 which can occur in either pentads 2, 3, or 4 of the available reforecasts when ERA-Interim is used as perfect ensemble member.Accounting for consecutive forecast initialization times according to the "forecast perspective," these are captured in 98 of the available reforecasts irrespective of the pentad (not shown).Thus, 98 × 11 ensemble members = 1,078 individual forecasts could either identify (Hit) or not identify (Miss) these EuBL onsets (cf.Table 1 last column sum of Hit and Misses for different pentads, and discussion below).In addition, forecasts can issue a false alarm. For 34 out of the 38 unique ERA-Interim events, there is at least one ensemble member in at least one of the available initialization times that correctly captures the EuBL onset (Hit) at a leadtime of pentad 2 (4 unique events are completely missed) (Table 1 top row).This number of captured unique events decreases for onsets at lead times of pentad 3 and 4 (29 and 26 unique events, respectively).In fact, each unique EuBL event is captured by more than one forecast since the reforecasts are initialized multiple times per week (see "forecast perspective" with 72/45/34 initial times with a Hit for EuBL onsets at lead times of pentad 2/3/4).When the EuBL is captured by these forecasts, most of the times 1-2 ensemble members correctly predict the onset in pentad 2. However, there are even events that are correctly predicted by 3-11 ensemble members, which results in a total of 271 individual ensemble members (25% of all possible ensemble members) capturing an EuBL onset in pentad 2. For correctly predicted onsets in pentad 3, this number decreases to 65 (6%) with the events being mostly captured by 1-2 ensemble members and some by 3-4.In pentad 4, there are 41 ensemble members (3% of all possible ensemble members) that capture the 26 unique EuBL events.These numbers illustrate that the accurate representation of EuBL becomes more challenging with forecast lead time.Still, in pentad 4, 26 out of the 38 unique events are captured by at least one ensemble member, which provides robustness to our further analysis. The Misses category contains all ensemble members which do not predict the onset of the observed EuBL event (middle rows in Table 1).For all pentads, all of the 38 observed unique events belong to Misses category, meaning that there is at least one ensemble member in at least one available forecast initialization time missing the onset.In fact, except for pentad 2, in all of the 98 available reforecasts there is at least one member missing the onset (forecast perspective).For pentad 2, there is one initial time with all 11 members correctly capturing the onset, therefore only 97 of the available 98 initialization times belong to the Misses category.We also note that the number of "ensemble members" in Hits and Misses must add up to 1,078 which is the number of available individual ensemble members (see begin of this section).There is a very high number of individual ensemble members missing EuBL onset, consistent with the fact that on average only 1-2 ensemble members belong to the Hits category.Naturally, the number of ensemble members in the Misses category is highest for onsets in pentad 4 when the number of Hits is lowest. 
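The bookkeeping behind these counts can be illustrated with a small helper that classifies one ensemble member for one forecast; the 2-day tolerance follows the Hit definition given above, and the input structure (a list of predicted onset dates per member and the verifying ERA-Interim onset, or None) is an assumption made for illustration.

```python
def categorize_member(member_onsets, era_onset, tolerance_days=2):
    """Classify one ensemble member against the verifying ERA-Interim onset.

    member_onsets : list of onset dates predicted by the member within the pentad (may be empty)
    era_onset     : verifying ERA-Interim onset date, or None if no EuBL onset occurred
    """
    if era_onset is not None:
        hit = any(abs((d - era_onset).days) <= tolerance_days for d in member_onsets)
        return "Hit" if hit else "Miss"
    # members predicting an onset that is not observed issue a false alarm;
    # members with neither a predicted nor an observed onset fall outside the three categories
    return "False Alarm" if member_onsets else "No event"
```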
Lastly, the ensemble members which predict an EuBL onset that is not observed in ERA-Interim make up the False Alarms category (bottom rows in Table 1).Out of the 1,641 forecast initialization times in the reforecast period, there are 362 with at least one ensemble member with a False Alarm in pentad 2 (forecast perspective). The number of forecast initialization times with False Alarms is even higher in pentads 3 and 4. In fact on average 1-2 ensemble members in each of these forecasts issue a false alarm although we found individual initial times with 3-8 false alarm members in pentad 2. The Role of WCBs for EuBL Prediction In this Section, we start our exploration of the potential link between WCB activity and EuBL onset by first focusing on ERA-Interim and the ensemble mean reforecast.To this end we investigate the spatial patterns of Z500, as well as WCB inflow and outflow occurrence frequency anomalies around EuBL onsets in pentad 3 (10-14 days) for both data sets. The 5-day mean ERA-Interim Z500 field shows a significant positive Z500 anomaly of more than 90 gpm extending from western Europe to Scandinavia (Figure 2c) following the time of EuBL onset.This anomaly reflects the developing block over Europe.Upstream, a negative anomaly ( 60 to 90 gpm) indicates a trough over the western and central North Atlantic.The strong Z500 anomalies are in line with the climatological pattern of the EuBL regime (green dashed contour in Figure 2, see Grams et al. (2017)).The large-scale circulation over North America and the North Pacific indicates a weakly undulated jet stream (dense isohypses, black in Figure 2c) with an anomalous Rossby wave packet along the midlatitude jet (reflected in pairs of negative Z500 anomalies over the western part of the North Pacific/ western North America and positive Z500 anomalies over the eastern North Pacific/East coast of North America).Anomalies over North America and the North Atlantic are statistically significant.We note that Rossby wave activity upstream of blocking over Europe and the embedment of blocking into an amplified Rossby wave train emerging from the Caribbean has already been established by various studies (e.g., Altenhoff et al., 2008;Michel & Rivière, 2011).We further discuss upstream precursors and the role of Rossby wave activity in Section 5. Significantly enhanced WCB inflow occurs during the EuBL onset in a region stretching from south of Newfoundland into the central North Atlantic (5-day mean anomalies of 4%-8%; Figure 2a).The strongest WCB inflow anomalies are located just south and ahead of the amplified North Atlantic trough (negative Z500 anomaly in Figure 2c).The air masses typically converge in the WCB inflow region and are subsequently lifted to the mid and upper troposphere due to strong vertical lifting in the vicinity of surface cyclones. The air masses then reach the upper troposphere further to the northeast of the inflow region.Indeed, around EuBL onsets, significantly enhanced WCB outflow frequencies occur northeast of the inflow region in an area around eastern Greenland, Iceland and over the Norwegian Sea (Figure 2b).Here, the 5-day mean outflow frequency anomalies reach 4%-6%, that is, more than double the climatological frequency in that region.The air masses likely influence the upper-level ridge building (cf.Grams et al., 2018;Steinfeld & Pfahl, 2019, and summary in introduction for the physical processes involved), which subsequently leads to the onset and persistence of the block over Europe. 
In summary, we find a co-occurrence of WCB outflow and the emergence of a strong positive Z500 anomaly in the region of the block in the climatological composite of all EuBL onsets in NDJFM 1997-2017.Although we can not infer causality from these results, they suggest that WCBs might play a vital role in the formation of the blocked regime over Europe consistent with prior work (Pfahl et al., 2015;Steinfeld & Pfahl, 2019). Before investigating how well these patterns are represented in IFS reforecasts, we evaluate the overall forecast skill for predicting WCBs (Figure 3).We here discuss the Fair Brier Skill Score (FBSS) (Ferro, 2014) for WCB outflow frequencies based on the CNN-based WCB metric ELIAS2.0(Quinting & Grams, 2022a).This also provides an update for the WCB skill assessment with an earlier version of the Eulerian WCB metric using logistic regression instead of the CNN (Wandel et al., 2021).This analysis shows that the WCB forecast skill vanishes at medium-range forecast lead times in forecast week 2 (between day 8-14) with the exponential decay of WCB skill leveling off between pentad 2 (day 5-9) and 3 (day 10-14).While pentad 2 still shows some WCB skill, forecasts for pentad 3 are only slightly better than a climatological reference forecast.Therefore, to be able to distinguish between pentad 2 with some skill and pentad 3 with hardly skill, we focus here on pentads rather than weeks. In the following, we briefly evaluate how well the ensemble mean reforecasts can predict the large-scale circulation (in terms of Z500) anomalies and WCB activity around EuBL onsets as found for ERA-Interim (Figure 2).We show the same fields as in Figure 2 but for the ensemble mean of reforecasts and EuBL onsets during pentads 2, 3, and 4 (Figure 4).In pentad 2 the ensemble mean reforecast is able to represent WCB inflow, its associated outflow and the downstream Z500 pattern of the incipient block accurately (Figures 4a-4c).Overall the anomalies are somewhat weaker compared to ERA-Interim (Figure 2).This has to be expected as we look at the ensemble mean with only 25% of the ensemble members correctly predicting EuBL onset in pentad 2 (cf.discussion of Table 1 in Section 2.5.2).In pentad 3, and in particular in pentad 4 the anomaly patterns are hardly discernible from climatology, reflecting the little skill of the ensemble mean to predict EuBL onset.In summary, we find that the IFS ensemble mean reforecast fails predicting the Z500 anomaly associated with EuBL onset from pentad 3 onwards along with a lack of predicting the co-occurring WCB activity.Still in pentad 2, we find a consistent pattern of WCB inflow and outflow upstream of the emerging Z500 anomaly with a somewhat different spatial structure of WCB outflow frequency anomalies in reforecasts compared to ERA- Interim.Interestingly, the reforecast consistently shows an upstream Rossby wave pattern over North America also in pentad 3. The Link Between WCB Activity and Correct Predictions of EuBL Onset The previous section established that WCB outflow is enhanced upstream of the positive Z500 anomaly associated with EuBL in both ERA-Interim and IFS reforecasts at least for pentad 2. 
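For reference, the fair Brier score used for the skill assessment in Figure 3 can be written down explicitly. The sketch below follows the ensemble formula of Ferro (2014); treating the climatological reference as a constant probability scored with the ordinary Brier score is an assumption made here for illustration and need not match the exact reference forecast used in the figure.

```python
import numpy as np

def fair_brier_score(i, m, o):
    """Fair Brier score (Ferro, 2014): i of m members forecast the event, o is the 0/1 outcome."""
    i, o = np.asarray(i, dtype=float), np.asarray(o, dtype=float)
    p = i / m
    return (p - o) ** 2 - i * (m - i) / (m ** 2 * (m - 1))

def fair_brier_skill_score(i, m, o, p_clim):
    """Skill relative to a constant climatological probability forecast p_clim."""
    fbs_forecast = np.mean(fair_brier_score(i, m, o))
    bs_reference = np.mean((p_clim - np.asarray(o, dtype=float)) ** 2)
    return 1.0 - fbs_forecast / bs_reference

# Example: 11-member ensemble evaluated on 3 forecast occasions
print(fair_brier_skill_score(i=[2, 9, 0], m=11, o=[0, 1, 0], p_clim=0.2))
```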
Comparing ensemble members which accurately predict (Hits) EuBL onset, do not predict it (Misses), or predict EuBL onset when no onset is observed (False Alarms) allows us to shed more light on the question if the WCB activity is mere coincidence or more systematically linked to EuBL onset.Thus, we divide the individual ensemble members in the three categories depending on their ability to predict EuBL onset (see Section 2.5.2). We now focus on pentad 3 in which 65 ensemble members capture the observed EuBL onset.The analysis of the Z500 field shows large positive Z500 anomalies of more than 90 gpm centered over the British Isles (Figure 5c) consistent with the circulation anomalies around the onset in ERA-Interim (Figure 2c).The negative anomalies over the western and central North Atlantic are even larger for the Hits compared to ERA-Interim.In line with the Z500 pattern over the North Atlantic and Europe, we find strongly enhanced WCB frequencies over the central North Atlantic for the WCB inflow and centered around Greenland and Iceland for the WCB outflow (Figures 5a and 5b).As for Z500, the WCB anomaly patterns strongly resemble the 5-day mean frequency anomalies around EuBL onsets in ERA-Interim (Figures 2a and 2b).This finding holds even for the 41 ensemble members with a hit in pentad 4 (Figures S1a-S1c in Supporting Information S1). However, the bulk of ensemble members (1,013 of 1,078) misses the EuBL onset in pentad 3 and consistently these ensemble members only show a weak positive Z500 anomaly which is displaced to the northeast compared to ERA-Interim (Figure 5f).Interestingly, there is hardly any enhanced WCB activity in the North Atlantic region (Figures 5d and 5e).Likewise in pentad 4, ensemble members in the Misses category fail predicting the Z500 pattern and do not capture the enhanced WCB activity over the North Atlantic at all (Figures S1d and S1e in Supporting Information S1). Our findings are in line with the climatological evidence regarding the chain of physical processes by which LHR can affect blocking (e.g., Grams & Archambault, 2016;Hauser et al., 2023b;Pfahl et al., 2015;Steinfeld & Pfahl, 2019;Teubler et al., 2023).We conclude that there is a strong link between the correct representation of local WCB activity in the North Atlantic-European region and the correct representation of the large-scale circulation around EuBL onsets in NWP models.This holds even for lead times beyond 10 days when the average WCB forecast skill has already vanished and the prediction of EuBL becomes increasingly challenging (Figure S1 in Supporting Information S1).Thus, a correct prediction of WCBs could provide a window of forecast opportunity for the prediction of blocked regimes over Europe even on sub-seasonal time scales. 
Time-Lagged Evolution Prior to EuBL Onset So far, we analyzed the onset of EuBL based on ERA-Interim, the ensemble mean of the reforecast and for different subcategories.While this approach gives an overview over the fields around the onset, it does not solve the chicken-and-egg problem: WCB activity might emerge prior to the onset of the regime and, via the chain of processes described in the introduction, directly result in ridge building and the development of the block over Europe.Or the block could deviate cyclones over the North Atlantic in a way so that enhanced WCB activity occurs upstream of the block.In order to shed light on this problem, we calculate lagged composites of WCB frequency anomalies in ERA-Interim and for the three subcategories of the reforecasts (False Alarms, Hits, and Misses) prior to EuBL onset.Again we focus on results for EuBL onsets in pentad 3. The WCB outflow activity in ERA-Interim prior to EuBL onsets is significantly enhanced over eastern Canada 6 to 4 days before the onset (Figure 6a).On the other hand, frequencies are below average over the central North Atlantic around Iceland and Greenland.Subsequently, the enhanced outflow activity over eastern Canada shifts eastwards 4 to 2 days prior to the EuBL onset (Figure 6e).Here, WCB frequencies are significantly enhanced from eastern Canada and the southern tip of Greenland to western Europe (anomalies around 5%). 2 to 0 days before the EuBL onset, we find a northeastward shift of the main WCB activity with highest and significantly positive frequency anomalies over the Norwegian Sea and the northwestern part of Europe (Figure 6i).Outflow frequencies are significantly lower than normal over the western North Atlantic. In summary, in ERA-Interim, we find significantly enhanced WCB outflow frequencies already during the six days before the onset of EuBL and well before a positive Z500 anomaly is evident in the European region.This enhanced WCB outflow gradually leads to ridge building upstream of the EuBL region.The associated uppertropospheric anticyclonic circulation anomaly propagates downstream (Figure S4 in Supporting Information S1) and establishes a positive Z500 anomaly in the region of the incipient block over Europe around onset (Figure S4i in Supporting Information S1).Thus, it is the WCB activity and associated outflow that precedes the onset of blocking.Interestingly, also over the North Pacific WCB activity is enhanced in the 6 days prior to EuBL onset, pointing toward a potential downstream development, which we discuss in more detail in Section 5. We now investigate to what extent the different subcategories of the reforecasts capture the enhanced WCB frequencies over the North Atlantic and Europe prior to EuBL onsets.In the False Alarms category only 4 to 2 days (Figure 6f), and in particular in the 2 days (Figure 6j) prior to a wrongly predicted EuBL onset, enhanced WCB activity emerges over Southern Greenland.The outflow anomaly is shifted westward and the branch toward Northwestern Europe is missing compared to ERA-Interim (cf.Figures 6i and 6j).Thus, in the case of False Alarms the model predicts a non-observed EuBL onset via a different pathway.Interestingly, also the upstream WCB activity over the North Pacific is weaker for False Alarms compared to ERA-interim. 
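The lagged composites used in this section, together with the Monte Carlo significance bounds described in the data section, can be sketched with xarray as follows; the anomaly field, its daily time coordinate, and the date lists are assumed given, and leap-day edge cases are ignored for brevity.

```python
import numpy as np
import pandas as pd
import xarray as xr

def lagged_composite(anom, dates, lag_start=-6, lag_end=-4):
    """Mean anomaly field over a lag window (in days) relative to a set of dates.

    anom : xr.DataArray with a daily 'time' dimension (e.g., WCB outflow frequency anomaly).
    """
    fields = []
    for d in pd.to_datetime(list(dates)):
        window = anom.sel(time=slice(d + pd.Timedelta(days=lag_start),
                                     d + pd.Timedelta(days=lag_end)))
        fields.append(window.mean("time"))
    return xr.concat(fields, dim="event").mean("event")

def monte_carlo_bounds(anom, eubl_dates, n_series=1000, half_window=7,
                       years=range(1997, 2018), seed=0, **lag_kw):
    """5th/95th percentile composite fields from randomly resampled date series."""
    rng = np.random.default_rng(seed)
    random_composites = []
    for _ in range(n_series):
        rand_dates = []
        for d in pd.to_datetime(list(eubl_dates)):
            year = int(rng.choice(list(years)))                        # random reforecast year
            shift = int(rng.integers(-half_window, half_window + 1))   # day within a 14-day window
            rand_dates.append(d.replace(year=year) + pd.Timedelta(days=shift))
        random_composites.append(lagged_composite(anom, rand_dates, **lag_kw))
    dist = xr.concat(random_composites, dim="series")
    return dist.quantile([0.05, 0.95], dim="series")
```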
Ensemble members not predicting EuBL onset when they should (Misses category, Figures 6d, 6h, and 6l) only capture the initially enhanced WCB activity over eastern North America 6 to 4 days before onset, but later no anomalous WCB activity occurs, despite some signals upstream over the North Pacific. In contrast to False Alarms and Misses, the Hits category reflects the anomalous WCB activity in ERA-Interim remarkably well (Figures 6c, 6g, and 6k). Not only do ensemble members in the Hits category reflect the anomalous WCB activity upstream over the North Pacific, but they also capture the local flow amplification over the North Atlantic. Erroneously predicted EuBL onsets in the False Alarm category establish blocking via a different pathway with outflow focused more westward over Greenland. The fact that WCB activity is already enhanced up to 6 days prior to the onset of EuBL corroborates its likely important contribution to the establishment of a persistent blocked regime over Europe. In addition, we show that blocking is correctly predicted only if forecasts capture this WCB activity. Our qualitative result is further confirmed quantitatively in a quasi-Lagrangian potential vorticity framework recently established by Hauser et al. (2023b) and Hauser (2023). The Role of Upstream Precursors We now focus on the role of upstream precursors from the North Pacific for the prediction of EuBL. So far, we found evidence for concomitant WCB activity over the North Pacific region around EuBL onsets (Figures 2a and 2b), which is linked to Rossby wave activity emerging from the western North Pacific (Figure 2c). As the downstream propagation of Rossby waves from the North Pacific toward Europe typically requires 5-10 days, we now investigate the evolution of WCB activity and Z500 fields in ERA-Interim in pentads 1 and 2 prior to EuBL onsets in pentad 3. During both pentads positive WCB inflow anomalies are found over the western North Pacific (frequencies around 20%, anomalies around 4%) and over the central North Pacific (anomalies around 4%-6%) (Figures 7a and 7d). In both regions, this might be explained by higher cyclone activity and associated WCBs ahead of upper-level troughs, as indicated by weak but non-significant negative Z500 anomalies (Figures 7c and 7f). Consistent with the WCB inflow, enhanced WCB outflow occurs downstream over the northern part of the central Pacific and further south over the eastern part of the ocean basin (Figures 7b and 7e). Over the North Atlantic WCB activity is weak and only partly significantly different from climatology. [Figure 7 caption fragment: WCB and Z500 anomalies (shading), absolute fields (contours), black dots and green contours as in Figure 2.] In summary, WCB activity is enhanced over the North Pacific 5-10 days prior to EuBL onset. However, WCB frequency anomalies as well as Z500 anomalies are hardly significant, making it difficult to identify a clear precursor pattern. We now evaluate if precursor patterns differ depending on the ability of IFS reforecasts to correctly predict EuBL onset in pentad 3.
Therefore, we first contrast ERA-Interim Z500 fields for two subcategories based on Hits and Misses in reforecasts. Note that we weight unique EuBL events according to the ability of the reforecast in predicting the respective EuBL onset. Here, this ability is measured by the number of ensemble members in the Hits and Misses category for each event. If an EuBL event is well predicted (many members in the Hits category), it is weighted more heavily in the ERA-Interim Hits subcategory and less heavily in the Misses subcategory. On the other hand, if an EuBL event is poorly predicted (only few or no members in the Hits category), it is weighted less heavily in the ERA-Interim Hits subcategory and more heavily in the Misses subcategory. The subcategory based on the Hits contains 29 of the 38 unique EuBL events while the subcategory based on the Misses contains all 38 events. Around EuBL onset in pentad 3, both subcategories show the developing block over Europe with similar positive Z500 anomalies (Figures 8c and 8f). Both subcategories also show the concomitant amplified Rossby wave pattern over the North Pacific and North America (cf. also Figure 2c). However, upstream Z500 anomalies are slightly stronger when reforecasts successfully predict EuBL (Hits) (Figure 8c). This becomes even more noticeable in pentads 2 and 1 prior to the EuBL onset. If reforecasts fail to predict EuBL onset, the upstream anomalous Rossby wave pattern is almost absent (Misses, Figures 8d and 8e). In contrast, a marked upstream Rossby wave pattern is evident in the subcategory based on Hits (Figures 8a and 8b), which emerges from the western North Pacific as early as pentad 1. Associated with the Rossby wave pattern in the Hits subcategory is anomalously high WCB activity over the North Pacific (Figures S6a and S6b in Supporting Information S1). The collocation of above-average Z500 and enhanced WCB activity in the Hits subcategory indicates conditions favorable for further amplification of the flow. [Figure 8 caption fragment: black contours show absolute Z500 fields (5100-5800 gpm, every 100 gpm); green contours as in Figure 2.] Although WCB activity over the North Pacific is also enhanced for the subcategory based on Misses (Supporting Information S1), these anomalies are smaller than in the Hits category. Together with a mean Z500 close to climatology, this indicates less favorable conditions for WCB development that would further amplify the flow. These results suggest that EuBL events with a Rossby wave packet emerging from the North Pacific already in pentad 1 enable a successful prediction of EuBL onsets in ECMWF's IFS reforecasts, while the model misses the onset prediction in the absence of this Rossby wave packet. That the presence of long-lived Rossby wave packets emerging from the North Pacific is associated with enhanced predictive skill over the North Atlantic-European region is in line with Grazzini and Vitart (2015). It overall indicates an enhanced intrinsic predictability in such situations, which may be favored by modes of intraseasonal variability such as the Madden-Julian Oscillation (MJO) that importantly modulates the North Pacific WCB activity (Quinting et al., 2024). With regard to practical predictability, however, given that small-scale processes in WCBs are only insufficiently reproduced in NWP models, the increased WCB activity may in turn reduce this gain in intrinsic predictability. Lastly, we directly evaluate the large-scale circulation in IFS reforecasts for the three subcategories Hits, Misses, and False Alarms, in pentads 1 and 2 prior to EuBL onsets in pentad 3.
Recall that False Alarms show EuBL onset independent of ERA-Interim. Consistently, the Z500 anomalies for False Alarms show the trough-ridge couplet typical for EuBL onset in pentad 3 (Figure 9g). However, upstream anomalies are weak and there is no distinct upstream Rossby wave pattern in pentads 1 and 2 (Figures 9a and 9d). IFS reforecasts missing EuBL onset in pentad 3 strongly underestimate the developing block over Europe (Figure 9i) and also feature only weak and indistinct upstream anomalies (Figures 9c and 9f). However, successful reforecasts (Hits) not only correctly represent the Z500 anomalies at EuBL onset in pentad 3 (Figure 9h), they also show a marked concomitant upstream Rossby wave pattern, evident in pentad 2, too, and emerging from the western North Pacific in pentad 1 (Figures 9b and 9e). It is noteworthy that the composites of successful EuBL onset predictions (Hits, Figures 9b, 9e and 9h) hardly differ from the corresponding Z500 patterns in ERA-Interim (Figures 8a-8c). These results corroborate that an upstream Rossby wave packet emerging from the western North Pacific 5-10 days prior to EuBL onset provides an additional potential window of forecast opportunity for EuBL in pentad 3. Conclusions In this study, we investigate Z500 and WCB activity around the onset of blocked weather regimes over Europe (EuBL) in ECMWF's IFS sub-seasonal reforecasts and ERA-Interim reanalyses (NDJFM; 1997-2017). EuBL onset is not well predicted by the reforecasts at 10-14 days lead time, which is partly due to its low intrinsic predictability (Faranda et al., 2016; Hochman et al., 2021). Our study newly suggests that for lead times beyond 10 days the model struggles to predict the flow amplification, in particular the ridge building prior to EuBL onset. We find that this is due to a strong link between the Rossby wave amplification around EuBL and enhanced WCB outflow over the central and eastern North Atlantic well before the block establishes. The model misrepresents WCB activity, which ultimately dilutes skill for EuBL forecasts. This mechanistic link between upstream WCB activity and blocking over Europe, which we found and described here in a qualitative way based on composites, has recently been confirmed by Hauser et al. (2023b) and Hauser (2023) in a quantitative potential vorticity framework. The role of WCB activity for EuBL onset was evident independently of the different perspectives taken: For EuBL onsets at early lead times in pentad 2 (5-9 days), the ensemble mean of the reforecasts can predict the WCB activity and incipient block relatively well. This is in line with the overall WCB forecast skill, which is still sufficient on these time scales (Wandel et al., 2021). However, onsets in pentad 3 (10-14 days) and pentad 4 (15-19 days) are challenging for the NWP model. Consistently, the model strongly underestimates WCB activity and subsequently the developing block prior to onsets in pentad 3, and on average misses onsets in pentad 4 completely.
In addition, stratification of the reforecasts according to Hits, Misses, and False Alarms of EuBL onset in pentad 3 and pentad 4 shows different pathways toward EuBL onset. For successful predictions of EuBL onset (Hits), the model accurately represents the enhanced WCB activity prior to EuBL onset, whereas for Misses it completely misses the WCB activity. Finally, time-lagged analysis reveals that the enhanced WCB activity emerges from the western North Atlantic already 6 days prior to EuBL onset, well before any indication of blocking over Europe. Thus, a correct representation of WCB activity provides a potential window of forecast opportunity for EuBL forecasts beyond 10 days. In contrast, for False Alarms, enhanced WCB activity only emerges directly (2 to 0 days) prior to blocking onset, and WCB outflow occurs farther to the west over eastern Greenland and Iceland, missing the enhanced WCB outflow over Europe that is evident in ERA-Interim and the Hits forecasts. This shows that the model has an additional, potentially erroneous pathway into EuBL that is not found in reanalysis. We further find a precursor Rossby wave packet emerging from the North Pacific region prior to successful EuBL onset predictions. The downstream development of this Rossby wave pattern goes along with enhanced WCB activity over the North Pacific region up to 10 days prior to EuBL onset. The Rossby wave packet propagates downstream, enhancing WCB activity first over eastern North America and later over the central North Atlantic. These upstream precursors are remarkably similar in ERA-Interim and successful reforecasts (Hits) but missing for erroneous EuBL forecasts (False Alarms and Misses). Though the Rossby wave packet emerging from the North Pacific is associated with increased skill over the Atlantic-European region (Grazzini & Vitart, 2015), forecast errors related to small-scale processes inside WCBs could affect the Rossby wave pattern and subsequently lead to errors in the prediction of EuBL onsets. Still, the Rossby wave packet seems to represent a window of forecast opportunity for the prediction of EuBL onset into sub-seasonal timescales, which likely depends on an accurate representation of WCB activity conditioned on the MJO in the North Pacific region, too (Quinting et al., 2024). In summary, our results highlight the role of moist dynamic processes for the correct prediction of EuBL onset. On the one hand, a correct representation of WCB activity in the North Atlantic region in the days prior to EuBL onset results in correct EuBL forecasts. On the other hand, a correct representation of a Rossby wave packet emerging from the North Pacific extends correct EuBL forecasts into sub-seasonal lead times. Interestingly, the North Pacific Rossby wave pattern is also amplified by WCB activity, in line with recent findings by Quinting et al. (2024), who highlighted the potential role of WCBs in shaping tropical-extratropical teleconnection patterns. If and how the MJO further affects EuBL onset should be a subject of future work. Our results suggest that, in line with similar findings (e.g., Maddison et al., 2019), improving the representation of processes inside WCBs and of the associated extratropical cyclones in general (Büeler et al., 2024) in NWP models likely yields a better representation of EuBL life cycles, too. However, we also note that there are intrinsic limits of predictability for WCBs which even a perfect model will not be able to overcome.
Figure 1. Schematic of weather regime life cycle based on the weather regime index I_wr. The onset is defined as the day when I_wr exceeds a certain threshold. Here, regime onsets in pentad 2 (day 5-9), pentad 3 (day 10-14) and pentad 4 (day 15-19) are investigated. They can occur at any day in a given pentad. Figure 3. Area-averaged Fair Brier Skill Score (FBSS) for DJF 1997-2017 at different forecast lead times for WCB outflow. The area average of the FBSS is computed over the North Atlantic (20-90N, 100W-20E), the North Pacific (20-90N, 120E-120W) and for the entire Northern Hemisphere. Error bars centered on forecast lead times day 3, 5, 7, and 9 show the difference between the 10th and 90th percentiles of the sampled data (variability of the FBSS) and are used to estimate significant differences between the ocean basins. Figure 8. Evolution of 5-day mean Z500 in ERA-Interim weighted with the number of ensemble members in the Hits category (a-c; 29 unique events, 65 ensemble members, see Table 1) and in the Misses category (d-f; 38 unique events, 1,013 ensemble members) for each respective forecast initial time in (a, d) pentad 1 and (b, e) pentad 2 before ERA-Interim EuBL onsets in (c, f) pentad 3. Z500 anomalies (shading), absolute fields (contours) and green contours as in Figure 2. Table 1. Left: Number of "Unique Events" of Observed EuBL Onsets in ERA-Interim in NDJFM (1997-2017) for Which There Is at Least One Reforecast Initialization Time in the Hits or Misses Category. Note. There are in total 38 unique EuBL onsets. Middle: Number of initialization times for which at least one ensemble member captures (Hits) [does not capture (Misses)] the EuBL onset according to the "forecast perspective", and number of individual "ensemble members" which capture (Hits) [do not capture (Misses)] the EuBL onset. Bottom: Number of initialization times for which at least one member falsely predicts an EuBL onset ("forecast perspective") and total number of "ensemble members" falsely predicting an EuBL onset. See Section 2.4 for more explanation.
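The event weighting behind the ERA-Interim Hits and Misses composites in Figure 8 can be made explicit. The snippet below is a minimal sketch rather than the authors' code: it assumes one 5-day mean Z500 anomaly field per unique EuBL event and weights each event by the number of ensemble members that fall into the respective category, so that well-predicted events dominate the Hits composite and poorly predicted events dominate the Misses composite. All array names, shapes, and the random placeholder data are illustrative assumptions.

import numpy as np

def weighted_composite(anomaly_fields, member_counts):
    """Weight per-event Z500 anomaly fields by ensemble-member counts.

    anomaly_fields : (n_events, ny, nx) array, one 5-day mean Z500 anomaly
                     field per unique EuBL event (hypothetical input).
    member_counts  : (n_events,) array, number of ensemble members in the
                     category (Hits or Misses) for each event.
    """
    w = np.asarray(member_counts, dtype=float)
    fields = np.asarray(anomaly_fields, dtype=float)
    if w.sum() == 0:
        raise ValueError("no ensemble members in this category")
    # events with zero members drop out automatically
    return np.tensordot(w, fields, axes=(0, 0)) / w.sum()

# illustrative usage with random placeholder fields for the 38 unique events
rng = np.random.default_rng(0)
z500_anom = rng.normal(size=(38, 90, 180))      # hypothetical anomaly fields
hits_members = rng.integers(0, 10, size=38)     # e.g. 65 members in total
misses_members = rng.integers(0, 40, size=38)   # e.g. 1,013 members in total
hits_composite = weighted_composite(z500_anom, hits_members)
misses_composite = weighted_composite(z500_anom, misses_members)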
10,763.2
2024-04-11T00:00:00.000
[ "Environmental Science", "Physics" ]
On a strong form of propagation of chaos for McKean-Vlasov equations This note shows how to considerably strengthen the usual mode of convergence of an $n$-particle system to its McKean-Vlasov limit, often known as propagation of chaos, when the volatility coefficient is nondegenerate and involves no interaction term. Notably, the empirical measure converges in a much stronger topology than weak convergence, and any fixed $k$ particles converge in total variation to their limit law as $n\rightarrow\infty$. This requires minimal continuity for the drift in both the space and measure variables. The proofs are purely probabilistic and rather short, relying on Girsanov's and Sanov's theorems. Along the way, some modest new existence and uniqueness results for McKean-Vlasov equations are derived. Introduction This note develops a simple but apparently new approach to analyzing McKean-Vlasov stochastic differential equations, of the form $dX_t = b(t, X_t, \mu_t)\,dt + \sigma(t, X_t)\,dW_t$, $\mu_t = \mathrm{Law}(X_t)$, $\forall t \geq 0$, in which the drift is merely bounded and measurable, with fairly weak continuity requirements in the measure variable. The volatility $\sigma$ is nondegenerate and independent of the measure, and this enables a line of argument based on Girsanov's theorem which leads to a much stronger propagation of chaos result than usual, along with some new results on existence and uniqueness. Propagation of chaos here refers to the convergence of the $n$-particle system, defined by the SDE $dX^{n,i}_t = b(t, X^{n,i}_t, \mu^n_t)\,dt + \sigma(t, X^{n,i}_t)\,dW^i_t$ with empirical measure $\mu^n_t = \frac{1}{n}\sum_{j=1}^n \delta_{X^{n,j}_t}$, to the solution law $\mu$ of the McKean-Vlasov equation. Precisely, propagation of chaos typically means that the empirical measures $\mu^n$ (say, on the path space) converge weakly in probability to the deterministic measure $\mu$, or equivalently that the law of $(X^{n,1}, \ldots, X^{n,k})$ converges weakly to the product measure $\mu^{\otimes k}$ for any fixed $k$. In our context, we show that in fact the law of $(X^{n,1}, \ldots, X^{n,k})$ converges in total variation to $\mu^{\otimes k}$. Moreover, the sense in which $\mu^n$ converges in probability to $\mu$ can be strengthened; rather than working with the usual weak topology induced by duality with bounded continuous test functions, we work with the stronger topology induced by duality with bounded measurable test functions. In particular, our results will assume that $b(t, x, \mu)$ is continuous in $\mu$ in this stronger topology (or, for some results, in total variation) and merely measurable in $(t, x)$, and our coefficients may be path-dependent as well. McKean-Vlasov equations have been studied in a variety of contexts since the seminal work of McKean [18]. Sznitman's monograph [21] is a classic introduction, and Gärtner's results [11] remain among the most general on existence, uniqueness, and propagation of chaos results for models with (weakly) continuous coefficients. More recently, interacting diffusion models of this form have enjoyed something of a renaissance, due in part (but certainly not entirely) to new applications in mean field game theory [17], and this is one impetus for revisiting these classical questions here. The McKean-Vlasov equations arising in mean field game theory can involve feedback controls obtained via Nash equilibrium problems. Regularity for these controls can be hard to come by, and this motivates a better understanding of somewhat more pathological dynamics. For instance, the recent work of [4] on mean field games with absorbing states naturally gives rise to McKean-Vlasov systems with discontinuous and path-dependent coefficients.
Several authors have studied McKean-Vlasov systems with various kinds of discontinuities arising in a variety of concrete applications. Noteworthy classes of examples include interactions based on ranks [20,14] and quantiles [8,16], to which our results apply in certain cases. One such example given in Section 2.4, where we show that the particle approximation of Burgers' equation given in [3,13] holds in a stronger sense. While several papers have studied McKean-Vlasov equations with discontinuities, the coefficients are often continuous enough, in the sense that the set of discontinuities has measure zero with respect to any candidate solution (see, e.g., [7]). In such a situation one can still apply the usual weak convergence arguments, which are not available for the general discontinuities in x we allow, more in the spirit of [13]. We lastly mention the interesting recent works [19,6] that deal with similarly irregular coefficients but less general interaction terms, with no results on propagation of chaos. While our existence and uniqueness results differ from those mentioned above, the main novelty of this work is the strong propagation of chaos result, Theorem 2.5. Section 2 below states the main results, and proofs are given in Sections 3 and 4. It is worth stressing that all of the proofs are purely probabilistic. Notation and topologies. Let E be a Polish space. For a signed Borel measure γ on E, define the total variation norm Let P(E) denote the set of Borel probability measures on E. For µ, ν ∈ P(E), define the relative entropy Let B(E) denote the set of bounded measurable real-valued functions on E. Define τ (E) to be the coarsest topology on P(E) such that the map µ → E φ dµ is continuous for each φ ∈ B(E). This topology is somewhat well known in large deviations literature as the τ -topology. Notably, (P(E), τ (E)) is not separable or metrizable. The map E n ∋ (x 1 , . . . , x n ) → 1 n n j=1 δ x j ∈ P(E) need not be measurable with respect to the Borel σ-field of (P(E), τ (E)), and we will need to work with a smaller σ-field for which we recover this measurability. Define E(P(E)) to be the smallest σ-field on P(E) such that the map µ → E φ dµ is measurable for each φ ∈ B(E). It is well known that E(P(E)) coincides with the Borel σ-field on P(E) generated by the topology of weak convergence [2, Corollary 7.29.1]. The McKean-Vlasov equation. Fix a time horizon T > 0 and a dimension d ∈ N. Let C = C([0, T ]; R d ) denote the path space, endowed with the supremum norm. We will be interested in McKean-Vlasov equations of the form 2) stated more precisely in Definition 2.2 below. The data of the problem are coefficients and an initial law λ 0 ∈ P(R d ). For µ ∈ P(C) and t ∈ [0, T ], let µ t ∈ P(C) denote the law of the process stopped at time t, defined as the image of µ through the map C ∋ x → x ·∧t ∈ C. At various points in the sequel, we will refer to the following assumptions: , and σ is jointly Borel-measurable. In addition, the coefficients are progressive in the sense that Moreover, there exists a unique strong solution to the driftless SDE, For each µ ∈ P(C), the following function is sequentially τ (C)-continuous at µ: Remark 2.1. If one is careful about integrability, the assumptions can undoubtedly be relaxed to cover unbounded coefficients and stronger topologies for the continuity of b(t, x, µ) in µ. We prefer to avoid obscuring the main line of argument with such generalities. Definition 2.2. 
We say µ ∈ P(C) is a weak solution of (2.2) if there exists a filtered probability space (Ω, F, F, P) supporting a progressively measurable d-dimensional process X, a d-dimensional F-Wiener process W , and an F 0 -measurable random vector ξ with law λ 0 , such that P • X −1 = µ and The closest result to Theorem 2.4 that we know of seems to come from the paper [5], from which we borrow the proof idea. A nearly identical form of Theorem 2.3 was given in [13, Theorem 2.2] and [4, Theorem C.1], though our proof seems to be much simpler. Propagation of chaos. For n ∈ N, let (X n,1 , . . . , X n,n ) denote a weak solution on some filtered probability space (Ω, F, F, P) of the SDE system where W 1 , . . . , W n are independent d-dimensional F-Wiener processes, and ξ 1 , . . . , ξ n are i.i.d. and F 0 -measurable with law λ 0 . Under assumptions (E) and (A), a standard argument by Girsanov's theorem guarantees the existence and uniqueness in law for this SDE system. Theorem 2.5. Assume (E) and (A) hold. Suppose there exists a weak solution µ of (2.2). For each 0 ≤ s < t ≤ T , assume that the function F s,t : P(C) → R defined by is τ (C)-continuous and E(P(C))-measurable. Assume lastly that there exists L > 0 such that Then the following hold: The closest result we know of to Theorem 2.5 is that of [1, Theorem 3], which proves (3) above even when k can grow with n, but only when the coefficients (in particular, the interactions) take a very specific form. The assumption (2.3) in Theorem 2.5 is worth commenting on, so we point out two notable sufficient conditions. First, in light of Pinsker's inequality, assumption (B1) is sufficient for (2.3). For a second example, suppose σ is the identity, and the initial law λ 0 satisfies R d exp(a|x| 2 )λ 0 (dx) < ∞ for some a > 0. Then, by boundedness of b and exponential integrability of Brownian motion, there existsã > 0 such that It follows [12, Proposition 6.3] that there exists C > 0 such that µ satisfies the transport inequality W 1 (µ, ν) ≤ CH(ν|µ), ∀ν ∈ P(C), where W 1 denotes the 1-Wasserstein metric on P(C). If we assume b(t, x, ·) is Lipschitz with respect to W 1 , uniformly in (t, x), then it follows that (2.3) holds. Remark 2.6. Conclusion (3) of Theorem 2.5 implies in particular that for each k ∈ N and φ 1 , . . . , φ k ∈ B(C). In fact, for fixed k, this convergence is uniform over all φ 1 , . . . , φ k ∈ B(C) satisfying |φ i | ≤ 1. 3)) the same local large deviation bounds as in [9, Theorem 5.2], but in the stronger topology τ (C). However, to deduce from this a full LDP in the topology τ (C) analogous to [9, Theorem 5.1], one would need to establish exponential tightness of µ n in the same topology, which does not seem feasible. Remark 2.9. It is not true in the setting of Theorem 2.5 that P(lim n µ n = µ) = 1, where the limit is taken in τ (C). In fact, P(lim n µ n = µ) = 0, because for each ω ∈ Ω the countable set S(ω) = {X n,i (ω) : n ∈ N, 1 ≤ i ≤ n} satisfies both µ n (ω)(S(ω)) = 1 and µ(S(ω)) = 0, as µ is nonatomic. 1 In general, a sequence of discrete measures can never τ (C)-converge to a nonatomic measure, so we cannot hope to improve the convergence in probability stated in Theorem 2.5 (2). For this reason, we cannot state a version of Theorem 2.5 in line with more traditional propagation of chaos results (e.g., [11, Theorem 3.1]), in which the initial states X n,i 0 are taken to be deterministic but with a prescribed limit λ 0 = lim n 1 n n k=1 δ X n,k 0 . A rank-based interaction. 
A notable class of examples related to Burgers' and porous medium type PDEs fits into our framework. Consider the one-dimensional case d = 1, with σ ≡ 1 and where G : [0, 1] → R is Lipschitz continuous. The corresponding McKean-Vlasov equation is Letting V (t, x) = µ t (−∞, x], one expects (cf. [20,13,3]) that V is the unique generalized solution of the Burgers-type equation where G is an antiderivative of g, and this reduces to Burgers' equation when g(x) = x. The corresponding n-particle approximation is where X n,i 0 are i.i.d. with law λ 0 , and W i are independent Brownian motions. All of the assumptions of our Theorems 2.3, 2.4, and 2.5 hold in this example. Notably, our Theorem 2.5(3) is considerably stronger than [3,Theorem 3.2] or [13,Theorem 2.4], which provide only weak convergence. Existence and uniqueness proofs The proofs of both Theorems 2.3 and 2.4 rely on the following change of measure argument. Let (Ω, F, F = (F t ) 0≤t≤T , P ) denote a filtered probability space supporting an F-Wiener process W and an F 0 -measurable random vector ξ : For each µ ∈ P(C), define a measure P µ ∼ P by where we define E t (M ) = exp(M t − 1 2 [M ] t ) for any continuous martingale M . Girsanov's theorem implies that defines a P µ -Wiener process, and dX t = b(t, X, µ)dt + σ(t, X)dW µ t . Then, a measure µ ∈ P(C) is a weak solution of (2.2) if and only if P µ • X −1 = µ. For t ∈ [0, T ] and µ, ν ∈ P(C), abbreviate H t (ν|µ) := H(ν t |µ t ). Let Φ(µ) := P µ • X −1 for µ ∈ P(C). For any µ, ν ∈ P(C), we have Assumption (A) and nondegeneracy of σ imply that W and X generate the same filtration. Hence, and so Proof of Theorem 2.3. We use Banach's fixed point theorem on the complete metric space (P(C), · TV ). For any µ, ν ∈ P(C), we use (3.1) along with assumption (B1) to get By Pinsker's inequality, Conclude by Picard iteration. 2 Proof of Theorem 2.4. This proof is by Schauder's fixed point theorem, on the topological vector space of bounded signed measures on C endowed with the weak * topology induced by B(C). Note that the induced topology on the subset P(C) is exactly τ (C). Proceeding as in (3.1), for any µ ∈ P(C) we have where the constant c > 0 comes from assumption (A). Hence, For the reader worried about measurability of the integrand s → ν s − µ s 2 TV , notice that we may write from which it is clear that the total variation norm is lower semicontinuous and thus Borel measurable with respect to the topology of weak convergence on P(C). Sub-level sets of relative entropy are convex, compact, and metrizable in τ (C) [10,Lemma 6.2.12]. Hence, to apply Schauder's theorem it remains only to show that Φ : P(C) → P(C) is sequentially τ (C)-continuous. Fix ν, µ ∈ P(C), and use Pinsker's inequality with (3.1) to get As a function of ν, the right-hand side is sequentially τ (C)-continuous at ν = µ by assumption (B2), and this completes the proof. Proof of Theorem 2.5 We first introduce some notation, used in the proof of both claims (1) and (2). We transfer the problem set up to a convenient probability space. Let (Ω, F, P ) be a probability space supporting an i.i.d. sequence of processes X i with law µ. For n ∈ N, let F n = (F n t ) 0≤t≤T denote the filtration generated by (X 1 , . . . , X n ). There exist i.i.d. Wiener processes W 1 , W 2 , . . . such that and such that W i is adapted to the filtration generated by X i . For n ∈ N, let . 
Define a measure P n on (Ω, F n T ) by dP n /dP = Z n T , where we define the density process By Girsanov's theorem, W n,i · := W i · − · 0 σ −1 b(t, X i , µ n )dt defines a P n -Wiener process, and dX i t = b(t, X i , µ n )dt + σ(t, X i )dW n,i t . Hence P n • (X 1 , . . . , X n ) −1 is a weak solution of the n-particle system, and in the notation of Section 2.3 we have P • (X n,1 , . . . , X n,n ) −1 = P n • (X 1 , . . . , X n ) −1 . Proof of (1). Fix a E(P(C))-measurable open set U ⊂ P(C) containing µ. The goal is to show that lim n→∞ P n (µ n / ∈ U ) = 0. (4.1) Fix p, q ∈ (1, ∞), and let p * and q * denote the conjugate exponents, p * = p/(p − 1) and q * = q/(q − 1). Assume p and q are such that M = LT pq/2 is an integer, for reasons which will be clear later. Define t j = jT /M for j = 0, . . . , M . We will show inductively that, for each j, Indeed, once this is established, it is easy to complete the proof of (1) as follows: By taking j = 0 and noting that P n and P agree on F t 0 = F 0 , it follows from (4.2) that lim sup Noting that lim x→∞ ( x x−1 ) x = e, we may send p, q → ∞ in the above to get (2.4).
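To make the rank-based example of Section 2.4 concrete, the following is a minimal simulation sketch of the corresponding n-particle system under the stated assumptions d = 1, sigma = 1 and drift b(t, x, mu) = g(mu_t(-infinity, x_t]), with the measure replaced by the empirical CDF of the particles. It uses a plain Euler-Maruyama discretisation; the choice of g, the initial law and all numerical parameters are placeholders and are not taken from the paper.

import numpy as np

def simulate_rank_based(n=1000, T=1.0, steps=200, g=lambda u: u, seed=0):
    """Euler-Maruyama simulation of the rank-based n-particle system."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    x = rng.normal(size=n)                # i.i.d. initial positions with law lambda_0
    for _ in range(steps):
        # empirical CDF evaluated at each particle: F_n(X_i) = rank_i / n
        ranks = np.argsort(np.argsort(x)) + 1
        drift = g(ranks / n)
        x = x + drift * dt + np.sqrt(dt) * rng.normal(size=n)
    return x

# With g(u) = u, the empirical CDF of the final positions approximates the
# generalized solution V of the Burgers-type equation discussed in Section 2.4.
particles = simulate_rank_based()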
4,069.8
2018-05-11T00:00:00.000
[ "Mathematics" ]
Comparative analysis of double-stranded RNA degradation and processing in insects RNA interference (RNAi) based methods are being developed for pest management. A few products for control of coleopteran pests are expected to be commercialized soon. However, variability in RNAi efficiency among insects is preventing the widespread use of this technology. In this study, we conducted research to identify reasons for variability in RNAi efficiency among thirty-seven (37) insects belonging to five orders. Studies on double-stranded RNA (dsRNA) degradation by dsRNases and processing of labeled dsRNA to siRNA showed that both dsRNA degradation and processing are variable among insects belonging to different orders as well as among different insect species within the same order. We identified homologs of key RNAi genes in the genomes of some of these insects and studied their domain architecture. These data suggest that dsRNA digestion by dsRNases and its processing to siRNAs in the cells are among the major factors contributing to differential RNAi efficiency reported among insects. German cockroach, Blattella germanica, is effective for many genes tested at different life stages both by injection as well as feeding methods [17][18][19][20] . In Aedes aegypti, dsRNA injection is the most successful methods of delivery 21,22 whereas, dsRNA feeding has been found to be successful only in few cases 23,24 . In the desert locust, Schistocerca gregaria, an orthopteran insect, significant knockdown of target genes was observed after dsRNA injection, but the RNAi response was much less after dsRNA feeding 25 . Further, many caterpillars are not very amenable to RNAi either by injection or feeding dsRNA 16 . For widespread use of RNAi methods for insect pest management, it is important to characterize and understand the mechanisms of RNAi in different insects. In the current study, we have performed in silico identification and characterization of the genes coding for the major components of RNAi machinery including Dicers, Argonauts, R2D2, Sid-1, and dsRNases from insects belonging to major orders and analyzed their domain architecture. We also studied dsRNA degradation and processing using naked and labeled dsRNA, respectively. Results We analyzed the degradation of dsRNA by the dsRNases present in the body fluids (lumen contents and hemolymph) and the processing of dsRNA to siRNA in 37 insects from five orders. Incubation of dsRNA with different concentrations of body fluids and feeding/injection of 32 P labeled dsRNA were performed. As described below, a wide variation in dsRNA digestion by dsRNases and in dsRNA processing was detected in insects tested. Coleoptera. Popillia japonica, Epilachna varivestis, Coccinella septempunctata, Disonycha glabrata, Leptinotarsa decemlineata, Acalymna vittatum, Epitrix fuscula, Diabrotica undecimpunctata, Chauliognathus pensylvanicus, Tribolium castaneum and Agrilus planipennis from order Coleoptera were included in the study. The concentration of body fluid required to degrade 50% of dsRNA (CB50) in coleopteran insects varied from 0.05 to 36.86 mg/ml. P. japonica and C. septempunctata showed higher dsRNA degrading efficiency. Therefore, the CB50 values for these insects are lower when compared to those in other coleopteran insects such as A. planipennis where the dsRNA was not degraded completely even at a high concentration of body fluid (16 mg/ml) (Fig. 1a). Coleopteran insects showed an efficient processing of fed or injected dsRNA to siRNA. 
A band equivalent to the size of siRNA was detected in the total RNA isolated from dsRNA fed or injected coleopteran insects tested except in P. japonica fed on dsRNA (Fig. 1b). Lepidoptera. Spodoptera frugiperda, Heliothis virescens, Spilosoma virginica, Manduca sexta, Cydia pomonella, Iridopsis humaria, Trichoplusia ni, Colias eurytheme and Estigmene acrea from Lepidoptera were included. A very low concentration of hemolymph from these insects was found to be sufficient to degrade dsRNA within an hour (Fig. 2a). Among the lepidopteran insects tested, S. frugiperda hemolymph showed the highest dsRNA degradation activity, and the M. sexta hemolymph showed the lowest dsRNA degradation activity. After injection or feeding with labeled dsRNA, no band in the size range of siRNA was detected in the total RNA isolated from lepidopteran insects tested (Fig. 2b). Hemiptera. Ten hemipteran insect species (Acyrthosiphon pisum, Halyomorpha halys, Anasa tristis, Nezara viridula, Murgantia histrionica, Oncopeltus fasciatus, Bemisia tabaci, Lygus hesperus, Podisus maculiventris and Zelus longipes) were studied. CB50 value for hemipteran insects ranged between 0.07 mg/ml and 6.56 mg/ml. The lowest value was recorded for A. pisum, and the highest value was recorded for M. histrionica (Fig. 3a). No siRNA band was detected in the total RNA isolated from the three hemipteran insect species fed on dsRNA (Fig. 3b). Bands equal to the size of siRNA were detected in the total RNA isolated from A. tristis, N. veridula, O. fasciatus and L. hesperus injected with labeled dsRNA (Fig. 3b). The total RNA isolated from other hemipteran insects showed light siRNA band. Diptera. Allograpta obliqua, Drosophila melanogaster, Musca domestica, Anastrepha suspensa and Aedes aegypti from Diptera were included in the study. Body fluids obtained from all the dipteran insects studied, showed the degradation of dsRNA (Fig. 4a). The CB50 values (2.83 mg/ml to 4.98 mg/ml) for dipterans are in the following order: A. obliqua > D. melanogaster > M. domestica > A. suspensa > A. aegypti. The siRNA band was detected in the total RNA isolated from M. domestica and A. aegypti injected with dsRNA (Fig. 4b). No siRNA band was detected in the total RNA isolated from A. aegypti fed with dsRNA (Fig. 4b). Orthoptera. Syrbula admirabilis and Gryllus texensis from Orthoptera were included. Body fluid collected from G. texensis (CB50 2.47 mg/ml) showed higher dsRNase activity than that in S. admirabilis (CB50 11.02 mg/ ml, Fig. 5a). Both the orthopteran insect species showed processing of injected dsRNA to siRNA. Interestingly, in the case of S. admirabilis the dsRNA fed was processed to siRNA (Fig. 5b). Characterization of core RNAi machinery genes: Identification of domains. Bioinformatic analysis was employed to identify and compare domain architecture of core RNAi genes among insects included in the dsRNA degradation and processing studies described above, (the sequence information was obtained from publicly available databases). The Dicer family proteins contained a Helicase ATP BIN, a Helicase CTER, a Dicer double-stranded RNA-binding domain (Dicer-DSR), a PAZ (Piwi/Argonaute/Zwille) domain, two Ribonuclease III and a carboxy-terminal dsRNA-binding domain (dsRBD). Interestingly, the domain architecture of Dicer1 is similar in most insects except in the case of D. melanogaster, A. pisum, S. litura and B. mori as the Helicases ATP BIN domain is absent in these insects. Whereas, Ribonuclease III and dsRBD could not be detected in S. 
litura (Fig. S3a; Table 1). Analysis of Dicer2 sequences also showed the presence of signature domains in most of them, except in the case of S. gregaria where only two Ribonuclease III domains were detected (Fig. 6a; Table 1). Analysis of putative insect Systemic RNA Interference Deficiency-1 (Sid-1) proteins (Sil A, Sil B and Sil C) showed that these proteins contain tandem repeats of 11 trans-membrane domains separated by extra- and intracellular domains (Fig. 6d; Table 1). Low-complexity transmembrane regions were detected in Sils from M. sexta, S. litura, B. mori, T. castaneum and S. gregaria. The proteins coded by the putative dsRNase genes studied contain an Endonuclease NS (DNA/RNA non-specific endonuclease)/NUC domain and a predicted signal peptide (Fig. 6e and Table 1). Figure 1. (a) dsRNA degradation assay in coleopteran insects; Diabrotica undecimpunctata, Du; Chauliognathus pensylvanicus, Cp; Tribolium castaneum, Tc and Agrilus planipennis, Ap. 300 ng dsGFP was incubated with 20 μl of serially diluted (0.007 to 16 mg/ml with 1X PBS) body fluid for 1 hr. The samples were run on 1% agarose gels. The relative band intensity was quantified using ImageJ software. Values are relative to the control, arbitrarily fixed at 100%. The CB50 values were calculated using probit analysis (see Figure S1a for complete gel images). (b) dsRNA processing study in coleopteran insects. Eight million CPM of 32P-labeled dsGFP was injected/fed to Popillia japonica, Pj; Epilachna varivestis, Ev; Coccinella septempunctata, Cs; Disonycha glabrata, Dg; Leptinotarsa decemlineata, Ld; Acalymna vittatum, Av; Epitrix fuscula, Ef; Diabrotica undecimpunctata, Du; Chauliognathus pensylvanicus, Cp; Tribolium castaneum, Tc and Agrilus planipennis, Ap. Total RNA was isolated from these insects at 72 hr after injection/feeding and resolved on 8 M urea 16% polyacrylamide gels. The gels were dried and exposed to a phosphor imager screen, and the image was scanned using a phosphor imager. The lanes labeled dsGFP and siRNA show intact 32P-labeled dsRNA and 23-nucleotide siRNA, respectively (see Figure S2 for complete gel images). Discussion The efficiency of RNAi has been reported to be variable among the insects studied 15,16,26. In the current study, two different parameters, processing of injected or fed dsRNA to siRNA and degradation of dsRNA by the body fluids, were studied in 37 insect species from five different insect orders (Coleoptera, Lepidoptera, Hemiptera, Diptera, and Orthoptera). A large variability was observed in the dsRNA degradation ability of body fluids collected from insects belonging to these orders (Fig. 7). The CB50 values varied from 0.05 to 36.86 mg/ml. Injected or fed dsRNA was processed into siRNA in all the coleopteran insects tested. In contrast, no siRNA band was detected in the total RNA isolated from any of the lepidopteran insects tested. The dsRNA processing efficiency of insects from the other orders tested showed values somewhere between those of coleopteran and lepidopteran insects. In most of the dipteran and hemipteran insects tested, the injected dsRNA was processed to siRNA, but no siRNA was detected in the total RNA isolated from the insects fed on dsRNA. These data suggest that variability in dsRNA degradation by dsRNases and its processing to siRNA could contribute to the variable RNAi efficiency observed in insects. The efficiency of the systemic RNAi response also varies among different insect species. Injection of a small amount of dsRNA induces a systemic RNAi response in C. elegans and T.
castaneum 27,28 but in lepidopteran insects, a large amount of dsRNA induces only poor RNAi response 29 . Therefore, it is important to study the molecular machinery of RNAi in different insects. Differences in structure and expression of genes coding for proteins involved in RNAi (Dicers, Argonautes, R2D2, Sid-1 like proteins and dsRNases) could affect the efficiency of RNAi. Therefore, in the current study, besides studying the processing and degradation of dsRNA, we also examined the presence of core components of RNAi machinery in different insect species. We retrieved the sequences of core RNAi pathway genes (Dicers, Argonautes, Sid-1 like proteins and dsRNase) of different insects from NCBI, UniProt and i5k databases and analyzed domain architecture of proteins coded by these genes. Coleopteran insects showed efficient processing of dsRNA into siRNA as evidenced by the presence of a 21-23 bp band in the total RNA isolated from insects injected or fed with labeled dsRNA (Fig. 1b). Our collection of insects included seven coleopteran insects for which RNAi has not been studied previously (See Supplementary Table S1). In addition, a high concentration of body fluid from coleopteran insects is required to degrade dsRNA suggesting that the abundance and/or expression of genes coding for dsRNases is lower in these insects when compared to that in insects from other orders. Several groups 11,14,15,[30][31][32][33][34] reported well-functioning RNAi in coleopteran insects. Duplication of, Argonaute2 in T. castaneum [35][36][37][38] and Argonaute2 and Dicer2 genes in L. decemlineata 39 suggests that duplication of core RNAi pathway genes in coleopteran insects could also contribute to higher RNAi efficiency observed in these insects. In dsRNA fed L. decemlineata, the gene coding for Ago2b is expressed at higher levels as compared to the other core RNAi pathway genes (Dcr2a, Dcr2b and Ago2a) 39 . Some of the technical limitations of the current study including the differences in the efficiency of body fluid collection from insects studied could have contributed to some variability in dsRNA degradation ability of the hemolymph, among insects studied. However, it is likely that the differences in the expression levels and activities of dsRNA degrading enzymes contribute to the variability in dsRNA degradation ability observed among insects studied. Taken together, our data and results from the previous studies suggest that duplication of some genes coding for RNAi machinery proteins, lower levels dsRNases and efficient processing of dsRNA to siRNA are among the major contributors to higher RNAi efficiency in coleopteran insects. In our study, four lepidopteran insects were tested, and no siRNA was detected in the total RNA isolated from these insects after feeding or injecting dsRNA (Fig. 2b). Also, dsRNA degradation was observed at a lower concentration of hemolymph when compared to that in coleopteran insects. Gene silencing through injection of dsRNA is reported to have variable success in lepidopteran insects 13,16 . Experiments on C. pomonella suggests the presence of functional RNAi in this insect as dsRNA mediated knockdown of cullin-1 gene was observed and, affected the larval growth. Although, no mortality was noticed in this experiment, quantification of C. pomonella cullin-1 mRNA levels by quantitative reverse-transcriptase real-time PCR revealed a dose-dependent knockdown of the target gene. 
However, no change in the expression levels of four other genes (maleless, musashi, a homeobox and pumilio) tested was observed in the dsRNA-mediated knockdown studies 40. In a few other cases, successful in-vivo RNAi experiments have led to important insights into lepidopteran physiology and development. However, in general, RNAi does not appear to be an efficient process in lepidopteran insects. This has stimulated research on the identification of limiting factors responsible for the reduced RNAi efficiency in lepidopteran insects. The entry of dsRNA into cells through the endocytic pathway has been reported 38,41. In our recent study, we showed entrapment of dsRNA in acidic bodies in the cells from the lepidopteran insects H. virescens and S. frugiperda 33. However, the induction of RNAi was shown in some caterpillars with multiple injections or feeding of higher doses of dsRNA 16,42,43. In M. sexta, dsRNA caused a dose-dependent induction of genes coding for Dcr2 and Ago2, and triggered RNAi 44. The injection but not feeding of dsGFP in B. mori caused an increase in the expression of genes coding for Dcr2 and Ago2 45. In B. mori, overexpression of Ago2 showed both dsRNA- and shRNA-mediated RNAi responses 46. The midgut lumen contents (pH 11 or 12) collected from lepidopteran insects contain non-specific nucleases which degrade fed dsRNA 43,45,47,48. Although there are some reports of successful RNAi responses in lepidopteran insects after administration of large quantities of dsRNA, in general the RNAi response in these insects does not appear to be as efficient as that observed in some coleopteran insects. The major contributing factors include degradation by dsRNases and inefficient processing of dsRNA to siRNA. Figure 4. (a) dsRNA degradation assay in dipteran insects. The dsRNA degradation by body fluids collected from Allograpta obliqua, Ao; Drosophila melanogaster, Dm; Musca domestica, Md; Anastrepha suspensa, As; and Aedes aegypti, Aa; was analyzed as described in Fig. 1a legend (see Figure S1d for complete gel images). (b) Processing of dsRNA in dipteran insects. Total RNA isolated from Allograpta obliqua, Ao; Drosophila melanogaster, Dm; Musca domestica, Md; Anastrepha suspensa, As; and Aedes aegypti, Aa was resolved on urea-acrylamide gels as described in Fig. 1b legend (see Figure S2 for complete gel images). Figure 5. (a) dsRNA degradation assay in orthopteran insects. The dsRNA degradation by body fluids collected from Syrbula admirabilis, Sa; and Gryllus texensis, Gt; was analyzed as described in Fig. 1a legend (see Figure S1e for complete gel images). (b) Processing of dsRNA in orthopteran insects. Total RNA isolated from Syrbula admirabilis, Sa; and Gryllus texensis, Gt; was resolved on urea-acrylamide gels as described in Fig. 1b legend (see Figure S2 for complete gel images). A variation in the processing of injected/fed dsRNA and in dsRNA degradation in the body fluid of hemipteran insects was observed in the current study. All the tested insects displayed efficient degradation of dsRNA; a low concentration of body fluid (CB50 of 0.07 mg/ml) was sufficient to degrade the dsRNA in the case of the pea aphid (Acyrthosiphon pisum) (Fig. 3a). The processing of dsRNA to siRNA was not observed in the pea aphid (Fig. 3b), suggesting that dsRNA is degraded quickly in A. pisum before it can be accessed by the RNAi machinery inside the cell. The degradation of dsRNA in the gut or salivary secretions of L. lineolaris and A. pisum has been reported 49,50. There are only a few reports of successful RNAi in A. pisum [51][52][53][54].
The poor RNAi response observed in hemipteran insects compared to that in coleopteran insects might be due to the salivary secretions causing degradation of dsRNA. Both the salivary secretions and body fluid from aphids were able to degrade the dsRNA, and the administered dsRNA was not able to provoke a response in the expression of the siRNA core machinery genes. The dsRNA-degrading nucleases likely contribute to poor RNAi efficiency in hemipteran insects 52 . Several previous reports suggest that dipteran (mosquitoes and flies) insects are sensitive to dsRNA or siRNA mediated gene silencing 13,[55][56][57] . The processing of injected dsRNA into siRNA was observed in A. aegypti and M. domestica but not in D. melanogaster (Fig. 4b). A high concentration of the body fluid from all these insects was required for the dsRNA degradation (Fig. 4a). In D. melanogaster, dicer1 but not dicer2 is essential for miRISC translational repression 58 . The genomes of D. melanogaster and two mosquito species (Anopheles gambiae and A. aegypti) also do not contain sid-1-like genes 35 . The injection of dsRNA/siRNA triggers silencing of the target gene in orthopteran insects (locusts and crickets) 59 . Silencing of development and molting related genes (Lm-TSP and Chitin synthase 1) in locusts induced mortality 60,61 . We detected siRNA bands in the total RNA isolated from dsRNA injected S. admirabilis and G. texensis. These data support the previous reports on the efficient RNAi upon injection of dsRNA in orthopteran insects (Fig. 5b). The concentrations of hemolymph required to degrade dsRNA varied drastically in these insects; the CB50 value for S. admirabilis are lower (2.47 mg/ml) compared to that of G. texensis (11.02 mg/ml) (Fig. 5a). Only injection, but not feeding dsRNA caused knockdown of a target gene in case of S. gregaria 25 and L. migratoria 26 . Expression of non-specific nucleases in the gut of S. gregaria was suggested as the reason for inefficient feeding RNAi in S. gregaria 25 . Lower levels of Dcr2 or Ago2 have been suggested as a limiting factor for reduced RNAi observed in the reproductive tissues of S. gregaria 62 . Injection but not feeding dsRNA induces robust RNAi response in L. migratoria [63][64][65] . Published reports and the data included in this paper suggest that dsRNA degradation by dsRNases, transport of dsRNA into and within the cells and processing of dsRNA to siRNA are among the major contributing factors for an inefficient RNAi, in insects. Research aimed at uncovering the molecular basis of these mechanisms as well as developing the methods to overcome these limitations should help in improving RNAi efficiency and wide-spread use of this technology in the development of novel methods for controlling pests and disease vectors. Methods Collection of insects. The insects used in the studies were collected either from the University Farm (University of Kentucky) or laboratory maintained cultures. In the present study, a total of 37 insect species from five orders (Coleoptera, Lepidoptera, Hemiptera, Diptera, and Orthoptera) were tested. The identification of farm-collected insects was done in the Department of Entomology, University of Kentucky, Lexington, USA. Collection of body fluid and gel retardation assay. Body fluid was collected from each insect except in the case of lepidopteran larvae, where hemolymph was collected into microcentrifuge tubes containing phenylthiourea dissolved in 1XPBS and kept on ice to prevent melanization. 
Hemocytes were removed by centrifugation at 13,000 rpm for 10 min at 4 °C. The supernatant was transferred to a new tube and stored at −20 °C for dsRNA degradation assay as described previouly 26,33,43 . Protein concentrations were estimated using Bradford's assay 66 . Different dilutions of body fluid were prepared based on total protein concentration. The range of serially diluted body fluid used was 0.007 to 16 mg/ml. 300 ng of dsGFP was incubated with the body fluid for 1hr at room temperature. The "concentration of body fluid required to degrade 50% of dsRNA" (CB50) was calculated for different insects using probit analysis 67 . Samples were mixed with loading dye and run on a 1% agarose gels and the gels were stained with ethidium bromide. The dsRNA was visualized with an AlphaImager ™ Gel Imaging System (Alpha Innotech, San Leandro, CA) under UV light. The results were analyzed using the Image-J software, and the relative band intensity was calculated as described previously 68 . Synthesis of 32 P UTP labeled and unlabeled dsGFP. 32 P UTP labeled and unlabelled dsGFP were synthesized as described previously 33 . The quality and quantity of dsGFP were checked by agarose gel electrophoresis and NanoDrop-2000 spectrophotometer (Thermo Fisher Scientific Inc., Waltham, MA), respectively. The radioactivity of the labeled dsGFP was measured using a scintillation counter.
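For the CB50 estimation step described above, the following is a minimal sketch of a probit analysis of a dilution series; it is not the authors' script, and the example concentrations and degradation fractions are invented placeholders. It fits a probit curve to the fraction of dsRNA degraded (one minus the relative band intensity) as a function of log10 body-fluid concentration and reads off the concentration at which 50% of the dsRNA is degraded.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def probit_response(log_conc, intercept, slope):
    # probability of degradation as a probit function of log10 concentration
    return norm.cdf(intercept + slope * log_conc)

conc = np.array([0.007, 0.03, 0.125, 0.5, 2.0, 8.0, 16.0])             # mg/ml, hypothetical dilutions
frac_degraded = np.array([0.02, 0.05, 0.18, 0.45, 0.80, 0.97, 0.99])   # 1 - relative band intensity

params, _ = curve_fit(probit_response, np.log10(conc), frac_degraded, p0=[0.0, 1.0])
intercept, slope = params
cb50 = 10 ** (-intercept / slope)   # concentration where the fitted curve crosses 50%
print(f"estimated CB50 = {cb50:.2f} mg/ml")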
4,832.8
2017-12-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Quantropy There is a well-known analogy between statistical and quantum mechanics. In statistical mechanics, Boltzmann realized that the probability for a system in thermal equilibrium to occupy a given state is proportional to exp(-E/kT) where E is the energy of that state. In quantum mechanics, Feynman realized that the amplitude for a system to undergo a given history is proportional to exp(-S/i hbar) where S is the action of that history. In statistical mechanics we can recover Boltzmann's formula by maximizing entropy subject to a constraint on the expected energy. This raises the question: what is the quantum mechanical analogue of entropy? We give a formula for this quantity, which we call"quantropy". We recover Feynman's formula from assuming that histories have complex amplitudes, that these amplitudes sum to one, and that the amplitudes give a stationary point of quantropy subject to a constraint on the expected action. Alternatively, we can assume the amplitudes sum to one and that they give a stationary point of a quantity we call"free action", which is analogous to free energy in statistical mechanics. We compute the quantropy, expected action and free action for a free particle, and draw some conclusions from the results. Introduction There is a famous analogy between statistical mechanics and quantum mechanics. In statistical mechanics, a system can be in any state, but its probability of being in a state with energy E is proportional to exp(−E/T ) where T is the temperature in units where Boltzmann's constant is 1. In quantum mechanics, a system can move along any path, but its amplitude for moving along a path with action S is proportional to exp(−S/i ) where is Planck's constant. So, we have an analogy where making the replacements E → S T → i formally turns the probabilities for states in statistical mechanics into the amplitudes for paths, or 'histories', in quantum mechanics. In statistical mechanics, the strength of thermal fluctuations is governed by T . In quantum mechanics, the strength of quantum fluctuations is governed by . In statistical mechanics, the probabilities exp(−E/T ) arise naturally from maximizing entropy subject to a constraint on the expected value of energy. Following the analogy, we might guess that the amplitudes exp(−S/i ) arise from maximizing some quantity subject to a constraint on the expected value of action. This quantity deserves a name, so let us tentatively call it 'quantropy'. In fact, Lisi [5] and Munkhammar [7] have already treated quantum systems as interacting with a 'heat bath' of action and sought to derive quantum mechanics from a principle of maximum entropy with amplitudes-or as they prefer to put it, complex probabilities-replacing probabilities. However, seeking to derive amplitudes for paths in quantum mechanics from a maximum principle is not quite correct. Quantum mechanics is rife with complex numbers, and it makes no sense to maximize a complex function. But a complex function can still have stationary points, where its first derivative vanishes. So, a less naive program is to derive the amplitudes in quantum mechanics from a 'principle of stationary quantropy'. We do this for a class of discrete systems, and then illustrate the idea with the example of a free particle, discretizing both space and time. Carrying this out rigorously is not completely trivial. In the simplest case, entropy is defined as a sum involving logarithms. Moving to quantropy, each term in the sum involves a logarithm of complex number. 
Making each term well-defined requires a choice of branch cut; it is not immediately clear that we can do this and obtain a differentiable function as the result. Additional complications arise when we consider the continuum limit of the free particle. Our treatment handles all these issues. We begin by reviewing the main variational principles in physics and pointing out the conceptual gap that quantropy fills. In Section 2 we introduce quantropy along with two related quantities: the free action and the expected action. In Section 3 we develop tools for computing all these quantities. In Section 4 we illustrate our methods with the example of a free particle, and address some of the conceptual questions raised by our results. We conclude by mentioning some open issues in Section 5. 1.1. Statics. Static systems at temperature zero obey the principle of minimum energy. In classical mechanics, energy is often the sum of kinetic and potential energy: where the potential energy V depends only on the system's position, while the kinetic energy K also depends on its velocity. Often, though not always, the kinetic energy has a minimum at velocity zero. In classical mechanics this lets us minimize energy in a two-step way. First we minimize K by setting the velocity to zero. Then we minimize V as a function of position. While familiar, this is actually somewhat noteworthy. Usually minimizing the sum of two things involves an interesting tradeoff. In quantum physics, a tradeoff really is required, thanks to the uncertainty principle. We cannot know the position and velocity of a particle simultaneously, so we cannot simultaneously minimize potential and kinetic energy. This makes minimizing their sum much more interesting. But in classical mechanics, in situations where K has a minimum at velocity zero statics at temperature zero is governed by a principle of minimum potential energy. The study of static systems at nonzero temperature deserves to be called 'thermostatics', though it is usually called 'equilibrium thermodynamics'. In classical or quantum equilibrium thermodynamics at any fixed temperature, a system is governed by the principle of minimum free energy. Instead of our system occupying a single definite state, it will have different probabilities of occupying different states, and these probabilities will be chosen to minimize the free energy Here E is the expected energy, T is the temperature, and S is the entropy. Note that the principle of minimum free energy reduces to the principle of minimum energy when T = 0. But where does the principle of minimum free energy come from? One answer is that free energy F is the amount of 'useful' energy: the expected energy E minus the amount in the form of heat, T S. For some reason, systems in equilibrium minimize this. Boltzmann and Gibbs gave a deeper answer in terms of entropy. Suppose that our system has some space of states X and the energy of the state x ∈ X is E(x). Suppose that X is a measure space with some measure dx, and assume that we can describe the equilibrium state using a probability distribution, a function p : Then the entropy is while the expected value of the energy is: Now suppose our system maximizes entropy subject to a constraint on the expected value of energy. Using the method of Lagrange multipliers, this is the same as maximizing S − β E where β is a Lagrange multiplier. When we maximize this, we see the system chooses a Boltzmann distribution: . 
One could call β the coolness, since working in units where Boltzmann's constant equals 1 it is just the reciprocal of the temperature. So, when the temperature is positive, maximizing S − β E is the same as minimizing the free energy: In summary: every minimum or maximum principle in statics can be seen as a special case or limiting case of the principle of maximum entropy, as long as we admit that sometimes we need to maximize entropy subject to constraints. This is quite satisfying, because as noted by Jaynes, the principle of maximum entropy is a general principle for reasoning in situations of partial ignorance [4]. So, we have a kind of 'logical' explanation for the laws of statics. 1.2. Dynamics. Now suppose things are changing as time passes, so we are doing dynamics instead of statics. In classical mechanics we can imagine a system tracing out a path q(t) as time passes from t = t 0 to t = t 1 . The action of this path is often the integral of the kinetic minus potential energy: where K(t) and V (t) depend on the path q. To keep things from getting any more confusing than necessary, we are calling action A instead of the more usual S, since we are already using S for entropy. The principle of least action says that if we fix the endpoints of this path, that is the points q(t 0 ) and q(t 1 ), the system will follow the path that minimizes the action subject to these constraints. This is a powerful idea in classical mechanics. But in fact, sometimes the system merely chooses a stationary point of the action. The Euler-Lagrange equations can be derived just from this assumption. So, it is better to speak of the principle of stationary action. This principle governs classical dynamics. To generalize it to quantum dynamics, Feynman proposed that instead of our system following a single definite path, it can follow any path, with an amplitude a(q) of following the path q. He proposed this formula for the amplitude: where is Planck's constant. He also gave a heuristic argument showing that as → 0, this prescription reduces to the principle of stationary action. Unfortunately the integral over all paths is hard to make rigorous except in certain special cases. This is a bit of a distraction for our discussion now, so let us talk more abstractly about 'histories' instead of paths with fixed endpoints, and consider a system whose possible histories form some space X with a measure dx. We will look at an example later. Suppose the action of the history x ∈ X is A(x). Then Feynman's sum over histories formulation of quantum mechanics says the amplitude of the history x is: . This looks very much like the Boltzmann distribution: Indeed, the only serious difference is that we are taking the exponential of an imaginary quantity instead of a real one. This suggests deriving Feynman's formula from a stationary principle, just as we can derive the Boltzmann distribution by maximing entropy subject to a constraint. This is where quantropy enters the picture. Quantropy We have described statics and dynamics, and a well-known analogy between them. However, we have seen there are some missing items in the analogy: Our goal now is to fill in the missing entries in this chart. Since the Boltzmann distribution comes from the principle of maximum entropy, one might hope Feynman's sum over histories formulation of quantum mechanics: Unfortunately Feynman's sum over histories involves complex numbers, and it does not make sense to maximize a complex function. 
So let us try to derive Feynman's prescription from a principle of stationary quantropy. Suppose we have a set of histories, X, equipped with a measure dx. Suppose there is a function a : X → C assigning to each history x ∈ X a complex amplitude a(x). We assume these amplitudes are normalized so that X a(x) dx = 1, since that is what Feynman's normalization actually achieves. We define the quantropy of a by: One might fear this is ill-defined when a(x) = 0, but that is not the worst problem; in the study of entropy we typically set 0 ln 0 = 0. The more important problem is that the logarithm has different branches: we can add any multiple of 2πi to our logarithm and get another equally good logarithm. For now suppose we have chosen a specific logarithm for each number a(x), and suppose that when we vary the numbers a(x) they do not go through zero. This allows us to smoothly change ln a(x) as a function of a(x). To formalize this we could treat quantropy as depending not on the amplitudes a(x), but on some function b : X → C such that exp(b(x)) = a(x). In this approach we require X e b(x) dx = 1, and define the quantropy by: Then the problem of choosing branches for the logarithm does not come up. But we shall take the informal approach where we express quantropy in terms of amplitudes and choose a branch for ln a(x) as described above. Next, let us seek amplitudes a(x) that give a stationary point of the quantropy Q subject to a constraint on the expected action: The term 'expected action' is a bit odd, since the numbers a(x) are amplitudes rather than probabilities. While one could try to justify this term from how expected values are computed in Feynman's formalism, we are mainly using it because A is analogous to the expected value of the energy, E , which we saw earlier. Let us look for a stationary point of Q subject to a constraint on A , say A = α. To do this, one would be inclined to use Lagrange multipliers and look for a stationary point of But there is another constraint too, namely To do this, the Lagrange multiplier recipe says we should find stationary points of where λ and µ are Lagrange multipliers. The Lagrange multiplier λ is the more interesting one. It is analogous to the 'coolness' β = 1/T, so our analogy chart suggests that we should take We shall see that this is correct. When λ becomes large our system becomes close to classical, so we call λ the classicality of our system. Following the usual Lagrange multiplier recipe, we seek amplitudes for which holds, along with the constraint equations. We begin by computing the derivatives we need: Thus, we need The constraint X a(x) dx = 1 then forces us to choose This is precisely Feynman's sum over histories formulation of quantum mechanics if λ = 1/i ! Note that the final answer does two equivalent things in one blow: • It gives a stationary point of quantropy subject to the constraints that the amplitudes sum to 1 and the expected action takes some fixed value. • It gives a stationary point of the free action: subject to the constraint that the amplitudes sum to 1. In case the second point is puzzling, note that the 'free action' plays the same role in quantum mechanics that the free energy E − T S plays in statistical mechanics. It completes the analogy chart at the beginning of this section. It is widely used in the effective action approach to quantum field theory, though not under the name 'free action': as we shall see, it is simply −i times the logarithm of the partition function. 
It is also worth noting that when → 0, the free action reduces to the action. Thus, in this limit, the principle of stationary free action reduces to the principle of stationary action in classical dynamics. Computing Quantropy In thermodynamics there is a standard way to compute the entropy of a system in equilibrium starting from its partition function. We can use the same techniques to compute quantropy. It is harder to get the integrals to converge in interesting examples. But we can worry about that later, when we do an example. First recall how to compute the entropy of a system in equilibrium starting from its partition function. Let X be the set of states of the system. We assume that X is a measure space, and that the system is in a mixed state given by some probability We assume each state x has some energy E(x) ∈ R. Then the mixed state maximizing the entropy with a constraint on the expected energy for some value of the coolness β, where Z is the partition function: To compute the entropy of the Boltzmann distribution, we can thus take the formula for entropy and substitute the Boltzmann distribution for p(x), getting Reshuffling this, we obtain a formula for the free energy: Of course, we can also write the free energy in terms of the partition function and β: We can do the same for the expected energy: This in turn gives In short: if we know the partition function of a system in thermal equilibrium as a function of β, we can easily compute its entropy, expected energy and free energy. Similarly, if we know the partition function of a quantum system as a function of λ = 1/i , we can compute its quantropy, expected action and free action. Let X be the set of histories of some system. We assume that X is a measure space, and that the amplitudes for histories are given by a function a : X → C obeying X a(x) dx = 1. We also assume each history x has some action A(x) ∈ R. In the last section, we saw that to obtain a stationary point of quantropy with a constraint on the expected action we must use Feynman's prescription for the amplitudes: for some value of the classicality λ = 1/i , where Z is the partition function: As mentioned, the formula for quantropy here is a bit dangerous, since we are taking the logarithm of the complex-valued function a(x), which requires choosing a branch. Luckily, the ambiguity is greatly reduced when we use Feynman's prescription for a, because in this case a(x) is defined in terms of an exponential. So, we can choose this branch of the logarithm: Once we choose a logarithm for Z, this formula defines ln a(x). Inserting this formula for ln a(x) into the formula for quantropy, we obtain We can simplify this a bit, since the integral of a is 1: We thus obtain: This quantity is what we called the 'free action' in the previous section. Let us denote it by the letter Φ: In terms of λ, we have Now we can compute the expected action just as we computed the expected energy in thermodynamics: This gives: The following chart shows where our analogy stands now. Statistical Mechanics Quantum Mechanics states: x ∈ X histories: x ∈ X principle of stationary quantropy principle of minimum energy principle of stationary action (in T → 0 limit) (in → 0 limit) The quantropy of a free particle Let us illustrate these ideas with an example: a free particle. Suppose we have a free particle on a line tracing out some path as time goes by: Then its action is just the time integral of its kinetic energy: where v(t) =q(t). 
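For reference, the partition-function identities used here are, side by side (statistical mechanics first, then their quantum analogues), together with the free-particle action just introduced:

\langle E \rangle = -\frac{\partial}{\partial \beta} \ln Z(\beta), \qquad S = \ln Z + \beta \langle E \rangle, \qquad F = \langle E \rangle - TS = -\frac{1}{\beta} \ln Z;

\langle A \rangle = -\frac{\partial}{\partial \lambda} \ln Z(\lambda), \qquad Q = \ln Z + \lambda \langle A \rangle, \qquad \Phi = \langle A \rangle - \frac{1}{\lambda} Q = -\frac{1}{\lambda} \ln Z, \qquad \lambda = \frac{1}{i\hbar};

A(q) = \int_{t_0}^{t_1} \tfrac{1}{2} m\, v(t)^2\, dt, \qquad v(t) = \dot q(t).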
The partition function is then where we integrate an exponential involving the action over the space of all paths. Unfortunately, the space of all paths is infinite-dimensional, so Dq is ill-defined: there is no 'Lebesgue measure' on an infinite-dimensional vector space. So, we start by treating time as discrete-a trick going back to Feynman's original work [2]. We consider n time intervals of length ∆t. We say the position of our particle at the ith time step is q i ∈ R, and require that the particle keeps a constant velocity v i between the (i − 1)st and ith time steps: Then the action, defined as an integral, reduces to a finite sum: We consider histories of the particle where its initial position is q 0 = 0, but its final position q n is arbitrary. If we do not 'nail down' the particle at some particular time in this way, our path integrals will diverge. So, our space of histories is X = R n and we are ready to apply the formulas in the previous section. We start with the partition function. Naively, it is But this means nothing until we define the measure Dq. Since the space of histories is just R n with coordinates q 1 , . . . , q n , an obvious guess for a measure would be However, the partition function should be dimensionless. The quantity λA(q) and its exponential are dimensionless, so the measure had better be dimensionless too. But dq 1 · · · dq n has units of length n . So to make the measure dimensionless, we introduce a length scale, ∆x, and use the measure It should be emphasized that despite the notation ∆x, space is not discretized, just time. This length scale ∆x is introduced merely order to make the measure on the space of histories dimensionless. Now let us compute the partition function. For starters, we have Since q 0 is fixed, we can express the positions q 1 , . . . , q n in terms of the velocities v 1 , . . . v n . Since dq 1 · · · dq n = (∆t) n dv 1 · · · dv n this change of variables gives But this n-tuple integral is really just a product of n integrals over one variable, all of which are equal. So, we get some integral to the nth power: but we will apply this formula to compute the partition function, where the constant playing the role of α is imaginary. This makes some mathematicians nervous, because when α is imaginary, the function being integrated is no longer Lebesgue integrable. However, when α is imaginary, we get the same answer if we impose a cutoff and then let it go to infinity: or damp the oscillations and then let the amount of damping go to zero: So we shall proceed unabashed, and claim Given this formula for the partition function, we can compute everything we care about: the expected action, free action and quantropy. Let us start with the expected action: This formula says that the expected action of our freely moving quantum particle is proportional to n, the number of time steps. Each time step contributes i /2 to the expected action. The mass of the particle, the time step ∆t, and the length scale ∆x do not matter at all; they disappear when we take the derivative of the logarithm containing them. Indeed, our action could be any function of this sort: where c i are positive numbers, and we would still get the same expected action: And since we can diagonalize any positive definite quadratic form, we can state this fact more generally: whenever the action is a positive definite quadratic form on an n-dimensional vector space of histories, the expected action is n times i /2. 
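Explicitly, the discretized action, the partition function and the expected action work out as follows:

A(q) = \sum_{i=1}^{n} \tfrac{1}{2} m\, v_i^2\, \Delta t, \qquad v_i = \frac{q_i - q_{i-1}}{\Delta t},

Z = \int_{\mathbb{R}^n} e^{-\lambda A(q)}\, \frac{dq_1 \cdots dq_n}{(\Delta x)^n}
  = \Bigl( \frac{\Delta t}{\Delta x} \Bigr)^{n} \int_{\mathbb{R}^n} e^{-\lambda m \Delta t \sum_i v_i^2 / 2}\, dv_1 \cdots dv_n
  = \Bigl( \frac{2\pi\, \Delta t}{\lambda\, m\, (\Delta x)^2} \Bigr)^{n/2},

using \int_{-\infty}^{\infty} e^{-\alpha v^2}\, dv = \sqrt{\pi/\alpha} with \alpha = \lambda m \Delta t / 2, interpreted via a cutoff or damping when \alpha is imaginary, as discussed above. Hence

\langle A \rangle = -\frac{\partial}{\partial \lambda} \ln Z = \frac{n}{2\lambda} = n \cdot \frac{i\hbar}{2}.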
For example, consider a free particle in 3-dimensional Euclidean space, and discretize time into n steps as we have done here. Then the action is a positive definite quadratic form on a 3n-dimensional vector space, so the expected action is 3n times i /2. We can try to intepret this as follows. In the path integral approach to quantum mechanics, a system can trace out any history it wants. If the space of histories is an n-dimensional vector space, it takes n real numbers to determine a specific history. Each number counts as one 'decision'. And in the situation we have described, where the action is a positive definite quadratic form, each decision contributes i /2 to the expected action. There are some questions worth answering: (1) Why is the expected action imaginary? The action A is real. How can its expected value be imaginary? The reason is that we are not taking its expected value with respect to a probability measure, but instead, with respect to a complex-valued measure. Recall that The action A is real, but λ = 1/i is imaginary, so it is not surprising that this 'expected value' is complex-valued. (2) Why does the expected action diverge as n → ∞? We have discretized time in our calculation. To take the continuum limit we must let n → ∞ while simultaneously letting ∆t → 0 in such a way that n∆t stays constant. Some quantities will converge when we take this limit, but the expected action will not: it will go to infinity. What does this mean? This phenomenon is similar to how the expected length of the path of a particle undergoing Brownian motion is infinite. In fact the free quantum particle is just a Wick-rotated version of Brownian motion, where we replace time by imaginary time, so the analogy is fairly close. The action we are considering now is not exactly analogous to the arclength of a path: Instead, it is proportional to this quadratic form: However, both these quantities diverge when we discretize Brownian motion and then take the continuum limit. The reason is that for Brownian motion, with probability one the path of the particle is nondifferentiable, with Hausdorff dimension > 1 [6]. We cannot apply probability theory to the quantum situation, but we are seeing that the 'typical' path of a quantum free particle has infinite expected action in the continuum limit. (3) Why does the expected action of the free particle resemble the expected energy of an ideal gas? For a classical ideal gas with n particles in 3d space, the expected energy is in units where Boltzmann's constant is 1. For a free quantum particle in 3d space, with time discretized into n steps, the expected action is Why are the answers so similar? The answers are similar because of the analogy we are discussing. Just as the action of the free particle is a positive definite quadratic form on R n , so is the energy of the ideal gas. Thus, computing the expected action of the free particle is just like computing the expected energy of the ideal gas, after we make these replacements: The last remark also means that the formulas for the free action and quantropy of a quantum free particle will be analogous those for the free energy and entropy of a classical ideal gas, except missing the factor of 3 when we consider a particle on a line. For the free particle on a line, we have seen that ln Z = n 2 ln 2π∆t λm (∆x) 2 . Setting K = 2π∆t m (∆x) 2 , we can write this more compactly as ln Z = n 2 (ln K − ln λ). 
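From \ln Z = \frac{n}{2} (\ln K - \ln \lambda), the quantities discussed next follow directly; for reference, and including the ideal-gas comparison from question (3) above:

\langle E \rangle_{\text{ideal gas, 3d}} = \frac{3n}{2}\, T \qquad \longleftrightarrow \qquad \langle A \rangle_{\text{free particle, 3d}} = 3n \cdot \frac{i\hbar}{2}, \qquad \text{under } \frac{1}{\beta} = T \;\leftrightarrow\; \frac{1}{\lambda} = i\hbar;

\Phi = -\frac{1}{\lambda} \ln Z = -\frac{n}{2\lambda} \bigl( \ln K - \ln \lambda \bigr), \qquad
Q = \ln Z + \lambda \langle A \rangle = \frac{n}{2} \bigl( 1 + \ln K - \ln \lambda \bigr).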
We thus obtain the following formula for the free action: Note that the ln K term dropped out when we computed the expected action by differentiating ln Z with respect to λ, but it shows up in the free action. The presence of this ln K term is surprising, since the constant K is not part of the usual theory of a free quantum particle. A completely analogous surprise occurs when computing the partition function of a classical ideal gas. The usual textbook answer involves a term of type ln K where K is proportional to the volume of the box containing the gas divided by the cube of the thermal de Broglie wavelength of the gas molecules [8]. Curiously, the latter quantity involves Planck's constant, despite the fact that we we are considering a classical ideal gas! Indeed, we are forced to introduce a quantity with dimensions of action to make the partition function of the gas dimensionless, because the partition function is an integral of a dimensionless quantity over position-momentum pairs, and dpdq has units of action. Nothing within classical mechanics forces us to choose this quantity to be Planck's constant; any choice will do. Changing our choice only changes the free energy by an additive constant. Nonetheless, introducing Planck's constant has the advantage of removing this ambiguity in the free energy of the classical ideal gas, in a way which is retroactively justified by quantum mechanics. Analogous remarks apply to the length scale ∆x in our computation of the free action of a quantum particle. We introduced it only to make the partition function dimensionless. It is mysterious, much as Planck's constant was mysterious when it first forced its way into thermodynamics. We do not have a theory or experiment that chooses a favored value for this constant. All we can say at present is that it appears naturally when we push the analogy between statistical mechanics and quantum mechanics to its logical conclusion-or, a skeptic might say, to its breaking point. Finally, the quantropy of the free particle on a line is Again, the answer depends on the constant K: if we do not choose a value for this constant, we only obtain the quantropy up to an additive constant. An analogous problem arises for the entropy of a classical ideal gas: without introducing Planck's constant, we can only compute this entropy up to an additive constant. Conclusions There are many questions left to tackle. The biggest is: what is the meaning of quantropy? Unfortunately it seems hard to attack this directly. It may be easier to work out more examples and develop more of an intuition for this concept. There are, however, some related puzzles worth keeping in mind. As emphasized by Lisi [5], it is rather peculiar that in the path-integral approach to quantum mechanics we normalize the complex numbers a(x) associated to paths so that they integrate to 1: X a(x) dx = 1. It clearly makes sense to normalize probabilities so that they sum to 1. However, starting from the wavefunction of a quantum system, we obtain probabilities only after taking the absolute value of the wavefunction and squaring it. Thus, for wavefunctions we impose X |ψ(x)| 2 dx = 1 rather than X ψ(x) dx = 1. For this reason Lisi calls the numbers a(x) 'complex probabilities' rather than amplitudes. However, the meaning of complex probabilities remains mysterious, and this is tied to the mysterious nature of quantropy. Feynman's essay on the interpretation of negative probabilities could provide some useful clues [1]. 
It is also worth keeping in mind another analogy: 'coolness as imaginary time'. Here we treat β as analogous to it/ℏ rather than 1/iℏ. This analogy is widely used to convert quantum mechanics problems into statistical mechanics problems by means of Wick rotation, which essentially means studying the unitary group exp(−itH/ℏ) by studying the semigroup exp(−βH) and then analytically continuing β to imaginary values. Wick rotation plays an important role in Hawking's computation of the entropy of a black hole, nicely summarized in his book with Penrose [3]. The precise relation of this other analogy to the one explored here remains unclear and is worth exploring. Note that the quantum Hamiltonian H appears on both sides of this other analogy, whereas the analogy developed in this paper pairs the energy of a state on the statistical side with the action of a history on the quantum side.
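Stated in the operator language (a standard formulation, included here only to make the comparison concrete; H is assumed time-independent):

Z_{\text{stat}} = \operatorname{Tr}\, e^{-\beta H} \qquad \text{versus} \qquad Z_{\text{quant}} = \operatorname{Tr}\, e^{-itH/\hbar},

so 'coolness as imaginary time' identifies \beta \leftrightarrow it/\hbar, while the analogy of this paper identifies \beta \leftrightarrow \lambda = 1/i\hbar and E(x) \leftrightarrow A(x).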
7,022.4
2013-11-04T00:00:00.000
[ "Physics" ]
Small Area Disease Risk Estimation and Visualization Using R Small area disease risk estimation is essential for disease prevention and control. In this paper, we demonstrate how R can be used to obtain disease risk estimates and quantify risk factors using areal data. We explain how to define disease risk models and how to perform Bayesian inference using the INLA package. We also show how to make interactive maps of estimates using the leaflet package to better understand the disease spatial patterns and communicate the results. We show an example of lung cancer risk in Pennsylvania, United States, in year 2002, and demonstrate that R represents an excellent tool for disease surveillance by enabling reproducible health data analysis. Introduction Disease risk mapping analyses can help to better understand the spatial variation of the disease, and allow the identification of important public health determinants.These analyses are essential to inform programmes of disease prevention and control.The increased availability of geospatial disease and population data has enabled to study a number of health outcomes worldwide such as influenza and cancer in developed countries (Moraga and Ozonoff, 2013;Moraga and Kulldorff, 2016), and neglected tropical diseases (Moraga et al., 2015;Hagan et al., 2016). Areal disease data often arise when disease outcomes observed at point level locations are aggregated over subareas of the study region due to several reasons such as patient confidentiality.Producing disease risk estimates at areal level is complicated by the fact that raw rates can be very unstable in areas with small populations and for rare diseases, and also by the presence of spatial correlation that may exist due to spatially correlated risk factors (Leroux et al., 2000).Thus, generalized linear mixed models are often used to obtain disease risk estimates since they enable to improve local estimates by accommodating spatial correlation and the effects of explanatory variables.Bayesian inference in these models may be performed using the Integrated Nested Laplace Approximation (INLA) approach (Rue et al., 2009) which is a computational alternative to MCMC that allows to do approximate Bayesian inference in latent Gaussian models.This approach is implemented in the R package called INLA (Rue et al., 2017) (http://www.r-inla.org/).Small area disease estimates can be visualized through maps, greatly facilitating effective communication.R provides excellent tools for visualization including packages for making interactive maps such as leaflet (Cheng et al., 2017).The maps created with leaflet support interactive panning and zooming which is very convenient to examine small areas in detail. In this paper, we illustrate the use of R for performing disease risk mapping analysis using areal data.First, we introduce disease risk models for areal data and give a brief overview of INLA.In Section 2.4 we show how to estimate lung cancer risk and quantify risk factors in Pennsylvania, United States, in year 2000.Specifically, we discuss how to compute the observed and expected disease counts in the Pennsylvania counties, how to obtain disease risk estimates by fitting a spatial disease risk model using INLA, and how to build interactive maps showing the risk estimates using leaflet.Finally, the conclusions are presented. Disease risk models Disease risk estimates in areas can be obtained by computing the Standardized Incidence Ratios (SIRs).For area i, i = 1, . . 
., n, the SIR is obtained as the ratio of the observed to the expected disease counts: SIR_i = Y_i / E_i. The expected counts represent the total number of disease cases one would expect if the population of the specific area behaved the way the standard (or regional) population behaves. The expected counts can be calculated using indirect standardization as shown below, where r_j^(s) is the disease rate in stratum j of the standard population, and n_j is the population in stratum j of the specific area. The SIR corresponding to area i, SIR_i, indicates whether area i has more (SIR_i > 1), equal (SIR_i = 1) or fewer (SIR_i < 1) cases observed than expected from the standard population. When applied to mortality data, the ratio is commonly known as the Standardized Mortality Ratio or SMR. Although in some situations SIRs can give a sense of the disease's spatial variability, very extreme values can occur in areas with small populations owing to the small sample sizes involved. In contrast, disease models are preferred for obtaining disease risk estimates because they make it possible to incorporate covariates and to borrow information from neighboring areas to improve local estimates, resulting in the smoothing or shrinking of extreme values based on small sample sizes (Gelfand et al., 2000). A common approach is to model the observed counts Y_i, i = 1, ..., n, using a Poisson distribution with mean E_i × θ_i, where E_i is the expected count and θ_i is the relative risk in area i. Then, the log risks are modeled as the sum of an intercept, which models the overall disease risk level, and random effects that account for extra-Poisson variability in the observed data (Lawson, 2009). Areas with relative risks θ > 1 and θ < 1 are areas with high and low risks, respectively. Areas with θ = 1 have the same risk as expected from the standard population. The general model in disease mapping is written as shown below. Here, α denotes the overall risk level, u_i is a spatially structured random effect that models the spatial dependence between the relative risks, and v_i is an unstructured exchangeable random effect that models uncorrelated noise. Often, other covariates or random effects are also included to quantify risk factors and deal with other sources of variability. A model commonly used in disease mapping is the Besag-York-Mollié (BYM) model (Besag et al., 1991). In this model, the spatially structured component u_i is modelled with the conditional autoregressive (CAR) distribution, which smooths the data according to an adjacency structure given by a neighborhood matrix specifying that two areas are neighbours if they share a common boundary. The CAR distribution is written as shown below, with conditional mean ū_δi = n_δi^(-1) ∑_{j∈δi} u_j, where δ_i and n_δi denote, respectively, the set of neighbours and the number of neighbours of area i. The unstructured component v_i is modelled using independent and identically distributed normal variables with zero mean and variance σ_v².
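Collected in one place, and in the common BYM parameterization (σ_u² denotes the conditional variance of the spatially structured component), the formulas referred to above are:

E_i = \sum_{j} r_j^{(s)}\, n_j, \qquad n_j \text{ the population of stratum } j \text{ in area } i,

Y_i \mid \theta_i \sim \text{Poisson}(E_i\, \theta_i), \qquad \log \theta_i = \alpha + u_i + v_i,

u_i \mid \mathbf{u}_{-i} \sim N\Bigl( \bar u_{\delta_i},\ \frac{\sigma_u^2}{n_{\delta_i}} \Bigr), \qquad \bar u_{\delta_i} = \frac{1}{n_{\delta_i}} \sum_{j \in \delta_i} u_j, \qquad v_i \sim N(0, \sigma_v^2).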
INLA Traditionally, Bayesian inference has been implemented via MCMC methods, which make inference tractable for complex models but may present convergence and computation time problems. Integrated Nested Laplace Approximation (INLA) is a computationally less intensive alternative to MCMC designed to perform approximate Bayesian inference in latent Gaussian models (Rue et al., 2009). These models include a very wide and flexible class of models ranging from generalized linear mixed models to spatial and spatio-temporal models. Specifically, these are models in which the observed data y depend on a latent Gaussian field x and hyperparameters θ. Observations y_i are assumed to belong to an exponential family with mean µ_i = g^(-1)(η_i). The linear predictor η_i accounts for the effects of various covariates in an additive way, η_i = α + ∑_k β_k z_ki + ∑_j f^(j)(u_ji). Here, α is the intercept, the {β_k} quantify the linear effects of covariates {z_ki} on the response, and the {f^(j)(·)} are a set of non-linear or smooth functions defined in terms of some covariates {u_ji}. This formulation can accommodate a wide range of models thanks to the very different forms that the functions {f^(j)} can take, including the disease risk models previously introduced. INLA uses a combination of analytical approximation and numerical integration to obtain approximate posterior distributions of the parameters, which can then be post-processed to compute quantities of interest like posterior expectations and quantiles. The INLA approach is implemented in the R package INLA. This package is not on CRAN because it uses some external C libraries that make it difficult to build the binaries. Therefore, when we install the package we need to use install.packages(), adding the URL of the INLA repository: install.packages("INLA", repos = "https://inla.r-inla-download.org/R/stable", dep = TRUE) To fit a model using INLA we need to take two steps. First, we need to write the linear predictor of the model as a formula object in R. Then, we run the model calling the inla() function, where we specify the formula, the family, the data and other options. Results can be inspected with the summary() function, and the posterior distributions can be post-processed using a set of specific functions provided by INLA. Further details about how to use all these functions are given in the disease mapping example in the next section. Example: lung cancer risk in Pennsylvania In this section we present an example of a small area disease mapping study where we estimate the risk of lung cancer in Pennsylvania counties in year 2002. We use the data contained in the R package SpatialEpi (Kim and Wakefield, 2016). The data contain the county populations, which were obtained from the 2000 decennial census, and the lung cancer cases and smoking proportions, which were obtained from the Pennsylvania Department of Health. We show how to calculate the observed and expected disease cases, and the SIRs in each of the counties. We also obtain disease risk estimates and quantify risk factors by fitting a Bayesian model using INLA. Finally, we show how to make interactive maps of the risk estimates using leaflet. Data We start by loading the SpatialEpi package and attaching the pennLC data.
library(SpatialEpi) data(pennLC) By typing ?pennLC we see pennLC is a list with the following elements: • geo: a data frame of county ids, and longitude and latitude of the geographic centroid of each county, • data: a data frame of county ids, number of cases, population and strata information, • smoking: a data frame of county ids and proportion of smokers, • spatial.polygon: a SpatialPolygons object with the map of Pennsylvania. pennLC$data contains the number of lung cancer cases and the population at county level, stratified on race (white and non-white), gender (female and male) and age group (under 40, 40-59, 60-69 and 70+). We now create a data frame called d with columns containing the counties ids, the observed and expected number of cases, the smoking proportions and the SIRs.Specifically, d will contain the following columns: • id: id of each county, • Y: observed number of cases in each county, • E: expected number of cases in each county, • smoking: smoking proportion in each county, • SIR: SIR of each county. Observed cases pennLC$data contains the cases in each county stratified by race, gender and age.We can obtain the number of cases in each county, Y, by aggregating the rows of pennLC$data by county and adding up the observed number of cases. d <-aggregate(x = pennLC$data$cases, by = list(county = pennLC$data$county), FUN = sum) aggregate() returns a data frame where the first row is the county and the second column is the observed number of cases in each of the counties.We set the column names of the returned object equal to id and Y. Expected cases Now we calculate the indirectly standardized expected number of cases in each county as explained in Section 2.2.That is, we use the strata-specific rates from the the Pennsylvania population (standard population), and apply them to the population distribution of the county.The expected counts represent the total number of disease cases one would expect if the population in the county behaved the way the Pennsylvania population behaves.We can do this by using the expected() function of SpatialEpi.This function has three arguments, namely, • population: a vector of population counts for each strata in each area, • cases: a vector with the number of cases for each strata in each area, • n.strata: number of strata considered. Vectors population and cases have to be sorted by area first and then, within each area, the counts for all strata need to be listed in the same order.All strata need to be included in the vectors, including strata with 0 cases.Hence, to get the expected counts we first sort the data using the order() function where we specify the order as county, race, gender and finally age. pennLC$data <-pennLC$data[order(pennLC$data$county, pennLC$data$race, pennLC$data$gender, pennLC$data$age), ] Then we call the expected() function to obtain the expected counts E in each county.In the function we set population equal to pennLC$data$population and cases equal to pennLC$data$cases.There are 2 races, 2 genders and 4 age groups for each county, so number of strata is set to 2 x 2 x 4 = 16. 
population <-pennLC$data$population cases <-pennLC$data$cases n.strata <-16 E <-expected(population, cases, n.strata) Now we add the vector E to the data frame d which contains the counties ids (id) and the observed counts (Y), making sure the E elements correspond to the counties in d$id in the same order.To do that, we use match() to calculate the vector of the positions that match d$id in unique(pennLC$data$county) which are the corresponding counties of E. Then we rearrange E using that vector. Smokers proportions We also add to d the variable smoking which represents the proportion of smokers in each county.We add this variable using the merge() function where we specify the columns for merging as id in d and county in pennLC$smoking.d <-merge(d, pennLC$smoking, by.x = "id", by.y = "county") SIRs Finally, we compute the vector of SIRs as the ratio of the observed to the expected counts, and add it to the data frame d. Add data to map The map of Pennsylvania counties is given by the SpatialPolygons object called pennLC$spatial.polygon.Using this object and the data frame d we can create a SpatialPolygonsDataFrame called map, that will allow us to make maps of the variables in d.In order to do that, we first set the row names of the data frame d equal to d$id. Mapping variables We can visualize the observed and expected disease counts, the SIRs, as well as the smokers proportions in an interactive chropleth map using the leaflet package.We create the map by first calling leaflet() and adding the default OpenStreetMap map tiles to the map with addTiles().Then we add the Pennsylvania counties with addPolygons() where we specify the areas boundaries color (color) and the stroke width (weight).We fill the areas with the colours given by the color palette function generated with colorNumeric(), and set fillOpacity to a value less than 1 to be able to see the background map.We use colorNumeric() to create a color palette function that maps data values to colors according to a given palette.We create the function using the parameters palette with the color function that values will be mapped to, and domain with the possible values that can be mapped.Finally, we add the legend by specifying the color palette function (pal) and the values used to generate colors from the palette function (values).We set opacity to the same value as the opacity in the areas, and specify a title and a position for the legend. 
library(leaflet) l <-leaflet(map) %>% addTiles() pal <-colorNumeric(palette = "YlOrRd", domain = map$SIR) l %>% addPolygons(color = "grey", weight = 1, fillColor = ~pal(SIR), fillOpacity = 0.5) %>% addLegend(pal = pal, values = ~SIR, opacity = 0.5, title = "SIR", position = "bottomright") We can improve the map by highlighting the counties when the mouse hovers over them, and showing information about the observed and expected counts, SIRs, and smoking proportions.We do this by adding the arguments highlightOptions, label and labelOptions to addPolygons().We choose to highlight the areas using a bigger stroke width (highlightOptions(weight = 4)).We create the labels using HTML syntax.First, we create the text to be shown using the function sprintf() which returns a character vector containing a formatted combination of text and variable values and then applying htmltools::HTML() which marks the text as HTML.In labelOptions we specify the labels style, textsize, and direction.Possible values for direction are left, right and auto and this specifies the direction the label displays in relation to the marker.We choose auto so the optimal direction will be chosen depending on the position of the marker.labels <-sprintf("<strong> %s </strong> <br/> Observed: %s <br/> Expected: %s <br/> Smokers proportion: %s <br/> SIR: %s", map$id, map$Y, round(map$E, 2), map$smoking, round(map$SIR, 2)) %>% lapply(htmltools::HTML) l %>% addPolygons(color = "grey", weight = 1, fillColor = ~pal(SIR), fillOpacity = 0.5, highlightOptions = highlightOptions(weight = 4), label = labels, labelOptions = labelOptions(style = list("font-weight" = "normal", padding = "3px 8px"), textsize = "15px", direction = "auto")) %>% addLegend(pal = pal, values = ~SIR, opacity = 0.5, title = "SIR", position = "bottomright") Figure 1 shows a snapshot of the interactive map created using leaflet showing the SIRs in the Pennsylvania counties.We can examine the map and see which counties have SIR equal to 1 indicating observed counts are the same as expected counts, and which counties have SIR greater (or smaller) than 1, indicating observed counts are greater (or smaller) than expected counts.This map gives a sense of the disease risk across Pennsylvania.However, SIRs are misleading and insufficiently reliable in counties with small populations.In contrast, model-based approaches enable to incorporate covariates and borrow information from neighboring counties to improve local estimates, resulting in the smoothing of extreme values based on small sample sizes.In the next section we will show how to obtain disease risk estimates using a Bayesian model using INLA. Modeling In this Section we specify the model for the data, and detail the required steps to fit the model and obtain the disease risk estimates using INLA. Model We specify a model assuming that the observed counts Y i are conditionally independently Poisson distributed, where E i is the expected count and θ i is the relative risk in area i.The logarithm of θ i is expressed as follows: where β 0 is the intercept, β 1 is the coefficient of the smokers proportion covariate, u i is an structured ), and v i is an unstructured spatial effect, v i ∼ N(0, σ 2 v ). 
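Written out explicitly, the model fitted in this example is (with σ_u² and σ_v² the variances of the structured and unstructured components, in the usual intrinsic CAR parameterization):

Y_i \mid \theta_i \sim \text{Poisson}(E_i\, \theta_i), \qquad \log \theta_i = \beta_0 + \beta_1 \times \text{smoking}_i + u_i + v_i,

u_i \mid \mathbf{u}_{-i} \sim N\Bigl( \bar u_{\delta_i},\ \frac{\sigma_u^2}{n_{\delta_i}} \Bigr), \qquad v_i \sim N(0, \sigma_v^2).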
Neighbourhood matrix We create the neighbourhood matrix needed to define the spatial random effect using the poly2nb() and the nb2INLA() functions of the spdep package (Bivand, 2017).First, we use poly2nb() to create a neighbours list based on areas with contiguous boundaries.Then, we use nb2INLA() to convert this list into a file with the representation of the neighbourhood matrix as required by INLA that is saved in the working directory.Then we read the file using the inla.read.graph()function of INLA, and store it in the object g which we will later use for specifying the spatial disease model with INLA. Inference using INLA As stated in Section 2.4.3, the model includes two random effects, namely, u i for modeling spatial residual variation, and v i for modeling unstructured noise.We need to include two vectors in the data that denote the indices of these random effects.We call re_u the vector denoting u i , and re_v the vector denoting v i .We set both re_u and re_v equal to 1, . . ., n, where n is the number of counties.In our example, n = 67 and this can be obtained with the number of rows in the data (nrow(map@data)). map$re_u <-1:nrow(map@data) map$re_v <-1:nrow(map@data) We specify the model formula by including the response in the left-hand side, and the fixed and random effects in the right-hand side.Random effects are set using f() with parameters equal to the name of the variable and the chosen model.For u i , we use model = "besag" with neighbourhood matrix given by g.For v i we choose model = "iid".formula <-Y ~smoking + f(re_u, model = "besag", graph = g) + f(re_v, model = "iid") We fit the model by calling the inla() function.We specify the formula, family, data, and the expected counts, and set control.predictor equal to list(compute = TRUE) to compute the posterior means of the linear predictors. Results We can inspect the results object res using summary(). summary(res) ## ## Call: ## c("inla(formula = formula, family = \"poisson\", data = map@data, ", " We see the intercept β0 = -0.3236with a 95% credible interval equal to (-0.6212, -0.0279), and the coefficient of smoking is β1 = 1.1567 with a 95% credible interval equal to (-0.0810, 2.3853).This indicates that the smokers proportion has a positive although non significant effect on disease risk.We can plot the posterior distribution of the smoking coefficient.We do this by calculating a spline smoothing of the marginal distribution of the coefficient with inla.smarginal() and then plot it with ggplot() of ggplot2 package (Wickham and Chang, 2016) (see Figure 2).The disease risk estimates and uncertainty for each of the counties are given by the mean posterior and the 95% credible intervals of θ i , i = 1, . . ., n which are in the data frame res$summary.fitted.values. Here, column mean is the mean posterior and 0.025quant and 0.975quant are the 2.5 and 97.5 percentiles, respectively.We add these data to map to be able to make maps of these variables.We assign column mean to the estimate of the relative risk, and columns 0.025quant and 0.975quant to the lower and upper limits of 95% credible intervals of the risks. Mapping disease risk We show the estimated disease risk in an interactive map using leaflet.In the map, we add labels that appear when mouse hovers over the counties showing information about observed and expected counts, SIRs, smokers proportions, RRs, and lower and upper limits of 95% credible intervals. 
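For reference, the modeling steps described in this section can be collected into one short script. This is a sketch assembled from the calls quoted above, assuming the objects built earlier (map with columns Y, E and smoking); the file name "map.adj" and the column names RR, LL and UL are illustrative choices rather than names fixed by the packages.

library(spdep)
library(INLA)

# Neighbourhood structure of the Pennsylvania counties
nb <- poly2nb(map)
nb2INLA("map.adj", nb)
g <- inla.read.graph(filename = "map.adj")

# Indices for the structured (u) and unstructured (v) random effects
map$re_u <- 1:nrow(map@data)
map$re_v <- 1:nrow(map@data)

# Model formula and fit
formula <- Y ~ smoking + f(re_u, model = "besag", graph = g) + f(re_v, model = "iid")
res <- inla(formula, family = "poisson", data = map@data, E = E,
            control.predictor = list(compute = TRUE))
summary(res)

# Posterior summaries of the relative risks theta_i, added to the map
map$RR <- res$summary.fitted.values[, "mean"]
map$LL <- res$summary.fitted.values[, "0.025quant"]
map$UL <- res$summary.fitted.values[, "0.975quant"]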
Summary In this article we have shown how to obtain small area disease risk estimates and generate interactive maps that help the understanding and interpretation of the results. First, we have introduced disease risk models using areal data and have given an overview of the INLA package for performing Bayesian inference. Then, we have given a practical example where we have estimated lung cancer risk in Pennsylvania in 2002. We have conducted the analyses using several R packages such as spdep for spatial data manipulation, SpatialEpi for calculating the expected disease counts in the Pennsylvania counties, INLA for performing Bayesian inference, and leaflet and ggplot2 for visualization of the results. One limitation of disease models based on areal data is that they are often subject to ecological bias. This bias occurs when associations obtained from analyses that use variables at an aggregated level lead to conclusions different from analyses that use the same variables measured at an individual level (Robinson, 1950). Therefore, whenever point data are available, it is preferable to use disease models without aggregating data and to predict disease risk on a continuous surface (Moraga et al., 2017; Diggle et al., 2013). It is also possible to use R to build tools to better communicate the results to stakeholders and the general public. For instance, summaries and maps of disease risk estimates can be presented in interactive dashboards using flexdashboard (Allaire, 2017) and web applications using shiny (Chang et al., 2017). One example of such a web application is the SpatialEpiApp package (Moraga, 2017a,b), which is useful for disease risk mapping and the detection of clusters. This is an easy-to-use application where users simply need to upload their data and click several buttons that execute the tasks and process the outputs, making spatial analysis methods accessible to multiple disciplines. SpatialEpiApp creates interactive visualizations by using the packages leaflet for rendering maps, dygraphs (Vanderkam et al., 2017) for plotting time series, and DT (Xie, 2016) for displaying data tables, and enables the generation of reports by using rmarkdown (Allaire et al., 2017). In conclusion, R represents an excellent tool for disease surveillance by enabling reproducible health data analysis. Figure 1: Snapshot of the interactive map created using leaflet showing the lung cancer SIRs in Pennsylvania counties in 2002. Figure 2: Posterior distribution of the coefficient of the covariate smokers proportion. Figure 3: Snapshot of the interactive map created using leaflet showing the lung cancer RRs in Pennsylvania counties in 2002. Figure 4: Snapshot of the interactive maps created using leaflet showing the lung cancer SIRs (left) and RRs (right) in Pennsylvania counties in 2002 using the same scale. (Continuing the "Add data to map" step described earlier: we then merge pennLC$spatial.polygon and d, matching the SpatialPolygons Polygons ID slot values with the data frame row names.)
5,625.2
2018-06-07T00:00:00.000
[ "Computer Science", "Environmental Science", "Medicine" ]
Widely tunable black phosphorus mid-infrared photodetector Lately rediscovered orthorhombic black phosphorus (BP) exhibits promising properties for near- and mid-infrared optoelectronics. Although recent electrical measurements indicate that a vertical electric field can effectively reduce its transport bandgap, the impact of the electric field on light-matter interaction remains unclear. Here we show that a vertical electric field can dynamically extend the photoresponse in a 5 nm-thick BP photodetector from 3.7 to beyond 7.7 μm, leveraging the Stark effect. We further demonstrate that such a widely tunable BP photodetector exhibits a peak extrinsic photo-responsivity of 518, 30, and 2.2 mA W−1 at 3.4, 5, and 7.7 μm, respectively, at 77 K. Furthermore, the extracted photo-carrier lifetime indicates a potential operational speed of 1.3 GHz. Our work not only demonstrates the potential of BP as an alternative mid-infrared material with broad optical tunability but also may enable the compact, integrated on-chip high-speed mid-infrared photodetectors, modulators, and spectrometers. The bandgap of ultrathin black phosphorus can be tuned by a vertical electric field. Here, the authors leverage such electric field to extend the photoresponse of a black phosphorus photodetector to 7.7 μm, opening the doors to various mid-infrared applications. B lack phosphorus 1-8 (BP) recently has gained significant attention from the optoelectronic community, due to its moderate bandgap in mid-infrared (mid-IR) 3 , its high carrier mobility [9][10][11] , and its intrinsic layered nature, which allows for the on-chip monolithic integration with electronics and various optical waveguides [12][13][14] . Previously, a number of BP photodetectors [12][13][14][15][16][17][18][19][20] have been demonstrated. However, for BP photodetectors relying on efficient interband transitions, the cutoff wavelength is around 3.7 μm, which is determined by its bulk bandgap of around 0.33 electron volt 3 . Extending its operation into longer wavelength range can significantly improve its functionalities. Furthermore, if the light-BP interaction leveraging interband transitions can be dynamically tuned within a wide spectral range in mid-IR, optical devices beyond photodetectors, such as on-chip high-speed mid-IR optical modulators and spectrometers, may be built based on this concept of dynamic response tuning. Such high-speed mid-IR optical modulators and spectrometers can play an important role in many applications, including free-space communications, sensing, and surveillance 14 . Alloying arsenic with BP can form a new type of narrow gap semiconductor, black arsenic phosphorus (b-AsP) 21 . B-AsP has a bandgap even smaller than that of BP and b-AsP photodetectors show a decent photoresponse at mid-IR wavelength up to 8.2 μm 22 . On the other hand, recent transport studies in the BP thin film show that a vertical electric field can effectively reduce its transport bandgap by shifting its band edge energies [23][24][25] . In fact, such strong tuning of transport bandgap has also been observed in few-layer transition metal dichalcogenides 26 . However, lightmatter interaction and carrier transport can respond to the external stimuli very differently in layered materials and it remains unclear how the external electric field can modify the light-BP interaction. 
In fact, in bilayer molybdenum disulfide (MoS 2 ), the vertical electric field has been shown to significantly reduce its transport bandgap and the bandgap tuning is almost linearly dependent on the vertical electric field 26 . However, the light-matter interaction in a biased bilayer MoS 2 is dominated by direct optical transitions within the same layer, which are almost field-independent 26 . On the contrary, at the direct bandgap (K point) in a strongly biased bilayer MoS 2 , the conduction and valence states are localized in different layers, respectively, leading to minimum optical transition dipole and weak light-matter interaction at the direct bandgap energy 26 . In this work, we demonstrate widely tunable mid-IR photodetectors operating up to 7.7 μm based on a hexagonal boron nitride (hBN)/BP/hBN-sandwiched heterostructure. Here the hBN dielectric not only guarantees an ultra-clean interface for efficient photo-carrier collection but also prevents BP from oxidation. The high device quality allows for the observation of the strongest intrinsic photoconductive response near the chargeneutrality point in BP. The widely tunable photoresponse combined with theoretical calculations show the promising future of BP thin film as an alternative and high-quality material for mid-IR photonics especially for integrated photonic systems. Results Device fabrication and basic characterization. The schematic and optical images of the BP photodetector based on a dual-gate transistor structure are shown in Fig. 1a, b, respectively. Here, BP films are sandwiched by hBN flakes and transferred onto a silicon substrate covered with a 90 nm-thick silicon dioxide using the polymer-free dry transfer method 27 . All exfoliation and transfer processes are performed in an argon-filled glovebox with oxygen and moisture concentrations below 0.1 part per million (ppm) to prevent BP from oxidation. As shown in Fig. 1c, the crosssectional high-resolution transmission electron microscopy (HR-TEM) image demonstrates the clean and oxidation-free BP/hBN interfaces. This is further evidenced by the elemental mapping (see Methods) for nitrogen (blue), phosphorus (green), and oxygen (red), as shown in the right panel of Fig. 1c. No oxygen signal is observed within the hBN/BP/hBN heterostructure. The thickness of BP, top and bottom hBN layers are also accurately determined from the cross-sectional HR-TEM. In this device, the BP thin film consists of 10 layers and the top and bottom hBN are 23 and 9 nm-thick, respectively. The top hBN dielectric consists of two hBN layers transferred separately. The first hBN layer is used to sandwich the BP film. The second hBN layer is transferred after the deposition of the interdigitated chromium/gold (3/27 nm) fingers for photocurrent collection, and it covers the channel and the electrodes. As a result, the second top hBN layer prevents the electrical shorting between the gate electrode and the interdigitated fingers for photocurrent collections. These interdigitated electrodes are designed along the Y-direction (zigzag) of BP thin flakes such that the carrier collection is along the Xdirection (armchair), in order to achieve the highest carrier mobility and photo-responsivity. The BP crystalline direction was identified by the polarization-resolved Raman spectroscopy, in which the intensity ratio of peaks A g 2 and A g 1 reaches a maximum value of 3.8 when the polarization of excitation laser is aligned along the X-direction as shown in Fig. 1d 28 . 
Detailed Raman analysis is presented in Supplementary Fig. 1 and Supplementary Note 1. Platinum thin film (~7 nm) is used as the partially transparent top gate with a mid-IR transmission efficiency of around 60% ( Supplementary Fig. 2) and silicon is used as the back gate. The detailed device fabrication process is discussed in the Methods section and Supplementary Fig. 3. Transport measurement in a 10-layer BP device. We first performed two-terminal transport measurements as a function of top gate voltage V tg in a 10-layer BP device (Fig. 2a) at different static back gate bias V bg ranging from −25 to 30 V at 77 K. We infer a two-terminal field-effect hole mobility of 1600 cm 2 V −1 s −1 and electron mobility of 220 cm 2 V −1 s −1 at V bg = 0 V. The mobility is estimated in the linear region of the transfer curve using μ = L/W ⋅ 1/(C tg V ds ) ⋅ dI ds /dV tg , where L = 1.6 μm and W = 25 μm is the length and the total width of the device, respectively, I ds is the source-drain current and C tg = ε t ε 0 /d t is the top gate capacitance per unit area. Here, ε 0 is the permittivity of vacuum, and ε t and d t (23 nm) are the relative permittivity and thickness of the top hBN dielectric, respectively. In the dual-gate configuration, two displacement fields generated in top dielectrics are utilized to control the doping and potential difference across thin BP films, where ε b is the overall relative permittivity of back gate dielectric (including 90 nm SiO 2 and a 9 nm-thick bottom hBN flake), d b (99 nm) is the total thickness of back gate dielectric, and V t0 and V b0 are charge-neutrality point voltages due to the unintentional doping. We find a small V t0 value (~0.5 V), which indicates that the BP sample is very intrinsic. Figure 2a shows source-drain current I ds as function of top gate voltages with source-drain bias of 100 mV at different static back gate bias V bg . With decreasing back gate bias, the charge-neutrality point of BP shifts to higher top gate bias and scales linearly with V bg (Supplementary Fig. 4), indicating that two gates with opposite signs are able to effectively compensate the doping. At charge-neutrality points, the displacement fields across BP channel satisfy the relation D = D t = D b . From this relation, the relative permittivity of hBN is estimated to be~3.1 in our devices. This value is consistent with the results reported previously 29 . As shown in Fig. 2a, the minimum conductance at charge-neutrality point increases significantly at higher displacement field, clearly indicating the bandgap decrease. A quantitative study of the bandgap reduction as function of the displacement field has previously been performed using temperature-dependent four-terminal conductance measurements at charge-neutrality points 23 or using the scanning tunneling microscope 25 . In addition, the bandgap tuning in few-layer BP has also been explored theoretically and using angleresolved photoemission spectroscopy 23,25,[30][31][32][33][34] . Here, in order to theoretically explore the impact of the vertical electric bias on the light-BP interaction, we include the crystalmomentum dependence in a tight-binding model 23 to describe the single-particle optical conductivity, based on the Kubo formula in thin BP films. The detailed modeling process of the BP optical conductivity under bias is presented in Supplementary Note 2. 
In this model, the change of optical oscillator strength due to the bias-induced separation of electron and hole wave functions is taken into account, leading to optical conductivity within single-particle framework. Figure 2b shows the calculated results of the optical absorption edge, the energy at which the increase of the optical conductivity is the sharpest (Supplementary Fig. 5), as a function of displacement field in 9-, 10-, and 11-layer BP films. Tunable mid-IR photoresponse in the 10-layer BP device. Next we measured the tunable photoresponse of the 10-layer BP photodetector under the vertical displacement field at three representative mid-IR wavelengths, 3.4, 5, and 7.7 μm at 77 K. The IR light was chopped and focused on samples to generate an alternating photocurrent (I ph ), which was then collected by a lock-in amplifier referenced to the chopping frequency (details are further shown in Methods) at a bias voltage V ds of 300 mV. At 3.4 μm (photon energy is above the bandgap of unbiased BP), the hBN-sandwiched BP device always shows maximum photocurrents at charge-neutrality points (Fig. 3a) regardless of the back gate bias V bg , which is distinctively different from previous studies 13,19 in which the photo-gating effect dominates the photocurrent due to the presence of significant trap states. With increasing carrier doping, the photocurrent decreases dramatically, primarily due to higher photo-carrier scattering rate and hence reduced carrier lifetime. This is the key signature of the photoconductive effect 14 , in which photo-generated electrons and holes drift in opposite directions and are eventually collected by drain and source leads. At V ds = 300 mV, we achieve an extrinsic responsivity of 136 mA W −1 at the charge-neutrality point at V bg = 0 V. The extrinsic responsivity is defined as R ex = I ph /(P in A device /A laser ), where P in is the incident laser power, A device is the effective sample area, and A laser is the laser spot area. At higher displacement fields, the peak photocurrent decreases slightly. This can be due to two mechanisms. First, under bias the absorption edge tuning leads to re-distribution of the oscillator strength. This can lead to the reduction of oscillator strength at the transition energy corresponding to 3.4 μm light (photon energy of 365 meV). Second, bandgap shrinkage results in the increase in the free intrinsic carrier density, which can reduce the photo-carrier lifetime, leading to the reduced photoresponse. In Fig. 3b, we plot the top gate-dependent photocurrent at different static back gate bias under illumination of the 5 μm mid-IR light (with photon energy of 248 meV) with polarization along the X-direction of BP. We start to observe significant photocurrent when |V bg | > 20 V, corresponding to a displacement field |D| > 0.78 V nm −1 in BP when charge-neutrality condition is achieved. Again, at any given back gate bias beyond the threshold voltage of around 20 V, the photocurrent peaks when charge-neutrality condition is satisfied. Moreover, the peak photocurrent increases significantly with increasing displacement field and reaches~80 nA at a displacement field of 1.14 V nm −1 (V bg = −30 V, V tg = −7.7 V). This yields an extrinsic responsivity of around 8 mA W −1 at V ds = 300 mV. We further examine the photoresponse under the 7.7 μm light excitation in Fig. 3c. The observation of photocurrent clearly indicates efficient light-BP interactions at 7.7 μm (with photon energy of 160 meV) when |D| > 1.09 V nm −1 . 
We further plot the peak photocurrent at the charge-neutrality condition as a function of the displacement field in BP in Fig. 3d. We obtain the threshold displacement fields (−0.73 and −1.07 V nm −1 at hole side, and 0.73 and 1.11 V nm −1 at electron side) at which the optical absorption edge (defined as the energy at which the optical conductivity increase is the largest) of the 10-layer BP shrinks to 250 and 160 meV, respectively. We plot these data in Fig. 2b and find excellent agreement between experimental results and theoretical predictions for the 10-layer BP. As discussed in Supplementary Note 2, at zero bias we assume that the absorption edge in BP thin film is its bandgap (330 meV). Here we want to mention that the theoretical results on the tuning of optical absorption edge are in fact similar to the values for bandgap edge tuning 23 . However, the theoretical results on optical conductivity shown in Supplementary Fig. 5 provide additional information on the strength of the light-matter interaction in BP under bias. As discussed below and in Supplementary Note 2, although in biased BP smaller photoresponse for 5.5 and 7 μm light is observed, if compared with 3.4 μm light, whose photon energy is above the unbiased BP bandgap of 330 meV, theoretical calculations indicate that the optical conductivity at low energy in biased BP can still be significant. Figure 4a plots the extrinsic responsivity as a function of source-drain bias at 3.4, 5, and 7.7 μm wavelengths, respectively. The device is operated at the charge-neutrality point at back gate bias of 0, 30, and 35 V for the three wavelengths, respectively. The responsivity increases linearly with V ds when V ds < 1 V, and only shows very slight saturation at larger V ds . This can be attributed to a higher carrier drift velocity at larger V ds , which can be expressed as μV ds /L. At a source-drain bias of 1.2 V, the device exhibits an extrinsic responsivity of 518, 30, and 2.2 mA W −1 at 3.4, 5, and 7.7 μm, respectively. If taking into account the optical absorption of Pt top gate film (40%) and BP (~3% for 5 nm-thick thin film) 35 , the intrinsic responsivity (R in ) are estimated to be 28.7, 1.6, and 0.122 A W −1 at 3.4, 5, and 7.7 μm, respectively. As shown in the Supplementary Fig. 5, the optical conductivity of biased BP at low photon energy is in general smaller than that at high photon energy (above 330 meV). The light absorption of 5 and 7.7 μm photons in 5 nm-thick BP under bias is below 3%. As a result, the estimated intrinsic responsivity at 5 and 7.7 μm are lower bound values due to the smaller absorption coefficients at longer wavelength. At 3.4 μm, the product of internal quantum efficiency IQE and photoelectric current gain G, defined as IQE ⋅ G = R in E ph /e, reaches 10.5 at V ds = 1.2 V, where E ph is the photon energy and e is the elementary charge. This high IQE ⋅ G is attributed to a short carrier transit time τ tr-h = L 2 /μ h V ds~1 3 ps for holes and τ tr-e~9 7 ps for electrons in our hBN-sandwiched BP devices. Assuming a unity electron-hole pair generation possibility, we can estimate the photo-carrier lifetime 36 τ life = τ tr-h τ tr-e /(τ tr-h + τ tr-e ) ⋅ IQE ⋅ G to be around 120 ps at the charge-neutrality point. This value agrees very well with previously reported value of~100 ps using time-resolved reflection measurements 37 . Since the photo-carrier lifetime is well below nanosecond, the optoelectronic devices based on tunable BP can potentially operate at a speed beyond 1 GHz. 
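As a back-of-the-envelope check of these figures, using the parameters quoted earlier (L = 1.6 μm, μ_h = 1600 cm² V⁻¹ s⁻¹, μ_e = 220 cm² V⁻¹ s⁻¹, V_ds = 1.2 V and IQE·G = 10.5):

\tau_{\text{tr-}h} = \frac{L^2}{\mu_h V_{ds}} = \frac{(1.6\ \mu\text{m})^2}{(1600\ \text{cm}^2\,\text{V}^{-1}\,\text{s}^{-1})(1.2\ \text{V})} \approx 13\ \text{ps}, \qquad
\tau_{\text{tr-}e} = \frac{L^2}{\mu_e V_{ds}} \approx 97\ \text{ps},

\tau_{\text{life}} = \frac{\tau_{\text{tr-}h}\, \tau_{\text{tr-}e}}{\tau_{\text{tr-}h} + \tau_{\text{tr-}e}} \cdot \text{IQE} \cdot G \approx \frac{13 \times 97}{13 + 97} \times 10.5\ \text{ps} \approx 120\ \text{ps},

consistent with the photo-carrier lifetime of about 120 ps quoted above.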
The estimated 3-dB bandwidth f 3dB of operation of our current device at the charge-neutrality point is around 1.3 GHz (f 3dB = 1/(2πτ life )) 38 , provided the RC time constant of the device is much shorter than the carrier lifetime. Although tuning the device away from the charge-neutrality point leads to a reduced photo-carrier lifetime and hence lower quantum efficiency, as shown in Fig. 3a-c, the reduced photo-carrier lifetime can result in an increased operational bandwidth, which is desirable for high-speed applications. Another important figure of merit for detectors is the noise equivalent power (NEP) 36,39 . It denotes the minimum incident light power required to achieve a power signal-to-noise ratio of 1 (I ph 2 /⟨i n 2 ⟩ = 1) in a 1 Hz bandwidth 36 . As the operating frequency in our tunable BP device is well beyond 10 kHz, as shown in Supplementary Fig. 6, we neglect the contribution from 1/f flicker noise, which is only significant at low operating frequencies (<10 kHz) 19,40 . By taking into account these three noise mechanisms, the NEP at the three mid-IR wavelengths is extracted and plotted in Fig. 4b. Here the device is operated at the charge-neutrality point at back gate biases of 0, 30, and 35 V for the 3.4, 5, and 7.7 μm wavelengths, respectively, where the NEP is optimal. The detailed calculation of the NEP is shown in Supplementary Note 3. The device shows an NEP of 0.03, 35, and 672 pW Hz−1/2 at the 3.4, 5, and 7.7 μm wavelengths, respectively, at V ds = 1.2 V. The dark currents are 8.6 × 10−4, 3.42, and 6.75 μA for the three wavelengths, respectively, as shown in Fig. 4b. The NEP of our tunable BP photodetector at 3.4 μm is better than that obtained in b-AsP FETs (~2 pW Hz−1/2) and b-AsP/MoS 2 heterojunctions (~0.2 pW Hz−1/2) 22 . For longer wavelengths, the NEP increases to 35 pW Hz−1/2 at 5 μm and 672 pW Hz−1/2 at 7.7 μm, due to the smaller responsivity and higher dark current. These values are larger than those reported for b-AsP-based devices 22 . However, our tunable BP photodetectors offer a potential operating frequency in the gigahertz range, significantly higher than that (~kHz) of b-AsP devices 22 , as well as unique spectral tunability. On the other hand, the b-AsP devices have the advantage of room-temperature operation. Compared with state-of-the-art wide-band mid-IR detectors, such as mercury cadmium telluride (HgCdTe)-based detectors with NEP below 10−6 pW Hz−1/2 for ~10 μm IR light operating at 260 K 41 , the NEP of our tunable BP device is much larger. However, our demonstration is based on an ultrathin BP film (~5 nm) in which the light absorption is around 3% for 3.4 μm light and even smaller for 5 and 7.7 μm light. Further integration with optical structures (such as cavities and waveguides) is expected to significantly improve its performance. Moreover, our photodetectors have a widely tunable spectral response and can be integrated with electronics readily due to the layered and nontoxic nature of BP. In short, although the device performance reported here is still not as good as that of state-of-the-art HgCdTe photodetectors, there is still plenty of room for improvement. More importantly, our devices have many unique features, such as wide tunability and the potential for monolithic integration, which are not available in traditional HgCdTe photodetectors. 
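As a rough consistency check of the NEP values quoted above, one can assume the noise is dominated by shot noise on the dark current; the paper's full analysis accounts for additional noise mechanisms, so this Python sketch is only an order-of-magnitude estimate:

import math

e = 1.602e-19  # elementary charge (C)
# wavelength: (dark current in A, extrinsic responsivity in A/W at V_ds = 1.2 V), as quoted above
cases = {
    "3.4 um": (8.6e-10, 0.518),
    "5 um":   (3.42e-6, 0.030),
    "7.7 um": (6.75e-6, 0.0022),
}
for wl, (i_dark, resp) in cases.items():
    i_shot = math.sqrt(2 * e * i_dark)   # shot-noise current spectral density (A Hz^-1/2)
    nep = i_shot / resp                  # W Hz^-1/2
    print(wl, round(nep * 1e12, 3), "pW Hz^-1/2")

This shot-noise-only estimate returns roughly 0.03, 35, and 670 pW Hz−1/2, close to the reported values.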
The photoresponse of BP thin films also shows strong angle-dependence at all three representative mid-IR wavelengths. Figure 4c shows the photoresponse at 5 μm with light polarization at different angles as an example. The responsivity reaches its maximum value when the light polarization is along the X-direction (armchair), and its minimum value along the Y-direction. This is attributed to the small optical absorption coefficient along the Y-direction of BP, originating from its unique puckered lattice structure 42 . Discussion The ability to tune the photoresponse of BP in a wide mid-IR spectral range implies that BP could be an alternative material for mid-IR photonics. The demonstration of tuning here is also distinctly different from previous results on BP optical property tuning with a single gate, in which both the Franz-Keldysh and Pauli-blocked Burstein-Moss effects 43-45 play a role. The Burstein-Moss effect 43,44 in highly doped BP leads to a blueshift of the optical absorption at high charge-carrier densities due to the blocking of low-energy optical transitions by band filling. The Franz-Keldysh effect [43][44][45] redshifts the BP absorption edge when the doping of BP is below ~3 × 10 12 cm−2, while the Burstein-Moss effect dominates and a blueshift of the absorption is expected at higher doping concentrations 43,44 . Here the dual-gate configuration provides an additional gate variable and allows us to reach the charge-neutrality position of BP at different biasing fields, leading to a significant redshift of the light absorption edge, which is highly desirable for mid-IR applications. Moreover, in our 10-layer BP, the exciton binding energy is negligibly small and excitonic effects can be ignored 3 . In summary, a widely tunable mid-IR photodetector based on the hBN/BP/hBN heterostructure has been demonstrated for optoelectronic applications beyond the cutoff wavelength of pristine BP. A strong photoresponse generated by the intrinsic photoconductive effect is observed at wavelengths up to 7.7 μm under a moderate displacement field. The high intrinsic mobility and strong photoresponse in a broad mid-IR wavelength range, together with its layered nature, make BP a promising material for high-speed mid-IR photodetectors, modulators, and waveguide-integrated on-chip spectrometers. Methods Fabrication of hBN-sandwiched BP mid-IR photodetectors. BP crystals were purchased from HQ Graphene with purity >99.995%, synthesized by the Sn/SnI 4 approach 46,47 . BP and hBN thin flakes were first mechanically exfoliated from bulk crystals onto silicon substrates covered with a 90 nm-thick SiO 2 layer in an argon-filled glovebox with oxygen and moisture concentrations lower than 0.1 ppm. The hBN/BP/hBN heterostructures were assembled using the polymer-free dry transfer method described in ref. 27 . The heterostructure stack was annealed at 633 K in a hydrogen/argon (flow ratio 3:97) environment at atmospheric pressure (1 atm) for 6 h to further improve its quality. Before the fabrication of electrodes, a polarization-resolved Horiba HR LabRaman 300 system (532 nm laser as the excitation) was used to characterize the crystal direction of BP. Then a 200 nm PMMA layer was spun onto the samples and a Vistec 100 kV electron-beam lithography system was used to define the shape of multiple interdigitated electrodes. The exposed top hBN layer was etched using an Oxford Plasmalab 100 Reactive Ion Etching System in a CHF 3 /O 2 (40/4 standard cubic centimeters per minute) environment. Chromium/gold (3/27 nm) films were then evaporated to form contacts. Another BN flake was transferred to cover the whole sample area. 
Finally, a 7 nm-thick Pt top gate was evaporated after electron-beam lithography to fully cover the BP channel. Transmission electron microscopy characterizations. Following thermal evaporation of 6-10 nm of amorphous carbon onto the sample for improved conductivity, a thin device cross-section lamella was prepared using a focused Ga-ion beam in a FEI Helios DualBeam machine. HR-TEM imaging was performed using an FEI Tecnai F-30 instrument. The lamella was exposed to air for <5 min during transfer to the HR-TEM. Images were acquired at a working voltage of 300 kV. Inter-layer distances of the different strata in the hBN/BP/hBN stack were obtained from fast Fourier transforms of selected areas in the device cross-section. Elemental mapping of the device cross-section was performed using electron energy loss spectroscopy in combination with a Gatan imaging filter, yielding an energy-filtered transmission electron microscopy image. Transport and mid-IR photocurrent measurements. The dual-gate transport measurements were performed using an Agilent B1500A semiconductor parameter analyzer in a low-temperature stage (model HFS600E-PB4 from Linkam Scientific Instruments) mounted on a Bruker Fourier Transform Infrared spectrometer (FTIR). Chopped light from a helium-neon laser (3.4 μm) and quantum cascade lasers (5 and 7.7 μm) was coupled into the FTIR and then focused on the samples using the Hyperion 2000 microscope. The generated alternating photocurrent was then collected by a lock-in amplifier (Model SR830) referenced to the chopping frequency of 1.3 kHz. The diameters of the laser spots for the 3.4, 5, and 7.7 μm lasers are ~14, 16, and 16 μm, respectively. Data availability. The data that support the findings of this study are available from the corresponding author upon reasonable request. Figure 4: Photo-responsivity and polarization sensitivity in the 10-layer mid-IR BP photodetector. a Photo-responsivity as a function of source-drain bias at the charge-neutrality points. For the 3.4, 5, and 7.7 μm incident lasers, the back gate bias is 0, 30, and 35 V, respectively, and the incident laser powers are 40, 50, and 100 μW, respectively. b NEP and dark currents at the charge-neutrality points at the three representative mid-IR wavelengths. For the photocurrent measurements at 3.4, 5, and 7.7 μm, the back gate bias is 0, 30, and 35 V, respectively. c Scatter points show the photocurrent at different laser polarizations for the 5 μm incident laser. The incident power is 50 μW. Here an incident angle of 0° corresponds to the laser polarization aligned with the X-direction (armchair) of BP. Solid lines are fitting curves from the equation I = (I max − I min ) cos 2 θ + I min , where θ is the polarization angle referenced to the X-direction of BP, and I max and I min are the photocurrent intensities along the X- and Y-directions, respectively.
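As a small illustration of the cos²θ fit quoted in the caption above (with synthetic angles and made-up current values, not the measured data), such a fit could be done in Python with scipy:

import numpy as np
from scipy.optimize import curve_fit

def malus(theta_deg, i_max, i_min):
    # I = (I_max - I_min) * cos^2(theta) + I_min, theta measured from the X-direction
    theta = np.deg2rad(theta_deg)
    return (i_max - i_min) * np.cos(theta) ** 2 + i_min

theta = np.arange(0, 181, 15)                         # polarization angles in degrees (synthetic)
rng = np.random.default_rng(0)
i_meas = malus(theta, 80e-9, 15e-9) + rng.normal(0, 2e-9, theta.size)  # fake photocurrents (A)

popt, _ = curve_fit(malus, theta, i_meas, p0=[70e-9, 10e-9])
print("fitted I_max, I_min =", popt)                  # recovered photocurrents along X and Y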
6,150.2
2017-11-22T00:00:00.000
[ "Physics" ]
Synthesis of β-Maltooligosaccharides of Glycitein and Daidzein and their Anti-Oxidant and Anti-Allergic Activities The production of β-maltooligosaccharides of glycitein and daidzein using Lactobacillus delbrueckii and cyclodextrin glucanotransferase (CGTase) as biocatalysts was investigated. The cells of L. delbrueckii glucosylated glycitein and daidzein to give their corresponding 4'- and 7-O-β-glucosides. The β-glucosides of glycitein and daidzein were converted into the corresponding β-maltooligosides by CGTase. The 7-O-β-glucosides of glycitein and daidzein and the 7-O-β-maltoside of glycitein showed inhibitory effects on IgE antibody production. On the other hand, the β-glucosides of glycitein and daidzein exerted 2,2-diphenyl-1-picrylhydrazyl (DPPH) free-radical scavenging activity and superoxide-radical scavenging activity. Introduction Glycitein and daidzein are important bioactive isoflavones isolated from soybeans whose pharmacological properties, such as anticancer, anti-inflammatory, neuroprotective, and anticarcinogenic effects, as well as protective effects against bone loss, hormone-dependent and -independent cancers, cardiovascular diseases, and autoimmune diseases, have been widely studied [1][2][3][4][5][6][7][8][9][10][11]. Despite these pharmacological activities, their use as medicines and functional food ingredients is limited, because they are scarcely soluble in aqueous solution and poorly absorbed after oral administration. Glycosylation is an important method for the conversion of water-insoluble and unstable organic compounds into the corresponding water-soluble and chemically stable derivatives. Mizukami et al. reported that glucosyl conjugation was far more effective than cyclodextrin complexation at enhancing the water solubility of hydrophobic compounds such as curcumin [12]. Recently, the absorption efficiency of a lipophilic flavonoid, quercetin, has been reported to be much improved when it is converted into its glycoconjugates [13,14]. Glycosylation of glycitein and daidzein is therefore of importance from the viewpoint of the pharmacological development of soy isoflavones. We report here the synthesis of β-maltooligosaccharides of glycitein and daidzein by sequential glycosylation with Lactobacillus delbrueckii and cyclodextrin glucanotransferase (CGTase). We also report their inhibitory activity on IgE antibody formation, 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity, and superoxide-radical scavenging activity. Anti-allergic activity of β-glycosides of glycitein and daidzein The effects of glycitein β-glycosides 2-7 on IgE antibody formation were investigated by an in vivo bioassay using 7S-globulin from soybean as an antigen [17]. The average rat plasma IgE level after treatment with 7S-globulin, with or without the test compounds, was examined; the results are shown in Table 1. It has been reported that tocopheryl β-glycosides show inhibitory effects on IgE antibody formation [18,19]. Recently, we reported that the 7-O-β-glycosides of genistein and quercetin showed anti-allergic activities, whereas β-glycosides whose sugar is attached at other phenolic hydroxyl groups exhibited no anti-allergic actions [20]. These findings suggested that β-glucosylation and β-maltosylation at the 7-position of glycitein and/or daidzein did not attenuate the anti-allergic activity, and that a free phenolic hydroxyl group at the 4'-position might be necessary for glycosides of glycitein and daidzein to act as anti-allergic species. 
Studies on the anti-allergic mechanism(s) of the β-glycosides of glycitein and daidzein synthesized here are now in progress. Anti-oxidant activity of β-glycosides of glycitein and daidzein The antioxidative activities of glycitein β-glycosides 2-7 and daidzein β-glycosides 9-14 were determined by an in vitro bioassay of their DPPH radical scavenging activity. The antioxidant activities were expressed as IC 50 values and are summarized in Table 2. The β-glucosides of glycitein and daidzein (2, 3, 9, and 10) showed DPPH free-radical scavenging activity, whereas the β-maltosides and β-maltotriosides of glycitein and daidzein (4-7 and 11-14) had no antioxidant activity. The results obtained here suggested that the monoglucosides of glycitein and daidzein might be useful free-radical scavenging antioxidants with high aqueous solubility. The superoxide-radical scavenging activities of glycitein β-glycosides 2-7 and daidzein β-glycosides 9-14 were also expressed as IC 50 values and are summarized in Table 2. Of these, the β-glucosides of glycitein and daidzein (e.g., compound 9) showed superoxide-radical scavenging activity. The results obtained here suggested that the monoglucosides of glycitein and daidzein could be potential superoxide-radical scavenging antioxidants. Bacterial strain and culture conditions The culture medium used for growth of L. delbrueckii subsp. bulgaricus (Okayama University of Science) had the following composition (in grams per liter): 20 g of lactose, 5 g of yeast nitrogen base, 20 g of bacto casitone, 1 g of sorbitan monooleate, 2 g of ammonium citrate, 5 g of sodium acetate, 2 g of K 2 HPO 4 , 0.05 g of MnSO 4 , and 0.1 g of MgSO 4 . The cells were grown in the culture medium with continuous shaking on a rotary shaker (120 rpm) at 30 °C. Production of β-glucosides of glycitein and daidzein by L. delbrueckii Cultures of L. delbrueckii were grown in 500 mL conical flasks containing 200 mL of culture medium at 30 °C. Prior to use in the experiments, the cells were harvested by centrifugation at 8,000 g for 15 min. The β-glucosides of glycitein and daidzein were prepared as follows: substrate (0.2 mmol/flask, 2 mmol total) was added to ten 300 mL conical flasks containing L. delbrueckii cells (5 g) and glucose (1 g) in freshly prepared culture medium (100 mL). The mixture was incubated with continuous shaking on a rotary shaker (120 rpm) for 5 days at 30 °C. The reaction mixture was centrifuged at 8,000 g for 15 min to remove the cells, and the supernatant was extracted with n-butanol. The n-butanol fraction was purified by preparative HPLC on a YMC-Pack R&D ODS column to give the β-glucoside products. Production of β-maltooligosides of glycitein and daidzein by CGTase To a solution containing the β-glucosides of glycitein or daidzein (0.1 mmol) and starch (5 g) in sodium phosphate buffer (25 mM, pH 7.0) was added CGTase (100 U). The reaction mixture was stirred at 40 °C for 24 h, and then the mixture was centrifuged at 3,000 g for 10 min. The supernatant was loaded onto a Sephadex G-25 column equilibrated with water to remove the CGTase. The fractions containing glycosides were purified by preparative HPLC on a YMC-Pack R&D ODS column to give the β-maltooligoside products. Spectral data of the new compounds are as follows: ... used as a positive control. After 20 min at 25 °C, the absorbance was measured at 517 nm. The percentage reduction of the initial DPPH absorption, i.e., the free-radical scavenging activity, was calculated as follows: E = [(A c − A t )/A c ] × 100, where A t and A c are the respective absorbances at 517 nm of sample solutions with and without the test compounds. 
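As a small illustration of this calculation (with made-up absorbance readings and concentrations, not the paper's data), the scavenging percentage E and a crude IC 50 estimate by linear interpolation can be computed in Python as:

import numpy as np

A_c = 0.82                                   # control absorbance at 517 nm (assumed value)
conc = np.array([5.0, 10.0, 20.0, 40.0])     # test concentrations in uM (assumed values)
A_t = np.array([0.71, 0.60, 0.38, 0.18])     # sample absorbances at 517 nm (assumed values)

E = (A_c - A_t) / A_c * 100                  # % scavenging at each concentration
ic50 = np.interp(50.0, E, conc)              # concentration giving 50% scavenging
print(E.round(1), ic50)                      # here roughly 18.6 uM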
Antioxidant activity was expressed as the 50% inhibitory concentration (IC 50 ). Superoxide-radical scavenging activity Superoxide was generated by the xanthine-xanthine oxidase system [21]. The reaction mixture contained 4 mM xanthine (50 μL), various concentrations of sample in ethanol (50 μL), 2 mM nitro blue tetrazolium (NBT, 50 μL), 0.3 nkat/mL xanthine oxidase (50 μL), and 0.1 M phosphate buffer (pH 7.4) in a total volume of 2 mL. Vitamin C was used as a positive control. The reaction mixture was incubated at 25 °C for 10 min and the absorbance was read at 560 nm. Percent inhibition was calculated by comparison with a control containing the same amount of alcohol but no test compound. The IC 50 value is given as the sample concentration at which 50% of the superoxide radical was scavenged. Conclusions The β-maltooligosaccharides of glycitein and daidzein were successfully produced through two-step biocatalytic glycosylation by L. delbrueckii and CGTase. The 7-O-β-glucosides of glycitein and daidzein and the 7-O-β-maltoside of glycitein inhibited IgE antibody formation. On the other hand, the β-glucosides of glycitein and daidzein exerted DPPH free-radical scavenging activity and superoxide-radical scavenging activity.
1,666
2010-07-29T00:00:00.000
[ "Chemistry", "Medicine" ]
Cuckoo male bumblebees perform slower and longer flower visits than free-living male and worker bumblebees Cuckoo bumblebees are a monophyletic group within the genus Bombus and social parasites of free-living bumblebees, upon which they rely to rear their offspring. Cuckoo bumblebees lack the worker caste and visit flowers primarily for their own sustenance and do not collect pollen. Although different flower-visiting behaviours can be expected between cuckoo and free-living bumblebees due to different biological constraints, no study has yet quantified such differences. Here, we provide the first empirical evidence of different flower-visiting behaviours between cuckoo and free-living bumblebees. We recorded the flower-visiting behaviour of 350 individual bumblebees over two years in a wild population of the entomophilous plant Gentiana lutea, of which they are among the main pollinators. In cuckoo bumblebees (28.9% of the total), we only found males, while we found both workers and males in free-living bumblebees. Cuckoo bumblebees visited significantly more flowers for longer time periods than both free-living bumblebee workers and males within whorls, while differences at the whole-plant level were less marked. Free-living bumblebee males visited more flowers and performed slightly longer flower visits than workers. Behavioural differences between cuckoo male bumblebees and free-living bumblebee workers are likely related to different foraging needs, while differences between cuckoo and free-living bumblebee males may be caused by differences in colony development and a delayed mating period of free-living bumblebees. The longer visits made by cuckoo male bumblebees will likely negatively affect plant reproductive success through increased within-plant pollen flow. Introduction Bumblebees are primitively eusocial bees with an annual life cycle usually divided into three distinct phases. First, a solitary phase involving hibernation of mated gynes followed by nest foundation (from late summer to early spring), then a multiplicative phase during which the founding queen produces workers (from mid-spring to mid-summer), and finally a reproductive phase during which males and gynes are produced (mid-late summer) (Alford 1975;Lhomme and Hines 2019). As in other groups of social insects, social parasitism has evolved in several species of bumblebees, called cuckoo bumblebees (Lhomme and Hines 2019). There are currently 27 species of cuckoo bumblebees (subgenus Psithyrus) recognized worldwide (Lhomme et al. 2021). These species completely lack the worker caste (i.e., all the female individuals are fertile females) and do not have specialized structures on the hind legs (i.e., corbiculae) to collect pollen to feed their larvae. Consequently, females of cuckoo bumblebees usurp the nests of host social bumblebees (hereafter referred to as free-living bumblebees) almost always killing the host queen to force the host workers to rear their offspring (Fisher 1987). Bumblebees are important pollinators of both wild flowering plants and crops (e.g., Thomson and Goodell 2001;Pellissier et al. 2012;Fründ et al. 2013;Andrikopoulos and Cane 2018), and a few species are exploited commercially in agriculture (Velthuis and van Doorn 2006;Goulson et al. 2008). As most pollinators, free-living bumblebees show intraspecific differences in foraging behaviour between sexes mainly in relation to offspring provisioning (Smith et al. 2019). 
Workers visit flowers to collect pollen to feed the larvae and nectar for energy intake, while males visit flowers to seek nectar to fuel their flight and to look for mates (Goulson 2010). Consequently, foraging females have generally high levels of flower constancy (Russell et al. 2017), which increase pollination efficiency by transferring conspecific pollen between flowers of a same species (Goulson 2010). However, male bumblebees can also inadvertently carry large amounts of pollen that can potentially contribute to pollination (Jennersten et al. 1991;Ostevik et al. 2010;Wolf and Moritz 2014). Although flower visiting behaviour of free-living bumblebees has been widely studied, behaviour of cuckoo bumblebees remains almost unstudied (but see Bateman and Rudall 2014), and to our knowledge there is still no assessment of similarities or differences between parasitic and non-parasitic bumblebees. In this article, we investigate the flower-visiting behaviour of cuckoo and free-living bumblebees in a wild population of the entomophilous Gentiana lutea subsp. symphyandra, where both bumblebee types were abundant and among its most important pollinators. Specifically, we aim to answer two main questions: (1) do cuckoo and free-living bumblebees differ in the time spent visiting flowers and in the number of flowers visited, and (2) do cuckoo bumblebees display different foraging behaviours from those of males and workers of freeliving bumblebees? An evaluation of the different flower-visiting behaviours between parasitic bumblebees and their hosts would increase our knowledge of their biology and allow us to estimate differences in pollination efficiency with implications for plant fitness. Study site and species This study was carried out in a natural population of Gentiana lutea L. subsp. symphyandra (Murb.) Hayek situated on the eastern face of Mount Grande (Northern Apennines; Bologna, Italy), within the Habitat Directive site "IT4050002 SIC-ZPS Corno alle Scale" (1380-1460 m a.s.l.; 44° 8′ 57″ N, 10° 52′ 10″ E). The study population is located in a steep clearing surrounded by a beech (Fagus sylvatica L.) forest. Gentiana lutea subsp. symphyandra is a perennial herb growing on mountains of south-eastern Europe that flowers between June and July (Rossi et al. 2015). The tall fertile stems bear several yellow flowers with wideopen corollas of about 4 cm in diameter (personal observation MG), grouped in pseudo-whorls (hereafter: whorls) that flower sequentially from the bottom to the top. Flowers of G. lutea are functionally and ecologically generalized (sensu Ollerton et al. 2007), being efficiently visited by several insect species belonging to four orders (Hymenoptera, Diptera, Coleoptera, Lepidoptera) with different physiological and energetic requirements (Rossi et al. 2014). Among these, bumblebees have the highest pollinator performance, fidelity and visitation frequency in the studied population (Rossi et al. 2014). Although G. lutea is a self-compatible species, the partial flower dichogamy reduces within-flower selfing, while spontaneous or pollinator-mediated selfing within plant (i.e. geitonogamy) is more likely to occur (Rossi et al. 2014). Seeds developed from selfpollination have lower viability and germination than cross-pollinated seeds (Rossi et al. 2016). 
Bumblebee observations and sampling Field observations were carried out over a total of six days in two non-consecutive years (three days in both 2013 and 2015), between July 14th and July 23rd each year, during the flowering peak of G. lutea subsp. symphyandra. Data analysis Since the focus of this article is not to describe temporal patterns of change, and because we did not find significant differences between years in a preliminary analysis, we pooled data from 2013 and 2015 in all analyses. Moreover, in a preliminary analysis we did not find significant differences between species in either the time spent on flowers or the number of flowers visited; therefore, we pooled species data within cuckoo and free-living bumblebee types. We used Pearson's product-moment correlation coefficients to test for correlations between the number of flowers visited and the time spent visiting flowers within whorls and plants by bumblebees. To test whether bumblebees showed differences in the time spent visiting flowers we used linear mixed models (LMMs). We used the log-transformed time (measured in seconds) spent foraging on flowers by individual bumblebees as the response variable, and bumblebee type (i.e., males of cuckoo and free-living bumblebees, workers of free-living bumblebees) as a fixed effect. We tested two separate LMMs for the time spent within whorls and the time spent on the whole plant. In addition, we fitted two further LMMs including the number of whorls displayed by plants as an additive factor or in interaction with bumblebee type to test for effects on the time spent on flowers at the whole-plant level. We included the observation interval, nested within day and year, as a random effect to account for temporal changes. In the LMMs on the time spent on the whole plant, the best models in terms of goodness-of-fit were chosen based on the Akaike Information Criterion with correction for small sample sizes (AICc), selecting the model(s) with the lowest AICc value (Burnham and Anderson 2002). If ΔAICc values for some of the next best models were lower than 2, we used model averaging to calculate a weighted average of the parameter estimates (Burnham and Anderson 2002). After selecting the best models we performed pairwise comparisons between bumblebee types by estimating least-squares means with Tukey-adjusted p-values (Lenth 2020). To evaluate differences in the number of flowers visited by bumblebees we fitted generalized linear mixed models (GLMMs) with a Poisson error distribution, the number of visited flowers as the response variable, bumblebee type as a fixed effect, and observation intervals nested within day and year as random effects. We tested two separate GLMMs for the number of flowers visited within whorls and the number of flowers visited on the whole plant, and then performed pairwise comparisons between bumblebee types. Results We observed a total of 350 bumblebees during 7.5 hours of observations over the two years of sampling, corresponding to three species of cuckoo bumblebees and four species of free-living bumblebees (Table 1). Bombus rupestris was the most abundant species among cuckoo bumblebees, while B. terrestris and B. lapidarius were the most abundant species among free-living bumblebees (Table 1). Overall, we only found cuckoo bumblebee males, and workers were more frequent than males in free-living bumblebees (ratio 1.6:1). 
Cuckoo bumblebees visited significantly more flowers per whorl (4.74 ± 0.4) than free-living bumblebee workers (2.67 ± 0.2), but no more than males (4.73 ± 0.6), while free-living bumblebee males visited significantly more flowers than workers ( Figure S1A, Table S4). We did not find significant differences in the number of flowers visited at the whole plant level between bumblebee types ( Figure S1B, Table S5). Discussion In this article we explored differences in the flower-visiting behaviour of cuckoo and free-living bumblebees. We provide the first empirical evidence of slower and significantly longer flower visits performed by cuckoo bumblebees compared to free-living bumblebees. We found that male cuckoo bumblebees visited more flowers and spent more time during each visit than both free-living bumblebee workers and males at the whorl level, but these differences were less marked at the plant level. Free-living bumblebee males visited more flowers than workers but not for significantly longer time periods. Flower-visiting behaviour The different behaviours observed between cuckoo bumblebee males and free-living bumblebee workers can be mainly related to different foraging needs. Bumblebee workers need to collect pollen for colony development, while at the same time keeping a positive balance between energy intake (through nectar) and the energy spent visiting flowers (Hodges and Wolf 1981;Pyke 1984). By contrast, cuckoo bumblebee males, as free-living males, do not collect pollen and visit flowers mainly for nectar. Consequently, the need to optimise energy intake over expenditure during foraging bouts is probably less important for cuckoo bumblebees than for free-living bumblebee workers. Bumblebee workers visited fewer flowers for slightly shorter periods than free-living males. Though differences in visit duration were not statistically significant, mainly because of the high response variability of free-living bumblebee males, the different observed behaviours remain biologically relevant and support previous literature (Ostevik et al. 2010;Wolf and Chittka 2016). Shorter visits made by workers are likely driven by different foraging needs compared to males, as males do not collect pollen to sustain the colony similarly to cuckoo male bumblebees. Moreover, visits made by workers are usually driven by the quality and abundance of floral rewards, as workers tend to prioritize highly rewarding flowers to collect both pollen and nectar to optimize foraging bouts, while male visits are less constrained by reward availability (Smith et al. 2019). The longer visits and the higher number of flowers visited within whorls by cuckoo bumblebees cannot be attributed to different foraging requirements compared to free-living bumblebee males since neither of them provide pollen for the nest and mainly visit flowers for their own sustenance and to look for mates (Goulson Corbet 1991), we frequently observed almost lethargic movements by cuckoo bumblebees during flower visits within whorls of G. lutea. We hypothesize that differences between cuckoo and free-living male bumblebees can be linked to different mating periods. Cuckoo bumblebee females tend to usurp free-living bumblebee nests at the beginning of colony development, when only the few workers of the first brood have been produced (Kreuter et al. 2012;Lhomme et al. 2013). Moreover, cuckoo bumblebees start to lay male and female eggs within 10 days from invasion (Lhomme et al. 
2013), while free-living bumblebees start to lay male eggs between 30 and 40 days after the emergence of the first brood, and the last group of diploid eggs laid before male eggs usually develops into gynes (Bogo et al. 2018). However, the developmental time of cuckoo and free-living bumblebee sexual morphs (males and gynes) is comparable (Küpper and Schwammberger 1995;Bogo et al. 2018). Consequently, a gap between the mating periods of cuckoo and free-living bumblebees can be expected. In our observations we only found cuckoo bumblebee males, suggesting that the peak flowering of G. lutea occurred when the mating period of cuckoo bumblebees was toward the end, therefore reducing the need for males to actively look for partners. By contrast, free-living bumblebees were likely still at the beginning of their mating period, compelling free-living male bumblebees to actively seek out partners and increase their mobility. In addition, cuckoo and free-living bumblebees may greatly differ in mating duration (e.g., 3 min versus more than 26 min, respectively; Lhomme et al. 2013), which can potentially affect male foraging behaviour, for example through distinct energetic requirements. Differences in the metabolism of cuckoo and free-living males may further emphasize behavioural differences. Further studies are needed to explore these possibilities. Implications for pollination Shorter visits made by worker bumblebees tend to favour constant foraging patterns and short-term floral specialization, which in turn can translate into a more efficient pollen dispersal between plants of a same species within a given population (Smith et al. 2019 lutea. Although we could not directly link bumblebee visits to plant fitness, we expect the longer flower visits of cuckoo bumblebees to reduce fruit set and seed set and germination. A previous study on the same population showed that self-pollinated flowers of G. lutea produced seeds with lower weight and germination rates compared to both open-pollinated and cross hand-pollinated flowers (Rossi et al. 2016). The reduced fitness can be partially explained by inbreeding depression or by intrinsic genetic problems (e.g. founder effect or bottleneck) as a consequence of the isolation from other natural populations (Rossi et al. 2016). However, the high abundance of cuckoo bumblebees observed in this study can act in addition to these factors and further reduce plant fitness in the study population. The transfer of pollen within plants facilitated by the longer visits of cuckoo bumblebees can increase geitonogamy and pollen discounting (Brunet 2005), ultimately reducing plant fitness. Among the 350 bumblebees observed in this study, almost a third (28.9%) were cuckoo bumblebee males. Because they are among the main pollinators of G. lutea (Rossi et al. 2014), we expect them to play an important role in the pollination success and ultimately on population persistence. Further analyses aimed directly at evaluating the effects of bumblebee visits on plant fitness would help understand the balance between positive (e.g., pollen flow) and negative (e.g., geitonogamy) contributions of cuckoos compared to free-living bumblebees. Given the widespread presence of cuckoo bumblebees, we expect that their visits can significantly affect the reproductive success of several plant species, and encourage future research to directly address this overlooked topic. Data accessibility Data and R scripts for analyses are available online: https://doi.org/10.5281/zenodo.5161010
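For readers who do not use the R scripts referenced above, the core visit-duration model described in the Data analysis section could be sketched in Python with statsmodels; the file name, column names, and the simplified random-effect structure below are assumptions, not the authors' code:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-visit table: visit time (s), bumblebee type, observation interval
df = pd.read_csv("bumblebee_visits.csv")          # assumed file and columns
df["log_time"] = np.log(df["visit_time_s"])       # log-transform the visit time, as in the text

# Linear mixed model: bumblebee type as a fixed effect, observation interval as a
# random intercept (the study nests intervals within day and year; simplified here).
lmm = smf.mixedlm("log_time ~ bee_type", data=df, groups=df["interval"]).fit()
print(lmm.summary())

# The flower-count analysis would use an analogous Poisson GLMM (e.g., glmer in R).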
3,574.2
2021-12-15T00:00:00.000
[ "Biology" ]
Smart Health Prediction Using Machine Learning The "Smart Health Prediction Using Machine Learning" system, based on predictive modelling, predicts the disease of patients/users on the basis of the symptoms that the user provides as input to the system. The application has three login options: user/patient login, doctor login, and admin login. The system analyses the symptoms given by the user/patient as input and provides the likelihood of the disease as output, based on the prediction made by the algorithm. Smart health predictions are made by implementing the Naïve Bayes classifier. The Naïve Bayes classifier estimates the probability of each disease by considering all of the features learned during the training phase. Accurate interpretation of disease data supports early disease prediction and gives the user a clearer picture of the disease. After a prediction, the user/patient can consult a specialist doctor using a chat consulting window. The system uses machine learning algorithms and database management techniques to extract new patterns from historical data. Prediction accuracy can improve with the use of a machine learning algorithm, and the user/patient gets fast and easy access to the application. Introduction Machine learning is a method of building predictive models from example instances. It is a branch of AI based on the idea that machines can learn from data, recognise patterns, and make decisions with minimal human intervention. A machine learning algorithm uses sample data or previously collected data to optimise its results with high accuracy. A machine learning workflow has two stages: training and testing. The signs and symptom logs of the user/patient are used to predict the illness. Machine learning offers a strong application platform in the medical sector for addressing disease-prediction problems based on user/patient data. We use machine learning to keep track of all signs, symptoms, and diseases. Machine learning helps predictive models analyse data rapidly and produce meaningful results more quickly. With the aid of this technology, the user/patient can make an informed decision to see a doctor about their particular symptoms, resulting in improved patient health services. The Naïve Bayes classifier technique is used to analyse the large amount of data obtained. For each sub-field of disease prediction, we also show how symptom data storage combined with data classification can assist the administrative, clinical, academic, and educational aspects of predicting disease from symptoms. There are a host of data-collection issues that can be discussed in the context of health prediction. [1][2][3][4][5] Project Analysis 2.1. Objective Some resources are already available for smart health prediction. However, these have mostly focused on specific chronic diseases and their risk levels, and such methods are not widely applicable to general disease prediction. Smart health prediction helps in the diagnosis of multiple diseases by analysing patient symptoms with a well-fitted machine learning algorithm. Existing Method The existing framework predicts chronic diseases for a specific area and population, and disease prediction is limited to specific diseases only. In this method, big data and a convolutional neural network algorithm are used to predict disease risk. 
The method uses machine learning algorithms such as K-nearest neighbours and decision trees for structured data. The model achieves an accuracy of 94.8 percent for some diseases. In the previous paper, we streamlined machine learning algorithms to predict chronic disease outbreaks in disease-prone populations effectively. We test updated prediction models using real-world hospital data from certain specific regions. Using structured and unstructured patient/user data, we suggest a new convolutional neural network-based multimodal disease risk prediction algorithm [6][7][8][9][10]. Proposed Method When someone develops a disease, they normally need to see a doctor/physician, which is both time-consuming and expensive. It can also be difficult for the user to reach doctors and hospitals, so the disease may go undetected. If the above procedure could instead be carried out by a software application, it would save time and resources and make the process run more smoothly for the patient. Smart health prediction is a web-based programme that predicts a user's illness based on the symptoms that the user/patient reports. Data sets for the Smart Health Prediction Framework have been compiled from various health-related websites. The user is able to assess the likelihood of a disease on the basis of the symptoms entered in the web application. The aim of this project is to create a web platform that can predict diseases based on a range of symptoms. Users can choose from a range of symptoms and obtain candidate diseases with probabilistic estimates. Table 1. Efficiency comparison (NB - Naïve Bayes, LR - Linear Regression, K* - K-th nearest neighbour, DT - Decision Tree). Based on a machine learning algorithm, we propose a general method of disease prediction. We used the Naïve Bayes algorithm to classify patient data because medical data are growing at an exponential rate, and existing data must be processed in order to predict the exact disease from the symptoms. With a patient record as input, we obtained an accurate general disease risk prediction as output, which helped us understand the degree of disease risk. With this method, disease and risk prediction can be achieved in a short time and at low cost. In terms of accuracy and time, the results of Naïve Bayes and other algorithms are compared, and the accuracy of the Naïve Bayes algorithm is higher than that of the other algorithms, as shown in Figure 1. Algorithm and Architecture 3.1. Naïve Bayes Algorithm The Naïve Bayes algorithm is a simple method for building models that assign class labels to problem instances. Class labels are drawn from a finite set. Naïve Bayes is a family of algorithms based on a common principle rather than a single algorithm. According to this principle, the value of each feature is assumed to be independent of the values of the other features. For example, if a fruit is orange in colour, round, and around 10-15 cm in diameter, we might call it an orange. The Naïve Bayes algorithm considers each of these features independently when deciding whether the fruit is an orange. Fig. 1: Algorithm flow diagram. There are many probability models, but for some of them the Naïve Bayes algorithm performs best in a supervised learning setting. 
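A minimal sketch of symptom-based disease prediction with a Naïve Bayes classifier is shown below; this is not the authors' implementation, and the symptom list, disease labels, and training records are hypothetical placeholders:

import numpy as np
from sklearn.naive_bayes import BernoulliNB

symptoms = ["fever", "cough", "headache", "fatigue"]          # binary symptom features
X_train = np.array([[1, 1, 0, 1],                             # each row is one patient record
                    [1, 0, 1, 1],
                    [0, 1, 0, 0],
                    [0, 0, 1, 1]])
y_train = ["flu", "migraine", "common_cold", "migraine"]      # hypothetical disease labels

model = BernoulliNB().fit(X_train, y_train)                   # training phase

new_patient = np.array([[1, 1, 1, 0]])                        # symptoms reported by the user
probs = model.predict_proba(new_patient)[0]                   # per-disease probabilities
for disease, p in sorted(zip(model.classes_, probs), key=lambda t: -t[1]):
    print(f"{disease}: {p:.2f}")

In a web application, the trained model would simply be queried with the symptom vector entered by the user, and the ranked probabilities returned as the prediction.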
Architecture The goal of this project is to produce a web application platform for predicting diseases on the basis of different symptoms and conditions. The user selects symptoms and obtains the candidate diseases with their probabilities, derived from the collected datasets. Conclusion In the proposed methodology, the required clinical symptom-related information is obtained from historical data by organising the datasets and applying the Naïve Bayes algorithm. Smart health prediction can only be achieved if the system responds in this way. These datasets are compared with incoming queries and an association rule mining report is generated. Because this solution is based on real historical data, it can provide accurate and prompt results that allow patients to obtain a timely diagnosis. Web application features such as remote chat sessions with a doctor are also provided so that patients can speak directly with physicians. As a result, this web system is truly predictive and produces highly accurate and fair results.
1,709.4
2021-03-23T00:00:00.000
[ "Computer Science", "Medicine" ]
Endophytic Fungal Diversity in Medicinal Plants of Western Ghats, India Endophytes constitute an important component of microbial diversity, and in the present investigation, seven plant species with rich ethnobotanical uses representing six families were analyzed for the presence of endophytic fungi from their natural habitats during the monsoon (May/June) and winter (November/December) seasons of 2007. Fungal endophytes were isolated from healthy plant parts such as stem, root, rhizome, and inflorescence employing standard isolation methods. One thousand five hundred and twenty-nine fungal isolates were obtained from 5200 fragments. Stem fragments harbored more endophytes (80.37%) than roots (19.22%). The 31 fungal taxa comprised coelomycetes (65%), hyphomycetes (32%), and ascomycetes (3%). Fusarium, Acremonium, Colletotrichum, Chaetomium, Myrothecium, Phomopsis, and Pestalotiopsis spp. were commonly isolated. Diversity indices differed significantly between the seasons (P < 0.001). Species richness was greater for monsoon isolations than winter. Host specificity was observed for a few fungal endophytes. UPGMA cluster analysis grouped the endophytes into distinct clusters on the basis of genetic distance. This study is the first report on the diversity and host-specificity of endophytic fungal taxa from the semi-evergreen forest type in the Talacauvery subcluster of the Western Ghats. Introduction The microbes residing in the internal parts of plant tissues, called "endophytes", constitute a group of plant symbionts and are a component of microbial diversity. Endophytes offer a plethora of unknown advantages to the host, with immense applications in agriculture and medicine [1,2]. Recently, challenging hypotheses related to endophyte diversity [3], their role in oxidative stress protection [4] and heavy metal tolerance [5], and their place as components of tropical community ecology [6,7] have emerged. A perusal of the literature over the past decades indicates that many ethnomedicinal plant species with a rich botanical history, sampled from unique ecological niches, are known to harbor potential endophytic microbes [8]. There has been a surge of interest among research groups in the isolation of endophytes from tropical plant species [9,10], owing to their high plant diversity. One such region is the Western Ghats, stretching 1,600 km from the river Tapti in the state of Gujarat to the southern tip of Kerala, recognized as one of the 34 hot spots of biodiversity. The Western Ghats harbor a rich flora with enormous species diversity as well as endemic taxa and are therefore recognized as one of the hot spots of the world [11]. The Western Ghats are divided into seven subclusters. A proposal to declare 39 sites in this region a World Natural Heritage Cluster Site by UNESCO is underway (http://www.atree.org/wgunesco whs). 
India has many regions of unique ecological niches harboring a variety of medicinal plants. One such region in peninsular India is Kodagu District, the land of coffee cultivation. Kodagu is situated in the Western Ghats of peninsular India and is known for its majestic mountain ranges, coffee plantations, and teak wood forests. The Talacauvery subcluster (12°17′ to 12°27′ N and 75°26′ to 75°33′ E) of the Western Ghats is situated in Kodagu. The altitude is about 1525 m above mean sea level. The annual precipitation of 3525 mm is largely restricted to May to October, although premonsoon showers are not uncommon during February to April. The average temperature is 23 °C. Kodagu has a reservoir of forest belts and diverse vegetation ranging from tropical wet evergreen forests to scrub jungles. Several tribes residing in the forests still use medicinal plants of ethnopharmacological importance as a source of natural medication for their ailments [12]. Ethnomedicinal plants are often used in the ayurvedic medicinal system in India for the treatment of various diseases. Despite the reports of ethnomedicinal plants of this region, the biodiversity and the endophytic microbes of this region remain unexplored. Therefore, in the present investigation, seven medicinal plants representing six families were subjected to diversity studies on fungal endophytes during two seasons. Plant Materials and Study Site. Plant parts such as stem, root, rhizome, and inflorescence were collected from seven healthy medicinal plant species (Tylophora asthmatica, Rubia cordifolia, Plumbago zeylanica, Phyllanthus amarus, Eryngium foetidum, Centella asiatica, and Zingiber sp.) inhabiting the natural vegetation of the Talacauvery Region of the Western Ghats, located at 12°17′ to 12°27′ N and 75°26′ to 75°33′ E in Kodagu, Karnataka, during the monsoon (May to June) and winter (November-December) seasons of 2007 (Table 1). The natural vegetation is an evergreen/semi-evergreen forest type. The mean temperature was 23 °C and the mean annual precipitation is 3525 mm. Herbarium specimens of the plants were prepared and submitted to the herbarium collections of the DOS in Botany, University of Mysore. Ten individual plants of each species were pooled for isolations [13]. The samples were placed in polyethylene bags, labeled, transported in an ice box to the laboratory, and stored in a refrigerator at 4 °C until isolation. All samples were processed within 24 h of collection. 
Isolation and Identification of Endophytic Fungi. Samples were washed thoroughly in distilled water, blot dried, and first immersed in 70% ethanol (v/v) for one min, followed by a second immersion in sodium hypochlorite (3.5%, v/v) for three minutes. They were rinsed three times in changes of sterile distilled water and dried on sterile blotters under airflow to ensure complete drying. Bits of 1.0 × 0.1 cm size were excised with the help of a sterile blade. A total of 5200 segments from the stems, roots, inflorescences, and rhizomes of the plant species were placed on water agar (2.5%) supplemented with the antibiotic streptomycin sulphate (100 mg/L). Forty segments were plated per plate. The plates were wrapped in cling film and incubated at 22 °C with 12 h light and dark cycles for up to 6 to 8 weeks. The effectiveness of surface sterilization of the tissues was checked by placing aliquots of the sterilants on agar plates and observing for fungal colonies, if any, for two weeks [14]. Periodically the bits were examined for the appearance of fungal colonies, and each colony that emerged from a segment was transferred to antibiotic-free potato dextrose agar medium (PDA, 2%) to aid identification. The morphological identification of the isolates was based on the fungal colony morphology and the characteristics of the reproductive structures and spores [15][16][17]. Sporulation was induced by inoculating cultures onto sterilized banana leaf bits (1 cm 2) impregnated on potato dextrose agar in petri dishes. All fungal mounts were made on microscopic glass slides in lactophenol-cotton blue and sealed with nail polish. Cultures which failed to sporulate were grouped as mycelia sterilia. All the fungal isolates have been catalogued as a DST# series with plant codes and maintained as culture collections of the department by cryopreservation on PDA overlaid with 15% glycerol (v/v) at -20 °C in a deep freezer. Data Analysis. Isolation rate (IR), a measure of the fungal richness of a sample, was calculated as the number of isolates obtained from tissue segments divided by the total number of segments, and expressed as a fraction rather than a percentage [18]. The colonization frequency (CF), expressed as a percentage, was calculated according to Kumaresan and Suryanarayanan [19] as follows: %CF = (number of tissue segments colonized by a fungus / total number of tissue segments plated) × 100. The percentage of dominant endophytes was calculated as the %CF divided by the total number of endophytes × 100 [20]. Differences in the extent of colonization of the samples were analyzed by univariate analysis of variance (one-way ANOVA) with Tukey's honestly significant difference (HSD) as a post hoc test using the statistical software SPSS 16.0. The fungal isolations were considered for the ANOVA and Tukey's HSD analyses. Simpson and Shannon diversity indices were calculated for the endophytic fungi from the different seasons with EstimateS software (version 6, http://viceroy.eeb.uconn.edu/estimates/). Species richness was calculated using the online rarefaction calculator (http://www2.biology.ualberta.ca/jbrzusto/rarefact.php). 
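As a small illustration of the quantities defined above (with made-up counts rather than the study's data, and treating each isolate as one colonized segment for simplicity), the isolation rate, colonization frequency, and the Shannon and Simpson diversity indices can be computed in Python as:

import math

segments_plated = 200
isolates_per_taxon = {"Fusarium": 30, "Acremonium": 22, "Colletotrichum": 10, "Phoma": 8}

total_isolates = sum(isolates_per_taxon.values())
IR = total_isolates / segments_plated                                        # isolation rate (fraction)
CF = {t: n / segments_plated * 100 for t, n in isolates_per_taxon.items()}   # %CF per taxon

p = [n / total_isolates for n in isolates_per_taxon.values()]                # relative abundances
shannon = -sum(pi * math.log(pi) for pi in p)                                # Shannon-Wiener index H'
simpson = 1 / sum(pi ** 2 for pi in p)                                       # reciprocal Simpson index 1/D
print(IR, CF, round(shannon, 2), round(simpson, 2))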
Rarefaction indices were employed to compare the species richness among the plant species during the two seasons. The expected number of species in the isolations was calculated [21]. Unweighted pair group method with arithmetic mean (UPGMA) cluster analysis was applied to all the isolates from the plant species, based on the number of isolates recovered from each plant species, with a dendrogram constructed from Nei's genetic distances [22] using the Tools for Population Genetics Analysis (TFPGA) software [23]. Results A total of 1529 isolates were obtained from 5200 tissue fragments from the seven medicinal plant species. The extent of endophyte colonization varied among plant parts, with stem fragments harboring 80% of the endophytic isolates, followed by roots (19.22%). In other plant parts, colonization was lower. Isolations of endophytes from the various plant parts yielded greater numbers of endophytes during monsoon than winter (Figure 1). The highest isolation rates (IR) of fungal endophytes, 1.41 to 1.58, were recorded for T. asthmatica in both seasons, while low isolation rates were obtained for Zingiber sp. (Figure 2). Thirty-one fungal taxa were identified, consisting of coelomycetes (65%), hyphomycetes (32%), and ascomycetes (3%). The frequency of fungal colonization (%CF) differed among the seven plant species (Table 2). Fusarium sp., Acremonium, Chaetomium, and Phoma were some of the endophytes with high colonization frequency. The dominant fungal genera included Fusarium spp. (10.64%) and Acremonium (9.48%). A few endophytic fungi, such as A. strictum, had wide distributions among the host plants and were isolated from most plants with the exception of Zingiber sp. and Plumbago zeylanica, whereas species of Fusarium, Trichoderma, Curvularia, and Penicillium were isolated from more than three plant species. Host-specificity was observed for a few of the fungal endophytes isolated from two of the seven medicinal plants (Table 2). Colletotrichum dematium, Nigrospora oryzae, Heinesia rubi, Pestalotiopsis guepinii, and an unidentified red pycnidial form were isolated from the stem segments of T. asthmatica only, while in Rubia cordifolia one endophytic Periconia exhibited specificity. P. islandicum and T. viride were isolated from root segments of Phyllanthus amarus. Diversity indices of fungal endophytes varied within plant species as well as between seasons (Table 3). High Shannon-Wiener diversity indices were recorded for T. asthmatica (H′ = 2.60) and P. amarus (H′ = 2.27) during the monsoon and winter seasons, respectively, whereas low indices were recorded for E. foetidum and Zingiber during the monsoon and winter seasons, respectively. 42% of the total 31 taxa were found in the monsoon season, while 55% of them colonized in both seasons. The Simpson index (1/D) was high for T. asthmatica, with a richness of 19 fungal species during the monsoon season, while P. amarus recorded the highest species richness during the winter season. Rarefaction curves calculated for the endophytic fungal isolations indicated maximum species richness for T. asthmatica and P. amarus during the monsoon and winter isolations, respectively (Figures 3 and 4). The number of isolates and the colonization frequency differed significantly between seasons (P < 0.001), as indicated in Table 4. Nei's genetic distance between the endophyte assemblages isolated from the plant species ranged from 0.3185 (between populations of Zingiber and C. asiatica) to 0.9116 (between populations of Zingiber and T. 
asthmatica) (Table 5), a wide range. The lowest value indicates a closer relationship between the endophytic fungal patterns of those two plants. In general, the fungal assemblage of T. asthmatica was the most distant from those of the other plants studied. In order to represent the relationships among plant species, cluster analysis (UPGMA) was used to generate a dendrogram based on Nei's genetic distances between populations (Figure 5). In this dendrogram, all plants form distinct clusters. When the transect line was placed at approximately 0.4 on the distance scale, two distinct groups were formed. The first cluster was formed by CA-ZN, while PZ-EF formed the second cluster. PHY, RC, and TA fall outside these clusters. Discussion Medicinal plants are considered a repository of "endophytic microbes" living in the internal tissues of plants. The quest to identify novel bioactives from endophytic fungi has resulted in the sampling of host plants such as herbs, shrubs, tree species, and vines in unique ecological niches in the rainforests of the world. Such niches harbor great species diversity, largely undisturbed by human activities. Efforts in this direction to sample plants with potential ethnomedicinal value located in rainforests around the world have resulted in the isolation of fungal endophytes that are unique to a particular plant species and have distinct bioactivities. Endophyte Colonization in Medicinal Species. The medicinal plant species were sampled from the Talacauvery subcluster situated in the Kodagu District of the Western Ghats of Southern India. This region is among the 34 hot spots of biodiversity. Recently, a proposal to include this biodiversity spot in the list of UNESCO Heritage cluster sites has been made (http://www.atree.org/wgunesco whs). The natives as well as the ethnic tribes inhabiting this region still depend on plants as a source of medicine to cure some of their ailments [12]. Seven medicinal plant species assigned to six plant families were selected for the study in natural populations in two seasons from a single location, with the study area stretching over 25 kilometers. Sampling was conducted during the monsoon and winter seasons, as two of the herbaceous species, E. foetidum and Zingiber sp., grow only in the second half of the year (June to December), and their nonavailability during summer (March to May) makes it difficult to consider the summer season for endophytic analysis. From 5200 segments of plant material a total of 1529 isolates were obtained; these were grouped into 31 taxa. Mycelia sterilia, the fungal taxa that failed to sporulate, were also reported from this study. This fungal group is prevalent in endophytic studies [24]. The fungal endophytes were analyzed from four plant parts, namely, stem, root, rhizome, and inflorescence; however, their occurrence in root and inflorescence was investigated for a few plant species only, as the phenology and sampling of the plants never correlated with the seasons. The leaves were not considered for isolations since some of the plants were climbers and stragglers with delicate hairy surfaces, and stringent surface sterilization techniques would render them unsuitable for plating on agar medium. Relative percentages of endophytic isolations from stem segments were greater (80.37%) than isolations from roots (19.22%). Our results are supported by the earlier work of Huang et al. 
Among the fungal taxa, coelomycetes were more dominant than hyphomycetes, as has also been found in earlier studies of endophytes of tree species [26]. Endophytes such as Colletotrichum, Phoma, Acremonium, Chaetomium, Botryodiplodia, and Trichoderma were isolated with %CF > 2.0. The less frequently isolated taxa included Pestalotiopsis, Penicillium islandicum, Cladosporium herbarum, Alternaria alternata, F. graminearum, Phomopsis, and Sphaeronema. Colletotrichum spp. are among the most frequently encountered endophytic fungi [18], whereas Pestalotiopsis spp. are well documented as endophytes of many rainforest plants [27,28], of tropical tree species such as Terminalia arjuna [29] and Azadirachta indica [30], and of many herbs and shrubs [25,31,32]. It remains worthwhile to screen further plant species for fungal endophytes, as Hawksworth and Rossman [33] estimate that millions of fungal species are still to be identified. Differences in the colonization frequencies of endophytes between the two seasons were observed; the greater number of isolations during the monsoon season is attributable to the slimy conidia of these fungi being dispersed more effectively by rain splash and to conidial germination being influenced by climatic factors [34].

Host-Specificity of Fungal Endophytes. We observed that some fungal taxa exhibited host-specificity, a phenomenon often associated with endophytes. Three plant species, T. asthmatica, R. cordifolia, and P. amarus, harbored host-specific endophytes. The red pycnidial endophyte (TA-005) was isolated from the stem fragments of T. asthmatica only, suggesting host-specificity of this endophyte. Pestalotiopsis guepinii was likewise isolated from stem segments of T. asthmatica; it has previously been reported as an endophyte of Wollemia nobilis growing in Sydney, Australia [35]. Heinesia rubi, P. islandicum, and TA-005 are new records of fungi as endophytes. Host-specificity of endophytic fungi has been observed earlier for grasses [36], orchids [37], and forest tree species [38,39]. Recently, Sun et al. [40] reiterated the definition of host-specificity as taxa that occur exclusively on a stated host but not on other hosts in the same habitat [41]. Our observations likewise indicate host-specificity of these endophytes, as the plant species were sampled from a single habitat.

Seasonal Diversity of Fungal Endophytes. Diversity indices for fungal endophytes, as measured by the Shannon-Wiener (H′) and Simpson (1/D) indices, indicated differences in seasonal variation and species richness. High indices were noted for T. asthmatica (H′ = 2.60) and P. amarus (H′ = 2.27) during the monsoon and winter seasons, respectively. Fungal species composition did not differ significantly between plant species, whereas it differed between seasons (p < 0.001). Seasonal variation in fungal isolates and colonization frequency has been reported for many host plants [42,43]. In five medicinal species sampled from the Kudremukh Region of the Western Ghats, high colonization frequency as well as species richness of endophytic fungi was confined to leaf segments rather than stem or bark segments [44]; in our study, by contrast, species richness was greatest in the stem fragments among the plant parts considered for analysis.
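The seasonal comparison summarized in Table 4 is a one-way ANOVA; a minimal sketch of such a test is given below using SciPy. The per-host colonization frequencies are hypothetical and stand in only for the structure of the comparison, not for the paper's data.

# A minimal sketch of a one-way ANOVA comparing seasonal colonization frequencies; hypothetical values.
from scipy import stats

monsoon_cf = [18.4, 22.1, 15.7, 25.3, 19.9, 21.0, 17.6]  # one %CF value per host plant
winter_cf  = [10.2, 12.5,  8.9, 14.1, 11.3, 12.0,  9.7]

f_stat, p_value = stats.f_oneway(monsoon_cf, winter_cf)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# With only two groups, this one-way ANOVA is equivalent to a two-sample t-test (F = t^2).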
Most studies of fungal endophytes in the tropics have revealed remarkable patterns of endophyte colonization and estimates of diversity in the foliage of forest tree species at sites such as a Panamanian forest [45] and the Iwokrama Forest Reserve, Guyana [39]. In the Nilgiri Biosphere Reserve, Western Ghats, India, 75 dicotyledonous species in three different tropical forest types were sampled to study foliar endophytes and their diversity [10]; endophyte diversity across the forest types was limited owing to loose host affiliations among the endophytes. Studies of foliar endophytes sampled from herbaceous and shrubby medicinal plant species in the Malnad Region of the Bhadra Wildlife Sanctuary in Southern India have likewise revealed differences in colonization rates as well as seasonal diversity [32,46].

The present study provides firsthand information on the diversity of endophytic fungi, and the seasonal influence on their colonization frequencies, in selected medicinal plants from one of the subclusters of the biodiversity hot spots in the Western Ghats of Southern India. Although the isolation and analysis of endophyte communities in herbs, shrubs, and trees are not uncommon, each such study is unique with respect to the number of hosts, the species of fungal endophytes recovered, and their specificity. The fungal endophytes isolated here have been subjected to fermentation studies, and the extracts are being tested for biological activities.

Conclusion

The study provides firsthand information on the diversity of endophytic fungi, and the seasonal influence on their colonization frequencies, in seven medicinal plants from one of the subclusters of the biodiversity hot spots in the Western Ghats of Southern India. The present investigation is the first isolation of endophytes from these medicinal species and their plant parts. Although endophytes have been isolated from various forest types and locations around the globe, each study is unique in documenting new endophytic taxa. We are currently working on the fermentation of these fungal endophytes to obtain new antioxidants with therapeutic applications.

Figure 1: Relative seasonal isolations of fungal endophytes from plant parts of medicinal species.
Table 1: Details of medicinal plants collected from the natural habitats of the Talacauvery subcluster of the Western Ghats.
Table 2: Colonization frequency (%CF) of endophytic fungi isolated from plant parts of the seven medicinal plant species.
Table 3: Diversity indices (H′) and species richness of the medicinal plants.
Table 4: ANOVA table of the seasonal variation of endophytic fungi analyzed from the seven medicinal plant species.
Table 5: Nei's genetic distances between the plant species analyzed for endophytic fungi.