Spontaneous-emission rates in finite photonic crystals of plane scatterers

* Electronic address: c.m.wubs@tn.utwente.nl; URL: http://tnweb.tn.utwente.nl/cops/

The concept of a plane scatterer that was developed earlier for scalar waves is generalized so that polarization of light is included. Starting from a Lippmann-Schwinger formalism for vector waves, we show that the Green function has to be regularized before T-matrices can be defined in a consistent way. After the regularization, optical modes and Green functions are determined exactly for finite structures built up of an arbitrary number of parallel planes, at arbitrary positions, and where each plane can have different optical properties. The model is applied to the special case of finite crystals consisting of regularly spaced identical planes, where analytical methods can be taken further and only light numerical tasks remain. The formalism is used to calculate position- and orientation-dependent spontaneous-emission rates inside and near the finite photonic crystals. The results show that emission rates and reflection properties can differ strongly for scalar and for vector waves. The finite size of the crystal influences the emission rates. For parallel dipoles close to a plane, emission into guided modes gives rise to a peak in the frequency-dependent emission rate.

I. INTRODUCTION

Photonic crystals are a well-studied subject nowadays, both theoretically and experimentally [1]. Of fundamental importance is the prediction [2] that in three-dimensional photonic crystals that meet a tough combination of requirements, light propagation will be completely inhibited in all directions and a photonic band gap will show up for certain frequencies of light. It is important for technology that photonic crystals can be created that guide light with low losses, and bend light on the scale of an optical wavelength. The latter properties do not require a band gap in all three dimensions. A photonic-band-gap crystal would reflect light for all angles of incidence when the frequency of the light lies within the gap. However, lower-dimensional photonic crystals such as Bragg mirrors can also be omnidirectional mirrors, without having a band gap [3,4,5]. Thus, external light sources can only give an indication that there is a band gap, or a proof that there is no gap. Internal light sources such as excited atoms do a better job of probing a band gap, because only a gap would completely inhibit spontaneous emission by internal sources [2]. For the same reason, a photonic-band-gap crystal would be a whole new playground in quantum optics, both when one is interested in spontaneous emission in itself, and in processes which normally are obscured or made less efficient because of spontaneous emission. Not only emission rates would be strongly modified inside a band-gap crystal, but also resonant dipole-dipole interactions, for example, as they are mediated by the electromagnetic field [6]. The focus of this paper is on spontaneous-emission rates of visible light. For atomic transition frequencies in the band gap of an infinite three-dimensional photonic crystal, emission rates vanish everywhere in the inhomogeneous structure. In practice, such a uniform suppression of emission rates has not yet been observed for visible light: evidence of crystals exhibiting a full photonic band gap in the visible has not been reported to date. Even when such crystals exist in the future, position-dependent emission rates will occur at the edges of the crystals.
In general, spontaneous-emission rates in inhomogeneous dielectrics with a high refractive-index contrast, including photonic crystals, are strongly position and orientation dependent. The spontaneous-emission rates calculated in this paper will prove this point. Finite-size effects will also show up in our calculations. The model studied here is a finite photonic crystal consisting of a finite number of parallel and infinitely thin planes. More about our model will be said later in this Introduction. In many experiments, dipole orientations are hard to control. When averaged over dipole orientations, spontaneous-emission rates are proportional to a quantity called the 'local optical density of states' (LDOS) [7,8]. The concept of a local density of states was borrowed from solid-state physics. The local optical density of states was first named the 'local radiative density of states' [7], which is the same quantity. Interestingly, the calculation of position-dependent spontaneous-emission rates also has a bearing on the interpretation of measurements performed with a near-field scanning optical microscope (NSOM). In these measurements, a sample is illuminated through the tip of the microscope, and scattered light is recorded. In a simple model, the disturbance of the optical field caused by bringing the tip of the microscope to the sample is assumed to be weak, and the tip is modelled as a dipole with a certain strength and orientation. Then, if the light scattered in all directions were recorded, the measured signal would be proportional to the local spontaneous-emission rate at the position of the tip in the absence of the tip [9]. After the above general considerations, we now turn to the topic of how spontaneous-emission rates inside photonic crystals are actually calculated. The existence of a band gap in an infinite photonic crystal can be inferred from a band-structure calculation, which for a three-dimensional photonic crystal is an art in itself (see the recent review [10]). Quite another and more difficult matter is to calculate emission rates inside infinite crystals [11,12]. Emission rates inside or near finite photonic crystals are even harder to calculate. Other interesting quantities would be near-field or far-field spectra of internal sources, or dipole-dipole interactions and superradiance effects of atoms embedded in a finite three-dimensional photonic crystal, to name a few complex processes in a complex environment. In such cases, results of calculations are hard to check and, even if correct, they might not give much insight. It is therefore very useful to study complex processes in simplified models for photonic crystals. Widely used is the so-called quasi-one-dimensional model (or isotropic model) for photonic crystals [13], where it is assumed that the red edges of the stop bands of the crystal occur at the same band-edge frequency for all three-dimensional propagation directions, and similarly for the blue band edge. Such a model describes the processes well inside the band gap in a qualitatively correct way, while overestimating effects of the photonic crystal at the edges of the gap. The isotropic model also neglects all position and orientation dependence of emission rates outside the band gap. Inspired by the model calculations, more realistic numerical calculations have recently appeared that indeed show the weaknesses of the isotropic model [12].
In this paper another simple model is proposed, one which takes into account the strong spatial and orientational dependence of optical properties and the finite size of the crystals. On the other hand, it gives up the existence of a full band gap, as only variations of the refractive index in one dimension are considered. Dielectric slabs are modelled as infinitely thin planes, which will be called plane scatterers. A multiple-scattering formalism is set up in which optical modes and the Green function (a tensor, really) can be calculated exactly for crystals consisting of an arbitrary number of plane scatterers. The present model is a generalization of previous work that treated scalar waves only [5]. The inclusion of polarization of light will turn out not to be straightforward. Infinitely thin planes were used as model systems in photonics before, for example in [14], where light propagation was considered in one dimension only and for an infinite crystal. The model was generalized in [15], where infinite photonic crystals were built of infinitely thin planes and their band structure was determined for waves propagating in three dimensions. In [16], both infinite and finite crystals of planes were considered and their transmission and reflection properties were studied with the use of transfer-matrix methods. The infinite crystal was again considered in [17,18,19,20] and named the 'Dirac-comb superlattice'. Frequency-dependent emission rates were determined for several positions in the unit cell, and both TE- [17,19] and TM-waves [18,20] were considered. The periodicity of infinite crystals allows Bloch's theorem to be used in the analysis. Finite photonic crystals do not have this advantage, and the analysis of their optical properties is usually more difficult. Position-dependent spontaneous-emission rates remain to be explored in model photonic crystals consisting of a finite number of plane scatterers, where light propagation in all three dimensions is taken into account, for both polarization directions. It has been known for a long time that spontaneous-emission rates of atoms change when positioned at distances on the order of the wavelength of light away from a mirror [21,22,23]. A recent surprise was the measurement and analysis that even a distant mirror (25 cm away) can change emission rates when lenses are used and when the atomic positions are controlled with subwavelength precision (a few nanometers) [24,25]. More complicated than emission near a mirror is the calculation of emission rates of atoms in or near one-dimensional photonic crystals. A multi-purpose formalism for calculating optical modes in layered dielectrics [26] was used in [4] to calculate emission rates inside finite periodic layered structures, especially inside structures that reflect light incoming from all directions, the so-called 'omnidirectional mirrors' [3]. Interestingly, light transmission through finite one-dimensional photonic crystals can be found exactly in terms of the transmission through a unit cell, the number N of unit cells, and the Bloch wave vector of the corresponding infinite crystal structure. In [27], this is shown for a simple unit cell containing two layers, but it was also proven for general unit cells [28]. This remarkable result was reviewed in [29], where its importance is stressed not only in optics but also in acoustics, quantum mechanics, and other branches of physics.
In the T-matrix formalism of this paper (which differs from the usual transfer-matrix method for layered dielectrics), we find similar analytical results, also involving the Bloch wave vector. Such attractive analytical results are not available for more complex dielectrics such as finite two- [30,31] or three-dimensional photonic crystals [32], so that in those cases the use of efficient numerical techniques is essential. The advantage of our plane-scatterer model is that modes and Green functions (and therefore emission rates) can be calculated exactly in a Lippmann-Schwinger formalism, for every finite crystal size, and that light propagation in all directions is taken into account. Lippmann-Schwinger formalisms are more commonly used [33], but when the finite volumes of dielectric scatterers are fully taken into account, numerical discretization of the dielectric is required and the model stops being simple [31,34]. To be sure, the simplicity of our model entails that in some respects it becomes less realistic, as will be stressed where appropriate. In Sec. II, multiple scattering of light is introduced and central equations are derived in representation-independent notation. In Sec. III the free-space Green tensor is regularized and a T-matrix of a plane scatterer for light waves is derived. Sec. IV discusses all optical modes (propagating and guided modes, including polarization) that exist in crystals of plane scatterers. Position- and orientation-dependent spontaneous-emission rates are calculated in Sec. V. Conclusions can be found in Sec. VI.

II. MULTIPLE-SCATTERING THEORY FOR VECTOR WAVES

Some important equations of multiple-scattering theory [33] will be presented, mostly in representation-independent notation, for light in arbitrary inhomogeneous dielectrics. In later sections a particular class of dielectrics will be studied and a suitable representation is chosen, but then the involved notation might obscure the general structure of the equations. The wave equation for the electric field $\mathbf{E}_0(\mathbf{r},\omega)$ in free space is
$$-\nabla\times\nabla\times\mathbf{E}_0(\mathbf{r},\omega) + (\omega/c)^2\,\mathbf{E}_0(\mathbf{r},\omega) = 0 , \tag{1a}$$
or equivalently
$$\left\{-\nabla\times\nabla\times + (\omega/c)^2\,\mathbf{I}\right\}\cdot\mathbf{E}_0(\mathbf{r},\omega) = 0 . \tag{1b}$$
The symbol $\mathbf{I}$ denotes the unit tensor in three-dimensional space. The solutions of Eq. (1a) are plane waves with wave vector k and polarization direction normal to k. With the free-space wave equation (1a), a Green tensor (or dyadic Green function) is associated that satisfies
$$\left\{-\nabla\times\nabla\times + (\omega/c)^2\,\mathbf{I}\right\}\cdot\mathbf{G}_0(\mathbf{r},\mathbf{r}',\omega) = \delta^3(\mathbf{r}-\mathbf{r}')\,\mathbf{I} . \tag{2}$$
Let $\mathsf{L}(\mathbf{r},\omega)$ be the quantity between curly brackets in Eqs. (1b) and (2). Both equations can be considered as the real-space representations of an abstract tensor operator $\mathsf{L}(\omega)$ operating on the vector field $\mathbf{E}_0(\omega)$ and on the Green function $\mathsf{G}_0(\omega)$, respectively:
$$\mathsf{L}(\omega)\,\mathbf{E}_0(\omega) = 0 , \qquad \mathsf{L}(\omega)\,\mathsf{G}_0(\omega) = \openone\otimes\mathbf{I} . \tag{3}$$
The identity operator in real space is denoted by $\openone$ and it has the property $\langle\mathbf{r}|\openone|\mathbf{r}'\rangle = \delta^3(\mathbf{r}-\mathbf{r}')$; confusion with the unit tensor $\mathbf{I}$ should not arise; the $\otimes$ denotes the tensor product. In the presence of an inhomogeneous dispersive linear dielectric, the wave equation for the electric field is modified into
$$\left[\mathsf{L}(\omega) - \mathsf{V}(\omega)\right]\mathbf{E}(\omega) = 0 , \tag{4}$$
where the frequency-dependent optical potential $\mathsf{V}$ is defined in terms of the dielectric function $\varepsilon(\mathbf{r},\omega)$ as
$$\langle\mathbf{r}|\mathsf{V}(\omega)|\mathbf{r}'\rangle = -(\omega/c)^2\left[\varepsilon(\mathbf{r},\omega) - 1\right]\delta^3(\mathbf{r}-\mathbf{r}')\,\mathbf{I} . \tag{5}$$
The delta function on the right-hand side defines the potential as a local quantity (which the T-matrix, to be defined shortly, is not). In other words, this delta function appears for any potential. The electric field $\mathbf{E}_0(\omega)$ is modified into $\mathbf{E}(\omega)$, and the two fields are related through the Lippmann-Schwinger (LS) equation
$$\mathbf{E}(\omega) = \mathbf{E}_0(\omega) + \mathsf{G}_0(\omega)\,\mathsf{V}(\omega)\,\mathbf{E}(\omega) . \tag{6a}$$
One can check that the field $\mathbf{E}(\omega)$ that satisfies Eq. (6a) is indeed also a solution of Eq. (4). The solution of Eq.
(6a) can be found iteratively, in higher and higher orders of the optical potential V, as given by the multiple-scattering series in Eq. (6b); the (dyadic) T-matrix in Eq. (6c) by definition is the formal sum of the infinite summation in Eq. (6b):
$$\mathbf{E}(\omega) = \mathbf{E}_0(\omega) + \mathsf{G}_0\mathsf{V}\,\mathbf{E}_0(\omega) + \mathsf{G}_0\mathsf{V}\mathsf{G}_0\mathsf{V}\,\mathbf{E}_0(\omega) + \ldots \tag{6b}$$
$$\mathbf{E}(\omega) = \mathbf{E}_0(\omega) + \mathsf{G}_0(\omega)\,\mathsf{T}(\omega)\,\mathbf{E}_0(\omega) . \tag{6c}$$
The T-matrix is a 3 × 3 tensor. By combining Eqs. (6b) and (6c), the formal solution for the T-matrix is
$$\mathsf{T}(\omega) = \mathsf{V}(\omega)\left[\openone\otimes\mathbf{I} - \mathsf{G}_0(\omega)\,\mathsf{V}(\omega)\right]^{-1} . \tag{7}$$
The scattering problem is solved exactly once the T-matrix is known. There may exist optical modes that are bound to the scatterer. Such bound modes correspond to solutions of the LS equation (6a) in the absence of an incident field; with Eq. (7) we can rewrite this homogeneous equation as
$$\mathbf{E}(\omega) = \mathsf{G}_0(\omega)\,\mathsf{T}(\omega)\,\mathbf{E}(\omega) . \tag{8}$$
It follows that bound solutions of the electric field correspond to the poles of the T-matrix. Actually, Eq. (6c) also shows that a nonzero solution for E(ω) can only occur when T(ω) has a pole. The T-matrix not only solves the scattering problem for incident fields but also contains all information about bound modes. In the presence of the dielectric the Green function also changes, from $\mathsf{G}_0$ to $\mathsf{G}$. The latter satisfies
$$\left[\mathsf{L}(\omega) - \mathsf{V}(\omega)\right]\mathsf{G}(\omega) = \openone\otimes\mathbf{I} . \tag{9}$$
The solution for the Green function analogous to Eq. (6) for the electric field is the three-dimensional Dyson-Schwinger equation
$$\mathsf{G}(\omega) = \mathsf{G}_0(\omega) + \mathsf{G}_0(\omega)\,\mathsf{V}(\omega)\,\mathsf{G}(\omega) \tag{10a}$$
$$\phantom{\mathsf{G}(\omega)} = \mathsf{G}_0(\omega) + \mathsf{G}_0(\omega)\,\mathsf{T}(\omega)\,\mathsf{G}_0(\omega) . \tag{10b}$$
It can be verified that a solution of (10a) is also a solution of Eq. (9). The problem of finding such a solution is solved once the T-matrix (7) is determined, because an iteration of Eq. (10a) analogous to the series expansion (6b) for the electric field shows that the Green function can also be expressed in terms of the T-matrix, as given by Eq. (10b). Equation (10) also holds when the total potential V(ω) is a sum of single-scatterer potentials $\mathsf{V}_\alpha(\omega)$. By iterating, one finds that the total T-matrix for an arbitrary number N of these scatterers is
$$\mathsf{T}^{(N)} = \sum_{\alpha=1}^{N}\mathsf{V}_\alpha + \sum_{\alpha,\beta}\mathsf{V}_\beta\,\mathsf{G}_0\,\mathsf{V}_\alpha + \ldots \tag{11}$$
Often it is more convenient to make an equivalent expansion in terms of the single-scatterer T-matrices [33]:
$$\mathsf{T}^{(N)} = \sum_{\alpha=1}^{N}\mathsf{T}_\alpha + \sum_{\alpha}\sum_{\beta\neq\alpha}\mathsf{T}_\beta\,\mathsf{G}_0\,\mathsf{T}_\alpha + \ldots \tag{12}$$
The frequency dependence was suppressed in Eqs. (11) and (12). The form (12) of the total T-matrix will be used later in this paper, for model systems where the infinite summation can be performed explicitly.

III. PLANE SCATTERERS FOR VECTOR WAVES

The general results of multiple-scattering theory that were presented in Sec. II will now be applied to dielectrics that can be described as a collection of parallel planes. A suitable representation is chosen, and specific forms of the potential V, the free-space Green function $\mathsf{G}_0$, and the incoming electric field $\mathbf{E}_0$ are determined. With this, T-matrices for a single plane and for an arbitrary number of planes are derived.

A. Dyadic Green function in plane representation

A solution for the free-space dyadic Green function can be found in three-dimensional Fourier space. By translational invariance, $\langle\mathbf{k}|\mathsf{G}_0(\omega)|\mathbf{k}'\rangle$ must be equal to $(2\pi)^3\delta^3(\mathbf{k}-\mathbf{k}')\,\mathbf{G}_0(\mathbf{k},\omega)$. The Green function $\mathbf{G}_0(\mathbf{k},\omega)$ satisfies
$$\left[(\omega/c)^2\,\mathbf{I} - k^2\left(\mathbf{I} - \hat{\mathbf{k}}\hat{\mathbf{k}}\right)\right]\cdot\mathbf{G}_0(\mathbf{k},\omega) = \mathbf{I} . \tag{13}$$
Here, $\hat{\mathbf{k}}$ denotes a unit vector in the direction of the wave vector k. Equation (13) is a 3 × 3 matrix equation whose representation diagonalizes in the polarization basis $\{\hat{\mathbf{k}},\hat{\sigma}_1,\hat{\sigma}_2\}$ with the longitudinal direction $\hat{\mathbf{k}}$ and two orthogonal transverse directions $\hat{\sigma}_{1,2}$. The solution of (13) is
$$\mathbf{G}_0(\mathbf{k},\omega) = \sum_{j=1,2}\frac{\hat{\sigma}_j\hat{\sigma}_j}{(\omega/c)^2 - k^2} + \frac{\hat{\mathbf{k}}\hat{\mathbf{k}}}{(\omega/c)^2} , \tag{14}$$
where j denotes $\sigma_1$ or $\sigma_2$. All six non-diagonal elements of the Green tensor are zero in this representation. This is the retarded Green function once we assume that the frequency ω has an infinitesimally small positive imaginary part.
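Before specializing the representation, the central resummation of this section can be made concrete with a minimal numerical sketch (not from the paper; all numbers are illustrative). For a single scatterer where the operators reduce to complex numbers, the Born series (6b) summed to all orders agrees with the closed form of Eq. (7), provided $|G_0 V| < 1$ so that the geometric series converges:

```python
import numpy as np

# Toy illustration of Eqs. (6b), (6c) and (7) for a single scatterer,
# where V and G0 are just complex numbers (illustrative values).
V = -0.8 + 0.0j          # optical potential strength (hypothetical)
G0 = 0.3 + 0.4j          # free-space Green function at the scatterer (hypothetical)

# Closed-form T-matrix, Eq. (7): T = V (1 - G0 V)^(-1)
T_exact = V / (1.0 - G0 * V)

# Born series, Eq. (6b) resummed into Eq. (6c): T = V + V G0 V + V G0 V G0 V + ...
T_series = 0.0j
term = V
for n in range(200):     # converges geometrically, since |G0 V| = 0.4 < 1
    T_series += term
    term = term * G0 * V

print(abs(T_exact - T_series))   # ~1e-16: series and closed form agree
```

For the plane scatterers below, the same resummation is done with operators instead of numbers, which is why the Green function at coinciding arguments must first be well defined.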
The above Fourier representation is not what we need. It is convenient to work in the "plane representation": in two-dimensional Fourier space in the directions parallel to the planes, and in real space in the $\hat{\mathbf{z}}$-direction perpendicular to the planes. For the polarization representation, choose the orthonormal basis $\{\hat{\mathbf{s}}_k,\hat{\mathbf{v}}_k,\hat{\mathbf{z}}\}$. Here, $\hat{\mathbf{z}}$ is the unit vector in the z-direction; $\hat{\mathbf{v}}_k$ is the unit vector in the direction of the projection of the wave vector k on the plane, so that the wave vector k has a component $k_\parallel$ in the $\hat{\mathbf{v}}_k$-direction and its full representation is $(0, k_\parallel, k_z)$; the $\hat{\mathbf{s}}_k$-polarization direction is orthogonal to the optical plane that is spanned by the other two basis vectors. In this representation the operator L(ω) is diagonal in $\mathbf{k}_\parallel$, with a 3 × 3 matrix kernel containing derivatives with respect to z [Eq. (15)]. The Green function in the same representation becomes $\langle\mathbf{k}_\parallel, z|\mathsf{G}_0(\omega)|\mathbf{k}'_\parallel, z'\rangle = (2\pi)^2\delta^2(\mathbf{k}_\parallel - \mathbf{k}'_\parallel)\,\mathbf{G}_0(k_\parallel, z, z', \omega)$, and the transformed Eq. (13) is a system of differential equations for the components $G^{pq}_0$ of $\mathbf{G}_0$ [Eq. (16)]; their arguments $(k_\parallel, z, z', \omega)$ are dropped for brevity. By choosing the plane representation, the matrix elements of $\mathbf{G}_0$ only depend on the magnitude and not on the orientation of $\mathbf{k}_\parallel$. All components involving an s-label are zero, except the ss-component. $G^{ss}_0$ satisfies the same differential equation as the Green function $g_0$ of the Helmholtz equation for scalar waves, so that for ω > 0 we have
$$G^{ss}_0(k_\parallel, z, z', \omega) = \frac{e^{ik_z|z-z'|}}{2ik_z} . \tag{17}$$
The variable $k_z$ is not independent of $k_\parallel$, but rather an abbreviation for $[(\omega/c)^2 - k_\parallel^2]^{1/2}$. The remaining coupled differential equations of Eq. (16) can also be solved (again for ω > 0), now that $G^{ss}_0$ is known:
$$G^{vv}_0 = (k_z c/\omega)^2\, G^{ss}_0 , \tag{18a}$$
$$G^{vz}_0 = -\,\mathrm{sgn}(z-z')\,k_\parallel k_z (c/\omega)^2\, G^{ss}_0 , \tag{18b}$$
$$G^{zv}_0 = G^{vz}_0 , \tag{18c}$$
$$G^{zz}_0 = (k_\parallel c/\omega)^2\, G^{ss}_0 + (c/\omega)^2\,\delta(z-z') . \tag{18d}$$
Green functions on the right-hand sides are understood to have the arguments $(k_\parallel, z, z', \omega)$. The above method of solving differential equations does not give a value for the sign function when z is equal to z′. The Green-function components (18) can alternatively be found from an inverse Fourier transformation,
$$\mathbf{G}_0(k_\parallel, z, z', \omega) = \int_{-\infty}^{\infty}\frac{dk'_z}{2\pi}\; e^{ik'_z(z-z')}\,\mathbf{G}_0(\mathbf{k}', \omega) , \qquad \mathbf{k}' = (0, k_\parallel, k'_z) . \tag{19}$$
This integration can only be performed in a representation that does not co-rotate with $k'_z$. The basis of Eq. (14) is not adequate, but again the basis $\{\hat{\mathbf{s}}_k,\hat{\mathbf{v}}_k,\hat{\mathbf{z}}\}$ suits well. With this Fourier method one finds the value 0 for the sign function in Eq. (18b) when z equals z′: for z = z′ the relevant integrands in Eq. (19) are antisymmetric in the variable $k'_z$.

B. Regularization of the Green function

The T-matrix of a plane scatterer for vector waves can be found by solving the appropriate Lippmann-Schwinger equation (6). A plane wave incident from z = −∞ with wave vector k, arbitrary amplitude $E_0$, and transverse polarization vector $\sigma_{\mathbf{k}} = (\sigma_s, \sigma_v, \sigma_z)$ is scattered by a plane at $z = z_\alpha$. Because of the symmetry in the in-plane directions, it is convenient to choose the plane representation for the LS equation. In terms of the Dirac notation, the electric field is a "ket"; the plane representation is found by taking the inner product of Eq. (6) for the electric field with the "bra" $\langle\mathbf{k}_\parallel, z|$, and by inserting the unit operator at the positions of the dots in the representation-independent equation (6). The incident field takes the form $\mathbf{E}_{\mathbf{k}\sigma,0}(k_\parallel, z, \omega) = E_0\,\sigma_{\mathbf{k}}\,\exp(ik_z z)$. The solution of the LS equation corresponding to this incident field is $\mathbf{E}_{\mathbf{k}\sigma}(\omega)$. The LS equation in the mixed representation becomes
$$\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z, \omega) = \mathbf{E}_{\mathbf{k}\sigma,0}(k_\parallel, z, \omega) + \int dz'\;\mathbf{G}_0(k_\parallel, z, z', \omega)\cdot\mathbf{V}(z', \omega)\cdot\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z', \omega) . \tag{20}$$
A plane is assumed to be infinitely thin, and it can be described by the optical potential $\mathbf{V}(z, \omega) = V(\omega)\,\delta(z - z_\alpha)\,\mathbf{I}$. (A specific model potential will be chosen in Sec. III F.) The integral can be evaluated immediately, and we get
$$\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z, \omega) = E_0\,\sigma_{\mathbf{k}}\,e^{ik_z z} + \mathbf{G}_0(k_\parallel, z, z_\alpha, \omega)\cdot V(\omega)\,\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z_\alpha, \omega) . \tag{21}$$
The usual way to solve this equation would be to put the position z equal to $z_\alpha$ and to solve for $\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z_\alpha, \omega)$.
The result would then be inserted back into the above equation to obtain an expression for $\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z, \omega)$. However, the Green tensor $\mathbf{G}_0$ is not defined when the positions z and $z_\alpha$ are identical, because of the delta function in the component $G^{zz}_0$ [Eq. (18d)]. One could simply neglect the delta function, as might be correct in other situations [35], but it will be argued in Sec. III C that this procedure would be wrong in our case. Therefore, the Green tensor (18) is not suited for setting up a theory for the scattering of vector waves by infinitely thin planes. It is known that "regularization" of Green functions is sometimes needed when modelling finite-sized scatterers as mathematical objects with zero volume, in order to have a model that is relevant for optics. (Not always: regularization was not needed for scalar waves scattering off planes [5].) In a regularization procedure, usually a cutoff parameter is introduced that modifies the behavior of Green functions at distances much smaller than optical wavelengths, and mathematical problems are thus overcome. In some cases, the regularization parameter can be sent to infinity in the final stage, while in other cases the cutoff parameter must be kept finite. For example, for point scatterers the problem of diverging Green functions occurs both for scalar and for vector waves. Point scatterers have been studied extensively and several regularization schemes have been proposed (see [36] and references therein). The same regularization procedure will now be chosen for plane scatterers as was done before for point scatterers [36]: a high-momentum cutoff is introduced in three-dimensional Fourier space. Instead of the free-space Green function $\mathbf{G}_0(\mathbf{k},\omega)$ of Eq. (14), a regularized free-space Green function $\tilde{\mathbf{G}}_0(\mathbf{k},\omega)$ will be used. The latter is defined in terms of the former as
$$\tilde{\mathbf{G}}_0(\mathbf{k},\omega) = \mathbf{G}_0(\mathbf{k},\omega)\,\frac{\Lambda^2}{\Lambda^2 + k^2} . \tag{22}$$
The cutoff momentum Λ is assumed to be much larger than the magnitude ω/c of the optical momentum, so that at optical wavelengths $\tilde{\mathbf{G}}_0 \simeq \mathbf{G}_0$. The effect of this cutoff in the real-space representation is also known [36]. Here its effect on the Green function in the plane representation is important. After an inverse Fourier transformation, again only in the z-direction, one obtains (again for ω > 0)
$$\tilde G^{ss}_0 = \frac{\Lambda^2}{\Lambda^2 + (\omega/c)^2}\left[\frac{e^{ik_z|z-z_1|}}{2ik_z} + \frac{e^{-\bar\Lambda|z-z_1|}}{2\bar\Lambda}\right] , \tag{24}$$
and analogous expressions for the other nonzero components. In the right-hand sides, the arguments $(k_\parallel, z, z_1, \omega)$ of the Green functions were dropped; $\bar\Lambda$ is short-hand notation for $(\Lambda^2 + k_\parallel^2)^{1/2}$; again, the sign function is zero when its argument is. All components of the regularized Green tensor consist of two parts, an oscillating and a decaying part, as a function of $|z - z_1|$. The decay occurs at a distance that is a tiny fraction of an optical wavelength. For $\bar\Lambda|z - z_1| \gg 1$, the regularized Green function approaches the unregularized one. If one were to take the limit Λ → ∞, then all the components in (24) would approach the unregularized components of Eq. (18); in particular, the limit of the last term in $\tilde G^{zz}_0$, which is proportional to $\bar\Lambda\, e^{-\bar\Lambda|z-z_1|}$, gives the delta function that made the regularization procedure necessary. However, Λ is kept finite for the moment, so that $\tilde G^{zz}_0$ has a finite term that grows with Λ. With this result, the Green-function regularization is complete and a theory of scattering of vector waves by plane scatterers can be set up.
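The plane-representation components used from here on are easy to evaluate numerically. The following sketch (not from the paper) implements Eqs. (17) and (18) in their assumed angular-spectrum forms; the sign of the off-diagonal element and the delta term in $G^{zz}_0$ are convention dependent, and the delta term is deliberately omitted, since it is exactly what the regularization above takes care of:

```python
import numpy as np

def kz(kpar, omega, c=1.0):
    """z-component of the wave vector; positive imaginary for evanescent waves."""
    return np.emath.sqrt((omega / c) ** 2 - kpar ** 2)

def g0(kpar, z, zp, omega, c=1.0):
    """Scalar Green function of Eq. (17): (d^2/dz^2 + kz^2) g0 = delta(z - z')."""
    kz_ = kz(kpar, omega, c)
    return np.exp(1j * kz_ * abs(z - zp)) / (2j * kz_)

def G0_plane(kpar, z, zp, omega, c=1.0):
    """Plane-representation Green tensor components (ss, vv, zz, vz) of Eq. (18).

    Assumed forms consistent with Eq. (17) and with the reality properties
    quoted in Sec. IV B. The delta term of Eq. (18d) is omitted here; it is
    the term that forces the regularization of Sec. III B. Note that
    np.sign(0) = 0, matching the convention that the sign function vanishes
    when its argument does.
    """
    kz_ = kz(kpar, omega, c)
    g = g0(kpar, z, zp, omega, c)
    Gss = g
    Gvv = (kz_ * c / omega) ** 2 * g
    Gzz = (kpar * c / omega) ** 2 * g        # + (c/omega)^2 delta(z - z'), omitted
    Gvz = -np.sign(z - zp) * kpar * kz_ * (c / omega) ** 2 * g
    return Gss, Gvv, Gzz, Gvz
```

For $k_\parallel > \omega/c$ the square root is positive imaginary, and one can check that the three diagonal components returned here become real while the off-diagonal one becomes purely imaginary, exactly the properties exploited in the guided-mode analysis of Sec. IV B.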
C. T-matrix of a plane for vector waves

The regularization entails that the Green function is replaced by its regularized version in the LS equation (21). Putting z equal to $z_\alpha$ then gives
$$\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z_\alpha, \omega) = E_0\,\sigma_{\mathbf{k}}\,e^{ik_z z_\alpha} + \tilde{\mathbf{G}}_0(k_\parallel, z_\alpha, z_\alpha, \omega)\cdot V(\omega)\,\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z_\alpha, \omega) , \tag{25}$$
where $\tilde G^{ss}_0$ stands for $\tilde G^{ss}_0(k_\parallel, z_\alpha, z_\alpha, \omega)$, and similarly for the other components. The off-diagonal elements of the Green tensor are all zero when the position z is equal to $z_\alpha$, so that the equation can be solved for every component separately. By inserting this result into the LS equation for general z, one finds
$$\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z, \omega) = E_0\,\sigma_{\mathbf{k}}\,e^{ik_z z} + \tilde{\mathbf{G}}_0(k_\parallel, z, z_\alpha, \omega)\cdot\tilde{\mathbf{T}}(k_\parallel, \omega)\,E_0\,\sigma_{\mathbf{k}}\,e^{ik_z z_\alpha} , \tag{26}$$
where the T-matrix for scattering of arbitrarily polarized light from a plane is diagonal, with elements
$$\tilde T^{jj}(k_\parallel, \omega) = V(\omega)\left[1 - V(\omega)\,\tilde G^{jj}_0(k_\parallel, z_\alpha, z_\alpha, \omega)\right]^{-1} , \qquad j = s, v, z . \tag{27}$$
The scattering of the s-polarization component of the light can be considered independently of the $\hat{\mathbf{v}}$ and $\hat{\mathbf{z}}$ directions, according to Eqs. (26) and (27). It can be verified with Eqs. (24) and (27) that since Λ ≫ ω/c, the matrix component $\tilde T^{ss}$ for all practical purposes is equal to the T-matrix for scalar waves, and the same holds for the Green tensor component $\tilde G^{ss}_0$: the regularization was not necessary for s-polarized light, and fortunately it does not affect the scattering properties of s-polarized light. The need for regularization did show up in the description of scattering of p-polarized light, and there the cutoff might influence light scattering. Incoming p-polarized light is characterized by its amplitude $E_0$, wave vector k, and its polarization state $\sigma_{\mathbf{k}} = (0, \sigma_v, \sigma_z)$. The cutoff-dependent terms in the scattered field are smaller than the others by factors of order $(\omega/c)/\Lambda$ or $e^{-\bar\Lambda|z-z_\alpha|}$, so that for optical purposes these terms can be neglected. For finite but very large Λ we arrive at the following effective description:
$$\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z, \omega) = E_0\,\sigma_{\mathbf{k}}\,e^{ik_z z} + \mathbf{G}_0(k_\parallel, z, z_\alpha, \omega)\cdot\mathbf{T}(k_\parallel, \omega)\,E_0\,\sigma_{\mathbf{k}}\,e^{ik_z z_\alpha} , \qquad \mathbf{T} = \mathrm{diag}\,(T^{ss}, T^{vv}, 0) , \tag{28}$$
where the ss-component of the T-matrix is equal to $V(\omega)[1 - V(\omega)G^{ss}_0]^{-1}$, and analogously for the vv-component. The Green functions have arguments $(k_\parallel, z_\alpha, z_\alpha, \omega)$. In this effective description, where the T-matrix is denoted by T rather than $\tilde{\mathrm{T}}$, the cutoff parameter Λ no longer occurs. The cutoff was necessary in order to set up a scattering theory, and it shows up in the elements of the scattering theory such as the T-matrix (27) and the regularized Green function (24). It does not show up in the electric field, and precisely this enables us to arrive at the effective description. Note also that the (large) value of $\tilde G^{zz}_0$ has become irrelevant. The effective description that is obtained here after a regularization is different from a theory where the delta function in $G^{zz}_0$ [see Eq. (18d)] would simply be removed [35]. Leaving out the delta function in the LS equation (21) would result in a nonzero $T^{zz}$, in contrast with Eq. (28). Furthermore, that T-matrix would have the unwanted effect that the transmitted part of an incoming wave would not be parallel to the incoming wave. The conclusion is that a regularization of the Green function was necessary, even when in the end the cutoff could be sent to infinity. Equation (28) defines a true mode of the electromagnetic field in the presence of a single plane scatterer, in terms of a linearly polarized incoming plane wave with arbitrary angle of incidence. This is not the complete set of modes. Other modes, not corresponding to an incoming wave, will be discussed in Sec. IV B, both for a single plane and for a crystal of planes.

D. Transmission and energy conservation

The transmission of light through the plane can be found by choosing $z > z_\alpha$ in Eq. (28). The transmitted wave can be expressed in terms of the incoming wave as $\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z, \omega) = \tau(k_\parallel, \omega)\cdot\mathbf{E}_{\mathbf{k}\sigma,0}(k_\parallel, z, \omega)$, with the transmission matrix τ whose nonzero elements are
$$\tau^{ss} = 1 + \frac{T^{ss}}{2ik_z} , \qquad \tau^{vv} = \tau^{zz} = 1 - \frac{ik_z(c/\omega)^2\,T^{vv}}{2} . \tag{29}$$
Both for purely s-polarized and for purely p-polarized light, the transmitted electric field is a polarization-dependent scalar times the incoming electric field vector. Energy conservation puts a constraint (called the 'optical theorem') on the form that the T-matrix of an elastic scatterer can take. The optical theorem for a plane that scatters scalar waves was found before [5].
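Before writing the theorems down, here is a minimal numerical check (a sketch, assuming the model potential $V(\omega) = -D_{\mathrm{eff}}(\omega/c)^2$ introduced in Sec. III F and a plane at $z_\alpha = 0$; the reflected p-amplitude is written up to a polarization-vector convention) that builds the single-plane T-matrix elements of Eq. (28) and verifies $|\rho|^2 + |\tau|^2 = 1$ for both polarizations:

```python
import numpy as np

c = 1.0
a = 1.0                        # lattice spacing used as the unit of length
D_eff = 0.46 * a               # effective thickness, as in the figures
omega = 2 * np.pi * 0.5 / a    # a/lambda = 0.5
theta = np.deg2rad(40.0)       # angle of incidence
kpar = (omega / c) * np.sin(theta)
kz = np.sqrt((omega / c) ** 2 - kpar ** 2)

V = -D_eff * (omega / c) ** 2          # plane potential, F = -V = D_eff (w/c)^2

# s-polarization: G0_ss(z_a, z_a) = 1/(2 i kz); T_ss per Eq. (28).
G0ss = 1.0 / (2j * kz)
Tss = V / (1.0 - V * G0ss)
tau_s = 1.0 + Tss * G0ss               # transmitted amplitude, Eq. (29)
rho_s = Tss * G0ss                     # reflected amplitude
print(abs(rho_s) ** 2 + abs(tau_s) ** 2)   # -> 1.0 (s-wave optical theorem)

# p-polarization: only T_vv matters; G0_vv(z_a, z_a) = kz (c/omega)^2 / (2i).
G0vv = kz * (c / omega) ** 2 / 2j
Tvv = V / (1.0 - V * G0vv)
tau_p = 1.0 - 1j * kz * (c / omega) ** 2 * Tvv / 2   # Eq. (29)
rho_p = -1j * kz * (c / omega) ** 2 * Tvv / 2
print(abs(rho_p) ** 2 + abs(tau_p) ** 2)   # -> 1.0 (p-wave optical theorem)
```

That both sums come out as unity for any real V is exactly the content of the two optical theorems derived next.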
Since s-waves map onto scalar waves, the optical theorem for the ss-component of the T-matrix can be given immediately:
$$\mathrm{Im}\,T^{ss}(k_\parallel, \omega) = -\,\frac{|T^{ss}(k_\parallel, \omega)|^2}{2k_z} . \tag{30}$$
The most general T-matrix satisfying this requirement has the form
$$T^{ss}(k_\parallel, \omega) = \left[-\frac{1}{F_s(k_\parallel, \omega)} + \frac{i}{2k_z}\right]^{-1} , \tag{31}$$
where the optical potential $F_s(k_\parallel, \omega)$ must be a real-valued function. For reflection and transmission of p-polarized light, only the matrix element $T^{vv}$ is important. Again, we are interested in the form that this matrix element can take when optical energy is conserved. This is the case when the $\hat{\mathbf{z}}$-component of the Poynting vector is the same before and after the plane. An incoming p-polarized plane wave gives the electric field (28). With a Maxwell equation, the accompanying magnetic field B can also be found. In SI units, and in terms of the complex fields E and B, the cycle-averaged Poynting vector is equal to $\mathrm{Re}[\mathbf{E}^*(\mathbf{r},t)\times\mathbf{B}(\mathbf{r},t)]/(2\mu_0)$ [37]. When a harmonic wave of frequency ω coming from z = −∞ scatters off the plane, the $\hat{\mathbf{z}}$-component of the Poynting vector before the plane is proportional to $1 - |k_z(c/\omega)^2 T^{vv}/2|^2$; at the other side of the plane one finds $|1 - ik_z(c/\omega)^2 T^{vv}/2|^2$. By equating the two, the optical theorem for a plane that scatters p-polarized light is found to be
$$\mathrm{Im}\,T^{vv}(k_\parallel, \omega) = -\,\frac{k_z c^2}{2\omega^2}\,|T^{vv}(k_\parallel, \omega)|^2 . \tag{32}$$
This differs from the optical theorem for s-polarized light. Also, the most general solution of the optical theorem is different:
$$T^{vv}(k_\parallel, \omega) = \left[-\frac{1}{F_p(k_\parallel, \omega)} + \frac{ik_z c^2}{2\omega^2}\right]^{-1} , \tag{33}$$
where the optical potential $F_p(k_\parallel, \omega)$ is real.

E. T-matrix for N planes

Now that the Green function and the T-matrix of a single plane are known, a multiple-scattering theory can be set up. Assume that there are N plane scatterers, placed at arbitrary positions. Assume them to be parallel, so that s- and p-polarized light do not mix in the scattering process. In the general expression (12) for the T-matrix of a complex dielectric in terms of its simple parts, Green functions are always sandwiched between T-matrices of scatterers at different positions. For unequal plane positions $z_\alpha$ and $z_\beta$, the value of $\tilde{\mathbf{G}}_0(k_\parallel, z_\beta, z_\alpha, \omega)$ is finite, and it can be taken equal to the unregularized $\mathbf{G}_0(k_\parallel, z_\beta, z_\alpha, \omega)$, because different planes are optical distances apart. Further regularizations are therefore not required in order to find the N-plane T-matrix. As shown in the Appendix, higher-order terms in the series (12) correspond to higher-order multiplications of N × N matrices. This multiplication property means that for parallel planes, the series (12) can be summed exactly. In the Appendix it is shown how the summation can be done even in the general case where all planes may have different optical properties and are placed at arbitrary non-coinciding positions. Here we specify that all planes are identical. This gives the central result of this section, the N-plane T-matrix for scattering of vector waves:
$$\mathsf{T}^{(N)}(\omega) = \sum_{\alpha,\beta=1}^{N}\int\frac{d^2\mathbf{k}_\parallel}{(2\pi)^2}\;|\mathbf{k}_\parallel, z_\alpha\rangle\,\mathbf{T}^{(N)}_{\alpha\beta}(k_\parallel, \omega)\,\langle\mathbf{k}_\parallel, z_\beta| , \tag{34}$$
where $\mathbf{T}^{(N)}_{\alpha\beta}(k_\parallel, \omega)$ is a 3 × 3 matrix; the only two nonzero spatial components are
$$T^{(N),ss}_{\alpha\beta} = \left\{\left[\mathsf{I}_N - T^{ss}\,\mathsf{G}^{ss}\right]^{-1}\right\}_{\alpha\beta} T^{ss} , \qquad T^{(N),vv}_{\alpha\beta} = \left\{\left[\mathsf{I}_N - T^{vv}\,\mathsf{G}^{vv}\right]^{-1}\right\}_{\alpha\beta} T^{vv} , \tag{35}$$
where $(\mathsf{G}^{pp})_{\alpha\beta}$ equals $G^{pp}_0(k_\parallel, z_\alpha, z_\beta, \omega)$ for α ≠ β and zero for α = β. Here, $\mathsf{I}_N$ is the N × N unit matrix. Arguments $(k_\parallel, \omega)$ were temporarily dropped for readability. The calculation of $\mathsf{T}^{(N)}(k_\parallel, \omega)$ boils down to the inversion of an N × N matrix for the two transverse polarization directions separately. From now on, assume that the N planes are placed at regular distances from each other, with a spacing a between neighbors. The necessary matrix inversions in Eq. (35) can then be performed analytically for both polarization directions. The s-wave case maps identically onto the situation for scalar waves, for which the analytical inversion was discussed at length in [5]; for p-waves the inversion trick is analogous, and it will not be presented here.
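Even without the analytical inversion, Eq. (35) is a few lines of code. The following sketch (assuming, as above, the model potential of Sec. III F and c = 1 units) assembles and inverts the N × N system for the s-polarized component; the p case is identical with $G^{vv}_0$ in place of $G^{ss}_0$:

```python
import numpy as np

def t_matrix_N_planes(kpar, omega, z_planes, D_eff, c=1.0):
    """N-plane T-matrix T^(N)_ab for s-polarization, per Eq. (35):
    the multiple-scattering series (12) resummed into an N x N linear system.
    """
    kz = np.emath.sqrt((omega / c) ** 2 - kpar ** 2)
    V = -D_eff * (omega / c) ** 2
    T1 = V / (1.0 - V / (2j * kz))         # single-plane T_ss, Eq. (28)

    z = np.asarray(z_planes, dtype=float)
    # Inter-plane propagator G0_ss(z_a, z_b); the diagonal is excluded because
    # Eq. (12) only sandwiches Green functions between *different* scatterers.
    G = np.exp(1j * kz * np.abs(z[:, None] - z[None, :])) / (2j * kz)
    np.fill_diagonal(G, 0.0)

    N = len(z)
    M = np.eye(N, dtype=complex) - T1 * G
    return np.linalg.solve(M, T1 * np.eye(N, dtype=complex))  # T^(N)_ab

# Example: ten equidistant planes with spacing a, at a/lambda = 0.5,
# normal incidence (illustrative parameters).
a = 1.0
TN = t_matrix_N_planes(kpar=0.0, omega=2 * np.pi * 0.5 / a,
                       z_planes=a * np.arange(1, 11), D_eff=0.46 * a)
```

The analytical inversion mentioned above replaces `np.linalg.solve` by closed-form matrix elements; numerically the two agree, but the closed form exposes the Bloch wave vector discussed next.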
A result from the analytical inversion is that the T-matrix elements, and therefore the optical properties of the N-plane crystal, strongly depend on the Bloch wave vectors $K_s(k_\parallel, \omega)$ and $K_p(k_\parallel, \omega)$. For p-polarized light the Bloch wave vector is given by $\arccos(C_p)/a$, with $C_p = \cos(k_z a) + C'_p\sin(k_z a)$; the constant $C'_p$ is given in terms of the single-plane T-matrix element $T^{vv}$ [Eq. (36)]. In general, $C_p$ is a complex constant. However, if the optical theorem (32) holds, then the imaginary part of $C_p$ becomes identically zero, and the single-plane T-matrix will be of the form (33). Likewise, $K_s$ is defined as $\arccos(C_s)/a$ for a quantity $C_s$ that becomes real when the optical theorem Eq. (30) for s-polarized light holds [5]. In those cases, the expressions for $C_{s,p}$ become rather simple:
$$C_s = \cos(k_z a) - \frac{F_s}{2k_z}\,\sin(k_z a) , \qquad C_p = \cos(k_z a) - \frac{k_z c^2 F_p}{2\omega^2}\,\sin(k_z a) . \tag{37}$$
The most general T-matrices (31) and (33) feature as yet unspecified optical potentials $F_{s,p}$. These should be real when energy is conserved, but for the rest they can be arbitrary functions with the frequency and the in-plane wave vector as variables. In [5], plane scatterers were introduced as a simplified model for dielectric slabs of finite thickness d and nondispersive dielectric function ε(ω) = ε. The optical potential for the plane scatterer in this model is obtained via the limiting process of making the thickness d of the dielectric slab smaller while increasing the polarizability (ε − 1), keeping their product constant and equal to the "effective thickness" $D_{\mathrm{eff}}$. (The quantity $D_{\mathrm{eff}}/a$ is called the "grating strength" in [19,20].) Following the same limiting procedure as in [5], we find the optical potential $F_{s,p}(k_\parallel, \omega) = -V(\omega) = D_{\mathrm{eff}}(\omega/c)^2$, identical for the two polarizations. Spatial dispersion and anisotropy would have shown up in the optical potentials as a dependence on the magnitude and on the direction of $\mathbf{k}_\parallel$, respectively. These two phenomena were already neglected as early as in the wave equation (4). In general, p-polarized light differs from s-polarized light in that the former has a Brewster angle at which no light is reflected from a dielectric interface ($n_1 \to n_2$). The Brewster angle $\theta_B$ equals $\tan^{-1}(n_2/n_1)$. In the limiting procedure of going from a finite slab-in-air to an infinitely thin plane-in-air, the dielectric contrast $\sqrt{\varepsilon}/1$ goes to infinity, and consequently the Brewster angle becomes 90° in that limit. Therefore, in our limiting procedure, a plane scatterer will not have a Brewster angle at the same angle as the finite dielectric slab that one starts out with. In line with this, in [15] a single-plane reflection for p-polarized light was determined that is nonzero for all angles of incidence. The p-polarized propagating modes for a system of plane scatterers will therefore differ substantially from the corresponding modes in a slab structure. The absence of a Brewster effect was also noticed in [18], where the infinite-crystal version of the plane-scatterer model is treated.

IV. OPTICAL MODES

A. Propagating modes

The optical modes are the harmonic solutions of the wave equation (4). With the solution (34) for the T-matrix, the modes that correspond to a nonzero incoming plane wave can be given explicitly as
$$\mathbf{E}_{\mathbf{k}\sigma}(k_\parallel, z, \omega) = E_0\left[\sigma_{\mathbf{k}}\,e^{ik_z z} + \sum_{\alpha,\beta=1}^{N}\mathbf{G}_0(k_\parallel, z, z_\alpha, \omega)\cdot\mathbf{T}^{(N)}_{\alpha\beta}(k_\parallel, \omega)\,\sigma_{\mathbf{k}}\,e^{ik_z z_\beta}\right] . \tag{38}$$
These propagating (or radiative) modes are labelled by the incoming wave vector k and polarization $\sigma_{\mathbf{k}}$. The sine of the angle of incidence (with respect to a vector normal to the planes) is equal to $k_\parallel c/\omega$. The amplitudes of the s-polarized modes [with $\sigma_{\mathbf{k}} = (1, 0, 0)$] are identical to the corresponding amplitudes for scalar waves; the p-polarized modes [$\sigma_{\mathbf{k}} = (0, k_z/k, -k_\parallel/k)$] have no scalar analogues.
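Mode profiles like those plotted below follow directly from Eq. (38). The following sketch (reusing `t_matrix_N_planes` from the previous block; parameters are illustrative) evaluates $|E(z)|^2$ for an s-polarized incident wave:

```python
import numpy as np

def mode_profile_s(kpar, omega, z_planes, D_eff, z_grid, c=1.0):
    """|E(z)|^2 for an s-polarized plane wave incident from z = -infinity,
    per Eq. (38): E = E0 exp(i kz z) + sum_ab g0(z, z_a) T^(N)_ab E0(z_b).
    Requires t_matrix_N_planes from the sketch above.
    """
    kz = np.emath.sqrt((omega / c) ** 2 - kpar ** 2)
    TN = t_matrix_N_planes(kpar, omega, z_planes, D_eff, c)
    z_p = np.asarray(z_planes)
    E0_at_planes = np.exp(1j * kz * z_p)          # incident field on each plane
    source = TN @ E0_at_planes                    # T^(N) acting on the incident field
    g = np.exp(1j * kz * np.abs(z_grid[:, None] - z_p[None, :])) / (2j * kz)
    E = np.exp(1j * kz * z_grid) + g @ source
    return np.abs(E) ** 2

a = 1.0
z_grid = np.linspace(-3 * a, 14 * a, 2000)
omega = 2 * np.pi * 0.5 / a                       # a/lambda = 0.5
kpar = omega * np.sin(np.deg2rad(60.0))           # 60 degrees incidence
profile = mode_profile_s(kpar, omega, a * np.arange(1, 11), 0.46 * a, z_grid)
# For these parameters the s-profile decays inside the ten-plane crystal,
# as in Fig. 1(b).
```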
For light with wave vector and frequency such that $C_{s,p} > 1$ [see Eq. (37)], the Bloch wave vector is purely imaginary for the elastic scatterers that we consider. Similarly, for $C_{s,p} < -1$, the Bloch wave vector equals π plus an imaginary number. In both situations the light will feel a stop band, meaning that it will be 100% reflected when falling on a semi-infinite system of planes. Otherwise, when $-1 < C_{s,p} < 1$, the Bloch wave vector is real and light can propagate inside the crystal. More will be said about the Bloch wave vectors later in this section. Some plots of mode profiles will now be presented. Assume that light comes in from the left. For perpendicularly incident light, there is no difference between s- and p-polarization. In Fig. 1 the mode profiles (or squared absolute values of mode functions) for s- and p-polarized light inside a ten-plane crystal are compared, both for an incoming angle of 30° and for 60°. Fig. 1(a) shows that at an angle of 30° the mode profiles corresponding to the two polarizations do not differ much yet. Both modes decay rapidly inside the crystal structure and are reflected (almost) completely. The Bloch wave vectors are complex for both polarizations. Only for the s-wave are the polarization directions of the incoming and the reflected wave equal, so that the mode profile at the left side of the crystal reaches four times the amplitude of the incoming electric field. The situation is different at an incoming angle of 60°, as shown in Fig. 1(b): the mode profile of the s-polarized light again decays rapidly inside the crystal (and the corresponding Bloch wave vector again has an imaginary part), whereas the p-polarized light can propagate inside the crystal and is transmitted almost completely (and the Bloch wave vector is real). For this frequency and incoming angle, the crystal is a good polarization filter. The mode profiles in Figs. 1(a,b) of the s-polarized waves are continuous, whereas the p-polarized waves are discontinuous at the positions of the planes. This reflects the boundary conditions: the tangential components of the electric field must be continuous, and the normal components must show a jump at a dielectric interface. The electric field of s-polarized light only has a tangential component, while p-polarized light consists of both tangential and normal components. This explains the differences in the mode profiles for s- and p-waves. Notice that in our Green-function formalism, boundary conditions are automatically satisfied, whereas in related work based on transfer-matrix methods, boundary conditions must be imposed explicitly [15,16,20]. Reflection as a function of frequency by the ten-plane Bragg mirror is plotted in Fig. 2 for both polarization directions. The reflection $|\rho|^2$ equals $(1 - |\tau|^2)$, with $|\tau|^2$ the (relative) transmitted light intensity. For light incident perpendicularly to the planes, both transverse polarization vectors are equivalent, and accordingly in Fig. 2(a) the graphs for s- and p-polarized light overlap. Differences between the two polarizations do appear for non-normal incidence. In Fig. 2(b) the angle of incidence is 60°. The red edges of the stop bands for s-polarized light move to slightly higher frequencies, and the widths of the stop bands become larger. For p-polarized light the red edges of the stop bands shift to the blue much faster, the more so for larger angles of incidence.
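The stop-band map behind these figures can be sketched directly from the constants $C_{s,p}$. The formulas below rely on Eq. (37) as reconstructed above (the original expressions are garbled in this copy, so signs and factors are an assumption, checked only against the normal-incidence limit and the κ-form of $C_p$ quoted in Sec. IV B):

```python
import numpy as np

def bloch_constants(theta_deg, a_over_lambda, D_eff_over_a):
    """C_s and C_p per the reconstructed Eq. (37) (see caveat above), with
    F_{s,p} = D_eff (omega/c)^2 and c = 1.
    |C| <= 1: real Bloch vector K = arccos(C)/a, propagating band;
    |C| >  1: complex K, stop band (total reflection off a semi-infinite crystal).
    """
    a = 1.0
    omega = 2 * np.pi * a_over_lambda / a
    kz = omega * np.cos(np.deg2rad(theta_deg))
    D = D_eff_over_a * a
    C_s = np.cos(kz * a) - (D * omega ** 2 / (2 * kz)) * np.sin(kz * a)
    C_p = np.cos(kz * a) - (D * kz / 2) * np.sin(kz * a)
    return C_s, C_p

# Scan angles at a/lambda = 0.5, D_eff = 0.46 a (the case of Fig. 3):
for th in range(5, 90, 5):
    C_s, C_p = bloch_constants(th, 0.5, 0.46)
    print(th, "s:", "stop" if abs(C_s) > 1 else "pass",
              "p:", "stop" if abs(C_p) > 1 else "pass")
# s-waves stay in a stop band at every angle (omnidirectional for s), while
# C_p re-enters (-1, 1) at large angles: p-polarized light propagates there.
```

The complex Bloch wave vector itself follows as `np.emath.arccos(C) / a`, which returns a purely imaginary number for C > 1 and π plus an imaginary number for C < −1, exactly the two stop-band cases described above.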
For scalar waves, a crystal of plane scatterers can be an omnidirectional mirror [5], which means that waves experience a stop band for all angles of incidence. For vector waves, the crystal will only be an omnidirectional mirror if there are frequency intervals in which the crystal is an omnidirectional mirror both for s- and for p-waves. As stated earlier, it is the Bloch wave vectors K that distinguish between light that can propagate inside the crystal (real K) and light that feels a stop band (K an imaginary number, or π plus an imaginary number). In our formalism, the Bloch wave vectors are the arc cosines of the constants $C_s$ and $C_p$ given in Eq. (37). As shown in detail in [5], these Bloch wave vectors show up in expressions for the N-plane T-matrix $\mathsf{T}^{(N)}$. It must be said that in the present T-matrix formalism it is not obvious simply by looking at the equations that a stop band occurs whenever the Bloch wave vector has a nonzero imaginary part. Nevertheless, we conclude from our numerical calculations that the relation does exist. To give an example, of the four modes in Figs. 1(a,b), only the p-polarized light incoming at 60° has a corresponding real Bloch wave vector. It is an interesting fact that the Bloch wave vector, whose role is obvious in infinite crystals, also plays an important role in finite periodic structures. This was already noticed before in the context of transfer-matrix methods [27,28,29]; here we see the importance of the Bloch wave vector for finite periodic structures in a T-matrix formalism. For light of a frequency corresponding to a/λ = 0.5 and planes with $D_{\mathrm{eff}} = 0.46a$, the s-waves are reflected omnidirectionally [5]. In Fig. 3 we plot both constants $C_s$ and $C_p$ for this frequency, as a function of the angle of incidence. Unlike for s-waves, for p-waves there are incident angles larger than the critical angle $\theta_c \approx 55°$ for which the values of $C_p$ lie between −1 and 1. Light incident at these large angles can propagate inside the crystal, and therefore the crystal is not an omnidirectional mirror for this frequency. Actually, this information could already be read off from the mode profile of p-polarized light incident at 60° in Fig. 1(b). The conclusion holds more generally: for larger $D_{\mathrm{eff}}$ the critical angle $\theta_c$ increases, but it can be shown by expanding Eq. (37) around $\theta_{\mathrm{in}} = 90°$ that for every finite $D_{\mathrm{eff}}/a$ and a/λ there always is a finite interval of angles corresponding to propagating p-polarized light. In conclusion, crystals of identical and equidistant plane scatterers can reflect vector waves in almost all (but not in all) directions.

B. Guided modes

Besides propagating modes there can also be bound modes that do not correspond to incoming light (see Sec. II). Bound modes can be found by solving the LS equation in the absence of an incoming field. In crystals of plane scatterers, bound modes are guided modes. They have imaginary wave vectors in the z-direction, and they decay exponentially away from the planes. Their in-plane wave vectors $k_\parallel$ are larger than ω/c. With each mode, be it of the propagating or the guided type, a nonzero local density of states is associated. In the following, guided modes will be searched for by looking for nonzero densities of states. (This is an alternative to the method used in [5], where guided modes of scalar waves were identified.)
For vector waves, the local optical density of states N(r, ω) is defined by [7]
$$N(\mathbf{r}, \omega) = -\,\frac{2\omega}{\pi c^2}\,\mathrm{Im}\,\mathrm{Tr}\,\mathbf{G}(\mathbf{r}, \mathbf{r}, \omega) , \tag{39}$$
so it is a scalar proportional to the trace of the imaginary part of the Green tensor G(r, r, ω). In planar geometries the latter can best be found as an integral over the Green tensor in the plane representation:
$$\mathbf{G}(\mathbf{r}, \mathbf{r}, \omega) = \int\frac{d^2\mathbf{k}_\parallel}{(2\pi)^2}\;\mathbf{G}(k_\parallel, z, z, \omega) . \tag{40}$$
Guided modes can only contribute to the local density of states if components of the integrand $\mathbf{G}(k_\parallel, z, z, \omega)$ in (40) have nonzero imaginary parts for certain $k_\parallel > \omega/c$. For the crystals of plane scatterers, the Green tensor follows directly from the Dyson-Schwinger equation:
$$\mathbf{G}(k_\parallel, z, z', \omega) = \mathbf{G}_0(k_\parallel, z, z', \omega) + \sum_{\alpha,\beta=1}^{N}\mathbf{G}_0(k_\parallel, z, z_\alpha, \omega)\cdot\mathbf{T}^{(N)}_{\alpha\beta}(k_\parallel, \omega)\cdot\mathbf{G}_0(k_\parallel, z_\beta, z', \omega) . \tag{41}$$
All three diagonal components of $\mathbf{G}_0(k_\parallel, z, z', \omega)$ become real quantities for $k_\parallel > \omega/c$, and indeed there are no guided modes in free space. On the other hand, the off-diagonal elements $G^{vz}_0 = G^{zv}_0$ become purely imaginary when $k_\parallel > \omega/c$, and these elements do show up in the diagonal elements of G. However, since they always show up in paired products, for example in the term $G^{zv}_0 T^{(N),vv}_{\alpha\beta} G^{vz}_0$, they also give a real contribution to the diagonal elements of G. The T-matrix elements are also real when $k_\parallel > \omega/c$, except when the matrix has a pole. Therefore, all guided modes must correspond to poles of the s- or p-components of the N-plane T-matrix. First the guided modes of a single plane will be determined. There can be a guided mode when either $T^{ss}$ or $T^{vv}$ in Eq. (28) has a pole. Now $T^{ss}$ has a pole when $1 - V(\omega)G^{ss}_0(k_\parallel, z_\alpha, z_\alpha, \omega)$ vanishes. Using the same model for the optical potential as in Sec. III F, we find the dispersion relation $\kappa^{(1)}_1 = D_{\mathrm{eff}}(\omega/c)^2/2$ for one and only one guided mode corresponding to s-polarized light. Here, κ is the positive square root $[k_\parallel^2 - (\omega/c)^2]^{1/2}$ for $k_\parallel > \omega/c$. A single-plane guided mode with this dispersion relation was also found in [15], and in [5] for scalar waves. A pole of $T^{vv}$ occurs when $1 - V(\omega)G^{vv}_0(k_\parallel, z_\alpha, z_\alpha, \omega)$ vanishes, which is equivalent to the requirement that $D_{\mathrm{eff}}\kappa/2$ equals −1. Now in principle, $D_{\mathrm{eff}}$ could be negative when modelling a slab with a negative dielectric function as a plane scatterer. However, in the physical situations that we are interested in, the effective thickness is real and positive (see Sec. III F). Therefore, there is no guided mode corresponding to p-polarized light for a single plane scatterer. This result was also derived in [15], where only the cases of a single plane and of infinitely many planes were considered. At this point it is worthwhile to compare the guided modes of a dielectric slab (thickness d, dielectric constant ε) in air with the guided modes of an infinitely thin plane with effective thickness $D_{\mathrm{eff}} = (\varepsilon - 1)d$. For the slab, the number M of guided modes is the same for both polarizations in the special case considered here, and equal to [38]
$$M = 1 + \left[\frac{2\sqrt{d\,D_{\mathrm{eff}}}}{\lambda}\right] . \tag{42}$$
Here, [X] stands for the largest integer smaller than or equal to X. In the large-wavelength limit there is a single guided mode for each polarization direction. The second guided mode appears when $a/\lambda = a/(2\sqrt{d\,D_{\mathrm{eff}}})$. For example, for d = 0.1a and $D_{\mathrm{eff}} = 0.46a$, a second guided mode exists when a/λ > 2.3. We are interested in frequencies around a/λ = 0.5, where the first stop band for normally incident light occurs (see Fig. 2). For these frequencies, both the plane scatterer and the dielectric slab have a single s-polarized guided mode; the slab has a single p-polarized guided mode, whereas the plane scatterer has no such guided mode at all.
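The comparison of slab and plane scatterer is easy to automate. The following sketch uses the single-plane dispersion relation $\kappa^{(1)}_1 = D_{\mathrm{eff}}(\omega/c)^2/2$ quoted above and the slab mode count of Eq. (42) as reconstructed here (the bracketed floor form reproduces the quoted second-mode threshold a/λ ≈ 2.3):

```python
import numpy as np

def kappa_guided_single_plane(a_over_lambda, D_eff_over_a, a=1.0, c=1.0):
    """Decay constant of the single s-polarized guided mode of one plane,
    from the dispersion relation kappa = D_eff (omega/c)^2 / 2."""
    omega = 2 * np.pi * a_over_lambda / a
    return D_eff_over_a * a * (omega / c) ** 2 / 2

def n_guided_modes_slab(a_over_lambda, d_over_a, D_eff_over_a, a=1.0):
    """Number of guided modes per polarization of a dielectric slab with
    thickness d and (eps - 1) d = D_eff, per the reconstructed Eq. (42)."""
    lam = a / a_over_lambda
    return 1 + int(2 * np.sqrt(d_over_a * a * D_eff_over_a * a) / lam)

# Around the first stop band (a/lambda = 0.5, d = 0.1 a, D_eff = 0.46 a):
print(kappa_guided_single_plane(0.5, 0.46))   # ~2.27/a: a strongly bound s-mode
print(n_guided_modes_slab(0.5, 0.1, 0.46))    # 1 mode per polarization; the
# second slab mode only appears above a/lambda ~ 2.3, as quoted in the text.
```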
Now we determine the guided modes of a finite crystal of N parallel and equidistant planes, using the same Green-function method as for the single plane. First look for the poles of the component $T^{ss,(N)}_{\alpha\beta}(k_\parallel, \omega)$. This is easy, because this component is identical to the N-plane T-matrix $T^{(N)}$ for scalar waves, for which it was found that there are at most N guided modes in a crystal of N planes [5]. For an infinite number of planes, the guided modes form a band, as was found in [15,19]. Now look for guided modes corresponding to p-polarized light. The poles of the component $T^{vv,(N)}_{\alpha\beta}(k_\parallel, \omega)$ occur when the determinant $\det[(\mathsf{T}^{vv,(N)})^{-1}]$ is equal to zero. An expression for this determinant can be found just as was done for scalar waves in [5]. The result is that for p-polarized light there are guided modes if the resulting equation, Eq. (43), has nontrivial solutions ω(κ). The Bloch wave vector $K_p$ is still defined as $a^{-1}$ times the arccosine of $C_p$, which in terms of κ reads $C_p = \cosh(\kappa a) + [\kappa c^2 F/(2\omega^2)]\sinh(\kappa a)$. Eq. (43) should lead to the dispersion relations ω(κ) for the guided modes, if they exist. When increasing the frequency, new guided modes would appear that at first are only just captured by the structure, so that κ = 0⁺. It is therefore convenient to count the guided modes in the small-κ limit. Let the constant χ be defined as $[1 + Fc^2/(a\omega^2)]^{1/2}$. Then $C_p$ can be written up to second order in κ as
$$C_p \simeq 1 + \frac{(\chi\kappa a)^2}{2} . \tag{44}$$
To the same order in κ, the Bloch wave vector $K_p$ becomes equal to iχκ. Therefore, solutions of Eq. (43) will only exist when $\sinh[(N+1)\chi\kappa a]$ equals $\sinh(N\chi\kappa a)$, or equivalently when χ ≡ 0. Since F is taken to be $D_{\mathrm{eff}}(\omega/c)^2$, there are guided modes for p-polarized light only if $1 + D_{\mathrm{eff}}/a = 0$. So in finite crystals of N planes with a positive effective thickness, there are no guided modes corresponding to p-polarized light. This is in agreement with the result just obtained for the single plane, and with the results in [15,20] for infinite crystals. In conclusion, there are at most N guided modes in the finite crystal of plane scatterers, all corresponding to s-polarized light. The comparison of the single plane and the single slab indicates that s-waves in plane scatterers are a good model for s-waves in slabs, at least for frequencies around the first stop band. For the p-polarized guided modes, the conclusion must be that the finite slab structures support guided modes which have no analogues in the crystal of plane scatterers.

V. SPONTANEOUS-EMISSION RATES

A. Emission rates in layered dielectrics

In the golden-rule approximation, the spontaneous-emission rate of an atom at position R, with transition frequency Ω and dipole moment μ, is given by the mode expansion
$$\Gamma(\mathbf{R}, \Omega, \boldsymbol{\mu}) = \frac{\pi\Omega\mu^2}{\hbar\varepsilon_0}\sum_l\left|\hat{\boldsymbol{\mu}}\cdot\mathbf{E}_l(\mathbf{R})\right|^2\delta(\Omega - \omega_l) . \tag{45}$$
The $\mathbf{E}_l$ are the normal-mode solutions with eigenfrequencies $\omega_l$ of the wave equation (4). The spontaneous-emission rate can alternatively be expressed in terms of the Green function of the medium,
$$\Gamma(\mathbf{R}, \Omega, \boldsymbol{\mu}) = -\,\frac{2\Omega^2\mu^2}{\hbar\varepsilon_0 c^2}\,\mathrm{Im}\left[\hat{\boldsymbol{\mu}}\cdot\mathbf{G}(\mathbf{R}, \mathbf{R}, \Omega)\cdot\hat{\boldsymbol{\mu}}\right] \tag{46}$$
(see [40] for early derivations of this relation; in [41] a modern derivation is given for inhomogeneous and absorbing dielectric media). In Eq. (46), G is the classical dyadic Green function of the electric-field wave equation (4). For homogeneous dielectrics, it is known that the total Green function is the sum of a transverse part that describes radiative decay and a longitudinal part describing nonradiative decay [42]. Here, Eqs. (45) and (46) are equivalent, because nonradiative decay is absent for dielectrics with real dielectric functions. Layered dielectrics (not necessarily plane scatterers) are translation invariant in two directions, which can be chosen to be the $(\hat{\mathbf{x}}, \hat{\mathbf{y}})$ directions. Spontaneous-emission rates then only depend on the z-coordinate of the atomic position R = (x, y, z).
It is then easiest to first calculate the Green function in the plane representation, $\mathbf{G}(k_\parallel, z, z, \Omega)$. This Green function must be Fourier transformed back to real space, as in Eq. (40), in order to find the local Green function of Eq. (46) that determines spontaneous-emission rates. A slight complication in doing the integration (40) is that the plane representation for $\mathbf{G}(k_\parallel, z, z, \Omega)$ co-rotates with the incoming wave vector $\mathbf{k}_\parallel$, a variable that must now be integrated over. A fixed basis $\{\hat{\mathbf{x}}, \hat{\mathbf{y}}, \hat{\mathbf{z}}\}$ is needed instead, and it is chosen such that the atomic dipole becomes $(\mu_x, 0, \mu_z)$ in the new representation. Write the two-dimensional integral $\int d^2\mathbf{k}_\parallel$ in polar coordinates as $\int_0^\infty dk_\parallel\,k_\parallel\int_0^{2\pi}d\hat{k}_\parallel$. After doing the angular integral, only diagonal elements of the dyadic Green function survive. The total spontaneous-emission rate is the sum of two contributions, the perpendicular and the parallel decay rate:
$$\Gamma_z(z, \Omega) = -\,\frac{\mu_z^2\,\Omega^2}{\pi\hbar\varepsilon_0 c^2}\int_0^\infty dk_\parallel\,k_\parallel\;\mathrm{Im}\,G^{zz} , \tag{47a}$$
$$\Gamma_x(z, \Omega) = -\,\frac{\mu_x^2\,\Omega^2}{2\pi\hbar\varepsilon_0 c^2}\int_0^\infty dk_\parallel\,k_\parallel\;\mathrm{Im}\left[G^{ss} + G^{vv}\right] \tag{47b}$$
(Green-function arguments $(k_\parallel, z, z, \Omega)$ were again dropped). The parallel decay rate has contributions both from s- and from p-polarized light (through $G^{ss}$ and $G^{vv}$, respectively), whereas the perpendicular decay rate only has a p-polarized decay channel (through $G^{zz}$). Notice that the (real) delta-function term in $G^{zz}_0$ does not play a role in the emission rates. The spontaneous-emission rates in Eqs. (47a) and (47b) are integrals over all possible lengths of the in-plane wave vector. Both rates can be subdivided into a propagating-mode (or radiative-mode) rate, corresponding to the integration of $k_\parallel$ from 0 to Ω/c, and a guided-mode rate, which is the integral from Ω/c to infinity.

B. Spontaneous emission near plane scatterers

The general expressions obtained in Sec. V A for spontaneous emission in layered structures will now be applied to crystals of plane scatterers. Combine the general expressions (47) for spontaneous-emission rates in layered dielectrics with the Green functions in the plane representation that were determined in Eq. (41) for a crystal of plane scatterers. Because of the absence of p-polarized guided modes, the parallel decay rate near plane scatterers can be subdivided into three (instead of four) parts: an s-polarized radiative-mode rate (sr), a p-polarized radiative-mode rate (pr), and an s-polarized guided-mode rate (sg). Again because of the absence of p-polarized guided modes, the perpendicular decay rate $\Gamma_z$ is purely radiative. The nonzero partial decay rates are
$$\Gamma^{sr}_x = -\,\frac{\mu_x^2\,\Omega^2}{2\pi\hbar\varepsilon_0 c^2}\int_0^{\Omega/c}dk_\parallel\,k_\parallel\;\mathrm{Im}\,G^{ss} , \tag{48a}$$
$$\Gamma^{pr}_x = -\,\frac{\mu_x^2\,\Omega^2}{2\pi\hbar\varepsilon_0 c^2}\int_0^{\Omega/c}dk_\parallel\,k_\parallel\;\mathrm{Im}\,G^{vv} , \tag{48b}$$
$$\Gamma^{sg}_x = -\,\frac{\mu_x^2\,\Omega^2}{2\pi\hbar\varepsilon_0 c^2}\int_{\Omega/c}^{\infty}dk_\parallel\,k_\parallel\;\mathrm{Im}\,G^{ss} , \tag{48c}$$
$$\Gamma_z = \Gamma^{pr}_z = -\,\frac{\mu_z^2\,\Omega^2}{\pi\hbar\varepsilon_0 c^2}\int_0^{\Omega/c}dk_\parallel\,k_\parallel\;\mathrm{Im}\,G^{zz} . \tag{48d}$$
To be precise, in Eq. (48d) it was used that the tensor element $G^{zz}$ has a vanishing imaginary part (leading to a vanishing contribution to the density of states) for $k_\parallel > \Omega/c$; for the same reason, there is no guided-mode rate analogous to Eq. (48c) corresponding to $G^{vv}$. These properties were found in Sec. IV B. With all partial emission rates spelled out, we first study spontaneous-emission rates near a single plane, for which the Green function in the plane representation (41) features the single-plane T-matrix of Eq. (28). In Fig. 4(a), spontaneous-emission rates as a function of position are plotted for $D_{\mathrm{eff}} = 0.46a$. For both orientations of the dipole, far away from the plane the rate approaches the free-space value. Close to the plane, $\Gamma_x$ is larger than $\Gamma_0$, but it consists of a rate into propagating modes that is less than $\Gamma_0$, and a guided-mode rate. Close to the plane, $\Gamma_z$ is also larger than $\Gamma_0$, but the maximal values of (the purely radiative) $\Gamma_z$ occur somewhat away from the plane.
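Curves like those in Fig. 4 can be generated with a few lines of numerics. The following sketch computes the s-polarized radiative and guided parts of $\Gamma_x$ near a single plane (assuming, as above, the single-plane $T^{ss}$ of Eq. (28) with the model potential of Sec. III F; the result is normalized to the free-space value of the same integral, so overall prefactors and sign conventions drop out):

```python
import numpy as np

def _trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def gamma_x_s(z, a_over_lambda, D_eff_over_a, a=1.0, c=1.0, eta=1e-3):
    """Relative s-polarized partial decay rates (radiative, guided) of a
    dipole parallel to a single plane at z = 0, evaluated at height z.

    Implements the k_par integrals of Eqs. (48a) and (48c) for the ss channel,
    with G^ss(k_par, z, z) from the Dyson equation (41):
        G^ss = g0(z, z) + g0(z, 0) T^ss g0(0, z).
    The infinitesimal positive imaginary part of the frequency is kept finite
    (eta), so the guided-mode pole becomes a narrow Lorentzian that the
    quadrature can resolve.
    """
    omega = 2 * np.pi * a_over_lambda / a * (1 + 1j * eta)
    k = omega / c
    kr = abs(k)
    kpar = np.linspace(1e-6 * kr, 10 * kr, 400000)
    kz = np.emath.sqrt(k ** 2 - kpar ** 2)
    V = -D_eff_over_a * a * k ** 2
    Tss = V / (1.0 - V / (2j * kz))                 # single-plane T, Eq. (28)
    g0_zz = 1.0 / (2j * kz)                         # g0(z, z)
    g0_z0 = np.exp(1j * kz * abs(z)) / (2j * kz)    # g0(z, 0) = g0(0, z)
    Gss = g0_zz + g0_z0 * Tss * g0_z0               # Dyson equation (41)
    y = kpar * np.imag(Gss)
    y0 = kpar * np.imag(g0_zz)                      # free-space reference
    rad_mask = kpar < kr
    rad = _trapz(y[rad_mask], kpar[rad_mask])
    guided = _trapz(y[~rad_mask], kpar[~rad_mask])
    free = _trapz(y0[rad_mask], kpar[rad_mask])
    return rad / free, guided / free

# Close to the plane the guided contribution dominates the s channel:
print(gamma_x_s(z=0.1, a_over_lambda=0.5, D_eff_over_a=0.46))
```

The $G^{vv}$ and $G^{zz}$ channels of Eqs. (48b) and (48d) follow in exactly the same way from the corresponding components of Eq. (18).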
The contribution of radiative and guided s-waves for an atom with $\mu = \mu_x$ is the same as for scalar waves with "scalar dipole moment" μ, but since the total decay rate $\Gamma_0$ for vector waves is larger than for scalar waves, the relative contributions of s-waves to $\Gamma/\Gamma_0$ are smaller for vector waves (by a factor 3/4). In Fig. 4(b), the same rates are plotted, this time for a plane with $D_{\mathrm{eff}} = 10a$ that reflects light almost ideally: near the plane, $\Gamma_z$ is almost twice $\Gamma_0$. The maximum values of $\Gamma_z$ still occur away from the plane, although this has become invisible in Fig. 4(b). The propagating-mode part of $\Gamma_x$ has decreased and is practically zero at the plane. The partial emission rate into the guided mode has a much larger (but finite, not shown) amplitude near the plane. The other prominent difference between the two figures is that the 'spike' in the emission rates due to the guided mode has become much narrower. Indeed, from Eqs. (39)-(41) it follows that the guided-mode rate decays exponentially away from the plane, like $\exp(-2\kappa^{(1)}_1|z|)$. It follows from the dispersion relation for $\kappa^{(1)}_1$ that was obtained in Sec. IV B that an increase in a/λ or in $D_{\mathrm{eff}}$ will give narrower spikes. In the limit that the atomic position z approaches the plane position $z_\alpha = 0$, the spontaneous-emission rates can be calculated analytically for both dipole orientations. For a dipole perpendicular to the planes,
$$\frac{\Gamma_z}{\Gamma_0} = 2 - \frac{3}{2\xi^2}\left[(1+\xi^2)\,\frac{\arctan\xi}{\xi} - 1\right] , \tag{49}$$
where the dimensionless parameter ξ is defined as $\pi D_{\mathrm{eff}}/\lambda$. Similarly, for a dipole parallel to the plane, the three partial contributions to the decay rate can also be expressed in terms of the parameter ξ alone:
$$\frac{\Gamma^{sr}_x}{\Gamma_0} = \frac{3}{4}\left[1 - \xi\arctan\frac{1}{\xi}\right] , \qquad \frac{\Gamma^{pr}_x}{\Gamma_0} = \frac{3}{4\xi^2}\left[1 - \frac{\arctan\xi}{\xi}\right] , \qquad \frac{\Gamma^{sg}_x}{\Gamma_0} = \frac{3\pi\xi}{4} . \tag{50}$$
In Fig. 5 the relative rates are plotted as a function of ξ. The results can be checked in two limiting cases: if $D_{\mathrm{eff}} = 0$ there is no plane, and then indeed both $\Gamma_z$ and $\Gamma_x$ are equal to the free-space value $\Gamma_0$. The other limit is that of a perfect mirror, when $D_{\mathrm{eff}}$ (and consequently ξ) is sent to infinity. This limit is not visible in the figure, but the limiting values are $\Gamma_z/\Gamma_0 = 2$ and $\Gamma_x/\Gamma_0 = 0$. These values indeed agree with the well-known emission rates for atoms near perfect mirrors [21,22,23]. Emission rates into guided modes vanish in the perfect-mirror limit; at $z = z_\alpha$ this is only the case if $D_{\mathrm{eff}}$ is sent to infinity before z is put equal to $z_\alpha = 0$. This completes the discussion of emission rates near a single plane. Now consider emission rates inside and near a crystal of N plane scatterers. Results will be presented for N = 10. In Figs. 6(a-d), orientation-dependent spontaneous-emission rates are plotted for several frequencies. For clarity, the positions of the planes at a, 2a, ..., 10a are not shown as vertical lines this time. The most striking difference between $\Gamma_x$ and $\Gamma_z$ is that $\Gamma_x$ becomes very spiky near the planes, because only parallel dipoles can couple to the s-polarized guided modes (and because p-polarized guided modes are absent in our model). As the frequency increases from Fig. 6(a) to Fig. 6(d), $\Gamma_x$ becomes more spiky, because the partial emission rate into the guided modes (the difference between the solid and the dotted lines in Fig. 6) becomes more concentrated near the planes. The term 'concentration' is appropriate here, because the maximum amplitudes near the planes are higher for narrower spikes. The same effect was observed for the single plane in Figs. 4(a,b), where the frequency is kept constant and $D_{\mathrm{eff}}$ is increased instead.
The purely radiative-mode rate $\Gamma_z$ on average increases due to the presence of the planes, whereas the radiative part of $\Gamma_x$ on average decreases. The same behavior occurs near a single plane (Fig. 4) and near a perfect mirror. Figure 1 showed that the optical modes of p-polarized light have discontinuities at the plane positions. There are also discontinuities in the spontaneous-emission rates (not for a single plane, for symmetry reasons), but these are too small to be visible in Fig. 6. That they are small can be understood from the fact that the discontinuities per mode are averaged out in the emission rate. The dotted lines in Fig. 6 are the radiative parts of $\Gamma_x$. These are similar to the radiative-mode rates for scalar waves [5], but not identical, since not only s-polarized but also p-polarized light contributes to $\Gamma_x$, according to Eq. (47b). In particular, far away from the planes, the emission rate of dipoles parallel to the planes consists of 75% s-polarized and 25% p-polarized light. (To be sure, light emitted by perpendicular dipoles is 100% p-polarized for all layered dielectrics; see Eq. (47a).) For 'large enough' photonic crystals, one expects inner unit cells to have optical properties similar to unit cells in the infinite crystal. Are the ten-plane crystals large enough? This depends on the properties of a single plane. In two extreme cases, the crystal size does not matter: when the individual planes do not reflect any light in any direction ($D_{\mathrm{eff}}/a = 0$), or when they reflect all light ($D_{\mathrm{eff}}/a = \infty$), one finds the same emission rates in finite and in infinite crystals, because for $D_{\mathrm{eff}}/a = 0$ we have the free-space case, and for $D_{\mathrm{eff}}/a = \infty$ all unit cells are optically disconnected. Only in intermediate cases ($0 < D_{\mathrm{eff}}/a < \infty$) can finite photonic crystals have an appreciable and unit-cell-dependent influence on spontaneous-emission rates. In the intermediate cases (we assumed $D_{\mathrm{eff}}/a = 0.46$), the single-plane reflection also depends on the frequency of the light. The planes reflect light better at higher frequencies, because material dispersion was neglected. It appears that for the lowest frequency considered in Fig. 6, a/λ = 0.2 in (a), the emission rates in inner unit cells are already influenced considerably by the crystal. The rates vary between neighboring inner unit cells, which indicates that the rates have not yet converged to the infinite-crystal values. For higher frequencies, say a/λ = 0.6, individual planes reflect light much better. In the corresponding Fig. 6(d), all inner unit cells look alike and the emission rates have converged. Now consider a parallel dipole at a fixed position, very close to a plane in an inner unit cell. From Fig. 6, one can also appreciate how the guided modes will influence the (frequency-dependent) emission rate of a dipole very close to a plane. For frequencies a/λ = 0.4 and higher, Figs. 6(b-d) show that emission into propagating modes is negligible compared to emission into guided modes. The dipole falls inside the guided-mode spike near the plane. As the frequency increases, the maximum amplitude of the spike increases. With the dipole still well inside the spike, the emission rate of the dipole will increase as well. When the frequency is increased further, the spike becomes so narrow that the dipole ends up in one of the wings of the spike, until the dipole finds itself completely outside it. This will cause the emission rates into guided modes to drop at higher frequencies.
The combined effect is a peak in the frequency-dependent emission rates. Indeed, in [19] the dipole emission rate (or more precisely, its s-wave component) for infinite crystals as a function of frequency shows a pronounced peak for dipole positions z near a plane (see Fig. 3 in [19]). We can unambiguously attribute this peak to emission into the guided modes. We can also understand how an emission peak will depend on D_eff and on the distance to the plane. We have seen that spikes are narrower, with higher amplitudes, for larger D_eff and for higher frequencies. For a larger D_eff, a dipole at a fixed distance will feel a guided-mode-enhanced emission rate already at lower frequencies. However, the dipole will also begin to fall outside the range of the spike at lower frequencies. This explains why for a fixed dipole position and increased D_eff, the emission peak has a larger amplitude and attains its maximum at a lower frequency, precisely as seen in Fig. 3 in [19]. Similar reasoning suggests that for a parallel dipole a bit further away from a plane but still close to it (while keeping D_eff fixed), an emission peak will occur at lower frequencies, with lower amplitude.

What can be appreciated best in Fig. 6(a) is that for parallel dipoles the total emission rate converges faster than the radiative and guided-mode partial rates separately. This can be related to a kind of mode competition between s-polarized propagating and guided modes that we also found for scalar waves [5]. On the other hand, for perpendicular dipoles all emission is into radiative modes, so mode competition is absent there and convergence sets in earlier. For scalar waves the ten-plane structure could act as an omnidirectional mirror, whereas in Sec. IV A it was found that it is not an omnidirectional mirror for vector waves. Correspondingly, the radiative-mode LDOS for scalar waves at a/λ = 0.5 dropped down to (almost) zero inside the ten-plane omnidirectional mirror, whereas the emission rates in Fig. 6(c) show that the radiative LDOS for vector waves stays nonzero inside the crystal. In the inner unit cells, dipoles parallel to the planes emit predominantly guided light. The small amount of light that leaves the structure is strongly p-polarized. This is the case around a/λ = 0.5 only, where s-polarized light is omnidirectionally reflected. Such strongly polarized emission is not a peculiarity of the plane-scatterer model, because it will also occur for a real Bragg mirror whenever light of only one of the two polarization directions is omnidirectionally reflected. In the other plots in Fig. 6, for higher and lower frequencies, the radiative-mode parts of Γ_x are the sums of emission rates into both polarization directions.

VI. CONCLUSIONS, DISCUSSION, AND OUTLOOK

A theory was set up for the multiple scattering of vector waves by parallel planes, thereby generalizing previous work for scalar waves to the more interesting but also more complicated case of light waves. Unlike for scalar waves, the Green function had to be regularized. This was accomplished by introducing a high-momentum cutoff. An effective scattering theory emerged with a nonzero T-matrix that no longer depends on the cutoff. The T-matrix and Green-function formalism turned out to be very convenient for the calculation of propagating and guided modes, as well as spontaneous-emission rates, of finite photonic crystals of plane scatterers. A non-absorbing plane scatterer satisfies a separate optical theorem for s- and p-polarized light.
The radiative and guided modes of s-polarized light could be mapped onto modes for scalar waves. The s-polarized modes are continuous, whereas p-polarized modes have discontinuities at the plane positions. Throughout the paper, we have stressed the similarities and differences of the optical properties of a plane scatterer, or a crystal of plane scatterers, as compared to the corresponding dielectric slab structures. It turns out that p-polarized waves differ more in the two cases than s-polarized waves. First, propagating p-polarized modes differ in the two cases because the Brewster angle is 90° for plane scatterers. Second, p-polarized guided modes in finite slab structures have no analogues for plane scatterers. This was also found in [15] for the single plane and in [15,20] for the infinite crystal. Unlike for scalar waves, equidistant and identical plane scatterers cannot be an omnidirectional mirror for all vector waves. Such omnidirectional mirrors consisting of dielectric layers do exist. For layered media, at least three refractive indices are required in order to prevent complete transmission of p-polarized light in the Brewster-angle direction [3,4,43].

Omnidirectional reflection is a property of a dielectric for external light sources. In this paper, omnidirectional reflection could be related to the emission properties of atomic light sources from within the finite crystals. The graphs of emission rates (Fig. 6) show that the emission by dipoles oriented parallel to the planes is affected much more strongly by the planes than emission by perpendicular dipoles. This is a characteristic of the plane-scatterer model, because the absence of p-polarized guided modes is responsible for much of the difference. In the frequency interval where s-polarized light is omnidirectionally reflected, all light that exits the crystal after a spontaneous-emission process will be p-polarized, irrespective of the orientation of the emitters. Still, the major fraction of the light will be emitted into guided modes and stay inside the crystal.

For low frequencies, the single-plane reflectivity is lower, emission rates are less affected by the crystal, and finite-size effects are appreciable also in the inner unit cells of the ten-plane crystal. For higher frequencies, planes reflect light better and emission rates are more strongly modified. In the inner unit cells, the emission rates then converge faster to the values of the unit cell of an infinite crystal. If the infinite crystal has larger variations in the emission rates inside a unit cell, then a smaller finite crystal is needed to converge to this result. We argued that the guided modes will give rise to a peak in the frequency-dependent emission rate of a parallel dipole close to a plane. We also reasoned that the peak shifts to lower frequencies when either the effective thickness or the distance to the plane is increased, with a larger amplitude in the former case and a smaller amplitude in the latter. Indeed, the occurrence of a peak and its dependence on the effective thickness agree with numerical calculations on infinite crystals in [19]; some more numerical work is needed to corroborate our prediction of the distance dependence.

What other purposes can plane scatterers serve in the future? The finite photonic crystals that can be made with them have one-dimensional periodicity only, yet light propagation in all three dimensions for all polarization directions is properly taken into account.
The number of planes can be chosen at will, and further advantages are that all optical modes and the complete Green tensor can be determined. Our calculations can be extended to situations where not all planes are identical; one could allow light absorption or gain in the planes by giving the effective thickness a complex value, or the number of planes per unit cell could be increased to more than one. Such calculations are possible in our formalism because the results (34) and (35) for the T-matrix were generalized in Eq. (56) of the Appendix to planes chosen at arbitrary positions, each with a different T-matrix T_α(k_∥, ω), in other words with a different effective thickness. The model can therefore also be used to study the effects of disorder in the positions or in the optical properties of the planes on the spontaneous-emission rates of embedded atoms. It would also be interesting to study models of finite photonic crystals built up of non-parallel planes. This can be done, but numerically the model would become more involved, not so much because the planes have an overlap of measure zero, but rather because the plane representations for nonparallel planes will be different. Numerical calculations for nonparallel planes require the discretization of individual planes.

Apart from extending the model, one can extract other interesting observables from it. For example, the knowledge of the complete Green function makes it possible to calculate both far-field and near-field spectra of atoms embedded in the finite crystal. It would be interesting to study spectra near frequencies where the corresponding infinite crystal gives rise to a Van Hove singularity in the emission rates [20]. Crystals of plane scatterers can serve as a model environment to study the modification of several quantum optical processes when atoms are embedded in photonic crystals. Transient effects in the spontaneous-emission rates are but one example, thereby generalizing work done on a one-dimensional cavity formed by two planes [44]. Calculations are underway that show photonic-crystal-induced modifications of cooperative atomic processes.
High Sensitivity and Precision High-Temperature Reversed-Phase LC Analysis of Bevacizumab for Intact Bioanalysis of Therapeutic Monoclonal Antibodies

We optimized several analytical conditions for more sensitive and precise HT-RPLC analysis of the therapeutic monoclonal antibody (mAb) bevacizumab. Specifically, we (1) optimized the sample preparation process to reduce adsorption and aggregation of bevacizumab, (2) introduced a sample concentration process using a centrifugal ultrafiltration unit to increase detection sensitivity, and (3) used another therapeutic mAb as an internal standard to improve analytical precision. The optimized method for bevacizumab analysis was shown to have low detection and quantification limits of 0.010 and 0.032 µg/mL, respectively, a good correlation coefficient (r² > 0.9997), and good intra- and inter-day precisions within 12.0%. This study provides an important methodology for the intact bioanalysis of therapeutic mAbs, not merely their LC measurement.

Introduction

In recent years, therapeutic monoclonal antibodies (mAbs) and related drugs have been widely used in the treatment of various ailments such as cancer, rheumatoid arthritis, and autoimmune and infectious diseases. As of December 2016, over 60 therapeutic mAbs had been approved in Japan, the United States, and Europe. Among the ten top-selling pharmaceuticals in the world, five are therapeutic mAbs, and their use is expected to continue to expand in the future [1-3]. The pharmacokinetics (PK) and pharmacodynamics (PD) of therapeutic mAbs are very complicated compared with those of low-molecular-weight drugs [4,5]. These profiles are concentration-dependent for many mAbs, which exhibit non-linear PK due to factors such as neonatal Fc receptor (FcRn)-mediated protection from catabolism, binding to target antigens, and transport processes [5-8]. To date, PK and PD analyses of therapeutic mAbs have been performed mainly by ligand-binding assays (LBAs), such as enzyme-linked immunosorbent assay (ELISA) [9,10]. Although LBAs permit high-sensitivity and high-throughput analysis, there is some potential for cross-reactivity of capture antibodies and low accuracy [11]. In contrast, tryptic-digestion liquid chromatography-tandem mass spectrometry (LC-MS/MS) methods have been applied to the analysis of therapeutic mAbs in serum or plasma samples [12-18]. These methods enable sensitive bioanalysis of therapeutic mAbs, but also present several limitations, such as time-consuming trypsin digestion and manual purification of tryptic peptides using solid-phase extraction cartridges. It is difficult to control the accuracy of the pretreatment process and peptide analysis [12,14].

We have recently developed simple and rapid quantification methods for the therapeutic mAbs bevacizumab and infliximab in the plasma of cancer and rheumatoid arthritis (RA) patients, using a combination of immunoaffinity magnetic purification and high-temperature reversed-phase LC (HT-RPLC) with fluorescence detection [19]. In this method, target drugs in plasma samples are purified using immunoaffinity magnetic beads immobilized with anti-idiotype mAbs. The purified drugs are separated further using HT-RPLC, which enables excellent separation of mAbs with good peak shape, using a large-pore-size octyl column [20,21]. The separated drugs are detected with high sensitivity by their native fluorescence. This method requires no tryptic digestion or expensive LC-MS/MS instruments.
Although this method was successfully applied to clinical analyses, several problems remain, such as decreased quantitative performance at low concentrations due to adsorption and/or aggregation of mAbs, insufficient sensitivity for trough-concentration analysis using native fluorescence detection, and the necessity for rigorous sample preparation when no internal standard is used. In this study, to overcome these problems, we optimized several analytical conditions to allow more sensitive and precise HT-RPLC analysis of the anti-cancer drug bevacizumab. Specifically, we (1) optimized the sample preparation process to reduce adsorption and aggregation of mAbs, (2) introduced a sample concentration process using a centrifugal ultrafiltration unit to increase detection sensitivity, and (3) used another therapeutic mAb as an internal standard to maintain accuracy. This study provides an important methodology for the intact bioanalysis of therapeutic mAbs, not merely their LC measurement.

Reagents and solutions

Deionized and distilled water, purified using the ELGA Purelab Flex system (ELGA, Marlow, UK), was used to prepare all aqueous solutions. LC-grade acetonitrile, isopropanol, and methanol were purchased from Kanto Chemicals (Tokyo, Japan). Bevacizumab (Avastin® 400 mg/16 mL Intravenous Infusion), tocilizumab (ACTEMRA® 80 mg for Intravenous Infusion), and trastuzumab (HERCEPTIN® Intravenous Infusion 150, 150 mg/7.2 mL) were produced by Chugai Pharmaceutical (Tokyo, Japan). Infliximab (REMICADE for Intravenous Infusion 100) was produced by Mitsubishi Tanabe Pharma (Osaka, Japan). Trehalose dihydrate, polysorbate 20, and trifluoroacetic acid for amino acid sequence analysis were purchased from Wako (Osaka, Japan) and Sigma-Aldrich (St. Louis, MO, USA). All other chemicals were of the highest purity available and were used as received.

Preparation of therapeutic mAb solutions

The Avastin preparation (100 mg/4 mL) contained 23.2 mg sodium dihydrogen phosphate, 4.8 mg disodium hydrogen phosphate, 240 mg trehalose dihydrate, and 20 mg polysorbate 20 as additives [22]. Therefore, an aqueous solution having the same composition was used for dilution of therapeutic mAbs. Bevacizumab (0.1-10 μg/mL) and other mAb (10 μg/mL) solutions were prepared. To 450 μL of each bevacizumab solution, 50 μL of 10 μg/mL trastuzumab (internal standard: IS) solution was added, and the mixture then underwent the concentration procedure described in Section 2.3. The resulting solution was analyzed by HT-RPLC, as described in Section 2.1.

Concentration procedure for bevacizumab solution

For centrifugal ultrafiltration, an Amicon Ultra Centrifugal Filter device (MWCO 100 kDa, 0.5 mL, Merck, Darmstadt, Germany) was used in accordance with the manufacturer's operating instructions. The device consisted of a housing, a membrane, and a collection tube, composed of styrene/butadiene copolymer, low-adsorption regenerated cellulose membrane, and polypropylene, respectively. Prior to use, filter units were washed with 10% methanol and water. An aliquot of bevacizumab solution (500 μL), prepared by dilution with the sample preparation solution, was loaded onto the centrifugal ultrafiltration device, and the resulting concentrated solution (ca. 25 μL) was analyzed by HT-RPLC. In this way, 500 μL of sample solution could be concentrated 20-fold, to approximately 25 μL.
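As a quick check of the sample-preparation arithmetic above, the short Python snippet below reproduces the internal-standard level after spiking and the nominal volumetric concentration factor; all volumes and concentrations are taken from the text.

```python
# Sample-preparation arithmetic from the procedure described above.
sample_vol_ul = 450.0            # bevacizumab solution
is_vol_ul = 50.0                 # trastuzumab (IS) spike
is_stock_ug_per_ml = 10.0        # IS stock concentration

total_vol_ul = sample_vol_ul + is_vol_ul
is_conc = is_stock_ug_per_ml * is_vol_ul / total_vol_ul
print(f"IS level after spiking: {is_conc:.1f} ug/mL")        # 1.0 ug/mL

# Centrifugal ultrafiltration: 500 uL concentrated to ~25 uL.
concentration_factor = total_vol_ul / 25.0
print(f"nominal concentration factor: {concentration_factor:.0f}-fold")  # 20-fold
```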
Method validation

The proposed analytical method was evaluated partially based on the FDA bioanalytical method validation guidance [23]. To obtain the validation parameters (intra- and inter-day precisions, accuracy, linearity, limit of detection (LOD), and limit of quantification (LOQ)), peak areas were integrated by LabSolutions LC, and the baseline-to-baseline method was used for quantification.

Precision

The precision of the assays was determined by the repeated evaluation of bevacizumab samples at five concentrations (0.1, 0.5, 1, 5, and 10 μg/mL; n = 5). For intra-day precision, these levels were analyzed three times daily, whereas for inter-day precision, samples at the same concentrations were analyzed three times daily for three days (n = 9).

Accuracy

The accuracy was determined by the repeated evaluation of QC samples at three concentrations (0.1, 1, and 10 μg/mL; n = 5). The acceptance criteria for bias were < 20% at 0.1 μg/mL and < 15% at the other concentrations.

Calibration curve, limit of detection, and limit of quantification

For the quantitative analysis, calibration standard solutions (n = 5) with concentrations ranging from 0.1 to 10 μg/mL (0.1, 0.5, 1, 5, 10 μg/mL) were prepared by diluting the stock solutions. The calibration curve equations were determined using least-squares linear regression. The limit of detection (LOD) and the lower limit of quantification (LOQ) were determined from signal-to-noise ratios of 3 and 10, respectively (see the illustrative sketch below).

Effect of additives in bevacizumab solution

Aqueous solutions of bevacizumab were prepared by diluting with sample preparation solution having the same composition as the Avastin preparation. For comparison, aqueous solutions of bevacizumab at the same concentrations, diluted with water, were also analyzed. The chromatograms for bevacizumab solutions prepared with either sample preparation solution or water are shown in Fig. 1a, and the corresponding calibration curves in Fig. 1b. The difference in the retention times of the two peaks was considered to be due to the influence of additives in the sample solutions. In the sample solution diluted with water, a drastic decrease in peak intensity was observed for concentrations less than 10 μg/mL, and the linearity of the calibration curve suffered (r² = 0.9949). These results indicate that bevacizumab was adsorbed onto the sample vial and pipette tip by hydrophobic interaction. In contrast, when diluted with the sample preparation solution, adsorption and aggregation were suppressed, and increased peak intensities and improved linearity of the calibration curve (r² = 0.9998) were observed. The addition of phosphate salts stabilized the pH, and the addition of trehalose suppressed the denaturation, aggregation, and adsorption of mAbs through its strong hydration force [24]. Polysorbate 20 contributed to suppressing the adsorption of mAbs onto containers and the aggregation of mAbs by acting as a solubilizing surfactant [25].

Effect of concentration procedure on detection sensitivity

Figure 2 shows the chromatograms of bevacizumab solutions at each concentration before and after the concentration procedure, and Table 1 compares the fluorescence intensities at each concentration before and after concentration. Through the concentration procedure, peak intensities increased 5.2- to 17.7-fold at each concentration, and the calibration range could be lowered from 1 μg/mL to 0.1 μg/mL.
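To illustrate the validation computations referred to above (least-squares calibration and S/N-based LOD/LOQ), here is a minimal Python sketch. The response and noise values are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical calibration data (ug/mL vs. detector response); real values
# would come from the HT-RPLC peak areas.
conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
resp = np.array([0.52, 2.48, 5.10, 24.7, 50.3])

# Least-squares linear regression, as in the validation section above.
slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((resp - pred) ** 2) / np.sum((resp - np.mean(resp)) ** 2)
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, r^2 = {r2:.4f}")

# LOD and LOQ from signal-to-noise ratios of 3 and 10: the concentration
# whose predicted signal equals 3x (resp. 10x) the baseline noise level.
noise = 0.015  # hypothetical baseline noise in response units
lod = 3 * noise / slope
loq = 10 * noise / slope
print(f"LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
```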
However, the concentration ratio for each sample was not constant, which degraded the linearity of the calibration curve. This may be the result of hydrophobic adsorption of bevacizumab onto the filter unit and/or the housing of the concentrating device. We addressed this decreased linearity by introducing an internal standard method.

Table 1. Relative peak intensities of bevacizumab solution with and without the concentration procedure, and concentration ratios.

Improvement of analytical accuracy by an internal standard compound

In choosing the internal standard compound, we considered its separation from bevacizumab and an appropriate retention time. As shown in Fig. 3, four commercially available therapeutic mAbs (trastuzumab, infliximab, tocilizumab, bevacizumab) could be well separated by HT-RPLC, with trastuzumab being separated farthest from bevacizumab (R_s > 1.5). From these results, trastuzumab was selected as the internal standard compound. (Each peak in Fig. 3 corresponds to 100 µg/mL.)

Figure 4 shows HT-RPLC chromatograms of bevacizumab and the IS after the optimized preparation and concentration procedure. The calibration curve for bevacizumab, calculated from the peak height ratio between bevacizumab and the IS, showed a good correlation coefficient (r² > 0.9997). The LOD and LOQ of bevacizumab were 0.010 and 0.032 µg/mL, respectively. In contrast, when analyzed by the same LC system without the concentration procedure, these values were 0.209 and 0.696 µg/mL, respectively. Thus, the concentration process achieved 21-fold higher sensitivity. The intra- and inter-day assay precisions obtained by six-replicate analysis of bevacizumab ranged from 3.26-10.7% and 3.25-12.0%, respectively.

Conclusion

In this paper, we showed that the adsorption of therapeutic mAbs onto containers during sample preparation and/or dilution greatly affects detection sensitivity and quantitative performance in HT-RPLC analysis. Furthermore, by using a diluent with the same composition as the drug formulation for sample preparation, the quantitative performance of the mAb assay could be greatly improved. This method achieved both sufficient sensitivity and excellent quantitative performance in the HT-RPLC analysis of bevacizumab. In the future, the proposed method could be applied to the intact bioanalysis of various therapeutic mAbs and antibody-drug conjugates (ADCs) in combination with immunoaffinity purification. Recently, LC-TOF MS bioanalytical methods that analyze intact mAbs or their light chains have been reported [26,27], and the findings of this study may also be useful in refining these analyses.
Inflammation in depression: is adiposity a cause?

Mounting evidence indicates that inflammation may play a significant role in the development of depression. Patients with depression exhibit increased inflammatory markers, and administration of cytokines and other inflammatory stimuli can induce depressive symptoms. Mechanisms by which cytokines access the brain and influence neurotransmitter systems relevant to depression have also been described, as have preliminary findings indicating that antagonizing inflammatory pathways may improve depressive symptoms. One primary source of inflammation in depression appears to be adiposity. Adipose tissue is a rich source of inflammatory factors including adipokines, chemokines, and cytokines, and a bidirectional relationship between adiposity and depression has been revealed. Adiposity is associated with the development of depression, and depression is associated with adiposity, reflecting a potential vicious cycle between these two conditions which appears to center around inflammation. Treatments targeting this vicious cycle may be especially relevant for the treatment and prevention of depression as well as its multiple comorbid disorders such as cardiovascular disease, diabetes, and cancer, all of which have also been associated with both depression and inflammation.

A sickness syndrome can be induced in animals by the acute administration of proinflammatory cytokines such as IL-1β or TNF-α,7-11 or indirectly via the induction of peripheral immune activation by stimuli such as bacterial endotoxin.12,13 Acute administration of endotoxin, as well as other immune stimuli including typhoid vaccination, causes a similar sickness syndrome in humans that includes depressed mood, decreased social interaction, sleep disturbance, and anhedonia.14,15 This constellation of symptoms, which parallels that found in major depression, has also been consistently observed during chronic administration of cytokines such as IFN-α and IFN-β for illnesses including hepatitis C, multiple sclerosis, and several types of cancer, including malignant melanoma.3
To explore the degree to which cytokine-induced depression parallels depression in ostensibly medically healthy individuals, Capuron et al8 compared 20 patients who were being treated with IFN-α for malignant melanoma with 28 medically healthy subjects with major depression, using the Hamilton Rating Scale for Depression (HAM-D).16 Forty-five percent of the IFN-α-treated patients developed major depression during the 12-week follow-up period. There were minimal differences in the severity of individual depressive symptoms between patients who became depressed during IFN-α treatment and medically healthy depressed individuals, although IFN-α-treated depressed patients did exhibit more psychomotor retardation and weight loss, and the medically healthy depressed group experienced greater feelings of guilt and thoughts of suicide.8 These results suggest that the depression induced by cytokines is remarkably similar to depression seen in medically healthy depressed patients.

Of note, the link between inflammation and depression may explain the frequent association between medical illnesses and depression.17 As shown in Table I, while there are many medical conditions associated with increased rates of depression, the majority of these illnesses are also associated with increased inflammation, including not only infectious diseases and cancer but also cardiovascular disease and diabetes, both of which are now recognized to have an inflammatory component.18 Of note, when depression occurs in the context of medical illness, it has been associated with increased concentrations of inflammatory cytokines. For example, several studies have shown that depressed patients with cancer19-22 or cardiovascular disease23 have higher peripheral blood concentrations of IL-6 and CRP. Moreover, depression scores have been shown to be strongly correlated with blood cytokine concentrations in these patients.24

How do cytokines cause depression?

Access to the brain

Peripheral immune activation, such as that seen with local infection, wounding, and/or psychological stress, induces release of IL-1α, IL-1β, IL-6, and TNF-α.5,25-27 However, these cytokines are too large to freely pass through the blood-brain barrier, which raises the question of how a centrally mediated behavioral effect is achieved. Several pathways by which cytokine signals can access the brain have been identified. Local release of cytokines can stimulate peripheral afferent nerve fibers, such as the vagus, that innervate peripheral tissues, ultimately leading to activation of microglia, which can produce cytokines in the brain. In addition, "leaky" regions in the blood-brain barrier such as the circumventricular organs6,28 allow access of peripheral inflammatory mediators to the brain. Cytokines in the peripheral circulation can also cross the blood-brain barrier via saturable active transport molecules expressed on brain endothelial cells.29 Finally, in the context of chronic immune stimulation, microglia activated by peripheral TNF-α can produce the chemokine monocyte chemoattractant protein (MCP)-1, which in turn can attract monocytes into the brain parenchyma.30

Impact on neurotransmitter metabolism

Once cytokine signals reach the brain, there is a rich literature indicating that they can interact with virtually every pathophysiologic domain relevant to depression, including marked effects on brain monoamines, which are the target of conventional antidepressant medications.
Indeed, cytokines have been shown to influence central monoamine synthesis, release, and synaptic reuptake.

Serotonin

Serotonin is synthesized from tryptophan by tryptophan hydroxylase (TH) and aromatic amino acid decarboxylase (AAAD), and the amount of serotonin in the brain is highly dependent on tryptophan availability.31 Specifically, depletion of tryptophan rapidly leads to reduced brain serotonin levels, which in turn can precipitate depressive symptoms in vulnerable individuals.31 Activation of the enzyme indoleamine 2,3-dioxygenase (IDO) (and the related liver enzyme tryptophan 2,3-dioxygenase) opens an alternative pathway for tryptophan metabolism, yielding kynurenine (KYN) and leading to tryptophan depletion and ultimately decreased serotonin in the brain.32,33 Several cytokines and their signaling pathways have been shown to activate IDO34,35 (for a review see Shelton and Miller14). Interestingly, peripheral administration of the cytokine inducer lipopolysaccharide (LPS) to mice activates IDO and is associated with depressive-like behavior.36 These LPS-induced behavioral changes can be reversed by IDO inhibition using the IDO antagonist 1-methyltryptophan.

IDO activation also has other effects that may be relevant to depression. For example, KYN is metabolized to kynurenic acid (KYNA), which antagonizes α7 nicotinic acetylcholine receptors32 and can reduce striatal dopamine release (see below).37,38 KYN is also metabolized to quinolinic acid (QUIN); QUIN leads to the generation of toxic lipid peroxides and activates N-methyl-D-aspartic acid (NMDA) receptors and the release of glutamate, all of which can contribute to neurotoxicity.39 The impact of QUIN on neuronal integrity has been implicated in the pathophysiology of several degenerative neurological conditions including Alzheimer's, Huntington's, and Parkinson's diseases, amyotrophic lateral sclerosis, and human immunodeficiency virus-related dementia.40-47 Of note, IFN-α therapy has also been shown to increase KYN/tryptophan ratios in humans, and KYN has been found to access the brain in IFN-α-treated patients, where it is associated with increased cerebrospinal fluid (CSF) concentrations of both QUIN and KYNA.48,49 CSF KYN and QUIN were in turn correlated with depression during IFN-α treatment.

Aside from its impact on tryptophan and serotonin synthesis, immune activation can also affect serotonin availability by acting on synaptic reuptake via the high-affinity serotonin transporter (5HTT).50 Activation of p38 mitogen-activated protein kinase (MAPK) by both IL-1β and TNF-α leads to phosphorylation of 5HTT and increased neuronal uptake of serotonin.51 Expression52 and trafficking of 5HTT to the cell surface53 are also increased by the activation of p38 MAPK. These effects of cytokines on 5HTT expression and function have been observed both in vitro and in vivo. Of note, polymorphisms in the 5HTT gene have also been associated with the development of depression during cytokine (IFN-α) administration.54

Dopamine

There are several mechanisms by which dopamine may be depleted in the CNS during immune activation, aside from the decreased dopamine release secondary to the α7 nicotinic acetylcholine receptor mechanism described above.32 For example, IFN-α administration to rodents has been associated with depletion of tetrahydrobiopterin (BH4), a cofactor for tyrosine hydroxylase, the rate-limiting enzyme in dopamine synthesis.79
Also, in a mechanism similar to the effects of immune activation on 5HTT, phosphorylation of the dopamine transporter (DAT) by MAPK kinase (MEK) has been shown to increase cell-surface expression of DAT and uptake of dopamine.80 Therefore, relative depletion of synaptic dopamine (via reduced synthesis and release and increased reuptake) may underlie some of the neurovegetative symptoms of sickness behavior and depression, such as low energy, reduced motivation, and reduced response to rewarding stimuli.69,81

The anti-inflammatory effects of antidepressant treatments and the antidepressant effects of anti-inflammatories

There have been a number of in vitro and in vivo studies of antidepressant medications82-98 and other antidepressant treatments such as electroconvulsive therapy99 indicating that antidepressant treatments can reduce proinflammatory factors including IL-2, IL-6, TNF-α, and IFN-γ.1 In fact, the available evidence indicates that many antidepressant therapies induce a shift from a Th1 (proinflammatory) to a Th2/Th3 (anti-inflammatory) pattern.82,87,88,100,101 The IFN-γ to IL-10 or IL-4 ratio is a measure of relative Th1 to Th2/Th3 activity, and a number of studies indicate that antidepressants decrease this ratio.82,87,88 Because these effects have been observed both in vitro and in vivo, they do not appear to be dependent on the actions of these drugs on monoamines such as norepinephrine or serotonin, suggesting a direct impact of antidepressant medications on cytokines.95 Therefore, the mechanism of antidepressant action in the context of inflammation-induced depression may be a direct effect on inflammatory factors themselves.

There is also a small but significant literature indicating that anti-inflammatory drugs may produce antidepressant effects. Cyclooxygenase 2 (COX-2) activity is increased by proinflammatory cytokines, particularly IL-6, and it, in turn, activates the release of IL-1β and TNF-α100 as well as prostaglandin E2 (PGE2), a central mediator of sickness behavior.6 COX-2 inhibitors have been shown to reverse depression-like behaviors in animal models.102-104 In addition, the COX-2 inhibitor rofecoxib has been shown to reduce depressive symptoms in patients with osteoarthritis.105 Adjunctive treatment with the nonselective COX-1/COX-2 antagonist acetylsalicylic acid (aspirin) increased remission rates in one open-label study of depressed patients previously nonresponsive to fluoxetine alone.106 A prospective, double-blind, placebo-controlled trial of the COX-2 antagonist celecoxib (400 mg per day) added to the norepinephrine reuptake inhibitor antidepressant reboxetine (4-10 mg per day) for 6 weeks showed greater effects of the combination treatment than reboxetine alone.107

TNF antagonists such as the monoclonal antibodies infliximab, adalimumab, golimumab, and certolizumab pegol, and the TNF receptor fusion protein etanercept, have been developed in recent years to treat inflammatory and autoimmune diseases such as psoriasis, rheumatoid arthritis, and Crohn's disease. Direct actions in depressed patients have not yet been reported. However, one study of etanercept treatment of psoriasis did examine antidepressant effects.108 Six hundred and eighteen patients with moderate to severe psoriasis received double-blind treatment with placebo or etanercept 50 mg twice weekly for 12 weeks.
Patients on etanercept had greater improvements on measures of depression (as measured by the Beck Depression Inventory) than those on placebo. Notably, these improvements were not associated with reduction in psoriatic plaques or joint pain, which indicates a primary effect of TNF antagonism on depression, not simply a cosmetic or analgesic effect.108 These effects were confirmed in subsequent longer-term studies in psoriasis patients109,110 and in patients with rheumatoid arthritis.111 A similar effect has been shown with the TNF-α monoclonal antibody infliximab.112,113

Adiposity as a possible causal pathway to depression

In considering possible sources of inflammation leading to depression, there has been increasing interest in the role of obesity. Rates of overweight and obesity have increased tremendously in recent years in both adults and children.114-119 Along with this has come an epidemic of related metabolic conditions such as type 2 diabetes, dyslipidemias, cardiovascular and fatty liver disease, and certain forms of cancer.120-122 The bulk of evidence links obesity and its attendant complications to inflammation.123-125 The relationship between depression and obesity appears to be bidirectional, as evidence indicates that being depressed also increases the risk for the subsequent development of obesity, probably mediated, in part, by inactivity.126

Obesity as an inflammatory state

Adipose tissue is now understood to be a very complex organ system.127 White adipose tissue (WAT) is the main location for long-term fat storage in the body. WAT, particularly in the abdomen, is the main contributor to metabolic diseases.122,128,129 Adipocytes in WAT secrete a variety of hormones and inflammatory factors, including cytokines (referred to as adipocytokines or adipokines).130,131 These factors include hormones traditionally associated with adipose tissue such as leptin, adiponectin, resistin, and visfatin; however, adipocytes can also secrete IL-6 and TNF-α.130 Nevertheless, one of the primary mechanisms for the induction of inflammation in adipose tissue is the secretion of chemokines, particularly MCP-1. MCP-1 attracts leukocytes such as macrophages, T lymphocytes, and dendritic cells to adipose tissue, which in turn secrete cytokines including IL-1, IL-6, and TNF-α.132,133 Thus, chemokines and cytokines produced by WAT may contribute to widespread immune activation, potentially causing or exacerbating diseases associated with inflammation such as type 2 diabetes, cardiovascular disease, cancer, and depression.130

Leptin is another important peptide produced by adipocytes that regulates dietary intake. It regulates appetite by acting on leptin receptors in the brain, particularly the hypothalamus.134 In obesity, a state of leptin resistance develops in which circulating levels are actually increased but responsiveness is reduced. Excess calories in the diet lead to leptin resistance; high-fructose feeding is a major contributor.135,136 Leptin is a member of the type I cytokine superfamily;137,138 it is involved in the modulation of the white blood cell response, including T-cell activation and a shift to Th1 cytokine production.137,138 Resistin is another proinflammatory adipocytokine, produced by both WAT and monocytes.130
Resistin sets up a positive inflammatory feedback system: its secretion is increased by proinflammatory cytokines such as IL-1, IL-6, and TNF-α, while it in turn increases the production of these same cytokines by macrophages.130,139 By contrast, adiponectin increases fatty acid oxidation and reduces the synthesis of glucose in the liver.137,138 Adiponectin, whose levels are reduced in obese persons,137 has a predominantly inhibitory role in Th1 immune responses, including the inhibition of IL-6 and TNF-α production and an increase in the anti-inflammatory cytokine IL-10.130 Therefore, dietary excess, leading to expansion of WAT, produces a shift in the pro- and anti-inflammatory mediators such as leptin, resistin, adiponectin, and other adipocytokines, leading to a general proinflammatory state.14 This, in turn, contributes to metabolic derangements and diseases such as dyslipidemias, cardiovascular disease, and type 2 diabetes.123,130,140,141

The activation of inflammatory factors related to obesity also appears to induce the IDO-KYN pathway. Plasma tryptophan concentrations are reduced142 and the KYN/tryptophan ratio is increased in obese relative to lean individuals, indicating IDO activation.142,143 Weight reduction by diet142 or bariatric surgery143 restores a normal KYN/tryptophan balance. This is likely to be the result of a reduction in the proinflammatory state after weight loss.143 It appears, then, that, like other inflammatory diseases, the immune activation found in obesity may shift metabolism from tryptophan to KYN, which may contribute to depression.

Adiposity and depression

Both depression and obesity, then, are associated with Th1 activation. However, is there evidence of a causal link in either direction, ie, from depression to obesity or vice versa? Some larger-scale epidemiological studies have failed to find a strong association between obesity and depression.144,145 Nevertheless, while cross-sectional studies do not show strong correlations between depression and obesity, longitudinal studies tell a very different story.146-149 A recent meta-analysis of 15 longitudinal studies showed a bidirectional association between depression and obesity (especially abdominal adiposity) in which prior obesity increases the risk for depression and depression increases the likelihood of subsequent obesity.150

To further investigate this bidirectional relationship, especially as it pertains to inflammation, Miller et al151 conducted a mediational analysis152 of serum inflammatory markers (including IL-1β, IL-6, TNF-α, CRP, and MCP-1) in 50 physically healthy young adults with depression and 50 matched controls. IL-6, CRP, and BMI were elevated in the depressed sample compared with controls. When the relationships between depression and both IL-6 and CRP (but not IL-1β) were adjusted for BMI, the results became nonsignificant, indicating a mediational role for adiposity in the relationship between depression and IL-6 and CRP elevation.151 A separate analysis of the same dataset153 using structural equation modeling (SEM) estimated the relationships among depression, adiposity, leptin, and inflammation (IL-6 and CRP). The best-fit model indicated that the primary causal pathway was from depression to adiposity to inflammation. This was interpreted as indicating that depression leads to increased adiposity (possibly through inactivity), which, in turn, leads to an increase in inflammatory markers.
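The mediational logic used in these analyses can be illustrated with a minimal regression sketch in the spirit of the classic Baron-and-Kenny steps, run here on synthetic data; this is not the authors' model or dataset, and the effect sizes are arbitrary. If the depression coefficient for predicting IL-6 loses significance once BMI is added as a covariate, adiposity is a candidate mediator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
depression = rng.normal(size=n)                  # standardized symptom score
bmi = 0.5 * depression + rng.normal(size=n)      # depression -> adiposity
il6 = 0.6 * bmi + rng.normal(size=n)             # adiposity -> inflammation

# Step 1: total (unadjusted) association of depression with IL-6.
m1 = sm.OLS(il6, sm.add_constant(depression)).fit()

# Step 2: the same model adjusted for BMI; the depression coefficient
# should shrink toward zero if BMI mediates the association.
X2 = sm.add_constant(np.column_stack([depression, bmi]))
m2 = sm.OLS(il6, X2).fit()

print(f"unadjusted depression coefficient:   {m1.params[1]:+.3f} (p = {m1.pvalues[1]:.3g})")
print(f"BMI-adjusted depression coefficient: {m2.params[1]:+.3f} (p = {m2.pvalues[1]:.3g})")
```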
Diet and depression

Diets in much of the world have shifted toward high carbohydrate intake and a reduced intake of omega-3 (n-3) relative to omega-6 (n-6) fatty acids.154 The intake of fish and other sources of n-3 fatty acids appears to be somewhat protective against certain metabolic conditions,155-163 and epidemiological studies have associated an increased relative intake of fish with a reduced risk for depression.164 However, it does not seem to be primarily the intake of fish per se, but of so-called fatty fish with high n-3 concentrations (eg, anchovy, sea bass, carp, dogfish, eel, halibut, herring, mackerel, mullet, fish roe, salmon, sardine, trout, and tuna), that lends protection against both metabolic diseases and depression.162,163,165,166

The benefits of the Mediterranean diet pattern

Recent studies have found particular health benefits, including a reduction in the risk of depression, associated with the so-called Mediterranean Diet Pattern (MDP).167 As noted in the seminal work by Willett et al,167 this pattern of eating has been associated historically with good general health and longer life expectancy. This pattern "is based on food patterns typical of Crete, much of the rest of Greece, and southern Italy in the early 1960s" and "included regular physical activity… abundant plant foods (fruit, vegetables, breads, other forms of cereals, potatoes, beans, nuts, and seeds), fresh fruit as the typical daily dessert, olive oil as the principal source of fat, dairy products (principally cheese and yogurt), and fish and poultry consumed in low to moderate amounts, zero to four eggs consumed weekly, red meat consumed in low amounts, and wine consumed in low to moderate amounts, normally with meals." This pattern of eating is characterized by lower saturated and total fat content.

This manner of eating was shown recently to be associated with a reduced risk for depression in a prospective study of the relationship between the MDP and health.168,169 A sample of 10,094 healthy persons in Spain was assessed using a validated 136-item food frequency questionnaire to determine relative adherence to the MDP and followed for 4.4 years. Using the lowest adherence to the MDP as the reference condition, adjusted hazard ratios for depression for the higher categories of adherence ranged from 0.74 for modest adherence to 0.49. These results indicate a strong prospective protective effect of the MDP. Of relevance, earlier research found a strong inverse relationship between adherence to the MDP and serum IL-6, with a trend for CRP.170 These data indicate that diet is an important contributor to inflammatory load and risk for depression.

In addition to the n-3 to n-6 fatty acid ratio in the diet, there is the relative intake of carbohydrates, particularly simple sugars. Carbohydrate intake in Western diets has also increased substantially in recent years. While the intake of certain refined sugars such as cane sugar has declined over the last 40 years, the total caloric load from sweeteners has increased; this has come primarily from fructose, particularly high-fructose corn syrup (also known as "corn sugar").171 A high level of fructose intake is associated with obesity and metabolic diseases.172-177
Although the specific role of fructose intake, as opposed to increased total calories, has been questioned,178 it is increasingly clear that high intake of fructose contributes uniquely to problems of obesity179 and metabolic diseases such as cardiovascular disease, dyslipidemia, and type 2 diabetes.180-182 Fructose has a very high extraction ratio by the liver183 and does not contribute significantly to increases in insulin184 or satiety signaling.185 High fructose loads in the liver lead to the synthesis of triglycerides, which contribute to liver and abdominal fat.181,184,186 The shift in intake from proteins and "healthy" fats to saturated fats and carbohydrates, particularly fructose, has contributed to the worldwide epidemic of obesity.

Does n-3 fatty acid supplementation reduce depression?

A recent study indicates that not all n-3 fatty acids reduce inflammation; this study showed that docosahexaenoic acid, one constituent of fish oil, may actually increase the ratio of interferon gamma to IL-10, indicating a proinflammatory effect. However, eicosapentaenoic acid (EPA) did not show this effect, and EPA has been shown to reduce depressive symptoms in a few smaller-scale studies. One study187 randomized 70 persons with major depression not responsive to antidepressants to ethyl-eicosapentaenoic acid (e-EPA, a specific n-3 fatty acid) at 1, 2, or 4 g per day, or placebo, as add-on therapy.187 Curiously, the 1 g per day dose, but not the 2 or 4 g per day doses, was significantly better than placebo. Subsequent studies have supported these results.188-190 Of note, a polymorphism in the gene for phospholipase A2, a key enzyme in the metabolism of polyunsaturated fatty acids, was associated with a 3-fold increase in the likelihood of developing major depression during IFN-α treatment, as well as with lower blood concentrations of EPA.191

Diet, adiposity, and risk for depression in children

The increase in obesity in adults has been paralleled in children and adolescents,119 along with an increase in inflammation192,193 and in inflammatory diseases previously thought to occur mostly in adults: type 2 diabetes, fatty liver disease, cardiovascular disease, and dyslipidemia.121,194-200 As described earlier for adults, the current evidence suggests a bidirectional relationship between obesity and depression in children.201 Prior depression in childhood is a relatively strong predictor of the subsequent development of obesity, metabolic syndrome, and related diseases in adult life.202-204 Depression may increase risk through changes in diet, eating behavior, and inactivity.126 Alternatively, baseline obesity may increase the risk for depression via increases in inflammation, as well as through negative effects on self-esteem rooted in cultural standards of beauty and desirability.205 Obesity may also contribute to the risk for depression via effects on physical activity, sleep, and eating behavior.205

Summary and conclusions

It seems clear at this point that inflammatory mediators, whether generated by specific diseases or administered exogenously (as with IFN therapy), can lead to depression. It also appears that a significant subset of depressed patients have inherent upregulation of inflammatory factors, particularly IL-6, TNF-α, and CRP, without other known inflammatory disease.1,3,14
As posited in this paper, one causal pathway for this increased inflammation may be overweight and obesity. Therefore, depression (and the inactivity and diet changes associated with it), obesity, and inflammation may represent a "vicious cycle" (Figure 1). A person may enter this cycle at any point: obesity may lead to inflammation, which leads to depression; depression may lead to inactivity and dietary changes, which lead to obesity, leading to inflammation; inflammatory diseases may lead to both depression and inactivity, resulting in obesity. Western high-fat, high-carbohydrate diets and inactivity may lead to obesity, inflammation, and depression. This cycle may also explain the common association between inflammatory diseases such as lupus or fibromyalgia and both depression and obesity.206-218 Therefore, multiple, interacting factors may lead to a general decline in mental and physical health.

However, this cycle also provides multiple nodal points for both treatment and prevention. For example, children and adolescents at risk for depression (ie, those with a positive family history or those who have been traumatized219) may represent a group for whom targeted diet and exercise programs would be beneficial to help prevent or reduce the risk for depression. In addition, recent data indicate that overweight and obese patients have reduced response to antidepressant treatments.220-222 For example, a recent combined analysis of outcomes in three clinical trials of marketed antidepressants divided participants into normal weight (BMI < 25), overweight (BMI 25 to < 30), and obese (BMI ≥ 30).221 The results indicated progressive resistance to antidepressant therapies from normal weight to obesity. Future interventions could target overweight and obesity as a possibly remediable cause of treatment resistance.

Depression is a complex condition with many potential causal pathways; two possibly interrelated mechanisms, diet-associated overweight/obesity and inflammation, have been reviewed here. Although these mechanisms represent only two among many causal paths, they potentially explain many features, such as the common association between inflammatory diseases and depression risk. Nevertheless, there is cause for optimism about possible intervention strategies, given the evidence for the success of lifestyle modifications such as exercise, diet, and other weight-loss approaches in inflammatory diseases and obesity.116,167,207,216,223-225
Risk Prediction of Diabetes Progression Using Big Data Mining with Multifarious Physical Examination Indicators

Purpose

The purpose of this study is to explore the independent influencing factors of the progression from normal glucose regulation to prediabetes and from prediabetes to diabetes, and to use different prediction models to build diabetes prediction models.

Methods

The original data in this retrospective study are collected from the participants who took physical examinations in the Health Management Center of Peking University Shenzhen Hospital. Regression analysis is applied separately between the normal and prediabetes populations, and between the prediabetes and diabetes populations, for feature selection. Afterward, the independent influencing factors mentioned above are used as predictive factors to construct a prediction model.

Results

Selecting physical examination indicators for training different ML models through univariate and multivariate logistic regression, the study finds that Age, PRO, TP, and ALT are four independent risk factors for normal people to develop prediabetes, and GLB and HDL.C are two independent protective factors; logistic regression performs best on the testing set (Acc: 0.76, F-measure: 0.74, AUC: 0.78). We also find that Age, Gender, BMI, SBP, U.GLU, PRO, ALT, and TG are independent risk factors for progression from prediabetes to diabetes, and AST is an independent protective factor; logistic regression again performs best on the testing set (Acc: 0.86, F-measure: 0.84, AUC: 0.74).

Conclusion

The discussion of the clinical relationships between these indicators and diabetes supports the interpretability of our feature selection. Among the four prediction models, the logistic regression model achieved the best performance on the testing set.

Introduction

Diabetes is a metabolic disease characterized by hyperglycemia, which is caused by insufficient insulin secretion or reduced insulin sensitivity. Its main characteristic is a blood glucose level that remains above the normal range for a long time. Long-term abnormal blood glucose levels increase the risk of microvascular and macrovascular complications, thus damaging multiple organs and tissues and even leading to death. Since there is no effective cure for diabetes at present, patients need lifelong treatment, which brings a heavy economic burden to patients and their families.1

Prediabetes, also known as impaired glucose regulation (IGR), is a pathological state in which the blood sugar level is higher than normal but has not yet reached the diagnostic criteria for diabetes.2 According to the definition of the World Health Organization (WHO), prediabetes can be divided into two types: impaired fasting glucose (IFG) and impaired glucose tolerance (IGT). Research shows that prediabetes is significantly positively correlated with the risk of, and mortality from, obstructive sleep apnea, coronary heart disease, stroke, and complex cardiovascular disease.3,4 [...] electrocardiogram.17 The above examples show that the four basic ML methods, LR, SVM, RF, and XGBoost, can achieve good performance in the downstream task of diabetes classification.
However, all the above studies were only focused on the transitions either from normal population to diabetes or from normal population to prediabetes.Few studies on machine-learning-based disease prediction investigated the transition among the three stages.In this work, we conduct a cross-sectional study to identify independent influencing factors of the transition from normal population to prediabetes and the transition from prediabetes to diabetes, respectively.Then, machine learning models are trained correspondingly for disease risk prediction.The flowchart of the whole process for diabetes analysis is presented in Figure 1.We first collect a large amount of physical examination data and generate a diabetes dataset after a series of preprocessing manners.Then we adopt regression analysis to select independent influencing factors and utilize four machine learning approaches to build the risk prediction models. The purpose of this study is to deepen the understanding of diabetes progress and enhance the diagnosis with prediction models.With early warning of individuals at risk of prediabetes and diabetes among the populations participating in physical examinations, timely interventions may improve their quality of life.Furthermore, this study can serve as a valuable reference for public research works in early screening for other chronic non-communicable diseases. Dataset and Preprocessing The studies involving humans were approved by the Ethics Committee of Peking University Shenzhen Hospital.The original data in this retrospective study were collected from the participants who took physical examinations in the Health Management Center of Peking University Shenzhen Hospital from January 2020 to March 2023.There are a total of 7811 individuals participating in the physical examination, and the original dataset contains 41 physical examination indicators.The subjects were divided into three groups according to their blood sugar and HbA1c level following the WHO 1999 standard.The normal population had a fasting blood sugar level of <5.6 mmol/L and a HbA1c level of <5.7%, the diabetes population had a fasting blood sugar level of ≥7.0 mmol/L or a HbA1c level of ≥6.5%, and the remaining subjects were prediabetes population.After preprocessing, the data from the original dataset, there are still 5127 medical examination data remaining, each of which includes 25 medical examinations. 
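As a concrete illustration of the grouping rule described above, the following minimal sketch assigns each physical-examination record to one of the three stages using the fasting blood sugar and HbA1c thresholds. The file name and the column names FPG and HbA1c are assumptions for illustration, not the actual export format of the EHR system.

```python
import pandas as pd

def assign_stage(fpg: float, hba1c: float) -> str:
    """Map fasting blood sugar (mmol/L) and HbA1c (%) to a diabetes stage
    following the WHO-1999-based thresholds described above."""
    if fpg >= 7.0 or hba1c >= 6.5:
        return "diabetes"
    if fpg < 5.6 and hba1c < 5.7:
        return "normal"
    return "prediabetes"   # all remaining subjects

# Hypothetical file and column names.
df = pd.read_csv("physical_exam.csv")
df["stage"] = [assign_stage(g, h) for g, h in zip(df["FPG"], df["HbA1c"])]
print(df["stage"].value_counts())
```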
The original data collected from the electronic healthcare record (EHR) system was desensitized by removing patient privacy, such as name, address, and telephone number.The desensitized data contained 41 physical examination indicators and 7811 records.To ensure the data quality and maintain the data quantity, we first deleted 16 physical examination indicators where the proportion of missing values was higher than 50%, and then deleted the records that still contained missing values.Table 1 describes the remaining physical examination indicators with their meanings.In addition, due to the inconvenience in processing text information during logistic regression prediction, we converted categorical variables into numerical variables in advance for our subsequent analysis and prediction.Specifically, we considered "female" as "0" and "male" as "1" for gender, and considered "-", "+-" as "0" and "1+", "2+", "3+", "4+" as "1" for U.GLU and PRO.Finally, in order to eliminate the bias caused by the scale differences of different features, we normalized all features by subtracting their mean values from each variable and dividing them by their standard deviation, ensuring that the numerical range of each feature variable is between (−1, 1).The record numbers of normal, prediabetes, and diabetes populations after preprocessing are 1582, 2929, and 616, respectively. Feature Selection In machine-learning-based models, a large number of variables are typically gathered as they can provide the model with enough knowledge to produce good discriminatory outcomes.However, in clinical applications, irrelevant features can also induce noise or redundancy, which may lead to poor prediction accuracy.Therefore, it is necessary to select the most suitable variables for the best prediction performance before model training. 
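Before moving on to feature selection, here is a minimal sketch of the preprocessing steps described above (dropping sparse indicators, encoding the categorical variables, and z-score scaling). The column names and the label column "stage" are assumptions; the study's own code is not reproduced in the text.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("physical_exam.csv")                   # hypothetical file name

# Drop indicators with more than 50% missing values, then incomplete records.
df = df.loc[:, df.isna().mean() <= 0.5].dropna()

# Convert categorical indicators to numbers as described above.
df["Gender"] = df["Gender"].map({"female": 0, "male": 1})
urine_map = {"-": 0, "+-": 0, "1+": 1, "2+": 1, "3+": 1, "4+": 1}
for col in ["U.GLU", "PRO"]:
    df[col] = df[col].map(urine_map)

# Standardize all feature columns; the label column is left untouched.
features = df.drop(columns=["stage"])
X = pd.DataFrame(StandardScaler().fit_transform(features),
                 columns=features.columns, index=features.index)
data = X.join(df["stage"])
```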
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable and independent variables. The difference between univariate and multivariate analysis is that the former has only one physical examination indicator as its independent variable, whereas the latter has multiple physical examination indicators as its independent variables. As the dependent variable in this study is the diabetes stage, a categorical variable, we chose the logistic regression model for the regression analysis. Firstly, univariate logistic regression analysis was used to analyze the relationship between the classification results and the physical examination indicators. Each preprocessed physical examination indicator is used as the independent variable of a univariate logistic regression, and the classification result is used as the dependent variable; the two are then fitted into a logistic regression model, and the relationship between them is analyzed. The physical examination indicators with a p-value <0.05 in the univariate logistic regression analysis were selected. We believe that these indicators are influencing factors related to the development of diabetes. To further increase persuasiveness, the indicators selected from the univariate regression analysis are then used as independent variables for a multivariate logistic regression analysis, with the dependent variable still being the classification result, and the fitted model is analyzed in the same way. Finally, the variables with a p-value <0.05 in the multivariate model are retained for machine learning model training.
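The two-step selection can be written down compactly; the sketch below fits one logistic model per indicator and then a joint model, keeping variables with p < 0.05 at both stages and reporting odds ratios. Here X (a DataFrame of standardized indicators) and y (a 0/1 outcome for one transition, e.g. normal vs. prediabetes) are assumed to come from the preprocessing step, and the use of statsmodels is an illustrative choice consistent with the libraries named in the Results, not the authors' exact code.

```python
import numpy as np
import statsmodels.api as sm

# Univariate step: one logistic regression per indicator, keep p < 0.05.
univariate_p = {}
for col in X.columns:
    fit = sm.Logit(y, sm.add_constant(X[[col]])).fit(disp=0)
    univariate_p[col] = fit.pvalues[col]
candidates = [c for c, p in univariate_p.items() if p < 0.05]

# Multivariate step: refit jointly, retain indicators that stay significant.
multi = sm.Logit(y, sm.add_constant(X[candidates])).fit(disp=0)
pv = multi.pvalues.drop("const")
selected = pv[pv < 0.05].index.tolist()

# OR > 1 marks a risk factor, OR < 1 a protective factor.
odds_ratios = np.exp(multi.params[selected])
print(odds_ratios)
```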
Model Training
The dataset is split into training and testing sets with a proportion of 7:3, and the hyperparameter selection of the models is the same on both the training and testing sets. The training set is then utilized to train four machine learning models: logistic regression, 18 random forest, 19 SVM, 20 and XGBoost. 21 The best parameters for the models are decided by 5-fold cross-validation. The four models are briefly introduced below.

1. Logistic Regression. A logistic regression model is often used to predict the probability of an event as a function of predictor variables. Given a set of variables x, the probability is calculated by the sigmoid function p(x) = 1/(1 + e^(−x·β)). First, we use the function "LogisticRegression" in the scikit-learn library for the model training, and we set the solver to "lbfgs" to optimize the classification problem. Finally, we use the 5-fold cross-validation method to select the best parameters for "C" and "max_iter".

2. SVM. SVM is a method for solving classification and regression issues. The model determines the hyperplane that maximizes the separation between the two nearest classes as well as the distance between samples. First, we use the function "SVR" in the scikit-learn library for the model training. Then, we use the 5-fold cross-validation method to select the best parameters for "kernel", "C", "degree", and "coef0".

3. Random Forest. Random forest is an ensemble method based on the bagging technique, where decision trees are constructed independently; the final result is then derived from the majority vote of all trees. First, we use the function "RandomForestClassifier" in the scikit-learn library for the model training, then we use the 5-fold cross-validation method to select the best parameters for "n_estimators", "max_depth", "min_samples_split", and "min_samples_leaf".

4. XGBoost. In contrast, XGBoost is an ensemble method based on the gradient boosting machine. Gradient boosting is a technique where new models are added to correct the errors of existing models. First, we use the function "XGBClassifier" in the xgboost library for the model training, then we select the classification task with "objective=binary:logistic", and use the 5-fold cross-validation method to select the best parameters for "max_depth", "learning_rate", "n_estimators", and "min_child_weight".

Model Evaluation
We evaluate the model performance with metrics derived from the confusion matrix, a two-dimensional matrix that can be used to compare predicted results and actual values. The confusion matrix contains four values: true positive (TP), false positive (FP), false negative (FN), and true negative (TN). TP is the number of samples that are actually positive and correctly predicted to be positive. FP is the number of samples that are actually negative but incorrectly predicted as positive. FN is the number of samples that are actually positive but incorrectly predicted to be negative. TN is the number of samples that are actually negative and correctly predicted to be negative. Accuracy, precision, recall, F-measure, and specificity can then be defined as (TP + TN)/(TP + FP + FN + TN), TP/(TP + FP), TP/(TP + FN), 2 × (precision × recall)/(precision + recall), and TN/(TN + FP), respectively. Besides, we also use the area under the curve (AUC) of the receiver operating characteristic (ROC) to evaluate model performance. The ROC curve is the plot of the true positive rate against the false positive rate at various threshold settings, and the AUC is the area enclosed by the curve and the x-axis. A larger AUC value suggests better classification performance.
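To make the training recipe above concrete, the sketch below performs the 7:3 split and a 5-fold cross-validated grid search for the four classifiers. The hyperparameter grids are illustrative assumptions (the study names the tuned parameters but not the values searched), and SVC is used here for the binary classification task even though the text mentions the "SVR" function.

```python
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

X_train, X_test, y_train, y_test = train_test_split(
    X[selected], y, test_size=0.3, random_state=42, stratify=y)

# Candidate models with assumed (illustrative) hyperparameter grids.
search_space = {
    "logistic": (LogisticRegression(solver="lbfgs"),
                 {"C": [0.01, 0.1, 1, 10], "max_iter": [200, 1000]}),
    "random_forest": (RandomForestClassifier(random_state=42),
                      {"n_estimators": [100, 300], "max_depth": [3, 5, None],
                       "min_samples_split": [2, 5], "min_samples_leaf": [1, 2]}),
    "svm": (SVC(probability=True),
            {"kernel": ["rbf", "poly"], "C": [0.1, 1, 10],
             "degree": [2, 3], "coef0": [0.0, 1.0]}),
    "xgboost": (XGBClassifier(objective="binary:logistic", eval_metric="logloss"),
                {"max_depth": [3, 5], "learning_rate": [0.05, 0.1],
                 "n_estimators": [100, 300], "min_child_weight": [1, 3]}),
}

best_models = {}
for name, (estimator, grid) in search_space.items():
    search = GridSearchCV(estimator, grid, cv=5, scoring="roc_auc")
    search.fit(X_train, y_train)
    best_models[name] = search.best_estimator_
```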
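Continuing from the previous sketch, the metrics defined above can be computed directly from the confusion matrix, with the AUC taken from the predicted probabilities; again this is an illustrative implementation rather than the study's own code.

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(model, X_eval, y_eval):
    """Accuracy, precision, recall, F-measure, specificity and AUC as defined above."""
    y_pred = model.predict(X_eval)
    tn, fp, fn, tp = confusion_matrix(y_eval, y_pred).ravel()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "f_measure": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
        "auc": roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1]),
    }

for name, model in best_models.items():
    print(name, evaluate(model, X_test, y_test))
```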
Results
In this study, all data processing steps, including preprocessing, statistical analysis, model training, and model evaluation, are performed with Python 3.8 and machine learning libraries such as statsmodels, scikit-learn, and xgboost.

Regression Analysis
We separately analyze the transition from normal to prediabetes and the transition from prediabetes to diabetes. For the transition from normal to prediabetes, the variables are first individually selected by univariate logistic regression analysis. Age, Gender, Wt, H, BMI, DBP, SBP, PRO, TP, GLB, T.BIL, DB, IB, ALT, AST, BUN, UA, TC, TG, HDL.C, and LDL.C present significant correlations with p-value <0.05. Then, these variables are further included in multivariate logistic regression analysis. The result suggests that Age, PRO, TP, and ALT are independent risk factors with OR > 1, and GLB and HDL.C are independent protective factors with OR < 1. More details are presented in Table 2.

Similarly, the variables are first individually selected by univariate logistic regression analysis for the transition from prediabetes to diabetes. Age, Gender, Wt, BMI, DBP, SBP, U.GLU, PRO, T.BIL, DB, IB, ALT, AST, BUN, TC, TG, HDL.C, and LDL.C present significant correlations with p-value <0.05. Then, these variables are further included in multivariate logistic regression analysis. The result suggests that Age, Gender, BMI, SBP, U.GLU, PRO, ALT, and TG are independent risk factors with OR > 1, and AST is an independent protective factor with OR < 1. More details are presented in Table 3.

Prediction Model
First, we present the model comparison to classify the normal and prediabetes populations. According to the regression analysis results in Table 2, the independent influencing factors for the normal-to-prediabetes transition include Age, PRO, TP, GLB, ALT, and HDL.C, a total of six physical examination indicators, and logistic regression, random forest, support vector machine, and XGBoost are used to build prediction models. We plotted the confusion matrix for each model in Figure 2. The results of the models on the training and testing sets are shown in Table 4, which includes the accuracy, precision, recall, F-measure, and specificity of each model, with the best-performing results shown in bold. Besides, we provide the ROC curves of the four prediction models in Figure 3. Specifically, Figure 3A and B represent the ROC curves of the logistic regression model on the training and testing sets. Similarly, we have Figure 3C and D for the random forest model, Figure 3E and F for the SVM model, and Figure 3G and H for the XGBoost model, respectively. XGBoost performed the best among all models on the training set, while the best-performing model on the testing set was the logistic regression.

Then we present the model comparison to classify the prediabetes and diabetes populations. According to the regression analysis results in Table 3, the independent influencing factors for the prediabetes-to-diabetes transition include Age, Gender, BMI, SBP, U.GLU, PRO, ALT, AST, and TG, a total of nine physical examination indicators, and logistic regression, random forest, support vector machine, and XGBoost are used to build prediction models. We plotted the confusion matrix for each model in Figure 4. The results of the models on the training and testing sets are shown in Table 5, which includes the accuracy, precision, recall, F-measure, and specificity of each model, with the best-performing results shown in bold. Besides, we provide the ROC curves of the four prediction models in Figure 5. Specifically, Figure 5A and B represent the ROC curves of the logistic regression model on the training and testing sets. Similarly, we have Figure 5C and D for the random forest model, Figure 5E and F for the SVM model, and Figure 5G and H for the XGBoost model, respectively. In a comprehensive evaluation, although XGBoost performed best on the training set, logistic regression performed best on the testing set. Overall, all the models in this study were built with commonly used indicators in physical examination and achieved AUC values higher than 0.7, indicating that the results of this experiment are convincing.

Discussion
This work investigated the physical examination data of normal people, prediabetes people, and diabetes people, aiming to analyze the independent influencing factors from normal to prediabetes and from prediabetes to diabetes. Through multivariate regression analysis, we found that Age, PRO, TP, GLB, ALT, and HDL.C are the independent influencing factors for the normal population to progress to the prediabetes stage. Among them, Age, PRO, TP, and ALT were independent risk factors, and GLB and HDL.C were independent protective factors. Meanwhile, there were nine independent physical examination indicators from prediabetes to diabetes, including Age, Gender, BMI, SBP, U.GLU, PRO, ALT, AST, and TG. Among them, Age, Gender, BMI, SBP, U.GLU, PRO, ALT, and TG were independent risk factors, and AST is an independent protective factor.
In many studies, it had been found that Age, BMI, SBP, U.GLU, PRO, and other factors were independent influencing factors for normal people to develop prediabetes or even diabetes. According to the National Health and Nutrition Examination Survey (NHANES) of the United States, the prevalence of diabetes increases with age: the number of people with diabetes under the age of 45 accounted for 5% of the total, but the proportion among people over the age of 65 was as high as 33%. 22 At the same time, NHANES research also showed that more than 75% of diabetes patients have a BMI ≥ 25.0 kg/m2, which indicated that obese people had a greater probability of suffering from diabetes. 23 A study on the hypertensive population without diabetes in China found that the risk of diabetes in people with SBP in the range of 130–140 mmHg was 24% higher than that in the population with SBP in the range of 120–130 mmHg, while the probability of fasting blood sugar returning to normal decreased by 29%. 24 In addition, relevant research showed that every 10 mmHg reduction in average systolic blood pressure would reduce the risk of diabetic complications by 12% and the risk of related deaths by 15%. 25

This study found that positive U.GLU in the urine examination was an independent risk factor for the development from prediabetes to diabetes, while positive PRO was not only an independent risk factor for the development from normal people to prediabetes but also an independent risk factor for the development from prediabetes to diabetes. Research has found that the sensitivity of using U.GLU to detect prediabetes and diabetes could reach 83.5%, and that the combination of U.GLU and FPG to detect diabetes could greatly improve the effectiveness of screening. All of the above indicates a high correlation between positive U.GLU and the occurrence of diabetes. 26 It had also been pointed out that, in any category of estimated glomerular filtration rate (eGFR) in the general population, positive PRO in urine test results was directly proportional to the prevalence of hypertension, diabetes, and metabolic syndrome. 27 Therefore, urine test results can not only be used as an important indicator of the level of renal function damage in patients but also as one of the indicators for screening prediabetes and diabetes.

This study also found that TP was an independent risk factor for normal people to develop prediabetes, and GLB and HDL.C were independent protective factors. Protein is the main carrier of life activities. 29,30 These results can provide some evidence for TP being an independent risk factor for the development of prediabetes. After reviewing relevant papers, no evidence was found to indicate a correlation between GLB level and prediabetes. However, studies had found that the level of fetuin-A in diabetes patients during pregnancy was significantly higher than that in normal pregnant women, which indicated that it played a role in the occurrence of insulin resistance and metabolic changes in gestational diabetes. 31 Therefore, the specific relationship between GLB and the occurrence of prediabetes needs further study. In a large cross-sectional study based on the population of Jiangsu Province, non-HDL.C could be used as a biomarker for screening undiagnosed diabetes patients. 32
In addition, HDL.C could regulate the endocrine function of the β cells in the pancreas. It plays an anti-diabetic role in cells, and keeping a proper HDL.C level in the human body can reduce the risk of diabetes. 33 All the above results indicated that HDL.C is an independent protective factor for normal people to develop prediabetes.

In addition, among the independent influencing factors for the development of the prediabetes population to diabetes, this study found that gender and TG were independent risk factors. A study showed that women are more likely to suffer from diabetes than men. 34 According to relevant data, 35 TG was an important risk factor for diabetes, which was consistent with the results of this study. Research showed that the ratio of TG/HDL was positively related to the development of prediabetes and diabetes, and this indicator was also an important risk assessment factor for some complications of diabetes patients, such as cardiovascular disease.

Furthermore, this study also found that ALT was not only an independent risk factor for normal people to develop prediabetes but also an independent risk factor for prediabetes people to develop diabetes. Meanwhile, AST is an independent protective factor for prediabetes people to develop into diabetes. Previous studies had found that an elevated ALT level was related to type 2 diabetes, suggesting that it may be involved in the development of diabetes and insulin resistance. 36 Other studies had found that AST/ALT levels were negatively correlated with the occurrence of type 2 diabetes. 37 In the univariate regression analysis of AST, the results showed that AST was an independent risk factor for the development from prediabetes to diabetes. However, after the multivariate regression analysis, the results showed that AST was an independent protective factor for the development from prediabetes to diabetes, which is contrary to medical logic. After analysis, it is possible that there was a mutual correlation or collinearity between the two variables AST and ALT in the multivariate regression analysis, and collinearity can change the direction of a variable relationship in a multivariate model. By calculating the Spearman correlation coefficient between ALT and AST, it was found that the absolute value of the coefficient was 0.78, indicating a strong correlation between these two variables. However, further research is needed to prove this conclusion. The results of this study not only provided a new perspective for understanding the occurrence and development of prediabetes but also contributed to a more …

The above examples show that the four basic ML methods, LR, 13 SVM, 15 RF, 12 and XGBoost, 14 can achieve good performance in the downstream task of diabetes classification. This study further utilized logistic regression, random forest, support vector machine, and XGBoost to build prediction models with the independent influencing factors found above and calculated the accuracy (Acc), precision (Pre), recall (Rec), F-measure, specificity, and AUC of each model. Specifically, among all models to classify between the normal and prediabetes populations, XGBoost performed the best on the training set with 0.78 (Acc), 0.78 (Pre), 0.78 (Rec), 0.78 (F-measure), 0.56 (specificity) and 0.85 (AUC). However, the best-performing model on the testing set was the logistic regression with 0.76 (Acc), 0.75 (Pre), 0.76 (Rec), 0.74 (F-measure), 0.5 (specificity) and 0.78 (AUC). Meanwhile, for the classification between the population
of prediabetes and diabetes, XGBoost still performed best among all models on the training set with 0.88 (Acc), 0.88 (Pre), 0.88 (Rec), 0.85 (F-measure), 0.99 (specificity) and 0.89 (AUC). On the testing set, the logistic regression outperformed the other models by AUC, while random forest and SVM achieved the highest score by Pre. As for Acc and Pre, all models presented the same performance with a value of 0.86. In a comprehensive evaluation, although XGBoost performed best on the training set, logistic regression performed best on the testing set. Such an observation implies that the simpler model might have the best resistance to overfitting for diabetes prediction tasks with physical examination indicators. Overall, all the models in this study were built with commonly used indicators in physical examination and achieved AUC values higher than 0.7, suggesting a moderate ability for the diagnosis of prediabetes and diabetes in clinical practice.

After investigation, we found that Gong et al used the same original dataset as our experiment. 38 In their study, a total of 5310 subjects and 22 variables were included after preprocessing, and after conducting logistic regression analysis on the variables, a "Full" model and a "Simplified" model were established. In our experiment, 24 physical examination indicators were selected through a series of data preprocessing operations such as feature normalization, and 5127 subjects were included, comprising 1582 normal people, 2929 prediabetes people, and 616 diabetes people after pretreatment. In addition, besides the logistic regression model, we also trained and predicted with three other models: random forest, SVM, and XGBoost. In the prediction model from the prediabetes population to the diabetes population, the AUC value of logistic regression on our testing set reached 0.74, and the accuracy, precision, recall, and F-measure are 0.86, 0.85, 0.86, and 0.84, respectively. Compared with the results in Gong's article (0.86 Acc, 0.59 Pre, 0.20 Rec, and 0.73 AUC), our accuracy is the same and all our other results are better.

There were also some limitations in this study. The data used in this study were all from examinees who had undergone physical examinations in the Health Management Center of Peking University Shenzhen Hospital. The sample size might not be sufficient, and whether the examinees themselves had some metabolic disorder, such as kidney function damage or liver function damage, was not considered. Therefore, there might be some deviation in the results. To overcome these limitations, we will consider expanding the sample size in future research, adding other physical examination indicators of the subjects, such as vascular indicators and fundus color photography, and adopting a multicenter study design to enhance the reliability of the research conclusions. With these improvements, we will build a model with better discrimination and performance for prediction and apply it to the clinical diagnosis of prediabetes and diabetes.
Conclusion
Through univariate and multivariate logistic regression, this study analyzed the independent influencing factors from normal to prediabetes and from prediabetes to diabetes, most of which were independent risk factors. The study found six independent influencing factors for normal people to develop prediabetes, of which Age, PRO, TP, and ALT were four independent risk factors, and GLB and HDL.C were two independent protective factors. We also found nine independent influencing factors for prediabetes people to progress to diabetes, of which Age, Gender, BMI, SBP, U.GLU, PRO, ALT, and TG were independent risk factors, and AST was an independent protective factor. The discussion of the clinical relationships between these indicators and diabetes supported the interpretability of our feature selection. The above independent influencing factors were used as predictors to build prediction models. The trained models all had AUC scores higher than 0.7, while the XGBoost model achieved the best performance on the training set and logistic regression performed the best on the testing set. Our analysis results and prediction models can be used to promote personal health management. Moreover, we plan to expand the sample modality and quantity in future work, adding other physical examination indicators for subjects, such as vascular indicators and fundus color imaging, using a multicenter study design, and using a deep learning framework to combine the above data. Our work on disease factor analysis and prediction models would deepen the understanding of diabetes progression, contributing to the development of personal health management.

Figure 1 Flowchart of the diabetes analysis process. Collected raw data is preprocessed to build a dataset. Then feature selection, model training, and model evaluation are successively applied to select the best machine learning model.

Figure 2 Confusion matrices of results for each model between the normal and prediabetes populations. (A and B) represent the confusion matrices for the logistic regression model on the training and testing sets. Similarly, we have (C and D) for the random forest model, (E and F) for the SVM model, and (G and H) for the XGBoost model, respectively.

Figure 3 Receiver operating characteristic (ROC) curves of classification models between the normal and prediabetes populations. (A and B) represent the ROC curves of the logistic regression model on the training and testing sets. Similarly, we have (C and D) for the random forest model, (E and F) for the SVM model, and (G and H) for the XGBoost model, respectively.
Figure 4 Confusion matrices of results for each model between the prediabetes and diabetes populations. (A and B) represent the confusion matrices for the logistic regression model on the training and testing sets. Similarly, we have (C and D) for the random forest model, (E and F) for the SVM model, and (G and H) for the XGBoost model, respectively.

Figure 5 Receiver operating characteristic (ROC) curves of classification models between the prediabetes and diabetes populations. (A and B) represent the ROC curves of the logistic regression model on the training and testing sets. Similarly, we have (C and D) for the random forest model, (E and F) for the SVM model, and (G and H) for the XGBoost model, respectively.

Table 1 Descriptions of Physical Examination Indicators in This Study

Table 2 Univariate Regression Analysis and Multivariate Regression Analysis Between Normal and Prediabetes Populations

Table 5 The Results of the Model on the Training and Testing Sets Between Prediabetes and Diabetes Populations
2024-03-15T16:07:47.455Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "29d61fc3ac7e23b931d797ccb7c461dedd8b5713", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=97456", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77942fcfc0edda460aeaddb878fa658da928b58a", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
261544424
pes2o/s2orc
v3-fos-license
Proteomic Identification and Characterization of Collagen from Bactrian Camel (Camelus bactrianus) Hoof With the development of camel-derived food and pharmaceutical cosmetics, camel hoof, as a unique by-product of the camel industry, has gradually attracted the attention of scientific researchers in the fields of nutrition, health care, and biomaterial development. In this study, the protein composition and collagen type of Bactrian camel hoof collagen extract (CHC) were analyzed by LC-MS/MS, and the functional properties of CHC were further investigated, including its rheological characteristics, emulsification and emulsion stability, and hygroscopicity and humectancy. Proteomic identification confirmed that CHC had 13 collagen subunits, dominated by type I collagen (α1, α2), with molecular weights mainly in the 100–200 KDa range and a pI of 7.48. An amino acid study of CHC revealed that it carried the standard amino acid profile of type I collagen and was abundant in Gly, Pro, Glu, Ala, and Arg. Additionally, studies using circular dichroism spectroscopy and Fourier transform infrared spectroscopy revealed that CHC contains a collagen-like triple helix structure that is stable and intact. Different concentrations of CHC solutions showed shear-thinning flow behavior. Its tan δ did not differ much with increasing concentration. The CHC has good emulsifying ability and stability, humectancy, and hygroscopicity. This study provides a basis for utilizing and developing Bactrian camel hoof collagen as a functional ingredient. Introduction Collagen is one of the most prevalent animal proteins, with substantial quantities found in the skin, bone, cartilage, and tendon, accounting for around 30% of total animal protein [1,2].Collagen is the central extracellular matrix molecule that affects the mechanical flexibility of connective tissues and is believed to be the biological glue that binds cells in place [3].Modern medical research has confirmed that collagen induces platelet adhesion and aggregation, releases coagulation factors, and promotes blood coagulation.It is one of the most commonly used materials in protein scaffolds for skin wound healing [4,5].Due to its superior biological qualities, including low immunogenicity, compatibility, degradability, coagulability, and proliferation ability, collagen is also frequently utilized in biomedical materials, tissue engineering, burns, cosmetology, food, and other industries [5][6][7].In addition, due to the high molecular weight of collagen, different types of collagen have different compositions, forming a functionally diverse collagen family.Researchers have extracted collagen from various animal bodies, including pig skin, bovine Achilles tendon, sheep skin, fish skin, scales, etc. [8,9]. 
Bactrian camel (Camelus bactrianus) hooves are a unique and superior food from the Alashan region of China.Camel hoof is sweet, salty, flat, and more precious than camel meat.Its meat is delicate, elastic, and soft [10].It has developed its unique physiological function of resisting extreme environments during long-term natural selection.The hoof is the most active part of the camel body, consisting of large, soft finger pillows at the rear end and small, pointed toes at the front end.The content of Ca, Na, Fe, Zn, and other elements in camel hoof is significantly higher than that of eggs [11].Compared with pig hooves and cow hooves, camel hooves are characterized by high protein content and low oil content, in which the contents of some nutrients contained are 32.8 g of protein, 2 g of fat, 26 µg of vitamin A, 170.6 mg of sodium, 152 mg of calcium, 59 mg of magnesium, and 261 mg of copper per 100 g [11].The medical benefits of camel hoof may include reducing fatigue, improving skin, preserving youthful beauty, etc. [10,11].However, there are fewer reports of related research in this area.Scientific research has proven in recent years that there is very little connection between disorders, including mad cow disease, and collagen of animal origin [2].Less comprehensive information is available on the purification and characterization of collagen from camel hooves.Due to widespread slaughter in pastoral areas, the annual stock of camel hooves is relatively considerable.Still, China has long lacked this "best" raw material for processing and producing camels.Twenty-eight types of collagen have been discovered, and their structural characterization, functional properties [12], and medicinal functions are still in the exploratory and discovery stage due to their widespread presence in living organisms.Bactrian camel hoof collagen, as a result, can be used as a new research direction for collagen bio-matrix raw materials. In this study, CHC was obtained from camel hoof using enzyme-acid hydrolysis, the main types and amounts of collagen were analyzed by protein technology combined with liquid chromatography-tandem mass spectrometry, and further analyses of the structural characterization and functional properties of CHC were carried out.The findings of this research will offer an innovative strategy for the high-value use and development of camel hoof proteins as well as a theoretical foundation for the application and advancement of CHC and collagen peptides as functional components. Materials and Chemicals Camel hooves were obtained from six Bactrian camels (9 to 10 years old) available at local abattoirs (Alashan, Inner Mongolia, China).Camel hooves were depilated and peeled, bone was removed, and they were vacuum-packed in polyvinylidene chloride bags and stored at −25 • C for further chemical composition.Pepsin (250 N.F.U/mg) and hydroxyproline (HYP) standards were purchased from Beijing Solaria Technology Co., Ltd., Beijing, China.Other reagents were of analytical grade and obtained from Sinopharm Chemical Reagent Co., Ltd., Shanghai, China. 
Sample Preparation of CHC
Camel hoof samples were cut into small pieces and shaken in ice water with 10% 1-butanol (1:15 w/v) for 24 h to remove fat, with the fluid changed every 6 h. The residue was then collected by centrifugation at 8000× g for 10 min at 4 °C and repeatedly washed with ice water to wipe off the fat. The leftover material was soaked in 0.1 M NaOH (1:10 w/v) for 24 h, with fluid changes every 6 h, to eliminate non-collagenous proteins and pigments. The residue was then washed with deionized water and centrifuged at 8000× g for 10 min. Finally, the pretreated Bactrian camel hoof sample was stored at −20 °C until use. Skimmed Bactrian camel hoof samples were mixed with 0.5 mol/L glacial acetic acid at a ratio of 1:30 (w/v). The substrate was hydrolyzed by pepsin (4000 U/g powder) for 48 h at 4 °C. The hydrolysate was centrifuged at 12,000× g for 20 min, and sodium chloride was added to the supernatant to a concentration of 0.9 mol/L. The supernatant was stirred at a low temperature for 24 h and then centrifuged at 10,000× g for 20 min to isolate the collagen. After being entirely dissolved in 0.5 mol/L acetic acid solution, dialysis was carried out for 24 h, and the sample solution was frozen. An HYP kit was used to detect the HYP content, and the collagen yield was estimated from the HYP content according to the formula given in [13].

Amino Acid Analysis
Amino acid content was assessed based on the method of Song et al. [10], slightly modified. CHC samples were completely hydrolyzed to free amino acids with 6 mol/L HCl, proportionally diluted and evaporated to dryness by nitrogen blowing, solubilized in 0.02 mol/L HCl, and filtered through a 0.22 µm pore size, and then 20 µL of the filtrate was analyzed using a high-speed amino acid analyzer (L-8900; Hitachi, Tokyo, Japan).

Camel Hoof Protein Analysis
The protein enzymatic hydrolysis process of camel hoof was carried out as previously reported by Song et al. [10]. Briefly, the tissue samples were first ground with liquid nitrogen, followed by ultrasonic treatment with 300 µL SDT buffer solution (4% SDS, 100 mM Tris-HCl) lysate in a boiling water bath, and centrifuged at 10,000× g for 10 min to collect the supernatant; then an appropriate amount of 1 M DL-dithiothreitol (DTT) was added for proteolytic hydrolysis using the filter-aided sample preparation (FASP) method. A 200 µL urea (UA) buffer solution was added to the protein enzymatic hydrolysate and centrifuged at 12,000× g for 15 min, twice. An appropriate amount of iodoacetamide (IAA) was added to the residue, reacted in the dark for 30 min, and centrifuged at 12,000× g for 15 min. A 100 µL UA buffer solution was added and centrifuged at 12,000× g for 15 min, twice. A 100 µL ammonium bicarbonate buffer solution was added and centrifuged at 14,000× g for 15 min, twice. A 40 µL trypsin buffer solution was added, reacted at 37 °C for 16–18 h, and centrifuged at 12,000× g for 15 min. The filtrate was collected, desalted with a C18 Stage Tip, and redissolved with 0.1% trifluoroacetic acid (TFA) after vacuum drying to obtain the peptide samples.

Peptide samples were separated using an Easy nLC 1200 Chromatography System (Thermo Fisher Scientific Co.
(Shanghai, China)) at a nanolitre flow rate.The column was balanced with 100% liquid A (0.1% formic acid aqueous solution).After the samples were injected into the Trap Column, the chromatographic column was used for gradient separation with a flow rate of 300 nL/min.Liquid B is a mixture of 0.1% formic acid, acetonitrile (80%), and water, and the gradient of liquid B is as follows: 1% (0-3 min), 2-8% (3-43 min), 8-28% (43-51 min), 28-40% (51-52 min), 40-100% (52-60 min), and maintained at 100%.Data-dependent acquisition mass spectrometry was carried out using a Q-Exactive HF-X mass spectrometer (Thermo Fisher Scientific Co. (Shanghai, China)).The detection mode is a positive ion, the scanning range of the parent ion is 350-1800 m/z, the primary mass spectrometry resolution is 60,000 @m/z 200, and the automatic gain control (AGC) target is 3•e6.The direct Maximum IT is 50 ms.Secondary mass spectrometry of peptide segment triggers the collection of secondary mass spectra of 20 highest-intensity parent ions after each full scan.Secondary mass spectrometry resolution 15,000 @m/z 200, the AGC target is 1•e5, the second-level Maximum IT is 25 ms, the MS2 ActivationType is high-energy collision dissociation (HCD), and the Isolation window is 1.6 m/z.Normalized collision energy is 28.MaxQuant1.6.1.0;a mass spectrometry database retrieval tool was used for the analysis.Camelus bactrianus (Camel) served as the reference protein database, containing 21,155 protein sequences.The download address is https://www.uniprot.org/taxonomy/9837 (accessed on 27 August 2020).Highly reliable identification results were obtained by screening false discovery rate (FDR) < 0.01 of protein and FDR < 0.01 of peptide-spectrum matches (PSM). Characterization of Collagen Extraction 2.5.1. Fourier Transform Infrared Spectroscopy The method of Zou et al. [13] was slightly modified.The CHC was mixed with KBr and extruded onto a transparent sheet.The 4000-400 cm −1 FTIR spectra were recorded with an FTIR spectrometer (Thermo Fisher Scientific Co. (Shanghai, China)). Circular Dichroism Spectra As Li et al. [11] described, the CHC was prepared in 0.5 mol/L glacial acetic acid at a concentration of 0.3 mg/mL.The sample was detected at intervals of 2 nm in the 190-250 nm using the Jasco J-815 system (Japan Spectroscopic Company, Tokyo, Japan). 
Emulsification and Emulsion Stability
We took 10 mL of CHC solution at different concentrations with the pH adjusted to 7, mixed it with 10 mL peanut oil, centrifuged at 1000× g for 5 min, and recorded the emulsion layer's volume (or height). Meanwhile, 20 mL of the above sample emulsion was placed in a 50 °C water bath, and the height of the emulsion layer was recorded after 30 min. Emulsification and emulsion stability were then calculated from these measurements. Additionally, the emulsification and emulsion stability of the CHC solution (3 mol/mL) were examined at pH 3–11 and at different concentrations of NaCl, respectively, according to the above-described method.

For the hygroscopicity test, the collagen sample was placed in an environment of 80% humidity, and the mass of the sample was measured accurately at regular intervals. At the same time, 0.200 g of collagen sample was placed in a weighing flask with 10% distilled water in a sealed desiccator containing anhydrous silica gel, and the weight of the sample was measured at regular intervals. The above experimental procedure was carried out using glycerol as a control, and moisture absorption and moisture retention were calculated from the recorded mass changes.

The rheological properties were determined using the DHR-2 rotational rheometer cone-plate measurement system (TA Instruments Inc., New Castle, DE, USA). These measurements were obtained as described in a previous study [14]. The experimental parameters were a scanning frequency of 0.1–80 Hz and a parallel stainless-steel plate geometry with a diameter of 40 mm and a 1° angle.

Statistical Analysis
The findings in this study are presented as mean ± standard deviation (±SD), with all measurements performed in triplicate. Data were analyzed using one-way ANOVA in SPSS 25.0 (SPSS Inc., Chicago, IL, USA) to evaluate whether there were significant differences between the samples. A p value of <0.05 was considered significant. The protein theoretical isoelectric point (pI) and molecular weight (Mw) were obtained using the Compute pI/Mw tool. Microsoft Excel 2019 (Microsoft Co., Redmond, WA, USA) and GraphPad Prism 9.5 (GraphPad Software Inc., La Jolla, CA, USA) were used to generate graphs.

Yield of CHC
An appropriate concentration of glacial acetic acid solution facilitates the extraction of CHC. This may be because the low-acidic environment interfered with the stability of the salt bonds and Schiff base structure between collagen molecules, facilitating collagen solubilization [15]. In this experiment, the collagen yield of CHC was 35.77 ± 2.06%. This is similar to the results of collagen extraction from Bactrian camel skin [16].

Proteome Identification
After applying the protein identification screening criteria of FDR ≤ 0.01, 121 proteins were successfully identified from the CHC samples. Among these proteins, there were 13 collagen subunits and procollagen-enhancing factors (Table 1). The top three identified proteins in CHC were ANK, collagen alpha-1(I) chain, and collagen alpha-2(I) chain. ANKs are protein modules with β2α2 structures widely found in all organisms, and they can bind to various ligands to realize complex biological functions, such as protein modification, protein ubiquitination, etc. [17,18].
ANK can rely on hydrogen bonds and hydrophobic interactions to form a tightly stabilized structure with collagen macromolecules, as shown by the 32.60% iBAQ percentage of ANKs in CHC [18]. Eighty collagen peptides were identified in CHC, and the identified collagens were collagen type XI (α1, α2), collagen type I (α1, α2), collagen type II (α1), collagen type III (α1), collagen type IV (α2), collagen type V (α1, α2, α3), collagen type VI (α1, α2, α3), and the procollagen C-endopeptidase enhancer (Appendix A). Most of these collagens are associated with calcium binding, metal linking, extracellular matrix organization, and the assembly of collagen fibrils and other multimers [2]. Type I collagen is mainly involved in forming protofibrils in tendons, ligaments, and bones [19]. Type III collagen and type VI collagen are primarily associated with developing type III and V collagen trimers and assembling collagen fibers and other multimeric structures [20,21]. The CHC had the highest percentage (Figure 1) of type I collagen (48.85% by iBAQ), followed by type III collagen (α1) and type VI collagen (α1, α2, α3). These experimental findings were not covered in earlier investigations. Variations in the type and amount of collagen in CHC may result from a variety of factors, such as the replacement of proteins during camel growth, the carrying of a large body, and environmental conditions [2,22].

Figure 1 summarizes the molecular weight, pI, and iBAQ % distributions of CHC and the identified collagens. The structural characteristics and processing applications of collagen are directly related to its isoelectric point (pI). The pI values of the identified collagen α1 chains were 9.28 (type I), 5.96 (type III), 8.38 (type II), 5.18 (type VI), 4.84 (type V), and 5.66 (type XI fragment), with a mean of 6.55. The pI values of the collagen α2 chains were 9.30 (type I), 9.36 (type IV), 5.54 (type VI), 7.95 (type V), and 5.82 (type XI), with a mean of 7.59. The pI values of the collagen α3 chains were 5.87 (type VI) and 5.82 (type V), with a mean value of 5.85. The mean pI value of camel hoof type I collagen was 9.29, significantly higher than that of bovine pericardium type I collagen and pure bovine type I triple-helical chains [23]. Compared to bovine trotter and bone collagen, the mean pI of collagen in CHC was 6.84 [19,20]. Due to the positive or
negative charge it gives the sample, a pH value lower than 4.0 or higher than 10.0 is acceptable for the purification process of CHC. Minor collagen structure variations can be observed in tissues derived from the same animal or across various animal species. Due to these potential variations, it is possible to separate collagen molecules of different weights [23]. The collagen molecular weights in CHC were mainly distributed in the range of 100–200 kDa (iBAQ of 49.65%), with the largest being collagen type VI α3 (339.58 kDa) and the smallest being the collagen type XI α1 fragment (89.147 kDa). The molecular weights of type I collagen in CHC were 138.94 kDa (α1) and 120.28 kDa (α2), which were slightly higher than the molecular weights of the α1 and α2 chains of pure bovine type I collagen [1]. This also indicates that the collagen of the camel hoof was not extensively degraded or dissolved during the extraction process. In this study, the collagen α1 chains in CHC were mainly from type I, type II (141.83 kDa), type III (138.44 kDa), type V (93.742 kDa), type VI (108.67 kDa), and the type XI fragment (89.147 kDa), with an average molecular weight of 114.47 kDa, and the α2 chains were mainly from type I, type IV (147.58 kDa), type V (123.7 kDa), type VI (97.135 kDa), and type XI (172.24 kDa), with an average molecular weight of 132.19 kDa. There are also two α3 chains, from type VI and type V (172.07 kDa), with an average molecular weight of 255.83 kDa, and this may be the trimeric structure of the collagen α-chain, which is often referred to as the γ-chain [10].

This further proves that the camel hoof collagen molecule consists of the α1 chain, α2 chain, and α3 chain. The intensity of the α1 chain of CHC (26.39%) was higher than that of the α2 chain (22.89%), while the α3 chain was only 0.09%. This is consistent with previous polyacrylamide gel studies. Additionally, a procollagen C-endopeptidase enhancer (48.21 kDa) was found. These findings imply that the structural stability of camel hoof tissue is considerably maintained by the α1 and α2 chains of collagen, particularly from collagen I and II.

CHC Amino Acid Composition
The amino acid composition of CHC is listed in Table 2.
CHC had the highest glycine (15.11 ± 1.78 g/100 g), followed by proline (9.54 ± 1.06 g/100 g), glutamic acid (7.28 ± 0.46 g/100 g), alanine (6.20 ± 0.38 g/100 g), and hydroxyproline (6.06 ± 1.16 g/100 g).The total content of hydroxyproline and proline was found to be 15.61 ± 1.28 g/100 g.In contrast to most proteins, collagen contains nearly 1/3 of glycine and is free of cysteine and tryptophan [24].The sample comprised 153 proline, 97 hydroxyproline, and 243 glycine per 1000 residues of amino acids.Its tyrosine, histidine, and methionine contents were minimal, and neither cysteine nor tryptophan could be found.The observed result aligns with the normal arrangement of collagen in mammals [2].In addition, the high content of polar amino acids is also a characteristic of collagen amino acids [6,23,25].The polar amino acid content of CHC was 19.53 ± 1.15 g/100 g, representing 31.39% of the total amino acid content.This finding is significantly higher than fish skin and bovine Achilles tendon [19,26].The FT-IR spectra of CHC (Figure 2) showed that the characteristic frequency region of its FT-IR spectrum was from 3500 to 1350 cm −1 , which was due to the many distinctive vibrational modes or group frequencies of the amide groups within the protein molecule [27].The FTIR spectrum is similar to type I collagen, consistent with the above identification results.More specifically, all major bands associated with collagen-related functional groups were detected in CHC as the amide A band (3409-3280 cm −1 ), amide B band (2972 cm −1 ), amide I band (1658 cm −1 ), amide II band (1554 cm −1 ), and amide III band (1235 cm −1 ).The amide I and amide II bands are the leading ones identifying collagen.The amide I band is directly related to the collagen backbone conformation, which reflects the change in ν C=O stretching vibration on the peptide backbone [28].The amide I band of CHC had a higher wave number than the amide I band of marine fish collagen.This indicated that the peptide chain backbone of camel-hoof-source collagen was better ordered, and the triple helix structure was stable [29].The amide II and III bands are characteristic frequency regions that occur after coupling N-H bending vibrations (δ NH ) and C-N stretching vibrations (ν C-N ).They are also distinct absorption regions for the secondary structure of proteins [26].The presence of the amide II band suggests that the secondary collagen structure in CHC is stable.There is a minimal amount of irregular curling transition of CHC at low frequencies, which is the cause of the amide II band peak's lower intensity [30].N-H stretching bands (the amide A band of aggregates) and asymmetric stretching vibrations (ν as C-H, amide B band) were also observed.The wider peak shape of the amide A band is due to the red shift of the CHC molecule's vibrational frequency due to the N-H bond's multi-polymerization with hydrogen bonding [31,32].Since the amide B band's telescopic vibration frequency exhibits a spectrum blue shift and its absorption intensity is weaker than that of the amide A band, the CHC may include more hybridized molecules, which would raise the bond energy and reduce the bond length of the amide B band [28].The sample also has a peaked shape in the 1340 cm −1 band, representing the CHC showing the characteristics of proline side-chain rocking vibration [2].In addition, a C-N stretching vibration at 1409 cm −1 suggests an interaction between the amide group and the molecule of the triple helix structure [33].No absorption band 
corresponding to the carboxyl functional group was found in CHC (1740–1720 cm−1), consistent with the lyophilized pure collagen results previously reported in the literature [19]. Thus, the above analyses suggest that CHC retains a relatively intact triple-helical structure and has a high degree of purity.

Circular Dichroism (CD) Spectra
Circular dichroism spectroscopy is one of the most important methods for studying protein conformation and determining the secondary structure of collagen. Figure 3 demonstrates the CHC CD spectrum. Natural collagen has a triple helix structure and a CD spectrum with a positive absorption peak near 225 nm and a negative absorption peak near 197 nm [34]. CHC shows strong positive and negative absorption at 221 nm and 204 nm, respectively, consistent with the triple helix conformation characteristic of the protein. If the collagen is completely denatured, the positive absorption peak at 220 nm disappears entirely, and the negative absorption peak is red-shifted [31]. This suggests that CHC has a complete collagen triple helix structure.

Emulsification and Emulsion Stability
Collagen emulsion stability and emulsification are influenced by pI, mass concentration, and ambient ionic concentration [35]. Figure 4a shows that the emulsification increased with increasing concentration of CHC, while the emulsification stability gradually decreased. The maximum emulsification was attained at a concentration of 0.5%, after which there was a small decline. The adsorption force between collagen molecules gradually increases as the concentration rises. A tight interfacial membrane layer will form when it reaches a certain critical value. Even if the concentration continues to grow, the emulsification ability no longer shows apparent changes [36]. Figure 4a also shows that the emulsification stability of CHC continues to decrease with increasing concentration. This indicates that collagen surface tension increased due to macromolecular aggregation and decreased hydrophobicity [33].
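The emulsification and emulsion-stability percentages discussed here are, according to the Methods, computed from the recorded emulsion-layer volumes and heights; since the formulas themselves are not reproduced above, the expressions below are a commonly used form and should be read as an assumption rather than the authors' exact definitions:

\[
\text{Emulsification (\%)} = \frac{V_{\text{emulsion layer}}}{V_{\text{total}}} \times 100, \qquad
\text{Emulsion stability (\%)} = \frac{H_{\text{emulsion layer, 30 min at 50}^{\circ}\text{C}}}{H_{\text{initial emulsion layer}}} \times 100 .
\]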
Figure 4b shows that the trends of emulsification and emulsion stability of CHC were similar with increasing environmental pH. Both reached high values at pH 3-4 and gradually decreased, with the lowest emulsification (50.10 ± 1.03%) and emulsion stability (58.61 ± 1.20%) at pH 7, before increasing slightly again over pH 7-10. This indicates that the isoelectric point of CHC is near pH 7, where fewer hydrophilic groups are exposed in the molecule, reducing its ability to bind water and lowering its solubility [34]. In the pH range away from the isoelectric point, the collagen molecules are well dispersed, allowing them to move faster toward the interfacial membrane and increasing emulsification [9]. Higher pH levels increase the net negative charge of the camel collagen molecules, causing the chains to repel one another more strongly; this reduces both emulsification and emulsion stability [37].
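The paper reports emulsification and emulsion stability as percentages; the exact measurement protocol is not given in this excerpt, so the sketch below assumes one common volumetric definition (emulsified-layer volume fractions), purely for illustration:

```python
def emulsification_pct(emulsified_ml: float, total_ml: float) -> float:
    """Emulsifying activity: volume fraction of the emulsified layer (assumed definition)."""
    return 100.0 * emulsified_ml / total_ml

def emulsion_stability_pct(emulsified_after_ml: float, emulsified_initial_ml: float) -> float:
    """Stability: share of the initial emulsified layer that persists (assumed definition)."""
    return 100.0 * emulsified_after_ml / emulsified_initial_ml

# Hypothetical readings, close to the pH 7 minimum quoted above
print(emulsification_pct(5.0, 10.0))      # 50.0 %
print(emulsion_stability_pct(2.93, 5.0))  # 58.6 %
```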
Figure 4c shows that the emulsification of CHC increased over the 0.1-0.5% NaCl concentration range, with both emulsification and emulsion stability reaching high values at 0.5% NaCl. This may be because an increase in NaCl concentration weakens the attraction between collagen molecules, allowing salting-in to occur and oil droplets to adsorb at the collagen interface [5], which increases the stability and emulsification of collagen. As the concentration rises further, salt ions compress the diffuse double layer, decreasing the potential on the surface of the emulsion droplets; this lowers the repulsive force barrier of the emulsion system and makes it easier for the droplets to aggregate, which reduces the stability and emulsification of collagen [3,38].

Moisture Absorption and Moisturizing Properties

Figure 5 displays the humectancy and hygroscopicity curves of CHC and glycerol. The change in the humectancy of CHC was slightly greater than that of glycerol but smaller than those of fish scale collagen and camel skin collagen [16,39]. The humectancy of CHC and glycerol steadily decreased with time (Figure 5a). After 1 h, the moisture retention rates of CHC and glycerol had decreased significantly, to 75.67 ± 1.54% and 80.37 ± 1.66%, respectively. After 24 h, the difference in humectancy between CHC (52.06 ± 1.55%) and glycerol (59.45 ± 2.21%) was small, and this persisted for an extended time, up to 48 h, over which the change in their humectancy was not significant. This is because collagen contains an abundance of hydrophilic amino acids, such as glycine, hydroxyproline, and hydroxylysine, that prevent water loss [40].
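A moisture retention rate of this kind is typically the remaining water expressed relative to the initial water content; a minimal sketch under that assumption (hypothetical masses, not the paper's raw data):

```python
def humectancy_pct(current_mass_g: float, initial_mass_g: float, dry_mass_g: float) -> float:
    """Moisture retention: water still held, relative to the water present at t = 0."""
    return 100.0 * (current_mass_g - dry_mass_g) / (initial_mass_g - dry_mass_g)

# Hypothetical sample: 1.00 g dry matter plus 1.00 g water at t = 0
print(humectancy_pct(current_mass_g=1.76, initial_mass_g=2.00, dry_mass_g=1.00))  # 76.0 %
```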
At 30 °C and 80% humidity, the hygroscopicity of CHC and glycerol increased over time (Figure 5b). According to earlier research, hygroscopicity is influenced by the sample's molecular makeup, the type and quantity of exposed hydrophilic groups, and the extraction technique [41]. The increase in the hygroscopicity of CHC was significantly weaker than that of glycerol after 4 h. At 48 h, it reached 8.57 ± 0.13%, higher than that of snakehead skin collagen [32]. This indicates that the number and type of hydrophilic groups exposed by CHC molecules exceed those of soft-shelled turtle calipash and camel skin collagen [13,16].

Rheological Analysis

The steady-state viscosity of the CHC solution versus the shear rate is shown in Figure 6. The steady-state viscosity of the CHC solution decreased with increasing shear rate but increased with increasing solution concentration. When the shear rate increases, shear stress between the flow layers makes the more dispersed chain particles roll and spin into compact groups, lowering the number of physical cross-linking points and decreasing the viscosity [5,42]. CHC solutions of 0.1-0.9% showed typical pseudoplasticity (Figure 6a). At low shear rates, the collagen molecules were entangled and formed more physical cross-linking sites, increasing the solution's apparent viscosity [42]. Previous studies have reported that the higher the relative molecular mass of straight-chain polymer molecules, the more pronounced the pseudoplasticity [16].
Thus, the CHC solution exhibits shear-thinning rheological behavior and is a pseudoplastic fluid. The viscous modulus (G″) and elastic modulus (G′) of the CHC solutions increased with increasing shear frequency (Figure 6b). In the 0.01-68.13 Hz range, G′ of the CHC solution increased more than G″. In the 0.01-14.68 Hz range, G″ of the CHC solution was greater than G′, and the solution was viscous-dominated [2]; with increasing shear frequency, the CHC solution entered an elastic-dominated region (G″ < G′). The crossover point of G″ and G′ gradually increased with increasing concentration, with the corresponding frequencies mainly distributed in the ranges 14.68-21.54 Hz, 14.68-21.54 Hz, 31.62-46.42 Hz, 46.42-68.13 Hz, and 46.42-68.13 Hz. The findings were superior to earlier studies on bovine collagen [21], which could be attributed to the camel's colossal body weight, hostile environment, and the protein composition and molecular structure of camel hoof [11]. Figure 6c shows the changes in tan δ as a function of frequency. The loss factor tan δ (= G″/G′) marks the transition of the sample from solid-like to liquid-like behavior; the smaller the value of tan δ, the more pronounced the elastic behavior of the sample [16]. The tan δ of the CHC solution tested at 0.01-100 Hz did not vary much, mostly lying in the range of 0-1. This implies that elasticity plays an essential role in the CHC solution system. As a result, the 0.1-0.9% CHC solutions might have a stable liquid network structure.
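Shear-thinning of this kind is conventionally quantified with the Ostwald-de Waele (power-law) model, η = K·γ̇^(n−1), where a flow index n < 1 indicates pseudoplastic behaviour. A minimal fitting sketch with hypothetical data (not the paper's measurements):

```python
import numpy as np

# Hypothetical steady-shear data for one concentration
gamma_dot = np.array([0.1, 1.0, 10.0, 100.0])  # shear rate, 1/s
eta = np.array([8.0, 2.5, 0.8, 0.25])          # apparent viscosity, Pa*s

# The power-law model eta = K * gamma_dot**(n - 1) is linear in log space:
# log(eta) = log(K) + (n - 1) * log(gamma_dot)
slope, intercept = np.polyfit(np.log(gamma_dot), np.log(eta), 1)
n = slope + 1.0          # flow behaviour index
K = np.exp(intercept)    # consistency coefficient
print(f"n = {n:.2f} (< 1 => pseudoplastic), K = {K:.2f} Pa*s^n")
```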
Conclusions

In conclusion, the current study found that camel hoof tissue is a natural terrestrial source for collagen extraction. CHC proteomics revealed 13 collagen molecular components and one collagen enhancer, results that have not been previously reported. The CHC samples contained mostly ANK and type I collagen, followed by type II and VI collagen, and showed the spectroscopic and amino acid characteristics of typical collagen. In addition, CHC showed good emulsification stability, humectancy, hygroscopicity, and stable rheological properties. More research is needed to characterize camel-derived collagen in vivo.

Figure 5. Humectancy (a) and hygroscopicity (b) curves of CHC and glycerol.

Table 1. Top 10 proteins and all collagen types identified in CHC samples.

Table 2. Amino acid composition and contents in CHC (g/100 g sample).
2023-09-06T15:16:01.733Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "c68da0f18b15a4f9f3e34fc0ee880b17f81098f7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/12/17/3303/pdf?version=1693624323", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1d32bc18d07f9fbee992615bcda0524d38838a99", "s2fieldsofstudy": [ "Materials Science", "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251814982
pes2o/s2orc
v3-fos-license
On the Interpolation of Contextualized Term-based Ranking with BM25 for Query-by-Example Retrieval

Term-based ranking with pre-trained transformer-based language models has recently gained attention, as these models bring the contextualization power of transformers into highly efficient term-based retrieval. In this work, we examine the generalizability of two of these deep contextualized term-based models in the context of query-by-example (QBE) retrieval, in which a seed document acts as the query to find relevant documents. In this setting -- where queries are much longer than common keyword queries -- BERT inference at query time is problematic as it involves quadratic complexity. We investigate TILDE and TILDEv2, both of which leverage the BERT tokenizer as their query encoder. With this approach, there is no need for BERT inference at query time, and the query can be of any length. Our extensive evaluation on the four QBE tasks of the SciDocs benchmark shows that in a query-by-example retrieval setting TILDE and TILDEv2 are still less effective than a cross-encoder BERT ranker. However, we observe that BM25 shows a competitive ranking quality compared to TILDE and TILDEv2, which is in contrast to the findings about the relative performance of these three models on retrieval for short queries reported in prior work. This result raises the question of whether contextualized term-based ranking models are beneficial in the QBE setting. We follow up on our findings by studying the score interpolation between the relevance scores from TILDE (TILDEv2) and BM25. We conclude that these two contextualized term-based ranking models capture different relevance signals than BM25, and that combining the different term-based rankers results in statistically significant improvements in QBE retrieval. Our work sheds light on the challenges of retrieval settings different from the common evaluation benchmarks.

INTRODUCTION

Query-by-Example (QBE) retrieval is an Information Retrieval (IR) setting in which a seed document acts as the query to represent the user's information need and the retrieval engine searches over a collection of the same type of documents [1,19,20,29]. This retrieval setup is typical in professional, domain-specific tasks such as legal case law retrieval [1,2], patent prior art search [11,24,25], and scientific literature search [1,19,20]. While using a document as a query can be challenging due to its length and complex semantic structure, prior work has shown that traditional term-based retrieval models like BM25 [27] are highly effective when used in QBE retrieval [1,2,28]. Recently, deep contextualized term-based retrieval models have gained attention as they bring the contextualization power of pre-trained transformer-based language models into highly efficient term-based retrieval. Examples of such models are DeepImpact [18], SPLADE [10], SPLADEv2 [9], TILDE [34], TILDEv2 [33], COIL [12], and uniCOIL [16]. Here, we specifically investigate TILDE, which is a term independent likelihood model, and its follow-up TILDEv2, which is a deep contextualized lexical exact matching model. TILDE and TILDEv2, which are introduced as term-based re-ranking models, follow a recent paradigm in term-based retrieval where term importance is pre-computed with scalar term weights. Besides, to predict the relevance score, both of these models use the BERT tokenizer as their query encoder, which means that they do not need to perform any BERT inference at query time to encode the query.
However, leveraging tokenizer-based encoding of the query trades off the query representation, and therefore effectiveness, against higher efficiency at inference time [33]. While the effectiveness of these models has been evaluated on tasks and benchmarks with short queries, e.g., MSMARCO Passage Ranking [21] and the TREC DL Track [7], in this paper we evaluate them in the aforementioned QBE retrieval setting where queries are much longer than common keyword queries. In this regard, we address the following research questions:

RQ1 How effective are TILDE and TILDEv2 in query-by-example retrieval?

A specific direction in answering RQ1 is to investigate the ranking quality of TILDE and TILDEv2 in comparison with the effective cross-encoder BERT ranker [1,22], which is described in section 2.4. We are interested in this direction for two reasons. First, the cross-encoder BERT ranker exhibits quadratic complexity in both space and time with respect to the input length [17], and this is aggravated in QBE where we have long queries. TILDE and TILDEv2, however, do not need any BERT inference at query time. Second, due to the maximum input length of BERT, the cross-encoder BERT ranker, which uses the concatenation of the query and the document, might not cover all of the query and document tokens in a QBE setting, whereas in TILDE and TILDEv2 the query can be of any length and documents are covered up to the maximum input length of BERT. Additionally, since TILDEv2 pre-computes the term weights only for those tokens existing in the documents, one risk is that it might aggravate the vocabulary mismatch problem. A typical approach to address this issue is to use document expansion methods. Zhuang and Zuccon [33] use TILDE as their document expansion model for TILDEv2. We adopt that approach for our task and further investigate the impact of token-based document expansion with TILDE on the ranking quality of TILDEv2 in a QBE retrieval setting. Apart from comparing TILDE and TILDEv2 to the cross-encoder BERT ranker, we also make a comparison to traditional lexical matching models (BM25 and probabilistic language models), which have been shown to be strong baselines on QBE tasks in prior work [2,28]:

RQ2 What is the effectiveness of traditional lexical matching models with varying tokenization strategies in comparison to TILDE and TILDEv2?

To answer RQ2 we will investigate the effect of using the BERT tokenizer [8] as pre-processing for traditional term-based retrieval models. By doing so, we align the index vocabulary of the traditional models with that of TILDE and TILDEv2, which makes our comparison more fair. We will see in Section 4 that BM25 shows a competitive ranking quality in comparison to TILDE and TILDEv2 on our QBE benchmark. Because of the similar quality on average, we are interested to see if the relevance signals of TILDE and TILDEv2 are different from that of BM25, to find out if the methods are complementary to each other. To this aim, we will investigate the following research question:

RQ3 To what extent do TILDE and TILDEv2 encode a different relevance signal from BM25?

To address the question above, as described in detail in Section 3.3, we will analyze the effect of the interpolation of the scores of TILDE and TILDEv2 with BM25. Since TILDE and TILDEv2 are introduced as re-ranking models, we use four different tasks from the SciDocs evaluation benchmark [5] as a domain-specific QBE benchmark. This benchmark uses scientific paper abstracts as the queries and documents.
The retrieval setting in these tasks suits a re-ranking setup because of the number of documents to be ranked for each query. Since we are working in a domain-specific evaluation setting, we will also address the following research question:

RQ4 To what extent does a highly tailored domain-specific pre-trained BERT model affect the effectiveness of TILDE and TILDEv2 in comparison to a BERT base model?

In summary, our main contributions in this work are three-fold:

• We show that two recent transformer-based lexical models (TILDE and TILDEv2) are less effective in Query-by-Example retrieval than was expected based on results reported for ad hoc retrieval. This indicates that QBE retrieval is structurally different from other IR settings and requires special attention for methods development;
• We show that the relevance signals of TILDE and TILDEv2 can be complementary to that of BM25, as interpolation of the methods leads to an improvement in ranking effectiveness;
• We also investigate interpolations of BM25 with TILDE and TILDEv2 in an ideal setting where the optimal interpolation weight is known a priori, and by doing so, we show that more stratified approaches for the interpolation could result in higher gains from the interpolation of BM25 with TILDE and TILDEv2.

In section 2 we describe the retrieval models used in this work. In section 3 we provide details about our methods and experiments, and in section 4 we analyze the results and discuss the answers to our research questions. Section 5 is dedicated to further analysis of the results, and finally, in Section 6 we provide the conclusion. The code used in this paper is available at: https://github.com/aminvenv/lexica

BACKGROUND: RETRIEVAL MODELS

In this section, we briefly introduce the retrieval models that we implement and evaluate in our experiments.

Traditional lexical matching models

BM25. For BM25 [27], we use the implementation by Elasticsearch with the parameters k1 = 2.75 and b = 1, which were tuned on the validation set.

Probabilistic Language Models. For language modeling (LM) based retrieval [4,13,26], we use the built-in similarity functions of Elasticsearch for the implementation of a language model with Jelinek-Mercer (JM) smoothing [32].

Term Independent Likelihood Model: TILDE

TILDE is a tokenizer-based term-based retrieval model which follows a term independence assumption and formulates the likelihood of a query as follows:

$\log P(q \mid d) = \sum_{t \in q} \log P(t \mid d),$   (1)

in which $q$ is the query and $d$ is the document. As Figure 1 shows, to compute the relevance score, the text of a document is fed as the input to BERT and the log probability for each token is estimated by using a language modeling head on top of the BERT [CLS] token output. In other words, we pre-compute the term weights over the complete BERT vocabulary.

Figure 1: Left: TILDE [34]. Right: TILDEv2 [33]. Both TILDE and TILDEv2 leverage the BERT tokenizer as their query encoder. d_i stands for the i-th token of the document, and the predicted likelihood distributions have the same length as the BERT vocabulary size.

During both training and inference time, the query text is tokenized by using a BERT tokenizer and the resulting token IDs are used to look up the corresponding log probabilities from the likelihood distribution predicted in the output of the language modeling head. It is worth mentioning that the document likelihood can be computed in a similar way by swapping the query and document; however, we only use the query likelihood (Equation 1) in our experiments. For TILDE, we use the implementation from the authors' code repository.
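To make the scoring mechanics concrete, here is a minimal sketch of Equation 1 (our illustration, not the authors' implementation; details of the official code, such as stop-word handling, are omitted, and doc_log_probs is assumed to be the vocabulary-sized log-probability vector pre-computed for one document):

```python
import numpy as np
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def tilde_score(query: str, doc_log_probs: np.ndarray) -> float:
    """Query likelihood (Equation 1): the query is "encoded" by the tokenizer
    alone, and its token ids index into the document's pre-computed,
    vocabulary-sized vector of log probabilities."""
    token_ids = tokenizer(query, add_special_tokens=False)["input_ids"]
    return float(sum(doc_log_probs[t] for t in token_ids))

# doc_log_probs would be produced offline, at indexing time, by the BERT
# language modeling head over the document text (not shown here).
```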
We report results for the TILDE model with different initial checkpoints as the BERT encoder for our fine-tuning procedure. TILDE BERT uses bert-base-uncased, TILDE SciBERT uses SciBERT, and TILDE MSMARCO uses a TILDE which is already fine-tuned on MSMARCO; we use TILDE MSMARCO in a zero-shot setting on our data.

Lexical Exact Matching: TILDEv2

TILDE has a drawback in that it expands each document to the size of the BERT tokenizer vocabulary. To tackle this problem, the authors proposed TILDEv2. TILDEv2, which builds upon uniCOIL [16] and TILDE, follows a recent paradigm in contextualized lexical exact matching in which BERT is used to output a scalar importance weight for document tokens [16,33]. As shown in Figure 1, in TILDEv2 the token representation is downsized into a scalar weight, and the relevance score between a query and a document pair is computed as a sum over the contextualized term weights for all terms appearing in both the query and the document:

$s(q, d) = \sum_{t_i \in q \cap d} \mathrm{count}_q(t_i) \cdot v_i$

Here, $q$ and $d$ are the query and the document, respectively; $t_i$ is the i-th token of the document; $v_i$ is the term importance weight for the i-th token of $d$, and $\mathrm{count}_q(t_i)$ is the count of the i-th unique token in the query, which is obtained by using the BERT tokenizer as the query encoder. In this equation, $v_i$ is computed using the same method as in Lin and Ma [16], in which a ReLU function is used on the projection layer to force the model to map the token representations into positive scalar weights:

$v_i = \mathrm{ReLU}(\mathbf{w}^{\top} \mathbf{h}_i + b),$

in which $\mathbf{h}_i$ is the representation of the i-th token in document $d$ and $b$ is the learnable bias parameter of the projection layer. Lin and Ma [16] show that using a scalar weight as term importance (uniCOIL [16]) instead of a vector representation (COIL [12]) results in a decrease in effectiveness; however, by using document expansion, uniCOIL can achieve higher effectiveness. Following the method proposed by Zhuang and Zuccon [33] for document expansion with TILDE, we will show how TILDEv2 behaves when we expand documents with TILDE. For TILDEv2, we use the implementation from the authors' code repository.
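Analogously, a minimal sketch of the TILDEv2-style exact-match scoring (our illustration; it assumes the document weights were pre-computed into a token-id-to-weight map, with repeated document tokens reduced to a single weight):

```python
from collections import Counter
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def tildev2_score(query: str, doc_term_weights: dict[int, float]) -> float:
    """For every unique query token that also occurs in the document, add its
    pre-computed positive document weight, multiplied by the token's count
    in the query (the tokenizer again acts as the query encoder)."""
    query_ids = tokenizer(query, add_special_tokens=False)["input_ids"]
    counts = Counter(query_ids)
    return float(sum(c * doc_term_weights.get(t, 0.0) for t, c in counts.items()))

# doc_term_weights would come from the scalar projection head (ReLU output)
# applied to each document token at indexing time (not shown here).
```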
Cross-encoder BERT Ranker

The state-of-the-art results on SciDocs are reported by Abolghasemi et al. [1], where they use a multi-task optimized cross-encoder BERT ranker [22]. The cross-encoder BERT ranker uses the concatenation of the query and the document as the input to a BERT encoder. The BERT encoder is then followed by a projection layer on top of its [CLS] token to compute the relevance score:

$s(q, d) = \mathbf{W} \cdot \mathrm{BERT}_{[CLS]}([CLS]\ q\ [SEP]\ d\ [SEP])$

In this equation, $q$ and $d$ represent the query and the document, respectively, and [CLS] as well as [SEP] are special BERT tokens [8].

METHODS AND EXPERIMENTAL SETTINGS

In this section, we provide details and preliminaries about our methods and experimental settings.

Evaluation Benchmark

We run our experiments on the SciDocs benchmark [5]. This dataset was originally introduced as a benchmark for representation learning tasks. Later, several works including [1,19] used the tasks of {co-view, co-read, citation, co-citation}-prediction from this benchmark as a query-by-example retrieval setting. As Figure 2 depicts, in this setting, given a query document, the goal is to retrieve and rank the most relevant documents out of a collection. The evaluation dataset for each of these four tasks includes approximately 30K total papers from a held-out pool of papers, consisting of 1K query papers and a candidate set of up to 5 positive papers and 25 negative papers [5]. To make our results comparable, we follow the prior work on SciDocs to prepare the same training data [1]. To this aim, we take the validation set of each of the tasks and use 85% of it for training and 15% for validation. Thus, each query in the train set has 5 relevant documents and 25 non-relevant documents. While TILDE is trained over relevant query-document pairs [34], TILDEv2 needs triplets in the format of (query, positive document, negative document). To prepare these triplets we pick two non-relevant documents per relevant document. By doing so, we create 10 triplets out of the 30 training samples for each query. It should be noted that, following Cohan et al. [5], we use a concatenation of the abstract and title of each paper as its document text.

BERT-based Tokenization in Traditional Models

In order to address RQ2, we will examine the effects of transformer-based tokenizers as text pre-processors for traditional retrieval models. Doing so aligns the index vocabulary of the traditional models with that of TILDE and TILDEv2, which in turn makes our comparison more fair. Transformers use different tokenization mechanisms, e.g., WordPiece [31], which result in different query and document representations compared to common word-based tokenization approaches that are sometimes combined with normalization steps such as stemming and lemmatizing. Kamps et al. [14] show that using the BERT tokenizer as a pre-processor for BM25 results in higher efficiency at the cost of a small decrease in effectiveness on the TREC 2020 Deep Learning Track [6]. QBE retrieval, however, has the challenge of long queries. In this work, we investigate whether the same effect applies to a QBE retrieval setting. To this aim, we use the BERT base tokenizer as a pre-processor for LM and BM25. In addition, we use the SciBERT tokenizer, which is a domain-specific BERT tokenizer, to find out if a domain-specific tokenizer has a different effect in comparison to the BERT base tokenizer. We use three different pre-processing setups in Elasticsearch to compare with our two transformer-based tokenizers:

• Elasticsearch Standard Analyzer (SA)
• Lowercase token filter, Porter stemmer, Whitespace tokenizer (STM1)
• Lowercase token filter, Porter stemmer, Standard tokenizer (STM2)

In Table 2, the models corresponding to these setups have SA, STM1, and STM2 as their subscripts, respectively. BERT-Token and SciBERT-Token as subscripts stand for using the BERT and SciBERT tokenizers as the text pre-processors.

Interpolation between BM25 and TILDE (TILDEv2) scores

To answer RQ3 about the difference between BM25 and TILDE (as well as TILDEv2) in terms of their relevance signals, following Wang et al. [30], we evaluate the effect of the interpolation between the relevance scores from BM25 and from the contextualized term-based ranking models TILDE and TILDEv2. To this aim, the interpolated score is computed as follows:

$s_{interp}(q, d) = \alpha \cdot s_{BM25}(q, d) + (1 - \alpha) \cdot s_{model}(q, d)$

Here, $s_{BM25}$ stands for the BM25 score for query $q$ and document $d$, and $s_{model}$ refers to the relevance score from TILDE or TILDEv2. Also, $\alpha$ is the hyperparameter that controls the impact of the scores from BM25 and TILDE (or TILDEv2). Prior to the interpolation, both of the relevance scores are normalized using z-scaling (subtracting the mean and dividing by the standard deviation). We optimize $\alpha$ on the validation set. Additionally, to further investigate the impact of interpolation, we perform a per-query oracle interpolation in which we assume the best interpolation setting, i.e., the optimal $\alpha$, could be predicted per query, and thus we can explore how much effectiveness is reachable by the interpolation of the scores.
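A minimal sketch of both interpolation variants for a single query's candidate list (our illustration; `metric` is a placeholder for an evaluation function such as nDCG or MAP over the candidate ranking):

```python
import numpy as np

def z_norm(scores: np.ndarray) -> np.ndarray:
    """z-scaling: subtract the mean and divide by the standard deviation."""
    return (scores - scores.mean()) / scores.std()

def interpolate(bm25: np.ndarray, model: np.ndarray, alpha: float) -> np.ndarray:
    """alpha * BM25 + (1 - alpha) * TILDE/TILDEv2, after z-scaling both lists
    of candidate scores for one query."""
    return alpha * z_norm(bm25) + (1.0 - alpha) * z_norm(model)

def oracle_alpha(bm25, model, relevance_labels, metric) -> float:
    """Per-query oracle: pick the alpha on a 0.1-step grid that maximises the
    evaluation metric for this query's candidates."""
    grid = np.round(np.arange(0.0, 1.01, 0.1), 1)
    return float(max(grid, key=lambda a: metric(interpolate(bm25, model, a), relevance_labels)))
```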
In the remainder of the paper, "oracle interpolation" refers to this latter interpolation setup and "non-oracle interpolation" refers to the vanilla interpolation, i.e., one α for all queries, optimized on the validation set.

Document Expansion with TILDE

The average token count of SciDocs documents (abstract+title) is 219 and 208 for BERT and SciBERT, respectively. Their 90% token count quantiles are 341 and 385. Comparing these numbers to the maximum input length of BERT models, i.e., 512 tokens, we can see a capacity for the expansion of the documents. To further investigate RQ1, following recent works which use document expansion to alleviate the vocabulary mismatch in contextualized term-based retrieval [16,33], we evaluate the impact of retrieval on documents which are expanded at indexing time. To this aim, we use TILDE in the same way as the original paper [33]. TILDE has the advantage of being more efficient than doc2query [23]. In this work, using TILDE SciBERT, which we found performs best compared to the other TILDE models (Table 1), we generate m = 200 and m = 300 expansion terms for TILDEv2 SciBERT. It is noteworthy that, similar to the original paper [33], not all expansion terms are added to a document: only new expansion terms that are not yet present in the document are added.

Domain-Specific BERT in TILDE and TILDEv2

To answer RQ1 and RQ4, we will investigate the power that can be brought by domain-specific pre-training to term-based ranking models. To do so, we evaluate the models' ranking quality in three settings: a) using BERT base as the encoder, b) zero-shot utilization of TILDE and TILDEv2 models which are already fine-tuned on MSMARCO, and c) using a domain-specific pre-trained BERT as their encoder. Specifically, we use SciBERT [3], since our evaluation benchmark is from the scientific domain.

Figure 2: In the Query-by-Example retrieval setting, given a document (in its meaning as a unit of retrieval [17]) as the query, the goal is to retrieve and rank the top-k relevant documents {d_1, d_2, ..., d_k} out of a collection of documents. We use the four QBE tasks from the SciDocs [5] benchmark, {co-view, co-read, cite, co-cite}, each of which has its own relevance criterion [5].

Implementation Details

We run our experiments on NVIDIA RTX 3090 GPU machines with 24GB GPU memory. For BERT base and SciBERT, we use the pre-trained models available on Huggingface. All BERT-based models are trained for 5 epochs. We use the Adam optimizer [15] with a learning rate of 2 × 10^-5 for TILDE, and the AdamW optimizer with a learning rate of 5 × 10^-6 for TILDEv2. In addition, we relax the maximum document length to the maximum input length of BERT during indexing.

RQ1. How effective are TILDE and TILDEv2 in query-by-example retrieval? and RQ4. To what extent does a highly tailored domain-specific pre-trained BERT model affect the effectiveness of TILDE and TILDEv2 in comparison to when we use a BERT base model?

As Table 1 shows, TILDE and TILDEv2 are less effective than a cross-encoder BERT ranker in QBE retrieval, despite the longer queries. This could be due to the fact that the cross-encoder BERT ranker applies all-to-all attention across tokens in both the query and the document [17], and thus query terms and document terms are highly contextualized for the estimation of the relevance score. In addition, we see that TILDEv2 BERT outperforms TILDE BERT despite TILDEv2 being highly prone to the vocabulary mismatch problem.
One hypothesis for this observation could be that in a domain-specific retrieval setup like ours, TILDEv2 with the BERT base encoder predicts more effective document term weights than the term weights predicted for all tokens in the BERT vocabulary by TILDE with the BERT base encoder. In addition, using SciBERT as our domain-specific pre-trained BERT model unsurprisingly improves the ranking quality of both TILDE and TILDEv2; however, this improvement is larger between TILDE BERT and TILDE SciBERT than between TILDEv2 BERT and TILDEv2 SciBERT, to the extent that TILDE SciBERT even outperforms both TILDEv2 BERT and TILDEv2 SciBERT. This observation could be due to the fact that the vocabulary mismatch problem caused by exact matching limits the TILDEv2 ranking quality, even if we use a highly tailored domain-specific BERT as its encoder. In this respect, we investigate the impact of token-based document expansion (see section 3.4) with TILDE on the ranking quality of TILDEv2 in our QBE retrieval setting. The corresponding rows in Table 1 report the ranking results on the documents that were expanded using TILDE with the method introduced by Zhuang and Zuccon [33]. Here, we are interested to find out if using document expansion is able to compensate for the gap in the ranking quality between TILDE SciBERT and TILDEv2 SciBERT. As shown in Table 1, TILDEv2 SciBERT with m = {200, 300} expansion terms is still less effective than TILDE SciBERT.

Table 1: Ranking quality on the four SciDocs benchmark tasks using contextualized term-based ranking and cross-encoder BERT. "BERT" and "SciBERT" refer to the pre-trained model used as the encoder. "MSMARCO" indicates the utilization of TILDE or TILDEv2 already fine-tuned on MSMARCO. The expanded-document rows refer to the experiments on documents expanded with m terms using TILDE SciBERT, as described in section 3.4. Statistically significant improvements are according to a paired t-test (p<0.05) with Bonferroni correction for multiple testing. Two rows are included from Table 2.

Furthermore, the expansion with m = {200, 300} adds only a limited number of new tokens on average. These numbers, beside the statistics of the tokens in the SciDocs benchmark provided in Section 3.4, indicate that m should be tuned in order to take advantage of document expansion with TILDE in a QBE retrieval setting. Finally, we see that the zero-shot utilization of TILDE MSMARCO and TILDEv2 MSMARCO does not show superior performance over the fine-tuned TILDE and TILDEv2 with either the BERT or SciBERT encoders. It should be noted that taking models which are already fine-tuned on a general domain (like TILDE MSMARCO and TILDEv2 MSMARCO) and further fine-tuning them on the task domain is a typical approach which could result in an improvement in their ranking quality; however, we leave this as a direction to be explored in future work.

RQ2. What is the effectiveness of traditional lexical matching models with varying tokenization strategies in comparison to TILDE and TILDEv2?

Table 2 shows that leveraging the BERT and SciBERT tokenizers results in competitive ranking quality for both probabilistic language model based retrieval and BM25, in comparison to the three traditional pre-processing setups introduced in section 3.2. Moreover, as the results in Table 1 show, the ranking quality of the best-performing BM25 variant not only outperforms LM and BM25 under the other traditional and BERT-based pre-processing approaches, but it could even outperform TILDE and TILDEv2 in most of the tasks.
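For completeness, the pre-processing alignment behind the BERT-Token and SciBERT-Token rows can be sketched as follows (our illustration: tokenize with the transformer tokenizer and re-join with whitespace, so that a standard inverted index built with a whitespace analyzer operates on the same vocabulary as TILDE and TILDEv2):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def bert_pretokenize(text: str) -> str:
    """Run the WordPiece tokenizer and re-join the pieces with spaces; the
    resulting string is what gets indexed and queried by BM25/LM."""
    return " ".join(tokenizer.tokenize(text))

print(bert_pretokenize("Query-by-example retrieval"))
# e.g. 'query - by - example retrieval' (the exact pieces depend on the vocabulary)
```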
In fact, we do not see a large gap between BM25 and TILDEv2, as was shown for retrieval based on short queries in the experiments on the MSMARCO and TREC DL Track benchmarks [33]. This finding is important as (1) it sheds light on the challenges of retrieval settings different from the common evaluation benchmarks, including MSMARCO and the TREC DL Track; and (2) it raises the question of how effective other contextualized term-based ranking models would be in those settings.

RQ3. To what extent do TILDE and TILDEv2 encode a different relevance signal from BM25?

The blue lines in Figure 3 show the ranking quality for TILDE SciBERT and TILDEv2 SciBERT when their scores are interpolated with the BM25 score over varying values of the interpolation parameter α with a step of 0.1. Besides, Table 3 shows the ranking quality for the interpolations with the α that is tuned on the validation set. We can see that an optimal interpolation between the scores from BM25 and the contextualized term-based ranking models TILDE and TILDEv2 could provide significant improvements for almost all tasks over the individual rankers participating in the interpolation. The only exceptions are in the co-view and cite tasks. To be specific, there is no improvement over BM25 in the nDCG metric in co-view (line e vs. line a in Table 3). Besides, in the cite task the improvement over TILDEv2 (line e vs. line c in Table 3) is not significant for the nDCG metric, and there is no improvement for the MAP metric. Nevertheless, the improvements obtained by the interpolation for almost all tasks and metrics indicate that TILDE and TILDEv2 capture different relevance signals compared to BM25. To further investigate the impact of the score interpolation with BM25 scores, we perform an oracle interpolation in which we assume the optimal interpolation hyperparameter α is known for each individual query. This query-specific optimal value is selected over varying values of α with a step of 0.1. Table 4 as well as the orange lines in Figure 3 show the results for the oracle interpolation. We can see that the oracle interpolation would result in a substantial improvement for both TILDE and TILDEv2. Moreover, we can see in Table 4 that there is a subset of queries for which the BM25 ranking alone is better than the interpolation (queries with optimal α = 1). This number is lower for the interpolation with TILDE than for the interpolation with TILDEv2. One hypothesis for this observation could be that the interpolation with TILDE is likely to be more helpful for BM25, since TILDE could bring more contextualization power to BM25 as it incorporates the term importance for all tokens in the query. In other words, since TILDEv2 pre-computes term weights only for the tokens of the document (whereas TILDE pre-computes the term importance weight for all the tokens in the BERT vocabulary per document), due to the chance of vocabulary mismatch, TILDEv2 could incorporate less query-dependent contextualization than TILDE. In addition, we see that the margin between the oracle interpolation results and both the non-interpolated scores and the non-oracle interpolation scores (Table 3) is substantial, which demonstrates that more complex aggregation methods could benefit more from the relevance signals from TILDE, TILDEv2 and BM25.

DISCUSSION

In this section, we further analyze the interpolation between BM25 and TILDE (TILDEv2) in terms of the interpolation effectiveness and the interpolation weight α.
Interpolation effectiveness

The first two rows at the top of Figure 3 correspond to the interpolation between TILDE and BM25, and the two rows at the bottom correspond to the interpolation between TILDEv2 and BM25. Comparing the nDCG and MAP plots for the interpolation between TILDE and BM25, we can see that for this combination α = 0.1 gives the highest ranking quality for both the nDCG and MAP metrics in all tasks. Thus, a high weight for TILDE with a small weight for BM25 gives the highest effectiveness for this combination. This observation could mean that while TILDE, as a contextualized transformer-based model, is able to outperform BM25 as an exact matching model, it can still benefit from the strong lexical relevance scores from BM25. On the other hand, for the combination of TILDEv2 and BM25, we see that the highest ranking quality is obtained with α ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8}, depending on the task. The exceptions are in the cite and co-view tasks, as described in the answer to RQ3 in Section 4. Thus, in the combination of TILDEv2 and BM25, an equal or slightly higher weight for BM25 relative to TILDEv2 gives the optimal results. A hypothesis for this observation could be that while both BM25 and TILDEv2 perform exact matching, the term weights from TILDEv2, which are predicted through contextualization of the document terms, are not always more effective than the term scores from BM25; however, they can act as complements to each other, and thus their interpolation can benefit from both.

Interpolation weight

To further analyze the interpolation weight α, we consider the two aforementioned settings of oracle interpolation and non-oracle interpolation.

Non-oracle interpolation. We can see in Figure 3 (blue lines) that for the effective interpolations, i.e., the interpolations that result in higher effectiveness than each individual ranker included in the interpolation, the interpolation weight in the combination of BM25 and TILDEv2 has a wider range than in the combination of BM25 and TILDE. This indicates that in this experimental setting the interpolation of BM25 and TILDEv2 can be achieved by a broader range of α values and is therefore more robust to the choice of the interpolation weight than for BM25 and TILDE.

Oracle interpolation. As a measure of statistical dispersion, we report the inter-quartile range (IQR) for the oracle interpolation weight α, shown in Table 4. Taking the range of [0.0, 1.0] into account, we can see that we have a low inter-quartile range for the optimal values of α per query in the interpolation with TILDE (top part of the table). On the other hand, the IQR of the optimal values of α per query for the interpolation with TILDEv2 is much higher (bottom part of the table), which indicates that the optimal interpolation settings for the queries are more varied. This observation gives some sense of robustness against query variation for TILDE in comparison to TILDEv2 in this experimental setting. In other words, a query-dependent approach for optimizing α would be more robust against query variation for TILDE than for TILDEv2.

CONCLUSION

In this paper we investigated the generalizability of two contextualized term-based ranking models, TILDE and TILDEv2, in a QBE retrieval setting. In QBE, the queries are much longer than in ad-hoc retrieval, and efficient query processing is essential.
We were specifically interested to see to what extent the relative performance of contextualized term-based ranking models in comparison to both traditional term-based models and the effective cross-encoder BERT ranker generalizes to a QBE retrieval setting. Our results show that, similar to the original papers [33,34], TILDE and TILDEv2 are less effective than a cross-encoder BERT ranker in QBE retrieval, despite the context of longer queries. On the other hand, in the original papers, TILDE and TILDEv2 have shown superior ranking quality in comparison to BM25 as a traditional term-based retrieval model. We investigated whether the same pattern exists in a query-by-example retrieval setting, and our results show that BM25 has a competitive ranking quality compared to TILDE and TILDEv2. In fact, not only is it competitive, but in some cases it could outperform TILDE and TILDEv2. This finding is important as (1) it sheds light on the challenges of retrieval settings different from the common evaluation benchmarks, including MSMARCO and the TREC DL Track; and (2) it raises the question of how effective other contextualized term-based ranking models would be in those settings. Our results indicate that QBE retrieval is structurally different from other IR settings and requires special attention for methods development. Furthermore, we investigated the impact of the interpolation between BM25 and TILDE as well as TILDEv2. By doing so, we find that a linear interpolation between the score of TILDE (TILDEv2) and that of BM25 leads to an improvement in the ranking effectiveness. This shows that the relevance signals from the contextualized ranking models TILDE and TILDEv2 are complementary to the relevance signals from BM25. Additionally, through an analysis of the oracle interpolation between BM25 and TILDE (TILDEv2), we show that more stratified approaches could benefit more from the interpolation between the scores from these models.

ACKNOWLEDGMENTS

This work is funded by the DoSSIER project under European Union's Horizon 2020 research and innovation program, Marie Skłodowska-Curie grant agreement No. 860721.
2022-08-26T13:19:01.473Z
2022-08-23T00:00:00.000
{ "year": 2022, "sha1": "2b9c732da9f00cfce132600c118e69364a2ea559", "oa_license": null, "oa_url": "https://dl.acm.org/doi/pdf/10.1145/3539813.3545133", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "76cb8c3632c7d2bf9c49eda5a25473371888ab45", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
139290036
pes2o/s2orc
v3-fos-license
Algorithm for the layout of a piezoelectric element in an elastic medium providing the maximal piezoelectric effect within a specified frequency range

ABSTRACT An algorithm for the layout of a piezoelectric element that provides the most efficient performance within a specified range of vibration frequencies is proposed in this paper. This algorithm is based on the consideration of a special parameter within the area of a piezoelectric element's possible location. This parameter characterizes the superposition of the electromechanical coupling coefficients for all the natural vibration frequencies included in a specified frequency range. The condition for defining the best option for the location of the piezoelectric element in the case of several equivalent positions is specified. The efficiency of the proposed algorithm is shown numerically. The electromechanical coupling coefficients were calculated numerically based on a solution to the problem of natural vibrations for electroelastic bodies using a finite element method. The calculations were performed to define the best location for a single piezoelectric element at the surface of a thin-walled shell having a half-cylindrical shape. The results are presented for natural vibration frequencies within the frequency range from 0 up to 1100 Hz. The numerical results were obtained by solving the problem of natural vibrations with a finite element method using the ANSYS software package.

Graphical Abstract: Patterns of distribution of electromechanical coupling coefficients for the first (a), the second (b) and the third (c) natural vibration frequencies of the shell with the piezoelectric element (d); (e) is the pattern of distribution of the parameter that characterizes the superposition of the patterns (a), (b) and (c).

Introduction

There is a necessity to search for new solutions to control the dynamic behaviour of a structure, including problems of excitation, registration or damping of vibrations and structural shape control, when designing modern high-tech structures. One such solution involves supplementing the original structure with elements made of piezoelectric materials, as these elements can be connected to an external electric circuit that provides additional vibration control [1]. The efficiency of a piezoelectric element under such a control strategy depends on many factors, such as the existing parameters of the original structure (geometry, operational conditions), the material properties of the piezoelectric element, its geometry and its layout on the structure.
The efficient operation of a piezoelectric element under the specified conditions can be achieved by varying the parameters mentioned above. However, a problem can emerge when using such a structural control strategy: an electric circuit set up to achieve the required dynamic behavior at a specific frequency becomes inefficient when the frequency of the external excitation changes. Regarding this, it is important to guarantee the effective performance of a dynamic behaviour control strategy aided by minimal technical devices (such as a single piezoelectric element with a simple shunting circuit) within any specified frequency range including several natural vibration frequencies of the structure, such as the one under study herein, especially when rigorous restrictions are put on the mass and dimensions of a structure. In approaches that employ several piezoelectric elements, it is also possible that all the piezoelectric elements could be combined into a common network connected to a single shunting circuit [13-18]. Otherwise, piezoelectric elements are generally not connected to each other and instead each one has its own shunting circuit [16,17]. In this regard, a number of researchers consider an array of piezoelectric elements uniformly distributed across the surface of a beam-shaped structure in order to realize a multimodal control strategy [18-20]. Each of these approaches has its own advantages and drawbacks. The drawbacks include the following:

• the complexity of tuning a branched shunting circuit;
• the bulkiness of a branched circuit, which can affect the stability of the system performance;
• the necessity to consider a number of restrictions imposed by the weight and dimensions of a structure on the dimensions and number of piezoelectric elements used.

It was shown in [21] that a large number of engineering applications require a reduction in the number of sensors and actuators used for structural dynamic behavior control. This leads to the need to exploit these elements in a multimodal regime. Thus, the problem of controlling the dynamic behavior of structures by using the simplest configurations of shunting circuits and a minimal set of additional elements (i.e. piezoelectric elements) is rather acute. The effective performance of a piezoelectric element depends on its location on the structure. Thus, it is important to locate the piezoelectric element in such a way that enables its efficient operation within the frequency range under consideration, at several resonant frequencies and corresponding mode shapes. The determination of the optimal location of a piezoelectric element for single-mode vibration damping was first considered in [22]. It was shown that the optimal locations for a piezoelectric element are areas where the mean strains have the highest values. Conventionally, passive vibration dampers are located at positions where the structural strain energy, defined on the basis of an analysis of the strain distribution, is maximal [23,24]. Alongside this, it should be considered that the addition of even one piezoelectric element changes the pattern of deformations and the spectrum of natural vibration frequencies of a structure.
A number of review papers have been published [23,25-27], covering about 260 papers devoted to the problem of determining the optimal layout of piezoelectric elements in a structure. The general impression is that, despite the manifold methods and approaches for solving this problem, the layout of a piezoelectric element is still rather an art and is mostly based on the intuition of the researcher. Herewith, it was also pointed out in papers [23] and [27] that an improper layout of a piezoelectric element in a structure may violate its stable operation, ultimately leading to a failure in its performance. Due to this problem, the review paper [26] systematically presented criteria for the optimal layout of piezoelectric elements from the considered papers in order to minimize the intuitive element when making decisions on the layout. The aspects considered in [25] mainly relate to smart systems with active feedback, such as active dynamic behavior control systems and energy harvesting systems. The problem of optimizing the layout of piezoelectric elements for passive systems is in many cases reduced to the search for a location of the piezoelectric element at which it shows the highest electromechanical coupling coefficient value. In [25], special attention is paid to the problem of determining the optimal layout for piezoelectric elements that act as sensors or actuators on structures having different geometries (beams, plates, spatial structures), with the aim of achieving vibration damping or structural dynamic behavior control depending on the external impact. All the numerous approaches for defining the location of piezoelectric elements that act as sensors or actuators can be divided into two groups: derivation of a criterion that allows estimating the quality of the piezoelectric element's location (in fact, this is the goal function of the optimization problem), and development of optimization methods that allow finding the minimum or maximum of the criterion. This criterion should match the required results, which depend on the goal of the piezoelectric element's usage. The amount of harvested electric energy, the level of electric potential generated on the electrodes, the controllability of a structure, the damping of vibration at a specified frequency, etc. [23,25,27-37] can all play a role in the goals set. The optimal places for the layout of piezoelectric sensors or actuators depend on the chosen optimization criterion and the purpose of the application of the piezoelectric elements [26]. Different authors propose various conditions for the determination of the optimal location of a piezoelectric element for passive systems (such as the strain energy [23,38], the level of electric charge [23], the level of electric potential [38], etc.). The most commonly used condition for seeking the optimal piezoelectric element location is based on the maximal value of the electromechanical coupling coefficient [23,27,28]. This coefficient characterizes the degree of transformation of mechanical energy into electric energy and vice versa. Different algorithms for defining the optimal location of piezoelectric elements have been proposed in order to realize the above-mentioned approaches to vibration control.
The optimal placement of sensors/actuators has been achieved using various objective functions, such as maximizing the degree of controllability, minimizing the control effort, minimizing spillover effects and maximizing the modal forces applied by piezoelectric actuators, and using various optimization techniques, such as the genetic algorithm (GA), simulated annealing (SA), the sequential best adding (SBA) algorithm, the penalty function method, swarm intelligence algorithms and the tabu search method [26]. In [29–31], a genetic algorithm was applied to search for the optimal location options of piezoelectric elements. Singular value decomposition (SVD) based measures have also been used as objective functions for finding the optimal locations of actuators by a number of authors [39,40]. As noted in the conclusions of the review paper [26], where over 100 papers were considered, most investigations tend to be oriented toward simple structures in the form of beams and plates, while investigations related to the layout of piezoelectric sensors or actuators for real structures of complex shape are almost absent. The possibility of multimodal vibration damping using a single piezoelectric element and a single resonant RL-circuit was shown numerically in [41]. This approach has obvious advantages over conventional ones, since the mass change is minimal (only one piezoelectric element is used) and the tuning of the shunting circuit reduces to a search for only two parameters: resistance and inductance. However, in this case, the layout of the piezoelectric element plays a special role. Unfortunately, among the large number of approaches related to determining the optimal location of piezoelectric elements, no approach has been found for defining the optimal location of a single piezoelectric element that provides its efficient performance within a specified frequency range when it is attached to a structure of arbitrary configuration. This fact led to the necessity of developing the special algorithm considered in the present paper, which allows solving the problem of achieving efficient performance of a single piezoelectric element within a specified frequency range. In the present study, the reliability and efficiency of the proposed algorithm were demonstrated numerically on the example of a shell structure having a half-cylinder shape. The numerical results were obtained by solving the problems of natural vibrations and of steady-state vibrations using the finite element method realized in the ANSYS software package (license: Academic Research Mechanical and CFD No 1,064,623).

Algorithm for determining the optimal location of a piezoelectric element providing the best performance

The electromechanical coupling coefficient proposed in [1] was utilized as an effective parameter for estimating the efficiency of a given piezoelectric element for damping vibrations at a single frequency:

K_i = sqrt( (ω_o/c,i^2 − ω_s/c,i^2) / ω_s/c,i^2 ),  (1)

where ω_o/c,i and ω_s/c,i are the natural vibration frequencies of the structure with the piezoelectric element operating in the open-circuit (o/c) and short-circuit (s/c) modes, respectively. The open-circuit mode is realized when one of the electrodes of the piezoelectric element is grounded (the electric potential is equal to zero) and the other one is free of load. In the short-circuit conditions, both electrodes are grounded, i.e., the electric potential on them is set equal to zero.
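As a quick numerical illustration, the following minimal Python sketch evaluates the coupling coefficient as reconstructed in (1); the open- and short-circuit frequency values are invented for illustration and are not taken from the paper.

```python
import math

# Minimal sketch of the coupling-coefficient estimate in (1):
# K = sqrt((f_oc^2 - f_sc^2) / f_sc^2); the frequencies below are
# illustrative placeholders, not values from the paper.
def coupling_coefficient(f_open: float, f_short: float) -> float:
    return math.sqrt((f_open**2 - f_short**2) / f_short**2)

print(coupling_coefficient(560.0, 557.4))  # ~0.097 for a ~0.5% frequency shift
```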
The best option for the piezoelectric element's location for the corresponding vibration mode i = 1, …, N (where N is the number of natural vibration frequencies of the structure included in the specified frequency range) is defined by a point on the surface (or a part of the surface) S of the structure. This point is characterized by the coordinates (x1, x2, x3) that define the location of the piezoelectric element's center of mass. At this point, the maximal value of the electromechanical coupling coefficient, K_i^max, is reached [41]:

K_i^max = max over (x1, x2, x3) ∈ S of K_i(x1, x2, x3).  (2)

It should be taken into account that the best location of the piezoelectric element may be different for the different structural mode shapes included in the frequency range under consideration. It was pointed out in [42], using the example of a flexural plate, that the optimal locations of a piezoelectric element are different for different vibration modes as well as for different rates of vibration damping. Thus, some kind of superposition of the patterns of distribution of the electromechanical coupling coefficients, K_sum = Σ f(K_i) (i = 1, …, n), obtained separately for each of the n structural vibration frequencies under study, should be considered when defining the optimal location of a piezoelectric element intended for efficient application within the specified frequency range. However, taking the simple sum of the K_i as such a superposition is incorrect. First, an area with maximal values of the coupling coefficient K_i for one vibration mode can be an area with minimal values of the coupling coefficient for another vibration mode. Thus, a situation may emerge where the areas of maximal values of the summed distribution K_sum correspond to the optimal position of the piezoelectric element neither for two frequencies simultaneously nor for either of the two frequencies separately. Second, the maximal values of K_i may differ substantially in magnitude for different frequencies. In comparison with the highest value of K_i, the contributions of the remaining coefficients K_j (j ≠ i), obtained for the j-th frequencies, to the value of K_sum can be negligible. In this case, a piezoelectric element located according to the maximal value of K_sum would show efficient performance only for the frequency having the highest value of K_i. For this reason, the parameter P, calculated according to formula (3), was introduced. Here, n_p and n_k are the initial and final numbers of the frequencies (according to the global numbering of the eigenfrequencies) in the specified frequency range, and K_i (i = n_p, …, n_k) are the electromechanical coupling coefficients for each possible location of the center of mass of the piezoelectric element at the vibration modes under consideration. The parameter P does not have the meaning of an electromechanical coupling coefficient; it is a relative index whose magnitude qualitatively characterizes a piezoelectric element's location with respect to the efficiency of its application for the structural vibration modes within the frequency range under consideration. The best location of the piezoelectric element's center of mass, at which the parameter P has its highest value, is determined on the basis of an analysis of the pattern of distribution of the P values across the surface S of the structure.
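Since formula (3) itself is not reproduced above, the following Python sketch only illustrates the idea of such a multimodal quality map; the specific combination used for P here (the product of the per-mode coefficients normalized by their maxima, which penalizes any location that is weak for even a single mode) is an assumption made for illustration, not the paper's formula (3).

```python
import numpy as np

# Hypothetical multimodal quality map over candidate piezo locations.
rng = np.random.default_rng(0)
n_modes, n_locations = 3, 500                      # grid over the surface S
K = rng.uniform(0.0, 0.1, (n_modes, n_locations))  # K_i at each candidate point

K_norm = K / K.max(axis=1, keepdims=True)  # normalize each mode by its K_i^max
P = K_norm.prod(axis=0)                    # assumed combination, not eq. (3)
best = int(np.argmax(P))                   # candidate optimal location index
print(best, P[best])
```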
The condition of the highest value of P can be written in the form

P_max = max over (x1, x2, x3) ∈ S of P(x1, x2, x3).  (4)

For the optimal location of the piezoelectric element defined in this way, the element shows the best performance at all n_k − n_p + 1 natural vibration frequencies included in the frequency range under study. However, the location of the piezoelectric element having the highest value of P may not be unique. The number of possible locations at which the values of P_max are very close to each other in magnitude depends on a number of factors: the choice of the frequency range of interest, the symmetry of the structure, etc. These location options can nevertheless be nonequivalent when one considers the values of the electric potential generated by the piezoelectric element at each mode included in the specified frequency range. Let us suppose that the results of the numerical calculations revealed M such possible options for the piezoelectric element's location with very close values of P_max. In order to choose the most preferable option, the following algorithm is proposed. For each j-th (j = 1, …, M) option of the piezoelectric element's location, the following quantities should be calculated (see the sketch below):
• the values of the electromechanical coupling coefficients K_i^(j) for each i-th vibration mode from the frequency range under consideration, related to the j-th location option having the maximal value of P;
• the magnitudes of the deviations of the maximal values of the electromechanical coupling coefficients K_i^max, which correspond to the optimal location of the piezoelectric element for only the single i-th mode, from the values K_i^(j), i.e., Δ_i^j = K_i^max − K_i^(j);  (5)
• the summed magnitude of these deviations, Δ^j = Σ Δ_i^j (i = n_p, …, n_k), over all the vibration modes from the frequency range under consideration.
Next, from all of the calculated values Δ^j (j = 1, …, M), the minimal value Δ_min should be chosen:

Δ_min = min over j = 1, …, M of Δ^j.  (6)

The minimal value of the summed deviation, Δ_min, corresponds to the best location of the piezoelectric element, i.e., to its most efficient application within the range of structural vibration frequencies from n_p up to n_k.
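This tie-breaking rule is fully specified above, so it can be illustrated directly. In this minimal Python sketch the coefficient values are invented; K_max and K_cand stand for the quantities K_i^max and K_i^(j).

```python
import numpy as np

# Choose among M candidate locations with nearly equal P_max by minimizing
# Delta_j = sum_i (K_i^max - K_i^(j)); all numbers are illustrative.
K_max = np.array([0.095, 0.080, 0.102])      # per-mode optima K_i^max
K_cand = np.array([[0.090, 0.060, 0.100],    # candidate j = 1
                   [0.085, 0.078, 0.095]])   # candidate j = 2

delta = (K_max - K_cand).sum(axis=1)  # Delta_j for each candidate
best_j = int(np.argmin(delta))        # index of the location with Delta_min
print(delta, best_j)                  # -> [0.027 0.019] 1
```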
Numerical example of the proposed algorithm

An application of the proposed algorithm to choosing the location of a piezoelectric element on the surface of a thin-walled shell (Figure 1) is considered; this location should provide the best performance of the piezoelectric element. The parameters of the shell geometry are as follows: R = 76 mm, L = 300 mm, h = 0.25 mm. The shell is made of a purely elastic material with the following material constants: E = 1.96·10^11 Pa, ν = 0.3, ρ = 7700 kg/m^3. As the piezoelectric element, an element in the shape of a sector of a ring with the following geometric parameters was chosen: r_q = 76.25 mm, φ_q = 15.08°, thickness h_q = 0.36 mm, length l_q = 50 mm. Spatial structural 20-node finite elements with a quadratic approximation of the nodal unknowns (SOLID186 from the ANSYS element library [43]) were used for modeling the shell. Spatial coupled 20-node finite elements with a quadratic approximation of the nodal unknowns (SOLID226 from the ANSYS element library [43]) were used for modeling the piezoelectric element. The first eight natural vibration frequencies f_0 (in Hz) of the shell without the piezoelectric element are, correspondingly: 557.4, 587.7, 620.2, 759.6, 803.8, 987.5, 1012.1, 1035.0.

The search for the location of the piezoelectric element guaranteeing its best performance at the first three vibration modes is considered first. The patterns of distribution of the electromechanical coupling coefficient values K_i for the first three vibration modes, depending on the location of the piezoelectric element, are shown in Figure 2(a–c). The presented results indicate that the maximal values of K_i for each frequency are reached at different options of the piezoelectric element's location. Figure 2(d) represents the pattern of distribution of the values of the new parameter P. The symmetry of the patterns shown in Figure 2 is due to the symmetry of the structure under study. According to the proposed algorithm, the optimal options for the best performance of the piezoelectric element at the first three vibration modes are two symmetric points of the piezoelectric element's center of mass. Taking into account the symmetry of the structure, M = 1 is accepted for the case under study. The value of Δ^1 and the corresponding quantities required for its calculation according to formula (5) are presented in Table 1 in order to characterize the found solution. For comparison, the value of Δ^o1 and all the quantities required for its calculation are also presented in Table 1. For Δ^o1, the piezoelectric element's location is not optimal and has the following coordinates of its center of mass: φ = 90.0°, z = 0.15 m. In this case, the piezoelectric element's location ensures the best performance for only the first vibration mode. Further, the results of the search for the location of the piezoelectric element providing the best performance within the frequency ranges from 700 to 900 Hz, from 900 to 1100 Hz and from 0 to 1100 Hz are presented as another illustration of the proposed algorithm. The analysis of the patterns of distribution of the parameter P values for the above-mentioned frequency ranges revealed (taking into account the symmetry) one option for the piezoelectric element's location (M = 1) for the frequency range from 700 to 900 Hz. The coordinates of the center of mass of the piezoelectric element were defined in each case by the highest values of the parameter P. According to the presented algorithm, the point L4 should be chosen as the best option for the piezoelectric element's location within the frequency range from 900 to 1100 Hz, and the point L5 as the best option within the frequency range from 0 to 1100 Hz. The magnitude of Δ allows estimating the efficiency of the piezoelectric element within the specified frequency range (Tables 2–4).

Validation of the obtained results

The implementation of the present algorithm provides the best, or optimal (within the framework of the formulated conditions), realization of the direct or inverse piezoelectric effect for the specified frequency range. This means that it is possible to achieve relatively large deformations within the specified frequency range using the optimally located piezoelectric element as an actuator (inverse piezoelectric effect), exciting vibrations by setting the electric potential on a single piezoelectric element.
Considering the direct piezoelectric effect, the optimally located piezoelectric element can be used most efficiently within the specified frequency range as a sensor or as an electric energy supply when damping the vibrations with the aid of a passive external electrical circuit. This effect is shown in the following example. Consider the point A loaded with the periodic force F (F_r = F_0, F_φ = F_z = 0) (Figure 3(a)). Figure 3(b) represents the frequency response of the amplitude of the voltage generated across the piezoelectric element's electrodes for two options of the piezoelectric element's layout. The first option (solid line) was found on the basis of the present algorithm under the condition of the best (optimal) realization of the piezoelectric properties within the frequency range from 500 to 700 Hz, which includes the first three vibration modes (point L1). The second option (dashed line) was found from the condition of achieving the maximal piezoelectric effect for only the first vibration mode. The presented results show that, for the second option, the highest level of electric potential was achieved at the first mode; at the same time, however, the level of electric potential was negligibly small for the second and third vibration modes. The first layout option gave a somewhat lower level of electric potential at the first mode in comparison with the second option; nevertheless, the levels of electric potential at the second and third vibration modes turned out to be close to it in value.

Table 3. The quantities that define Δ for the frequency range from 900 to 1100 Hz.
Table 4. The quantities that define Δ for the frequency range from 0 to 1100 Hz.

The next example shows the possibility of damping the first three vibration modes of the shell within the frequency range from 500 up to 700 Hz using a single-branch series RL shunting circuit. The electrical circuit was modeled using two-node circuit-type finite elements CIRCU94 with a linear approximation of the nodal unknowns [43]. Here, the single piezoelectric element was optimally located, according to the proposed algorithm, at the point L1 with the following coordinates of its center of mass: φ = 30.24°, z = 0.15 m. The piezoelectric element's electrodes were connected to the series external RL-circuit. The parameters of the circuit elements were defined according to the algorithm presented in [41] (Figure 4). For the specified frequency range from 500 up to 700 Hz, the resistance was found to be R = 4.92 kΩ and the inductance L = 6.30 H. In [41], it was shown that, for damping vibrations at several modes within an arbitrary frequency range, the rates of decay of the vibrations should be equal (or very close in value) to each other for all modes and should be as high as possible. According to the mathematical statement presented in [44,45], the solution to the problem of the natural vibrations of electroelastic bodies with external electric circuits is sought in the form

{u} = {u0} e^(−iωt).  (7)

Here, {u} is the state vector containing the components of the unknown displacements and the electric potential; ω = ω_Re ± iω_Im is the complex natural vibration frequency, where ω_Re is the circular frequency of the vibrations and ω_Im is the damping index characterizing the exponential decay rate of the vibrations.
Representing (7) in trigonometric form and considering that ω = 2πf = 2π(f_Re ± i f_Im), we finally obtain

{u} = {u0} e^(2π f_Im t) (cos(2π f_Re t) − i sin(2π f_Re t)).  (8)

Taking into account that the obtained eigenvalues appear in complex-conjugate pairs, those having a negative imaginary part are chosen. This choice provides the damped character of the vibration processes under consideration. From (8), it is seen that it is exactly the imaginary part of the complex natural vibration frequency, f_Im (or ω_Im), that determines the damping properties of a structure (in this case, the rate of decay of the vibration amplitude). The parameters of the series external RL-circuit calculated for the defined optimal location of the piezoelectric element and the frequency range from 500 up to 700 Hz (resistance R = 4.92 kΩ and inductance L = 6.30 H, obtained according to the approach presented in [41]) provide vibration damping at all three natural vibration modes in the specified frequency range. The problem of natural vibrations for the structure under study, with the attached piezoelectric element connected to the series RL-circuit having the calculated parameters, was solved using the algorithm described in [45]. The obtained natural vibration frequencies are presented in Table 5. For clarity, the third column of Table 5 contains the values of the modal damping ratios (ζ = f_Im/f_0, where f_0 is the natural vibration frequency of the shell without the piezoelectric element) for each natural vibration frequency under study. The maximal achievable modal damping ratios are presented in the fourth column of Table 5. These values are reached when the external circuit is tuned for damping only one single mode, under the condition that the piezoelectric element is located optimally for that vibration mode. The results presented in Table 5 show that a piezoelectric element optimally located for only one vibration mode (with an appropriate choice of the shunting circuit parameters) allows achieving higher modal damping ratios than a piezoelectric element located optimally for efficient performance at several frequencies (with the circuit parameters chosen according to the approach from [41]). However, a layout of the piezoelectric element providing its acceptable performance within some frequency range allows achieving a satisfactory level of the vibration decay rate for all natural vibration modes in the specified frequency range using a single piezoelectric element connected to a single-branch shunt circuit. This aspect can be useful for practical implementation.
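A minimal sketch of how the decay characteristics discussed above follow from a complex eigenfrequency; the sample eigenvalue is invented and does not reproduce Table 5.

```python
import math

# f = f_Re + i*f_Im (Hz); the root with a negative imaginary part decays.
f_re, f_im = 561.2, -4.3
f0 = 557.4  # Hz; shell frequency without the piezoelectric element

zeta = abs(f_im) / f0                  # modal damping ratio, |f_Im| / f_0
decay_rate = 2 * math.pi * abs(f_im)   # amplitude ~ exp(-decay_rate * t)
print(f"zeta = {zeta:.4f}, decay rate = {decay_rate:.1f} 1/s")
```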
Conclusions

An algorithm for determining the optimal location of a piezoelectric element on the surface of an elastic deformable solid was proposed in this paper. The layout of a piezoelectric element according to this algorithm provides the best realization of the piezoelectric effect at all resonant frequencies within a specified frequency range. A combination of the electromechanical coupling coefficients for the vibration modes included in the specified frequency range was proposed, and a new parameter serving as a qualitative estimate of the performance of a single piezoelectric element in multimodal operation was introduced. The highest value of the newly introduced parameter defines the coordinates of the center of mass for the optimal location of the piezoelectric element. The electromechanical coupling coefficients were calculated numerically on the basis of the solution of the natural vibration problem for electroelastic bodies using the finite element method. A condition for the unambiguous determination of the optimal location of the piezoelectric element was proposed for the cases when the highest values of the newly introduced parameter are reached at more than one point. The capabilities of the proposed algorithm were demonstrated numerically using the example of a thin-walled half-cylindrical elastic shell; for this example, it was necessary to find the location of the piezoelectric element on the shell surface providing the optimal realization of the piezoelectric effect for different frequency ranges. The possibility of multimodal damping of vibrations with the aid of a single piezoelectric element and a simple series RL-circuit with properly selected resistance and inductance was also demonstrated.
Direct dark mode excitation by symmetry matching of a single-particle based metasurface

This paper provides evidence for a direct dark mode excitation mechanism in a metasurface structure. The dark mode excitation mechanism is entirely determined by the structure's symmetry and does not depend on near-field coupling between elements. In our examples, we consider a single-element metasurface composed of two V antennas connected in an anti-symmetric arrangement. Both experimental and modeling results show an efficient excitation of the magnetic dipolar mode in such structures. The direct dark mode excitation mechanism provides a design that is more robust with respect to technological imperfections. The considered approach opens promising perspectives for new types of nanostructure designs and greatly relaxes fabrication constraints for the optical domain.

Despite the great variety of studied designs, most of them are based on the same principle. It consists in associating a superradiant element, acting as a radiative or bright mode, with a subradiant element, playing the role of the dark (or trapped) mode. In contrast to the bright mode, the dark one is only weakly coupled to free space. The resonance frequencies of the bright and dark modes are not substantially different. In a system where the constituent elements are brought far apart, the interaction between them is small. The spectral response is dominated in this case by the superradiant element. The resonance quality factor is generally low due to the strong radiative coupling. Transmission at the resonance frequency is thus highly attenuated. Making the separation distance between the elements smaller causes an increase in the near-field coupling between the bright and dark modes. The resulting mode hybridization leads to the opening of a narrow EIT window inside the absorption band. With a few exceptions, the origin of the EIT in such systems was attributed to the destructive Fano interference between a directly excited bright mode and an indirectly excited dark mode. However, recent theoretical advances have led to revisiting this commonly shared interpretation [36–39]. In particular, it was evidenced that no dark mode excitation is necessary for the existence of Fano resonances: they can be described by the interference of bright modes only. The origin of this apparent contradiction stems from the fact that the eigenmodes of the coupled system can significantly differ from those of the individual elements, and in general they are not orthogonal [39,40]. Moreover, it turns out that the Fano interference effect is most pronounced when the radiative strengths of the interacting modes are substantially equal; the Fano interference of two modes with substantially different radiative strengths results in a very weak EIT effect [37]. In this context it is natural to wonder what, then, is the interest of using dark modes, and whether there is any way to excite them other than through mode hybridization. The last point is especially important in view of the generally very tight fabrication tolerances for plasmonic nanostructures operating in the optical domain. The great sensitivity of the mode hybridization to variations of the separation distance between elements on a scale of a few nanometers makes their reliable fabrication highly challenging. The aim of the present contribution is to address these critical issues. In particular, we show that instead of EIT, dark mode excitation can also lead to a sharp maximum in reflection.
We discuss the advantages related to this operation mode and detail the essentials of the underlying excitation mechanism, which is not based on mode hybridization. We propose an excitation mechanism based on direct field coupling to the dark mode through an appropriate symmetry matching with the structure geometry. The considered approach opens promising perspectives for new types of nanostructure designs and greatly relaxes technological constraints for the optical domain.

Direct dark mode excitation by symmetry matching

To present the essentials of the pursued approach, let us start with the example of a cut-wire (CW) metasurface. We consider a uniform electromagnetic plane wave normally incident on the metasurface. For experimental convenience, the demonstration of the concept is performed at microwave frequencies, keeping in mind that the extension to the optical domain can be performed in a straightforward way by structure downsizing and by taking into account the material properties. The behavior of the CW resonator presented in Fig. 1 was examined over a spectral range extending up to the Rayleigh anomaly caused by the opening of the first diffraction order. As evident from the spectral response, the electric dipole mode at 8.3 GHz is heavily damped. The resonance quality factor is very low, Q0 = 1.19, because of the strong radiative coupling to free space. In contrast, the quality factors of the second-order modes are much better (Q2s = 52.2 and Q2a = 31.3), since their resonances correspond to higher-order multipole radiation. Besides the bright modes well visible in the spectral response, the CW structure also possesses dark eigenmodes. Their presence can be inferred from general considerations, but they do not manifest in the spectral response because of their zero net dipolar moment. Thus the first CW higher-order eigenmode (m1 in Fig. 1), which should occur in the frequency region around 15 GHz, represents a superposition of two opposite dipolar moments, P(y) = −P(−y). Since the uniform electric field E(r) = E(−r) is of even symmetry, the excitation of this odd-symmetry, doubly degenerate mode is not possible, and it remains dark. The goal of our study is to show how to excite this kind of dark mode. From symmetry considerations it follows that, to avoid coupling with a uniform electric field, the geometry of the resonant element should be odd [37–40]. One of the simplest odd-symmetry structures is the example of two connected antisymmetric V antennas (AVA) shown in Fig. 2 [41]. The total length of the AVA in our example is set to be the same as that of the CW, i.e., 16.3 mm. The opening angle of each V element is set to 120°. The choice of such a geometry is intentional, to allow following the evolution of the eigenmodes with respect to those of a CW. The transmission and reflection spectra at normal incidence under vertical polarization are shown in Fig. 2. When compared to the CW spectral response, it can be observed that the fundamental resonance frequency f0 = 8.1 GHz, corresponding to the excitation of the electric dipole, is very close to that of a CW. Because of the smaller capacitive coupling, the splitting of the second higher-order modes is greatly reduced, while their mean frequency position, f2mean = 19.7 GHz, is practically the same as for the CW. Under normal incidence, the modal behavior of the AVA is essentially similar to that of the CW. Because of the even symmetry of the electric field and the odd symmetry of the dark mode, the resulting interaction overlap is null, and therefore the dark mode is not excited.
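The symmetry argument can be made concrete with a short numerical sketch (ours, not the paper's): the overlap of an even excitation with an odd mode profile vanishes, while an odd excitation couples to it.

```python
import numpy as np

# Odd mode profile P(y) = -P(-y) on a wire of normalized length 2.
y = np.linspace(-1.0, 1.0, 2001)
dy = y[1] - y[0]
p_odd = np.sin(np.pi * y)

e_uniform = np.ones_like(y)  # even excitation (uniform field, normal incidence)
e_odd = y                    # odd excitation (illustrative, e.g., a field gradient)

overlap_even = np.sum(e_uniform * p_odd) * dy  # ~0: the mode stays dark
overlap_odd = np.sum(e_odd * p_odd) * dy       # ~2/pi: direct excitation
print(f"{overlap_even:.2e}  {overlap_odd:.3f}")
```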
One solution to excite the dark mode would be to break the symmetry of the AVA structure, for example by making one V antenna shorter than the other [42]. The two opposite dipolar moments would then have different magnitudes. The resulting nonzero net dipolar moment allows a direct coupling with the electric field. The mechanism of dark mode excitation is in this case similar to the operating principle of a Wheatstone bridge: the coupling to free space is due to the imbalance of the electric dipolar moments. Such a solution has the advantage of not relying on a hybridization mechanism. However, the drawback is that the radiative damping is still that of an electric dipole, which precludes obtaining a high resonance quality factor. Another solution for dark mode excitation consists in using the magnetic component of the incident field. As the magnetic field transforms as a pseudovector, its symmetry is odd. The direct dark mode excitation is in this case allowed at oblique incidence. Additional insight into the excitation mechanism can be obtained by considering the magnetic moments generated by the loops formed by the internal angles of the AVA. The dark mode excitation corresponds to the generation of currents flowing in the same clockwise or counterclockwise direction and results in a net dipolar magnetic moment. An external magnetic field can thus directly feed the charge displacement corresponding to the dark mode excitation. It can be noted that the net electric dipole moment still remains null, contrary to the magnetic one. The transmission and reflection spectra under 45° oblique incidence for a vertically oriented electric field are shown in Fig. 2. In addition to the fundamental and second higher-order modes, a new resonance appears at 14.8 GHz. Its spectral width is much narrower compared to that of the fundamental resonance. Even though the dark mode excitation is achieved through direct field coupling, the resonance quality factor is considerably increased, Q1 = 15.9. This is due to the smaller radiative damping, corresponding to that of a magnetic dipole. The fact that the dark mode manifests as a peak in reflection and not in transmission can present certain advantages. The inherent inconvenience of the EIT spectral response is that it requires creating a high-contrast narrow transmission band inside a high-contrast reflection band; it is intuitively clear that this is more challenging than creating a single high-contrast narrow reflection band. It is worth noting that the excitation of the dark mode resonance hardly affects the rest of the spectral response: the positions and intensities of the bright mode resonances associated with the fundamental and second higher-order modes remain practically unchanged. On this point, the dark mode excitation is essentially different from that relying on the mode hybridization mechanism. The absence of a hybridization mechanism is another great advantage of the considered solution, as it allows considerably relaxing the tolerances for the fabrication of nanostructures operating in the optical domain. A particularly robust design can be obtained when considering an AVA design where the arms of the V elements are parallel to the borders of the unit cell. The shape of the resulting AVA then looks like a Z. The numerical modeling and experimental results for the Z-shaped atom are presented in the next section.
Dark mode excitation of a Z-shaped meta-atom: simulations and experiments

The geometry of the Z-shaped meta-atom is represented in Fig. 3(a). It is composed of two AVAs with a V antenna internal angle of 45°. As in the previous case of the 120° AVA, the total length of the Z is set to be the same as that of the CW (16.3 mm). Fundamentally, the dark mode excitation mechanism is the same as for the AVA. The essential difference of the Z atom with respect to the 120° AVA is that the Z legs parallel to the borders of the unit cell bring some advantages: they can act as an additional capacitance and also as a current loop. As represented in Fig. 3(a) for the fundamental resonance, the current flow leads to the accumulation of charges of opposite sign between the adjacent Z legs. This creates an additional capacitance that shifts the fundamental resonance to lower frequencies. At the same time, since the direction of the current flow (indicated by the red arrows) is the same for the adjacent Z legs, no dipolar magnetic moment is created; the fundamental resonance is essentially that of an electric dipole. For the first higher-order resonance, which is that of the dark mode, the opposite direction of the current flow in the adjacent Z legs means that the accumulated charges are of the same sign. This lowers the capacitance of the structure and shifts the resonance frequency up. To demonstrate that the dark mode excitation does not rely on any hybridization mechanism, we performed modeling of a structure where the dimensions of the Z atom are the same but the separation distance between adjacent Z elements is increased from 0.2 mm to 1.4 mm. As evident from the results displayed in Fig. 4, the efficiency of the dark mode excitation for off-normal incidence in the H-plane is very little affected, despite the important change in the coupling between the adjacent elements. The important change in the coupling strength can be directly inferred from the observed variation of the resonance frequencies. By changing the lattice period to p + 20%, with p = 6 mm, the capacitance between the adjacent Z legs decreases, making the electric dipolar resonance frequency shift to 7.9 GHz, as shown in Fig. 4. For the same reason, but this time with an opposite effect, the dark mode excitation under oblique incidence appears at a slightly lower frequency (16.4 GHz) than in the original case with lattice period p (Fig. 3). The results presented in Fig. 4 confirm that the dark mode excitation does not rely on any mode hybridization, but on direct field coupling through an appropriate symmetry matching with the structure geometry. When the Z meta-atom is rotated by 22.5° around the z-axis (the propagation direction), keeping the vertical polarization configuration, the dark mode excitation is forbidden under normal incidence. Conversely, as shown in Fig. 5, the dark mode excitation is very efficient under oblique incidence when there is a magnetic field component perpendicular to the incidence plane. All the provided examples show that the dark mode excitation relies only on symmetry matching and does not depend on the coupling between elements. The direct dark mode excitation mechanism thus provides a design that is more robust with respect to technological imperfections and greatly relaxes fabrication tolerances. This point is of great importance when considering the fabrication of nanostructures operating in the optical domain [43].
Summary and conclusions

The present contribution addresses the problem of dark mode excitation in metasurface structures by a mechanism different from mode hybridization. We propose a dark mode excitation mechanism based only on symmetry matching conditions. It is shown that a direct excitation mechanism is possible for an anti-symmetric type of higher-order mode having a zero net electric dipolar moment but a nonzero magnetic one. The excitation of the magnetic dipolar moment can be achieved under oblique field incidence on a metasurface having an anti-symmetric unit cell geometry. In our examples, we considered a single-element metasurface composed of two V antennas connected in an anti-symmetric arrangement, or, more simply, Z-shaped meta-atoms. Both experimental and modeling results show an efficient excitation of the magnetic dipolar mode in such structures. The great advantage of the considered approach is that the dark mode excitation is entirely determined by the symmetry of the structure's geometry and does not depend on the coupling between elements. The considered approach opens promising perspectives for new types of nanostructure designs and greatly relaxes technological constraints for the optical domain.

Figure: two connected antisymmetric V antennas. In contrast to normal incidence, the dark mode is excited under 45° oblique incidence, due to a nonzero net dipolar magnetic moment.
Transcriptome and Physiological Analyses of a Navel Orange Mutant with Improved Drought Tolerance and Water Use Efficiency Caused by Increases of Cuticular Wax Accumulation and ROS Scavenging Capacity

Drought is one of the main abiotic stresses limiting the quality and yield of citrus. Cuticular waxes play an important role in regulating plant drought tolerance and water use efficiency (WUE). However, the contribution of cuticular waxes to drought tolerance and WUE, and the underlying molecular mechanism, are still largely unknown in citrus. 'Longhuihong' (MT) is a bud mutant of 'Newhall' navel orange with curly and bright leaves. In this study, significant increases in the amounts of total waxes and aliphatic wax compounds, including n-alkanes, n-primary alcohols and n-aldehydes, were observed in MT leaves, which led to a decrease in cuticular permeability and finally resulted in improvements in drought tolerance and WUE. Compared to WT leaves, MT leaves possessed much lower contents of malondialdehyde (MDA) and hydrogen peroxide (H2O2), significantly higher levels of proline and soluble sugar, and enhanced superoxide dismutase (SOD), catalase (CAT) and peroxidase (POD) activities under drought stress, which might reduce reactive oxygen species (ROS) damage, improve osmotic regulation and cell membrane stability, and finally enhance MT tolerance to drought stress. Transcriptome sequencing results showed that seven structural genes involved in wax biosynthesis and export, the MAPK cascade and ROS scavenging, and seven genes encoding transcription factors might play an important role in promoting cuticular wax accumulation and improving drought tolerance and WUE in MT plants. Our results not only confirmed the important role of cuticular waxes in regulating citrus drought resistance and WUE but also provided various candidate genes for improving citrus drought tolerance and WUE.

Introduction

Drought stress severely limits plant growth and development and reduces crop yield and quality all over the world [1]. In plants, cuticular waxes are primarily composed of very-long-chain fatty acids (VLCFAs) and their derivatives, including aldehydes, alkanes, alcohols, ketones and esters. In addition, cyclic compounds, such as terpenoids, flavonoids and sterols, are also present in the cuticular waxes of many plant species [2]. As a hydrophobic barrier, cuticular waxes cover plant surfaces to prevent non-stomatal water loss and protect plants against abiotic and biotic stresses such as UV radiation, cold, drought, high salt, pathogens and insect invasion [3]. Previous reports revealed that cuticular waxes play an important role in regulating plant tolerance to drought stress; for example, crops with high wax contents were found to be more tolerant to drought.

Comparison of Phenotype, Chromatic Aberration and Cuticular Permeability between WT and MT Leaves

Chromatic aberration analysis revealed that MT leaves had a significantly lower a* value and much higher b*, a*/b* and L* values than WT leaves, suggesting that MT leaves were much greener and brighter than WT leaves (Figure 1D–G). However, the two varieties shared similar leaf fresh and dry weights (Figure 1H,I). Interestingly, the water loss rates of MT leaves were significantly lower than those of WT leaves from 8 to 48 h of dehydration (Figure 1J). Thus, MT leaves rolled much more severely than WT leaves after 48 h of dehydration (Figure 1A,B).
Moreover, the chlorophyll leaching rates of MT leaves were much lower than those of WT leaves from 2 to 30 h after alcohol treatment (Figure 1K). Therefore, the alcohol solution of MT leaves was a much lighter green than that of WT leaves after 30 h of alcohol treatment (Figure 1C). These results suggested that the cuticular permeability of MT leaves was much lower than that of WT leaves.

Comparison of Cuticular Wax Morphology and Chemical Composition between WT and MT Leaves

The scanning electron microscopy (SEM) results showed that the epidermal cells were clearly convex on the adaxial surfaces of MT leaves (Figure 2A,B,E,F). On both the adaxial and abaxial sides, the wax crystal density of MT leaves was higher than that of WT leaves (Figure 2A–H). At the individual level, a total of 22 cuticular wax constituents were detected in WT and MT leaves. There were no significant differences in the amounts of the C16:0, C18:0, C20:0 and C24:0 fatty acids, the C18:2 and C18:3 unsaturated fatty acids, the C23:0, C25:0 and C27:0 alkanes, the C34:0 primary alcohol, α-amyrin and β-amyrin between WT and MT leaves. However, the amounts of three n-alkane constituents with odd-numbered chain lengths from C29 to C33, five n-primary alcohol constituents with even-numbered chain lengths from C24 to C32 and one n-aldehyde constituent (C26:0 aldehyde) were significantly higher in MT leaves than in WT leaves. On the contrary, campesterol was deposited on the surfaces of MT leaves in much lower amounts than on WT leaves (Figure 3C).

Comparison of Morphological and Physiological Responses of WT and MT Plants to Drought Stress

The WT and MT plants grew well under control conditions (Figure 4A,B).
After 21 days of drought treatment, severe wilting was observed in almost all of the WT leaves (Figure 4C). In contrast, light wilting occurred in only a few leaves of MT plants (Figure 4D). Further study revealed that almost all physiological indexes increased after drought treatment, except for the chlorophyll content, which decreased under drought stress (Figure 4E–M). The ion leakage and the malondialdehyde (MDA) and hydrogen peroxide (H2O2) contents of MT leaves were significantly lower than those of WT leaves under both control and drought conditions (Figure 4E,F,J). In contrast, MT leaves exhibited much higher contents of proline, soluble sugar and chlorophyll and significantly increased superoxide dismutase (SOD), peroxidase (POD) and catalase (CAT) activities in comparison with WT leaves under control and drought conditions (Figure 4G–I,K–M). Photosynthetic analysis showed that the net photosynthetic rate (Pn), stomatal conductance (Gs), intercellular CO2 concentration (Ci) and transpiration rate (Tr) decreased after drought treatment in both WT and MT leaves (Figure 5A–D). The Pn of MT leaves was significantly higher than that of WT leaves under control and drought conditions (Figure 5A). However, the Gs, Ci and Tr of MT leaves were much lower than those of WT leaves under both control conditions and drought stress (Figure 5B–D). Compared to WT plants, MT plants possessed much higher instantaneous water use efficiency (WUEi, calculated as Pn/Tr) and δ13C (representing the short-term WUE of the whole plant) under control and drought conditions (Figure 5E,F). These results suggested that MT plants had enhanced drought tolerance and increased water use efficiency (WUE) in comparison to WT plants.

Functional Classification of Differentially Expressed Genes (DEGs) between WT and MT Leaves

To investigate the molecular mechanism leading to the phenotypic and wax composition differences between WT and MT leaves, total RNAs from WT and MT leaves were sequenced, with three biological replicates per variety. According to the criteria of |log2(MT/WT)| ≥ 1 and Q-value < 0.001, a total of 1316 DEGs were identified in MT vs. WT, of which 537 DEGs were upregulated and 779 DEGs were downregulated in MT leaves (Figure 6A). The information on all DEGs is listed in Table S1. Furthermore, a volcano plot of the DEGs was constructed to visualize the distribution of the upregulated and downregulated DEGs (Figure 6B).
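As an illustration of the selection criteria just described, a minimal pandas sketch is given below; the column names and values are hypothetical, not the actual pipeline output.

```python
import pandas as pd

# Hypothetical expression table; log2fc stands for log2(MT/WT).
expr = pd.DataFrame({
    "gene":   ["g1", "g2", "g3"],
    "log2fc": [1.8, -2.3, 0.4],
    "qvalue": [1e-5, 2e-4, 0.5],
})

# DEG criteria from the text: |log2(MT/WT)| >= 1 and Q-value < 0.001.
degs = expr[(expr["log2fc"].abs() >= 1) & (expr["qvalue"] < 0.001)]
up = degs[degs["log2fc"] > 0]    # upregulated in MT
down = degs[degs["log2fc"] < 0]  # downregulated in MT
print(len(degs), len(up), len(down))  # -> 2 1 1
```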
The Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses were used to explore the potential functions of the DEGs. The GO classification analysis showed that a total of 971 DEGs were assigned to 1513 GO terms under three categories: biological process, molecular function and cellular component. Among these GO terms, catalytic activity had the largest number of DEGs (517), followed by binding (465), membrane (392), membrane part (386), cellular process (189), metabolic process (177) and other GO terms (Figure 7A and Table S2). According to the Q-value, the most enriched GO term was oxidoreductase activity, followed by drug catabolic process, heme binding, tetrapyrrole binding, oxidoreductase activity acting on paired donors with incorporation or reduction of molecular oxygen, monooxygenase activity, cofactor binding, reactive oxygen species metabolic process and other GO terms (Figure 7B and Table S3). Most of the enriched GO terms were involved in plant responses to drought and other stresses. KEGG classification analysis revealed that a total of 645 DEGs were assigned to 120 pathways under five categories: cellular processes, environmental information processing, genetic information processing, metabolism and organismal systems. The greatest number of DEGs belonged to global and overview maps (324), followed by biosynthesis of other secondary metabolites (123), environmental adaptation (114), signal transduction (111), carbohydrate metabolism (101), lipid metabolism (65), amino acid metabolism (52) and other pathways (Figure 7C and Table S4). Most of these pathways were related to plant responses to drought or other stresses, suggesting that these DEGs might contribute to the enhanced drought tolerance of MT. KEGG enrichment analysis was performed with the criterion of Q-value ≤ 0.05.
According to the Q-value, the mitogen-activated protein kinase (MAPK) signaling pathway–plant was the most enriched pathway, followed by phenylpropanoid biosynthesis, flavonoid biosynthesis, brassinosteroid biosynthesis, ABC transporters, cutin, suberine and wax biosynthesis, stilbenoid, diarylheptanoid and gingerol biosynthesis, plant–pathogen interaction and other pathways (Figure 7D and Table S5). Most of these pathways are involved in plant responses to environmental stress and in cuticular wax biosynthesis, which might be the major reason for the difference in drought resistance between WT and MT plants.

Identification of DEGs Involved in the MAPK Cascade, Reactive Oxygen Species (ROS) Scavenging and Drought Response

According to the Q-value, the MAPK signaling pathway–plant was the most significantly enriched pathway of the DEGs (Figure 7D), indicating that it played an important role in the phenotypic differences between WT and MT. A total of 86 DEGs (50 upregulated and 36 downregulated) function in the MAPK signaling pathway–plant (Figure 8A and Table S6). The KEGG classification analysis showed that the 86 DEGs in the MAPK signaling pathway–plant were classified under signal transduction, 65 DEGs were involved in environmental adaptation and 25 DEGs belonged to the metabolism of stress-related chemical compounds, such as lipids, carbohydrates, amino acids, terpenoids and polyketides (Figure 9A and Table S7). It should be noted that most of the remaining enriched pathways were biosynthetic, metabolic and signal transduction pathways related to abiotic and biotic stress, such as phenylpropanoid biosynthesis, flavonoid biosynthesis, brassinosteroid biosynthesis, stilbenoid, diarylheptanoid and gingerol biosynthesis, and plant–pathogen interaction. Notably, the plant–pathogen interaction pathway contained the largest number of DEGs (99), suggesting that the biotic tolerance of MT plants might also be changed in comparison to WT plants (Figure 7D and Table S5).
In addition, the expression levels of CsMEKK1-LIKE, encoding MAP kinase kinase kinase 1 (MEKK1), CsSOD1-LIKE, encoding superoxide dismutase 1 (SOD1), and four CsPRX-LIKE genes (CsPRX5-LIKE, CsPRX10-LIKE, CsPRX24-LIKE and CsPRX25-LIKE), encoding peroxidases (PRXs), were upregulated in MT leaves, indicating that the DEGs involved in the MAPK cascade and ROS scavenging might play an important role in regulating MT tolerance to drought stress (Table 1).

Identification of DEGs Encoding Transcription Factors

A total of 74 DEGs encoding transcription factors were identified in MT vs. WT, of which 30 were upregulated and 44 were downregulated in MT leaves. These DEGs belonged to 24 transcription factor families. The AP2/ERF and WRKY families were the two largest transcription factor families, each including 10 DEGs. Interestingly, most AP2/ERF DEGs were upregulated in MT leaves, whereas most DEGs in the WRKY family were downregulated. Most of the remaining transcription factor families were also involved in plant responses to drought or other stresses, such as NAC, MYB, bZIP, bHLH, C3H, MADS, GRAS and Dof (Figure 8B and Table S8). The KEGG classification analysis revealed that 17 DEGs encoding transcription factors were involved in signal transduction, 15 DEGs were classified under environmental adaptation and 7 DEGs were related to the metabolism of stress-related chemical compounds, such as cofactors, vitamins, lipids, amino acids, carbohydrates and glycans (Figure 9B and Table S9). The upregulated DEGs, including CsERF4-LIKE, CsERF9-LIKE, CsMYB62-LIKE, CsZAT10-LIKE1, CsZAT10-LIKE2 and CsNAC22-LIKE, and the downregulated DEGs, such as CsWRKY27-LIKE and CsWRKY29-LIKE, might make a great contribution to enhancing MT resistance to drought stress (Table 1, Figure 8B and Table S8).
(Figure legend, qRT-PCR validation: the y-axis records the relative gene expression levels calculated by the 2^-ΔΔCT method with citrus β-actin as the endogenous reference; vertical bars represent standard deviations of the means (n = 3); significant differences between WT and MT leaves at the p < 0.05 and p < 0.01 levels are indicated by * and **, respectively, according to Student's t-test.)

Discussion

The Decrease in Cuticular Permeability Was Caused by the Increase in Aliphatic Wax Accumulation in MT Leaves

The MT tree is a bud mutation derived from the WT tree. A previous report showed that the MT tree possessed curly and bright leaves with prominent veins [30]. In agreement with that report, our study revealed that MT leaves were much brighter and greener than WT leaves (Figure 1A,D-G). Furthermore, the water loss rates and chlorophyll leaching rates in MT leaves were significantly lower than those in WT leaves, suggesting that the cuticular permeability of MT leaves decreased significantly in comparison to WT leaves (Figure 1B,C,J,K). Further study showed that MT leaves possessed much higher amounts of total waxes, n-primary alcohols, n-alkanes and n-aldehydes, but significantly lower amounts of sterols, in comparison to WT leaves (Figure 3A,B). Previous reports revealed that the formation of epicuticular wax crystals on the surface of citrus depends on a high proportion of aliphatic wax compounds such as n-alkanes, n-aldehydes and n-primary alcohols [22,25]. Thus, the significant rises in the amounts of n-primary alcohols, n-alkanes and n-aldehydes could explain the increase in epicuticular wax crystal density on the surfaces of MT leaves (Figure 2A-H). Cuticular waxes are multiphase systems consisting of highly ordered crystalline zones and mobile amorphous domains. The impermeable wax crystalline zones, which contribute greatly to forming the barrier, are mainly made up of aliphatic wax compounds.
Water and other solutes can only pass through the amorphous domains, which are composed mainly of pentacyclic components such as sterols and triterpenoids [31]. The significant increase in total wax load in MT leaves, especially the significant increases in n-primary alcohols, n-alkanes and n-aldehydes, fits well with the previous model that identified aliphatic wax compounds as critical determinants of cuticular permeability [32,33]. Therefore, we deduced that the increases in the amounts of aliphatic wax compounds, including n-primary alcohols, n-alkanes and n-aldehydes, might be the major reason for the reduction in cuticular permeability in MT leaves.

Increased Cuticular Wax Accumulation and Enhanced ROS Scavenging Capacity Contribute to the Improvement of Drought Tolerance and WUE in MT Plants

To investigate whether the increase in cuticular wax accumulation leads to improved drought tolerance in MT plants, the drought tolerance of WT and MT plants was compared in this study. As expected, MT plants were much more tolerant to drought stress than WT plants, as indicated by their fewer wilting leaves under drought stress (Figure 4A-D). This result supports previous reports suggesting that high cuticular wax deposition can improve plant tolerance to drought stress [5,28,34,35]. Proline and soluble sugar act as compatible osmolytes involved in the plant response to drought stress [36,37]. The increases in proline and soluble sugar contents in MT leaves could maintain the osmotic balance between the intracellular and extracellular environments, thus decreasing cellular membrane damage and resulting in enhanced drought tolerance in MT plants (Figure 4G,H). As a soluble product of membrane lipid peroxidation, MDA content is used to assess the extent of lipid peroxidation [38]. Ion leakage is an important indicator of the cellular membrane injury caused by excessive lipid peroxidation [39]. In addition, an increase in H2O2 content usually causes severe oxidative injury to plant cellular membranes [40]. Therefore, the significant declines in ion leakage rate, MDA levels and H2O2 content suggested that MT plants suffered less oxidative injury than WT plants under drought stress (Figure 4E,F,J). ROS are highly reactive molecules that can interact with DNA, RNA, proteins, pigments, lipids and numerous other metabolites in plants. Overproduction of ROS in response to drought stress causes serious oxidative damage to plant cells. The antioxidant enzymes involved in the ROS scavenging system reduce oxidative injury and, thus, play an important role in plant tolerance to drought stress [41]. In this study, significantly higher SOD, POD and CAT activities were observed in MT plants compared to WT plants (Figure 4K-M). The enhanced antioxidant enzyme activities might be the major reason for the decline in ROS content (indicated by H2O2 content) in MT plants under drought stress. Altogether, we concluded that the enhanced drought tolerance of MT plants could be attributed to two major factors. The first was the decrease in cuticular permeability, caused by the increase in aliphatic wax accumulation in MT leaves. The second was the improved ROS scavenging capacity, which limited ROS damage and enhanced cell membrane stability in MT leaves. Drought stress usually causes stomatal closure and consequently decreases Gs and Tr in leaves to limit water loss.
The stomatal closure also leads to a decline in the diffusion of CO2, resulting in a decrease in Pn under drought stress [42]. The Ci has also been reported to decline after drought treatment [43]. In agreement with previous reports, the present study showed that the Pn, Gs, Ci and Tr in leaves of both varieties declined after drought treatment. In addition, MT leaves possessed much higher Pn and significantly lower Gs and Ci compared to WT leaves under both control and drought conditions (Figure 5A-C). Chlorophyll is the main photosynthetic pigment in plants, and a stable supply of chlorophyll is required for photosynthesis [44]. Thus, the increased Pn in MT leaves might be attributed to the increase in chlorophyll contents under control and drought conditions. Moreover, the Tr in MT leaves was significantly lower than that in WT leaves under control and drought conditions (Figure 5D). Under drought stress, most leaf stomata close and cuticular transpiration becomes the major route of water loss [45]. Thus, the decrease in cuticular permeability caused by the increase in cuticular wax accumulation in MT leaves was probably one of the major reasons for the decline in Tr in MT leaves. Plant WUE describes the ratio of CO2 gain or dry matter production per unit of water loss [46]. High WUE has been reported to improve plant growth, survival and vegetation productivity and to reduce water use under drought stress [10]. Plant WUE is often investigated on an instantaneous scale (minutes; WUEi), calculated as the ratio of Pn to Tr in leaves [47]. Moreover, leaf δ13C has long been used to assess whole-plant WUE on short-term (hours or days) and long-term (years or decades) scales, since a significant positive correlation has been observed between WUE and δ13C in plants [48]. Our results showed that MT plants exhibited much higher WUE (indicated by WUEi and δ13C) than WT plants under both control conditions and drought stress (Figure 5E,F). Previous reports revealed that cuticular waxes can enhance WUE in plants by reducing cuticular transpiration and water loss [8,9,49]. The Tr in MT leaves was much lower than that in WT leaves under control and drought conditions (Figure 5D). Therefore, we deduced that the significant increase in cuticular wax accumulation in MT leaves might enhance their WUE by reducing Tr and water loss under control and drought conditions. It is well known that L* represents lightness, or the degree of glossiness, in the color space system. Interestingly, a recent report suggested that blueberry leaves with high L* could reduce heat load by reflecting a large amount of solar radiation, thus decreasing transpiration and consequently increasing WUE [50]. The cuticular waxes on a leaf surface reflect sunlight and consequently alter leaf lightness (L*) [51]. Therefore, we deduced that the high L* of MT leaves, which might be caused by increased cuticular wax accumulation, also contributed to their high WUE under control and drought conditions. Another explanation for the high WUE was the high Pn in MT leaves, since WUE was reported to depend more strongly on Pn than on Tr in citrus [52]. Taken together, we concluded that the increase in cuticular wax accumulation decreased cuticular water loss, increased L* and consequently led to the increase in WUE in MT leaves. In addition, the increased Pn might be another reason for the increase in WUE in MT leaves.
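The two WUE readouts discussed above are simple ratios; the following is a minimal sketch of both calculations (the δ13C equation is spelled out in the Materials and Methods below), with input values that are purely illustrative.

```python
# WUEi and delta-13C as used in this study; input numbers are illustrative.

def wue_instantaneous(pn, tr):
    """WUEi = Pn / Tr (net photosynthesis per unit transpiration)."""
    return pn / tr

def delta13c_permil(r_sample, r_standard=0.0112372):
    """delta 13C (per mil) vs. the PDB standard: [(Rsa - Rsd) / Rsd] * 1000."""
    return (r_sample - r_standard) / r_standard * 1000.0

print(wue_instantaneous(12.5, 3.1))  # example Pn and Tr readings
print(delta13c_permil(0.010920))     # approx. -28.2 per mil, a typical C3 leaf value
```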
The Changes in Expression Levels of Wax Biosynthesis and Export Genes Contributed to the Alterations in Cuticular Wax Accumulation of MT Leaves

To further explain the phenotypic difference between WT and MT plants at the molecular level, transcriptome sequencing was performed on leaves of the two varieties. KEGG enrichment analysis revealed that ABC transporters and cutin, suberine and wax biosynthesis were the fifth and sixth most significantly enriched pathways, suggesting an important contribution of wax biosynthesis and export DEGs to the phenotypic difference between WT and MT plants (Figure 7D). Transcriptome sequencing identified three upregulated (CsCER3-LIKE, CsABCG11-LIKE and CsABCG21-LIKE) and five downregulated DEGs (CsFAR1-LIKE, CsFAR3-LIKE, CsFAR4-LIKE, CsNSDHL-LIKE and CsChDI-LIKE) involved in wax biosynthesis and transport in MT vs. WT (Table 1). CER3 encodes a very-long-chain aldehyde reductase, which may physically interact with cytochrome b5 isoforms (CYTB5) and CER1 to catalyze the biosynthesis of n-alkanes [53,54]. Thus, the significant increase in the expression level of CsCER3-LIKE may explain the sharp increase in n-alkane amounts in MT leaves (Table 1 and Figure 10A). To date, numerous ABC transporter G subfamily genes, such as AtABCG11 and AtABCG12 from Arabidopsis, OsABCG9 from rice, PpABCG7 from Physcomitrella patens and glossy13 (an AtABCG32 homolog) from maize, have been reported to export cuticular waxes from the plasma membrane (PM) to the apoplastic environment [55-59]. In this study, the expression levels of two ABC transporter G subfamily DEGs (CsABCG11-LIKE and CsABCG21-LIKE) in MT leaves were much higher than those in WT leaves under control and drought stress (Table 1 and Figure 10G,H). The increased expression of these two genes might improve cuticular wax export from the PM to the extracellular environment, leading to the increase in the amounts of cuticular wax components in MT leaves. Sterol-4alpha-carboxylate 3-dehydrogenase (NSDHL) and cholestenol delta-isomerase (ChDI) are involved in the biosynthesis of sterols [60]. The decline in the expression levels of CsNSDHL-LIKE and CsChDI-LIKE might cause the reduction in the amounts of sterols in MT leaves (Table 1 and Figure 10E,F). In plants, fatty acyl-CoA reductase (FAR) catalyzes the reduction of very-long-chain (VLC) acyl-CoAs to n-primary alcohols [61-63]. To our surprise, the expression levels of three CsFAR-LIKE genes (CsFAR1-LIKE, CsFAR3-LIKE and CsFAR4-LIKE) declined in MT leaves, which contradicted the increase in the amounts of n-primary alcohols (Table 1 and Figure 10B-D). This result suggested that other, unidentified genes besides CsFARs might be involved in the biosynthesis of n-primary alcohols. Another explanation for this divergence is that the increased expression of CsABCG11-LIKE and CsABCG21-LIKE led to the export of much more n-primary alcohol from the PM to the extracellular environment, resulting in the increase in the amounts of n-primary alcohols in MT leaves. Interestingly, the expression levels of all wax-related genes in WT and MT leaves increased after drought stress, which suggested that these genes might be involved in the navel orange response to drought stress (Figure 10A-H). Above all, we concluded that the changes in the expression levels of wax biosynthesis and export genes led to the alterations in the amounts of cuticular wax components and, finally, improved the drought tolerance and WUE of MT plants.
The DEGs Involved in the MAPK Signaling Pathway-Plant, ROS Scavenging and Other Enriched Pathways Might Contribute to Improving MT Tolerance to Drought Stress

The most significantly enriched pathway was the MAPK signaling pathway-plant. The MAPK signaling pathway has been reported to play an important role in regulating the plant response to drought [78]. As the first members of the MAPK cascade, MAPKKKs, also named MEKKs, are activated by extracellular stimuli and then activate downstream MAPKKs and MAPKs by relay phosphorylation [79]. MEKK family genes have been reported to be involved in the plant response to drought stress [80-83]. In the present study, the expression of CsMEKK1-LIKE was upregulated in MT leaves under control and drought stress, suggesting this gene might play an important role in enhancing MT drought tolerance (Table 1, Figures 8A and 10I). Interestingly, a rice MEKK1 gene, DSM1, has been reported to enhance plant drought tolerance by promoting ROS scavenging [84]. Thus, the upregulation of CsMEKK1-LIKE expression might promote ROS scavenging and reduce H2O2 levels in MT leaves. In fact, most of the MAPK signaling pathway genes were upregulated in MT leaves and involved in signal transduction, environmental adaptation and the metabolism of stress-related chemical compounds, indicating the important role of the MAPK signaling pathway in enhancing MT resistance to drought stress (Figures 8A and 9A, Tables S6 and S7). In addition to the MAPK signaling pathway, many DEGs were enriched in numerous pathways involved in the plant response to drought and other abiotic stresses, such as phenylpropanoid biosynthesis, flavonoid biosynthesis, brassinosteroid biosynthesis, glutathione metabolism, and starch and sucrose metabolism, suggesting the potential role of these DEGs in the citrus response to drought stress (Figure 7D and Table S5). Interestingly, three DEGs (CsSOD1-LIKE, CsPRX5-LIKE and CsPRX10-LIKE) encoding antioxidant enzymes were upregulated in MT leaves under control and drought conditions (Table 1 and Figure 10J-L). Plant SOD and PRX family genes encode superoxide dismutases and peroxidases, respectively, which are involved in the ROS scavenging system [41]. Further studies revealed that overexpression of SOD and PRX genes could enhance plant drought tolerance [85,86]. Thus, the upregulated expression of CsSOD1-LIKE, CsPRX5-LIKE and CsPRX10-LIKE might enhance the ROS scavenging capacity of MT plants to reduce H2O2 levels under drought stress (Figure 4J) and finally result in increased MT drought resistance. These results suggested that the DEGs involved in the MAPK signaling pathway-plant, ROS scavenging and other enriched pathways might also contribute to improving MT drought tolerance.

The DEGs Encoding Transcription Factors May Play an Important Role in Enhancing MT Tolerance to Drought Stress

Transcription factors can positively or negatively regulate the expression of numerous downstream genes at the transcript level. Our study identified 74 DEGs encoding transcription factors (30 upregulated and 44 downregulated) belonging to 24 transcription factor families, such as AP2/ERF, WRKY, MYB, C2H2, NAC, MADS, FAR1, Dof, bHLH, G2-like and GRAS (Figure 8B and Table S8). Notably, AP2/ERF and WRKY were the two largest transcription factor families, containing 10 DEGs each.
A large number of AP2/ERF family genes from sesame [87], cauliflower [88], tobacco [89] and strawberry [90], as well as WRKY family genes from citrus [91], common bean [92] and peanut [93], have been reported to be involved in the response to drought stress. The remaining transcription factor families, such as NAC, MYB, bZIP, bHLH, C3H, MADS, GRAS and Dof, also play an important role in the drought stress response in plants [94]. In addition, 17 DEGs encoding transcription factors were involved in signal transduction, 15 were classified as environmental adaptation and 7 were related to the metabolism of stress-related chemical compounds (Figure 9B and Table S9). These results suggested that at least some of these transcription factors are related to the citrus response to drought stress. Previous studies showed that ERF4 [95], ERF9 [96], MYB62 [97], ZAT10 [98] and NAC22 [99] can positively regulate plant tolerance to drought stress, whereas WRKY27 [100] and WRKY29 [101] are negatively related to the plant drought response. In the present study, upregulated expression of CsERF4-LIKE, CsERF9-LIKE, CsMYB62-LIKE, CsZAT10-LIKE1, CsZAT10-LIKE2 and CsNAC22-LIKE and downregulated expression of CsWRKY27-LIKE and CsWRKY29-LIKE were observed in MT leaves under both control and drought conditions, suggesting these transcription factors might play important roles in enhancing MT tolerance to drought stress (Table 1, Figures 8B and 10M-T).

Materials and Methods

Plant Materials

Seeds of trifoliate orange (Poncirus trifoliata (L.) Raf.) obtained from fruits of trifoliate orange trees in an orchard of Jiangxi Agricultural University (Nanchang, Jiangxi province, China) were sown in plastic pots filled with nutritional soil (garden soil, peat and sand mixed at a ratio of 3:2:1) to produce rootstocks. Fresh buds cut from shoots of 'Newhall' navel orange (Citrus sinensis Osbeck cv. Newhall, WT) and 'Longhuihong' navel orange (Citrus sinensis Osbeck cv. Longhuihong, MT) were grafted onto one-year-old trifoliate orange rootstocks to generate grafted WT and MT plants. The grafted plants were grown in a growth chamber under normal growth conditions (25 °C, relative humidity of 75%, 16 h light/8 h dark photoperiod). In this study, a total of 200 one-year-old grafted plants per variety with uniform size were used as plant materials, of which 100 plants served as controls (under normal growth conditions) and 100 plants underwent drought treatment. The leaves used in all experiments were sampled from five plants randomly selected from the 100 plants of each variety per biological replicate.

Analysis of Leaf Chromatic Aberration, Fresh and Dry Weight, Water Loss Rate and Chlorophyll Leaching Rate

In total, 15 fully expanded leaves per variety were used to determine leaf lightness and color with a colorimeter (CR-300, Minolta, Osaka, Japan). The colorimeter was calibrated with a standard white tile using illuminant D65, a 2° observer, Diffuse/O mode and an 8 mm aperture. The L* values represent lightness, with scores ranging from 0 (black) to 100 (white). The a* values measure red-green colors, with positive scores indicating redness and negative scores representing greenness. The b* values reflect yellow to blue colors, with positive values representing yellowness and negative values indicating blueness.
Leaf fresh and dry weights were measured with an electronic balance (Model MP31001, Selon Scientific Instrument, Shanghai, China) and expressed as the average value of 20 leaves per replicate. Three biological replicates were used for each variety. The leaves were dried completely at 65 °C in a thermostatic oven (RX-RF023, Kaimiao Electric Equipment Co., Ltd., Suzhou, China). For the water loss assay, WT and MT plants were transferred to a dark environment for 6 h to allow the stomata to close. The leaves were then detached and placed in the dark (25 °C, relative humidity of 75%) for dehydration. Fifteen leaves per biological replicate of each variety were weighed at 1, 2, 4, 8, 12, 24, 36 and 48 h after detachment with the same electronic balance. The water loss rate was expressed as the percentage of the lost weight relative to the initial leaf weight. For the chlorophyll leaching assay, five fully expanded leaves with the same size and weight per biological replicate of each variety were sampled from WT and MT plants. These leaves were soaked in 80% ethanol and kept in the dark (25 °C, relative humidity of 75%). After 2, 5, 8, 10, 24, 30 and 48 h of immersion, 2 mL of the solution was used to determine the absorbance of the extract at 647 nm (A647) and 664 nm (A664) with a spectrophotometer (UV-2600, Shimadzu, Kyoto, Japan). The leaf chlorophyll concentration was calculated with the following equation: total micromoles chlorophyll = 7.93 × A664 + 19.53 × A647. The chlorophyll leaching rate was expressed as the percentage of the chlorophyll concentration at each time point relative to the total chlorophyll extracted after 48 h. Three biological replicates were used for the water loss and chlorophyll leaching experiments.

SEM Analysis

The SEM analysis was carried out according to our previous report [22]. In short, leaf disks with a diameter of 0.5 cm were excised from fully expanded leaves and transferred to aluminum holders. The disks were then frozen in liquid N2, freeze-dried and sputter-coated with a gold film in a sputter coater (SBC-12, KYKY, Beijing, China). The coated disks were observed by SEM (Zeiss DSM 962, Carl Zeiss, Oberkochen, Germany) at ×1000 and ×3000 magnification.

Cuticular Wax Extraction and Analysis

Fully expanded leaves of WT and MT plants were collected for cuticular wax analysis. The surface areas of the leaves were determined by counting pixels of leaf photos in ImageJ software. In total, 10 leaves from each variety per biological replicate were immersed three times in chloroform for 30 s each to extract the cuticular waxes. Then, 5 µg of n-tetracosane was added to the cuticular wax solution as an internal standard. The extract was concentrated in a rotary evaporator under reduced pressure at 35 °C and dried under nitrogen. Before gas chromatography-mass spectrometry (GC-MS) analysis, the extracts were derivatized with bis-N,O-(trimethylsilyl)trifluoroacetamide (BSTFA) in pyridine for 40 min at 70 °C to convert the free carboxyl and hydroxyl compounds to their corresponding trimethylsilyl (TMSi) ethers and esters. The derivatized extracts were then treated with a nitrogen stream to remove residual BSTFA and re-dissolved in 1 mL of chloroform. The GC-MS analysis was performed according to our previous report [26].
In brief, 1 µL of cuticular wax extract from each sample was used to identify the wax components on a GC system (Agilent 6890N, Agilent Technologies, Santa Clara, CA, USA) equipped with an HP-5 MS capillary column (30 m × 0.25 mm i.d. × 0.25 µm, Agilent Technologies, Santa Clara, CA, USA) and coupled with an MS detector (Agilent 5973N, Agilent Technologies, Santa Clara, CA, USA). The carrier gas was helium at a constant flow rate of 2 mL min−1. The GC oven temperature program was 50 °C for 1 min, increased to 170 °C at 20 °C min−1, held at 170 °C for 2 min, increased to 300 °C at 5 °C min−1, and held at 300 °C for 8 min. Cuticular wax compounds were identified by matching their mass spectra with those from the NIST 14 MS library. The quantification of cuticular wax components was performed on the same GC system equipped with a flame ionization detector (FID) under the same chromatographic conditions. The wax compounds were quantified by comparing their peak areas with that of the internal standard. The amounts of cuticular wax components were expressed as micrograms per leaf area (µg cm−2). Three biological replicates were used for each variety.

Analysis of Physiological Indexes in WT and MT Leaves under Control and Drought Treatment

For drought treatment, one-year-old WT and MT plants were cultivated without watering for 21 days and then used for further study. In contrast, the control plants were well watered in the same growth chamber. In each variety, 15 fully expanded leaves were used to measure the photosynthesis parameters at 10:00 a.m. The photosynthesis parameters of WT and MT leaves, including Pn, Gs, Ci and Tr, were investigated with a LI-6400 photosynthetic system (LI-COR, Inc., Lincoln, NE, USA) according to the manufacturer's instructions under a photon flux density of 1000 µmol m−2 s−1. The ambient carbon dioxide concentration (375 µmol mol−1) was controlled by the LI-6400 CO2 injection system. Pn, Gs, Ci and Tr were automatically recorded by the photosynthetic system. WUEi was calculated as the ratio of Pn to Tr. In total, 15 fully expanded leaves per biological replicate in each variety were sampled and pooled for measurement of the physiological indexes. Ion leakage was measured by the method of a previous study [102]. In brief, the leaves were stripped, transferred to 30 mL of distilled water and shaken on a gyratory shaker (200 rpm) at room temperature for 2 h. The initial conductivity (C1) was detected with a DDSJ-318 conductivity meter (Yidian Scientific Instrument Co., Ltd., Shanghai, China). Afterward, the samples were boiled for 10 min to reach maximum ion leakage. After cooling to room temperature, the electrolyte conductivity (C2) was determined with the conductivity meter. Finally, the ion leakage (%) was calculated as 100 × C1/C2. The contents of malondialdehyde (MDA), proline, soluble sugar, chlorophyll and H2O2 and the activities of SOD, POD and CAT were measured with the corresponding detection kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China) according to the manufacturer's protocols. Briefly, the MDA contents were measured by a thiobarbituric acid (TBA)-based colorimetric method. MDA was extracted from 0.1 g of leaves homogenized in 1 mL of 80% ethanol solution on ice. After centrifuging at 10,000× g for 20 min at 4 °C, the supernatants were collected and mixed with 0.5 mL of 20% trichloroacetic acid containing 0.65% TBA. Then, the mixture was incubated at 95 °C for 30 min and cooled in an ice bath.
Afterward, the mixture was centrifuged at 10,000× g for 10 min. The absorbance of the supernatant was detected at 532 nm (A532), subtracting the value for nonspecific absorption at 600 nm (A600). The MDA content was calculated with the following equation: MDA content = [6.45 × (A532 − A600)]/0.1. The ninhydrin reaction method was used to measure the proline content. In brief, a standard curve was obtained from a series of standard solutions containing 0-10 µg of proline. A total of 0.5 g of leaves was homogenized in 5 mL of 3% sulfosalicylic acid and heated at 100 °C for 10 min. Then, 2 mL of the extract was mixed with 2 mL of acetic acid and 2 mL of 2.5% acid ninhydrin reagent. This mixture was heated at 100 °C for 30 min and then cooled to room temperature. Afterward, 4 mL of methylbenzene was added to the solution and incubated for 10 min. The supernatants were isolated and centrifuged at 10,000× g for 5 min. The methylbenzene solution was used as a control, and the absorbance of the supernatant (2 mL) was detected at 520 nm (A520). The proline content was determined with the following equation: proline content = (B × 5)/(0.5 × 2), where B is the proline amount obtained from the standard curve according to the A520. The phenol reaction method was used to detect the soluble sugar content. Briefly, a standard curve was obtained from a series of standard solutions containing 0-100 µg of sucrose. A total of 0.2 g of leaves was boiled in 5 mL of distilled water for 30 min and diluted with distilled water to 10 mL. Then, 2 mL of the solution was added to 1 mL of 9% phenol and 5 mL of concentrated sulfuric acid. After standing for 30 min, distilled water was used as a control to determine the absorbance of the aqueous solution (2 mL) at 485 nm. The soluble sugar content was determined with the following equation: soluble sugar content = (D × 10)/(0.2 × 2), where D is the sugar amount obtained from the standard curve according to the A485. To measure the chlorophyll content, 0.1 g of fine leaf powder was homogenized in 1 mL of 80% acetone and held for 15 min at room temperature in the dark. The extract was centrifuged at 10,000× g for 20 min, and the supernatant was used to measure the absorbance at 663 and 645 nm. The chlorophyll content was calculated with the following equation: chlorophyll content = 20.29 × A645 + 8.05 × A663. For H2O2 analysis, 0.5 g of fine leaf powder was homogenized in 2 mL of cold acetone. The homogenate was centrifuged at 10,000× g for 10 min. The supernatants were then mixed with titanium reagent (0.2 mL of 20% titanium tetrachloride in concentrated HCl) and 0.4 mL of NH4OH, and the mixtures were centrifuged at 12,000× g for 5 min. The precipitates were solubilized in 2 mL of 2 N H2SO4, washed repeatedly with acetone and brought to a final volume of 2 mL. The absorbance of the final solution was measured at 415 nm against a water blank that had been carried through the same procedure. The H2O2 content was calculated by comparing the absorbance against a standard curve of the titanium-H2O2 complex in the range from 0.05 to 0.3 µmol·mL−1. For the extraction of SOD, POD and CAT, 0.5 g of leaf powder was homogenized in 5 mL of extraction buffer containing 50 mM phosphate buffer (pH 7.8) and 1% polyvinylpyrrolidone. After centrifuging at 10,000× g for 20 min (4 °C), the supernatants were collected for enzyme activity analysis.
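The colorimetric equations above, together with the ion leakage formula from the preceding paragraphs, translate directly into code; the sketch below collects them in one place. All example absorbances and standard-curve readings are illustrative, and the fixed sample weights and volumes follow the protocols as written.

```python
# Direct transcriptions of the equations given in this section;
# all example inputs are illustrative.

def ion_leakage_percent(c1, c2):
    """Ion leakage (%) = 100 * C1 / C2 (initial vs. post-boiling conductivity)."""
    return 100.0 * c1 / c2

def mda_content(a532, a600, sample_g=0.1):
    """MDA content = [6.45 * (A532 - A600)] / sample weight (0.1 g protocol)."""
    return 6.45 * (a532 - a600) / sample_g

def proline_content(b_from_curve):
    """Proline content = (B * 5) / (0.5 * 2); B read from the standard curve."""
    return (b_from_curve * 5.0) / (0.5 * 2.0)

def soluble_sugar_content(d_from_curve):
    """Soluble sugar content = (D * 10) / (0.2 * 2); D from the sucrose curve."""
    return (d_from_curve * 10.0) / (0.2 * 2.0)

def chlorophyll_content(a645, a663):
    """Chlorophyll content = 20.29 * A645 + 8.05 * A663."""
    return 20.29 * a645 + 8.05 * a663

print(ion_leakage_percent(35.0, 180.0))  # ~19.4 %
print(mda_content(0.45, 0.05))           # -> 25.8
print(proline_content(4.2))              # -> 21.0
print(soluble_sugar_content(55.0))       # -> 1375.0
print(chlorophyll_content(0.30, 0.52))   # -> ~10.27
```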
The SOD reaction mixture contained 100 µL of enzyme extract, 50 mM sodium phosphate buffer (pH 7.8), 10 µM EDTA, 75 µM nitroblue tetrazolium (NBT), 13 mM methionine and 2 µM riboflavin in a total volume of 3 mL. The mixtures were exposed to white fluorescent illumination for 20 min, and the absorbance was measured at 560 nm. The SOD activity was expressed as U·g−1; one unit of SOD activity was defined as the amount of enzyme inhibiting NBT reduction by 50%. The POD reaction mixture contained 0.05 M phosphate buffer (pH 7.0), 0.3% H2O2, 0.2% guaiacol and 50 µL of enzyme extract in a total volume of 3 mL. The POD activity, expressed as U·g−1, was determined from the increase in absorbance at 470 nm; one unit of POD activity was defined as an increase in absorbance of 0.01 per min. The CAT reaction mixture was composed of 0.1% H2O2, 0.1 M phosphate buffer (pH 7.0) and 100 µL of enzyme extract in a total volume of 3 mL. The CAT activity, expressed as U·g−1, was assessed by monitoring the decrease in absorbance at 240 nm as a consequence of H2O2 consumption; one unit of CAT activity was defined as a reduction in absorbance of 0.01 per min. The leaf stable carbon isotope composition (δ13C) analysis was carried out according to a previous report [103]. Briefly, the leaf samples were dried in an oven at 70 °C and ground to a fine powder. Subsamples of the leaf powder were combusted in an isotope ratio mass spectrometer (IRMS) (DELTA V Plus, Thermo Fisher Scientific, Bremen, Germany). The resulting CO2 was separated, and the ratio of 13C/12C was assessed by the IRMS. The δ13C (‰) was expressed relative to the Pee Dee Belemnite (PDB) standard and calculated using the following equation: δ13C = [(Rsa − Rsd)/Rsd] × 1000, where Rsa and Rsd are the 13C/12C ratios of the sample and the standard, respectively. Three biological replicates were used for the above experiments.

Total RNA Extraction and Transcriptome Sequencing

Three biological replicates were used for transcriptome sequencing. In total, 15 fully expanded leaves per biological replicate in each variety were sampled from WT and MT plants under normal growth conditions. These leaves were pooled and ground to a fine powder for total RNA extraction. Total RNA was extracted from WT and MT leaves according to our previous report [17]. The quantity, concentration and RNA Integrity Number (RIN) of the total RNA were assessed with an Agilent 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA). Poly(A) mRNA was obtained from total RNA using oligo(dT)-attached magnetic beads. The mRNA was fragmented and then used to synthesize first-strand cDNA with random hexamer primers. A SuperScript Double-Stranded cDNA Synthesis kit (Invitrogen, Camarillo, CA, USA) was used to synthesize the second-strand cDNA. After end repair and 5′ phosphorylation, the cDNA samples were 3′-adenylated and ligated to 3′ and 5′ adapters. After PCR amplification, the PCR products were separated into single cDNA strands by heat denaturation. Circular single-stranded DNA libraries were obtained with a bridge primer and were sequenced on the DNBSEQ platform (BGI, Shenzhen, China).

qRT-PCR Analysis

The same plant materials and drought treatment procedure as described for the physiological index investigation were used for qRT-PCR. The transcript levels of 16 wax biosynthesis, export and drought-responsive DEGs in WT and MT leaves under control and drought conditions were investigated by qRT-PCR.
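Although the full qRT-PCR protocol is given in the cited reports, the relative quantification itself (the 2^−ΔΔCT method with citrus β-actin as the endogenous reference, as noted in the figure legend earlier) is a short calculation; the sketch below uses made-up CT values for illustration only.

```python
# Minimal sketch of the 2^-ddCT relative expression calculation used for the
# qRT-PCR data; all CT values below are illustrative, not measured values.

def relative_expression(ct_target_sample, ct_actin_sample,
                        ct_target_calibrator, ct_actin_calibrator):
    """Fold change of a target gene vs. the calibrator, beta-actin-normalized."""
    d_ct_sample = ct_target_sample - ct_actin_sample
    d_ct_calibrator = ct_target_calibrator - ct_actin_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Example: MT leaf vs. WT leaf (calibrator) for a hypothetical target gene
print(relative_expression(24.1, 18.0, 26.3, 18.1))  # approx. 4.3-fold up
```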
The total RNA extraction and cDNA synthesis were performed as described in our previous study [22]. A series of gene-specific primers was designed with Primer Premier 5.0 (Table S10). The qRT-PCR was carried out according to our previous report [22].

Statistical Analysis

All data are shown as means ± standard deviations. Student's t-test in SPSS version 22 was used to compare the differences between WT and MT. Statistical significance was considered at the p < 0.01 and p < 0.05 levels.

Conclusions

In conclusion, the amounts of total waxes and of aliphatic wax compounds, including n-alkanes, n-primary alcohols and n-aldehydes, in MT leaves were significantly higher than those in WT leaves. The increase in aliphatic wax accumulation reduced the cuticular permeability and, finally, improved the drought tolerance and WUE of MT plants. The contents of MDA and H2O2 in MT leaves were much lower than those in WT leaves under control and drought conditions. In contrast, MT leaves possessed much higher levels of proline and soluble sugar and significantly enhanced SOD, CAT and POD activities under control and drought conditions. These results suggested that MT leaves had an improved ROS scavenging capacity, suffered much less ROS damage and maintained much better cell membrane stability, which might be another reason for their enhanced drought tolerance. Based on transcriptome sequencing and qRT-PCR, we concluded that several structural genes potentially involved in wax biosynthesis and transport (CsCER3-LIKE, CsABCG11-LIKE and CsABCG21-LIKE), the MAPK cascade (CsMEKK1-LIKE) and ROS scavenging (CsSOD1-LIKE, CsPRX5-LIKE and CsPRX10-LIKE), as well as genes encoding transcription factors (CsERF4-LIKE, CsERF9-LIKE, CsMYB62-LIKE, CsZAT10-LIKE1, CsZAT10-LIKE2, CsWRKY27-LIKE and CsWRKY29-LIKE), might play an important role in increasing MT drought tolerance and WUE.

Conflicts of Interest: The authors declare no conflict of interest.
CCN1 (CYR61) and CCN3 (NOV) signaling drives human trophoblast cells into senescence and stimulates migration properties

ABSTRACT

During placental development, continuous invasion of trophoblasts into the maternal compartment depends on the support of proliferating extravillous trophoblasts (EVTs). Unlike tumor cells, EVTs escape from the cell cycle before invading the decidua and spiral arteries. This study focused on the regulatory properties of glycosylated and non-glycosylated matricellular CCN1 and CCN3, primarily for proliferation control, in the benign SGHPL-5 trophoblast cell line, which originates from the first-trimester placenta. Treating SGHPL-5 trophoblast cells with the glycosylated forms of recombinant CCN1 and CCN3 decreased cell proliferation by bringing about G0/G1 cell cycle arrest, which was accompanied by the upregulation of activated Notch-1 and its target gene p21. Interestingly, both CCN proteins increased senescence-associated β-galactosidase activity and the expression of the senescence marker p16. The migration capability of SGHPL-5 cells was mostly enhanced in response to CCN1 and CCN3, through the activation of FAK and Akt kinase but not of ERK1/2. In summary, both CCN proteins play a key role in regulating trophoblast cell differentiation by inducing senescence and enhancing migration properties. Reduced levels of CCN1 and CCN3, as found in early-onset preeclampsia, could contribute to a shift from invasive to proliferative EVTs and may explain their shallow invasion properties in this disease.

Introduction

In mammalian species, the formation of a functional placenta is essential for normal fetal growth and development. Appropriate placentation in humans relies on the ability of extravillous cytotrophoblasts (EVTs) to proliferate and then to invade the maternal tissue. Diploid EVTs located in the proximal cell column continuously proliferate to provide a constant supply of invading EVTs during the first trimester of pregnancy [1-4]. To establish a connection between the placenta and the maternal vasculature, terminally differentiated trophoblast giant cells, characterized by endoreduplication of up to 1000N of DNA [5], invade and transform the maternal vessels, and this transformation in turn guarantees nutrition and oxygen supply to the placenta and the fetus [6,7]. Before differentiating into the invasive pathway, the proliferative trophoblast cells of the cell column escape from the cell cycle when they come into contact with the maternal decidua. To prevent tumor-like behavior, such as that occurring in choriocarcinoma, the proliferation and invasion properties are temporally and spatially separated in EVTs. In humans, these two processes are tightly controlled by a plethora of multiple and complex signaling factors, such as growth factors, hormones, and chemokines [8-11]. Preeclampsia, a complication of pregnancy, is known to coincide with insufficient invasion of trophoblast cells into the decidua and the maternal spiral arteries. Such placentas lack sufficient maternal vascular remodeling, and this characteristic, combined with a restricted supply of oxygen and nutrition for the embryo, may result in intrauterine growth restriction. Therefore, deciphering this defined regulation process is important for understanding the pathogenesis of preeclampsia. Previous studies have shown that the matricellular CCN protein family members CCN1 (CYR61) and CCN3 (NOV) play an important role in these regulatory processes [12-15].
CCN proteins are known to regulate pivotal cellular processes, such as differentiation, proliferation, migration, and angiogenesis [16,17]. Downstream signaling events are mediated by integrins, bone morphogenetic proteins (BMPs), vascular endothelial growth factor (VEGF), Wnt proteins, and Notch [18]. CCN1 and CCN3 proteins occur in a secreted glycosylated form (g-CCN1 and g-CCN3) or in an intracellular non-glycosylated form (ng-CCN1 and ng-CCN3) [19,20]. As shown in earlier studies, these two forms function differently in regulating trophoblast proliferation and migration [14,21]. In the human placenta, CCN1 and CCN3 are expressed in endothelial cells of placental vessels, stromal cells, and interstitial EVT giant cells, and their expression levels increase during pregnancy [22]. In the placentas of women with early-onset preeclampsia, CCN1 and CCN3 protein levels are significantly lower than in gestation-matched control placentas [23]. We have already demonstrated that in the malignant trophoblast cell line Jeg3, a model of invasive EVTs, CCN3 reduces cell proliferation and enhances migration properties [12-14]. These studies showed that CCN3 acts by inducing the mitogen-activated protein kinase/extracellular signal-related kinase (MAPK/ERK), phosphatidylinositol 3-kinase/protein kinase B (PI3K/Akt), and Notch/p21 pathways, mediating these separate functions for proliferation and migration/invasion in Jeg3 cells [14]. In the study reported here, we investigated the proliferation control exerted by CCN1 and CCN3 in benign SGHPL-5 trophoblast cells, which are more similar to the in vivo situation than previous models. We confirmed that the proliferation of the SGHPL-5 cell line is reduced by CCN1 and CCN3, whereas migration is mostly enhanced by these proteins. We found that the CCN1 and CCN3 proteins induce senescence of the trophoblast cells, which is accompanied by cell cycle arrest at G0/G1. Simultaneously, CCN1 and CCN3 seem to promote migration capability by activating focal adhesion kinase (FAK) and Akt kinase (protein kinase B), a finding suggesting that the CCNs play a regulatory role in controlling proliferation, stopping differentiation, inducing senescence, and triggering the onset of migration in EVTs.

Materials and methods

Cell culture and treatment of SGHPL-5 trophoblast cells

The cytotrophoblast cell line SGHPL-5 (kindly provided by G. Whitley, Division of Basic Medical Sciences, St George's University of London, UK) was routinely cultivated in Ham's F10 nutrient mixture (Biochrom AG, Berlin, Germany) supplemented with 10% fetal calf serum (FCS; Biochrom AG), 2 mM L-glutamine, and 1% penicillin/streptomycin (10,000 U/ml, 100×; Life Technologies, Carlsbad, CA, USA). Cells were seeded as specified in the following sections and allowed to attach for 24 h in normal culture medium. Synchronization of the cell cycle phase distribution was achieved by serum starvation for another 24 h.

In vitro proliferation assay

Cells were seeded at a density of 5 × 10^4 cells per well in 12-well plates in triplicate. After 24 h of serum starvation, the cells were treated with 5% FCS and 1 µg/ml g-rhCCN1, ng-rhCCN1, g-rhCCN3, ng-rhCCN3, or PBS/0.1% BSA as a solvent control. An electronic cell counter (CASY-1; Schärfe Systems, Reutlingen, Germany) was used to count the cells 24 h and 48 h after plating, as previously described [13,24].

Analysis of cell cycle distribution

Cells were seeded at a density of 7 × 10^5 cells per well in 25-cm² cell culture flasks.
After 24 h of serum starvation, cells were treated with 5% FCS and 1 µg/ml g-rhCCN1, ng-rhCCN1, g-rhCCN3, ng-rhCCN3, or PBS/0.1% BSA as a solvent control for 0 h, 4 h, or 24 h. Bromodeoxyuridine (BrdU) was added to the culture for the last two hours of the incubation period. Cells were then fixed and stained for newly synthesized DNA, as marked by incorporated BrdU, with a specific fluorescein isothiocyanate (FITC)-conjugated anti-BrdU antibody, as well as for total DNA with 7-amino-actinomycin D (7-AAD), according to the manufacturer's protocol (FITC BrdU Flow Kit; BD Pharmingen, San Jose, CA, USA). Two-color flow cytometric analysis was used to detect cells actively synthesizing DNA (FL-1; FACSCalibur, Becton Dickinson, Heidelberg, Germany) and total DNA (FL-3). The proportions of cells in the G0/G1, S, and G2/M phases of the cell cycle were quantified from a classical DNA profile (FL-3; histogram plot of DNA content against cell numbers).

Annexin V apoptosis assay

Cells were seeded at a density of 9 × 10^4 cells per well in 6-well plates. After 24 h of serum starvation, the cells were treated with 1 µg/ml g-rhCCN1, g-rhCCN3, or PBS/0.1% BSA as a solvent control for 24 h. Annexin V apoptosis assays were performed as described by Koch et al. [25] by flow cytometry (FACSCalibur, Becton Dickinson) in combination with FITC-coupled annexin V and propidium iodide (PI; BD Pharmingen).

Senescence-associated β-galactosidase staining

SGHPL-5 cells were seeded in 6-well plates (3 × 10^5 cells per well), and experiments were performed with 1 µg/ml rhCCN1, rhCCN3, or PBS/0.1% BSA as a solvent control for 24 h or 48 h. Cells were washed with PBS and then fixed for 15 min in 0.2% glutaraldehyde in PBS. After two washes with PBS, fixed cells were incubated in freshly prepared senescence-associated β-galactosidase (SA-β-Gal) staining solution (1 mg/ml X-Gal, 5 mM potassium ferricyanide, 5 mM potassium ferrocyanide, and 2 mM MgCl2 in PBS at pH 6.0) for 24 h at 37 °C. At least three random fields were digitally photographed with a phase-contrast microscope (10× magnification). The numbers of total cells and of positive blue-stained cells were counted and expressed as SA-β-Gal-positive cells per 100 cells.

Analysis of migration

Wound healing migration assays for analyzing horizontal migration properties were performed with co-culture inserts (ibidi GmbH, Martinsried, Germany). We seeded 2 × 10^4 cells into each chamber of the insert and allowed them to attach in regular culture medium. After 24 h, the cells were pretreated with 1 µg/ml rhCCN1, rhCCN3, or PBS/0.1% BSA as a vehicle control in serum-free culture medium for another 24 h. Then the culture insert was removed, leaving a defined cell-free gap of 500 µm. Cell patches were overlaid with serum-free culture medium and 1 µg/ml ng-rhCCN1, g-rhCCN1, ng-rhCCN3, g-rhCCN3, or PBS/0.1% BSA as a vehicle control. Cells were allowed to migrate, and phase-contrast images were collected with a Zeiss Axiovert 25 microscope (Carl Zeiss AG, Oberkochen, Germany) at 0 h and 24 h. For documentation, an AxioCam ICc1 camera and the AxioVision LE Release 4.8.2 image analysis software (Carl Zeiss AG) were used. Photographs of wound healing migration assays were analyzed with the WimScratch quantitative image analysis software (www.wimasis.com).
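Both readouts described above, SA-β-Gal-positive cells per 100 cells and wound closure derived from the gap areas at 0 h and 24 h, are simple ratios; a minimal sketch follows. The function names and input numbers are ours for illustration and are not part of the WimScratch output.

```python
# Minimal versions of the two quantifications described above;
# all input numbers are illustrative.

def sa_bgal_per_100(positive_cells, total_cells):
    """SA-beta-Gal-positive cells per 100 counted cells."""
    return 100.0 * positive_cells / total_cells

def wound_closure_percent(gap_area_0h, gap_area_24h):
    """Percentage of the initial cell-free gap covered after 24 h."""
    return 100.0 * (gap_area_0h - gap_area_24h) / gap_area_0h

print(sa_bgal_per_100(42, 310))           # ~13.5 positive cells per 100
print(wound_closure_percent(1.00, 0.35))  # 65.0 % closure (areas in image units)
```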
To analyze vertical migration with transwell assays, SGHPL-5 cells were seeded at a concentration of 3 × 10^4 cells per well on uncoated Transwell chambers (Falcon cell culture inserts for 24-well plates, 8-µm pore size; Thermo Fisher Scientific, Waltham, MA, USA). After attachment, cells were incubated with 1 µg/ml g-rhCCN1, ng-rhCCN1, g-rhCCN3, ng-rhCCN3, or PBS as a vehicle control in both chambers. After 6 h, non-migrated cells on the upper side of the inserts were removed with a cotton swab. Cells on the lower surface were fixed in ice-cold methanol and stained with hematoxylin. Membranes were cut out with a scalpel blade, placed on glass slides, and covered with Shandon Xylene Substitute mountant (Thermo Fisher Scientific). For evaluation, we took five non-overlapping pictures of each membrane in three independent experiments prepared as duplicates (20× magnification; Axiophot microscope, Carl Zeiss AG; Digital Sight DS-U1 camera, Nikon, Düsseldorf, Germany) and analyzed them with ImageJ software (http://imagej.nih.gov/ij/).

RNA isolation and quantitative reverse-transcriptase polymerase chain reaction

Total RNA was isolated from cells with the E.Z.N.A. RNA extraction kit (Omega Bio-tek, Norcross, GA, USA) and reversely transcribed as previously described [24]. Gene expression was quantified with the quantitative PCR Master Mix and SYBR green (Applied Biosystems, Darmstadt, Germany) on an ABI Prism 7300 sequence detector (Applied Biosystems). PCR reactions were carried out in triplicate in a final volume of 20 µl, with 1 µl (40 ng) cDNA, 1× reaction buffer containing SYBR green, and 10 pmol of sense and antisense primers (for sequences, see Table 1). PCR was performed for 10 min at 95 °C, followed by 40 cycles of 10 s denaturation at 95 °C and 1 min annealing at 60 °C. The specificity of the amplification products was confirmed by melting curve analysis. Ten-fold serial dilutions of purified PCR products from 1 pg to 0.1 pg served as standards and provided a relative quantification of the unknown samples. The quantity of cDNA in each sample was normalized to the β-actin content.

Statistical analysis

Statistical analysis of densitometric data from Western blot analyses and of the qRT-PCR results was performed with GraphPad Prism 5 (GraphPad Software Inc., La Jolla, CA, USA). Statistical significance was determined with PASW Statistics 18 (IBM, Duesseldorf, Germany) using the Mann-Whitney U test for nonparametric independent two-group comparisons. For comparisons of CCN-treated samples with untreated controls, statistical significance was set at the level of P ≤ 0.05.

Results

CCN1 and CCN3 decrease proliferation and induce G0/G1 cell cycle arrest in SGHPL-5 trophoblast cells

As previously reported, both g-CCN3 and ng-CCN3 recombinant proteins decrease the proliferation of the malignant trophoblast cell line Jeg3 [13,14]. Because of the tumor-like proliferation characteristics of Jeg3 cells, we investigated the influence of recombinant human CCN1 and CCN3 proteins on proliferation control in benign SGHPL-5 cells. SGHPL-5 cells endogenously express ng-CCN1 but not CCN3 protein (data not shown). When g-rhCCN1, ng-rhCCN3, or g-rhCCN3 was added to the cell culture medium, the numbers of SGHPL-5 trophoblast cells were significantly lower within 48 h than in control cultures (Fig. 1A), and g-rhCCN1 and ng-rhCCN3 were the most effective proteins.
The expression of the proliferation marker Ki-67 was clearly lower after treatment with both glycosylation forms of CCN1 and CCN3 than in control cells, as determined by immunocytochemistry (Fig. S1). To identify the reasons for the reduction in cell numbers after the addition of CCNs, we investigated apoptosis and cell cycle arrest. Analysis of apoptosis with the Annexin V assay showed a significant increase in Annexin V staining after treatment with g-rhCCN1 but not with g-rhCCN3 (Fig. 1B). Antibodies against CCN1 alone or rhCCN1 preincubated with anti-CCN1 did not increase Annexin V staining. Other markers of apoptosis, such as caspase-3 and p53, however, were not altered after the addition of either of the rhCCN proteins (data not shown). Moreover, we did not find that CCN1 and CCN3 induced any increase in the numbers of polyploid cells (n > 4) in SGHPL-5 cells as a marker of endoreduplication (Fig. 1C). Cell cycle analysis of BrdU-labeled cells by fluorescence-activated cell sorting (FACS) showed that the number of cells arrested in the G0/G1 phase after treatment with ng-rhCCN1 (52.85% ± 0.55%), g-rhCCN1 (52.45% ± 2.69%), ng-rhCCN3 (53.06% ± 2.57%), or g-rhCCN3 (51.49% ± 2.17%) was significantly higher than that of control cells (40.05% ± 1.46%) or the vehicle control (42.74% ± 0.86%) (Fig. 2). Accordingly, the fraction of cells in the G2/M phase was significantly lower after the addition of ng-rhCCN1, g-rhCCN1, or g-rhCCN3.

(Figure 1 legend: (A) Total cell numbers after treatment of SGHPL-5 cells with 1 µg/ml of non-glycosylated recombinant human CCN1 (ng-rhCCN1), ng-rhCCN3, glycosylated (g)-rhCCN1 and g-rhCCN3. Proliferation was significantly lower in response to g-rhCCN1, ng-rhCCN3, and g-rhCCN3 than in control cultures (ctrl); proliferation was slightly reduced after treatment with ng-rhCCN1. N = 3. *P ≤ 0.05. (B) Staining of SGHPL-5 cells treated with g-rhCCN1 or g-rhCCN3 alone, preincubated with anti-CCN1 or anti-CCN3, or with anti-CCN1 or anti-CCN3 alone, compared with control cells, for the early apoptosis marker Annexin V (AnnV+/PI−); percentages of stained cells were determined by fluorescence-activated cell sorting. Apoptosis was significantly increased only after treatment with g-rhCCN1. *P ≤ 0.05. (C) Percentage of polyploid SGHPL-5 cells (cells with n > 4) after treatment for 24 h with CCN1 and CCN3 recombinant proteins; no significant changes in polyploidy were found.)

CCN1 and CCN3 decrease proliferation via Notch-1 receptor/p21 signaling

The Notch-1 receptor is known to be expressed in cytotrophoblast cells of the human placenta [26] and to regulate the cyclin/CDK inhibitor p21 as a downstream target of Notch-1 activation [27]. Our recent studies using Jeg3 malignant trophoblast cells also showed a link between the CCN-induced Notch-1 signaling pathway and the decrease in cell proliferation [14]. Treating SGHPL-5 cells with g-rhCCN1, ng-rhCCN1, or g-rhCCN3 significantly enhanced the cleavage of the Notch-1 receptor (Fig. 3A). After 2 h, the expression of p21 protein was significantly upregulated by both glycosylation states of CCN1, whereas stimulation with glycosylated or non-glycosylated CCN3 enhanced p21 protein expression only slightly and not significantly (Fig. 3B). Interestingly, expression of cyclin D1, a positive regulator of the transition from the G1 to the S phase [28], was slightly but not significantly upregulated after the addition of both glycosylation states of CCN1 and CCN3 (data not shown).
CCN1 and CCN3 induce cellular senescence via upregulation of the senescence marker β-galactosidase and p16 expression

Since a G0/G1 cell cycle arrest via upregulation of the p21 pathway upon CCN1 is known to be associated with cellular senescence [29], we performed senescence-associated β-galactosidase staining of CCN1- and CCN3-treated SGHPL-5 trophoblast cells. The number of cells with SA-β-Gal staining was significantly higher after 48 h of treatment with g-rhCCN1 and both glycosylation forms of CCN3 than in control cultures (Fig. 4A-B). Expression of the cyclin-dependent kinase inhibitors p15INK4B, p27Kip1, and p57Kip2, as well as p16INK4A, at the mRNA level did not differ significantly between treated and untreated SGHPL-5 cells (Fig. S2). However, the increase in SA-β-Gal staining upon CCN treatment was preceded by a significant upregulation of the cell cycle regulator and senescence marker protein p16 upon 2 h of treatment with g- and ng-CCN1 and g-CCN3 (Fig. 4C). Interestingly, p16 protein levels decreased upon g-CCN1 after 48 h, possibly due to an increased turnover rate.

(Figure 2 legend: The proportion of SGHPL-5 cells remaining in G0/G1 after treatment with CCN1 and CCN3 for 24 h was significantly higher than that in control cells. The number of cells in G2/M phase was significantly reduced by treatment with ng-rhCCN1, g-rhCCN1, or g-rhCCN3. N = 3. *P ≤ 0.05.)

CCN1 and CCN3 increase the migration of SGHPL-5 trophoblast cells via phosphorylation of FAK and Akt kinases

In our recent studies using the choriocarcinoma cell line Jeg3, we found increased migration of Jeg3 cells after treatment with ng-CCN3 but not with g-CCN3; this increased migration was mediated by the activation of Akt and MAP kinases [13,14]. We investigated the migration behavior of SGHPL-5 cells after treatment with g-rhCCN1, ng-rhCCN1, g-rhCCN3, or ng-rhCCN3 in two separate assays analyzing horizontal and vertical migration properties. Horizontal migration of SGHPL-5 cells was assessed in a wound healing assay using co-culture chambers and was mostly enhanced by the glycosylated forms of both CCN1 and CCN3 and by non-glycosylated CCN1 (Fig. 5A and 5C). In uncoated transwell migration assays analyzing vertical migration, we observed an increase in migration, although not significant, after treatment with ng-rhCCN1 or ng-rhCCN3, a significantly decreased migration after treatment with g-rhCCN3, and no obvious change in migration upon g-CCN1 (Fig. 5B and 5D). Our analysis of matrix metalloproteinases (MMPs) showed that treatment with CCN1 or CCN3 does not alter transcript expression of MMP-2 by SGHPL-5 trophoblast cells (Fig. 6A). However, the glycosylated forms of both CCN1 and CCN3 significantly enhance MMP-9 mRNA expression in SGHPL-5 cells (Fig. 6B), whereas ng-rhCCN1 seems to exert no substantial influence on the expression of the MMP-9 transcript. FAK/Akt signaling is known to be activated by secreted MMP-2 and MMP-9 [30-33]. Our studies showed that the phosphorylation of FAK is significantly affected only by g-CCN3 and is slightly increased by ng-CCN3 after 8 h of CCN3 treatment (Fig. 7A). Both glycosylation forms of CCN1 and the glycosylated form of CCN3 significantly promote the phosphorylation of Akt after 2 h and 8 h of CCN treatment (Fig. 7B). The phosphorylation of ERK1/2 was not changed by treatment with CCN1 or CCN3 (Fig. 7C).
SGHPL-5 cells were treated with glycosylated or non-glycosylated CCNs for 24 h. Untreated cells were used as controls (controls and vehicle controls). Each micrograph is representative of three independent experiments. Glycosylated and non-glycosylated CCNs stimulated the horizontal migration of SGHPL-5 cells; the most pronounced effect was achieved with glycosylated CCNs compared to controls. Scale bar, 500 µm. (B) Exemplary micrographs of migration assays using transwell chambers are shown. The cells were treated with glycosylated or non-glycosylated CCNs for 6 h. Five random fields per condition were photographed at 20× magnification. Interestingly, non-glycosylated CCN1 (ng-CCN1) and CCN3 (ng-CCN3) slightly induced migration, whereas glycosylated (g)-CCN3 significantly reduced the migration capability of SGHPL-5 cells. Scale bar, 50 µm. (C) Quantification of horizontal migration assays. Bars represent mean values of three independent experiments; error bars indicate SD. Horizontal migration was significantly enhanced after stimulation with glycosylated CCNs and non-glycosylated CCN1 compared to controls. N = 3. *P < 0.05. (D) Quantification of vertical migration assays using transwell chambers. Bars represent mean values of three separate (glycosylated and non-glycosylated) experiments performed in duplicate; error bars indicate SD. The mean value of untreated cultures (control) was arbitrarily set at 100%. Vertical migration of SGHPL-5 cells was moderately higher after treatment with non-glycosylated (ng)-CCN3 and with ng-CCN1, unchanged upon g-CCN1, but significantly reduced by treatment with g-CCN3. N = 3. *P < 0.05.

CCN proteins are known to mediate the regulation of cell migration and invasion through diverse integrin receptors. 30 Using Jeg3 trophoblast cells, we confirmed that integrin α5β1 is the receptor for CCN3-promoted trophoblast migration. 14 Both subunits of integrin α5β1 are expressed by SGHPL-5 trophoblast cells (Fig. S3). Whether integrin α5β1 mediates the increased migration of SGHPL-5 cells, as it does in Jeg3 cells, must be elucidated in future experiments. The schematic overview (Fig. 8) summarizes the identified signaling pathways of both CCN proteins in SGHPL-5 cells. CCN1 and CCN3 decrease proliferation by inducing cell cycle arrest and bringing about senescence; they also activate Notch/p21 signaling and simultaneously increase migration by activating FAK and Akt, probably via integrin α5β1, which is expressed in SGHPL-5 cells.

Discussion

Cell cycle exit and subsequent differentiation into the invading cell type of trophoblast cells are central processes of placentation and are coordinated by an exact interplay between proliferation, differentiation, and invasion capabilities. 34 This study focused on the molecular regulatory mechanisms, such as proliferation and migration, that are mediated by both glycosylation forms of the matricellular proteins CCN1 and CCN3. We have previously shown that these proteins control the proliferation process in Jeg3 cells as a model of EVTs. EVTs detach from the cell column and differentiate into the invasive phenotype; they then deeply invade the maternal decidua and maternal spiral arteries. Previous studies showed that CCN1 and CCN3 are expressed at high levels in the human placenta during pregnancy, with expression in interstitial EVT cells, in endothelial cells of vessels, and in stromal cells. 22 The levels of both CCN proteins are consistently high in the sera of non-pregnant and pregnant women.
However, lower levels of CCN1 and CCN3 were detected in the sera of pregnant women with early-onset preeclampsia, a disease that is associated with insufficient trophoblast invasion. 22,23 These findings indicate that CCNs are involved in the regulation of cell biological events at the feto-maternal interface. More detailed analyses in previous studies showed that CCN3-mediated migration was induced by integrin α5β1 as the receptor and activator of Akt kinase, whereas Notch-1 and p21 are involved in the antiproliferative capabilities of CCN3. 14 In the present study we focused mainly on the role of CCN proteins in proliferation control, using the benign cytotrophoblast cell line SGHPL-5 as a model system for the in vivo situation.

CCN1 and CCN3 decrease proliferation of SGHPL-5 cells by inducing a G0/G1 cell cycle arrest, followed by differentiation into a cellular senescent state

During trophoblast differentiation, some of the cytotrophoblasts (CTBs) underlying the syncytiotrophoblast layer maintain their undifferentiated phenotype throughout pregnancy, thereby providing a reservoir of placental stem cells. The remainder of the CTBs differentiate into two subpopulations of trophoblast cells: syncytiotrophoblast cells (STBs) and invasive extravillous interstitial cytotrophoblasts (EVTs). 3 Until now, little has been known about the interplay of cell cycle regulators, and it has been impossible to determine whether trophoblast cells proliferate or exit from the cell cycle to allow further differentiation. 35,36 The results of previous experiments using the choriocarcinoma cell line Jeg3 suggested that CCN3 causes an imbalance between the proliferation and migration of human trophoblast cells. 13,14,22 In the present study we found that both glycosylation forms of the CCN1 and CCN3 proteins reduce the numbers of benign SGHPL-5 trophoblast cells, whereas in Jeg3 cells only CCN3 seems to regulate proliferation. Comparing the effect of CCNs on migration properties in both cell lines showed that Jeg3 trophoblast cells and SGHPL-5 cells are mostly stimulated by non-glycosylated CCN1 and CCN3. [12][13][14] Thus, the regulatory properties of CCNs on proliferation differ between the malignant and the benign trophoblast cell lines, whereas migration seems to be similarly regulated. The reduced number of SGHPL-5 cells after treatment with CCN1 and CCN3 is based on cell cycle control and not on apoptosis. The analysis of cell cycle phase distribution found that reduced proliferation after treatment with CCN1 or CCN3 is associated with a G0/G1 cell cycle arrest characterized by an increased number of cells in the G0/G1 phase. The proportion of cells in the G2/M phase was significantly reduced by both glycosylation forms of CCN1 and CCN3. Arrest of or exit from the cell cycle is a precondition for a cell to pass into postmitotic states, such as quiescence, senescence, or terminal differentiation. 37 Studies of murine trophoblast giant cells have shown that terminal differentiation is marked by endoreduplication. 38 However, we did not detect an increase in the number of polyploid SGHPL-5 cells after treatment with either CCN. Instead, our results clearly showed that both CCN proteins induced cellular senescence in SGHPL-5 cells, as demonstrated by increased expression of SA-β-Gal and increased expression of p16, both well-established markers of cellular senescence. 39 Meanwhile, separate signaling pathways are mediated by CCNs and lead to alterations in proliferation and migration properties.
It is known that the Notch-1 receptor is expressed in CTBs of the human placenta 26 and that this receptor regulates the cyclin/CDK inhibitor p21. 27 In small cell lung cancer cells, Notch-1 signaling induces a p21-mediated cell cycle arrest. 40 Furthermore, Notch signaling plays an important role in the regulation of proliferation in the placental cell column and of trophoblast invasion and differentiation of EVTs. Inhibiting the Notch signaling pathway in primary EVTs and SGHPL-5 cells enhanced proliferation in the placental cell column, invasion capability, and expression of EVT markers, as shown by Haider et al. 41 Haider et al. 41 also reported that Notch-1 expression is associated with the proliferative capability of CTB cell column progenitor cells, which is highest during the first trimester of pregnancy. This finding strongly corroborates our finding that CCN1 and CCN3 activate Notch-1 signaling and thereby reduce proliferation of the cytotrophoblast cell line SGHPL-5. CCN proteins are known to act via Notch-1 in other systems, such as myoblasts. [42][43][44][45] Our recent studies showed that Notch/p21 signaling also seems to mediate the proliferation-reducing activity of CCN3 in malignant Jeg3 cells. 14 In the present study we found that CCN1 and CCN3 cause a G0/G1 cell cycle arrest and induce cellular senescence in SGHPL-5 trophoblast cells; this effect is presumably mediated by activation of the Notch-1 receptor and subsequent upregulation of p21. Normally, cellular senescence is a characteristic feature of aging. It protects against tumourigenesis by limiting the proliferation of potentially detrimental cells and restricts tissue damage. 46 Recent studies by Krizhanovsky and colleagues 46 clearly showed that, in the placenta, the fusion of CTBs to STBs induces cellular senescence and that this action may be necessary for proper STB function during embryonic development. 47 The same finding has been reported in studies of mouse placentas: Zhang et al. 48 showed that throughout gestational days 14.5 to 18.5 the labyrinthine trophoblast cells strongly express SA-β-Gal, p53, and p21 and therefore undergo cellular senescence. The induction of senescence by CCN1 has already been described in other cell types, such as fibroblasts, 49 hepatic myofibroblasts, 50 cell lines of non-small-cell lung carcinoma, 29 and aging muscle cells. 51

Figure 8. Molecular mechanisms of CCN-mediated signaling in the human trophoblast leading to the switch between proliferation and invasion. Treatment with CCN1 or CCN3 decreased cell proliferation via Notch-1, accompanied by an upregulation of activated Notch-1 and its target gene p21, causing a cell cycle arrest. Both CCN proteins increased cellular senescence in SGHPL-5 cells, as characterized by an increase in senescence-associated β-galactosidase activity and p16 expression. In parallel, the migration capability of SGHPL-5 cells was mostly enhanced in response to non-glycosylated CCN1 and CCN3 by the activation of FAK and Akt kinase but not ERK1/2, probably via integrin α5β1 as the receptor. These results are transferable to the placenta. Here the Notch-1 receptor is expressed proximally in the placental cell column, whereas the integrin α5β1 receptor is expressed distally in the invading extravillous cytotrophoblasts (EVTs), as seen in the upper right corner (modified scheme from ref. 9).
CCN1 and CCN3 induce the migration properties of SGHPL-5 cells via FAK and Akt signaling

In addition to the inhibition of proliferation, the non-glycosylated forms of CCN1 and CCN3 in particular tend to enhance the vertical migration properties of SGHPL-5 trophoblast cells, whereas for horizontal migration the glycosylated CCNs exert the strongest effect. Interestingly, the application of glycosylated CCN3 results in less vertical migration of SGHPL-5 cells. So far we have no proven explanation for these separate effects of the various CCN forms on migration direction. However, it is already known that glycosylation controls diverse protein functions, such as the migration and invasion properties of the extravillous trophoblast. 52 Thus, the differently glycosylated CCN proteins may differ in their modulation of focal adhesion structures. The differences in horizontal migration between the CCN isoforms with respect to glycosylation may be explained by the fact that the glycosylated CCN proteins are the secreted isoforms, which can act from outside on the migration behavior of the cells in a paracrine manner and may therefore increase migration more efficiently than the non-glycosylated form of the CCNs, which is located intracellularly and can only act in an autocrine manner. Future investigations will focus on other potential signaling pathways that differ between the glycosylation forms of the CCN proteins. FAK is involved in integrin-mediated signal transduction pathways of the extracellular matrix and plays an important role in the regulation of cell proliferation, migration, and invasion. 53 It is known that the activation of Akt and ERK1/2 is related to cell migration and the activation of FAK [54][55][56][57] and that these kinases are involved in trophoblast migration and invasion (reviewed by Chakraborty et al. 8). In SGHPL-5 cells, phosphorylation and thereby activation of FAK occurs only after treatment with glycosylated CCN3. The phosphorylation status of FAK does not change after treatment with CCN1. The phosphorylation and activation of Akt are induced by both CCN1 and CCN3. This finding has been verified for CCN3 in renal carcinoma cells. 58 Haslinger et al. 59 verified the increase in SGHPL-5 migration after the application of epidermal growth factor by Akt signaling, in particular the Akt 1 and Akt 3 isoforms. In Jeg3 cells, epidermal growth factor-like domain 7 promotes migration and invasion by activating the MAPK, PI3K, and Notch pathways. 60 In contrast, phosphorylation and activation of ERK1/2 do not seem to play a role in CCN1/3-mediated regulation of migration in SGHPL-5 cells, because the phosphorylation status remains unchanged after treatment with both CCN proteins. An important aspect of cell migration and invasion is the FAK/Akt-mediated enhanced expression and activity of MMP-2 and MMP-9. [30][31][32][33] SGHPL-5 cells treated with g-CCN1 or g-CCN3 exhibit significantly higher mRNA expression of MMP-9. Whether the protein level or activity of MMP-9 is also increased upon g-CCNs is unknown up to now and has to be investigated in future experiments. However, we assume that the involvement of CCN1 and CCN3 in invasion is evident, and the activation of these signaling cascades and the resulting changes in cell physiology seem to depend on the inducing factor and the trophoblast cell line.
Thus, in a receptor-dependent manner, CCN1 and CCN3 support the inhibition of trophoblast proliferation and promote the migration of invasive trophoblasts into the maternal decidua. This conclusion is readily transferable to the placenta in vivo, because the Notch-1 receptor is expressed proximally in the placental cell column, whereas the integrin α5β1 receptor is expressed distally in the invading EVTs, 41 thereby providing a spatially distributed spectrum of action (Fig. 8). Taken together, the results of this study show that CCN1 and CCN3 are key regulatory proteins of the EVTs that control proliferation and invasion. They could support the cell cycle exit of trophoblast cells located at the proximal column and simultaneously enhance the migration properties of the invasive trophoblasts detaching from the column. Thus, we assume that both CCN proteins regulate the switch of EVTs from the proliferating to the non-proliferating senescent phenotype but not endoreduplication. We further assume that the reduced levels of CCN1 and CCN3 observed in early-onset preeclampsia could lead to increased proliferation and thereby reduced invasion capability of EVT cells. Our findings regarding the coordinated multifunctional properties of both CCN proteins in the human placenta and their defined signaling cascades may inspire efforts aimed at correcting impaired pathways in reproductive diseases by interfering with the CCN molecules.

Disclosure of potential conflicts of interest

No potential conflicts of interest were disclosed.
2018-04-03T05:25:45.405Z
2016-01-08T00:00:00.000
{ "year": 2016, "sha1": "2464dd09240712902ff58159f51845f1b16a9f5f", "oa_license": null, "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/19336918.2016.1139265?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "a5b3fde5366b6807d44ba1d5e624743ef0956cc5", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
249978007
pes2o/s2orc
v3-fos-license
Non-Covalent Interaction on the Self-Healing of Mechanical Properties in Supramolecular Polymers

Supramolecular polymers are widely utilized and applied in self-assembly or self-healing materials, which can be repaired when damaged. Normally, the healing process is classified into two types: extrinsic and intrinsic self-healable materials. Therefore, the aim of this work is to review the intrinsic self-healing strategy based on supramolecular interaction or non-covalent interaction and molecular recognition to obtain the improvement of mechanical properties. In this review, we introduce the main background of non-covalent interactions, which consist of metal–ligand coordination, hydrogen bonding, π–π interaction, electrostatic interaction, dipole–dipole interaction, and host–guest interactions, respectively. From the perspective of mechanical properties, these interactions act as transient crosslinking points that both prevent and repair the breaking of polymer chains. For material utilization in terms of self-healing products, this knowledge can be applied and developed to increase the lifetime of products, enabling rapid healing and reducing accidents and maintenance costs. Therefore, self-healing materials based on supramolecular polymers or non-covalent interaction provide a novel strategy to enhance the mechanical properties of materials and extend the cycling lifetime of products before replacement with a new one.

Introduction

Supramolecular chemistry was established and presented by Charles Pedersen, Jean-Marie Lehn, and Donald Cram after their Nobel Prize in Chemistry in 1987 [1][2][3][4][5]. The term non-covalent interaction was introduced to describe supramolecular chemistry, which involves the arrangement of molecules via intermolecular forces or non-covalent interactions [6][7][8][9][10][11]. This relatively new branch of chemistry deals with self-assembly or molecular assembly resulting from the association of two or more chemical species held together by intermolecular interactions [12][13][14][15]. Generally, atoms and molecules are linked by interactions to obtain strengthened molecules compared to a single atom. The interactions are classified into two types, shown in Figure 1: (i) intramolecular forces, which are covalent interactions within molecules, such as covalent bonds, and (ii) intermolecular forces, which are non-covalent interactions between molecules, for example, hydrogen bonding, metal-ligand coordination, π-π interaction, van der Waals, ion-ion, ion-dipole, dipole-dipole, etc. Generally, covalent interaction is stronger than non-covalent interaction. However, non-covalent interactions are strong enough to be maintained and applied for material utilization [16][17][18][19][20].

Figure 1. Classification of interactions into intramolecular (covalent) and intermolecular (non-covalent) forces [16]. Adapted from [16], Copyright 2017, with permission from Elsevier.

Therefore, the concept of a combination between covalent bonds and non-covalent interactions for molecular recognition is used to design high-performance materials with dual crosslinking networks [21][22][23][24]. When a load is applied, the non-covalent interactions are broken before the main backbone of covalent bonds. However, the performance of non-covalent interactions depends on the distance between atoms or molecules. Furthermore, these forces can be reversible and cause a self-healing phenomenon in the materials. An example of non-covalent interaction in natural rubber was studied by Sriring et al. (2018) [25].
Natural latex can be obtained from the Hevea brasiliensis plant [26][27][28][29][30][31]. A rubber particle is suspended in the serum, which consists of non-rubber components such as phospholipids, proteins, carbohydrates, inorganic salts, etc. [32][33][34][35]. The microstructure of rubber exhibits an end chain containing both a ω-terminal and an α-terminal. The ω-terminal is normally linked to proteins from the biosynthesis process, whereas the α-terminal is linked to phospholipids. Therefore, the rubber chains with the protein and phospholipid linkages can move and connect together like a network according to the reptation theory of de Gennes (1971) [36][37][38][39][40][41]. This storage hardening phenomenon arising from the non-covalent interaction between the rubber molecules (as shown in Figure 2) increases the mechanical properties of rubber as a function of time.
Figure 2. The model of rubber particles and non-covalent interaction in natural rubber molecules [25]. Adapted from [25], Copyright 2018, with permission from Elsevier.

Figure 3. The stress-strain curves of rubber samples with/without non-rubber components [25]. Reprinted from [25], Copyright 2018, with permission from Elsevier.

Self-healing is a spontaneous healable process of repairing damages in materials. It can be classified into two types, extrinsic and intrinsic healing, based on the previous works of Li and Meng (2015) [43] and Xu et al. (2021) [44]. In the case of extrinsic self-healing materials, micro- or nano-scale capsules are embedded in the polymer matrix to produce the self-healable process. Nevertheless, the healable efficiency of this system decreases over time because the healable agents are exhausted during the healing process. In the case of intrinsic self-healing polymers based on supramolecular or non-covalent interaction, they have advantages including: (i) a fast self-healing process and (ii) reversible bonding on the molecular scale for damage repair. Moreover, supramolecular interactions can produce self-healing within polymer systems, such as hydrogen bonding, metal-ligand coordination, electrostatic interaction, host-guest interaction, π-π stacking interaction, dipole-dipole interaction, and van der Waals force (Figure 4). Therefore, the material properties depend on reversible behavior. When the material has a fracture or damage, it is possible to repair or self-heal it, so the material can be used further before replacing it with a new one [44][45][46][47][48][49][50][51].

Figure 4. Typical self-healing modes and their chemical structures for self-healing [44]. Adapted from [44], with the permission of AIP Publishing.
In the case of chemical structures for self-healing, hydrogen bonding is a type of dipole attraction between molecules, which is also a non-covalent interaction [5]. Hydrogen bonding can be obtained between a hydrogen atom and a highly electronegative atom, for example, nitrogen, oxygen, or fluorine atoms. Furthermore, hydrogen bonding has strengths ranging from 5 kJ/mole to 100 kJ/mole [16]. However, hydrogen bonds are weak interactions and generally intermolecular bonds, which hold much of soft matter together [52][53][54][55]. In addition, electrostatic interaction is the attractive or repulsive interaction between molecules which have electric charges [5]. These interactions are divided into two types: (i) electrostatic attractions, which occur between molecules that have opposite charges, and (ii) electrostatic repulsions, which occur between molecules that have the same charges [56][57][58][59]. Furthermore, the strengths of electrostatic interaction range from 1 kJ/mole to 25 kJ/mole [16]. The dipole-dipole interactions are intermolecular interactions between two molecules; they are electrostatic interactions between the permanent dipoles of different molecules. The positive charge in a polar molecule interacts with the negative charge at the end of another molecule, with strengths ranging from 10 kJ/mole to 50 kJ/mole [16,[60][61][62][63]. Furthermore, metal-ligand coordination is organic-inorganic bonding between metal and ligand atoms related to Lewis acid-base chemistry. The Lewis acid within the complexes is an electron acceptor or a central metal ion, which is often a transition metal or inner transition metal. The Lewis base is an electron donor, which is called a ligand. The condition for metal-ligand coordination is that the ligand has one or more electron pairs that can be donated to the central metal [5]. Therefore, a donor atom of the ligand with a lone pair of electrons establishes a coordinate interaction with the metal, with strengths widely ranging from 10 kJ/mole to 400 kJ/mole [16,[64][65][66][67][68]. Moreover, host-guest interactions are complexes of two molecules or materials formed through unique structural relationships and non-covalent interactions, such as molecular recognition. This interaction is applied in biorecognition processes, for example, via enzyme-inhibitor and antigen-antibody interactions. In addition, π-π interactions occur when the planes of aromatic rings are stacked parallel to one another. This parallel stacking can occur either in a sandwich or a displaced stacking arrangement [5,[69][70][71].
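To keep these energy scales side by side, the ranges quoted above (all taken from ref. [16] as cited in the text) can be tabulated in a few lines of code; this is only an illustration of the quoted values, not additional data:

```python
# Interaction strengths as quoted in the text (kJ/mol, from ref. [16]).
# Purely illustrative; the ranges are approximate literature values.
INTERACTION_STRENGTH_KJ_PER_MOL = {
    "metal-ligand coordination": (10, 400),
    "hydrogen bonding": (5, 100),
    "dipole-dipole": (10, 50),
    "electrostatic": (1, 25),
}

# Print the interactions from strongest to weakest upper bound.
for name, (low, high) in sorted(INTERACTION_STRENGTH_KJ_PER_MOL.items(),
                                key=lambda item: item[1][1], reverse=True):
    print(f"{name:26s} {low:3d}-{high:<3d} kJ/mol")
```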
From a market perspective, self-healing concrete is generally used and is forecast to reach a market value of nearly USD 100 million in 2025 in the USA. These self-healing materials have good performance in repairing crack damage by themselves through embedded capsules. The micro- or nano-capsules release the healable agents to repair the damaged materials. The market revenue of self-healing materials in the USA from 2020 to 2025 (in USD millions) is shown in Table 1 [72]. Self-healing polymers are also widely applied in many fields, in particular in applications of soft materials, which is why they are in the top three of market revenue in the United States.

Table 1. Market revenue of self-healing materials in the USA from 2020 to 2025 (in USD millions), by type of product [72].

The interdiffusion of rubber molecules plays an important role in self-adhesion between two identical rubbers, especially uncrosslinked or weakly crosslinked rubber [73,74]. This means that uncrosslinked or weakly crosslinked rubber also exhibits the self-healable phenomenon. However, the rate of interdiffusion or self-healing of identical soft polymer molecules depends on the structure of the polymer matrix and also on the compatibility of chemical substances in the polymer matrix [75]. Interdiffusion is also a function of polymer viscosity; polymers with high viscosity exhibit a slow diffusion rate.
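As quantitative context for this interdiffusion picture (standard reptation results of de Gennes, not derived in the text), the longest relaxation time and the center-of-mass diffusion coefficient of an entangled chain of N monomers scale as

\begin{equation*}
\tau_{\mathrm{rep}} \propto N^{3}, \qquad D_{\mathrm{cm}} \propto N^{-2},
\end{equation*}

so longer, more viscous chains take correspondingly longer to interdiffuse across a damaged interface, consistent with the slow healing of high-viscosity polymers noted above.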
This interdiffusion phenomenon can also occur with hydrogen bonding interaction, as in the case of sodium alginate-linked oxidized natural rubber, which shows rapid self-healing [76]. The self-healing mechanisms relate to the viscoelastic properties allowing molecular mobility to heal the damage and to the surface energy creating the contact between two damaged areas. The self-healing of polymers can be achieved by both covalent and non-covalent networks [77]. From a thermodynamics point of view, the self-healing phenomenon is spontaneous owing to a favorable Gibbs free energy; the entropy changes come from the conformational chain, while the enthalpy changes come from the chemical reaction in the polymer system [78]. Moreover, the self-healing of a shape memory polymer was modeled based on constitutive equations to compare the experimental mechanical properties with the computational study, and the authors found good agreement between experiment and modelling [79]. Concerning the self-healing of polymers, the self-healing rate depends on important parameters shown in Figure 5, including:
(v) the crosslinking density (uncrosslinked or weakly crosslinked polymer);
(vi) non-covalent interaction (metal-ligand coordination, hydrogen bonding, π-π interaction, etc.);
(vii) entropy aspects (conformational entropy of polymer chains);
(viii) enthalpy aspects (chemical reaction in the polymer).

Interestingly, self-healing materials can be utilized in advanced applications such as biomimetic, bio-inspired, and smart materials in the robotic field. Tan et al. (2021) [80] presented a roadmap for the utilization of self-healing in autonomous robotics, which can potentially be applied to conductors, batteries, display screens and lighting, autonomous control, and other electronic devices for humanoid robots, underwater robots, and other biomimetic robots on nano- and micro-scales [80]. Therefore, this review mainly focuses on the very recent technology for the development of self-healing polymers using non-covalent interactions with significant potential in material utilization, such as metal-ligand coordination, hydrogen bonding, π-π interaction, dipole-dipole interaction, electrostatic interaction, and host-guest interaction.

Metal-Ligand Coordination of Polymers

The self-healing mechanism may be developed from the metal-ligand coordination in the mussel, which is one of the non-covalent interactions [43]. In nature, the mussel can hold onto various substrates such as rocks, metal, wood, and marine organisms using metal-ligand coordination in the byssal thread and byssal plaque surfaces, represented in Figure 6. The byssal plaque surface contains the mussel foot protein, an amino acid sequence with many catechol groups. Therefore, the catechol group can form a coordination bond with substrates to obtain good adhesion. Furthermore, the strength of mussels appears not only in the byssal plaque but also in the byssal thread; metal coordination bonds occurring at the end chains of the collagen in the byssal thread are also shown in this figure. As a result, the byssal thread is able to elongate considerably. The concept of metal-ligand coordination in the mussel using catechol compounds is an important key to improving the properties of materials [81][82][83][84].
Figure 6. Metal-ligand coordination both in the mussel and between mussel foot protein and substrate [82]. Adapted from [82], with permission from John Wiley and Sons.

From a research and development point of view, metal-ligand coordination was combined with covalent bonds to produce high-performance materials that exhibit both stiffness and toughness. Filippidi et al. (2017) [85] studied mussel-inspired iron-catechol complexes in toughening elastomers based on poly(ethylene glycol) diglycidyl ethers, which exhibit both stiffness and toughness. This phenomenon indicated that stiffness was improved by the coordination between Fe3+ and the oxygen atom of catechol. The stiffness of iron-treated materials is higher than that of untreated materials, as shown in Figure 7. Furthermore, the effect of the metal-ligand coordination causes the bonds to reform at their original positions after unloading. Due to the covalent bonds and unbroken metal-ligand coordination, shape memory is preserved in the unloading state. Therefore, the polymer chain can recover because of the high interaction between Fe3+ and oxygen atoms [85]. The effect of the catechol on the toughening elastomer was confirmed by Cristiani et al. (2020) [86]. The 2-[[3,4-bis[(triethylsilyl)oxy]phenyl]methyl]oxirane or catechol was the same as that used in the research by Filippidi et al. (2017) [85]. In this research, the effect of varying catechol contents on elastomeric properties was investigated. The results revealed that the Young's modulus of the toughening elastomer increased with increasing catechol content.
Increasing the catechol concentration promotes the formation of iron-catechol complex sites between Fe3+ and the oxygen atoms of catechol to improve the stiffness, strength, and toughness of the toughening elastomer, as shown in Figure 7. Furthermore, the tris-formation of the iron-catechol complex is the best structure to obtain high strength, because the metal at the crosslinking points can hold more catechol groups than the mono- and bis-formations, depending on the pH of the system [85][86][87].

Figure 7. Stiffness, strength, and toughness of iron-catechol toughening elastomers [85,86,88]. Adapted (a) from [85], with permission from AAAS; adapted (b,c) with permission from [86], Copyright 2020 American Chemical Society; adapted (d) from [88], with permission from Springer Nature.

Metal-ligand coordination is a non-covalent interaction between metal and non-metal elements, with strength ranging from 10 kJ/mole to 400 kJ/mole [16]. A coordination complex can be obtained from a metal (Fe3+, Fe2+, Zn2+, Co2+, Cu2+, Al3+, etc.) and ligands that carry a lone electron pair, such as the oxygen in a catechol group or the nitrogen in an imidazole ring. Thus, metal-ligand coordination can be applied in self-healing polymers.
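The pH dependence noted above follows the stepwise catechol-Fe3+ speciation reported in the mussel-chemistry literature (an approximate picture, not detailed in the sources cited here): mono-complexes dominate under acidic conditions, bis-complexes near neutral pH, and tris-complexes under alkaline conditions, with the overall equilibrium

\begin{equation*}
\mathrm{Fe^{3+} \; + \; 3\,H_{2}Cat \;\rightleftharpoons\; [Fe(Cat)_{3}]^{3-} \; + \; 6\,H^{+}},
\end{equation*}

where H2Cat denotes the protonated catechol; raising the pH drives the equilibrium toward the tris-complex, which provides the highest crosslink functionality.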
Self-healing with metal-ligand coordination in NR was studied by Han et al. (2017) [89]. Epoxidized natural rubber (ENR) is produced by the modification of NR using peracid from formic acid and hydrogen peroxide to obtain epoxide groups in the NR molecules. In this research, the ENR was reacted with dopamine, which is one of the catechol compounds, to obtain the grafting of dopamine (PDA) onto the ENR molecules using self-polymerization. Then, the ENR/PDA was crosslinked by Fe3+ to form reversible Fe3+-catechol coordination interactions, shown in Figure 8. The results revealed that an ENR/PDA with Fe3+ sample cut into two pieces and reconnected can be bent again without fracture. Therefore, the self-healing process was obtained using metal-ligand coordination. Furthermore, the healing performance in terms of the mechanical properties increased with increasing healing time [89].

Figure 8. Reversible Fe3+-catechol coordination in ENR/PDA [89]. Adapted with permission from [89]. Copyright 2017 American Chemical Society.

Self-healing with metal-ligand coordination was studied in synthetic rubber by Lia et al. (2020) [90]. In this research, methyl vinyl silicone rubber (MVQ) was dissolved in a tetrahydrofuran solvent and reacted with DOPA and FeCl3·6H2O. The mixture was poured into a polytetrafluoroethylene (PTFE) Petri dish. Then, it was dried at 60 °C for 24 h under a vacuum to obtain a silicone elastomer with DOPA and Fe3+.
The results in Figure 9 reveal that the silicone elastomer with DOPA and Fe3+ could be healed both at high temperature and underwater (pH = 9) [90].

Figure 9. Self-healing of silicone elastomer via amino groups and mechanical properties: (a) network structure of the silicone elastomer, (b) self-healing testing of the silicone elastomer, (c,d) stress-strain curves of healed samples at various healing times at 120 °C and underwater (pH = 9) [90]. Reprinted from [90], Copyright 2020, with permission from Elsevier.

Furthermore, metal-ligand coordination can be obtained from the reaction between pyridine ligands and Fe3+, as discovered by Cao et al. (2021) [91]. The ENR with pyridine ligands was prepared by the ring-opening reaction of the epoxy groups on the ENR molecules with amino groups to obtain the grafting of pyridine ligands onto the ENR molecules. The coordination between the pyridine ligands and Fe3+ is presented in Figure 10a. The DSC thermograms revealed that the glass transition temperature of ENR-AP increased with increasing Fe3+ content due to the coordination between the pyridine ligands and Fe3+. This interaction obstructed chain mobility, which confirmed the metal-ligand coordination in the molecules and increased the mechanical properties, as shown in Figure 10b. Moreover, the healing performance in terms of tensile strength and elongation at break (Figure 10c) increased with increasing healing time, as in Han et al. (2017) [89,91]. The characteristics of metal-ligand coordination can be investigated by FT-IR, Raman, and UV/VIS spectroscopy, which are summarized in Table 2.

Table 2. Spectroscopic characteristics of metal-ligand coordination (FT-IR, Raman, and UV/VIS); for example, the π-π* transition of the pyridine ring at 292 nm is redshifted to 324 nm after Ln3+ addition [94].
Hydrogen Bonding of Polymers

In the case of hydrogen bonding, this bond is a strong intermolecular interaction between hydrogen atoms and highly electronegative atoms such as nitrogen, oxygen, or fluorine. H-bonding is also widely used in self-healing applications to repair damage and improve the mechanical properties of polymers [43]. Li and Xia (2017) [95] studied the self-healing of modified poly(vinyl alcohol) (PVA) using hydrogen bonding. The PVA was reacted with succinic anhydride by grafting modification to obtain the modified PVA. Then, dopamine hydrochloride was reacted with the modified PVA via carboxylation, as shown in Figure 11, to produce the dopamine-functionalized PVA, which contains catechol in the PVA chains [95]. The sample can be bent and stretched again after self-healing via hydrogen bonding. The authors of [96] presented another type of self-healing silicone rubber, the hydrogen-bonded silicone rubber (HBSR), using multiple hydrogen bonds of α,ω-aminopropyl poly(dimethylsiloxane) and ethylene carbonate based on a non-isocyanate reaction. The results (Figure 12) revealed that multiple hydrogen bonding is obtained between the carbonyl and imino groups as well as the generated hydroxyl groups. In terms of the mechanical properties, the tensile strength and elongation at break of the healed sample at 24 h reached almost 90% of those of the original sample. These mechanical properties of the original HBSR sample are equal to or even better than those of conventional vulcanized silicone rubber. Moreover, the multiple hydrogen bonding also led to silicone rubber exhibiting thermally induced self-healing properties at 80 °C. Therefore, this research provided an alternative method to develop self-healing silicone rubber with multiple hydrogen bonding [96].
Figure 12. Hydrogen bonding network with silicone rubber (PDMS) represents the self-healing properties [96]. Reprinted with permission from [96]. Copyright 2019 American Chemical Society.

Xu et al. (2019) [97] reported non-covalent networks formed by incorporating carboxymethyl chitosan (CMCS) into epoxidized natural rubber (ENR) to obtain hydrogen bonding using a solution-mixing method. The results revealed that hydrogen bonding is formed by multiple hydrophilic groups of CMCS with the ENR chains to obtain multi-linkages as supramolecular networks in the molecules. In addition, hydrogen bonding can improve the healing system and mechanical properties of the ENR/CMCS composites (Figure 13). They found that the ENR with 5 and 10 wt.% CMCS reached improved tensile strengths of 1.40 and 1.92 MPa, respectively. These materials showed a self-healing efficiency of almost 90% at room temperature with a healing time of 12 h. For CMCS contents over 10 wt.%, although the mechanical properties still increased, the self-healing efficiency degraded significantly because of the agglomeration of the CMCS filler. Furthermore, hydrogen bonding produced supramolecular networks that improve the recycling capacity of the ENR/CMCS composites [97]. Shen et al. (2021) [76] studied rubber networks based on supramolecular hydrogen-bonding networks of oxidized natural rubber (oNR) crosslinked with sodium alginate (SA) to obtain a rapidly self-healing composite film. The results showed that the oNR composite with 20 phr SA exhibited improved mechanical properties, with the tensile strength reaching 6.5 MPa. Moreover, photographs and optical microscopy images of the self-healing process of the oNR/SA film are presented in Figure 14. This result indicated that the self-healing efficiency could reach 60% after 2 min of healing time and 80% after 10 min of healing time at room temperature [76].
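The healing efficiencies quoted in these studies follow the usual convention of expressing a healed property (for example, tensile strength) as a fraction of its pristine value; the exact formulas are not spelled out in the papers cited here, so the following is only a minimal sketch of that convention:

```python
def healing_efficiency(healed_value: float, pristine_value: float) -> float:
    """Healing efficiency in percent: the recovered fraction of a mechanical
    property such as tensile strength or elongation at break."""
    return 100.0 * healed_value / pristine_value

# Illustrative numbers only (not measurements from the cited works):
# a sample recovering 1.7 MPa of an original 1.9 MPa tensile strength.
print(f"{healing_efficiency(1.7, 1.9):.1f} %")  # -> 89.5 %
```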
Figure 13. Self-healing micrographs and photographs (a,b) ((a1) the cutting line at the surface before self-healing; (a2) the cutting line at the surface after self-healing; (a3) interior of the cut position before self-healing; (a4) interior of the cut position after self-healing for 12 h at room temperature), stress-strain curves at various healing times (c), and healing efficiencies (d) of epoxidized natural rubber with carboxymethyl chitosan composites [97]. Adapted with permission from [97]. Copyright 2019 American Chemical Society.

Figure 14. Self-healing photographs and micrographs (a,b), self-healing mechanism (c), and healing efficiency (d) of SA-crosslinked oNR supramolecular networks [76]. Adapted from [76], Copyright 2021, with permission from Elsevier.

Furthermore, the synergistic effect of self-healing in a silicone elastomer based on dynamic covalent bonds and multiple hydrogen bonding was studied by Chen et al. (2022) [98]. The network structure in Figure 15 shows that the multiple bonds are obtained by adding thiourea into the polyurea network. The dynamic covalent bonds of the imine groups provide the material with a strong link to the damaged surface. The results showed that this approach improves the self-healing efficiency without degrading the mechanical properties of the elastomer. Furthermore, Raman spectra acquired in mapping mode revealed the self-healing behavior, with healing efficiencies of almost 79% after 1 h and 94% after 6 h of healing time.
Therefore, the optimized self-healing process involved interpenetrating diffusion of the rubber chains and rearrangement of network junctions between the two interfaces to obtain a rapidly self-healing and tough material [98].

Figure 15. Self-healing model and micrographs of silicone elastomers with hydrogen bonding and covalent bonds; (a) the model of the breaking and healing mechanism, and (b) the mapping mode of Raman spectra during the healing process [98]. Adapted with permission from [98]. Copyright 2022 American Chemical Society.

Characterization Methods

The characteristics of H-bonding in self-healing polymers were investigated using FT-IR and 1H-NMR, which are summarized in Table 3.

π-π Interaction of Polymers

The π-π stacking interactions are non-covalent interactions between aromatic compounds containing π orbitals, which can be arranged in two types: (i) face-to-face stacking and (ii) edge-to-face stacking [43,99]. They can be used in many applications, such as self-assembly, self-healing materials, molecular receptors, controlled drug release, sensor fabrication, composites, and functional supramolecular materials with advanced properties [100]. Burattini et al. (2009) [101] studied a novel supramolecular polymer system that self-repairs through π-π stacking interactions. The terminal pyrenyl groups of a polyamide were inserted into the chains of a polyimide through complementary π-π stacking. The new material exhibited an improved ability to flow and was self-healable compared with conventional thermoplastics. The healing process was very fast at 80 °C, depending on the healing time [101]. Furthermore, a polymer blend based on aromatic π-π stacking and hydrogen-bonding interactions was investigated by Burattini et al. (2010) [102]. The results in Figure 16 show the tensile modulus and healing efficiency of the damaged material as a function of healing time, with a maximum healing efficiency of 95% after approximately 240 min [102]. These results indicate that π-π stacking can be applied in self-healing applications. Furthermore, the research of Hart et al. (2015) [103] confirmed self-healing behavior via π-π interaction. Figure 17 shows the mechanism of π-π interaction in the molecules. The healing process was rapid, with full healing at 75 °C within 40 min or at 125 °C within 14 min, as shown in environmental scanning electron microscopy (ESEM) images.
Moreover, the tensile modulus of 10 MPa exhibited 100% recovery over three break-heal cycles. Therefore, this research demonstrates the potential of the new perylene-based non-covalent interaction and the ability to tailor π-π interactions to promote self-healing polymers.

Figure 16. Supramolecular polymer network with π-π interaction; (a) π-π interaction mechanism of the pyrenyl polymer and the aromatic rings in the chain-folding polymer, (b) tensile modulus and healing efficiency as a function of time, and (c) tensile modulus under five break-heal cycles [102]. Adapted with permission from [102]. Copyright 2010 American Chemical Society.

Figure 17. Self-healing model of the perylene polymer and chain-folding polydiimide; (a) healing mechanism in the polymer chain with π-π interaction, (b) stress-strain curves of pristine samples and samples after various healing cycles, and (c) ESEM images of the healing polymer with π-π interaction (a: 25 °C, b: 75 °C, c: 125 °C) [103]. Adapted from [103], Copyright 2015, with permission from Elsevier.

The characteristics of π-π interaction in self-healing processes were investigated using FT-IR and UV/VIS spectroscopy, which are summarized in Table 4. Table 4. Overview of characterization methods for π-π interaction.
Electrostatic Interaction of Polymers

Electrostatic interaction is one of the van der Waals interactions, relating to the attractive or repulsive interaction between atoms carrying electric charges. It can be applied in self-healing polymers, where the electric charges between atoms drive the matrix repair. Guo et al. (2019) [106] presented the self-healing of tough polymers from polymeric complexes between branched poly(ethylenimine) (bPEI), poly(acrylic acid) (PAA), and poly(ethylene oxide) (PEO) using dual dynamic crosslinked polymers. The dual dynamic interactions consist of hydrogen bonding between PAA and PEO and electrostatic interactions between PAA and bPEI, as presented in Figure 18. The results revealed that the maximum stress and elongation at break of the sample stored for 48 h reached 25.7 MPa and 750%, respectively. Furthermore, this result indicated that the mechanical properties were completely restored to their original state after the self-healing process. The high self-healing performance was related to the strong dynamic electrostatic interactions and hydrogen bonding [106].

Figure 18. Self-healing of toughened polymers using dual dynamic crosslinked complexes; (a) self-healed samples stretched under tension and lifting a weight after the healing process, (b) stress-strain curves of pristine and healed samples at various healing times of 6, 12, 24, and 48 h at room temperature, (c) network structure of the complexes formed from electrostatic interaction and hydrogen bonding, and (d) optical images of damaged and healed samples after a healing time of 48 h at room temperature [106]. Adapted with permission from [106]. Copyright 2019 American Chemical Society.

A mussel-inspired antibacterial hydrogel using electrostatic interactions, coordination bonds, and hydrogen bonds for self-healing was studied by Deng et al. (2021) [107]. The results in Figure 19 indicate the self-healing mechanisms. The Al3+ on the fracture interface still reattached to the alginate-dopamine (Alg-DA) chains through coordination interaction, while Al3+ diffused near the fracture surface to aid the mobility of the Alg-DA chains. This promoted the rearrangement of coordination and electrostatic interactions.
An ultra-depth microscope was used to observe the damage-heal process: the sample was self-healed after 8 h and almost restored to its original state after 24 h. Furthermore, the compressive stress increased with the ion concentration due to the strong interaction [107].

Figure 19. Self-healing in the mussel-inspired hydrogel system; (a) mechanism of the self-healing process, (b) healing process and its demonstration by lifting a weight, (c) ultra-depth micrographs of damaged and healed samples, and (d) compressive modulus of hydrogels at various ion concentrations [107]. Adapted with permission from [107].
Copyright 2021 American Chemical Society.

Furthermore, Su et al. (2021) [108] studied a conductive self-healing hydrogel in terms of network structure, healing, and mechanical properties. The results revealed multiple supramolecular interactions in the molecules, namely electrostatic interaction, hydrogen bonding, and covalent bonds. The stress-strain curves of the hydrogel samples indicated that the healed samples could recover their mechanical properties with increasing healing time. After healing for 12, 24, 48, and 72 h, the tensile strength and strain at 72 h had recovered to nearly the original state. Thus, the mechanical properties of the hydrophobic association poly(acrylic acid)/polyaniline (PAAN) hydrogel were improved without sacrificing the self-healing efficiency of the hydrophobic association poly(acrylic acid) (HAPAA) hydrogel matrix, balancing the mechanical and self-healing properties.

The characteristics of electrostatic interaction for self-healing in materials were investigated using FT-IR, UV/VIS spectroscopy, and 1H-NMR, which are summarized in Table 5. Table 5. Overview of characterization methods for electrostatic interaction.

Dipole-Dipole Interaction of Polymers

Dipole-dipole interactions are also non-covalent interactions, which are weaker than the other interactions discussed here. They arise from the interaction of two dipolar molecules. The role of dipole-dipole interactions in the self-healing process is the chain movement of polar molecules to produce polymer matrix repair. Cao et al. (2018) [109] studied self-healing elastomers using dipole-dipole interactions and demonstrated a self-healing process underwater. In this research, the elastomer was prepared by mixing poly(vinylidene fluoride-co-hexafluoropropylene) (PVDF-HFP) with various plasticizers, including succinonitrile (SN), dioctyl phthalate (DOP), and dibutyl phthalate (DBP). The results showed that the elongation at break of the PVDF-HFP elastomer with DBP was higher than with SN or DOP.
So, PVDF-HFP/DBP was used to study self-healing underwater due to the high hydrophobicity of the fluorinated elastomer and DBP. The healing mechanism at various healing times was observed by microscope, as shown in Figure 20. After the healing test, the elastomer could be stretched to 200% strain, indicating self-healing of the elastomer after a healing time of 3 h. Furthermore, microscopy showed full healing within 12-24 h, after which the crack on the damaged surface disappeared.

Figure 20. Self-healing of the fluorinated elastomer using dipole-dipole interaction; (a) healing mechanism underwater, (b) stress-strain curves of healed samples at various healing times at room temperature, (c) healing demonstration underwater, and (d) optical images of damaged and healed samples [109]. Adapted from [109], with permission from John Wiley and Sons.

Interestingly, self-healing underwater was also studied by Zhang et al. (2020) [110]. In this research, poly(vinylidene fluoride-co-hexafluoropropylene), called fluorinated elastomer (FE), was dissolved and mixed in ionic liquids, producing multiple ion-dipole interactions in the molecules. The results in Figure 21 reveal that the mechanical properties are completely restored to their original state after healing at 50 °C for 12 h due to the hydrophobic ion-dipole interaction. Furthermore, the crack on the damaged surface disappears after a healing time of 12 h. Therefore, the self-healing efficiency depends on both healing time and temperature.

Figure 21. Self-healing of the fluorinated elastomer with multiple ion-dipole interactions [110]. Adapted with permission from [110]. Copyright 2020 American Chemical Society.

Furthermore, Wang and Urban (2021) [111] presented the self-healing of fluorinated copolymers. In this research, trifluoroethyl methacrylate (TFEMA) and n-butyl acrylate (nBA) were copolymerized to obtain a random copolymer called p(TFEMA/nBA). The optical images in Figure 22 revealed the self-healing of samples composed of a 50/50 TFEMA/nBA monomer molar ratio at 0 and 48 h. This result indicated that the combination of dipole-dipole interactions between C-F and C=O groups causes self-healing after 48 h.

Figure 22. Self-healing process of fluorinated copolymers; (a) chemical structure of the monomers in the self-healing system, and (b) comparison of optical images of undamaged, damaged, and healed samples [111]. Reprinted from [111], with permission from John Wiley and Sons.

The characteristics of dipole-dipole interactions in self-healing systems were investigated using FT-IR and are summarized in Table 6.
Host-Guest Interaction of Polymers

Host-guest interaction is a type of non-covalent interaction that follows a principle similar to the lock-and-key principle, implying specificity. The receptor molecule acts as the host and the ion acts as the guest; the host molecule must bind the guest selectively. Therefore, the lock-and-key concept is used to obtain the self-healing process. Wang et al. (2018) [114] studied rapid self-healing using the host-guest interaction in a hydrogel. In this research, host-guest recognition was formed between a host of poly(isocyanatoethyl acrylate-modified β-cyclodextrin) and a guest of 2-(2-(2-(2-(adamantyl-1-oxy)ethoxy)ethoxy)ethoxy)ethanol acrylate to obtain the host-guest supramolecular (HGSM) hydrogel. Then, the HGSM hydrogel was crosslinked under UV-initiated polymerization to obtain covalent bonds in the hydrogel. The mechanical properties of the HGSM hydrogel (Figure 23) were characterized by tensile, compression, and cyclic compression testing. The stress-strain curves exhibited the high strength of the HGSM hydrogel. The compressive modulus increased with concentration due to the higher crosslinking density. The large hysteresis loops in the loading-unloading cycles of the HGSM hydrogel showed that it dissipated energy effectively. Moreover, the stretching length of the HGSM hydrogel reached 48%, and it remained elastic in this stretched state. Furthermore, microscopy images revealed self-healing of the HGSM hydrogel within almost 60 min without any healing agent.

The self-healing of toughened elastomers based on polycyclodextrin (poly-CDs) and methacryl-1-adamantane ethylene glycol diester (HEMA-Ad) was studied by Hou et al. (2019) [115]. The host-guest interaction was used for the healing process because it is stable under humid conditions and is not affected by surface aging. The poly-CDs and HEMA-Ad acted as multi-functional host and guest molecules, respectively. From a mechanical-properties point of view, the hysteresis loop indicated the energy dissipation of the materials.
Figure 24 shows that the toughened elastomers dissipate energy more effectively than poly(2-hydroxyethyl acrylate)-co-poly(methacryl-1-adamantane ethylene glycol diester) (PHEA) during deformation, owing to the synergistic effect of the host-guest interaction, representing the micro-network of poly-CD, and hydrogen bonding between polymer chains. Furthermore, the toughened elastomers exhibited high strength, extensibility, and autonomous self-healing under ambient conditions [115].

Figure 23. Mechanical properties of the HGSM hydrogel in terms of (a) compression curves of hydrogels with various HGSM contents, (b) compression modulus of hydrogels with various HGSM contents, (c) cyclic compression test curves, and (d) stress-strain curve of the HGSM hydrogel. Self-healing process of host-guest supramolecular hydrogels; (e-g) photographs of a conventional hydrogel compared to the HGSM hydrogel, (h,i) shape recovery of the hydrogel after cyclic compression and after being squashed, and (j) healing process and micrographs of healed samples [114]. Adapted from [114], with permission from John Wiley and Sons.

Figure 24. Toughened elastomer: sample under tension, (c) typical mechanism for self-healing materials, and (d) tensile loading-unloading curves of the toughened elastomers [115]. Adapted with permission from [115]. Copyright 2019 American Chemical Society.

Furthermore, Park et al. (2021) [116] designed and studied a supramolecular double network in hydrogels containing reversible, non-covalent interactions. The supramolecular network in the hydrogel was also self-healable (Figure 25). In terms of the mechanical properties, the compressive stress-strain curves of the single-network hydrogel (ncSNH) and double-network hydrogel (ncDNH) were compared during a loading-unloading cycle. The Young's modulus of ncDNH (9.3 ± 0.1 kPa) was higher than that of ncSNH (3.2 ± 0.1 kPa) due to the combined network in the molecules. Interestingly, the hydrogel stiffened with an increase in temperature to 36.5 °C, which caused additional formation of physical and interchain interactions. Therefore, the increase in mechanical properties of the self-healing material depends on the formation of network interactions in the molecules.

Figure 25. Self-healing of the supramolecular network in the hydrogel; (a) combination of reversible chains and host-guest interaction to obtain supramolecular networks, (b) demonstration of the self-healing process, (c) compressive stress-strain curves of (i) ncSNH, (ii) ncDNH, and (iii) ncDNH at 36.5 °C before and after UV irradiation (solid and dotted), and (d) change in Young's modulus of the sample over various cycles of UV and visible light irradiation [116].
Adapted with permission from [116]. Copyright 2021 American Chemical Society.

The characteristics of the host-guest interaction in self-healing polymers were investigated using FT-IR and 1H-NMR, which are summarized in Table 7. Table 7. Overview of characterization methods for host-guest interaction.

Conclusions and Perspective

Healable supramolecular polymers based on non-covalent interactions are an emerging innovation that can be designed for new applications in polymer technology. This type of supramolecular polymer can enhance mechanical properties through reversible interactions for functional polymer products. Such self-healing phenomena can be realized with different types of non-covalent interaction, for example, metal-ligand coordination, hydrogen bonding, π-π interaction, electrostatic interaction, dipole-dipole interaction, and host-guest interaction. The challenge for performance is the control of the molecular structure, which relates to programming the non-covalent interactions of supramolecular molecules. Thus, it is of great interest to better understand the method-structure-property relationship of tailor-made supramolecular polymers. This relationship involves not only the chemically functionalized polymer structure resulting from the preparation method but also the mechanical properties of the self-healing phenomenon and the physical thermodynamics of both entropy and enthalpy changes. A collection of approaches proposed by researchers to develop healable supramolecular polymers is summarized in Table 8. The data present self-healing systems using various non-covalent interactions, which operate through chain movement to repair the polymer matrix. Interestingly, some of the systems can repair or self-heal rapidly, such as the hydrogen-bonded oxidized natural rubber/sodium alginate system and the π-π interacting polyimide/pyrenyl system. Furthermore, the results revealed that some self-healing processes can easily be carried out at room temperature. Therefore, the self-healing process can be adapted and applied with raw materials to achieve good performance in terms of mechanical properties, energy dissipation, energy saving, and cost. From a human wound-healing point of view, self-healing mechanisms based on non-covalent interactions may be applied to human wound healing in the future, owing to the intermolecular forces in the human body such as protein-protein, lipid-lipid, and hydrophobic interactions. Furthermore, in terms of polymer utilization for self-healing products, this knowledge can be applied and developed to increase the lifetime of products, enabling rapid healing, the reduction of accidents, and reduced maintenance costs of products such as surgical gloves, wound dressings, drug delivery materials, or even aircraft tires. Therefore, future technologies may apply these ideas of molecular recognition, self-healing, and supramolecular forces for non-covalent material utilization.

Author Contributions: K.B. wrote the initial draft manuscript; P.L. and S.S. revised and corrected the manuscript. W.S. provided the original idea for this work and wrote and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding: This manuscript was funded by the Budget Bureau, The Prime Minister's Office, Thailand (the strategic program on value creation agriculture for Kasetsart University in the fiscal year 2022).
Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: No new data was generated for this review paper.
Relativistic First-Principles Full Potential Calculations of Electronic and Structural Properties of Group IIIA-VA Semiconductors Based on the Zeroth Order Regular Approximation (ZORA) Hamiltonian

First-principles full potential calculations based on the Zeroth Order Regular Approximation (ZORA) relativistic Hamiltonian and the Kohn-Sham form of Density Functional Theory (KS DFT) in the local spin density approximation (LSDA) are reported for group IIIA-VA (InAs, GaAs, InP) semiconductors. The effects of relativity are elucidated by performing fully relativistic, scalar relativistic, and nonrelativistic calculations. Structural and electronic band structure parameters are determined, including split-off energies, band gaps, and deformation potentials. The nature of chemical bonding at equilibrium and under hydrostatic strain is investigated using the projected (PDOS) and overlap population weighted density of states (OPWDOS). ZORA results are compared with the Augmented Plane Wave plus Local Orbitals (APW+lo) method and with experiment. The viability and robustness of the ZORA relativistic Hamiltonian for the investigation of electronic and structural properties of semiconductors are established.

I. INTRODUCTION

There is great interest in the electronic and structural properties of group IIIA-VA materials due to their widespread applications in semiconductor devices. In particular, InAs/GaAs and InAs/InP semiconductor quantum dots (QDs) [1] have shown great promise [2] in quantum information applications such as the generation of entangled photon pairs (EPPs) on demand [3-6]. Atomistic modeling of semiconductor nanostructures may require input from accurate Density Functional [7-9] calculations in cases when experimental data are not available. Therefore, it is important to understand which material parameters are well reproduced with "standard DFT", and this work is a contribution in this area. Three factors determine the accuracy of Density Functional calculations: 1) the model exchange-correlation functional; 2) the representation of single-particle orbitals (atomic orbitals, plane waves, real-space grids) and the representation of the ion-electron interaction (ab initio pseudopotentials, full potential schemes); 3) the treatment of relativity. The assessment of the accuracy of exchange-correlation functionals is beyond the scope of this contribution. The main objective of this work is to perform a detailed study of the structural and electronic structure properties of InAs, InP, and GaAs semiconductors using a highly accurate representation of the single-particle orbitals and the ion-electron interaction, and to assess the role of relativity in these calculations. First-principles calculations on group IIIA-VA semiconductors based on the Kohn-Sham form of Density Functional Theory [7-9] have already been performed in the past [10-15]. The computational approach used in these calculations generally evolved from ab initio pseudopotential calculations to more elaborate full potential (FP) augmentation schemes [16-19]. In this work, relativity is treated with the ZORA Hamiltonian, which is designed to capture scalar relativistic effects such as s and p "orbital contraction" (stabilization) and d "orbital expansion" (destabilization), as well as spin-orbit splitting for electrons with angular momentum l > 0. While the ZORA Hamiltonian is very well established among quantum chemists, relativistic ZORA calculations on solids are much less common [28-33], especially in comparison with the large volume of calculations employing the LAPW and APW+lo methods.
Therefore, further assessment of ZORA performance in solids is important. In this work, I will employ the ZORA Hamiltonian to perform a detailed study of the structural and electronic band structure properties of InAs, InP, and GaAs semiconductors in the zinc-blende phase. The effects of relativity are elucidated by performing three sets of calculations: 1) nonrelativistic, 2) scalar relativistic, and 3) relativistic with variational treatment of spin-orbit coupling (or fully relativistic). The ZORA Hamiltonian is applied to calculate electronic band structures, band gaps, and deformation potentials. The results obtained with the ZORA Hamiltonian are compared to those obtained with the APW+lo method and with experiment. Whereas it is relatively common practice to report and discuss the PDOS, the usage of the OPWDOS is much less common. The OPWDOS analysis, popularized [34] by the Nobel Prize winner Roald Hoffmann, provides a clear pictorial representation of bonding, non-bonding, and anti-bonding orbital interactions and is deemed useful. I will demonstrate the usage of OPWDOS plots and explain how they add to our understanding of the chemical bond in InAs, InP, and GaAs. Atomic units (ℏ = e = m_e = 1) are used throughout unless otherwise specified.

II. COMPUTATIONAL APPROACH

The first-principles KS DFT calculations with the ZORA Hamiltonian are carried out with the BAND program [20-22]. BAND makes use of periodic boundary conditions (PBC) and an explicit Bloch basis composed of numerical and Slater-type atomic orbitals (NAO/STAO basis). I also perform calculations with the APW+lo method [16,17] as implemented in the EXCITING program [35]. Both methods are capable of including spin-orbit coupling variationally, and I choose to do so. To clarify the effects of relativity, I also present results of scalar relativistic and nonrelativistic calculations. The relativistic effects in BAND are treated in the Zeroth Order Regular Approximation (ZORA) approach of van Lenthe and co-workers [23-27]. The details of the ZORA implementation in BAND are given in Ref. [29]. In the ZORA approach, the kinetic energy operator is replaced by the ZORA expression

$$\hat{T}^{\mathrm{ZORA}} = \boldsymbol{\sigma}\cdot\mathbf{p}\,\frac{c^{2}}{2c^{2}-V_{\mathrm{SAPA}}(\mathbf{r})}\,\boldsymbol{\sigma}\cdot\mathbf{p},$$

where p is the momentum operator (p = −i∇ in the absence of a magnetic field), V_SAPA(r) is a sum of atomic potentials (SAPA), an approximation to the total effective potential in the ZORA kinetic energy operator, and σ = {σ_x, σ_y, σ_z} is a vector made up of the Pauli matrices. Introducing the notation

$$K(\mathbf{r}) = \frac{2c^{2}}{2c^{2}-V_{\mathrm{SAPA}}(\mathbf{r})},$$

the ZORA kinetic energy becomes

$$\hat{T}^{\mathrm{ZORA}} = \tfrac{1}{2}\,\mathbf{p}\cdot K\,\mathbf{p} + \tfrac{1}{2}\,\boldsymbol{\sigma}\cdot(\nabla K \times \mathbf{p}) \equiv \hat{T}^{\mathrm{ZORA}}_{\mathrm{SR}} + \hat{T}^{\mathrm{ZORA}}_{\mathrm{SO}}.$$

In the last equation, T^ZORA was split into the so-called scalar relativistic T^ZORA_SR and spin-orbit T^ZORA_SO terms, where T^ZORA_SO = (1/2) σ·(∇K × p). The nonrelativistic limit can be obtained by setting K → 1. I will refer to calculations with T^ZORA, T^ZORA_SR, and K = 1 as fully relativistic (ZORA FREL), scalar relativistic (ZORA SREL), and nonrelativistic (NREL), respectively. In my BAND calculations, I use a basis set of triple-zeta quality (TZ2P in BAND's notation) taken from the program's database. The core states are obtained from full potential atomistic calculations and are kept frozen during the self-consistent field (SCF) procedure. The valence states are expanded in terms of the NAO/STAO Bloch basis set functions orthogonalized on the core states (VOC basis). The Hamiltonian matrix elements are evaluated using a highly accurate numerical integration scheme [36]. The Brillouin zone integration is carried out using the accurate quadratic tetrahedron method [37,38] with 65 symmetry-unique k-points spanning the irreducible Brillouin zone (IBZ).
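As a small numerical illustration of the regular approximation above, the following sketch evaluates the relativistic factor K(r) for a bare Coulomb potential standing in for V_SAPA; this is an assumption for illustration only, since the actual SAPA potential in BAND is a sum of self-consistent atomic potentials.

```python
import numpy as np

# Illustrative only: the ZORA factor K(r) = 2c^2 / (2c^2 - V(r)),
# evaluated for a bare Coulomb potential standing in for V_SAPA.
c = 137.035999  # speed of light in atomic units

def zora_K(V):
    """ZORA factor; K -> 1 recovers the nonrelativistic kinetic operator."""
    return 2.0 * c ** 2 / (2.0 * c ** 2 - V)

Z = 49.0                    # indium nuclear charge, for illustration
r = np.logspace(-4, 1, 6)   # radial points (bohr)
V = -Z / r                  # illustrative Coulomb potential (hartree)
for ri, Ki in zip(r, zora_K(V)):
    print(f"r = {ri:9.4f} bohr   K = {Ki:.6f}")
# Deep in the potential well K deviates strongly from 1 (the relativistic
# regularization of the kinetic energy); for r -> infinity, K -> 1.
```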
The "default" convergence criteria are used to terminate the SCF procedure. APW+lo calculations are carried out with EXCITING program [35]. The local orbitals and linearization energies are taken from the program's database. The "core" states are treated fully relativistically and self-consistently in the spherical approximation, whereas the "valence" states are treated using the second-variational Hamiltonian. The IBZ is sampled The exchange-correlation is treated within the local spin density approximation (LSDA) [8,9]. BAND and APW+lo LSDA calculations are carried out using Vosko-Wilk-Nusair [39] and Perdew-Wang [40] parameterization of the correlation energy, respectively. The calculations are performed using "primitive" face-centered cubic cell with two atoms per cell. The zinc-blende crystal structure was assumed. The structural parameters are obtained by varying the lattice constant (from -20 to 20% of the equilibrium volume) and fitting the total energies to the Murnaghan equation of state [41]. where c µi (k) are expansion coefficients and χ µk are basis set functions -Bloch sums of equivalent atomic orbitals. The gross population of χ µk for the eigenstate ψ ik is and PDOS for function χ µ is where E ik is Kohn-Sham eigenvalue corresponding to eigenstate ψ ik and L is a Lorentzian broadening function. OPWDOS is defined for two functions χ µ and χ ν or between two sets of functions ({χ µ } and {χ ν }) and has large positive or negative values depending on whether the interaction between χ µ and χ ν ({χ µ } and {χ ν }) is bonding or anti-bonding, respectively. The use of these plots is demonstrated in Ref. [44]. The overlap population of two orbitals and OPWDOS are defined as III. RESULTS A. InP, GaAs, and InAs: Structural Parameters Fig. 1 shows energy level diagram for "spherically symmetric" In, As, Ga, and P atoms. [46]. The pressure release results in a transition to a cinnabar phase followed by a transition to the original zinc-blende phase [47]. No direct zinc-blende to cinnabar transition was observed. The first high-pressure phase in common "cation" InP and InAs was experimentally found to be NaCl phase [48][49][50]. The latter finding is supported by Density Functional calculations [51,52]. In connection with InAs quantum dots in GaAs or InP matrix, the low pressure zinc- Table I. Table I also shows experimental values [45] which were measured at room temperature and the results of APW+lo calculations. Figure 2 shows the deviations between LSDA lattice constants and the experimental lattice constants. The finite temperature effects will increase the lattice constant. Once these temperature effects are taken into account, the "experimental" lattice constant is effectively reduced which will influence the conclusions about the accuracy of a given exchange-correlation functional. For example, in the case of GaAs, the temperature effects lead to the increase in the lattice constant by 0.3% from 5.638Å to 5.653Å [53]. Since LSDA underestimates [53,54] the bond lengths, the inclusion of finite temperature effects into consideration will improve the agreement between the theory and experiment. Table I and Figure 2 show that LSDA ZORA relativistic and scalar relativistic results underestimate the lattice constants by, approximately, 0.6% (InP), 0.8 % (GaAs), and 0.5 % (InAs). 
The consideration of finite-temperature effects will further improve the agreement between theory and experiment, and it is likely that the error of the LSDA relativistic ZORA lattice constants is within 0.5%. In agreement with the established trend [55], relativity contracts the bond length: the DFT lattice constant decreases as the treatment of relativity changes from nonrelativistic (NREL) to fully relativistic (FREL). Variational treatment of spin-orbit coupling does not seem to affect the structural properties by much; the reduction in the equilibrium lattice constant upon going from SREL to FREL is very small (within 0.05%). It is important to include some description of relativity for In: the NREL lattice constants for InP and InAs are larger than the experimental ones, which contradicts the established LSDA trend [53,54].

The PDOS is resolved into contributions from the cation (In, Ga) and anion (As, P) atomic orbitals (AOs). The names "cation" and "anion" reflect the move away from covalent bonding and towards ionicity in the InP, GaAs, and InAs semiconductors. The Hirshfeld charge analysis [56] performed in this work reveals that electron density transfers from regions near the In and Ga ions into regions near the As and P ions, making In and Ga "positively" and As and P "negatively" charged, respectively. The PDOS of InP, GaAs, and InAs in the valence band energy region (up to 16 eV below the Fermi energy; the Fermi energy is at zero) consists of four main "spectral features". The first "spectral feature" in the PDOS is a peak of broad character just below the Fermi energy. The OPWDOS in the conduction band energy region is mostly of anti-bonding character and has peaks that are generally larger in magnitude than the OPWDOS peaks in the valence band region. The latter is a demonstration of the well-known fact that "anti-bonding is more anti-bonding than bonding is bonding" (see Ref. [57] and the references therein). The bottom of the conduction band is strongly "destabilized" by cation-anion anti-bonding interactions.

The electronic band structure parameters are computed at several levels of relativity (FREL: fully relativistic, SREL: scalar relativistic, NREL: nonrelativistic) as well as with the APW+lo method (fully relativistic approach). In agreement with previous work, the results of these calculations are compared with the available experimental data in Table II. In Table III, I summarize the orbital populations for the three top valence bands (split-off Γ_7v, light-hole, and heavy-hole Γ_8v) and the lowest conduction band (Γ_6c) at the high-symmetry points of the Brillouin zone. From Table III, one can see that the bottom of the conduction band at the Γ and L points has significant contributions from cation s-type AOs, whereas at the X point the bottom of the conduction band is made up of cation p-type orbitals. The reason for the strong "relativistic" band gap decrease at the Γ and L points is that the conduction s states are stabilized by relativity more strongly than the valence p states. The stabilization itself stems from the relativistic "contraction" of the atomic orbitals [58]. Overall, the gaps in the relativistic description decrease substantially, which might affect the conclusions with respect to the accuracy of a given exchange-correlation functional for the band gap calculation. For a fixed lattice constant, LSDA also poorly describes the energy differences within the conduction band.
For example, the E(X_C) − E(L_C) energy difference is 508 meV from the ZORA FREL calculations, whereas the experimental value is 160 meV. The relative position of the conduction band minima is thus described only qualitatively. The strong underestimation of the band gaps may result in the wrong energetic order of bands at some specific points of the Brillouin zone. This happens at the Γ point for InAs at the equilibrium lattice constant, where the conduction band Γ_6c is strongly stabilized and lies below the split-off band Γ_7v. Strain may also affect the energetic order of bands. Figures 10 and 11 show scalar relativistic and fully relativistic band structure plots, respectively, for InP, GaAs, and InAs. The band structure is computed along the edges connecting the high-symmetry points of the Brillouin zone; the Cartesian coordinates of these high-symmetry points are summarized in Table V. From these band structure plots one can see the band crossing at the Γ point for InAs at the equilibrium lattice constant. This band crossing also occurs in InP and GaAs subjected to tensile hydrostatic strain. It is found that the split-off energies at the Γ and L points are reproduced very accurately within the ZORA fully relativistic approach; the agreement with experiment is a few percent, or several meV. A fully relativistic treatment is essential for the calculation of the split-off energies, as both scalar relativistic and nonrelativistic calculations lead to a six-fold (including spin) degeneracy of the valence band at the Γ point (Fig. 10). The relative volume deformation potential describes how fast a given band gap changes with volume. A negative (positive) relative deformation potential (in our definition) means that the band gap increases (decreases) as the volume decreases. Table II shows that for both the Γ_V → Γ_C and Γ_V → L_C transitions the gaps increase as the volume decreases (negative deformation potential), whereas for the Γ_V → X_C transition the band gap decreases (positive deformation potential). The difference in the sign of the deformation potential for the Γ_V → Γ_C and Γ_V → L_C transitions on the one hand and the Γ_V → X_C transition on the other hand is attributed to the "different nature" of the conduction band minimum at these points (see Table III). The calculated absolute magnitude of the "rate of change" of the gap is the largest for GaAs and decreases for the semiconductors with a larger lattice constant (InP, InAs). The experimental relative deformation potentials are obtained from the direct band gap pressure dependence coefficients and the experimental bulk moduli. The experimental trend in the absolute magnitude of the "rate of change" of the gap is GaAs, InAs, and InP. Note, however, that experimental uncertainties for the relative deformation potential can be as large as 1 eV. The relative deformation potentials are quite close for the fully relativistic and scalar relativistic calculations, and, therefore, a fully relativistic calculation of this quantity does not seem essential, at least for the transitions considered in this work. Finally, there are two other quantities which sensitively depend on a fully relativistic calculation: the cation d-band width and the width of the upper part of the valence band (UVBW). Both band widths increase as the treatment of relativity changes from the nonrelativistic to the fully relativistic level. For the UVBW, the increase is 6%-8% (0.4-0.5 eV). The d-band width increases dramatically, from 0.10-0.15 eV to 0.9 eV.
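The conversion used above to obtain the experimental deformation potentials from pressure data can be made explicit in a few lines; the bulk modulus and pressure coefficient below are placeholders of a realistic order of magnitude, not the tabulated values.

```python
# Converting a band-gap pressure coefficient into a relative volume
# deformation potential a_V = dE_g / d(ln V). With dP = -B0 * d(ln V):
#     a_V = -B0 * dE_g/dP.
# The numbers below are placeholders, not the values tabulated in this work.
B0 = 75.0         # bulk modulus (GPa), placeholder
dEg_dP = 0.11     # direct-gap pressure coefficient (eV/GPa), placeholder
a_V = -B0 * dEg_dP
print(f"a_V = {a_V:.1f} eV")  # negative sign: gap grows as volume shrinks
```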
The agreement between the ZORA fully relativistic and APW+lo calculations is exceptionally good, especially for the relative deformation potentials, E_d^Γ and δE_d^Γ, and for the UVBW. The split-off energies are reproduced within 10 meV, and the gaps usually agree within 20 meV.

Relative volume deformation potentials a_V (eV) for the InP, GaAs, and InAs IIIA-VA semiconductors (zinc-blende crystal structure) are tabulated for specific transitions; the different signs of the deformation potential are attributed to the "different nature" of the conduction band minimum. The experimental volume deformation potentials are obtained from the direct band gap pressure dependence coefficient and the bulk modulus.
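Returning to the PDOS machinery defined in Sec. II, a minimal sketch of the Lorentzian-broadened accumulation behind such plots, with hypothetical eigenvalues and gross populations (not the actual calculated data), is:

```python
import numpy as np

def lorentzian(E, E0, gamma):
    """Normalized Lorentzian broadening function L(E - E0)."""
    return (gamma / np.pi) / ((E - E0) ** 2 + gamma ** 2)

# Hypothetical eigenvalues (eV) and gross populations for one orbital
# character; placeholders standing in for the band-structure output.
E_ik = np.array([-11.2, -6.3, -5.8, -1.0, 1.4])
G_mu = np.array([0.70, 0.10, 0.25, 0.55, 0.05])

E = np.linspace(-16.0, 4.0, 2001)
pdos = sum(g * lorentzian(E, e, gamma=0.1) for g, e in zip(G_mu, E_ik))
# Riemann-sum check: the integrated PDOS should approach sum(G_mu).
print(f"integrated PDOS ~ {pdos.sum() * (E[1] - E[0]):.2f}")
```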
Studies on Fe/Fe Redox Flow Batteries with Recombination Cell

Different Fe/Fe redox flow batteries were constructed and investigated. The aim of the work was to assess the feasibility of Fe/Fe redox flow batteries as potentially inexpensive candidates for stationary energy storage for renewable energy. A recombination cell was developed and integrated into the battery. The recombination cell should prevent irreversible loss of capacity caused by hydrogen generation. Furthermore, electrolyte regeneration experiments with external hydrogen were conducted to reverse irreversible losses. With the battery and recombination cell, up to 100 two-hour charge and discharge cycles were carried out, and different materials were investigated. Different substrate materials for iron deposition were compared, and different microporous and ion exchange membranes were used. A Kynol fabric achieved the best performance, and all membranes investigated showed potential for application. An optimized battery achieved up to 70% energy efficiency at 12.5 mA cm−2 and a maximum power density of 47 mW cm−2 at 75 mA cm−2. © 2020 The Author(s). Published on behalf of The Electrochemical Society by IOP Publishing Limited. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 License (CC BY).

Due to fluctuating energy production and the significant increase in renewable energy producers such as wind power and photovoltaics, the need for stationary energy storage systems is increasing.1,2 Electrochemical energy storage systems have the advantage of decentralized use and modular scaling with potentially low lifetime costs. The most common technologies currently in use are lead/acid (LAB), lithium-ion (LIB), sodium/sulphur, and vanadium redox flow batteries (VRFB).3 All these technologies have been or are currently being significantly extended into the MW/MWh range worldwide, although lead/acid batteries are being replaced. A disadvantage of lithium-ion batteries is the partly questionable raw material procurement4 and recycling,5 as well as the achievable service life of the batteries. Lithium-ion batteries in very large-scale devices also present a safety concern. Compared to conventional batteries, redox flow batteries (RFB) have the advantage that power and energy can be scaled separately and can therefore be better adapted to the respective requirements.6 Vanadium redox flow batteries (VRFB) offer the possibility of potentially easy recycling by reusing used vanadium solution, but the price of vanadium has fluctuated significantly in the past.7
High costs for vanadium as a raw material for the energy storage medium have a direct impact on investment costs and can prevent successful commercialisation. In addition, the maximum temperature limitation of VRFBs for use in very warm and sunny regions is an obstacle, since the effort for heat management directly affects the investment costs and additionally influences the efficiency and thus the specific storage costs. For these and other reasons, almost countless alternative redox flow batteries based on inorganic and organic redox pairs and electrolytes have been investigated in recent years.8,9,10 Especially aqueous organic redox pairs have attracted a lot of attention in recent years. The motivation for this was mostly a reduction of investment costs by using active materials that are as inexpensive as possible. The iron/iron redox flow battery is also a representative with extremely inexpensive active materials and was first investigated by Hruska and Savinell in 1981.11 The Fe/Fe-RFB uses iron as the sole active material in three different oxidation states: 0, +2, and +3. During the charging process, metallic iron is deposited from an iron(II) solution at the negative electrode, and iron(II) is oxidized to soluble iron(III) at the positive electrode (see also Fig. 1):

Fe2+ + 2 e− → Fe0   (negative electrode, E0 = −0.44 V)
2 Fe2+ → 2 Fe3+ + 2 e−   (positive electrode, E0 = +0.77 V)

A major challenge for this type of battery is the negative deposition potential of iron and the relatively fast kinetics of hydrogen generation at iron electrodes. The kinetics of hydrogen formation is pH-dependent and slows down with increasing pH values, which favors iron deposition. The pH value of the negative electrolyte solution increases due to the hydrogen formation, especially at the electrode. Above a roughly neutral pH value, sparingly soluble iron(II) hydroxide is formed and is removed from the battery, causing a loss of capacity because of its electrochemical inactivity. Furthermore, precipitates form on the electrodes and in the fluidic system, which can lead to an increase in internal resistance or pressure loss and thus to battery failure. In connection with this, past work has dealt with optimized electrolyte compositions.12,13 This includes conducting salts, buffers,14,15 and metal ions to increase the overvoltage for hydrogen generation, but also different organic ligands.16,17,18 The stabilization of the pH value at the electrode is an important aspect for achieving high-performance batteries with high efficiencies. However, there will always be a non-negligible amount of hydrogen, which is associated with a continuous loss of capacity and must be prevented or compensated for. One possibility is the electrochemical reversal of the side reaction in a separate electrochemical cell.19,20,21 The hydrogen produced is oxidized on a catalyst layer, and iron(III) ions from the positive electrolyte are reduced to iron(II) ions:

H2 → 2 H+ + 2 e−   (E0 = 0 V)
2 Fe3+ + 2 e− → 2 Fe2+   (E0 = +0.77 V)

The difference between the standard potentials is 0.77 V. The cell reaction almost completely reverses the side reaction, with the exception that in the overall balance protons are transported from the negative electrolyte to the positive electrolyte, so the pH of the positive electrolyte decreases while that of the negative electrolyte increases. In addition, there is a loss of energy, but theoretically this can be partially used.
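A back-of-envelope sketch, assuming an approximate molar gas volume at room temperature, makes the link between evolved hydrogen and lost capacity concrete:

```python
# The side reaction 2H+ + 2e- -> H2 diverts 2F of charge per mole of H2.
F = 96485.0                  # Faraday constant (C/mol)
Vm = 24.0                    # approx. molar gas volume at room temp (L/mol)
Ah_per_L_H2 = 2.0 * F / Vm / 3600.0
print(f"~{Ah_per_L_H2:.2f} Ah of capacity lost per liter of evolved H2")
# The recombination cell recovers this charge by oxidizing the H2 against
# the Fe(3+)/Fe(2+) couple at the ~0.77 V potential difference noted above.
```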
Within the scope of this work, we were mainly interested in the basic properties of Fe/Fe-RFBs and whether they could have the potential for commercial application. For this purpose, we were particularly interested in the problem of the lifetime of the energy storage medium in connection with recombination and regeneration possibilities. Furthermore, different substrates and membranes were investigated for their applicability in Fe/Fe-RFBs.

Experimental

For battery tests, a test stand was set up as shown schematically in Fig. 2 and as a photograph in Fig. 3. The test stand consisted of an Fe/Fe-RFB cell, a recombination cell, two reservoirs for the energy storage medium, two pumps, a potentiostat (Reference 3000, Gamry, USA), and a benchtop multimeter. The supply of the media to the different cells was arranged in such a way that gaseous hydrogen from the head space of the negative storage tank could reach the negative half-cell of the recombination cell, and the output of the positive energy storage medium of the Fe/Fe-RFB cell (Fe3+/Fe2+) was connected to the input of the positive half-cell of the recombination cell. The electrolyte was returned to the storage tank after passing the recombination cell. The outlet connection of the hydrogen side of the recombination cell was closed by a 100 mm high water column. The battery cell was electrically connected to the potentiostat. The recombination cell was directly connected to the current measurement of a benchtop multimeter.

The battery cell consisted of two half-cells with an active area of 40 cm² (see Fig. 4). The positive half-cell consisted of a flow frame (f) in which a glassy carbon plate (HTW High Temperature Materials, Germany) with a thickness of 3 mm was embedded. A graphite felt (GFA 5, SGL-Carbon, Germany) was placed in the flow frame to increase the electrochemical surface area. The graphite felt was thermally treated for 1 h at a temperature of 400 °C for hydrophilization. The negative half-cell also consisted of a flow frame (f) made of polyvinyl chloride (PVC) with an embedded glassy carbon plate. The gap between frame and glassy carbon plate was sealed with conventional silicone sealant. The flow frame had a cavity with a thickness of 3.5 mm. Various carbon fabrics or papers were placed in this cavity as substrate (j) for iron deposition. To create a cavity for iron deposition, a 3D-printed spacer was also placed in the cavity. The maximum possible distance to the membrane was thus approx. 3.3–3.4 mm, depending on the substrate used. The half-cells were separated by a membrane (h). Four different membranes were tested during the experiments: a cation exchange membrane (NAFION 115), an anion exchange membrane (Fumasep FAP-450, Fumatech GmbH, Germany), a microporous separator (BH-Consulting, Australia), and another microporous separator (SF-601, Asahi-Kasei, Japan). If not mentioned otherwise, the anion exchange membrane was used.

Thin copper sheets were used as current collectors. To reduce contact resistance, a carbon paper (Toray TP 30, Quintech GmbH, Germany) was placed between copper and graphite plate. Flat gaskets (c) were placed at various points to ensure tightness. The two half-cells were finally held together by two metal plates, one of which contained holes for the media inlets and outlets. An insulation plate (b) was used for electrical insulation.
As shown schematically in Fig. 5, a recombination cell was built and integrated into the battery as described above. The recombination cell consisted of two half-cells separated by a one-sided catalyst-coated membrane (CCM) (NAFION 115, 1 mg cm⁻² Pt/C, Baltic Fuel Cells, Germany). The side with the catalyst layer was the negative half-cell. The positive half-cell consisted of a glassy carbon foam as electrode (ERG Aerospace, USA), which was placed in a flow frame (f). A glassy carbon plate (HTW High Temperature Materials, Germany) was embedded in the flow frame. A thin copper sheet (d) was used as a current collector, on which Toray paper (e) was laid to reduce contact resistance. The negative half-cell again consisted of a glassy carbon electrode (g), which was placed between two Toray papers (i) in a flow frame (f) with an embedded glassy carbon plate. Again, a thin copper plate served as current collector. The cell was sealed by different flat gaskets (c). The components were held together by two steel plates (a), one of which had the media feed-throughs.

A solution of 1 M FeCl2, 2 M NH4Cl, and 0.2 M HCl with a pH < 0 served as energy storage medium. The same solution was initially used for both half-cells. The volume of the negative electrolyte was either 30 or 60 ml. The volume of the positive electrolyte was either 60 ml or 120 ml. The theoretical maximum capacity was 1.62 Ah (30 ml/60 ml) or 3.24 Ah with double the electrolyte volume. The positive electrolyte was continuously purged with nitrogen.

Charging and discharging tests were carried out to investigate the properties of different cell materials, current densities, and charging and discharging parameters. Between the tests, a new cell was set up, the fluidic system was cleaned, and a new portion of electrolyte was used. For a new cell, a new felt was always used for the positive electrode, a new substrate for the negative electrode, and a new membrane. The charge and discharge experiments were carried out galvanostatically with a constant current. To check the ohmic resistance of the battery cell, impedance measurements were taken before each measurement on a newly constructed cell, and the ohmic resistance was determined by reading the intercept on the x-axis at high frequencies in the Nyquist plot.

Electrolyte regeneration was performed using the same setup shown in Fig. 2. The aim of the electrolyte regeneration study was to verify whether the capacity and composition of the electrolyte could be restored after losses caused by hydrogen escaping to the atmosphere. Regeneration was performed with externally supplied hydrogen after the battery cell was completely discharged; the positive electrolyte was pumped into the negative electrolyte circuit and then back into the positive electrolyte circuit. The electric current of the recombination cell was measured by a benchtop multimeter, and regeneration was continued until the electric current density was less than 0.25 mA cm⁻². The content of Fe2+ and Fe3+ was determined by potentiometric titration. Iron content measurements were made on a freshly prepared Fe(II) solution, on the solution before regeneration, and on the solution after regeneration. All experiments were carried out at room temperature. All experiments were carried out with the recombination cell except the comparison in Fig. 6 at the beginning of the results.
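The quoted theoretical capacities follow from Faraday's law; the following is a minimal sketch, assuming the capacity is limited by the stated FeCl2 inventory:

```python
F = 96485.0                      # Faraday constant, C/mol
c = 1.0                          # mol/L Fe2+ in the fresh electrolyte
v_neg, v_pos = 0.030, 0.060      # L, the smaller electrolyte volumes

q_neg = c * v_neg * 2 * F / 3600.0   # Fe2+ + 2e- -> Fe (negative electrode)
q_pos = c * v_pos * 1 * F / 3600.0   # Fe2+ -> Fe3+ + e- (positive electrode)
print(f"negative: {q_neg:.2f} Ah, positive: {q_pos:.2f} Ah")
# Both sides give ~1.61 Ah, consistent with the ~1.62 Ah quoted above;
# doubling both volumes doubles the capacity to ~3.24 Ah.
```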
Results and Discussion

Figure 6a shows an example of the voltages of an Fe/Fe-RFB without recombination cell during charge and discharge. The standard potential difference is 1.22 V. The overvoltages relative to this value were about 150 mV during charging (at SOC 50) and 350 mV during discharging. It is known that the positive redox reactions of Fe2+/Fe3+ are electrochemically reversible and therefore fast.22,23 Thus, iron dissolution was much slower than iron deposition and a limiting factor for battery efficiency. The area-specific resistance (ASR) of the battery cell was 5.4 Ω cm² and thus comparatively high. The ohmic resistances of all further experiments were in the range of 90–140 mΩ and thus varied quite strongly. In this cell, the loss due to the ohmic resistance alone was 136 mV during charging and discharging and was thus also a significant factor in the efficiency losses.

The cell voltage polarized to a value of approx. 1.6 V at the beginning of the charging process, decreased slightly, and then increased continuously until the maximum charging time of 1 h or the final charging voltage of 1.9 V was reached. This behaviour is typical for metal deposition on carbon electrodes, where the kinetics of iron deposition on carbon is slower than on the deposited metal itself. Only the first two charging processes were limited by time. The following four charging processes were terminated by concentration depletion and reaching the final charging voltage. Especially during charging, a significant gas development was visible in the negative half-cell. During discharging, the cell voltage dropped to a value of approx. 0.88 V at the beginning of the first discharging process and then decreased over the further course of the discharging process until the final discharge voltage of 0 V was reached due to concentration depletion.

The first discharge process achieved a discharge capacity of 0.63 Ah and thus only 63% of the theoretical value of 1 Ah. Among the subsequent discharging processes, the third had the highest absolute value with 0.92 Ah. During all further discharges the capacity was continuously reduced, but the capacity utilization increased significantly to a value of 92%–93% (see Table I). This capacity behavior can be explained on the one hand by the more favorable deposition of iron due to an increase in the pH value of the negative electrolyte as a result of hydrogen loss, and on the other hand by the loss of capacity caused by the formation of hydrogen as a side reaction, which led to an irreversible oxidation of Fe2+ at the positive electrode. In the first cycle the pH value is too low to be able to deposit iron with a high capacity utilization. A large proportion of the charge (37%, neglecting other side reactions) was lost to hydrogen generation and the irreversible oxidation of Fe2+. As the process progressed, the battery reached an optimum pH value for iron deposition, but had already lost a lot of capacity before and continued to lose capacity due to hydrogen formation. The difference in the discharge capacities of the 4th and 5th cycle corresponded to 70 mAh and, if attributed to hydrogen generation alone, to only 26 ml of hydrogen loss.
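Two of the numbers in this section can be checked with elementary arithmetic; the following is a minimal sketch, assuming the STP molar gas volume as the convention for hydrogen volumes:

```python
F, VM = 96485.0, 22.414   # Faraday constant (C/mol) and molar gas volume (L/mol, STP)

# (1) Ohmic loss = ASR x current density: 5.4 ohm*cm^2 at 25 mA/cm^2.
print(f"IR drop: {5.4 * 0.025 * 1000:.0f} mV")   # ~135 mV vs. the quoted 136 mV

# (2) Charge lost to H2 evolution converted to a gas volume (2H+ + 2e- -> H2).
def h2_ml(q_ah: float) -> float:
    return q_ah * 3600.0 / (2.0 * F) * VM * 1000.0

print(f"40 mAh -> {h2_ml(0.040):.1f} ml H2")     # ~17 ml, as used in the next paragraph
```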
Due to the strong irreversible loss of capacity and the danger of the formation of poorly soluble iron hydroxides, investigations were carried out with a recombination cell. Figure 6b shows the capacities of several charge and discharge experiments with and without recombination cell. Without the use of a recombination cell, the discharge capacity dropped to a value of 0.1 Ah within 40 cycles. The batteries with recombination cell achieved a significantly lower capacity decline by recombination of the hydrogen produced with the Fe3+ ions of the positive half-cell. The batteries without recombination cell had a discharge capacity of only about 0.1 Ah (10%) after 50 cycles. The batteries with recombination cells had a discharge capacity of 0.56 Ah (56%) after 50 cycles, or 0.41 Ah (41%) after 100 cycles. The decrease in capacity was still significant and dramatic for permanent use as an energy storage device, so further investigations into its cause were conducted. The difference in discharge capacity between the 45th and 50th cycle was only 40 mAh, or 17 ml of hydrogen, corresponding to only 3.4 ml of hydrogen loss per cycle. It was plausible that an irreversible loss of hydrogen was caused by the laboratory setup. However, other side reactions, such as loss of deposited iron due to poor adhesion on the electrode, could not be excluded. To further investigate this behaviour, regeneration experiments with external hydrogen were carried out.

Figure 7 shows the capacities of an Fe/Fe-RFB and a recombination cell during 46 charge and discharge cycles. The charging capacity was 3.6 Ah at the beginning and decreased significantly to about 2.5 Ah in the 21st cycle. Afterwards the electrolyte was regenerated with external hydrogen. Due to side reactions, the charge capacity was always significantly higher than the discharge capacity. Side reactions include diffusion of ions through the membrane and hydrogen formation. The differences between the charging and discharging capacities (see Qin − Qout) initially dropped significantly until the 5th cycle and then decreased more or less constantly to about 0.25 Ah. As mentioned above, the pH value of the negative electrolyte was too low at the beginning and caused a high proportion of hydrogen production. By shifting the pH to more positive values, the proportion of hydrogen loss was reduced and with it the difference between charging and discharging capacities. Theoretically, it was expected that the difference would not decrease linearly but would level off at a plateau with a constant value. The reason for this assumption is the increase in the diffusion of protons from the positive half-cell space to the negative one through the membrane due to proton pumping with the help of the recombination cell. Consumption of protons at the negative electrode and back diffusion should ideally reach a constant value and bring the cell into dynamic equilibrium. The approach to such an equilibrium was not observed during the first 21 cycles, but after regeneration such a tendency was observed.
However, it should also be noted that there were possibilities for a continuous non-reversible hydrogen loss out of the system. With the exception of the first value, the conversion capacities of the recombination cell during regeneration (QRC-cell) decreased continuously to approx. 243 mAh in the 21st cycle. The first value was probably lower than the others because the system volume was first filled with hydrogen that could not be recombined. Roughly estimated, the system volume should still be 50–100 ml and thus contribute a large irreversible loss of capacity. With the exception of the first cycle (58%), the recombination efficiencies (QRC-cell/(Qin − Qout)) were over 90%. On the one hand, this value was pleasingly high; on the other hand, 10% of the losses were not regenerated by the recombination cell. In the 21st cycle, these 10% losses amounted to only 24 mAh, or 10 ml of hydrogen, assuming hydrogen loss as the only irreversible reaction. Considering the laboratory setup, it seemed plausible that 10 ml of hydrogen could be irreversibly lost during a cycle of several hours. The 24 mAh loss also matched well with the 18 mAh difference between the 21st and 20th discharge capacities.

After the 21st discharge cycle, the battery was completely discharged at a voltage of 0 V and the positive electrolyte was pumped into the negative electrolyte circuit; all the mixed electrolyte was then pumped into the positive half-cell, and external hydrogen was passed through the recombination cell until there was hardly any current left. Figure 8 shows the resulting change in the mixed electrolyte. The color changed from a cloudy orange solution to a clear green solution with orange solids, which also completely dissolved over time. During regeneration, the pH of the solution was lowered and Fe3+ ions were reduced to Fe2+. Due to the redox potential ratios, the regeneration should stop on its own when all Fe3+ has been reduced, which means that, unlike other regeneration processes for vanadium RFBs,7 the progress of the regeneration can be followed with a simple current measurement.

After regeneration, the pH value, which had been lowered again, resulted in a similar behaviour of the capacity curves as with a freshly used Fe2+ solution. Titration of the regenerated electrolyte solution showed a 100% Fe2+ content in the solution. This was 4% more than in the freshly prepared solution. However, the first discharge capacity after regeneration was 2.53 Ah instead of the 2.74 Ah of the first cycle. The difference could be explained by the fact that before regeneration, the positive electrolyte was first pumped into the negative half-cell to dissolve any iron deposits and then pumped back again. A portion of the Fe3+ ions could thus remain in the negative half-cell and in the fluidic system and was not available for regeneration. Based on the results of the quantitative analyses, it can be assumed that the conversion was almost complete and that the loss can be kept constant over the number of regenerations using this method.
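The following is a minimal sketch of the capacity bookkeeping used above; the charging capacity is an illustrative stand-in consistent with the 21st-cycle figures, not an exact value from the data:

```python
def recombination_efficiency(q_rc_mah: float, q_in_mah: float, q_out_mah: float) -> float:
    """Fraction of the charge/discharge gap recovered: Q_RC-cell / (Q_in - Q_out)."""
    return q_rc_mah / (q_in_mah - q_out_mah)

# 21st cycle: ~243 mAh recombined against a gap of roughly 243 + 24 = 267 mAh.
q_in = 2500.0                       # illustrative charging capacity, mAh
eta = recombination_efficiency(243.0, q_in, q_in - 267.0)
print(f"eta_RC ~ {eta:.0%}")        # ~91%, consistent with the >90% reported
```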
Further cell tests with recombination cells were carried out to investigate the battery properties of different substrate materials. Figure 9 shows the discharge capacities and the achieved energy efficiencies over 50 cycles. In terms of discharge capacity, all materials except Toray paper behaved very similarly. The batteries with the Toray paper had a significantly higher capacity drop than those with the other materials, with visible accumulation of solid Fe in the negative tank. A further reason for the difference was probably the optimized test stand design and operation used in the later tests, in which less hydrogen was lost. Typical for the other three materials was an initial increase of the discharge capacity to a value of about 0.95 Ah (95% capacity utilization), followed by a further slight drop in capacity, probably due to hydrogen loss. Since the charging time was limited to 1 h and the maximum possible discharge capacity was 1.62 Ah, there was a surplus of capacity at the beginning, so that the capacity curves of the three materials were more linear at the start. With Toray paper, the loss was so high that this behaviour was not clearly visible (see also Fig. 10a). In the energy efficiency curves shown in Fig. 9b, all of the materials had low energy efficiencies at the beginning because the low pH value caused hydrogen generation and thus low Coulomb efficiencies (not shown here), while the voltage efficiencies (not shown here) remained more or less constant. Over the course of the 50 cycles, a dynamic equilibrium was reached in which the back diffusion of protons from the positive half-cell space into the negative half-cell space was approximately in balance with the hydrogen produced at the negative electrode. With a maximum of 50%, Toray paper achieved the lowest maximum energy efficiency. A Kynol fabric (type ACC-507-20) achieved significantly higher values with a maximum of 65%. In between was another carbon fabric (CC-060, Quintech GmbH, Germany), tested with different spacers in the negative cell space (AH.1 & AH.2) and with a completely closed hydrogen exhaust (dead end), with approx. 55% energy efficiency.
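The energy-efficiency values discussed here decompose into Coulomb and voltage efficiencies; the following is a minimal sketch with illustrative voltages (the actual charge and discharge voltages vary over a cycle):

```python
def energy_efficiency(coulomb_eff: float, v_dis_mean: float, v_ch_mean: float) -> float:
    """Energy efficiency = Coulomb efficiency x voltage efficiency."""
    return coulomb_eff * (v_dis_mean / v_ch_mean)

# Illustrative assumptions: 90% Coulomb efficiency, ~1.0 V mean discharge
# voltage against ~1.4 V mean charge voltage.
print(f"{energy_efficiency(0.90, 1.00, 1.40):.0%}")   # ~64%, in the range of the best cells
```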
In order to investigate the influence of the state of charge on the capacities, two further tests were carried out. Figure 10 shows the discharge capacities and the energy efficiencies of batteries with different end-of-charge criteria. In one experiment the charging time was limited to 1 h (SOC 60), and in the other the time and voltage were set so high that the battery reached the final discharge voltage by polarisation due to concentration depletion (SOC 100). As can be seen in Fig. 10a, the batteries that were charged for a maximum of 1 h had an initially increasing capacity curve, whereas the batteries that were fully charged had a continuously decreasing capacity curve. The difference is mostly a result of the two different charging methods. It can be explained by the fact that with the time limitation, initially a large part of the charge carriers is converted into hydrogen. With increasing pH, Fe deposition becomes more favorable and the capacity reaches a constant value over the cycle number. Hydrogen or Fe losses are hidden by the low capacity utilization but will probably follow the trend of the 100% charge after 50 cycles. When charging to a potential at constant current, the loss of charge carriers is manifested by an increased charging time and thus an increased charging capacity. The discharge capacity is higher and has a decreasing trend because of hydrogen or iron losses. The courses of the energy efficiencies of the two experiments shown in Fig. 10b were approximately the same within the scope of the measurement errors and amounted to up to 65%. The behaviour of the different discharge capacities is interesting insofar as the charging strategies for practical operation differ significantly and should be taken into account in the battery management system.

Figure 11 shows the behavior of Fe/Fe-RFBs at different current densities. As can be seen from the results in Fig. 11a, a battery at a current density of 12.5 mA cm⁻² (500 mA/40 cm²) had an average discharge power density of about 12.5 mW cm⁻². The energy efficiency was 70%. As the current density increased, the power density increased and the energy efficiency decreased due to cell resistance losses. At a current density of 75 mA cm⁻² (3 A/40 cm²) a battery achieved a power density of approx. 47 mW cm⁻² with an energy efficiency of 33%. The measurements were limited by the performance of the potentiostat, so it can be assumed that the batteries could also convert higher currents at room temperature. However, the energy efficiencies were extremely low, and future work will investigate the behaviour at higher temperatures to increase efficiencies and power densities. The discharge capacities were between 0.9 and 1.2 Ah. As expected, the capacities tended to decrease due to the increasing IR drop.
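The power densities in Fig. 11 directly imply the mean discharge voltages; a minimal sketch of that arithmetic:

```python
# (current density in mA/cm^2, power density in mW/cm^2) from the text.
for j, p in [(12.5, 12.5), (75.0, 47.0)]:
    print(f"{j:5.1f} mA/cm^2 -> mean discharge voltage ~ {p / j:.2f} V")
# ~1.00 V and ~0.63 V: the drop at high current reflects the IR and kinetic
# losses discussed above.
```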
Figure 12 shows the coulomb and energy efficiencies of Fe/Fe redox flow batteries with different membranes. NAFION 115 is a cation exchange membrane, Fumatech FAP-450 is an anion exchange membrane, ASAHI SF-601 is a microporous separator, and MPM is also a microporous separator with very low costs. The coulomb efficiencies were relatively high for all batteries, with MPM achieving the lowest value at around 79%. ASAHI SF-601 achieved a slightly higher coulomb efficiency of 85%, and the ion exchange membranes finally achieved even higher values of over 90%, as expected. The reason for this behaviour is the different selectivity of the different types of membranes. Microporous membranes have a low selectivity but often a low resistivity. Ion exchange membranes have a high selectivity and therefore a higher coulomb efficiency. Iron(III) ions migrate from the positive electrolyte to the negative one and react with the deposited iron to form iron(II). This reaction is chemically reversible but results in a loss of coulomb efficiency. A comparison of the coulomb efficiency of FAP-450 at different current densities also showed that the coulomb efficiency increases with the current density. On the one hand, this can mean that at higher overpotentials the iron deposition is more efficient or, on the other hand, that the diffusion of ions into the other half-cell space is reduced by shorter cycle times.

The conditions were similar for the energy efficiencies, with only low efficiencies of 33%–43% being achieved at 50 mA cm⁻². The microporous separator MPM achieved the lowest value, followed by ASAHI SF-601 and finally FAP-450 with the highest efficiency. At lower current densities the energy efficiencies increased to over 60%. Overall, the difference in the achieved efficiency values was smaller than expected. The biggest differences in the energy efficiencies were ultimately caused by the internal resistances, and here especially by the reactions at the negative electrode. Furthermore, it should be noted that there was no pressure control of the two fluidic half-circuits. With such a control, higher coulomb efficiencies can probably be achieved with microporous separators. Finally, the results of techno-economic simulations must decide which of the membrane materials is more suitable.

Conclusions

Within the scope of this work, and given the small number of existing publications, the feasibility of an Fe/Fe redox flow battery was investigated. Due to the acidic electrolyte used here, there was relatively strong hydrogen evolution at the beginning of the charging and discharging experiments. This resulted in a relatively high loss of capacity, which can be reduced by using a recombination cell. When using a recombination cell, a complex dynamic equilibrium is created by protons being transported into the positive half-cell, which must be investigated further to reduce the capacity loss and increase the efficiency of the battery. Furthermore, in contrast to many other literature sources, cycle times of up to two hours were used here in order to make statements as close to reality as possible. A battery with a recombination cell could be charged and discharged 100 times and lost about half its capacity. Regeneration experiments with external hydrogen showed that the loss was caused by hydrogen loss from the laboratory cell structure and that the capacity could be almost completely restored by regeneration. In further experiments, different substrate materials for iron deposition and different membranes were compared. Furthermore, the effects of different current densities were investigated. The substrate material had a significant influence on the efficiency values of the batteries. A Kynol fabric achieved the highest values with 65% energy efficiency. For the membranes, microporous separators, a cation exchange membrane (NAFION), and an anion exchange membrane (Fumatech FAP-450) were compared. Anion and cation exchangers achieved approximately the same energy efficiencies. As expected, the microporous separators achieved lower values. In principle, the microporous separators are interesting because of their low costs. However, at 50 mA cm⁻² they achieved just 39% energy efficiency. Here, it was suspected that a large proportion of the losses was caused by pressure differences between the two half-cells and that this behaviour can be improved. At room temperature, up to 47 mW cm⁻² of power density could be achieved with an energy efficiency of 33%. This was mainly due to the slow kinetics of iron deposition and dissolution, together with the low Coulomb efficiency associated with recombination. Operation at elevated temperature and optimization of the recombination could achieve significantly higher performance values, making this system very interesting for commercial use. However, a high development effort is still necessary in all aspects to achieve this goal.
Figure 1. General schematics of an iron/iron redox flow battery; reactions are shown in the direction of the charging process.

Figure 2. Schematics of an iron/iron redox flow battery with integrated recombination cell; reactions are shown in the direction of the charging process.

Figure 3. Picture of an iron/iron redox flow battery laboratory setup with recombination cell.

Figure 7. Capacities of an Fe/Fe redox flow battery (Q) and of a recombination cell (QRC) (60/120 ml electrolyte) before and after electrolyte regeneration with external hydrogen.

Figure 8. Color change of used electrolyte during a regeneration in a recombination cell with external hydrogen.

Figure 12. (a) Coulomb and (b) energy efficiencies of Fe/Fe redox flow batteries with different membranes.

Table I. Capacities of a charge and discharge experiment with an Fe/Fe redox flow battery without recombination cell (1 M FeCl2, 2 M NH4Cl, 0.2 M HCl, i = 25 mA cm⁻², t_charge,max = 1 h).
2020-12-03T09:07:23.701Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "04ab126935777595890eb910417754f3a4e015f6", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1149/1945-7111/abcf50", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "0ce839d745bb35579d4ce8a93a2d9c12c41c0595", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Chemistry" ] }
246702464
pes2o/s2orc
v3-fos-license
Association of Treatment with Remdesivir and 30-day Hospital Readmissions in Patients Hospitalized with COVID-19

Background: Since the beginning of the COVID-19 pandemic, there has been widespread use of remdesivir in adults and children. Little is known about remdesivir's role in reducing 30-day readmissions after hospitalization with COVID-19. This study aimed to determine whether treatment with remdesivir was associated with a reduced risk of 30-day readmission after index hospitalization with COVID-19.

Methods: The study was a multi-center cohort study in Rhode Island, USA. Patients included all adults who were discharged after hospital treatment for COVID-19 between April 1st and December 31st, 2020. The main study outcomes were length of hospital stay, 30-day readmission, and post-discharge 30-day mortality.

Results: A total of 2,062 patients (2,279 hospitalizations) were included in the analytic sample. Patients were less likely to be readmitted within 30 days if they received remdesivir relative to not receiving remdesivir; associations were strongest for those with mild disease (RR: 0.31; 95% CI: 0.13, 0.75). Remdesivir treatment was associated with a reduction in all-cause mortality (HR: 0.65; 95% CI: 0.49, 0.85) and an increase in length of stay (estimated average increase of 3.27 days; 95% CI: 2.11, 4.44).

Limitation: Unmeasured factors such as time-to-treatment and severity of disease prior to initiation of remdesivir.

Conclusions: Remdesivir may be an effective strategy for reducing progression to severe COVID-19 disease and limiting morbidity associated with readmission to hospital. Larger prospective studies are justified to study the role of remdesivir in mild or early COVID-19 with high risk of disease progression and readmission to hospital within 30 days.

INTRODUCTION

The pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) had recorded more than 181 million global cases and 3.9 million deaths as of June 2021.1 In the United States, 2.2 million hospital admissions for coronavirus disease 2019 (COVID-19), the syndrome caused by SARS-CoV-2, occurred from August 1st, 2020, to June 26, 2021.1 In-hospital mortality ranges from 0.3% to 13.3% by age group and increases with age.1 Hospital readmission rates among survivors are not readily available but have been reported at 4%–20% within 60 days of discharge.2-4

Currently remdesivir, a nucleotide analogue prodrug, is approved by the US Food and Drug Administration (FDA) for the treatment of hospitalized patients with COVID-19. Guidelines from the US National Institutes of Health (NIH) recommend use of remdesivir for patients hospitalized with COVID-19 and requiring oxygen (Grade BIIa).5 There are insufficient data to recommend use of remdesivir for the treatment of hospitalized patients not requiring supplemental oxygen. The guidelines are currently limited by the low number of randomized trials and the perceived low mortality and morbidity rates in this subgroup of hospitalized patients.6,7 Additionally, dexamethasone, a corticosteroid, is recommended for the treatment of patients with moderate and severe COVID-19 disease requiring supplemental oxygen, including high-flow devices, non-invasive ventilation, and mechanical ventilation.5 Antibiotic, anticoagulant, and diuretic medications are other therapeutic options commonly employed in the treatment of COVID-19 and related complications.
Little is known about the relationships between pharmacological treatments of COVID-19 and post-discharge outcomes. Reasons for readmission to hospital after COVID-19 may include underlying co-morbid disorders, social determinants of health such as access to housing and access to health care, as well as susceptibility to progression of COVID-19.8,9 We hypothesized that administration of remdesivir to hospitalized patients with mild COVID-19 disease may result in decreased viral replication and ultimately less morbidity as represented by hospital readmission. The aim of the study was to examine the association between readmission and remdesivir using inverse-probability of treatment weights in a cohort of patients admitted to hospital during the COVID-19 pandemic at a large hospital system in Rhode Island from April to December 2020.

METHODS

Study setting and population. From April 1, 2020, to December 31, 2020, there were 2,557 COVID-19-related hospital admissions within the Lifespan network (Rhode Island Hospital, The Miriam Hospital, Newport Hospital) in 2,230 unique patients aged 18 years and older. All patients identified tested positive for the presence of the SARS-CoV-2 coronavirus via nasopharyngeal swab or serum serology testing. Serum serology was utilized for diagnosis when patients were admitted with a clinical presentation consistent with COVID-19 but had negative polymerase chain reaction testing for SARS-CoV-2 by nasopharyngeal swab. The combined Lifespan Institutional Review Board approved the study protocol. Details regarding the clinical course, rehospitalizations, and/or deaths were abstracted from the electronic health records (EHR). Our analytic sample was restricted to the 2,279 hospitalizations (N = 2,062 patients) with complete data on sociodemographic and clinical covariates of interest. In this historical cohort study, for each hospitalization a given patient was followed from admission to 30 days post discharge.

Outcome measures. Outcome measures of interest included length of hospital stay, 30-day readmission, and post-discharge 30-day mortality. Patients who died prior to discharge were classified as dying at 0 days post discharge.

Statistical analysis. All analyses were conducted using Stata version 16.1 (StataCorp, College Station, Texas). Characteristics and outcome events of patients treated with and without remdesivir are reported as column percentages or mean and standard deviation, as appropriate. Subsequent marginal structural models regressing outcomes of interest on remdesivir use were weighted using inverse-probability of treatment weights (IPTW) to address confounding by indication (i.e., non-randomized treatment allocation) and inverse-probability of censoring weights (IPCW) to address selective survival.

Inverse-probability of treatment weights (IPTW). The propensity score (PS) for treatment with remdesivir for each patient was estimated using logistic regression, which modeled the probability of being treated with remdesivir, compared to not being treated with remdesivir, using patient gender, race, ethnicity, language, age, insurance type, smoking status, medical history (yes/no for the presence of particular medical diagnoses in the EHR), as well as laboratory values (ALT, AST, eGFR) and vital signs (hypotension, hypoxia, fever, tachycardia, respiration rate above 30) measured within 24 hours of index admission. Variables for the IPTW models were chosen by identifying potential confounders, as well as causes of readmission, extended length of stay, and death that are not on the causal pathway, using directed acyclic graphs (DAGs). Weights were stabilized using the marginal probability of being treated with remdesivir (marginal probability of being treated with remdesivir divided by the probability of being treated with remdesivir given the covariates [i.e., the PS]). The sample created using IPTW assumes that the distribution of baseline characteristics is independent of treatment assignment.
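As a rough illustration of the weighting scheme just described, the following is a minimal sketch (not the authors' Stata code) using scikit-learn; `X` and `treated` are placeholder names for the covariate matrix and the remdesivir indicator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_iptw(X: np.ndarray, treated: np.ndarray) -> np.ndarray:
    """Stabilized inverse-probability of treatment weights from a logistic PS model."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    p = treated.mean()                                    # marginal P(treatment)
    w = np.where(treated == 1, p / ps, (1 - p) / (1 - ps))
    lo, hi = np.percentile(w, [5, 95])                    # truncate extreme weights
    return np.clip(w, lo, hi)
```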
Inverse-probability of censoring weights (IPCW). The propensity score for dying before or within 30 days of discharge for each patient was estimated using logistic regression, which modeled the probability of dying before or within 30 days of discharge, compared to still being alive 30 days after discharge. The included variables were again selected with the use of DAGs. This model included the same covariates used in the development of the IPTW as well as treatment type (remdesivir, antibiotics, steroids, anticoagulants, diuretics), maximum/most extreme laboratory values during admission (AST, WBC, lymphocytes, albumin, ALT, eGFR), the maximum amount of respiratory support, and vital sign abnormalities within 24 hours of discharge (hypotension, hypoxia, fever, tachycardia, respiration rate above 30).

Marginal structural models. To estimate the treatment effect of receiving remdesivir relative to not receiving remdesivir on length of stay and 30-day readmission, generalized linear models were weighted by the product of IPTW and IPCW for each hospitalization, truncated at the 5th and 95th percentiles. Likewise, marginal structural Cox models were used to estimate the treatment effect of receiving remdesivir relative to not receiving remdesivir on 30-day survival, weighted by the stabilized IPTW. All models accounted for clustering at the patient level.

Stratified analyses. A priori, we hypothesized the potential for the greatest benefit to be conferred for patients treated with remdesivir who had mild COVID-19-related disease. Patients were categorized as "mild" during their hospitalization if they did not require supplemental oxygen. Alternatively, those who required 0.5–6 Lpm maximum oxygen support were categorized as "moderate" and those who required 6.5 Lpm or more, including high flow, non-invasive ventilation, and mechanical ventilation, were classified as "severe".

Sensitivity analyses. In separate alternative model specifications, we estimated the treatment effect of receiving antibiotics, steroids, diuretics, and anticoagulants on length of stay, readmission within 30 days, and 30-day survival.
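For concreteness, the following is a minimal sketch of a weighted marginal structural Cox model of the kind described above, using the lifelines package on synthetic placeholder data (the paper's analysis was performed in Stata):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "time": rng.exponential(20.0, n).clip(max=30.0),  # follow-up capped at 30 days
    "event": rng.integers(0, 2, n),                   # death indicator
    "remdesivir": rng.integers(0, 2, n),              # treatment indicator
    "weight": rng.uniform(0.5, 1.5, n),               # stand-in for the stabilized IPTW
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="weight", robust=True)            # robust SEs for weighted data
cph.print_summary()
```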
RESULTS

Of the 752 hospitalized patients who received remdesivir (N = 758 hospitalizations), 742 (N = 748 hospitalizations) were included in this analytic sample. Likewise, of the 1,538 patients who did not receive remdesivir (N = 1,799 hospitalizations), 1,369 (N = 1,531 hospitalizations) were included in this analytic sample. Those included in the analytic sample tended to have a longer length of stay (median: 5 days [IQR: 3 days, 10 days] vs. median 3 days [IQR: 1 day, 7 days]). Table 1 summarizes the characteristics of the 2,062 patients (2,279 hospitalizations) included in the analytic sample, stratified by remdesivir treatment status. Remdesivir treatment was disproportionately given to those who were older, men, white, and with admission vitals indicating a respiration rate >30 and higher CRP values. Additionally, patients treated with remdesivir disproportionately required some degree of respiratory support at some point during their hospitalization. Those treated with remdesivir tended to have a longer length of stay, but a smaller proportion were readmitted within 30 days of discharge. Table 2 further summarizes characteristics by COVID-19 symptom severity. Patients with mild symptoms were disproportionately younger and BIPOC, while those with more severe symptoms were disproportionately older, white, and with a greater number of comorbid health conditions.

Sampling weight distribution and balance. Truncated IPCW, IPTW, and the combined weight had a mean ± standard deviation of 0.80 ± 0.22, 0.96 ± 0.42, and 0.76 ± 0.40, respectively (Supplemental Table A1). The characteristics of the 2,062 patients hospitalized with COVID-19 (2,279 hospitalizations) stratified by remdesivir treatment status after applying inverse probability of treatment weights are displayed in Supplemental Table A2 and demonstrate greater balance relative to the characteristics displayed in Table 1, particularly as they pertain to labs and vitals measured around the time of admission.

Marginal structural models. Associations of treatment with remdesivir with length of stay, 30-day readmission, and all-cause mortality in the 2,062 patients hospitalized with COVID-19 (2,279 hospitalizations) are displayed in Table 3.

Length of stay. Overall, being treated with remdesivir was associated with a 3.27-day (95% CI: 2.11, 4.44) increase in length of stay on average relative to not being treated with remdesivir. This finding was most pronounced for those with severe COVID-19 symptoms (β: 6.70 days; 95% CI: 0.47, 12.92), while those with mild and moderate symptoms had fairly negligible increases in length of stay.

30-day all-cause mortality. Overall, being treated with remdesivir was associated with a 35% decrease in the risk of dying in the 30 days following discharge (HR: 0.65; 95% CI: 0.49, 0.85).

Sensitivity analyses. Supplemental Tables A3–A6 display the results of sensitivity analyses employing steroids, antibiotics, anticoagulants, and diuretics in place of remdesivir as the treatment, respectively. Generally, all alternative models yielded comparable or worse health outcomes. Treatment with corticosteroids was associated with an increased risk of readmission within 30 days, and treatment with antibiotics was associated with an increased length of stay as well as an increased risk of dying within 30 days following discharge.

DISCUSSION

Treatment with remdesivir has been shown to improve disease severity and shorten the duration of symptoms in patients with moderate to severe COVID-19 disease.10,11 The benefits of treatment with remdesivir in mild disease severity have not been established. In our retrospective multicenter analysis within a large hospital network, receipt of remdesivir was associated with a lower likelihood of 30-day readmission after hospitalization; associations were strongest for those with mild COVID-19, defined as hospitalized but not requiring supplemental oxygen. Additional COVID-19 treatments investigated, including corticosteroids, antibiotics, diuretics, and anticoagulants, were not associated with a reduction in hospital readmission rates. We also observed improved overall survival and increased hospital length of stay associated with remdesivir treatment.
Our study population represents a broad range of real-world patients presenting to hospital in the period encompassing the "spring" and "fall" 2020 COVID-19 case-number surges in Rhode Island, USA. While non-randomized treatment allocation can be a significant source of bias in observational assessments of different treatments, we leveraged rich sociodemographic and clinical data to construct IPTWs, and additionally constructed IPCWs, to reduce the impact of confounding by indication and selective survival on our estimates, respectively. This is the first report on hospital readmissions in a large cohort of patients treated with remdesivir, likely owing to the early availability of remdesivir in Rhode Island.

Hospital readmission rates after hospitalization for COVID-19 range between 4% and 20%.2 Between 30% and 80% of readmitted patients return to hospital with a primary diagnosis of COVID-19 or a related adverse event, including sepsis, pneumonia, hypoxic respiratory failure, and thromboembolism.3,4 Other reasons for hospital readmission include exacerbation of underlying conditions such as congestive heart failure and pulmonary disease, and a decline in overall functional status. Mortality upon readmission ranges up to 23%.4

Remdesivir, a nucleotide analogue prodrug, acts as a viral RNA-dependent RNA polymerase inhibitor, targeting the viral genome replication process.12,13 Treatment with remdesivir early in the clinical course of COVID-19 may limit SARS-CoV-2 viral replication and prevent progression to more severe disease. Studying the effects of remdesivir presents a conundrum which is likely related to the disease biology of SARS-CoV-2 and COVID-19.14 Remdesivir may be a treatment agent for COVID-19 which is most effective in mild to moderate disease, and therefore may not exhibit a mortality reduction due to the lower overall mortality rates in this sub-population of patients. Our findings suggest that patients who develop severe disease may also benefit from remdesivir's antiviral effects, as is suggested by current guidelines recommending treatment with remdesivir in this population.5,14,15 Remdesivir has not been shown to improve overall mortality across several randomized studies, but we are encouraged to see an association with mortality reduction in this real-world data set. Evidence demonstrating a reduction of co-morbid outcomes, such as the reduced 30-day hospital readmission rates in this analysis, may form the basis for ongoing clinical use of remdesivir in hospitalized patients with mild COVID-19 disease.16

In studies reporting on rates of hospital readmissions after COVID-19 hospitalization, treatments included hydroxychloroquine, lopinavir/ritonavir, corticosteroids, anticoagulants, and remdesivir.4,17-20 A study with a small number of patients treated with remdesivir reported 0/22 patients readmitted within 30 days.4 By comparison, other treatments, such as hydroxychloroquine and corticosteroids, were not associated with a reduction in hospital readmissions.4 A prospective randomized trial evaluating the treatment effect of remdesivir conducted in outpatients with early COVID-19 disease revealed a decrease in hospital admissions in the remdesivir arm, supporting the idea that antiviral therapy for COVID-19 has an active role in clinical management but likely depends heavily on timing within the disease course.
In our analysis, commonly used treatments for COVID-19 such as corticosteroids, anticoagulants, antibiotics, and diuretic medications were not associated with reduced readmissions or in-hospital mortality.

Propensity score matching (PSM) and inverse probability of treatment weighting (IPTW) are increasingly popular methods used to address confounding by indication.21 IPTW aims to achieve a balanced distribution of confounders across treatment groups and thereby achieve a more robust baseline for comparison. An estimated propensity score reflects the probability of treatment assignment conditional on a patient's measured baseline characteristics, which in this analysis included a broad range of underlying diagnoses, social factors, clinical indicators including vital signs and laboratory studies, and in-hospital complications. An additional adjustment was performed to account for survivorship bias.

Remdesivir may be an effective strategy for reducing progression to severe COVID-19 disease and limiting the morbidity associated with readmission to hospital. Further prospective studies will be required to confirm this finding and help revise current guidelines, even as we grapple with a resurgence of cases of COVID-19. Although not currently included in guidelines for the treatment of hospitalized patients with mild COVID-19, remdesivir should be considered for those patients with defined risk factors for disease progression and readmission to hospital within 30 days.

Our study has important limitations. Treatment with remdesivir was not randomized, and there are known clinical factors that dictate differential allocation of treatment. In an attempt to mitigate the contribution of confounding by indication, we leveraged available data on underlying diagnoses, social factors, and clinical indicators, as measured and recorded in the electronic health record, to estimate and incorporate the probability of treatment assignment as IPTWs. IPTW, as with other techniques employed for confounding control, can only balance factors that are measured. Unmeasured factors, such as time-to-treatment with remdesivir and the timing of maximum respiratory support in relation to remdesivir treatment, may play a role in the analysis.

CONCLUSIONS

We observed an association between treatment with remdesivir and a reduced likelihood of hospital readmission in patients with mild COVID-19. These results provide evidence that augments the case for the use of remdesivir in patients hospitalized with mild (early) COVID-19 disease, particularly in patients with risk factors for progression to severe disease, as recommended by treatment guidelines. Additionally, we observed an association between remdesivir treatment and overall mortality reduction in this real-world analysis. Due to the non-randomized nature of our analysis, additional study of the treatment effect of remdesivir in COVID-19 is required.

SOURCE OF FUNDING

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
DECLARATION OF COMPETING INTEREST The authors certify that they have NO affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers' bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.
2022-02-11T14:09:29.170Z
2022-02-01T00:00:00.000
{ "year": 2022, "sha1": "49d2f017a53c12ceddb6ee65524b3db898f6c51d", "oa_license": null, "oa_url": "http://www.amjmedsci.org/article/S0002962922000684/pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "98f7a4d6d03198a38674a4ef284e4e8e4375abdf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
252901654
pes2o/s2orc
v3-fos-license
Cellular plasticity and immune microenvironment of malignant pleural effusion are associated with EGFR-TKI resistance in non-small-cell lung carcinoma

Summary

Malignant pleural effusion (MPE) is a complication of lung cancer that can be used as an alternative method for tissue sampling because it is generally simple and minimally invasive. Our study evaluated the diagnostic potential of non-small-cell lung carcinoma (NSCLC)-associated MPE in terms of understanding tumor heterogeneity and identifying response factors for EGFR tyrosine kinase inhibitor (TKI) therapy. We performed a single-cell RNA sequencing analysis of 31,743 cells isolated from the MPEs of 9 patients with NSCLC (5 resistant and 4 sensitive to EGFR TKI) with EGFR mutations. Interestingly, lung epithelial precursor-like cells with upregulated GNB2L1 and CAV1 expression were enriched in the EGFR TKI-resistant group. Moreover, GZMK-upregulated transitional effector T cells and plasmacytoid dendritic cells were significantly enriched in the EGFR TKI-resistant patients. Our results suggest that cellular plasticity and an immunosuppressive microenvironment in MPEs are potentially associated with the TKI response of patients with EGFR-mutated NSCLC.

INTRODUCTION

Advanced non-small-cell lung cancer (NSCLC) is the leading cause of cancer-related deaths globally and accounts for 85% of lung cancer cases (Herbst et al., 2008). Patients with epidermal growth factor receptor (EGFR)-mutated NSCLC show sensitivity to EGFR tyrosine kinase inhibitors (TKIs) such as gefitinib, erlotinib, and osimertinib (Riely et al., 2006). However, approximately 10% of patients with EGFR-mutated NSCLC exhibit primary resistance to EGFR TKIs, showing the clinical feature of disease progression during the initial course of EGFR-TKI therapy. Previous studies on primary resistance to EGFR-TKIs have reported actionable or novel gene alterations through targeted exome sequencing analysis using surgical tumors and biopsy specimens. Representative types of known alterations are as follows: MET amplification (Lai et al., 2019), de novo T790M (Su et al., 2018; Zhong et al., 2017), ERBB2 amplification (Zhong et al., 2017), BIM deletion (Lee et al., 2013), PIK3CA mutation (Su et al., 2018), and PTEN mutation (Su et al., 2018; Zhong et al., 2017). Despite various studies, the mechanism of resistance is unknown in up to 50% of cases (Leonetti et al., 2019). Recently, as immunotherapy has emerged, the relationship between PD-L1 expression and TKI response has also been reported (Su et al., 2018; Takashima et al., 2018), but there have been few studies on the association between TKI resistance and the tumor microenvironment (TME).

Many patients with advanced-stage NSCLC experience malignant pleural effusion (MPE). MPE causes discomfort and pain for the patient and requires additional management, but it can be a resource for the pathologic and genetic analysis of cancer. Basak et al. suggested that MPE is a proper model to investigate intra-tumoral heterogeneity in lung cancer because it incorporates various MPE-fluid component cells, including tumor and stromal cells (Basak et al., 2009). MPE has also been used to study tumor-infiltrating lymphocytes (TIL) in comparison with the spatial heterogeneity of TIL in biopsy specimens (Donnenberg et al., 2019). The development of single-cell RNA sequencing (scRNA-seq) has enabled the analysis of extensive tumor heterogeneity and the identification of various cell populations at the single-cell level.
However, the biggest hurdle to scRNA-seq is the limited availability of samples, because meticulous preparation procedures are required to separate adherent cells immediately after samples are collected. This limitation prevents the use of cryopreserved tissue samples without labor-intensive immediate cell separation. Hence, most previous scRNA-seq studies on lung cancers have utilized primary and metastatic lesions to understand their cellular features and TME (Guo et al., 2018; Kim et al., 2020; Lambrechts et al., 2018; Maynard et al., 2020). However, cryopreserved pleural effusion with simple pre-freezing preparation can overcome this limitation. Recently, Huang et al. reported that MPEs from patients with NSCLC harbor various types of immune cells, such as T and B cells and macrophages, which can provide therapeutic targets and biomarkers for treating NSCLC (Huang et al., 2021). Kashima et al. also utilized a pleural effusion sample from an EGFR-mutated lung cancer patient to validate their findings from the scRNA-seq analysis of TKI-resistant cell line models (Kashima et al., 2021).

Here, we sought to establish the TKI response factors with cryopreserved MPEs from patients with EGFR-mutated NSCLC. Transitional effector T cells with high GZMK expression and plasmacytoid dendritic cells were significantly enriched in the TKI-resistant group. Furthermore, lung epithelial precursor-like cells were enriched in the EGFR TKI-resistant group, while suprabasal-like cells were more prominent in the sensitive group. In addition, cancer-testis (CT) antigen genes such as CSAG1 and MAGEA3, which are frequently overexpressed in the magnoid lung cancer subtype, were upregulated in the epithelial cells of the TKI-resistant group compared to those in the sensitive one. In contrast, the expression of tumor-specific HLA class II genes and JAK-STAT pathway genes regulating HLA class II expression was significantly downregulated in the TKI-resistant group. Overall, we demonstrated that the cellular plasticity of malignant epithelial cells and the immune microenvironment are potentially associated with the TKI response of patients with NSCLC.

RESULTS

Single-cell RNA-sequencing analysis of pleural effusion samples from patients with non-small-cell lung carcinoma

We performed a scRNA-seq analysis of 38,414 cells isolated from the MPEs of 9 patients with NSCLC, including 5 EGFR-TKI-resistant (LCPE.R) and 4 sensitive patients (LCPE.S) with EGFR mutations (Table S1), to investigate the heterogeneity of cancer cells and the tumor microenvironment (Figure 1A). After quality control, single-cell transcriptome profiles were obtained from a total of 31,743 cells. In addition, doublets were removed from the scRNA-seq datasets of each patient using Scrublet (Wolock et al., 2019), and an average of 4,268 cells per sample was obtained. The observed average number of genes per cell was 1,103, and the average number of unique molecular identifiers (UMIs) per cell was 5,537 (Figures S1A and S1B). To investigate the composition of cell types in MPE, we performed unsupervised graph cluster analysis and identified five major cell types (Figures 1B–1D). Cell types were annotated using SingleR (Aran et al., 2019) and previously known cell-type markers (epithelial cells: EPCAM, KRT19; mesothelial cells: VCAM1, RAMP2; T cells: PTPRC, CD3D; B cells: CD79A, MS4A1; myeloid cells: S100A8, LYZ; Figure 1E). Unlike immune cells, such as T, myeloid, and B cells, epithelial cells were largely distinguished by individual patients (Figure 1D). The cell-type composition for each patient was heterogeneous (Figures 1F and S1C). In particular, the ratio of myeloid cells was enriched in the EGFR-TKI-sensitive group compared to the resistant group (p = 0.016, Wilcoxon rank-sum test) (Figure 1G). To investigate whether MPE-derived cells could represent the characteristics of tissue-derived cells (Lambrechts et al., 2018), we performed a co-clustering analysis and confirmed that cells were well clustered by their types rather than by their origin.
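For readers who want to reproduce this kind of workflow, the following is a minimal sketch (not the authors' pipeline) of the standard Scanpy steps matching the description — Scrublet doublet removal, graph clustering, and annotation with the quoted markers; the input path and QC thresholds are illustrative assumptions:

```python
import scanpy as sc

adata = sc.read_10x_mtx("mpe_sample_matrix/")          # hypothetical input path
sc.external.pp.scrublet(adata)                          # flags predicted doublets
adata = adata[~adata.obs["predicted_doublet"]].copy()
sc.pp.filter_cells(adata, min_genes=200)                # assumed QC cutoff
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)                                     # unsupervised graph clustering

# Marker genes quoted in the text for the five major cell types.
markers = {"epithelial": ["EPCAM", "KRT19"], "mesothelial": ["VCAM1", "RAMP2"],
           "T": ["PTPRC", "CD3D"], "B": ["CD79A", "MS4A1"],
           "myeloid": ["S100A8", "LYZ"]}
sc.pl.dotplot(adata, markers, groupby="leiden")
```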
Tumor heterogeneity conferring tyrosine kinase inhibitor resistance

We performed an additional sub-clustering analysis of epithelial cells at high resolution to further characterize tumor heterogeneity and confirm the differences between the EGFR-TKI-resistant and sensitive groups (Figure 3A). As mentioned above, epithelial cells were mainly grouped by individual patients (Figure 3B). We also inferred large-scale chromosomal CNVs in each epithelial cell. The CNV profiles showed significant levels of alterations and heterogeneity, implying that the epithelial cells in MPEs were mostly malignant. In addition, amplification of chr1q and chr7 was recurrently observed in our data, consistent with previously reported lung adenocarcinoma CNV profiles (Cancer Genome Atlas Research Network, 2014) (Figure 3C). Additionally, we investigated the expression profiles of a group of genes related to EGFR-TKI resistance and found that KRAS, ERBB2, and IGF1R were overexpressed in epithelial cells of three TKI-resistant samples (LCPE.R3, LCPE.R4, and LCPE.R5, respectively) (Figure 3D). To validate the TKI response based on the above gene expression profiles, we performed drug sensitivity assays using cell cultures derived from LCPE.R4, in which ERBB2 was overexpressed, and identified considerable sensitivity to afatinib, an irreversible TKI that targets both EGFR and ERBB2 (Wind et al., 2017), and resistance to other TKIs without ERBB2-inhibiting activity (Figure 3E). Taken together, these drug-response results suggest that the expression of other signaling pathway genes, such as ERBB2, can affect TKI resistance.

Cellular plasticity as a mechanism of tyrosine kinase inhibitor resistance

Furthermore, to understand the association between tumor cellular plasticity and EGFR-TKI resistance, we inferred the cellular origins of epithelial cells from MPEs through correlation analysis with previously reported cell types of the airway epithelium (Figure S2A) (Deprez et al., 2020). Interestingly, the precursor-like cells were relatively enriched in the resistant group, while the suprabasal-like cells were more prominent in the sensitive group (Figures 3F and 3G). We performed a differentially expressed gene (DEG) analysis to determine the characteristics of these two cell types and observed the upregulation of SCGB3A2, a club cell precursor marker (He et al., 2021), in the precursor-like cells (Figure S2B). In addition, genes related to tumor cell proliferation, such as GNB2L1 and ZFAS1, had higher expression levels in the precursor-like cells than in the other lung epithelial cell subtypes identified in MPEs (Figure S2B) (Duff and Long, 2017; Fan et al., 2020). In contrast, the expression of tumor suppressor genes, including KLF6 and RHOB, was upregulated in the suprabasal-like cells (Figure S2C) (Ito et al., 2004; Mazieres et al., 2004). Through gene set enrichment analysis (GSEA), we found that the Myc targets V1 and TNF-α signaling pathways were enriched in the precursor-like and suprabasal-like cells, respectively (Figure S2D). Interestingly, the expression of MHC class II genes (HLA-DQB1 and HLA-DRB5) was higher in the suprabasal-like cells than in the other epithelial cell subtypes.
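The following is a minimal sketch of the subtype DEG comparison just described, assuming a Wilcoxon-style test; `epi_subtype` is a hypothetical annotation holding the inferred precursor-like/suprabasal-like labels, and `adata_epi` continues from the epithelial subset of the earlier sketch:

```python
import scanpy as sc

sc.tl.rank_genes_groups(adata_epi, groupby="epi_subtype", method="wilcoxon",
                        groups=["precursor-like"], reference="suprabasal-like")
deg = sc.get.rank_genes_groups_df(adata_epi, group="precursor-like")
print(deg.head(20))   # SCGB3A2, GNB2L1, and ZFAS1 would be expected near the top
```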
Interestingly, the expression of MHC class II genes (HLA-DQB1 and HLA-DRB5) was higher in the suprabasal-like cells than in other epithelial cell subtypes.

Decreased human leukocyte antigen class II expression in epithelial cells of epidermal growth factor receptor-tyrosine kinase inhibitor-resistant patients and clinical validation

Next, the DEGs of epithelial cells between EGFR-TKI-resistant and -sensitive patients were analyzed. Epithelial cells of each patient were combined to generate a pseudo-bulk sample for DEG analysis. As a result, we identified 673 upregulated and 628 downregulated genes in the EGFR-TKI-resistant group compared with the sensitive group (p < 0.05 and absolute fold-change > 1.5). According to a previous large-scale multiomics study of lung adenocarcinoma by The Cancer Genome Atlas project, EGFR-mutant lung adenocarcinoma mostly belongs to the bronchial subtype (Cancer Genome Atlas Research Network, 2014). Interestingly, CSAG1 and MAGEA3, known as cancer-testis (CT) genes, were upregulated in the epithelial cells of the resistant group (Figure 4A); previous studies have shown that these genes are most frequently activated in the magnoid subtype of lung adenocarcinoma (Yao et al., 2014). Hence, the expression profiles of magnoid subtype-related genes could be associated with resistance to EGFR-TKI therapy. Through GSEA, we found that the downregulated genes in the epithelial cells of the resistant group were associated with the immune response (Figure 4B). In particular, HLA-DPB1, HLA-DQA1, and HLA-DRB1 showed significantly lower expression levels in the resistant group than in the sensitive group (Figures 4C and S3A), which is concordant with the DEG analysis results between lung epithelial precursor-like and suprabasal-like cells. To verify our findings, we examined whether HLA class II genes were expressed in published human lung cancer scRNA-seq data (Lambrechts et al., 2018). As a result, most HLA class II genes were highly expressed in immune cells, such as T, myeloid, and B cells, but some HLA class II genes were also highly expressed in epithelial cells (Figure S3B). Regulation of MHC class II expression is known to be mediated by the transactivator gene CIITA and induced by IFNG (Steimle et al., 1994). Although there was no apparent difference in the expression levels of IFNG between the two groups, the expression of CIITA was increased in the EGFR-TKI-sensitive group (Figure S3C). To gain deeper insights into the causes of the difference in HLA class II gene expression levels, we investigated the upstream JAK-STAT signaling pathway genes regulating HLA class II genes. Interestingly, we found that the expression levels of JAK-STAT signaling pathway-related genes tended to increase in the EGFR-TKI-sensitive group (Figure S3C). By adopting a scoring scheme for JAK-STAT pathway activity, we confirmed that the JAK-STAT pathway was significantly activated in the sensitive group (p = 0.032) (Figure 4D). To validate the distinct expression patterns of the three MHC class II proteins in the epithelial cells of the primary tumor, formalin-fixed and paraffin-embedded (FFPE) slides from 19 TKI-resistant and 45 TKI-sensitive patients underwent immunohistochemistry (IHC) staining (Figures 4E and S3D). The IHC scores of HLA-DPB1 (p = 0.001) and HLA-DRB1 (p = 0.007) were significantly lower in the TKI-resistant group (Figure 4F). Although there was no statistical significance (p = 0.2), the IHC score of HLA-DQA1 was also lower in the resistant group.
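The scoring scheme for JAK-STAT pathway activity is not spelled out in the text; one plausible implementation is Seurat's AddModuleScore followed by a per-patient group comparison, sketched below. The object `epi`, the gene list, and the `patient_group` vector are illustrative assumptions.

```r
library(Seurat)

# Illustrative JAK-STAT gene set; not the authors' exact list.
jak_stat <- list(c("JAK1", "JAK2", "STAT1", "STAT2", "STAT3", "IRF1"))
epi <- AddModuleScore(epi, features = jak_stat, name = "JAKSTAT")
# AddModuleScore appends the score as metadata column "JAKSTAT1"

# Compare mean scores per patient between groups, as in the text
patient_score <- tapply(epi$JAKSTAT1, epi$patient, mean)
# 'patient_group' is a hypothetical vector aligning patients to TKI groups
wilcox.test(patient_score[patient_group == "sensitive"],
            patient_score[patient_group == "resistant"])
```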
Moreover, PFS was significantly better in patients with high MHC class II expression (median IHC score cutoffs: HLA-DPB1 240, HLA-DQA1 30, HLA-DRB1 35) than in those with low MHC class II expression (Figure 4G). The expression levels of MAGEA3, CSAG1, CIITA, and HLA-DR in MPE-derived epithelial cells were further validated using multiplex immunofluorescence staining, which showed increased MAGEA3 and CSAG1 as well as decreased CIITA and HLA-DR in the TKI-resistant group (Figures S4A and S4B). We verified that HLA-DR+ epithelial cells were increased in the EGFR-TKI-sensitive group through flow cytometry analysis (Figures S4C and S4D). In addition, HLA-DR and HLA-DQ were notably upregulated in EGFR-TKI-sensitive MPE cells (Figure S4E). To confirm the expression of IFN-γ in the tumor microenvironment, we measured IFN-γ-producing CD3+ cells after PMA and ionomycin stimulation; these were more frequent in TKI-sensitive MPEs than in TKI-resistant MPEs (Figure 4H).

Enrichment of transitional effector T cells in epidermal growth factor receptor-tyrosine kinase inhibitor-resistant patients

We performed a co-clustering analysis of T cells from primary tumors (Lambrechts et al., 2018) and MPEs. Nine helper T cell clusters were characterized by the expression of CD4, CCR4, CCR6, and IL6R; two dysfunctional T cell clusters by CD8A, LAG3, and PDCD1 expression; three transitional effector T cell clusters by dysfunctional T cell marker gene and GZMK expression; two natural killer (NK) cell clusters by CD3D, CD160, NCR3, CX3CR1, and FGFBP2 expression; one naive T cell cluster by SELL, TCF7, CCR7, and LEF1 expression; and one regulatory T cell (Treg) cluster by FOXP3, IL2RA, TNFRSF4, TIGIT, and CTLA4 expression (Figures 5A-5D). Overall, the T cell subtypes were heterogeneously distributed across the individual patients (Figure S5A). Most of the T cell clusters were identified from both primary tumors and MPEs without significant differences in their enrichment level (Figure S5B), except for clusters 4 (Treg), 10 (helper T cell), 12 (transitional T cell), and 16 (proliferating T cell). Interestingly, among the three transitional effector T cell clusters (clusters 2, 3, and 12), cluster 3 was significantly enriched in the TKI-resistant group (p = 0.032, Wilcoxon rank-sum test) (Figures 5E and S5C), and its expression of GZMK and FYN was higher than that of the other clusters. FYN is known to phosphorylate the negative regulator of T cell signaling and may be involved in terminating the TCR signal (Filby et al., 2007).

Heterogeneity of myeloid cells in pleural effusion samples

We identified nine macrophage clusters, one non-classical monocyte cluster, one myeloid precursor cluster, one classical dendritic cell cluster, one activated dendritic cell cluster, and one plasmacytoid dendritic cell (pDC) cluster (Figures 6A-6D). Similar to the T cell subtypes, myeloid cell subtypes were also heterogeneously distributed across the individual patients (Figure S6A). Most of the myeloid cell clusters were identified from both primary tumors and MPEs without significant differences in their enrichment level, except for clusters 0 (macrophage), 10 (activated dendritic cell), 13 (macrophage), and 14 (pDC) (Figure S6B). Myeloid precursor and pDC clusters were enriched in the TKI-resistant group (Figures 6E and S6C). The pDC cluster (cluster 14), with high granzyme B (GZMB) expression, is known to induce a regulatory T cell response (Swiecki and Colonna, 2015) and to inhibit T cell proliferation (Jahrsdorfer et al., 2010).
The myeloid precursor cluster (cluster 11) had high expression of genes associated with the cell cycle, such as CCNB1, CCNB2, CDC20, and CDK1 (Engeland, 2018).

DISCUSSION

Most NSCLC patients with EGFR mutations are responsive to EGFR-TKI therapy, but approximately 10% of patients show primary resistance to TKI (Lee et al., 2013; Sharma et al., 2007). Recent studies have suggested that MPE is an appropriate model for investigating the heterogeneity and the immune microenvironment of lung cancer because MPE preserves tumor, stromal, and immune cells (Basak et al.; Huang et al., 2021; Maynard et al., 2020), and its sampling is minimally invasive. Despite these advantages, most single-cell analyses of NSCLC have been based on tumors (Guo et al., 2018; Kim et al., 2020; Lambrechts et al., 2018), and MPE has rarely been utilized. To the best of our knowledge, we are the first to analyze the cellular landscapes potentially associated with EGFR-TKI response at the single-cell level using MPEs of NSCLC patients with EGFR mutations. We examined the expression of 22 genes known to be associated with EGFR-TKI response in epithelial cells from MPE (Table S2). A drug response test was further conducted on the cell culture from the TKI-resistant sample (LCPE.R4) with ERBB2 upregulation, and interestingly, it showed sensitivity to afatinib, a dual-targeting drug of ERBB2 and EGFR (Figure 3D). Unfortunately, in actual clinical practice, the patient from whom the LCPE.R4 MPE cell culture was obtained did not receive afatinib treatment because these data were not available at the time; had the treatment been given, the patient might have benefited. The major drug-resistance factors can be divided into mutagenic and non-mutagenic mechanisms. The non-mutagenic mechanism is mainly caused by cellular plasticity and is closely related to the re-activation of developmental programs such as cancer stem cell characteristics and the epithelial-mesenchymal transition (Qin et al., 2020). Therefore, we examined the relationship between cellular plasticity and TKI resistance through single-cell transcriptome analysis of MPEs from TKI-resistant and -sensitive patients with EGFR mutations. Our results showed that precursor-like and suprabasal-like cells were enriched in the resistant and sensitive groups, respectively. In the precursor-like cells, genes related to tumor cell proliferation (GNB2L1, CAV1, and ZFAS1) were generally upregulated. GNB2L1 is known to promote tumor cell proliferation by regulating Src activity (Duff and Long, 2017; Peng et al., 2013). In addition, GNB2L1 and Src regulate P-glycoprotein activity through caveolin-1 (CAV1) phosphorylation (Fan et al., 2019). P-glycoprotein, a drug transporter in cancer cells, is one of the main causes of multidrug resistance because it helps excrete anticancer drugs out of the cell (Fan et al., 2019). ZFAS1 induces tumor cell proliferation and migration by directly binding miR-1271-5p, which acts as a tumor suppressor in lung adenocarcinoma (Fan et al., 2020). Maynard et al. analyzed advanced-stage NSCLC patients with EGFR mutations using an scRNA-seq technique and demonstrated that residual tumor cells during therapy had enhanced alveolar cell signatures, whereas tumor cells that acquired resistance showed reduced immunity (Maynard et al., 2020). According to them, CAV1 is upregulated in treatment-resistant tumor cells and transcriptionally activates the WNT/β-catenin pathway.
They also suggested that the activation of the WNT/β-catenin pathway in NSCLC patients with EGFR mutations may lead to resistance to EGFR inhibitors (Maynard et al., 2020). Although very few alveolar cells were identified in our results (Figure 3F), CAV1 was significantly upregulated in the precursor-like cells that were abundant in the TKI-resistant group (Figures 3G and S2B), which is concordant with the results from Maynard et al. The suprabasal-like cells, which accounted for a high cellular proportion in the sensitive group, showed upregulation of KLF6, RHOB, HLA-DQB1, and HLA-DRB5. The KLF family is known to be involved in cell differentiation, proliferation, and apoptosis (Black et al., 2001). Among them, KLF6 is frequently downregulated in NSCLC and inhibits tumor cell growth by inducing apoptosis (Ito et al., 2004). Reduction of RHOB expression often occurs in lung cancer; RHOB itself exerts tumor-suppressive activity (Mazieres et al., 2004). Taken together, both the proliferative properties and the increased expression of drug transporter genes in precursor-like cells are associated with TKI resistance. We also found that some immune response-related genes are downregulated in the TKI-resistant group (Figure 4B). In particular, the expression of HLA class II genes was significantly reduced in the epithelial cells of the TKI-resistant group. In general, HLA class II is known to be expressed in professional antigen-presenting cells (APCs), but recent studies have confirmed the expression of HLA class II in non-professional APCs, including epithelial cells (Axelrod et al., 2019; Wosen et al., 2018). In the tumor microenvironment, MHC class II-mediated antigen presentation by epithelial cells appears to play an important role in regulating the inflammatory response by activating T cells (Mehrfeld et al., 2018). MHC class II expression in the epithelial cells was further confirmed by IHC staining, and notably, patients with high MHC class II expression showed significantly superior EGFR-TKI therapy outcomes (Figures 4E-4G). Therefore, we hypothesized that the difference in HLA class II expression was due to a specific transcription factor and revealed that the expression of CIITA, an HLA class II transcription factor (Devaiah and Singer, 2013), was significantly decreased in the TKI-resistant group. Furthermore, Pollack et al. (2011) reported that IFNG activates CIITA expression and, subsequently, HLA class II expression. The resistant group exhibited decreased expression of genes related to the IFNG signaling pathway, including JAK and STAT. DEG analysis of epithelial cells revealed significant upregulation of CSAG1 and MAGEA3 in the resistant group. CT genes such as CSAG and MAGEA are known potential targets for immunotherapy because they are expressed in various malignant tumors, including lung cancer (Yao et al., 2014). Yao et al. investigated CT gene expression in 10 common cancer types from TCGA and reported that MAGE and CSAG are activated in the magnoid subtype of lung adenocarcinomas (Yao et al., 2014). Although most EGFR-mutated lung adenocarcinomas are of the bronchial subtype (Cancer Genome Atlas Research Network, 2014), our analysis showed that some EGFR-TKI-resistant patients could have magnoid subtype characteristics (Figure 4A). Furthermore, transitional effector T cells were more enriched in the TKI-resistant group compared with the sensitive group. Li et al.
found that the dysfunction of CD8+ T cells is associated with tumor reactivity and characterized transitional effector T cells as lying in between early effector T cells and dysfunctional T cells (Li et al., 2019). In our data, one transitional effector T cell cluster with high expression of FYN, which activates negative regulators of T cell signaling and is involved in terminating TCR signaling (Filby et al., 2007), was enriched in the TKI-resistant group. We also confirmed that a pDC cluster was enriched in the TKI-resistant group. pDCs secrete soluble factors that play an important role in anti-tumor immunity, but inactivated pDCs are known to be associated with immunosuppression (Demoulin et al., 2013). Furthermore, the pDC cluster (cluster 14) had high GZMB expression, which induces regulatory T cell responses and inhibits T cell proliferation (Jahrsdorfer et al., 2010; Swiecki and Colonna, 2015). In addition, a previous study reported that unstimulated pDCs express GZMB and induce regulatory T cell responses (Ye et al., 2020).

Limitations of the study

We investigated EGFR-TKI resistance mechanisms in NSCLC using a limited number of MPE samples. In addition, EGFR-TKI-resistant MPE specimens were collected after different types of EGFR-TKI treatments. Therefore, these factors might influence our interpretation of the results. Furthermore, endothelial cells and fibroblasts, which were not observed in our data, could also potentially affect EGFR-TKI responsiveness. Hence, it will be important in the future to validate these findings in an independent cohort with samples obtained before and after treatment.

STAR+METHODS

Detailed methods are provided in the online version of this paper and include the following:

ACKNOWLEDGMENTS

We thank the patients who participated in this study and also thank Binnari Kim for the pathological evaluation of the patient samples.

The Cell Ranger pipeline (10x Genomics) was used to perform sample demultiplexing, barcode processing, and single-cell 3′ gene counting. The cDNA insert was aligned to the hg19 reference genome. Only confidently mapped, non-PCR duplicates with valid barcodes and unique molecular identifiers were used to generate the gene-barcode matrix. We applied Scrublet (Wolock et al., 2019) to remove doublets, which occur when two or more cells enter the same microfluidic droplet. Further analysis, including quality filtering, identification of highly variable genes, dimensionality reduction, standard unsupervised clustering algorithms, and the discovery of differentially expressed genes (DEGs), was performed using the Seurat R package (version 3.1.4) (Butler et al., 2018). To exclude low-quality cells, we used QC covariates such as counts per cell, number of genes per cell, and mitochondrial gene ratio per cell. Because the distribution of these QC covariates differs for each sample (Plasschaert et al., 2018), we determined different thresholds for each sample. After removing unwanted cells from the dataset, we normalized the data by the total expression, multiplied by a scale factor of 10,000, using the NormalizeData function. We used FindVariableFeatures to identify highly variable genes and then performed PCA with the top 2,000 variable genes. Clusters were partitioned using FindClusters, and each cell was projected into a two-dimensional space using t-distributed Stochastic Neighbor Embedding (t-SNE). DEGs in each cluster were calculated using the FindMarkers function. We integrated the two different scRNA-seq datasets using Seurat canonical correlation analysis alignment.
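Since the Seurat functions are named explicitly above, the overall v3 workflow can be sketched as follows. QC thresholds are placeholders (the paper sets cutoffs per sample), and `counts`, `seu1`, and `seu2` are hypothetical object names.

```r
library(Seurat)

seu <- CreateSeuratObject(counts = counts)
seu[["percent.mt"]] <- PercentageFeatureSet(seu, pattern = "^MT-")
# Placeholder thresholds; the paper chooses different cutoffs per sample
seu <- subset(seu, subset = nFeature_RNA > 200 & percent.mt < 20)

seu <- NormalizeData(seu, scale.factor = 10000)     # total-count normalization
seu <- FindVariableFeatures(seu, nfeatures = 2000)  # top 2,000 variable genes
seu <- ScaleData(seu)
seu <- RunPCA(seu)
seu <- FindNeighbors(seu)
seu <- FindClusters(seu)
seu <- RunTSNE(seu)                                 # two-dimensional projection
cluster0_degs <- FindMarkers(seu, ident.1 = 0)      # DEGs for one cluster

# Integration of two datasets via canonical correlation analysis
anchors  <- FindIntegrationAnchors(object.list = list(seu1, seu2))
combined <- IntegrateData(anchorset = anchors)
```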
Correlation analysis of epithelial cells to infer cellular plasticity

To infer the cellular plasticity of epithelial cells, we performed a correlation analysis with previously reported epithelial cell types (Deprez et al., 2020). We averaged the gene expression of cells of each reference cell type, and then we analyzed the correlation between the gene expression values for each defined cell type and the gene expression values of MPE-derived cells at the cellular level. Each cell was assigned the cell type with the highest Pearson correlation coefficient, and cell types were restricted to the epithelial cell types (see the sketch after this section).

Differential gene expression analysis of pseudo-bulks

We combined all the cells from each sample to create pseudo-bulk samples. DEGs were identified using the DESeq2 R package (version 1.26.0) (Love et al., 2014) based on the average expression level (mean CPM) of each cell. Each DEG was filtered using abs(fold change) > 1 and p value < 0.05. We used EnrichR (Chen et al., 2013) to analyze the enrichment of biological process ontology.

Copy number variation analysis in scRNA-seq

Copy number variation (CNV) in each cell was estimated using the inferCNV R package (version 1.2.1) (Patel et al., 2014). Each CNV level was estimated using relative expression values with a sliding window of 100 genes based on the genomic location of the genes. All assays were analyzed using the default options, and the CNV of epithelial cells was estimated with reference to myeloid cells.

Cell viability test

In the cell viability test, cells were equally distributed into 96-well plates at 7,000 cells/well. Thereafter, cells were separately exposed to TKI drugs (gefitinib, erlotinib, osimertinib, and afatinib) in seven-point, 1:4 serial dilutions from 4 nM to 20 μM for 72 h. Subsequently, CellTiter-Glo Luminescent Cell Viability Assay reagents (G7572; Promega, Madison, WI, USA) were added to each well at a 1:1 ratio with the media volume and shaken gently. The plates were incubated at room temperature for 15-30 min, and cell viability was determined using a Mithras LB940 Multimode Microplate Reader (Berthold Technologies GmbH & Co. KG, Bad Wildbad, Germany) according to the manufacturer's protocols.

Immunohistochemistry staining

FFPE tumor sections were dewaxed in xylene and ethanol and submerged in ER1 buffer (pH 6.0) for 20 min at 100°C in a Bond-RX Multiplex IHC Stainer (Leica Biosystems, Melbourne, Australia) to retrieve the antigens, followed by blocking of endogenous peroxidase for 10 min. Anti-HLA-DPB1 antibody (Abcam, Cambridge, UK) was diluted at 1:1,000 and incubated in the Bond-RX autoimmunostainer for 15 min.
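As referenced above, the correlation-based cell-type assignment can be sketched directly in base R; the matrix and label names are hypothetical.

```r
# 'ref_mat' and 'query_mat' are hypothetical genes-by-cells expression
# matrices (reference atlas and MPE cells); 'ref_type' labels reference cells.
ref_means <- sapply(split(seq_len(ncol(ref_mat)), ref_type),
                    function(i) rowMeans(ref_mat[, i, drop = FALSE]))
shared <- intersect(rownames(ref_means), rownames(query_mat))
cc <- cor(as.matrix(query_mat[shared, ]), ref_means[shared, ],
          method = "pearson")
assigned <- colnames(cc)[max.col(cc)]  # best-correlated reference type per cell
```

The pseudo-bulk DEG step could look like the following; `pb_mat` and `meta` are hypothetical names, and summed raw counts per sample are used here because that is the usual DESeq2 input, whereas the text describes mean CPM.

```r
library(DESeq2)

# 'pb_mat': genes x pseudo-bulk samples (integer counts); 'meta$group' holds
# the TKI-resistant/sensitive labels.
dds <- DESeqDataSetFromMatrix(countData = pb_mat, colData = meta,
                              design = ~ group)
dds <- DESeq(dds)
res <- as.data.frame(results(dds))
# Fold-change cutoff as stated in the Results section (> 1.5)
sig <- subset(res, pvalue < 0.05 & abs(log2FoldChange) > log2(1.5))
```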
Exploring Adult Learners' Viewpoints and Motivation Regarding Distance Learning in Medical Education

Introduction: Literature in education and training supports the notion that distance learning (DL) is the most effective mode of learning for health care workers to improve the quality of patient care. However, implementing DL requires pre-assessing learners' perspectives and attitudes for providing better delivery, essential support, and facilities. This study aimed to identify the viewpoints and attitudes of dental graduates toward DL in medical education and their views of the effectiveness and efficacy of DL tools. Methods: A structured, self-administered questionnaire was distributed to registered adult graduates working in government or private hospitals in a permanent position or for a long term (3 months or more). Data were collected and analyzed. Results: Two-thirds (67.9%) of the participants had previously attended a DL course. The highest-ranked items on the participants' views on DL were ease of access, ability to take the course from any location, and being taught from anywhere in the world. Their perception of DL was analyzed in relation to gender and previous exposure to DL. Conclusion: This investigation revealed a positive attitude among graduates on the effectiveness of DL. Most respondents appreciated DL's convenience in terms of time flexibility and online attendance. Residents' attitudes toward DL and DL characteristics are major factors to consider when instituting or planning for DL. Continuous medical education through DL will continue to generate considerable interest as an international movement.

Introduction

Despite their busy schedules, health care workers need to keep pace with the developments in their field and maintain and improve their professional knowledge and skills. Also, with the growing number of health care workers, the demand for health education and training services has grown as well. Health education is, therefore, facing an exceptional situation with the large numbers of health care workers and increasing demands and services. Distance learning (DL), in such cases, has played a significant role in the provision of education across the world. In 1996, Dutta, Jena, and Panda 1 observed that long-distance training, in medical and health practice particularly, can increase its cost-effectiveness and scope. Other studies have found distance education to improve time management, teamwork, communication skills, critical thinking, and knowledge application. [2][3][4][5][6] Besides, DL has proven to be a well-accepted strategy for higher education that successfully engages participants beyond physical classroom boundaries. 2,7,8 Examining the first attempt at long-distance learning in the medical field, Blakeley and Curran-Smith 3 reported some astounding outcomes. According to them, it was difficult for registered nurses to continue their health education because of work and home conflicts; however, after completing their first DL nursing course, they could gain and apply new knowledge. DL also served to impart knowledge and skills long distance; McDonald et al 9 claimed that e-based learning per se is comparable to patient simulation. Despite its advantages, DL also has its shortcomings. Bloomfield and Jones 10 underscored the importance of supervision in technical procedures once training was completed.
Others expressed concerns about DL limiting interactions and causing technical difficulties, and about how its efficacy depends on the instructional design used. [7][8][9] Thus, a combination of conventional teaching methods and DL may be an effective educational technique, particularly for undergraduates. [9][10][11] Moreover, it would be difficult to implement DL in the medical corridors for specific clinical skills and procedures. Therefore, to effectively deliver these skills using a DL approach, adding video materials to the virtual program could help tackle the challenges of DL's limitations as well as create a positive environment. 12 DL in the medical field has thrived because of the problems and difficulties that have choked traditional learning. Most previous studies [2][3][4][9][10][11][12] have measured DL's effectiveness, usage, and design. However, learning attitudes and perceptions are also essential factors in successful learning. 13 Therefore, this study aimed to identify the perceptions and attitudes of dental graduates toward DL in medical education and how effective and efficient they found the DL tools.

The Questionnaire

A structured, self-administered questionnaire was developed based on a literature review and pretested. 8,12 A short introduction in the questionnaire explained the aim and the definitions of distance and face-to-face learning. There were 16 questions relating to distance and face-to-face learning. The first part contained the introduction and asked participants whether they had attended a DL course. The second part had 16 questions that asked about the viewpoints of participants about DL, motivation, main characteristics of DL, advantages and disadvantages, as well as their opinion about face-to-face learning as compared to DL. The questions were presented in random order. The responses were scored on a 5-point Likert scale ranging from "strongly agree" to "strongly disagree." The pilot study was conducted on 30 participants. No adjustments were necessary, and thus, the questionnaire was distributed to the study population.

Method and Participants

Ethical approval was granted by the local institutional Research Ethics Committee. Respondents were assured that their responses would remain confidential. The questionnaires were distributed and collected immediately after the survey by the researcher. Questionnaires were distributed to 106 dental graduates, and participation was voluntary. The purpose of the study was explained, and any uncertainties were resolved prior to participation. All participants were registered adult graduates working in government or private hospitals in a permanent position or for a long term (3 months or more).

Statistical Analysis

The collected data were analyzed using IBM SPSS Statistics for Windows, version 22 (IBM Corp., Armonk, NY, USA). Frequencies, means, and exploratory factor analysis were used to analyze the prevalence of statistical differences.

Results

The Cronbach's alpha value for the questionnaire was 0.863, which was above 0.70, indicating good internal consistency. Women constituted 39.65% of the total respondents and men 60.4%. A total of 67.9% (24.5% women and 43.4% men) had previously attended a DL course. Of all the items, the ones most respondents valued and ranked the highest were convenient access to the online course (88.7%), followed by the ability to attend from any location (88.7%), and being taught from anywhere in the world (83.01%).
The lowest-ranked item was DL workload (37.7% were not sure), followed by the need for IT support in DL (almost half the sample [49.1%] either did not agree or was not sure) (see Table 1, Figure 1). Perception of DL was analyzed in relation to gender and previous exposure to DL. Women suggested more often than men that it would be more useful if DL was based on daily clinical practice (p = 0.035). Responses from respondents who had attended a DL course and who found DL interesting, flexible in terms of time and location, global, self-paced, self-regulated, useful if accompanied by videos, and based on daily practice were statistically significant (p < 0.001). Respondents who had attended a DL course believed they would use the knowledge they learnt through DL in the future (p = 0.001). Significant responses also corresponded to enthusiasm in attending a DL course (p = 0.011) and considering DL cost-effective (p = 0.031) (Table 2). Factor analysis was also performed (Table 3; a sketch of this procedure follows this passage). The correlation matrix showed the appropriateness of the data for factor analysis. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.810 (more than 0.6). Bartlett's test of sphericity was significant (0.0001); thus, factor analysis with principal component analysis was performed. Using the rotated component matrix and the extraction method, with Varimax as the rotation method and Kaiser normalization, three factors were extracted. The names of the factors were created based on the meaning of the variables included in each factor. The three factors were attitude toward DL and its characteristics, face-to-face (F2F) instruction benefits, and advantages and disadvantages of DL. The three factors were ranked based on students' responses (Table 4).

Correlation

Spearman's correlation indicated a significant moderate positive association between attending DL courses and attitude toward DL and its characteristics (r(106) = 0.605, p < 0.001). It also indicated that there was a significant moderate positive association between appreciating F2F instruction benefits and valuing the advantages and disadvantages of DL (r(106) = 0.531, p < 0.001) (Figure 2).

Discussion

This investigation revealed a positive attitude toward DL among graduates, as two-thirds of the sample had attended DL at some point before taking the questionnaire. A majority of the respondents appreciated DL's convenience in terms of time flexibility and online attendance. The results also revealed that residents' attitudes toward DL and DL characteristics are major factors to consider when instituting or planning for DL, and that understanding the advantages of traditional learning affects learners' attitudes toward DL. Thus, promoting DL's benefits, convenience, and success may increase adult learners' acceptance of DL as an alternative learning approach and lead to higher enrollment. Additionally, DL education in health care delivery is useful since it is applied at a learner's own pace, which helps in reducing instructional time and complications for individual trainees. Indeed, long-distance training is effective provided it is adequate, appropriate, and soundly designed. It works well to address technical ability with its fixed assessment of performance. As the results indicate, attending DL courses increases participants' optimism toward and belief in DL. Factors related to this result include being taught by global experts and learning at one's own pace and on one's own time.
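As referenced above, the reliability and factor-analysis steps could be reproduced along the following lines using R's psych package (a common choice; the paper itself reports SPSS output). The data frame `items` and the score vectors in the correlation call are hypothetical names.

```r
library(psych)

alpha(items)                                   # Cronbach's alpha (reported: 0.863)
KMO(items)                                     # sampling adequacy (reported: 0.810)
cortest.bartlett(cor(items), n = nrow(items))  # Bartlett's test of sphericity

# Principal component extraction with Varimax rotation, three factors
pc <- principal(items, nfactors = 3, rotate = "varimax")
print(pc$loadings, cutoff = 0.4)

# Spearman correlation between DL attendance and the attitude factor score
cor.test(attended_dl, attitude_score, method = "spearman")
```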
Additionally, in the medical health field, there is often an insufficiency of specialists and resources. A combination of these and other difficulties in medical health makes the formation of DL centers of excellence a reasonable point of concern. Of importance is the complex nature and the vastness of the content of medical education courses. As health care education is critical for successful health outcomes, integration of DL should not be undertaken blindly. A successful integration should incorporate a well-choreographed plan that considers the most preliminary activities, which includes considering the varying attitudes toward and experiences of the DL participants before fully embracing this approach. First, a medical training institute must carry out a needs assessment to evaluate attitudes toward DL. If a need to implement distance education is determined, then a series of logical decisions is necessary to adopt DL. Many global institutions have tested DL as a standalone solution in a bid to improve continuous medical education. 6,[14][15][16][17][18][19] The outcomes of these projects were successful, and hence, adoption of distance education in the field of medical health was advocated in these cases. 6,[14][15][16][17][18][19] In general, experts recommend initiating DL with an assimilation strategy that considers the pros and cons before implementation. 17,18 Still, there is a need to widen the scope of DL training, particularly in new technical procedures and complex skills. 20,21 Another limitation is the difficulty in measuring clinical skills obtained from DL and the difficulty and time needed to prepare its content. 20,22 Strengthening medical skills with the help of videos and role-plays while simultaneously practicing on real patients immediately after completing a DL course can ensure a successful distance education. 20,21 To measure learners' attitudes, knowledge, and skills, it might be crucial to utilize adaptive learning that takes advantage of technology. This is applied at the beginning of the online training to determine which educational materials are the most appropriate for each learner. 23,24 The Accreditation Council for Graduate Medical Education has instituted core competencies concerning the application of e-learning in graduate medical education. 25 E-learning constituents that meet these requirements can be incorporated into the education of medical residents and fellows, substituting for lectures and other synchronous methods of instruction. 26,27 As a result, an increase in the use of educational software programs in medical education has been reported. 28 Asynchronous DL can be successfully utilized during demanding clinical care. Randomized trials have demonstrated that DL produces effects similar to those of traditional educational methods. 22,29,30 The results of this investigation are encouraging. The positive attitudes toward DL found in this study show DL continuing education courses could be used for training in the workplace. The study outcomes are also helpful for policymakers and health care administrators to plan for continuing education, particularly because there is an increased global demand for medicine and medical facilities. This unprecedented growth in health care delivery has necessitated a quick turnaround in the medical world to match the supply with the demand.
The changes that have triggered an increased demand for health care providers have simultaneously instigated increased demands on medical faculty, which can be met through DL. Indeed, the future has a lot to offer regarding the transformation of pedagogical and instructional methods of DL, which is still in its inception stage, and with the current dynamic technological revolution, more sophisticated ways of delivering education are at hand. An important aspect of DL in continuous medical education is that, in e-learning's online environment, adaptive learning shows promise because it helps to identify the learner, personalize content, and individualize tracking, monitoring, support, and assessment. Similarly, collaborative learning has the potential to end learner isolation. The enormous advancements in synchronous distance education play a pivotal role in enhancing collaborative learning. Likewise, the continuous development of collaborative technologies, which include tools such as e-mail, teleconferencing, chats, message boards, and weblogs, contributes to collaborative learning's success. Despite the gains in medical education from using DL, its drawbacks should also be considered. A systematic review 31 of 249 papers involving medical students revealed that technology may be a barrier for both tutors and students and that technology must be perceived as useful and easy to use for a course to succeed. The review also found that interactivity and feedback between tutor and peers play a significant role in understanding, performance, and acceptance of the course. Course design and content are also crucial when using DL; for example, courses using virtual microscopy are highly valued when compared to virtual textbooks. 31 Thus, these limitations should be considered when planning DL courses for adult health care learners.

Conclusion

Regarding continuous medical education around the world, DL offers a likely direction for the future. Indeed, distance education in health care has just begun, and this method of instruction has much potential. Earlier, distance education would have been considered inferior to traditional training in any field, but not anymore; distance education in all academic areas is in high demand, particularly by busy, in-service professionals. Although some biases may jeopardize the research on DL, there is enough evidence to show that health specialists can effectively complete short courses associated with their present employment.

Ethical Considerations

Ethical approval was obtained from the Research Ethics Committee, Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia (IRB#KSU-HE-19-235). Participation was voluntary, and both verbal and written consent were obtained prior to participation.
Bistable Perception Is Biased by Search Items but Not by Search Priming

During visual search, selecting a target facilitates search for similar targets in the future, known as search priming. During bistable perception, in turn, perceiving one interpretation facilitates perception of the same interpretation in the future, a form of sensory memory. Previously, we investigated the relation between these history effects by asking: can visual search influence perception of a subsequent ambiguous display and can perception of an ambiguous display influence subsequent visual search? We found no evidence for such influences, however. Here, we investigated one potential factor that might have prevented such influences from arising: lack of retinal overlap between the ambiguous stimulus and the search array items. In the present work, we therefore interleaved presentations of an ambiguous stimulus with search trials in which the target or distractor occupied the same retinal location as the ambiguous stimulus. Nevertheless, we again found no evidence for influences of visual search on bistable perception, thus demonstrating no close relation between search priming and sensory memory. We did, however, find that visual search items primed perception of a subsequent ambiguous stimulus at the same retinal location, regardless of whether they were a target or a distractor item: a form of perceptual priming. Interestingly, the strengths of search priming and this perceptual priming were correlated on a trial-to-trial basis, suggesting that a common underlying factor influences both.

Introduction

The human visual system represents the physical world to guide behavior in a useful way. This representation, however, can only be an approximation of the environment, due to physiological restrictions as well as inconclusive sensory information. Consequently, the meaning of a visual scene becomes ambiguous in certain conditions. Typically, in such a situation, the visual system prefers one interpretation over the other (Pastukhov et al., 2013). Experimentally, this can manifest as bistable perception of ambiguous figures, meaning that an observer sees different interpretations of the same ambiguous stimulus in alternation. Many studies have shown that, when such stimuli are shown repeatedly, a type of sensory memory due to previous presentations biases perception at the start of a subsequent presentation (de Jong, Knapen, & van Ee, 2012; Leopold, Wilke, Maier, & Logothetis, 2002; Maier, Wilke, Logothetis, & Leopold, 2003; Pastukhov, 2016; Pastukhov & Braun, 2008; Pearson & Brascamp, 2008). We have previously suggested that this dependency on prior history could be related to priming in visual search (Brinkhuis, Kristjánsson, & Brascamp, 2015). In particular, when the features of a target, such as its shape or color, repeat between trials, search response times (RTs) decrease (Maljkovic & Nakayama, 1994; Treisman & Gelade, 1980) and the number of correct responses increases (Ásgeirsson, Kristjánsson, & Bundesen, 2014; Sigurdardottir, Kristjánsson, & Driver, 2008). The same is true when distractor features repeat between trials (Chetverikov, Campana, & Kristjánsson, 2016; Lamy, Yashar, & Ruderman, 2013; Tipper, 1985), whereas when target and distractor features are reversed, performance goes down. We considered that this target versus distractor bias may be analogous to a perceptual bias that is elicited by the current dominant percept of an ambiguous stimulus.
In the previous study, we examined potential links between these two kinds of history effects by presenting a search priming paradigm in which search displays were interleaved with ambiguous displays. To investigate potential interactions, the target and distractors resembled the two perceptual interpretations of the ambiguous stimulus. We confirmed that visual search elicited search priming, and that bistable perception elicited sensory memory, but found no influence of either kind of trial on the other: prior search trials did not bias subsequent bistable perception, and prior ambiguous stimuli did not affect subsequent visual search. This suggested that the two kinds of history-dependence, search priming and sensory memory, are unrelated. The present work constitutes a closer examination of this earlier result, motivated by the dual notion that search priming acts by altering attention allocation, and that bistable perception depends on attention allocation (Brascamp & Blake, 2012; Dieter, Brascamp, Tadin, & Blake, 2016; Dieter & Tadin, 2011; Ling & Blake, 2012; Zhang, Jamison, Engel, He, & He, 2011). These two facts together suggested to us that search priming should be able to influence bistable perception, in spite of our inability to find such an influence in prior work. One example of evidence for attentional influences on bistable perception is that perception at the onset of an ambiguous stimulus is influenced by attentional cues (Chong, Tadin, & Blake, 2005; Chopin & Mamassian, 2010; Dieter, Melnick, & Tadin, 2016; Kristjánsson, 2009; Mitchell, Stoner, & Reynolds, 2004; Ooi & He, 1999). For example, Mitchell et al. (2004) and Chong and Blake (2006) showed that a transient attentional cue affects the interpretation of a subsequent ambiguous stimulus. The approach in both studies involved binocular rivalry, in which incompatible images are presented to the left and the right eye, leading to two possible percepts. By precueing one of the two images with a movement or a contrast increase, the cued image was predominantly perceived on subsequent presentations. Further evidence for a role of attention allocation in bistable perception includes the finding by Ooi and He (1999) that perception during binocular rivalry could be biased by presenting a pop-out cue to one eye, such that the image presented to this cued eye preferentially gained perceptual dominance. Further evidence suggests that attention allocation in visual search biases perceptual conflict toward the features of a preceding target. Chopin and Mamassian (2010) presented two arrays with contrasting orientations to the left and the right eyes. Observers performed a search task and a bistable perception task on alternating trials. On search trials, observers reported the location of the target that had a lower contrast than surrounding items, whereas on bistable trials observers reported the orientation of the dominant array, continuously for 12 seconds. When the orientation of the array that contained the search item became predictable, observers were more likely to report this orientation during the onset of bistable trials. The authors concluded that task relevance cued perception to favor the orientation that would improve search performance (note, however, that search performance was not measured). Similarly, in a subsequent study, Chopin and Mamassian (2011) found that the surface of an ambiguous stimulus was more often perceived in the front when this surface contained a search target.
While their results did not show a relation between search and perceptual biases directly, they show a link between the processes that are involved in both search and perception, and, importantly, that an attentional shift between stimulus features may affect both search and perceptual outcomes (see Kristjánsson, 2009 for converging results). Given this indirect evidence that search priming, by altering attention allocation, may influence bistable perception, we considered whether the lack of such an influence in our previous work was due to an incidental experiment design choice. In that previous study (Brinkhuis et al., 2015), we presented, at different moments, an ambiguous stimulus at fixation or visual search stimuli at a fixed distance around fixation. While this spatial arrangement is in correspondence with prior work on sensory memory (typically studied at fixation) and on visual search priming (typically studied using extrafoveally presented items), this difference in spatial locations may have prevented any interaction between the two trial types. In particular, sensory memory for bistable perception is confined to a narrow spatial range (Chen & He, 2004; Knapen, Brascamp, Adams, & Graf, 2009). Similarly, attentional biasing of perception also falls off across space (Fischer & Whitney, 2014). Any effect of search priming on bistable perception may therefore be spatially restricted as well. Here, we therefore ask whether search priming affects bistable perception when search items and ambiguous images overlap retinotopically. As the facilitation of target selection relies on the repetition of target and distractor features, we compared the influence of target items and distractor items on subsequent ambiguous displays presented at the same position.

Methods

Participants

Eight observers (mean age = 30.75 years; standard deviation [SD] = 3.27) participated. Seven of them, including the current first author, had experience with psychophysics tasks. Except for this author, participants were naive to the goal of the experiment. Participation was voluntary, and observers did not receive payment or study credits. The experiment was conducted in accordance with the Declaration of Helsinki.

Apparatus

We used Python and Psychopy to create and present stimuli (Peirce, 2007) on a 1,920 by 1,200 pixel, 60-cm-wide thin-film transistor (TFT) display at 60 Hz, at a distance of approximately 60 cm from eye position. To ensure that participants kept a stable view and constant distance relative to the display throughout the experiment, head position was held constant with a chin rest.

Stimuli

We presented animated rotating spheres by showing white (123.14 cd/m²) circular dots that were scattered across each sphere's surface, against a gray background (28.58 cd/m²). Specifically, 64 dots were positioned by creating 16 imaginary rings at equal distances along the sphere's vertical axis, and on each ring drawing 4 dots, placed at random radial positions, but with equal distance to one another. The top, bottom, and center rings of the spheres were dot-free. The distance between each ring of four dots was 0.15° (visual angle). The spheres rotated around their vertical axis at a speed of 0.17 cycles per second. We presented two types of displays, as shown in Figure 1. One display involved the visual search paradigm. Three vertically aligned spheres were presented.
The center sphere was presented at the center of the screen, the top sphere's center was presented 3° above the central sphere, and the bottom sphere's center was presented 3° below the central sphere. Depth cues were applied to the spheres by decreasing dot sizes as a function of dot depth position. Dot sizes ranged from 0.15° to a minimum of 0.03° for the farthest dots. In addition, dot luminance gradually decreased as a function of the dot's depth position, to 84.0 cd/m² at minimum. A third depth cue was added by linearly scaling horizontal and vertical coordinates as a function of the dot depth position, such that the horizontal and vertical positions of the farthest dots were 20% closer to the central horizontal and vertical axes of the sphere relative to the dots closest to the observer, giving the impression of perspective and further disambiguating the sphere's rotation direction, resulting in unambiguous leftward (i.e., clockwise when viewed from the top) or rightward rotation (i.e., counterclockwise when viewed from the top). After applying this perspective cue, the spheres were scaled such that their outline diameter remained 2.4°. One of the spheres always rotated in the opposite direction to the other two spheres and was the target of the visual search task. The rotation direction of the spheres was randomly set on each trial. On search display trials, participants were asked to respond by indicating the position of the oddly rotating sphere using the eight, five, and two keys on the numeric keypad of the keyboard, corresponding to the top, middle, and lower sphere, respectively. The second display type involved the presentation of a single sphere with a varying level of ambiguity regarding rotation direction on each occurrence. The different levels of ambiguity were implemented by using different gain factors for the scaling of dot size, luminance, and dot placement as a function of distance. In particular, dot size decreased linearly as a function of dot depth position to 0.09°, 0.105°, 0.12°, or 0.135°, or was constant for a fully ambiguous sphere. Dot luminance, in turn, faded to 73.01, 84.0, 96.0, or 108.73 cd/m², or remained constant (123.14 cd/m²). For the perspective cue, the horizontal and vertical positions of the dots farthest from the observer were scaled to be 40%, 30%, 20%, 10%, or 0% closer to the central horizontal and vertical axes relative to the dots that were closest to the observer. On these trials where a single sphere was presented, from here on referred to as ambiguous trials, participants were asked to respond by indicating the perceived rotation direction using the four and six keys on the numeric keypad, corresponding to leftward and rightward rotation, respectively. A central fixation dot was presented continuously throughout the experiment. The dot had a diameter of 0.15°. Because the central ring of each sphere did not contain dots, there was no overlap between moving dots and the central fixation dot, ensuring that it would not affect the percept of the central sphere.

Procedure

Each experiment run consisted of a sequence of one, two, or three search trials followed by a single ambiguous trial. Each block contained 75 of those sequences, or 225 trials in total. Search displays were presented for 2.5 seconds, or until a response was given, and ambiguous displays were presented for 1 second.
Between each two trials, there was a period of 1 to 2.25 seconds where no stimulus was presented, except for the central fixation dot. (Figure 1: in the example ambiguous displays shown, the left sphere is fully ambiguous and the right sphere is most strongly disambiguated through three gradually amplified depth cues: (i) decreasing dot size, (ii) reducing dot luminance, and (iii) enhancing perspective. Note that, while differences in dot luminance and size can be appreciated in the figure, the full three-dimensional experience is not elicited without the dot motion that was present in our actual stimuli.) The duration of each experiment block was about 12 minutes. Participants performed 4 blocks in each session and returned for three sessions, for a total of 12 blocks. To encourage participants to maintain eye fixation at screen center, we stressed that it was important to fixate on the central dot. Furthermore, we introduced a second task parallel to the main task. On random occasions, at the offset of either a search display or an ambiguous display, the fixation dot changed luminance from white to light gray (54.68 cd/m²) or the other way around. Participants were instructed to respond, using the space bar, each time the luminance of the fixation dot changed. Participants received points, added for each correct response and subtracted for each incorrect response. Specifically, for each correctly identified luminance change, participants received 400 points, while for each missed change, 400 points were subtracted. Furthermore, during the search task, each correct response regarding the search target was rewarded with 200 points, and for each incorrect response, 200 points were subtracted. During the ambiguous display, each response was rewarded with 100 points, regardless of what the response was. The score was displayed for 500 milliseconds after each response, or at stimulus offset when the fixation dot luminance changed but was not reported. Furthermore, the score was presented at the position of the central fixation dot, in green for correct responses and in red for incorrect responses. Prior to the start of the first session, participants were asked to practice the task, which they continued until they reached 5,000 points; these practice trials were not used in the analysis.

Analysis

Responses were recorded continuously throughout experiment runs. We selected each first response after the onset of search displays, ignoring response corrections. On ambiguous displays, only the last response after stimulus onset (until the next stimulus onset) was selected, allowing participants to correct their response of perceived rotation direction. Note, however, that participants were not instructed that they could change their choice and consequently rarely did so. Before conducting statistical analysis, we preprocessed the search data in the following way. First, we excluded outliers, defined as RTs more than three SDs above the mean RT or RTs lower than 500 milliseconds. We included all trials with incorrect responses in our analyses. Next, we subtracted the linear slope of RTs, corresponding to a gradual slowing that might be associated with waning motivation, for each experiment block. We then normalized the data by taking the log of RTs to decrease distribution skew and by subtracting the mean RT and dividing by the SD for each experiment run, thus resulting in a detrended, z-scored logarithm of the RT. Finally, we rescaled the RTs to the mean SD across experiment runs and added the grand mean.
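A minimal sketch of this preprocessing chain is given below; the data frame `dat` and its columns (`rt` in seconds, `block`, `run`) are hypothetical names, and rows are assumed to be sorted by block and in presentation order within blocks.

```r
# Exclude outliers: RTs > 3 SDs above the mean, or faster than 500 ms
keep <- dat$rt >= 0.5 & dat$rt <= mean(dat$rt) + 3 * sd(dat$rt)
dat <- dat[keep, ]

# Remove the linear RT slope within each block, keeping each block's mean
# (assumes dat is ordered by block)
dat$rt_dt <- unlist(tapply(dat$rt, dat$block, function(x) {
  tr <- seq_along(x)
  x - (fitted(lm(x ~ tr)) - mean(x))
}))

# Log-transform and z-score within each experiment run
z <- ave(log(dat$rt_dt), dat$run,
         FUN = function(x) (x - mean(x)) / sd(x))

# Rescale to the mean SD across runs and add the grand mean
sd_per_run  <- tapply(log(dat$rt_dt), dat$run, sd)
dat$rt_norm <- z * mean(sd_per_run) + mean(log(dat$rt_dt))
```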
Note that the last step scales the RTs so they reflect averages across subjects and that RTs at the individual level become less meaningful. Finally, we concatenated the data across all participants and all experiment runs for further analysis. To assess significance, we used the R statistical software package (R Core Team, 2017) and the lme4 library (Bates, Mächler, Bolker, & Walker, 2015). We fitted a linear mixed model to the concatenated search data to assess search priming, using the lmer function in R. We fitted generalized linear mixed models to the perceptual choice data to assess the relation between search priming and bistable perception, using the glmer function in R and a logistic link function. In all models, individual intercepts were modeled as random effects. The interpretation of ambiguous spheres was modeled as a binary dependent variable, where leftward and rightward responses were modeled as 0 and 1, respectively. We used an iterative approach to select the model that best fitted the data. All models were fitted using Laplace approximation (Bolker et al., 2009). By adding one factor of interest at a time and comparing the expanded model with the initial model using a likelihood ratio test, we identified significant predictors using a cutoff at a p value of .05. We calculated approximations of the Bayes factors (BFs) from the Bayesian Information Criteria (BIC) of both models using Equation 1, derived from Wagenmakers, See, and Cohen (2007):

BF12 = exp[(BIC(M1) - BIC(M2)) / 2],   (1)

where BF12 is the BF of Models M1 and M2, M1 is a baseline model, and M2 is the expanded model. Here, a BF value larger than 1 suggests the expanded model fits the data best, whereas a value lower than 1 favors the baseline model (Dienes, 2014).

Results

On average, participants responded correctly on 90% (SD = 7%) of the search trials. Figure 2 displays average RTs per observer, for search trials where target and distractor rotation directions repeated, and for search trials where they switched. To assess whether search priming affected bistable perception, it is fundamentally important that our paradigm elicited search priming. We conducted a linear mixed-model analysis of the normalized search RTs, dependent on the repetition of target rotation direction across search displays. Specifically, we first fitted a baseline linear mixed model to the search data, including only the intercept. We then compared the baseline model with an expanded model that included a parameter for target rotation repetition (see Models A1 and A2 in Table 1). Estimation of the parameters showed a decrease in RTs when target rotation repeated. A likelihood ratio test yielded significantly improved performance of the expanded model over the baseline model, χ²(1) = 14.27, p = .0002, supported by a BF of 10.5, suggesting that RTs were indeed reliably faster when target rotation repeated. Another important prerequisite to testing our hypothesis is that perception of the ambiguous stimuli was susceptible to perceptual biases through trial history. This may not be the case if, for example, our method to disambiguate the spheres worked too well or if observers were already too strongly biased toward a certain interpretation at the start of the experiment. To rule out this possibility, we fitted a generalized linear mixed model, with a logistic link function, to test for influences of prior perception during the previous ambiguous trial on perception during the current ambiguous trial (i.e., influences of sensory memory; depicted in Figure 3).
We first created a baseline model with the probability of a rightward response to an ambiguous display, indicating perceived rightward rotation, as the outcome variable.

Table 1 (note): To assess search priming, we compared how well linear mixed models A1 and A2 fitted RTs. For the next part of our analyses, we compared generalized linear mixed models. First, we compared B2 and B3 to assess biased perceptual choice depending on the previous interpretation. In each Model B, we predict the probability that the observer responded with the right-arrow key, P(→). We compared Models B2 and B4 to assess the influence of the central search item on perceptual choice, and Models B4 and B5 to assess whether the role of the CS interacted with the influence of the CS on perceptual choice. All models included an intercept, and Models B2 to B5 included a regressor for nine different ambiguity levels, ranging from unambiguously rotating leftward, through fully ambiguous rotation, to unambiguously rotating rightward. AS = ambiguous sphere; CS = central sphere; RT = response time.

Figure 3. Probabilities that the single spheres that interleaved search trials were perceived to rotate rightward. The red lines show perceptual bias when the previous sphere was perceived to rotate leftward, whereas the blue lines show perceptual bias when the previous sphere was perceived to rotate rightward.

The model (Table 1; Model B2) included an intercept and a predictor that reflected the level of physical ambiguity of the ambiguous display. The ambiguity levels ranged from −1 to 1 in nine equally spaced steps, reflecting leftward rotation and rightward rotation, respectively, going through 0, which reflects a physically fully ambiguous sphere. These values mapped onto the steps we took to disambiguate the spheres, as described in the Methods section. We then included a predictor for the response (left: 0, right: 1) on the previous ambiguous display (Table 1; Model B3) and found that this model, following a likelihood ratio test, fitted the data significantly better than the baseline model, χ²(1) = 376.21, p < .0001, with a very high associated BF (5.81 × 10^79). These results show convincingly that observers' perception was biased by trial history. After having, in this fashion, established the presence of trial history effects within each trial type (search and perception), we next investigated trial history effects from search trials onto trials with ambiguous displays: the main objective of the present work. To get one step closer to that objective, we iteratively expanded the previous model (see Tables 1 and 2 for an overview of all of these models) to assess the influence of target selection during search on perception during subsequent ambiguous displays, on top of the history effect that was evident for ambiguous displays. The response probabilities that were modeled in the next two analyses are depicted in Figure 4, as a function of the level of the graphically induced bias on ambiguous displays. The central sphere presented during search displays overlapped with the position of the sphere presented during the ambiguous display. As our objective was to search for retinotopically specific effects of search priming on bistable perception, we specifically assessed the influence of this central search item on bistable perception (i.e., on the probability that the ambiguous spheres would be perceived to have a rightward rotation direction).
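The paper fits these models with glmer in R. A rough fixed-effects analogue (ignoring the random intercepts per observer, and using column names of our own invention) can be sketched in Python with statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# df is assumed to hold one row per ambiguous trial with columns (names are ours):
#   right      - 1 if the observer reported rightward rotation, else 0
#   ambiguity  - induced bias, nine levels from -1 (left) to +1 (right)
#   prev_right - response on the previous ambiguous trial (0/1)
df = pd.read_csv("choices.csv")  # hypothetical data file

base = smf.logit("right ~ ambiguity", data=df).fit()                  # ~ Model B2
expanded = smf.logit("right ~ ambiguity + prev_right", data=df).fit()  # ~ Model B3

# Likelihood ratio test of the added history predictor (1 df):
lr = 2 * (expanded.llf - base.llf)
p = chi2.sf(lr, df=1)
print(lr, p)
```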
To assess this influence, we first created a new model that included a predictor for the rotation direction of the central sphere on the search trial that immediately preceded an ambiguous trial (Table 1; Model B4). This expanded model fitted the response data significantly better, χ²(1) = 70.36, p < .0001, supported by a large BF (2.24 × 10^13). The data therefore convincingly show that the rotation direction of the overlapping sphere biased observers toward perceiving the same rotation direction in the subsequent, overlapping ambiguous sphere. The above analysis, however, while showing a priming effect of the central sphere during the most recent search trial on perception during an ambiguous trial, does not address our central question: whether visual search priming influences perception of an ambiguous display. After all, this analysis does not take into account whether the central sphere was a target or a distractor during this most recent search trial. In other words, the analysis simply tests whether perception of an ambiguous stimulus is affected by prior presentation, at the same retinal location, of an unambiguous stimulus that resembles one of its interpretations, regardless of any role the unambiguous stimulus may have as part of a visual search array. Such sensory memory effects due to disambiguated input have been demonstrated before (Kanai, Knapen, van Ee, & Verstraten, 2007; Kanai & Verstraten, 2005; Long & Moran, 2007; Long, Toppino, & Mondin, 1992). Our next analysis therefore takes the role of the central sphere during search (i.e., whether it is a target or distractor) into account, to address the question whether a target at the central location affects subsequent perception differently than a distractor at the central location does, as would be predicted in the case of retinally specific effects of search priming on perception. Specifically, we added a predictor to the model reflecting the interaction between the role and the direction of the central sphere (Table 1; Model B5). In other words, we added, to our earlier model that included the rotation direction of the preceding sphere as a predictor, a second predictor specifying whether the most recent central sphere was a target or a distractor. If history effects due to visual search also affect bistable perception, in a retinally specific fashion, then we expect that the rotation direction of the central sphere would have a particularly strong priming effect on bistable perception if the central sphere was the target, whereas the effect may be weaker if the central sphere was a distractor, or may even be reversed, analogous to the effect of role reversals in visual search (Chetverikov et al., 2016; Chetverikov, Campana, & Kristjánsson, 2017). The expanded model did, however, not fit the response data significantly better, χ²(2) = 3.42, p = .181, than the model that did not take the role of the central sphere into account. Indeed, a BF of 7.67 × 10^−4 showed strong evidence in favor of the simpler model, thus suggesting that the search item's role did not bias the perceived rotation direction of the ambiguous sphere.

Figure 4. The probability that the sphere was perceived as rotating rightward (on the y-axis) depended, generally, on the induced stimulus bias (x-axis). The red lines represent perceptual bias when the CS in the preceding search display was rotating leftward, whereas the blue lines show perceptual bias when the preceding CS was rotating rightward. Solid lines show perceptual bias when the CS was the target, whereas dotted lines show perceptual bias when the CS was a distractor. CS = central sphere.
Together with the results of the previous analysis, this shows that effects of prior visual search on subsequent bistable perception in this experiment were restricted to retinally specific sensory memory effects (i.e., priming of the rotation direction of the central search item) that are unrelated to the search task itself (i.e., no influence of whether this item was a target or a distractor). To assess the consistency of these effects across observers, we reran the analyses for Models B1 to B5 (see Table 1) for each observer, with session included as a random factor. We found the same pattern of significant results across observers, except for Observer 1 and Observer 5. The above results corroborate our previous finding (Brinkhuis et al., 2015) that history effects in visual search and history effects in the perception of ambiguous displays are unrelated, in the sense that the traces left by visual search do not influence bistable perception. The results also expand on that finding by showing that it holds even when retinal overlap between the search display element and the ambiguous stimulus is ensured. The results described earlier provide a negative answer to the main question of the present work, that is, whether sensory memory and search priming are so closely related that search priming could also affect ambiguous figure perception. In an exploratory analysis, we next examined the possibility of a more indirect relation. In particular, our experiments elicited both search priming and sensory memory simultaneously, and we tested whether the strengths of the two types of history effects were correlated on an observer-to-observer or block-by-block basis. Such a correlation might be expected if some more general mechanism (e.g., arousal) affects both types of history effects similarly. We therefore modeled whether the search priming strengths per experiment block were predicted by the strength of sensory memory, including observers as a random effect (see Figure 5, left panel). Specifically, we calculated the strength of search priming as the difference between mean normalized RTs on trials where the target and distractor repeated and on trials where they switched. This was done for each block and for each observer. We also calculated the strength of sensory memory for search items, by assessing the difference between the influence of leftward and rightward rotating central search items on the probability of perceiving ambiguous spheres as rightward rotating, again per block and observer.

Figure 5. Search priming strength (y-axis) against sensory memory strength (x-axis) for central search items (left) and for ambiguous displays (right). In the left panel, the x-axis represents the difference between the probabilities (Δp) of giving a rightward response when the preceding central sphere rotated leftward versus when it rotated rightward. In the right panel, the x-axis represents the difference between the probabilities (Δp) of giving a rightward response when the preceding response to the ambiguous display was left versus when it was right. Smaller gray dots show the priming strengths across experiment runs (i.e., 12 blocks per observer), whereas bigger gray dots show average priming strengths per observer across blocks. The linear function was fitted to the average priming strengths across blocks. RT = response time.
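The two history-effect strengths described above are simple contrasts; a sketch of how they could be computed per block is given below. Column and function names are our own, not the authors'.

```python
import pandas as pd

def priming_strength(block: pd.DataFrame) -> float:
    """Search priming: mean normalized RT on 'switched' minus 'repeated' trials.
    Larger values mean a stronger RT benefit when the target direction repeats."""
    repeated = block.loc[block.target_repeated, "rt_norm"].mean()
    switched = block.loc[~block.target_repeated, "rt_norm"].mean()
    return switched - repeated

def sensory_memory_strength(block: pd.DataFrame) -> float:
    """Delta-p: P(rightward report | preceding CS rotated right)
    minus P(rightward report | preceding CS rotated left)."""
    p_after_right = block.loc[block.prev_cs_dir == "right", "right"].mean()
    p_after_left = block.loc[block.prev_cs_dir == "left", "right"].mean()
    return p_after_right - p_after_left
```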
In a linear mixed model with search priming strength as the outcome variable, we found that the strength of sensory memory predicted the strength of visual search priming significantly, relative to a model including only the intercept, χ²(1) = 6.98, p = .008, with a substantial BF of 3.34. To further explore this effect, we also calculated the correlation between the per-observer averages of perceptual biases and priming strengths. In doing so, we aimed to identify whether the correlation depended on structural differences between observers, possibly reflecting that observers used different strategies. We indeed found the same positive trend, r(8) = .60, p = .07, although this correlation was not quite significant. Interestingly, the result was specific to a comparison between search priming and sensory memory elicited by the central search item: it did not arise when we compared search priming to sensory memory elicited by ambiguous displays (i.e., to the effect of one ambiguous trial on the next). Specifically, we replaced the predictor that reflects the perceptual bias due to the central search item with one that reflects sensory memory elicited by the prior ambiguous display (Figure 5, right panel), and did not obtain the same result. Instead, the strength of visual search priming per block was not significantly predicted by sensory memory for ambiguous displays per block, χ²(1) = 1.29, p = .26, with a BF of 0.20, nor per observer, r(8) = −.34, p = .41. The relation between search priming and sensory memory was therefore specific to sensory memory elicited by search items.

Discussion

While search priming effects spread across the visual field, the range at which bistable perception can be influenced by history effects has been found to be narrower (Chen & He, 2004; Knapen, Adams, & Graf, 2009). We measured interactions between retinotopically overlapping search items and ambiguous figures, to examine whether priming of visual search can influence perception of ambiguous stimuli. When search items and ambiguous figures were presented at fixation, bistable perception was indeed significantly biased by the prior search display, but this influence was not related to search priming: the perceptual bias induced by the search item presented at the central location did not depend on the role of this item (target or distractor). Rather, bistable perception was biased toward the percept that resembled this foveal search item, regardless of its role. The current results confirm our earlier finding (Brinkhuis et al., 2015) that there is no effect of search priming, as such, on bistable perception, and extend this finding by showing that this is true even when there is retinal correspondence between the bistable stimuli and the search items used. Our results therefore further support the notion that sensory memory and search priming are independent phenomena and, by inference, that the resolution of perceptual ambiguity relies on processes distinct from the competition between attended and ignored items during visual search. In an exploratory analysis, we did find a more indirect link between search priming and sensory memory for ambiguous figures. Specifically, the strength of the sensory memory for search items predicted the strength of visual search priming. This suggests a weaker relation between the biasing effects of sensory memory and of search priming.
For instance, both types of visual memory may rely on a common, nonspecific factor like arousal or the level of attention given to display items. In the present work, the correlation between the strengths of the two types of history effects reached significance when analyzed at the level of individual runs, and showed a similar trend when analyzed across observers instead of across experiment runs. Further study of this observer-to-observer relation should reveal whether the trend reflects a real effect. Such consistent differences may indicate differences in physiology or strategy between observers that specifically affect sensory memory for search items. The current finding that the mere presence of a search item at the central location primes subsequent bistable perception, regardless of its role as either target or distractor, resembles findings in previous studies showing that perception can be biased by the mere viewing of prior unambiguous stimuli (Brascamp, Knapen, Kanai, van Ee, & van den Berg, 2007; Kanai et al., 2007; Kanai & Verstraten, 2005; Takeuchi, Tuladhar, & Yoshimoto, 2011). Similar to the present findings, those previous studies showed that an unambiguous stimulus can prime the corresponding interpretation so that it becomes dominant during subsequent viewing of a similar, ambiguous stimulus. Interestingly, these effects become more pronounced when attention is drawn toward stimulus characteristics corresponding to one of the two possible interpretations (Chong & Blake, 2006; Mitchell et al., 2004). The current lack of an interaction between search priming and bistable perception may therefore be interpreted in two ways. First, reorientation of attention toward the target may prime subsequent search but not bistable perception; the two tasks may rely on separate mechanisms. Second, it may be that observers did not reorient attention toward the target during search but attended targets and distractors to an equal extent. In that case, the search priming effect in this study, and in similar studies, may have relied primarily on the enhanced discriminability of target and distractor. Analogously to the complementary effects of target and distractor items in visual search, perception is locally attracted toward previously attended stimuli or repelled from previously unattended stimuli (Fischer & Whitney, 2014; Fritsche, Mostert, & de Lange, 2017). While Fischer and Whitney found that perception is attracted to previously displayed stimuli when an observer is asked to report a certain orientation of a stimulus, Fritsche and colleagues replicated this finding but found that perceptual judgment tasks yielded different results. Importantly, their results for the different types of experiments relied on the same stimuli but were opposite in nature. These results specifically indicated that different biases are represented in different types of responses. For example, as suggested by Fritsche and colleagues, a positively biased perceptual decision may rely on the previous response to a stimulus as well as on the perception of a preceding stimulus (Cicchini, Mikellidou, & Burr, 2017), whereas, in parallel, perceptual judgments may rely on adaptation to that same stimulus. In other words, the effects of visual search on perception may depend on the method used to probe perception, suggesting that the perceptual signature of search priming may change when a different paradigm is used.
Importantly, though, here we find no evidence that, specifically, the perception of ambiguous stimuli and target selection during search rely on shared mechanisms. To summarize, we previously found that distinct effects of attentional and perceptual selection history did not interact. Here, we showed that the absence of those interactions was not due to a spatially narrow susceptibility of bistable perception to bias through prior visual search: when ensuring retinal overlap between search items and ambiguous stimuli, we found that search priming still does not alter perception of subsequent ambiguous stimuli. Instead, items in the search array can cause sensory memory irrespective of their role in search, an effect that has also been observed when no search is involved at all and that is distinct from search priming. In an exploratory analysis, we found that the strength of this sensory memory was related to the strength of visual search priming, possibly reflecting that observers performed the search task with varying attention to search items across experiment blocks.
CO-REGISTRATION OF VIDEO-GRAMMETRIC POINT CLOUDS WITH BIM – FIRST CONCEPTUAL RESULTS

The co-registration of photogrammetric products such as image blocks or point clouds is an essential step before they can be used for subsequent analysis. Usually this is done using control points. This has some disadvantages, such as the need for additional measuring devices and the laborious measurement of the coordinates. In prior work we developed a procedure that enables a marker-less co-registration of an image block with a digital building model. This extended abstract presents our current research as work in progress. For further facilitating and improving this process, we identified two tasks: using videogrammetry as the data capturing technique, and using an enhanced matching algorithm during the co-registration. This paper summarizes essential steps when making the switch from photogrammetry to videogrammetry and explains the basic principles of the improved matching process.

INTRODUCTION

Today, the sustainable maintenance and conservation of buildings, and especially of infrastructure such as bridges, tunnels and roads, is a major challenge. New tools for the digital documentation of actual conditions can help to detect necessary renovation measures in time. Photogrammetric measuring techniques can help to improve this process. Point clouds and oriented image blocks can be used for capturing the actual state of a structure at different points in time and therefore for monitoring its health. Modern photogrammetric sensors, which provide a lot of detail at high resolution, can be combined with artificial intelligence techniques, for example, to detect cracks or deformations on the structure (Morgenthal et al., 2021). An important step in the photogrammetric process chain is the registration of the generated data. A proper registration, either with respect to a global reference frame or to a digital model, is very important for establishing the connection between potential damages and their location in the structure. Usually the registration is carried out using control points with known coordinates in the world as well as in the object coordinate system. This well-established procedure has the advantage that high registration accuracies can be reached. On the other hand, it often requires additional measuring devices such as total stations or GNSS receivers for obtaining the coordinates of the control points. Additionally, this requires expert knowledge, and manually measuring the control points in the images is an error-prone, repetitive and time-consuming task. In order to become widely adopted by many users, it is important that the complete process, including data capturing and the actual co-registration, can be automated to as high a degree as possible. With the emergence of Structure from Motion (SfM) packages it has already become possible to reconstruct accurate 3d scenes if certain conditions, such as enough overlap between the images, are met. For further simplifying the data capturing, video frames can be used as the input data source. In (Kaiser et al., 2022) we presented a novel approach for the automated co-registration of (single) image blocks with an existing digital building model. With our ongoing research we want to improve and ease the complete workflow by using videogrammetry as the data capturing technique (Section 4) and an enhanced matching algorithm (Section 5). Section 4 discusses the various principles of image selection from video frames in the context of videogrammetric 3d reconstruction.
Also, the video processing pipeline is presented. Section 5 presents a new principal-component-based clustering method for the SfM-generated 3d lines. This method serves to reduce the number of candidates and is intended to accelerate the matching algorithms from image blocks to the BIM model. Please note that the two enhancements are theoretically independent, but are practically used in a common pipeline for the co-registration of videogrammetric point clouds with BIM.

RELATED WORK

The co-registration of photogrammetric products with digital building models is a very active research field. The rising usage of digital methods like Building Information Modeling (BIM) has accelerated this trend. In projects related to construction progress monitoring (Vincke and Vergauwen, 2020, Tuttas et al., 2017), the registration is carried out once at the beginning of the construction using a classical approach with control points. Image blocks of later points in time are then co-registered with the initial reference frame. Other works use the geometry of the digital building model for an automated co-registration. (Kim et al., 2013), for example, co-register a point cloud to a model with the help of the Iterative Closest Point algorithm. (Kropp et al., 2018) match lines extracted from video sequences with lines extracted from a building model for co-registering the image block. Plane-based registration, in contrast, is mainly used in applications related to terrestrial laser scanning (TLS). These procedures can either be used to register the single scan stations into one common reference frame (Wujanz et al., 2018) or to co-register the scan with a building model (Bosché, 2012).

EXISTING SOLUTION

As stated in the introduction, we developed a procedure that enables the co-registration of an image block consisting of single images with a digital building model (Kaiser et al., 2022). More precisely, we focused on the co-registration of indoor scenes. The basic idea of the method is to match 3d line segments that are extracted from the images with planar surfaces from the digital building model. By observing the geometric relationships between lines and planes, the required transformation parameters can be estimated. Figure 1 shows the basic steps for co-registering an image block with the building model. After the images have been captured, they are relatively oriented using Structure from Motion (SfM) algorithms. In our implementation this is done using the open source software COLMAP (Schönberger and Frahm, 2016). This step delivers the interior and exterior orientation of the images. The orientation parameters and the images are processed by Line3D++ (Hofer et al., 2017) for extracting the 3d line segments. These are defined by the coordinates of their start and end points in the SfM coordinate system. The transformation parameters
• R, the rotation matrix,
• t, the translation vector, and
• m, the scale parameter,
are determined in the adjustment stage. Each 3d line segment that is directly located on an extracted boundary surface provides two observation equations:

n · (R u) = 0    (2)
n · (m R p + t) − d = 0    (3)

where R is the rotation matrix, t the translation vector, m the scale parameter, n the normal vector of the plane, d the distance of the plane from the origin, u the direction vector of the 3d line segment, and p the midpoint of the line. Equation 2 can be used to calculate the unknown rotation from the point cloud to the BIM coordinate system, whereas Equation 3 also enables determining the translation and the scale parameters. By using a Gauß-Helmert model, the transformation parameters can be estimated.
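To make the line-on-plane conditions concrete, a small numerical sketch of the two residuals is given below. This is our own illustration, not the authors' implementation; the parameterization of the plane as a unit normal n with offset d is an assumption.

```python
import numpy as np

def line_plane_residuals(R, t, m, line, plane):
    """Residuals of one 3d line segment lying on one BIM plane.
    line: (p_mid, u) midpoint and unit direction in SfM coordinates.
    plane: (n, d) unit normal and offset, i.e. points x with n.x = d."""
    p, u = line
    n, d = plane
    r_dir = n @ (R @ u)                 # Eq. 2: transformed direction lies in the plane
    r_pos = n @ (m * (R @ p) + t) - d   # Eq. 3: transformed midpoint lies on the plane
    return r_dir, r_pos

# Toy check: identity transform, line along x inside the plane z = 0.
R, t, m = np.eye(3), np.zeros(3), 1.0
line = (np.array([0.5, 0.2, 0.0]), np.array([1.0, 0.0, 0.0]))
plane = (np.array([0.0, 0.0, 1.0]), 0.0)
print(line_plane_residuals(R, t, m, line, plane))  # (0.0, 0.0)
```

In an actual adjustment, these residuals would enter a Gauß-Helmert estimation over many line-plane pairs; the sketch only shows the per-pair conditions.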
The adjustment process only delivers correct results if the involved 3d line segments are matched to the correct planar surfaces. However, no a priori information about correct line-plane pairs is available. Since a brute-force approach (where all possible combinations of line-plane pairs would be tested) is not feasible, we developed a clustering algorithm that assigns the 3d line segments to multiple clusters. In the next step, a RANSAC-inspired (Fischler and Bolles, 1981) random assignment of the clusters' lines to the planes is performed. In total, four line-plane pairs are necessary to calculate the transformation parameters. Due to the random line-plane assignment, numerous minimal configurations have to be processed and afterwards filtered to obtain the most suitable seven transformation parameters R, t and m.

VIDEOGRAMMETRY

In recent years, and by now decades, development in the field of real-time robotics has come a long way in terms of camera-based systems. In just a few milliseconds, vehicles can recognize signs and road situations and, in some cases, react autonomously to them. A very active research focus in this context uses SfM and pursues Simultaneous Localization and Mapping (SLAM) solutions to localize a system in self-generated maps or 3d models of an environmental situation. The boundaries between these real-time applications and photogrammetric methods are now fluid, and both profit greatly from each other. Videogrammetry (VG) can, in simplified terms, be understood as an extension of photogrammetry (PG) by an intelligent image selection (IS) from the available videos. The approach of capturing and processing videos instead of photos has a number of advantages and disadvantages. The biggest disadvantage is certainly the fact that the extracted single frames usually do not have geotags, so that automatic georeferencing is not easily possible. However, this disadvantage can be solved satisfactorily in combination with pure photogrammetry. For the image selection we have many different strategies at our disposal. First of all: more than one solution exists for the image selection. If the goal is to generate a point cloud as quickly as possible, e.g. to ensure on site that the acquired data is complete and to generate a coherent, gapless 3d model, then a minimal, fast image selection would be an option. However, if the goal is to generate the densest point cloud possible, then more time can be invested in image selection, and that may result in a larger image set. Before discussing some of these strategies, we need to understand the spectrum of the data and the min-max conflict that comes with it.

Min-Max Conflict

For the 3d reconstruction of a point in SfM, there must be at least three images in which that point has been uniquely determined. Since we have a continuum of consecutive data available in the video footage, we could come up with the idea of just taking all the frames. This would give us the minimum distance between frames. Given the rule of three, we can derive the min property: the smaller the distance between the images, the more 3d points are possible. In practice, we quickly find that using all the images unfortunately leads to worse results, with fewer 3d points, than using a smaller number of images.
In order to understand the reason, we need to appreciate another important property of the SfM approach: the larger the distance between images, the more accurately 3d points can be determined, and only accurate 3d points survive later in the 3d model. If for two images A and B1 the distance between the camera centers (the baseline) is smaller, we get a larger area of uncertainty for the jointly observed point X than if we choose a larger image distance, as for images A and B2. The area of uncertainty provides an important quality attribute for the identified 3d point. Thus, in order to obtain the maximum number of 3d points for a 3d model, we need to solve the so-called min-max conflict during image selection: maximize the number of 3d points by choosing an appropriate image spacing between the min and max properties.

Image and correspondence evaluation criteria

Research in feature extraction, the reduction of all image pixels to a few relevant ones, is as old as computer vision itself. There are several solutions, e.g. Harris corners (Harris and Stephens, 1988), SIFT (Lowe, 2004) or SURF (Bay et al., 2006), and many more. All candidates for SfM need to be invariant to affine transformations like scaling, rotation, translation and mixtures of them. One of the most common feature detectors, used in various applications like object recognition, image retrieval and 3d reconstruction, is SIFT, published and patented by Lowe (2004). There are different approaches to speed it up, like SiftGPU (Wu, 2010). But in robotics, when computer vision needs to work in real time, other solutions are more common (Miksik and Mikolajczyk, 2012). Typically, a feature extracted by a detector has not only a position. In most cases, e.g. to improve the necessary matching between two feature sets extracted from two images I1 and I2, every feature has a more accurate description (Mikolajczyk and Schmid, 2005). Denoting by K_{x,y}(Ii, Ij) a normalized correlation of such descriptions at image position (x, y), it is easily comprehensible that the result for K_{x,y}(I, I) should always be 1. The following sections describe methods to measure the quality of single images or the quality of correspondences between two images, with the aim of using them in a 3d reconstruction process. Most of these methods are based on feature detection. If, e.g., a method A(Ii, Ij) is given that delivers a correlation score between images Ii and Ij, followed by the step A(K_{x,y}(Ii), K_{x,y}(Ij)), we abbreviate this to A(Ii, Ij)_{x,y}. If one image is selected as the n-th keyframe from a set {I1, I2, ..., Imax}, we denote this image by I^n. If the position i inside the image set is needed to follow the algorithm, we give both indices, I^n_i. A number of solutions are available today for solving the image selection part of the min-max conflict in real-time scenarios; let us take a look at a few representatives of these algorithms.

4.2.1 Sharpness measure

While recording video data with moving systems like UAVs, single images with the same content may differ strongly in sharpness, due to the small camera movements that are applied. A relative sharpness measure for an image I can be defined as the mean square of the horizontal and vertical derivatives:

S(I) = (1/‖I‖) Σ_p ( f_x(p)² + f_y(p)² )
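A minimal sketch of such a discretized sharpness measure, using forward differences on a normalized grayscale frame, is given below. This is our own illustration of the idea, not Nistér's exact formulation.

```python
import numpy as np

def sharpness(img):
    """Mean squared finite-difference gradient of a grayscale frame.
    Larger values indicate sharper frames; boundary pixels are skipped."""
    f = img.astype(float) / max(img.max(), 1)  # normalize (downsampling omitted)
    dx = f[:, 1:] - f[:, :-1]                  # horizontal finite differences
    dy = f[1:, :] - f[:-1, :]                  # vertical finite differences
    return float(np.mean(dx ** 2) + np.mean(dy ** 2))

# Usage idea: among neighboring candidate frames, keep the one whose
# sharpness score is (locally) maximal.
```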
In contrast, Nistér proposes a discretized, faster version using finite differences (except at the image boundaries), where ‖I‖ corresponds to the number of pixels and f describes an image function that returns pixels from downsampled and normalized image data (Nistér, 2001).

Normalized Correlation Constraint

Nistér applies a normalized correlation constraint between two images Ii and Ij to delete redundant frames (Nistér, 2001). Redundant in this case means very similar, which will be discussed later.

Distance Constraint

Nistér also checked the maximum distance of correspondences (Nistér, 2001). The correspondence ratio constraint CRC depends on the camera motion and needs to lie between the values t_low and t_high, which are not specified by the authors. Rashidi et al. experimented with scenes of different complexity and different camera motion speeds and suggested estimated values for them (Rashidi et al., 2013).

Maximum Distance Constraint

A simple method, motivated by autonomous robot navigation and proposed by Royer et al., selects images with maximum distances while there are at least M common interest points between two correlated frames (Royer et al., 2007). They always choose the first image as the first keyframe I^1_1. When n keyframes I^1, I^2, ..., I^n have been chosen, they select the next keyframe I^(n+1) as follows: (i) there are as many video frames as possible between I^n and I^(n+1), (ii) there are at least M interest points in common between I^(n+1) and I^n, and (iii) there are at least N common points between I^(n+1) and I^(n-1). The two parameters M and N are specified by the authors as M = 400 and N = 300 and were set experimentally. Royer et al. (2007) use the Harris corner detector for feature detection.

Optical-Flow-Based Motion Estimation

In 2001, Nistér used the initial step of coarse-to-fine optical-flow-based video mosaicing (Kanatani and Ohta, 1999) to obtain a global motion estimation for structure and motion (Nistér, 2001). The motivations for using this over feature-based approaches like that of Capel and Zisserman (1998) were that it works fast and also on gravely unsharp frames. Assuming a rigid world, a homographic mapping H can be derived between two images I1 and I2. An image Ii is downsampled and normalized, and the pixel at position p = (x, y) is accessible through an image function fi(p) (see Sec. 4.2.1). To estimate H, the mean square residual R between the motion-compensated images is minimized using a non-linear least squares algorithm such as Levenberg-Marquardt (Press et al., 1988).

Degeneracy Constraint

As the fundamental matrix F better describes general camera motion, the homography H better describes degenerate camera movements. The Geometric Robust Information Criterion (GRIC) introduced by Torr computes a score based on the fundamental matrix (GRIC_F) and the homography (GRIC_H) separately (Torr, 1998):

GRIC = Σ ρ(e²) + λ1 d n + λ2 k

where ρ(e²), a robust function of the residuals, is defined by

ρ(e²) = min( e²/σ², λ3 (r − d) )

and where d is the number of dimensions modeled (d = 3 for F, d = 2 for H), n the total number of features matched across the two frames, k the number of degrees of freedom (k = 7 for F, k = 8 for H), r the dimension of the data (r = 4 for 2d correspondences), σ² the assumed variance of the error, λ1 = log(r), λ2 = log(rn), and λ3 limits the residual error (Torr, 1998, Ahmed et al., 2010). Torr also uses the Harris corner detector for feature detection.

Normalized GRIC Difference Criterion

The smaller the GRIC score, the better the model.
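A sketch of the GRIC score as written above, for comparing an F model against an H model on the same correspondences, could look as follows. The residuals and constants follow the definitions in the text; the code is our illustration, not Torr's implementation.

```python
import numpy as np

def gric(residuals_sq, model="F", r=4, sigma2=1.0, lam3=2.0):
    """GRIC score for a model fitted to n correspondences (smaller is better).
    model 'F': d=3, k=7; model 'H': d=2, k=8. lam3 limits the residual term."""
    d, k = (3, 7) if model == "F" else (2, 8)
    n = len(residuals_sq)
    lam1, lam2 = np.log(r), np.log(r * n)
    rho = np.minimum(np.asarray(residuals_sq) / sigma2, lam3 * (r - d))
    return float(rho.sum() + lam1 * d * n + lam2 * k)

# A frame pair is a good keyframe candidate when
# gric(e2_F, "F") < gric(e2_H, "H"), i.e. general motion fits better
# than a degenerate (homographic) motion.
```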
If GRIC_F is better than GRIC_H, then a good candidate keyframe is indicated. The normalized GRIC difference criterion GDC was introduced by Ahmed et al. (Ahmed et al., 2010) and is defined by

GDC = (GRIC_H − GRIC_F) / GRIC_H

Point-to-Epipolar-Line Cost

The point-to-epipolar-line cost PELC is the standard geometric reconstruction error measure for F given two images Ii and Ij, and was named the Gold Standard error function by Hartley and Zisserman (Hartley and Zisserman, 2011); it accumulates the squared distances of corresponding points to their respective epipolar lines. This score depends on the chosen feature detection method. Ahmed et al. combine GDC and PELC into a weighted keyframe score, where σ is the assumed standard deviation of the error; the weights wG and wP are not specified by the authors and were set experimentally (Ahmed et al., 2010).

Shot Boundary Detection

Sometimes uncorrelated frame sequences are produced while recording videos. This can happen if the frame rate is very low and a large camera motion becomes somewhat arbitrary, or if the camera has been stopped and then started again at a new position. Shot boundaries are detected by evaluating the correlation between adjacent frames after global motion compensation (Sec. 4.2.6) (Nistér, 2001). The threshold for the normalized correlation constraint is set by the authors to T_SB = 0.75.

Videogrammetry in Archaeo3D

In our experience with recording data while moving, videogrammetry is the more fault-tolerant, more cost-effective and easier-to-use approach. The software JKeyFramer, an automatic keyframe selection tool, was one of the most important outcomes of the Archaeocopter project. This tool uses the presented videogrammetric methods for image selection, combines them depending on the objective, and was at the time an important step towards fast 3d reconstruction. Meanwhile, it has evolved to allow us to render fast preview models on site. Within the scope of the Archaeocopter project, the semi-automatic software Archaeo3D was developed to optimize and control the complete reconstruction process. Videos and photos are automatically imported and processed. The software is able to reorder or change the pipeline modules and adjust the parameters according to the current hardware and the actual recording situation and complexity. A combination of VisualSFM, COLMAP, CMPMVS and Meshroom provided the backbone of the processing toolchain in all Archaeocopter-related projects. The Archaeo3D reconstruction pipeline includes the following processing steps and software packages:

9. SGM, surface fitting (Poisson reconstruction (Kazhdan et al., 2006), CMPMVS (Jancosek and Pajdla, 2011), Meshroom, OpenMVS)
10. Producing orthoimages (CMPMVS)
11. Georeferencing, mesh cleaning (MeshLab (Cignoni et al., 2008))
12. Integrating data into GIS (QGIS)

Additional software components, like JUndistortion for automatic camera calibration and JKeyFramer for automatic keyframe selection, were developed and integrated. The pipeline automatically shifts processing toward CPU or GPU, depending on the hardware on which Archaeo3D is running. The number of parallel processing jobs is chosen according to the available system memory. While reprocessing old data and preparing new recording campaigns, we also made progress, both in terms of reliability and quality of 3d results, by preparing our software packages JKeyframer, JUndistortion, JResizer, JFeatureManager and JEnhancer, and releasing them one by one as freely available software tools.
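The shot boundary rule described above lends itself to a compact sketch: after global motion compensation, adjacent frames are compared with a normalized correlation and flagged when the score falls below T_SB = 0.75. The exact correlation measure used by Nistér may differ; this is our illustration.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized grayscale frames."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def is_shot_boundary(prev_frame, frame, t_sb=0.75):
    """Flag a shot boundary when adjacent, globally motion-compensated
    frames decorrelate below the threshold T_SB."""
    return ncc(prev_frame, frame) < t_sb
```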
The georeferencing step, which follows the 3d reconstruction process, is important because 3d models without spatial reference or scale are of limited scientific value. In the Archaeo3D workflow, the free software package QGIS fulfills this task. As an alternative, the point cloud can also be georeferenced in VisualSFM. Our Archaeo3D pipeline allows us to produce preview point clouds and rapidly examine them on site, with the benefit of validating the results immediately. The final reconstruction with Archaeo3D off site, with more powerful computing equipment, produces more detailed results. This technique was first used during the campaign in Tamtoc, Mexico, in 2013 (Block et al., 2015). Initially, a number of point clouds of a Huastec settlement site were produced, computed and validated on site, and afterwards the complete 3d model was produced in the computer lab of the HTW Dresden. We are currently working to integrate some parts of this pipeline (such as keyframe extraction or image undistortion and enhancement) into our BIM co-registration process.

ENHANCED MATCHING ALGORITHM

As shown in our previous work, the developed co-registration procedure is able to deliver registration accuracies in the range of 3-5 cm. The crucial point of the whole process is the creation of correct line-plane pairs. When using manually assigned line-plane pairs, it could be shown that even better registration accuracies can be reached. This can be explained by the user's scene understanding: when choosing the segments manually, longer and therefore more stable 3d line segments can be selected. Besides that, the distribution of the selected 3d lines can be more balanced, so that ideally line segments are chosen from the entire scene. This also delivers more reliable transformation parameters. Consequently, a reliable classification of the 3d line segments into spatially coherent clusters is of great importance for the automated line-plane matching, with the overall aim of obtaining better line-plane pairs. Since SfM reconstructions have no absolute scale without further information (e.g. from control points), the clustering is a challenging task, because no metric threshold values can be used. As rotations are scale invariant, the direction vectors of the 3d line segments play an important role in this context. The existing solution uses a clustering approach based on established plane hypotheses, or rather normal vector hypotheses. For improving the matching algorithm, we are currently following another approach. Figure 3 shows our test data set covering an indoor scene. Since the built environment in large parts follows a Manhattan world (Coughlan and Yuille, 2000), we can calculate the main axes of the reconstructed scene in the point cloud coordinate system by applying a principal component analysis to the direction vectors of the 3d line segments. After finding the main axes, the 3d line segments that are parallel to the main axes are determined using the dot product (see Figure 4). In the next step, the distances from the main axes to the midpoints of all non-parallel lines are calculated and stored in a list. This list is classified using the Jenks natural breaks algorithm (Jenks and Caspall, 1971). This clustering algorithm, which is applicable to one-dimensional data, tries to group the entries in such a way that the variance of the data points inside a group is minimized, whereas the variance between the groups is maximized.
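A minimal sketch of the main-axis estimation via PCA on the line direction vectors is shown below. It is our own illustration; the parallelism threshold and the handling of direction sign ambiguity are assumptions.

```python
import numpy as np

def main_axes(directions, cos_thresh=0.99):
    """Estimate scene main axes from direction vectors of 3d line segments.
    Returns the three principal axes and a mask of segments parallel to any axis."""
    D = np.asarray(directions, dtype=float)
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    # The scatter matrix D^T D is invariant to the sign of each direction,
    # so the right singular vectors of D give the dominant axes.
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    axes = Vt  # rows: the three principal axes
    # A segment counts as 'parallel' when |dot| with some axis is close to 1.
    cosines = np.abs(D @ axes.T)
    parallel = cosines.max(axis=1) >= cos_thresh
    return axes, parallel
```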
An important characteristic of the Jenks algorithm is that it is necessary to specify the number of clusters before running the algorithm. By default, we set the number of clusters to six (nc = 6). However, using the Jenks algorithm it is possible to calculate the goodness of variance fit (GVF), ranging from 0 (indicating a bad fit) to 1 (indicating a good fit), which is a quality measure for the evaluation of the clustering. To this end, the sum of squared deviations from the array mean (SDAM) and the sum of squared deviations from the class means (SDCM) need to be calculated for the Jenks clusters:

SDAM = Σ_{x ∈ L} (x − µ)²

where L is the list of values to cluster, x represents a single value in L and µ is the mean of L, and

SDCM = Σ_{i=1}^{nc} Σ_{x ∈ Ci} (x − µi)²

where nc is the number of clusters, x represents a single value in cluster Ci and µi is the mean of cluster i. The quality measure is then

GVF = (SDAM − SDCM) / SDAM

Using the quality measure GVF, we increase the number of clusters until GVF ≥ 0.995. As a result (see Figure 5), we obtain six clusters roughly corresponding to the six main bounding surfaces of the room. After establishing the clusters, the remaining procedure is quite similar to the existing one. We first randomly select three lines from three different clusters. The fourth line is chosen from one randomly chosen cluster that lies opposite one of the used clusters. In total we thus have four lines that are matched to all possible sets of four different BIM planes. This process is repeated a fixed number of times, depending among other things on the present room geometry, and the resulting minimal configurations are further processed during the adjustment calculation.

CONCLUSION AND OUTLOOK

In this article we presented two extensions for the co-registration of image blocks with BIM. For videogrammetric measurements, procedures for optimized image selection were discussed, and an overview of the video processing up to the dense point cloud was given. After that, we introduced an improved matching algorithm for matching 3d lines (from images) to 3d planes (from BIM). With the new cluster approach, the number of possible matching candidates is reduced, which speeds up the computing time. The approaches must now be tested further with more complex data. Also, we are currently developing a web service and user interface so that the pipeline can be accessed online.
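The GVF computation and the cluster-count search are straightforward to sketch. The loop below uses a placeholder partition_fn for the Jenks partitioning itself (e.g. as provided by a library such as jenkspy) and is our illustration of the described procedure, not the authors' code.

```python
import numpy as np

def gvf(values, clusters):
    """Goodness of variance fit for a partition of 1d values into clusters.
    clusters: list of 1d arrays that together form a partition of `values`."""
    values = np.asarray(values, dtype=float)
    sdam = np.sum((values - values.mean()) ** 2)                  # array-mean deviations
    sdcm = sum(np.sum((np.asarray(c) - np.mean(c)) ** 2) for c in clusters)
    return (sdam - sdcm) / sdam

def choose_cluster_count(values, partition_fn, nc=6, target=0.995):
    """Increase the cluster count until GVF >= target, as described in the text.
    partition_fn(values, nc) must return the Jenks partition as a list of arrays."""
    while nc < len(values) and gvf(values, partition_fn(values, nc)) < target:
        nc += 1
    return nc
```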
Reuse of discarded deactivated bleaching earths in the bleaching of oils

Discarded bleaching earth was used, after its reactivation, to bleach sunflower, soybean and corn oils. The efficiency of the reactivated bleaching earth was compared with that of the virgin activated earth. The acid reactivated earth (pH 2.5-3) had a slightly higher silicon content than the virgin or the neutral reactivated earth. The best results for the color of sunflower and corn oils were obtained when dosages of 1 and 2% of neutral reactivated earth (pH 6-7) were used. The acid reactivated earth, used at 2%, achieved a greater reduction of the color of soybean oil than the same dose of virgin earth (pH 3). Both reactivated earths reduced the peroxide value, iron, conjugated dienes and soap contents of the oils, while they increased the acidity and the conjugated trienes. Moreover, these reactivated earths caused greater decreases in the induction periods of the oils than the virgin earth. The reactivated earths could be used for 5 cycles in the bleaching of soybean and sunflower oils and for more than 6 cycles with sunflower oil.

INTRODUCTION

Bleaching is recognized as one of the most important steps in edible oil processing (Mag, 1990; Micheal et al., 1992). According to Brimberg (1982), Young (1987), Abul Kalam and Joshi (1988), and Topallar (1998), bleaching is used daily in refining practice and aims not only at the removal of coloring bodies, pigments, etc., but also at the removal of residual amounts of phospholipids, mucilage, oxidized tri- or partial acylglycerols, metal traces in ionizable and non-ionizable (complexed) forms, and soap traces which survived the washing of the neutralized oil. Waldmann and Eggers (1991) reported that montmorillonite is the starting material for all activated bleaching clays, while Segers (1992) noted that the bulk of natural bleaching earths consists of complex silicates with aluminum ions. Kaufmann (1968) and Hoffmann (1989) reported that bleaching earths are subdivided into two groups: the naturally active clays form one group, and the highly active clays form another. Waldmann and Eggers (1991) mentioned that, to make clay suitable for bleaching purposes, montmorillonite is subjected to an acid treatment, which replaces cations by protons and partially dissolves the original crystal structure. On the other hand, many other processes for the regeneration of bleaching earth have been patented (Abul Kalam and Joshi, 1988). Anderson (1996) suggested that spent clay can be used as an asphalt additive, as a replacement for plastic parts in refractories, in cement manufacturing, and as a soil stabilizer and road foundation material. In Egypt, spent bleaching earth is not used in any manufacturing, so this discarded material causes problems for the environment.
According to the Chamber of Food Industries, Egyptian Industries Federation (The Egyptian Industry Ministry, 2002a), the production volume of oils and fats in 2001/2002 was about 650,000 tons. On the other hand, all activated bleaching earths are imported to Egypt from China, Germany and Indonesia. The percentage of activated bleaching earth used for oil bleaching ranges from 1 to 2% of the oil weight. Therefore, the amounts of activated bleaching earth imported to Egypt ranged from 6,500 to 13,000 tons in 2001/2002 for bleaching the entire 650,000 tons. The price of imported activated bleaching earth is about $500.00 per ton (The Egyptian Industry Ministry, 2002b). Hence, if discarded deactivated bleaching earth can be reused (after reactivation) for oil bleaching, it will reduce the imported amount of activated bleaching earth and it will also reduce the production cost of oils by about 3.25-6.5 million dollars per year. Moreover, the reactivation cost of spent bleaching earth in this study is very low.

The purpose of the present research was to reuse spent bleaching earth (discarded deactivated bleaching earth), after its reactivation, in the oil bleaching process, in order to reduce the production cost of oils and to reduce the environmental pollution from this waste by reusing it in a beneficial manner, while at the same time reducing the amounts of activated bleaching earth imported to Egypt.

Materials

Neutralized sunflower and soybean oils were obtained from the Cairo Oils and Soaps Company (Aiat factory, Giza, Egypt), while neutralized corn oil was supplied by the Egyptian Company for Starch and Glucose (Cairo, Egypt). These oils were used as substrates in the bleaching tests. Virgin activated bleaching earth (as a reference) was obtained from the Cairo Oils and Soaps Company (Aiat factory, Giza, Egypt); it had been imported from Germany (Table I). A discarded deactivated bleaching earth was also taken from the Cairo Oils and Soaps Company (Aiat factory, Giza, Egypt); it contained 40% oil and 10.5% moisture and had a repulsive odor. n-Hexane (60-80 °C) and some materials used for the reactivation of the discarded deactivated bleaching earth were obtained from the EL-Gomhoria Company for Pharmaceuticals (Cairo, Egypt).

Methods

Determination of the physical and chemical properties of the oils used in this study. Acidity (%, as oleic acid), peroxide value (eq. O2/kg oil), iron (ppm), phosphorus (ppm) and soap (ppm) in sunflower, soybean and corn oils were determined according to the methods described in the A.O.C.S. (1997), while conjugated dienes and trienes (absorbance E 1% 1 cm at 232 and 270 nm, respectively) were estimated according to the methods found in the FAO/WHO (1970), using a UV-Vis spectrophotometer, Model Labomed 120-02. The color of the oils was measured with a Lovibond tintometer, Model E, using a 5.25 inch cell, following the method reported in the A.O.C.S. (1993). The percent color reduction was calculated using the equation of Krishnan (1975). The moisture content of both virgin activated and reactivated bleaching earths was determined using an electric oven at 105 °C for 3 hrs, while their pH values were estimated using a pH meter HI-9321 (Hanna Instruments) according to the method described by Anthony and Ogugua (1988).

Table I. Effect of reactivation of discarded deactivated bleaching earth on its chemical composition.
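The equation of Krishnan (1975) is not reproduced in the text; assuming it is the usual relative reduction of the color reading before and after bleaching (an assumption on our part), the percent color reduction could be computed as follows, with hypothetical readings:

```python
def percent_color_reduction(color_before, color_after):
    """Relative color reduction in percent (assumed standard form; the exact
    equation of Krishnan (1975) is not reproduced in the text)."""
    return 100.0 * (color_before - color_after) / color_before

# Hypothetical example: a total color of 68 reduced to 12 gives ~82.4%,
# the same order as the reductions reported later for sunflower oil.
print(percent_color_reduction(68.0, 12.0))  # ~82.35
```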
Determination of the oxidative stability of the bleached oils. The oxidative stability of the bleached oils was determined according to the method described by Tsaknis et al. (1999), using a Metrohm Rancimat 679 at 100 °C with an air flow of 20 L/h.

Reactivation of discarded deactivated bleaching earth. About 500 g of spent bleaching earth was milled and soaked in 2 liters of n-hexane (60-80 °C) for 24 h at room temperature to remove the residual oil and thus obtain defatted spent bleaching earth. The defatted deactivated bleaching earth was reactivated using several procedures (Patent, 2002). After that, the resultant reactivated bleaching earth was either acidic or neutral. The reactivated bleaching earth was dried in an electric oven at 105 °C to a moisture content of about 10 ± 1% and pulverized to a fine powder with pestle and mortar to its original particle size (about 85% of the reactivated bleaching earth passing through a 74 micron sieve, 200 mesh). The reactivated bleaching earths were analyzed and tested for bleachability.

The bleachability test for reactivated bleaching earth. The reactivated bleaching earth was tested for bleachability using the method described by Rolando (1991), with some modifications, as follows: different dosages of reactivated bleaching earth (1.0, 1.5 and 2.0% of the oil weight) were separately added to each of the neutralized sunflower, soybean and corn oils at a temperature of 95 ± 1 °C, and the mixture was stirred at 250 rpm for 30 min under vacuum (25-30 mm Hg) in a rotary evaporator. After that, the mixture (the hot oil and reactivated bleaching earth) was cooled to 50 ± 1 °C and filtered through filter paper (Whatman No. 1) to separate the clay from the bleached oil and, at the same time, to recover this discarded material for reuse in oil bleaching after its reactivation. The bleached oils were analyzed for their physical and chemical properties. Virgin activated bleaching earth was used for bleaching the oils (as a reference) according to the method described above.

RESULTS AND DISCUSSION

Bleaching clay performs not only color removal, but also the removal of trace metals, the adsorption of phospholipids and soap, and the decomposition of oxidation products such as peroxides (Nnadozie et al., 1989; Hui, 1996).
Effect of reactivation of discarded deactivated bleaching earth on its chemical composition

Bleaching clay is a general term for a clay-like substance: water-containing aluminosilicates with various proportions of magnesium, calcium and iron (Waldmann and Eggers, 1991). The effect of activation on the chemical components of the reactivated bleaching earth is evident from the results presented in Table I. It can be noted that the acid reactivated bleaching earth (pH 2.5-3) had a slightly higher amount of silicon and a lower content of aluminum (68.8 and 16.2%, respectively) than the neutralized reactivated bleaching earth (pH 6-7), which had 66.9 and 17.3%, respectively. These differences are closely related to the variation in pH value. These results are in line with those found by Kheok and Lim (1982), who reported that the initial increase in bleaching ability with increasing sulfuric acid addition is due to net charges in the clay. The results also showed that silicon was the main mineral present, followed by aluminum, for both virgin activated and reactivated bleaching earths, although at different levels. Furthermore, the contents of other minerals in the acid reactivated bleaching earth (pH = 2.5-3) were roughly equal to those obtained in the virgin bleaching earth (pH = 3), but slightly higher than those found in the neutralized reactivated bleaching earth (pH = 6-7). These data agree with those recorded by Kheok and Lim (1982), who pointed out that the activation of montmorillonite with mineral acids dissolves some impurities such as aluminum, ferrous and ferric iron, and magnesium. The other parameters, pertaining to moisture, pH value and particle size of the reactivated bleaching earth, are similar to those found in virgin activated bleaching earth. According to Hoffmann (1989), the moisture content of bleaching earth is generally 10%.

Effect of pH value of virgin activated and reactivated bleaching earths on the color of bleached oils

Figure 1 and Table II give the changes in color and total color of sunflower, soybean and corn oils treated separately with either virgin activated or reactivated bleaching earths (which had different pH values). From these data, it can be seen that the bleaching of sunflower oil with neutralized reactivated bleaching earth (pH = 6-7) produced a reduction of 82.4% in the total color, compared with reductions of 74.4 and 74.5% obtained with acid reactivated bleaching earth (pH = 2.5-3) and virgin activated bleaching earth (pH = 3), respectively. These values are similar to those mentioned by Hoffmann (1989). The data also showed that the acid reactivated bleaching earth (pH = 2.5-3) produced a greater reduction in the color of soybean oil than the neutralized reactivated bleaching earth (pH = 6-7). These differences are perhaps related to the fact that acid reactivated bleaching earth is more effective than neutralized reactivated bleaching earth in reducing the color of an oil containing a high content of chlorophyll. These data are in agreement with the findings of Hoffmann (1989), who cited that oils with a high content of chlorophyll treated with acid activated bleaching earth have a much lower residual color than those treated with an earth of natural activity.
According to Loncin (1970) and Hui (1996), a 5-20% reduction in the red color of soybean, canola and palm oils occurs during bleaching with acid activated bleaching earth, as chlorophyll and carotene are broken down. The increase in the adsorption activity of clays after acid treatment is explained by the weakening of the Si-O bonds in the clay structure.

The results displayed in Table II show that neutralized reactivated bleaching earth (pH 6-7) produced a high reduction of about 33.7% in the total color of corn oil during bleaching, compared to acid reactivated bleaching earth (pH 2.5-3), which gave a lower reduction in total color (24.2%) at the same dosage (2%). These results are similar to those recorded by Boki et al. (1991) and Hui (1996), who revealed that differences in the physical and chemical properties of activated carbons cause differences in color-reducing activity.

From the aforementioned results, it is clear that neutralized reactivated bleaching earth (pH 6-7) was more effective in reducing the color of sunflower and corn oils than acid reactivated bleaching earth (pH 2.5-3). On the other hand, acid reactivated bleaching earth (pH 2.5-3) was the most effective in reducing the color of soybean oil, so the pH value of reactivated bleaching earth plays an important role in bleaching efficiency.

Effect of dosages of virgin activated and reactivated bleaching earths on color of the bleached oils

The quantity of activated bleaching earth required for oil bleaching depends on the quality of the oil, the activity of the earth and the process conditions, and is normally between 0.5 and 2.0 per cent of the oil weight (Brimberg, 1982; Hoffmann, 1989). From the data listed in Table III, it can be seen that the highest reduction in the total color of sunflower oil was observed after bleaching with 2% neutral reactivated bleaching earth (pH 6-7). On the other hand, 1% of neutral reactivated bleaching earth (pH 6-7) produced a reduction in total color (total color = 12.5) comparable to that obtained with 2% of the same bleaching earth (total color = 8.0). These results are better than those obtained with either acid reactivated bleaching earth (pH 2.5-3) or virgin activated bleaching earth (pH 3) at the same dosages. It is also clear that 2% acid reactivated bleaching earth (pH 2.5-3) used for the bleaching of soybean oil produced a greater reduction in total color (total color = 60.0) than 1% of the same earth (total color = 78.0). These reductions were similar to those obtained with the same dose levels of virgin activated bleaching earth (pH 3), and better than those obtained with neutral reactivated bleaching earth (pH 6-7) at the same dosages. The color of corn oil showed a greater reduction after bleaching with 2% neutral reactivated bleaching earth (pH 6-7) than with the same earth at the lower dosage (1%). These results are in agreement with those found by Kheok and Lim (1982) and Nnadozie et al. (1989), who reported that an increase in clay dosage would be expected to lead to an increase in color reduction.
It can be concluded that for sunflower oil, about 1% neutral reactivated bleaching earth (pH 6-7) was generally the best dose, giving a high reduction in color, while for soybean oil, 2% acid reactivated bleaching earth (pH 2.5-3) was the best dose to achieve a light color. For corn oil, 2% neutral reactivated bleaching earth (pH 6-7) was the best dosage to provide a high reduction in color.

Effect of the bleaching with reactivated bleaching earth on some properties of the oils used

Efficient bleaching requires measurements such as red color, peroxide value, free fatty acid content, iron concentration, and conjugation values (Mag, 1990). The changes in the levels of acidity, peroxide value, iron, phosphorus, conjugated dienes and trienes, and soap of sunflower, soybean and corn oils before and after bleaching (with either virgin activated bleaching earth or reactivated bleaching earth) are shown in Table IV. From these results, it can be concluded that the acidity values of sunflower, soybean and corn oils treated with virgin activated bleaching earth (pH 3) undergo a slight increase. This increase also occurs in soybean oil after bleaching with acid reactivated bleaching earth (pH 2.5-3). These results agree with those reported by Morgan et al. (1985) and Hoffmann (1989), who reported that free fatty acids increase in oil bleached with acid bleaching earth. On the other hand, no changes in the acidity values of sunflower and corn oils were observed before and after bleaching with neutralized reactivated bleaching earth (pH 6-7). This finding is in agreement with that of Hong et al. (2000), who revealed that acid-treated soy hull carbon reduced free fatty acid adsorption relative to non-activated soy hull carbon. In contrast, high reductions were found in the peroxide values of the above oils after bleaching with all kinds of bleaching earths. These data agree with those presented by Young (1987), Boki et al. (1989) and Mag (1990), who showed that oxidation levels are reduced by the breakdown of hydroperoxide primary oxidation products on adsorbent surfaces such as bleaching earth. Boki et al. (1989) suggested that the decrease in peroxide value is due to the decomposition of peroxides by the strong acid sites on the surface of the bleaching earth.

Table IV also shows that the differences in the pH values of the reactivated bleaching earths had only a minor effect on the soap content of the bleached oils, although the more acidic reactivated bleaching earth produced larger decreases in soap content. These findings are in agreement with those stated by Hoffmann (1989) and Anderson (1996), who indicated that highly acidic activated bleaching earth causes splitting of the adsorbed soap.

As for the iron content of the above oils after bleaching, it is clear that bleaching with either virgin activated or reactivated bleaching earths reduced the levels of iron in all the bleached oils. These values are in line with The Egyptian Standard Specifications (1993), which recommend that the level of iron in refined, bleached and deodorized oil must be less than 1.5 ppm. Iron is the major pro-oxidant metal to be removed from oils in the course of the refining process (Kheok and Lim, 1982; Mag, 1990). Results obtained by Tan et al.
(1985) showed that traces of iron and some other metallic contaminants greatly favor color development in some fats, and certain pigments are very refractory to ordinary refining and bleaching treatments. A similar effect was observed for the aforementioned oils after bleaching (with virgin activated or reactivated bleaching earths), in which the phosphorus contents declined. These data agree with those expressed by Ostric et al. (1980) and Kheok and Lim (1982), who reported that oil treated with bleaching clay tends to have a lower phosphorus content.

With regard to the E 1%/1 cm (specific extinction) values of the bleached oil samples, it is clear that bleaching reduced the values for conjugated dienes while slightly increasing the conjugated triene contents of the bleached oils. These data are consistent with those reported by Morrison (1975), Hoffmann (1989) and Topallar (1998), who cited that bleaching increases the level of conjugated trienes and reduces the content of conjugated dienes. Overall, all the results tabulated in Table IV agree with those established by Topallar (1998), who stated that bleaching clay performs not only color removal but also removes trace metals, adsorbs phospholipids and soaps, and decomposes oxidation products such as peroxides.

From the above findings, it can be concluded that the properties of the oil samples bleached with reactivated bleaching earth are similar to those obtained using virgin activated bleaching earth.

Impact of the bleaching with virgin activated and reactivated bleaching earths on the Rancimat induction period of the bleached oils at 100 °C

Table V shows the effect of bleaching with either virgin activated or reactivated bleaching earths on the Rancimat induction period of sunflower, soybean and corn oils at 100 °C. It is clear that the induction periods of neutralized sunflower, soybean and corn oils dropped after bleaching. These reductions may be due to the fact that some antioxidants were adsorbed (removed) onto the surface of the bleaching earth. These results are in agreement with those found by Ostric et al. (1980), Nnadozie et al. (1989), Micheal and Ibrahim (1991), and Hui (1996), who reported that the bleaching process decreases the oxidative stability of oil.

No large differences were observed in the induction periods of either sunflower or corn oils treated with neutralized reactivated bleaching earth (pH 6-7) or acid reactivated bleaching earth (pH 2.5-3) compared with virgin activated bleaching earth (pH 3) (as a reference).

As for the stability of soybean oil treated with acid reactivated bleaching earth (pH 2.5-3), it was somewhat higher than that found in the same oil bleached with neutralized reactivated bleaching earth (pH 6-7). These differences are closely attributed to acid reactivated bleaching earth producing a higher adsorption of chlorophyllic compounds than neutralized reactivated bleaching earth. These results are similar to those shown by Hoffmann (1989), who cited that oil treated with acid activated bleaching earth has a much lower residual color (chlorophyllic and carotenoid) than oil treated with an earth of natural activity. In addition, Coultate (1989) recommended that chlorophyllic compounds be removed from oil to avoid its rapid oxidation in the presence of light.
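The E 1%/1 cm values discussed above follow the standard specific-extinction definition, which is assumed here since the text does not spell it out: absorbance normalized to a 1% (w/v) solution in a 1 cm cell, with conjugated dienes and trienes conventionally read at about 232 and 268 nm. A minimal sketch with a hypothetical reading:

```python
def specific_extinction(absorbance: float, conc_g_per_100ml: float, path_cm: float = 1.0) -> float:
    """E 1%/1 cm: absorbance normalized to a 1% (w/v) solution and a 1 cm cell."""
    return absorbance / (conc_g_per_100ml * path_cm)

# Hypothetical reading: A = 0.35 for a 1.0 g/100 mL oil solution at 232 nm
print(specific_extinction(0.35, 1.0))  # -> 0.35 (conjugated diene value)
```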
Effect of reusing discarded deactivated bleaching earth after its reactivation on the bleaching efficiency of the oils used

Table VI shows the effect of the reactivation of discarded deactivated bleaching earth on the bleaching efficiency for sunflower, soybean and corn oils. The results indicate that the reactivated bleaching earth, used at the same dosages given in Table III, can be reused for the bleaching of sunflower oil for more than six cycles, but for only five cycles for the bleaching of soybean and corn oils. This limitation is due to the red color of the bleached soybean and corn oils (7.2 and 7.1, respectively), which exceeded the limit recommended by The Egyptian Standards Specifications (1993), stipulating that the Lovibond red color of bleached oil must be less than 7.0 in a 5.25-inch cell. These results agree with those reported by Abul Kalam and Joshi (1988), who found that the degree of regeneration of spent earth decreased with the number of cycles.

Hence, from the aforementioned data, it can be concluded that reactivated bleaching earth can be used for the bleaching of oils until the Lovibond red color of the bleached oil reaches 7.0. It can therefore be recommended that reactivated bleaching earth be used for 5 bleaching cycles for soybean and corn oils and for more than 6 cycles for sunflower oil (note that the recycled bleaching earth must be reactivated after each use). Under these conditions a high color reduction was maintained.

CONCLUSIONS

It can be concluded that reactivated bleaching earth is suitable for the bleaching of sunflower, soybean and corn oils, giving results similar to those obtained with virgin activated bleaching earth, with a large color reduction and high stability in the bleached oils. Therefore, it can be recommended that discarded deactivated bleaching earth be reused, after reactivation, for the bleaching of oils.

Figure 1. Effect of pH value of reactivated bleaching earth on the total color of sunflower, soybean and corn oils.

Table V. Effect of bleaching with virgin activated and reactivated bleaching earths on the Rancimat induction period of the bleached oils at 100 °C. (a) Bleached using 1% of either virgin activated or reactivated bleaching earths. (b) Bleached using 2% of either virgin activated or reactivated bleaching earths. Conditions as in Table
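As a small worked illustration of the reuse criterion above (the per-cycle red color readings are hypothetical; the 7.0 limit is from the text):

```python
LOVIBOND_RED_LIMIT = 7.0  # Egyptian Standards Specifications (1993), 5.25-inch cell

def usable_cycles(red_by_cycle):
    """Number of consecutive reuse cycles before the red color reaches the limit."""
    n = 0
    for red in red_by_cycle:
        if red >= LOVIBOND_RED_LIMIT:
            break
        n += 1
    return n

# Hypothetical per-cycle red readings for a soybean-oil series
print(usable_cycles([3.1, 3.8, 4.6, 5.5, 6.4, 7.2]))  # -> 5 cycles
```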
Stem cells for end stage liver disease: How far have we got?

End stage liver disease (ESLD) is a health problem worldwide. Liver transplantation is currently the only effective therapy, but its many drawbacks include a shortage of donors, operative damage, risk of rejection and, in some cases, recidivism of the pre-transplant disease. These factors account for the recent growing interest in regenerative medicine. Experiments have sought to identify an optimal source of stem cells, sufficient to generate large amounts of hepatocytes to be used in bioartificial livers or injected in vivo to repair the diseased organ. This update aims to give non-stem cell specialists an overview of the results obtained to date in this fascinating field of biomedical research.

INTRODUCTION

A stem cell is an undifferentiated cell capable of renewing itself throughout its life and of generating one or more types of differentiated cells. While embryonic stem cells (ESCs) are the only ones to be totipotential, adult tissues with high cellular turnover (e.g. skin, gut mucosa and bone marrow) retain a population of stem cells with restricted differentiation potential that constantly supply the tissue with new cells (Figure 1).

End stage liver disease (ESLD) is the final stage of acute or chronic liver damage and is irreversibly associated with liver failure. ESLD can develop rapidly, over days or weeks (acute and sub-acute liver failure, respectively), or gradually, over months or years (chronic liver failure) [1]. Currently, liver transplantation is the most effective therapy for patients with ESLD [2]. However, its potential benefits are hampered by many drawbacks, such as the relative shortage of donors, operative risk, post-transplant rejection, recidivism of the pre-existing liver disease, and high costs. In this scenario, stem cell therapy sounds particularly attractive for its potential to support tissue regeneration with minimally invasive procedures and few complications. This field of research, the ground from which the new discipline of "regenerative medicine" has germinated, has developed rapidly in recent years, arousing great interest among scientists and physicians, frequently appearing in newspaper headlines touting miracle cures, but raising ethical controversies as well [3]. The most debated issue pertains to the use of human ESCs, as it implies, with current technologies, the destruction of human embryos. Opponents argue that ESC research represents a slippery slope to reproductive cloning and can fundamentally devalue human life. Conversely, supporters argue that such research should be pursued because the resultant treatments could have significant medical potential. It is also noted that excess embryos created for in vitro fertilization could be donated with consent and used for the research [4]. The ensuing debate has prompted authorities around the world to seek regulatory frameworks and has highlighted the fact that stem cell research represents a social and ethical challenge. Thus, current legislation on ESC use varies widely, with some countries being more permissive (such as the UK, the Netherlands, Spain and France).

How might stem cells help?

An ongoing debate involves the mechanisms by which stem cells might restore the function of a diseased organ.
While some research groups support the hypothesis of stem cell integration into the tissue through "transdifferentiation" or "fusion" with resident parenchymal cells, others favour stem cells helping local cells through the production of soluble factors.

How might stem cells be implanted?

The route of stem cell administration to a diseased organ varies widely across studies, from local (direct vascular delivery) to peripheral (injection into a peripheral vein). Moreover, attempts have been made to increase the number of circulating stem cells by administering growth factors. Which route is best is still unclear, and further studies are needed to resolve these doubts.

What is the transferability of the data obtained from animal models to human disease?

Most data come from experiments performed in rodents, in which an organ is injured, either chemically or surgically, to study the effect of subsequent stem cell administration. Whilst animal studies are quite numerous, human use of stem cells is still far from everyday practice, particularly in the setting of ESLD. The translation of animal data to human disease has to be taken with great caution, and the validation of basic investigations still requires further extensive research.

How far are we with stem cell purity and function and the stability of their products?

The techniques for both ESC line isolation and adult stem cell separation from tissues need to be refined, since separation from contaminating stromal components is still not optimal. Moreover, although mature cells have been obtained by stem cell transdifferentiation in vitro, their ability to express the entire repertoire of specific biological functions and to maintain them over time has not yet been clearly demonstrated.

LIVER REGENERATION

Under physiological conditions, the liver does not need any external source of cells to repair injury, as resting hepatocytes have the ability to re-enter the cell cycle rapidly and efficiently after an injury has occurred. Nevertheless, in persistent liver injury, as is the case with chronic liver diseases in humans, the sustained proliferative stress prematurely ages the hepatocytes and exhausts their ability to replicate. In this context, hepatic progenitor cells (or "oval cells", as they are called in rodents, where they were first described) appear as a rich population of small round cells spreading from the periportal area into the parenchyma [5]. Oval cells have been demonstrated to be bipotential progenitors able to generate both hepatocytes and biliary cells [6][7][8]. They are thought to reside in the terminal branches of the intrahepatic biliary tree (e.g. the canals of Hering) [8] and support liver regeneration when hepatocyte proliferation is ineffective in absolute or relative terms. Rodent oval cells have proved effective in repopulating the diseased liver, but a clearly positive effect on liver function has yet to be fully demonstrated. By contrast, there is evidence that, as bipotential progenitors, oval cells can give rise to both hepatocellular carcinoma and cholangiocarcinoma [9]. The lack of an exclusive oval cell marker makes this cell population elusive, and this has aroused much speculation. Some years ago, the finding of the CD34 and Sca-1 hematopoietic markers on oval cells gave rise to the theory of active trafficking of stem cells between the bone marrow (BM) and the liver and a potential involvement of the BM in liver regeneration during injury [10].
Although extremely attractive, this hypothesis is the topic of ongoing debate (Figure 2).

BONE MARROW-DERIVED STEM CELLS

All the experimental strategies and conceptual paradigms applicable to stem cells in general were initially defined in haematopoietic stem cells (HSCs) residing in the BM. Not being at the top of the stem cell hierarchy, HSCs were initially thought to possess a restricted differentiation potential and therefore to be able to generate only cells of the haematopoietic system. This theory was questioned after studies in BM-transplanted patients demonstrated the presence of donor-derived epithelial cells in some extra-haematological tissues, including the liver [11]. The hypothesis of a "germ layer-unrestricted plasticity" of HSCs rapidly captured the attention of investigators interested in regenerative medicine. There are several potential advantages of using adult rather than embryonic stem cells to regenerate tissues, including fewer ethical concerns, better known biological behaviour, easier accessibility and, therefore, lower costs.

Both rodent and human HSCs have been induced to differentiate into hepatocytes in vitro. Most of the protocols to induce CD34+ HSC differentiation into hepatocytes employed growth media conditioned with growth factors and mitogens [e.g. hepatocyte growth factor (HGF), fibroblast growth factor (FGF) and oncostatin M] and culture layers specific for hepatocyte growth, like matrigel. To reproduce the pathophysiological conditions of liver injury, some studies also employed cholestatic serum or co-culture with chemically damaged liver tissue [12][13][14]. Although these studies showed some HSC "transdifferentiation" into hepatocytes, the reported percentage of hepatocytes derived from HSCs did not exceed 5%. Thus, HSCs exhibit a limited differentiation potential that makes them suboptimal candidates for tissue regeneration purposes. The cost of the repeated cultures needed to obtain sufficient amounts of hepatocytes from HSCs would presumably be too high for cell therapy-based applications.

Another population of stem cells in adults resides in the bone marrow stroma. Bone marrow mesenchymal stem cells (BMMSCs), as they are termed, represent the non-haematopoietic fraction of the bone marrow. In vitro, they are adherent, clonogenic, non-phagocytic and fibroblastic in habit. Under proper experimental conditions, they are able to differentiate into bone, cartilage, adipose and fibrous tissue, and hematopoietic supporting tissue [14]. There is also evidence that BMMSCs can undergo unorthodox differentiation, giving rise to cells with visceral mesoderm, neuroectoderm and endoderm characteristics. When transplanted, these cells can engraft in bone, muscle, brain, lung, heart, liver, gastrointestinal tract and haematopoietic tissue, and could even contribute to most somatic cell types when injected into an early blastocyst [15]. In vitro experiments have shown that human and rodent BMMSCs grown on matrigel and supplemented with HGF and FGF-4 differentiate into mature hepatocytes, with a differentiation rate ranging from 30% to 80% [16,17]. BMMSCs that acquire the hepatocyte phenotype in vitro [18] also exhibit typical hepatocyte functions, including albumin production, glycogen storage, urea secretion, low-density lipoprotein uptake and phenobarbital-inducible cytochrome-P450 activity.
BMMSCs likely represent pluripotent stem cells that remain in adult life, and experimental evidence suggests that they might be a reliable cellular source for generating hepatocytes for use in cell therapy.

Flanking the in vitro experiments, in vivo tests with BM-derived stem cells have also been performed to treat the diseased liver. Most data have been obtained in rodent models where liver damage was induced by either a hepatospecific necrotic insult (e.g. carbon tetrachloride (CCl4), allyl alcohol, or genetically induced fumarylacetoacetate hydrolase (FAH) deficiency) or a proliferative stimulus like partial hepatectomy or bile duct ligation. Retrorsine and 2-acetyl-aminofluorene, two liver toxins enhancing oval/progenitor cell proliferation, have frequently been used to simulate chronic liver damage [19][20][21][22]. Another model of chronic hepatocellular injury used to study the role of the BM in liver regeneration is the hepatitis B surface antigen (HBsAg) transgenic mouse model [23].

The results obtained in rodents have frequently been puzzling. What generally emerges is that BM stem cell engraftment into the damaged liver varies widely, ranging from 0.16% to about 50% in different experiments [24][25][26][27]. Even though the cellular mechanisms responsible for these variable results are not known, transdifferentiation into hepatocytes occurs at a very low level when CD34+ HSCs are administered to rodents treated with liver toxins [28]. On the other hand, cell fusion between hepatocytes and stem cells from the myelomonocytic lineage of the BM (e.g. the precursors of circulating macrophages) has been shown to underlie liver regeneration after BM administration in the FAH-deficient mouse [20,21]. A genetic advantage of the transplanted BM stem cells with respect to the resident enzyme-deficient hepatocytes likely accounts for the higher level of engraftment and tissue repopulation observed in this model.

Whereas hepatocyte formation from BM cells in vivo has proved to be poorly effective, some studies have postulated a much more important role for BM-derived stem cells in liver tissue remodelling and fibrosis resolution. In mice injured with CCl4 and thioacetamide, Russo et al [29] demonstrated that the BM-derived stem cell contribution to parenchymal regeneration was marginal (0.6%), but that these cells substantially contributed to the hepatic stellate cell (68%) and myofibroblast (70%) populations, which were able to influence the liver fibrotic response to toxin injury. In a sex-mismatched bone marrow transplantation model, both stellate cells and myofibroblasts of donor origin found in the recipient liver did not originate through cell fusion with the indigenous hepatocytes, but largely derived from the circulating BMMSCs [29].

[Figure 2 caption: In persistent liver injury, as is the case with liver cirrhosis, the sustained proliferative stress prematurely ages the hepatocytes and exhausts their ability to replicate; in this context, hepatic progenitor cells (oval cells, as they are called in rodents, where they were first described) appear as a rich population of small round cells spreading from the periportal area into the parenchyma. The contribution of bone marrow-derived stem cells to tissue regeneration in chronic liver diseases is still debated (B).]
Lastly, Duffield et al [30] showed in rodents that BM-derived macrophages are likely to be crucial in regulating the liver fibrotic response to injury in a time-dependent manner, since depletion of these cells before injury reduces the fibrotic response, whereas their depletion during the recovery phase is associated with greater fibrosis.

In contrast to the many studies performed in animals, those on BM-derived stem cell administration to patients with liver diseases can be counted on one hand. They can be divided into studies performed in patients with and without an underlying chronic liver disease. In patients with liver malignancies arising in an otherwise healthy liver, the intraportal injection of CD133+ BM stem cells (a subpopulation of stem cells with both haematopoietic and endothelial progenitor characteristics) improved liver regeneration after extensive resection and segmental portal vein embolization [31]. This procedure was safe and highly effective in terms of liver mass recovery. Looking at future applications, this technique may offer the chance to treat so-called "small-for-size" liver failure, a dramatic event occurring in transplanted patients who received either a small or a split liver. Few studies dealing with stem cell therapy in patients with liver cirrhosis [32][33][34] have been published to date. Gordon et al [32] injected CD34+ HSCs directly into the liver vascular system of patients with cirrhosis, whereas Terai et al [33] injected autologous BM through a peripheral vein. Despite the small number of patients and the lack of a control group [34], both studies demonstrated a slight improvement in liver function and clinical conditions. These results seem to confirm, at least in part, the results obtained in the many experiments performed in rodents showing some role for BM-derived stem cells in liver repair. In our experience [35], the administration of granulocyte colony stimulating factor (G-CSF) to mobilize BM stem cells to the peripheral blood did not modify the residual liver function in patients with compensated liver cirrhosis. However, the procedure was safe, and may represent a good way to obtain autologous stem cells for cell therapy applications.

EMBRYONIC STEM CELLS

Due to the difficulty in controlling their huge proliferative and differentiative potential, and to major ethical concerns, the use of human ESCs is currently limited to in vitro and animal studies. Biotechnology industries and research laboratories are committed to devising effective protocols to optimize the ability of ESCs to differentiate into functional hepatocytes. The final goal is relevant on both scientific and clinical grounds: a suitable source of hepatocytes is what is lacking for the implementation of bio-artificial liver (BAL) technology. Effective protocols are needed not only to promote ESC differentiation into hepatocytes, but also to elicit the expression of hepatic functions such as albumin secretion, indocyanine green uptake and release, glycogen storage and P450 metabolism [36]. Cytokines and growth factors such as HGF and FGF have been shown to promote ESC differentiation and growth [37]. In addition, it has been demonstrated that sodium butyrate, a non-proteinaceous compound, supports the action of these factors [38]. Hay et al [39] developed a multistage system in which HGF was used without the requirement for sodium butyrate, and human ESCs differentiated into hepatocyte-like cells without embryoid body formation.
Use of an extracellular synthetic or natural matrix can also be relevant, as shown by Ishizaka et al [40] in a three-dimensional system in which hepatocytes developed from mouse ESCs transfected with hepatocyte nuclear factor-3 beta on a 3-D matrix scaffold. 3-D matrix scaffolds have been reported to be superior to the more commonly used 2-D monolayer culture in inducing differentiation into hepatocytes [41]. This is not surprising, as 3-D matrix scaffolds better reproduce the architecture of the liver parenchyma, which is essential for normal tissue function. Another effective way to obtain hepatic differentiation is genetic modulation. This can be achieved by transfecting stem cells with recombinant DNA encoding hepatospecific proteins. Adding collagen and appropriate cytokines and growth factors has an important effect on hepatocyte differentiation [42]. Recently, Agarwal et al [43] proposed a new differentiation protocol for the generation of high-purity (70%) hepatocyte cultures: the differentiation process was largely uniform, with cell cultures progressively expressing increasing numbers of hepatic lineage markers and functional hepatic characteristics. When transplanted into mice with acute liver injury, the human ESC-derived endoderm differentiated into hepatocytes and repopulated the damaged liver.

FETAL ANNEX STEM CELLS

Cord blood contains multiple populations of embryonic-like and other pluripotential stem cells capable of originating hematopoietic, epithelial, endothelial, and neural tissues both in vitro and in vivo. The isolation of HSCs and MSCs from cord blood is a relatively new procedure, and only a few studies have been published [44,45]. Sàez-Lara et al [46] transplanted CD34+ HSCs derived from human cord blood into rats with liver cirrhosis without achieving a significant rate of engraftment, as the GFP-positive cells were clearly eliminated. By contrast [47], low-density mononuclear cells obtained from human cord blood and transplanted in utero into fetal rats generated functional hepatocytes that persisted in the recipient liver for at least 6 months after birth. This humanized animal model provides a very interesting approach to the in vivo investigation of human cord blood stem cell differentiation into hepatocytes. Hong et al [48] described the ability of human umbilical cord blood MSCs (CD34-) to differentiate into hepatocytes when cultured in prohepatogenic conditions. The differentiation rate in their protocol was about 50%, and the hepatocytes obtained were capable of incorporating low-density lipoprotein, considered one of the most typical hepatocyte functions. More recently, Campard et al [49] demonstrated that human cord matrix stem cells cultured with growth factors show hepatocyte characteristics such as cytochrome P450-3A4 expression, glycogen storage and urea production. In addition, when these cells were transplanted into hepatectomized immune-deficient mice, small clusters of human cells expressing albumin and alpha-fetoprotein appeared, thereby demonstrating the good engraftment and differentiation capacity of the transplanted cells [50,51].

Placenta is another potential source of stem cells. Placenta-derived stem cells (PDSCs) are fibroblast-like cells that attach to a plastic surface. Like BMMSCs, they can be expanded for more than 20 population doublings and induced to differentiate into cells of various mesenchymal tissues.
Chien et al [52] recently cultivated PDSCs derived from human placentae in hepatic differentiation media and obtained cells with hepatocyte morphology expressing specific hepatocyte functions. In comparison with stem cells isolated from other tissues, there are no ethical problems associated with the study of PDSCs, as the collection of placenta samples does not harm mother or infant. The ability of PDSCs to differentiate and their straightforward handling could make them an appropriate source for cell-based applications.

CONCLUSION

Under proper experimental conditions, adult, embryonic and fetal annex stem cells have been shown to be able to differentiate into hepatocytes. At present, most biotechnology industries and research laboratories are working to optimize the differentiation protocols. In the future, stem cell-derived hepatocytes will likely be used in BALs employed as a "bridge therapy" for patients with liver failure awaiting transplantation or recovering liver function. Intrahepatic injection of stem cell-derived hepatocytes might also be useful in patients with acute liver failure. In chronic liver diseases, which account for the majority of cases of liver failure worldwide, the future of stem cell therapy is still uncertain. Liver failure occurring in patients with chronic liver disease, namely cirrhosis, is due not only to the lack of healthy cells, but also to the disruption of tissue architecture and the progressive accumulation of inflammatory cells and fibrosis. While "brand new" hepatocytes derived from stem cells may temporarily support the impaired liver function, they would hardly be able to restore the original liver structure and eliminate collagen deposition. Thus, further strategies are needed. A better understanding of the mechanisms leading to collagen deposition and resorption, and the development of new antifibrotic agents, combined with effective antiviral agents for patients with viral hepatitis, will be critical for the success of cell-based therapy in chronic liver failure.
Study of the geometry influence of the support points in coordinates transformation: application from WGS84 to NS59 datum

Abstract: The use of a transformation method for the passage from one geodetic system to another requires the use of some common points as support points; these points are used in the determination of the transformation parameters. Generally, the choice of the support points is made manually by choosing the best distribution of these points in the transformation area. In the present study, we present a methodology for the selection of these points in which an algorithm computes the transformation parameters for all combinations of the common points, and the best result is adopted. An application of this methodology is carried out in the North-East of Algeria to determine the best set of the 9 transformation parameters between the WGS84 system and the national North Sahara system, using 10 common points (05 support and 05 control). This methodology is efficient in the case where the common points are near one another.

The transformation of WGS84 coordinates into a local datum has become a common and frequent task in geodesy. The procedure for converting (transforming) from one coordinate system to another is known as coordinate transformation (Wolf et al. 2014). The purpose of a datum transformation is to provide a mathematical or computational means to transform the coordinates of a point from one datum to another (Ruffhead 2021). This procedure, in the case of the transformation from the WGS84 to the NS59 datum, requires the definition of the following elements (Figure 1):

- Source and target system: WGS84 is the source system and NS59 is the target datum.
- Transformation parameters: the link between the WGS84 and NS59 datums. When the transformation parameters are known, it is sufficient to apply them directly to the available coordinates to obtain their correspondents in the other system; otherwise, they need to be estimated from some common points (Andrei 2006).
- Common points: the set of points known in both WGS84 and NS59. The number, distribution and accuracy of these points may influence the accuracy of the transformation (Janssen 2009). These points are divided into support points, used in the determination of the transformation parameters, and control points, used to evaluate the results of the transformation.

In this work, we study the effect of the geometric distribution of the support points on the transformation results, where an algorithm computes the transformation parameters for all combinations of the common points and the best result is adopted.

The present study is organized as follows: in section 2, we present the mathematical formalism of the 9-parameter transformation model; section 3 is devoted to the computation of the combinations used to create the different sets of support and control points; in section 4, an application of the transformation using 10 common points between WGS84 and NS59 is presented; finally, a summary of conclusions is given.

Transformation Model of 9 Parameters

The 7-parameter transformation model is the general similarity transformation developed by Helmert in 1876, well known in the geodetic community as a conformal datum transformation (Abbey et al. 2020); the current form of this model was developed by Burša (1962) and Wolf (1963) and is known as the Bursa-Wolf model.
The simplified and approximate form of the Bursa-Wolf model (7 parameters) is the most used in geodetic transformations because of its simplicity and ease of implementation in software (Gullu et al. 2017). The general formula of this model is presented in several documents (Deakin 2006; Janssen 2009; Ruffhead 2021):

$X_2 = T + k\,R\,X_1$  (01)

where $X_2$ are the coordinates in the target system, $X_1$ the coordinates in the source system, $T$ the translation vector, $k$ the scale factor, and $R$ the rotation matrix, the result of three rotations about each axis (Andrei 2006; Deakin 2006; Ruffhead 2021):

$R = R_1(E_x)\,R_2(E_y)\,R_3(E_z)$  (02)

If the rotations exceed a few arc-seconds, the use of the rigorous rotation matrix (03) is required (Janssen 2009).

The 7-parameter transformation model (Figure 1) determines three translations, three rotations and a single scale factor (Abbey et al. 2020). In Equation (01), there is a single scale factor to determine; in some cases a datum has a problem in orientation and scale factor, as does the NS59 datum (Medjahed and Zeggai 2012), and to resolve this problem we suppose that each axis has a different scale factor. The scale factor matrix $K$ is given in (Andrei 2006; Awange et al. 2008; Paláncz and Piroska 2011) by:

$K = \begin{pmatrix} k_x & 0 & 0 \\ 0 & k_y & 0 \\ 0 & 0 & k_z \end{pmatrix}$  (04)

where $k_x$, $k_y$ and $k_z$ are the three different scale factors, one for each axis. Substituting the scale factor matrix (04) into Equation (01), the 7-parameter model becomes a 9-parameter model:

$X_2 = T + K\,R\,X_1$  (05)

The 9-parameter transformation model is an extension of the 7-parameter transformation model (Awange et al. 2008). Equation (05) is not linear and needs to be linearized as follows.

1 - Translation: the three translations are approximated as $T \approx T_0 + \delta T$.

2 - Scale factor: the three scale factors are approximated as $k_x \approx k_{x0} + \delta k_x$, $k_y \approx k_{y0} + \delta k_y$, $k_z \approx k_{z0} + \delta k_z$, which leads to formulating the matrix $K$ in the form $K \approx K_0 + M\,\delta k_x + N\,\delta k_y + P\,\delta k_z$, where $K_0$, $M$, $N$, $P$ are 3x3 matrices.

3 - Rotation: to linearize the rotation matrix, the three rotations are approximated as $E_x \approx E_{x0} + \delta E_x$, $E_y \approx E_{y0} + \delta E_y$, $E_z \approx E_{z0} + \delta E_z$, which leads to $R \approx R_0 + D\,\delta E_x + G\,\delta E_y + F\,\delta E_z$.

The matrices $K_0$, $M$, $N$, $P$ of the scale factor and $R_0$, $D$, $G$, $F$ of the rotation are provided in the Appendix. With these approximations, the 9-parameter transformation model becomes the linearized Equation (13). By organizing the unknowns in a single vector, Equation (13) can be written for one support point as:

$B = A\,\delta X$  (14)

where $A$ is the configuration matrix, $B$ is the vector of observations, and $\delta X = (\delta T_x, \delta T_y, \delta T_z, \delta k_x, \delta k_y, \delta k_z, \delta E_x, \delta E_y, \delta E_z)^T$ is the vector of the 9 unknown parameters. Equation (14) defines the problem of determining the 9 transformation parameters: three translations, three scale factors and three rotations in the $X$, $Y$ and $Z$ directions (Awange et al. 2008).

The least-squares solution of system (14) is given in (Deakin 2006; Gullu et al. 2017; Ruffhead 2021; Wolf et al. 2014) by:

$\delta X = (A^T A)^{-1} A^T B$  (15)

which is applied iteratively until convergence:

$X_{i+1} = X_i + (A_i^T A_i)^{-1} A_i^T B_i$  (16)

The solution of (16) converges very quickly, after two or three iterations (Andrei 2006). The elements of $B$ and $A$ are provided in the Appendix.

Creation of the Support and Control Point Sets

To use all the common points as support points and create the different sets of points (Figure 3), we use the combination technique; a sketch of the complete procedure is given after this section. A combination is a technical term meaning 'selection': it is a mathematical technique that determines the number of possible arrangements in a collection of items where the order of selection does not matter. Given a set $E$ of $P$ objects, we call a combination any set of $N$ objects among the $P$ objects. The combinations of $N$ objects taken from $P$ objects are denoted $C_P^N$, with $N \le P$. The number of possibilities to choose $N$ support points from $P$ common points, without taking the order into account, is given by formula (17) (Gordon 1994; Awange et al. 2010):

$C_P^N = \dfrac{P!}{N!\,(P-N)!}$  (17)
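To make the whole procedure concrete, here is a minimal numerical sketch combining the 9-parameter fit with the exhaustive enumeration of support sets. It is not the paper's implementation: it assumes the rotation convention $R = R_1(E_x)R_2(E_y)R_3(E_z)$, replaces the analytic matrices of the Appendix with a numerical Jacobian, and enumerates the $C_{10}^{5} = 252$ candidate sets with `itertools.combinations`. At least 3 support points are required for the 9 unknowns (3 equations per point).

```python
import numpy as np
from itertools import combinations

def rot(ex, ey, ez):
    """Product of rotations about the x, y and z axes (one possible convention)."""
    cx, sx = np.cos(ex), np.sin(ex)
    cy, sy = np.cos(ey), np.sin(ey)
    cz, sz = np.cos(ez), np.sin(ez)
    R1 = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    R2 = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    R3 = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return R1 @ R2 @ R3

def model(p, X1):
    """9-parameter model X2 = T + K R X1, with K = diag(kx, ky, kz)."""
    T, k, e = p[:3], p[3:6], p[6:9]
    return T + (np.diag(k) @ rot(*e) @ X1.T).T

def fit9(X1, X2, iterations=5):
    """Gauss-Newton estimate of the 9 parameters, with a numerical Jacobian
    standing in for the paper's analytic configuration matrix A."""
    p = np.array([0., 0., 0., 1., 1., 1., 0., 0., 0.])
    for _ in range(iterations):
        r = (X2 - model(p, X1)).ravel()       # misclosure / observation vector B
        J = np.empty((r.size, 9))
        for j in range(9):
            dp = np.zeros(9); dp[j] = 1e-7
            J[:, j] = ((model(p + dp, X1) - model(p - dp, X1)) / 2e-7).ravel()
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]
    return p

def rms(p, X1, X2):
    """RMS of the 3-D residuals at a set of common points."""
    return float(np.sqrt(np.mean((X2 - model(p, X1)) ** 2)))

def best_support_set(src, dst, n_support=5):
    """Try every combination of support points; keep the set whose fit gives
    the minimum RMS at the remaining (control) points."""
    idx = range(len(src))
    best = None
    for support in combinations(idx, n_support):
        control = [i for i in idx if i not in support]
        p = fit9(src[list(support)], dst[list(support)])
        score = rms(p, src[control], dst[control])
        if best is None or score < best[0]:
            best = (score, support, p)
    return best
```

With 10 common points, `src` and `dst` are 10x3 arrays of Cartesian coordinates and `best_support_set` evaluates all 252 sets, mirroring the selection criterion used in the paper.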
Presentation of the Test Data

In this application, we determine the 9 transformation parameters between the WGS84 and NS59 datums using 10 common points known in both systems (Figure 2). The following five support points, (01 02 04 09 10), have the best geographical distribution (Figure 2), and this set of support points is considered as the first choice. Figure 3 shows the file of support and control points for the 252 sets. After creating the file of 252 sets, we found that our manual choice corresponded to set number 36 (Figure 3).

Results for Set Number 36

The figure shows that the solution converges after the second iteration for each parameter, starting from an initial solution with translations of 10, scale factors of 0.9, and rotations of 0. The residuals reach their maximum values in the X and Z components and are not very significant in the Y component.

Results of All Combinations (9 Parameters and RMS)

Figures 6, 7 and 8 show the variation of the transformation parameters as a function of the combination number, for C = 1 to 252. Figure 9 shows the RMS results at the support and control points for each set. On the basis of the numerical and graphical results, we can make the following remarks:

- The best transformation result was obtained with set number 81 (Figure 10), where the RMS = 11.3 cm at the control points and 35.2 cm at the support points.
- The worst case in terms of transformation results was obtained with set number 148 (Figure 10), where the RMS is 26.7 cm at the support points and 454 cm at the control points; these results are due to the aligned geometry of the five support points.
- The manually chosen set, number 36 (Figure 2), gave RMS = 36.2 cm at the control points and 6.7 cm at the support points.
- The methodology used for the choice of support points retained set number 81 as the best choice.
- A difference of 24.9 cm at the control points between set number 81 and set number 36 was observed; this difference is very significant in geodetic applications.
- For set number 81, the transformation parameters were used outside their determination area.
- The use of points bordering the transformation area as support points is not always efficient.

Conclusions

The transformation of GPS coordinates into the national geodetic North Sahara system (NS59) is the subject of this paper. This transformation depends on the choice of support points selected from the common points, and this choice has an important influence on the quality of the transformation results.

The objective of this study is the presentation of a selection methodology for the support points, in which an algorithm takes into account all possible combinations of the common points and the best result is retained according to the minimum RMS value.

The North Sahara geodetic system has a problem in orientation and scale factor (lack of information), and for this reason we have used the 9-parameter transformation model, in which the single scale factor parameter is replaced by three scale factors.

The methodology presented in this work can be applied with any transformation model (Bursa-Wolf, Molodensky, polynomial transformation, leveling/GPS, ...) and can be used to identify the support points employed in the computation of the transformation parameters.
Notation: $P$: number of common points; $N$: number of support points; $!$: factorial; $M = P - N$: number of control points.

Figure 3: File of support and control points.
Figure 5: Residuals at support and control points (set number 36).
Figure 9: RMS at support and control points.
Figure 10: Geometry of results (manual, best and poor choice).
Figure 11: Selection methodology of support points, summarizing the calculation steps followed in this study.
A Single-Dose, Open-Label, Randomized, Two-Way Crossover Study in Healthy Japanese Participants to Evaluate the Bioequivalence and the Food Effect on the Pharmacokinetics of Daprodustat

Abstract: Daprodustat is a prolyl hydroxylase inhibitor that stimulates erythropoiesis in a manner similar to the natural response to hypoxia, whereby inhibition of hypoxia inducible factor (HIF) prolyl-4-hydroxylases by daprodustat ultimately results in increased levels of HIF-responsive genes. Daprodustat is under development as an emerging new class of agent for the treatment of anemia associated with chronic kidney disease (CKD). This was a single-center, single-dose, open-label, randomized, 2-way crossover study in healthy Japanese male participants consisting of 2 parts. The primary objective was to evaluate the bioequivalence (BE) between daprodustat tablet strengths (part 1) and to evaluate the food effect on the pharmacokinetics (PK) of daprodustat (part 2). A total of 64 healthy Japanese male participants were enrolled; 52 participants were included in part 1 and 12 in part 2. BE was demonstrated between the daprodustat 2-mg tablet and the daprodustat 4-mg tablet. A standard CKD meal did not have a large effect on the PK parameters of daprodustat after a single oral dose of daprodustat 4 mg. Administration of single oral doses of daprodustat 4 mg was generally well tolerated in the healthy Japanese participants, and no new safety signals were identified regardless of food intake.

Anemia, which is frequently observed in patients with chronic kidney disease (CKD), has been associated with decreased circulating levels of the glycoprotein hormone erythropoietin. [1][2][3] This hormone is primarily produced by the kidneys and, to a lesser extent, by the liver, and it stimulates normal red blood cell production, maturation, and survival. [2,3] Daprodustat [4,5] is a prolyl hydroxylase inhibitor that stimulates erythropoiesis in a manner similar to the natural response to hypoxia, whereby inhibition of hypoxia inducible factor (HIF) prolyl-4-hydroxylases (PHD1, PHD2, PHD3) by daprodustat ultimately results in increased levels of HIF-responsive genes. Daprodustat is under development as an oral agent, representing an emerging new class of treatment for anemia associated with CKD.

In clinical studies in the oral daprodustat program, daprodustat was found to be rapidly absorbed following oral administration (time to maximum concentration [Tmax] of 1.0 to 2.5 hours) and exhibited dose-proportional increases in exposure over the 10- to 100-mg dose range in healthy Japanese and Caucasian participants. [6] The steady-state PK properties of daprodustat, maximum observed drug concentration (Cmax), area under the concentration-time curve (AUC), and time to maximum observed drug concentration (Tmax), were comparable in healthy and CKD subjects. [7] In addition, there was no clinically relevant difference in these properties in the hemodialysis subjects between a dialysis and a nondialysis day, and the renal clearance of daprodustat was minimal.

The cytochrome P450 (CYP) enzymes involved in the oxidative metabolism of daprodustat have been evaluated both in vitro (human liver microsomes) and in clinical studies. Using in vitro assays of human liver microsomes and CYP supersomes, daprodustat was found to be primarily metabolized through CYP P450 enzymes, suggesting that it undergoes first-pass metabolism. [4]
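Dose proportionality, noted above for the 10- to 100-mg range, is commonly checked with a power model, where an exponent near 1 on a log-log fit of exposure versus dose indicates proportional PK. A minimal sketch with hypothetical data (not the study's values):

```python
import numpy as np

# Power model: AUC = a * dose**b; b ~ 1 indicates dose-proportional exposure.
doses = np.array([10., 25., 50., 100.])
aucs  = np.array([3.1, 7.9, 15.8, 31.5])   # hypothetical, arbitrary units

b, log_a = np.polyfit(np.log(doses), np.log(aucs), 1)
print(round(b, 2))  # -> ~1.0, consistent with dose-proportional PK
```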
Drug-drug interactions were evaluated with a 100-mg oral dose of daprodustat, a dose 4 times higher than the highest daily dose investigated in clinical trials. The strong CYP2C8 inhibitor gemfibrozil markedly increased the AUC0-t of daprodustat by 18.6-fold and the Cmax by 3.9-fold. In a subsequent study, [8] when daprodustat 25 mg was coadministered with a weak CYP2C8 inhibitor, trimethoprim (200 mg), the AUC of daprodustat was increased by 48% and the Cmax by 28%. However, when daprodustat was coadministered with pioglitazone (a CYP2C8 probe) and rosuvastatin (an organic anion transporting polypeptide 1B1 probe), daprodustat did not affect the PK of these 2 probes, suggesting very low interaction potential with these drugs.

Daprodustat is formulated as immediate-release tablets with dose strengths of 1, 2, 4, and 6 mg in Japan. The dissolution profiles of these tablets have been evaluated according to the Guideline for Bioequivalence Studies for Different Strengths of Oral Solid Dosage Forms and the Guideline for Bioequivalence Studies for Formulation Changes of Oral Solid Dosage Forms (Japanese bioequivalence guidelines), and the dissolution test results across tablet strengths demonstrated equivalence, with the exception of the 2-mg tablet versus the 4-mg tablet test in water.

Administration of the daprodustat formulation in development with a high-fat/high-calorie breakfast led to a 1.0-hour delay in Tmax, a 29% decrease in Cmax, and an 8% decrease in AUC in healthy non-Japanese participants. [4] The primary objectives of our study were to evaluate the bioequivalence (BE) of daprodustat tablet strengths (2 versus 4 mg) in healthy Japanese male participants according to the Japanese BE guideline (part 1) and to evaluate the food effect on the PK of daprodustat in healthy Japanese male participants with reference to the Japanese Notification for Clinical Pharmacokinetic Studies of Pharmaceuticals (part 2). Part 2 assessed the PK of a single oral dose of daprodustat in the fasted state and following a standard CKD meal.

Methods

Ethics

The study was conducted between April 24, 2018, and June 9, 2018, at the SOUSEIKAI Global Clinical Research Center, Fukuoka Mirai Hospital in Japan in accordance with the International Conference on Harmonization Good Clinical Practice guidelines, all applicable participant privacy requirements, and the ethical principles outlined in the current version of the Declaration of Helsinki. The study protocol and informed consent documents were approved by the Institutional Review Board of the Hakata Clinic. Written informed consent was obtained from each participant before any screening evaluations.

Study Design and Study Population

This was a single-center, single-dose, open-label, randomized, 2-way crossover study to evaluate the BE between daprodustat tablet strengths and the food effect on the PK of daprodustat following single oral doses in healthy Japanese male participants (ClinicalTrials.gov identifier: NCT03493386). Part 1 of this study was the BE part, in which participants received a single dose of two 2-mg daprodustat tablets and a single dose of one 4-mg daprodustat tablet, according to the Japanese BE guideline. Part 2 was the food effect part, in which participants received a single 4-mg daprodustat tablet in the fasting and fed states in a crossover manner. To investigate the food effect under conditions closer to clinical practice, a standard meal for CKD as recommended by the Japanese Society of Nephrology [9] was used in part 2.
In both parts (part 1 and part 2), healthy participants had a screening visit within 30 days prior to the first dose of study intervention, 2 intervention periods, and a follow-up visit 7 ± 1 days after the second dose. All participants were administered daprodustat as a single oral dose, with assessments conducted for up to 24 hours postdose. At least a 5-day washout period occurred between the intervention periods. In part 1, participants refrained from any food or drink for at least 10 hours before dosing and 4 hours after dosing. No water was allowed within 2 hours before and after dosing, but it was allowed ad libitum at all other times. In part 2, participants fasted for 10 hours before administration of a standard CKD meal or dosing. The standard CKD meal consisted of 500-700 kcal, 12-16 g of protein, at least 1 g but less than 2 g of salt, and no more than 500 mg of potassium, based on the dietary recommendations for CKD patients. [9] Study participants in the fed state consumed the standard CKD meal as breakfast in 20 minutes or less, and the drug product was administered 30 minutes after the end of the meal. No water was allowed until 2 hours after dosing, but it was allowed ad libitum at all other times.

Healthy Japanese male participants aged between 20 and 55 years with a body weight ≥ 50 kg and a body mass index between 18.5 and 24.9 kg/m2 were eligible to participate in this study. Participants were healthy, as determined by the investigator at the screening evaluation, which included assessment of concurrent conditions, medical history, concomitant medications, alcohol and smoking habits, allergies, and the presence of infectious diseases, as well as a physical examination, laboratory tests, and a 12-lead electrocardiogram (ECG).

Pharmacokinetic Sample Collection and Bioanalytical Methods

Blood samples were collected for the measurement of daprodustat concentrations predose and at 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, and 24 hours following daprodustat administration in parts 1 and 2. Samples were collected at nominal times relative to the proposed time of daprodustat dosing. The blood samples were taken via an indwelling cannula (or by direct venipuncture) and collected in a blood collection tube containing dipotassium ethylenediaminetetraacetate. The tube was immediately inverted 10 times and placed on water ice until centrifugation, which occurred within 1 hour of sample collection (3000 rpm, 4°C, 10 minutes). The supernatant plasma was transferred to a 2.0-mL polypropylene tube and stored at -20°C before shipment. Samples were shipped frozen on dry ice at agreed times throughout the study to a bioanalytical facility (PPD, Middleton, Wisconsin).

The bioanalytical method for the daprodustat analysis in plasma was validated by PPD. The concentration of daprodustat was analyzed by high-performance liquid chromatography-tandem mass spectrometry in negative ion mode. The analytical system consisted of a Series 1100 HPLC system (Agilent, Santa Clara, California), an XBridge Phenyl analytical column (2.1 × 30 mm, 3.5 μm; Waters, Milford, Massachusetts), and an API6500 mass spectrometer (Sciex, Framingham, Massachusetts). Daprodustat was extracted from a 250-μL aliquot of plasma by solid-phase extraction using Evolute ABN (30 μm, 25 mg, 96-well plate; Biotage, Charlotte, North Carolina) with an isotopically labeled internal standard ([13C5 15N]-daprodustat). Mobile phases consisted of 0.1% formic acid in water (A) and 0.1% formic acid in acetonitrile (B).
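As a minimal sketch encoding the standard CKD meal criteria above (the menu values below are hypothetical):

```python
def meets_ckd_meal_spec(kcal: float, protein_g: float, salt_g: float, potassium_mg: float) -> bool:
    """Check a breakfast against the standard CKD meal spec used in part 2."""
    return (500 <= kcal <= 700
            and 12 <= protein_g <= 16
            and 1 <= salt_g < 2
            and potassium_mg <= 500)

# Hypothetical menu: 620 kcal, 14 g protein, 1.5 g salt, 430 mg potassium
print(meets_ckd_meal_spec(620, 14, 1.5, 430))  # -> True
```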
Extracts were separated under the following gradient conditions: a 1-minute gradient from 50% to 95% mobile phase B at a flow rate of 0.4 mL/min, held from 1 to 2 minutes, followed by column washing.

Safety Assessments

Safety was assessed in all participants by monitoring adverse events (AEs), clinical laboratory tests (hematology, chemistry, and urinalysis), vital signs (blood pressure, heart rate, and body temperature), 12-lead ECGs, and physical examinations. AEs were collected from the start of treatment until the follow-up visit. Clinical laboratory tests were performed predose, 24 hours postdose, and at follow-up. Vital signs, 12-lead ECGs, and physical examinations were performed predose, 3 and 24 hours postdose, and at follow-up.

Pharmacokinetic Analyses

Plasma daprodustat concentration-time data were analyzed by noncompartmental methods using Phoenix WinNonlin version 6.3 (Certara L.P., St. Louis, Missouri). The plasma concentration-time data were used to determine the following PK parameters: AUC0-t, AUC0-inf, Cmax, Tmax, t1/2, %AUCex, CL/F, kel, and the correlation coefficient between time and log concentration of daprodustat for the points used in the estimation of kel. The calculations were based on the actual sampling times recorded during the study.

Statistical Analyses

In part 1, the number of participants was determined based on statistical considerations. According to the regulatory definition of the BE criteria, the 90% confidence interval (CI) of the ratio for AUC0-t and Cmax between tablet strengths (2 mg × 2 versus 4 mg × 1) should lie within the range of 0.80-1.25. Assuming a true ratio of 1.0 and a within-subject coefficient of variation (%CVw) of 35%, 52 participants in total (ie, 26 participants for each group) were randomized in a 2-way crossover design to achieve at least 90% power for meeting the BE criteria. In part 2, the number of participants was determined based on feasibility rather than statistical considerations; a total of 12 participants (ie, 6 participants for each group) were randomized in a 2-way crossover design.

The safety population was defined as all participants who received at least 1 dose of the study medication. The PK population was defined as all participants who received at least 1 dose of the study medication and from whom a PK sample was obtained and analyzed.

For part 1, the AUC0-t, AUC0-inf, and Cmax of daprodustat were analyzed separately following log-transformation. The model included tablet strength, period, and group as fixed effects, with subject as a random effect. The estimates of the least-squares means for each tablet strength and the treatment difference were exponentially back-transformed to obtain adjusted geometric means of AUC0-t, AUC0-inf, and Cmax for each tablet strength and adjusted geometric mean ratios (test/reference) with the associated 90%CIs. The two treatments were considered bioequivalent if the 90%CI of the geometric mean ratio of AUC0-t and Cmax was within the acceptable range of 0.80-1.25. For part 2, the effect of food on the PK parameters (AUC0-t, AUC0-inf, and Cmax) was assessed using the same mixed-effects model as in part 1, with feeding condition (fasted or fed) instead of the tablet strength effect. A formal test of bioequivalence was not performed for part 2.

Results

Participant Disposition and Demographics

A total of 64 healthy Japanese male participants were enrolled. In part 1, all 52 participants received the study drug, and 51 participants completed the study.
One participant who completed treatment in period 1 was withdrawn from the study because the subsequent study schedule was inconvenient for the participant. In part 2, all 12 participants received the study drug and completed the study. The demographic characteristics of the participants are summarized in Table 1.

Pharmacokinetics

Mean daprodustat plasma concentration-time data, categorized by tablet strength or feeding condition, are summarized in Table 2. The results of the statistical analysis for both parts are presented in Table 3. Following the single oral 4-mg daprodustat dose (2-mg tablet × 2 or 4-mg tablet × 1) in part 1, the daprodustat plasma concentration reached a peak at 2.0 hours (median) after dosing, and daprodustat was rapidly eliminated. The AUC0-t, AUC0-inf, and Cmax were similar in participants dosed with 2 daprodustat 2-mg tablets and 1 daprodustat 4-mg tablet. The Tmax was identical for both tablet strengths, and the t1/2 values were also similar. The mean percentage of AUC0-inf extrapolated was <20%. The 90%CIs for the adjusted geometric mean ratios for AUC0-t, AUC0-inf, and Cmax for 2 daprodustat 2-mg tablets compared with 1 daprodustat 4-mg tablet were within the predefined bioequivalence range of 0.80-1.25.

In part 2, the exposure of daprodustat in the fed state was slightly lower than in the fasted state. The fed/fasted ratio for AUC0-t was 0.91, indicating that AUC0-t in the fed state was 9% lower than in the fasted state. Similarly, the fed/fasted ratio for Cmax was 0.89, indicating that Cmax in the fed state was 11% lower than in the fasted state. The median Tmax was delayed from 1.75 to 2.75 hours when daprodustat was administered with a standard CKD meal. There was no apparent difference in t1/2 between the fed and fasted states.

Safety

During part 1 of the study, 4 of the 52 participants (8%) experienced AEs, and no AE was reported in part 2 of the study. All 4 AEs were considered by the investigator to be moderate in intensity (Table 4) and unrelated to treatment with the study medication, and all resolved by the end of the study. No serious adverse events (SAEs) or deaths were reported during this study. No participant was withdrawn from the study because of an AE. In addition, no safety signals were identified via vital signs, 12-lead ECGs, or clinical laboratory parameters. Overall, daprodustat was generally well tolerated in healthy Japanese male participants following administration of a single oral dose of daprodustat 4 mg in the fed and fasted states.

Discussion

This study was conducted to evaluate the BE between daprodustat tablet strengths (2 versus 4 mg) in healthy Japanese male participants and to evaluate the food effect on the PK of daprodustat in healthy Japanese male participants. In part 1, the BE of daprodustat tablet strengths (2-mg tablet × 2 versus 4-mg tablet × 1) was investigated. The 90%CIs for the adjusted geometric mean ratios for AUC0-t and Cmax for 2 daprodustat 2-mg tablets compared with 1 daprodustat 4-mg tablet in the fasted state were within the predefined BE range of 0.80-1.25. Therefore, BE was demonstrated between the daprodustat 2-mg tablet and the daprodustat 4-mg tablet used in this study. Among the other PK parameters, the 90%CI for the adjusted geometric mean ratio for AUC0-inf was also contained within the 0.80-1.25 range for BE. In part 2, the effect of administration with a standard CKD meal (fed versus fasted) was investigated following a single oral dose of 4-mg daprodustat.
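As a concrete illustration of how such adjusted ratios are obtained, the following minimal sketch back-transforms log-scale differences into a geometric mean ratio with its 90%CI. It uses randomly generated placeholder values and a simplified paired analysis in place of the study's actual mixed-effects model (which included tablet strength or feeding condition, period, and group as fixed effects and subject as a random effect); all variable names and numbers are illustrative, not study data.

```python
import numpy as np
from scipy import stats

# Hypothetical log-scale AUC values for a 2-way crossover (one row per subject).
rng = np.random.default_rng(0)
n = 12
log_auc_ref = rng.normal(np.log(250.0), 0.30, n)                # reference (fasted)
log_auc_test = log_auc_ref + rng.normal(np.log(0.91), 0.10, n)  # test (fed)

# Within-subject differences on the log scale (test - reference).
diff = log_auc_test - log_auc_ref
mean_diff = diff.mean()
se = diff.std(ddof=1) / np.sqrt(n)

# 90%CI on the log scale, then back-transformation to the ratio scale.
t_crit = stats.t.ppf(0.95, df=n - 1)
gmr = np.exp(mean_diff)
ci = np.exp((mean_diff - t_crit * se, mean_diff + t_crit * se))

print(f"GMR: {gmr:.3f}, 90%CI: ({ci[0]:.3f}, {ci[1]:.3f})")
# Bioequivalence-style check: the 90%CI must lie within 0.80-1.25.
print("within 0.80-1.25:", bool(0.80 <= ci[0] and ci[1] <= 1.25))
```

The same back-transformation logic underlies the adjusted geometric mean ratios reported in Table 3; only the model used to estimate the log-scale difference differs.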
The exposures (AUC0-t, AUC0-inf, and Cmax) of daprodustat in the fed state were slightly lower (by 9%, 9%, and 11%, respectively) than those in the fasted state. However, there was considerable overlap in the 95%CIs of AUC0-t, AUC0-inf, and Cmax in the fed and fasted states. The median Tmax was delayed from 1.75 to 2.75 hours when daprodustat was administered with a standard CKD meal; however, there was considerable overlap in the range of Tmax values in the fed and fasted states. There was no apparent difference in t1/2 between the fed and fasted states. These data indicated that a standard CKD meal did not have a large effect on the PK parameters of daprodustat after a single oral dose of 4 mg daprodustat. The use of daprodustat for renal anemia may require individual dose adjustments, and until the clinical relevance of this change in exposure can be determined, within-subject variability in plasma exposure of daprodustat may be minimized by consistently taking daprodustat either with or without food. These results are consistent with the previous study in which the effect of a high-fat/high-calorie meal was investigated in non-Japanese participants.4

Following a single oral dose of 4 mg daprodustat in healthy Japanese male participants, there were no drug-related AEs, SAEs, or deaths. In addition, no safety signals were identified via vital signs, 12-lead ECGs, or clinical laboratory parameters. Overall, daprodustat was generally well tolerated in healthy Japanese male participants following administration of a single oral dose of daprodustat 4 mg in the fed and fasted states.

Conclusion

Bioequivalence was demonstrated between the daprodustat 2-mg tablet and the daprodustat 4-mg tablet. A standard CKD meal did not have a large effect on the PK parameters of daprodustat after a single oral dose of daprodustat 4 mg. Administration of single oral doses of daprodustat 4 mg was generally well tolerated in the healthy Japanese participants, and no new safety signals were identified.

Funding

GlaxoSmithKline sponsored and provided funding for the study (ClinicalTrials.gov identifier NCT03493386).

Data-Sharing Statement

Anonymized individual participant data and study documents can be requested for further research from www.clinicalstudydatarequest.com.
Investigation on the Micro Deformation Mechanism of Asphalt Mixtures under High Temperatures Based on a Self-Developed Laboratory Test

Rutting has always been considered the main distress in asphalt pavement. Dealing with rutting would benefit from a better understanding of how ruts form and from more reasonable testing of the rutting performance of mixtures. The objective of this paper is to systematically investigate the rutting mechanism by employing a self-designed rutting tester along with the corresponding numerical simulations. The deformation at different positions of specimens in the existing tracking tester was found to be inconsistent, and the loading did not correspond to reality. Accordingly, a more practical tester was proposed: the reduced scale circular tracking (RSCT) tester integrates the functions of asphalt mixture fabrication and rutting monitoring. The results demonstrated that the loading of the new tester is closer to the actual situation. In addition, determining the stress and displacement characteristics of particles in the asphalt mixture was found to be difficult due to the limitations of the testing methods. Therefore, a two-dimensional virtual rutting test based on the RSCT was built using PFC2D (Particle Flow Code 2 Dimension) to investigate the mechanism of rutting formation and to obtain the corresponding guidance. The numerical simulation showed that all particles of the specimen tended to move away from the load location. The main cause of rutting formation was the eddy-current flow of asphalt mastic driven by coarse aggregates. The aggregates with diameters ranging from 9.5 to 4.75 mm were observed to make the greatest contribution to rutting deformation. Therefore, the amount of aggregate in this size range should be given particular attention in the design of the mixture gradation.

Introduction

Asphalt pavement possesses superior performance qualities, such as surface smoothness, low noise, and convenient construction and maintenance, and has been widely used around the world [1-5]. With the aggravation of traffic loads and the continuous rise in global average temperatures, rutting of asphalt pavement under high temperatures has become a focus for road researchers [6-9]. Ruts damage the flatness of the road surface, can easily cause a vehicle to slip, and detrimentally affect the comfort of drivers. Accurate rutting tests and reasonable evaluation methods serve as the basis for solving the rutting problem.

Currently, the full-scale pavement test and the laboratory rutting test are the two major methods that characterize the rutting performance of asphalt mixtures. Full-scale pavement tests include the NCAT (National Center for Asphalt Technology) test road [10], the AASHO (American Association of State Highway Officials) test road [11], the Minnesota Road, WesTrack [12], the MLS (Mobile Loading Simulator) [13], the HVS (Heavy Vehicle Simulator) [14], and the ALF (Accelerated Loading Facility) [15].

The main contributions of this study are as follows:
1. A more practical rutting tester is developed considering the shortcomings of existing testers.
2. A two-dimensional RSCT virtual test is built based on the discrete element method, and the validity of the virtual test is verified.
3. The microscopic response of the numerical model is analyzed to study the formation mechanism of rutting, and the corresponding guidance is obtained.

Asphalt

Basic "Pan Jin" asphalt, with a density of 1.03 g/cm3 and a penetration grade of 60/80, is used in this study.
The technical parameters of the materials were tested according to the Chinese Test Methods of Bitumen and Bituminous Mixtures for Highway Engineering (JTG E20-2011). All parameters are shown in Table 1.

Aggregate

The aggregate and mineral powder used in this study are limestone, acquired from the Jiutai Stone Factory. The nominal maximum size of the aggregates was 16 mm. The technical parameters of the materials were tested according to the Chinese Test Methods of Aggregate for Highway Engineering (JTG E42-2005). All parameters are shown in Tables 2-4.

Preparation of Specimens

Two kinds of specimens were used: asphalt mixture specimens and asphalt mastic specimens. The dense-graded asphalt mixture used in this paper, with a nominal maximum size of 16 mm, is named AC16; this type of asphalt mixture is the most commonly used asphalt surface material in China. The asphalt mixture specimens were used to test the rutting performance, and the asphalt mastic specimens were used to obtain the microscopic parameters of the Burgers model.

Preparation of Asphalt Mixture

In the indoor tests, asphalt mixture specimens with a target porosity of 4% were prepared according to JTJ 052-2000. The optimum asphalt-aggregate ratio determined by the Marshall design method was 4.8%. Figure 1 shows the aggregate gradation and asphalt content of AC16.

Preparation of Asphalt Mastic

Following the aggregate gradation of the asphalt mixture, the aggregate gradation of the asphalt mastic is shown in Figure 2. Marshall specimens of asphalt mastic were prepared according to the requirements of the dynamic creep tests to attain the macroscale parameters. The asphalt content of the asphalt mastic, determined through the direct proportional relationship of the specific surface area between the asphalt binder and the fine aggregates, was 10%.

Methods

The rutting mechanism was studied using two laboratory rutting tests and a numerical simulation test. The two laboratory rutting tests were the WTT and the RSCT test, and the numerical simulation test was a discrete element rutting test based on the RSCT test. The specifications of both America (AASHTO Guide for Design of Pavement Structures) [32] and China (JTG D50-2017) state that permanent deformation of high-grade asphalt pavement over 15 mm affects the normal driving of vehicles; therefore, each test ends when the rut deformation reaches 15 mm. Each test (the WTT and the RSCT test) was carried out three times, and the average of the experimental data was used to express the results in order to ensure reliability.

The Wheel Tracking Test

The WTT was conducted and compared with the RSCT test to verify the effectiveness of the RSCT test. The WTT was carried out according to the Chinese standard GB/T0719. The experimental temperature applied in this test was 60 °C. The load pressure was 0.7 MPa, and the loading distance of the test wheel was 230 mm, with the test wheel performing 42 linear reciprocating passes per minute.

The Reduced Scale Circular Track Test

Of all rutting tests, the full-scale test is deemed the most practical. However, this test is expensive and difficult to repeat; therefore, laboratory tests are generally used to study the rutting performance of asphalt mixtures. The loading mode of the laboratory tests is usually linear reciprocating loading. The corresponding loading speed differs from the actual situation and is not constant, which causes different deformations at different positions of the specimen.
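The non-constant speed of reciprocating loading can be made concrete with a short calculation. The sketch below assumes a sinusoidal motion profile for the WTT wheel, which is an assumption for illustration rather than a documented property of the tester; the 230 mm stroke and 42 passes per minute are taken from the test conditions above.

```python
import numpy as np

stroke = 0.230          # m, loading distance of the test wheel
passes_per_min = 42     # single passes per minute

# Average linear speed over a pass.
v_avg = stroke * passes_per_min * 60 / 1000   # km/h
print(f"average WTT wheel speed: {v_avg:.2f} km/h")   # ~0.58 km/h, as cited later

# Assumed sinusoidal reciprocation x(t) = (stroke/2) * (1 - cos(w t)),
# one pass per half period; the speed is zero at both ends of the stroke.
period = 2 * 60 / passes_per_min              # s, full back-and-forth cycle
w = 2 * np.pi / period
t = np.linspace(0, period / 2, 200)           # one pass
v_inst = (stroke / 2) * w * np.sin(w * t)     # m/s
print(f"peak instantaneous speed: {v_inst.max() * 3.6:.2f} km/h")
# The peak is pi/2 times the average, and the speed drops to zero at each
# stroke end, unlike the constant linear speed of annular (circular) loading.
```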
Considering the above, a new laboratory rutting tester, called the RSCT tester, was independently developed. The loading mode of the tester is annular, ensuring that the deformation of the specimen is the same everywhere. The loading speed and size of the instrument can also be adjusted to match actual road conditions well.

Compositions of the Tester

The self-developed tester consists of four parts: the power system, the environmental simulation system, the loading system, and the monitoring system. The power system provides loading power and controls the loading frequency, and the environmental simulation system controls environmental conditions such as temperature and humidity. The loading system controls the loading position, weight, and size, and the monitoring system records the rutting depth and temperature. The tester is shown in Figure 3.

Parameters of the Tester

The functions of asphalt mixture preparation and rutting monitoring are integrated in the tester; thus, the number of compaction passes and the load conditions need to be determined. The asphalt mixture must first be compacted in the disc to form the specimen. The theoretical volume of the specimen was calculated according to the design height of the specimen, which was 5 cm, so the required mass of asphalt mixture could be obtained by density conversion. Thereafter, the mixed asphalt mixture was placed evenly in the disc and compacted by a test wheel with a width equal to that of the specimen. The compaction load was 65 kg, and following 50 compaction passes, four evenly spaced positions on the specimen were selected to measure the compaction height. The heights, shown in Table 5, reached the theoretical design height with an error of less than 4%. The specimen was then taken out for porosity inspection; the porosity was determined to be 5.1%, satisfying the specification requirements. The above procedure indicated that this compaction scheme was able to meet the requirements of the test and could be used to compact asphalt mixtures in the following tests.

The load conditions consisted of load and temperature. The temperature was controlled at 60 °C, and the load was set to 70 kg, making the load pressure 0.7 MPa.

Test Procedure

According to JTG E20-2011, the test procedure is as follows:
1. The resistance wire in the surrounding ring is heated to ensure a temperature of 120 °C.
2. The asphalt mixture, mixed according to JTG E20-2011, is placed evenly in the disc and tamped. Subsequently, the width of the test wheel is adjusted to the width of the specimen, and the asphalt mixture is compacted 50 times with a 65 kg load.
3. The test wheel is exchanged for a wheel with a width of 5 cm. The temperature is controlled at 60 °C by the environmental simulation system and held for 5 h. Finally, the test is carried out after the load is adjusted to 70 kg.

Numerical Simulation Test

Based on the RSCT test, a numerical simulation test was established. All aspects of the numerical simulation test are consistent with the RSCT test. The details are as follows. In terms of specimens, the gradation of the specimen in the numerical simulation test is consistent with that in the RSCT test, and the 2D specimen is a full-size replica of the cross section of the RSCT test specimen. In terms of loading conditions, both the load and the load time are the same as those in the RSCT test. The temperature condition of the RSCT test is reproduced in the numerical simulation by assigning the virtual specimen the micro-parameters obtained at the corresponding temperature. A brief introduction to the basic assumptions and contact models of the numerical simulation test is given below.

Basic Assumptions

In the discrete element analysis, particles are used to represent substances. According to the actual structure of the asphalt mixture, a discrete element model was established. It was assumed that the fine aggregate and mineral powder are wrapped in the asphalt, forming a homogeneous asphalt mastic material. Hence, the particles of the model may be divided into two parts: the coarse aggregate and the asphalt mastic.
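As a sketch of the two-phase division described above, the snippet below partitions a gradation at the 2.36 mm sieve into explicitly modeled coarse-aggregate classes and a lumped mastic fraction. The sieve series is the standard one for AC16, but the passing percentages are illustrative placeholders, not the Figure 1 gradation.

```python
# Percent passing per sieve (hypothetical values for illustration only).
sieves_mm = [16.0, 13.2, 9.5, 4.75, 2.36, 1.18, 0.6, 0.3, 0.15, 0.075]
passing = [95, 80, 62, 40, 28, 20, 14, 10, 7, 5]

# Percent retained between consecutive sieves.
retained = {}
for upper, lower, p_hi, p_lo in zip(sieves_mm, sieves_mm[1:], passing, passing[1:]):
    retained[(upper, lower)] = p_hi - p_lo

# Classes above 2.36 mm become explicit coarse-aggregate clumps; everything
# at or below 2.36 mm (plus binder and filler) is lumped into the mastic phase.
coarse = {k: v for k, v in retained.items() if k[1] >= 2.36}
mastic_pct = 100 - sum(coarse.values()) - (100 - passing[0])
print("coarse-aggregate classes (modeled as clumps):", coarse)
print(f"mass fraction lumped into the asphalt mastic phase: {mastic_pct:.0f}%")
```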
In the process of building the discrete element model, it is very important to simulate the actual situation of the asphalt mixture accurately. Numerous studies have demonstrated that the shapes and distributions of the coarse aggregate have a great influence on the performance of the asphalt mixture [33,34]. An unreasonable shape or distribution of the aggregate makes the simulation deviate from reality, resulting in large errors [35]. Therefore, the shapes and distributions must be considered for a high-precision rutting simulation. A simple circle obviously cannot represent the actual shape of the aggregate; hence, aggregates with complex shapes and a uniform distribution are used to form the virtual asphalt mixture specimen [30]. The gradation of the virtual asphalt mixture is consistent with the RSCT test. Accordingly, the basic assumptions are as follows:
1. The shape of the coarse aggregate is assumed to be an irregular pentagon. In this paper, an irregular pentagonal aggregate clump is composed of several original round particles using FISH codes.
2. The distribution of the coarse aggregate is random.
3. Because the coarse aggregate does not deform when subjected to loading, it is assumed to be a homogeneous material with sufficient strength and stiffness.
4. Due to its recovery capacity under high temperatures, the plastic deformation of the asphalt mastic is very small and can be ignored. Asphalt mastic has both elasticity and viscosity under high temperatures, and it is unreasonable to treat it as a purely elastic or purely viscous body. Therefore, the asphalt mastic is assumed to be a homogeneous viscoelastic material. The Burgers model is utilized to describe its properties, with the parameters obtained from the dynamic creep test.

Contact Models

Contacts and contact models in the discrete element model must be set up according to the assumptions above. In this study, three contact models exist:
1. The contact model between coarse aggregate particles is a linear contact model.
2. The contact between the asphalt mastic particles and the coarse aggregate particles is a linear contact bond model.
3. The contact model between asphalt mastic particles is a Burgers model.

The Reduced Scale Circular Track Test

After the specimens were kept at 60 °C for 5 h, the tests were performed. The tests ended when the rutting deformation reached 15 mm. At this time, the load times measured by the HYCZ-5A tester (a rut tester for the WTT) were 5219, and the load times measured by the RSCT tester were 9179. The rutting deformation curves of the two rut testers under the same number of load actions are shown in Figure 4.

In Figure 4, the rate of permanent deformation first increases and then slows down as the number of loading times increases. The rut deformation of the RSCT test is always smaller than that of the WTT when the load and the number of load times are the same. The load times of the WTT accounted for 9.22% of the total, and those of the RSCT test for 5.83%, when the deformation reached 5 mm; the load times of the WTT and the RSCT test accounted for 30.29% and 23.51%, respectively, when the deformation reached 10 mm. Both the ratio and the growth of the ratio for the RSCT test are smaller than those for the WTT. For a quantitative analysis of the deformation, a logarithmic curve was used to fit the data. The fitting formulas of the curves are shown in Table 6, and the determination coefficients of the two fits were 0.9785 and 0.9715, respectively, indicating a high degree of fit. The coefficient a of the RSCT test curve is 3.5594, which is smaller than the 4.136 of the WTT curve. The value of a represents the deformation rate: the smaller the value, the more load times are needed to reach a given deformation. Therefore, the rut deformation curve of the RSCT test always lies below the WTT curve, and the smaller a value indicates that the RSCT specimen deforms less under the same number of load times.

There are generally two causes of differing performance in asphalt mixtures: internal and external. The internal causes are the properties of the asphalt mixture materials. Both the materials and the gradation used in the two rut testers were the same, and errors in the actual test operation can be ignored; therefore, the difference between the results of the two testers was not due to internal causes. The external causes are the test environment and the load parameters, and only two external factors differ between the two rut testers: the speed and the form of the load. The load linear speed of the traditional rut tester was 0.58 km/h, while that of the new rut tester was 3.69 km/h; the speed of the new tester is closer to an actual vehicle speed. Practical experience shows that a lower load speed produces a larger permanent deformation.
Therefore, the specimen in the new rut tester showed a smaller deformation under the same number of load times. The second external cause is the form of the load. The specimen of the traditional rut tester is square, and the loading wheel performs a reciprocating linear motion during the test. This form of loading leads to an accumulation of asphalt mixture at the front and rear edges of the specimen, as well as different deformations at each position of the specimen. The load form of the new rut tester is annular; therefore, there is no accumulation of the mixture, and the rut deformation of the whole specimen is essentially the same. In view of these two points, the load of the new rut tester conforms more closely to that of the actual pavement; hence, the new tester can evaluate the rutting potential of asphalt mixtures more reliably.

Preparation

For the numerical simulation, the virtual test was established using the discrete element software PFC2D. The asphalt mastic was considered a homogeneous viscoelastic material, and the Burgers model parameters for the contacts between the asphalt mastic elements were obtained via the dynamic creep test. The instrument used in the test was the DTS-30. Marshall specimens were compacted 75 times on each side, and cylindrical specimens with a diameter of 101.6 mm and a height of 63.5 mm were prepared for the uniaxial creep tests [36-38], as shown in Figure 5. The experimental temperature applied in the creep test was 60 °C, consistent with the experimental temperature applied in the rutting test. The specimen was thermally insulated for 5 h. A square-wave pressure with a frequency of 0.5 Hz and a load of 100 kPa was applied to the specimen; every 2 s was considered one cycle, and there were 1800 cycles. In order to eliminate errors caused by the device, a preload of 10 kPa was first applied to the specimen for 5 min. The fitting curve of the dynamic creep test was then recorded, as shown in Figure 6.

The Burgers model is a four-element viscoelastic model composed of a Maxwell model and a Kelvin model in series. Its creep equation can accurately describe the stress-strain relationship of the asphalt mixture. The parameters of the Burgers model were obtained by fitting the creep equation of the Burgers model (Equation (1)) and are shown in Table 7. The correlation coefficient R was 0.999444, and the determination coefficient R² was 0.998888; the fitting effect is ideal.

ε = σ [ 1/E1 + t'/η1 + (1/E2)(1 − e^(−E2·t'/η2)) ]    (1)

where ε is the strain, σ is the stress, t' is the loading time, E1 is the Maxwell elasticity coefficient, η1 is the Maxwell viscosity coefficient, E2 is the Kelvin elasticity coefficient, and η2 is the Kelvin viscosity coefficient.
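The fitting step can be sketched with a nonlinear least-squares fit of Equation (1), as below. The creep data and parameter values are synthetic placeholders (the real test used a 0.5 Hz square wave rather than the constant load assumed here), so only the procedure, not the numbers, reflects the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def burgers_strain(t, E1, eta1, E2, eta2, sigma=100e3):
    """Creep strain of the Burgers model under constant stress sigma (Pa)."""
    return sigma * (1.0 / E1 + t / eta1 + (1.0 - np.exp(-E2 * t / eta2)) / E2)

# Synthetic creep record standing in for the 60 degC dynamic-creep data.
t_data = np.linspace(1, 3600, 600)
true = (20e6, 9e9, 35e6, 5e9)       # illustrative E1 (Pa), eta1 (Pa.s), E2, eta2
rng = np.random.default_rng(1)
eps_data = burgers_strain(t_data, *true) * (1 + 0.01 * rng.standard_normal(t_data.size))

p0 = (1e7, 1e9, 1e7, 1e9)           # rough initial guess keeps the fit stable
popt, _ = curve_fit(burgers_strain, t_data, eps_data, p0=p0, maxfev=20000)
print("fitted E1, eta1, E2, eta2:", popt)

resid = eps_data - burgers_strain(t_data, *popt)
r2 = 1 - resid.var() / eps_data.var()
print(f"R^2 = {r2:.6f}")            # the paper reports R^2 = 0.998888 for its fit
```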
In addition to the material parameters, the loading time applied in the model had to be determined. The conversion uses the contact length l of the area between the test wheel and the specimen and the one-time loading distance s of the test wheel. According to the previous RSCT test, the load time was found to be 328 min when the deformation reached 15 mm. Therefore, the total load time of the virtual rutting test was determined to be 0.0179 s, based on the time-temperature equivalence principle with a conversion coefficient of 10^4.
• The macro contact parameters between the coarse aggregate particles. Aggregates were considered elastic materials without deformation. The desired modulus of the coarse aggregates was 55.5 GPa [39]. The damping force was ignored, as the contact force between aggregate particles possesses no viscosity.
• The macro contact parameters between the asphalt mastic particles. Based on the time-temperature equivalence principle and the transformation equations [40], the micro-parameters of the asphalt mastic particles were calculated from the macroscale Burgers parameters and the disk thickness t; the results are shown in Table 8.
• The macro contact parameters between the coarse aggregate particles and the asphalt mastic particles. Because bond failure will not occur in the actual situation, the tensile strength and shear strength in this linear contact bond model were set to large values [25]. The calculated normal stiffness and tangential stiffness are shown in Table 9.
• Establishment of the rutting emulation test. The specific steps in establishing the model are shown in Figure 7.

In view of the stability of the mechanical properties of the materials, as well as the computational efficiency of the PFC software, the asphalt concrete in PFC was composed of mesoscale phases including coarse aggregates and asphalt mastic. More specifically, the asphalt mastic was regarded as homogeneous particles composed of asphalt binder, mineral filler, and fine aggregate with a nominal size smaller than 2.36 mm. First, a rectangular rutting test area of 300 × 50 mm was established, in which coarse aggregate balls of different sizes and at random locations were generated according to the preceding gradation. FISH codes were then used to cut the circular coarse aggregate balls into irregular pentagons in order to simulate the actual shape of the coarse aggregate. After recording the position and shape information of the balls larger than 2.36 mm, all balls were deleted, and balls with a radius of 1 mm were generated to fill the entire test area. The balls located at the positions of the cut coarse aggregates formed clumps without deformation, and the remaining balls represented the asphalt mastic. A certain number of asphalt mastic balls were then removed as air voids, and the contact models and parameters were set.

The loading condition of the rutting test was 0.7 MPa; to simulate the test, the loading condition of the discrete element model was also 0.7 MPa. Specifically, 250 balls with a radius of 1 mm were used to construct the loading clump tool (25 × 10), which has adequate strength and does not deform. After setting the constant loading pressure, the x-axis position of the loading clump tool was fixed to ensure the accuracy of the loading position and to avoid displacement of the tool during the loading process caused by the uneven distribution of the asphalt mixture aggregate. At this time, the virtual rutting test established according to the given procedure is shown in Figure 8.
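A plain-Python sketch of this generation procedure is given below. It is illustrative only and does not use the PFC2D/FISH API; the 42 clumps in the 9.5-4.75 mm grade match the count cited later from Table 10, while the other class counts and the omission of overlap checks are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
W, H = 300.0, 50.0   # mm, rutting test area

# Step 1: place coarse-aggregate circles with radii drawn per size class.
classes = {(16, 13.2): 3, (13.2, 9.5): 7, (9.5, 4.75): 42, (4.75, 2.36): 120}
aggregates = []
for (d_hi, d_lo), count in classes.items():
    for _ in range(count):
        r = rng.uniform(d_lo, d_hi) / 2.0
        aggregates.append((rng.uniform(r, W - r), rng.uniform(r, H - r), r))

# Step 2: replace each circle by an irregular pentagon (a stand-in for the
# FISH cutting step); vertices are random points on the circle.
def pentagon(x, y, r):
    ang = np.sort(rng.uniform(0, 2 * np.pi, 5))
    return [(x + r * np.cos(a), y + r * np.sin(a)) for a in ang]

clumps = [pentagon(x, y, r) for x, y, r in aggregates]

# Step 3: fill the area with 1 mm radius mastic balls on a grid, then delete
# a fraction of them to represent air voids (target porosity around 5%).
xs, ys = np.meshgrid(np.arange(1.0, W, 2.0), np.arange(1.0, H, 2.0))
mastic = np.column_stack([xs.ravel(), ys.ravel()])
mastic = mastic[rng.random(len(mastic)) > 0.05]
print(len(clumps), "aggregate clumps,", len(mastic), "mastic balls")
```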
The model is mainly composed of coarse aggregate clumps (red pentagons), asphalt mastic balls (blue balls), and the loading clump (red rectangle). The number of coarse aggregate clumps is shown in Table 10. The real test loading time is transformed by the time-temperature equivalence principle, as the loading time of the discrete element model should not be too long. The Burgers model parameters were also reduced so as to improve the efficiency of the model. The time-temperature conversion coefficient was set to 10,000 in order to maintain the model's accuracy, as demonstrated by numerous previous studies.

Model Validation

The HISTORY module in PFC2D was used to monitor the change in rutting deformation. The rutting change of the specimen was recorded and is shown in Figure 9. According to Figure 9, the deformation grows with increasing loading time. From a micro perspective, the rutting deformation is the slip of asphalt mixture particles. From the macroscopic point of view, the deformation of the loading area is obviously larger than that of the unloaded area. Combined with Saint-Venant's principle, it can be seen that the stress in the loading area is large and easily causes deformation, while the stress far away from the loading position is small and causes no obvious deformation. In addition, the humps of asphalt mixture on the left and right sides of the loading position are obvious in the figure. The humps arise because the asphalt mixture at the loading position is squeezed by the load and flows away from the load position; their shape is consistent with the actual shape of pavement rutting. In order to further quantify this degree of coincidence, the deformation curve recorded by the HISTORY module is shown in Figure 10, and the results of the virtual test are compared with those of the RSCT test.

The final deformation of the virtual track test was found to be 17.31 mm, an error of only 15.4% compared with that of the RSCT test. Figure 10 shows that, as the loading time increases, the deformation grows but the growth rate gradually slows; this agrees with the actual change in rut deformation. Figure 11 shows the locations of the coarse aggregate particles after loading, with red representing the aggregate particles. It is clearly evident that the density of aggregate particles under the loading position is larger than at the other positions, and the density of the aggregate particles gradually decreases from the loading position to the edges of the specimen. The density of the mastic particles at the raised positions is larger than at the other positions. This phenomenon is consistent with rutting samples extracted from actual pavements, proving the validity of the model [41,42]. According to the aforementioned data, the validity of the model was verified to be adequate.
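The quoted agreement can be cross-checked with the numbers reported above. In the sketch below, the intercepts of the Table 6 logarithmic fits are back-computed from the load counts at which each laboratory test reached 15 mm; this back-computation is an inference from the reported values, not a figure taken from the paper.

```python
import numpy as np

# Reported final rut depth of the virtual test versus the 15 mm RSCT value.
rel_error = abs(17.31 - 15.0) / 15.0
print(f"relative error of the final rut depth: {rel_error:.1%}")   # 15.4%

# Table 6 logarithmic fits y = a*ln(N) + b with a = 4.136 (WTT) and
# a = 3.5594 (RSCT); b is recovered from y = 15 mm at N = 5219 and N = 9179.
a_wtt, n_wtt = 4.136, 5219
a_rsct, n_rsct = 3.5594, 9179
b_wtt = 15.0 - a_wtt * np.log(n_wtt)
b_rsct = 15.0 - a_rsct * np.log(n_rsct)

# Compare the two laboratory curves at matched cycle counts (cf. Figure 4).
for n in (1000, 3000, 5000):
    print(f"N = {n}: WTT {a_wtt * np.log(n) + b_wtt:.1f} mm, "
          f"RSCT {a_rsct * np.log(n) + b_rsct:.1f} mm")
```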
Analysis and Discussion

During the loading process, it was found that, with increasing loading time, the aggregates exhibited obvious sliding behaviors: the aggregates on both sides of the loading wheel edge turned outwards and gradually moved upwards, while the aggregates under the loading wheel moved downward. The asphalt mastic particles moved in eddy currents driven by the aggregate particles. In order to analyze this clearly, the displacement vectors and contacts of the particles were plotted in a coordinate system, as shown in Figures 12 and 13. The coordinate system is a plane rectangular coordinate system defined as follows: the origin is at the center of the bottom of the specimen, the positive y-axis points upward, and the positive x-axis points to the right.

In Figure 12, the direction of the arrow indicates the direction of a particle's displacement, and the color indicates the magnitude of the displacement vector: red indicates the maximum displacement vector, and blue signifies the minimum. It can be seen from Figure 12 that four main kinds of particle displacement exist, in the A, B, C, and D regions of the specimen, respectively. Two kinds of displacement appear near the loading position. The first occurs under the loading wheel (A); due to the downward action of the loading wheel, this displacement is mainly in the negative y direction. The second kind of particle displacement occurs on both sides of the loading wheel edge (B); this displacement field is an eddy current and is obviously larger than the others, because the particles undergo both vertical and lateral displacement. The other two kinds of displacement occur in the parts of the specimen far away from the loading wheel; these displacements are small and decrease with the distance of the particles from the wheel. In the upper part of the specimen away from the loading wheel (C), the particle displacement is mainly a small movement in the positive y direction. In the lower part of the specimen far away from the loading wheel (D), the particle displacement is also small and mainly transverse. The four kinds of displacement vectors show the formation of the ruts and explain the uplift of the wheel-track edge during the rutting process [41,42].

In Figure 13, the change in the value of the contact force is consistent with that of the displacement vector at the corresponding location, which explains the state of the particle displacement vectors in the four regions of Figure 12: the macroscopic load that a particle bears is the contact force with its surrounding particles in the microstructure. The larger the contact force, the larger the load the particle bears and the larger its displacement vector. For example, Figure 13 shows that the contact force of the particles in region A is the largest; correspondingly, Figure 12 shows that the displacement vector of the particles in region A is also the largest. The same is true for the particles in the other regions.

According to the above observations, the displacement trend of the particles was briefly analyzed to study the mechanism of rutting formation.
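A post-processing sketch for the four-region classification is given below, assuming particle positions and displacement vectors have been exported from the DEM run. The region boundaries and all numerical data are illustrative placeholders rather than values from the model.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
x = rng.uniform(-150, 150, n)     # mm, origin at the bottom centre of the specimen
y = rng.uniform(0, 50, n)
dx, dy = rng.normal(0, 0.5, (2, n))   # placeholder displacement components, mm

half_wheel = 25.0                 # mm, half width of the 50 mm wide loading clump
under = np.abs(x) <= half_wheel                                   # region A
edge = (np.abs(x) > half_wheel) & (np.abs(x) <= 2 * half_wheel)   # region B
far_top = ~under & ~edge & (y > 25)                               # region C
far_bot = ~under & ~edge & (y <= 25)                              # region D

for name, mask in [("A (under wheel)", under), ("B (wheel edges)", edge),
                   ("C (far, upper)", far_top), ("D (far, lower)", far_bot)]:
    mag = np.hypot(dx[mask], dy[mask]).mean()
    print(f"{name}: mean |disp| = {mag:.3f} mm, mean dy = {dy[mask].mean():+.3f} mm")
```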
If the material properties, construction technology, and natural environment are the same, the gradation has the greatest impact on the rutting performance of the asphalt mixture. The influence of the particle size on rutting deformation was therefore quantitatively analyzed. The displacement of the aggregate particles after loading is extracted with FISH code, and the contribution rate (CR) of the coarse aggregates to rutting deformation is calculated as follows:

\mathrm{CR}_m = \frac{n_m D_m}{\sum_k n_k D_k} \times 100\%,

where D_m is the average absolute displacement of the aggregates in grade m, and n_m is the number of aggregates in grade m. Figure 14 shows the CR of coarse aggregates of different sizes for rutting deformation. The CR of particles with diameters larger than 4.75 mm was 62.9%, illustrating that the primary cause of rutting deformation was the displacement of large aggregates. The coarse aggregate in an asphalt mixture plays an important role in bearing load [43,44]. The skeleton structure of the asphalt mixture is mainly composed of large aggregates (larger than 4.75 mm), and Figure 14 shows that the CR values of these large aggregates are correspondingly large. The CR of aggregate particles with diameters ranging from 9.5 to 4.75 mm was found to be 24.6%, which is 1.1-2.4 times that of the other aggregates. This is because the aggregates with diameters from 9.5 to 4.75 mm are the main constituent of the skeleton structure (the number of 9.5-4.75 mm aggregates is 42, accounting for 80.77% of the total number of large aggregates). Moreover, the CR of aggregate particles with diameters between 16 and 13.2 mm was 10.1%, the smallest amount, while the CRs of the other aggregate particles were almost the same. This was because all particles of 16-13.2 mm were located in a part of the specimen with small particle displacement (area C in Figure 12). The above data show that the average absolute displacement of aggregate particles with diameters from 9.5 to 4.75 mm is 1.1-2.4 times that of the other aggregate particles. Therefore, it can be preliminarily inferred that aggregates with diameters from 9.5 to 4.75 mm make the largest contribution to pavement rutting. This point has been confirmed in the design of asphalt mixture gradation by the Bailey method. One of the key indexes of the Bailey method is the CA (coarse aggregate) ratio, which is mainly related to the passing amounts at NMPS/2 (nominal maximum particle size/2) and the PCS (primary control sieve) [45,46]. In this paper, the NMPS is 16 mm; NMPS/2 is 8 mm, which is close to 9.5 mm, and the PCS is 3.52 mm, which is close to 4.75 mm. These two key sieve sizes of the Bailey method are consistent with the particle sizes with the largest CR values in the previous conclusion. The effectiveness of the CR index is thus verified, and the amount of 9.5-4.75 mm aggregate deserves special attention [47][48][49].
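As an illustration of the contribution-rate calculation, the short sketch below computes CR values from per-grade displacement data. It is not the authors' FISH code, and it assumes the reconstruction of the CR formula given above (each grade's share of the total absolute displacement); the grade labels and numbers are invented for illustration only.

    def contribution_rates(grades):
        """grades: mapping of sieve-size label -> (n_m, D_m), where n_m is the
        number of aggregates in the grade and D_m their average absolute
        displacement. Returns each grade's share of the total displacement (%)."""
        totals = {label: n * d for label, (n, d) in grades.items()}
        grand_total = sum(totals.values())
        return {label: 100.0 * t / grand_total for label, t in totals.items()}

    if __name__ == "__main__":
        # Hypothetical example values, not the paper's measurements
        grades = {
            "16-13.2 mm": (8, 0.9),
            "13.2-9.5 mm": (12, 1.4),
            "9.5-4.75 mm": (42, 1.8),
        }
        for label, cr in contribution_rates(grades).items():
            print(f"{label}: CR = {cr:.1f}%")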
From Figure 15, it is apparent that all particles possess x-axial displacement. Essentially, all particles tend to move towards the edges of the specimen, in accordance with the actual spatial movement of asphalt mixture particles. The total lateral displacement of the particles increases with decreasing particle size: the number of aggregates increases rapidly as the aggregate size decreases, as shown in Table 10, so the total displacement of the small aggregates is large. The displacement of aggregates in the positive and negative directions of the x-axis was approximately the same, except for particles with diameters ranging from 16 to 9.5 mm. This demonstrates the uniformity and randomness of the aggregate distribution in the model; the exception is due to the small number of these particles and the asymmetry of their distribution. According to Figure 16, the CR of particles with diameters from 13.2 to 9.5 mm is observed to be 31.2%, the largest measured x-axial CR.
Moreover, this CR was 1.4-4.2 times that of the other aggregates, which may preliminarily imply that aggregates between 13.2 and 9.5 mm have a greater influence on lateral rutting deformation. Because the specimen has only a small transverse deformation, the x-axial CR value of the 9.5-4.75 mm aggregates is small even though these aggregates are the major part of the skeleton structure. This also shows that coarse aggregates play an important role in resisting permanent deformation [50,51]. In addition, compared with the CR value of aggregates with diameters larger than 4.75 mm for overall rutting deformation, the x-axial CR value of these aggregates changes only a little.
Comparing the total longitudinal displacement of particles with different diameters, the smaller the particle size, the larger the displacement, consistent with the trend of the lateral absolute displacement in Figure 15. The total longitudinal displacement of all particles in the positive direction is marginally greater than that in the negative direction, leading to the uplift morphology of the rut. This is also shown in Figure 18: compared with the CR value of aggregates with diameters larger than 4.75 mm for overall rutting deformation, the y-axial CR of these aggregates changed by 5.8%. This indicates that the structure of the asphalt mixture is unstable in the longitudinal direction, which leads to permanent deformation. In terms of the CR of coarse aggregates of different diameters in longitudinal rutting deformation, the CR of aggregates with diameters between 9.5 and 4.75 mm was the largest, at 25%; accordingly, it was 1.2-1.5 times that of the other aggregates, indicating that aggregates between 9.5 and 4.75 mm play an important role in longitudinal rutting deformation.
Conclusions

This paper presents a newly self-developed pavement rutting device as well as a discrete element model to analyze the mechanism of rutting.
Based on the completed analyses, the following conclusions may be drawn:

• Compared with existing laboratory experiments, the RSCT test produces uniform deformation at a low deformation rate, making it a more practical and accurate method for testing rutting performance. The RSCT test can thus be widely used.

• A virtual numerical simulation test was established according to the RSCT test, and the validity of the virtual rutting test using irregular pentagons as the aggregate shape was verified. The micromechanical responses show that all mixtures within the stress range exhibited a tendency toward displacement. The contact force and displacement of particles in the loading area were the largest and gradually diffused to the surrounding area.

• The asphalt mastic extruded by the displacement of aggregates exhibited rheological behavior, which mainly resulted in the formation of rutting. The CR index shows that 4.75-9.5 mm aggregates make the largest contribution to rutting deformation. Special attention should be given to the amount of these aggregates in the design of mixture gradation.

In the future, further research can be carried out in three aspects. The aggregate in the virtual rutting test needs to be made more consistent with actual aggregate morphology. The rutting mechanism of asphalt mixtures with different gradations needs to be studied. A new mixture design method and a rutting prediction method may be proposed based on the CR value.
2020-04-16T09:04:55.773Z
2020-04-01T00:00:00.000
{ "year": 2020, "sha1": "729ef1e47d375805e32d597c39d3d376736b4dac", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/13/7/1791/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b9c244b619660030de1b750d9a30bf4ec5afc78b", "s2fieldsofstudy": [ "Materials Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
225568665
pes2o/s2orc
v3-fos-license
The Effect of Operating Temperature on the Response Time of Optically Driven Liquid Crystal Displays: Optically driven liquid crystal displays (ODLCDs) realize their display function by tuning the easy axis of liquid crystal (LC) molecules under polarized blue light, and have been utilized in some optical devices owing to their ultra-low power consumption. However, a big issue arises in the response time, i.e., the rewriting time, of the ODLCD. The rewriting time of ODLCD samples was studied here. Rotational viscosity plays a very important role in decreasing the rewriting time of the ODLCD. As the operating temperature was raised from room temperature to nearly the clearing point, the rewriting time decreased considerably as the rotational viscosity decreased, for five different kinds of LCs. The rewriting time can be decreased from 5.2 s to 0.2 s, around 25 times, for the LC N4.

Introduction

With the advantages of the intermediate state of matter between isotropic liquid and crystalline solid, liquid crystal display (LCD) technology still plays an important role in the development of new materials and technologies. One can find applications in TV sets, laptops, pads, cellphones, and smart watches owing to the diversity in the size of LC devices. LCD technology became popular because of its large viewing angle, high resolution, and long lifetime, but it is currently challenged by other devices with thinner module thickness, faster response times and more vivid colors. The ODLCD, a new type of LCD, has been proposed recently [1][2][3][4][5]. ODLCD devices have their display unit apart from the driving electronics, which makes them significantly more compact, durable and flexible by allowing the replacement of glass substrates with plastic substrates [6][7][8][9][10]. The ODLCD technology allows the front substrate alignment layer to be tuned optically. Such a display has the LC sandwiched between two glass or plastic substrates with no current-conducting layer, which are spin-coated with the photo-alignment layer and a conventional rubbing alignment layer [11][12][13][14][15][16]. The photo-alignment layer is sensitive to light exposure, and its easy axis can be changed by exposure to polarized UV or blue light. The rubbing alignment layer is insensitive to light exposure and keeps its alignment fixed. In this way, the intensities of different pixels on the ODLCD panel can be modulated by polarized blue light exposure to display entirely different images. Thereafter, the images on the ODLCD can be displayed without any power consumption [17][18][19]. However, the ODLCD technology cannot be easily industrialized, mainly due to its long rewriting time. Some efforts have been made to decrease the rewriting time, such as applying an electrical field, process flow optimization, and mixing the LC with chiral dopants [9,10,17]. A new method for decreasing the rewriting time of the ODLCD is proposed here. By raising the operating temperature of the ODLCD to near the LC clearing point, the rewriting time can be decreased by around 26 times for the LC N4.

Materials and Methods

The molecular structure of the SD1 azo dye, shown in Figure 1A, provides a basis for the optically active (OA) photo-alignment layer. The orientation of the SD1 molecules can be controlled by irradiation with linearly polarized incident light at a wavelength of 450 nm.
When the OA SD1 layer is irradiated with the polarized blue light, the energy absorbed by the SD1 molecules is proportional to cos²θ, where the angle θ describes the orientation of the dye molecules with respect to the polarization vector of the exposing light [20]. The alignment mechanism can be described in terms of a non-uniform probability distribution with a strong angular dependence. The SD1 molecules whose transition dipole moments are parallel to the polarization plane of the exposing light receive excess energy, which reorients them from their initial orientation to being orthogonal to the aforementioned polarization plane. This process leads to an excess of chromophores whose absorption oscillator is perpendicular to the polarization plane. Therefore, exposing the SD1 substrate to polarized light of wavelength 450 nm provides an alignment direction (i.e., an easy axis) that is perpendicular to the polarization of the exposing light. The substrate has an almost zero pretilt angle and high anchoring energy (of the same order of magnitude as obtained with the rubbing alignment method). The easy axis of the SD1 layer can also be rotated by exposure to polarized light of the same wavelength but with a crossed polarization direction. The anchoring energy of the SD1 layer increases with the irradiation energy of the exposing light, becoming saturated at higher energies. Distinct patterns with different alignment orientations can be realized by means of multi-step irradiation. 32 grey levels can be made in the ODLCD by different methods, as shown in Figure 1B [15]. To fabricate the SD1 photo-alignment layer of the ODLCD samples with the desired pattern, a SD1 solution (1 wt.% in N,N-dimethylformamide, DMF) was deposited onto a glass or plastic substrate with conductive indium tin oxide (ITO) layers, and then spin-coated at a speed of 3000 rpm for 30 s. The coated substrate was then heated for 10 min on a 100 °C hotplate to reach a film thickness of 10 nm. After the excess solvent was removed by heating, the substrate was exposed under a linearly polarized blue light-emitting diode (LED) light (450 nm; I = 5.5 mW/cm²) for 1 min to obtain the initial alignment. To fabricate the rubbing alignment layer (optically passive, OP) for the ODLCD samples, a polyimide (PI) solution was deposited onto the glass or plastic substrate with the conductive ITO layers, and then spin-coated at a speed of 3000 rpm for 100 s. The coated substrate was then heated for 60 min on a 230 °C hotplate and cooled down to room temperature; it was ready for use after the rubbing process. The aforementioned two substrates were assembled into a sample with AB glue and 10 µm spacers in between, as shown in Figure 1C. Next, the sample, with a photomask on top, was exposed to the same blue LED light, with its polarization plane orthogonal to the initial polarization direction, to obtain a precise pattern. The processing procedure presented in Figure 1C enables us to control the precise micro-domain alignment of the LC by simply modulating the easy axis through the polarization of the exposing light. As the SD1 molecules align perpendicular to the polarization direction of the incident light, distinct tunable patterns with different alignment orientations can be realized by a two-step irradiation process [21][22][23]. The main operating principle of the ODLCD is switching between planar and twist nematic electro-optical modes [19].
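The angular selection just described (absorption probability proportional to cos²θ, followed by reorientation) lends itself to a simple toy simulation. The sketch below is only an illustrative Monte Carlo caricature of the mechanism, not a quantitative model of SD1 photochemistry; the molecule count, step count, and absorption scale are arbitrary choices.

    import math
    import random

    def photoalign(n_molecules=10000, n_steps=200, absorb_scale=0.5):
        """Toy Monte Carlo of azo-dye photoalignment: molecules whose transition
        dipole is parallel to the light polarization (theta = 0) absorb with
        probability ~ cos^2(theta) and are kicked to a new random orientation.
        Orientations perpendicular to the polarization accumulate over time."""
        # theta: angle between molecular dipole and light polarization, in [0, pi/2]
        thetas = [random.uniform(0.0, math.pi / 2) for _ in range(n_molecules)]
        for _ in range(n_steps):
            for i, th in enumerate(thetas):
                if random.random() < absorb_scale * math.cos(th) ** 2:
                    thetas[i] = random.uniform(0.0, math.pi / 2)  # re-randomize
        return thetas

    if __name__ == "__main__":
        thetas = photoalign()
        near_perp = sum(1 for th in thetas if th > math.pi / 3)  # within 30 deg of perpendicular
        print(f"fraction near perpendicular: {near_perp / len(thetas):.2f}")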
Experiments and Results

As in our previous work [9], the response time of the ODLCD sample is determined by the following quantities: γ_1, the rotational viscosity; χ_E, the absorption coefficient; E_exp, the exposure energy; W, the azimuthal anchoring energy; and h, the cell gap. The response time is linearly proportional to the rotational viscosity [24][25][26]. For non-in-plane rotating LC molecules, the effective rotational viscosity γ_1* can be expressed as

\gamma_1^* = \gamma_1 - \frac{2(\alpha_3 \sin^2\theta - \alpha_2 \cos^2\theta)^2}{2\alpha_1 \sin^2\theta \cos^2\theta + (\alpha_5 - \alpha_2)\sin^2\theta + (\alpha_3 + \alpha_6)\cos^2\theta + \alpha_4},

where the α_i are the Leslie viscosity coefficients and θ is the polar angle of the LC director. Generally, the effective rotational viscosity results from a complex combination of molecular rotation angle, molecular shape, moment of inertia, activation energy, and temperature. Among these factors, the activation energy and temperature are the most crucial. The activation energy depends on the detailed intermolecular interactions. An empirical rule is that for every 10 degrees of temperature rise, the rotational viscosity drops by about a factor of two. Temperature has a great influence on the physical properties of a thermotropic LC: as temperature increases, the birefringence, dielectric anisotropy, viscosity, and elastic constants all decrease, but at different rates. A set of values of the rotational viscosity for five different LCs was taken from references [25,27], as shown in Table 1. These LCs are MBBA, 5CB, 8CB, EM and N4, respectively. The temperature range is from 20 to 74 °C, and for each LC the maximum operating temperature is very close to the clearing point. The data in the table are plotted in Figure 2A-E. They show that the rotational viscosity decreases as the temperature increases for these five different kinds of LC, approximately linearly with temperature.
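Combining the empirical rule quoted above (rotational viscosity roughly halves per 10 °C of heating) with the linear dependence of the rewriting time on γ_1 gives a quick back-of-the-envelope estimate of the temperature gain. The sketch below is only such an estimate; the room-temperature viscosity value is hypothetical, and the 5.2 s starting point is the room-temperature rewriting time quoted for LC N4 in the abstract.

    def rotational_viscosity(gamma_room, t_celsius, t_room=20.0):
        """Empirical estimate: viscosity halves per 10 degC above room temperature."""
        return gamma_room * 0.5 ** ((t_celsius - t_room) / 10.0)

    def rewriting_time(tau_room, t_celsius, gamma_room, t_room=20.0):
        """Rewriting time scales linearly with the rotational viscosity."""
        return tau_room * rotational_viscosity(gamma_room, t_celsius, t_room) / gamma_room

    if __name__ == "__main__":
        tau_room = 5.2      # s, room-temperature rewriting time (LC N4, from the abstract)
        gamma_room = 0.4    # Pa*s, hypothetical room-temperature rotational viscosity
        for t in (20, 30, 40, 50, 60, 70):
            print(f"{t:3d} degC: tau ~ {rewriting_time(tau_room, t, gamma_room):.2f} s")

Under these assumptions, the estimate at 70 °C comes out near 0.16 s, of the same order as the 0.2 s minimum rewriting time reported below.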
The operational speed of the ODLCD is represented by the response time (or rewriting time), which was defined as the time at which the normalized dose-dependent transmittance through the ORW LC cell reaches 90%. For the TN cell operated in the Mauguin regime, the light transmittance can be written as a function of the twist angle ϕ of the LC. In Figure 2F, the normalized intensity as a function of the polarized blue light exposure time is plotted for five different kinds of LCs (different from the LCs in Table 1). The response times for these five LCs are clearly indicated between the two parallel orange lines marked at 10% and 90%. The rewriting times for the different LCs were measured with the setup in Figure 3F: the blue laser was used for writing, erasing, or rewriting patterns on the ODLCD cell, and the green laser was used for the transmittance measurement. As shown in Figures 2A-E and 3A-E, the rotational viscosity dropped considerably as the temperature increased; in the meantime, the rewriting time of the ODLCD also decreased by about 10, 5, 5, 9 and 26 times, respectively, as the operating temperature was raised from room temperature to nearly the clearing point for each kind of LC. The minimum rewriting time is about 0.2 s, for the LC N4.
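The 90% threshold definition above can be applied directly to a measured transmittance trace. A minimal sketch of that extraction follows; the exponential trace is a synthetic stand-in for measured data, not one of the curves in Figure 2F.

    import numpy as np

    def rewriting_time_from_trace(t, transmittance):
        """Return the first exposure time at which the normalized transmittance
        reaches 90%, the operational definition of the rewriting time used here."""
        trans = np.asarray(transmittance, dtype=float)
        norm = (trans - trans.min()) / (trans.max() - trans.min())
        above = np.nonzero(norm >= 0.9)[0]
        if above.size == 0:
            raise ValueError("transmittance never reaches 90% of its range")
        return t[above[0]]

    # Example with a synthetic saturating trace
    t = np.linspace(0.0, 10.0, 1000)       # s
    trace = 1.0 - np.exp(-t / 2.0)         # hypothetical exposure response
    print(f"rewriting time ~ {rewriting_time_from_trace(t, trace):.2f} s")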
Conclusions

In conclusion, the dependence of the ODLCD rewriting time on the operating temperature and the rotational viscosity was studied. The rotational viscosity is a very important factor for the rewriting time of the ODLCD. By changing the operating temperature from room temperature to near the clearing point for five different kinds of LCs filled in ODLCD cells, the rewriting time decreased considerably as the rotational viscosity decreased. By this method of reducing the rotational viscosity, the rewriting time can be shortened by around 25 times for the LC N4.
2020-07-23T09:06:30.818Z
2020-07-20T00:00:00.000
{ "year": 2020, "sha1": "f714166562576ab5e08a6a828f7f9367ddbbc8ca", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4352/10/7/626/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0095cfe351f716821b5f72cfa19d1e966270f8e5", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
250244145
pes2o/s2orc
v3-fos-license
A Survey and Empirical Evaluation of Parallel Deep Learning Frameworks: The field of deep learning has witnessed a remarkable shift towards extremely compute- and memory-intensive neural networks. These newer, larger models have enabled researchers to advance state-of-the-art tools across a variety of fields. This phenomenon has spurred the development of algorithms for distributed training of neural networks over a larger number of hardware accelerators. In this paper, we discuss and compare current state-of-the-art frameworks for large-scale distributed deep learning. First, we survey current practices in distributed learning and identify the different types of parallelism used. Then, we present empirical results comparing their performance on large image and language training tasks. Additionally, we address their statistical efficiency and memory consumption behavior. Based on our results, we discuss the algorithmic and implementation portions of each framework which hinder performance.

INTRODUCTION

The previous decade witnessed an explosion in the development of machine learning algorithms. In particular, deep learning (DL), a subset of machine learning focused on using neural networks for function approximation, has gained widespread popularity. Deep neural networks (DNNs) have enabled the advancement of the state of the art in a plethora of research areas, ranging from visual recognition [28,55,61,64,72] and natural language processing [13,40,45,66] to computational chemistry and computer systems [4,19,21,34,36,62,63,67]. Their popularity stems from the DNN's ability to automatically learn low-dimensional representations from high-dimensional unstructured data such as images, text and audio. Given enough data, the representations learned by these models are often superior to handcrafted features designed by domain experts. The advances in accelerator technology, increased memory capacity per accelerator, and faster networks have encouraged users of deep learning to train neural networks with increasingly larger numbers of parameters. Figure 1 shows the increasing number of parameters in the largest networks since 2012. Oftentimes, it is impossible to train such networks on a single accelerator, either due to large execution time or insufficient memory capacity to fit these models. The latter problem is further exacerbated for contemporary neural architectures. For example, GPT-2, an extremely popular neural network used in NLP, requires 84 GB of GPU DRAM for training. This has motivated recent works in parallelizing the task of deep learning: training large models using multiple GPUs on a single node [18,25] or across multiple nodes connected by a network [14,22,31,39,46,54,70]. Different parallel frameworks offer different strengths and weaknesses in terms of performance (execution time for training), memory consumption, and statistical efficiency. Ben-Nun et al. [3] surveyed parallel DL frameworks and the different ways of exploiting the concurrency in neural networks in 2018. However, many new frameworks have emerged in the last three years, and the authors limited their discussion to a qualitative analysis. In this paper, we survey the most popular parallel DL frameworks available today and perform an empirical evaluation for the ones with open-source implementations to compare various metrics. This comparative evaluation can help users of deep learning select the best parallel framework for their training tasks.
We first present a comprehensive qualitative survey of the state of the art in parallel deep learning. We classify approaches for parallelization into three categories (defined in Section 2): data parallelism, intra-layer parallelism (sometimes referred to as model parallelism), and inter-layer parallelism (sometimes referred to as pipelining). We present the advantages and disadvantages of using each approach and discuss the capabilities of different frameworks that implement each type of parallelism. An end user who needs a scalable DL framework for their training experiments needs to know which frameworks provide the best statistical efficiency in the shortest possible time. To the best of our knowledge, an empirical comparison of parallel DL frameworks has not been attempted before. We identify two popular training datasets and two neural networks to benchmark several open-source DL frameworks including DDP [31], PipeDream [39], ZeRO [46], Megatron [54], TorchGPipe [25], and LBANN [16]. We use metrics that matter the most to a deep learning researcher: epoch execution times, statistical efficiency, and memory consumption. We run our experiments on two different supercomputers and clusters that are built using different generations of NVIDIA GPUs (A100s, V100s). Through these experiments, we seek to develop a consensus on the suitability of parallel frameworks to different scenarios. In this paper we contribute:

• A comprehensive survey of current state-of-the-art techniques in distributed deep learning, organized by parallelization strategy.

• An empirical evaluation of these techniques across vision and language tasks on 2 different clusters that, to our knowledge, has not been done before.

• A comparison of metrics, recorded across frameworks and architectures, that concern both the HPC and deep learning communities: runtime, scaling, statistical efficiency, and memory consumption.

BACKGROUND

In this section, we first give brief descriptions of deep learning terminology; we refer the reader to [?] for an in-depth review of deep learning. We then provide an outline of the three ways in which training of a deep neural network can be parallelized: data parallelism, intra-layer parallelism and inter-layer parallelism.

Definitions

Neural networks: Neural networks are parameterized functions for predicting properties of some input data. They excel at learning low-dimensional representations of complex, high-dimensional data.

Layers: Networks are composed of a sequence of layers, each of which takes the previous layer's output as input and computes some non-linear transformation.

Training and Loss: The process of finding the best parameters for a neural network is called training. This is done by minimizing a loss function over an input data set. Loss functions, such as mean squared error, are typically chosen to represent the prediction capability of the network.

Backpropagation: Backpropagation is a dynamic programming algorithm based on reverse-mode automatic differentiation that computes the gradients of each layer with respect to the loss function.

Gradient Descent and Learning Rate: Many training algorithms use variations of gradient descent to minimize the loss function. Gradient descent iteratively updates the parameters of the neural network based on the negative gradient such that the loss moves towards a minimum. The distance moved in the direction of the negative gradient is scaled by a value called the learning rate.
Mini-Batches, Epochs and Stochastic Gradient Descent: Computing gradients over the entire data set is expensive, so approximate gradients are computed using random mini-batches of data. This version of gradient descent is called batched stochastic gradient descent. Each full pass over the data set is called an epoch.

Statistical Efficiency: Statistical efficiency is a measure of the relationship between epochs and accuracy/loss. A training algorithm is said to be statistically efficient if it requires a low number of epochs to converge to a target validation loss.

Parallel Deep Learning Methods

Data Parallelism: Data parallelism refers to an even division of training data among worker GPUs. Each GPU possesses a copy of the neural network along with its parameters. Gradient calculation via backpropagation proceeds independently on all GPUs. These gradients are then subject to a collective all-reduce operation before the weight update step of the optimizer. The all-reduce step can either take place synchronously after each mini-batch, or asynchronously using a central parameter server. Implementations of data parallelism are widely available in popular deep learning frameworks like PyTorch [31] and TensorFlow [1]. Figure 2 illustrates data parallelism across 4 GPUs.

Intra-layer Parallelism: Intra-layer parallelism distributes the work of a layer by dividing its computation across multiple GPUs. Parallelizing an entire neural network entails applying intra-layer parallelism to some or all of its constituent layers. Research in this area is focused on optimizing the multi-GPU execution of different kinds of layers: fully connected, convolutional [11,41,53] and, more recently, the Transformer [54]. Intra-layer parallelism enables us to train neural networks that would not fit inside the DRAM of a single GPU.

Inter-layer Parallelism: In inter-layer parallelism, contiguous subsets of layers are mapped to individual GPUs. Each GPU is thus tasked with operating on a subset of the neural network. Exchange of activations and gradients among consecutive layers on different GPUs takes place via point-to-point communication primitives. To achieve true parallelism, more than one mini-batch should be active on different GPUs at a time, since the processing of a mini-batch across layers is sequential and cannot be parallelized. This is called pipelining. The maximum number of mini-batches active in the system at any given point of time is called the pipeline limit. Figure 3 shows inter-layer parallelism in action with four GPUs and a pipeline limit of four. Just like intra-layer parallelism, inter-layer parallelism makes it possible to train models whose memory requirements exceed the DRAM capacity of a single GPU.

Related Work

Pouyanfar et al. [43] and Ben-Nun et al. [3] comprehensively survey established techniques in sequential as well as distributed deep learning. Another survey [59] covers work on processing neural networks efficiently. Distributed training on big data software stacks (such as Spark and Hadoop) is explored by Lu et al. [32]. The network demands of parallel training are presented in [2], where typical communication workloads are profiled and characterized. Tang et al. [60] further characterize distributed training communication via analytical models and survey current practices. We also point the reader to the MLPerf benchmarks 1, which have become popular for comparing deep learning algorithms, frameworks, and hardware.
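Before surveying the parallel approaches, a minimal serial baseline helps fix the terminology from Section 2. The sketch below implements mini-batch SGD for a toy linear regression in plain NumPy; all numbers are illustrative and unrelated to the benchmarks in this paper.

    import numpy as np

    # Mini-batch SGD in miniature: linear regression on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1024, 4))
    true_w = np.array([1.0, -2.0, 0.5, 3.0])
    y = X @ true_w + 0.01 * rng.standard_normal(1024)

    w = np.zeros(4)
    lr, batch_size = 0.1, 64
    for epoch in range(10):                         # one epoch = one full pass
        perm = rng.permutation(len(X))              # random mini-batches
        for start in range(0, len(X), batch_size):
            idx = perm[start:start + batch_size]
            xb, yb = X[idx], y[idx]
            grad = 2.0 * xb.T @ (xb @ w - yb) / len(idx)  # MSE gradient on the batch
            w -= lr * grad                          # SGD update, scaled by the LR
    print(np.round(w, 2))                           # approaches true_w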
LITERATURE SURVEY

In this section, we present a survey of current state-of-the-art techniques and implementations for each type of distributed learning. Table 1 provides an overview of each discussed framework.

Data Parallelism

Data parallelism has been the go-to algorithm for parallelizing neural network training. It is simple in design and performs well with the correct settings.

1 https://mlcommons.org/en/training-normal-07/

3.1.1 Small Models. Data parallelism hinges on a synchronous all-reduce operation to gather the gradients across all GPUs. Naturally, this can become a bottleneck as the size of the gradients being shared grows. This problem is further exacerbated by the increasing computational capabilities of hardware accelerators: the ensuing decrease in the computation-to-communication ratio increases the severity of this problem. Initial attempts to reduce the communication overhead targeted introducing asynchrony into the stochastic gradient descent (SGD) algorithm [10,12,49]. However, Chen et al. [6] demonstrate that synchronous SGD and its variants converge faster and with higher accuracy than their asynchronous counterparts. Efforts to minimize communication bottlenecks continued. Zhang et al. [71] devise a strategy known as Wait-Free Backpropagation (WFBP) to interleave GPU and CPU computation and communication. WFBP reduces bursts in network traffic and lowers overall network strain. Using WFBP, Zhang et al. achieve speed-ups in training times on 16 and 32 single-GPU machines. WFBP has become the de facto approach for data parallelism frameworks. PyTorch DistributedDataParallel (DDP) [31], Horovod [52] and the Livermore Big Artificial Neural Network (LBANN) [16] toolkit are three open-source frameworks designed to assist in transitioning models into a distributed environment. Of these frameworks, PyTorch DDP has been extremely popular among the deep learning community due to its seamless integration with PyTorch [42]. Horovod is an implementation of WFBP for TensorFlow by Uber. LBANN accelerates parallelized deep learning by taking advantage of high performance computing hardware. These implementations share a striking similarity in the way they optimize WFBP: instead of issuing an individual all-reduce call for each parameter tensor, they fuse parameter tensors into fixed-size bins, and all-reduce calls are made at the granularity of these fused parameter bins. This increases network bandwidth utilization and thus the overall performance of these frameworks. Although the fused tensor bin size is kept as a tunable hyperparameter, Li et al. [31] demonstrate that the default bucket size of PyTorch DDP, i.e., 25 MB, is a reasonable choice for efficient scaling.
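For reference, the bucketing scheme just described is exposed directly in PyTorch DDP's interface; the skeleton below shows standard (abbreviated) usage, with the default 25 MB bucket size written out explicitly. Process launch and the training loop are elided, and the tiny model is a stand-in.

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    # One process per GPU, launched e.g. via torchrun, which sets LOCAL_RANK.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda()
    # Gradients are all-reduced in fused buckets, overlapped with backprop
    # (wait-free backpropagation); bucket_cap_mb=25 is DDP's default bucket size.
    ddp_model = DDP(model, device_ids=[local_rank], bucket_cap_mb=25)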
Large Models. Given the abundance of large training datasets, neural networks with increasingly large numbers of parameters have led to tremendous gains in performance on a variety of training tasks. As models and datasets grow in size, GPU memory capacity becomes a major bottleneck. Data parallelism requires each GPU to store its own copy of the neural network. With larger models and datasets, the memory required to house the activations, gradients and parameters of these neural networks often exceeds the capacity of a single GPU's DRAM. Data parallelism is thus rendered infeasible for training large models without memory optimizations. The Zero Redundancy Optimizer (ZeRO) [46] is a framework built over PyTorch to reduce per-GPU memory consumption. The paper observes that most memory during training is occupied by optimizer states, gradients, and parameters. ZeRO partitions these model states across GPUs to remove memory redundancies. With ZeRO, memory reduction scales proportionally with the number of GPUs while communication overhead only increases by a constant factor of 1.5x. The paper finds improvements in model size, training performance, and scalability with 100 billion parameter models on up to 400 GPUs using the Adam optimizer [27] and mixed precision. (Notes to Table 1: * FlexFlow does not provide a parameter size for the largest network it trains; we have defaulted to the largest network with a known size cited in their paper. ** These frameworks are compared quantitatively in Section 4.) Researchers at Microsoft have used ZeRO to train one of the largest neural networks in the language modeling literature: a 17B parameter neural network called Turing-NLG. Out-of-core training algorithms like NVIDIA's vDNN [50] are often used to train neural networks on a single GPU with insufficient DRAM capacity. These algorithms move data back and forth between the CPU and the GPU to free up space on the GPU. KARMA [65] is a framework built over PyTorch that extends this out-of-core approach to data parallelism on multiple GPUs. The authors design an efficient algorithm for automatic offloading and prefetching of the activations and parameters of the neural network to and from the CPU DRAM. These capabilities are further extended to support multi-GPU models by performing weight updates on the CPU. KARMA sees a 1.52x speed-up against other state-of-the-art out-of-core methods. It provides an efficient way to utilize data parallelism for large models that would otherwise necessitate other frameworks. ZeRO-Infinity [47] is another framework that provides support for out-of-core data-parallel training of multi-billion parameter models. Using their memory optimizations, the authors are able to deploy a 32 trillion parameter model on as little as 512 GPUs while maintaining a decent throughput of around 40% of the peak.

Large Effective Mini-Batch Sizes. Data parallelism is most efficient with high per-GPU workloads. This is ensured by fixing the per-GPU mini-batch size. As an example, suppose a ResNet model with a per-GPU mini-batch size of 128 is trained over 64 GPUs. This is equivalent to an effective mini-batch size of 8192 on a single GPU. It has been empirically shown that an extremely large effective mini-batch size has an adverse effect on the statistical efficiency of neural network training [17]. The naive approach to compensate for this is to increase the learning rate (LR). Krizhevsky [29] proposes to scale the LR linearly with the mini-batch size. Problems emerge as more workers are added to accelerate training: large LR values result in accuracy losses and training instability. Goyal et al. [17] propose an LR warmup scheme to combat accuracy loss: training begins with a lower LR that slowly builds up to a target value following the linear scaling rule. The paper was able to train ResNet-50 with a mini-batch size of 8K with accuracy matching smaller mini-batch models.
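A sketch of the linear-scaling-plus-warmup recipe just described follows: the target LR obeys the linear scaling rule, and the first steps ramp linearly up to it. The base values are illustrative choices, not the exact hyperparameters of [17].

    def warmup_lr(step, warmup_steps, base_lr, batch_size, base_batch=256):
        """Linear scaling rule with gradual warmup: the target LR is
        base_lr * (batch_size / base_batch); ramp linearly to it over warmup_steps."""
        target_lr = base_lr * batch_size / base_batch
        if step < warmup_steps:
            return target_lr * (step + 1) / warmup_steps
        return target_lr

    # Example: base LR 0.1 at batch 256, scaled to effective batch 8192
    for step in (0, 500, 1000, 5000):
        print(step, round(warmup_lr(step, warmup_steps=1000, base_lr=0.1,
                                    batch_size=8192), 3))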
You et al. [68,70] devise Layer-wise Adaptive Rate Scaling (LARS) as an alternate approach to LR warmup. LARS adapts the global LR to create separate LRs per model layer, based on the ratio between layer weights and gradient updates. The paper observes that this ratio varies across layers and provides insight into the efficacy of a layer's weight updates. You et al. utilize LARS to train AlexNet and ResNet-50 with a mini-batch size of 32K without accuracy loss. LARS experiences inconsistent performance gains across different deep learning tasks. You et al. [69] propose a general strategy to adapt any iterative optimizer for large mini-batch training. They apply this strategy to create LAMB, using the Adam optimizer as a base. Using LAMB, You et al. scale BERT training to a mini-batch size of 32K without performance degradation.

Intra-Layer Parallelism

State-of-the-art training techniques in intra-layer parallelism span from fine-grained parallel implementations of numerical kernels to dividing the coarse-grained work of a single layer across processes. It is often used in conjunction with other parallelization strategies such as data or inter-layer parallelism.

3.2.1 Fine-Grained Parallelism. At the fine-grained level, many techniques draw from existing numerical methods and adapt them to deep learning. Matrix multiplication and convolutions are the most utilized kernels and have been the focus of much optimization by the ML and broader scientific communities. Many accelerators and processors have paired software libraries which implement these kernels tuned to their hardware, such as cuDNN [9], MIOpen [24], and oneDNN. Accelerators have been at the core of fine-grained parallelism within a layer. Several works have introduced techniques, some ML-based, for mapping layer computations to the hardware optimally [23,30,58]. Here, a mapping is the tiling strategy, computation order, and parallelization strategy; hence, the search space for optimal mappings can be immense. There has been recent interest in using hardware accelerators other than GPGPUs to train deep networks. FPGAs have emerged as a viable candidate for DNN acceleration due to their lower energy consumption compared to GPUs and the flexibility provided by their reconfigurability. Recent work has explored optimizing DNN operations on FPGA hardware [33]. More recently, novel architectures have been proposed to improve memory re-use and parallel performance [7,8,26].

Coarse-Grained Parallelism. Orthogonal to the fine-grained compute kernels, techniques have been developed to divide the work inside a layer along coarser tensor dimensions. These typically involve using optimization algorithms and/or ML to identify optimal partitions of computation and data within a layer and then developing a parallel strategy for execution. Song et al. propose a method for finding communication-optimal parallel strategies on accelerator arrays in linear time [57]. Similarly, Jia et al. introduce a novel Markov Chain Monte Carlo based search for finding optimal parallelization strategies, which encompasses intra-layer parallelism in its operator dimension [22]. MeshTensorFlow accomplishes a similar effect by mapping tensor dimensions to an n-dimensional processor array or "mesh" [53]. These tensors are split and/or replicated across the mesh, such that the computation can be done in parallel using the processor array. The framework itself provides an interface for users to define a layout. Any layout will produce the same results for the same problem; however, the memory footprint and performance can be greatly improved with an optimal layout. Dryden et al. [15] also propose several algorithms for partitioning convolution tensor dimensions with the goal of reducing all-reduce time during training. Their algorithms are available in the LBANN framework.
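The core idea behind these tensor-dimension partitions can be shown on a single fully connected layer. The sketch below, a generic illustration rather than any framework's implementation, splits a weight matrix along its output dimension across four hypothetical devices (here, plain array shards) and verifies that concatenating the per-shard results reproduces the full product.

    import numpy as np

    # Column-parallel sketch of one fully connected layer: the weight matrix is
    # split along its output dimension across "devices" (plain array shards).
    def column_parallel_linear(x, weight_shards):
        """Each shard computes its slice of the output; concatenation recovers
        the full result, so the forward pass needs no communication."""
        return np.concatenate([x @ w for w in weight_shards], axis=1)

    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 32))        # mini-batch of activations
    w = rng.standard_normal((32, 64))       # full weight matrix
    shards = np.split(w, 4, axis=1)         # partition across 4 hypothetical devices
    assert np.allclose(column_parallel_linear(x, shards), x @ w)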
Convolutions are also parallelized in [41] with a hybrid parallelism that extends data parallelism with parallelism in the spatial domain. For language-based models, Megatron [54] achieves a similar parallelism by partitioning the blocks in transformer layers across processors. Megatron has been increasingly used as language models become more common and larger (see Figure 1). It has shown up to a 74% weak scaling coefficient on 512 GPUs. Dividing layer tensor dimensions across processors is, however, very sensitive to the layer type. For instance, fully connected layers involve an all-to-all computation and therefore all-to-all communication, which is more expensive than data parallelism's all-reduce. Thus, it is hard to generalize coarser-grained intra-layer parallelism for models with custom layers. To combat this, some methods look strictly at compute graph operations and not model layers [22].

Inter-Layer Parallelism

True inter-layer parallelism can only be achieved by pipelining, i.e., having multiple mini-batches active in the system at any given instance. There are two ways to achieve pipelining: with and without flushing. In this section, we discuss the pros and cons of both approaches. We also provide an overview of frameworks that implement them.

Pipelining with Flushing. Pipelining with flushing divides a mini-batch into micro-batches of equal size. These micro-batches are injected one by one into the system. GPUs accumulate gradients from all the micro-batches in the system. A GPU updates its weights only after it has finished the backward pass of the last micro-batch. The next mini-batch and its corresponding micro-batches are injected after all the GPUs have finished updating their weights. This approach to pipelining is also called micro-batching. The number of micro-batches is usually kept much larger than the number of workers so that each worker can compute concurrently. Ensuring optimum hardware utilization requires a large mini-batch size; to maintain statistical efficiency at large mini-batch sizes, the same set of solutions discussed in Section 3.1.3 can be used. Figure 3 shows pipelining with flushing in action. Worker GPUs incur idle time between the forward pass of the last micro-batch and the backward pass of the first micro-batch. These idle periods are called pipeline bubbles; they reduce the overall hardware utilization of the system. A load-balanced mapping of layers to GPUs is absolutely critical to maximize performance. The load balancing algorithm must also be communication-aware, because the activations and gradients exchanged at GPU boundaries can be on the order of gigabytes for large neural networks. An efficient implementation of pipelining with flushing must have load balancing support. This idea was first introduced by Huang et al. in GPipe [18]. Using GPipe, they trained a 557M parameter neural network, AmoebaNet-B [48], on the ImageNet [51] dataset and surpassed the state of the art in a number of downstream image classification tasks. TorchGPipe [25] is an unofficial open-source implementation of GPipe built on the PyTorch [42] backend. GEMS (GPU-Enabled Memory Aware Model-Parallelism System) [20] introduces a novel approach to increase hardware utilization. This framework proposes an algorithm to train two neural networks concurrently using pipelining without flushing on multiple GPUs. They double the throughput of the system by overlapping the forward and backward passes of the two neural networks. We refer the reader to their paper for the details of their implementation.
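On a single device, the bookkeeping of pipelining with flushing reduces to plain gradient accumulation: gradients from all micro-batches of one mini-batch accumulate before a single weight update. The sketch below shows only that single-process skeleton; real pipeline frameworks additionally spread the layers over GPUs and schedule micro-batches to fill the pipeline. The toy model and data are illustrative.

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 4)
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

    minibatch_x = torch.randn(32, 16)
    minibatch_y = torch.randn(32, 4)
    num_micro = 4

    optimizer.zero_grad()
    for xm, ym in zip(minibatch_x.chunk(num_micro), minibatch_y.chunk(num_micro)):
        loss = loss_fn(model(xm), ym) / num_micro  # scale so gradients average
        loss.backward()                            # .grad buffers accumulate
    optimizer.step()                               # the "flush": one update per mini-batch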
Recently, ZeRO [46] and Megatron [54] also extended support for pipelining with flushing as their approach to inter-layer parallelism. TorchGPipe [25] provides a load balancing algorithm that seeks to balance the net execution time of the forward and backward pass of a micro-batch on each GPU. However, their algorithm ignores the communication overhead of exchanging tensors across GPU boundaries. Megatron divides the layers of a transformer across GPUs, which is optimal because all the layers of a transformer are identical. ZeRO also provides an identical strategy that divides the layers equally across GPUs. Additionally, ZeRO supports a load balancing algorithm that equalizes memory consumption across GPUs. AxoNN [56] introduced a novel asynchronous communication backend for inter-layer parallelism. To the best of our knowledge this is the first work that exploits asynchrony to increase hardware utilization by opting for MPI instead of NCCL. The authors also introduce a memory optimization algorithm that they use to decrease the pipeline depth, increase data parallelism, and outperform the state of the art by 15%-25% on models with as many as 100 billion parameters.

3.3.2 Pipelining without Flushing. In this approach the number of mini-batches active in the system is kept constant. As soon as a mini-batch finishes its backward pass on the first GPU, a new mini-batch is injected into the system to maintain full pipeline occupancy. Unlike pipelining with flushing, weight updates on a GPU take place as soon as it has finished the backward pass of a mini-batch. This method of pipelining seeks to increase hardware utilization by removing the flush-induced bubbles in the pipeline. However, the statistical efficiency of such a training algorithm drops drastically. This is due to a problem called weight staleness, which occurs when newer mini-batches in a pipeline perform forward passes with stale weights that are yet to be updated by the backward passes of older mini-batches. This is one of the major reasons why pipelining without flushing has not seen widespread adoption. PipeDream [39] is a framework that implements pipelining without flushing. It employs an algorithm called weight stashing to counter weight staleness (a sketch of the idea follows below). We refer the reader to their paper for the exact details of the implementation. Chen et al. [5] suggest predicting future weights from stale weights using a variant of SGD with momentum [44]. PipeDream additionally proposes a static load balancing algorithm that is communication-aware. It instruments each layer and uses the profiling data in its load balancer. Their framework also has an additional provision to replicate compute-intensive layers across GPUs to increase their throughput. Replicated layers synchronize their gradients via all-reduce after each backward pass.
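The following is a minimal sketch of the weight stashing idea from Section 3.3.2, under our simplified reading rather than PipeDream's actual implementation; the class and method names are illustrative.

```python
from collections import deque

class StashingStage:
    """Illustrative sketch of weight stashing (not PipeDream's actual code)."""
    def __init__(self, weights):
        self.weights = list(weights)   # current (latest) weights
        self.stash = deque()           # weight versions of in-flight batches

    def forward(self, batch_id):
        # Record the exact weights this mini-batch's forward pass uses.
        self.stash.append((batch_id, list(self.weights)))
        # ... compute and return activations with self.weights ...

    def backward(self, batch_id, grads, lr=0.1):
        stashed_id, stashed_w = self.stash.popleft()
        assert stashed_id == batch_id, "backward order must match forward order"
        # Gradients are evaluated against the stashed weights, keeping each
        # forward/backward pair consistent even though updates from other
        # mini-batches may have landed on self.weights in between.
        self.weights = [w - lr * g for w, g in zip(self.weights, grads)]
```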
4 EXPERIMENTAL SETUP

In this section we present a detailed overview of our empirical evaluation of a number of parallel deep learning frameworks.

4.1 Choice of Frameworks

We use DDP 2 [31], ZeRO 3 [46], Megatron 4 [54], PipeDream 5 [39], TorchGPipe 6 [25], LBANN 7 [16], and AxoNN 8 [56] for our empirical analysis. For Megatron we profile its implementations of data parallelism and intra-layer parallelism separately; we refer to these as Megatron-data and Megatron-intra respectively. This subset is representative of the three types of parallelism discussed in Section 3. We select frameworks which have open-source implementations, are easy to set up, and have a relatively large user base. We also tried to include MeshTensorFlow [53] and FlexFlow [22] in our set of frameworks; however, despite our best efforts we could not set them up successfully for experimentation on our machines. To prevent data loading from becoming a bottleneck we copy training datasets onto node-local SSDs before training. Data is loaded using PyTorch's distributed data loader with several worker processes; we defaulted to four processes, separate from the main process, to read in data (a sketch of the PyTorch-side setup follows below). MegatronLM implements its own data loaders, which we used with Megatron rather than PyTorch's; in practice we found these to be much faster than the default PyTorch data loaders. For a fair performance evaluation of each framework we used mixed precision on the V100 and A100 cards on Lassen and ThetaGPU [38]. Of the frameworks we ran, DDP, Megatron, LBANN, and ZeRO were the only ones that supported mixed precision with distributed training. All of the listed frameworks use PyTorch 1.

4.2 System Hardware

Table 2 describes the systems and hardware used in our training. Lassen is an IBM machine at Lawrence Livermore National Laboratory with a Mellanox network; it currently sits at number 26 on the Top500 list. ThetaGPU is a GPU extension of the Cray XC40 Theta system. Each system was selected to be representative of typical machines used for DL training. Lassen is similar to other leadership HPC systems with GPU-dense nodes. The ThetaGPU extension of Theta with dense A100 nodes is more typical of current cutting-edge AI machines.

4.3 Datasets and Neural Networks

We evaluate the aforementioned subset of frameworks on two popular deep learning tasks: image classification and language modeling. For the former we use the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 dataset [51]. This dataset has been widely used to train large state-of-the-art image classification neural networks throughout the last decade. It consists of more than a million RGB images of dimension 224x224, evenly divided across 1000 image classes. We use this dataset to train the VGG-16 [55] architecture on our selected subset of frameworks. Language modeling is an unsupervised learning task wherein models are trained to predict the next word in a sentence given all of the previously occurring words. We use the Wikitext-103 [37] dataset for our language modeling workloads. This dataset comprises more than 28,000 articles from the English Wikipedia, amounting to over 100 million tokens. Table 3 provides an overview of the datasets used across our experiments. Hyperparameters were chosen based on the corresponding MLPerf [35] benchmarks, which are a standard means of comparison for DL training; because of this we keep the parameters fixed between frameworks. For parameters not included in the MLPerf description we choose them based on the values given in the respective papers. We ensure that training with our hyperparameters gives reasonable performance on the validation set. Table 3 also provides an overview of the hyperparameters applied to each model. It is possible that further tuning could improve the performance and/or statistical efficiency.
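The data-loading setup described in Section 4.1 can be sketched as follows; this is an illustration rather than our exact harness: the synthetic TensorDataset stands in for the real datasets, and torch.distributed is assumed to have been initialized elsewhere.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Synthetic stand-in for an ImageNet-style dataset copied to node-local SSD.
dataset = TensorDataset(torch.randn(1024, 3, 224, 224),
                        torch.randint(0, 1000, (1024,)))

# DistributedSampler shards the dataset across ranks; it assumes
# torch.distributed.init_process_group(...) has already been called.
sampler = DistributedSampler(dataset)

loader = DataLoader(dataset,
                    batch_size=32,
                    sampler=sampler,
                    num_workers=4,     # four reader processes, as in our setup
                    pin_memory=True)
```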
For efficient scaling to larger GPU counts, data parallel algorithms typically use a fixed mini-batch size per GPU to maintain a constant computational workload per GPU. Thus, to ensure a fair comparison of the other frameworks with DDP, AxoNN, ZeRO, LBANN and Megatron-data, we do the following for each framework:

• Megatron-intra: we linearly scale the mini-batch size with increasing numbers of GPUs.
• TorchGPipe: we fix the size of a micro-batch and set the number of micro-batches to 4 times the GPU count.
• PipeDream: we fix the size of a mini-batch. PipeDream ensures a constant computational workload on each GPU by increasing its pipeline limit automatically.

4.4 Exceptions

We make the following exceptions to the experimental setups listed above. We only show results for PipeDream on a subset of the GPU counts because the framework deadlocks at higher GPU counts. We only show results for TorchGPipe up to 8 GPUs on ThetaGPU and 4 GPUs on Lassen, as it is only applicable to a single node. We only show results for LBANN on Lassen, as we had difficulties building the framework on ThetaGPU. Likewise, we only show AxoNN results on Lassen due to jobs not finishing on ThetaGPU.

4.5 Evaluation Metrics

For our analysis we use the metrics that matter most to a deep learning researcher: epoch execution times, statistical efficiency, and GPU memory consumption. Statistically efficient training algorithms or frameworks require fewer epochs to reach a certain target accuracy on the validation data. When comparing parallel DL frameworks it is imperative to compare both the epoch execution times and the statistical efficiency of the training runs; we discussed the tradeoffs that parallel DL algorithms incur between these two metrics in Section 3. We profile epoch execution times on 1, 2, 4, 8, 16, 32 and 64 GPUs on Lassen and ThetaGPU. When profiling the statistical efficiency for a particular framework, we use the GPU count at which it has the minimum epoch execution time. For gathering memory utilization data we use 1, 2, 4, 8, 16, 32 and 64 GPUs on ThetaGPU. Tables 2 and 3 give an overview of the machines and neural networks we used for evaluating these metrics. To measure statistical efficiency we record the accuracy and loss for the vision task and perplexity for the language task. Loss is the output of the loss function used for training; its magnitude depends on its definition, but the training loss should decrease towards zero as the model improves in predictive capacity. Accuracy measures the ratio of samples accurately predicted to total samples; we use the validation accuracy, which is calculated on samples exclusive of the training set. Perplexity is commonly used in NLP to measure how well a model predicts a certain corpus based on the cross-entropy of the model; it is defined as the exponential of the cross-entropy loss on the dataset.
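As a concrete reference for the last definition, here is a minimal sketch of computing perplexity from the mean cross-entropy loss; the tensor shapes are illustrative.

```python
import math
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, targets: torch.Tensor) -> float:
    """Perplexity = exp(mean cross-entropy), as defined above."""
    ce = F.cross_entropy(logits, targets)   # mean cross-entropy in nats
    return math.exp(ce.item())

logits = torch.randn(16, 50257)             # (tokens, vocab), GPT-2-sized vocab
targets = torch.randint(0, 50257, (16,))
print(perplexity(logits, targets))          # roughly the vocab size for random logits
```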
5 COMPARATIVE EVALUATION

In this section we present and discuss the results from our experiments on epoch execution times, statistical efficiency, and memory utilization.

5.1 Execution Time Comparison

We first look at the baseline performance of each framework. Figure 4 shows that TorchGPipe performs worst on both VGG-16 and GPT2-medium, by up to 1.8x and 5.2x respectively. We also observe that PipeDream is the second slowest framework. The single-GPU performances differ significantly, largely because these two frameworks do not support mixed precision; the difference is exacerbated for extremely compute-intensive neural networks like GPT2-medium. While Megatron, DDP, ZeRO and AxoNN all employ mixed precision, Megatron is considerably faster as it uses its own optimized implementation of the transformer encoder layer and the Adam optimizer. Figure 4 exemplifies this, where we observe a 2x speedup on a single GPU over the native PyTorch kernel used by DDP and ZeRO. The PyTorch implementation performs worse due to its handling of the computationally intensive final softmax layer in GPT2-medium: while DDP and AxoNN compute this layer in full precision, ZeRO's mixed precision strategy computes it in half precision, leading to the difference in performance between the two. Out of all the frameworks TorchGPipe has the worst single-GPU performance. This is because micro-batching provides no performance benefit when the operations of different micro-batches are serialized on a single GPU; it does, however, save memory used for stashing activations during the forward pass. We discuss this in Section 5.3.

Figure 5 shows the time spent by each framework in the forward pass, backward pass, and I/O for GPT2-medium on ThetaGPU. We observe a marked improvement in Megatron's I/O performance due to its custom data loaders (see Section 4.1); however, I/O is a negligible part of the overall time per iteration. Across all frameworks, we see that the backward pass is more computationally intensive than the forward pass. This is because for each layer we compute not only the gradients for its parameters but also those for its input activations, which must be backpropagated to previous layers. The single-GPU profiles in the figure also highlight the difference in the absolute computation times of the forward and backward passes for these frameworks, further supporting our explanation above for the differences in sequential performance in Figure 4.

Across both machines and neural networks we observe two separate trends among the frameworks. First, DDP, ZeRO, LBANN, AxoNN and Megatron-data all perform similarly, with only constant deviations from each other. Second, PipeDream and TorchGPipe are slower, more erratic, and scale worse than the others. Additionally, Megatron-intra's speedup seems to plateau when we try to scale it across multiple nodes. Within the first trend we observe that ZeRO's performance tracks that of DDP and AxoNN with only a 10-15% difference in absolute run time. These variations can be attributed to the different mixed precision implementations and ZeRO's memory optimizations. As noted previously in Section 3.1.2, ZeRO reduces the per-GPU memory footprint of data parallelism at the expense of added communication; however, we see that this communication overhead scales the same as standard DDP. It is immediately apparent that these data parallel approaches strongly outperform the other frameworks in scaling. This is notably due to the embarrassingly parallel workload in data parallelism when the entire model fits within GPU memory. We also see an expected slight reduction in speedup on Lassen and ThetaGPU (shown in Figure 4) for data parallelism as the number of GPUs surpasses that of a single node: the all-reduce communication now occurs outside the fast intra-node NVLink and has to use the system network. This is a negligible issue given how much better the data parallel algorithms scale. Due to the lack of mixed precision support, PipeDream and TorchGPipe have the largest epoch execution times at all GPU counts across all machines. PipeDream scales erratically relative to its own single-GPU execution. The poor scaling can be attributed to two factors: first, PipeDream uses the relatively slow Gloo library as its communication backend; second, erratic scaling is usually a sign of load imbalance.
Our experiments show that their communication-aware load balancing algorithm does not perform satisfactorily in practice. Along with these two major trends we also observe that Megatron-intra plateaus once it runs on multiple nodes. For larger GPU counts it scales worse than DDP, ZeRO and AxoNN. We observed that the communication overhead of Megatron-intra increases rapidly with the number of GPUs, ultimately reaching 52.5% of the total execution time on 16 GPUs. Based on our observations we recommend that researchers who wish to train large transformer models on language modeling tasks use Megatron-intra for their single-GPU sequential implementations. If the model surpasses the memory capacity of a single GPU, we recommend employing Megatron's intra-layer parallelism to fit the model inside the GPUs of a single node. Scaling to large GPU counts should then be done by integrating Megatron's intra-layer parallelism with data parallelism.

5.2 Statistical Efficiency

Figure 8 illustrates the results of our statistical efficiency experiments. Following standard practice we measure the validation accuracy and perplexity at each epoch for the image classification and language modeling tasks respectively. We report the epoch number as well as the total training time. On observing the performance of PipeDream on both tasks, it is apparent that weight staleness is a huge roadblock for algorithms that implement pipelining without flushing; PipeDream's proposed weight stashing approach does not mitigate this problem satisfactorily. ZeRO, DDP and LBANN exhibit near-identical validation curves; the slight variations are likely due to differences in the mixed precision implementations of these frameworks. TorchGPipe and Megatron-intra exhibit greater statistical efficiency than the data parallel frameworks on the language modeling task. We attribute the fast convergence of these frameworks to their training runs being carried out on a small GPU count: the data parallel frameworks, trained on 64 GPUs, take a slight hit in their convergence speeds due to the problem of increased effective mini-batch sizes that we highlighted in Section 3.1.3. Figure 8 further details how the accuracies and perplexities behave over time rather than over epochs; PipeDream is much slower than the other frameworks to reach comparable accuracies. Such a figure presents a combined picture of the statistical efficiency and epoch execution times of a framework. We argue that plotting validation metrics against training time is the best way to evaluate the performance of any distributed deep learning framework. It also clearly demonstrates the superiority of data parallelism over the other classes of parallel deep learning algorithms.

5.3 Memory Utilization

Figure 9 details the per-GPU memory usage of each framework during the training tasks. ZeRO, while having performance and scaling similar to DDP, had between 42% and 66% of the memory footprint. We also see this improving as more GPUs are added, similar to the layer-parallel runs, while DDP remains fixed as it simply duplicates the model across GPUs. The pipelining implementations both achieved over 2x better memory usage with more resources, as more of the model could be partitioned among the GPUs. However, the memory savings begin to plateau as more GPUs are added, since the increase in activation memory due to increasing batch sizes balances out the decrease in parameter memory.
The U-shaped per-GPU memory curve of Megatron can be attributed to the inner workings of its intra-layer parallelism implementation. While the computation of a transformer layer is divided across multiple GPUs, the output of the last layer needs to be present in its entirety on every GPU. Since the per-GPU mini-batch size is fixed, the memory occupied by the input to any layer on each GPU increases linearly with the GPU count. At lower GPU counts this increase is offset by the decrease in parameter memory due to the division of the layer computation across GPUs; beyond a point, however, the decrease is no longer enough to offset the growing input activation memory.

6 CONCLUSION

The increasing size of contemporary neural network architectures has necessitated the development of efficient algorithms for parallelizing neural networks. The performance of parallel training of neural networks is heavily dependent on the algorithm, implementation, hyperparameters, and hardware used. In this paper we provide a comprehensive survey of parallel deep learning frameworks that have demonstrated scaling on parallel systems. We use two dataset-network combinations to study various properties of parallel deep learning frameworks, such as scalability, memory requirements, and statistical efficiency as a function of performance. Our benchmarking study presents some interesting observations. When the entire model fits within a single GPU, it is best to use data parallel approaches, as they perform and scale well. In memory-constrained environments, ZeRO [46] can save a substantial amount of memory, and its memory optimizations only add substantial cost to the computation for non-transformer models. For saving more memory, we recommend using intra- or inter-layer parallelism to deploy a model across a small number of GPUs and then scaling it in a hybrid fashion with data parallelism.
2021-11-10T02:15:47.040Z
2021-11-09T00:00:00.000
{ "year": 2021, "sha1": "39e9bff30cb439f55eb6c4188059977e159e14d7", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "39e9bff30cb439f55eb6c4188059977e159e14d7", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
125894115
pes2o/s2orc
v3-fos-license
Estimation of spatially varying heat transfer coefficient from a flat plate with flush mounted heat sources using Bayesian inference

This paper employs the Bayesian Metropolis-Hastings Markov Chain Monte Carlo (MH-MCMC) algorithm to solve the inverse heat transfer problem of determining the spatially varying heat transfer coefficient on a flat plate with flush mounted discrete heat sources, from measured temperatures at the bottom of the plate. The Nusselt number is assumed to be of the form Nu = a Re^b (x/l)^c. To input reasonable values of 'a' and 'b' into the inverse problem, limited two-dimensional conjugate convection simulations were first done with COMSOL. Based on the guidance from these, different values of 'a' and 'b' are input to a computationally less complex problem of conjugate conduction in the flat plate (15 mm thickness), and temperature distributions at the bottom of the plate, which is a more convenient location for measuring temperatures without disturbing the flow, were obtained. Since the goal of this work is to demonstrate the efficacy of the Bayesian approach in accurately retrieving 'a' and 'b', numerically generated temperatures with known values of 'a' and 'b' are treated as 'surrogate' experimental data. The inverse problem is then solved by repeatedly using the forward solutions together with the MH-MCMC approach. To speed up the estimation, the forward model is replaced by an artificial neural network. The mean, maximum-a-posteriori and standard deviation of the estimated parameters 'a' and 'b' are reported. The robustness of the proposed method is examined by synthetically adding noise to the temperatures.

Introduction

In electronic cooling, a typical circuit board may contain several discrete components, all dissipating heat at distinct rates on their surfaces. Convective heat transfer with non-uniform thermal boundary conditions in the flow direction may lead to non-similarity in the thermal boundary layer [1]. The corresponding local convective heat transfer coefficient is determined either from intrusive measurements, which may disturb the fluid flow, or by solving the fluid flow and conduction equations in the circuit board, which is expensive. In the present study, instead of a coupled CFD problem, an inexpensive methodology is proposed to overcome these problems by solving the inverse heat conduction problem for discrete heat sources flush mounted on a flat plate, using temperatures measured at the adiabatic bottom surface to estimate the local convective heat transfer coefficient within the Bayesian framework. Ramadhyani et al. [2] and Ortega et al. [3] have reported heat transfer correlations that also take into account heat conduction of the discrete heat sources on a plate. Yovanovich and Teertstra [4] proposed a composite model for the determination of the area-averaged Nusselt number for forced flow parallel to a finite, isothermal rectangular plate over a wide range of Reynolds numbers. Gnanasekaran and Balaji [5] reported results for the simultaneous estimation of the constants in a Nusselt number correlation by conducting transient heat transfer experiments coupled with Bayesian inference. Konda Reddy et al. [6] proposed an inverse methodology to estimate thermo-physical and transport properties individually and simultaneously from in-house experimental data obtained using transient Liquid Crystal Thermography (LCT) and Bayesian inference. Bhowmik [7] has reviewed convection heat transfer in channel flow with discrete heater arrays for electronic cooling. From the above literature review, it can be seen that the estimation of the local convective heat transfer coefficient for discrete heat sources flush mounted on a flat plate using Bayesian inference has not been explored adequately in the literature. The variation of the local convective heat transfer coefficient for the flat plate assembly is obtained from the Nusselt number correlation of the form Nu = a Re^b (x/l)^c. This correlation is developed by solving the conjugate conduction-convection equations. The goal is to retrieve a and b in the above equation using the Bayesian framework, with a view to estimating the local convective heat transfer coefficient.
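To illustrate how such a correlation translates into a spatially varying coefficient, here is a minimal sketch; the plate length l, the air conductivity k_f, and the definition of Nu in terms of l are assumptions made for the example, while the values of a, b, c and Re follow those reported below.

```python
import numpy as np

# Evaluate h(x) from the correlation Nu = a * Re**b * (x/l)**c.
a, b, c = 0.18, 0.53, 0.3   # constants reported in the paper
Re = 8753                   # U_inf = 1 m/s case from the paper
l = 0.25                    # plate length in m (illustrative value)
k_f = 0.026                 # thermal conductivity of air, W/(m.K)

x = np.linspace(0.01, l, 50)
Nu = a * Re**b * (x / l)**c
h = Nu * k_f / l            # W/(m^2.K), under the assumed Nu definition
print(h[:5])
```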
Forward model

Steady-state two-dimensional heat conduction in the flat plate assembly is simulated using COMSOL to drive the inverse problem. Fig. 1 shows the geometry of the forward model with dimensions. The geometry considered for modeling consists of a hylam plate with pockets and three identical embedded discrete aluminium heat sources. There is uniform heat generation in each heat source; the heat input to the first, second and third heat source is 16, 5 and 3 W respectively. The thermal conductivities of the hylam plate and the aluminium heat sources are 1.4 and 200 W/m.K. Heat generated in the three sources is first conducted along and across the flat plate assembly before being dissipated from the top surface of the plate by convection. The cooling medium is air at temperature T∞ = 300 K and velocity U∞ = 1 m/s. The governing equation is the steady two-dimensional conduction equation with heat generation,

∂/∂x(k ∂T/∂x) + ∂/∂y(k ∂T/∂y) + q_v = 0,  (1)

where, in equation (1), q_v = 0 wherever there is no heat source and q_v = q_1, q_2 or q_3 depending upon the location (x, y). The boundary conditions at the plate surfaces are an adiabatic condition at the bottom surface and a convective condition at the top surface,

−k ∂T/∂y = h(x)(T − T∞),

where h(x) is the local convective heat transfer coefficient obtained from the Nusselt number correlation. The solution to the forward problem yields the temperature distribution throughout the domain, including on the adiabatic bottom surface. The inverse problem then is to recover 'a' and 'b' (c = 0.3 in this study) given T(x, 0) from measurements. As already mentioned, the temperatures T(x_i, 0) at 8 locations, obtained by solving the governing equations with known values of 'a' and 'b', are treated as 'experimental' data. It is clear that the inverse problem involves repeated solution of the forward model such that the simulated T(x, 0) matches the 'experimental' T(x_i, 0). A grid independence study was carried out, and 1000 elements were seen to be sufficient for the simulation.

In the problem under consideration, for the simultaneous estimation of the two parameters, i.e., the constants a and b of the local convective heat transfer coefficient, a surrogate model which approximates the behavior of the numerical model, based on artificial neural networks, is employed in the retrievals in order to reduce the computational time and cost. The laminar fluid flow and conduction in the plate assembly were first solved using coupled CFD and energy equations for different values of the Reynolds number, and a correlation of the form Nu = a Re^b (x/l)^c was developed so that reasonable values of a and b could be used in the problem. A total of three simulations were done for a velocity range of 1-1.5 m/s. For each velocity the spatially varying heat transfer coefficient was obtained, which was then used to develop the correlation. Using the Datafit software, the constants a, b, and c were determined to be 0.18, 0.53 and 0.3 respectively. Typical temperature contours for U∞ = 1 m/s (Re = 8753) are shown in Fig. 2. In the forward model the value of c is fixed in equation (7) while estimating the values of a and b.

Surrogate model

In order to reduce the computational time required for the inverse problem, the time-consuming forward model is replaced by a surrogate model. In this work an artificial neural network, a nonlinear regression tool, is used to correlate the input and target data sets (i.e., the temperatures at the adiabatic surface) to generate a network that returns the target values for a given set of inputs in a fraction of the time required by the numerical model. A neural network consists of an input layer, a hidden layer and an output layer. The configuration used in the present study is a feed-forward back-propagation network; the Levenberg-Marquardt algorithm is used for training and tan-sigmoid is the transfer function employed. For 100 input values (constants 'a' and 'b'), the corresponding target values (temperature distributions) are computed using the numerical model. A schematic of the neural network architecture is given in Fig. 3. Out of the 100 data sets, 80% of the randomly selected data sets are used for training the network and the remaining 20% are used to assess the network performance. Standard criteria are considered to calculate the optimum number of neurons in the hidden layer [8]; the optimum number of neurons required in the hidden layer is determined to be 14.
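A rough sketch of the surrogate construction is given below. The paper trains a 2-14-8 tan-sigmoid network with the Levenberg-Marquardt algorithm; scikit-learn's MLPRegressor with 'tanh' activation and the L-BFGS solver is used here as a stand-in, and forward_model is a hypothetical placeholder for the COMSOL solution, not the real physics.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def forward_model(params):
    """Placeholder for the COMSOL solution: (a, b) -> 8 bottom temperatures."""
    a, b = params
    x = np.linspace(0.1, 1.0, 8)
    return 300 + 40 * a * x**0.3 / (b + x)   # illustrative only

rng = np.random.default_rng(1)
inputs = rng.uniform([0.1, 0.4], [0.4, 0.7], size=(100, 2))   # (a, b) samples
targets = np.array([forward_model(p) for p in inputs])

# 80/20 split, 14 hidden neurons, as in the paper's configuration.
X_tr, X_te, y_tr, y_te = train_test_split(inputs, targets, test_size=0.2)
surrogate = MLPRegressor(hidden_layer_sizes=(14,), activation='tanh',
                         solver='lbfgs', max_iter=5000).fit(X_tr, y_tr)
print("test R^2:", surrogate.score(X_te, y_te))
```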
Inverse model

Bayesian inference is a method of inference using Bayes' theorem, which states the relation between two conditional probabilities that are the reverse of each other,

P(A|B) = P(B|A) P(A) / P(B).  (11)

The posterior probability is directly proportional to the likelihood density function. The probability density function of a vector x, given the experimental observations y, is related through Bayes' formula

P(x|y) = P(y|x) P(x) / P(y),  (12)

where P(x|y) is the posterior probability density function (PPDF), P(y|x) is the likelihood density function, and P(x) is the prior density function. The likelihood density function P(y|x) is obtained by comparing the measured temperatures with the simulated temperatures for a given parameter vector,

P(y|x) ∝ exp(−R²/(2σ²)),  (13)

where the residue R² is given by Σ(Y_measured − Y_simulated)² and σ is the uncertainty in the measurements and the forward model. The prior density function P(x) is 1.0 in the case of a uniform prior, and the corresponding PPDF reduces to the likelihood density function. In the case of a normal prior with mean μ_p and standard deviation σ_p of the parameters, P(x) is represented as

P(x) ∝ exp(−(x − μ_p)²/(2σ_p²)).  (14)

Substituting Eqn. (13) and Eqn. (14) into Eqn. (12), the PPDF becomes

P(x|y) ∝ exp(−R²/(2σ²)) exp(−(x − μ_p)²/(2σ_p²)).  (15)

For discrete sampled values of x with normal priors, the mean, the maximum a posteriori (MAP) and the variance estimates of x are, respectively, the posterior-weighted average of the samples, the sample at which the PPDF attains its maximum, and the posterior-weighted average of the squared deviations from the mean. The surrogate model is simulated to obtain the temperatures for the desired number of samples of 'a' and 'b' in order to retrieve the unknown parameters. In this study x_1 = a and x_2 = b, and y is the data vector of 'measured' temperatures, containing values at 8 locations along the adiabatic surface.

Sample generation

The Metropolis-Hastings Markov Chain Monte Carlo (MH-MCMC) algorithm is employed for generating the samples (a, b). The sampling procedure for the two-parameter estimation is as follows: starting from an initial guess, at each step j a candidate x* is drawn from the proposal distribution, the acceptance ratio is computed, and the candidate is accepted or rejected by comparing this ratio with a uniform random number. In the algorithm, M is the number of samples and n is the number of parameters. The ratio P(x*|y)/P(x^(j−1)|y) is called the likelihood density ratio (with a uniform prior) or the PPDF density ratio (with a normal prior) and can be calculated from Eqns. (19) and (20). The ratio q(x^(j−1)|x*)/q(x*|x^(j−1)) is called the proposal density ratio.
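A compact sketch of the MH-MCMC retrieval of (a, b) is given below; 'surrogate' is assumed to return the 8 simulated temperatures for a parameter pair (as in the sketch above), y_meas holds the 'measured' temperatures, sigma is the combined measurement/model uncertainty, and a Gaussian random walk is assumed as the (symmetric) proposal, so the proposal density ratio cancels.

```python
import numpy as np

def log_posterior(x, y_meas, surrogate, sigma, mu_p, sigma_p):
    # Log of Eqn. (15): Gaussian likelihood times Gaussian prior.
    resid = y_meas - surrogate(x)
    return (-0.5 * np.sum(resid**2) / sigma**2
            - 0.5 * np.sum(((x - mu_p) / sigma_p)**2))

def mh_mcmc(y_meas, surrogate, n_samples=10000, burn_in=1000,
            sigma=0.5, mu_p=(0.18, 0.53), sigma_p=None, step=0.01, seed=0):
    mu_p = np.asarray(mu_p)
    sigma_p = 0.05 * mu_p if sigma_p is None else np.asarray(sigma_p)
    rng = np.random.default_rng(seed)
    x = mu_p.copy()
    lp = log_posterior(x, y_meas, surrogate, sigma, mu_p, sigma_p)
    chain = []
    for _ in range(n_samples):
        x_new = x + step * rng.standard_normal(2)        # symmetric proposal
        lp_new = log_posterior(x_new, y_meas, surrogate, sigma, mu_p, sigma_p)
        if np.log(rng.uniform()) < lp_new - lp:          # MH acceptance test
            x, lp = x_new, lp_new
        chain.append(x.copy())
    chain = np.array(chain[burn_in:])                    # discard burn-in
    return chain.mean(axis=0), chain.std(axis=0)
```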
Generation of priors

A novel method to generate priors is employed here, wherein an "offline" Bayesian approach is proposed. In this method samples are generated by dividing the original interval of uncertainty (upper bound minus lower bound) into a certain number of pre-decided intervals (0.1 ≤ a ≤ 0.4 and 0.4 ≤ b ≤ 0.7). The values thus chosen are used to run the ANN, whose output is used to generate the posterior density function, and from this, approximate values of 'a' and 'b' can be estimated. This approach is often used to bracket the solution initially, and its output can then be used to generate priors for the actual Bayesian estimation. This approach is expected to drastically bring down the standard deviation, or uncertainty, in the final estimation of the quantities. Table 1 shows the mean, MAP and SD of the estimated parameters. The mean values obtained are further used as a Gaussian prior, with 5% of the mean as the standard deviation, for the simultaneous estimation of the constants 'a' and 'b' in the heat transfer coefficient equation using the regular MH-MCMC approach.

The effect of the number of samples

The effect of the number of samples on the estimation is similar to a grid independence study in numerical simulations. The number of samples needs to be chosen such that the PPDF attains stationarity. Table 2 shows that as the number of samples increases, the standard deviation decreases. It is observed that 10000 samples are adequate for the simultaneous estimation of the parameters. In order to remove the influence of the initial guess, the first 1000 samples are excluded when calculating the mean, MAP and SD; this is frequently referred to as 'burn in' in the Bayesian literature.

The effect of the number of temperature data points

The optimum number of measurement points needs to be chosen based on a sensitivity study. Table 3 shows that 8 data points on the adiabatic surface of the flat plate assembly are sufficient, since they give a minimum standard deviation for the estimation of the unknown parameters. Fig. 1 shows the flat plate with the locations of the 8 points at which the temperatures are 'measured'. Thermocouples or thermochromic liquid crystal sheets can be used to measure the temperature [6].

Estimation with surrogate data

The constants 'a' and 'b' in the heat transfer coefficient are simultaneously retrieved using the MH-MCMC based Bayesian approach for steady-state two-dimensional conduction in the plate with the three flush-mounted discrete heat sources, with surrogate temperature distributions and a Gaussian prior, for an air inlet velocity of U∞ = 1 m/s (Re = 8753) and free stream temperature T∞ = 300 K. Table 4 shows the mean, MAP, and SD of the estimates.

The effect of noise

To check the robustness of the method, Gaussian noise with zero mean and σ/μ of 0, 1 and 2% was added to the surrogate temperature data obtained from the forward model for given values of 'a' and 'b', and the estimations were repeated. The results of this exercise are shown in Table 5. From Table 5 it is clear that the estimated parameters are close to the target values even with 2% noise.
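A short sketch of this robustness check is given below; y_clean is an illustrative set of surrogate temperatures, and mh_mcmc and surrogate refer to the sketches above, so the re-estimation call is shown commented out.

```python
import numpy as np

# Add Gaussian noise with standard deviation equal to 0, 1 and 2 percent of
# the local mean, then re-run the estimation for each noise level.
rng = np.random.default_rng(42)
y_clean = np.array([312.4, 315.1, 318.6, 321.0, 323.3, 324.9, 326.2, 327.1])

for noise_pct in (0.0, 0.01, 0.02):
    y_noisy = y_clean + rng.normal(0.0, noise_pct * y_clean)
    # mean, sd = mh_mcmc(y_noisy, surrogate)   # re-estimate a and b
    print(noise_pct, y_noisy[:3])
```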
CONCLUSIONS

The two-dimensional steady-state conduction equation for a flat plate assembly was solved for a prescribed spatially varying local heat transfer coefficient on the top surface to obtain the temperature distribution in the domain, and also at the bottom adiabatic surface, using the commercially available COMSOL. These were treated as 'measured' temperatures to solve the inverse problem of obtaining the spatially varying heat transfer coefficient from the temperatures. The variation of the heat transfer coefficient was taken in the form of a Nusselt number correlation Nu = a Re^b (x/l)^c, with 'c' taken to be 0.3. The Metropolis-Hastings algorithm was then used to generate samples of 'a' and 'b' for the forward model (i.e., the conduction problem), and Bayesian inference was adopted to estimate 'a' and 'b'; the mean, MAP and standard deviations of the parameters were reported. The estimated 'a' and 'b' were seen to be in good agreement with the target values. A novel way of generating priors, which is a hallmark of the Bayesian method, was proposed. Additionally, with the Bayesian method the standard deviations of the estimated parameters are obtained directly, which is not possible with other methods. It was seen that priors significantly reduce the uncertainties in the final estimation. The effect of noise on the estimation process was studied to check the robustness of the proposed method. In summary, this paper proposes a methodology by which temperatures measured at convenient locations in a problem involving conduction and convection can be married with a simpler mathematical model and the Bayesian framework to obtain a spatially varying heat transfer coefficient on the surface over which the fluid flows. This approach avoids the need to measure temperatures and/or temperature gradients on the surface where the fluid flow takes place, which would disturb the flow, and avoids the use of a fully coupled CFD model to obtain the heat transfer coefficient, which would then not qualify as an experimental procedure. The Bayesian framework additionally allows for a systematic injection of prior beliefs into the problem, thereby increasing the accuracy of the estimation process.
2019-04-22T13:03:37.367Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "169088e43d5a47e8eaac6f57d6a83feed149e310", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/745/3/032094", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4c721d478e9ec9610691f62d11d4077f1d4fe6f8", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
17992945
pes2o/s2orc
v3-fos-license
Effect of L-carnitine Supplementation on Circulating C-reactive Protein Levels: A Systematic Review and Meta-Analysis

Summary

Background: C-reactive protein (CRP) has been proposed as a risk marker and risk factor of cardiovascular disease. There have been a number of clinical reports suggesting that supplementation with L-carnitine can modulate systemic inflammation and lower circulating CRP concentrations, but the results have not been consistent.

Methods: A comprehensive literature search in Medline, Scopus and the Cochrane Central Register of Controlled Trials was performed in December 2012 to identify clinical trials investigating the impact of oral L-carnitine supplementation on serum/plasma CRP concentration. A random effect method was used to calculate the combined effect size.

Results: Six studies comprising 541 cases and 546 controls met the inclusion criteria. Meta-analysis of the included trials revealed a significant reduction of circulating CRP concentrations in subjects under L-carnitine intervention compared to the control treatment. The calculated combined weighted mean reduction in CRP concentrations was −0.39 mg/L (95% CI: −0.62 to −0.16). This effect size estimate was found to be robust and remained unaffected by the removal of each single study.

Conclusions: The overall findings of the present meta-analysis support the clinically relevant benefit of L-carnitine supplementation in lowering the circulating levels of CRP.

Introduction

Atherosclerosis is the cornerstone of the pathogenesis of cardiovascular disease (CVD) and its complications. Recent advances in basic research have provided compelling clues to the pivotal role of inflammation in the development and progression of atherosclerosis (1). In parallel, there is a large body of evidence indicating a heightened state of systemic inflammation in patients with CVD. Nearly all the known risk factors for CVD, such as dyslipidemia, hypertension, diabetes, obesity and infection, are influential in triggering the inflammatory response during atherosclerosis (2). However, the central role pertains to oxidatively modified lipoproteins, in particular low-density lipoprotein (LDL).
Given the established role of inflammation in all stages of atherogenesis, anti-inflammatory therapy has been suggested as a promising approach to lower the concentrations of atherogenic inflammatory mediators and cover the substantial residual risk following conventional lipid-lowering therapy (2). CRP is a 206-amino-acid pentraxin-like acute-phase protein which is synthesized by hepatocytes in response to inflammation (3). CRP could be regarded as one of the best known biomarkers of systemic inflammation. Elevated circulating levels of CRP have been suggested to serve as an independent and strong predictor of CVD and atherothrombotic events (4). Recent evidence has suggested that CRP is not only a CVD risk marker, but also has a direct role in the development of vascular damage and CVD outcomes. Findings of the JUPITER (Justification for the Use of Statins in Primary Prevention: an Intervention Evaluating Rosuvastatin) trial have revealed a tight association between the degree of CRP reduction and the corresponding decrement in CVD risk (5). Therefore, reduction of CRP concentrations is regarded as an effective approach for both primary and secondary prevention of CVD and its complications. Heretofore, only a few therapeutic options have been identified for the purpose of serum CRP reduction. Among these, inhibitors of 3-hydroxy-3-methylglutaryl coenzyme A reductase, known as statins, have been the most effective class of drugs (6,7). According to the American College of Cardiology Foundation/American Heart Association (ACCF/AHA) guideline for assessment of cardiovascular risk in asymptomatic adults, statin therapy is indicated for persons with LDL-C levels <2.59 mmol/L but elevated CRP. Nevertheless, successful CRP reduction may not be achieved in patients who are refractory to the effects of statins or those who are intolerant of statins. Therefore, it would be ideal to introduce novel agents, preferably of natural origin, that could lower CRP and, on the other hand, have a wide safety margin that allows their chronic supplementation. Such safe CRP-lowering agents may also be used as an adjunct to statins in order to achieve stronger reductions in CRP levels. Carnitine (L-β-hydroxy-γ-N-trimethylaminobutyric acid) is a vitamin-like non-protein nutraceutical primarily biosynthesized in the liver and kidney from the amino acids lysine and methionine (8). Due to its chiral structure, carnitine has two stereoisomeric forms: D and L. However, only the L isomer is known to be essential for human and animal health and to possess biological activity, while the other isomer is biologically inert. The main physiologic role of L-carnitine is involvement in fat and energy metabolism by mediating the transport of long-chain free fatty acids across the mitochondrial membrane for β-oxidation (9). Aside from this leading task, L-carnitine supplementation has been reported to be associated with several health benefits such as regulation of carbohydrate metabolism and insulin sensitivity, mitigation of lipid peroxidation and oxidative stress, and enhancement of the immune system and spermatogenesis (10)(11)(12)(13)(14). Carnitine is also endowed with several cardioprotective properties (15)(16)(17). Several clinical trials have indicated the favorable impact of L-carnitine supplementation on the modulation of CVD incidence and mortality (16)(17)(18). However, it remains unclarified whether mitigation of systemic inflammation plays a role in these beneficial properties of L-carnitine.
Thus far, there have been scattered reports in different patient groups on the impact of oral supplementation with L-carnitine and its analogues on the circulating levels of CRP (19)(20)(21)(22)(23)(24)(25)(26). While the overall balance in the findings of the conducted trials favors the efficacy of L-carnitine, some negative reports make the judgment inconclusive. This controversy in findings necessitates conducting a systematic review of the literature and a meta-analysis of the published studies to clarify whether oral consumption of L-carnitine can lower the circulating levels of CRP.

Search strategy

This study was designed according to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (27). A systematic literature search for English-language articles was performed in the following databases from inception through December 2012: PubMed-Medline (http://www.ncbi.nlm.nih.gov/pubmed), SCOPUS (http://www.scopus.com) and the Cochrane Database of Systematic Reviews (http://www.cochrane.org). Databases were searched using the following terms: (carnitine OR L-carnitine OR acetyl-L-carnitine OR »acetyl L-carnitine« OR propionyl-L-carnitine OR »propionyl L-carnitine«) AND (C-reactive protein OR CRP OR hsCRP OR hs-CRP). The wild-card term »*« was used to increase the sensitivity of the search strategy.

Study selection

Studies were included if they fulfilled all of the following criteria: (1) an intervention study using L-carnitine or its analogues as main or adjunctive therapy; (2) controlled design: drug- or placebo-controlled parallel or cross-over randomized trial; (3) a study design consisting of random allocation of study participants to L-carnitine or control treatment; and (4) reported mean/median and SD/SE/IQR of serum/plasma CRP in both intervention and control groups at baseline as well as at the end of the trial. Studies were excluded if they: (1) were a review article or meeting/conference paper; (2) were not of a clinical design; or (3) administered L-carnitine via the intravenous route.

Data extraction

Aside from baseline and post-trial CRP concentrations, data on the study location, publication year, population size, type of intervention, administered daily dose of carnitine, duration of supplementation, control group allocation, age, gender, smoking habit and serum/plasma levels of glucose, total cholesterol, LDL-cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C) and triglycerides were extracted from all retrieved articles.

Assessment of risk of bias in included studies

A methodological quality assessment of the included studies was carried out by employing the Jadad score level-of-evidence rating for randomized controlled trials (28). The Jadad scale ranges from 0 to 5, with higher scores indicative of better quality. The items for quality assessment in the Jadad scale include randomization, blinding, and description of withdrawals and dropouts. Using this scale, the overall quality of a trial can be classified as low (Jadad score ≤3) or high (Jadad score of 4 or 5).

Statistical analysis

Meta-analysis was conducted using the Cochrane Program Review Manager version 5.1. Blood CRP levels were collated in mg/L. Multiplication by 0.0259, 0.0113 and 0.0555 was used to convert cholesterol (total cholesterol, HDL-C or LDL-C), triglyceride and glucose levels expressed in mg/dL into mmol/L, respectively.
Standard deviations at one time point were calculated with the formula SD = SEM × √n (SEM: standard error of the mean; n: number of participants). Standard deviations (SDs) of the mean difference were calculated using the formula SD_change = √[(SD_pretreatment)² + (SD_posttreatment)² − (2R × SD_pretreatment × SD_posttreatment)], assuming a correlation coefficient R = 0.5 (29). For studies in which serum/plasma CRP levels were determined at multiple intervals, data from the last time point were used as the post-trial value in the analyses. For parallel and cross-over trials, net changes in measurements were calculated as follows: (measure at end of follow-up in the L-carnitine group − measure at baseline in the L-carnitine group) − (measure at end of follow-up in the control group − measure at baseline in the control group). Due to the inter-study heterogeneities regarding design, underlying disease and age of recruited participants, and L-carnitine dosage and supplementation duration, quantitative data synthesis was performed using a random effect approach with the inverse variance weighting method (a sketch of this pooling is given below). In order to evaluate the influence of each study on the overall effect size, a sensitivity analysis was conducted using the one-study-removed approach (30).
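A sketch of the inverse-variance weighted random-effects pooling (the DerSimonian-Laird estimator, assumed here as the specific random-effects method) is given below; the numbers in the example call are illustrative, not the values from the six included trials.

```python
import numpy as np

def random_effects_pool(effects, ses):
    """Pool per-study net changes with DerSimonian-Laird random effects."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    w = 1.0 / ses**2                                   # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - fixed)**2)               # Cochran's Q
    df = len(effects) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (ses**2 + tau2)                       # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - df) / Q) if Q > 0 else 0.0      # I^2 heterogeneity
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

print(random_effects_pool([-0.5, -0.2, -0.6, 0.1, -0.4, -0.3],
                          [0.2, 0.25, 0.3, 0.2, 0.15, 0.25]))
```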
Summary of included studies

Following the database search and removal of duplicate articles, a total of 110 articles were identified and subjected to initial screening. Thirteen articles were provisionally selected for further full-text evaluation. Out of these 13 publications, 6 met the inclusion criteria and were used for data extraction (19)(20)(21)(22)(23)(24). The reasons for the exclusion of the remaining 7 articles were: not being original (n=2) (31,32), investigation of intravenous L-carnitine (n=2) (18,33), publication in languages other than English (n=2) (34,35), and lack of sufficient data on serum CRP status (n=1) (36) (Figure 1).

Study characteristics

The pooled population of the included studies comprised 1087 individuals, of which 541 were classified as the case group and 546 as the control group. Three studies were conducted in diabetic or prediabetic patients (19,20,22), 2 studies in hemodialysis patients (23,24) and 1 study in patients with nonalcoholic steatohepatitis (21). All studies were double-blind and placebo-controlled apart from that of Shakeri et al., which had an open-label design (25). The duration of L-carnitine supplementation ranged from 8 (22) to 48 (19,20) weeks. Five of the included studies used L-carnitine as the intervention (19-21, 23, 24), while Bloomer et al. administered acetyl-L-carnitine arginate (ALCA) (22). The dosage of L-carnitine ranged between 1-2 g/day in all the included trials. In 4 trials, L-carnitine was used as monotherapy (21)(22)(23)(24), while 2 studies used L-carnitine supplementation as adjunctive therapy to sibutramine (19) or orlistat (20). Characteristics of the included studies are summarized in Table I. The studies by Derosa et al. (19,20) investigated the efficacy of adjunctive therapy with L-carnitine in diabetic patients. In both trials, 12-month supplementation with L-carnitine was not associated with an additional benefit in terms of serum CRP reduction. There were 40% and 35% reductions in serum CRP following treatment with orlistat and sibutramine, respectively, whereas the reduction rates changed to 56% and 42% when L-carnitine was added as an adjunct. In another trial among prediabetic subjects, treatment with ALCA for 8 weeks did not result in a significant alteration in circulating CRP concentrations compared to placebo (22). In contrast to the aforementioned trials among diabetic or prediabetic patients, findings from two studies in end-stage renal disease (ESRD) patients under hemodialysis revealed a significantly improved effect on serum CRP (−41% (23) and −29% (24)) compared to the control group (−3% (23) and +13% (24)) following supplementation with L-carnitine. This finding is also supported in patients with nonalcoholic steatohepatitis, in whom L-carnitine caused a 42.9% reduction in serum CRP, which was significantly greater than that obtained with placebo (14.9%) (21).

Quantitative data synthesis

A statistically significant pooled effect size [net change: −0.39 mg/L; 95% CI: −0.62 to −0.16; p = 0.001] was estimated for the impact of L-carnitine supplementation among 541 cases and 546 controls. Using random effect analysis, the overall inter-study heterogeneity was not found to be significant (I² = 44%; p = 0.11) (Figure 2).

Sensitivity analysis

Findings of the one-study-removed sensitivity analysis revealed that the observed CRP-lowering effect of L-carnitine is robust and not dependent on a single study. The pooled effect size remained statistically significant after leaving each study out, as shown in Table II.

Figure 1 Flow diagram of studies through review.

Discussion

Overall, the findings arising from the present meta-analysis revealed a significant and positive effect of L-carnitine supplementation in reducing the circulating levels of CRP. Of the six included trials, three favored a significant effect of L-carnitine on CRP (21,23,24). The two studies which reported negative data on the efficacy of L-carnitine were those which investigated L-carnitine as an adjunctive therapy (19,20). This disparity in findings may be due to the impact of sibutramine and orlistat, which were used in combination with L-carnitine in the above-referenced studies. Sibutramine and orlistat have been previously reported to possess anti-inflammatory properties and to be capable of reducing serum concentrations of CRP (37,38). Besides, in both of the aforementioned studies, although the reduction in serum CRP was comparable between the groups after 12 months, the decreasing trend started earlier in the group that received the L-carnitine supplement. Finally, both studies by Derosa et al. (19,20) were performed among obese individuals, who normally bear a heightened state of inflammation. Hence, the anti-obesity effects of orlistat and sibutramine, reflected as reduced weight and BMI, are likely to predominate over any anti-inflammatory effect of L-carnitine and could account for a substantial part of the observed anti-inflammatory effects. The study by Bloomer et al. (22) is another study which failed to find a significant CRP-lowering effect of L-carnitine. However, this latter study had the shortest duration of supplementation, which could be regarded as a probable reason for not detecting any effect of L-carnitine.
Findings from the sensitivity analysis implied that the detected significance for the efficacy of L-carnitine is robust and not considerably affected by any single study. It is interesting to note that the weighted pooled estimate for the effect of L-carnitine on circulating levels of CRP is greater than those reported from previous meta-analyses of soy isoflavones (40) and telmisartan (41), but lower than those of fibrates (42) and statins (6). Further, besides its widely reported role as a biomarker of the risk and severity of atherosclerosis and cardiovascular disorders (43), CRP has been suggested to play an active role in the pathogenesis of CVD (39,44). Clues to the atherogenic potential of CRP include its interaction with lipoproteins and other components of atheroma, which leads to subsequent activation of the complement system and the inflammation cascade (44,45). Furthermore, there have been claims that CRP can trigger proinflammatory and proapoptotic responses through complement-independent mechanisms such as overexpression of cytokines, adhesion molecules, etc. (46).

Figure 2 Net change in serum CRP concentrations associated with L-carnitine supplementation. The overall effect size has been obtained using a random effect model and weighted by the inverse variance of each trial.

Since the significant combined effect size calculated in the present study was mainly derived from studies on hemodialysis patients, the clinical benefits of L-carnitine supplementation might be especially important for patients with chronic renal failure or end-stage renal disease who are on hemodialysis, as CVD is the leading cause of death among these patients (47). Moreover, heightened inflammation, as characterized by elevated CRP levels, has been shown to be associated with CVD and mortality in patients with end-stage renal disease (48). There are several limitations that need to be taken into account prior to any interpretation of the present results. As the most important limitation, the number of studies included in the quantitative data synthesis was small. The present work attempted to define the inclusion criteria in such a way that the eligible studies would constitute a homogeneous population. Nonetheless, there was still inter-study variability regarding the underlying disease, age, smoking habit and anthropometric indices of the recruited patient populations. It is anticipated that these heterogeneities have been accounted for, at least in part, by applying the random effect model of analysis. Finally, the prevalence of cardiometabolic risk factors was not uniformly reported across all the included studies. Therefore, it was not possible to correct the estimated pooled effect for these risk factors.

Conclusion

The overall findings of the present meta-analysis support the clinically relevant benefit of L-carnitine supplementation in lowering the circulating levels of CRP. Conducting future large-scale randomized clinical trials in homogeneous populations is warranted to verify the findings of this meta-analysis.

Conflict of interest statement

The authors stated that there are no conflicts of interest regarding the publication of this article.
2017-09-06T09:33:49.785Z
2015-03-03T00:00:00.000
{ "year": 2015, "sha1": "5d7e0cdf8f68ead6183f98a16a54a1418a8048bf", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.2478/jomb-2014-0030", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5d7e0cdf8f68ead6183f98a16a54a1418a8048bf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256808629
pes2o/s2orc
v3-fos-license
Bipartite Euler Systems for certain Galois Representations

Let $E/\mathbb{Q}$ be an elliptic curve with ordinary reduction at a prime $p$, and let $K$ be an imaginary quadratic field. The anticyclotomic Iwasawa main conjecture, depending upon the sign of the functional equation of $L(E/K,s)$, predicts the behavior of the Selmer group of $E/\mathbb{Q}$ along the anticyclotomic tower of $K$. Some of the crucial ideas of Bertolini and Darmon on this conjecture have been abstracted by Howard into an axiomatic set-up through a notion of bipartite Euler systems, assuming that $E[p]$ is an irreducible representation of $G_{K}$. We generalize this work by assuming only $(E[p])^{G_K}=0$. We use the results of Howard, Nekov\'a\v{r} and Castella \emph{et al}., along with those of Mazur and Rubin on Kolyvagin systems, to show one divisibility of the anticyclotomic main conjecture, for both signs. The other divisibility can be reduced to proving the nonvanishing of sufficiently many $p$-adic $L$-functions attached to a family of congruent modular forms.

Let $\Lambda = \mathbb{Z}_p[[\Gamma^-]]$ denote the Iwasawa algebra of $\Gamma^-$. The anticyclotomic main conjecture for $E$ predicts, when $\epsilon(N^-) = 1$ (known as the indefinite case), that the $p^\infty$-Selmer group $\mathrm{Sel}_{p^\infty}(K^{ac}, E[p^\infty])$ has $\Lambda$-corank one and the characteristic ideal of its $\Lambda$-cotorsion submodule can be expressed in terms of Heegner points, and, when $\epsilon(N^-) = -1$ (known as the definite case), that $\mathrm{Sel}_{p^\infty}(K^{ac}, E[p^\infty])$ is $\Lambda$-cotorsion with characteristic ideal related to the $p$-adic $L$-function. Beginning with the monumental and fundamental work of Bertolini and Darmon on this conjecture [BD] in the definite case, much is known when the residual representation $\bar{\rho}_E : G_{\mathbb{Q}} \longrightarrow \mathrm{GL}_2(\mathbb{F}_p)$ of $E$ is irreducible, with surjective image, and the modular form $f$ attached to $E$ is $p$-isolated. Some aspects of the indefinite case are also treated in [Be]. When the residual representation $\bar{\rho}_E$ is reducible, some cases have been proved by Castella et al. in [CGLS] in the indefinite case using the theory of Kolyvagin systems, under an assumption of the Heegner hypothesis where $N^-$ is required to be equal to $1$. Each of these works depends on the work of many authors, but they all have been inspired by the work of Bertolini and Darmon [BD]. Adapting the work of Mazur and Rubin ([MR]) on Kolyvagin systems, the argument of Bertolini and Darmon has been axiomatized by Howard in [H3], and it allows one to treat the definite and the indefinite cases simultaneously. Using this axiomatic set-up, the definite and indefinite cases are in fact treated simultaneously, when $E[p]$ is irreducible, in [BCC]. Howard's axiomatization is done through a notion of bipartite Euler system, which we briefly explain. Let $f$ be the modular form attached to $E/\mathbb{Q}$ (by the work of Wiles, Taylor-Wiles, Diamond). For a choice of positive integer $k$ one can define the set of $k$-admissible primes $\mathcal{L}_k$, all of which are inert in $K$, with the property that for any $n \in \mathcal{N}_k$ (the set of squarefree products of primes in $\mathcal{L}_k$) there is a modular form $f_n$ of level $nN^-$ which is congruent to $f$ modulo $p^k$. One then considers a graph whose vertices are the elements of $\mathcal{N}_k$, with edges connecting $n$ to $nl$ for each $l \in \mathcal{L}_k$ coprime to $n \in \mathcal{N}_k$. A vertex $n$ is said to be definite or indefinite depending on whether $\epsilon(nN^-)$ is $-1$ or $1$ respectively. In this way, we get a bipartition of the graph: every edge connects a definite vertex to an indefinite vertex.
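A toy illustration of this bipartition: since the sign $\epsilon(nN^-)$ flips with each additional admissible prime dividing $n$, the parity of the number of prime factors of $n$ splits the vertices into the two classes, and every edge joins opposite classes. The primes in the sketch below are placeholders, not actual admissible primes for any particular $E$ and $K$.

```python
from itertools import combinations

primes = [11, 23, 47]   # placeholder "admissible" primes
vertices = ([1] + primes
            + [a * b for a, b in combinations(primes, 2)]
            + [primes[0] * primes[1] * primes[2]])

def parity(n):
    # Number of admissible prime factors of n, mod 2.
    return sum(1 for p in primes if n % p == 0) % 2

# Edges connect n to n*l for each admissible l coprime to n.
edges = [(n, n * l) for n in vertices for l in primes
         if n % l != 0 and n * l in vertices]

# Every edge joins vertices of opposite parity: the graph is bipartite.
assert all(parity(a) != parity(b) for a, b in edges)
```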
In the axiomatic set-up in [H3], Howard, as in [MR], considers, a rank two module T defined over a principal Artinian local ring A, with maximal ideal m, on which there is a continuous action of G K , and a perfect G K -equivariant alternating pairing into A(1). In this set-up, an admissible set of primes L of K is defined which consist of primes l of K which are inert and the Frobenius acts on T with eigenvalues N K/Q (l) and 1. The set of squarefree products of L is denoted by N . Then to each n, a Selmer group Sel F (n) is defined as a subspace of H 1 (K, T ) by imposing the unramified conditions at primes q ∤ n and an ordinary condition at l | n. Further, following [MR], Howard defines the sheaf of Euler systems by attaching to an even vertex n the module A, to an odd vertex Sel F (n) , and to the edge e(n, nl) an appropriate submodule of H 1 (K l , T ). For each k ∈ N, consider the Z p /p k Z p -module E[p k ]. Then at an indefinite vertex the modular form f n allows one to define a cohomology class κ n ∈ lim ← −m H 1 (K m , E[p k ]), which arises as the Kummer image of Heegner points on the abelian variety attached to f n . At the definite vertex one can attach to f n a p-adic L-function λ n ∈ Λ/p k Λ. There are reciprocity laws relating the elements at any two adjacent vertices, and the families {κ n |n ∈ N k , n indefinite } {λ n |n ∈ N k , n definite } forms a bipartite Euler system. Howard shows how a sufficiently non-trivial bipartite Euler system can be used to bound the lengths of Sel F (n) , and proves a rigidity theorem which gives a uniform estimate for those bounds for all the vertices. In this manuscript, we extend the work of Howard in [H3] where it is assumed that ρ E is irreducible to the situation where (E[p]) G K = 0. Note that (E[p]) G K = 0 is automatically satisfied when E[p] is irreducible representation of G K . In trying to do this, we are faced with a problem of not having a Chebotarev Density Theorem. In the irreducible case, this is crucial to getting the bounds on Selmer groups from an Euler system. In the case when (E[p]) G K = 0, we use results of [CGLS], which further relies on the work of Nekovář [Ne], that allows us to have a result similar to the Chebotarev Density Theorem, but for the twist T = E[p k ] ⊗ α for some non-trivial character α of Γ − (see §3 below). We write A = Z p /p k Z p . Theorem A (Lemma 3.7). Let length(A) > ε 0 = r(C 1 + C 2 + C α ). Then for any n ∈ N and any cyclic free A-module C ⊂ Sel F (n) , there are infinitely many l ∈ L such that loc l takes C isomorphically onto H 1 unr (K l , T ). Taking ε 0 = 0 in the above theorem, we can recover the [H3,Lemma 2.3.3]. However, relaxing the irreducibility condition ends up in introducing an error in the bounds for lengths of Selmer groups ( see Theorem 4.7). Theorem B (Theorem 4.7). Let length(A) > ε 0 . For any free Euler system of odd type for (T, F, L), π ε λ n ∈ Stub n for every n ∈ N even , and π ε κ n ∈ Stub n for every n ∈ N odd . Equivalently, in terms of the A-module M n in the decomposition The second problem that we faced is in trying to get an analogous rigidity theorem, as in [H3,Theorem 2.5.1] ( see Theorem 5.14) assuming only (E[p]) G K = 0, more precisely, in trying to characterize the core vertices (see Def 2.4.2 of [H3]). However, the notion of a core vertex, as in [MR] or [H3], as a vertex whose Stub module is A (more precisely, n is core if length(M n ) = 0), is not suitable as we get an error term ε 0 ( in the irreducible case this error term is zero). 
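For orientation, we record the structural facts behind Theorems A and B (these displays are our gloss; the precise statements are Proposition 2.10 below and the stub-module conventions of Section 5). There is a non-canonical decomposition
$$\mathrm{Sel}_{\mathcal{F}(n)}(K,T) \;\cong\; A^{e(n)} \oplus M_n \oplus M_n, \qquad e(n) \in \{0,1\},$$
and, writing $\mathfrak{m} = \pi A$, the stub module may be taken to be
$$\mathrm{Stub}_n = \mathfrak{m}^{\,\mathrm{length}(M_n)} A \quad (n \in \mathcal{N}_{\mathrm{even}}), \qquad \mathrm{Stub}_n = \mathfrak{m}^{\,\mathrm{length}(M_n)}\, \mathrm{Sel}_{\mathcal{F}(n)}(K,T) \quad (n \in \mathcal{N}_{\mathrm{odd}}),$$
so that the containments $\pi^{\varepsilon}\lambda_n \in \mathrm{Stub}_n$ and $\pi^{\varepsilon}\kappa_n \in \mathrm{Stub}_n$ amount to the bound $\mathrm{length}(M_n) \le \mathrm{ind}(\pi^{\varepsilon}\lambda_n, A)$ and its analogue for $\kappa_n$.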
Therefore, we look at those vertices n, such that length(M n ) is minimal. Calling this length u, the Stub module is isomorphic to m u A, and we call these vertices absolute core vertices ( see Def 5.1). We show that this is well defined by characterizing these vertices in terms of vanishing under localizations and we call them universally trivial vertices ( see Def 5.4). Using these absolute core vertices we show that the graph we have considered is path connected ( see Proposition 5.11). As a result we get our desired Rigidity Theorem in Theorem 5.14. For each n ∈ N k , consider the ordinary conditions at primes l | n which allows us to define the Selmer groups S n (K ac , E[p k ]) ⊂ lim ← −m H 1 (K m , E[p k ]) for 1 ≤ k ≤ ∞ (see Def 6.3 below). Writing S = S 1 (K ac , T p ) and X for the Pontryagin dual of Sel(K ac , E[p ∞ ]), and assuming that the families of κ n and λ n are compatible as k varies, we have distinguished elements λ ∞ ∈ Λ and κ ∞ ∈ S. We then have the Main theorem which gives us a bound on the Selmer groups and an equality condition. As a consequence it can be applied to the anticyclotomic Iwasawa main conjecture for certain reducible Galois representations. Theorem C (Theorem 6.6). Let the distinguished elements λ ∞ , κ ∞ be nonzero, and let X tor denote the Λ-torsion submodule of X. Then, we have, (i) The rank formulas: (ii) For any height one prime P of Λ, we have (iii) The inequality (ii) is an equality if the following condition is satisfied: there exists a positive integer s such that for all t ≥ s the set contains an element which is nonzero in Λ/(P, p s ). Selmer Groups In this section, we recall some basic facts about Selmer groups over Artinian rings, as in Mazur-Rubin [MR] and Howard [H3]. The decomposition of the local cohomology groups and the definition of the modified Selmer groups are already present in these references. Much of the notations introduced in [H3] are also retained. The difference is in Hypothesis 2.8, where we relax the hypothesis that the residual representation is irreducible. Throughout, we fix an algebraic closure Q, a prime p > 2, and embedding ι p : Q ֒→ Q p and ι ∞ : Q ֒→ C. Let G Q = Gal(Q/Q) and G K = Gal(Q/K), and for each place w of K let G w ⊂ G K denote the decomposition group, and I w its inertia subgroup. We denote the arithmetic Frobenius at the place w by Frob w ∈ G w /I w . For the prime v | p, let G v denote the decomposition group which is identified with Gal(Q p /Q p ) via the embedding ι p . For any place w of K, the maximal unramified extension of K w is denoted by K unr w . Let K ac denote the anticyclotomic Z p -extension of K, and Γ − = Gal(K ac /K). we denote the anticyclotomic Iwasawa algebra. By fixing a topological generator γ ∈ Γ − , we have a non-canonical isomorphism Λ Let A be a principal Artinian local ring with maximal ideal m = πA and residue characteristic p ≥ 3. It is to be noted that the quotient of a discrete valuation ring by some k-th power of its maximal ideal is such a ring; conversely, it is not difficult to show that every principal local Artinian ring is a quotient of a discrete valuation ring. Let T be a free A-module of rank two equipped with a continuous (for the discrete topology) action of G K = Gal(K/K) for some number field K. We assume that T admits a perfect, G K -equivariant, alternating A(1)-valued pairing. Definition 2.1. 
For each prime w of K where T is unramified, and for the maximal unramified extension K unr of K the unramified cohomology is defined by (i) A Selmer structure (F, Σ F ) on T is a finite set of places Σ F of K containing the archimedean places, the primes at which T is ramified, all the primes above p; and, for every place w of K, a choice of submodule H 1 for every finite place w∈Σ F . (iii) Under the embedding from K to K w for every place w, which is fixed, we consider the localization maps which is given by restriction: (iv) Define the Selmer module or group Sel F = Sel F (K, T ) associated to F by the exactness of where we still use the notation loc w for the localization maps composed with the quotient maps. Remark 2.3. (i) A Selmer structure (F, Σ F ) on T induces a Selmer structure in a natural way on a G K -submodule (resp. quotient) S of T . If S is a submodule of T , then the preimage of H 1 for every place w of K defines a Selmer structure. By [MR,Lemma 1.1.9], H 1 F (K w , S) = H 1 unr (K w , S) for every w / ∈Σ F , and so this is well-defined. (ii) For p =2, we have H 1 (K w , T ) = 0 for all w archimedean. Definition 2.4. A set of primes L of K which is disjoint from Σ F and satisfies (i) for all l∈L, N (l) ≡1 (mod p), (ii) for all l∈L, the Frobenius Frob l acts on T with eigenvalues N (l) and 1, is called an admissible set of primes and is denoted by L. Definition 2.5. For each l ∈ L, T ∼ =A⊕A(1) as a G K l -module, and that the decomposition is unique. Then the ordinary cohomology is defined by By the local Tate duality, H 1 F (K l , T ) = H 1 unr (K l , T ) is maximal isotropic for all l / ∈ Σ F . For each l ∈ L, by Lemma [H3, Lemma 2.2.1], these cohomology groups appear as direct summands of G K l -modules of the local cohomology groups: Here each summand is free of rank one over A and is maximal isotropic under the local Tate pairing. The Selmer structures above may be modified by introducing new primes. Definition 2.6. Let N denote the set of squarefree products of primes in L. For any abc ∈ N we define a Selmer structure (F a b (c), Σ F a b (c) ) as follows: Whenever any one of a, b, c is the empty product, it is omitted from the notation. Definition 2.7. A Selmer structure (F, Σ F ) is cartesian if for every quotient T /m i T of T , every place w∈Σ F , and any generator π∈m, the isomorphism For the remainder of this section, we make the following assumptions: Hypothesis 2.8. Lemma 2.9. The Selmer structure F(n) is cartesian for any n ∈ N . For any choice of generator π∈m and any 0≤i≤ length(A), the composition Proof. The cartesian structure of F(n) is proved, exactly as in [MR,Lemma 3.5.4], assuming that (T /mT ) G K = 0. We can see that We now collect some of the results of Mazur and Rubin's ( [MR]) and Howard's on the structure of the Selmer groups. Note that the proof of the results do not require the irreducibility or reducibility of T /mT , but a modified form of the Cassels-Tate pairing along with the self duality hypotheses on T and F . Proposition 2.10. (i) [H3, 2.2.7] For any n ∈ N there is a (non-canonical) decomposition (ii) [H3,Prop 2.2.9] For any nl ∈ N there are non-negative integers a, b with a + b = length(A) in the diagram of inclusions where the labels on the arrows are the lengths of the respective quotients. Here all the four quotients are cyclic A-modules and a = length(loc l (Sel F (n) )), b = length(loc l (Sel F (nl) )). 
Then e(n) ≡ ρ(n) (mod 2), and for any l∈L prime to n ρ(nl) = ρ(n) + 1 ⇐⇒ loc l (Sel F (n) (K, T /mT )) = 0 Here ρ(n) remains unchanged, and the equivalences hold if one replaces Definition 2.11. Let N odd denote the subset of N for which e(n) = 1 and N even ⊂ N is the subset for which e(n) = 0 where e(n) is as defined in the first statement of the Proposition 2.10(i). The Key Lemma Let Γ − = Gal(K ac /K). Then Γ − ∼ = Z p , and we write γ for a topological generator of Γ − . Let T p be the Tate module of E, and ρ E : G Q −→ Aut Zp (T p ) denote the representation attached to the elliptic curve E. We set U = Z p × ∩ image(ρ E ). Let R be the ring of integers of a finite extension of Q p , with maximal ideal m and v p is the normalized valuation on R. Consider a character α : Γ − −→R × . Following [Ne, CGLS], we define We write π for a uniformizer of m. For k ≥ 1, let R (k) = R/m k R, T (k) = T p /m k T p and suppose ℓ be a rational prime. For each l|ℓ∈L, let I ℓ be the smallest ideal containing (ℓ + 1) for which the Frobenius element Frob l ∈G K l acts trivially on T p /I l T p , L k ={l∈L | I ℓ ⊂p k Z p } and let N k be the set of square-free products of primes in L k . For brevity, we write , which is a principal local ring with maximal ideal denoted by m again, (iv) T α := T (k) ⊗ A(α), which is the G K -module T (k) twisted by a character α. Definition 3.2. Let M be a finitely generated A-module. Then Suppose s : C −→ D be a surjective map of two finitely generated A-modules, then for x ∈ C We continue to assume that Hypotheses 2.8 holds. Then there are infinitely many primes l ∈ L such that loc l (c) = 0. Remark 3.4. It is to be noted that using the above Theorem, we do not need to twist T (k) by a non-trivial character α in the irreducible case. In other words, we can take α = 1 in this case, and C α = 0. Building on results of Nekovář [Ne, Lemma 6.6.1(iii), Prop 6.1.2, Cor 6.3.4] the following result is proved by Castella et al. Theorem 3.5 ([CGLS,Prop. 3.3.6]). Suppose α = 1 and c 1 , c 2 ∈ H 1 (K, T α ) are cocyles such that Ac 1 + Ac 2 contains a submodule isomorphic to m d 1 A ⊕ m d 2 A for some d 1 , d 2 ≥ 0. Then for any Proposition 3.6. Let k > ε 0 and α = 1. Then for any cyclic free A-submodule C = Ac of rank one contained in Sel F (n) (K, T α ), there exists infinitely many primes l ∈ L such that loc l (c) = 0. Proof. For brevity, we write T for T α . Since Sel F (n) (K, T ) ∼ =A e(n) ⊕ M n ⊕M n , and it contains a non-zero submodule C, we have e n = 0 or M n = 0. It is enough to prove the statement in the following cases. Case I: Let M n = 0. Then Sel F (n) (K, T )⊇M n ⊕M n = 0. Let x ∈ M n be a non-zero cohomology class. Let x be not torsion. Then Ax is a free A-submodule, and Ax ⊕ Ax contains a submodule isomorphic to A ⊕ A. Let x be A-torsion, and let t = ord(x). Then t < k, π t x = 0, and the natural map A −→ Ax has kernel generated by π t , and the natural surjective map A ։ π k−t A factors through π t A. Therefore, Taking c 1 to be equal to x coming from M n and c 2 from the other summand M n , we see that the submodule xA ⊕ xA has a submodule isomorphic to m t A ⊕ m t A. Then we get two cohomology classes Therefore, by Theorem 3.5, we have infinitely many l such that loc l (c) = 0. Here nl ∈ N . By definition, This is a contradiction as these lengths add up to length(A) = k by [H3,Prop 2.2.9]. This proves our claim as Sel F (n) (K, T ) = 0. . Then the submodule generated by x and c 2 contains a submodule isomorphic to A ⊕ m b for some a, b ≥ 0. 
Here again, for the free submodule Ac = C, by Theorem 3.5 we get infinitely many l such that loc l (c) = 0. In the other case, Sel F (n) (K, T ) = H 1 (K, T ) ∼ = A, so Ac = H 1 (K, T ), it is clear that we have infinitely many l such that loc l (c) = 0. This completes the proof. We now have the following key lemma. Lemma 3.7. Let length(A) > ε 0 . Then for any n ∈ N and any cyclic free rank one A-submodule C ⊂ Sel F (n) (K, T ), there exists infinitely many l ∈ L such that loc l (C) ∼ = H 1 unr (K l , T ). Proof. Let C be generated by c, i.e., C = Ac ∼ = A. Suppose k = length(A), then ord(c) = k. By the previous Proposition, there exists infinitely many l ∈ L such that Therefore, loc l takes C injectively into H 1 unr (K, T ), which by equation (2.1) is isomorphic to A. Hence loc l (C) ∼ = H 1 unr (K, T ). Corollary 3.8. Let length(A) > ε 0 . Then for any n ∈ N and any free A-submodule C ⊂ Sel F (n) (K, T ) of rank one, there exists infinitely many l ∈ L such that C/m ∼ = H 1 unr (K, T /mT ). In particular, loc l (Sel F (n) (K, T /mT )) = 0, for infinitely many primes l ∈ L. Proof. By the lemma above, for infinitely many primes l ∈ L, we have an isomorphism: which induces the following isomorphism going modulo m: Proof. Consider the commutative diagram Since loc l (Sel F (nl) (K, T )) = 0, we have loc l (Sel F (nl) (K, T /mT )) = 0. So, we have loc l (Sel F (nl) (K, T /mT )) ∼ = A/m, as the lower horizontal map is A/m-vector space homomorphism. By Proposition 2.10 (ii), we have loc l (Sel F (n) (K, T /mT )) = 0. However, it can be seen that length(loc l (Sel F (nl) (K, T )) = length(A), hence loc l (Sel F (n) (K, T )) = 0. It follows that that loc Remark 3.10. In the irreducible case, as mentioned in Theorem 3.3, the error term ε 0 does not appear. Bipartite Euler System over Artinian Rings We continue with the notations introduced in Sections 2 and 3, along with the Hypothesis 2.8. We still continue to denote T α by T . Following Howard [H3,Def 2.3.2], we define a bipartite Euler system as follows: Definition 4.1. A bipartite Euler system of odd type for (T, F, L) is a pair of families {κ n ∈ Sel F (n) (K, T ) | n ∈ N odd } and {λ n ∈A | n ∈ N even } related by the following reciprocity laws: (i) for any nl∈N odd , there exists an isomorphism of A-modules A/(λ n ) ∼ = H 1 ord (K l , T )/A. loc l (κ nl ), (ii) for any nl∈N even , there exists an isomorphism of A-modules A bipartite Euler system of even type is defined in the same way, but with the even and odd term interchanged everywhere in the definition. An Euler system is said to be non-trivial if λ n = 0 for some n. By the reciprocity law this is equivalent to saying κ nl =0 for some l ∈ L. Proof. Suppose there exists a non-trivial Euler system of even type. Then for some n ∈ N odd , λ n = 0. As e n = 1, Sel F (n) contains a free A-module of rank 1, and by Lemma 3.7, loc l (Sel F (n) (K, T )) = 0 for some l ∈ L. It follows that the injective map . Therefore, loc l (κ nl ) = 0, which is not possible by the reciprocity law. Proposition 4.3. Let length(A) = k > ε 0 , and consider a non-trivial Euler system of odd type for Proof. (i) Let n ∈ N even such that m k−1 M n = 0. This implies Sel F (n) (K, T ) contains a free submodule of rank one, say C. Let l ∈ L such that l ∤ n and loc l (C) ∼ = H 1 unr (K l , T ) (using Lemma 3.7). By Proposition 2.10 (ii), loc l (Sel F (nl) (K, T )) = 0, from which we get loc l (κ nl ) = 0. By the first reciprocity laws, λ n = 0, which gives a contradiction. (ii) Let n ∈ N odd such that m k−1 M n = 0. 
By Proposition 2.10(i) we know Sel F (n) (K, T ) contains a free submodule of rank two, say D. Thus by equation (2.1), the kernel of the following map, By the second reciprocity law, we have, loc l (κ n ) = 0 for all l ∈ L, l ∤ n. For κ n = 0, this gives a contradiction as there are infinitely many primes l ∈ L, such that loc l (C) ∼ = A ( by Lemma 3.7). Definition 4.4. A non-trivial Euler system of odd type is said to be free if for every n ∈ N odd , κ n is contained in a cyclic submodule of Sel F (n) (K, T ) which is A-free of rank one. Let ρ 0 := max{ρ(n) : n ∈ N }. For S a finite set of primes, let G S denotes the Galois group of the maximal extension of K unramified outside S over K. Then, it follows from the Hermite-Minkowski Theorem that dim A/m H 1 (G S , T /mT ) is finite. So ρ 0 is bounded above by dim A/m H 1 (G S , T /mT ). We define ε := ε 0 ρ 0 . Then ε is a non-negative integer independent of the length of A. Theorem 4.7. Let length(A) > ε 0 . For any free Euler system of odd type for (T, F, L), π ε λ n ∈ Stub n for every n ∈ N even , and π ε κ n ∈ Stub n for every n ∈ N odd . Equivalently, in terms of the A-module M n in the decomposition Sel F (n) ∼ = A en ⊕ M n ⊕ M n , we have Proof. We prove this by induction on ρ(n). If n is even and ρ(n) = 0, then length(M n ) = 0, so M n = 0. Hence Stub n = A. Similarly if ρ(n) = 1 and n is odd, then again we have M n = 0. So the result follows. We now assume that ρ(n) ≥ 2, and so length(M n ) = 0. (i) A proof of the above length bound is obtained in [H3,Theorem 2.3.7], assuming that T /mT is an irreducible representation of G K . There the error term ε does not occur. The above theorem subsumes the result of Howard by taking ε = 0 if T /mT is irreducible. (ii) The above theorem can be compared with [CGLS,Theorem 3.2.1]. An error term is also present there. Stub Modules We briefly recall some facts about Stub modules from [H3]. As in the previous section, we denote the twist T α by T , and retain the notations in the previous section. Our definition of a core vertex ( Def 5.1) is different from that of Howard or Mazur-Rubin's in [MR] and we call them absolute core vertices. Let X := (V, E) be a graph with the set of vertices V := {v(n) | n ∈ N } and the set of edges E := {e(n, nl) | l ∈ L}. A vertex v(n) is called even ( resp. odd) if n ∈ N even (resp. n ∈ N odd ). We often write n is an even or odd vertex accordingly as v(n) is even or odd vertex. Attached to this graph is an Euler System Sheaf of A-modules, which is defined for a vertex v = v(n) and an edge e = e(n, nl) as: If v is even then we fix, using equation (2.1), an isomorphism Here we fix a choice of this isomorphism for each edge e and even vertex v. Over each vertex v = v(n) and edge e = e(n, nl), the stub sheaf Stub(X ) is defined, as follows Stub(v) := Stub n ⊂ ES(v) and Stub(e) := loc l (Stub n ) if n ∈ N odd loc l (Stub nl ) if n ∈ N even . If v is an even vertex, and v ′ odd with edge e = e(v, v ′ ), then the vertex-to-edge map ψ e v ′ : Stub(v ′ )−→ Stub(e) is surjective. By Corollary 4.6, it can be seen that the map ψ e v gives an isomorphism Stub(v) ∼ = Stub(e). (i) Let u = min{length(M n )|n ∈ N }. Then a vertex v of X is called an absolute core vertex if Stub(v) ∼ =m u A. (ii) The absolute core subgraph X abs ⊂ X is the graph whose vertices are the absolute core vertices of X , with two vertices in X abs connected by an edge in X abs if and only if they are connected by an edge in X . We let Stub(X abs ) be the restriction of Stub(X ) to X abs . Remark 5.2. 
In the special case when T /mT irreducible, then by [H3,Lemma 2.4.9], the minimal length u = 0, and we recover the definition of a core vertex in [H3,Def 2.4.2] and [MR,Def 4.1.8]. In other words, in the irreducible case absolute core vertices are those vertices n ∈ N such that Stub n ∼ = A (see [H3]). Definition 5.3. (i) A path from a vertex v to a vertex v ′ in X is a finite sequence of vertices v = v 0 , v 1 , ..., v r = v ′ such that v i is connected to v i+1 by an edge e i . A path is surjective (for the locally cyclic sheaf Stub(X )) if the vertex-to-edge map ψ e i v i+1 : Stub(v i+1 )−→ Stub(e i ) is an isomorphism for every i. We define a path in X abs in the same way. (ii) The graph is said to be path connected if for any two vertices in the graph, there exists a surjective path between them. By [H3,Lemma 2.4.8], a path v 0 , ....., v r in X is surjective if and only if for every i length(Stub(v i+1 )) ≤ length(Stub(v i )). Remark 5.5. By Lemma 3.7, for any odd vertex v(a), there exists infinitely many l ∈ L such that loc l (Sel F (a) ) ∼ = A, so the above process continues indefinitely and both the notions are indeed well-defined. When n is odd the proof is similar to the above one. Indeed, let us consider n is odd, length(M n ) = u, but not universally trivial. Then there exists primes l 1 , ..., l r for some r ∈ N, such that loc l 1 (Sel F (n) ) ∼ = A, loc l 2 (Sel F (nl 1 ) ) = 0, ..., loc l 2r−1 (Sel F (nl 1 l 2 ...l 2r−2 ) ) ∼ = A but loc l 2r (Sel F (nl 1 l 2 ...l 2r−1 ) ) = 0. where a 2r is a non zero positive integer. This contradicts that length(M n ) = u is minimal. This completes the proof of the first part of the Proposition. By Before we proceed to prove the second part of this Proposition, we prove the lemma below. Lemma 5.7. Let a = l 1 l 2 ...l s be a universally trivial vertex. Let n be any vertex such that length(M n ) = u. Then there exists a universally trivial vertex naa ′ such that length(M n ) = length(M naa ′ ). Proof. Case-I: Let n be even. Then by Proposition 2.12, length(M n ) = length(M nl 1 ) and nl 1 is odd and universally trivial. By Lemma 3.7 there exists a prime l ′ 1 such that length(M nl 1 ) = length(M nl 1 l ′ 1 ) (Proposition 2.12) and nl 1 l ′ 1 is even and universally trivial. Proceeding like this and adding primes l 2 , l ′ 2 , · · · , l s , we get a universally trivial vertex naa ′ such that length(M n ) = length(M naa ′ ) for some a ′ . Case-II: Let n be odd. Then by Lemma 3.7, there exists a prime l ′ 1 such that length(M n ) = length(M nl ′ 1 ) ( by Proposition 2.12). Here as nl ′ 1 is even, universally trivial, by using Proposition 2.12 again, we have length(M nl ′ 1 ) = length(M nl ′ 1 l 1 ). Continuing like this we get length(M n ) = length(M naa ′ ) for some a ′ ∈ N as nl 1 l ′ 1 is again a universally trivial vertex. (ii) We now continue with the proof of Proposition 5.6. Recall that M n is of minimal length u, hence M naa ′ is minimal by the previous lemma. We show the second part of the proposition by induction on the number of prime factors of na ′ by constructing a surjective path from a to ana ′ such that length(M naa ′ ) = length(M a ). Let the number of prime factors of na ′ be greater than one. Case-Even: Suppose ana ′ is even. Then by Proposition 2.12, length(M ana ′ /l ) = length(M ana ′ )− b. Since length(M naa ′ ) is minimal, b = 0, so ana ′ /l is universally trivial, by part (i). By induction hypothesis there is a surjective path from a to ana ′ /l such that length(M a ) = length(M ana ′ /l ). 
Therefore length(M a ) = length(M naa ′ ). Case-Odd: Suppose b = ana ′ is odd. If there exists q|na ′ such that loc q (Sel F (b) ) ∼ = A, then by Proposition 2.12 we have, length(M b/q ) = length(M b ) and by induction hypothesis the result follows. Let loc q (Sel F (b) ) ≇ A for all q|na ′ . Let l ∤ b be such that loc l (Sel F (b) ) ∼ = A. By Proposition 2.12, we have length(M b ) = length(M bl ) = u. Since bl is even, by minimality, we have length(M bl/q ) = u for any q|na ′ . It can be seen that v(b), v(bl), v(bl/q) is a surjective path such that length(M b ) = length(M bl ) = length(M bl/q ). Again if we have one such prime τ | (bl/q) such that loc τ (Sel F (bl/q) (K, T ) ∼ = A, then by induction hypothesis we are done. Now suppose loc τ (Sel F (bl/q) ) ≇ A for all τ |(bl/q). Then, writing b ′ = bl/q, we have loc τ (Sel F (bl/q) ) = 0 for all τ |b ′ ( by Lemma 3.9). Then Since the length of Sel F (b) on the right is minimal, so equality holds everywhere, and this contradicts loc l (Sel F (b) ) ∼ = A. Therefore, there exists τ such that loc τ (Sel F By induction hypothesis the result follows. Remark 5.8. Note that Proposition 5.6 says that n is an absolute core vertex if and only if it is universally trivial. Lemma 5.9. Let v be any vertex of X . Then there is a absolute core vertex v 0 and a surjective path in X from v 0 to v. Moreover, for any n ∈ N there is a vertex n ′ ∈ N with n|n ′ such that v(n ′ ) is a absolute core vertex. This n ′ may be chosen either in N even or in N odd . If w i = w(n i ) is odd, using Lemma 3.7 choose l ∈ L prime to n i such that localization at l takes a free rank one submodule of Sel F (n i ) isomorphically onto H 1 unr (K l , T ), and set w i+1 = w(n i l). From the Proposition 2.10(iii) we have length(Stub(w i )) = length(Stub(w i+1 )), since a = length(A) and b = 0. Continuing this way we get a desired path from a absolute core vertex v 0 = w k to any given vertex v, and also we have for all i. By [H3,Lemma 2.4.8] the path w k , ..., w 0 is a surjective path from v 0 to v. Clearly, if v 0 is absolute core and even (resp. odd), then by the definition of universally trivial, it can be connected to an odd (resp. even) absolute core vertex. For any a ∈ N , let X abs,a be the subgraph of X abs whose vertices consist of those absolute core vertices v(n) with a|n. Two vertices are connected by an edge in X abs,a if and only if they are connected by an edge in X abs . Corollary 5.10. Let length(A) > ε 0 and v(a) be a absolute core vertex. Then the subgraph X abs,a is path connected. Proof. Follows from the proof of the Proposition 5.6. Proposition 5.11. The absolute core subgraph X abs is path connected and contains both even and odd vertices. For any vertex v of X and any absolute core vertex v 0 of X , there is a surjective path from v 0 to v. Proof. The proof goes along similar lines as in [H3,Prop 2.4.11]. That the absolute core subgraph contains both even odd vertices is clear from the proof of Lemma 5.9. Consider v(a) and v(b) two vertices of the absolute core subgraph X abs . By Lemma 5.9, we may choose n ∈ N divisible by lcm(a, b) such that v(n) is absolute core vertex. By Corollary 5.10 we know there is a path from v(a) to v(n) in X abs,a and similarly a path from v(b) to v(n) in X abs,b . Since any path in X abs,a and X abs,b is also a path in X abs , there is a path from v(a) to v(b). Since any path in X abs is surjective, any two absolute core vertices may be connected by a surjective path. 
Now given any v in X by Lemma 5.9 we know there exist a absolute core vertex v 0 and a surjective path from v to v 0 . Proof. Let v 0 be a absolute core vertex. Define δ 0 to be such that s(v 0 ) generates m δ 0 Stub(v 0 ). Since v 0 is absolute core, we have 0 ≤ δ 0 ≤ length(A) − u. By Proposition 5.11, for any vertex v in X consider a surjective map Stub(v 0 )−→ Stub(v) taking s(v 0 ) to s(v). Then the image s(v) generates m δ 0 Stub(v). We now have the rigidity theorem below. This may be compared with [H3, Th 2.5.1]. Theorem 5.14. (The Rigidity theorem) Assume the Hypotheses 2.8. In addition, assume that we have a nontrivial free Euler system of odd type for (T, F, L) such that π ε λ n = 0. Then there is a unique integer δ, independent of n ∈ N , such that π ε λ n generates m δ Stub n for every n ∈ N even and π ε κ n generates m δ Stub n for every n ∈ N odd . Furthermore, δ is given by δ = min{ind(π ε λ n , m u A)|n ∈ N even } = min{ind(π ε κ n , m u Sel F (n) )|n ∈ N odd }. Proof. Let v(n), v(nl) be two vertices of the graph X , with edge e = e(n, nl). Consider the global sections s(v), s(e) of Stub(X ) defined by s(v) = π ε λ n if n ∈ N even π ε κ n if n ∈ N odd and s(e) = π ε loc l (κ n ) if n ∈ N odd π ε loc l (κ nl ) if n ∈ N even . These maps are well defined by Theorem 4.7, and s is a non-trivial global section of the Euler system sheaf ES(X ). By Theorem 4.7 this global section is actually a global section of the subsheaf Stub(X ) ⊂ ES(X ). By Corollary 5.13, there is a unique 0 ≤ δ < length(A) − u such that s(v) generates m δ Stub(v) for every vertex v. For any other vertex v with s(v) = 0 we have with equality if and only if length(M n ) = u, i.e., v is absolute core. As there are even absolute core vertices by Proposition 5.9, we have δ = min{ind(s(v), m u ES(v))|v is even} and similarly with even replaced by odd. Iwasawa Theory Recall that K is an imaginary quadratic field of Q of discriminant D K and quadratic character ǫ, p ≥ 5 be a rational prime such that the elliptic curve E/Q of conductor N has good ordinary or multiplicative reduction at p. Also assume that (D K , pN ) = 1 and N − be the largest divisor of N such that (N − , p) = 1 and ǫ(q) = 1 for all primes q|N − . We write N + = N/N − , so that N = N + N − . As before, let K ac be the anticyclotomic Z p extension of K such that the Galois group Γ − := Gal(K ac /K) ∼ = Z p is characterized by τ στ = σ −1 for all σ ∈ Γ − and τ be a fixed complex conjugation. For any m ≥ 0, consider the subfields K ⊂ K m ⊂ K ac be such that [K m : K] = p m and Λ = Z p [[Γ − ]] be the Iwasawa algebra of Γ − over Z p . Throughout this section we assume the following hypothesis. Definition 6.2. A degree two prime l ∤ N of K is called k-admissible if N (l) = 1 (mod p) E[p k ] ∼ = Z/p k Z ⊕ µ p k as Gal(K un l /K l ) − module. Let L k denote the set of k-admissible primes, and N k denote the set of square free product of primes in L k . The set of 1-admissible primes of L are simply referred to as admissible. Let T p denote the Tate module of the elliptic curve E. Since N − is squarefree, for any rational prime q with q | N − , E has multiplicative reduction at q, and hence split multiplicative reduction at the prime q of K above q. By the Tate parametrization, the G Kq -representation on T p gives a filtration of G Kq -modules 0 ⊂ Fil q (T p ) ⊂ T p where Fil q (T p ) ∼ = Z p (ǫ cyc ) and ǫ cyc is the cyclotomic character of G Kq . For any extension F/K q , we define the ordinary submodule by . 
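By analogy with the ordinary condition at $p$ defined just below, the intended submodule is presumably
$$H^1_{\mathrm{ord}}(F, T_p) := \mathrm{image}\big(H^1(F, \mathrm{Fil}_q(T_p)) \longrightarrow H^1(F, T_p)\big),$$
where $\mathrm{Fil}_q(T_p)$ is the filtration step coming from the Tate parametrization at $q$.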
Similarly, we also define H 1 ord (F, E[p k ]). For a k-admissible prime l ∈ L k , we have a similar local condition H 1 ord (F, E[p k ]) for any extension F/K l . Since T p is ordinary, we have a filtration of G Qp -modules: 0 ⊂ Fil p (T p ) ⊂ T p on which the inertia subgroup at p acts on Fil p (T p ) by the cyclotomic character. We define the ordinary local condition at p by H 1 ord (F, T p ) := image H 1 (F, Fil p (T p )) −→ H 1 (F, T p ) for any finite extension F/Q p . Replacing T p by E[p k ] and E[p ∞ ], we can define the ordinary cohomology groups for these modules. We similarly define ordinary cohomology and selmer groups for the twists T p ⊗ α for characters α of Γ − . Writing T for T p ⊗ α, and for every integer m ≥ 0, we define: By [H3,Lemma 3.1.2], for any k-admissible prime l ∈ L k , the modules lim − →m H 1 ord (K m,l , E[p k ]) and lim ← −m H 1 unr (K m,l , E[p k ]) are free Λ/p k Λ-module of rank one. Definition 6.3 (Selmer groups). We define the compact selmer group by We write S := S(K ac , T p ). We similarly define S(K ac , E[p k ]) by replacing T p by E[p k ]. We define the standard Selmer group over the anticyclotomic Z p -extension: We also write X := Hom(Sel(K ac , E[p ∞ ]), Q p /Z p ) and for any n ∈ N k : be the Λ-submodule of classes which are ordinary at the primes dividing npN − and unramified at all other primes. Then S = S 1 (K ac , E[p k ]). Let V p = T p ⊗ Q p . Then we have an exact sequence Consider prime ideal P = pΛ of height one in Λ, and let O P denote the integral closure of Λ/P, viewed as a Galois module with trivial action. Then the quotient field L P of O P is a finite extension of Q p . Let m P denote the maximal ideal of O P . Let T P = T p ⊗ Λ, viewed as a G K -module via the natural map G K −→Λ × which is non-trivial. Tensoring with O P , we obtain an exact sequence of O P [[G K ]]-modules 0−→T P −→V P −→W P −→0. Now for any prime l of K and M be any one of T P , V P , or W P , we define the submodule H 1 F P (K l , M ) ⊂ H 1 (K l , M ) as follows. First suppose M = V P . We define 3). These local submodules define global Selmer groups which we denote by Sel F P (K, M ). Euler Systems over Λ. Definition 6.4. For n ∈ N 1 , let n ∈ N such that nO K = n. Then n is said to be definite if ǫ(nN − ) = −1 and indefinite if ǫ(nN − ) = 1. Let N def inite k ⊂ N k denote the subset which consists of products of definite primes and N indef inite k the set of products of indefinite primes. We also suppose that for any positive integer k we are given a pair of families: ] → E[p k ] and the inclusion N k+1 ⊂ N k as k varies. We also assume that it satisfies the first and second reciprocity laws: Since the empty product lies in N k for every k, we obtain two distinguished elements which is defined as the inverse limit of λ 1 and κ 1 as k varies. Regarding the rank of S and the characteristic power series of the torsion part of X, we have the following theorem, which is the main theorem here. Theorem 6.6. Let the distinguished elements λ ∞ , κ ∞ be nonzero, and X tor denote the torsion submodule of X. Then, we have the following. (iii) If there exists s ∈ N such that for all t ≥ s the set contains an element which is nonzero in Λ/(P, p s ), then equality holds in the inequality (6.5). After some preliminary results, we prove this theorem below in Section 6.3. Proposition 6.7 ( [H3,Prop 3.3.2]). Shapiro's lemma, the natural map T p ⊗Λ−→T P , and its dual induce maps The first map is injective. 
There is a finite set of height one primes, say Σ Λ of Λ, such that if P / ∈ Σ Λ , then these maps have finite kernel and cokernel both of which are bounded by a constant depending on [O P : Λ/P] but not on P itself. Proof. The proof follows as in [MR,Proposition 5.3.13,5.3.14]. There the Cartesian property of the selmer structure for T /mT is required to bound the kernel and cokernel of the second map (which is denoted by π * P in loc. cit). In our situation, when we have E[p] G K = 0, the Cartesian property of the Selmer structure for T /mT still holds, which we can use to bound the kernel and cokernel ( see the last paragraph of the proof of [MR,Proposition 5.3.14]). Lemma 6.8 ( [MR,Lemma 3.7.1]). Let S P := Sel F P (K, T P ). Then the natural map is injective, where the Selmer structure on T P /p k T P is induced from the Selmer structure on T P (see Remark 2.3). Let k, j be positive integers such that k ≤ j and u k = min{length(M (k) n ) | n ∈ N } for Selmer groups over the Artinian ring O P /p k O P as defined in 5.1. Set Set T (i) = T P /p i T P and R (i) = O P /p i O P for any i ∈ N. Remark 6.9. Note that the non-trivial anticyclotomic character Γ − −→ Z × p acts on T P , and hence on W P . Therefore the results in Sections 3, 4 and 5 are applicable to Selmer groups for T (j) . Given the Selmer structure F P on T P , let F be the Selmer structure on T (j) given by Remark 2.3. Similarly, we also denote the selmer structure induced from F P to W P to W P [p j ] by F P . Note that lim to the Euler system in (6.1), we get the pair of families (6.6) {κ n ∈ Sel F (n) (K, T (j) )|n ∈ N indef inite j } {λ n ∈R (j) |n ∈ N def inite j }. By assumption and Lemma 6.8 we have κ 1 or λ 1 is nonzero (where 1 ∈ N j is the empty product depending on whether ǫ(N − ) = 1 or −1). Fixing a uniformizer of O P we have an isomorphism T (j)∼ =W P [p j ], which gives us the isomorphism: where the last isomorphism follows from Lemma 2.9. Furthermore, the families in (6.6) form an Euler system of odd type for (T (j) , F, L j ). Proof. The proof goes in the same way as mentioned in [H3, lemma 3.3.5]. As in [H3], the Euler system in equation (6.6) for (T (k) , F, L k ) may not be free, but this can be obtained by shrinking the set of indexing primes L k . With suitable modifications, the proof of the results of Howard now goes through. Proof. The proof of this result is similar to that of [H3, 3.3.6]. Let n ∈ N indef inite j and j ≥ 2k. By Proposition 2.10(i), we have For the maximal ideal m of O P , we fix a uniformizer π. Let e be the ramification degree of O P . Then the length of R (k) is ek. If m ek−1 N = 0 then we have κ n = 0 by Proposition 4.3, so there is nothing to prove. Suppose that m ek−1 N = 0. By Lemma 2.9 we have the commutative diagram The diagonal isomorphism shows that m ek−1 . Sel F (n) (K, T (j) )[m ek ] is a cyclic module, and this implies m ek−1 M = 0. But since j ≥ 2k the image of M under the vertical arrow is zero. Therefore the image of the vertical arrow is free of rank one, and also κ n is contained in the image. This completes the proof. To prove the main Theorem 6.6, we first show the proposition below. Note that we get an error term, which does not appear in the irreducible case. Proposition 6.12. Let ǫ(N − ) = −1 and assume that λ ∞ ∈ Λ has nontrivial image in O P /p k O P . Then Let ǫ(N − ) = 1 and assume that κ ∞ has nontrivial image in S P /p k S P . 
Then (i) S P is a free O P -module of rank one, (ii) Sel F P (K, W P ) has O P -corank one, and (iii) length O P (Sel F P (K, W P )/Sel F P (K, W P ) div ) + 2δ P (k + ε) = 2. length O P (S P /O P κ ∞ ) + 2ε, where the subscript div indicates the the maximal O P -divisible submodule of Sel F P (K, W P ). Again as above we have length O P (M ) ≤ k − 1. Combining this with equation (6.7), we have which is a torsion-free rank one O P -module. Applying Lemma 6.8 we know that the reduction map S P /p k S P −→ Sel F (K, T (k) ) is injective. Theorem 5.14 now gives length O P (Sel F P (K, W P )/ Sel F P (K, W P ) div ) + 2.δ P (k + ε, j) = length O P (M ⊕ M ) + 2δ P (k + ε, j) = 2. ind(κ 1 , Sel F P (K, T (k+ε) )) + 2ε = 2. length O P (S P /S P κ ∞ ) + 2ε. Now taking j → ∞ we get the result. This proves the proposition. 6.3. Proof of Theorem 6.6. Let λ ∞ or κ ∞ be nonzero, accordingly as ǫ(N − ) = −1 or ǫ(N − ) = 1. By Lemma 6.5, S is finitely generated torsion-free Λ-module. If ǫ(N − ) = 1 then the image of κ ∞ in S/PS is nonzero for all but finitely many height one prime ideals P in Λ. Similar arguments hold for λ ∞ if ǫ(N − ) = −1. So let Σ Λ be a finite set of prime ideals of Λ large enough that it contains pΛ and all the prime divisors of the characteristic ideal of the torsion submodule X, and large enough such that the image of the distinguished elements in (6.4) has nonzero image in S/PS or Λ/PΛ for all P / ∈ Σ Λ . (i) We fix P / ∈ Σ Λ and suppose ǫ(N − ) = 1. In this case, it follows from Proposition 6.8 that κ ∞ has nonzero image in Sel F P (K, T P ). By Proposition 6.12, the rank of Sel F P (K, T P ) is one and the corank of Sel F P (K, W P ) is also one as O P -modules. Since S is torsion-free, it follows from Proposition 6.7, that rank Λ S = rank O P (S ⊗ O P ) = 1. For X, by Proposition 6.7, the map Sel F P (K, W P ) −→ Sel(K ac , E[p ∞ ])[P] has finite kernel and cokernel. Taking Pontryagin dual, it follows that rank Λ X = 1. Similarly, if ǫ(N − ) = −1, then by the previous proposition, Sel F P (K, W P ) is annihilated by a fixed power of p. Therefore Sel F P (K, T (j) ) is also annihilated by a fixed power of p for any j. As Sel F P (K, T P ) = lim ← − S P /p k S P , it follows from Proposition 6.7, that S/PS is annihilated by a fixed power of p. Lemma 6.5, then shows that S is Λ-torsion. A similar argument shows that rank Λ X = 0. This completes the proof of (i). (ii) Let P be a height one prime ideal of Λ which is generated by a distinguished polynomial g. For each positive integer m, consider the height one prime ideals It is clear that for large enough m, P m is a prime ideal which does not lie in Σ Λ . By Hensel's lemma we have Λ/P m ∼ = Λ/P. Then, as in the proof of [MR,Theorem 5.3.10] and using Proposition 6.7, we have length Zp (Sel F Pm (K, W Pm )/ Sel F Pm (K, W Pm ) div ) = m. rank Zp (O P ). ord P (char(X Λtor )) up to O(1) as m varies. Let S Pm = Sel F Pm (K, T Pm ). Then length Zp (S Pm /O Pm κ ∞ ) = m. rank Zp (O P ). ord P (char(S/Λκ ∞ )) length Zp (O Pm /O Pm λ ∞ ) = m. rank Zp (O P ). ord P (λ ∞ ) accordingly as ǫ(N − ) = 1 or − 1, up to O(1) as m varies. Let e be the absolute ramification degree of O Pm . This ramification degree is independent of m. Now for large enough k by Proposition 6.12 we have length O Pm (Sel F Pm (K, W Pm )/ Sel F Pm (K, W Pm ) div ) + 2e.δ Pm (k + ε) = 2. length O Pm (S Pm /O Pm κ ∞ ) + 2ε, length O Pm (Sel F Pm (K, W Pm )) + 2e.δ Pm (k + ε) = 2. length O Pm (O Pm /O Pm λ ∞ ) + 2ε accordingly as ǫ(N − ) = 1 or − 1. 
Since $\delta_{P_m}(k + \varepsilon) \ge 0$, taking the limit $m \to \infty$ we get (ii). (iii) We show that $\delta_{P_m}(k)$ is bounded as $m$ and $k$ vary, for $j \ge k_0 > \varepsilon$. Let $n(j) \in N_j^{\mathrm{definite}}$ be such that $\lambda_{n(j)}$ has nonzero image in $\Lambda/(P_m, p^{k_0})$. Then $\lambda_{n(j)}$ has nontrivial image in $\Lambda/(P_m, p^{k_0})$ for all $m \ge k_0$. Define $C_m = \mathrm{coker}[\Lambda/P_m \hookrightarrow \mathcal{O}_{P_m}]$. Note that the $C_m$ are finite and, up to isomorphism, do not depend on $m$. If $k_1$ is large enough that $p^{k_1 - k_0}$ kills $C_m$, then we have a commutative diagram from which it follows that $\lambda_{n(j)}$ has nontrivial image in $\mathcal{O}_{P_m}/p^{k_1}\mathcal{O}_{P_m}$, so $\pi^{\varepsilon}\lambda_{n(j)}$ has nontrivial image in $\mathcal{O}_{P_m}/p^{k_1+\varepsilon}\mathcal{O}_{P_m}$. By Theorem 5.14, $\pi^{\varepsilon}\lambda_{n(j)} \in \mathfrak{m}^{u_{k_1+\varepsilon}}\,\mathcal{O}_{P_m}/p^{k_1+\varepsilon}\mathcal{O}_{P_m}$, and by the observation in equation (3.1), for $j \ge k + \varepsilon \ge k_1 + \varepsilon$ we have $\delta_{P_m}(k + \varepsilon, j) \le \mathrm{ind}(\pi^{\varepsilon}\lambda_{n(j)}, \mathfrak{m}^{u_{k+\varepsilon}}\,\mathcal{O}_{P_m}/p^{k+\varepsilon}\mathcal{O}_{P_m}) < 2e(k_1 + \varepsilon)$.
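To summarize the final step in our own words (the argument above is compressed, so this is a reading rather than a quotation): the displayed estimate gives a bound
$$\delta_{P_m}(k+\varepsilon, j) < 2e(k_1+\varepsilon)$$
that is uniform in $m$ and $k$. Consequently the error terms in the length identities used in the proof of (ii) are $O(1)$ as $m \to \infty$, and dividing by $m$ and passing to the limit upgrades the inequality of (ii) to the equality asserted in (iii).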
Effect of sesame oil on diuretics or Beta-blockers in the modulation of blood pressure, anthropometry, lipid profile, and redox status

The study was undertaken to investigate the effect of sesame oil in hypertensive patients who were on antihypertensive therapy with either diuretics (hydrochlorothiazide) or Beta-blockers (atenolol). Thirty-two male and 18 female patients aged 35 to 60 years were supplied sesame oil (Idhayam gingelly oil) and instructed to use it as the only edible oil for 45 days. Blood pressure, anthropometry, lipid profile, lipid peroxidation, and enzymic and non-enzymic antioxidants were measured at baseline and after 45 days of sesame oil substitution. Substitution of sesame oil brought systolic and diastolic blood pressure down to normal. The same patients were then asked to discontinue sesame oil consumption for another 45 days, and the measurements were repeated at the end of the withdrawal period. Withdrawal of sesame oil substitution brought back the initial blood pressure values. A significant reduction was noted in body weight and body mass index (BMI) upon sesame oil substitution. No significant alterations were observed in the lipid profile except for triglycerides. Plasma levels of sodium decreased while potassium levels increased upon the substitution of sesame oil. Lipid peroxidation (thiobarbituric acid reactive substances [TBARS]) decreased, while the activities of superoxide dismutase (SOD) and catalase (CAT) and the levels of vitamin C, vitamin E, Beta-carotene, and reduced glutathione (GSH) increased. The results suggest that sesame oil as edible oil lowered blood pressure, decreased lipid peroxidation, and increased antioxidant status in hypertensive patients.

INTRODUCTION

Recently, much attention has been focused on the antioxidant defense system in oxidative stress and cardiovascular diseases. Natural antioxidants and polyunsaturated fatty acids contained in dietary sources are candidates for the prevention of oxidative damage and cardiovascular diseases [1]. Polyunsaturated fatty acids are essential for normal growth and development and may play an important role in the prevention and treatment of coronary heart disease, hypertension, diabetes, arthritis, and other inflammatory and autoimmune disorders. Clinical and epidemiological studies have shown the cardiovascular protective effects of oils rich in polyunsaturated fatty acids (PUFA) [2,3]. In particular, these substances have been reported to lower blood pressure and prevent the development of hypertension [4,5].

Sesame seeds and oil have long been categorized as a traditional health food in India and other East Asian countries. Sesame oil has been found to contain considerable amounts of the sesame lignans sesamin, episesamin, and sesamolin. Sesame oil also contains vitamin E (40 mg/100 g oil), 43 percent polyunsaturated fatty acids, and 40 percent monounsaturated fatty acids. The lignans present in sesame oil are thought to be responsible for many of its unique chemical and physiological properties, including its antioxidant and antihypertensive properties [6-9]. In the present study, we evaluated the effect of sesame oil (rich in antioxidant lignans, vitamin E, and unsaturated fatty acids) in hypertensive patients on medication with either hydrochlorothiazide or atenolol as antihypertensive therapy.
Subjects

The present study consists of patients of both sexes in the age group 35 to 60 years with mild to moderate hypertension, medicated with diuretics (hydrochlorothiazide) or ß-blockers (atenolol), who were recruited from the Department of Medicine at Rajah Muthiah Medical College and Hospital, Annamalai University, and Prof. Maniarasan Memorial Polyclinic, Chidambaram, Tamilnadu, India. The criterion for hypertension was systolic blood pressure greater than or equal to 140 mm Hg and diastolic blood pressure greater than or equal to 90 mm Hg, recorded on at least three different occasions after the patients had rested for 10 minutes supine. Patients with secondary hypertension, hypertension associated with diabetes mellitus, chronic alcoholism, female patients on oral contraceptives, pregnant females, and lactating mothers were excluded from the study. All the subjects gave informed consent to undergo the investigations, and the Ethical Committee of Rajah Muthiah Medical College, Annamalai University, Tamilnadu, India, approved the study.

Study design

A detailed clinical history and physical examination were performed at baseline, and the following measurements were taken: blood pressure; anthropometric measurements such as height, weight, and body mass index (BMI); lipid profile (total cholesterol [TC], high density lipoprotein cholesterol [HDL-C], low density lipoprotein cholesterol [LDL-C], and triglycerides [TG]); electrolytes (Na⁺, K⁺); lipid peroxidation (TBARS); and enzymic and non-enzymic antioxidants in blood. The patients were advised to continue their antihypertensive drugs as usual. The patients had been on medication with hydrochlorothiazide or atenolol for one year prior to enrollment in the study. The patients were supplied 4 to 5 kg of sesame oil (Idhayam gingelly oil) for a four-member family per month, which constitutes approximately 35 g of oil/day/person. The patients were asked to use sesame oil as the only edible oil for 45 days. At the end of the 45th day, the investigations were repeated. Finally, the patients were asked to switch over to whatever original oil they had been using before enrollment in the study for another 45 days. Mostly they were using either sesame oil, groundnut oil, or palm oil interchangeably. All the measurements were repeated at the end of the 90th day of the experiment. The patients were told to strictly adhere to the study protocol. Those who could not follow the protocol until the end of the experiment for any reason were excluded. To avoid large differences in dietary patterns and caloric intake, the same patients were subjected to both the substitution of sesame oil and the withdrawal of sesame oil substitution.

Anthropometric and blood pressure measurements

Body weight was measured, using a level balance, to the nearest 0.1 kg. Body height was measured without footwear to the nearest 0.5 cm. BMI was calculated as weight (in kg)/height² (in m²). Blood pressure was measured using a standard mercury sphygmomanometer.

Statistics

Student's t test was applied for comparison between two related samples; values for continuous variables are expressed as means ± SD.
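As a concrete illustration of the two computations just described (BMI and the paired comparison of baseline versus follow-up values), here is a minimal sketch in Python; the numbers are placeholders rather than study data, and the use of scipy is our choice, not the authors':

```python
# Minimal sketch of the computations described above; all values are
# illustrative placeholders, not data from the study.
from scipy import stats

def bmi(weight_kg, height_m):
    """Body mass index: weight (in kg) divided by height squared (in m^2)."""
    return weight_kg / height_m ** 2

# Paired (related-samples) Student's t test comparing baseline values with
# values after 45 days of sesame oil substitution.
baseline  = [152, 148, 160, 155, 149]   # hypothetical systolic BP, mm Hg
follow_up = [128, 131, 136, 130, 127]   # hypothetical values after 45 days

t_stat, p_value = stats.ttest_rel(baseline, follow_up)
print(f"BMI example: {bmi(70.0, 1.65):.1f} kg/m^2")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Sanity check of the stated ration: 4-5 kg/month for a four-member family
# is about 4500 g / (4 persons * 30 days) = 37.5 g/day/person (~35 g as stated).
print(4500 / (4 * 30))
```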
RESULTS

Table 1 shows blood pressure and anthropometric measurements at baseline, after sesame oil substitution, and after withdrawal of sesame oil. Replacement of sesame oil as cooking oil in hypertensive patients brought their systolic and diastolic blood pressure to normal in a statistically significant fashion. A significant reduction in body weight and body mass index also was noted. After the withdrawal of sesame oil substitution, the values rose again.

Table 2 shows the plasma lipid profile at baseline, after sesame oil substitution, and after withdrawal of sesame oil. No significant alterations were seen in TC, HDL-C, LDL-C, and the TC/HDL-C ratio. TG levels decreased significantly and then rose, following sesame oil substitution and withdrawal, respectively.

Table 3 shows the plasma levels of electrolytes at baseline, after sesame oil substitution, and after withdrawal of sesame oil. Sodium levels decreased upon sesame oil substitution. Potassium levels increased significantly upon sesame oil substitution and subsequently decreased, but within normal limits.

Table 4 shows the levels of TBARS and enzymic and non-enzymic antioxidants at baseline, after sesame oil substitution, and after withdrawal of sesame oil. A significant reduction in TBARS was noted, and the values were almost maintained even after withdrawal of sesame oil. Plasma CAT and erythrocyte membrane-bound SOD activities significantly increased, while erythrocyte membrane-bound GPx activity decreased gradually from sesame oil substitution to withdrawal. Significant increases in the levels of vitamin C, vitamin E, ß-carotene, and reduced glutathione were observed, and the levels decreased once sesame oil substitution was stopped.

DISCUSSION

In the present study, substitution of sesame oil lowered systolic and diastolic blood pressure remarkably in hypertensive patients. Studies have reported that sesamin, a lignan from sesame oil, exerts antihypertensive action by interfering with the renin-angiotensin system, as the lignan is more effective on the renin-independent DOCA (deoxycorticosterone acetate)-salt hypertension model than on the renin-dependent 2K (two-kidney), 1C (one-clip) renal hypertensive model [6,8]. In another study using the rat aortic ring, sesamin produced Ca²⁺-antagonistic vasodilatory activity [8]. This pharmacological action, at least in part, may contribute to its antihypertensive activity. Natural antioxidants and polyunsaturated fatty acids show a protective function against hypertension [1]. Supplementation of vitamin E reduced blood pressure in mild hypertensive patients and was associated with a remarkable decrease in systolic and diastolic blood pressure [23].
The fatty acid composition of dietary fat is a key determinant of membrane fatty acid composition [24]. As PUFA substitution increases the fluidity of the lipid bilayer, the distensibility of biomembranes may increase. The blood pressure-lowering effect of sesame oil may be due to its richness in antioxidant lignans (sesamin, episesamin, sesamol, and sesamolin), vitamin E, and unsaturated fatty acids. The risk of hypertension increases progressively with higher levels of body weight or BMI and parallels the degree of obesity. The association between BMI and blood pressure has been shown consistently in numerous studies [25]. Numerous studies have consistently documented that, for those who are already overweight, weight loss significantly reduces blood pressure and the incidence of subsequent hypertension. Large, randomized trials of weight reduction in adults with hypertension have shown significant reductions in blood pressure in response to weight loss [26]. Studies suggest that polyunsaturated fatty acids increase the plasma levels of leptin, which, in turn, would facilitate the reduction of weight [27]. Polyunsaturated fatty acids in sesame oil may also play a role in the reduction of body weight in our study, which in turn may reduce blood pressure. The reduction of body weight and body mass index in our study may be mainly due to sesame oil substitution, since the values increased once the sesame oil substitution was withdrawn.

Prior studies in rats have shown that sesame lignans (sesamin and/or episesamin) lower serum and liver cholesterol concentrations by inhibiting absorption and synthesis of cholesterol [28]. We did not find a cholesterol-lowering effect in hypertensive patients on medication with diuretics or ß-blockers. This may be due to the negative effect of diuretics and ß-blockers on lipids. Recently, the Scientific Advisory of the American Heart Association reported that high monounsaturated fatty acid diets tend to lower triglyceride concentrations [29]. We found that substitution of sesame oil as edible oil lowered plasma triglyceride concentrations.

Reports have suggested that antihypertensive compounds modulate the Na⁺-K⁺ pump and thereby maintain electrolyte levels in hypertensive patients. Cardiac output is influenced by blood volume, which is greatly dependent on body sodium. Thus, sodium excretion is central to blood pressure modulation. Decreasing sodium excretion increases fluid volume and leads to high cardiac output. Potassium can influence cell membrane stabilization and vascular smooth muscle relaxation [1]. In our present study, we found that plasma levels of sodium decreased while potassium levels increased upon the substitution of sesame oil. However, the mechanism of the reduction of sodium and the elevation of potassium upon sesame oil substitution is not known. Thiobarbituric acid reactive substances, a measure of lipid peroxidation, decreased significantly upon sesame oil substitution. It has been reported that sesamolin, a lignan present in sesame oil, reduced lipid peroxidation in rats [30]. Sesamin and sesamolin may potentiate the effect of vitamin E, and they themselves act as antioxidants, which, in turn, may reduce lipid peroxidation. In our study, plasma levels of TBARS did not change even after withdrawal of sesame oil substitution. Perhaps the lignans stored in the body may be responsible for this.
The role of the antioxidant defense system, which includes superoxide dismutase (EC 1.15.1.1; Cu/Zn SOD), catalase (EC 1.11.1.6; CAT), and glutathione peroxidase (EC 1.11.1.9; GSH-Px), in protection against oxidative insults is well characterized, and it has been suggested that this antioxidant defense system may be influenced by nutrition [31]. Enzymatic antioxidants, such as SOD and CAT, play an important role in the conversion of ROS to oxygen and water. SOD is a well-known scavenger enzyme protecting the cell from oxidative stress. CAT is an important antioxidant enzyme whose physiological role is to detoxify H₂O₂ into oxygen and water and thus limit the deleterious effects of reactive oxygen species. Cells maintain their vital functions against oxidative damage with the help of a system that involves GPx, SOD, CAT, glutathione reductase, some trace elements, and vitamins A and E. The increase of SOD and CAT may be due to decreased utilization, since lipid peroxidation levels were low. GPx decreased, probably due to decreased synthesis, since lipid peroxidation levels were low. Vitamin E has been recognized as one of the body's major natural antioxidants. Sesame oil contains 40 mg of vitamin E per 100 g of oil [32]. Vitamin E has several potentially cardio-protective effects: it decreases lipid peroxidation and spares glutathione [33,34]. Vitamin E has been shown to lower blood pressure in spontaneously hypertensive rats [35]. In the present study, plasma levels of vitamin E increased upon substitution, which could be due to the greater availability of vitamin E in sesame oil.

Elevation of vitamin C upon the substitution of sesame oil could be due to decreased utilization or to an increase in the levels of GSH, because vitamin C and GSH are synergistic antioxidants [36]. Epidemiological reports show that carotenoids may play a preventive role in cardiovascular disease [37]. Plasma levels of ß-carotene rose significantly upon the substitution of sesame oil, which could be due to the sparing action of vitamin E and sesame lignans.

In conclusion, substitution of sesame oil, as the sole edible oil, lowered blood pressure in hypertensive patients who were taking diuretics and ß-blockers. Sesame oil also has beneficial effects on the levels of triglycerides, electrolytes, lipid peroxidation, and antioxidants.

Table 4. TBARS, enzymic, and non-enzymic antioxidants at baseline, after sesame oil substitution, and after withdrawal of sesame oil; age 35 to 60 (n = 50). Footnotes: erythrocyte membrane; x: one unit of activity was taken as the enzyme concentration which gave 50 percent inhibition of NBT reduction in one minute; y: µg of glutathione consumed/min/mg Hb; z: µmole of H₂O₂ consumed/min/mg protein.
Modular Link Level Simulator for the Physical Layer of Beyond 5G Wireless Communication Systems

The low THz band is a promising candidate to enable data rates of up to 1 Tbit/s. To develop suitable communication systems, novel simulation approaches are needed that account for the specifics of the evolving technology. This article presents a modular link level simulator for the physical layer of beyond-5G and 6G wireless communication systems in the THz range. The simulator, which is oriented toward IEEE Std 802.15.3d-2017, is contrasted with the state of the art of physical layer simulation tools. Its concept and basic building blocks are presented, and the simulator is validated by channel simulations considering an AWGN channel model. Moreover, it is applied to a top-of-rack scenario in a wirelessly augmented data center. Different parameter sets are compared, showing that a LOS condition and sufficient transmit power are prerequisites for profiting from the large bandwidth in the low THz range. The extensive data set of simulation results serves as input for future studies with higher layer simulation tools.

The DC use case offers great potential, and fixed P2P links are well suited to it. By integrating wireless links at THz frequencies, the DC experiences a performance improvement because of the new level of flexibility and adaptability (Hamza et al., 2016). In combination with beam switching, the network controller is able to reconfigure the network automatically and to modify the data center layout in a dynamic way (Rommel et al., 2018).

To adapt to the new channel conditions and meet the requirements of the examined applications, novel transmission techniques and protocols have to be developed for the physical layer (PHY) and higher layers (Hossain & Jornet, 2019). New geometry-dependent channel models incorporating antenna characteristics and beam forming are necessary to evaluate the system design. In the context of THz communications, especially single carrier (SC) systems are discussed (IEEE Std 802.15.3d-2017, 2017), offering a lower peak-to-average power ratio and thus lower demands on the challenging development of THz devices. Since communication systems in the low THz band use cutting-edge hardware devices that are optimized to the limit of technically feasible solutions, the influence of device characteristics and the resulting radio frequency (RF) impairments on the signal and the data transmission are of particular interest (Sha & Wang, 2021).

In this paper, we present the novel link level module of the simulator for mobile networks (SiMoNe) (Rose et al., 2015) and analyze the performance of a top-of-rack (ToR) link in a DC. The simulator, which addresses the described requirements, provides the necessary simulation competences for the simulation of the PHY of high-bandwidth THz systems. It incorporates propagation and channel modeling based on ray-optical channel predictions in complex three-dimensional (3D) scenarios (Dreyer & Kürner, 2019) and enables a fully parameterized simulation of a P2P link. The description of waveforms as time-discrete signals of the modulation channel allows for a physical interpretation and relation to hardware devices and enables the use of signal and system theory. Thus, the simulator directly contributes to the state of the art of realistic, hardware-near simulations. The complete data set is provided to facilitate fundamental research on higher layers using realistic simulation data. The rest of the article is structured as follows.
Section 2 presents and summarizes the state of the art of link level simulators (LLSs), identifying the need for new developments. In Section 3, the concept of the new development is outlined, and Section 4 presents the detailed models and simulation mechanisms of the whole signal processing chain. The PHY simulator is validated in Section 5 by additive white Gaussian noise (AWGN) channel simulations, and in Section 6 the LLS is applied to the DC use case and the performance of a ToR link in a realistic DC model is analyzed. Finally, in Section 7, the key points of the paper are summarized.

Current Research

Software simulations are indispensable for the development of novel communication systems. High-performance simulation tools reduce the development costs and speed up the design process. To produce realistic and meaningful simulation results, the simulator and the models have to cover all relevant influence factors. For low THz communication systems, relevant modulation and coding schemes (MCSs), channel models, waveforms and hardware related impairments have to be supported. In this section, we present the state-of-the-art LLSs and evaluate their properties with regard to THz communications.

The bit error rate (BER) analysis tool from MATLAB's communications toolbox offers three simulation modes (MathWorks, 2021). The theoretical mode provides the BER as a function of the signal-to-noise ratio (SNR), more specifically the bit energy per noise power spectral density (PSD), E_b/N_0, for various MCSs and AWGN, Rayleigh or Rice channels. The semi-analytic mode considers different modulation schemes and waveforms but is limited to an AWGN channel realization. The Monte Carlo mode serves as an interface for a MATLAB or Simulink model and allows for a simulation of a selected SNR range. The BER analysis tool is a useful but limited tool that can serve as a reference for simple simulation scenarios.

The MATLAB communication toolbox represents a broad collection of signal processing functions including standard-compliant waveform filters, MCSs, also comprising multi-carrier systems, statistical channel models, and antenna systems (MathWorks, 2019). The full simulation tool chain for PHY design is well documented but closed source and comes along with an extra license. Hence, the user has limited insight into specific realizations of algorithms and applications. Moreover, the closed design of functions and modules may impede the adaptation of methods to special customized use cases and applications. The channel is also limited to stochastic descriptions at a high abstraction level without a specific application scenario or specific subscriber interaction.

A commonly used and standard-compliant LLS for fifth generation (5G) communication systems is the Vienna 5G Link Level Simulator (Pratschner et al., 2018). It is designed to simulate new concepts in 5G communication systems such as beam forming, multiple-input multiple-output (MIMO) techniques or new waveforms, and has its main focus on multi-carrier systems. The implemented double-fading and spatial channel models that handle time-discrete signals are detached from any deterministic environment and stay at a general level implementing stochastic functionalities. With regard to future P2P links in the low THz range providing several hundreds of Gbit/s, the system performance heavily depends on the environment of deployment, and single carrier systems experience a comeback. Here, the Vienna LLS does not provide the required functionalities.
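For reference, the closed-form results that the theoretical mode of the BER analysis tool provides can be reproduced with a few lines of Python; the expressions below are the standard textbook AWGN formulas, shown here as a sketch under that assumption rather than the toolbox's internal code.

import numpy as np
from scipy.special import erfc

def q(x):
    # Gaussian Q-function.
    return 0.5 * erfc(x / np.sqrt(2.0))

def ber_bpsk(ebn0_db):
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q(np.sqrt(2.0 * ebn0))

def ber_mqam(ebn0_db, m):
    # Approximate BER of square M-QAM with Gray mapping.
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    k = np.log2(m)
    return (4.0 / k) * (1.0 - 1.0 / np.sqrt(m)) * q(np.sqrt(3.0 * k * ebn0 / (m - 1.0)))

for x in np.arange(0.0, 12.1, 2.0):
    print(f"Eb/N0 = {x:4.1f} dB: BPSK {ber_bpsk(x):.3e}, 16QAM {ber_mqam(x, 16):.3e}")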
The Aff3ct library deals with the very efficient implementation of forward error correction (FEC) algorithms in C++ and provides a wide range of FEC codes and various decoders (Cassagne et al., 2019). It is also able to run link level simulations implementing the whole digital communication chain. Here, the simulator stays at the bit level, implementing only digital channels without any waveform generation. Hence, the ability to analyze the impact of signal- and waveform-related effects on the data transmission, such as impairments of the RF devices, inter-symbol interference (ISI), and interference, is limited.

The ns-3 extension TeraSim is a system level simulator for network simulations (Hossain et al., 2018). The data transmission is modeled at a packet level, and the successful reception of packets is determined based on the received power. The simulator is adapted to THz communications by its channel module, which considers a frequency-dependent path loss and, in particular, takes molecular absorption in the entire THz band into account. However, the realization of signals is limited to the consideration of power density spectra. Thus, inter-symbol interference or the impact of multipath propagation cannot be examined.

As the literature review shows, no currently available LLS is able to meet all of the above-mentioned requirements for the simulation of multi-gigabit P2P links at THz frequencies. SiMoNe has been extended with a link level module to close this gap. In accordance with IEEE Std 802.15.3d-2017, the SiMoNe LLS implements a fully parameterized communication chain. The channel representation includes broadband channel models, the processing of time-discrete signals in the equivalent low pass region, and RF impairments. As a result of the integration in the SiMoNe framework, the LLS benefits from an interface to propagation simulations via ray-optical channel predictions in realistic environments, and from system level functionalities that allow for an easily accessible and realistic interference simulation (Eckhardt, Herold, Friebel, et al., 2021). Moreover, the combination of propagation channels gained from deterministic 3D models, bit transmission over the PHY, and context-based system level simulations with realistic user movements enables simulations of real data transmissions and the evaluation of many complex application scenarios in addition to the usual Monte Carlo approach. Table 1 summarizes the properties of the compared LLSs, and the next sections give an in-depth insight into the functionalities and implementation of the SiMoNe LLS.

Technical Concept

SiMoNe's development started in 2014 as a tool for system level simulations of mobile wireless networks. Since then, it has developed into a simulation suite for realistic propagation and system level modeling of wireless communication systems. For the LLS, the focus on the signal processing of one link rather than on the network's behavior poses distinct requirements on the development of the components. Due to the large bandwidths and high data rates that are envisioned for THz communications, strict runtime requirements are imposed on LLSs. Statistically significant predictions of the performance of ultra-reliable low latency communication (uRLLC) systems require a significantly larger number of bits to be simulated compared to less reliable systems, such as an LTE system. Moreover, the level of detail of the simulated link is much higher compared to system level simulations.
Every single transmitted bit is followed on its path between transmitter and receiver and evaluated afterward. In order to ensure that the developed LLS is able to cope with these and further challenges, five main design principles have been identified and followed during the development.

Modular Composition

As technical developments continue rapidly in the field of THz communication, the simulator must be adaptable and flexible to react upon new concepts, hardware and simulation technology. A modular composition of the simulator allows for that. Functionalities should be bundled in interchangeable and connectable modules. The simulator is written in C#, an object-oriented language that offers concepts such as abstract classes and inheritance. By using parent classes to provide common functionalities to different children (e.g., different types of detectors), code duplicates can be reduced while allowing for interchangeability.

Iterative Computations

As mentioned above, a low BER requires large amounts of simulated bits. If the communication scenario produces a high BER, though, a lower number of bits is sufficient for the computation of statistics. Running the simulations in an iterative way makes it possible to check for abortion criteria and drastically reduces the runtime of many simulations. As the transmitted bits themselves are often not of interest, they can be discarded after the BER calculation of an iteration, which greatly reduces the memory footprint of the simulations.

Integration of Ray Tracing Results

The simulator for mobile networks contains a framework for the analysis of 3D scenarios using ray-optical methods (Dreyer & Kürner, 2019). It can be used to predict different communication paths with their delay and amplitude while taking into account the positions and orientations of transmitter (TX) and receiver (RX). By implementing an interface to this framework, ray-optical methods can be used in the LLS to derive channel information of realistic 3D scenarios. It thus allows simulating a variety of scenarios, comparing measurement results with the simulations, or drawing conclusions from scenarios that might not be accessible by measurements. THz communication systems benefit even more from the ray tracing approach since higher frequencies get closer to quasi-optical propagation. Therefore, ray tracing is often applied to model and simulate THz channels. A conceptual overview of the simulation process that combines ray tracing and link level simulator is shown in Figure 1.

Visualization of Results

The visualization of different stages of the transmission system is important for several reasons: it can provide visual aids to spot and interpret phenomena such as I/Q imbalances in a constellation diagram or timing issues in an eye diagram. Both diagrams are widely used and are standard representations for transmission systems. The visualization capabilities of the link level simulator include constellation diagrams, eye diagrams, error vector diagrams, spectrum plots and frequency domain plots. Apart from that, the visualization features help and facilitate scientific communication, as exemplarily presented in Figure 2. A metric such as the BER is more tangible when seeing the effect on a constellation diagram than when just comparing two numbers, for example.

Interfacing With Other Scientific Tools

SiMoNe is used in several research projects and in collaborations with other research partners.
Having appropriate interfaces to enable collaboration and the exchange of simulation data with third parties is hence important to consider. The framework offers import and export functionalities in .mat and .csv format. While MATLAB is a well-established tool, .csv files are commonly used to exchange data and allow for high interoperability and universal usage within many different programs and contexts. By keeping in mind the need for interoperability, it can be assured that the research conducted with SiMoNe can be shared with the research community and have a valuable impact.

PHY-Layer Model and Simulator Implementation

After the assessment of requirements for the LLS, a suitable software architecture has been derived and implemented. The architecture and its functional components are briefly introduced in this section. As described in Section 3, the modular composition is one of the most important features of the link level simulator. The simulator is implemented in a design pattern called pipes and filters (Buschmann, 1996) that is commonly used for data processing. All functionalities are kept within defined blocks that only handle their respective tasks and provide states as inputs to other blocks (Rose et al., 2015). The functional blocks that are used within the LLS are explained in the following sections. In addition, an overview of all available blocks and their connections is shown in Figure 3. Note that the variable k flags a discrete quantity with bit rate r_b or symbol rate r_S, while the variables n and l refer to a time-discrete representation with sampling frequency f_s in the time and frequency domain, respectively.

The so-called link level coordinator serves as the control instance of the simulation. It holds all configuration parameters and provides them to the actual transmission chain blocks. The coordinator controls the block iteration until a statistically significant number of transmitted bits is reached in the bit sink block. The bit source block creates a configurable number of pseudo-random bits each time the block is called by the coordinator. The created bits are provided as a state for the following encoder block.

Channel Coding

Channel coding for THz communications is a challenging task due to the high data rates and the hence implied need to efficiently encode and decode transmitted symbols. The IEEE Std 802.15.3d foresees advanced codecs that allow efficient hardware implementations, namely (240,224)-Reed-Solomon (RS), 11/15-Low Density Parity Check (LDPC) and 14/15-LDPC codes (IEEE Std 802.15.3d-2017, 2017). The Aff3ct coding library is a highly optimized C++ library and provides a fast implementation of a wide variety of coding schemes (Cassagne et al., 2019). Hence, its encoding and decoding functionalities have been integrated in the LLS in order to profit from its efficient channel coding. As the LLS is written in the programming language C# and the Aff3ct coding library partly uses native C++, the two software projects are not directly compatible. A wrapper library using a Pointer to Implementation (PImpl) programming technique is added to handle and translate calls between both pieces of software. By employing this wrapper, Hamming, RS and LDPC codes are provided. Hamming and RS codes can be configured using internal library parameters, while custom generator and check matrices for the LDPC codes were created according to the standard and imported into the integrated Aff3ct library. The encoded bits b[k] are provided as a state for the modulator block.
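As a simple illustration of the block codes exposed through the wrapper, the following Python sketch encodes data with a (7,4)-Hamming code; the generator matrix is one standard systematic choice and not necessarily the parametrization used inside the Aff3ct library.

import numpy as np

# Systematic generator matrix of a (7,4)-Hamming code over GF(2):
# four data bits followed by three parity bits.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def hamming74_encode(bits):
    # Encode a bit array whose length is a multiple of 4.
    blocks = np.asarray(bits, dtype=np.uint8).reshape(-1, 4)
    return (blocks @ G) % 2

print(hamming74_encode([1, 0, 1, 1]))   # one 7-bit codeword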
Modulation

One of the main motivations for using THz communication is the large available bandwidth. Hence, the IEEE Std 802.15.3d projects a single carrier system with low order modulation schemes combined with a high bandwidth of up to 69.12 GHz (IEEE Std 802.15.3d-2017, 2017). The modulator of the LLS implements the single carrier system and maps the provided bits b[k] to complex symbols d[k] according to the selected modulation scheme. Here, on-off-keying (OOK), binary phase shift keying (BPSK), quadrature phase shift keying (QPSK), amplitude phase shift keying (APSK) and quadrature amplitude modulation (QAM) are implemented. First, the M-ary modulation scheme is normalized with the factor

k_M = sqrt( M / Σ_m |d_m|² ),

where |d_m| denotes the amplitude of each possible complex symbol, in order to ensure the same average transmit power P_TX for all modulation schemes. After the mapping, the complex symbols d[k] are sampled with the sampling frequency f_s, resulting in the time-discrete expression

s[n] = Σ_k d[k] δ(nΔt − kT_S),

where δ(⋅) denotes the Dirac delta function, T_S the symbol duration and Δt the sampling interval. The sampling and the time-discrete signal representation are necessary in order to consider the actual waveform, which plays a crucial role in THz communications. The transmit pulse, which describes how the waveform unfolds in time, allows for incorporating important characteristics of the channel and the RF hardware and thus has to be considered in link level simulations. Since the simulation is performed in the equivalent low pass representation according to (Glover & Grant, 1998), the carrier is omitted. The signal s[n] is passed on to the transmit filter. In order to model the radio channel as a band-limited time-discrete system, the transmit pulse is allocated to the channel block, implementing the so-called modulation channel. Accordingly, the transmit pulse is given by

g_TX(t) = A h_TX(t),

where h_TX(t) is the impulse response (IR) of the transmit filter, realized as root-raised cosine (RRC), rectangular or sinc filter, and A is the amplitude of the transmit pulse that regulates the transmit power. The pulse has a duration T_g that is a multiple of the symbol duration T_S and given in Table 2. The pulse amplitude A can be derived from the TX power in the equivalent low pass region, which is obtained by dividing the energy of a transmit pulse by the symbol duration T_S; the resulting amplitude is a function of the bandpass TX power P_TX and the sampling frequency f_s. Note that the symbol duration is a multiple of the sampling interval Δt. The latter is adapted according to the selected transmit pulse and the resulting bandpass bandwidth of the pulses. In order to reduce aliasing effects, the sampling frequency f_s = 1/Δt is adapted to the transmit pulse and chosen as a function of the Nyquist bandwidth B_N, the minimum bandwidth that allows for a transmission at a certain symbol rate without ISI, as presented in Table 2. In this way, the signal distortion ratio is limited to SDR = 21.9 dB (Glover & Grant, 1998). In addition, an oversampling factor allows for an increase of the sampling frequency for visualization purposes.
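A minimal Python sketch of this modulator stage, mapping bits to Gray-coded QPSK symbols d[k], normalizing the constellation to unit average power, and placing the symbols on the time-discrete grid s[n], may look as follows; the function names are illustrative, and the pulse filtering itself is left to the channel block, as described above.

import numpy as np

def qpsk_map(bits):
    # Gray-mapped QPSK; two bits per complex symbol, normalized so that
    # the average symbol power is 1 (same role as the factor k_M above).
    b = np.asarray(bits).reshape(-1, 2)
    d = (1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])
    return d / np.sqrt(np.mean(np.abs(d) ** 2))

def upsample(d, sps):
    # Dirac-comb style placement: one symbol every sps samples.
    s = np.zeros(len(d) * sps, dtype=complex)
    s[::sps] = d
    return s

bits = np.random.default_rng(1).integers(0, 2, 64)
d = qpsk_map(bits)
s = upsample(d, sps=4)
print(len(d), "symbols ->", len(s), "samples, mean power", np.mean(np.abs(d) ** 2))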
Channel Implementation

The challenge of the simulator is to model the time-continuous channel within a time-discrete computer simulation. Based on the results of the ray tracing simulation for the wave propagation and the antenna masking, the multipath components (MPCs) are provided as pairs of amplitude A_i and delay τ_i, where the amplitude is currently assumed to be frequency-independent. The IR of the propagation channel can thus be written as

h_C(t) = Σ_i A_i δ(t − τ_i).

In order to transfer this expression to a time-discrete regime, the modulation channel is modeled making use of the transmit pulse presented in the previous section. The band-limited transmit pulse is convolved with the channel impulse response and sampled with the sampling frequency, leading to

h_TX,C[n] = (g_TX ∗ h_C)(nΔt),

which serves as the filter coefficients of a finite impulse response (FIR) filter that is implemented via a fast convolution approach. Thus, the incident signal at the receiver is obtained by the convolution

y[n] = (s ∗ h_TX,C)[n].

In order to serve as an interference signal in other simulations, the incident signal at the RX, y[n], can be stored, enabling realistic interference simulations based on actual time-discrete signals (Eckhardt, Herold, Friebel, et al., 2021). A major benefit of the time-discrete and physically meaningful signal representation in the channel block is that quantitative information on various characteristics (e.g., noise or device impairments) from different sources can be taken into account on the basis of SI units. For instance, the complex AWGN w[n] can be simulated in two different ways. One option, which is also commonly supported by other simulators, is to specify the ratio of the energy per information bit and the noise spectral density, E_b/N_0. The associated noise power P_N is then calculated as a function of the incident signal power at the RX and the chosen E_b/N_0. Note that E_b/N_0 takes into account the code rate r_c, such that the relation between the SNR and E_b/N_0 is given by

SNR = (E_b/N_0) ⋅ r_c r_b / B_BP,

with bit rate r_b = r_S ⋅ ld(M), where r_S denotes the symbol rate, ld(⋅) the binary logarithm, M the modulation order and B_BP the bandpass bandwidth. Another option is the generation of actual thermal noise of the RX with a spectral noise density of

N_0 = k_B T F,

where k_B denotes the Boltzmann constant, T the absolute temperature and F the noise factor of the amplifier chain of the RX. Multiplying the spectral noise density with twice the Nyquist bandwidth B_N leads to the noise power P_N that determines the variance of the random noise generator that creates the complex AWGN w[n]. Here, a factor T_S f_s in the noise scaling guarantees a constant spectral noise density in the simulation independently of the sampling frequency or the bandpass bandwidth of the different transmit pulses. In this way, simulations in complex environments assume a realistic thermal noise and estimate the SNR based on the TX power and the scenario.

For modern communication systems, especially in the low THz band, the characteristics of cutting-edge hardware components have a crucial impact on the performance of the data transmission. Therefore, it is indispensable to model the impairments of RF devices on the signals and waveforms in order to get meaningful simulation results. Based on measurement results from the hardware characterization, the measured PSD of the phase noise (PN), usually given in dBc/Hz, has to be modified for compatibility with the time-discrete signal representation in the channel. First, the given PSD is converted to linear scale relative to the power of the local oscillator P_LO. Then, the single-sideband spectrum is interpolated, discretized and enlarged to a double-sideband spectrum, which is used to generate a realization ϕ_PN[n] of the phase noise signal from complex AWGN.
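The construction of the modulation-channel FIR filter h_TX,C[n] from the MPC list and the addition of complex AWGN at a prescribed E_b/N_0 can be sketched in Python as follows; the pulse shape and all numerical values are illustrative placeholders, not the ray-traced data of the later sections.

import numpy as np

def fir_from_mpcs(amps, delays, pulse, fs):
    # Sample sum_i A_i * g_tx(t - tau_i) on the simulation grid.
    n_taps = int(np.ceil(max(delays) * fs)) + len(pulse)
    h = np.zeros(n_taps, dtype=complex)
    for a, tau in zip(amps, delays):
        k = int(round(tau * fs))
        h[k:k + len(pulse)] += a * pulse
    return h

fs = 3.52e9                                   # sampling frequency [Hz]
t = np.arange(-8, 9) / fs
pulse = np.sinc(t * fs / 4)                   # toy band-limited pulse
h = fir_from_mpcs([1.0, 0.2], [0.0, 5e-9], pulse, fs)

rng = np.random.default_rng(2)
s = rng.choice([-1.0, 1.0], 4096) + 0j        # toy BPSK sample stream
y = np.convolve(s, h)[:len(s)]                # fast convolution in practice

ebn0 = 10 ** (8 / 10)                         # Eb/N0 = 8 dB
p_noise = np.mean(np.abs(y) ** 2) / ebn0      # one sample per bit in this toy setup
w = np.sqrt(p_noise / 2) * (rng.normal(size=len(y)) + 1j * rng.normal(size=len(y)))
y += w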
For broadband systems (e.g., up to 69.12 GHz of bandwidth in IEEE Std 802.15.3d), the device characteristics cannot be assumed constant over the whole bandwidth. The transfer functions (TFs) of the most important devices, such as the power amplifier (PA) and the low noise amplifier (LNA), are taken into account in the transmit and receive filter, respectively. Therefore, the given sample points of the TF of the device that lie within the simulated frequency band have to be interpolated in order to match the number of filter coefficients of the FIR filter realizing h_TX,C[n]. The interpolation is implemented by zero padding in the time domain. In further developments, nonlinear effects such as compression will also be considered. Here, the challenge consists in modeling a system whose characteristics depend on the actual instantaneous power of the input signal. The signal is finally processed by a matched filter h_RX,LNA[n] incorporating the characteristics of the LNA. The detailed signal processing chain of the modulator block, channel block, and detector block is visualized in Figure 4, summarizing the explanations of the respective sections.

Demodulation and Detection

In order to obtain the received symbols, the received signal is resampled with a sampling period corresponding to the symbol duration using a truncated sampling series for interpolation, where 2N denotes the number of samples considered for the interpolation and (⋅) mod (⋅) denotes the modulo operation (Jeruchim et al., 2000). Here, N = 100 is chosen. The delay τ_ch introduced by the channel is assumed to be known and is gathered from the channel block. In order to account for the channel losses, the received symbols are scaled with a correction factor, and additional delays originating from the channel and bit processing are compensated. Finally, a maximum likelihood detector using hard decision (HD) detection provides input for the Hamming and RS decoders. It selects the estimated transmit symbol d̂[k] by minimizing the Euclidean norm of the error vector. The LDPC decoder requires, and the RS decoder supports, a soft decision (SD) detection based on the log-likelihood ratio defined as

L(b_i,k) = ln [ P(b_i,k = 0 | y[k]) / P(b_i,k = 1 | y[k]) ],

where b_i,k is the ith bit of the kth receive symbol y[k] (Hagenauer et al., 1996).

Bit Evaluation

At the sink, several key performance indicators (KPIs) are evaluated for the transmission chain. In order to derive statistically dependable values, a certain number of bits has to be simulated. Three different methods are currently implemented to estimate the required number of bits. A common rule of thumb is that 10-1,000 times the reciprocal value of the expected BER has to be considered (Jeruchim et al., 2000). For an expected BER of 10^−6, this would mean that 10^7-10^9 bits need to be simulated. In (Mitić et al., 2012), the authors propose a statistical method in which K_sim denotes the total number of bits to be simulated, BER the expected BER, SLC ∈ [0, 1] the desired level of confidence and K_err the number of bit errors that have already occurred in the simulation; for K_err = 0 their formula reduces to K_sim = −ln(1 − SLC)/BER and returns 4.61 ⋅ 10^6 bits to be simulated assuming a confidence of 0.99 and zero errors during the transmission. A third option is to simulate until a certain number of errors is reached. All three of these methods are complemented by another abortion criterion, which defines a maximum number of bits to be simulated in order to avoid infinite loops. Once the suggested number of bits is reached, the link level coordinator is notified by the bit sink block and stops the iteration. Finally, KPIs such as BER, data rate and error vector magnitude (EVM) are computed and saved.
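The interplay of the bit sink's abortion criteria with the coordinator's iteration loop can be sketched as follows; this is a minimal, self-contained Python illustration of the pattern, not SiMoNe's C# implementation, and the helper names are illustrative. The bits_needed function implements only the K_err = 0 special case quoted above.

import numpy as np

rng = np.random.default_rng(0)

def bits_needed(expected_ber, slc=0.99):
    # Zero-error special case of the criterion from Mitic et al. (2012):
    # simulating K_sim = -ln(1 - SLC)/BER bits with no observed errors
    # implies, with confidence SLC, that the true BER is below expected_ber.
    return -np.log(1.0 - slc) / expected_ber

def run_iteration(n_bits, true_ber=1e-5):
    # Stand-in for one pass through the transmission chain; returns the
    # number of bit errors of this iteration (toy model, not the real PHY).
    return rng.binomial(n_bits, true_ber)

bits_per_iter = 100_000
max_bits = int(bits_needed(1e-6))      # ~4.61e6 for SLC = 0.99
target_errors = 100                    # alternative abortion criterion

total_bits = total_errors = 0
while total_bits < max_bits and total_errors < target_errors:
    total_errors += run_iteration(bits_per_iter)
    total_bits += bits_per_iter        # raw bits can be discarded here

print(f"BER estimate: {total_errors / max(total_bits, 1):.2e} after {total_bits} bits")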
Simulator Validation via AWGN Channel

The validation of simulation results against measurements of a real-world system is a common method to show that the simulations produce reliable outcomes. As THz communication systems with the bandwidths intended for the LLS are not commercially available yet, this practice is not an option for the LLS. In its place, a multi-layer test concept has been used in order to ensure the correct and expected behavior of the simulation tool. At the first and second stages, unit tests and code reviews were used to validate and verify the behavior of single functions (e.g., a function for the computation of the signal's power) as well as of complete blocks (e.g., the modulator block as shown in Figure 3) encapsulating independent tasks. At the third and final tier, simulation results of the LLS were compared to known and theoretically derived reference results that were computed by other well-established and commonly used simulation tools. A later comparison with results of experimental hardware is foreseen once such data is available to the scientific community.

For this task, a comparison of an AWGN channel for different MCSs simulated with SiMoNe's LLS and MATLAB's BER tool was drawn. The AWGN channel has been selected as it provides well-studied results in a time-invariant setting as a reference for the data transmission. The channel delay τ and amplitude A are set to 0 ns and 1, respectively. The noise power is set according to the predefined E_b/N_0. Note that the SNR depends on the modulation scheme and the transmit pulse for fixed E_b/N_0 and is thus not appropriate to compare BERs of different configurations. Four modulation schemes (BPSK, 8PSK, 16QAM and 64QAM) have been simulated and compared to the BER tool's results for uncoded transmission over the range E_b/N_0 = 0-18 dB. A good agreement between the respective values can be observed. Minor deviations between the LLS's and the BER tool's curves most likely arise from the statistical variations of the AWGN and are not of concern. The BER graph for uncoded transmission is depicted in Figure 5a. In order to verify the coded transmission, simulations of different pairs of MCSs have been conducted, namely QPSK modulation with a (255,239)-RS code, 16QAM with a (7,4)-Hamming code, 64QAM with a (7,4)-Hamming code, and 64QAM with a (255,239)-RS code. Similar to the uncoded cases, the coded simulation results show a strong agreement with the theoretical data from the BER tool. For reference, QPSK modulated transmissions with the 11/15-LDPC and 14/15-LDPC codes as defined in IEEE Std 802.15.3d were included; however, no equivalent simulation in MATLAB's BER tool exists. The E_b/N_0 sweeps for coded transmissions are shown in Figure 5b. After testing the LLS's components and comparing the simulated data with theoretical BER curves, it can be concluded that the simulator is valid and applicable to other scenarios, because the digital signal processing within the simulation yields the expected results for traceable cases. Thus, the simulator is applied to analyze the performance of ToR links in a realistic DC model in the next section.
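In the spirit of this third validation tier, the following toy Monte Carlo run compares an uncoded BPSK transmission over AWGN against the closed-form BER; it reproduces the methodology, not SiMoNe's code path.

import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(3)
for ebn0_db in (0, 4, 8):
    ebn0 = 10 ** (ebn0_db / 10)
    n = 2_000_000
    bits = rng.integers(0, 2, n)
    x = 1.0 - 2.0 * bits                               # BPSK symbols, Eb = Es = 1
    noise = rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=n)
    errors = np.count_nonzero((x + noise < 0) != bits.astype(bool))
    theory = 0.5 * erfc(np.sqrt(ebn0))                 # Q(sqrt(2 Eb/N0))
    print(f"{ebn0_db} dB: simulated {errors / n:.3e} vs theory {theory:.3e}")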
Top-of-Rack Link Analysis in a Data Center Scenario

Highly efficient data center networks are mandatory to cope with the challenges of the current century and need to provide ultra-low latency, reliability, and flexibility. Additional wireless links will enable a fast reconfigurability that increases the performance and efficiency of a DC. The ToR area is therefore a promising region for inter-rack connections with a high probability of a line-of-sight (LOS) connection. Using a channel model based on ray tracing (Dreyer & Kürner, 2019) in a realistic DC model, the performance of a single ToR link is evaluated through link level simulations.

The DC under investigation is organized in rows. Plastic curtains above each row of racks are a particularity of this DC; they divide the DC into hot and cold regions in order to assure an energy-efficient air flow for cooling. The TX and RX are located on the same row in the middle of the DC. TX and RX are initially placed on adjacent racks with a distance of d = 0.69 m, which is subsequently increased by one rack at a time up to d = 16.50 m. Figure 6 shows a schematic overview of the simulation setup. At first, the ray tracing simulation is carried out in the DC model for each distance, up to second order reflections and transmissions, respectively, presenting a reasonable trade-off between computational effort and level of detail. Figure 7 illustrates the ray tracing in the DC, where reflected and transmitted paths are shown in green and yellow, respectively. For reasons of visibility, the direct path is not plotted here. The resulting MPCs from the ray tracing, given by amplitude and delay, which are part of the published research data, are fed to the LLS to examine the link performance. Figure 8 illustrates an exemplary discrete IR h_TX,C[n] of the channel, convolved with an RRC transmit pulse and sampled with a sampling frequency of 3.52 GHz, representing 575 filter coefficients.

For a meaningful link evaluation, realistic parameters have to be set. According to the THz SC PHY mode of IEEE Std 802.15.3d, the chip rate is set to 1.76 Gchip/s, 3.52 Gchip/s, and 10.56 Gchip/s with a channel bandwidth of 2.16 GHz, 4.32 GHz, and 12.96 GHz, respectively (IEEE Std 802.15.3d-2017, 2017). The MCSs also comply with the standard and are listed in Table 3. TX powers of up to 0 dBm are available at 300 GHz for unpackaged devices (Al-Khalidi et al., 2020). To compensate for the losses due to packaging and system integration, the TX power is assumed to be −8 dBm. The TF of the PA is provided in (John et al., 2020); in the considered frequency band, the forward transmission S_21 varies between S_21,min = 18.21 dB and S_21,max = 23.24 dB. The values for the PSD of the implemented PN model are taken from (Dan et al., 2020); they show a PSD of the PN of −70 dBc/Hz and −87 dBc/Hz at offset frequencies of 10 Hz and 10 kHz, respectively. For the mixer and the LNA, including its noise figure NF of 10 dB, measured device characteristics are taken into account. The deployed antenna is a simulated antenna pattern of a standard gain horn antenna with a gain of 26 dBi and a half power beam width (HPBW) of 8° that equals the antenna used for the measurements in the DC. The maximum number of bits to be transmitted is set to 0.3 Gbit, resulting in a minimum resolvable BER of 1 ⋅ 10^−8, which is assumed to be sufficient for a first order analysis. Table 3 summarizes all combinations of simulation parameters that are included in the complete data set. The complete simulation results are provided as part of the published research data. Due to the multitude of different simulation sets, the following evaluation has been conducted on a subset of simulation results, showing the key findings of the P2P link analysis for ToR scenarios. The BER, the most important quantity to evaluate the physical layer, is closely linked to the E_b/N_0 at the RX.
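For orientation, a back-of-the-envelope E_b/N_0 estimate for a LOS ToR link can be computed under a pure free-space assumption; note that the evaluation in this section uses ray-traced MPCs instead, so the sketch below, with an arbitrarily chosen example distance, only indicates the order of magnitude.

import numpy as np

c = 3e8
f_c = 300e9                      # carrier frequency [Hz]
d = 5.0                          # example TX-RX distance [m] (assumption)
p_tx_dbm = -8.0                  # assumed TX power, as in the text
g_ant_dbi = 26.0                 # horn antenna gain, each side
nf_db = 10.0                     # receiver noise figure
r_b = 1.76e9                     # bit rate for BPSK at 1.76 Gchip/s

fspl_db = 20 * np.log10(4 * np.pi * d * f_c / c)    # free-space path loss
p_rx_dbm = p_tx_dbm + 2 * g_ant_dbi - fspl_db
n0_dbm_hz = -174.0 + nf_db                          # thermal noise density + NF
ebn0_db = p_rx_dbm - (n0_dbm_hz + 10 * np.log10(r_b))
print(f"FSPL {fspl_db:.1f} dB, Eb/N0 {ebn0_db:.1f} dB")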
Figure 9a reflects the E_b/N_0 as a function of the channel impulse response (CIR) mapped to the resulting distance for an uncoded BPSK transmission with varying channel bandwidth. Here, two distances around 6 m have a significantly lower E_b/N_0, which is due to shadowing effects and non-line-of-sight (NLOS) conditions caused by lower rack heights at the RX side. NLOS conditions also occur for distances greater than 14 m, again caused by shadowing effects and lower rack heights. The E_b/N_0, and thus the BER, depends on the channel bandwidth, since the energy per bit decreases with higher channel bandwidth at constant TX power. Although the symbol rate increases, the E_b/N_0 decreases and the BER increases, too. Figure 9b presents the BER for the channel bandwidths under investigation, 2.16 GHz, 4.32 GHz, and 12.96 GHz. Data points with a BER lower than 10^−8 did not produce any error during the simulation and are therefore not visualized. Note that for a constant distance and channel bandwidth, different modulation schemes face different values of E_b/N_0, although the SNR stays constant.

The BER of an uncoded transmission with RRC filter, a channel bandwidth of 2.16 GHz and different modulation schemes is shown in Figure 10a. Although the reflected MPCs seem relatively weak, as exemplarily presented in Figure 8, the BER in the DC scenario increases compared to the simulations with the AWGN channel in Section 5. A logical explanation might be the influence of ISI despite the highly directional horn antennas. Obviously, lower order modulation schemes are more robust. However, OOK has a similar BER to QPSK, although OOK performs worse than QPSK on an AWGN channel for constant E_b/N_0. In conclusion, the direct evaluation in the realistic environment is essential for assessing the performance of different system parameters.

Channel coding is indispensable to reach interesting BERs at relevant distances for low THz links in a DC. Different coding schemes with QPSK, RRC and B_ch = 12.96 GHz are shown in Figure 10b. As expected, the 11/15-LDPC code performs best and allows for an error-free simulation up to 10 m in LOS cases. The strong dependency on the device characteristics is specific to low THz systems that work with the latest generation of RF devices. Therefore, waveforms and RF impairments have to be considered for reliable predictions of the link performance. The impact of the waveform, presented in Figure 11a for a Hamming coded data transmission, shows a higher BER for sinc pulses, which are probably more sensitive to ISI because of the longer pulse duration. The RRC shows a better performance, especially for QAMs. Moreover, a lower modulation order in combination with a higher bandwidth is more robust than a high order modulation such as 16QAM with a smaller bandwidth providing the same data rate. This observation supports the approach of THz communications to profit from the large available bandwidth at THz frequencies along with lower order modulation schemes. The RF impairments have a significant influence on the data transmission and reduce the maximum transmission distance, as visualized in Figure 11b. Here, the amplitude is strongly affected by the TFs of the PA and LNA, making a data transmission with QAM impossible. The pulse types also interact with the RF impairments, leading to different BERs.
That underlines the absolute necessity to consider RF characteristics in order to reliably predict the performance of the PHY for low THz communication systems.

Conclusion

In this paper, we have presented a modular link level simulator for future communication systems that use a high bandwidth and single carrier modulation as defined in IEEE Std 802.15.3d. Modeling the exact transmit pulse and the channel, and incorporating the antenna characteristics and the actual thermal noise of the RX, enables simulations that closely model hardware features. The time-discrete model offers the possibility to include hardware characteristics and RF impairments, such as phase noise or compression and expansion effects, which have an important impact when simulating cutting-edge hardware developments in the low THz band. The simulator has been compared to the state of the art to show the necessity of novel simulation approaches and the benefit of a holistic simulation suite integrating propagation, link- and system-level simulations such as SiMoNe. To prove its operability, the simulator has been validated through AWGN simulations and a comparison with the MATLAB BER tool. Finally, the application to a ToR scenario in a wirelessly augmented DC shows the general system performance of a single link with state-of-the-art hardware parameters. The analysis reveals the influence of RF characteristics on the waveform and on signal transmission and underlines the importance of a LOS condition to provide a fast and reliable data transmission in the low THz band. The results of the simulation campaign with over 20,000 data points are provided to the community as an extensive data set for future research.

Figure 11. BER in the DC evaluating waveforms and RF impairments.
More than the sum of its parts: combining parameterized tests of extreme gravity

We connect two formalisms that describe deformations away from general relativity, one valid in the strong-field regime of neutron stars and another valid in the radiative regime of gravitational waves: the post-Tolman-Oppenheimer-Volkoff and the parametrized-post-Einsteinian formalisms, respectively. We find that post-Tolman-Oppenheimer-Volkoff deformations of the exterior metric of an isolated neutron star induce deformations in the orbital binding energy of a neutron star binary. Such a modification to the binding energy then percolates into the gravitational waves emitted by such a binary, with the leading-order post-Tolman-Oppenheimer-Volkoff modifications introducing a second post-Newtonian order correction to the gravitational wave phase. The lack of support in gravitational wave data for general relativity deformations at this post-Newtonian order can then be used to place constraints on the post-Tolman-Oppenheimer-Volkoff parameters. As an application, we use the binary neutron star merger event GW170817 to place the constraint −2.4 ≤ χ ≤ 44 (at 90% credibility) on a combination of post-Tolman-Oppenheimer-Volkoff parameters. We also explore the implications of this result for the possible deformations of the mass-radius relation of neutron stars allowed within this formalism. This work opens the path towards theory-independent tests of gravity, combining astronomical observations of neutron stars and gravitational wave observations.

I. INTRODUCTION

Neutron stars are one of the prime objects in nature for confronting our understanding of fundamental physical interactions against observations [1][2][3]. Their small size (radius ≈ 12 km) and large mass (≈ 1.4 M_⊙) result in densities at their core that can exceed the nuclear saturation density, at which hadronic matter can transmute into exotic forms, by roughly an order of magnitude [4].
Neutron stars are also extreme gravity objects, second only to black holes in the strength of their gravitational potential and spacetime curvature, with fields that exceed those that we experience in the neighborhood of our Solar System by 9 orders of magnitude. The strong-field regime of neutron stars, critical in determining their structure and stability [5][6][7], demands the use of relativistic gravity to describe these stars, with Einstein's general relativity (GR) as our canonical theory for doing so. Moreover, neutron stars, unlike black holes, allow us to probe how matter couples to the very fabric of spacetime in the strong-field regime [8].

The piercing power of neutron stars as tools to test our understanding of nature is amplified when they are found in binaries. From the discovery of the very first binary pulsar [9] and the confirmation that its orbital period decays in agreement with GR predictions, through the emission of gravitational waves [10], to the spectacular detection of the first binary neutron star merger event GW170817 [11] by the LIGO/Virgo collaboration (LVC), neutron star binaries have been at the forefront of experimental gravity in astronomical settings, with implications for cosmology included [12][13][14][15][16].

Experimental tests of relativistic gravity have a long history [17,18] and can basically be carried out in two ways. In the first approach, one assumes a particular theory, whose predictions are worked out and then tested against observations. In the second approach, one introduces deformations to the predictions or solutions of GR, in a particular regime of the theory, and one then works out the observational consequences of these deformations to confront them against observations. Both approaches have been successful in aiding our understanding of the nature of gravity. An example of the first approach is the ruling out of Nordström's theory of gravity (a predecessor to GR), which for example fails to predict the deflection of light by the Sun [19][20][21]. An example of the second approach is the parametrized post-Newtonian framework (ppN) [22][23][24], which allowed us to test GR against a myriad of new Solar System tests starting in the 1960s, although early ideas date back to Eddington [25].

Can we combine parametrized tests of gravity that involve observations of the strong-field gravity created by isolated neutron stars with those that involve the radiative and dynamical fields generated in the coalescence of neutron star binaries? The purpose of this paper is to build a bridge between two parametrizations for tests of GR: the parametrized post-Tolman-Oppenheimer-Volkoff (post-TOV) formalism [26,27] (which parametrizes deviations to the stellar structure of isolated neutron stars) and the parametrized-post-Einsteinian (ppE) formalism [28,29] (which parametrizes deviations to GR in the inspiral, merger and ringdown of compact binary coalescences). This bridge provides a theory-independent framework to combine constraints on deviations to GR from the observation of the bulk properties of neutron stars and from the generation and propagation of gravitational waves produced in the coalescence of binary neutron stars.

FIG. 1. The lower support at zero is not evidence of a deviation from GR, as explained in the text; rather, it reflects a similarly skewed posterior distribution for δφ4, which peaks away from zero due to degeneracies between the various binary parameters and nonstationarity of the detector noise.
The long tail of the distribution is produced by a similar tail in the marginalized posterior for δφ4, the parameter that encodes deformations in the gravitational wave Fourier phase at 2PN order (see Fig. 1 in Ref. [30]).

The connection between both formalisms is only possible by realizing that the modified exterior spacetime of neutron stars in the post-TOV formalism affects the binding energy of a neutron star binary [27], and thus the gravitational waves that such a binary emits [28]. This modification to the binding energy, or to the gravitational waves emitted, can be mapped onto the ppE framework, which we have extended here to encompass a wider set of modifications to the conservative sector of the binary's Hamiltonian. This allows a particular combination of post-TOV parameters, χ [defined in Eq. (6)], to be mapped to the ppE modification to the gravitational wave Fourier phase δψ_ppE [cf. Eqs. (24) and (29)]. We find that χ modifies the gravitational wave evolution at second post-Newtonian order (2PN).¹ The lack of support in gravitational wave data for a GR deformation then allows for constraints on deformations of the exterior metric of isolated neutron stars. In particular, the constraints on GR modifications obtained by the LVC [30] for the binary neutron star gravitational wave event GW170817 [32] can be used to place the first observational constraint on χ, namely −2.4 ≤ χ ≤ 44 at 90% credibility (see Fig. 1). This result strengthens the case for compact binary mergers as laboratories to test GR, something which would otherwise be very hard (if not impossible) with only mass and radius measurements of isolated neutron stars, due to strong degeneracies between matter and strong-field gravity. We provide explicit examples of this degeneracy by computing the post-TOV deformations to the mass-radius curves within −2.4 ≤ χ ≤ 44 for a fixed equation of state.

¹ The PN formalism is one in which the field equations are solved perturbatively as an expansion in weak fields and small velocities. A term of N PN order is of O(v^{2N}/c^{2N}) relative to the leading-order term, with v the orbital speed and c the speed of light [31].

The remainder of the paper presents the details that led to the results summarized above and is organized as follows. In Sec. II we briefly overview the post-TOV and ppE formalisms, establishing the connection between the two. Next, in Sec. III we use the public data on tests of GR with GW170817 released by the LVC to place constraints on a combination of post-TOV parameters. In Sec. IV we discuss the allowed deformation of the mass-radius curves of neutron stars under this constraint, discussing in detail the degeneracies between matter and strong gravity. In Sec. V, we present our conclusions and outline some directions in which our work can be extended. Throughout this work we use geometric units G = 1 = c and a mostly plus metric signature.

II. FROM POST-TOV TO PPE

Let us start by briefly reviewing the post-TOV formalism developed in Refs. [26,27] and the ppE formalism introduced in Ref. [28] and expanded in [33].

A. Overview of the post-TOV formalism

The idea behind the post-TOV formalism is quite simple.
The formalism is based on the observation that the structure of static, spherically symmetric stars in GR is determined by only two differential equations,

dp/dr = −(ε + p)(m + 4πr³p) / [r(r − 2m)],   dm/dr = 4πr²ε,   (1)

which respectively govern the pressure and mass gradients within the star. Here, r is the circumferential radius, m the mass function, p the pressure and ε the total energy density. The latter two variables are assumed to be related through a barotropic equation of state (EOS), i.e. p = p(ε). For later convenience we recall that ε can be written as ε = ρ(1 + Π), where ρ is the baryonic rest-mass density and Π the internal energy per unit baryonic mass. The post-TOV formalism augments these equations with parametrized correction terms [Eq. (2)]: a first set (P1, M1) of 1PN corrections, and a second set (P2, M2) representing 2PN corrections which can be written in terms of fluid and metric variables. As explained in detail in Ref. [26], the 2PN terms which can be constructed from these primitive quantities can be gathered in five "families," each with an infinite number of terms and with each family yielding a distinctive change to the mass-radius relation of neutron stars. Fortunately, 2PN terms belonging to each family exhibit qualitatively the same radial profiles inside a star. This translates into terms belonging to the same family affecting the mass-radius relation in a self-similar manner (cf. [26], Figs. 3, 6 and 7). This fact allows one to choose a single representative member from each family to be included in the TOV equations. The criterion used in [26] to make this choice was the overall magnitude of the modification (relative to other terms in the same family) and the simplicity of the analytic form of the term.

Equation (2) is sufficient to determine the interior of the star and its bulk properties, i.e. the (Schwarzschild) enclosed mass M [≡ m(R)] and the radius R [the location r = R at which p(R) = 0 when integrating the post-TOV equations outwards from r = 0]. In [27], the exterior problem was addressed and it was found that the post-TOV equations result in a post-Schwarzschild exterior metric [Eq. (5)] whose time-time component reads, to leading order in ℳ/r,

−g_tt = 1 − 2ℳ/r − (2χ/3)(ℳ/r)³,

where χ [Eq. (6)] is a combination of the post-TOV parameters and ℳ is the Arnowitt-Deser-Misner (ADM) mass of the star, obtained from Eq. (7). Equation (7) was obtained under the restriction that µ1 ∈ [−1.0, 0.1], outside of which the calculation of ℳ requires solving a transcendental equation and for which the exterior metric cannot be written analytically in the simpler form (5). The fact that ℳ ≠ M is not unusual in modified theories of gravity (see e.g. [37]). In theories beyond GR, contributions to the star's mass arise due to the presence of new degrees of freedom, such as scalar or vector fields, although this is not always the case [38][39][40]. We stress that it is ℳ, not M, which would be observationally inferred, e.g. by using Kepler's law. In dynamical situations, such as in the motion of a neutron star binary, these additional degrees of freedom can be excited, and thus they can open new radiative channels for the system to lose energy, modifying the binary's dynamics. As formulated, the post-TOV formalism cannot account for the presence of extra fields and hence the radiative losses of the binary will be the same as in GR. On the other hand, since the exterior spacetime is different from that of Schwarzschild, the conservative sector of the binary motion will be different. As we will see next, the ppE formalism aims to capture generic deviations from GR in both sectors. This will allow us to obtain a mapping between the parameters (that control these deviations) in both formalisms.
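To make the starting point concrete, the following Python sketch integrates the GR equations (1) (i.e., the χ = 0 limit) for a toy polytropic EOS in geometric units; the post-TOV correction terms of Eq. (2) are not reproduced here, and all numerical values are illustrative.

import numpy as np
from scipy.integrate import solve_ivp

K, Gamma = 100.0, 2.0                  # toy polytrope p = K rho^Gamma

def eos_eps(p):
    # Total energy density eps = rho(1 + Pi) for the polytrope,
    # with internal energy density rho*Pi = p/(Gamma - 1).
    rho = (p / K) ** (1.0 / Gamma)
    return rho + p / (Gamma - 1.0)

def tov_rhs(r, y):
    p, m = y
    if p <= 0.0:
        return [0.0, 0.0]
    eps = eos_eps(p)
    dpdr = -(eps + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return [dpdr, dmdr]

def surface(r, y):                     # stop when the pressure drops to ~0
    return y[0] - 1e-12
surface.terminal = True

p_c = 1.5e-4                           # central pressure (geometric units)
sol = solve_ivp(tov_rhs, [1e-6, 100.0], [p_c, 0.0], events=surface,
                rtol=1e-8, atol=1e-12)
R, M = sol.t[-1], sol.y[1, -1]
print(f"R = {R:.2f}, M = {M:.3f} (geometric units)")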
B. Overview of the ppE formalism

The ppE formalism was developed to capture generic deviations from GR in the gravitational waves emitted by a binary system [28]. These deviations can be separated into those that affect the conservative sector (e.g. the binding energy of the orbit) and the dissipative sector (e.g. the flux of energy). In previous work, the conservative sector was modified in a rather cavalier way, making some assumptions about the structure of the deformations. Let us then here relax some of these assumptions and rederive the modifications.

We begin with the Hamiltonian for a two-body system in the center of mass frame, working to leading order in the post-Newtonian approximation and to leading order in the GR deformation,

H = (p_r²/2µ)(1 + δp_r) + (p_φ²/2µr²)(1 + δp_φ) − (µm/r)(1 + δU),   (8)

where r is the relative separation of the binary, µ = m1m2/m is the reduced mass, with m1,2 the component masses and m = m1 + m2 the total mass, and p_r and p_φ are the generalized momenta conjugate to the radial and azimuthal coordinates. The functions (δU, δp_r, δp_φ) characterize the deformation of the standard Newtonian Hamiltonian. For the purposes of this work, we will parametrize these deformations as

δU = A(m/r)^a,   δp_r = B(m/r)^b,   δp_φ = C(m/r)^c,   (9)

where (A, B, C) control the magnitude of the deformation (assumed small here), while (a, b, c) control the character of the deformation. We will also here assume that a = b = c, meaning that all deformations enter at the same post-Newtonian order, and we will discuss later how to relax this assumption. Physically, we can think of (δU, δp_r, δp_φ) as modifying the (t, t), (r, r) and (φ, φ) components of the metric, respectively. Notice also that if δp_φ ≠ 0, then the radius r and the angle φ are not the usual circumferential radius and azimuthal angle (though they are related to them via a coordinate transformation).

With this at hand, we can now derive the constants of the motion and the field equations. Assuming the Hamilton equations hold, there are two constants of the motion, associated with time-translation and azimuthal-angle-translation invariance. The former is simply the Hamiltonian itself, which for a binary is the binding energy E_b. The latter is the angular momentum of the orbit, which we can define as L ≡ p_φ/µ. The azimuthal component of the generalized momenta can be obtained from φ̇ = ∂H/∂p_φ, which then leads to

p_φ = µωr²/(1 + δp_φ),

where we have used the definition ω ≡ φ̇ and the fact that δp_φ was assumed to be independent of p_φ in Eq. (9). With this at hand, we can now derive the radial equation of motion in reduced order form. We begin by evaluating ṙ, which by Hamilton's equation is simply (p_r/µ)(1 + δp_r), where again we have used that δp_r was assumed to be independent of p_r in Eq. (9). We can then rewrite Eq. (8) as an equation for ṙ in terms of an effective potential V_eff. Note that δp_r, which is associated with a deformation of the (r, r) component of the metric, does not affect the location in phase space where ṙ = 0 (or equivalently where V_eff = 0).

Before we can find the binding energy of the orbit as a function of the orbital angular frequency, we must determine the energy and the angular momentum of a circular orbit in this perturbed spacetime. We can do so by setting V_eff = 0 and dV_eff/dr = 0 and solving for E_b and L, which yields Eqs. (13) and (14). From the expression for L, we can solve for ω(r) as well as r(ω) (i.e. the modification to Kepler's third law), Eq. (15). Using this in Eq. (13), we then find the final expression for the binding energy as a function of the orbital frequency, Eq. (16).
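The circular-orbit relations above can be checked symbolically; the sketch below encodes our reading of the deformed Hamiltonian of Eqs. (8) and (9) (an assumption, given the garbled source) and verifies with sympy that Hamilton's equation yields p_φ = µωr²/(1 + δp_φ), i.e. L = ωr²(1 − δp_φ) to first order in the deformation.

import sympy as sp

mu, m, r, omega, eps = sp.symbols('mu m r omega epsilon', positive=True)
A, B, C, a = sp.symbols('A B C a', positive=True)
p_r, p_phi = sp.symbols('p_r p_phi')

# Deformations of Eq. (9), with a = b = c and epsilon as bookkeeping parameter.
dU = eps * A * (m / r) ** a
dpr = eps * B * (m / r) ** a
dpf = eps * C * (m / r) ** a

# Deformed two-body Hamiltonian, our reading of Eq. (8).
H = (p_r**2 / (2 * mu)) * (1 + dpr) \
    + (p_phi**2 / (2 * mu * r**2)) * (1 + dpf) \
    - (mu * m / r) * (1 + dU)

phidot = sp.diff(H, p_phi)                      # Hamilton's equation for phi
sol = sp.solve(sp.Eq(phidot, omega), p_phi)[0]  # p_phi on a circular orbit
L = sp.series(sol / mu, eps, 0, 2).removeO()    # expand to first order
print(sp.simplify(L))   # omega*r**2*(1 - eps*C*(m/r)**a)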
(13), we then find the final expression Reference [33] carried out a similar calculation, except that in their calculation, the whole Newtonian effective potential was modified by the same term, namely Such a modification lead to a binding energy of the form [33] From this, Ref. [33] showed that the gravitational waves emitted by a binary, assuming the dissipative sector is not modified (i.e the flux of energy is the same as that in GR), and assuming gravitational waves contain the same two polarizations as in GR, lead to a Fourier detector response (in the stationary phase approximation) of the formh where A is the Fourier amplitude and Ψ is the Fourier phase. The latter can be decomposed into Ψ = Ψ GR +δψ, where Ψ GR is the Fourier phase in GR, while the GR deformation is where and f is the gravitational wave frequency. Given the similarities in the calculations, the easiest way forward is to map the results of Ref. [33] to the modifications we are considering here. Comparing the binding energies in Eqs. (18) and (16), we see that and where we have used that a = c = p. We then see clearly that the change in the Fourier phase is This deformation arising from a GR correction to the binding energy can be mapped to the ppE waveform as follows. Noting that the ppE phase is [29] we then realize that Therefore, a ppE constraint on β for a given value of b given a gravitational wave observation that is consistent with GR can be straightforwardly mapped to a constraint on A given a value of a. C. Relating the parameters in both formalisms Several paths are possible to relate the post-TOV and the ppE formalisms. The path we choose here is to compare the binding energy and angular momentum of a binary system composed of neutron stars whose metrics in isolation would take the form of Eq. (5). This can be achieved by transforming from the two-body problem to an effective one-body problem, in which a test particle of mass µ = m 1 m 2 /m moves in a background of mass m = m 1 + m 2 . Let us then consider the geodesic motion of a test particle in a generic (but still stationary and spherically symmetric) background. Consider the line element where the metric functions f and h are decomposed as f (r) = f 0 (r) + εf 1 (r) and h(r) = f −1 0 (r) + εh 1 (r), and where ε is a small bookkeeping parameter. In Appendix A we present a detailed analysis of geodesic circular motion in such a perturbed metric, and we compute the change to the binding energy E and the angular momentum L of the orbit. Identifying f 0 = 1 − 2M/r, f 1 = −(2χ/3)(M/r) 3 , substituting these expressions into Eqs. (A16) and (A17), and expanding both in ε 1 and in M/r 1, we find where χ is a post-TOV parameter. We can now compare Eq. (27) to Eq. (13) and Eq. (28) to (14) to find what A, C and a are in the post-TOV formalism. Doing so, we find that A = χ/3, C = 0 and a = 2. In fact, we could have predicted that C had to vanish, because the radial coordinate in the post-TOV formalism is the circumferential radius. With this in hand, the ppE parameters are then simply This is one of the main results of this paper, since a constraint on β can now straightforwardly be mapped to a constraint on χ and vice versa. Note that one could also use the mapping between (A, C, a) → χ to compute the modification to Kepler's third law through Eq. (15) or the binding energy as a function of the orbital frequency through Eq. (16), but this is not needed here. 
In the limit χ = 0 the evolution of a neutron star binary in GR and in the post-TOV formalism become identical. However, we emphasize that this limit does not necessarily correspond to the limit in which the post-TOV equation reduces to the usual GR TOV equations. Indeed, χ = 0 only places a constraint on the combination of some of the post-TOV parameters. Therefore, one can have the situation in which a neutron star binary inspiral is identical to GR, yet the structure of the individual stars is different from GR either because π 2 − µ 2 − 2πµ 1 = 0 and/or because the nonzero post-TOV parameters are the ones which do not affect the exterior space. Thus, we will refer to the case χ = 0 as the coincident limit. III. CONSTRAINTS ON THE POST-TOV PARAMETERS FROM GW170817 The LVC released constraints on model-independent deviations from GR to examine the consistency of the GW170817 event with GR predictions [30,41]. The constraints were obtained using a variant of IMR-PhenomPv2 [42][43][44][45], which improves upon IMRPhe-nomD [45,46] by phenomenologically including some aspects of spin precession and tidal effects [47,48]. In this variant, deviations from GR are described through relative shifts in the GR PN coefficients of the Fourier phase of IMRPhenomPv2 where δφ i are additional free parameters in the model. The parametrization used by LVC is an implementation of the ppE formalism as explained in [29], with β and δφ 4 being related as where φ 4 is the GR coefficient of the Fourier phase at 2PN order (cf. Appendix B in [46]). Comparing Eqs. (29) and (31) we obtain which establishes the relation between δφ 4 with χ. We can now translate the posterior distribution of δφ 4 into one for χ by using the MCMC samples available in [41], where, for each step, we calculate the corresponding value of χ using Eq. (32). The resulting probability density is shown in Fig. 1 with the 90% credible region corresponding to This is the first constraint on (a combination of) post-TOV parameters and another one of the main results of this paper. The fact that the posterior of χ has a peak outside of zero (the coincident limit) is perplexing at first sight and may be misinterpreted as evidence for a deviation from GR, but this is not to be the case. Rather, it reflects the qualitative behavior of the posterior distribution of δφ 4 (see Fig. 1 in [30]), which also does not exhibit a peak at δφ 4 = 0 and it is skewed to positive values. Both distributions, however, clearly do have a significant amount of support at zero, and thus, they do not indicate an inconsistency with GR. The skewness in the posterior for δφ 4 probably results from the marginalization process over the various parameters that describe the model, the degeneracies between these parameters, and the nonstationarity of the noise in the detectors. The similarity between the posteriors for χ and δφ 4 can be understood from the following argument. The . We see that whereas µ1 is essentially unconstrained, values of µ2 (π2) which are smaller (larger) are favored with peaks located at the boundary of our prior ranges. The strong degeneracy between these parameters follows from the fact that the constraints derive from an underconstrained system, only requiring to satisfy Eq. (6). In fact, this simple argument results in a posterior for χ that is very similar to that shown in Fig. 1. Having obtained a constraint on χ, is it possible to translate it into constraints on the three-dimensional parameter space spanned by µ 1 , µ 2 and π 2 ? 
The first step to do this, is to fix the prior ranges for these parameters. We take µ 1 ∈ [−1.0, 0.1] (for the reasons discussed in Sec. II A) and assume µ 2 and π 2 are in the ranges [−22, 22]. The latter domains are chosen such as to include the GR limit and to be large enough to include moderately large values of µ 2 and π 2 to encompass the upper bound χ = 44. We then draw samples from the probability density function P (χ) (shown in Fig. 1), and given a value χ i , we then draw samples of µ 1 , µ 2 and π 2 until Eq. (6) is satisfied. Figure 2 shows the result of this calculation. The diagonal panels in this corner plot show the marginalized posteriors on µ 1 , µ 2 and π 2 , while the off-diagonal pan-els show two-dimensional joint posteriors with the 90% credible contours delimited by the solid lines. The constraint on χ leaves µ 1 essentially unconstrained, while the favored values for µ 2 and π 2 are set by the bound of our priors. This occurs due to the strong degeneracy between these parameters arising from Eq. (6), which, together with Eq. (33), constrains π 2 −(µ 2 +4πµ 1 ) < const. Thus, if the prior ranges of µ 2 and π 2 were extended, the marginalized posteriors in Fig. 2 would retain their qualitative shapes, with peaks at the edge of their priors, as π 2 − µ 2 = const has an infinite number of solutions. IV. DEGENERACIES BETWEEN MATTER AND GRAVITY MODELS In the previous section we have constrained the magnitude of the post-TOV parameter χ, as well as µ 1 , µ 2 and π 2 . How do these results impact the allowed deformations away from a GR mass-radius curve as allowed by the post-TOV formalism? Could one, for example, use these deformed mass-radius regions, together with observations of the mass and radius of isolated neutron stars, to place further constraints on post-TOV parameters? We will show in this section explicitly that this is not possible due to degeneracies between post-TOV deformations and the EOS. To answer this question, we construct mass-radius curves with a restricted set of post-TOV equations and a fixed set of representative EOSs. The set of post-TOV equations is obtained from Eq. (2) by fixing all parameters to zero other than µ 1 , µ 2 and π 2 , and we make this choice because these three parameters are the only ones that can be directly probed by electromagnetic or gravitational wave phenomena. The set of EOSs consists of the SLy [49] and APRb [50] EOSs, which are favored by the tidal deformability measurements of the constituents of GW170817 [51] in GR and the observation of two solar masses neutron stars [52][53][54]. With this set of post-TOV equations and EOSs, we then construct one thousand mass-radius curves each with a different choice of post-TOV parameters that lay within the bound of Eq. (33). The value of these parameters was selected as follows. First, we drew random samples from the probability distribution function P (χ), only accepting values that satisfy (33). Next, we drew samples of µ 1 , µ 2 and π 2 (as in Sec. III) until Eq. (6) is met. The results of these integrations are shown in Fig. 3 for EOS APRb; the results for EOS SLy being very similar, so we do not show them here. In this figure, the vertical hatched (yellow) region contains all the mass-radius curves that are consistent with the post-TOV constraints derived in this paper, all truncated at the the maximum mass of the (stable) sequence. 
As is evident, the post-TOV formalism is capable of capturing a wide variety of curves that span a large region of the mass-radius plane, including exotic types, which e.g. have very low maximum masses M max ≈ 1.5 M (despite both EOSs sup- −0.10 M , shaded region) and the radius of canonical neutron stars [55] (R1.4 = 10.9 +1.9 −1.5 km, horizontal solid line) the allowed region is reduced to the horizontally hatched region. For reference, we also included in the limit set by Schwarzschild BHs (R = 2M ), Buchdahl's limit (R = 9M/8), the limit set by causality (R = 2.9M ) [56,57] in GR and the cut-off mass (dotted line) M = 2.6 M inferred from the mass distribution of compact binaries containing neutron stars [58]. porting 2 M stars in GR). Other curves can enter the region in the mass-radius plane that is excluded in GR (the "causality" curve), which is derived by requiring only a very minimal set of assumptions on the underlying unknown EOS [56,57], with some even extending close to Buchdahl's limit. 2 Further exotica include mass-radius curves that do not have an extrema at M max . These generically allow for very large radii ( 15 km), even when the mass is 1.4 M . More common curves are only small deformations away from the GR result. Although the region of the mass-radius plane allowed by Eq. (33) alone is rather large, it can be reduced by combining other sources of information on the masses and radii of neutron stars. For instance, by imposing that the mass-radius curves are consistent with (i) the existence of neutron stars with masses M = 2.17 +0.11 then 99.3% (for SLy) and 96.3% (for APRb) of the curves investigated are excluded. The resulting tighter contour due to the surviving mass-radius curves is shown by the horizontally hatched (red) regions in Fig. 3. Figure 4 vividly shows several difficulties in testing extreme gravity with observations of isolated neutron stars that yield mass and radius measurements alone. First, even in GR, our ignorance on the underlying neutron star EOS gives riseto mass-radius curves that can overlap (see the intersection of the SLy and APRb curves in Fig. 4). Second, even in the event of the EOS being tightly constrained in the future (under the assumption of neutron stars are described by GR), a measurement of χ still leads to degeneracies between the post-TOV parameters µ 1 , µ 2 and π 1 , as shown in the previous section Each value of (µ 1 , µ 2 , π 1 ) should correspond to a specific theory of gravity, and this degeneracy prevents us from singling one out. Third, the fact that the contours in Fig. 4 change as we change the EOS makes the degeneracy between EOS and theory of gravity explicit. This degeneracy arises in the post-TOV formalism in a very explicit way: the post-TOV equations (with P 1 = M 1 = 0) can be mapped into an effective barotropic EOS, with p = p(ε eff ) and ε eff ≡ ε + ρM 2 [26]. Therefore, observations of isolated neutron stars that yield mass and radius measurements alone cannot really be used to test gravity, unless more information is contained in the data, which can be folded into the models to test GR. V. CONCLUSIONS AND OUTLOOK Neutron star observations, both through electromagnetic and gravitational-wave astronomy, offer us a unique look into the fundamental interactions of nature. For gravity (in particular) it allows us to probe both the strong-field regime of neutron star interiors and the radiative aspects of gravity, when these object are found in binary systems. 
To be able to do theory-independent tests of gravity through neutron star observations, we have combined the post-TOV and ppE formalisms, constructing a single, unified framework for which tests of gravity can be performed from the radiative level down to the level of stellar structure. This framework is particularly relevant in light of ongoing events on the observational front. For instance, the Neutron Star Interior Composition Explorer (NICER) mission [60][61][62] will soon release the mass and radius measurements of a number of neutron stars within 10% precision and probes the effects of spacetime curvature on the motion of photons. Moreover, LIGO/Virgo is currently on its third scientific observing run, with a binary neutron star merger candidate already observed and tens of events expected to be seen in the next years. It would be interesting to combine these upcoming observational results to further explore the resulting constraints on the post-TOV parameters and thereby constrain modifications to GR in a theory-independent way. For instance, the contours in Fig. 4 reveal that the largest variability occurs for massive stars with M 1.8 M . One of NICER's targets (PSR J1614-2230) has a mass of 1.93 M [63,64] and a radius measurement of it would constrain this region of the mass-radius plane. In turn, these constraints could also be used to probe deviations from GR in a number of astrophysical scenarios, for instance in the quasiperiodic oscillations on matter disks in accreting neutron stars [27], or in the pulse profiles emitted by hot spots on the surface of rotating neutron stars (complementary to constraints on scalartensor gravity [65]). We have here only taken a first step on using this new framework and hope to explore further its applications in the near future. ACKNOWLEDGMENTS We thank the post-TOV practitioners Emanuele Berti, Kostas Glampedakis and George Pappas for numerous discussions on the topic over the years. We also thank Alejandro Cárdenas-Avendaño, Katerina Chatziioannou, Remya Nair, Thomas Sotiriou and Jacob Stanton for discussions on different aspects related to this work. Finally, we thank the anonymous referee for carefully reading our work. This work was supported by NASA Grants No. NNX16AB98G and No. 80NSSC17M0041. N. Y. also acknowledges the hospitality of KITP where some of this work was completed. Particle motion in perturbed spacetimes Consider the line element in Schwarzschild coordinates, on which a massive particle follows geodesic motion, with trajectory x α (τ ), where τ is the proper time. Let u α ≡ dx α /dτ be the particle's four-velocity, constrained by g αβ u α u β = −1 As usual, the spacetime symmetries imply the existence of two Killing vector fields which result in two conserved quantities respectively, the energy and angular momentum (per unit mass) of the particle. Due to the conserved angular momentum, orbits are confined to a single plane, which we take, without loss of generality to be the one for which θ = π/2. Using this result we find that Let us consider spacetimes with metric g, which are a small deformations to a static, spherically symmetric background g 0 . More specifically, let us write the metric functions f and h as where (in this Appendix only) ε denotes a small bookkeeping parameter. For convenience, we omit hereafter the dependence on r of the functions introduced above. Using these decompositions of f and h, into Eq. 
(A3) and then solving forṙ 2 , we find to leading order in ε, Equation (A5) suggests the definition of a zeroth-order effective potential V 0 eff , and a leading-order correction V 1 eff , such that Eq. (A5) becomes Properties of particles in circular orbits Now let us focus on the properties of particles in (not necessarily stable) circular orbits that we denote by r * . These orbits satisfy the conditionṡ where V eff ≡ V 0 eff + εV 1 eff . As a warm-up exercise, let us consider the limit ε → 0 and obtain general formulas of the (zeroth-order) energy E 0 and angular momentum L 0 of particles in circular orbits on g 0 . This calculation is particularly simple, because L 0 can be easily isolated from the dV eff /dr equation. With a little algebra we can obtain the general formulas In the particular limit of the Schwarzschild spacetime (f 0 = 1 − 2M/r) we readily obtain the familiar results Now, let us consider the general problem and obtain the corrections to E 0 and L 0 due to the perturbation V 1 eff in Eq. (A5). To do this, we first solve Eq. (A9) for E 2 and L 2 . Next, we expand the resulting expressions to leading order in ε. The outcome of this exercise is that E and L can be written as where the corrections to the zeroth-order energy and angular momentum [cf. Eqs. (A10) and (A11)] are and These expressions are the main result of this Appendix. Notice the absence of h 1 in these expressions. Finally, we can solve for E and L and write our final results. We emphasize that although the formulas obtained here were applied for the post-Schwarzschild metric, our results can be used to any perturbed spacetime -as long as its line element can be written in the form of (A1) -and then connected to the ppE formalism through Eq. (25). Appendix B: Orbital period decay rate In this appendix we derive an expression for the orbital period rate of changeṖ in the post-TOV formalism following closely [66] and obtain an order-of-magnitude bound on χ from binary systems. We start by assuming that energy is carried away from a circular binary according to the GR gravitational-wave luminosity formulaĖ at the expense of the orbital binding energy given by (16), i.e.Ė b = −Ė. Taking a time-derivative of Eq. (16) and using ω = 2π/P (t) we find: Now, let us return to (B1). We can eliminate r in favor of ω by using the modified Kepler's law (15). Solving for r, expanding in A, C and then substituting the resulting . (B3) We can now use Eqs. (B2) and (B3) in the energy balance law, solve forṖ (while expanding once more in A, C) and find: which is the main result of this appendix, where is the corresponding GR result. In the particular case of the post-TOV metric, we find after using A = χ/3, C = 0 and a = 2 thaṫ A simple constraint on χ (independent from the one in the main text) can thus be obtained as follows. Since binary pulsar observations of (Ṗ /P ) obs are in remarkable agreement with GR up to some observational error δ we can write (Ṗ /P ) obs = (Ṗ /P ) GR (1 + δ). Therefore, the post-TOV correction in Eq. (B6) is bound by δ, which then constrains χ to be |χ| ≤ 3 δ P 2πm where v c ≈ 2.1×10 −3 is characteristic velocity of the system and δ ≈ 1.3×10 −2 (for the quasicircular system PSR J0737-3039 [67]), giving the weak bound |χ| 7.2 × 10 8 . This result is seven orders of magnitude weaker than the bound obtained from GW170817 and exemplifies the constraining power of gravitational wave events on modifications to GR relative to binary pulsar constraints 3 .
2019-06-02T21:03:46.000Z
2019-06-02T00:00:00.000
{ "year": 2019, "sha1": "4f93bf46895ae1225c2f4d40796f7fdd90e7a842", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1906.00485", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4f93bf46895ae1225c2f4d40796f7fdd90e7a842", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258947697
pes2o/s2orc
v3-fos-license
ADLER -- An efficient Hessian-based strategy for adaptive learning rate We derive a sound positive semi-definite approximation of the Hessian of deep models for which Hessian-vector products are easily computable. This enables us to provide an adaptive SGD learning rate strategy based on the minimization of the local quadratic approximation, which requires just twice the computation of a single SGD run, but performs comparably with grid search on SGD learning rates on different model architectures (CNN with and without residual connections) on classification tasks. We also compare the novel approximation with the Gauss-Newton approximation. Introduction Recent improvements in Deep Learning have been coupled with a huge carbon footprint generated by the procedures needed to train large models [1]: for example, the common practice of employing grid search over the model hyperparameters to find the most accurate model is responsible for a multiple-fold increase in the emissions. Hyperparameter-free optimization is a yet under-explored research direction with huge potential benefits for the development of greener deep learning, especially if applied to widely popular learning algorithms and without sacrificing performance. In this context, Stochastic Gradient Descent (SGD) is at the core of most deep learning models that are trained daily. We take a first step towards removing hyperparameters from SGD, by introducing an adaptive learning rate strategy that leverages a novel positive semi-definite (PSD) approximation to the Hessian of popular supervised optimization problems in deep learning models. Experimental results show that our strategy only requires twice as much computation as a single SGD run while showing improved performance compared to classical grid search over SGD learning rates and the Gauss-Newton method. Empirical Risk Minimization in Deep Learning Consider the typical Empirical Risk Minimization (ERM) setting in supervised learning, where we are given a set S of n examples (x i , y i ) ∈ X ×Y sampled i.i.d. from an unknown probability distribution D ∈ P(X × Y ), a parametric function f : X × Θ → Y and a loss function : X × Y × Y → R; we are interested in the parameters θ ∈ Θ that results in the smallest possible empirical loss, i.e. θ)). To set some notation, let us define F S : Θ → Y ⊗n as the evaluation of f over all examples in S, i.e. F S (θ) := (f (x 1 , θ), . . . , f (x n , θ)); L S : Y ⊗n → R by L S (ŷ 1 , . . . ,ŷ n ) := 1 n n k=1 (x k , y k ,ŷ k ), and h S (θ) := L S (F S (θ)). This view clearly separates the two functions L and F which have different properties: F can be seen as an overparameterization 1 of the real objective function, is randomly initialized and thus amenable to techniques related to the central limit theorem, while L is typically convex 2 , and with its minimum value attained atŷ k = y k , so that it is amenable to classical optimization techniques. Proposed Hessian Approximation It is a matter of simple calculations to rewrite the Hessian of h S as where the indices µ, ν range over the output dimensions of F . Let us assume, as it commonly happens, that the last network layer is sampled independently of others from a zero mean distribution and the network output is linear in it. Multiple theoretical intuitions suggest dropping the last term: 1. For wide networks, the spectral norm of ∇ 2 F (θ) in a ball around the random initialization is O(1/ √ m), where m is the width of the network, while the first term is O(1) [2, Section 4.2, Theorem 5]. 
Thus the second term is a perturbation, the more negligible the wider the network is. 2. By overparameterization of the F map, we expect the optimization progress to overfit the training dataset, thus ensuring that lim t→∞ ∇L(F (θ t )) = 0 3 . Assume the loss function to be Then, by the zero-mean random distribution of the last layer, we expect y = f (x, θ) at initialization to be distributed randomly around zero irrespective of x, and thus E (x,y)∼D ∂ ∂ŷ (x, y, f (x, θ)) = 0, and cancellations are expected to occur between the second term weights ∇L(F (θ)) 5 . Thus the approximation is exact in limit settings 6 , and sound in typical cases due to the expected cancellations in the second term; moreover, the approximation is PSD 7 as ∇ 2 L(F (θ)) is PSD because of the convexity of L. The presented approximation provides a Hessian that doesn't necessarily vanish as the optimization progresses, in contrast with the Gauss-Newton method 8 , and enables it to better follow the local geometry of the optimization problem. Per-minibatch Computation of the Learning Rate Consider a random minibatch I ⊆ S and the update equation θ = θ − η∇h I (θ): we want to search for the η minimizing h I (θ ), that we approximate as from which the optimal learning rate is η(θ) : . A difficulty in a direct application of the above formula is the high variance of the obtained estimates for the learning rate, especially problematic in later optimization phases; we thus consider an exponentially weighted average between different mini-batches approximations of the learning rate 9 . Let us call η k the approximation for the k-th minibatch; the proposed effective rate iŝ η k := exp(( k i=0 β k−i log η i )/( k i=0 β k−i )) 10 , where β ∈ [0, 1] is a parameter determining how much earlier estimates are considered 11 . For clarity of exposition, Algorithm 1 details the full algorithm pseudo-code. 5 A more careful analysis can be made by using the Gradient Independence Assumption [3]. 6 Both on very wide networks (and NTK [4]) and on all networks at empirical convergence. 7 It is important that the approximation is PSD as this ensures that the taken direction is a descent direction, and enables the usage of Conjugate Gradient Descent [5] on the objective. 8 Which approximate the Hessian using ∇L(F (θ)) T ∇L(F (θ)) instead of ∇ 2 L(F (θ)). 9 The underlying intuition is that gradient steps become smaller as training progresses, and thereby the local curvature becomes more stable from one minibatch to the next one, which enables a simple averaging procedure to aggregate the information contained in multiple minibatches without a strong need to increase the mini-batch size. 10 Since the denominator ∇h I (θ) T Q I (θ)∇h I (θ) can be arbitrarily near zero, we need to average the logarithms of η k , as this better reflects the variations expected in different estimates. In contrast, in the Gauss-Newton approximation, we average the η k as they are, since in that case the denominator is with very high probability strictly greater than zero, and the distribution of the learning rates can be suitably approximated by a Gaussian distribution. 11 Ideally, we would like to have different β k → 1 − , since at the start of the optimization earlier estimates aren't much informative due to large steps, while near the optimization end values of β 1 are optimal to ensure full usage of the information contained in each minibatch. 
We suggest β = 0.99 as a good fixed value ensuring a decent tradeoff between being fast enough at the start and reach of the best end results; we leave β-autotuning to future research. Experimental Results To validate the effectiveness of the proposed algorithm, we perform experiments on Convolutional architectures with and without residual connections and on Vision Transformers (ViT) 12 [6] of different widths and number of layers, different activation functions, and different random seeds, totaling more than one hundred variations 13 . For each network variation, we compare: (i) the best result of a cost-intensive grid search over ten SGD learning rates 14 ranging in logarithmic scale from 1e−5 to 3e−1; (ii) our algorithm (ADLER) with β = 0.99, ε = 1e−10; and (iii) Gauss-Newton approximation (GN) with β = 0.99. Figure 1 and Table 1 show that ADLER performs much better than the Gauss-Newton approximation; moreover, it is on par with SGD grid search for CIFAR10 and achieves a better performance on CIFAR100. Accuracy performance apart, the method is quite effective in terms of the optimization of the given objective function, as it can be appreciated from the logarithmic slope of the training loss compared to the other methods ( Figure 1). Infact, the method is so effective at optimization that we see that the test loss starts to increase dramatically after the first ten epochs, while at the same time the accuracy performance stabilizes (Figure 1). We think that this overfitting effect is due to the employment of non-regularized ERM, and can be mitigated by a policy to increase the mini-batch size in time, which would allow the method to obtain more accurate geometrical information. Let us now analyze the way in which the learning rates are varied throughout the learning epochs: in Figure 2 we can observe that they resemble cyclical learning rate strategies [7] 15 in the case of ReLU networks; moreover we observe that the learning rates produced by GN inexorably decrease with time and are generally smaller than either of the other methods, which hints at the reason underlying its worse overall performance. ADLER achieves a comparable accuracy to grid-search SGD on CIFAR10 16 , while it shows a marked performance improvement on the more challenging CIFAR100 dataset (Table 1). While this may seem a contradiction, we think the low performance on CIFAR10 can be explained by the fact that on easier problems the local geometry appears flatter and thus the algorithm takes greater steps; these steps may sometimes be so big that the local approximation no longer holds. For this reason, we look forward to integrating a trust-region bound [8] in future extensions of the method. Fig. 2: Evolution of learning rates during optimization: for SGD we plot the one achieving the best performance at each epoch, while for the purpose of training, it is actually kept fixed to its grid search value. Conclusions In this work (1) we have presented an algorithm that relieves practitioners from computational and energy costs of grid search on SGD learning rates, without loss of predictive performance when optimizing popular convolutional architectures and Vision Transformers; (2) we have proposed a sound PSD Hessian approximation that can be easily instantiated in Conjugate Gradient methods, thus enabling to possibly match the performance of accelerated algorithms, and promoting the removal of hyperparameters from optimization procedures towards a greener AI.
2023-05-29T01:22:23.507Z
2023-05-25T00:00:00.000
{ "year": 2023, "sha1": "4da6696f2b28dfbe8a9314b56ca619afc8e3e5ba", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4da6696f2b28dfbe8a9314b56ca619afc8e3e5ba", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
7217008
pes2o/s2orc
v3-fos-license
Interactive Proofs with Quantum Finite Automata Following an early work of Dwork and Stockmeyer on interactive proof systems whose verifiers are two-way probabilistic finite automata, the authors initiated in 2004 a study on the computational power of quantum interactive proof systems whose verifiers are particularly limited to quantum finite automata. As a follow-up to the authors' early journal publication [J. Comput. System Sci., vol.75, pp.255-269, 2009], we further investigate the quantum nature of interactions between provers and verifiers by studying how various restrictions on quantum interactive proof systems affect the language recognition power of the proof systems. In particular, we examine three intriguing restrictions that (i) provers always behave in a classical fashion, (ii) verifiers always reveal to provers the information on next moves, and (iii) the number of interactions between provers and verifiers is bounded. one-way quantum finite automaton (mo-1qfa, in short) of Moore and Crutchfield [13] and a measure-many one-way quantum finite automaton (1qfa, in short) of Kondacs and Watrous [11]. An initial study of QIP systems with those weak verifiers reveals their noticeable strength. Let us recall from [16] a general notation QIP( restrictions ) for bounded-error QIP systems under restrictions given by restrictions , analogous to the aforementioned notation IP( restriction ) of Dwork and Stockmeyer. For instance, QIP(2qf a) is obtained by restricting all verifiers to 2qfa's. Likewise, a use of mo-1qfa verifiers and 1qfa verifiers introduces the language classes QIP(mo-1qf a) and QIP(1qf a), respectively. The power of quantum interaction was exemplified in [16]. (i) With mo-1qfa verifiers, it holds that MO-1QFA QIP(mo-1qf a) REG, where REG is the class of regular languages and MO-1QFA is the class of all languages recognized by bounded-error mo-1qfa's. (ii) With 1qfa verifiers, we obtain 1QFA QIP(1qf a) = REG, where 1QFA is the class of all languages recognized by bounded-error 1qfa's. (iii) With 2qfa verifiers, it holds that QIP(1qf a) QIP(2qf a, poly-time) AM(2pf a). They also showed that 2QFA A ⊆ P and QIPC(2qf a, poly-time) ⊆ NP, whereC and A respectively indicate that all (transition) amplitudes of qfa verifiers are polynomial-time "approximable" complex numbers and of algebraic complex numbers. The last result clearly contrasts with the following classical containments: AM(2pf a) ⊆ IP(2pf a, poly-time) ⊆ PSPACE [6]. We intend to continue a study of qfa-verifier QIP systems from various aspects of computational complexity. In particular, This paper aims at presenting three intriguing subjects that were discussed in [15] but have been excluded from our early journal publication [16]. In this current publication, we shall examine strengths and weaknesses of qfa-verifier QIP systems by observing how various restrictions on the QIP systems affect their power of language recognition. Our investigation is focused on the following three selected subjects. 1. Classical provers versus quantum provers. In our model of QIP systems [15,16], provers are basically quantum machines, which can apply any predetermined unitary operators. In contrast, provers in IP systems of Dwork and Stockmeyer [6] are essentially "probabilistic machines," which can flip privately owned coins and decide what messages to send back to 2pfa verifiers. 
These probabilistic machines, however, are known to be reduced to deterministic machines, which are naturally associated with unitary operators whose entries are only 0s and 1s because the provers can use an unlimited amount of private memory storage. For convenience, we briefly call such provers classical provers; in contrast, we call standard provers quantum provers. Naturally, we raise a simple question of whether our quantum provers are truly different in recognition power from the aforementioned classical provers. It appears that a classical prover helps a 2qfa verifier much more than a quantum prover does. For instance, the language Center = {x1y | x, y ∈ {0, 1} * , |x| = |y|} is not yet known to belong to QIP(2qf a); however, interactions with a classical prover allow a 2qfa verifier to recognize this particular language. Such a strength of using classical provers stems from a simple fact that an analysis of classical-prover QIP protocols is much easier than that of quantum-prover ones. This paper further shows the following containments and separations concerning classical provers. (i) QIP(1qf a) ⊆ QIP(1qf a, c-prover). (ii) AM(2pf a) QIP(2qf a, c-prover). (iii) AM(2pf a, poly-time) QIP(2qf a, poly-time, c-prover) AM(2pf a). A core argument for these results is a technical construction of appropriate QIP protocols that recognize target languages. All the above results will be presented in Section 3. QIP(1qf a, public) denotes a language class obtained from QIP(1qf a) by restricting verifiers to publicly announcing their next moves. It turns out that public QIP systems remain significantly powerful. To be more precise, we shall prove the following three class relations. (i) 1RFA QIP(1qf a, public) 1QFA, (ii) QIP(2qf a, public, poly-time) AM(2pf a, poly-time), and (iii) QIP(2qf a, public, c-prover) AM(2pf a, poly-time), where 1RFA is the language family induced by one-way (deterministic) reversible finite automata (1rfa's, in short). In Section 4, we shall discuss those results in details. Number of interactions between a prover and a verifier. As suggested in [16,Section 6], the number of interactions between a prover and a verifier in a weak-verifier QIP system may serve as a complexity measure of classifying various languages. Unlike Dwork-Stockmeyer IP systems, the original QIP systems of the authors [15,16] were introduced as to force two parties-a prover and a verifier-to communicate with each other at every step and, through Sections 3-4, we shall take this definition of QIP systems. To study the precise effect of interactions, nevertheless, we need to modify this original model slightly so that the verifier can interact with the prover only when he needs any help from the prover. To express those new QIP systems and their corresponding language classes, we invent two new notations QIP # ( restrictions ) and QIP # k ( restrictions ), where k indicates the maximal number of iterations made during a computation. In Section 5, we shall prove that QIP # 0 (1qf a) QIP # 1 (1qf a) QIP # (1qf a). The first separation between QIP # 0 (1qf a) and QIP # 1 (1qf a) comes from the fact that the language Odd = {0 m 1z | z ∈ {0, 1} * , z has an odd number of 0s } belongs to QIP # 1 (1qf a) but it is not in QIP # 0 (1qf a) since QIP # 0 (1qf a) coincides with 1QFA. 
In contrast, the second separation of QIP # (1qf a) from QIP # 1 (1qf a) is exemplified by the language Zero = {x0 | x ∈ {0, 1} * }; however, the proof of Zero ∈ QIP # 1 (1qf a) is much more involved than the proof of Zero ∈ 1QFA that appears in [11]. QFA-Verifier QIP Systems Throughout this paper, C denotes the set of all complex numbers and ı is √ −1. Let N be the set of all natural numbers (i.e., nonnegative integers) and set N + = N − {0}. Given two integers m and n with m ≤ n, the integer interval [m, n] Z is the set {m, m + 1, m + 2, . . . , n} and Z n in particular denotes the set [0, n − 1] Z . All logarithms are to base 2 and all polynomials have integer coefficients. An alphabet is a finite nonempty set of "symbols" and our input alphabet Σ is not necessarily limited to {0, 1} throughout this paper. Following the standard convention, Σ * denotes the set of all finite sequences of symbols from Σ, and we write Σ n = {x ∈ Σ * | |x| = n}, where |x| denotes the length of x. Opposed to the notation Σ * , Σ ∞ stands for the set of all infinite sequences, each of which consists of symbols from Σ. For any symbol a in Σ, a ∞ denotes an element of Σ ∞ , which is the infinite sequence made only of a. We assume the reader's familiarity with classical automata theory and the basic concepts of quantum computation (refer to, e.g., [8,9,14] for its foundation). As underlying computation device, we extensively use measure-many one-way quantum finite automata (or 1qfa's, in short) and of measure-many two-way quantum finite automata (or 2qfa's), where we assume the reader's familiarity with the definitions of those quantum automata [11]. Basic Model of QIP Systems Let us review the fundamental definition of QIP system of the authors [15,16], in which verifiers are particularly limited to quantum finite automata. In Sections 4-5, we shall further restrict the behaviors of those verifiers as well as provers to obtain three major variations of our basic QIP systems. The reader may refer to [16] for a brief discussion on the main difference between QIP systems based on uniform quantum circuits and those based on quantum finite automata. We use the notation (P, V ) to denote a QIP protocol taken by prover P and verifier V (whose schematic diagram is illustrated in Figure 1). For convenience, we use the same notation (P, V ) to mean a QIP system with the prover P and the verifier V . The 2qfa verifier V = (Q, Σ ∪ {| c, $}, Γ, δ, q 0 , Q acc , Q rej ) is a 2qfa specified by a finite set Q of verifier's inner states, an input alphabet Σ and a verifier's transition function δ, equipped further with a shared communication cell using a communication alphabet Γ. The set Q is the union of three mutually disjoint subsets Q non , Q acc , and Q rej , where any states in Q non , Q acc , and Q rej are respectively called a non-halting inner state, an accepting inner state, and a rejecting inner state. In contrast to Q non , inner states in Q acc ∪ Q rej are simply called halting inner states. In particular, Q non contains a so-called initial inner state q 0 . An input tape is indexed by natural numbers (where the first cell is indexed 0). Two designated symbols | c and $ not appearing in Σ, which are called respectively the left endmarker and the right endmarker, mark the left end and the right end of the input on the input tape. For our convenience, setΣ = Σ ∪ {| c, $}. Assume also that Γ contains a blank symbol # with which the system (P, V ) begins in the communication cell. 
The verifier's transition function δ is a map from Q ×Σ × Γ × Q × Γ × {0, ±1} to C and is interpreted as follows. For any q, q ′ ∈ Q, σ ∈Σ, γ, γ ′ ∈ Γ, and d ∈ {0, ±1}, the complex number δ(q, σ, γ, q ′ , γ ′ , d) specifies the transition amplitude with which the verifier V in state q scanning symbol σ on the input tape and symbol γ on the communication cell changes q to q ′ , replaces γ with γ ′ , and moves his tape head on the input tape in direction d. When the tape head is located in a cell indexed t, it must move to the cell indexed t + d. At the beginning of the computation, an input string x over Σ of length n is written orderly from the first cell to the nth cell of the input tape. The tape head initially scans | c in the 0th cell. The communication cell holds only a symbol in Γ and initially # is written in the cell. Similar to the original definition of 2qfa in [11], our input tape is circular; that is, whenever the verifier's tape head scanning | c ($, resp.) on the input tape moves to the left (right, resp.), the tape head reaches to the right end (resp. left end) of the input tape. Figure 1: A schematic view of a QIP system with a qfa verifier Next, we explain two concepts of (global) configuration and visible configuration. A (global) configuration of the QIP protocol (P, V ) is a snapshot of a "computation" of the protocol, comprising the following visible configurations of the two players. Each player can see only his portion of a global configuration. A visible configuration of the verifier V on an input of length n is represented by a triplet (q, k, γ) ∈ Q × Z n+2 × Γ, which indicates that the verifier is in state q, the content of the communication cell is γ, and the verifier's tape head position is k on the input tape. Let V n and M be respectively the Hilbert spaces spanned by the computational bases {|q, k | (q, k) ∈ Q × Z n+2 } and {|γ | γ ∈ Γ}. The Hilbert space V n ⊗ M is called the verifier's visible configuration space on inputs of length n. For any input x of length n in Σ * , δ automatically induces the linear operator U x δ acting on the Hilbert is the ith symbol in x, and k ′ = k+d (mod n+2). The verifier is called well-formed if U x δ is unitary on V n ⊗M for every string x ∈ Σ * . Since we are interested only in well-formed verifiers, we henceforth assume that all verifiers are well-formed. For every input x of length n, the 2qfa verifier V starts with the initial quantum state |q 0 , 0, # . A single step of the verifier on x consists of the following process. First, V applies his operation U x δ to an existing superposition |φ in V n ⊗ M and then U x δ |φ becomes the new superposition |φ ′ . Second, we define W acc = span{|q, k, γ | (q, k, γ) ∈ Q acc × Z n+2 × Γ}, W rej = span{|q, k, γ | (q, k, γ) ∈ Q rej × Z n+2 × Γ}, and W non = span{|q, k, γ | (q, k, γ) ∈ Q non × Z n+2 × Γ}. Moreover, let k acc , k rej , and k non be respectively the positive numbers representing "accepting," "rejecting," and "non halting." The new superposition |φ ′ is then measured by the observable k acc E acc + k rej E rej + k non E non , where E acc , E rej , and E non are respectively the projection operators onto W acc , W rej , and W non . Provided that |φ ′ is expressed as |ψ 1 + |ψ 2 + |ψ 3 for certain three vectors |ψ 1 ∈ W acc , |ψ 2 ∈ W rej , and |ψ 3 ∈ W non , we say that, at this step, V accepts x with probability |ψ 1 2 and rejects x with probability |ψ 2 2 . 
Only the non-halting superposition |ψ 3 continues to the next step and V is said to continue (to the next step) with probability |ψ 3 2 . The probability that x is accepted (rejected, resp.) within the first t steps is thus the sum, over all i ∈ [1, t] Z , of the probabilities with which V accepts (rejects, resp.) x at the ith step. In particular, when the verifier is a 1qfa, the verifier's transition function δ must satisfy the following additional condition: for all q, q ′ ∈ Q, σ ∈Σ, and γ, γ ′ ∈ Γ, it holds that δ(q, σ, γ, q ′ , γ ′ , d) = 0 if d = +1 (i.e., the tape head does not move to the right). Unlike 2qfa verifiers, a 1qfa verifier must stop running after applying δ at scanning $ and then performing a projection measurement. In other words, the 1qfa verifier completely stops by the time the verifier's tape head moves off $ (conceptually, the tape head stops at | c since the input tape is circular). Therefore, on any input x, the 1qfa verifier halts in at most |x| + 2 steps. In contrast to the verifier, the prover P has a semi-infinite private tape and accesses input x and the communication cell. Let ∆ be a tape alphabet, which includes a special blank symbol #, for the prover's private tape. The prover is assumed to alter only a "finite" initial segment of his private tape at every step. Let P be the Hilbert space spanned by {|y | y ∈ ∆ ∞ f in }, where ∆ ∞ f in is the set of all infinite sequences of tape symbols containing only a finite number of non-blank symbols. The prover's visible configuration space is the Hilbert space M ⊗ P. Formally, the prover P is specified by a series {U x P,i } x∈Σ * ,i∈N + of unitary operators, each of which acts on the prover's visible configuration space, such that U x P,i is of the form S x P,i ⊗ I, where dim(S x P,i ) is finite and I is the identity operator. Such a series of operators is often called the prover's strategy on the input x. To refer to the strategy on x, we often use the notation P x ; namely, P x = {U x P,i } i∈N + . With this notation, the prover can be expressed as {P x } x∈Σ * . If the prover has string y ∈ ∆ ∞ f in on his private tape and scans symbol γ in the communication cell, then he applies U x P,i to the quantum state |γ |y at the ith step of the prover's turn. If U x P,i |γ |y = γ ′ ,y ′ α i γ ′ ,y ′ |γ ′ |y ′ , then the prover changes y into y ′ and replaces γ by γ ′ with amplitude α i γ ′ ,y ′ . A (global) configuration consists of the four items: V 's inner state, V 's tape head position, the content of the communication cell, and the content of P 's private tape. We express a superposition of such configurations of (P, V ) on input x as a vector in the Hilbert space V |x| ⊗ M ⊗ P, which is called the (global) configuration space of (P, V ) on the input x; in other words, a (global) configuration is of the form |q, k |γ |y , indicating that V is in inner state q, its tape head is at cell k, γ is in the communication cell, and P 's private tape contains y. A global configuration ξ is called a halting configuration (a non-halting configuration, resp.) if ξ contains a halting (non-halting, resp.) inner state of V . A computation of the QIP protocol (P, V ) on the input x constitutes a series of superpositions of configurations resulting by alternate applications of unitary operations of the prover and the verifier including his projection measurement in the following manner. 
The computation of (P, V ) on x starts with the global initial configuration |q 0 , 0 |# |# ∞ , where the verifier is in his initial configuration and the prover's private tape consists only of the blank symbol #. The two players P and V apply their unitary operators U x δ (as well as measurement) and P x in turn, starting with the verifier's move. A projection measurement is made after every move of the verifier to determine whether V is in a halting inner state. Through the communication cell, the two players exchange communication symbols, which cause the two players to be entangled. When the prover (verifier, resp.) writes symbol σ ∈ Γ in the communication cell, we customarily say that the prover (verifier, resp.) sends σ to the verifier (prover., resp.). More precisely, when |q, k |γ |y is a current (global) configuration, V changes it into (U x δ ⊗ I 1 )|q, k |γ |y = U x δ |q, k |γ ⊗ |y , where I 1 is the identity operator acting on P. After V applies the projection measurement E non , the global configuration becomes (E non ⊗ I 1 )(U x δ ⊗ I 1 )|q, k |γ |y . Finally, P changes |q, k |γ |y into (I 2 ⊗ U x P,i )|q, k |γ |y = |q, k ⊗ U x P,i |γ |y , where I 2 is the identity operator acting on V |x| . A superposition |Φ i of global configurations at the ith step is defined recursively as |Φ 0 = |q 0 , 0 |# |# ∞ , |Φ 2i+1 = (E non ⊗ I 1 )(U x δ ⊗ I 1 )|Φ 2i , and |Φ 2i+2 = (I 2 ⊗ U x P,i+1 )|Φ 2i+1 for every i ∈ N. For example, the superposition of (global) configurations after the 2i + 1st step becomes Given any global configuration ξ, a local computation path ending with (or leading to) ξ in computation (|Φ 0 , |Φ 1 , . . . , |Φ n ) of the QIP protocol (P, V ) on a given input is a series (ξ 0 , ξ 1 , . . . , ξ m ) of global configurations satisfying the following four conditions: |ξ 0 = |q 0 , 0 |# |# ∞ , (E non ⊗ I 1 )(U x δ ⊗ I 1 )|ξ 2i contains |ξ 2i+1 with non-zero amplitude for all i ∈ [0, ⌊(m − 1)/2⌋] Z , (I 2 ⊗ U x P,i+1 )|ξ 2i+1 contains |ξ 2i+2 with nonzero amplitude for all i ∈ [0, ⌊(m − 2)/2⌋] Z , and ξ m equals ξ. Moreover, a (global) computation path ending with ξ is a local computation path (ξ 0 , ξ 1 , . . . , ξ m ) ending with ξ for which |ξ i appears in |Φ i with non-zero amplitude for every i ∈ [0, m] Z . Each (global) computation path ends when V enters a certain halting inner state along this computation path. Furthermore, we define the overall probability that (P, V ) accepts (rejects, resp.) the input x as the limit, as t → ∞, of the probability that V accepts (rejects, resp.) x within the first t steps. We use the notation p acc (x, P, V ) (p rej (x, P, V ), resp.) to denote the overall acceptance (rejection, resp.) probability of x by (P, V ). We say that V always halts with probability 1 if, for every input x and every prover P * , (P * , V ) reaches halting inner states with probability 1. In general, V may not always halt with probability 1. Notice that, when we discuss the entire running time of the QIP system, we count the number of all steps taken by the verifier (including measurements) as well as the prover. Let a and b be any two real numbers in the unit interval [0, 1] and let L be any language. We say that L has an (a, b)-QIP system (P, V ) (or an (a, b)-QIP system (P, V ) recognizes L) if (P, V ) is a QIP system and the following two conditions hold for (P, V ): 1. (completeness) for any x ∈ L, the QIP protocol (P, V ) accepts x with probability at least a, and 2. 
(soundness ¶ ) for any x ∈ L and any prover P * , the QIP protocol (P * , V ) rejects x with probability at least b. Note that an (a, a)-QIP system has the error probability at most 1 − a. This paper discusses only the QIP systems whose error probabilities are bounded from above by certain constants lying in the real interval [0, 1/2). For simplicity, we say that L has a QIP system if there exists a constant (an error bound) ǫ Given any pair a, b ∈ [0, 1], the notation QIP a,b ( R ), where R is a set of restrictions, denotes a class of all languages recognized by certain (a, b)-QIP systems with the restrictions specified by R . In addition, we define QIP( R ) as the union ǫ>0 QIP 1/2+ǫ,1/2+ǫ ( R ). In this paper, we shall focus our attention on the following three basic restrictions R : 1qf a (i.e., 1qfa verifiers), 2qf a (i.e., 2qfa verifiers), and poly-time (i.e., expected polynomial running time). As an example, QIP(2qf a, poly-time) denotes the language class defined by QIP systems with expected polynomial-time 2qfa verifiers. Other types of restrictions will be discussed in later sections. What if Provers Behave Classically? To promote a better understanding of the roles of provers in our QIP systems described in Section 2.1, we shall examine a variant of those systems. Recall that, in Dwork-Stockmeyer IP systems [6], mighty provers are in essence probabilistic machines that probabilistically select messages to send to verifiers. As noted in [6], it is possible to weaken those provers to deterministic machines without compromising the language recognition power of the corresponding IP systems. Naturally, we can ask whether or not standard "quantum" provers in our QIP systems can be replaced by significantly weaker machines. Among many candidates for weak machines, we consider machines that operate only unitary operators whose entries are all limited to 0 and 1. Significance of such operators is that, using a semi-infinite private tape, those restricted operators essentially make their corresponding provers reduce to merely deterministic machines. Moreover, a real-life implementation of such restricted operators could be much simpler and easier than implementing arbitrary unitary operators. For those reasons, we call a prover classical if the prover's move is dictated by unitary operators whose entries are 0s and 1s. In comparison, we refer to the original provers (described in Section 2.1) as quantum provers. Remember that classical provers are still quantum provers. Hereafter, the restriction c-prover indicates that all provers behave classically as defined above. In our QIP systems, classical provers may play an essentially different role from quantum provers. ¶ As Lipton [12] demonstrated, this form of the soundness condition cannot be, in general, replaced by the following weaker form: "(P, V ) accepts x with probability at most 1 − b." See [6] for a discussion. In a strict sense, a more exact analogy to deterministic prover may require that even a communication cell behaves classically. Let us examine the power of classical-prover QIP systems when verifiers are limited to 1qfa's. It is not difficult to prove that 1QFA ⊆ QIP(1qf a, c-prover) by forcing provers to unalter the communication cell at any step. However, it is not clear whether QIP(1qf a, c-prover) coincides with QIP(1qf a). In what follows, we shall demonstrate that QIP(1qf a, c-prover) actually contains QIP(1qf a). Proof. 
In a quantum-prover model, it was shown in [16,Lemma 5 x in the reverse order) of "marked" even-length palindromes belongs to QIP(2qf a, poly-time). By a careful examination of the proof, we find that the same proof works for classical provers. This fact immediately places P al # into QIP(2qf a, poly-time, c-prover). Hence, the separation between AM(2pf a) and QIP(2qf a, poly-time, c-prover) naturally follows because P al # is located outside of AM(2pf a) [6]. This separation further leads to the difference between AM(2pf a) and QIP(2qf a, c-prover). To complete the proof, we shall prove that AM(2pf a) ⊆ QIP(2qf a, c-prover). Since the proof that begins below works for any time-bounded model, we also obtain the remaining claim that AM(2pf a, poly-time) ⊆ QIP(2qf a, poly-time, c-prover). Let L be any language in AM(2pf a) over alphabet Σ. We want to show that L is also in QIP(1qf a, c-prover). The important starting point is a fact that L can be recognized by special finite automata M , called 2npfa's [5], that make probabilistic moves and nondeterministic moves in turn as follows. If x ∈ L, then there exists a series of nondeterministic choices by which M halts in accepting states with probability at least 1 − ǫ; otherwise, for every series of nondeterministic choices, M halts in rejecting states with probability at least 1 − ǫ, where ǫ is a constant in [0, 1/2). Now, we take a 2npfa M = (Q, Σ ∪ {| c, $}, δ M , Q acc , Q rej ) with nondeterministic states and probabilistic states that recognizes L with error probability at most ǫ, where 0 ≤ ǫ < 1/2. To simplify our proof, we force M to satisfy the following two conditions: (i) M 's tape head does not stay still at any step and (ii) whenever M tosses a fair coin, the tape head moves only to the right. It is not difficult to modify any 2npfa to meet those two conditions. Based on this machine M , we shall construct the desired QIP system (P, V ) with classical prover P for L. Let x be any input string of length n. Let Q ′ = Q ∪ {p | p ∈ Q} be a set of inner states and let Γ = (Q ′ × {±1}) ∪ {#, κ} be a communication alphabet, wherep is a new inner state associated with p and κ is a fresh non-blank symbol. The verifier V carries out the procedure that follows δ M : Q×Σ → P(Q×{±1}), by which V simulates M step by step. Let us consider any step at which M tosses a fair coin in probabilistic state p by applying a transition δ M (p, σ) = {(p 0 , 1), (p 1 , 1)} for certain distinct states p 0 , p 1 ∈ Q, where "1" means that the tape head moves rightward by Condition (ii). The verifier V checks whether # is in the communication cell. If this is not the case, V rejects x immediately; otherwise, V makes the corresponding (Q × Γ)-transition V σ |p |# = 1 √ 2 (|p 0 |(p, 1) + |p 1 |(p, 1) ) with D(p 0 , (p, 1)) = D(p 1 , (p, 1)) = +1. The verifier expects a prover to erase the symbol (p, 1) in the communication cell by overwriting it with #. This erasure of symbols guarantees V 's move to be unitary. Next, consider any step at which M makes a nondeterministic choice in nondeterministic state p by a Here, deterministic moves are treated as a special case of nondeterministic moves. In this case, V takes two steps to simulate M 's move. The verifier V enters a rejecting inner state immediately unless the communication cell contains #. Now, assume that # is in the communication cell. Without moving its tape head, V first sends the designated symbol κ to a prover, requesting a pair (p ′ , d ′ ) in Q × {±1} to return. 
This is done by the special (Q × Γ)-transition V_σ|p⟩|#⟩ = |p⟩|κ⟩ with D(p, κ) = 0. The verifier forces a prover to return a valid form of nondeterministic choice (namely, a pair (p′, d′) ∈ δ_M(p, σ)) by entering a rejecting inner state if the prover writes any other symbol. Once V receives a valid pair (p_i, d_i), V simulates the corresponding move of M and expects a prover to erase the communication symbol (p, d_i). The honest prover P must blank the communication cell at the end of every simulation step of V and, upon a request from V with the symbol κ, P returns a "correct" nondeterministic choice to V (if any). If x ∈ L, then there is a series of nondeterministic choices along which M accepts x with probability at least 1 − ǫ. Since the honest prover P sends such a series step by step, P guides V to make the correct nondeterministic choices. Moreover, P allows V to simulate M's probabilistic moves correctly by erasing V's communication symbols. Hence, V successfully reproduces M's outcomes with the same error probability, and thus the protocol (P, V) accepts x with probability at least 1 − ǫ. Next, consider the case where x ∉ L. Notice that no matter how the nondeterministic choices are made, M rejects x with probability at least 1 − ǫ. Take a dishonest classical prover P* that maximizes the acceptance probability of V on x. This particular prover P* must clear out the communication cell whenever V asks him to do so since, otherwise, V immediately rejects x and further lowers the acceptance probability, a contradiction against the choice of P*. Since P* is classical, all the computation paths of V have nonnegative amplitudes, which cause only non-destructive interference. This indicates that P* cannot annihilate any existing computation path of V. Upon a request for a nondeterministic choice, P* must return one of the valid nondeterministic choices; otherwise, V rejects immediately. With the series of nondeterministic choices made by P*, if V rejects x with probability less than 1 − ǫ, then our simulation implies that M also rejects x with probability less than 1 − ǫ. This is a contradiction against our assumption. Hence, V rejects x with probability at least 1 − ǫ. Therefore, (P, V) is a classical-prover (1 − ǫ, 1 − ǫ)-QIP system for L. ✷

In the above proof, we cannot replace a classical prover by a quantum prover, mainly because a certain quantum prover may fool the aforementioned verifier by (i) returning a superposition of the nondeterministic choices instead of choosing one of them and (ii) using negative amplitudes to make the verifier's quantum simulation destructive. At the end of this section, we shall present a QIP protocol with classical provers for the non-regular language Center = {x1y | x, y ∈ {0, 1}*, |x| = |y|}, which is known to be in AM(2pfa) but not in AM(2pfa, poly-time) [6]. In the QIP protocol described in the next proof, an honest prover signals the location of the center bit of a given input and then a verifier tests the correctness of the location by employing the quantum Fourier transform (QFT, in short) in a fashion similar to [11]. Recall that an interaction in a QIP protocol consists of a verifier's transition, a projection measurement, and a prover's move.

Lemma 3.3 For any ǫ ∈ (0, 1), Center ∈ QIP_{1,1−ǫ}(2qfa, poly-time, c-prover).

Proof. Let ǫ be any error bound in the real interval (0, 1) and set N = ⌈1/ǫ⌉. In what follows, we shall define a QIP protocol that witnesses the membership of Center in QIP_{1,1−ǫ}(2qfa, poly-time, c-prover).
Let Σ = {0, 1} be our input alphabet and let Γ = {#, 1} be our communication alphabet. Our QIP protocol (P, V) comprises four phases. The formal description of the behavior of V is given in Table 1.

1) In the first phase, the verifier V checks whether |x| is odd by moving the tape head toward $ while switching between the two inner states q_0 and q_1. To make its moves deterministic during this phase, V forces a prover to return only the blank symbol # at every step by entering a rejecting state whenever the prover sends back a non-blank symbol. When |x| is odd, V enters the inner state q_3 after moving its tape head back to ¢. Hereafter, we consider only the case where the input x has an odd length.

2) In the second phase, V moves its tape head rightward, sending # to a prover, until V receives 1 from the prover. On receiving 1 from the prover, V rejects x unless its tape head is currently scanning 1 on the input tape. Otherwise, the third phase starts. During the third and fourth phases, whenever the prover changes the communication symbol 1 back to #, V immediately rejects the input.

3) Assume that the tape head is now scanning 1. In the third phase, the computation splits into N parallel branches by applying V_1|q_3⟩|1⟩. This step is called the first split and it generates the N distinct inner states r_{1,0}, r_{2,0}, ..., r_{N,0} with equal amplitudes 1/√N. The tape head then moves deterministically toward $ in the following manner: along the jth computation path (1 ≤ j ≤ N) associated with the inner state r_{j,0}, the tape head idles for 2(N − j) steps in each tape cell before moving to the next one, cycling through auxiliary inner states as shown in the displayed diagram, in which each number over an arrow indicates the direction of the tape head. When the tape head reaches $, it steps back one cell by applying V_$|r_{j,0}⟩|1⟩ and V_$|s′_{j,0}⟩|1⟩, and then starts the fourth phase.

4) During the fourth phase, the tape head along the jth computation path keeps moving leftward, idling in each cell for j steps and changing inner states in the same displayed manner, until the tape head reaches ¢. At ¢, the computation splits again into N parallel branches (the second split) by applying the QFT V_¢|s_{j,0}⟩|1⟩, yielding either the accepting inner state t_N or one of the rejecting inner states in {t_j | 1 ≤ j < N}.

From Table 1, it is not difficult to check that V is indeed well-formed (namely, U^x_δ is unitary for every x ∈ Σ*). The honest prover P should return 1 exactly at the time when V scans the center bit of an input string and at the times when V sends # to P during the third and fourth phases. At any other step, P should apply the identity operator. Now, we shall check the completeness and soundness of the obtained QIP system (P, V) for Center. First, consider a positive instance x, which is of the form y1z for certain strings y and z of the same length, say, n. Since the honest prover P signals just before V reads the center bit 1 of x, the first split (1/√N)Σ_j|r_{j,0}⟩|#⟩ occurs at the middle of x during the third phase (more precisely, exactly after n steps of V from the start of the second phase), after reading ¢y1. Along the jth computation path (1 ≤ j ≤ N) associated with the inner state r_{j,0} chosen at the first split, V idles for 2n(N − j) steps while reading z and also idles for 2nj steps while reading the whole input backward. Overall, the idling time totals 2n(N − j) + 2nj = 2nN steps, which is independent of j.
Hence, all the N computation paths created at the aforementioned first split have the same length, and thus the superposition of global configurations just prior to the second split becomes (1/√N) Σ_{j=1}^{N} |s_{j,0}, 0⟩|#⟩|Ψ⟩ for an appropriate quantum state |Ψ⟩ in the Hilbert space P associated with the prover's private tape. The QFT given by V_¢|s_{j,0}⟩|1⟩ = (1/√N) Σ_{l=1}^{N} exp(2πıjl/N)|t_l⟩|#⟩ then makes all the global configurations converge to the verifier's visible accepting configuration |t_N⟩|#⟩; that is, since Σ_{j=1}^{N} exp(2πıjl/N) vanishes unless l = N, the protocol (P, V) accepts x with probability 1.

On the contrary, suppose that x is a negative instance of the form x = y0z with |y| = |z| = n. Consider the second, third, and fourth phases. To minimize the rejection probability, a dishonest prover P* must send the symbol 1 just before V scans 1 on the input tape during the second phase, and then P* must maintain 1 because, otherwise, V immediately rejects x; moreover, there is no way for a classical prover to pass both 1 and # in the form of a superposition to deceive the verifier. Let us assume that the eth symbol of x is 1 and that P* sends 1 during the eth interaction, where 1 ≤ e ≤ 2n + 1. Obviously, e ≠ n + 1 because the center bit of x is 0. Consider the first split caused by applying V_1|q_3⟩|1⟩ = (1/√N) Σ_{j=1}^{N} |r_{j,0}⟩|#⟩. For each index j ∈ [1, N]_Z, let p_j be the computation path following the jth branch that starts with the inner state r_{j,0} generated at the first split. Along this computation path p_j, the idling time totals 2(|x| − e)(N − j) + 2nj = 2(n + 1 − e)(N − j) + 2nN. Since 1 ≤ j ≤ N and e ≠ n + 1, two computation paths p_j and p_{j′} for any distinct indices j and j′ must have different lengths. Just before the second split, along the jth computation path, we obtain a quantum state (1/√N)|s_{j,0}, 0⟩|#⟩|Ψ_j⟩ + |∆_j⟩, where |∆_j⟩ does not contain |s_{j,0}⟩. At the second split, the QFT further generates N parallel branches (1/N) Σ_{l=1}^{N} exp(2πıjl/N)|t_l, 0⟩|#⟩|Ψ_j⟩ + |∆′_j⟩ + |∆″⟩, where |∆′_j⟩ is obtained from |∆_j⟩ by the QFT and |∆″⟩ is an appropriate quantum state not containing |t_N⟩. Thus, at most one of the computation paths can reach |t_N, 0⟩|#⟩ at any given time. Hence, the probability of V reaching such an accepting configuration along a single path is no more than 1/N². Since there are N computation paths {p_j}_{1≤j≤N} generated at the first split and they cannot interfere with one another, the overall acceptance probability is at most N × (1/N²) = 1/N. Since V's computation paths always end in certain halting states, it follows that V rejects x with probability ≥ 1 − 1/N ≥ 1 − ǫ. ✷
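The interference argument in the above proof can be checked numerically. The sketch below is a toy calculation built from the stated idling-time formula, not part of the formal protocol: it accumulates, per arrival time, the amplitude 1/N that each computation path contributes to the accepting state t_N. With the honest signal position e = n + 1, all N paths share one arrival time and the acceptance probability is 1; any e ≠ n + 1 makes the arrival times pairwise distinct, capping the probability at 1/N.

```python
import numpy as np

def acceptance_probability(N, n, e):
    """Toy model of the Lemma 3.3 analysis: path j idles for
    2*(n+1-e)*(N-j) + 2*n*N steps; the QFT gives each path amplitude
    (1/sqrt(N)) * exp(2*pi*i*j*N/N) / sqrt(N) = 1/N on |t_N>, and paths
    interfere only when they arrive at the same time."""
    amp_at_time = {}
    for j in range(1, N + 1):
        arrival = 2 * (n + 1 - e) * (N - j) + 2 * n * N
        amp_at_time[arrival] = amp_at_time.get(arrival, 0.0) + 1.0 / N
    return sum(abs(a) ** 2 for a in amp_at_time.values())

N, n = 8, 5
print(acceptance_probability(N, n, e=n + 1))  # honest center bit: 1.0
print(acceptance_probability(N, n, e=3))      # wrong position: 8*(1/8)**2 = 0.125
```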
What If a Verifier Reveals Private Information?

In Dwork-Stockmeyer IP systems [6], the prover's view of the verifier's computation is limited to a small window (i.e., the communication cell), and the strength of a prover's strategy hinges on the amount of information that a verifier is willing to reveal to the prover through this window. Let us consider a situation, in their IP system, in which a verifier never alters the communication cell. Since the behavior of a 2pfa verifier depends not only on messages from a prover but also on its internal random choices (i.e., its coin flips), no prover can gain more than the information on the number of the verifier's moves, and therefore any prover knows little about the verifier's actual configurations. In Babai's Arthur-Merlin proof systems [3] (also known as "public-coin" IP systems [7]), on the contrary, the verifier must always pass along the information on his next move resulting from his internal random choices, and such information suffices for the mighty prover to keep track of the verifier's configurations. Dwork and Stockmeyer [6] defined AM(restriction) as a variant of their original IP systems by requiring their verifiers to publicly reveal the next inner states and tape head directions determined by internal coin flips. Here, we shall consider a straightforward quantum analogue of the above "public-coin" IP systems and investigate their language recognition power. In our QIP system, we demand that the verifier reveal through the communication cell his choice of non-halting inner state as well as his tape head direction at every step. Formally, we define a public QIP system as follows, and we sometimes call the original QIP systems private QIP systems by way of comparison. In particular, when the verifier V is a 1qfa, we omit the information on the tape head direction d from the communication symbol ξ = (q′, d) in Definition 4.1, since V always moves its tape head to the right (i.e., d = +1) and the information on d is obviously redundant. To emphasize the "publicness" of this new system, we use the specific notation public.** For instance, QIP(2qfa, public) indicates the collection of languages recognized by public QIP systems with 2qfa verifiers. By direct analogy with AM(2pfa), we might alternatively write QAM(2qfa) for QIP(2qfa, public). In Definition 4.1, since there is no restriction on provers, all public QIP systems with 1qfa verifiers are also private QIP systems with the same verifiers. It therefore holds, for example, that QIP(1qfa, public) ⊆ QIP(1qfa) and QIP(2qfa, public) ⊆ QIP(2qfa). We shall further demonstrate the power of public QIP systems.

**Another variant of public QIP systems may require that U^x_δ|q, k, γ⟩ satisfy the same equality only for non-halting states q′, instead of q. See [15].

Now, we shall concentrate on the language class QIP(1qfa, public). Unlike 1QFA ⊆ QIP(1qfa), the containment 1QFA ⊆ QIP(1qfa, public), which seems to hold naturally at first glance, is not yet known. The difficulty in proving this containment is caused by the publicness condition of public QIP systems: the verifier must announce its next move to a prover, but in doing so he may allow the prover to make the verifier's local system entangled with the prover's local system, and we do not know how to cope with this entanglement. Despite the publicness condition, we can still demonstrate that the power of QIP(1qfa, public) goes beyond 1QFA. Let us consider the language Zero = {w0 | w ∈ {0, 1}*}, which is known to reside outside of 1QFA [11]. In the next lemma, we shall prove that Zero has a public QIP system with a 1qfa verifier.

Lemma 4.2 Zero ∈ QIP_{1,1}(1qfa, public).

The following proof exploits the prover's ability to signal the location of the rightmost bit 0 of an instance of Zero. To simplify the description of the 1qfa verifier V = (Q, Σ ∪ {¢, $}, q_0, δ, Q_acc, Q_rej), we abbreviate as q any communication symbol (q, 1) with q ∈ Q, because V's tape head direction is always +1. The protocol of V is described in the following; see Table 2 for the formal description of V's (Q × Γ)-transitions. Let x = yb be any input string, where b ∈ {0, 1}. The verifier V stays in the initial state q_0, publicly announcing q_0 (i.e., sending the communication symbol q_0 to a prover), until the prover returns #. Whenever V receives #, he immediately rejects x by entering q_rej,−1 (after applying either V_$|q_0⟩|#⟩ or V_1|q_0⟩|#⟩) if its currently scanned symbol is different from 0.
If, on the contrary, V is scanning 0, then he waits for the next tape symbol by entering q_1. If the next symbol is $, then he accepts x after applying V_$|q_1⟩|q_1⟩; otherwise, he rejects x by entering q′_rej,i (after applying either V_0|q_1⟩|q_i⟩ or V_1|q_1⟩|q_i⟩). Our honest prover P does not alter the communication cell until V reaches the right end of ¢y, and P must return # just before V reads the symbol b so that V can apply V_b|q_1⟩|#⟩. It still remains to prove that (P, V) recognizes Zero with certainty. Consider the case where our input x is of the form y0 for a certain string y. Since x is in Zero, the honest prover P returns # just after V reads the rightmost symbol of ¢y. This information helps V locate the end of y. Moving its tape head rightward, V confirms that the next scanned symbols are 0$ and then enters an accepting inner state (either q_acc,0, q_acc,1, or q_acc,−1) with probability 1. On the contrary, assume that x = y1. Clearly, the best adversary P* needs to return either q_0 or # (or a superposition of them). If P* keeps returning q_0, then V eventually rejects x, which increases the rejection probability. Since V's computation is essentially deterministic, this strategy only decreases P*'s chance of cheating. To make the best of the adversary's strategy, P* must return the communication symbol # just before V scans a 0. Nonetheless, when P* returns #, V applies V_0|q_0⟩|#⟩ and then applies V_0|q_1⟩|q_i⟩ or V_1|q_1⟩|q_i⟩, where i ∈ {0, ±1} and q_−1 = #. Obviously, this leads V to a rejecting inner state with certainty. Therefore, the QIP system (P, V) recognizes Zero with certainty. ✷
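Since V's computation in this protocol is essentially deterministic once the prover's behavior is fixed, the verifier's decision logic can be sketched classically. In the toy function below, signal_pos (a hypothetical encoding of the prover's strategy, introduced only for illustration) is the step at which the prover first returns #; only the honest choice, signaling at the final 0, leads to acceptance.

```python
def zero_verifier(x: str, signal_pos: int) -> bool:
    """Accept iff the prover's # arrives exactly when V scans the final 0 of x.
    Mirrors Lemma 4.2: V idles in q_0 until #, checks that the scanned symbol
    is 0, then accepts only if the next symbol is the right endmarker $."""
    for i, symbol in enumerate(x):
        if i == signal_pos:                 # prover returns # at this step
            return symbol == "0" and i == len(x) - 1
    return False                            # prover never signaled: reject

assert zero_verifier("1010", signal_pos=3)                  # honest prover on y0
assert not any(zero_verifier("1011", p) for p in range(4))  # no strategy works on y1
```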
It follows from Lemma 4.2 that QIP(1qfa, public) is powerful enough to contain certain languages that cannot be recognized by 1qfa's alone. It is also possible to show that QIP(1qfa, public) contains all languages recognized by 1qfa's whose transition amplitudes are limited to {0, 1}. Such 1qfa's are known as 1-way (deterministic) reversible finite automata (1rfa's, in short) [1]. Let 1RFA denote the collection of all languages recognized by 1rfa's. As Ambainis and Freivalds [1] showed, 1RFA is characterized exactly as the collection of all languages that can be recognized by 1qfa's with success probability ≥ 7/9 + ǫ for certain constants ǫ > 0.

Proposition 4.3 1RFA ⊆ QIP_{1,1}(1qfa, public) ⊈ 1QFA.

Proof. Firstly, we shall show that 1RFA is contained within QIP_{1,1}(1qfa, public). Take an arbitrary set L recognized by a 1rfa M = (Q, Σ ∪ {¢, $}, q_0, δ_M, Q_acc, Q_rej). Henceforth, we shall construct a public (1, 1)-QIP system (P, V) that "mimics" a computation of M. The desired 1qfa verifier V = (Q′, Σ ∪ {¢, $}, Γ, δ, q_0, Q′_acc, Q′_rej) behaves as follows. Let Q′_acc = Q_acc, Q′_rej = Q_rej ∪ {q_rej,p,q | p ∈ Q_non, q ∈ Q, p ≠ q}, and Q′ = Γ = Q ∪ Q′_rej, provided that the q_rej,p,q's are all fresh symbols not in Q. Assume that V is in inner state p, scanning symbol b on the input tape. Whenever M changes its inner state from p to q after scanning b, V does the same while revealing its next inner state q to a prover. As soon as V finds that the communication symbol has been intentionally altered by the prover, V immediately rejects the input. This process forces any prover to leave the content of the communication cell unaltered. Table 3 gives the list of (Q × Γ)-transitions that induces V's strategy δ. It is clear from the list that δ is well-formed and that the publicness condition for V is met. Finally, the honest prover P is a prover who does not alter any communication symbol; that is, P applies only the identity operator at every step.

Table 3: (Q × Γ)-transitions {V_σ}_{σ∈Σ} of V for L, with b ∈ Σ ∪ {$} and p, q ∈ Q. All inner states q_rej,p,q are rejecting states.

On input x ∈ Σ*, the QIP system (P, V) accepts x with certainty if x ∈ L, since V exactly simulates M with the help of the honest prover P. Let us consider the opposite case, where x ∉ L. It is easy to see that the best strategy for a dishonest prover P* is to keep every communication symbol unchanged, because any alteration of the communication symbols causes V to reject x immediately and lowers V's acceptance probability. Against such a prover P*, V obviously rejects x with certainty because, in this case, V's final decision is not influenced by the communication symbols. Therefore, (P, V) recognizes L with certainty. Since L is arbitrary, we obtain the desired containment 1RFA ⊆ QIP_{1,1}(1qfa, public). Secondly, the separation between 1QFA and QIP_{1,1}(1qfa, public) immediately follows from Lemma 4.2 together with the fact that Zero is not in 1QFA [1]. Moreover, since 1RFA ⊆ 1QFA and QIP_{1,1}(1qfa, public) ⊈ 1QFA, we can conclude that 1RFA ≠ QIP(1qfa, public). This completes the proof. ✷

Next, we shall examine public QIP systems whose verifiers are 2qfa's. Similarly to Theorem 3.2(2), we can claim the two separations stated as Theorem 4.4. A language that separates the public QIP systems with 2qfa verifiers from AM(2pfa, poly-time) is Upal = {0^n 1^n | n ∈ N}. Since Upal resides outside of AM(2pfa, poly-time) [6] and Upal belongs to 2QFA(poly-time) [11], the separation 2QFA(poly-time) ⊈ AM(2pfa, poly-time) follows immediately. This separation, however, does not directly imply Theorem 4.4 because, for a technical reason similar to the case of 1qfa verifiers, it is not known whether 2QFA(poly-time) is included in QIP(2qfa, public, poly-time) or even in QIP(2qfa, public, poly-time, c-prover). Therefore, we still need to prove, in the next lemma, that Upal indeed belongs to both QIP(2qfa, public, poly-time) and QIP(2qfa, public, poly-time, c-prover).

Lemma 4.5 For any ǫ ∈ (0, 1), Upal belongs to both QIP_{1,1−ǫ}(2qfa, public, poly-time) and QIP_{1,1−ǫ}(2qfa, public, poly-time, c-prover).

Proof. In what follows, we shall prove that Upal belongs to QIP_{1,1−ǫ}(2qfa, public, poly-time); the proof that Upal ∈ QIP_{1,1−ǫ}(2qfa, public, poly-time, c-prover) is similar. Let N = ⌈1/ǫ⌉. Let us define our public QIP system (P, V). The honest prover P always applies the identity operation at every step. The verifier V acts as follows. In the first phase, it determines whether the input x is of the form 0^m 1^n. The rest of the verifier's algorithm is similar in essence to the one given in the proof of Lemma 3.3. In the second phase, V generates N parallel branches with equal amplitudes 1/√N by entering N different inner states, say, r_1, r_2, ..., r_N. In the third phase, along the jth branch starting with r_j (j ∈ [1, N]_Z), the tape head idles for N − j steps at each tape cell containing 0 and idles for j steps at each cell containing 1 until the tape head finishes reading the 1s. In the fourth phase, V applies the QFT to collapse all computation paths into a single accepting inner state if m = n. Otherwise, the computation paths do not interfere with one another, since the tape head reaches $ at different times along different computation paths. During the first and second phases, V publicly reveals the information (q′, d′) on his next move and then checks whether the prover has rewritten it with a different symbol.
To constrain the prover's strategy, V immediately enters a rejecting inner state if the prover alters the content of the communication cell. The analysis of the completeness and soundness conditions of the QIP protocol (P, V) is essentially the same as in the proof of Lemma 3.3. In conclusion, Upal is in QIP_{1,1−ǫ}(2qfa, public, poly-time), as requested. ✷

How Many Interactions are Necessary or Sufficient?

In the previous two sections, despite heavy restrictions on QIP systems, we have witnessed that quantum interactions between a prover and a qfa verifier remarkably enhance the qfa's ability to recognize certain types of languages. Since our basic QIP model forces a verifier to communicate with a prover at every move, it is natural to ask whether such interactions are truly necessary. To answer this question, we shall remodel QIP systems so that verifiers are allowed to communicate with provers only when the verifiers need help from the provers. Throughout this section, we shall shed new light on the number of interactions between a prover and a verifier in our new QIP systems, and we shall carefully examine how many interactions are necessary or sufficient to accomplish a given language recognition task.

Interaction-Bounded QIP Systems

To study the number of interactions between a prover and a verifier, we wish to modify our basic QIP systems so that a prover alters a communication symbol in the communication cell exactly when the verifier asks the prover to do so. To make such a modification, we first look at the IP systems of Dwork and Stockmeyer [6]. In their system, a verifier is allowed to compute silently at any chosen time with no communication with a prover; in other words, the verifier interacts with the prover only when the prover's help is needed, and the prover patiently awaits the next interaction without conducting any computation. We interpret the verifier's silent mode as follows: if the verifier V does not wish to communicate with the prover, he writes a special communication symbol in the communication cell to signal to the prover that he needs no help from the prover. Simply, we use the blank symbol # to signal that the prover is prohibited from altering the content of the communication cell. We formally introduce a new QIP system in which no malicious prover P is permitted to cheat a verifier by willfully tampering with the symbol # in the communication cell. Since the verifier is governed by quantum mechanics, if a malicious prover willfully modifies #, the verifier's computation may be significantly hampered, and the verifier may have no means to prevent such an action of the prover because of the unitarity requirement on the verifier's strategy δ. To describe a "valid and legitimate" prover P independently of the choice of verifiers, we require the prover's strategy P_x = {U^x_{P,i}}_{i∈N⁺}, acting on the prover's visible configuration space M ⊗ P on each input x, to do nothing (namely, to apply the identity operator) whenever the communication cell contains the blank symbol #. To allow a prover P to maintain the unitarity of his strategy U^x_{P,i}, we also permit the prover to modify his private information γ (including the content of the communication cell) when γ never appears in a computation with non-zero amplitude. To formulate this condition independently of the verifier, we need to introduce a sequence {S_i}_{i∈N} of subsets of ∆^∞_fin.
This sequence {S_i}_{i∈N} is defined recursively as follows: S_0 = {#^∞}, and, for i ∈ N⁺, S_i is the collection of all elements y ∈ ∆^∞_fin such that, for a certain element z ∈ S_{i−1} and certain communication symbols σ, τ ∈ Γ, the superposition U^x_{P,i}|σ⟩|z⟩ contains the visible configuration |τ⟩|y⟩ with non-zero amplitude, namely, |⟨y|⟨τ|U^x_{P,i}|σ⟩|z⟩| > 0. Now, our requested condition is expressed as follows.

(*) For every i ∈ N⁺ and every y ∈ S_{i−1}, U^x_{P,i}|#⟩|y⟩ = |#⟩|y⟩.

Any prover P who satisfies Condition (*) is briefly referred to as committed.†† A trivial example of a committed prover is the prover P_I, who always applies the identity operator. A committed prover lets the verifier safely make a number of moves without any "direct" interaction with him. Observe that this new QIP model with committed provers is in essence closer to the circuit-based QIP model of Watrous [17] than the original QIP model is.

††There are a number of possible variants, one of which requires that, for every i ∈ N⁺ and every y ∈ ∆^∞_fin, U^x_{P,i}|#⟩|y⟩ = |#⟩|ψ_{x,y,i}⟩ holds for a certain quantum state |ψ_{x,y,i}⟩.
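The recursion defining {S_i}_{i∈N} and Condition (*) can be illustrated on a small finite-dimensional toy model. In the sketch below, visible configurations are pairs of a communication symbol and a (truncated) tape content; the alphabet, the tape set, and the tested unitaries are invented solely to exercise the definition.

```python
import numpy as np

GAMMA = ["#", "a"]                 # toy communication alphabet
TAPES = ["##", "a#", "#a"]         # truncated stand-ins for Delta^inf_fin
BASIS = [(s, y) for s in GAMMA for y in TAPES]
IDX = {b: i for i, b in enumerate(BASIS)}

def next_S(U, S_prev):
    """S_i: tape contents y reachable with non-zero amplitude from some
    visible configuration (sigma, z) with z in S_{i-1}."""
    return {y for (sigma, z) in BASIS if z in S_prev
              for (tau, y) in BASIS
              if abs(U[IDX[(tau, y)], IDX[(sigma, z)]]) > 1e-12}

def is_committed(unitaries):
    """Condition (*): each U_i fixes |#>|y> for every y in S_{i-1}."""
    S = {"##"}                     # S_0 = {#^inf}, truncated to two cells
    for U in unitaries:
        for y in S:
            v = np.zeros(len(BASIS)); v[IDX[("#", y)]] = 1.0
            if not np.allclose(U @ v, v):
                return False
        S = next_S(U, S)
    return True

identity = np.eye(len(BASIS))
print(is_committed([identity, identity]))   # the trivial committed prover P_I: True
```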
For convenience, we name our new model an interaction-bounded QIP system and use the new notation QIP^#(1qfa) for the class of all languages recognized with bounded error by interaction-bounded QIP systems with 1qfa verifiers. Note that standard QIP systems can be naturally transformed into interaction-bounded QIP systems by (possibly) changing the blank symbol appropriately into a fresh non-blank symbol. This simple fact implies that QIP^#(1qfa) contains QIP(1qfa), which equals REG [16]. We are now ready to clarify the meaning of the number of interactions in an interaction-bounded QIP system (P, V). Let us consider any non-halting global configuration in which V on input x communicates with a prover (i.e., writes a non-blank symbol in the communication cell). For convenience, we call such a global configuration a query configuration and, at such a query configuration, V is said to query a symbol to a prover. Recall from Section 2.1 the definition of global computation paths. The number of interactions in a given computation means the maximum, over all global computation paths χ of the computation of (P, V), of the number of query configurations of non-zero amplitude along χ. Let L be any language and assume that (P, V) recognizes L. We say that the QIP protocol (P, V) makes i interactions on input x if i equals the number of interactions during the computation of (P, V) on x. Furthermore, we call the QIP system (P, V) k-interaction bounded‡‡ if (i) for every x ∈ L, the protocol (P, V) makes at most k interactions on the input x and (ii) for every x ∉ L and every committed prover P*, the protocol (P*, V) makes at most k interactions on the input x. Finally, let QIP^#_k(1qfa) denote the class of all languages recognized with bounded error by k-interaction bounded QIP systems with 1qfa verifiers. Since verifiers can control the number of queries, it is not difficult to show that 1QFA ⊆ QIP^#_k(1qfa) ⊆ QIP^#_{k+1}(1qfa) ⊆ QIP^#(1qfa) for any constant k ∈ N. In particular, QIP^#_0(1qfa) = 1QFA holds.

‡‡When x ∈ L and P* is a malicious prover, Condition (i) does not impose any restriction on the number of interactions of (P*, V) on x. Instead of Conditions (i)-(ii), we could adopt a much stronger condition, for example: for every x and every committed prover P*, (P*, V) makes at most k interactions. Such a stronger condition would actually simplify the proof of, say, Proposition 5.4.

As the main theorem of this section, we want to show in Theorem 5.2 that (i) one interaction helps a verifier but (ii) one interaction does not achieve the full power of QIP^#(1qfa); in other words, QIP^#_0(1qfa) ⊊ QIP^#_1(1qfa) ⊊ QIP^#(1qfa). Theorem 5.2 is a direct consequence of Lemma 5.3 and Proposition 5.4, and its proof proceeds as follows. For the first inequality of the theorem, we take the language Odd, defined as the set of all binary strings of the form 0^m 1z, where m ∈ N, z ∈ {0, 1}*, and z contains an odd number of 0s. Since Odd ∉ 1QFA [2], it suffices to show in Lemma 5.3 that Odd belongs to QIP^#_1(1qfa). For the second inequality of the theorem, recall the regular language Zero = {x0 | x ∈ {0, 1}*}. We shall demonstrate in Proposition 5.4 that QIP^#_1(1qfa) does not include Zero. Since REG ⊆ QIP^#(1qfa) by Lemma 5.1, Zero belongs to QIP^#(1qfa), and therefore we obtain the desired separation of QIP^#(1qfa) from QIP^#_1(1qfa). It thus suffices to prove Lemma 5.3 and Proposition 5.4. As the first step, we prove Lemma 5.3, which asserts that Odd ∈ QIP^#_1(1qfa).

Proof. We shall design a 1-interaction bounded QIP system (P, V) that recognizes Odd. Let Σ = {0, 1} and Γ = {#, a} be, respectively, the input alphabet and the communication alphabet for (P, V). Let Q = {q_0, q_1, q_2, q_acc, q_rej,0, q_rej,1} be the set of V's inner states with Q_acc = {q_acc} and Q_rej = {q_rej,0, q_rej,1}. The protocol of the verifier V is described as follows; Table 4 gives a formal description of V's (Q × Γ)-transitions. Making no query to a committed prover, V continues to read input symbols until its tape head scans a 1 on the input tape. When V reads this 1, he queries the symbol a to the committed prover by applying V_1|q_0⟩|#⟩ = |q_0⟩|a⟩. If the prover returns a, then V immediately rejects the input. Otherwise, V checks whether the substring of the input after the 1 includes an odd number of 0s. This check can be done by V alone by applying V_b|q_1⟩|#⟩ and V_b|q_2⟩|#⟩ for b ∈ Σ. The role of the honest prover P is to work as an eraser, erasing any non-blank symbol written in the communication cell, to help the verifier safely make the transition from the inner state q_0 to q_1. Note that, without the eraser, V alone cannot make such a transition because of the unitarity requirement on V's strategy. To be more precise, whenever P receives the symbol a from V, P returns the symbol # and copies a into the first blank cell of his private tape. Technically speaking, to make P unitary, we need to map the other visible configurations |#⟩|y⟩, for certain y's that never appear on P's private tape, to superpositions of the form |a⟩|φ_{x,y}⟩ with appropriate vectors |φ_{x,y}⟩. With the right implementation, we can make P a committed prover. Next, we shall show that (P, V) recognizes Odd with probability 1. Let x be any binary input. First, consider the case where x is in Odd. Assume that x is of the form 0^m 1y, where y contains an odd number of 0s. The honest prover P erases the a that is sent from V when V reads the 1. This helps V shift to the next mode of checking for an odd number of 0s. Since V can check whether y includes an odd number of 0s without any communication with the prover, V eventually accepts x with certainty. Next, assume that x ∉ Odd. In the special case where x ∈ {0}*, V can reject x with certainty with no query to a committed prover. Now, let us focus our attention on the remaining case where x contains a 1. Assume that x is of the form 0^m 1y, where y contains an even number of 0s.
The verifier V sends a to a committed prover when he reads the 1. Note that V's protocol is essentially deterministic. To maximize the acceptance probability of V, a malicious prover needs to return # to V since, otherwise, V immediately rejects x in a deterministic fashion. Since V can check whether y includes an odd number of 0s without making any query to the prover, for any committed prover P*, (P*, V) rejects x with certainty. Since the number of interactions made by the protocol is obviously at most 1, Odd therefore belongs to QIP^#_1(1qfa), as requested. ✷
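Because V's moves in this protocol are essentially deterministic, the decision procedure just analyzed admits a classical sketch. In the toy function below, prover_erases (a hypothetical flag standing in for the committed prover's choice) records whether the prover overwrites the queried a with #; this is an illustration, not the formal transition table of Table 4.

```python
def odd_verifier(x: str, prover_erases: bool) -> bool:
    """Sketch of Lemma 5.3: query once at the first 1, then check the parity
    of 0s after it. A prover that fails to erase the query causes rejection."""
    first_one = x.find("1")
    if first_one == -1:
        return False                     # x in {0}*: reject with no query
    if not prover_erases:                # prover returned 'a': reject
        return False
    zeros_after = x[first_one + 1:].count("0")
    return zeros_after % 2 == 1

assert odd_verifier("0010", prover_erases=True)         # in Odd: accepted
assert not odd_verifier("0010", prover_erases=False)    # dishonest prover: rejected
assert not odd_verifier("00100", prover_erases=True)    # even number of 0s: rejected
```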
As the second step, we need to prove Proposition 5.4 regarding the language Zero = {x0 | x ∈ {0, 1}*}. This regular language Zero is known to be outside of 1QFA [11]; in other words, Zero ∉ QIP^#_0(1qfa), since QIP^#_0(1qfa) = 1QFA. Proposition 5.4 expands this impossibility result and shows that Zero is not even in QIP^#_1(1qfa). Since the proof of Proposition 5.4 is quite involved, it is given in the subsequent subsection.

Proof of Proposition 5.4

Our proof of Proposition 5.4 proceeds by way of contradiction. Toward a contradiction, we start by assuming Zero ∈ QIP^#_1(1qfa) and take a 1-interaction bounded QIP system (P, V) with a 1qfa verifier V that recognizes Zero with error probability at most 1/2 − η for a certain constant η > 0. Our goal is to pick a suitable string ỹ0 and an appropriate prover P′_y and to prove that the associated protocol (P′_y, V) accepts ỹ01^m with probability at least 1/2; since ỹ01^m ∉ Zero, this contradicts our assumption. For this purpose, we shall employ a technical tool, called the "query weight," which is the sum of all the squared magnitudes of the amplitudes of the query configurations appearing in a computation of a protocol (P′, V) on a given input. However, different choices of provers may result in different query weights, as seen in the second and third computation trees shown in Figure 2. To cope with this unfavorable situation, we shall introduce another computation model, which does not depend on the choice of provers, and we shall prove that this model gives an upper bound on the query weight induced by any committed prover. Using this model and its query weight, we shall finally select the desired string ỹ0 and the desired prover P′_y. Now, let Q be the set of V's inner states and let Γ be the communication alphabet. Write Σ for our input alphabet {0, 1}. As a technicality, we assume without loss of generality that V never queries at the very time when it enters a halting inner state, that is, the time when V's tape head scans $. First, we introduce two useful notions: the "1-interaction condition" and the "query weight." Fix an input x ∈ Σ* and let P′ be any committed prover. For readability, we use the notation Comp_V(P′, x) to denote the computation of the QIP protocol (P′, V) on input x when P′ takes the strategy P′_x. A committed prover P′ is said to satisfy the 1-interaction condition at x with V if the corresponding protocol (P′, V) makes at most 1 interaction. Note that, when P′ satisfies the 1-interaction condition at x with V, for any query configuration ξ of non-zero amplitude along a computation path χ in Comp_V(P′, x), there exists no other query configuration in Comp_V(P′, x) between the initial configuration and this given configuration ξ along the computation path χ. Let C^(1)_{x,V} be the collection of all committed provers P′ who satisfy the 1-interaction condition at x with V. It is important to note that, whenever a prover in C^(1)_{x,V} answers V with non-blank communication symbols of non-zero amplitude, V must change these symbols back to the blank symbol immediately since, otherwise, V is considered to make a second query in the next round, according to the definition of our interaction-bounded QIP model. Now, we choose any prover P′ in C^(1)_{x,V} and consider its computation Comp_V(P′, x). By introducing an extra projection, we modify Comp_V(P′, x) as follows. Whenever V performs a measurement onto (W_non, W_acc, W_rej), we apply to the communication cell an additional projection that maps onto the Hilbert space spanned by |#⟩. This projection makes all non-blank symbols collapse. The protocol then continues to the next step. By Condition (*) in Section 5.1, observe that the computation obtained by inserting this extra projection operator at every step of V is independent of the choice of committed provers. To express this modified computation of V on x, we use the notation MComp_V(x). Figure 2 illustrates the difference between such a modified computation and two computations generated by two different provers P_1 and P_2. For two strings x, y ∈ Σ*, the query weight wt^{(x)}_V(y) denotes the sum of all the squared magnitudes of the amplitudes of the query configurations appearing in MComp_V(xy) while V's tape head is reading y; we write wt_V(w) when the prefix x is empty.

Figure 2: Example of a modified computation. The leftmost graph depicts the modified computation of V on input x. The two remaining graphs are computations of V on x using different provers P_1 and P_2. The black circles indicate query configurations, whereas the white circles indicate non-query configurations. The query configuration ξ marked black has zero amplitude. The double circle is the place where prover P_2 forces V to generate a new computation path that destructively interferes with an existing path in the modified computation of V.
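The query weight of a modified computation is then a straightforward sum. The toy tabulation below, with invented amplitudes, illustrates how wt^{(x)}_V(y) accumulates the squared magnitudes of only those query configurations visited while the head reads y.

```python
# Each entry: (amplitude, is_query_configuration, head_is_reading_y)
# for configurations appearing in a toy modified computation MComp_V(xy).
configs = [
    (0.5 + 0.0j, True,  False),   # query while still reading x: not counted
    (0.5 + 0.0j, True,  True),    # query while reading y: counted
    (1 / 2**0.5, False, True),    # non-query configuration: not counted
    (0.0 + 0.5j, True,  True),    # query while reading y: counted
]

wt = sum(abs(a) ** 2 for a, is_query, in_y in configs if is_query and in_y)
print(f"query weight wt^(x)_V(y) = {wt}")   # 0.25 + 0.25 = 0.5
```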
Recall our assumption that (P, V) is a 1-interaction bounded QIP system recognizing Zero with success probability at least 1/2 + η. The following lemma shows two properties of the query weights of V.

Lemma 5.5 Let P′ be any committed prover and let x, y be any strings. 1. If P′ ∈ C^(1)_{x,V}, then, for every query configuration ξ of non-zero amplitude in Comp_V(P′, x), any computation path χ in Comp_V(P′, x) ending with ξ appears in MComp_V(x) ending with ξ with the same amplitude. 2. If P′ ∈ C^(1)_{xy,V}, then wt^{(x)}_V(y) is greater than or equal to the sum of all the squared magnitudes of the amplitudes of the query configurations in Comp_V(P′, xy) while V's tape head is reading y.

Proof. (1) Take any committed prover P′ in C^(1)_{x,V}. Let ξ be any query configuration in Comp_V(P′, x) with non-zero amplitude, say, α_ξ. Since α_ξ is not zero in Comp_V(P′, x), there must exist at least one computation path in Comp_V(P′, x) ending with ξ. Let us consider such a computation path, say, χ. Since P′ satisfies the 1-interaction condition, χ cannot contain any query configuration of non-zero amplitude except for the last configuration ξ. By the definition of MComp_V(x), no projection measurement on the communication cell is performed along χ. Hence, all configurations inside χ must also be present in MComp_V(x); thus, χ appears in MComp_V(x). Since χ is arbitrary, all the computation paths in Comp_V(P′, x) ending with ξ, which contribute to the amplitude α_ξ, must appear in MComp_V(x). Therefore, the amplitude of ξ in MComp_V(x) equals α_ξ, as requested.

(2) Assume that P′ ∈ C^(1)_{xy,V}. By (1), for every query configuration ξ of non-zero amplitude in Comp_V(P′, xy), the squared magnitude of the amplitude of ξ in Comp_V(P′, xy) is equal to that of ξ in MComp_V(xy). Note that the converse in general may not be true; that is, there may be a query configuration of non-zero amplitude in MComp_V(x) that never appears in Comp_V(P′, x); Figure 2 illustrates such a case. By summing the squared magnitudes over all query configurations ξ in Comp_V(P′, xy), we immediately obtain (2). ✷

We continue the proof of Proposition 5.4. Let us consider the value ν that is the supremum, over all strings w in Zero, of the query weight of V at w; namely, ν = sup_{w∈Zero}{wt_V(w)}. Observe that 0 ≤ ν ≤ 1, since any query weight lies in the real interval [0, 1]. For readability, we omit the letter V whenever it is clear from the context. Let P_I denote the committed prover applying only the identity operator at every step.

Claim 1 ν > 0.

Proof. Let us assume that ν = 0. From this assumption, it follows that wt(x) = 0 for all x ∈ Zero. To obtain a contradiction, we aim at constructing an appropriate bounded-error 1qfa M that recognizes Zero. Recall that P is an honest committed prover that makes the 1-interaction bounded QIP system (P, V) recognize Zero. Henceforth, we want to assert that even the simple protocol (P_I, V) can recognize Zero. For each input x ∈ Zero, since wt(x) = 0, Lemma 5.5(2) implies that all query configurations in Comp_V(P, x) must have zero amplitude. This implies that, in the superposition of global configurations at each step of the computation of (P, V) on x, the verifier V's next moves cannot be affected by any messages sent out by P. Hence, we can replace P by P_I without changing the outcome of V. Let M's inner states be of the form (q, σ), directly reflecting both V's inner state q and a symbol σ in the communication cell. The desired 1qfa M behaves as follows. On input x, M simulates V on x with the "imaginary" prover P_I by maintaining the content of the communication cell as an integrated part of M's inner states. Now, we claim that M recognizes Zero with bounded error. Let x be any input string. If x is in Zero, then, since any query configuration with the prover P_I has zero amplitude, M correctly accepts x with probability ≥ 1/2 + η. Likewise, if x is not in Zero, then (P_I, V) rejects x with probability ≥ 1/2 + η; thus, M also rejects x with the same probability. Therefore, M recognizes Zero with error probability ≤ 1/2 − η, as requested. Since Zero ∉ 1QFA, we obtain a contradiction, and therefore ν > 0 follows. ✷

Next, we shall construct a committed prover P′ and a string z ∉ Zero that force the protocol (P′, V) to accept z with probability at least 1/2. Let us recall from Section 2.1 the notation P_w, which refers to the strategy {U^w_{P,i}}_{i∈N⁺} of P on input w. Since ν > 0 by Claim 1, for every real number γ ∈ (0, ν], there exists a string w in Zero such that wt(w) ≥ ν − γ. Given any y ∈ Σ*, we set γ_y = min{η²/16(|y| + 1)², ν} and choose the lexicographically minimal string w_y ∈ Zero satisfying wt(w_y) ≥ ν − γ_y. For readability, we abbreviate the string w_y y as ỹ. Moreover, we define a new committed prover P′_y that behaves on input ỹ01^m (= w_y y01^m), where m ∈ N⁺, in the following fashion: P′_y follows the strategy P_{ỹ0} while V's tape head is reading ¢w_y, and then P′_y behaves as P_I while V is reading the remaining portion y01^m$.
Since P satisfies the 1-interaction condition, we obtain P_{ỹ0} ∈ C^(1)_{ỹ0,V}. By its definition, P′_y also belongs to C^(1)_{ỹ0,V}. Regarding the notation p_acc(x, P, V) from Section 2.1, we simply drop "V" and write p_acc(x, P) instead. We then claim the following.

Challenging Open Questions

Throughout Sections 3-5, we have placed various restrictions on the behaviors of verifiers and provers in our qfa-verifier QIP systems, and we have studied how those restrictions affect the language recognition power of the systems. The restricted models considered in Sections 3-5 include classical-prover QIPs, public QIPs, and interaction-bounded QIPs. After this initial study, nonetheless, numerous unsolved questions concerning those QIP systems remain. Hereafter, we give a short list of the important open questions as a guide for future research. (1) The relationships between quantum provers and classical provers are still not entirely clear in the context of qfa-verifier QIP systems, simply because of the soundness condition imposed on the systems. In particular, we expect to see a fundamental separation between QIP(1qfa) and QIP(1qfa, c-prover) as well as between QIP(2qfa, poly-time) and QIP(2qfa, poly-time, c-prover). (2) In general, we need to discover precise relationships between "public-coin" IP systems (i.e., AM systems) and public QIP systems beyond Theorem 4.4. Moreover, for 2qfa verifiers, we may ask whether 2QFA(poly-time) is properly included in QIP(2qfa, public) and whether QIP(2qfa, public, poly-time) differs from QIP(2qfa, poly-time). Similarly, a separation between QIP(2qfa) and QIP(2qfa, public) is also unknown. (3) The new interaction-bounded QIP systems are of special interest for analyzing the roles of interactions between provers and verifiers. For this model, we hope to see that the equality QIP^#(1qfa) = QIP(1qfa) indeed holds. Unsolved so far is the general question of whether k + 1 interactions are more powerful than k interactions. Since a 1qfa verifier is unable to count the number of interactions (or queries), we may not directly generalize the proof of Theorem 5.2 to assert that QIP^#_k(1qfa) ≠ QIP^#_{k+1}(1qfa) for every constant k ∈ N⁺. Nevertheless, we still conjecture that this assertion is true. (4) It is of great interest to seek an algebraic characterization of our qfa-verifier QIP systems. Such a characterization may shed new light on the nature of quantum interactions between two parties.
Degradable gene delivery systems based on Pluronics-modified low-molecular-weight polyethylenimine: preparation, characterization, intracellular trafficking, and cellular distribution

Background: Cationic copolymers consisting of polycations linked to nonionic amphiphilic block polymers have been evaluated as nonviral gene delivery systems, and a large number of different polymers and copolymers of linear, branched, and dendrimeric architectures have been tested for their suitability and efficacy in in vitro and in vivo transfection. However, the discovery of new potent materials still relies largely on empirical approaches rather than on rational design. The authors investigated the relationship between the polymers' structures and their biological performance, including DNA compaction, toxicity, transfection efficiency, and cellular uptake.

Methods: This article reports the synthesis and characterization of a series of cationic copolymers obtained by grafting polyethylenimine with the nonionic amphiphilic polyether surfactant Pluronic®, which consists of hydrophilic ethylene oxide and hydrophobic propylene oxide blocks. Transgene expression, cytotoxicity, localization of plasmids, and cellular uptake of these copolymers were evaluated following in vitro transfection of HeLa cell lines with various individual components of the copolymers.

Results: Pluronics can exhibit biological activity, including effects on DNA cellular uptake, nuclear translocation, and gene expression. Pluronics with a higher hydrophilic-lipophilic balance value lead to homogeneous distribution in the cytoplasm; those with a lower hydrophilic-lipophilic balance value preferentially localize in the nucleus.

Conclusion: This Pluronic-polyethylenimine system may be worth exploring as a component of cationic copolymers for DNA or small interfering RNA/microRNA delivery in the near future.

Introduction

Gene therapy has become a promising strategy for the treatment of many inheritable or acquired diseases that are currently considered incurable. Because of their structural diversity, easy production, nonimmunogenicity, and safety, cationic polymers for the delivery of nucleic acids into cells have recently attracted remarkable interest in the field of nonviral gene therapy. 1,2 The polycation polyethylenimine (PEI) is one of the most effective and widely studied synthetic nonviral gene delivery vectors, and it has been employed for designing DNA delivery vehicles. 3,4 Upon the addition of cationic polymers to DNA, condensed complexes form spontaneously through electrostatic interactions between the positively charged groups of the polycation and the negatively charged phosphate groups of the DNA, resulting in efficient transport of intact DNA into the nucleus. 5-7 Gene delivery using PEI involves condensation of DNA into compact particles, uptake into cells, release from the endosomal compartment into the cytoplasm, and uptake of the DNA into the nucleus. The high transfection efficiency of PEI/DNA complexes has been ascribed to the capacity of PEI to buffer endosomes, which protects DNA from nuclease degradation and facilitates endosomal escape of the PEI/DNA complexes (the "proton sponge" effect). 8,9 However, polyplexes of PEI/DNA have shown low colloidal stability and considerable toxicity in vivo. 10
Meanwhile, because both transfection efficiency and cytotoxicity depend on physicochemical properties such as molecular weight and branching ratio, it is evident that polymer structure significantly influences the efficacy of PEI-based vectors. 9,11 Therefore, the discovery of new potent materials still relies largely on empirical approaches rather than on rational design. Pluronic® block copolymers, which consist of hydrophilic ethylene oxide (EO) and hydrophobic propylene oxide (PO) blocks arranged in a basic A-B-A structure (EO_x-PO_y-EO_x), are amphiphilic molecules that can be used as structural elements of polycation-based gene delivery systems (polyplexes). 4,6 The numbers of hydrophilic EO (x) and hydrophobic PO (y) units can be varied, and the retention of the block copolymer in the organs increases as the length of the hydrophobic PO block increases, that is, as the hydrophilic-lipophilic balance (HLB) value decreases. Kabanov and other researchers 12-14 reported that Pluronic could enhance cell interactions, DNA transport, and transgene expression, and that Pluronics possess the unique ability to incorporate themselves into cell membranes via the hydrophobic poly(propylene oxide) chain. In contrast, the hydrophilic EO chain does not interact with lipid membranes, but it can be used to prevent the binding of other polymers to the membranes. 4 The system is stabilized in dispersion by the EO corona in a manner similar to regular Pluronic micelles. To design an effective gene delivery vector, many efforts have been made to improve transfection efficiency and to reduce cytotoxicity. Optimal efficacy has been observed with Pluronic copolymers that have intermediate lengths of PO chains and relatively short EO segments. 15 The micelles of these copolymers are sufficiently potent in binding cellular membranes, and a relatively high concentration of micelles can be reached in solution. 16 Based on previous studies, the authors prepared a series of cationic copolymers by grafting low-molecular-weight PEI with Pluronics consisting of different ratios of EO and PO. These copolymers were then evaluated as delivery systems for plasmid DNA in vitro. The authors further investigated the structure-cellular uptake relationship of low-molecular-weight PEI modified by Pluronics with different EO/PO ratios and HLB values, which could reduce the cytotoxicity of PEI and accelerate endocytosis of the polyplexes. This study was conducted to formulate a nonviral system for gene delivery and to clarify the relationship between structure and function in intracellular trafficking.
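For orientation on the HLB values referred to above, Griffin's method for nonionic surfactants estimates HLB as 20 times the hydrophilic mass fraction. The sketch below applies it to approximate Pluronic compositions; the EO/PO unit counts are illustrative round numbers, and manufacturer-quoted Pluronic HLB values (such as those in Table 1) follow a different scheme, so the estimate is expected to reproduce only the ordering F68 > P105 > P123 > L61, not the quoted numbers.

```python
# Griffin's HLB estimate for nonionic surfactants: HLB = 20 * M_hydrophilic / M_total.
# EO/PO unit counts below are illustrative approximations, not manufacturer data.
M_EO, M_PO = 44.05, 58.08  # molar masses (g/mol) of one EO and one PO unit

pluronics = {            # (EO units per block x, PO units y), approximate
    "F68":  (76, 29),
    "P105": (37, 56),
    "P123": (20, 69),
    "L61":  (2, 31),
}

for name, (x, y) in pluronics.items():
    m_hydro = 2 * x * M_EO          # two EO blocks in EO_x-PO_y-EO_x
    m_total = m_hydro + y * M_PO
    hlb = 20 * m_hydro / m_total    # Griffin's formula
    print(f"{name}: Griffin HLB ~ {hlb:.1f}")
```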
Materials and methods

Materials

Branched PEI (molecular weight, 2000) was obtained from Sigma-Aldrich (St Louis, MO). The BASF Corporation (Mount Olive, NJ) kindly provided the Pluronics (F68, P123, P105, L61). Bis-(trichloromethyl) carbonate, N-hydroxysuccinimide, triethylamine, acryloyl chloride, anhydrous dichloromethane, and dithiothreitol (DTT) were purchased from Sinopharm Chemical Reagent Co, Ltd (Shanghai, China) and were used without further purification. A luciferase assay system for the physicochemical characterization study and in vitro transfection assay and the pGL-3 control vector with SV-40 promoter and enhancer encoding firefly (Photinus pyralis) luciferase were obtained from Promega (Madison, WI). The Institute of Life Science and Technology of Tongji University in Shanghai, China, kindly provided the quantum dots (QDs).

Synthesis of polymer particle copolymers

PEI (0.10 mmol) was dissolved in 10 mL of anhydrous dichloromethane as solution A, and the predetermined amount of activated Pluronic (0.01 mmol) was dissolved in 10 mL of anhydrous ethanol as solution B. Solutions A and B were then slowly added to a base solution of anhydrous dichloromethane (10 mL) with constant stirring. The reaction mixture was left overnight at room temperature. After completion of the reaction, the conjugate was dialyzed against distilled water at 4°C for 2 days using a Spectra/Por® membrane (Carl Roth GmbH & Co. KG, Karlsruhe, Germany) (molecular weight cut-off, 7000) and was lyophilized.

Characterization of polymer particles (nuclear magnetic resonance/gel permeation chromatography)

The ratio of Pluronic to PEI in the copolymer samples was determined from proton nuclear magnetic resonance (1H-NMR) spectra (Varian, Palo Alto, CA; 300 MHz) using the integral values obtained for the -CH2CH2O- protons of Pluronic and the -CH2NH- protons of PEI. The 1H-NMR analysis was carried out with a 10 mg sample of polymer particles (PPs) dissolved in 0.6 mL of deuterium oxide at room temperature. The molecular weight and molecular weight distribution of the polymers were determined by gel permeation chromatography with multi-angle laser light scattering (Shimadzu LC-20 AD; Shimadzu, Kyoto, Japan) (laser wavelength, 690 nm), using a TSK-GEL G5000PWXL column (Tosoh, Japan) (temperature, 40°C) operated at a flow rate of 0.4 mL/minute. Ammonium acetate (0.2 mol/L) was used as the mobile phase.
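As an illustration of how the graft composition can be extracted from the 1H-NMR integrals, the sketch below compares per-proton intensities of the two signals; all integral values and unit counts are hypothetical placeholders rather than measured data.

```python
# Illustrative calculation of the Pluronic:PEI molar ratio in a graft copolymer
# from 1H-NMR integrals. Integral values below are hypothetical placeholders.
I_EO  = 12.0   # integral of -CH2CH2O-  protons (delta ~3.8 ppm), 4 H per EO unit
I_PEI = 20.0   # integral of PEI backbone protons (delta ~2.7-3.5 ppm), 4 H per unit

n_EO_per_pluronic = 2 * 76        # e.g., Pluronic F68 with two EO(76) blocks (approx.)
n_units_per_PEI   = 2000 / 43.07  # repeat units in PEI of molecular weight 2000

mol_EO        = I_EO / 4          # per-proton intensity -> moles of EO units (a.u.)
mol_PEI_units = I_PEI / 4
pluronic_per_PEI = (mol_EO / n_EO_per_pluronic) / (mol_PEI_units / n_units_per_PEI)
print(f"Pluronic chains per PEI chain ~ {pluronic_per_PEI:.2f}")
```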
Degradation of PPs

Degradation of the branched PPs was followed through measurement of molecular weight. Briefly, 0.5 g of each polymer was dissolved in 10 mL of phosphate-buffered saline (PBS) (Gibco, Invitrogen, Carlsbad, CA) (0.1 mol/L, pH 7.4) and then incubated at 37°C with shaking at 100 rpm for specified time points. After incubation, the polymer solutions were lyophilized and the molecular weight of the lyophilized samples was measured by gel permeation chromatography with multi-angle laser light scattering (laser wavelength, 690 nm).

Preparation of PP/DNA complexes

The charge ratio of the PP/DNA complexes was expressed as the ratio of PP weight to DNA weight (w/w). Complexes were induced to self-assemble by mixing plasmid DNA with the appropriate polymer solution (PBS 0.1 mol/L, pH 7.4) at the desired charge ratio. The complexes were allowed to stand at room temperature for 30 minutes.

Agarose gel retardation assay

Various amounts of Pluronic-g-PEI dissolved in deionized water were added to aqueous solutions containing a fixed amount of plasmid DNA (50 ng/well) and incubated for 30 minutes at room temperature. The mixtures were electrophoresed on a 0.7% (weight/volume) agarose gel for about 30 minutes at 100 V in Tris-acetate-ethylenediaminetetraacetic acid buffer (Gibco, Invitrogen, Carlsbad, CA). The gel was stained with ethidium bromide (0.5 µg/mL) and illuminated on an ultraviolet illuminator (302 nm) to show the location of the DNA and the various levels of polyplex formation.

Measurement of particle size and zeta potential

The size and zeta potential of the cationic polymer/DNA complexes in PBS buffer at room temperature were measured using an electrophoretic light-scattering spectrophotometer (Zetasizer Nano ZS90; Malvern Instruments, Worcestershire, United Kingdom) with a scattering angle of 90°. The polyplexes were prepared by adding an appropriate amount of cationic polymer solution, based upon the desired w/w ratios, to DNA (20 µg) in 250 µL of PBS while vortexing. The complexes were incubated at room temperature for 30 minutes before the measurements of size and zeta potential were carried out, and all experiments were performed in triplicate.

DNA protection and release assay

Each complex solution, with PPs to DNA at a weight ratio of 20:1, was divided into equal triplicates. One served as the control and the other two were incubated with DNase I at 37°C for 2 hours at a final DNase I concentration of 50 U/g of plasmid DNA. Ethylenediaminetetraacetic acid (3 µL, 0.5 mol/L) was added to one of these two solutions to immediately stop DNA degradation, and then 3 µL of sodium dodecyl sulfate (10%, w/v) was added to displace the DNA. Naked DNA, with and without DNase I treatment, served as controls. All samples were then placed in an ice bath. Finally, 0.7% agarose gel electrophoresis was performed to evaluate the integrity of the DNA in the complexes. Then 10 µL of PBS (as control) or of PBS containing DTT was added to give a final DTT concentration of 10 mmol/L in the resulting solution, and the dispersions were incubated for 60 minutes. The samples were then analyzed by gel electrophoresis, as described.

Cytotoxicity assay

The cell toxicity of the PP/DNA complexes was investigated by cell proliferation assay. HeLa cells were seeded in 96-well plates at an initial density of 1 × 10^4 cells/well and incubated for 18-20 hours to reach 80% confluency at treatment. The culture medium was then replaced with fresh serum-free medium containing serial dilutions of PP/DNA or PEI complexes at various concentrations (4, 6, 8, 16, 24, and 32 mg/mL). After 24 hours of incubation, 20 µL of CellTiter 96® AQueous One Solution Reagent (Promega) was added to each well and the cells were further incubated for 1-4 hours. The absorbance was then measured at 490 nm using an enzyme-linked immunosorbent assay plate reader (Model 318MC; Sanco, Shanghai, China) to obtain the metabolic activity of the cells. The spectrophotometer was calibrated to 0 absorbance using culture medium without cells. The cell viability was calculated according to the following equation: cell viability (%) = (A_test/A_control) × 100%, where A_test is the absorbance of the PP- or PEI-treated cells and A_control is the absorbance of the untreated cells.
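A minimal sketch of this viability calculation, assuming triplicate blank-corrected absorbance readings at 490 nm (the values below are invented for illustration):

```python
import statistics

def cell_viability(a_test, a_control):
    """Cell viability (%) = (A_test / A_control) * 100, averaged over replicates."""
    mean_test, mean_ctrl = statistics.mean(a_test), statistics.mean(a_control)
    return 100.0 * mean_test / mean_ctrl

# Hypothetical triplicate OD490 readings (blank-corrected)
treated   = [0.61, 0.58, 0.63]   # PP/DNA-treated wells
untreated = [0.82, 0.85, 0.80]   # untreated control wells
print(f"viability = {cell_viability(treated, untreated):.1f}%")
```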
The amount of cell protein in 50 µL of cell extract was determined using a Micro-BCA protein assay kit (Pierce, Rockford, IL). The transfection efficiency was expressed as relative light units per milligram of cell protein. All transfection experiments were performed in triplicate.

Cellular uptake study
Green quantum dots (QDs) were used as molecular probes to label F68-, P123-, P105-, and L61-PEI 2KD. Cells were seeded on glass coverslips and were then transfected with the PP/DNA complexes (10:1, w/w), which had been incubated for 1, 3, 5, 10, 20, and 30 minutes in serum-free DMEM at 37°C before addition to the cell culture. At the end of each time point, cells were washed, fixed (using 4% paraformaldehyde), and examined under a confocal microscope (FluoView® FV1000; Olympus, Tokyo, Japan).

Statistical analysis
The data are presented as the mean ± standard deviation. Statistically significant differences were determined using two-sample Student's t-tests and analysis of variance, with P < 0.05 as the level of statistical significance.

Results and discussion

Synthesis and characterization of PP copolymers
Cationic copolymers were synthesized by grafting polyether (Pluronic) chains onto the amino groups of PEI. The authors chose four different types of Pluronic, with the HLB order F68 > P105 > P123 > L61 (Table 1). 17 In all cases, the free hydroxyl groups of the Pluronic polyether chains were activated by succinimidyl carbonate in advance and were then linked to the amino groups of PEI, as shown in Figure 1. 18 The copolymers were separated from unconjugated polymers by dialysis, and the composition of the PP copolymers was then determined from 1H-NMR spectra using integral values obtained for the -CH2CH2O- protons (Pluronic F68) and the -CH2CH2NH- protons (PEI). Figure 2 shows the 1H-NMR spectra of Pluronic-g-PEI in deuterium oxide, where the -CH2CH2O- proton peaks appear at δ3.8 ppm and the -CH2CH2NH- proton peaks appear at δ2.7-3.5 ppm.
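To make the integral-based composition estimate concrete, here is a hedged sketch of the arithmetic. The integrals are invented, the calculation assumes 4 protons per EO unit (-CH2CH2O-) and 4 protons per PEI repeat unit (-CH2CH2NH-), and the PO-block protons of Pluronic are ignored for simplicity:

M_EO = 44.05  # g/mol, ethylene oxide repeat unit
M_EI = 43.07  # g/mol, ethylenimine repeat unit

def eo_to_ei_unit_ratio(integral_eo, integral_pei):
    """Molar ratio of EO units to PEI repeat units; both signals carry 4 H per unit."""
    return (integral_eo / 4.0) / (integral_pei / 4.0)

def eo_to_ei_weight_ratio(integral_eo, integral_pei):
    """Approximate EO/ethylenimine weight ratio from the same integrals."""
    return eo_to_ei_unit_ratio(integral_eo, integral_pei) * M_EO / M_EI

# Invented integrals for the 3.8 ppm (EO) and 2.7-3.5 ppm (PEI) regions:
print(round(eo_to_ei_unit_ratio(12.0, 4.0), 2))    # 3.0 EO units per EI unit
print(round(eo_to_ei_weight_ratio(12.0, 4.0), 2))  # about 3.07 by weight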
Analysis and characterization of polyplex formation
Gel retardation assays were performed to investigate the ability of the PPs to condense DNA via electrostatic interactions between the PEI and the DNA at various w/w ratios (Figure 3). In this experiment, movement of the plasmid DNA in the gel was retarded as the amount of the cationic copolymer increased, demonstrating that the copolymer bound to the DNA, neutralizing its charge. 4 As the w/w ratios exceeded the neutralization composition, the complexes migrated slightly toward the cathode, suggesting that they carried a small positive charge. The assay confirmed the polymer/DNA ratio required for complete condensation and retardation of the DNA. In addition, for unconjugated PEI, DNA was retarded at a w/w ratio of 0.2 (data not shown). For F68-PEI 2KD, P105-PEI 2KD, P123-PEI 2KD, L61-PEI 2KD, and PEI 2KD, DNA was retarded at w/w ratios of 0.8, 0.8, 0.4, 0.6, and 0.6, respectively. Thus, as the amount of Pluronic increased, it shielded the positive charge of the complex surface while the copolymer still readily condensed the DNA. Figure 4 shows that in the absence of DTT, the PPs could completely retard plasmid DNA migration at the w/w ratio of 0.8:1. However, DNA release from the PP polyplexes could be observed by gel electrophoresis in the presence of dithiothreitol 5.0 mmol/L, mimicking the intracellular reducing environment containing glutathione 0.1-10 mmol/L. For the nondegradable control polymer PEI 25KD, there was no DNA released from the PEI/DNA complexes in the presence of DTT. 19 The rapid DNA release under reducing conditions suggests that the significant polyplex dissociation was mainly owing to the cleavage of disulfide bonds by DTT, leading to increased DNA release and increased gene expression. 20

Degradation study and resistance to nuclease degradation
Degradation of gene delivery polymers in vivo is important for safe and efficient gene delivery, because appropriate degradation of the polymer could reduce cytotoxicity and facilitate elimination through the excretion pathway in vivo. 21,22 Nondegradable PEI may accumulate in vivo because of a lack of degradation or excretion pathways, causing potential cytotoxicity. Therefore, it is expected that the ester bonds in these polymers are susceptible to hydrolysis under physiological conditions, forming poloxamer oligomers and low-molecular-weight polyethylenimine. 23 Figure 5 shows degradation of the hyperbranched PPs over time at pH 7.4 and 37°C. The results indicate that the polymers degraded very slowly and that the degradation rate of the PPs was highly dependent on the hydrophilicity of the polymer. The primary responsibility of any gene delivery system is to protect DNA from degradation by nucleases. 24 Naked DNA was digested when treated with DNase I (Figure 6, Lane 1). The other two complexes could encapsulate the DNA completely and protect it from DNase I degradation.

Particle size and zeta potential measurement
To investigate the effect of the cationic copolymer structure on the size of the complexes, each cationic copolymer was allowed to form complexes at different w/w ratios. Figure 7 shows that the mean particle size decreased significantly as the w/w ratio of PP/DNA increased, most probably because their high surface charge could condense DNA more effectively. The size of the particles formed in these systems varied from 80 to 220 nm depending on the composition of the complex, which is beneficial to efficient endocytosis and gene transfer. 25 No precipitation was observed at any w/w ratio in the range of concentrations studied. Compared with unmodified PEI/DNA complexes, addition of Pluronic to the PEI-based polyplexes had little effect on polyplex particle size, but it led to good stability of the polyplexes. 10 A positive surface charge of untargeted polyplexes is necessary for binding to anionic cell surfaces, which consequently facilitates cell uptake. 26 However, strong cationic charges of the complexes are often cytotoxic. 24 Changes in the charge of the complexes upon addition of the cationic copolymers were further characterized by measuring the zeta potential at various w/w ratios. Figure 8 shows the zeta potentials of the complexes according to the w/w ratio. The zeta potential of Pluronic-g-PEI was significantly lower than that observed for unmodified PEI at various w/w ratios. As the w/w ratio increased, so did the zeta potential of the complexes.

Cytotoxicity assay
Knowing that high-molecular-weight PEI is limited by its relatively high cytotoxicity when used as a transfection reagent, the authors focused on reducing the cytotoxicity and increasing the transfection efficiency. The cell proliferation assay was used to compare the cytotoxicity of Pluronic-g-PEI and the PEI 25KD polymer. Figure 9 shows the cytotoxicity of the polymers at various concentrations with HeLa cell lines. It was found that Pluronic-g-PEI had significantly higher cell viability than PEI 25KD at any concentration (P < 0.01).
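The viability percentages behind these comparisons follow directly from the formula given in the Methods; as a minimal illustration, with invented absorbance readings:

def cell_viability_pct(a_test, a_control):
    """Cell viability (%) = A_test / A_control x 100 (blank-corrected A490)."""
    return a_test / a_control * 100.0

# Invented replicate absorbances for treated wells vs an untreated control.
for a_test in (0.62, 0.65, 0.60):
    print(round(cell_viability_pct(a_test, 0.80), 1), "%")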
It is worth noting that the modified Pluronic-g-PEI showed significantly higher cell viability at various concentrations, especially at the higher concentrations, suggesting that transfection efficiency could be improved by administering more PPs if necessary, given that the cytotoxicity is low. The enhanced cell viability is undoubtedly the result of the formation of low-toxicity building blocks. 21 The cytotoxicity of cationic polymers is thought to be a result of the multiple attachment of PEI to the cell surface and the consequent membrane-damaging effects. 27 The results of this study show that PEI macromolecules can be conjugated with Pluronic and that the PPs are degradable under physiological conditions. The degradation products of the polymer are low-molecular-weight PEI and Pluronics, which are practically nontoxic and are rapidly cleared from the cell nucleus and the cytoplasm or eliminated from the body. Thus, the safety of Pluronic formulations may be an important advantage for their utilization in nonviral gene delivery. Meanwhile, the number of PEI attachments to cell surfaces may also be reduced, resulting in lower cytotoxicity to cells.

In vitro transfection efficiency
Complexes formed between plasmid DNA and the cationic copolymers were assessed for their in vitro transfection activity using transient expression of a luciferase reporter in HeLa cell lines. For all cationic polymers, the efficacy of transfection depended on the electrostatic interactions and the "proton sponge effect" of PEI. 3 As shown in Figure 10, the transfection efficiency of the PPs increased rapidly with the increase in the PEI 2KD/Pluronic ratio, which can be explained by the high degree of modification of PEI 2KD leading to an increase in positive surface charge and structural compactness of the complexes. The highest luciferase expression level was obtained with P105-PEI 2KD, and the Pluronic-g-PEI showed higher gene transfer ability than PEI 2KD. The enhanced transfection efficiency of Pluronic-g-PEI over PEI 2KD is attributable to the introduction of the hydrophilic ethylene oxide (EO) chain and the hydrophobic propylene oxide (PO) chain of the Pluronics. The hydrophilic EO chains extend into water and sterically prevent the complexes from approaching each other, 28 and they can shield the positive surface charge of the complexes. 27 Therefore, the EO chain can reduce toxicity and improve the colloidal stability of the polymer/DNA complexes as well. 26 Also, the PO chain is believed to make the complexes more lipophilic, possibly giving them the ability to interact with biological membranes and enhancing transport of the complexes into cells. 4 Therefore, the results of this study show that reducing cytotoxicity and increasing hydrophobicity and stability enhanced the transfection efficiency of the polyplexes.

Cellular uptake study
Efficient entry of synthetic polymers into cells is a central issue in polymeric drug delivery. 16 There is ample evidence that polycationic compounds can interact with negatively charged cellular membranes. However, amphiphilic block copolymers, such as Pluronic, interact with cellular membranes in a manner that depends on their aggregation state and are usually taken into cells by endocytosis. 14 Previous research has suggested that the addition of Pluronic could intensify nonspecific endocytosis in several cell types and could dramatically increase cellular uptake, nuclear transport, and transfection efficacy of polyplexes. 12 Furthermore, the effect of Pluronic on cell uptake was usually observed as early as 60 minutes after exposure. 10
The magnitude of the effect varied for each Pluronic type, depending mainly on the HLB value. A high HLB value for Pluronic could significantly improve the serum stability of the complex. However, complexes with a lower HLB value should theoretically show marked improvement in cellular uptake compared with those with a higher HLB value. The authors used confocal microscopy to study the intracellular trafficking of representative QD-labeled PPs. Figure 11 shows the cellular trafficking of the PPs at specific time points. F68-PEI 2KD presented a slower association with the cell membrane (Figure 11A). This is possibly because serum-mediated aggregation of the gene transfer complexes was reduced by the addition of the high-HLB-value Pluronic F68. Serum stability of nonviral vectors is known to be a crucial factor for successful in vivo gene delivery, and therefore the authors think that nonviral vectors incorporating a Pluronic with a higher HLB value could be used as potential vehicles for in vivo delivery of DNA because of the marked improvement in their serum stability. Similar pictures were obtained for P123-PEI 2KD and P105-PEI 2KD. After 30 minutes of incubation, the two PPs were mainly associated with the outer membrane, and significant quantities of the complexes were already observed in the cytosol (Figure 11B and C). This result reveals that the P123/P105-PEI 2KD complexes preferred intracellular accumulation sites in the perinuclear region after 30 minutes of incubation. The addition of Pluronic P123/P105, which has intermediate lengths of PO chains, relatively short EO segments, and an appropriate HLB value, could not only form a stable dispersion but also facilitate cellular uptake of the synthetic polymers. 29 Therefore, P123/P105-PEI 2KD could be considered a potential nonviral vector for systemic gene delivery, with wide application in DNA transfection. L61-PEI 2KD showed a rapid association with the outer membranes of HeLa cells (1 minute of incubation) (Figure 11D) because of the larger proportion of hydrophobic PO chain. After 30 minutes of incubation, transfection with L61-PEI 2KD-based polyplexes resulted in a more localized appearance in the nucleus and a less diffuse appearance throughout the cytoplasm than with the other Pluronics. Although the Pluronic with the lowest HLB value did not show much improvement in transfection activity under serum-containing or serum-free conditions, it could lead to a substantial increase in drug accumulation and a change in cellular distribution. 30 These polyplexes could be used as vectors for genes that must be expressed in the nucleus of cells. All the evidence indicates that the supramolecular architecture of Pluronics may play a role in facilitating the entry of polyplexes through cell membranes and in regulating the cellular distribution of the polyplexes.

Conclusion
This work reports a significant evaluation of the design and formulation of a DNA-polycation polyplex based on a Pluronic-PEI conjugate and the characterization of the cationic graft copolymer system, with the goal of developing a better understanding of this multicomponent delivery system. A complete understanding of this system is particularly important in light of the fact that Pluronics can exhibit biological activity, including effects on enhancing DNA cellular uptake, nuclear translocation, and gene expression.
The authors' work indicates that the addition of Pluronics can significantly facilitate intracellular trafficking and can change cellular distribution. Pluronics with a higher HLB value lead to homogeneous distribution in the cytoplasm; those with a lower HLB value prefer to localize in the nucleus. As is well known, plasmid DNA must enter the nucleus before effective gene expression can occur, whereas small interfering RNA and microRNA always play their regulatory roles in the cytosol. Therefore, these copolymers are worth exploring in the near future as components of cationic delivery systems for DNA or for small interfering RNA/microRNA, respectively.
2017-04-02T13:08:51.580Z
2012-02-24T00:00:00.000
{ "year": 2012, "sha1": "f25c9c547d655ab1f7f6738b9fbdcff2dad773be", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=12140", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5dc758c9565874b83e4c8cdd1e37bd64d3e3c171", "s2fieldsofstudy": [ "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
239174680
pes2o/s2orc
v3-fos-license
Evaluating the Impact of Environmental Education on Ecologically Friendly Behavior of University Students in Pakistan: The Roles of Environmental Responsibility and Islamic Values

With increasing global environmental problems, considerable evidence now suggests that environmental education can influence students' ecologically friendly behavior significantly. Addressing increased environmental problems requires a better understanding of the relations between focused and explicit environmental education, environmental responsibility, and religious values. The current study examined the relationship between environmental education and ecologically friendly behavior, utilizing insights from resource conservation theory. The relationship between the variables mentioned above was examined to determine the mediating effect of environmental responsibility and the moderating effect of Islamic values. Through a cross-sectional approach, data were gathered from 413 university students. The data were analyzed using analytical techniques such as "structural equation modeling" and "PROCESS." The study's findings support the predicted conceptual model, indicating that environmental education was positively related to environmentally friendly behavior. Furthermore, environmental responsibility partially mediated the relationship above, whereas Islamic values positively moderated the relationships between environmental education and ecologically friendly behavior as well as between environmental education and environmental responsibility. These findings emphasize the critical role of environmental education and Islamic values in comprehending the ecological behaviors of Muslim students.

Introduction
Environmental education is critical in understanding high-level ecological concerns and behaviors. Environmentally educated persons are more motivated to improve the environment because education raises awareness of the potential harm to the environment [1]. Generally, environmental education imparts a high level of information and awareness regarding environmental issues and solutions, resulting in sustainable and ecologically friendly behavior (EFB) [2,3]. Environmental education enhances understanding of and sensitivity to environmental problems, broadens knowledge, and contributes to developing favorable attitudes toward ecological challenges [4]. It is believed that human behavior currently harms the environment. Younger generations will be disproportionately affected by current global environmental problems, which will only worsen if they are not adequately addressed [3,5]. As a result, it is critical to understand and improve individuals' ecological behavior. Individual responsibility grows due to environmental education, ethics, and the skills required for a more sustainable and improved world. Thus, universities could play a critical role in fostering ecologically friendly behavior and transforming societies toward environmental sustainability [6,7]. Universities have recently begun to promote pro-environmental and sustainable development through education and research by integrating sustainability into institutional agendas and fostering diverse initiatives for staff training, awareness, and development. A systematic review of a handful of research papers on the effect of education on EFB concluded that education might increase individuals' understanding of their EFB [8]. The leaders of tomorrow are being educated in universities.
It is critical to provide them with ethics and environmental education so that they become psychologically empowered and their attitudes shift toward environmentally friendly behaviors, which could lead to societal sustainability [9,10]. Understanding one's tendency to adopt EFB is a complicated and complex issue. Various factors affecting EFB have previously been considered, including environmental concerns, intentions, self-identity, value orientation, personal norms, etc. [11-17]. Environmental responsibility plays a critical role in enhancing ecological behaviors by imparting a sense of responsibility and motivating people to protect the environment [18]. The present study proposed that environmental education is positively related to ecologically friendly behavior directly and indirectly (via environmental responsibility). Furthermore, given the importance of Islamic values for environmental protection, such as environmental balancing, environmental awareness, and resource conservation (water, trees, etc.) [19], Islamic values were proposed as a boundary condition on the previously proposed relationships in the current study (refer to Figure 1 for the conceptual framework of the study).
This study contributes to the literature in two significant ways. Firstly, the study broadens understanding of the mediating effect of environmental responsibility on the relationship between environmentally friendly behavior and environmental education. Very few studies have considered environmental responsibility as the mediator between ecologically friendly behavior and environmental education [2,20]. Therefore, in this study, environmental responsibility is used as a mediator to explain the relationship between environmental education and environmentally friendly behavior. Secondly, although Islamic values are becoming increasingly important in the environmental literature, attention to their role in ecologically friendly behavior is scarce [21]. This study broadens the scope of the ecologically friendly behavior literature by proposing Islamic values as a moderator between environmental education and ecologically friendly behavior. In sum, the first aim of this study was to learn more about the relationship between environmental education and environmentally friendly behavior by looking into the mediating role of environmental responsibility. Additionally, the study sought to expand understanding of the role of Islamic values in environmentally friendly behavior. The following section discusses the conceptual model and the study's hypotheses. Following that, the study's methodology, as well as its findings and analysis, is presented. Structural equation modeling with AMOS and SPSS software (IBM, Armonk, NY, USA) was used to test the hypotheses. Data were collected from undergraduate and postgraduate students. Finally, the authors discuss their findings and conclusions and make future recommendations.

Environmental Education and Ecologically Friendly Behavior
Environmental education (EE) is education about, from, and for the environment [22]. Ecological knowledge and understanding are developed through EE, which also provides potential skills that benefit the environment. Education from the environment can be accomplished by utilizing the outdoors as a learning resource. Education for the environment, in contrast, fosters awareness and a sense of responsibility for the environment, positively affecting attitudes and behaviors toward a green ecological lifestyle [23]. Environmental education is widely recognized as a critical component of biodiversity conservation efforts [24]. Increased knowledge and environmentally conscious behavior are two of the most debated educational outcomes in the literature, and numerous studies have been conducted on the returns to education [11,19,23]. Environmental education is critical in combating ecological problems, aiming to protect and conserve the planet's resources for a healthy and prosperous life. The impact of EE on EFB has been extensively researched all over the world. The relationship varies according to region, religion, culture, and various other factors [3,25]. Although most studies found a positive relationship between EE and EFB, some studies suggested that a high level of environmental education does not always translate into environmentally friendly behavior [26]. For example, Ek and Soderholm [27] revealed no correlation between a high level of education and the choice to use green electricity.
Furthermore, Ayalon et al. [28] found no evidence that education impacted recycling behavior. Wessells et al. [29] found that consumers with a high level of education were not more likely to purchase eco-labeled seafood. Finally, Grafton [30] discovered a negative correlation between water conservation and a high level of education. On the other hand, numerous studies have discovered that EE increases individuals' awareness of the environment and motivates them to engage in ecologically friendly behaviors in various contexts [31,32]. For instance, the existing literature demonstrates that education promotes recycling behavior [33-35]. Other researchers discovered that education influenced people's food choices, with more people opting for environmentally friendly options as a result of their education. For example, an environmentally savvy individual typically prefers eco-friendly shopping [36,37]. Berl et al. [38] found that highly educated individuals practice water conservation. Similarly, other studies indicate that educated individuals exhibit energy-saving behavior [12,39]. Additionally, it was discovered that education is associated with a higher rate of EFB. For instance, Rowlands et al.'s [40] study discovered that individuals aware of green electricity would emphasize and advocate for increased production of eco-friendly electricity. Moreover, De Silva and Pownall [41] found that college students were willing to put their financial well-being on the line to improve environmental quality. A study by Xiao et al. [42] demonstrated that students well versed in environmental education have environmental awareness. Furthermore, Torgler and García-Valiñas [43] revealed that informal education through print, electronic, and social media, in addition to formal environmental education in universities, contributes to EFB. Aside from the existing literature on the EE-EFB relationship, according to Mitchell and Hodson [44], EE can also be thought of as a stand-alone resource that provides additional support to EFB; this phenomenon follows the conservation of resources (COR) model [45]. The COR model's central tenet is that people strive to create, protect, retain, and maintain resources. Resources refer to the objects, individual characteristics, energies, or conditions that individuals value, or that serve to obtain such objects, characteristics, energies, or conditions [45]. Examples of resources include self-esteem [46] and learned resourcefulness [47], and the model has been applied in organizational behavior, behavioral medicine, social work, education, and employment [48]. The model indicates that those with a reliable resource pool are the most "resource secure," having developed a substantial reservoir of resources [49]. Because education is such a valuable and high-quality resource, it will positively influence people's attitudes toward resource conservation through EFB (i.e., a gain of resources). As the preceding discussion implies that EE will influence an individual toward EFB, we propose that environmental education is positively associated with ecologically friendly behavior.

The Mediating Effect of Environmental Responsibility
Environmental responsibility (ER) is defined in this study as a sense of personal obligation toward the environment, or sentiments of responsibility to take action to avoid negative environmental consequences.
Responsibility has been investigated as a complex notion that has been quantified in terms of moral duty, responsibility sentiments, or ascription of responsibility (i.e., responsibility judgment) regarding the environment as a whole or a specific environmental issue [50]. Environmental responsibility is comparable with moral duty, which is determined by a person's responsible judgment, sentiments, and level of knowledge of the implications of a specific behavior [50]. ER has attracted a lot of attention, and environmentally friendly production has become more prevalent [51]. Han et al. [52] defined ER as sentiments of personal responsibility to engage in a particular action that is beneficial to society and the environment. According to Stern, environmental responsibility is an essential characteristic that may contribute to personal norms, and personal norms have a substantial impact on an individual's decision to engage in pro-environmental actions [53]. Researchers, especially in academia, retain a strong interest in what fosters and drives ER. ER is thought to be crucial in promoting EFB. ER plays a critical role in assisting governments and universities in developing environmental policies, and in helping businesses mitigate risk, increase environmental efficiency, and foster societal resilience [54]. The growth of the perception of responsibility significantly increases a person's readiness to engage in pro-environmental behavior [55]. Clark et al. [56] also stated that environmental responsibility enables individuals to act for environmental protection. Zhu et al. [57] also revealed that different levels of responsibility influence one's conservation intention. ER has the capacity to persuade both individuals and organizations that they are accountable for generating various environmental problems as a result of their activities and that they should change their everyday practices to avoid negative effects [58]. ER is highly personal in nature and might result both from moral obligations to communities and/or nature and from personal feelings of duty arising from societal pressures [59]. ER has been found to be a very effective tool for motivating people toward green actions, taking inspiration from environmental education. However, research shows that considerable heterogeneity exists in attitudes toward personal environmental responsibility across different sectors. It has been found that some people take better care of the environment at their home and workplace than at the tourist scenic spots they may visit [60]. Similarly, in terms of socio-demographic, psychological, and environmental behaviors, Dolnicar and Leisch [61] discovered substantial disparities between two tourist groups (high vs. low environmental responsibility). However, the relationship between environmental responsibility and environmental education has not been studied extensively in the context of pro-environmental behavior. Environmental responsibility has frequently been overlooked as a major predictor that may encourage suitable environmental activities through environmental education and lessons on Islamic values in educational institutions. Diverse responsibility characteristics in students may be studied further to better understand what kinds of EFB they prefer. Such observations would help determine the merits of various environmental education programs and how students can be encouraged to be environmentally responsible.
Therefore, this study emphasized environmental education and Islamic values in creating a feeling of ER in students. This work aimed to thoroughly review the available information on ER and examine its mediating role between EE and EFB. As mentioned previously, ER has been used as a mediating factor in several studies looking at psychological, social, and long-term environmental variables. Based on the literature cited above and the concept of resource caravans proposed by conservation of resources theory, we believe that environmental responsibility can mediate between EE and EFB [48]. The EE resource pool is in a position to orchestrate environmentally friendly behaviors [62]. We believe that by utilizing the EE resource on students, another resource is created in environmental responsibility, which results in resource conservation through eco-friendly behaviors and sustainable lifestyles. As a result, environmental responsibility appears to act as a mediator between EE and EFB. Based on the preceding discussions, we propose that environmental responsibility mediates the positive relationship between ecologically friendly behavior and environmental education.

The Moderating Effect of Islamic Values
Theoretically, values have the potential to motivate and influence behavior [63]. Individuals' and societies' values and attitudes are shaped by religion, which guides how to live [64]. These values and attitudes shape the behaviors of communities and society. Islam instills in its adherents the values of sustainability, altruism, and resource conservation [21]. Those who adhere to religions in their true spirit and possess a high level of altruism are more likely to be actively involved in environmentally sustainable behaviors [65]. Islamic values are distinct from personal values. They are ethical principles derived from religious traditions founded on scriptures, such as the Quran and Hadith for Muslims, and are ingrained in their lives. Religious values are frequently debated as a way of analyzing their impact on consumer behavior. Religious values significantly impact a person's way of life, thoughts, and habits, among other things. As a result, there has been a great deal of discussion about the impact of religious values on human behavior over the past few decades. When dealing with a religious country where the majority of the population adheres to the same religion, as in the case of Pakistan, where Islam is the official national religion, the significance is increased even further [66]. Consumers' green purchasing decisions are heavily influenced by their religious values. Considering the profound impact of religiosity, several authors have proposed considering the supremacy of religious values in the ecological green environment. Additionally, it has been observed that among Muslims, the level of religiosity influences their behavior to spend sensibly and shop sustainably [67]. Additionally, religious values help individuals make purchasing decisions based on resource conservation, principles of suitability, and environmental stewardship, such as adopting sustainable clothing consumption [64]. Ecologically friendly behavior results in the protection of the environment's natural resources. In the literature, those who exhibit ecologically friendly behavior are referred to as green consumers [68]. Additionally, prior research indicates that religious values positively influence green consumer behavior that protects the world's natural ecological cycles.
Most previous studies developed a religiosity scale for Christianity to assess its impact on ecological behaviors [69]. On the contrary, Razak et al. [70] examined the relationship between ecologically friendly behavior and Islamic religiosity and discovered a positive and significant correlation. Islam and Chandrasekaran [19] assert that more religious Muslim consumers make greater efforts to protect the natural environment than less religious Muslim consumers. Islamic values introduce the concepts of sustainability and balanced action, emphasizing the importance of not consuming more than one's needs and contributing to the well-being of others [71]. Islam teaches sustainability, impartiality, balanced actions, and judicious actions to safeguard the ecological system. According to Islam, humans do not own the Earth's natural resources. Additionally, Islam emphasizes the protection of natural resources through prudent resource consumption [67]. Previously conducted research established a link between religion and consumer behavior [72]. Religion instills values that serve as guiding principles for the individual's life. However, the effect of religion on consumer behavior is highly personal and depends on an individual's level of piety or commitment to their religion. Green consumerism is a matter of ethics and morality [73]. Religious beliefs assist believers in determining the appropriateness or inappropriateness of their behavior. Ideally, Islamic values could be thought of as a predictor of consumer behavior. Islamic values affect human behavior directly or indirectly [74]. According to Shariah principles, all Muslims are obligated to safeguard the Islamic faith, human life, property, and the mind [75]. Intentional harm to the natural environment and resources is a form of corruption that Islam forbids. In Islamic teachings, human beings are made "Khalifas," or Caliphs, of the Earth and entrusted with looking after and caring for the Earth. However, other schools of thought exist; for example, Koehrsen [76] synthesizes existing research about climate change and Muslim communities. He found that there is no uniform interpretation of climate change among Muslims. Muslims have developed several approaches to climate change based on their understanding of Islam. A small group of Muslim environmentalists engages in public campaigns to raise awareness about climate change, minimize carbon emissions through sociotechnical transition initiatives, and disseminate pro-environmental Islamic interpretations. However, it is unclear to what degree these efforts result in larger changes in the daily activities of Muslim communities and organizations. Contributions to this field of study are frequently theoretical, emphasizing only theological and normative elements of Islam. Comparative studies are needed to explore the role of Muslim environmentalism in climate change mitigation and adaptation on a local and global scale, taking into account regional and theological distinctions among Muslims. Among others, Taylor et al. [77] provided a thorough examination of the harmful environmental consequences of "Judeo-Christian" beliefs, as well as later assertions that the world's major faiths are becoming more ecologically friendly. The Islamic faith places a high value on environmental preservation. The Quran encourages believers to appreciate Allah's gifts as well as the material aspects of life. The term "believers" refers to those who maintain their focus on the aforementioned important factors.
There is a lot of evidence in Islamic teachings that emphasizes the importance of environmental protection. For example, the Quran states: "Walk on the Earth in humility" (Quran, 18:63 [78]); "Then We appointed you viceroys in the Earth after them, that We might see how ye behave" (Quran 10:14); "And when he turneth away (from thee) his effort in the land is to make mischief therein and to destroy the crops and the cattle; and Allah loveth not mischief" (Quran 2:205); "Do no mischief on the Earth, after it hath been set in order" (Quran 7:56). Muslims are obligated to protect the Earth in all ways, as Islam considers the Earth a sacred and holy place. As Muslims, we can pray anywhere on the planet, and in addition to water, certain other elements on the planet can be used to purify even the most egregious impurities. As stewards of the Earth, we must protect the planet as a mosque, and it is each individual's responsibility to care for the Almighty God's entire creation. The earlier literature shows that Islamic values are a critical factor in the ecological behavior of Muslims. The moderating role of Islamic values in this context has received limited attention. As a result of the preceding discussion, it has been hypothesized that Islamic values will act as a moderating factor in the relationship between EE and EFB. In line with these arguments, we believe that Islamic values are a personal resource that, if invested in the EE and ER resource pools, will result in green ecological behaviors and, as a result, additional resources. This assumption corresponds to the COR viewpoint [48]. Given these considerations, we believe that students who are provided with EE and have strong Islamic values will be more effective in protecting and conserving the environment. Thus, we propose that Islamic values play a moderating role in our study.

Hypotheses
Based on the aforementioned arguments and evidence, we established the following hypotheses (formalized in the sketch after this list):
Hypothesis 1 (H1). Environmental education is positively related to ecologically friendly behavior.
Hypothesis 2 (H2). Environmental education is positively related to environmental responsibility.
Hypothesis 3 (H3). Environmental responsibility is positively related to ecologically friendly behavior.
Hypothesis 4 (H4). Environmental responsibility mediates the positive relationship between environmental education and ecologically friendly behavior.
Hypothesis 5 (H5). The direct positive relationship between environmental education and ecologically friendly behavior is expected to be significant for those who are high in Islamic values.
Hypothesis 6 (H6). The direct positive relationship between environmental education and environmental responsibility is expected to be significant for those who are high in Islamic values.
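One conventional way to formalize H1-H6 is as a moderated mediation model in which Islamic values (IV) moderate both the first-stage path and the direct path. This exact specification is our illustrative assumption, consistent in spirit with the PROCESS analysis reported below:

\begin{aligned}
\mathrm{ER}_i  &= a_0 + a_1\,\mathrm{EE}_i + a_2\,\mathrm{IV}_i + a_3\,(\mathrm{EE}_i \times \mathrm{IV}_i) + \varepsilon_{1i},\\
\mathrm{EFB}_i &= b_0 + c'\,\mathrm{EE}_i + b_1\,\mathrm{ER}_i + b_2\,\mathrm{IV}_i + c_3\,(\mathrm{EE}_i \times \mathrm{IV}_i) + \varepsilon_{2i},
\end{aligned}

where the conditional direct effect of EE at moderator value w is c' + c_3 w (H1, H5) and the conditional indirect effect via ER is (a_1 + a_3 w) b_1 (H2-H4, H6).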
Sampling Procedure
Universities produce future leaders, decision makers, and scholars in the political, economic, and social sectors, and thus university students were chosen as the study's target population. Furthermore, the data gathered from students came from homogeneous groups with small random errors. Because they study much of the literature on these issues in their curriculum, university students were more concerned about the ecological wellbeing of nature [2,3]. Data were gathered from public and private universities in Peshawar (the capital of Khyber Pakhtunkhwa Province, Pakistan). According to the Pakistan Bureau of Statistics, Peshawar is Pakistan's sixth-largest city, and it is currently dealing with serious environmental issues. Furthermore, very little such research is conducted throughout the country, particularly in Peshawar. The data were gathered from a random sample of students at six universities (three public and three private). Agricultural University Peshawar, the University of Peshawar, and Islamia College University Peshawar were chosen as the public sector universities. City University of Science and Information Technology Peshawar, Sarhad University of Science and Information Technology Peshawar, and CECOS University Peshawar were the private universities visited. Before beginning the data collection, the heads of department and the class in-charge were approached for permission. Prospective participants were given a brief presentation in the classroom about the research survey's goals and nature. In addition, the questionnaire survey included a cover letter stating that the study was solely for research purposes and that the respondents would be kept anonymous and confidential. Students were also told that taking part in the survey was completely voluntary and that they could opt out at any point during the data collection process. Students were given sufficient time to respond to and complete a paper-and-pencil survey questionnaire, which they then returned anonymously to the researchers in envelopes. Five hundred surveys were distributed, and 452 were returned, resulting in a response rate of 90.4%. Thirty-nine questionnaires were invalid due to incompleteness or careless responses, leaving 413 usable questionnaires [79]. The sample consisted of 265 male respondents (64.1%) and 148 female respondents (35.9%). Of all valid respondents, 79.1% were undergraduates, while 20.9% were postgraduates.

Instruments
The measures used in this study were adapted from previous research and slightly modified to meet the requirements of the current study. The variables were rated on a five-point Likert scale ranging from 1 to 5, with 1 indicating "strongly disagree" and 5 indicating "strongly agree." The scale for environmental education consisted of two parts, with eight items in total: formal and informal education. The former scale was derived from a study conducted by Pérez-Rodríguez et al. [80], whereas the latter was derived from a study conducted by Varela-Candamio et al. [81]. The environmental responsibility scale was constructed using five items adapted from Wang et al. [82], which were previously tested in [83] in the context of environmental responsibility and pro-environmental consumer behavior. Islamic values can be measured by the religiosity of individuals with respect to basic Islamic principles. For this research, the religiosity scale was modified from Plante [84]. The dimensions of the Islamic values scale relate to the Islamic faith and religious action; as a result, it is easier to adapt and is thought to be superior to other scales. A 14-item scale adapted from Kaiser et al. [85] was used to assess ecologically friendly behavior. Cronbach's coefficients (α) were calculated for all scales and fell within the acceptable range. When it comes to university students, gender and age are among the most influential variables [86]. Because the literature indicates that these variables significantly impact students' green behavior, we included them as control variables in this study.
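The scale reliabilities mentioned above are Cronbach's alpha values; a minimal sketch of the computation is shown below. The Likert data here are random placeholders, so the printed value is not meaningful; real scale responses would be used in practice.

import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Placeholder 1-5 Likert matrix: 413 respondents x 5 items.
rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(413, 5)).astype(float)
print(round(cronbach_alpha(demo), 2))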
Analytical Approach and Construct Validity
Because data were collected from individual participants in a cross-sectional study [87], the possibility of common method variance (CMV) was a concern. Harman's one-factor test was used to evaluate CMV [88]. In this test, all of the main constructs are entered into a principal component factor analysis. When a single factor emerges from the analysis, or when a single general factor accounts for the majority of the covariance between the independent and dependent variables, there is evidence of CMV. The results indicate that five factors with an eigenvalue greater than 1 explained 64.4% of the variance, whereas the highest single factor, representing EE, explained 25.9% of the variance. This indicates that CMV was not a significant issue in this study's data.
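The study ran Harman's test in SPSS; a minimal re-implementation of the same idea on placeholder data (the item count here is an assumption) might look as follows:

import numpy as np
from sklearn.decomposition import PCA

# Placeholder item matrix: 413 respondents x 31 hypothetical scale items.
rng = np.random.default_rng(1)
items = rng.normal(size=(413, 31))

pca = PCA().fit(items)
first_share = pca.explained_variance_ratio_[0]
# Evidence of CMV if one unrotated component dominates (e.g., > 50%).
print(f"first factor explains {first_share:.1%} of the variance")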
Following the two-step analytical procedure recommended by Anderson and Gerbing [89], the proposed model was examined using SPSS and AMOS version 23 (IBM, Armonk, NY, USA). That is, the model variables were first analyzed using confirmatory factor analysis (CFA) with maximum likelihood estimation to determine the distinctness of the primary study constructs before moving on to structural equation modeling (SEM) [90]. These methods are effective statistical tools for examining a priori hypotheses about relationships between observed and latent variables and for testing associations among latent constructs [91]. All the study variables were found to correlate significantly with each other at the 0.01 level. The latent factors were also evaluated for reliability and validity concerns. The composite reliability of all the study variables was above 0.7, showing excellent internal reliability. Similarly, the average variance extracted (AVE), a measure of construct validity, was above the threshold value of 0.5 for all the study variables [92], indicating no issues with construct validity. Finally, all the study variables were evaluated for discriminant validity as per the criterion in [92]: the square-rooted AVEs (reported diagonally in bold) of all the study variables were larger than the correlations between them. This means that all the study variables are significantly differentiated from each other. See Table 1 for descriptive, reliability, and validity estimates of the study variables.

Measurement Model Evaluation
The items for the 413 responses were loaded on their respective latent factors through confirmatory factor analysis (CFA). The model provided a good fit to the data (χ² = 685.87, χ²/df = 1.23, CFI = 0.98, TLI = 0.98, RMSEA = 0.03, SRMR = 0.03). This model was compared against several alternative models. In the first alternative model, the items measuring Islamic values and ecologically friendly behavior were loaded on a single latent factor. The resulting model did not fit the data well: the chi-square value increased by 2479, CFI and TLI decreased by 0.25 and 0.27, respectively, and RMSEA and SRMR increased by 0.08 and 0.09, respectively. In the second alternative model, the items of three latent factors (environmental education, ecologically friendly behavior, and Islamic values) were merged into one factor. The resulting model showed a further decline in model fit, as the chi-square increased by 1959.1, CFI and TLI decreased by 0.22 and 0.23, respectively, and RMSEA and SRMR increased by 0.04 and 0.05. In the final alternative model, all the items were loaded on a single latent factor. The resulting model showed the worst fit to the data among all the models considered, as the chi-square increased by a further 292.64, while TLI and CFI decreased by 0.03 and RMSEA increased by 0.01. Comparing the baseline model against these alternative models showed that all the tested measures appropriately capture their respective latent factors. The measurement model comparison test is summarized in Table 2.

Structural Equation Model Path Analysis
The analysis was conducted in two steps. In the first step, the mediation model was tested using structural equation modeling (SEM). In the second step, the PROCESS macro was used to analyze the moderating effect. The fit statistics of the SEM model were χ² = 456.10, χ²/df = 1.33, CFI = 0.98, TLI = 0.98, RMSEA = 0.03, SRMR = 0.03, implying that the model had a good fit [90,93]. AMOS was used to analyze the direct relationships between all the study variables in a single structural model. The control variables (age and gender) did not significantly influence the estimates; their impact was tested on the independent variable (environmental education) and the dependent variable (ecologically friendly behavior). It is noteworthy that the insignificance of the controls did not deteriorate the main model's results. The results depicted in Table 3 show the outcomes of the structural equation model path analysis. The analytical results reveal that the students' environmental education significantly influenced their environmentally friendly behavior (β = 0.28, p < 0.001), providing support for H1. Moreover, students' environmental education had a significant effect on their environmental responsibility (β = 0.55, p < 0.001), and their environmental responsibility was further found to significantly influence ecologically friendly behavior (β = 0.27, p < 0.01), leading us to accept H2 and H3. The significance of both the direct and indirect paths indicated partial mediation.

Mediating Effect of Environmental Responsibility
The indirect effect of environmental responsibility was estimated using a user-defined estimand in the structural model discussed above. The indirect effect was found to be significant (β = 0.14, p < 0.01, CI [L = 0.06, U = 0.24]), supporting H4. Therefore, environmental responsibility significantly mediated the relationship between environmental education and ecologically friendly behavior. That is, environmental education had a significant impact on ecologically friendly behavior indirectly through environmental responsibility.

Moderated Mediation
In hypotheses H5 and H6, the current study expected that Islamic values would moderate environmental education's direct and indirect effects on ecologically friendly behavior via environmental responsibility. We examined the moderated mediation hypotheses with the PROCESS macro v3.0 (Table 4), and the results are provided in Figure 2. The results show a significant positive interaction of environmental education and Islamic values on ecologically friendly behavior (β = 0.08, p < 0.01). The direct effect of EE was larger and highly significant at higher values (+1 SD) of Islamic religiosity (Estimate = 0.31, 95% CI [L = 0.19, U = 0.43]). Therefore, the direct positive relationship between environmental education and ecologically friendly behavior is significant for those who are high in Islamic values. Meanwhile, this effect was small and non-significant at lower values of Islamic religiosity (−1 SD) (Estimate = 0.09, 95% CI [L = −0.03, U = 0.23]), as zero falls between the lower and upper bounds of the confidence interval (see Table 4). Hence, Hypothesis 5 was supported. Similarly, the indirect effect of environmental education on environmentally friendly behavior via environmental responsibility was also tested. This indirect effect was larger and significant (Estimate = 0.10, 95% CI [L = 0.01, U = 0.10]) at higher values of Islamic religiosity (+1 SD), whereas it was smaller and weaker (Estimate = 0.05, 95% CI [L = 0.02, U = 0.11]) at lower values of Islamic religiosity. Hence, Hypothesis 6 was supported.
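For readers unfamiliar with PROCESS output, the conditional effects quoted above are simple functions of the regression coefficients; the coefficients in this sketch are illustrative stand-ins, not the study's exact estimates:

def conditional_direct(c_prime, c3, w):
    """Direct effect of EE on EFB at moderator value w: c' + c3 * w."""
    return c_prime + c3 * w

def conditional_indirect(a1, a3, b1, w):
    """Indirect effect via ER at moderator value w: (a1 + a3 * w) * b1."""
    return (a1 + a3 * w) * b1

# Stand-in coefficients evaluated at -1 SD and +1 SD of a standardized moderator.
for w in (-1.0, 1.0):
    print(w, round(conditional_direct(0.20, 0.08, w), 2),
          round(conditional_indirect(0.40, 0.10, 0.27, w), 3))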
Discussion
The current study sought to determine the relationship between university students' environmental education and their propensity for ecologically friendly behavior, with the mediating effect of environmental responsibility and the moderating effect of Islamic values. The findings indicate that environmental education is positively and directly related to ecologically friendly behavior. Additionally, the study established that environmental responsibility partially mediated the positive relationship between environmental education and ecologically friendly behavior. Furthermore, it was established that Islamic values acted as a positive moderator: students with a high level of Islamic values behaved in a more environmentally friendly manner than those with a lower level of Islamic values. Similarly, there was a strong link between environmental education and ecologically friendly behavior among students with high environmental responsibility, and vice versa. This study is critical to understanding how environmental education contributes to the promotion of pro-environmental or ecological lifestyles. There is a shortage of research on the effect of environmental education on ecological behavior through environmental responsibility. Our findings indicate that environmental education, directly and indirectly, motivates students toward green lifestyles via the mediation of environmental responsibility. This result supports previous findings that environmental education strengthens students' connections to nature and motivates them toward a green lifestyle [7,35]. Similarly, Otto and Pensini [94] discovered that nature-based environmental education promotes environmentally friendly behavior. Ballantyne and Packer [95] discovered that when students learn about natural environment protection, their attitudes toward the environment, their desires, and their behaviors change. Environmental education has positively affected ecologically friendly behavior; consequently, educational institutions can incorporate more environmental education materials to promote ecological behavior. Islamic values promote environmental education and provide guidelines for behaving ecologically and conserving natural resources, providing a long-term approach to green behavior.
As a whole, environmental education is a very effective way of promoting green ecological lifestyles because it empowers students to take responsibility for their actions in relation to the environment. The current study demonstrated that environmental responsibility partially mediates the relationship between environmental education and ecologically friendly behavior; thus, it is critical to empower students to protect and conserve nature, as well as to act in an environmentally friendly manner. The concept of environmental responsibility imparts a sense of care. Individuals with a strong sense of environmental responsibility believe they have a greater role in protecting nature. As a result, they will be more receptive to engaging in ecologically friendly behavior. This study's critical contribution is that, while prior research has primarily focused on other mediators between environmental education and ecologically friendly behavior [5,20,96], this study, using conservation of resources (COR) theory, suggests that environmental education promotes environmental responsibility as a resource gain for students, increasing their environmentally friendly behavior. Thus, contributing to the theoretical development of the literature, this study demonstrates that environmental responsibility can be incorporated into COR theory as a predictor of environmentally friendly behavior. This finding is consistent with previous research claiming that environmental responsibility influences ecologically friendly behavior [73,97]. According to the study, environmental responsibility motivates ecologically friendly behavior. These findings suggest that environmental responsibility makes people aware of the environmental issues they care about and motivates them to live a greener lifestyle, such as by conserving water and electricity. Caravan et al. [60] found that cities with tourism as a primary industry paid more attention to similar ER activities. ER activities include efficient use of goods and natural resources; recycling, reuse, and waste management; environmental information transparency; and atmospheric governance [97,98]. Adoption of ER requires numerous environmental stakeholders, including the media, producers, customers, and states [97]. As a result, stakeholder engagement is widely regarded as a primary factor in ER. These studies illustrate the interdisciplinary and multidimensional nature of ER research. In general, this study's findings indicate that students' environmental responsibility is an effective lever for promoting green ecological behaviors. Additionally, this study examined the moderating effect of Islamic values on ecologically friendly behavior. Islamic values were found to act as a moderator in the relationship between the independent and dependent variables. The findings indicate that Muslims' ecological behavior is influenced by their religious values. The practice of religious values is highly individualistic in nature. As a result, adapting behavior to religious instructions also depends on the individual's religious commitment. The study also established that individuals with a high level of religious commitment are more environmentally conscious, and vice versa. Another important contribution of the research is that it expands the role of religion in ecologically friendly behavior. According to the study, students' Islamic beliefs about the environment reinforce the role of environmental education in promoting ecologically friendly behavior.
Our findings are also supported by previous research. For example, Djallela et al. [74] discovered that Muslims consume moderately due to resource conservation instructions. Religious consumers are also reported to be less greedy and selfish and more selfless, implying that they are more involved in sustainable and environmentally beneficial deeds. Mohammad and Som [99] reported similar findings; they concluded that religiosity plays a critical role in developing sustainable green consumer behavior. Rice [21] argued that religiosity does affect the environmental behavior of consumers. The current study's findings confirm that Islamic values have a positive moderating effect on green ecological lifestyles. However, it is worth noting that religious values are highly individual in nature, with each individual's religiosity being quite distinct from the next [68].

Limitations and Future Directions

There were a few limitations in the current study that could be addressed in future research. Our study's scope was limited in terms of the targeted population, which could be expanded by including working professionals, homemakers, and adults from various walks of life. The current study focused exclusively on university students, a small proportion of the population; additionally, their behavior in real life may be influenced by the lifestyles of ordinary individuals. Additionally, people's green ecological behavior patterns at various stages of life and in various professions could be compared in the future, which may yield very positive results. The current study's findings can be compared to those from other parts of the world with diverse religions and cultures. This will aid in comprehending cultural and behavioral differences in the global adoption of green ecological behaviors. One of the study's limitations was that the model was tested using Islamic values for Muslims; as a result, the model can only be applied to a subset of the population. Furthermore, the current study's findings are based on cross-sectional data, making causal inferences more difficult. Future researchers might find it more useful to examine the interplay between the study variables using a longitudinal design. The experimental research approach required for measuring environmental responsibility is missing in the current study [100]. Although COR theory can be extended to this study, additional research is required to test COR theory on the environmental-responsibility-based reinforcement process in green ecological lifestyles by incorporating additional antecedents into the proposed theoretical model.

Conclusions

In conclusion, this study demonstrated the critical role of environmental education, including formal and informal education, in developing students' ecologically friendly behaviors. Environmental education creates a sense of environmental responsibility in the context of the Islamic values code of conduct, which also prohibits its followers from causing environmental damage. The current research findings align with previous research, namely that environmental education guides students to adopt environmentally friendly behaviors [101][102][103][104]. This study found that students developed environmental responsibility through environmental education and tended to engage in eco-friendly behaviors. Additionally, Islamic values contribute to students' development of ecologically friendly behaviors.
Students with a high level of Islamic values were found to live more environmentally friendly lifestyles than those with a low level of Islamic values. As a result, it is suggested that higher education institutions should promote both formal and informal environmental education to foster green and sustainable behavior. Our findings also encourage higher education institutions to consider how they might encourage religious values in ways that are legally and culturally appropriate and could enhance ecologically friendly behavior. This conclusion is also supported by prior research indicating that environmental education results in environmentally friendly behaviors [2,3]. Additionally, environmental education must emphasize the dissemination of information, the promotion of ecocentrism, and a desire to conserve nature.
Hydrogen isotope separation using graphene-based membranes in liquid water

Hydrogen isotope separation has been effectively achieved using gaseous H2/D2 filtered through graphene/Nafion composite membranes. Nevertheless, deuterium scarcely exists in the form of gaseous D2 in nature; it is found in liquid water. Thus, separating and enriching deuterium from water is a more feasible route. Herein we have successfully transferred monolayer graphene to a rigid and porous polymer substrate, PITEM (polyimide track-etched membrane), which avoids the swelling problem of the Nafion substrate and preserves the integrity of the graphene. Meanwhile, defects in large-area CVD graphene could be successfully repaired by interfacial polymerization, resulting in a high separation factor. Moreover, a new model is proposed for the proton transport mechanism through monolayer graphene based on the kinetic isotope effect (KIE). In this model, graphene plays the significant role in the H/D separation process of completely breaking the O-H/O-D bond, which maximizes the KIE and thereby enhances the H/D separation performance. This work suggests a promising application of monolayer graphene in industry and offers a sharper understanding of proton transport through graphene.

Large amounts of heavy water are used as the neutron moderator in nuclear fission reactors [1][2]. Heavy water is also widely used in laboratory spectroscopy and dynamics research, such as neutron scattering [3][4][5], isotope tracing [5][6][7], and as a solvent for proton nuclear magnetic resonance spectroscopy 8. Due to its low abundance (~156 ppm) in nature, methods to obtain heavy water with high purity are urgently required for industrial and research applications 9. There are two traditional industrial methods to obtain heavy water: the Girdler-Sulfide process 10 and cryogenic distillation at 24 K 11. However, both methods are relatively complicated, costly, and time-consuming due to their quite low separation factors (below 2). Therefore, a novel technology is needed that could separate and enrich heavy water efficiently and economically. Recently, graphene has been reported to be a feasible and effective solution for heavy water enrichment. With the aid of graphene, the separation ratio can reach as high as α~10 and the energy consumption can be reduced to as low as 20 GJ/kg 12, making it a more economical route for heavy water enrichment. Pristine monolayer graphene can be visualized as a sieve that only allows protons and deuterons to pass through. Nevertheless, deuterons transport more slowly due to their lower zero-point energy, which results in the separation of H and D. Monolayer graphene has a high protium-to-deuterium (H/D) separation factor of around 10, which was first demonstrated by Lozada-Hidalgo et al. in 2016 with micro-sized mechanically exfoliated single-layer graphene 13. Subsequently, Zhang et al. measured by mass spectrometry the H/D separation ratio (~8) of macro-sized (~1 square inch) graphene fabricated by chemical vapor deposition (CVD) 12. CVD is feasible for synthesizing graphene of large size (even square meters), which is more useful for practical applications. However, defects will inevitably be introduced into graphene during the growth process [14][15], as well as during the transfer process 16. Unfortunately, these defects reduce the H/D separation ratio.
Thus, a defect-fixing technique is essential to achieve an excellent H/D separation ratio, especially for large graphene membranes. The H/D separation in the studies mentioned above was achieved using gaseous H2 and D2 in the laboratory. In practice, it should be noted that deuterium does not occur as gaseous D2 in nature but in liquid water, in the form of D2O or HDO. Thus, the separation and enrichment of deuterium from water is a more practical route than working with gaseous H2 and D2. Graphene needs to be transferred to a suitable substrate such as Nafion for this application. Nearly all previous studies employed Nafion as the substrate of graphene for gaseous H/D separation, owing to the good transport ability of the channels in Nafion for protons and deuterons. However, this technology encounters a stability issue for graphene devices in aqueous solution: the Nafion substrate swells up to 15% after hydration in the wet state 17, causing tearing and breakage inside the monolayer graphene. Herein, in this work, the strategy to avoid the swelling issue is to replace Nafion with a porous polyimide track-etched membrane (PITEM) with a swelling percentage of only 0.329% (100 RH%) 18. The PITEM substrate can effectively protect graphene from damage in liquid water. Furthermore, the strong adhesion between graphene and the PITEM surface ensures the high integrity of graphene after the transfer process and the long-term stability of the device 19. The track-etch technique used on PITEM ensures unhindered passage for protons through the artificial pores in the membrane. Moreover, in this work, interfacial polymerization (IP) was carried out to seal the defects and cracks created during the growth and transfer processes, to improve the integrity of the large-size graphene. As a result, the H/D separation ratio was markedly increased to 8.6 for IP-sealed graphene membranes. H/D separation by graphene was reported to originate from the zero-point energy difference (60 meV) between proton and deuteron. However, this theory could not explain the role of graphene in the process, because the zero-point energy difference is related only to the vibrational frequencies of the O-H and O-D bonds and does not involve any of graphene's properties. To better understand the role of graphene in this separation process, a more specific model was proposed based on kinetic isotope effect (KIE) theory. Based on the new model, we predicted that the H/D separation ratios for water electrolysis with and without graphene are 11.2 and 4.9, respectively, in agreement with the experimental data (8.6 and 4.0). This model implies the critical role of graphene in the isotopic separation process: it completely breaks the O-H and O-D bonds to achieve the maximum KIE and, as a result, a high H/D separation ratio.

RESULTS AND DISCUSSION

CVD graphene on a porous PITEM substrate. Polyimide track-etched membrane (PITEM) is a type of nuclear track membrane with uniform pores and vertical channels (Figure 1), used in laboratory filtration, water filtration, cell culture growth, and environmental studies [20][21][22]. Polyimide also possesses good chemical stability and adhesion to graphene 19. Here, PITEM with a hydrophilic surface is employed as a porous substrate to prevent damage to the graphene.
Graphene grown on copper foil by CVD was transferred to PITEM using a poly(methyl methacrylate) (PMMA)-assisted transfer method 23, and the graphene/PITEM composite membrane is referred to as PITEM-G. After transfer, the graphene was characterized by Raman spectroscopy, scanning electron microscopy (SEM), and atomic force microscopy (AFM) (Figure S1). The Raman spectrum showed a G/2D peak area ratio of 0.31, slightly higher than that of defect-free monolayer graphene (0.25), indicating the successful transfer of graphene 24. The stability of PITEM-G in water was tested and compared with a graphene/Nafion membrane (Figure S2). The graphene on PITEM-G still kept high integrity after 48 hours of immersion in water, while the graphene/Nafion sample was full of cracks and defects due to the swelling of Nafion. It can be seen from Figure 1(a,b) that the existing transfer process cannot guarantee that the graphene adheres tightly to the surface of PITEM in all regions, which is a negative factor for graphene integrity. The defects formed during the transfer process have to be repaired to ensure the high H/D separation performance of graphene. The interfacial polymerization (IP) 23,[25][26] technique was then employed to seal defects and cracks in the graphene. The monomers for IP are organic-soluble trimesoyl chloride (TMC) and water-soluble m-phenylenediamine (MPD), which contact and reactively polymerize at the water-organic interface [27][28][29]. The IP mechanism for defect repair is shown in Figure 1(c,d). Specifically, PITEM channels below defects are fully blocked to ensure high separation selectivity. Since intact graphene prevents TMC in hexane from contacting MPD in water, no IP reaction occurs in PITEM pores covered by undamaged graphene. In contrast, the two monomers can penetrate and react at the crack regions, forming a polymer plug inside the PITEM channel. We refer to this membrane as PITEM-G-IP. PITEM (2.12% porosity, Figure 1e) without graphene was also treated with IP (PITEM-IP) to demonstrate that the IP reaction occurs selectively. All the native pores disappeared after IP treatment (Figure 1h), which proves the ability of IP to plug the channels of PITEM. In contrast, the SEM image of PITEM-G-IP (Figure 1g) shows that the surface of the membrane still has many channels that have not been blocked by IP (1.20% porosity after repair), indicating that the IP reaction is only feasible within the defective pore channels and eventually retains the pores covered by graphene for subsequent H/D separation. Moreover, we designed a control experiment to prove that the polymers produced by IP were generated only in the channels of PITEM in regions where the graphene is defective, instead of in all of PITEM's channels (Figure S4). We employed oxygen plasma, which can effectively remove graphene from the composite membrane while keeping the polymers untouched. It is easy to understand that the conductivity (measured by electrochemical impedance spectroscopy in the liquid phase) of PITEM-G-IP is lower than that before IP because of the blockage of pores inside PITEM. After oxygen plasma treatment of PITEM-G-IP, the conductivity increased, which means that the channels covered with graphene were not plugged and were exposed again after the graphene was removed. This is therefore strong proof that IP occurs only in the channels of PITEM that are not covered by graphene.
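A quick back-of-envelope check, under the assumption that the porosity lost on IP treatment corresponds one-to-one to plugged (defect-exposed) channels, shows what fraction of the PITEM channels the quoted porosities imply were sealed:

```python
# Back-of-envelope estimate, from the porosities quoted above, of the fraction
# of PITEM channels plugged by interfacial polymerization (i.e., channels that
# sat under graphene defects). Assumes porosity loss maps 1:1 onto plugged pores.
porosity_before = 0.0212   # pristine PITEM
porosity_after  = 0.0120   # PITEM-G-IP after repair

plugged = (porosity_before - porosity_after) / porosity_before
print(f"~{plugged:.0%} of channels plugged; "
      f"~{1 - plugged:.0%} remain open under intact graphene")
# -> ~43% plugged; ~57% open
```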
Electrochemical impedance spectroscopy (EIS) was then used to evaluate the ionic impedance of the membrane, which represents the flux of ion penetration through the membrane and thus determines whether the graphene repair is effective. We designed the experimental system shown in Figure 2a. The H-cell was filled with 0.5 mol/L H2SO4 as the electrolyte and connected to an electrochemical workstation. The conductivity of pristine PITEM (~50 mS/cm) was slightly lower than that of Nafion-XL 30, which indicates that PITEM can hardly block the migration of hydrogen ions when used in the liquid phase. After IP of PITEM-G, the resistance of the membrane increased dramatically, by 4-fold. Since polymer plugs were selectively produced in the pores, this increase in impedance proves that IP repair can prevent the leakage of hydrogen ions through the defects. Furthermore, the conductivity of the graphene membrane was also used to evaluate the integrity of the graphene: the lower the conductivity, the higher the integrity. To show the high integrity of graphene sealed by IP, a comparison of the hydrogen ion conductivity of the graphene in this work with literature results is shown in Figure 2c. The hydrogen ion conductivity of the repaired graphene was estimated to be 39.1 mS/cm² (effective area 1.77 cm²), which is comparable to the value for graphene in some micro-sized devices (4 mS/cm², 1.96×10⁻⁷ cm², loaded on a silicon chip) and is one to two orders of magnitude lower than the values reported for other graphene devices with defects and cracks 31-36 (at a similar size, the lowest conductivity reported in the literature is 1667 mS/cm², much higher than in our work). This result suggests that IP sealing is very effective at repairing defects and cracks in large-area graphene and achieving highly efficient H/D separation.

H/D separation in liquid water using graphene. The H/D separation ratio of the graphene devices was measured by mass spectrometry. As shown in Figure 3(a), a semi-solid reactor was used to obtain an accurate separation ratio. A platinum mesh electrode was used as the anode. The electrolyte was sulfuric acid solution (pH = 0) with H/D = 1:1. The graphene device (Figure 3b) acted as the cathode, and palladium was deposited on the surface of the graphene to ensure that the protons/deuterons are reduced after passing through the graphene. Once a bias was applied, H2, D2, and HD gas diffused from the backside of the graphene device into a sealed chamber. Subsequently, the H/D separation ratio could be determined. The H/D separation factor was 4.0±0.4 for pristine PITEM, and it increased to 6.0±0.3 for PITEM-G. The former selectivity is the result of the H/D isotope effect of hydrogen evolution reactions over Pt catalysts. Graphene in PITEM-G exhibited a much lower H/D separation factor in aqueous solution than that (>8) under gaseous H2/D2 conditions 12. This poor performance could be attributed to large defects or cracks created in the graphene in liquid water, and thus a dramatic decrease in the separation ratio. In contrast, the separation ratio increased to as high as 8.6 for PITEM-G-IP in aqueous solution, suggesting that interfacial polymerization selectively sealed the defects in the graphene.
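The separation factors quoted above are extracted from the relative H2, HD, and D2 mass-spectrometer signals. The exact expression used by the authors is not reproduced in the text, so the sketch below shows the standard atom-balance form for a 1:1 H/D feed as an assumption, not as the paper's formula:

```python
# Hedged sketch: standard atom-balance separation factor for a 1:1 H/D feed.
# With mass-spec signal intensities for H2, HD and D2, the permeate H/D atom
# ratio (equal to alpha when the feed is 1:1) is
#   alpha = (2*I_H2 + I_HD) / (2*I_D2 + I_HD)
# Whether this is exactly the formula used in this work is an assumption.
def separation_factor(i_h2: float, i_hd: float, i_d2: float) -> float:
    return (2 * i_h2 + i_hd) / (2 * i_d2 + i_hd)

# Hypothetical signal intensities, for illustration only:
print(separation_factor(i_h2=70.0, i_hd=25.0, i_d2=5.0))  # ~4.7
```

It is crucial to elucidate the fundamental mechanism of hydrogen-deuterium separation through monolayer graphene.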
In recent years, theoretical studies have proposed different mechanisms (via hydrogenation [37][38] or protonation 39) and transport pathways (a straight perpendicular path through the center of the hexagonal ring 13, or via defects such as Stone-Wales (55-77) defects [40][41][42]). However, these explanations have not been fully satisfactory. For instance, Hu et al. 43 claimed that proton transport across graphene is a thermally activated process, described by the Arrhenius equation (Eq. 1),

Γ = A exp(−E/kBT)  (1)

where Γ is the transport rate, A is a prefactor, E is the energy barrier, kB is the Boltzmann constant, and T is the temperature. Lozada et al. 13 went further and proposed that the H/D separation arises from the difference in zero-point energy (60 meV) between hydrogen and deuterium, which yields a high H/D separation ratio; the H/D separation ratio (α) can then be derived as (Eq. 2)

α = exp(ΔE/kBT)  (2)

where ΔE = ED − EH = 60 meV (ED and EH represent the energy barriers required for deuterons and protons, respectively, to pass through graphene). Although these two theories are consistent with the experimental data, the explanations focus only on the initial state of the separation process and ignore the role of graphene in the H/D separation process. For the water electrolysis process catalyzed by Pt (without graphene), ΔE is related only to the initial state, and the separation ratio (α~10) can also be calculated from the theories above. However, the experimentally measured H/D separation ratio for Pt-catalyzed water electrolysis is only about 2-6 [44][45][46], much lower than the calculated value. This large deviation between the theoretical and experimental values may come from ignoring the transition states in the separation process. Atoms have zero-point energies only in bound states (e.g., chemical bonding is a binding force). For the H/D separation process, the energy barriers in the transition state for H and D in fact still differ due to the ZPE, which means that the above theories are not applicable to this work.

A new model proposed for proton transport through monolayer graphene. We propose a theoretical model based on kinetic isotope effects (KIE) for proton transport through graphene in this work, which takes into account the ZPE of the transition state. The kinetic isotope effect (KIE) refers to the phenomenon that isotopes react on a catalyst surface at different rates 47. We take the ratio of the H/D reaction rates to equal the H/D separation ratio in this work. To simplify the transfer process of protons and deuterons, we assumed that the energy barrier, excluding ZPE, is the same for protons and deuterons, as shown in Figures 4a and 4c. This assumption is reasonable and reliable because the electronic Schrödinger equation does not involve the mass of the nucleus. In short, the electromagnetic forces on protons and deuterons are identical because they carry the same charge, and thus the energy barrier is the same. Based on this assumption, the H/D separation ratio is determined by the difference in zero-point energy between the initial state and the transition state. In this work, H/D separation by water electrolysis at the surface of Pt(111) without (Figures 4a and 4b, case 2) or with monolayer graphene (Figures 4c and 4d, case 1) is discussed to illustrate the rationale of the new model. The measuring conditions were the same for both cases. The initial state is a water molecule adsorbed on graphene or Pt(111), and ZPEH-start (0.229 eV) as well as ZPED-start (0.167 eV) can be calculated from equations 3 and 4 47:

ZPE = hν/2 = hcṽ/2  (3)

ν = (1/2π)√(k(m1 + m2)/(m1m2))  (4)
Here h is the Planck constant, ν is the vibrational frequency, ṽ is the wave number (obtained by DFT), c is the speed of light, k is the chemical bond force constant (the value of k is the same for both H and D), m1 is the atomic mass of hydrogen or deuterium, and m2 is the atomic mass of oxygen. We first discuss water electrolysis at the surface of Pt. In this case, the protons and deuterons react directly on the surface of Pt(111); the O-H and O-D bonds do not break completely at this step because of the transition state (Figure 4d). So, when the reaction occurs, the H/D atoms need to overcome the original energy barrier (EPt) plus the zero-point energy of the transition state (ZPEH-trans, ZPED-trans). In other words, without graphene the zero-point energy of the initial state is not fully exploited, because of the ZPE of the transition state. According to Equations 2-7, ΔE is 41 meV, compared to 60 meV calculated by Lozada's theory, which reduces the H/D separation ratio from 11.2 to 4.9. The separation ratio of 4.9 is close to the reported values [44][45][46] (α = 2~6) as well as our experimental data (α~4.0±0.4) (Figure 4c). The membrane with graphene prevents direct contact of protons and deuterons with the catalyst, so the H and D atoms need to be separated from the O atoms to pass through the monolayer graphene and reach the surface of Pt to trigger the hydrogen evolution reaction (HER) (as shown in Figure 4b). In this process, the O-H/O-D vibrational frequency decreases to zero because the O-H and O-D bonds are completely broken. According to Equations 2-7, ZPEH-trans = ZPED-trans = 0, and the separation ratio is 11.2. This result also agrees with the value reported in the literature (α~10) and with our experimental data (α~8.6±0.6), which illustrates the model's validity over a broader range of scenarios compared with the previous theory (all the data are collected in Table 1: with graphene, the model gives 11.2 versus 8.6 in our experiment and ~10 in the literature; without graphene, the model gives 4.9 versus 4.0 in our experiment and 2~6 in the literature). The separation ratio of 11.2 estimated by our new model is close to the value of 10 calculated by Lozada's model; the difference between the two values may be attributed to the error of the DFT calculation. According to Eq. 2, the separation ratio α depends only on ΔE. In the case with graphene, ΔE = ΔZPEstart − ΔZPEtrans (Figure 4a), whereas ΔE = ΔZPEstart in Lozada's model, in which ΔZPEtrans is not involved; since the bond breaking brought about by graphene happens to make ΔZPEtrans = 0, Lozada's model gives the correct value in this case. However, Lozada's model is still too limited to explain all transport mechanisms. As mentioned above, ΔE = ΔZPEstart − ΔZPEtrans, and ΔZPEstart is the same (62 meV) for both the without- and with-graphene cases, so it is the difference in ΔZPEtrans (21 meV for the case without graphene, 0 meV for the case with graphene) that produces the profound change in separation ratio. Thus, the transition state plays a decisive role in separating hydrogen and deuterium. The presence of graphene gives rise to a completely different transition state, which effectively increases the separation ratio from 4.9 to 11.2. In order to cross the monolayer graphene and access the catalyst, the H/D atoms are forced to separate completely from the O atoms (complete breaking of the O-H and O-D bonds). In other words, the maximum KIE can be achieved with the aid of graphene. Indeed, it has been reported that the KIE reaches its maximum value when the chemical bonds are broken completely 47, which supports the rationale of our theory.
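The numbers quoted above are mutually consistent, and the consistency is easy to verify. The short check below assumes a harmonic-oscillator picture, so the O-D zero-point energy follows from the O-H value by reduced-mass scaling, and evaluates Eq. 2 at kBT ≈ 25.7 meV (298 K); it reproduces the 0.167 eV, 11.2, and 4.9 quoted in the text:

```python
# Numerical check of the KIE model above: (i) ZPE_D = ZPE_H * sqrt(mu_OH/mu_OD)
# from reduced-mass scaling of a harmonic oscillator; (ii) alpha = exp(dE/kT)
# (Eq. 2). Values reproduce those quoted in the text.
import math

kT = 0.0257                   # eV, thermal energy at 298 K
mu_OH = 1 * 16 / (1 + 16)     # reduced mass of the O-H oscillator (amu)
mu_OD = 2 * 16 / (2 + 16)     # reduced mass of the O-D oscillator (amu)

zpe_H = 0.229                                # eV, DFT value quoted above
zpe_D = zpe_H * math.sqrt(mu_OH / mu_OD)     # -> ~0.167 eV

def alpha(dE_eV):
    return math.exp(dE_eV / kT)              # Eq. 2

print(f"ZPE_D ~ {zpe_D:.3f} eV")
print(f"Lozada's model   (dE = 60 meV): alpha ~ {alpha(0.060):.1f}")  # ~10.3
print(f"with graphene    (dE = 62 meV): alpha ~ {alpha(0.062):.1f}")  # ~11.2
print(f"without graphene (dE = 41 meV): alpha ~ {alpha(0.041):.1f}")  # ~4.9
```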
Compared with the model proposed by Lozada in 2016, our new model takes into account the effect of the transition state and clarifies the role of graphene in the H/D separation process. Furthermore, the new model is more broadly applicable to describing the H/D separation process under different conditions (catalyst, with or without graphene, electrolyte, pH, and so on). Based on this model, we suggest the following perspectives: (i) 2D materials, including but not limited to graphene, can help reach the maximum H/D separation ratio; (ii) chemical conditions that influence the ZPE will affect the maximum H/D separation ratio; (iii) the strategy is equally effective for H/T and D/T separation, or even for isotopes of other elements. All in all, separating hydrogen isotopes with a monolayer-graphene-based membrane to obtain the maximum separation ratio is a promising and effective strategy with the potential to be widely applied.

CONCLUSION

In conclusion, we employed PITEM as the substrate for graphene to avoid the swelling issues of Nafion substrates in liquid water. Meanwhile, the defects in large-area monolayer graphene were successfully sealed by interfacial polymerization (IP). After sealing, the H/D separation ratio improved from 6.0 to 8.6, demonstrating the feasibility of IP repair for graphene breakage. Moreover, this work developed a new model based on kinetic isotope effects (KIE). Graphene prevents direct contact of hydrogen and deuterium atoms with the catalyst, so that hydrogen and deuterium atoms can only pass through graphene when the O-H and O-D bonds are completely broken. As a result, the KIE value is enhanced close to its theoretical maximum (~10). The H/D separation ratios calculated by the model are 11.2 and 4.9 for the presence (α~11.2) and absence (α~4.9) of graphene, respectively. These theoretical values are close to the experimental values (α~8.6 and 4.0), indicating the rationality and universality of our new model.

Membrane design. We designed and fabricated a composite membrane with graphene on the top surface of a porous polymer substrate. Hydrophilic polyimide track-etched membranes (PITEM) (it4ip S.A.) were used as the porous support layer, owing to their well-tailored cylindrical pores with optimized diameters, which provide good mechanical support for graphene while allowing ions to pass through. PITEM also shows good chemical stability and adhesion to graphene 19. The graphene used in this work was grown by chemical vapor deposition (CVD) on Cu foil and transferred to the PITEM membrane via a poly(methyl methacrylate) (PMMA)-assisted process. We then carried out interfacial polymerization (IP) treatment of the composite membrane to repair the defects in the graphene created during the synthesis and transfer processes. Finally, the repaired graphene-based membrane was placed into an electrochemical pump connected to a mass spectrometer to determine the hydrogen-deuterium separation performance.

Synthesis of graphene. Graphene growth was conducted under H2 (99.999% purity, H2O concentration ≤ 3 ppm), Ar (99.999% purity, H2O concentration ≤ 4 ppm), CH4 (99.999% purity, H2O concentration ≤ 3 ppm), and O2 (0.1% diluted in Ar, H2O concentration ≤ 3 ppm). After the growth procedure, the samples were cooled to 700 °C under the same atmosphere as during growth and then moved out of the hot zone under H2 (6-in.
tube furnace) or N2 (A3-size industrial system), according to the literature 48.

The fabrication of the graphene/PITEM membrane. Graphene normally grows on both the top and back sides of the copper foil, and the top side generally shows higher quality. Therefore, the top side was spin-coated with PMMA and then baked at 130 °C for 3 min. Subsequently, the back side of the copper foil was exposed to air plasma for 3 min to remove the residual graphene. Then, the copper foil was etched with 1 M Na2S2O8 solution by floating the whole PMMA/graphene/copper stack on the surface of the solution. The free-standing PMMA/graphene membrane was washed with deionized water and then transferred onto PITEM. After air drying, the PMMA/graphene/PITEM was immersed in acetone at 80 °C for about 10 min to remove the PMMA. At this point, graphene had been successfully transferred to PITEM.

Interfacial polymerization on graphene. Defects and tears in graphene were selectively sealed by interfacial polymerization (IP). IP is a reaction that takes place at the interface between two monomer solutions; the reactants in the different phases react when they contact each other. The reactants can contact and react through the defects and tears in the monolayer graphene, but they cannot pass through intact graphene. As a consequence, the product of IP seals the defects and tears in graphene effectively. We carried out the IP process using an unstirred 7 ml Franz cell with a 20 mm orifice. The graphene-based membrane was placed in the middle of the cell with the graphene side down and the PITEM side up. After clamping the clip on the cell, the upper chamber was filled with 0.05 wt% trimesoyl chloride (TMC) in hexane, and 10 minutes were allowed for the TMC to fully diffuse into the PITEM. Then the lower chamber was slowly filled, without any bubbles, with an aqueous solution of 0.05 wt% m-phenylenediamine (MPD) and 0.2 wt% sodium dodecyl sulfate (SDS), and the reactants were allowed to react for 30 minutes. The reaction was then stopped by pipetting out the TMC from the upper chamber, after which the graphene-based membrane was dried in an oven at 100 °C for 10 min to ensure cross-linking. Finally, the membrane was immersed in deionized water to remove the residual reactants.

O-plasma treatment of the graphene device. To verify that the polymeric plugs formed selectively at the defect regions, we carried out a control experiment measuring the impedance of the membrane before and after O-plasma treatment, which was performed with plasma equipment (CTP-2000 K, Suman, China) (power of ~4.95 W, oxygen flow rate of ~30 sccm, maintained for 1 min).

Electrochemical impedance spectroscopy test. We used electrochemical impedance spectroscopy (EIS) to test the impedance of the repaired graphene-based membrane in acid solution. EIS was carried out in a two-chamber electrochemical cell (H-cell) with two electrodes. The graphene-based membrane was clamped between the two electrode chambers. Two Pt plate electrodes, serving as working and counter electrodes, supplied electrical power for the whole system. 0.5 mol/L H2SO4 was used as the electrolyte. The EIS experiment was conducted using an electrochemical workstation (Zahner Zennium E4) in potentiostatic mode with an alternating-current frequency range from 100 Hz to 1 MHz. The data points were collected on a logarithmic scale. The AC amplitude was set to 20 mV. EIS data were fitted by a nonlinear least-squares algorithm with the aid of the ZView 3.0 software package.

Preparation of the electrochemical pump (graphene device).
The membranes (Nafion, PITEM, PITEM-G, PITEM-G-IP, prepared by the methods described above) were cut to a size of 1.2 cm × 1.2 cm. Subsequently, Pt ion sputtering was performed on the graphene side using an ion sputtering instrument (108AUTO); the sputtering time was 20 seconds. The membrane was then fixed on the center of a titanium mesh (a circle 2 cm in diameter, used as current collector and support) using scotch tape. To ensure that the device has good conductivity and is leak-proof, we used glue (Araldite) to seal the top of the device. The glue prevents the electrolyte from contacting the Ti mesh, avoiding side effects such as water electrolysis; moreover, it improves the mechanical strength of the device and prevents damage to the graphene during testing. After 24 hours, when the glue was fully air-dried, a wire was connected to the Ti mesh using conductive silver paste, which needed a further 24 hours to air dry.

Mass spectrometry measurements. The H-cell was used as the heavy water separation reactor. The PITEM-G-IP device was sandwiched between the two chambers of the H-cell, and one chamber was filled with electrolyte (0.5 mol/L H2SO4 and 0.5 mol/L D2SO4 dissolved in a 1:1 mixture of water and heavy water). The anode was a 1 cm × 1 cm Pt mesh electrode, and the cathode was connected to the wire of the PITEM-G-IP device. The other chamber was hermetically sealed and served as a gas collector connected to a sealed gas bag. A helium leak detector (Leybold Phoenix Quadro) was used to test the separation performance of the repaired graphene-based membrane. The key part of the instrument is a sealed pipeline with a flange and a mass spectrometer. A device with tiny leakage holes was clamped between the flanges to ensure that the gas in the upper chamber could be transported to the lower chamber at a relatively small flow rate. The leakage hole ensures that the lower chamber remains at high vacuum so that the molecular pump of the mass spectrometer can operate normally. The gas in the upper chamber comes from the reaction gas generated by the heavy water separation device and collected in the sealed gas bag. The gas flow rate from the sealed bag into the upper chamber was controlled by a mass flow meter (Horiba Stec S48 32). The small amount of mixture gas passing through the graphene-based membrane was pumped into the mass spectrometer. As the amount of gas in the lower chamber was much less than that in the feed chamber, the hydrogen/deuterium atomic ratio in the feed chamber can be considered 1:1. The hydrogen-deuterium separation ratio can then be determined from the relative amounts of H2, HD, and D2 measured.

Density functional theory calculations. The calculations were performed 49 using projector augmented wave (PAW) pseudopotentials with the gradient-corrected Perdew-Burke-Ernzerhof exchange-correlation functional [50][51]. The valence electronic states were expanded in a plane-wave basis set with a kinetic energy cutoff of 400 eV. In the geometry optimizations, self-consistent field computations were repeated until the sum of the forces acting on the relaxed atoms was below 0.05 eV Å⁻¹. A vacuum thickness of 15 Å between the slabs was applied perpendicular to the surface to prevent spurious interactions between the repeated slabs. The climbing-image nudged elastic band (CI-NEB) 52 method was applied between the initial and final states to determine the transition states, and its results were further confirmed by frequency analysis. The metal surface was modeled as a slab; for Pt(111), four layers with a p(4×4) supercell were used.
The top two layers were relaxed, whereas the bottom two layers were kept fixed. Brillouin zone sampling was carried out using a 1×1×1 Monkhorst-Pack grid, which suffices for well-converged results. The graphene was represented by a single-layer slab with a 4×4 supercell, for which a 3×3×1 Monkhorst-Pack k-point mesh was used. In addition, the effect of van der Waals forces was included by accounting for the weak interactions 53.

ASSOCIATED CONTENT

Supporting Information.

Author Contributions. Xiangrui Zhang and Hequn Wang contributed equally to this paper.
Reply to Oppersma et al.

1. Marrazzo F, Spina S, Forlini C, Guarnieri M, Giudici R, Bassi G, et al. Effects of trunk inclination on respiratory mechanics in patients with COVID-19-associated acute respiratory distress syndrome: let's always report the angle! [letter] Am J Respir Crit Care Med 2022;205:582–584.
2. de Ryk J, Thiesse J, Namati E, McLennan G. Stress distribution in a three dimensional, geometric alveolar sac under normal and emphysematous conditions. Int J Chron Obstruct Pulmon Dis 2007;2:81–91.
3. Lee HJ, Lee HJ, Lee JS, Kang YN, Koo JC. A built-up-type deformable phantom for target motion control to mimic human lung respiration. Rev Sci Instrum 2020;91:054106.
4. Samimian S, Ashrafi S, Khaleghdoost Mohammadi T, Yeganeh MR, Ashraf A, Hakimi H, et al. The correlation between head of bed angle and intra-abdominal pressure of intubated patients; a pre-post clinical trial. Arch Acad Emerg Med 2021;9:e23.

We thank Dr. Oppersma and colleagues for their interest in our study, in which we described the effects of changes in trunk inclination on respiratory mechanics in mechanically ventilated patients with COVID-associated acute respiratory distress syndrome (ARDS) (1). Reducing trunk inclination from semirecumbent (40° head-of-bed elevation) to supine-flat (0° head-of-bed elevation) markedly (and reversibly) increased the compliance of the respiratory system, owing to increases in both chest wall and lung compliance. We did not measure changes in lung aeration (end-expiratory lung volume [EELV]); however, we think that the most relevant underlying mechanism is the decrease in EELV in the supine-flat position, favoring a reduction in the overdistension of some, most likely ventral non-dependent, lung regions. Of note, similar observations regarding changes in respiratory mechanics were made in patients with "classical" ARDS (2, 3). Moreover, a reduction in EELV caused by a change in trunk inclination from the semi-recumbent to the supine-flat position has been described in "classical" ARDS (2, 3), in mechanically ventilated patients with normal lungs (4), and in spontaneously breathing subjects (5). Dr. Oppersma and colleagues developed a mathematical model aimed at simulating the changes in lung aeration due to variations in trunk inclination. The model consisted of 15 blocks of homogeneous material, with specific properties of either edematous, fibrotic, or emphysematous lung tissue. In addition, the simulated pressure applied to the diaphragm (intra-abdominal pressure) changed according to body position, and the gravitational forces (likely proxies of the pleural and abdominal pressure gradients) were adapted according to the applied angle. The authors simulated several combinations of mechanical properties of the 15 blocks, with lung aeration as the output variable. They found that the combination of an edematous lung with apical emphysema and basal fibrosis predicted the largest increase in lung aeration when changing position from supine-flat to semi-recumbent (in their experiment, up to 30° head-of-bed elevation). As stated above, the change in EELV associated with variations in trunk inclination is well established (2-5). However, we agree with the authors that, in the specific context of ARDS, it might be of interest to understand the pathophysiologic mechanisms underlying the variable change in EELV induced by variation of trunk inclination (2,3). Heterogeneity of lung lesions/density is a key feature of ARDS, and the distribution of lesions might also play a role in this context.
In our study, patients were enrolled early, i.e., a median of 2.5 days after intubation. The timing of our observations therefore makes it difficult to ascertain the relative roles of fibrosis and emphysema. Another aspect worth discussing is that the authors focus on lung aeration, while they did not model (or present) information regarding VT distribution. Besides changing lung aeration, i.e., the starting pulmonary volume, variations in trunk inclination most likely affect the pleural and abdominal pressure gradients. As a result, it is conceivable that the distribution of VT also changes with trunk inclination, ultimately affecting ventilation homogeneity. In conclusion, we look forward to seeing a study in which this interesting computational model is integrated with patient data regarding lung aeration and VT distribution (e.g., electrical impedance tomography). We agree with the authors that this approach could improve our understanding of the effects of trunk inclination on respiratory mechanics in mechanically ventilated patients with ARDS. Author disclosures are available with the text of this letter at www.atsjournals.org.
Recruitment in an indicated prevention program for externalizing behavior - parental participation decisions

Background: Parents are the ones who decide whether or not to participate in parent-focused prevention trials. Their decisions may be affected by internal factors (e.g., personality, attitudes, sociodemographic characteristics) or external barriers. Some of these barriers are study-related and others are intervention-related. Internal as well as external barriers are especially important at the screening stage, which aims to identify children and families at risk and for whom the indicated prevention programs are designed. Few studies have reported their screening procedure in detail or analyzed differences between participants and dropouts or predictors of dropout. Rates of participation in prevention programs are also of interest and are an important contributor to the efficacy of a prevention procedure.

Methods: In this study, we analyzed the process of parent recruitment within an efficacy study of the indicated Prevention Program for Externalizing Problem behavior (PEP). We determined the retention rate at each step of the study, and examined differences between participants and dropouts/decliners. Predictors of dropout at each step were identified using logistic regression.

Results: Retention rates at the different steps during the course of the trial from screening to participation in the training ranged from 63.8% (pre-test) to 81.1% (participation in more than 50% of the training sessions). Parents who dropped out of the study were characterized by having a child with lower symptom intensity by parent rating but, in most cases, higher ratings by teachers. Low socioeconomic status and related variables were also identified as predictors of dropout at the screening (first step) and for training intensity (last step).

Conclusions: Special attention should be paid to families at increased risk for non-participation when implementing the prevention program in routine care settings.

Trial Registration: ISRCTN12686222

Background

Research literature on the prevention of children's disruptive or externalizing problem behavior provides increasing evidence for the global efficacy of multifaceted intervention packages aimed at children who are at increased risk for the development of antisocial behavior [1]. However, one specific problem in investigating such programs is the recruitment to the program itself. Different studies provide differing amounts of information about the process of recruitment. The various steps in the decision process, especially those of parents, are of particular interest because they can show whether recruitment to a certain study was selective. As a result, the findings of the study would be biased according to the criteria of the CONSORT group [2], who demand transparency at every step in the reporting of randomized trials. For future trials of indicated prevention programs, as well as for the clinical implementation and dissemination of such programs, it is important to know the barriers to participation. Two main types of barrier may influence parental participation decisions:

1. Study-related barriers, which have their origin in the demands of controlled efficacy studies. Examples include the number of assessment instruments used and the time required to fill out the questionnaires, randomization, and a lack of trust in data protection procedures.

2.
Intervention-related barriers, which might even influence the indication procedure (screening), as such a step is a type of intervention in itself. Such barriers will be of special importance during later steps, when the training itself is offered.

This study analyzes the process of recruitment within an efficacy trial of the indicated Prevention Program for Externalizing Problem behavior (PEP) [3][4][5]. Before describing the methods and results of our study, we review the findings of other efficacy and effectiveness studies that dealt with the same kind of problems (children's externalizing behavior) on a comparable level (indicated or secondary prevention) in young children. However, it is not easy to compare data on study flow/participation between studies because of differences such as the accuracy of data collection/reporting or the community's data protection rules. Only a few studies have reported findings on differences between participants and dropouts, or analyzed predictors of dropout. In a study of preschool children at kindergarten, Barkley and coworkers [6] could not estimate the proportion of their sample with disruptive behavior relative to the total number of registrants. Overall, 288 of 3100 children screened via parent rating had scores above the 93rd percentile. Also, 158 (92.9%) of the 170 parents (59.0%) who accepted the invitation to participate in the project were randomly assigned to one of the two treatment groups that included parent training. Of these 158 parents, 66.7% attended at least one session of parent training, and only 13.3% attended between 9 and 14 sessions. Comparisons between parents who did and did not attend at least one session showed that non-attendees were less well educated and rated their child's behavior as less inattentive and aggressive (using the Child Behavior Checklist, CBCL) at the initial evaluation. Sonuga-Barke and coworkers [7] screened a total population of 3051 children at the 3-year developmental check and identified 286 children (9.4%) at risk for Attention Deficit Hyperactivity Disorder (ADHD). The parents of 105 of these children (36.7%) agreed to take part in the second step of the screening (clinical interview for ADHD), and 78 of these parents (74.3%) were included in the trial. Except for the comparison between those who declined and those who agreed to take part in the second step (decliners had slightly less severe symptoms), no findings about selectivity were reported and no information on participation was given. In their effectiveness trial of Webster-Stratton's 10-week parenting program in a general population sample of parents, Stewart-Brown and coworkers [8] did not report details of the target sample, but mentioned a parental response rate of 69.4% for the parents of 2-8-year-old children registered with three general practices. Of the 387 parents who identified one child with worse behavior (i.e., a rating above the median of the Eyberg Child Behavior Inventory) and who were invited to enter the trial, 116 (30.0%) consented. The parents who participated did not differ from those who refused to participate in terms of their social class, but were more likely to have a child whose behavior scores were in the clinical range on the Eyberg Inventory (39.4% vs. 29.5%). The authors concluded that the approach they used seemed to reach those in need.
Thirty-four (56.7%) of the 60 parents in the intervention group attended at least half of the sessions, which was comparable to attendance rates in parenting programs for both high-risk and clinically indicated samples. As the dropout rate was higher among parents of older children, the authors concluded that the optimum child age for invitation to this program was likely to be 2-3 years. Another randomized controlled trial [9] investigated the efficacy of the Webster-Stratton 14-week group program in children (aged 2-9 years) referred for help with conduct problems (n = 158). A total of 121 primarily low-income families with parents who were able to attend group sessions and communicate in English met the inclusion criteria and were invited to participate in the study during a home visit by group leaders; 34 parents (28.1%) were unwilling to participate. The remaining 87 families were randomized to the intervention group (n = 44) or the wait-list group (n = 32); 11 were excluded from the analyses because they were randomized to a previously planned third arm of the study. All eligible parents who agreed to the research during the initial home visit consented to participate in the trial. Most parents in the intervention group participated in more than 5 sessions; 32 (72.7%) of 44 parents participated in 6 to 14 sessions. Hutchings and coworkers [10] investigated the 12-week group-based Webster-Stratton Incredible Years basic parenting program in a real-world setting. Of 240 families with children aged 3-5 years approached by health visitors because of problem behaviors, 178 (74.2%) were contactable and interested in participating in the screening. Of these 178 families, 164 (92.1%) fulfilled the eligibility criteria and 153 (93.3%) participated in the baseline assessment interview. The authors did not discuss possible reasons for loss of families before this step of the study and conducted intention-to-treat analyses from the baseline assessment onwards. Mean attendance was 9.2 (SD 3.2) of 12 sessions (rate 76.7%) for the 86 participants of the intervention group (n = 104) who completed the follow-up assessments. In the Early Risers effectiveness study, August and coworkers [11,12] investigated an indicated prevention program aimed at aggressive children and their parents, or at the children alone. A liberal gender-specific cutoff (t ≥ 55) was chosen. In two consecutive yearly cohorts (preschoolers and first-graders), a total sample of 2112 children was screened before obtaining informed consent. Children or parents who did not speak English sufficiently well to complete the questionnaires were excluded from the study. Of the children screened, 819 (38.8%) were indicated and, of these children, 371 (45.3%) were recruited to the longitudinal study. The main reasons for loss of families were change of residence or refusal because of the possible time commitment for the intervention group. Of the families recruited, 327 were assigned to two intervention groups and 44 to a control group (normative sample). Differences between those who were retained in the study and those who officially withdrew were calculated for the course of the different interventions, but not for the earlier steps of the study. No differences were found in age, gender, grade, ethnicity, IQ, female caregiver's age, SES, number of siblings living with the child and, most importantly, severity of initial aggressive behavior. However, more of the retained children came from single-parent households.
Dropouts from the child-intervention-only group had significantly higher aggression scores than those from the control group. Treatment barriers have also been analyzed in universal prevention studies. Heinrichs and coworkers [13] analyzed barriers to research and program participation in a universal prevention program (Triple P) for child behavior problems in Germany. They reported a target sample of 915 eligible participants; 282 families (30.8%) were enrolled in the project. Analyses of the social structure within the sample, determined by an objective kindergarten social structure index (OKS), showed that preschool areas with few social structure problems (high OKS) were overrepresented compared to areas with moderate or low OKS. Because each preschool teacher team was asked to rate each family in their group on a number of sociodemographic variables, it was possible to analyze reasons for attrition. Logistic regression showed that parents from single-parent homes were 1.56 times more likely to participate after controlling for occupation, social status, number of family members, and parental age. Parents with low or medium SES were less likely to participate after controlling for other variables. Forty percent of the non-participating families answered questions about their reasons for non-participation and mainly reported assessment-related barriers, such as intrusion into their privacy, as their primary concern (pretest at home visit). Of the 186 families randomized to the intervention group, 144 (77.4%) attended at least one session; most families (89.0%) participated in three or four sessions. Logistic regression of predictors of non-participation (controlling for other variables) found a higher risk for single-parent families and families with low SES, whereas parents who described more externalizing problems were more likely to participate in the training. In this paper, we focus on parental decisions about participation at each step of the efficacy study of the indicated Prevention Program for Externalizing Problem behavior, from recruitment to the intervention phase. At each step of the study, we determined the retention rate, examined differences between participants and dropouts/decliners, and identified predictors of non-participation. Figure 1 gives an overview of the various steps of the study, which has been described in more detail elsewhere [4,5]. In summary, there was a screening, identification of those indicated and eligible for treatment, a pre-test assessment, and randomization to training.

Study course

Public preschools in a German city of about 1,000,000 inhabitants served as the primary recruitment sites and were selected in cooperation with the Department of Youth Welfare of the city. The randomized control group trial of an indicated prevention program required a screening procedure to select the target group. At this step, participation was anonymous, and parents could decline to take part at this and any subsequent step of the study if they were not interested in receiving feedback on the findings of the screening or if they did not want to participate any further. All screening participants who gave consent received a letter with summarized feedback on the screening, and those indicated were informed that project staff would telephone them within the next two weeks to tell them about the pre-test assessment, ask for consent, and fix a date for the assessment.
The pretest included a booklet of questionnaires for both the mother and father, and a home visit lasting up to 3 hours (with intelligence testing of the child, an interview with at least one parent, and a videotaped standardized interaction task with one parent and the child). As compensation for their time and effort, parents were offered €30 for the home visit and an additional €20 for completing the parents' questionnaires. Measures Information that could be used to identify reasons for refusal to participate, especially during the early steps of the process, could be taken from the screening instrument (PEP-Screen, see additional file 1 and 2), which has been described in detail elsewhere [14]. Similar to the study of the Conduct Problems Prevention Research Group [15,16], our screening used 13 items taken from the German version of the Child Behavior Checklist 4 to 18 [17,18], which assesses behavioral and emotional problems using a 3-point scale (0 = "not true", 1 = "sometimes or somewhat true", 2 = "exactly/often true"). An externalizing behavior score was empirically confirmed by factor analyses and showed satisfactory internal consistency (r it = 0.74-0.89). It was calculated from the sum of scores for the following 7 items: item 1 (argues a lot); item 5 (can't concentrate); item 6 (can't sit still or is hyperactive); item 8 (destroys things belonging to others); item 10 (impulsive or acts without thinking); item 12 (physically attacks others); and item 13 (temper tantrums). For co-morbid internalizing problems, items 4 (clings to or is too dependent to adults), 7 (too fearful or anxious), 9 (unhappy or sad), and 11 (pain without good somatic reason) were included. Item 2 (getting teased a lot) and item 3 (demands too much attention) remained in the questionnaire, but only counted in the total score. The sum of parents' and teachers' ratings of the externalizing score was used as the indication criterion with a cut-off at the 88 th percentile of the screening-sample (which was the closest raw-score to the 85 th percentile which we intended to use for cut-off ). In addition, the parents' and teachers' version of the PEP-screen had two global questions for an overall rating of the child's problems: (1) "How much do you feel bothered/burdened by the child's behavior?" rated as No, Yes a bit, Yes medium, or Yes a lot, and dichotomized as yes/no for the logistic regression analyses; and (2) "Do you think you or the child need(s) professional help because of the burden?" rated as Yes or No. When parents did not participate in the screening, some demographic information was available from the teachers' screening (age and gender of the child, parents' language (German/others)). Moreover, in a multiplechoice question, teachers were asked to assess parents' and their own view on the reasons for the parents' decision not to participate (language problems, concerns about data protection, additional free answers). Based on information obtained from the parents' screening questionnaire, SES was estimated as mean of education and profession of both parents, and classified as high, medium or low. Data from the pre-test assessment was another source of information for the analysis of parental interest and participation intensity in the training. Ethical concerns about using data from families who did not give consent were taken into account by using only the teacher's information about the child's behavior in preschool. 
Information about the family was only taken from the parents themselves when consent was given. The independent variables describe the behavior of the child or sociodemographic characteristics of the child or family and, therefore, are representative of the internal factors for parents declining. The external barriers (study- or program-related) in these analyses are represented by the different steps of the recruitment procedure (up to pre-test, beginning of the prevention program). Intervention The intervention consisted of two components: a parent training and a teacher training of 10 sessions each (one session of 90 to 120 min per week) conducted by a psychologist with special training in this intervention. Parents were told that the trainer worked with groups of up to 6 participants, with separate sessions for parents and teachers usually during preschool time, but that a different time and place could be arranged depending on individual needs as far as possible. Moreover, parents were told that homework assignments (practicing strategies individually planned during the sessions) were part of the training, which lasted up to three months and was followed by a post-test. Statistical analysis To analyze the parental decisions, all ranked variables were dichotomized. At every step, the differences between participants and dropouts/decliners for available variables were calculated using t-tests for continuous variables, and χ² tests for categorical variables. All variables available were included in a stepwise logistic regression analysis to determine the set of variables associated with participation versus non-participation at each step of the study. Because significance testing in stepwise logistic regression is of questionable reliability, the analyses were repeated using the "enter" method. That is, all independent variables identified in the stepwise analysis were entered into the model at the same time, the odds ratio (OR) of each variable in the model was estimated (including the 95% confidence interval, CI), and the reduction of the 2-log-likelihood was tested for significance (χ² likelihood-ratio test) [19]. The goodness of fit of the entire model was tested using the Hosmer-Lemeshow chi-square test. Well-fitting models show non-significance on this test, indicating model prediction is not significantly different from observed values. In addition, statistical (non-)significance of the model does not mean that it necessarily explains much of the variance in the dependent variable; it only means that however much it does explain is more than random. Therefore, the odds ratios of the variables in the model will be reported as descriptive measures. Moreover, if the sample gets smaller, as occurs during the course of the project, the test may overestimate the model fit [20]. Results The course of recruitment and retention during the efficacy trial, including screening, indication, pre-test, randomization, and training is shown in Figure 1. Sixty-eight city-funded preschools in the city of Cologne, Germany, were selected by project staff with the aim of having nearly equal representation of districts with different levels of social burden, an indicator supplied by the Department of Youth Welfare of the city integrating different aspects of social burden and need for public youth welfare. Six preschools were excluded for logistic reasons, such as planned closing, rebuilding or planned changes of staff within the next months.
For the remaining 62 preschools, the screening procedure focused on children aged 3 to 6 years who were expected to stay in preschool for at least one year and, therefore, would be applicable for the subsequent steps of the project at least up to the post-test immediately after the training. Screening In the screening procedure, parents could choose between three different levels of participation: (a) to assess their child via the screening questionnaire and give consent for the teacher to forward their name and address to project staff for later contact; (b) to complete the assessment but to remain anonymous; or (c) not to assess their child at all. Teachers collected parents' questionnaires and assessed the children themselves using the teachers' version of the questionnaire. A sample of n = 2845 children was assessed by at least one adult (parent or teacher). Data protection issues prevented us from checking the accuracy of the size of our target sample, but we consider it to be good because the only reason not to include a child at this step was a long-lasting absence from preschool despite being formally enrolled. Half of the sample (50.2%) were boys, the mean age was 4.08 years (SD = 0.86), and different areas of social burden were represented equally. Parents of 2123 (74.6%) children actively participated in the screening procedure. For parents who declined to participate (n = 701), the teachers' information identified that the main language of the declining parents was German for 31.2% and another language for 44.1%; no information concerning language was available for the remaining 24.7%. Of the declining parents, 6.3% mentioned language problems as a reason for their refusal, but teachers suspected this reason in 19.0% of cases. Concerns about data protection were reported by 4.9% of parents and 6.3% of teachers. At this first step of the analysis of parents' decisions, children whose parents agreed to participate (n = 2123) and those whose parents declined participation in the screening (n = 701) - either actively (empty questionnaire) or passively (no feedback at all) - were compared using the available information from the teacher's screening: gender and age of the child, index of social burden of the district (SBD) the preschool belonged to, teacher's burden by the child's problems, teacher's need for help because of these problems, and aggregated scales for internalizing and externalizing behavior, as well as the total score from the screening questionnaire. As Table 1 shows, parents who declined to participate in screening had older children and came from districts with a higher social burden compared with those who participated in the screening. The two groups did not differ significantly on externalizing, internalizing and total problem scores on the teacher screening checklist, and girls and boys were distributed equally. However, for the group that declined screening, teachers reported more need for help. In the stepwise logistic regression analysis, SBD (OR = 1.69; CI = 1.42-2.01) and teacher's need for help (OR = 1.34; CI = 1.09-1.64) were included in the model. The 2-log-likelihood indicated a good model fit (a reduction from 3129.67 to 3085.75; p ≤ .05) and the non-significant Hosmer-Lemeshow test (χ² = 1.83; df = 2; p = .400) indicated no meaningful difference between observed and model-predicted values; therefore, improvement of the classification by including the identified variables in the model could be verified.
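For readers who want to retrace the regression results reported above, the following sketch shows how odds ratios, 95% confidence intervals, and the reduction of the 2-log-likelihood described in the Statistical analysis section are typically obtained. It is illustrative only: the simulated data, the variable names, and the choice of Python with statsmodels are assumptions and are not part of the study.

```python
# Illustrative sketch (not the study's actual analysis): fitting a logistic
# regression for screening participation and reporting odds ratios, 95% CIs,
# and a likelihood-ratio test on the 2-log-likelihood. Data are simulated and
# the predictor names are hypothetical stand-ins for the variables above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2824  # roughly the number of children with screening information

# Hypothetical dichotomized predictors: social burden of the district and
# teacher-reported need for help.
df = pd.DataFrame({
    "sbd_high": rng.integers(0, 2, n),
    "teacher_need_help": rng.integers(0, 2, n),
})
# Hypothetical outcome: 1 = parents declined screening, 0 = participated.
logit_p = -1.4 + 0.5 * df["sbd_high"] + 0.3 * df["teacher_need_help"]
df["declined"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["sbd_high", "teacher_need_help"]])
model = sm.Logit(df["declined"], X).fit(disp=False)

# Odds ratios with 95% confidence intervals (exponentiated coefficients).
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
})
print(or_table.drop(index="const"))

# Likelihood-ratio test: compares the log-likelihood of the fitted model with
# the intercept-only model (the "reduction of the 2-log-likelihood").
lr_stat = 2 * (model.llf - model.llnull)
print("2-log-likelihood reduction:", round(lr_stat, 2), "p =", round(model.llr_pvalue, 4))
```

The same pattern (fit the model, exponentiate the coefficients, and compare log-likelihoods against the intercept-only model) applies at each subsequent step of the analysis reported below.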
Indication - eligibility Of the children screened, 260 (12.2%) were defined as being at risk for developing more severe problems and indicated for the intervention (see Figure 1). Mean age of these children was 4.17 years (SD = 0.85), 73.8% were boys, and the different areas of social burden were represented equally. Of the indicated families, 243 (93.5%) agreed to their address being forwarded and were defined as eligible. Pre-test One hundred and fifty-five families (63.8% of those eligible) agreed to participate in the pre-test step of the study; 22.1% were single-parent families and mean age of the mothers was 33.26 years (SD = 6.03). Because this information was only available for participants, comparison with those who declined was not possible. Variables that were significantly different between the two groups at pre-test are summarized in Table 1. Declining parents (n = 88) rated their child's behavior (at screening) as showing less externalizing problems compared with participating parents, and (therefore) felt less burden and less need for help. In contrast, teachers' ratings of children's behavior showed higher ratings in externalizing and total problems in the declining group, but no significant difference in their felt burden or need for help. In the stepwise logistic regression, only parents' burden by child (OR = 0.34; CI = 0.19-0.61) was included in the model. The 2-log-likelihood significantly decreased (from 314.33 to 300.19; p ≤ .05) by including the variable, and the Hosmer-Lemeshow chi-square test could not be performed for the total model. Readiness for training Participants of the pre-test were randomly assigned to the training and control groups with oversampling for the intervention group using a 3:2 ratio to maintain a large sample of combined parent and teacher training for the efficacy analyses. Thus, 91 (58.7%) families and teachers were defined as the intervention group and received an offer to participate in the training. Children's mean age was 4.20 years (SD = 0.85) and 74.7% were boys. The different areas of social burden were distributed nearly equally in the intervention group, mean SES was 0.72 (SD = 0.72), 25.3% lived in single-parent families, and mean age of the mother was 32.80 years (SD = 6.23). From this step on, teachers could participate in training independently from parents' decision. Parents of 74 children accepted participation in the training and attended at least one of the 10 sessions. The first section of Table 2 lists the variables available for comparison between parents willing to participate and those who declined to participate in the training. Children of declining parents (n = 17) were rated by their parents as showing less externalizing behavior problems in the pre-test (CBCL) and teachers felt less burden by those children in screening. In the stepwise logistic regression, children's internalizing behavior (screening) and externalizing behavior (pretest) as well as gender were included in the model. The odds ratios calculated within the confirming logistic regression (using the enter method) show that parents who described more externalizing behavior were less likely to decline participation in training (OR = 0.88; CI = 0.80-0.96), while parents describing higher rates of internalizing behavior in an earlier step (screening) were more likely to refuse training (OR = 2.00; CI = 1.30-3.08). Parents of boys were also more likely to decline participation at this step (OR = 0.08; CI = 0.01-0.79).
The indicators for the quality of the model were good: the classification of cases slightly improved (from 80.7% to 84.1%), the 2-log-likelihood significantly decreased (from 86.38 to 63.18; p ≤ .05), and the model fit was good as indicated by a non-significant difference of observed and predicted values on the Hosmer-Lemeshow test (χ² = 3.09; df = 8; p = .928). The parents' mean participation rate per training session was 74.9% and Figure 2 shows a gradual decrease during the course of the training from 89.2% (session 1) to 60.8% (session 10). The mean number of sessions attended by parents was 7.5 (SD = 2.7). The corresponding figures for the 91 teachers participating in at least one session were a mean participation rate of 86.1%, ranging from 93.4% for session 3 to 79.1% for session 8 (Figure 3). A mean number of 8.8 (SD = 1.8) sessions was attended by teachers. Table 2 also lists the variables that were significantly different between those parents who took part in at least 6 of the 10 group training sessions (n = 60) and those who participated in fewer sessions (n = 14). The families who participated less showed a significantly lower SES and the children were rated significantly higher on the screening scale for internalizing behavior by their teachers. In the stepwise logistic regression, SES and teacher's burden by the child were included in the model. Only SES showed a significant effect. Discussion Analysis of participation rates at different steps of the decision process may be useful to document the course of recruitment and get hints on the generalizability of the findings of the efficacy study. The main objective of this study was to get information on barriers to participation, as far as the available data allowed, in a randomized clinical trial. This information may be useful for optimizing recruitment procedures for an indicated prevention program. We expected study-related barriers (e.g., investment of time for assessment) to be important in the decision of whether or not to participate in the screening process and in the pre-test assessment. In addition, intervention-related barriers (e.g., confrontation with family problems, time for participating in the training sessions) might be of special importance for the steps following the offer of training. Certainly, we can only compare the rates of decliners within these two sections of the trial. The highest attrition rates were observed for the screening and the subsequent pre-test assessment: one-quarter of all parents invited declined the screening and more than one-third declined the invitation to the pre-test. Since screening is a necessary step in an indicated prevention program, the extended pre-test in this trial can be interpreted as a study-related barrier. In the last step, 20% of parents decided not to take part in the training offered, whereas 80% of those who initiated the training attended more than 50% of the 10 sessions provided. In this trial of the efficacy of an indicated prevention program for children with externalizing behavior problems, nearly 75% of the community sample participated in the initial screening. Findings from epidemiological studies and a few studies similar to the present one (69.4% [8]; 74.2% [10]) suggest this rate can be considered satisfactory. However, the analyses of predictors for declining participation in the screening procedure showed that living in districts with a higher social burden and a higher need for help described by the teacher increases the odds of declining.
This indicates that a substantial group of parents with children at risk for externalizing behavior problems was missing at the screening step. Only a small proportion of parents whose children were indicated were not eligible because they had not given their address (6.5%), but these parents (who were not interested in feedback) described less need for help than parents who participated. The highest attrition occurred at the pre-test, where more than one-third of the sample declined participation. Other studies starting with screening of a community sample have reported similar or higher figures, ranging from 28.1% [9] to 45.3% [12]. The only factor predicting attrition at the pre-test step was a reduced burden by their child as perceived by the parents. In agreement with Stewart-Brown and coworkers [8], we can conclude that our approach seems to reach those (more) in need based on parents' perception. However, from the teachers' perspective, there were trends in the opposite direction (higher scores in aggressive behavior and total behavior problems in those who declined), which may be partly due to a methodological artifact because children with lower scores in the parent's rating must have higher scores in the teacher's rating in order to fulfill the indication criterion of combined parent and teacher ratings. The differences in parent and teacher ratings may reflect real differences in behavior problems in the different settings or they may reflect a rater bias. In the first case, the higher attrition rate of the parents can be interpreted as a consequence of a reduced need for help. In the second case, parents are in need of help but they refuse it since they do not consider their child's behavior as problematic due to a rater bias. In these cases, the first step of a successful prevention program would be to increase the problem perception of the parents, for example by discussing the different perspectives of the parents and the teachers. In agreement with Gardner and coworkers [9], we found lower rates of decliners after the pre-test. Therefore, we can support their conclusion that "when you reach to get behind the doorstep it is much more probable that the families take part in the next steps". Nearly 80% of the invited parents participated in at least one session of the training (and most of them more), which is better than the 66.7% participation rate reported by Barkley and coworkers [6]. But it has to be taken into account that the indication criterion in this study tried to identify a more "clinical" externalizing sample, not only children at risk. Studies comparable to ours did not report basic participation rates for training. Parents were more likely to decline participation in training if they identified less externalizing behavior problems in their child or described more internalizing problems at the screening step. The effect of gender on likelihood for participation in training was small and probably not clinically relevant. The results on readiness for training may be interpreted as showing that those who especially need help from an indicated prevention program for externalizing problem behavior are likely to be included. Some findings of other studies show that this is not self-evident [11,12]. Our results on child symptom intensity correspond to those of Barkley and coworkers [6] but, in contrast to these authors, we did not find that parents' education or other SES-related variables were predictive of parent compliance in starting treatment (i.e., readiness for training).
At this step one can assume that the main study-related barriers had already been overcome, whereas parental decisions about participating may be influenced by the training itself and related reasons (program-related barriers). A high proportion of parents regularly participated in the training (took part in ≥6 sessions), which is comparable to other well implemented programs (e.g., Webster-Stratton's "Incredible Years" [8,10]). The teachers' participation rate in the training was consistently higher than that of parents, but teachers could participate during their work time. Moreover, the program dealt with their professional and not their private/personal circumstances. The finding that families of lower SES had more problems in regular participation is consistent with that of Heinrichs and co-workers [13] in their investigation of universal prevention. It also corresponds to our finding that SBD (which may be an indicator of SES) correlates with screening participation in the first step of our analysis. That is, parents with a lower SES have a higher risk of declining screening and of less frequent participation in the treatment process compared with parents with a higher SES. Therefore, trainers should be aware that lower SES parents may need extra support to continue with the training. Individual reasons for missing a single session were not investigated systematically but may be associated with problems in practical organization (e.g., time, health, transport), attitudes towards the training (rank of importance), or experiences with the training (i.e., boring, not helpful, difficult). As Heinrichs showed in a trial with families from social disadvantaged areas focusing on different ways of recruitment, payment for participation was helpful in increasing the participation rates in a universal prevention program [21]. Satisfactory rates of participation in training showed that the program itself is well accepted, but the association with SES is alarming and sends an important message to trainers to pay special attention towards keeping low SES parents in the program. Moreover, teachers of children whose parents showed up to the sessions with less frequency more often reported need for help. This is related to the lower participation rates of parents of children with lower problems as rated by parents but higher problem scores as rated by teachers at pre-test. Limitations The results of these analyses are influenced by the criterion we used for indication. The combination of parent and teacher ratings was used to identify children with the highest risk. An alternative definition of this criterion (i.e., high scores in both settings) may have led to other results. In contrast to Heinrichs and coworkers [13], our analyses focused on variables gathered in the "natural" process of data collection. For project economic reasons, we only used a reduced "special" dropout questionnaire and did not compel the teachers to answer questions about parents not participating in screening. For the same reason, we did not systematically ask parents declining at each of the subsequent steps, especially pre-test and training participation, for their specific reasons for declining. At least one variable was included in the logistic regression model at each step. For some models it was not possible to calculate the goodness-of-fit tests. However, statistical (non-)significance alone might not be sufficient for defining important predictors because the sample size was quite large at least at the first steps. 
However, the ORs were low, indicating that other factors might be more important in explaining parental participation decisions.
2016-05-04T20:20:58.661Z
2010-05-28T00:00:00.000
{ "year": 2010, "sha1": "f4a595f265e3651cd879c6be3def7c7de7bda7a2", "oa_license": "CCBY", "oa_url": "https://capmh.biomedcentral.com/track/pdf/10.1186/1753-2000-4-15", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e06848f80c0aa9ffb8559b3812a4bfe31d8e5e5e", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
249934285
pes2o/s2orc
v3-fos-license
PM2.5 Synergizes With Pseudomonas aeruginosa to Suppress Alveolar Macrophage Function in Mice Through the mTOR Pathway High concentrations of PM2.5 in enclosed broiler houses cause respiratory disorders in humans and animals. Pseudomonas aeruginosa (P. aeruginosa) is an opportunistic pathogen that can induce severe respiratory disease in animals under stress or with abnormal immune functions. Alveolar macrophages are lung-resident immune cells that play important roles in lung host defence and immune balance. In this study, the mechanism by which PM2.5 synergizes with P. aeruginosa to damage alveolar macrophage function and induce inflammation was investigated. The results will provide a theoretical basis for improving the poultry breeding environment and preventing the recurrence of infection with P. aeruginosa. Alveolar macrophages were stimulated by PM2.5 collected in an enclosed broiler house and P. aeruginosa. Phagocytosis was determined by the neutral red test. The apoptosis rate and cytoskeleton changes were observed by flow cytometry assays and laser scanning confocal microscopy. Protein levels related to autophagy and the mTOR pathway were detected by Western blotting. The results indicated that PM2.5 in combination with P. aeruginosa could decrease phagocytosis, inhibit autophagy, increase apoptosis, and destroy the cytoskeleton in alveolar macrophages. In addition, alveolar macrophages had significantly increased expression of mTOR pathway-related proteins in response to the synergistic stimulation of PM2.5 and P. aeruginosa. The above results confirmed that PM2.5 in poultry houses synergized with P. aeruginosa to impede alveolar macrophage function and caused more severe respiratory system injuries through a process closely related to the activation of the mTOR signalling pathway. INTRODUCTION Atmospheric particulate matter (PM) is the general term for all solid, liquid and aerosol substances in the atmosphere (Hunt et al., 2003). Microorganisms suspended as single cells in the air form suspended microbial aerosols by adhering to dry solid particles and liquid particles (Schlesinger and Cassee, 2003). With the development of the livestock and poultry breeding industry, the closed intensive breeding mode has gradually become mainstream. However, due to the high stocking density and poor air flow in these systems, a large number of microorganisms in the animal excreta and bedding in the houses and the hair and villi shed by the animals are likely to form PM with complex components. The pathogenic microbial components of such PM can enter the lungs through the respiratory tract and cause inflammation, greatly endangering the health of livestock and breeding personnel (Lawniczek-Walczyk et al., 2013). The particle size determines the specific surface area of the harmful substances absorbed by PM, as well as the depth to which PM enters the respiratory tract. Particulates with a diameter ≤2.5 μm are called PM2.5. These particles have a larger specific surface area and are more likely than smaller particulates to combine with organic compounds, heavy metals, and microbial components in the air. PM2.5 can enter the deep part of the respiratory tract and are not easily excreted from the body. They are deposited in the bronchi, bronchioles, and alveoli of the lungs, and some PM2.5 components can even cross the pulmonary interstitium and enter the blood, seriously affecting the health of humans and animals (Franck et al., 2011). 
The damage caused by PM2.5 to the respiratory system includes respiratory infections, lung function decline, bronchitis, chronic obstructive pulmonary disease, and even lung cancer (Hsieh et al., 2008). In addition, PM2.5 can damage the immune system, cardiovascular system, and reproductive system (Schwartz, 2004). Pseudomonas aeruginosa (P. aeruginosa) is unlikely to cause disease in healthy individuals but tends to cause serious infections in patients with compromised or defective immune functions. P. aeruginosa is a very important pathogen that causes human pulmonary infection. It often causes respiratory tract infections, cystic fibrosis, diffuse panbronchiolitis, pneumonia, and many other diseases (Cachia and Hodges, 2003). P. aeruginosa can also cause disease in poultry, leading to pulmonary infection, septicaemia, and even death, with serious losses to the breeding industry (Walker et al., 2002). Alveolar macrophages are derived from bone marrow and are mainly distributed on the epithelial surface of the bronchial tract and in the alveoli. They are the first immune defence cells in contact with foreign bodies or microorganisms. These macrophages have strong phagocytic ability and can nonspecifically phagocytose a variety of pathogens, playing an important role in host defence and immune balance in the lungs. Under normal circumstances, alveolar macrophages can protect against PM or microorganisms entering the alveolar space through phagocytosis and prevent lung tissue damage through secretion and immune regulation (Byrne et al., 2015). Our previous studies have shown that macrophages significantly increase the expression levels of inflammatory cytokines, such as tumour necrosis factor (TNF)-α, interleukin (IL)-6 and IL-8, as well as nuclear factor-κB (NF-κB) pathwayrelated proteins under the synergistic stimulation of PM2.5 and P. aeruginosa in poultry houses . These findings indicate that high concentrations of PM2.5 can damage the respiratory system and cause secondary infections of P. aeruginosa, resulting in more severe respiratory system damage. This process is closely related to the activation of the NF-κB pathway. It has been reported that there is crosstalk between the NF-κB and mammalian target of rapamycin (mTOR) pathways, and the mTOR pathway is closely related to macrophage apoptosis, autophagy, and phagocytosis (Kim et al., 2007;Zoncu et al., 2011). In this study, we used PM2.5 in poultry houses together with P. aeruginosa to stimulate alveolar macrophages to establish a cell model and detect apoptosis, autophagy, and phagocytosis, as well as the expression of mTOR pathway-related proteins, to elucidate the molecular mechanisms of the mTOR signalling pathway in response to damage by alveolar macrophages and the induction of respiratory tract inflammation by PM2.5 and P. aeruginosa. This study helps clarify how PM2.5 and P. aeruginosa damage the respiratory system of humans and animals and induce an inflammatory response, and the results provide a theoretical basis for the prevention and treatment of secondary infection by P. aeruginosa. Ethical Statement All animal experimental protocols were performed following requirements and management guidelines that were reviewed and approved by the Animal Ethics and Experimental Committee of Ludong University (Permit Number: LDU-IACUC2018007). Mice were injected with anaesthetics (Zoletil 50, Virbac, France) according to the instructions before sacrifice. 
No endangered animals were involved in the experiment, and the pain of the animals was minimized as much as possible during the operation. PM2.5 Sampling and Processing The sampling site was a chicken house in Muping, Yantai, Shandong Province, China (37°25′3.78″N, 121°40′33.29″E). The sampling period was from 24 July 2019 to 24 August 2019. During the experiment, an air PM sampler (ZR-3920, China) was used for sampling at a flow rate of 100 L/min. The sampler was placed 1 m above the ground in the centre of the poultry house for sampling, and the sampling duration was 99 h. PM2.5 was collected with 9 cm × 9 cm waterproof glass fibre filter membranes, and the filter membranes were sterilized by dry heat before sampling. To better evaluate the effect of the microorganisms carried by PM2.5 on alveolar macrophages, we divided the collected filter membranes into two groups, one of which was treated with heat inactivation (PM2.5⁻) and the other without heat inactivation (PM2.5). During the experiment, PM2.5 was dissolved and diluted to the desired concentration with sterile pyrogen-free phosphate-buffered saline (PBS) as needed. Animals Eight-week-old SPF male C57BL/6 mice with a body weight of 24 ± 2 g were purchased from Jinan Pengyue Experimental Animal Co., Ltd., and were used for the experiments. The temperature of the breeding environment was 23-25°C, the humidity was 50 ± 10%, and the photoperiod was 12/12 h. During the feeding period, the mice had ad libitum access to movement, food, and water. Bacterial Culture The P. aeruginosa PAO1 strain was maintained in our laboratory. After activation, the PAO1 strain was cultured and characterized by streaking on CN agar identification plates and incubating at 37°C for 24 h. Then, single colonies were gathered, inoculated into LB liquid medium, and incubated with shaking at 37°C and 170 rpm to expand the culture. During the experiment, the bacterial culture was diluted to the required concentration with sterile pyrogen-free PBS. Isolation and Culture of Alveolar Macrophages After the mouse was anaesthetized and disinfected, the thoracic cavity of the mouse was opened, and its airway was exposed. Prechilled alveolar lavage buffer (1 mM EDTA in Ca²⁺- and Mg²⁺-free PBS) was used for flushing, and the alveolar lavage fluid was collected. The collected alveolar lavage fluid was centrifuged at 250 g for 10 min at 4°C. The collected alveolar macrophages were resuspended in Dulbecco's modified Eagle's medium (DMEM) (containing 10% foetal bovine serum (FBS), 20% L-929 cell culture medium, 1 mM sodium pyruvate, 10 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES), and 1× penicillin/streptomycin) and were seeded into cell culture flasks (Nayak et al., 2018). Neutral Red Phagocytosis Assay The cell concentration was adjusted to 3 × 10⁵ cells/ml, and the cells were seeded into a 96-well plate at 100 μl per well. After incubation at 37°C and 5% CO₂ for 24 h, follow-up experiments were performed. A corresponding volume of sample was added to each group, and the cells were stimulated for 12 h at 37°C with 5% CO₂. The supernatant in each well was discarded, and the cells were washed twice with PBS before the detection of the ability of cells to phagocytose neutral red (Cui et al., 2021). The experimental process was performed in strict accordance with the instructions of the neutral red detection kit (Beyotime).
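The neutral red readout and the group comparisons described later in the Data Analysis section reduce to simple arithmetic. The sketch below is a minimal illustration, assuming that the kit yields one absorbance value per well and that phagocytosis is expressed relative to the PBS control; the absorbance numbers, the wavelength, and the normalization scheme are assumptions for demonstration, not data or procedures from this study.

```python
# Illustrative sketch only: turning neutral red absorbance readings into a
# relative phagocytosis index and comparing groups with one-way ANOVA.
# All values and group labels below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical optical-density readings (e.g., OD at 540 nm) from triplicate wells.
od = {
    "control":    np.array([0.52, 0.50, 0.54]),
    "PM2.5":      np.array([0.41, 0.39, 0.43]),
    "PA":         np.array([0.38, 0.40, 0.37]),
    "PM2.5 + PA": np.array([0.28, 0.30, 0.27]),
}

control_mean = od["control"].mean()
for group, values in od.items():
    index = values / control_mean          # phagocytosis relative to the PBS control
    print(f"{group:10s} index = {index.mean():.2f} ± {index.std(ddof=1):.2f}")

# One-way ANOVA across all groups, as in the Data Analysis section below.
f_stat, p_value = stats.f_oneway(*od.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Planning the stimulation itself: 100 µl of a 3 x 10^5 cells/ml suspension is
# 3 x 10^4 cells per well, so the 10:1 MOI used for the stimulations would
# correspond to roughly 3 x 10^5 bacteria per well.
cells_per_well = 3e5 * 0.1   # cells/ml x ml plated
print("bacteria per well at MOI 10:", int(cells_per_well * 10))
```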
Detection of Apoptosis The alveolar macrophages were diluted to 2 × 10⁵ cells/ml, and the cells were seeded into 24-well plates at 500 μl per well. After incubating at 37°C and 5% CO₂ for 24 h, follow-up experiments were performed. Each group was mixed with a corresponding volume of samples and stimulated for 12 h before subsequent experiments were conducted. The cells were washed twice with PBS and trypsinized, after which the cells were harvested. The experiment was performed according to the instructions of the Annexin V-FITC/PI kit (Beyotime) to detect cell apoptosis by flow cytometry (Engelbrecht et al., 2004). Cytoskeletal Observations The cells were seeded into coverslip-containing six-well plates at 3 × 10⁵ cells/well and cultured for 24 h at 37°C and 5% CO₂ before subsequent experiments. Each group was stimulated with a corresponding volume of sample for 12 h before subsequent experiments. The cells were first fixed on ice for 15 min using a PBS solution containing 3.75% formaldehyde. They were washed and then permeabilized with 0.5% Triton X-100 in PBS for 10 min at room temperature. The cells were rinsed again, and then 200 μl of phalloidin working solution was added and incubated at room temperature for 20 min for staining. The cells were rinsed, stained with 200 μl of 4′,6-diamidino-2-phenylindole (DAPI) staining solution, and observed under a laser confocal microscope (Wang et al., 2019). Western Blotting The cells were seeded into coverslip-containing six-well plates at 3 × 10⁵ cells/well and cultured for 24 h at 37°C and 5% CO₂ before subsequent experiments. Each group was stimulated with the corresponding volume of sample for 12 h before the follow-up experiments were performed. Cells were collected with a cell scraper and lysed by adding RIPA lysis buffer (Solarbio, China) containing protease inhibitors and phenylmethylsulfonyl fluoride (PMSF) for 30 min. Cellular proteins were extracted by centrifugation at 12,000 rpm for 15 min. After the protein concentration was determined with the bicinchoninic acid (BCA) method, equal amounts of protein in each group were loaded onto a sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) gel and transferred to a polyvinylidene fluoride (PVDF) membrane. The membrane was blocked for 2 h with 5% fat-free milk. Rabbit anti-mouse LC3II antibody (Cell Signalling Technology, United States, 1:1,000), rabbit anti-mouse p62 antibody (Cell Signalling Technology, United States, 1:1,000), rabbit anti-mouse mTOR antibody (Cell Signalling Technology, United States, 1:1,000), rabbit anti-mouse p-mTOR antibody (Cell Signalling Technology, United States, 1:1,000), rabbit anti-mouse Akt antibody (Cell Signalling Technology, United States, 1:1,000), rabbit anti-mouse p-Akt antibody (Cell Signalling Technology, United States, 1:1,000), and β-actin antibody (Proteintech, United States, 1:5,000) were incubated for 2 h at room temperature. Goat anti-rabbit IgG Dylight 800 antibody (Cell Signalling Technology, United States, 1:30,000) was incubated for 1 h at room temperature in the dark. Protein expression was detected using a fluorescence imager (LI-COR, United States), and relative grey values were calculated. Data Analysis Statistical analysis was performed using SPSS V17.0 (SPSS Inc., Chicago, IL, United States), and figures were generated using GraphPad Prism version 7.0 (GraphPad Software Inc., San Diego, CA, United States). The data are presented as the means ± SDs of at least three independent experiments.
The statistical significance of differences between groups was determined by the unpaired t test or one-way analysis of variance (ANOVA). Differences were considered statistically significant for p values < 0.05.

PM2.5 and Pseudomonas aeruginosa Exposure Decreases Phagocytosis of Cells To more accurately analyse the effect of PM2.5 and P. aeruginosa on the phagocytic ability of alveolar macrophages, we performed a neutral red phagocytosis assay. The results are shown in Figure 1. Compared with the unstimulated normal cells, the phagocytic ability of cells in the other groups was lower (p < 0.05), and stimulating the cells with PM2.5 and P. aeruginosa simultaneously had the most significant effect on the phagocytic ability of cells (p < 0.001).

FIGURE 1 | Costimulation by PM2.5 and P. aeruginosa can reduce the phagocytic ability of mouse alveolar macrophages. In the experimental groups, the final concentrations of the PM2.5 and PM2.5⁻ samples were 100 μg/ml, and the multiplicity of infection (MOI) between P. aeruginosa and the cells was 10:1 (number of bacteria:number of cells). Each group was stimulated for 12 h with the corresponding sample, and the control group was treated with an equal volume of PBS. The experiment was repeated three times. PBS, phosphate-buffered saline; PM2.5, particulate matter less than 2.5 μm in diameter; P. aeruginosa, Pseudomonas aeruginosa. The data are presented as the means ± SD. One-way ANOVA in SPSS. *p < 0.05, **p < 0.01, ***p < 0.001.

PM2.5 and Pseudomonas aeruginosa Mediate Macrophage Apoptosis In the detection of the effects of PM2.5 and P. aeruginosa on apoptosis using flow cytometry, it was found that the apoptosis rates of the other groups were higher than that of normal unstimulated cells (7.90%), as shown in Figure 2. The apoptosis rate of the PM2.5 + PA group was 34.48%, which was significantly higher than those of the PM2.5 group and the P. aeruginosa group (21.13% and 23.94%, respectively; p < 0.01). This finding indicated that the combined action of PM2.5 and P. aeruginosa was able to significantly promote apoptosis of alveolar macrophages.

FIGURE 2 | The experiment was repeated three times. PBS, phosphate-buffered saline; PM2.5, particulate matter less than 2.5 μm in diameter; P. aeruginosa, Pseudomonas aeruginosa. The data are presented as the means ± SD. One-way ANOVA in SPSS. **p < 0.01; ***p < 0.001.

Effect of PM2.5 and Pseudomonas aeruginosa on the Cytoskeleton of Macrophages The results of laser confocal microscopy (Figure 3) showed that the cells and nuclei of the normal group were morphologically normal, showing a round or nearly round shape with a smooth cell membrane surface. However, in the experimental group, the cell status was abnormal, mainly manifested as polygonal cell morphology, an uneven cell membrane surface, and the appearance of more pseudopodia. Additionally, the nuclear morphology changed, the cytoplasm expanded, and the nuclei of the PM2.5 group showed irregular morphology. In the experimental groups, the changes in cells in the PM2.5 + PA group were the most substantial, and the cells in this group no longer had the morphology of normal cells but had a disordered microfilament structure. This outcome shows that PM2.5 and P. aeruginosa can destroy the skeletal structure of alveolar macrophages, and the effect is greatest when the two act synergistically.

FIGURE 3 | The combined action of PM2.5 and P. aeruginosa can cause changes to the cytoskeleton. After phalloidin and DAPI staining, laser confocal microscopy was used to observe the cytoskeletal changes in cells in each group stimulated by samples for 12 h. In the experimental groups, the final concentrations of the PM2.5 and PM2.5⁻ samples were 100 μg/ml, and the multiplicity of infection (MOI) between P. aeruginosa and the cells was 10:1 (number of bacteria:number of cells). An equal volume of PBS was added to the control group. Red, phalloidin; blue, DAPI; scale bar, 5 μm. The experiment was repeated three times. PBS, phosphate-buffered saline; PM2.5, particulate matter less than 2.5 μm in diameter; P. aeruginosa, Pseudomonas aeruginosa.

PM2.5 and Pseudomonas aeruginosa Exposure Suppresses Autophagy in Macrophages After alveolar macrophages were treated, the protein expression levels of LC3 and p62 in the cells of each group were detected by Western blotting. The expression of p62 protein is shown in Figures 4A,B. The expression of p62 protein was upregulated in all groups except for the PM2.5⁻ group. Among them, the upregulation of p62 was the most significant under the synergistic stimulation of PM2.5 and P. aeruginosa (p < 0.05). In addition, all of the groups of samples had reduced expression of LC3-II to varying degrees. Compared with the blank control group, concerted stimulation with PM2.5 and P. aeruginosa significantly downregulated the expression of LC3-II (p < 0.05). The expression of p62 and LC3-II can reflect the level of autophagy to a certain extent (Du et al., 2019). This finding suggests that combined stimulation by PM2.5 and P. aeruginosa can significantly inhibit autophagy levels in alveolar macrophages.

FIGURE 4 | The expression levels of p62 and LC3 in the cells of each group. The experiment was repeated three times. PBS, phosphate-buffered saline; PM2.5, particulate matter less than 2.5 μm in diameter; P. aeruginosa, Pseudomonas aeruginosa. The data are presented as the means ± SD. One-way ANOVA in SPSS. *p < 0.05, **p < 0.01.

FIGURE 5 | The protein expression level of phospho-Akt in cells of each group. The experiment was repeated three times. PBS, phosphate-buffered saline; PM2.5, particulate matter less than 2.5 μm in diameter; P. aeruginosa, Pseudomonas aeruginosa. The data are presented as the means ± SD. One-way ANOVA in SPSS. *p < 0.05, **p < 0.01, ***p < 0.001.

Importance of Akt/mTOR in the Response to PM2.5 and Pseudomonas aeruginosa Real-time PCR and Western blotting were used to detect the expression of Akt/mTOR signalling pathway-related genes and proteins in alveolar macrophages, respectively. The results of Western blotting (Figure 5) showed that the expression of Akt, p-Akt, mTOR, and p-mTOR was enhanced to a certain extent in all of the groups compared with that in the blank control group, and the protein expression was significantly increased when PM2.5 and P. aeruginosa acted together (p < 0.01). This outcome suggests that compared with stimulation by PM2.5 or P. aeruginosa alone, the combined action of PM2.5 and P. aeruginosa activated the Akt/mTOR signalling pathway to a greater extent.

DISCUSSION In recent years, many studies have shown that the increasing incidence and mortality of lung diseases are closely related to the increase in atmospheric PM2.5 concentrations (Annesi-Maesano et al., 2007). Alveolar macrophages are free immune cells in the lung. They are generally derived from bone marrow and are mainly distributed on the surface of the respiratory tract and alveoli.
They can eliminate foreign bacteria through phagocytosis and digestion. Therefore, they are referred to as the first line of defence against the invasion of foreign microorganisms in the lungs and airways. The lungs mainly rely on alveolar macrophages to phagocytose foreign PM or microorganisms and then break them down and remove them through lysosomes (Allard et al., 2018). Many studies have shown that PM2.5 can significantly reduce the phagocytic ability of alveolar macrophages in vivo and in vitro experiments, and as the concentration of PM increases, the phagocytic rate and phagocytic index of alveolar macrophages significantly decrease (Miyata and van Eeden, 2011). Our experimental results also confirmed that mouse alveolar macrophages had weaker phagocytosis and lower resistance to pathogenic bacteria after inhalation of PM2.5. Therefore, P. aeruginosa had the opportunity to adhere to and colonize the surface of the respiratory tract, which could cause greater damage to the phagocytic function of alveolar macrophages. Apoptosis is active cell death under the control of genes, and it is an inherent programmed biological phenomenon that widely occurs in many physiological and pathological processes of cells. Apoptosis can be induced by physiological stimulation signals or by external environmental factors (Elmore, 2007). Apoptosis maintains the dynamic balance of cell proliferation and cell death in living organisms, thereby maintaining a constant number of cells and normal physiological function in the organism. Abnormal regulation of apoptosis (insufficient activation of apoptosis, inappropriate activation, or inhibition of apoptosis) leads to the occurrence of various diseases (Fleisher, 1997). Many studies have shown that PM2.5-induced pulmonary cell apoptosis is an effective host defence mechanism that can effectively reduce the damage by PM2.5 to the host. Long-term exposure to an environment with a PM2.5 level greater than the hazard threshold value can easily disrupt the apoptotic balance of the respiratory system and induce lung injury (Xia et al., 2017). In the PM2.5-induced acute lung injury model, PM2.5 can stimulate the generation of reactive oxygen species, which activate the phosphorylation site Thr845 of ASK1 and thus activate the p38 and JNK signalling pathways and cause apoptosis of airway epithelial cells, eventually leading to acute lung injury (Ni et al., 2015). The results of apoptosis experiments showed that both PM2.5 and P. aeruginosa could promote the apoptosis of mouse alveolar cells, and concerted stimulation by PM2.5 and P. aeruginosa could aggravate the apoptosis of mouse alveolar macrophages. These outcomes indicate that PM2.5 can induce apoptosis of alveolar macrophages, disrupt the apoptotic balance of macrophages, and damage the defence function of the lungs. Pathogenic bacteria, such as P. aeruginosa, are more likely to colonize the lungs and further induce apoptosis of alveolar macrophages. Excessive apoptosis can lead to severe inflammatory responses and cause lung injury (Wang et al., 2005). The cytoskeleton is a complex network-like structural system that runs through the interior of eukaryotic cells and plays an important role in phagocytosis by macrophages. The cytoskeleton can not only maintain cell morphology but can also participate in cell division and motility, intracellular substance transport, phagocytosis, and other functions, and it has important regulatory functions in all aspects of signal transduction. 
The cytoskeleton is composed of microtubules, intermediate filaments, and microfilaments, and the main component of microfilaments is actin (Moller et al., 2005). The correct rearrangement of cytoskeletal actin is necessary for phagocytosis by macrophages (May and Machesky, 2001). Macrophage membranes undergo morphological changes and extend pseudopodia, lamellipodia, or phagocytic cups under the continuous polymerization and depolymerization of cytoskeletal actin to engulf pathogens and form phagolysosomes, thereby clearing pathogens (Allen and Aderem, 1996; Egami et al., 2017). In this study, we observed cytoskeletal changes in alveolar macrophages using laser confocal microscopy. The experimental results were consistent with the findings of phagocytosis experiments. Combined stimulation by PM2.5 and P. aeruginosa caused the most severe damage to the cytoskeleton of alveolar macrophages and significantly inhibited phagocytosis by alveolar macrophages. This outcome also shows that high concentrations of PM2.5 could lead to impaired rearrangement of the cytoskeleton of mouse alveolar macrophages and that the delayed phagocytosis of P. aeruginosa further affects the cytoskeleton of alveolar macrophages, causing significantly decreased phagocytic ability. Autophagy is an important component of the innate and adaptive immunity of macrophages, playing an important role in regulating intracellular protein and metabolic homeostasis. Appropriate autophagy can increase the defence ability of cells. However, under pathological conditions, the level of autophagy in cells can be changed, and changes in the degree of autophagy can lead to different consequences. Excessive autophagy can lead to cell death, while autophagy dysfunction has been found to be associated with a variety of diseases, including cancer and lung diseases (Shintani and Klionsky, 2004;Nakahira et al., 2011). LC3 is an important autophagy-related protein that occurs in cells in three forms: LC3 precursor, LC3-I, and LC3-II. When autophagy occurs, cytosolic LC3-I is translocated to the membrane of autophagosomes and converted into LC3-II, marking the formation of autophagosomes (Tanida et al., 2008). As the action centre of regulatory proteins and ubiquitinated protein aggregates, p62 is involved in the regulation of protein metabolism and multiple signal transduction pathways and is a ubiquitin-binding protector that isolates harmful proteins. Increased expression of p62 often suggests blockage of autophagic flux (Lamark et al., 2017). Many studies have shown that autophagy is involved in different types of lung injury, such as chronic obstructive pulmonary disease caused by cigarette smoke and ventilator-induced lung injury (Ryter et al., 2010;Gao et al., 2013). PM2.5 exposure can cause different types of cellular autophagy disorders, resulting in a variety of diseases. PM2.5 induces autophagy and apoptosis in human lung epithelial A549 cells through oxidative stress, leading to cell death (Deng et al., 2013). Wan et al. showed that PM2.5 can downregulate autophagy in macrophage RAW264.7 cells by activating the PI3K/Akt/mTOR signalling pathway, increase the concentration of inflammatory cytokines such as IL-6 and TNF-α in the supernatant, and accelerate atherosclerosis in mice (Wan et al., 2021). Our findings showed that concerted stimulation by PM2.5 and P.
aeruginosa could lead to autophagy dysfunction in mouse alveolar macrophages, which could in turn reduce the defence ability of the cells and lead to severe inflammation. In addition, autophagy can inhibit or enhance apoptosis or induce cell death independent of apoptosis, depending on the cell type and the type and duration of stimulation (Delgado and Tesfaigzi, 2013). This study showed that the concerted action of PM2.5 and P. aeruginosa could inhibit autophagy and induce apoptosis. The specific mechanism between the two requires further study. The mTOR signalling pathway is closely related to cell growth, proliferation, autophagy, apoptosis, and cytoskeletal rearrangement. mTOR, a serine/threonine protein kinase, is a central signalling regulatory molecule that integrates various intracellular and extracellular signals and regulates cell growth, metabolism, autophagy, and cytoskeletal rearrangement. Intracellular and extracellular signals (cytokines, growth factors, growth hormone, stress, etc.) can activate mTOR by stimulating the Akt signalling pathway (Zoncu et al., 2011). Akt, also known as protein kinase B, plays very important roles in regulating cell growth, proliferation, migration, and survival (Brown and Banerji, 2017). The mTOR signalling pathway can recruit and activate immune cells to secrete immune factors, thus playing an important role in the development and progression of inflammation (Felger, 2018). Some studies have shown that autophagy is regulated by the mTOR pathway, confirming that activation of the mTOR pathway can inhibit the occurrence of autophagy (Xu et al., 2020). Nicotine can inhibit autophagy and induce apoptosis of human alveolar epithelial cells by activating the mTOR pathway (He Li, 2021). Our study showed that both PM2.5 and P. aeruginosa can enhance the expression of Akt/mTOR proteins, and the enhancement effect is greatest when the two act synergistically. This finding suggests that PM2.5 and P. aeruginosa can destroy the cytoskeleton of alveolar macrophages and affect their phagocytic function by activating the mTOR pathway, thereby inhibiting autophagy and inducing apoptosis. Past studies have reported that there is crosstalk between the mTOR signalling pathway and the NF-κB pathway and that activated Akt can lead to activation of the NF-κB pathway and can enhance the activation of mTOR (Kim et al., 2007;Zoncu et al., 2011). As shown in our previous study, the lung tissues of mice exhibited greater pathological damage when costimulated by PM2.5 and P. aeruginosa, and this treatment increased the expression levels of IL-6, IL-8, and TNF-α through the NF-κB pathway. This finding demonstrated that PM2.5 in combination with P. aeruginosa can aggravate the inflammatory response by activating the NF-κB pathway . The present study suggests that stimulation by PM2.5 in poultry houses in combination with P. aeruginosa can lead to abnormal expression of the Akt/mTOR signalling pathway and damage the function of alveolar macrophages. Moreover, activated Akt further activates the NF-κB pathway, causing the secretion of a large number of inflammatory cytokines, which aggravate the inflammatory response of the respiratory system, resulting in more severe lung injury. CONCLUSION In summary, PM2.5 in poultry houses, together with P. 
aeruginosa, can activate the Akt/mTOR signalling pathway, disrupt the cytoskeleton of alveolar macrophages, affect phagocytosis, inhibit autophagy, induce apoptosis, and jointly regulate the inflammatory process through crosstalk with NF-κB, thereby further activating macrophages to secrete various inflammatory cytokines to produce more intense inflammatory responses. This outcome can explain the pathogenic mechanism by which, after PM2.5 in the poultry house damages alveolar macrophages, P. aeruginosa-induced secondary infection further aggravates the damage to alveolar macrophage function and exacerbates inflammatory responses. This finding provides a theoretical basis for preventing secondary infection by P. aeruginosa.
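As a supplement to the Western blotting procedure described in the Methods, the following sketch illustrates one common way densitometry readings are turned into the relative grey values referred to above: band intensities are normalized to the β-actin loading control and then expressed relative to the untreated control group. The intensity numbers and the exact normalization scheme are assumptions for illustration, not values or procedures from this study.

```python
# Illustrative sketch only: normalizing Western blot band intensities to the
# beta-actin loading control and expressing them relative to the PBS control
# group, one common way to obtain "relative grey values". Numbers are hypothetical.
import numpy as np

groups = ["control", "PM2.5", "PA", "PM2.5 + PA"]

# Hypothetical densitometry readings (arbitrary units) for p-mTOR and beta-actin.
p_mtor = np.array([1200.0, 1900.0, 2100.0, 3300.0])
actin  = np.array([5000.0, 5100.0, 4900.0, 5050.0])

ratio = p_mtor / actin               # correct for loading differences
relative = ratio / ratio[0]          # fold change versus the control lane

for g, r in zip(groups, relative):
    print(f"{g:10s} p-mTOR / beta-actin, relative to control: {r:.2f}")
```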
2022-06-23T15:23:04.465Z
2022-06-21T00:00:00.000
{ "year": 2022, "sha1": "7e987dde6800812ac5ccef166f89aa4e3d5bf325", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2022.924242/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e4bf6a4ddae0c5f532685b8d5ec18354194d975", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2216140
pes2o/s2orc
v3-fos-license
Diagnosis and control of anthelmintic-resistant Parascaris equorum Since 2002, macrocyclic lactone resistance has been reported in populations of Parascaris equorum from several countries. It is apparent that macrocyclic lactone resistance developed in response to exclusive and/or excessively frequent use of ivermectin or moxidectin in foals during the first year of life. The development of anthelmintic resistance was virtually inevitable, given certain biological features of Parascaris and unique pharmacologic characteristics of the macrocyclic lactones. Practitioners can utilize the Fecal Egg Count Reduction Test to detect anthelmintic resistance in Parascaris, and the same technique can be applied regularly to confirm the continued efficacy of those drugs currently in use. In the face of macrocyclic lactone resistance, piperazine or anthelmintics of the benzimidazole or pyrimidine classes can be used to control ascarid infections, but Parascaris populations that are concurrently resistant to macrocyclic lactones and pyrimidine drugs have been reported recently from Texas and Kentucky. Compared to traditional practices, future recommendations for ascarid control should feature: 1) use of only those anthelmintics known to be effective against indigenous populations, 2) initiation of anthelmintic treatment no earlier than 60 days of age, and 3) repetition of treatments at the longest intervals which prevent serious environmental contamination with Parascaris eggs. In the interest of decreasing selection pressure for anthelmintic resistance, horse owners and veterinarians must become more tolerant of the passage of modest numbers of ascarid eggs by some foals. Anthelmintic resistance is only one of several potential responses to genetic selection. Although still only theoretical, changes in the immunogenicity of ascarid isolates or reduction of their prepatent or egg reappearance periods could pose far greater challenges to effective control than resistance to a single class of anthelmintics. Anthelmintic resistance in parasites of horses was first reported approximately five decades ago when various researchers noted that phenothiazine treatment failed to reduce strongylid egg counts [1][2][3]. Anthelmintic resistance in cyathostomin (small strongyle) nematodes has since expanded to encompass nearly universal insusceptibility to benzimidazoles [4,5], resistance to pyrantel salts by nearly 50% of populations in the U.S. [4][5][6], and occasional resistance to piperazine [7]. Lyons et al. [8] recently reported shortened egg reappearance periods and survival of fourth-stage larval cyathostomins following treatment with macrocyclic lactone anthelmintics. These phenomena are considered to be companions and precursors of clinical anthelmintic resistance. Yet, most concerns that parasitologists and equine practitioners harbored about anthelmintic-resistant cyathostomins were mitigated by the fact that small strongyles are generally not serious pathogens in well-managed horses. Concern was amplified into alarm, however, by the first published reports of anthelmintic resistance in Parascaris equorum [9,10]. P. equorum is the most pathogenic parasite of juvenile equids, and can cause poor growth, ill-thrift, weight loss, colic, and death subsequent to intestinal impaction or perforation.
Although the parasitology community was taken aback by the development of macrocyclic lactone (ML) resistance in a non-strongylid nematode, an honest assessment of historical management practices for equine ascarids, with due application of resistance selection theory, should have predicted this circumstance. In retrospect, perhaps the most surprising element about the development of macrocyclic lactone resistance in equine ascarids is that it did not arise until nearly 20 years after the first approval of ivermectin for horses. Horse owners and equine practitioners are now aware of anthelmintic resistance in ascarids, and have numerous practical questions regarding its detection, management, and prevention. The objectives of this paper are to review the current status of anthelmintic resistance in populations of P. equorum, to discuss the biological and management factors which promoted its development, and to offer practical methods of detection, chemical control, and prevention for breeding stables. Life cycle P. equorum (ascarid; roundworm) is a common nematode parasite which occurs in the small intestine of immature horses world-wide. Adult female ascarids lay eggs in the small intestine, and these eggs pass into the environment within the feces of the host. The infective stage is a larvated egg (containing a second-stage larva [L3]); development requires approximately 10 days at temperatures of 25°C to 35°C [11]. Larvated eggs survive in the environment for up to five or ten years, and infection is acquired through inadvertent ingestion of eggs. Larvae emerge from eggs within the alimentary tract of a horse, and migrate through the liver and lungs before returning to the small intestine approximately one month later as fourth-stage larvae (L4). Ascarids mature progressively in the small intestine and achieve patency about 75 to 80 days after infection [11]. P. equorum is one of the rare nematodes which induces absolute acquired immunity. Most horses become immune during the first year of life, so patent ascarid infections are rarely diagnosed in horses over two years of age. Anthelmintic resistance Failures of macrocyclic lactone treatment to decrease Parascaris fecal egg counts were first reported in the Netherlands [9] and Canada [10]. Subsequently, macrocyclic lactone-resistant (ML-R) populations of P. equorum have been detected in numerous countries, including the United States [12,13], Denmark [14], Germany [15], Brazil [16], and Italy [17]. A comprehensive survey of the distribution of ML-R Parascaris populations has not been conducted, but anecdotal reports abound in North America. The initial clinical evidence of macrocyclic lactone resistance consisted of failures of ivermectin (IVM) or moxidectin (MOX) to decrease ascarid egg counts after treatment. To characterize this phenomenon more thoroughly, an efficacy study was conducted in 2005 with 11 foals that had been raised helminth-free. These foals were inoculated orally at 6 weeks to 3 months of age with ~500 larvated eggs of a Canadian isolate of P. equorum that was purportedly resistant to macrocyclic lactone anthelmintics [18]. Six foals were treated orally with ivermectin paste (200 μg/kg), and the remaining five served as untreated controls. Ivermectin treatment did not result in significant Fecal Egg Count Reduction (FECR), and worm numbers at necropsy were decreased by only 22%. This study unequivocally confirmed ivermectin resistance in P.
equorum, and a subsequent study wherein alternating treatments of ivermectin and moxidectin failed to reduce egg counts demonstrated that such resistance involved the entire macrocyclic lactone class [19]. Inherent factors contributing to resistance All currently marketed equine anthelmintics are considered to be "broad spectrum", meaning they have good efficacy (>90%) against four groups of target parasites: large strongyles, cyathostomins, ascarids, and pinworms. Broad spectrum anthelmintics are not uniformly effective against all parasitic targets; invariably, one parasite always requires a higher dosage than the others to achieve efficacy [20]. These hardest-to-kill species are known as dose-limiting parasites (DLPs), and P. equorum is the DLP for most equine anthelmintics. The clearest example of Parascaris as a DLP is seen with fenbendazole (FBZ). In horses, the 5 mg/kg dosage of FBZ is effective against large strongyles, susceptible cyathostomins, and pinworms, but the recommended dosage for removal of Parascaris is 10 mg FBZ/kg body weight. Because the magnitude of difference between an effective dosage and the label dosage is much less for DLPs than for other intended targets, dose-limiting parasites have a lower threshold for the development of resistance. Pharmacologic factors selecting for resistance Macrocyclic lactones are the most persistent anthelmintics used in horses, and effective drug levels may persist in the plasma for days to weeks after a single treatment. Drug concentrations inevitably decline, however, and parasites that are newly acquired during this phase may be exposed to subtherapeutic concentrations as a consequence. Low drug concentrations during the decay phase of persistent products can select for anthelmintic resistance (so-called "tail selection") [21]. In contrast, anthelmintics of the pyrimidine and benzimidazole classes are non-persistent. Resistance of P. equorum to pyrantel pamoate has been reported recently in herds from Texas and Kentucky [12,13]. Pyrantel pamoate resistance was possibly pre-selected by daily use of pyrantel tartrate in some herds for prevention of ascarid and strongyle infections. Pyrantel-resistant ascarids have not been reported outside of North America, which is the exclusive marketing range of pyrantel tartrate for prophylactic use in horses [5]. Ascarid resistance to benzimidazoles has not been reported in North America, perhaps because use of this class has been limited to non-persistent, therapeutic applications. Control practices which select for resistance Anthelmintics are used excessively by many breeding farms, where it is a common practice to administer ivermectin for treatment of suspected Strongyloides infection when foals are less than one month of age. Thereafter, frequent anthelmintic rotation is implemented, and juvenile horses are often dewormed at monthly intervals until their first birthday. Many farms use macrocyclic lactones at least bimonthly in juvenile horses [12]. Because macrocyclic lactones are larvicidal against Parascaris, the refugia within a host is minimized each time an infected foal is dosed. This happens routinely whenever the interval between treatments is shorter than the prepatent period for Parascaris (i.e., 75-80 days).
In addition, susceptible genotypes in the local population are denied an opportunity to reproduce whenever macrocyclic lactone treatments are repeated at intervals which are less than the prepatent period or egg reappearance period, thus minimizing refugia in the environment. Typical parasite control practices for juvenile horses at many breeding operations essentially constitute exclusive and/or excessively frequent use of a single drug class, and thus select intensively for anthelmintic resistance [4]. Transmission among facilities It is likely that macrocyclic lactone resistance arose independently at multiple locations, and may do so again at any facility where traditional control practices are followed. As the prevalence of macrocyclic lactone-resistant ascarids increases, farms are at ever greater risk of inadvertently importing a resistant isolate. The major potential source is foals that harbor immature infections. Fecal examination of such animals would be fruitless because their worm burdens are not yet capable of sexual reproduction. This particular route of dissemination is a great threat to the Thoroughbred industry, which requires that offspring must be sired by natural service in order to be registered. This requirement results in significant traffic of mares, with foals-at-side, to breeding facilities for natural service by a stallion. If a foal acquires a macrocyclic lactone-resistant ascarid infection at the breeding farm, it will transport it back home, and only time will reveal its presence. Treatment of returning foals with ivermectin or moxidectin is ineffective because the target infection is ML-R. Carefully timed administration of non-ML anthelmintics could reduce the number of resulting adult worms, but probably would not eliminate them totally. Detection of resistant isolates Fecal flotation is a simple, inexpensive, and widely available procedure for detecting patent Parascaris infections in horses. Quantitative procedures (e.g., McMaster, Modified Stoll, Sucrose Centrifugation) provide valuable information regarding the magnitude of environmental contamination by individual animals. However, a correlation between egg counts (eggs per gram; EPG) and worm burdens has not been demonstrated for P. equorum, so one may not assume that horses with high egg counts are harboring large numbers of mature ascarids. The Fecal Egg Count Reduction Test (FECRT) is the standard method for detecting anthelmintic resistance in cyathostomin nematodes of horses, but this procedure has not been validated for Parascaris. Nevertheless, FECRT is the only currently available test for quantifying anthelmintic removal of reproducing, adult, female Parascaris from an individual horse. Parascaris FECRT can only be performed with horses that have positive egg counts, and some minimum quantitative standard (e.g., ≥200 EPG) should be established for inclusion in FECR calculations. Enrollment of large numbers of horses in an efficacy evaluation will provide a more accurate representation of the susceptibility status of the resident ascarid population. Following determination of pretreatment fecal egg counts, each candidate is treated according to label directions with the anthelmintic under evaluation, and fecal egg counts are repeated after an appropriate interval. The magnitude of egg count reduction which comprises acceptable efficacy is generally accepted as >90% or >95% FECR. These ranges constitute rough guidelines only, but will have to serve until FECRT has been validated for Parascaris.
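To make the FECR arithmetic concrete, the following minimal Python sketch computes a group-mean reduction with the ≥200 EPG inclusion threshold suggested above. The paired counts and the group-mean formulation are illustrative assumptions rather than a validated Parascaris protocol.

```python
# Minimal sketch of a group-mean Fecal Egg Count Reduction (FECR) calculation.
# The (pre, post) pairs below are hypothetical, not data from the studies cited.

def fecr_percent(pairs, min_epg=200):
    """Percent reduction in mean eggs per gram (EPG) after treatment,
    keeping only horses that meet the pretreatment inclusion threshold."""
    kept = [(pre, post) for pre, post in pairs if pre >= min_epg]
    pre_mean = sum(pre for pre, _ in kept) / len(kept)
    post_mean = sum(post for _, post in kept) / len(kept)
    return 100.0 * (1.0 - post_mean / pre_mean)

# Hypothetical paired counts for six foals (EPG before and after treatment):
counts = [(450, 300), (800, 520), (220, 180), (1200, 760), (350, 240), (600, 410)]

reduction = fecr_percent(counts)
print(f"FECR = {reduction:.1f}%")  # ~33.4% for these hypothetical counts
# Apply the rough guideline quoted in the text: <90% reduction suggests resistance.
print("resistance suspected" if reduction < 90.0 else "acceptable efficacy")
```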
Anthelmintic resistance appears to be a permanent genetic feature of a parasite population, and reversion to susceptibility may never occur. Accordingly, if the resident ascarid population is resistant to a particular drug class, products from that chemical group should never again be used alone for ascarid control on those premises. However, drugs to which ascarids are resistant may retain substantial efficacy against other important equine parasites, such as large strongyles or cyathostomins. Any drug classes that are known to be effective against the indigenous ascarid isolate should be evaluated annually for continued efficacy. Control recommendations Ideally, a decision to administer anthelmintics for removal of P. equorum infections would be based on a positive diagnostic result (e.g., fecal examination) for each animal to be treated. However, confirmation of patency also indicates that the environment is being contaminated with highly persistent ascarid eggs, which confounds the universal objective of parasite control. Ultimately, compromise is unavoidable, and some level of contamination must be accepted because suppressive programs select too intensively for the development of resistance. And, whenever treatment is indicated, it is desirable to use only anthelmintics with known efficacy against indigenous parasite populations. The specter of Strongyloides westeri infection is not sufficient justification for deworming foals with MLs during the first month of life. Strongyloides is relatively uncommon and only occasionally pathogenic. Initial treatment of foals for Parascaris infection should not begin earlier than 60 to 70 days of age, and treatments thereafter should be repeated at the longest intervals which minimize environmental contamination with ascarid eggs. One important feature of ascarid biology that should be considered in scheduling Parascaris treatments is that anthelmintic efficacy apparently increases as the target population ages. For example, oxibendazole (10 mg/kg) was 94% [13] to 100% [22] effective against patent (i.e., mature) ascarid infections when measured by FECRT. However, the same dosage removed only 44.5% of immature ascarids when administered at 28 days postinfection [23]. So, it is logical that anthelmintic treatments would be more effective against ascarids if administered just prior to patency, i.e., at 70 to 75 days post-infection. This knowledge has limited practical application, however, because natural infections "trickle" into the host, with multiple exposures occurring continuously on a daily basis. A foal with a negative fecal result could harbor ascarid populations ranging in age from 1 to 75 days, and anthelmintics directed against such a mixed population would likely remove the older ascarids but demonstrate little efficacy against juvenile worms. Traditional recommendations for ascarid control are to treat foals at bimonthly intervals (i.e., q ~60 days), but this schedule may be insufficiently frequent to minimize the passage of eggs in the feces of some foals. However, deworming more frequently, especially with macrocyclic lactone anthelmintics, minimizes refugia and selects for resistance. It may be preferable to tolerate some level of egg contamination, because a survey in the Netherlands found little ML resistance on farms where foals were dewormed less frequently than at bimonthly intervals [24]. If anthelmintic resistance is not an issue, acceptable efficacy can usually be achieved with any of the products listed in Table 1. 
If ML-R ascarids are present on a farm, benzimidazole or pyrimidine formulations can be administered easily and usually provide good efficacy. Rotation between effective drug classes is recommended [25][26]. Recently, ML-R Parascaris populations that are simultaneously resistant to pyrantel pamoate have been reported from Texas [12] and Kentucky [13]. For these populations, the only remaining, effective drugs are piperazine, fenbendazole, or oxibendazole. Due to the possibility of multiple drug resistance, the continuing efficacy of all drug classes used against Parascaris should be confirmed annually on each farm. Preventing the introduction of an ML-R strain to a farm is particularly difficult to manage, because the infection cannot be detected and efficacy cannot be verified. Furthermore, non-ML anthelmintics have no efficacy against migrating stages during the first month posttreatment, and only partial efficacy thereafter until the population becomes fully mature. A regimen of fenbendazole, 10 mg/kg daily for five consecutive days, represents one possible tool for removing these migrating, immature stages [27]. Although multiple-day fenbendazole is not specifically approved for removal of immature Parascaris infections, it is labeled for larvicidal therapy of migrating large strongyles and encysted cyathostomins. The suggested prophylactic uses of this regimen include treatment of foals when they return with their dams from a breeding facility, or treatment of any juveniles upon first introduction to a new facility. Possible biological changes Anthelmintic resistance is only one manifestation of genetic change in a parasite population in response to various selection pressures. Other biological adaptations are certainly feasible, and some could even impact practical control more deleteriously than drug resistance. For instance, acquired immunity is the ultimate ally in controlling equine ascarids, but if P. equorum isolates with low immunogenicity were to evolve, the challenges of ascarid control could extend to horses of all ages, rather than just juveniles. Variations from the typical host age spectrum have been reported with Oxyuris equi, and altered immunity is one feasible explanation [28]. It is also possible that the prepatent period or egg reappearance period of Parascaris could become abbreviated as a response to frequent anthelmintic treatment. This phenomenon has not yet been investigated in ascarids, but reduction of the egg reappearance period of cyathostomins has been documented as a consequence of anthelmintic selection pressure [8,15,[29][30][31][32]. The present and emerging threats associated with anthelmintic treatment lend particular urgency to the development of sustainable approaches to parasite management which are not exclusively dependent on chemical control. Conclusions The development of anthelmintic resistance in some populations of P. equorum means that casual selection of dewormers must be discontinued, and that treatments can no longer be administered at frequent intervals. In the future, the resistance status of each drug class should be evaluated against local isolates, and efficacy should be reconfirmed at regular intervals. The Fecal Egg Count Reduction Test is a simple procedure which can be adapted for this purpose. Although fecal monitoring will increase the costs of administering control programs, the alternative, i.e., expanding resistance, is unacceptable.
Future management of the entire spectrum of equine parasites lies in the development of sustainable approaches which do not rely solely on anthelmintic treatment.
2017-06-25T03:46:05.325Z
2009-09-25T00:00:00.000
{ "year": 2009, "sha1": "f9d94d45d35902374f225e5b65e30c032e09e134", "oa_license": "CCBY", "oa_url": "https://parasitesandvectors.biomedcentral.com/track/pdf/10.1186/1756-3305-2-S2-S8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "511a34c5b2623482e18cf97fca1f929fdf5b5832", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
195831578
pes2o/s2orc
v3-fos-license
Deep Reinforcement Learning for IoT Network Dynamic Clustering in Edge Computing Processing big data generated in large Internet of Things (IoT) networks challenges current techniques. To date, many network clustering approaches have been proposed to improve the performance of data collection in IoT. However, most of them focus on partitioning networks with static topologies, and thus they are not optimal in handling the case with moving objects in the networks. Moreover, to the best of our knowledge, none of them has ever considered the performance of computing in edge servers. To solve these problems, we propose a highly efficient IoT network dynamic clustering solution in edge computing using deep reinforcement learning (DRL). Our approach can both fulfill the data communication requirements from IoT networks and load-balancing requirements from edge servers, and thus provide a great opportunity for future high performance IoT data analytics. We implement our approach using a Deep Q-learning Network (DQN) model, and our preliminary experimental results show that the DQN solution can achieve higher scores in cluster partitioning compared with the current static benchmark solution. I. INTRODUCTION Edge computing is an efficient solution to process large amounts of data collected from the Internet of Things (IoT). Traditionally, the data collected in IoT are of small size, such as sensing data. Therefore, a non-optimized data collection solution is not likely to affect the performance of edge computing, because the edge servers can re-allocate the data to multiple servers through wired networks, which can easily achieve very high communication speeds in the order of gigabits per second. However, with the fast development of IoT in recent years, the properties of data flows in IoT have changed. Firstly, data items in IoT become increasingly large, such as video, images, etc. Secondly, data comes from not only homogeneous static IoT devices, but also from heterogeneous dynamic IoT devices. To process these data in edge servers in an efficient way, such as by applying advanced parallel computing techniques, we need effective data collection in IoT, since a non-optimized data collection solution could result in performance bottlenecks for parallel computing. In this work, we focus on the load balancing issues in such scenarios. To date, many solutions for data collection in IoT networks and load balancing for parallel computing have been proposed respectively. However, almost all the techniques in the two fields are totally independent from each other, and the research on optimizing the performance on both aspects is limited due to two main challenges. In particular, the network clustering must not only consider the topology of a network, but also the dynamic objects in the network. As a dynamic object moves inside the detection range of IoT devices, such as proximity sensing, an IoT device would produce data triggered by the object. In this condition, the data collection scheduling should be adaptive to the dynamic networks. However, most existing solutions just focus on a static data flow pattern. In this paper, we propose a deep reinforcement learning (DRL) based solution for IoT network clustering. In each cluster, the data produced by cluster-member IoT devices is collected by the edge server located in the cluster. In general, our solution has two main properties. • We assume each IoT device produces regular data of a fixed size periodically.
Our solution partitions the IoT network into clusters to maximize the communication performance of IoT networks and the computing performance of edge servers. • We assume objects move in the area of the IoT network, triggering the neighboring IoT devices to produce data of larger size than the regular data. Our solution adapts the pattern of network clustering to these changes in the collected data. To achieve the goal, we propose a Deep Q-Learning Network (DQN) model. The preliminary experiments demonstrate the feasibility of our solution. The experimental results show that the DQN model improves the system performance significantly in the IoT network with dynamic objects compared with the solution using statically partitioned clusters. The paper is organized as follows. Related work is discussed in Section II. The system model is presented in Section III. The DQN based solution is explained in Section IV. The preliminary experimental results are presented in Section V. Finally, we discuss the future work in Section VI and conclude the paper in Section VII. II. RELATED WORK Cluster based data collection is widely used in wireless networks [2] [3]. Typically, the network nodes are partitioned into several groups, and the member nodes in a cluster send the data to the cluster head. Network clustering is used to optimize the performance of the network, such as energy consumption, communication latency, etc. Most of the network clustering solutions are based on the global information of the network. To cope with the situation without global information, some research focuses on self-adaptive self-organization systems [4]. For example, [5] makes the network converge to the optimized network clustering through the local decisions of each node, using evolutionary theory. To cope with more complicated resource management problems in networks, some studies exploit reinforcement learning (RL). For example, [6] presents a solution that translates the tasks with multiple resource demands into an RL problem. Its experimental results show that the RL model has comparable performance to the state-of-the-art heuristic solutions. [7] presents a resource allocation mechanism based on DRL in vehicle-to-vehicle communications. The solution makes each agent optimize sub-band selection and transmission power to satisfy the latency constraints using distributed local information. III. SYSTEM MODEL The system model includes an IoT network and edge computing servers, as shown in Figure 1. The IoT network consists of homogeneous devices. We assume that the IoT devices are randomly deployed in the area, and every IoT device has multi-hop communication routes to the edge servers. Each IoT device produces data periodically, and transfers messages to the edge servers by multi-hop communication. Several edge servers are deployed to perform parallel computing on the collected data from the IoT network. For collecting data from IoT devices to multiple edge servers, the network is partitioned into clusters, and each edge server resides in a cluster. Each edge server is responsible for collecting the data from the IoT devices of a cluster. We suppose each IoT device sends regular report data of a fixed size periodically to the edge server of its cluster. An object moves in the area of the IoT network, and triggers the neighboring IoT devices to produce sensing data of larger size. For example, the object could be a person moving around. The sensor attached to each IoT device detects the proximity of the person.
Once a person appears in the detection range of an IoT device, the IoT device captures the image of the person and sends image data to the edge server of its cluster. The network clustering depends on the position of the moving object in the IoT network and the data collection requirements. Figure 2 presents an example. To guarantee that the data sizes collected at the edge servers are balanced, the network clustering pattern must change to adapt to the movement of the object in the area. In this paper, we utilize a DRL approach to find the optimized network clustering in the IoT network with dynamic objects. IV. DEEP REINFORCEMENT LEARNING SOLUTION We use a Deep Q-Learning Network (DQN) as the DRL model for dynamic network clustering. We assume that an agent performing the DQN model resides in an edge server, and all the required information for calculating the DQN model is collected by the agent. A. Actions Suppose a set of edge servers P = {p_1, ..., p_i, ..., p_n} resides in the network, with integer i ∈ [1, n]. A cluster-core o_i is set at the position of edge server p_i. To partition the IoT network into clusters in the initialization, we first select the IoT network node that is closest to the cluster-core o_i as the cluster head h_i. After that, we partition the network into clusters C = {c_1, ..., c_i, ..., c_n} by assigning every node to its nearest cluster head, i.e., by partitioning the network into Voronoi clusters. After the initialization, each edge server resides in a cluster. To control the partition of clusters, we move the cluster-cores of the clusters. The action space A is the set of movements of all the cluster-cores. In our implementation, each cluster-core has five movement actions: Up, Down, Left, Right, and Stay. At each time step, the DQN model selects and performs a movement action on a cluster-core. After the movement, the network re-selects the IoT network node that is closest to the cluster-core o_i as the cluster head h_i, and re-partitions the network into clusters based on the new cluster head. An example of moving cluster-cores and partitioning clusters is shown in Figure 3. B. Reward The reward of an action measures how well the resulting clustering satisfies the requirements on both the IoT network and the edge computing aspects, namely the data communication performance of the IoT network and the balance of the collected data loads among the edge servers. The cumulative discounted reward at time t is R_t = Σ_{k≥0} β^k r_{t+k}, where β ∈ [0, 1] is the discount factor for reward r_t and k is the number of time steps from t. C. Q-Value Let s_t denote the input state of the DQN model at time t in the state space S; it is an array concatenated from: (i) the adjacency matrix of the network; (ii) the cluster ID of each node; (iii) the size of the data that is produced at each node. A policy π is a mapping from the state space S to the action space A. DQN aims to find the optimal policy to maximize R_t. The Q-value Q(s_t, a_t) is defined as the reward R_t when taking action a_t in state s_t using policy π. We use the Q-value to measure the quality of a certain action in a given state. Our DQN model uses a fully connected DNN with weight set θ to approximate the Q-function. After taking the action, the agent receives the reward r_t and the IoT network moves to a new state s_{t+1}. We select the action that maximizes the Q-value as the one to be taken in the state s_t. The policy is improved with the standard Q-learning update Q'(s_t, a_t) = Q(s_t, a_t) + α [r_t + β max_a Q(s_{t+1}, a) − Q(s_t, a_t)], where Q' is the improved Q-value and α ∈ [0, 1] is the learning rate. The DQN model updates its DNN at each iteration to minimize the loss function L. V. PRELIMINARY EXPERIMENTAL RESULTS We evaluate the performance of the DQN model in a simulated IoT network.
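To make the mechanics of Section IV concrete before turning to the setup, the following minimal Python sketch implements one action step: random node deployment, Voronoi re-partitioning via re-selected cluster heads, and a load-balance reward. The numeric parameters, the choice of triggered nodes, and the exact reward form (negative standard deviation of per-cluster loads) are illustrative assumptions, and the DQN training loop itself (replay memory, the DNN approximator, and the loss minimization) is omitted.

```python
# Minimal sketch of the clustering action step, with assumed parameters.
import numpy as np

rng = np.random.default_rng(0)
AREA, N_NODES, N_CLUSTERS, STEP = 150.0, 40, 2, 5.0

nodes = rng.uniform(0, AREA, size=(N_NODES, 2))     # random node positions
cores = rng.uniform(0, AREA, size=(N_CLUSTERS, 2))  # cluster-cores o_i
data = np.full(N_NODES, 1.0)                        # regular 1 Kb messages
data[rng.choice(N_NODES, 5, replace=False)] = 10.0  # nodes triggered by the object

ACTIONS = {0: (0, STEP), 1: (0, -STEP),             # Up, Down
           2: (-STEP, 0), 3: (STEP, 0), 4: (0, 0)}  # Left, Right, Stay

def partition(nodes, cores):
    """Re-select cluster heads (the node nearest to each core) and assign
    every node to the cluster of its nearest head (Voronoi partition)."""
    d_core = np.linalg.norm(nodes[:, None, :] - cores[None, :, :], axis=2)
    heads = d_core.argmin(axis=0)                   # head h_i per core
    d_head = np.linalg.norm(nodes[:, None, :] - nodes[heads][None, :, :], axis=2)
    return d_head.argmin(axis=1)                    # cluster id of each node

def reward(cluster_of, data):
    """Higher when per-cluster data loads are balanced (assumed reward form)."""
    loads = np.array([data[cluster_of == c].sum() for c in range(N_CLUSTERS)])
    return -loads.std()

def step(cores, core_id, action):
    """Apply one movement action to a cluster-core and re-partition."""
    new_cores = cores.copy()
    new_cores[core_id] = np.clip(new_cores[core_id] + ACTIONS[action], 0, AREA)
    cluster_of = partition(nodes, new_cores)
    return new_cores, cluster_of, reward(cluster_of, data)

cores, cluster_of, r = step(cores, core_id=0, action=0)  # move core 0 up
print(f"reward after action: {r:.2f}")
```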
The network is deployed in a square area of 150m×150m. The nodes are randomly scattered in the area. As a preliminary experiment, we test the model with 40 nodes and 2 clusters, 60 nodes and 3 clusters, and 80 nodes and 4 clusters, respectively. The data transmission range of the nodes is 40m, 35m, and 30m for 2, 3, and 4 clusters, respectively. The detection range for the object is two times larger than the transmission range. We set one dynamic object in the IoT area. The speed of the object is 1m/s, and it is initialized at the position of a random cluster head. We randomly select another cluster head as the moving direction of the object. The object moves in that direction until it is outside the deployment area. After that, the object moves in the opposite direction. We assume each node produces one message of 1Kb every 10 seconds. If the object is in the detection range of a node, the node produces a message of 10Kb. The communication speed between IoT nodes is 1Kb/s. To simplify the simulation, we ignore the interference in wireless communication. The DQN model is processed every 10 seconds. Each round of training lasts for 500 seconds. The parameter values of the DQN model are shown in Table I. We use a static cluster partition algorithm without the DQN model for comparison. In the comparison solution, each edge server is set as a cluster head, and the network is partitioned into Voronoi clusters. We compare the average reward score between our DQN solution and the Voronoi clustering solution. The experimental results for 2, 3, and 4 clusters are shown in Figures 4(a), 4(b), and 4(c), respectively. The results of the DQN model are shown as a moving average with window size 11, and the trend is illustrated by a fourth-order polynomial curve fit. In all three experiments, our DQN approach shows significant improvement compared with the benchmark results. After convergence, the improvements are around 56%, 47%, and 32% for 2, 3, and 4 clusters, respectively. The improvement decreases as the number of clusters increases. The main reason is that we only select one cluster-core to move in a time step. If all the cluster-cores performed actions in a time step, the chance of finding a cluster partition with higher reward would increase. VI. FUTURE WORK There are multiple points of the DRL model that we will improve in future work. • We use only one object with a straight-line moving pattern in the implementation. This moving pattern cannot represent real dynamic objects in IoT systems. We will evaluate whether the reinforcement learning model can cope with multiple objects moving in a more realistic pattern, such as a random walk. • To simplify the DQN model, we set one cluster-core in each cluster. In our next step, we will set multiple cluster-cores in a cluster. This will provide finer granularity for controlling the cluster shape. • This paper assumes that the IoT network is static, and only the object moves around. It is unknown whether the reinforcement learning solution can cope with an IoT network in which every node moves randomly. VII. CONCLUSIONS This paper presents a DRL solution for dynamic network clustering in IoT networks with edge servers. The aim is to fulfill the requirements from both the IoT network and edge computing by optimizing the clustering of data collection. Our preliminary experiments show that the proposed DQN model can achieve better results compared to the static benchmark solution.
In such scenarios, our work has shown that it is feasible to apply DRL techniques to IoT networks and edge computing. Moreover, we believe that our design and implementation offer an alternative to current network clustering solutions.
2019-07-09T13:13:03.002Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "ee544730c0fc94c0053409e41886db58db3791b7", "oa_license": "CCBYNCSA", "oa_url": "http://doras.dcu.ie/24289/1/wsdn.pdf", "oa_status": "GREEN", "pdf_src": "IEEE", "pdf_hash": "ee544730c0fc94c0053409e41886db58db3791b7", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
266942278
pes2o/s2orc
v3-fos-license
EFL Pre-service Teachers' Leadership Project Practices in Indonesia's Teacher Professional Development Program Indonesia's Ministry of Education has integrated the Leadership Project course into the Pendidikan Profesi Guru (PPG) program, emphasizing the development of professional qualifications and leadership skills among EFL pre-service teachers. This study scrutinizes the influence of both conceptual and practical frameworks of the course on cultivating leadership competencies. It poses two pivotal questions: Firstly, how do pre-service EFL teachers perceive these frameworks within leadership projects? Secondly, what challenges do they face in executing these projects effectively? Utilizing an explanatory mixed-methods design, the study collated and analyzed data from 31 participants through surveys and inductive thematic coding. It revealed that while EFL pre-service teachers have a positive reception of leadership involving collective knowledge and change initiation, they also report significant hurdles. Implementing complex frameworks like Sustainability NEWS (Nature, Economy, Well-being, Society) and Appreciative Inquiry presented difficulties, predominantly due to time limitations and communication barriers with target groups. Furthermore, a gap in understanding the conceptual underpinnings led to complications in the planning and sustainable implementation of projects, exacerbated by tight schedules and financial restrictions. Feedback from participants highlighted a need for program enhancements, suggesting refined policies, more rigorous consultation processes, and an enriched focus on reflective practices. The study offers valuable insights into the perceptions of EFL pre-service teachers regarding leadership programs. INTRODUCTION Leadership practices in Indonesia's educational system, including various models, principals' roles, pre-service teachers' practices, perceptions of educational leadership, and strategies to address leadership issues, have garnered significant attention (Sofo et al., 2012; Kadiyono et al., 2020; Eyal & Roth, 2010; Aydin et al., 2013; Ozgenel & Karsantik, 2020; Kuswandono, 2017; Aziz et al., 2020; Gaol, 2021). In response, the Indonesian Ministry of Education has included the "Projek Kepemimpinan" or Leadership Project course as a mandatory component in the Indonesian Teachers' professional development program. Despite the existing research on leadership practices and their impact, there is a notable gap in formal leadership training for pre-service teachers, especially those in English as a Foreign Language (EFL). This lack of formal training, along with political mandates (Björk, 2003) and limited opportunities (Campbell-Evans et al., 2014), may contribute to EFL pre-service teachers' reluctance to take on leadership roles. This study, therefore, aims to evaluate the formal leadership training and practices in Indonesia's professional development program, focusing on the impact of the Leadership Project course on EFL pre-service teachers' experiential leadership practices within the PPG program.
Additionally, educational leadership studies often emphasize its relationship with students' academic performance and overall school impact (Clemson-Ingram & Fessler, 1997; Devos & Bouckenooghe, 2009; Bush & Glover, 2003; Grissom & Loeb, 2011). However, with evolving educational policies, it is crucial for pre-service teachers to be prepared as future leaders who recognize leadership as a transformational process. Kegan and Lahey (2009) argue that transformative leadership development should encompass changes in actions, perspectives, attitudes, and ways of thinking. Furthermore, teacher leadership needs to be versatile, inquiry-driven, collaborative, innovative, analytical, entrepreneurial, and advocacy-oriented (Smylie & Eckert, 2017). Therefore, teacher leadership extends beyond academic achievement, encompassing broader community collaboration and service. The Leadership Project course addresses this need by offering a collective and collaborative experiential learning experience that promotes transformational leadership. It focuses on nurturing pre-service teachers' leadership qualities through appreciative inquiry, enabling them to develop sustainable projects that positively impact the communities they serve (Dharma & Radyati, 2022). The Leadership Project course is designed to enhance students' leadership skills through school or community-based service-learning projects, which help students become more attuned to the needs of their project targets (Dharma & Radyati, 2022). This experiential learning approach employs two methods: it models leadership traits and gives students opportunities to exercise direct leadership through activities like observing, sensing, leading, and reflecting. Bush and Glover (2003) have identified three key dimensions of leadership that underpin the Leadership Project course: (1) Leadership as an influence process shaping organizational dynamics, (2) Leadership involving commitment to an organization's values, and (3) Effective leadership requiring a clear vision. The course aims for students to develop various leadership components such as social-emotional skills, project management, collaboration, needs analysis, decision-making, and empathy. In addition, the Leadership Project course incorporates two fundamental frameworks: Sustainability NEWS and Appreciative Inquiry. It promotes systems thinking, encouraging students to perceive elements as part of a holistic system. The Sustainability NEWS framework guides students to understand and focus on the elements that constitute a system in a community or organization. As Dharma and Radyati (2022) explain, NEWS represents Nature, Economy, Well-being, and Society. Students work in small groups to select and investigate one of these dimensions for their course projects. The Appreciative Inquiry (AI) model, initially proposed by David Cooperrider in 1980, emphasizes recognizing and leveraging the positive attributes of individuals to benefit a group or community. As noted by Cooperrider and Whitney (2005), AI is a collaborative effort to uncover the positive, powerful aspects of individuals in an organization and its environment, whether past, present, or future. AI implicitly demands a leadership skill that can harness and activate potential to achieve impactful outcomes, echoing Peter F. Drucker's view on leadership as cited in Dharma and Radyati (2022): "The task of leadership is to create an alignment of strengths so strong that it makes the system's weakness irrelevant" (p. 23).
Furthermore, this study investigates the impacts of the Leadership Project course in the context of EFL pre-service teacher education. The course is a crucial component of the Indonesian Teachers' professional development program, aiming to enhance leadership skills through practical, service-oriented projects. The research focuses on two primary objectives. Firstly, it aims to gauge the extent to which EFL pre-service teachers understand and appreciate the conceptual and practical elements of the leadership projects. This includes assessing their comprehension of the course's aims, its methodologies, and the relevance of these elements in real-world applications. Secondly, the study seeks to identify the challenges these pre-service teachers encounter during their participation in the Leadership Project services. By addressing these aspects, the research intends to provide insights into the effectiveness of the Leadership Project course in preparing EFL pre-service teachers for future leadership roles and to suggest improvements for the course's design and implementation, ensuring it more effectively equips future teachers with essential leadership skills and competencies. METHOD This research utilized an explanatory mixed-methods approach, incorporating an explanatory sequential design as delineated by Ivankova et al. (2006), which involves a two-phased approach, starting with quantitative and followed by qualitative data collection and analysis. Following Creswell (2003), this methodology acknowledges that quantitative data provides a broad overview of the research issue, while qualitative data grants a more detailed exploration of the participants' viewpoints and insights, thus enhancing the overall comprehension of the findings. The study commenced with a survey distributed to 31 EFL pre-service teachers from the PPG program at Sanata Dharma University Yogyakarta, chosen due to their completion of the relevant course and projects. The survey results informed the subsequent qualitative phase, where interviews with six participants investigated their experiences and challenges faced. This mixed-methods design, based on Creswell's (2015) framework, enabled a comprehensive analysis by first establishing a general understanding through quantitative data and then contextualizing these findings within the lived experiences of participants through qualitative inquiry. To investigate students' perceptions of the Leadership Project course, the study implemented a quantitative methodology, employing a questionnaire with Likert-scale (Strongly disagree, disagree, fairly disagree, fairly agree, agree, strongly agree) items as the primary instrument. This questionnaire was designed to assess the implementation of the course's conceptual frameworks within the experiential leadership service, with a particular focus on Appreciative Inquiry and Sustainability NEWS. The statements on the questionnaire were informed by the Leadership Project module blueprint; for instance, assessing the ease of applying the Appreciative Inquiry stages (Define, Discover, Dream, Design, Deliver, known by the acronym BAGJA in Indonesian) in students' leadership projects. Additionally, the questionnaire was refined based on insights from a focus group discussion with students, which underscored the program's conceptual underpinnings and the practical aspects of project execution, such as the feasibility of completing the Leadership Program projects within the allocated one-credit timeframe.
To thoroughly evaluate students' perceptions, the research utilized both Likert-scale statements for quantitative assessment and open-ended questions for qualitative insights. The Likert-scale data were statistically analyzed to determine mean scores for each statement, providing a quantitative measure of the students' perspectives. Concurrently, the qualitative data from the open-ended questions underwent thematic coding to discern recurring themes, allowing for a nuanced understanding of the students' experiences and feedback on the course's conceptual and practical elements. This dual approach enabled a comprehensive analysis of student perceptions, combining both broad trends from the Likert-scale data and in-depth insights from the open-ended responses. FINDING AND DISCUSSION The online survey executed at Sanata Dharma University to evaluate the Leadership Project course garnered complete responses from 31 EFL pre-service teachers. The data analysis involved categorizing the open-ended responses from the questionnaire to complement the Likert-scale data, ultimately organizing the statements into four overarching categories: the conceptual framework, the practical framework, project engagement, and future career prospects. This categorization served to depict a comprehensive picture of the Leadership Project course, particularly focusing on the students' perceptions of the project's execution. The accompanying figure places the Leadership Project course in a central bubble, which is connected to and influenced by inputs on the left side, including leadership perception, the application of Sustainability NEWS in practice, and the 5-D Cycle of Appreciative Inquiry. These inputs feed into the two primary aspects of the course: project engagement, denoted by collaborative aspects like shared visions and open communication, and future career prospects, which consider the long-term effects on formal leadership services. On the right side, the outcomes of the Leadership Project are outlined in terms of practical elements such as timeframe, project sustainability, and project management. These elements are critical in shaping the overall effectiveness and impact of the Leadership Project on students' leadership development. Together, the figure and the survey data tell a story of how the course is perceived to fulfil its objectives, engage students in meaningful leadership activities, and potentially influence their future roles as educators.
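As context for the percentages and mean scores reported in the subsections that follow, here is a minimal Python sketch of the six-point Likert aggregation described in the method section; the response counts below are hypothetical and do not reproduce the study's data.

```python
# Map the six-point Likert scale used in the survey to numeric scores.
SCALE = {"Strongly disagree": 1, "Disagree": 2, "Fairly disagree": 3,
         "Fairly agree": 4, "Agree": 5, "Strongly agree": 6}

# Hypothetical responses from 31 participants to one questionnaire statement.
responses = (["Agree"] * 12 + ["Strongly agree"] * 7 +
             ["Fairly agree"] * 8 + ["Disagree"] * 4)

scores = [SCALE[r] for r in responses]
mean_score = sum(scores) / len(scores)
share_agreeing = sum(s >= SCALE["Fairly agree"] for s in scores) / len(scores)

print(f"mean = {mean_score:.2f} on a 6-point scale")      # 4.58 here
print(f"{100 * share_agreeing:.1f}% at least fairly agree")  # 87.1% here
```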
Leadership Perception Perception shifts occur with the introduction of innovative practices. The Leadership Project course, through experiential learning and collaborative projects, offers a new lens on leadership (Dharma & Radyati, 2022). According to Reza (2019), leadership is transformative, reshaping perceptions and inspiring ambitious goals. Most participants (58.1%) reinforced their view of leadership as knowledgeable and diversity-embracing change agents during the course. One participant redefined teacher leadership as empowering beyond hierarchical roles, resonating with Danielson (2006), who views leadership as action-driven. Furthermore, 25% of participants realized leadership's potential to build high-performing teams (Hogan & Kaiser, 2005) and unify diverse perspectives after the course. However, for two participants (6.5%), the concept of leadership was completely new, having not been exposed to it before. The course's basis in Sustainability NEWS (Nature, Economy, Well-being, Society) and Appreciative Inquiry allowed them to see teacher leadership as extending past formal classroom duties (Harris & Jones, 2019) and promoting change via community service. Sustainability NEWS (Nature, Economy, Well-being, Society) in Practice The Leadership Project module incorporates the Sustainability NEWS framework to help identify and leverage assets or potentials within the ecosystem of the project's target community (Dharma & Radyati, 2022). This framework aims to clarify the interconnections and interactions among assets within the complex systems of Nature, Economy, Well-being, and Society (NEWS) and how these dimensions can influence the target communities of the project. Participants applied this framework to craft plans aimed at enhancing educational services by capitalizing on strengths and opportunities within these four dimensions in schools or communities. The survey revealed mixed responses regarding the ease of implementing Sustainability NEWS. Over half of the participants (16.1% strongly agreed; 45.2% agreed) reported challenges, citing reasons such as unrelatable and unsustainable project topics, a lack of education-focused options, and incomplete dimension coverage. However, 32.3% found the application of Sustainability NEWS somewhat straightforward, arguing that despite its complexity, it was feasible with well-defined objectives, measurable outcomes, and collaborative efforts with the schools or communities involved. The primary concern for these students was the constraint of time. A small percentage (6.5%) encountered difficulties in applying Sustainability NEWS. During interviews, they highlighted that successful implementation hinged on effective teamwork and the selection of simple, achievable projects.
The 5-D Cycle of Appreciative Inquiry Appreciative Inquiry (AI), as described by Whitney and Trosten (2010), focuses on amplifying an organization's strengths instead of fixing its weaknesses, which is typical of conventional problem-solving. They stress the importance of concentrating on positive commonalities within a group. AI involves a 5-D cycle: define, discover, dream, design, and deliver, which guides collaborative efforts in organizations and communities. The application of AI, or BAGJA as it is referred to in Indonesian, was seen positively by a majority of students, with 13.2% strongly agreeing and 36.8% agreeing that it was straightforward to apply to their projects due to its clear and systematic nature. This clarity facilitated their collective work with targeted schools or communities, embodying what Whitney et al. (2019) suggest about AI: it fosters collaborative and simultaneous knowledge and meaning creation, allowing for the integration of diverse perspectives. A participant shared how minimal effort was required to identify and develop the potential within their target community. Nonetheless, some students (26.3%) faced challenges, such as selecting suitable long-term project targets or coordinating schedules for direct community engagement. Despite the generally positive feedback, 26.3% found the 5-D cycle difficult to implement, citing limited theoretical knowledge and a disconnect between expectations and actual delivery. Project Time Frame Proper scheduling ensures projects are completed on time through planned, sequenced activities and effective use of resources. In the Leadership Project at Sanata Dharma University, students took two courses: the first on project management theory and the second on executing these plans. Due to a condensed semester schedule, students had about two months to complete their projects while also fulfilling other program requirements. Survey responses indicated mixed views on the feasibility of this timeline. While 22.6% of students felt the projects were achievable within two months due to thorough preparation, a majority believed that more time would have allowed for better execution and problem-solving. Specifically, 35.5% somewhat agreed, and 25.8% disagreed, suggesting that an extended timeline could improve monitoring, evaluation, and outcomes. Those who strongly disagreed (16.1%) felt that the short period led to projects that were more formal than impactful. The concerns raised included insufficient time for detailed preparation, demanding project topics, a skewed credit allocation favouring theory over practice, and limited time for quality control. These findings indicate a need to reconsider the credit allocation for the practical implementation stage to enhance project outcomes.
Project Sustainability A core objective of the Leadership Project program is to foster sustainable projects that not only aid in the leadership development of students but also provide lasting benefits to the involved schools or communities, with an emphasis on the Sustainability NEWS framework (Dharma & Radyati, 2022). Kuhlman and Farrington (2010) align project sustainability with the 'triple bottom line' concept, embracing economic, social, and environmental dimensions, with well-being intersecting the social and economic aspects. Sustainability in projects is crucial for fostering outcomes that are economically, environmentally, and socially beneficial in a long-lasting and equitable manner. Students were tasked with assessing the sustainability of their projects post-implementation. A majority were optimistic about the projects' continued success without further aid (19.4% agreed; 29.0% somewhat agreed), citing reasons such as official adoption by schools, no additional costs yet significant impact, and enthusiastic responses from stakeholders. Conversely, some students expressed concerns about the longevity of the projects once their direct involvement ended. Those who disagreed (19.4%) or strongly disagreed (3.2%) pointed to issues like shifting school priorities, inadequate human resources, and a lack of clear leadership for project continuation, leading to projects that were more ceremonial than substantial. They observed that without designated responsibility, the communities involved would require considerable motivation to maintain the initiatives. Project Management Shifting from traditional project management focused solely on time, budget, and quality to also considering the project's broader impacts on society, the environment, and the economy is crucial for sustainability (Silvius & Schipper, 2014, p. 78). This broader view is essential for meeting sustainability goals. Project maturity, as defined by Kerzner (2019, p. 24), is the ongoing improvement process in project delivery to enhance an organization's ability to achieve its objectives. In this context, students worked in groups to identify the needs of their target participants and aimed to develop projects that would be implemented and evaluated for their potential impact. The Project Management Institute (2013) outlines project management processes encompassing time, cost, and quality management. Students evaluated their project management effectiveness, considering these aspects. Most students agreed that their projects were well-planned, addressing time allocation, financial needs (managed collectively), and quality control through group discussions, participant engagement, and lecturer consultations. Those who agreed (16.1% strongly; 32.3% agreed) noted that project scope, defined as the extent of project work, was crucial for subsequent steps like quality control and budgeting. Simpler projects were seen as more efficient. They likened project management to a relay race, emphasizing the need for subsequent stakeholders to take over the project sustainably. However, managing cost and time proved challenging for some, as noted by Chow et al.
(2021). Certain projects incurred substantial costs, leading students to use personal savings or seek sponsorship. Budgets were primarily allocated to workshops and project execution. The two-month timeline posed difficulties for thorough monitoring and evaluation. Despite these hurdles, students recognized the positive impact of intensive feedback from lecturers on project quality. While the groups led the execution, the mentor's guidance was vital in reinforcing the project's planning and delivery stages. Projects as Shared Visions The Leadership Project course, designed as a collaborative, experiential learning program, requires students to work together on issues, utilizing potentials within schools or communities to support the Sustainability NEWS framework through appreciative inquiry (Dharma & Radyati, 2022). Essential to this process is the development of a shared vision, which is crucial in the initial cycle of topic selection. Shared visions in a project set clear goals and directions, aligning with Burns' concept of transformative leadership (1978), which motivates followers towards common objectives (cited in Reza, 2019). Chai et al. (2017) assert that transformational leadership positively influences team members' shared vision at a team level, emphasizing that trust and commitment within the team are vital for aligning individual and group visions. A significant portion of the students (38.7% agreed; 41.9% strongly agreed) viewed their project as a culmination of combined values developed through intensive group discussions and consultations before receiving mentor approval. Initially, many felt overwhelmed by the flurry of ideas within their groups, leading to confusion. However, through mentor guidance, they were able to converge on a clear topic with defined objectives. A minority (16.1%) who somewhat agreed experienced challenges in collaboration, engaging in negotiations to refine their leadership skills. Contrarily, one student (3.2%) expressed disappointment, feeling that the project did not adequately incorporate everyone's ideas and was dominated by a single vision, highlighting issues in open communication and team dynamics. Thomas et al. (2009) emphasize the importance of willingness to share opinions openly for effective communication within a group, as open communication fosters greater involvement and engagement. While not all participants experienced such open discussions, most acknowledged that they could express their feelings and ideas openly and flexibly within their groups. A notable 29.0% who strongly agreed highlighted their use of a systematic, collective decision-making process, as described by Bose et al. (2017), which avoided centralization and encouraged member participation. The participants also noted how the characteristics of group members, such as being supportive, respectful, open-minded, and flexible, influenced the project process, aligning with Bose et al.'s concept of collective behaviour.
Open Communication However, group dynamics often involve disagreements, leading to a sense of vulnerability, which Brown (2018) identifies as crucial in collaborative work. Vulnerability fosters trust and encourages openness and honesty. Yet, 25.9% of students who disagreed with the statement about effective group communication struggled with this vulnerability. They felt apprehensive about possibly offending others with their responses or being perceived as nonsensical. Brown (2018) suggests that some individuals might find collaborative work challenging, especially when assuming leadership roles. Additionally, some students mentioned the issue of passivity, where reliance on other group members hindered open communication, impacting the overall group dynamics and effectiveness. Students' Project Satisfaction Project success is a crucial determinant of project satisfaction, as noted by Chow et al. (2021). The effectiveness of project management, encompassing aspects such as time, cost, and quality (Project Management Institute, 2013), serves as a metric for assessing student satisfaction with project outcomes. Despite encountering various challenges and limitations, a majority of PPG students expressed satisfaction with their project results. Specifically, 22.6% were strongly satisfied, and 38.7% were satisfied. Their positive outlook was influenced by the collaborative efforts of group members, the active participation and feedback from targeted schools and communities, and the guidance provided by their mentors, which contributed to the project's benefits and potential sustainability. However, a group of students (25.8%) who found the project moderately successful identified key factors of dissatisfaction, including time allocation, project sustainability, collaboration quality, and overall project planning. Those who disagreed (12.9%) echoed these concerns but particularly highlighted issues arising from unexpected changes mid-project, which impacted project management, especially in terms of financial support. Despite these difficulties, they remained optimistic about the potential benefits of their projects for the targeted participants and their communities.
Future Career Prospect The Leadership Project course, designed to equip future teachers with leadership skills for enduring contributions to teaching and learning, responds to the educational transformation in Indonesia (Dharma & Radyati, 2022). This course emphasizes transformational leadership through experiential projects, aligning with the necessity for teachers to adapt to educational changes. As highlighted by Stewart (2006), drawing on Bass and Reggi's (2010) perspective, effective leaders are instrumental in driving social change, an aspect this course seeks to develop by encouraging students to identify potential in schools or communities for sustainability. Participant feedback indicated varied perceptions of the program's effectiveness in professional preparation. Those who strongly agreed or agreed (totalling 74.2%) valued the program for its practical application of theory and skill development in critical thinking and adaptability. However, a portion of students (19.4% fairly agreed) were uncertain about their ability to significantly influence school settings, considering their novice status. Some participants did not perceive a substantial impact on their career prospects. One student's observations revealed that while teachers are adept at academic and administrative tasks, there is interest in non-academic involvement. This suggests a possible direction for future professional engagement. Nonetheless, the program's potential in shaping leadership development faces obstacles, such as administrative workload and limited support (Jomuad et al., 2021; Ismailos et al., 2022), which could restrict teachers' capacity to innovate and apply observed leadership potential. Challenges and Evaluation Theory and Concept Understanding Comprehending and applying the theoretical and conceptual aspects of leadership posed significant challenges for the participants, partly due to their limited exposure to experiential learning, which is vital for understanding and implementing the Sustainability NEWS framework. Kolb's (1984) theory of experiential learning suggests that knowledge emerges from active engagement and reflective, transformative experiences. This perspective was echoed by a participant who acknowledged that their limited understanding stemmed from a lack of practical experience in this area. "The implementation of Sustainability NEWS theory in leadership project is quite challenging to execute. It was because we were not familiar with the term and lacked the experience to work on non-academic projects" (Participant 5) Considering the lack of knowledge that the participants experienced, they somehow managed to minimize the problem through implementing Appreciative Inquiry (AI)/BAGJA. Rather than finding a problem to be solved, AI provides space for strengths and potentials as the asset and basis of the project (Cooperrider et al., 2008; Cooperrider & Whitney, 2005).
"Using BAGJA, our group can implement leadership project activities easily because it's organized systematically, making it easier for us to think through identifying the asset to the process of developing it" (Participant 1) Communication and Communities' Demands In the implementation, the participants often experienced poor communication that led to misunderstandings and delays both within the group members and in communities or schools.The participants were reluctant to communicate their goals and limitations to the targeted participants.When stakeholders have different values, it is still possible to find a middle ground through negotiation.However, the participants found the implementation to be different. "The main issue is dealing with the community's high expectations about what our project could achieve.Trying to meet these big expectations put a lot of pressure on us, making it tough to deliver more than what was realistically possible with the time and resources we had.We needed to talk clearly with everyone and make sure we're all on the same page about what can be done" (Participant 4) "In our project, we faced issues with time and funding.The limited time for implementation, especially for our long-term garden planting project, posed a challenge.We struggled to find donors due to the tight schedule and ended up using self-funding" (Participant 2) Project Management To manage the sustainability of a project, the processes of planning, monitoring, and control should be carefully implemented (Project Management Institute, 2017).However, the participants were challenged to control the project's long-term impacts, which mostly dealt with communities' commitment to continue the program.In the interview, the participants addressed their concerns related to the sustainable issue as follows. "It was really tiring when we had to monitor and re-plan after knowing that our project only lasted for two weeks.The community didn't show much commitment, which was disappointing.Moreover, we had trouble with time and money for our garden project.We couldn't find donors in the short time we had, so we ended up using our own funds" (participant 1). The participants identified miscommunication within their groups and communities during the initial planning and introduction of the project as a key challenge impacting project implementation and sustainability (Silvius & Schipper, 2014).Clear communication of processes and outcomes is essential for smooth project execution. 
In response, they offered constructive feedback for enhancing the leadership project program. They proposed increasing the course credits for practical training, allowing for more hands-on experience and the application of theoretical knowledge in real-world scenarios. Additionally, they suggested more intensive and individualized consultations with lecturers to provide tailored guidance and support for their unique challenges. The participants also recommended a review of the block system's effectiveness in conjunction with the practicum teaching program to address time constraints, aiming to improve the program's structure. Moreover, they advocated for ongoing evaluation of individual reflections to gain a deeper understanding of students' progress and areas requiring additional support. These suggestions are in line with the study's focus on assessing the Leadership Project course's effectiveness in preparing future professional English teachers, emphasizing practical aspects of the projects and the importance of collaborative and collective work in developing teachers' leadership skills. Figure 1. Leadership Project Coding Result. Besides, Figure 3 visually encapsulates the summary of students' perceptions. It illustrates the interrelationship between the conceptual and practical frameworks of the Leadership Project, the students' engagement with the projects, and the projected impact on their future career prospects. Central to the diagram is the "Leadership Project (LP)". Figure 3. Students' perceptions of the Leadership Project course.
2024-01-12T16:04:36.143Z
2023-12-27T00:00:00.000
{ "year": 2023, "sha1": "8dac79f799c1e91b05ff0182e42e60a19d5d7a84", "oa_license": "CCBYSA", "oa_url": "https://e-journal.hamzanwadi.ac.id/index.php/veles/article/download/24108/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e3858c6db3bbd76742788889622cd77050634303", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
244305352
pes2o/s2orc
v3-fos-license
Retrieving the refractive index of a sphere from the phase spectrum of its light-scattering profile We studied the Fourier spectrum of the light-scattering profiles of single particles in the Rayleigh-Gans-Debye (RGD) and Wentzel–Kramers–Brillouin (WKB) approximations. In the case of a homogeneous sphere, we found the relationship between the key parameters of the spectrum (including its phase) and the sphere characteristics – both analytically and numerically in the framework of the approximations and the rigorous Lorentz–Mie theory, respectively. Based on these results, we have improved the existing spectral characterization method for spheres, extending the applicability range to particles with a higher refractive index. Introduction Light scattering is one of the most common approaches to the non-invasive characterization of microparticles. Among the multitude of existing methods, the most promising ones are those working with single particles, since they are more reliable than ensemble ones (the corresponding inverse problem is typically well posed). There are many approaches to solving such inverse light-scattering problems (Ref. [1] and references therein), one of which is the compression of information in the measured light-scattering profiles or patterns (LSP) into several parameters that determine the characteristics of the particle under study. An important common advantage of such methods is the resistance to various distortions introduced both by experiment and by the imperfection of the particle model. Several recent examples include spectral methods for determining the size and refractive index of spheres [2,3] and for non-sphericity estimation [4,5]. In particular, an accurate and robust characterization of spheres was demonstrated in a limited range of size and refractive index [2]. This method extracted two parameters from the Fourier spectrum of the one-dimensional LSP: the main peak position and the zero-frequency amplitude, which strongly correlated with the size and refractive index, respectively. However, the ambiguity of the latter parameter for high refractive indices limited its use for widespread polystyrene beads. In this work we show the reasons for this ambiguity and propose a way to improve the capabilities of the spectral method based on the usage of the phase spectrum. Main analytical results We focus on the versatility of our approach for a variety of applications and mostly consider the scattered intensity of unpolarized light. However, one of the prominent examples, the scanning flow cytometer, measures a specific combination of Mueller scattering-matrix elements as a function of the polar angle θ, ranging from θ₁ = 10° to θ₂ = 65°. Note that the integral of S₁₄ is exactly zero for any axisymmetric particle [6]. We apply the FFT-based spectral transformation, described in [4], to the discrete measured data, in order to have a discrete representation of the corresponding continuous windowed Fourier transform of the LSP, where w(θ) is the Hann window function for the range [10°, 65°]. Analyzing the light-scattering problem in the framework of the RGD approximation, we have managed to explain the origin of the main peak in the Fourier spectrum and its relation to the particle size. Furthermore, analyzing the WKB approximation, we found that the given Fourier transform of the scattering intensity has a phase factor linearly proportional to both the refractive index and the frequency.
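As an illustration of the spectral transformation just described, the following minimal Python sketch computes a Hann-windowed discrete Fourier spectrum of a one-dimensional LSP and reads off the main-peak frequency and phase. It is not the authors' code: the synthetic LSP, the 512-point angular grid, and the function name lsp_spectrum are illustrative assumptions, and numpy's built-in Hann window simply stands in for w(θ).

```python
import numpy as np

def lsp_spectrum(theta_deg, intensity):
    """Hann-windowed discrete Fourier spectrum of a 1D light-scattering
    profile I(theta); a stand-in for the continuous transform described
    in the text (illustrative only, not the authors' implementation)."""
    window = np.hanning(len(intensity))              # Hann window over [10 deg, 65 deg]
    step = np.deg2rad(theta_deg[1] - theta_deg[0])   # angular sampling step, radians
    spectrum = np.fft.rfft(intensity * window)       # one-sided spectrum
    freqs = np.fft.rfftfreq(len(intensity), d=step)  # frequency axis (cycles per radian)
    return freqs, spectrum

# Synthetic example: an oscillating LSP with a phase offset standing in for
# the refractive-index-dependent shift discussed in the text.
theta = np.linspace(10.0, 65.0, 512)                 # polar angle, degrees
lsp = 1.0 + np.cos(40.0 * np.deg2rad(theta) + 0.7)

freqs, spec = lsp_spectrum(theta, lsp)
peak = np.argmax(np.abs(spec[1:])) + 1               # skip the zero-frequency bin
print("peak frequency:", freqs[peak])
print("peak phase (rad):", np.angle(spec[peak]))
```

The extracted pair (peak frequency, peak phase) corresponds to the two spectral parameters analysed below.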
Such a phase factor can already be expected from the WKB formula for the intensity, in which a phase factor proportional to the refractive-index contrast (n − 1) multiplies a function defined by the particle geometry, whose spectrum we obtain as a result of the Fourier transformation. This will be discussed in more detail at the conference. A factor of this type in the Fourier image indicates a shift of the original function (LSP) along the angular coordinate axis by a value proportional to the refractive index, which was observed and used earlier for spheres [7,8]. Numerical calculations and discussions The above theoretical analysis suggests that the phase of the spectrum at the peak frequency (further denoted as the peak phase) strongly depends on the refractive index, so it is natural to combine it with the peak location, which strongly depends on size and only weakly on refractive index. In contrast to the zero-frequency amplitude, used in [4], the peak phase is equally applicable for any refractive index at the cost of being cyclic, i.e. inherently non-invertible in any wide range. Thus, its natural application niche is the characterization of particles in a relatively narrow range of sizes and refractive indices. In particular, we further apply it to the characterization of 4-μm polystyrene beads. Note, however, that similar calculations can be performed for other ranges of characteristics. We used the Lorenz-Mie theory to calculate the LSPs in the ranges [3.7 μm, 4.3 μm] and [1.56, 1.62] for size and refractive index, respectively, and Fourier-transformed them according to Eq. (2). We assumed the wavelength 0.66 μm and host-medium refractive index equal to 1.333. In the resulting spectra, we determined the position of the main peak and the phase value at this point. The dependence of these parameters on the sphere characteristics is shown in Fig. 1. As can be seen from the figure, the phase has an almost linear dependence on the refractive index; however, there is some ripple associated with effects not accounted for in the WKB approximation. Inverting this dependence by plotting the same data points in other coordinates (characteristics of the sphere as functions of the parameters of the spectrum), we solve the inverse problem using interpolation. Despite the underlying linear structure, ripples present significant problems by compromising uniqueness and affecting interpolation accuracy. Note that these ripples present problems for any other characterization method as well, even for the least-squares fit. For example, the LSPs of two spheres with sizes differing by a ripple period (approximately 0.15 μm in this case) may be more similar to each other than to the LSP of spheres with sizes in between. To assess the influence of these ripples on our method, we tested it on synthetic LSPs of spheres, computed in the same range of characteristics but different from the interpolation grid nodes. While the maximum errors are 10 nm and 10⁻³ for size and refractive index, the mean errors do not exceed 1 nm and 10⁻⁴, respectively. The results of processing experimental LSPs of polystyrene beads will be presented at the conference. Conclusion We have analyzed the application of the phase spectrum of the 1D LSP to the characterization of spheres, both analytically using the WKB approximation and numerically using the Mie theory. Moreover, we constructed an interpolant to characterize homogeneous spheres using the position and phase of the main peak in the Fourier spectrum.
This interpolant works in the limited range of size and refractive indices (due to the cyclicality of the phase parameter), but proved to be fast, robust, and accurate for characterization of latex beads.
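The inversion step described in the conclusion can be sketched, under strong simplifying assumptions, as a table lookup by interpolation. In the sketch below the forward "spectral parameters" are generated from a toy linear model rather than from Lorenz-Mie calculations, and the function characterize and all numerical values are hypothetical; only the structure (precompute a grid of (peak position, peak phase) pairs over the size and refractive-index ranges, then interpolate back) mirrors the approach described above.

```python
import numpy as np
from scipy.interpolate import griddata

# Forward grid: for each (diameter, refractive index) pair one would compute
# the LSP with Lorenz-Mie theory and extract (peak position, peak phase).
# Here the spectral parameters are filled with a toy linear model purely to
# illustrate the inversion step; they are NOT Mie results.
diam = np.linspace(3.7, 4.3, 25)          # micrometres
ridx = np.linspace(1.56, 1.62, 25)        # refractive index
D, N = np.meshgrid(diam, ridx)
peak_pos = 10.0 * D + 0.5 * N             # placeholder for the true dependence
peak_phase = 2.0 * (N - 1.333) * D        # placeholder (phase ~ index contrast)

pts = np.column_stack([peak_pos.ravel(), peak_phase.ravel()])

def characterize(measured_pos, measured_phase):
    """Invert (peak position, peak phase) -> (size, refractive index) by
    interpolation over the precomputed grid."""
    q = np.array([[measured_pos, measured_phase]])
    d = griddata(pts, D.ravel(), q, method="linear")[0]
    n = griddata(pts, N.ravel(), q, method="linear")[0]
    return d, n

print(characterize(40.8, 2.0))
```

Because the peak phase is cyclic, such an interpolant is only single-valued inside a sufficiently narrow characteristic range, which is the limitation noted in the conclusion.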
2021-11-18T20:07:58.905Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "679d2bc2f743d3b0f7668ca965dcd90ebfe0e00a", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/2015/1/012125/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "679d2bc2f743d3b0f7668ca965dcd90ebfe0e00a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
11800889
pes2o/s2orc
v3-fos-license
Towards quasi-biological nanodosimetry The increasing utilization of charged particle beams for therapeutic purposes requires designing novel detector systems which shall be capable of assessing radiation quality for a diversity of ion species. It is shown that the pattern of energy deposition in thermoluminescent phosphors and biological tissue contains conceptual parallels. The correlation of physical and radiobiological parameters observed experimentally for specific endpoints (single- and double-strand breaks of DNA) opens the possibility of realizing successfully quasi-biological solid-state nanodosimetry on the basis of thermoluminescence. Introduction Significant progress in radiobiology has refined our understanding of radiation-induced biological response at the cellular level and challenged the conventional macroscopic description of radiation action in favour of a microdosimetric approach. It is inherent to the macroscopic concepts of absorbed dose and linear energy transfer (LET) that the energy deposition of a charged particle is treated as a continuous process along the particle track [1], neglecting the chaotic tangle of secondary electron paths of which it is composed. While the energy deposited by γ rays at large doses can be imagined to be deposited "homogeneously" relative to the size of biological targets, the energy deposited by heavy ions is much more heterogeneous. The universal use of dose as a normalizing parameter in radiobiology is based entirely on the availability of measuring instruments; it is a poor basis for predicting or understanding the relationship between an irradiation and the resulting endpoint [2]. This has stimulated the development of microdosimetry which takes into account the stochastic nature of interaction processes. But in the many years since its introduction, and in spite of the enormous efforts on its behalf, microdosimetry alone has not led to any fundamental understanding of radiobiology. Quoting Kellerer, "Concepts of microdosimetry are of course essential in any analysis of the action of ionizing radiation on the cell. Their employment has led to important insights but not, as yet, to a quantitative treatment of primary cellular changes" [3]. Up to now, the only instruments that are capable of measuring microdosimetric quantities, e.g. distributions of specific or lineal energy, are small gaseous proportional counters, scaled by density to cellular volumes of micrometre or nanometre diameter [4]. It would, however, be a great step forward to find a detector for which the pattern of energy-deposition events resembles the situation in a cell. Waligórski and Katz were the first to recognize that the response of solid-state thermoluminescence dosimeters (TLDs) depends on ionization density in a qualitatively similar way to the relative biological effectiveness (RBE) for many endpoints; this led them to conclude that TLDs appear to be "good candidates for mimicking the response of biological systems to heavy-ion irradiations" [5]. The occurrence of thermoluminescence (TL) is coupled to the presence of impurities or defects in a given substance. However, association of specific imperfections with a certain peak in the TL glow curve is not straightforward. It may very well happen that a certain impurity or defect is abundant in the sample, but does not contribute to the emitted TL. 
On the other hand, other defect structures, sometimes undetectable by other means due to their low concentrations, are found to be responsible for the TL signal [6]. From the physical point of view, understanding the role of defect centres and the processes by which energy is first stored in the material and is then released in the form of light during heating of the sample is fundamental to developing new dosimeter materials with properties tailored to specific needs. The most popular TL dosimeter in use today, LiF doped with Mg and Ti (most often containing additional OH impurities), has been available commercially since the late 1960s. The LiF crystal consists of two interpenetrating fcc lattices, one for Li⁺ and one for F⁻ ions. The ions are closely packed with a lattice constant of 0.4 nm. To describe the dose response and TL efficiency with respect to ⁶⁰Co γ rays, Olko [7] has adopted microdosimetric multi-hit models which have originally been proposed to explain the inactivation of microorganisms. A solid-state TL detector contains a large number of independent structures (further on called "targets"). It is assumed that only one type of target is present in the detector and each target can respond upon an energy deposit (termed a "hit"). The target can tolerate up to m − 1 hits without being affected; however, if m or more hits occur, this will generate a response: TL emission in our case, cell inactivation or other endpoints in biological systems. Target size can be varied as a free parameter. The relative TL efficiency of the dominant peak 5 is found to decrease with ionization density (Fig. 1a) [8]. Its γ-ray response shows a linear-supralinear-sublinear slope. The related defect centre is assumed to be the combination of a "one-hit" and a "two-hit" trap [5]. Being capable of capturing one charge carrier, a "one-hit" trap requires a single energy deposit (m = 1) and produces a linear response, which is found experimentally for modest doses <10 Gy. The characteristic target size of a "one-hit" trap was estimated to be ~10 nm [5]. This model is confirmed by optical absorption measurements from which it is known that the structure responsible for peak 5 is composed of a Mg²⁺-Li-vacancy trimer (the electron trap) coupled to Ti(OH)ₙ (the luminescence centre). The relative efficiency of the high-temperature TL (HTTL, 248−310 °C), on the other hand, increases with ionization density to pass through a maximum at an LET of ~100 keV/µm [8]. The slope is particularly impressive if the HTTL efficiency is scaled to the same level of absorbed dose (Fig. 1a); the resulting parameter is called the high-temperature ratio (HTR). This behaviour is a typical example of a dominating "two-hit" response (m = 2), the corresponding centre being capable of capturing two charge carriers of the same sign. Following high-LET irradiation, the "two-hit" traps would be preferentially populated compared to traps associated with "one-hit" centres, as the probability of multiple ionizations increases dramatically in the vicinity of the particle track. The early onset of supralinear γ-ray dose response at ~200 mGy [9] further confirms the dominance of "two-hit" centres being responsible for the HTTL; a pure "two-hit" trap would produce a quadratic TL response. The characteristic target size of a "two-hit" trap was estimated to be ~40 nm [5]. The molecular nature of the trapping and luminescence centres giving rise to the HTTL has not been identified yet.
However, there is considerable evidence that Ti-related structures are involved [10]. Correlation of physical and radiobiological endpoints In biological systems, the degree of irreversibility and/or functional lethality may be correlated with the distance between DNA single-strand breaks (SSBs). It has been estimated that double-strand breaks (DSBs) arise from SSBs within approximately ten base pairs, i.e. 3.4 nm [12]. SSBs are per se the consequence of a "one-hit" response, while for DSBs the spatial correlation of two "one-hit" events resulting in a "two-hit" response is required. The yield of SSB and DSB induction, i.e. their efficiency, was measured by Kampf [11] for particles of different LET and effective charge. The slope of SSB induction (Fig. 1b) corresponds with the LiF:Mg,Ti peak 5 relative TL efficiency, while the dependence of the DSB yield on LET (Fig. 1b) may be correlated with the HTR. Fürweger et al. [13] exposed cultivated human skin fibroblasts and LiF:Mg,Ti TL detectors to high-energy ⁴He²⁺, ¹²C⁶⁺, ²⁰Ne¹⁰⁺, ²⁸Si¹⁴⁺ and ⁵⁶Fe²⁶⁺ ions, with special emphasis being laid on the low-dose region of some ten mGy where bystander effects could be expected to contribute significantly to the overall radiation risk. The investigated biological effects included specific biochemical events that are known to play a major role in the early cellular response to DSBs, such as the formation of pATM (serine 1981), γH2AX (serine 139) and pDNA-PKcs (threonine 2609) foci. Analysis at three different points of time (20 min, 1 h, and 2 h) after irradiation could elucidate the time response of cellular signalling and damage repair. Again, the ionization-density dependence of the initial radiation-induced DSB induction in both directly hit and bystander cells was found to be correlated to the HTTL. 1) LET is an insufficient parameter to characterize ionization density, which depends on both LET and effective charge of the ionizing particle.
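To make the one-hit/two-hit target response invoked above concrete, the following hedged sketch evaluates the multi-hit model under the standard assumption that the number of hits per target is Poisson distributed with a mean proportional to dose; the rate constant hits_per_gy and the dose values are arbitrary illustrative numbers, not fitted parameters for LiF:Mg,Ti.

```python
import numpy as np

def multi_hit_response(dose_gy, hits_per_gy, m):
    """Probability that a target registers at least m hits, assuming the
    number of hits per target is Poisson with mean proportional to dose
    (illustrative multi-hit target model; parameters are arbitrary)."""
    lam = hits_per_gy * np.asarray(dose_gy, dtype=float)
    # P(>= m) = 1 - sum_{k < m} exp(-lam) * lam^k / k!
    cumulative = np.zeros_like(lam)
    term = np.exp(-lam)                  # k = 0 Poisson term
    for k in range(m):
        cumulative += term
        term = term * lam / (k + 1)      # advance Poisson term to k + 1
    return 1.0 - cumulative

doses = np.array([0.01, 0.1, 1.0, 10.0])                     # Gy
one_hit = multi_hit_response(doses, hits_per_gy=0.2, m=1)    # ~linear at low dose
two_hit = multi_hit_response(doses, hits_per_gy=0.2, m=2)    # ~quadratic at low dose
print(one_hit)
print(two_hit)
```

At low dose the m = 1 response grows linearly with dose while the m = 2 response grows quadratically, which is the qualitative behaviour attributed above to peak 5 and to the HTTL, respectively.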
2009-06-26T10:42:00.000Z
2009-06-26T00:00:00.000
{ "year": 2009, "sha1": "bf70d8b25a2ee38ee7b66790d3c110e73828554c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bf70d8b25a2ee38ee7b66790d3c110e73828554c", "s2fieldsofstudy": [ "Medicine", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250937636
pes2o/s2orc
v3-fos-license
Sexual Dimorphism in the Fibular Extremities of Italians and South Africans of Identified Modern Human Skeletal Collections: A Geometric Morphometric Approach Simple Summary The extremities of the fibula may reflect differences between males and females, although so far only a few studies have included this bone for post-cranial sex assessment. Our work explored shape and size variation between sexes in identified skeletal samples comprising different populations from Italy and South Africa and showed that fibular extremities are significantly smaller, with narrower articular surfaces, in females than in males. Consistent sex-related differences are revealed in fibular form and size in Italians but not in South Africans. Potential applications in forensic and bioarcheological contexts may benefit from the use of this approach. Abstract Fibular metric variations have revealed their potential in distinguishing between males and females; however, the fibula remains scarcely analyzed in studies of sexual dimorphism. This work aims at investigating sexually dimorphic features in the fibular proximal and distal epiphyses through geometric morphometric methods. A total of 136 left fibulae from two Italian and one South African identified skeletal collections were virtually acquired through CT and laser scanning and analyzed using geometric morphometric methods. Statistical analyses were performed on shape, form, and size variables. Results show that fibular epiphyses are smaller, with narrower articular surfaces, in females than in males in both extremities. Relevant sexual differences emerge in fibular form and size for the two Italian samples but not for the South African one, likely owing to its small sample size. Discriminant analysis on form principal components (PCs) offers accuracy above 80% when the samples are pooled, and reaches an accuracy of 80-93% when the Italian samples are considered separately. However, our method on form PCs was not successful for the South African sample (50-53% accuracy), possibly due to the small sample size. These results show relevant morphological variation in relation to fibular form and size, with a degree of accuracy that indicates the utility of the present method for sexing human fibulae in both forensic and bioarchaeological contexts for Italian samples. Introduction Shape and size of leg bones are widely used in both bioarcheological and forensic contexts as sex indicators, e.g., [1][2][3][4]. Generally, males display larger and more robust tibiae. In this work we applied for the first time a 3D GM approach that integrates fixed landmarks and sliding (semi)landmarks of curves and surfaces to investigate sexually dimorphic features in the fibular proximal and distal extremities. The study involves three samples from identified modern skeletal collections (19th-20th century), two from Italian and one from South African populations. The main goal of this study is to provide a new method for sex assessment of the human fibular extremities based on 3D GM, to be applied in forensic and bioarcheological investigations. Based on previous studies on sexual dimorphism in the fibulae, we tested the two following hypotheses: (a) Fibular epiphyseal form (size + shape) will be sexually dimorphic, likely reflecting functional differences (body shape, size, and proportions), while shape alone will be less informative [30].
(b) Fibular epiphyseal form dimorphism will differ between populations due to size variation (different patterns of epiphyseal size dimorphism due to differentiation of body size among Italian and South African groups), but shape alone will not reveal differences in the sexual dimorphism between the populations [30]. Sample In this study we analyzed 136 left fibulae (Table 1) belonging to late 19th-early 20th century individuals. The total sample consists of 17 South African individuals of various ancestral origins from the Raymond A. Dart Collection of Human Skeletons at the University of the Witwatersrand (Johannesburg, South Africa) [34], and 119 Italian individuals (Emilia-Romagna, N = 47; Sardinia, N = 72) from the Modern Human Identified Skeletal Collection of the University of Bologna. Year of death for the whole South African collection spans the 1920s to the 2000s, while the Italian samples include individuals who died between 1898 and 1944 [35]. For each subsample (from now on referred to as population), the sex of the individuals is known from hospital or cemetery records as well as other archival sources (e.g., birth certificates). Age-at-death spans between 17 and 90 years. The fibulae used in this study were selected for their general good state of preservation and for the absence of pathological markers, with fully fused secondary centers of ossification [36]. The specimens were either digitized through computed tomography (CT) or laser scanning, as previous studies have shown that digital models of long bones from these two acquisition methods are comparable [37][38][39]. The acquisition of the 3D models of fibulae belonging to the Raymond A. Dart Collection was performed at the Microfocus X-ray Computed Tomography facility of the University of the Witwatersrand (Johannesburg, South Africa) on a Nikon Metrology XTH 225/320 LC (voltage 70 kV, current 120 µA, no filter used, pixel size 120 µm). The acquisition of the fibulae of the Italian collections was carried out at Istituto Ortopedico Rizzoli (Bologna, Italy), utilizing a Revolution Discovery CT dual energy, with GSI Revolution and HD Revolution configurations (voltage 100 kV, current 360 µA, standard filter, slice thickness and acquisition interval of 0.625 mm, followed by a reconstruction with monochromatic beams at 40 keV, using the "Detail" filter). A subsample of 20 Italian individuals from Bologna was digitized with an ARTEC Space Spider 3D (Luxembourg) laser scanner (3D precision: 0.05 mm; 3D resolution: 0.1 mm). Data from CT scans were processed in Avizo 9.2 (Thermo Fisher Scientific, Waltham, MA, USA) for image segmentation, utilizing the threshold-based segmentation protocol of half-maximum height (HMH) outlined by Spoor, Zonneveld, and Macho (1993) [40]. We applied the protocol following the modified version detailed by Coleman and Colbert (2007) [41], already used in several applications in anthropology (e.g., [37,42,43]). Then, a surface model was generated (isosurface reconstruction), saved in STL format, and loaded into Viewbox 4 v. 4.0 (dHAL Software, Kifissia, Greece) for landmarking. 3D Geometric Morphometric Analysis A 3D template configuration (Figure 1, Table 2) of 16 fixed landmarks, 25 curve semi-landmarks, and 101 surface (semi)landmarks captured both the distal and proximal extremities of the fibula. The template was created in Viewbox 4 to cover major muscle, ligament, and tendon attachment sites and articular surfaces on the fibula (Figure 1, Table 2).
Viewbox 4 software was utilized to apply the configuration to the targets in the sample, with (semi)landmarks sliding on curves and surfaces to minimize the thin-plate spline (TPS) bending energy between the targets and the template [44]. Following that, (semi)landmarks can be considered geometrically homologous among specimens [45,46]. The template repeatability (i.e., high intra-observer agreement) and reproducibility with different scanning devices were tested [37]. This work revealed that, for this template, (1) the intra-observer error, evaluated by repeating the template on the same individual, is negligible, and (2) 3D-GM comparisons of specimens scanned with different scanning devices are not influenced by the acquisition method. Landmark and (semi)landmark raw coordinates used to describe the specimens of the study are available in Tables S1 and S2. After importation of the raw coordinates in R (version 4.0.3) [47], the set of (semi)landmarks of the proximal and distal epiphyses was separated into two different datasets (i.e., a dataset of Cartesian coordinates for the distal and a dataset for the proximal fibular extremity). Each dataset was subjected to a further sliding step against recursive updates of the Procrustes consensus, while two different Procrustes superimpositions were computed, allowing the conversion of raw coordinates into standardized, scaled, centered, and oriented shape coordinates (i.e., Procrustes coordinates) via Generalized Procrustes Analysis (GPA) [31,44], using the R package "geomorph" version 3.3.2 [48]. Centroid size (CS) for each extremity, which is the square root of the summed squared distances between each (semi)landmark and the centroid of the (semi)landmark configuration, was also calculated and used as a proxy of the size of the distal and proximal extremities [44]. Procrustes coordinates were then subjected to two separate principal component analyses (PCA) to explore shape variation based on different sex groups for both distal and proximal extremities separately, both considering each population separately and the pooled sample [49]. Two form-space (i.e., shape plus size) PCAs were computed by augmenting the Procrustes shape coordinates of each dataset with the natural logarithm of CS (lnCS) [50]. Visualization of extreme shape and form changes along the principal axes and of the means based on sex and population groups was obtained by TPS deformation [51] of the Procrustes grand mean shape surface, utilizing the R package "Morpho" v. 2.8 [49]. Linear correlation between CS and the scores along the first two shape principal components of the proximal and distal epiphyses was assessed by a Pearson's correlation test, to assess whether the distribution of PC scores in the shape-space PCA plot is influenced by size. The assumptions of this test (linearity, continuous and paired variables) were all met by our variables. Procrustes ANOVA was adopted to test shape differences among sexes, both considering each population and the pooled sample, utilizing Procrustes distances among specimens and using a residual randomization procedure (RRPP = T, iterations = 1000), with the R package "geomorph" version 3.0.7 [48]. Differences in size among sexes and populations were evaluated considering CS through ANOVA and subsequent post hoc tests. See Table 2 for a detailed description of the anatomical landmarks [37] (license: CC BY 4.0).
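The analyses above were carried out in Viewbox 4 and in R (geomorph, Morpho); the following Python sketch only illustrates the core numerical steps in a simplified form: plain GPA without the semilandmark sliding step, centroid size, and a form-space PCA obtained by appending lnCS to the Procrustes coordinates. The toy data, array shapes, and function names are assumptions for illustration, not the pipeline actually used in the study.

```python
import numpy as np

def centroid_size(X):
    """Square root of summed squared distances of landmarks to their centroid."""
    return np.sqrt(((X - X.mean(axis=0)) ** 2).sum())

def align(X, ref):
    """Optimally rotate the centred, unit-size configuration X onto ref."""
    U, _, Vt = np.linalg.svd(ref.T @ X)
    return X @ (U @ Vt).T

def gpa(configs, n_iter=10):
    """Plain generalized Procrustes analysis (no semilandmark sliding):
    centre, scale to unit centroid size, then iteratively rotate all
    configurations onto the running consensus."""
    shapes = []
    for X in configs:
        Xc = X - X.mean(axis=0)
        shapes.append(Xc / centroid_size(Xc))
    consensus = shapes[0]
    for _ in range(n_iter):
        shapes = [align(X, consensus) for X in shapes]
        consensus = np.mean(shapes, axis=0)
    return np.array(shapes)

# Toy data: 30 specimens, 142 (semi)landmarks in 3D (16 + 25 + 101, as in the template).
rng = np.random.default_rng(0)
raw = rng.normal(size=(30, 142, 3)) + np.linspace(0, 1, 142)[None, :, None]
cs = np.array([centroid_size(X) for X in raw])

aligned = gpa(list(raw))
shape_vars = aligned.reshape(len(raw), -1)                 # Procrustes coordinates
form_vars = np.hstack([shape_vars, np.log(cs)[:, None]])   # form space: shape + ln(CS)

# PCA via SVD of the mean-centred form variables.
centred = form_vars - form_vars.mean(axis=0)
_, s, Vt = np.linalg.svd(centred, full_matrices=False)
pc_scores = centred @ Vt.T
explained = s**2 / np.sum(s**2)
print("variance explained by PC1-PC2:", explained[:2])
```

Appending lnCS is what distinguishes the form-space PCA from the shape-space PCA; everything else in the two analyses is identical.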
Allometric trajectories were estimated for both fibular epiphyseal extremities following Sorrentino and co-workers (2020) [29], by computing a multivariate regression of shape and form variables (using all the PCs) on CS. Then, all the obtained coefficients (slope and intercept) were utilized to compute a permutation test (N = 1000) on lengths (i.e., magnitude of the variability) and angles to assess significant differences (i.e., p < 0.05) among population and sex group trajectories [52]. The assumptions of this multivariate regression (independence; linearity; normality; homoscedasticity) were all met by our variables (PCs). Finally, linear discriminant analysis (LDA) with leave-one-out cross-validation testing assessed the classification accuracy of sex based on shape variation, using shape and form PC scores and centroid size to classify the specimens (i.e., either male or female). Selection among the first 10 PCs to include in the LDA was based on a trade-off between the amount of explained variance, considering 70% as a threshold [53], and the highest accuracy values obtained. Again, the assumptions of LDA (independence; linearity; normality; homoscedasticity) were all met by our variables (PCs). Table 2 (excerpt). L1, point where the fibular anterolateral border divides into two ridges: the proximal apex of the subcutaneous triangular surface (STS); L2, most medial border of the subcutaneous triangular surface (STS); SSML_malleolar fossa (5), surface of the malleolar fossa, attachment site of the transverse tibiofibular and posterior talofibular ligaments; SSML_ILA (7), attachment surface of the interosseous tibiofibular ligament and part of the interosseous membrane (ILA); SSML_fibular groove (13), groove for the tendons of m. peroneus longus and m. peroneus tertius and attachment site of the posterior tibiofibular ligament. Pooled Sample The Italian populations (i.e., Emilia-Romagna, ER; and Sardinia, SAR) mostly overlap in shape space with each other and with the South African individuals (SA) for both proximal and distal epiphyses. Additionally, no clear separation is present among individuals according to sex groups for any population (Figure 2). For the proximal epiphysis, the first two principal components (PCs) account for 33.61% of total variance (PC1: 20.15%; PC2: 13.46%), while for the distal epiphysis the variance explained by the first two PCs is 56.41% (PC1: 33.76%; PC2: 22.65%). For both extremities, the first two PCs do not show a distinction between males and females (p > 0.05), with males and females displaying comparable extreme shapes (Figure 2). Few to no morphological changes are seen in the extreme shapes along the PC1 and PC2 axes for both extremities (Figure S1) and are mostly related, for the proximal epiphysis, to a more pronounced styloid process towards positive scores and a more laterally protruding insertion of the m. fibularis longus towards negative scores of PC1 and PC2; for the distal epiphysis, to a more elongated and less pronounced area of tibiofibular syndesmosis towards positive PC1 scores and negative PC2 scores, and to a more craniocaudally elongated subcutaneous triangular surface along negative scores on PC1 and PC2. For the proximal epiphysis, no significant correlation is present between the first two PC scores and CS. In contrast, for the distal epiphysis, a Pearson's correlation test shows that PC2 is significantly negatively correlated with CS (r = −0.565; p < 0.001), indicating that static allometry could account for morphological differences in shape along this axis.
Angles of allometric trajectories between population groups do not differ significantly (Table 3, Figure 2). The form-space PCA plots of both extremities ( Figure 3) show a clear separation among the sexes, with males plotting towards positive scores along PC1 when considering all populations pooled and grouped by sex. The first two PCs account for 49.05% of total variance for proximal epiphysis (PC1: 36.86%; PC2: 12.72%), while for distal epiphysis the first two components account for 73.34% of total variance (PC1: 58.35%; PC2: 14.99%). For both extremities, PC1 significantly contributes to the separation among males and females (proximal epiphysis: ANOVA; PC1: F-value = 88.39, p < 0.001; distal epiphysis: ANOVA; PC1: F-value = 18.26, p < 0.001), accounting mainly for size variation (i.e., CS). No distinction is present between sexes along PC2 or PC3 (p > 0.05). Generally, fibular epiphyses are smaller in females than in males, with narrower articular surfaces ( Figure S2). For proximal epiphyses, males (PC1 positive) have a more pronounced styloid process, and a bulkier, laterally protruding fibular head compared to females. For distal epiphyses, males (PC1 positive) show a longer insertion of tibiofibular interosseous ligament, site of the tibiofibular-syndesmosis, longer subcutaneous triangular surfaces, peroneal grooves, protruding malleoli, attachments for calcaneofibular ligament and anterior and posterior talofibular ligaments, and a deeper and wider malleolar fossa, attachment for the transverse tibiofibular and posterior talofibular ligaments in comparison to females. Angles of allometric trajectories between groups do not differ significantly according to different populations (Table 3). Cross-validation LDA of the pooled sample reaches accuracy above 80% when using the first six form space PCs (76.87% of variance explained) and the first four form space PCs (83.38% of variance explained) for proximal and distal epiphyses, respectively (Table 4). Using CS, accuracy is between 68 and 76%, considering both epiphyseal ends. Fewer individuals are correctly classified according to sex when using shape-space PCs, with accuracy of 57-62% using 10 PCs (82.05% and 85.12% of variance explained for proximal and distal epiphysis, respectively) ( Table 4). Separate Populations Figures 5 and 6 show the shape-space PCA plots of proximal and distal fibular extremities, respectively, for each population grouped by sex and their related mean shape. In all populations and for both fibular extremities, PCs scores are not significantly correlated (−0.5 < r < 0.5) with CS, except for the negative correlation of PC2 in SA for distal epiphysis (r = −0.706, p = 0.002). In general, only subtle shape morphological changes are seen among mean shapes of males and females of each population and are mostly related to a slightly wider anteroposterior expansion, with longer malleoli in males in comparison with females, which possess, for ER, a shallower malleolar fossa for distal epiphyses. In shape-space, indeed, PC scores along the first three PC axes do not contribute to the separation among males and females in any of the three populations (ANOVA, p > 0.05) for both proximal and distal epiphyses, except for the distal extremity of ER which shows weak significant differences between males and females (ANOVA, PC1: F-value = 4.21, p = 0.046). Angles of shape allometric trajectories do not differ significantly according to sex, considering each population separately (Table 5). 
Figures 7 and 8 show the scatterplots of the form-space PCA of proximal and distal fibular extremities, respectively, for each population grouped by sex and their related mean shape. For all populations, PC1 shows the main separation between sexes, as it accounts for size variation (i.e., CS represented by PC1). For the two Italian populations, males plot towards PC1 positive scores and females plot towards negative scores (Figures 7 and 8), although slightly overlapping, especially for the distal fibular extremity (Figure 8). ANOVAs show significant separation (p < 0.05) along PC1 between sexes for both proximal and distal epiphyses for the two Italian populations but not for South Africans. For distal epiphyses, moreover, the Italian ER population also presents a significant separation according to sex along PC2 (ER: PC2, F-value = 15.54, p < 0.001). For the proximal epiphysis (Figure 7), Italian males show wider fibular heads with more anteroposteriorly expanded tibiofibular articular surfaces and a more pronounced styloid process than females. In addition, males possess an area of insertion of the knee collateral ligaments, m. biceps femoris, and m. fibularis longus which is more postero-laterally protruding than in females, especially evident in the SAR population. For the distal epiphysis (Figure 8), Italian males exhibit more mediolaterally expanded proportions, with longer and more pronounced malleoli, attachment for the calcaneofibular ligament and anterior and posterior talofibular ligaments, and a deeper and wider malleolar fossa, attachment for the transverse tibiofibular and posterior talofibular ligaments, in comparison to females. In contrast, South African males do not show such a marked pattern of form variation, with comparable mean form morphologies seen in both sexes. Angles of form allometric trajectories do not differ significantly according to sex considering each population separately (Table 5). For both extremities, CS is significantly different between pooled males and females (proximal epiphysis: ANOVA, F-value = 89.39, p < 0.001; distal epiphysis: ANOVA, F-value = 7.38, p < 0.001) (Figure 4). Figure 5. Scatterplot of shape-space PC scores of proximal fibular epiphyses for the two Italian (Emilia-Romagna and Sardinia) and the South African populations, divided by sex groups and presenting mean male (colored) and female (grey) shape variation along the PC axes, represented in the scatterplot as squared dots, in medial (top) and lateral (bottom) views. Lines represent allometric trajectories for males and females for each population.
Figure 6. Scatterplot of shape-space PC scores of distal fibular epiphyses for the two Italian (Emilia-Romagna and Sardinia) and the South African populations, divided by sex groups and presenting mean male (colored) and female (grey) shape variation along the PC axes, represented in the scatterplot as squared dots, in medial (top) and lateral (bottom) views. Lines represent allometric trajectories for males and females for each population. Table 5. Allometric trajectory comparisons of males and females computed for each population separately, for proximal and distal fibular epiphyses in shape and form space. α (°) indicates the angle of divergence of each trajectory in pairwise comparison, with the relative p-value (p). For both extremities, CS is significantly different between males and females of the two Italian populations (ANOVA, ER and SAR: p < 0.05), but does not significantly discriminate between sex groups in South Africans (Figure 9). For proximal epiphyses, females of all populations do not differ among one another, and neither do SA and SAR males, who differ from ER but not from each other. For distal epiphyses, ER females and males differ from their SAR counterparts, while both Italian males and females do not differ significantly from their SA equivalents. Table 6 shows cross-validation LDA accuracy percentages of sex determination by population for both fibular epiphyses. Shape PCs provide less discriminatory power in all three populations, with accuracy percentages of correct sex estimation using the proximal epiphysis spanning 40-54% in Italians and 60% in South Africans. Similarly, for distal epiphyses, accuracy is lower for shape PCs, ranging 51-72%. As regards form PCs and considering both extremities, in ER the first 3-5 PCs (76-77% of variance explained) provide 89-93% accuracy in sex determination; in SAR the first 4-6 PCs (71-84% of variance explained) provide 80-83% accuracy in sex determination. Accuracy percentages of LDA on form PCs for South Africans are lower (including both epiphyses, 50-53%, with 77-79% of variance explained considering the first four PCs). Centroid size separates sexes, considering both epiphyses, with 69-87% accuracy for the Italian populations, and with 66-68% accuracy for the South African population.
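The cross-validated accuracies reported in this section can be reproduced in outline with a short script. The sketch below uses scikit-learn's LDA with leave-one-out cross-validation on a set of form-space PC scores; the PC scores and sex labels are simulated stand-ins (in the study they come from the Procrustes/PCA step), so the printed accuracy is purely illustrative and not one of the values in Tables 4 or 6.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Simulated stand-in for form-space PC scores (e.g., the first 6 PCs) and
# known sex labels; 60 + 59 individuals mimic the Italian sample size.
rng = np.random.default_rng(1)
n_female, n_male = 60, 59
pcs = np.vstack([
    rng.normal(loc=-0.5, scale=1.0, size=(n_female, 6)),
    rng.normal(loc=+0.5, scale=1.0, size=(n_male, 6)),
])
sex = np.array(["F"] * n_female + ["M"] * n_male)

# Leave-one-out cross-validated classification accuracy, the quantity
# reported as the sex-determination percentage.
accuracy = cross_val_score(LinearDiscriminantAnalysis(), pcs, sex,
                           cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {accuracy:.1%}")
```

Running the same procedure separately on shape PCs, form PCs, and centroid size is what produces the three accuracy columns compared in the text.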
Table 6. Sex determination accuracy percentages obtained by cross-validation LDA, by population, for shape and form PC scores (PCs) and centroid size (CS) of the proximal and distal epiphyses. Discussion The aim of this study was to highlight patterns of sexually dimorphic form variation in the fibular proximal and distal epiphyses in individuals of three different populations from Italy and South Africa belonging to identified modern skeletal collections (19th-20th century), utilizing a 3D GM approach [54]. Our study provides a new method for sex estimation on the fibula, potentially applicable in bioarcheology [20], where this method may find primary implementation, and in forensics, offering moderate to high accuracy (80-93% for the Italian populations) for identification in case of the recovery of isolated fragments of this bone. We expected the presence of variation of fibular epiphyseal size in relation to sex. Results from our analyses support a distinction among sexes, mainly more robust epiphyses in males than in females, especially in the Italian samples. Our second hypothesis was that little sex-related variation would be present in the shape of the fibular epiphyses, and that dimorphism would instead depend, across ancestries, mostly on different patterns of epiphyseal size in different populations, due to a differentiation of body size among Italian and South African groups. Our results support the former point and provide only partial support to the latter one. In fact, the form and size of fibular extremities significantly distinguish between sexes in the Emilia-Romagna (ER) and Sardinia (SAR) populations but not in the South African population. Overall, our results show that fibular extremities account for subtle shape changes, while significant form and size differences are present between sexes, even with differences related to ancestry. The sex determination method provided in this paper, based on cross-validation LDA on form PCs, provides accuracy above 80% when the samples are pooled and reaches an accuracy of 80-93% when the Italian populations are considered separately. However, the method was not successful for the South African sample (50-53% accuracy). Our results are consistent with previous investigations on sex determination using the fibula, adopting both traditional metrics [4,12,16,22,55] and virtual methods [25][26][27][28][30]. Specifically, we obtained results similar to previous studies that utilized 3D GM methods on the whole fibula [30], which highlighted different degrees of size dimorphism between males and females, but no significant shape differences between the sexes once size information is removed.
In our study, morphological differences of both extremities in form space suggest that females possess smaller articular surfaces, shorter and narrower malleoli, and shallower malleolar fossas, with subsequently reduced areas of insertion of knee and ankle collateral ligaments, and less robust muscle insertions for the m. fibularis longus, m. biceps femoris, and interosseous ligament distal attachment (Figures 3, 7, 8, and S2). This indeed may reflect a sex-specific pattern of knee [56][57][58][59] and ankle/foot morphology and posture [29,60,61], and in general supports the existence of sex-segregated patterns of lower limb muscle activation in the different sexes [62][63][64][65]. Regarding the ankle, previous studies have demonstrated sex-related differences in anatomical and biomechanical features of the joint [59]. Adjei, Nalam, & Lee [66], congruently with Trevino and Lee [67], found that ankle stiffness varied significantly between sexes, both along the sagittal and frontal planes (therefore concurring in dorsiflexion/plantarflexion and inversion/eversion, respectively), with greater joint stiffness in females while both quiet standing and muscle activation occurred. The authors explain their findings with the greater passive resistance of females to a greater range of motion, lower elastic modulus, and higher ligamentous laxity [68][69][70] than males, which by contrast possess more leg muscle mass [71] and a higher muscle and cortical bone cross-sectional area in the whole lower limb [7,9,72] than females. Females have an increased range of motion in the sagittal plane of rearfoot and midfoot, peak plantarflexion angle of rearfoot, peak dorsiflexion and abduction angle of midfoot, as compared with males [73]. Females also possess greater ligamentous laxity, resulting subsequently in this increased mobility of the ankle joint [68,74]. Indeed, our results show that females have shallower malleolar fossae with reduced area of insertion of ankle stabilizers (transverse tibiofibular and posterior talofibular ligaments), less protruding malleoli with reduced area of insertion of ankle collateral ligaments (anterior, posterior talofibular, and calcaneofibular ligaments), and less robust muscle insertions for the m. fibularis longus, the main evertor of the ankle (Figure 3, Figure 7, Figure 8 and Figure S2). This result seems also to indicate sex-specific differences in structural properties of ankle stabilizers, as differences in tendon tissue properties of the leg have already emerged [70]. Regarding the knee, important differences among the sexes have been established in morphology and kinematics [75]. El Ashker and colleagues [57] found that males have higher functional hamstring-to-quadriceps strength ratios than females, suggesting possible quadriceps dominance in females or greater contribution of the hamstrings in males than in females, either in motion or during foot contact with the ground, stabilizing the joint angle. Their findings seem congruent with our result, showing a protruding area of attachment of m. biceps femoris in the proximal fibular extremity in males (Figures 3, 7, 8, and S2). While the specific relationship between a muscle and its tendon can change depending on several factors (e.g., muscle/tendon specificity, loading degree, age), in our sample, such variations are minimal, since all individuals possess similar activity levels (i.e., sedentary lifestyle [76], and age variations in the samples are similar (Table 1). 
Therefore, tendon size could be used to assess muscle size [54]. Finally, in agreement with previous studies on sex-related variation in the bones of the leg [28] (but see also [77]), we did not find significant shape/form sex-related variation regarding articular orientation. Our results show only weak sex-related differences in fibular muscle and ligament insertions when size is excluded from the analysis (Figures 2, 5, 6 and S1). It is interesting to note that studies have found higher mean torque of all major muscle groups in the lower extremity in men (with women reaching 62-70% of male values), but when measurements are standardized by size (i.e., body mass index or body weight), no significant differences in strength, endurance, or torque between the sexes have been found [78]. However, when body surfaces and dimensions are evaluated apart from muscle attachments, the relationship between sex, shape, and size is more complex: Wunderlich and Cavanagh [79] stressed that female lower limbs are not solely scaled-down versions of the male lower limb. In particular, the lateral side of the foot, directly involved in eversion, showed marked sex-related differences even after standardization for size [60]. Indeed, our analysis indicates consistent differences between males and females in size, as the sexes differ according to CS in the two Italian populations and when the sample is pooled. According to our results, fibulae of males possess larger linear dimensions than those of females, owing to a generally larger body size (regardless of body mass), possibly resulting from a relatively longer pubertal growth period and the subsequent development of larger bones and muscles under the influence of sex genes, sex steroids (androgens and estrogens), and other hormones, such as growth hormone (GH) and insulin-like growth factor 1 (IGF1), which, together with mechanical loading, may further contribute to the development of skeletal sexual dimorphism [80][81][82]. Indeed, bone growth differs between the sexes, most evidently during puberty but to a lesser extent also in early childhood [83][84][85]; this agrees with the greater cortical bone plasticity observed in males, modelled by greater muscle mass during ontogeny and resulting in greater long bone lengths and breadths, as already observed for the fibula in a similar sample [86]. During puberty, males develop higher peak bone mass, greater bone size, and, ultimately, a stronger skeleton than females [87,88], with different skeletal maturation timing for both fibular extremities according to sex [55]. Our results are also congruent with the size differences between males and females already observed for the tibiofibular complex as a whole [27,28] and with prior assessments of the fibula alone [30]. The present study highlights the importance of a population-based approach for sexing the human fibula, as our results point to a different degree of sexual dimorphism in populations with different ancestries (Figures 7-9). Maass and Friedling [30] found that Coloured males and females differed significantly from Black and White males and females, with the latter groups not differing from one another, and they interpreted their results in light of the genetic contribution of small-bodied groups within the Coloured group. However, in their case, males and females differed significantly in size within each of the three populations considered separately.
Our results comparing different ancestries, on the other hand, show that sexual variation in the fibular extremities is not strictly related to size: in fact, while Sardinians and South Africans have similarly small fibular epiphyseal sizes, the former show a difference in size between the sexes, while the latter do not (Figure 9). Some fluctuation in the degree of sexual dimorphism among different populations is expected and could be ascribed either to proximate causes (i.e., responses to nutritional stress or overall improvements in the environment) or to ultimate causes (i.e., selection and genetic adaptation to a variety of ecological, social, or economic factors) [89,90]. Fibular extremities appear more dimorphic in the Italian samples than in the African one, likely as a result of the small sample size of the South African sample, or for other reasons not explored in this study. This result needs to be further investigated by increasing the sample size and including populations with different ancestries, subsistence strategies, and lifestyles that may lead to differences in the shape of the bones [76]. However, the 80-93% accuracy in sex determination observed for the Italian samples shows the potential of the method presented in this study and is congruent with similar discriminant functions provided by studies on fibular sexual dimorphism [4,[12][13][14][15][16]22,23,91], mostly reaching the threshold of 80% suggested for sex estimation using long bones [92]. Indeed, when form PCs do not differ significantly between the sexes, accuracy percentages are much lower. When size is considered apart from shape variation (i.e., CS alone), accuracy percentages are somewhat reduced (69-87%), suggesting that the interaction between size and shape (i.e., form) mainly accounts for sex differences in the human fibula. The main limitation of this study is the low sample size of the South African sample, which may have contributed to the lack of significance in all comparisons performed in our study. The small sample size is due to limited resource availability at the time of the sample collection. Caution should therefore be taken in drawing conclusions on the lack of sexual dimorphism within this sample, as a larger number of individuals may indeed reveal sex-related patterns that remain undetected here. A further confounding factor in the South African sample may be the mixed origin of the populations included in this study, whose different body sizes may mask the effect of sexual size dimorphism [34]. Moreover, other factors may have contributed to the lack of sex variation in the fibula, such as genetic background, environmental changes, and changes in socioeconomic and health conditions, as already observed in the larger bones of the lower limb [93]. Even though previous GM results indicate a lack of sexual dimorphism in the fibula [30], the different methodology employed here (with sliding semi-landmarks instead of only fixed landmarks) could better detect subtle differences. Further studies are therefore necessary to evaluate this aspect in more detail in the South African population. Another limitation of the methodology proposed here is its time-consuming programming phase, which could limit its application in forensic contexts despite its accuracy.
Conclusions
In this study, we provided a novel methodology that can be used to determine sex in bioarcheological and, to a lesser extent, forensic investigations using the proximal and distal extremities of the human fibula, a bone rarely analyzed in anthropology. To our knowledge, this is the first study that addresses fibular epiphyseal shape, form, and size in relation to sexual dimorphism, adopting a 3D GM method that includes both fixed landmarks and sliding (semi)landmarks, which together account for detailed morphological variation of the fibular extremities. Our work included three different populations, two from Italy and one from South Africa, belonging to identified modern skeletal collections (19th-20th century). In comparison to other sex determination methods based on fibular linear measurements, our method, despite being more time-consuming, provides an objective, more reliable, and repeatable tool for investigating this anatomical district in Italian populations when fibular extremities are found isolated and intact, which could be useful especially for bioarcheological purposes but also in the forensic field. This method could also be integrated into a wider, accurate evaluation of the whole tibiofibular complex for sex determination, alongside the tibia. The results of the 3D GM study of the proximal and distal extremities of the fibula showed that the fibula can be used to assess sex with accuracy ranging from 80% to 93% in the two Italian populations. The differences mainly consist in larger and more robust epiphyses in males than in females, with females showing narrower articular surfaces, shorter and narrower malleoli, a shallower malleolar fossa and peroneal groove, and less protruding muscle insertions for the m. fibularis longus, m. biceps femoris, and the interosseous ligament distal attachment. Such distinct form morphology may reflect a sex-specific pattern of knee and ankle/foot posture, with distinctive lower limb morpho-functional aspects in relation to sex. Our results also highlight the importance of a population-based approach to sex determination of the human fibula, as evidenced by our finding that sexual dimorphism can be detected in the two Italian populations but not in the South African one, which is plausibly related to its small sample size. This may suggest a different degree of sexual dimorphism in populations with different ancestries, which is not strictly related to size variation. However, when the sample is pooled, our sex estimation method based on cross-validation LDA on form PCs gives accuracy above 80%, whereas accuracy is lower when size alone or shape PCs are considered, suggesting that the interaction of size and shape is the pattern that mainly reflects sexual dimorphism in the human fibula. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biology11071079/s1. Table S1: Landmarks and (semi)landmarks raw coordinates. Table S2: Sample list. Figure S1: Shape-space morphings along PC1 and PC2 for both proximal (on the left) and distal epiphyses (on the right) considering extreme positive and negative shapes. Figure S2: Form-space morphings along PC1 and PC2 for both proximal (on the left) and distal epiphyses (on the right) considering extreme positive and negative forms.
Author Contributions: A.P., conceptualization, investigation, formal analysis, validation, writing-original draft, methodology, and writing-review and editing; R.S., validation, writing-original draft, methodology, and writing-review and editing; S.D., methodology, data acquisition, validation, and software; D.M., S.B. and M.G.B., conceptualization, supervision of the research, methodology, project administration, software, and writing-review and editing. All authors have read and agreed to the published version of the manuscript.
2022-07-22T15:09:51.726Z
2022-07-01T00:00:00.000
{ "year": 2022, "sha1": "82ec65eb82b509490993d6eca6271081db58728c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-7737/11/7/1079/pdf?version=1658235295", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d6eb8767f0f60ae63a6d2b768855adafe4536e43", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
53003194
pes2o/s2orc
v3-fos-license
Plane quartics over $\mathbb{Q}$ with complex multiplication
We give examples of smooth plane quartics over $\mathbb{Q}$ with complex multiplication over $\overline{\mathbb{Q}}$ by a maximal order with primitive CM type. We describe the required algorithms as we go; these involve the reduction of period matrices, the fast computation of Dixmier-Ohno invariants, and reconstruction from these invariants. Finally, we discuss some of the reduction properties of the curves that we obtain.
Introduction
Abelian varieties with complex multiplication (CM) are a fascinating common ground between algebraic geometry and number theory, and accordingly have long been studied. One of the highlights of their theoretical study was the proof of Kronecker's Jugendtraum, which describes the ray class fields of imaginary quadratic fields in terms of the division points of elliptic curves. Hilbert's twelfth problem asked for the generalization of this theorem to arbitrary number fields, and while the general version of this question is still open, Shimura and Taniyama [50] gave an extensive partial answer for CM fields by using abelian varieties whose endomorphism algebras are isomorphic to these fields. A current concrete application of the theory of CM abelian varieties is in public key cryptography, where one typically uses this theory to construct elliptic curves with a given number of points [8]. Beyond the theoretically well-understood case of elliptic curves, there are constructions of curves with CM Jacobians in both genus 2 [52,60,7] and 3 [27,63,33]. Note that in genus 2 every curve is hyperelliptic, which leads to a relatively simple moduli space; moreover, the examples in genus 3 that we know up to now are either hyperelliptic or Picard curves, which again simplifies considerations. This paper gives the first 19 conjectural examples of "generic" CM curves of genus 3, in the sense that the curves obtained are smooth plane quartics with trivial automorphism group. More precisely, it conjecturally completes the list of curves of genus 3 over Q whose endomorphism rings over $\overline{\mathbb{Q}}$ are maximal orders of sextic fields (see Theorem 1.1). The other curves of genus 3 with such endomorphism rings are either hyperelliptic or Picard curves. The hyperelliptic ones were known to Weng [63], except for three curves that were computed by Balakrishnan, Ionica, Kılıçer, Lauter, Vincent, Somoza and Streng by using the methods and SageMath implementation of [3,2]. The Picard curves had all previously appeared in work by Koike-Weng [27] and Lario-Somoza [33]. To construct our curves, we essentially follow the classical path: first we determine the period matrices, then the corresponding invariants; we then reconstruct the curves from rational approximations of these invariants, and finally we heuristically check that the curves obtained indeed have CM by the correct order. In genus 3, however, all of these steps are somewhat more complicated than was classically the case. A proven verification that the curves obtained indeed have CM by the correct order is left for another occasion; we restrict ourselves to a few remarks. First of all, there are no known equivalents in genus 3 of the results that bound the denominators of Igusa class polynomials [35]. In fact very little is known about the arithmetic nature of the Shioda and Dixmier-Ohno invariants that are used in genus 3, and a theoretical motivation for finding our list was to have concrete examples to aid with the generalization of the results in loc. cit.
Using the methods in [9] one could still verify the endomorphism rings of our curves directly; this has already been done for the simplest of our curves, namely
X_15 : $x^4 - x^3y + 2x^3z + 2x^2yz + 2x^2z^2 - 2xy^2z + 4xyz^2 - y^3z + 3y^2z^2 + 2yz^3 + z^4 = 0$.
The main restriction for applying these methods to the other examples is the time required for this verification. At any rate, the results in the final section of this paper are consistent with the existence of a CM structure with the given order. The CM fields that give rise to our curves were determined by arithmetic methods in [22,26]. This also gives us Riemann matrices that we can use to determine periods and hence the invariants of our quartic curves. However, we do need to take care to reduce our matrices in order to get good convergence properties for their theta values. The theory and techniques involved are discussed in Section 1. With our reduced Riemann matrices in hand, we want to calculate the corresponding theta values. We will need these values to high precision so as to later recognize the corresponding invariants. The fast algorithms needed to make this feasible were first developed in [30] for genus 2; further improvements are discussed in Section 2.1. In the subsequent Section 2.2 we indicate how these values allow us to obtain the Dixmier-Ohno invariants of our smooth plane quartic curves. This is based on formulas obtained by Weber [62,16]. The theory of reconstructing smooth plane quartics from their invariants was developed in [42] and is a main theme of Section 3. Equally important is the performance of these algorithms, which was substantially improved during the writing of this paper: starting from a reasonable tuple of Dixmier-Ohno invariants over Q, we now actually obtain corresponding plane quartics over Q with acceptable coefficients, which was not always the case before. In particular, we developed a "conic trick" which enables us to find conics with small discriminant in the course of Mestre's reconstruction algorithms for general hyperelliptic curves (by loc. cit., the reconstruction methods for non-hyperelliptic curves of genus 3 reduce to Mestre's algorithms for the hyperelliptic case). Section 3 discusses these and other speedups and the mathematical background from which they sprang. Without them, our final equations would have been too large to even write down. We finally take a step back in Section 4 to examine the reduction properties of these curves, as well as directions for future work, before giving our explicit list of curves in Section 5.
Riemann matrices
Let A be a principally polarized abelian variety of dimension g over C, such as the Jacobian A = J(C) of one of the curves that we are looking for. Then by integrating over a symplectic basis of the homology and normalizing, the manifold A gives rise to a point τ in the Siegel upper half space H_g, well-defined up to the action of the symplectic group Sp_2g(Z). The elements of H_g are also known as Riemann matrices. In Section 1.1, we give the list, due to Kılıçer and Streng, of all fields K that can occur as the endomorphism algebra of a simple abelian threefold over $\overline{\mathbb{Q}}$ with complex multiplication and with field of moduli Q. In Section 1.2, we recall Van Wamelen's methods for listing all Riemann matrices with complex multiplication by the maximal order of a given field. In Section 1.3, we show how to reduce Riemann matrices so as to obtain equivalent matrices with better convergence properties.
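Before turning to the CM fields, here is a quick sanity check on the sample curve X_15 above that is independent of the CM machinery: the sketch below (Python, not part of the paper's Magma toolchain) searches for projective singular points of the reduction modulo small primes. Finding none modulo some prime p shows that the discriminant is non-zero mod p, hence non-zero over Q, so the quartic is smooth and of genus 3. The list of primes tried is an arbitrary choice.

```python
# Search for singular points of X_15 over F_p: a singular point satisfies
# F = F_x = F_y = F_z = 0 simultaneously; smoothness mod p implies
# smoothness over Q.
def F(x, y, z, p):
    return (x**4 - x**3*y + 2*x**3*z + 2*x**2*y*z + 2*x**2*z**2
            - 2*x*y**2*z + 4*x*y*z**2 - y**3*z + 3*y**2*z**2
            + 2*y*z**3 + z**4) % p

def gradF(x, y, z, p):
    Fx = (4*x**3 - 3*x**2*y + 6*x**2*z + 4*x*y*z + 4*x*z**2
          - 2*y**2*z + 4*y*z**2) % p
    Fy = (-x**3 + 2*x**2*z - 4*x*y*z + 4*x*z**2 - 3*y**2*z
          + 6*y*z**2 + 2*z**3) % p
    Fz = (2*x**3 + 2*x**2*y + 4*x**2*z - 2*x*y**2 + 8*x*y*z
          - y**3 + 6*y**2*z + 6*y*z**2 + 4*z**3) % p
    return (Fx, Fy, Fz)

for p in (11, 13, 17, 19, 23):          # arbitrary small primes
    pts = ([(1, y, z) for y in range(p) for z in range(p)]
           + [(0, 1, z) for z in range(p)] + [(0, 0, 1)])
    if not any(F(*P, p) == 0 and gradF(*P, p) == (0, 0, 0) for P in pts):
        print(f"X_15 is smooth modulo {p}, hence smooth over Q")
        break
```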
1.1. The CM fields. Let A be an abelian variety of dimension g over a field k of characteristic 0, let K be a number field of degree 2g and let O be an order in K. We say that A has CM by O (over k) if there exists an embedding O → End(A_k). If A is simple over k and has CM by the full ring of integers O_K of K, then we have in fact O_K ≅ End(A_k) and K is a CM field, i.e., a totally imaginary quadratic extension of a totally real number field F [32]. The field of moduli of a principally polarized abelian variety A/k is the residue field of the corresponding point in the moduli space of principally polarized abelian varieties. It is also the intersection of the fields of definition of A in k [28, p.37]. In particular, if A is defined over Q, then its field of moduli is Q. The field of moduli of a curve or an abelian variety is not always a field of definition [48]. However, we have the following theorem.
Theorem 1.1. There are exactly 37 isomorphism classes of CM fields K for which there exist principally polarized abelian threefolds A over $\overline{\mathbb{Q}}$ with field of moduli Q and End(A) ≅ O_K. The set of such fields is exactly the list of fields given in Table 1. For each such field K, there is exactly one such principally polarized abelian variety A up to $\overline{\mathbb{Q}}$-isomorphism, and this variety is the Jacobian of a curve X of genus 3 defined over Q. In particular, the abelian variety A itself is defined over Q.
Proof. The first part, up to and including the uniqueness of A, is exactly Theorem 4.1.1 of Kılıçer's thesis [22]. These 37 cases are listed in Table 1. Therefore we need only prove the statement on the field of definition, which can be done here directly from the knowledge of the CM field. By the theorem of Torelli [34, Appendix], k is a field of definition for the principally polarized abelian threefold A if and only if it is a field of definition for X. This implies that the field of moduli of X equals Q, and we have to show that this field of moduli is also a field of definition. In genus 3 all curves descend to their field of moduli, except for plane quartics with automorphism group Z/2Z and hyperelliptic curves with automorphism group Z/2Z × Z/2Z (see [38,40]). We finish by showing that neither of these occurs in Table 1. If Q(i) is a subfield of K, then by Weng [63, §4.4-4.5] the curve X is hyperelliptic with automorphism group containing Z/4Z, in which case it descends to its field of moduli. We therefore assume the contrary. If the curve X is hyperelliptic, then its automorphism group is the group μ_K of roots of unity in K itself. Since this group is cyclic, it cannot be isomorphic to Z/2Z × Z/2Z, and the curve X descends to its field of moduli. If X is non-hyperelliptic, then its automorphism group is μ_K/{±1}. Because of our assumption on K, this group is not isomorphic to Z/2Z, and again X descends to Q.
Table 1 gives a list of cyclic sextic CM fields K, arranged as follows. Let K be such a field. Then it has an imaginary quadratic subfield k and a totally real cubic subfield F. In Table 1, the number d_k is the discriminant of k; the polynomial p_F is a defining polynomial for F. These two entries of the table define the field K. The number f_F is the conductor of F, and d_K is the discriminant of K. The entry # is the order of the automorphism group of the Jacobian of the corresponding curve, which is nothing but the number of roots of unity in K. The "Type" column indicates whether the conjectured model of the curve is hyperelliptic (H), Picard (P), or a plane quartic with trivial automorphism group (G).
The "Curve" column gives a reference to the conjectured model over Q of the curve. The cases 1, 2, 3, 5, . . . , 20 correspond to the smooth plane quartics X i in Section 5. Some of these curves were already computed by Weng [63]. The final cases 4, 25, 26 were found by Balakrishnan, Ionica, Kılıçer, Lauter, Somoza, Streng and Vincent and will appear online soon. The Picard curves can be obtained as a special case of our construction, but are more efficiently obtained using the methods of Koike-Weng [27] and Lario-Somoza [33]. The rational models in [63,27,33] as well as those that can be obtained with [2,37] are correct up to some precision over C. In case 23, the hyperelliptic model was proved to be correct in Tautz-Top-Verberkmoes [57,Proposition 4]. The hyperelliptic model y 2 = x 7 − 1 for case 36 is a classical result (see Example (II) on page 76 in Shimura [49]) and the Picard model y 3 = x 4 − x for case 37 is similar (e.g. Bouw-Cooley-Lauter-Lorenzo-Manes-Newton-Ozman [6, Lemma 5.1]); both can be proven by exploiting the large automorphism group of the curve. Remark 1.2. In fact the curve in Case 4 also admits a hyperelliptic defining equation over Q, which is not automatic; a priori it is a degree 2 cover of conic that we do not know to be isomorphic to P 1 . However, in this case the algorithms in [9] show that the conjectural model obtained is correct, so that also in this case a hyperelliptic model exists over the field of moduli Q. In this paper, we construct models for the generic plane quartic cases. 1.2. Obtaining Riemann matrices from CM fields. Let L be a lattice of full rank 2g in a complex g-dimensional vector space V . The quotient V /L is a complex Lie group, called a complex torus. This complex manifold is an abelian variety if and only if it is projective, which is true if and only if there exists a Riemann form for L, that is, an R-bilinear form E : V × V −→ R such that E(L, L) ⊂ Z and such Table 1. CM fields in genus 3 whose maximal orders give rise to CM curves with field of moduli Q, sorted by the order # of the group of roots of unity. that the form is symmetric and positive definite. The Riemann form is called a principal polarization if and only if the form E on L has determinant equal to 1. We call a basis (λ 1 , . . . , λ 2g ) of L symplectic if the matrix of E with respect to the basis is given in terms of g × g blocks as For every principal polarization, there exists a symplectic basis. If we write out the elements of a symplectic basis as column vectors in terms of a C-basis of V , then we get a g × 2g period matrix. The final g elements of a symplectic basis of L for E form a C-basis of V , so we use this as our basis of V . Then the period matrix takes the form (τ | I g ), where the g × g complex matrix τ has the following properties: (1) τ is symmetric, (2) Im(τ ) is positive definite. The set of such matrices forms the Siegel upper half space H g . Conversely, from every Riemann matrix τ , we get the complex abelian variety which we can equip with a Riemann form given by Ω g with respect to the basis given by the columns of (τ | I g ). Given a CM field K, Algorithm 1 of Van Wamelen [60] (based on the theory of Shimura-Taniyama [50]) computes at least one Riemann matrix for each isomorphism class of principally polarized abelian variety with CM by the maximal order of K. For details, and an improvement which computes exactly one Riemann matrix for each isomorphism class, see also Streng [55,54]. 
In our implementation, we could simplify the algorithm slightly, owing to the special form taken in our situation by the group appearing in one of its steps.
1.3. Reduction of Riemann matrices. The isomorphism class of the principally polarized abelian variety (C^g/(τZ^g + Z^g), Ω_g) of Section 1.2 depends only on the orbit of τ under the action of Sp_2g(Z), so we change τ into an Sp_2g(Z)-equivalent matrix on which the theta constants have faster convergence. For this, we use [31, Algorithm 2 in §4.1]. To avoid numerical instability, we replace the condition |τ′_11| ≤ 1 in Step 3 of loc. cit. by |τ′_11| < 0.99. The result of this reduction then is a matrix τ ∈ H_g such that the real parts of all entries have absolute value ≤ 1/2, such that the upper left entry has absolute value |τ_11| ≥ 0.99, and such that the imaginary part Y = Im(τ) is Minkowski-reduced, i.e.,
(a) for all j = 1, ..., g and all v = (v_1, ..., v_g) ∈ Z^g with gcd(v_j, ..., v_g) = 1, we have v^t Y v ≥ Y_jj, and
(b) for all j = 1, ..., g − 1, we have Y_{j,j+1} ≥ 0.
For example, taking i > j and v = e_i ± e_j in (a) gives Y_ii ± 2Y_ij ≥ 0, while taking j = 1 and v = e_i in (a) gives Y_ii ≥ Y_11. We also have
Y_11 = Im(τ_11) ≥ (0.99² − 1/4)^{1/2} > 0.85,   (1.9)
since |τ_11| ≥ 0.99 while |Re(τ_11)| ≤ 1/2. We have implemented this algorithm, as well as a version with Minkowski reduction replaced by LLL reduction, which scales better with g than Minkowski reduction. For a self-contained exposition including a proof that the LLL version of the algorithm terminates, see the first arXiv version of our paper, and for an alternative approach with LLL reduction, see Deconinck, Heil, Bobenko, van Hoeij, and Schmies [10].
1.4. Conclusion and efficiency. It takes only a minute to compute the reduction with either the Minkowski or the LLL version of the reduction algorithm for all our Riemann matrices. We did not notice any difference in efficiency of the numerical evaluation of Dixmier-Ohno invariants (as in Section 2) between the Minkowski version of the reduction and the LLL version. Without reduction, we were unable to do the computations to sufficient precision for reconstructing all the curves. We conclude that for g = 3 there is no reason to prefer one of these algorithms over the other, but it is very important to use at least one of them. We do advise caution with the LLL version, as the analysis in Section 2 below is valid only for Minkowski-reduced matrices.
Computing the Dixmier-Ohno invariants
In this section, we show how, given a Riemann matrix τ, we can obtain an approximation of the Dixmier-Ohno invariants of a corresponding plane quartic curve. One procedure has been described in [20] and relies on the computation of derivatives of odd theta functions. Here we take advantage of the existence of fast strategies to compute the Thetanullwerte to emulate the usual strategy for such computations in the hyperelliptic case [63,3]: we use an analogue of the Rosenhain formula to compute a special Riemann model for the curve from the Thetanullwerte, from which we then calculate an approximation of the Dixmier-Ohno invariants. By normalizing these, we find an explicit conjectural representative of the Dixmier-Ohno invariants as an element of a weighted projective space over Q.
2.1. Fast computation of the Thetanullwerte from a Riemann matrix.
Definition 2.1. The Thetanullwerte or theta constants of a Riemann matrix τ ∈ H_3 are defined as
ϑ[a;b](τ) = Σ_{n ∈ Z³} exp( iπ (n+a)^t τ (n+a) + 2iπ (n+a)^t b ),
where a, b ∈ {0, 1/2}³. We define the fundamental Thetanullwerte to be those ϑ[a;b] with a = 0; there are 8 of them.
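As a concrete (and deliberately unoptimized) illustration of this definition, the following Python/mpmath sketch evaluates the fundamental constant ϑ[0;0] by direct truncated summation. It is an illustration only: the example τ is an arbitrary reduced matrix, and none of the recurrence or Borchardt machinery developed below is used.

```python
from mpmath import mp, mpc, exp, pi, matrix

mp.dps = 50  # working precision in decimal digits

def theta0(tau, B):
    """Truncated sum over |m|,|n|,|p| <= B of exp(i*pi* v^t tau v),
    approximating theta_0(tau) for g = 3; tau is a 3x3 matrix in H_3."""
    s = mpc(0)
    R = range(-B, B + 1)
    for m in R:
        for n in R:
            for p in R:
                v = (m, n, p)
                q = sum(v[i] * tau[i, j] * v[j]
                        for i in range(3) for j in range(3))
                s += exp(1j * pi * q)
    return s

tau = matrix([[1.1j, 0.1j, 0.0],
              [0.1j, 1.2j, 0.1j],
              [0.0, 0.1j, 1.3j]])
print(theta0(tau, 6))  # already stable to many digits for a reduced tau
```

The cubic number of terms in B is exactly why the paper's analysis of the truncation bound B (and later the quasi-linear algorithm) matters.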
In many applications, only the 36 so-called even Thetanullwerte are considered, which are those for which the dot product 4a · b is even. The other Thetanullwerte turn out to always be equal to 0. We further simplify notation by writing ϑ_i = ϑ[a;b], where we number the Thetanullwerte by interpreting the reverse of the sequence (2b‖2a) as a binary expansion. This is the numbering used in, e.g., [13,30]. For notational convenience, we write ϑ_{n_1,...,n_k} for the k-tuple ϑ_{n_1}, ..., ϑ_{n_k}. In this section, we describe a fast algorithm to compute the Thetanullwerte with high precision. Note that it is sufficient to describe an algorithm that computes the fundamental Thetanullwerte; we can then compute the squares of all 64 Thetanullwerte by computing the fundamental ones at τ/2 and using the following τ-duplication formula [21, Chap. IV]:
ϑ[a;b](τ)² = 2^{−g} Σ_{β ∈ {0,1/2}^g} (−1)^{4a·β} ϑ[0;β](τ/2) ϑ[0;b+β](τ/2).
We can then recover the 64 Thetanullwerte from their squares by using a low-precision approximation of their value to decide on the appropriate square root. Both algorithms described in this subsection have been implemented in Magma [29].
2.1.1. Naive algorithm for the Thetanullwerte. A (somewhat) naive algorithm to compute the Thetanullwerte consists in computing the sum in Definition 2.1 until the remainder is too small to make a difference at the required precision. We show in this section that it is possible to compute the genus 3 Thetanullwerte up to precision P (that is, up to an absolute difference of absolute value at most 10^{−P}) by using O(M(P)P^{1.5}) bit operations. Here M(P) is the number of bit operations needed for one multiplication of P-bit integers. This running time is the same as for the general strategy given in [11], as analyzed in [30, Section 5.3]. Let t_{m,n,p} = e^{iπ (m,n,p) τ (m,n,p)^t}, so that ϑ_0(τ) = Σ_{m,n,p ∈ Z} t_{m,n,p}. Our algorithm computes the approximation
S_B = Σ_{|m|,|n|,|p| ≤ B} t_{m,n,p}.
The main idea is to use the following recurrence relations. Let q_{jk} = e^{iπ τ_{jk}}. Then we have
t_{m+1,n,p} = t_{m,n,p} q_{11}^{2m} q_{11} q_{12}^{2n} q_{13}^{2p},
t_{m,n+1,p} = t_{m,n,p} q_{22}^{2n} q_{22} q_{12}^{2m} q_{23}^{2p},   (2.6)
t_{m,n,p+1} = t_{m,n,p} q_{33}^{2p} q_{33} q_{23}^{2n} q_{13}^{2m}.
This algorithm can be modified to compute approximations of any fundamental Thetanullwert ϑ[0;b] by adjusting the sign of each term (with a factor (−1)^{2(m,n,p)·b}). Hence, the computation of S_B reduces to the computation of the q_{jk} and the use of the recurrence relations to compute each term. We prove in the rest of the section that, for this algorithm to compute ϑ_0 up to 2^{−P}, it suffices to take B = O(√P); that is, we prove that the truncation error |ϑ_0(τ) − S_B| is then at most 2^{−P}. This allows the computation of the genus 3 Thetanullwerte in O(M(P)P^{1.5}); we refer to our implementation [29] of the naive algorithm for full details. Our analysis is similar to the ones in [13,30]. We use the following lemma (Lemma 2.9), of which we defer the proof until the end of §2.1.1. Note that by (1.9), we have (1/100)Y_11 ≥ 0.0085. For the theoretical complexity bound O(√P), it will suffice to use this lemma as it is. However, for a practical algorithm, the constant (1/100)Y_11 is far from optimal, and we use the following better constant. Let c_1 be a second explicit constant extracted from the entries of Y, which in practice tends to be much larger than (1/100)Y_11, and set c = max((1/100)Y_11, c_1).
Proof. In case c = (1/100)Y_11, use Lemma 2.9. Otherwise, we have c = c_1. Now, for (m, n, p) ∈ R³, using the inequalities 2|mn| ≤ m² + n² and Y_12, Y_23 ≥ 0, we obtain the required lower bound on the quadratic form, since we have an absolute lower bound c ≥ 0.0085. Therefore taking B of this size is enough to ensure that S_B is within 2^{−P} of ϑ_0. This proves our complexity estimates for Algorithm 2.7, when combined with the following deferred proof.
Proof of Lemma 2.9. In the expansion (2.12) of the quadratic form, all terms on the right-hand side are non-negative. We distinguish between three cases: I, II+ and II−.
Case I: There exists a j ≠ I with s_Ij n_j > −3/4.
Case II±: For all j ≠ I, we have s_Ij n_j ≤ −3/4 and s_13 = ±1.
Proof in case I. Without loss of generality, j = J. Taking two suitable terms from (2.12) yields a contradiction.
Proof in case II+. In this case, we have s_ij = 1 for all i and j. In particular, both n_J and n_K are ≤ −3/4, hence negative. We again take two terms from (2.12); these give Y_JJ ≤ 8/75. By symmetry, we also have Y_KK ≤ 8/75. Using (2.12) once more, we get another contradiction.
Proof in case II−. The proof in this case is different from the other two cases: we show that Y is close to a matrix for which the reduction conditions fail, which, for all {i, j, k} = {1, 2, 3}, gives a contradiction.
2.1.2. Fast algorithm for the Thetanullwerte. In this section, we generalize the strategy described in genus 1 and 2 in [13], together with ideas taken from [30, Chapter 7]. This leads to an evaluation algorithm with running time O(M(P) log P). We start, as in [13], by writing the τ-duplication formulas in terms of the ϑ_i². For example, we can write
ϑ_0(2τ)² = (ϑ_0(τ)² + ϑ_1(τ)² + ... + ϑ_7(τ)²)/8.
These formulas match the iteration used in the definition of the genus 3 Borchardt mean B_3 [13]. They can be seen as a generalization of the arithmetic-geometric mean to higher genus, since both involve Thetanullwerte and converge quadratically [13]. Applying the τ-duplication formula to the fundamental Thetanullwerte repeatedly gives (recall that we write ϑ_{n_1,...,n_k} for the k-tuple ϑ_{n_1}, ..., ϑ_{n_k})
B_3(ϑ_{0,...,7}(0, τ)²) = 1,
assuming one picks correct square roots ϑ_i(0, 2^k τ) of the ϑ_i(0, 2^k τ)². By the homogeneity of the Borchardt mean, we can write
B_3(1, ϑ_{1,...,7}(0, τ)²/ϑ_0(0, τ)²) = 1/ϑ_0(0, τ)².   (2.22)
We wish to use this equality to compute the right-hand side from the quotients of Thetanullwerte; this is a key ingredient of the quasi-linear running time of our algorithm. The difficulty here stems from the fact that the Borchardt mean requires a technical condition on the square roots picked at each step ("good choice") in order to get a quasi-linear running time, and sometimes these choices of square roots do not correspond to the values of ϑ_i we are interested in (i.e., they would not give 1/ϑ_0(0, τ)² at the end of the procedure). We sidestep this difficulty using the same strategy as [30]: we design our algorithm so that the square roots we pick always correspond to the values of ϑ_i we are interested in, even when they do not correspond to "good choices" of the Borchardt mean. This slows down the convergence somewhat; however, one can prove (using the same method as in [30, Lemma 7.2.2]) that after a number of steps that only depends on τ (and not on P), our choice of square roots always coincides with "good choices". After this point, only log P steps are needed to compute the value with absolute precision P, since the Borchardt mean converges quadratically; this means that the right-hand side of Equation (2.22) can be evaluated with absolute precision P in O(M(P) log P).
The next goal is to find a function F to which we can apply Newton's method to compute these quotients of Thetanullwerte (and, ultimately, the Thetanullwerte). For this, we use the action of the symplectic group on the Thetanullwerte to transform (2.22) and obtain relationships involving the coefficients of τ. Using the action of the matrices described in [13, Chapitre 9], along with the Borchardt mean, we can build a function f that maps the quotients ϑ_{1,...,7}(0, τ)²/ϑ_0(0, τ)² to the six coefficients of τ. However, the above function is a function from C^7 to C^6; this is a problem, as it prevents us from applying Newton's method directly.
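To illustrate the shape of this iteration, here is a small mpmath sketch of one Borchardt-type step on an 8-tuple, using the duplication relation ϑ_i(2τ)² = 2^{−3} Σ_j ϑ_j(τ) ϑ_{i⊕j}(τ) in the XOR-compatible numbering above. The sketch simply takes principal square roots, i.e., it assumes we are already in the regime where these are "good choices", which is exactly the subtlety discussed in the text.

```python
from mpmath import mp, sqrt

mp.dps = 50

def borchardt_step(x):
    """One duplication step on an 8-tuple: given x_i ~ theta_i(tau)^2,
    return y_i ~ theta_i(2*tau)^2 = 2^{-3} sum_j sqrt(x_j)*sqrt(x_{i^j}).
    Principal square roots are assumed to be the correct choices."""
    r = [sqrt(t) for t in x]
    return [sum(r[j] * r[i ^ j] for j in range(8)) / 8 for i in range(8)]

def borchardt_mean(x, steps=40):
    """Iterate to the (quadratically convergent) common limit B_3(x)."""
    for _ in range(steps):
        x = borchardt_step(x)
    return x[0]  # all eight entries agree in the limit
```

Starting from the 8-tuple of squared fundamental Thetanullwerte of a reduced τ, the iteration drives all entries to a common limit, consistent with B_3(ϑ_{0,...,7}(0, τ)²) = 1 above.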
As discussed in [30, Chapter 7], there are two ways to fix this dimension mismatch: either work on the variety of dimension 6 defined by the fundamental Thetanullwerte, or add another quantity to the output and hope that the Jacobian of the system is then invertible. We choose the latter solution, and build a function F : C^7 → C^7 by adding to the function f above an extra output, equal to −i det(τ), which is motivated by the symplectic action of the matrix J = ( 0 −I_g ; I_g 0 ) on the Thetanullwerte:
ϑ_0(0, −τ^{−1})² = det(−iτ) ϑ_0(0, τ)².
The following Algorithm 2.25 explicitly defines the function F that we will use.
Algorithm 2.25. Given a 7-tuple a_1, a_2, ..., a_7 ∈ C, compute a number F(a_1, ..., a_7), defined by the steps of this algorithm. Here we are specifically interested in the value F(ϑ_{1,...,7}(0, τ)²/ϑ_0(0, τ)²), so for clarity we abuse notation and denote a_i by ϑ_i(0, τ)²/ϑ_0(0, τ)². The algorithm first computes square roots t_i of the a_i, choosing the square root that coincides with the value of ϑ_i(0, τ) (computed at low precision just to inform the choice of signs). It then applies the τ-duplication formulas to the t_i to compute complex numbers that, by abuse of notation, we write as the corresponding quotients at 2τ.
The final part of our algorithm applies Newton's method to F, starting with an approximation of the quotients of Thetanullwerte at a precision P_0 large enough to ensure that the method converges. In practice, we found that a starting precision P_0 = 450 was on the one hand large enough to make Newton's method converge quickly, and on the other hand small enough that the fast algorithm does not get slowed down too much by first running the naive algorithm to precision P_0. Since computing F is asymptotically as costly as computing the Borchardt mean, and since there is no extra asymptotic cost when applying Newton's method if one doubles the working precision at each step, we get an algorithm which computes the genus 3 Thetanullwerte with P digits of precision in time O(M(P) log P). This algorithm was implemented in Magma, along with the aforementioned naive algorithm. For our examples, the fast algorithm always gives a result with more than 2000 digits of precision in less than 10 seconds.
2.2. From the Thetanullwerte to the Dixmier-Ohno invariants. Given the 64 Thetanullwerte of a Riemann matrix τ ∈ H_3, these values correspond to a smooth plane quartic curve if and only if exactly 36 of them are non-zero. If this condition is satisfied, the following procedure determines the equation of a plane quartic X_C for which there is a Riemann matrix τ that gives these Thetanullwerte. Using [62, p.108] (see also [16]), we compute the Weber moduli a_ij given by (2.26); note that these numbers depend on only 18 of the Thetanullwerte. The three projective lines ℓ_i : a_i1 x_1 + a_i2 x_2 + a_i3 x_3 = 0 in P²_C, together with the four lines x_1 = 0, x_2 = 0, x_3 = 0 and x_1 + x_2 + x_3 = 0, will form a so-called Aronhold system of bitangents to the eventual quartic X_C. Considering the first three lines as a triple of points ((a_i1 : a_i2 : a_i3))_{i=1...3} in (P²)³, one obtains a point on a 6-dimensional quasiprojective variety. Its points parametrize the moduli space of smooth plane quartics with full level two structure [19]. From an Aronhold system of bitangents, one can reconstruct a plane quartic following Weber's work [62, p.93] (see also [46,16]). We take advantage here of the particular representatives (a_i1, a_i2, a_i3) of the projective points (a_i1 : a_i2 : a_i3) to simplify the algorithm presented in loc. cit. Indeed, that algorithm normally involves certain normalization constants k_i. However, in the current situation [16, Cor. 2] shows that these constants are automatically equal to 1 for our choices of a_ij in (2.26), which leads to a computational speedup.
Let u_1, u_2, u_3 ∈ C[x_1, x_2, x_3] be the linear forms given by (2.28). Then X_C is the curve defined by the equation (2.29). We now have a complex model X_C of the quartic curve that we are looking for. Note that there is no reason to expect X_C to be defined over Q; its coefficients will in general be complicated algebraic numbers that are difficult to recognize algebraically. To get around this problem, we first approximate its 13 Dixmier-Ohno invariants
I = (I_3 : I_6 : I_9 : I′_9 : I_12 : I′_12 : I_15 : I′_15 : I_18 : I′_18 : I_21 : I′_21 : I_27),
which were defined in [12,15,17] and are homogeneous of degrees d = (3, 6, 9, 9, 12, 12, 15, 15, 18, 18, 21, 21, 27) in the coefficients of the quartic. Therefore the evaluation of these invariants at X_C (which we still denote by I) gives rise to a point in the weighted projective space P_d. Note that I_27 is the discriminant of X_C, which is non-zero. For a ternary quartic form over $\overline{\mathbb{Q}}$ that is equivalent to a ternary quartic form over Q, the tuple I defines a Q-rational point in P_d. This is not to say that the entries of I itself are in Q. However, we can achieve this by suitably normalizing this tuple. When I_3 ≠ 0 (as will always be the case for us), we can for instance use the normalization
I_norm = (1, I_6/I_3², I_9/I_3³, I′_9/I_3³, ..., I_27/I_3⁹).   (2.31)
Our program concludes by computing the best rational approximation of (the real part of) the Dixmier-Ohno invariants I_norm by using the corresponding (Pari [4]) function BestApproximation in Magma at increasing precision until the sequence stabilizes. In practice, this does not take an overly long time: we worked with fewer than 1000 decimal digits, and the denominators involved never exceeded 100 decimal digits. For some of the CM fields, we in fact obtain 4 isomorphism classes of principally polarized abelian varieties. But by Theorem 1.1, we know that exactly one of them has field of moduli Q. Of course we do not know in advance which of the four complex tori under consideration has this property. In such a case, we use BestApproximation for each of the four cases, and we observe that this succeeds (at fewer than 1000 decimal digits) for exactly one of them. We then only set aside the Dixmier-Ohno invariants of that case for later consideration. Having cleared denominators to obtain an integral representative I′, we can now find the prime factors p of I_3 and look at the valuations at p of each entry of I′. Since an invariant I of degree 3n scales as I ↦ λ^{3n} I under the weighted action of a scalar λ, by choosing for λ suitable powers of p we can reduce the valuations at p of these invariants. Applying this as much as possible while keeping all valuations non-negative, we find
I_min = (2^5 · 3 · 23 : 2^3 · 3967 : 2^3 · 3 · 5 · 41 · 173 · 19309 : ... : 2^5 · 3^27 · 19^7).   (2.35)
Note that we cannot always get a representative with coprime entries; already in the case under consideration, the prime 2 divides all the entries.
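The normalization (2.31) and the rational-recognition step can be mimicked as follows. This is a sketch only: the paper's code uses Magma and Pari's BestApproximation at increasing precision, while here we use Python's Fraction.limit_denominator; the degree list is the standard one for the 13 Dixmier-Ohno invariants.

```python
# Sketch of the normalization (2.31) and rational recognition,
# assuming I[0] = I_3 != 0.
from fractions import Fraction
from mpmath import mp, mpf

mp.dps = 60
DEGREES = (3, 6, 9, 9, 12, 12, 15, 15, 18, 18, 21, 21, 27)

def normalize(I):
    """Scale-invariant representative: entry of degree d -> I_d / I_3^(d/3)."""
    return [I[k] / I[0] ** (d // 3) for k, d in enumerate(DEGREES)]

def recognize(x, max_den=10**40):
    """Best rational approximation of (the real part of) an invariant."""
    return Fraction(str(mpf(x.real))).limit_denominator(max_den)

print(recognize(mpf(2) / 3 + mpf(10) ** -45))  # recovers 2/3
```

As in the text, one would rerun this at increasing precision and accept the rational tuple only once it stabilizes.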
Optimized reconstruction
Having the Dixmier-Ohno invariants at our disposal, it remains to reconstruct a corresponding plane quartic curve X over Q. It was indicated in [42] how such a reconstruction can be obtained; however, the corresponding algorithms, the precursors of those currently at [41], were suboptimal in several ways. To start with, they would typically return a curve over a quadratic extension of the base field, without performing a further Galois descent. Secondly, the coefficients of these reconstructed models were typically of gargantuan size. In this section we describe the improvements to the algorithms, incorporated in the present version of [41], that enabled us to obtain the simple equations in this paper. The basic ingredients are the following. A Galois descent to the base field can be found by determining an isomorphism of X with its conjugate and applying an effective version of Hilbert's Theorem 90, as was also mentioned in [42]. After this, a reduction algorithm can be applied, based on algorithms by Elsenhans [14] and Stoll [53] that have been implemented and combined in the Magma function MinimizeReducePlaneQuartic. However, applying these two steps consecutively without further care is an overly naive approach, since the Galois descent step blows up the coefficients by an unacceptable factor. We therefore have to look under the hood of our reconstruction algorithms and use some tricks to optimize them.
Recall from [42] that the reconstruction algorithm finds a quartic form F by first constructing a triple (b_8, b_4, b_0) of binary forms of degrees 8, 4 and 0. Our first step is to reconstruct the form b_8 as efficiently as possible. This form is reconstructed from its Shioda invariants S, which are algebraically obtained from the given Dixmier-Ohno invariants I_min. Starting from the invariants S, the methods of [37] are applied, which furnish a conic C and a quartic H in P² that are both defined over Q. This pair corresponds to b_8 in the sense that over $\overline{\mathbb{Q}}$ the divisor C ∩ H on C can be transformed into the divisor cut out by b_8 on P¹. A priority in this reconstruction step is to find a conic C defined by a form whose discriminant is as small as possible.
3.1. Choosing the right conic for Mestre reconstruction. Let k be a number field whose ring of integers O_k admits an effective extended GCD algorithm, which is for example the case when O_k is a Euclidean ring. We indicate how over such a field we can improve the algorithms developed to reconstruct a hyperelliptic curve from its Igusa or Shioda invariants in genus 2 or genus 3, respectively [44,36,37]. Recall that Mestre's method for hyperelliptic reconstruction is based on Clebsch's identities [37, Sec. 2.1]. It uses three binary covariants q = (q_1, q_2, q_3) of order 2. From these forms, one can construct a plane conic C_q : Σ_{1≤i,j≤3} A_{i,j} x_i x_j = 0 and a degree g+1 plane curve H_q over the ring of invariants. Here g is the genus of the curve that we wish to reconstruct. Given a tuple of values of hyperelliptic invariants over k, we can substitute to obtain a conic and a curve that we again denote by C_q and H_q. Generically, one then recovers a hyperelliptic curve X with the given invariants by constructing the double cover of C_q ramified over C_q ∩ H_q. Because the coefficients of the original universal forms C_q and H_q are invariants of the same degree, the substituted forms will be defined over k. By [37,39], finding a model of X of the form y² = f(x) over k (also called a hyperelliptic model) is equivalent to finding a k-rational point on the conic C_q. Algorithms to find such a rational point exist [51,61], and their complexity is dominated by the time spent factorizing the discriminant of an integral model of C_q. While a hyperelliptic model may not exist over k, it can always be found over some quadratic extension of k. It is useful to have such an extension given by a small discriminant, which is in particular the case when C_q has small discriminant. Accordingly, we turn to the problem of minimizing disc(C_q). In order to do so, we use a beautiful property of Clebsch's identities. By [37, Sec. 2.1 (5)], we have that
disc(C_q) equals R_q² up to a non-zero invariant factor,   (3.1)
where R_q is the determinant of q_1, q_2, q_3 in the basis x², xz, z². If q′_3 is now another covariant of order 2, we can consider the family of covariants q_{λ,μ} = (q_1, q_2, λq_3 + μq′_3), with λ, μ ∈ k.
For this family, the multilinearity of the determinant shows that
R_{q_{λ,μ}} = λ R_{(q_1,q_2,q_3)} + μ R_{(q_1,q_2,q′_3)}.
The values R_{(q_1,q_2,q_3)} and R_{(q_1,q_2,q′_3)} are invariants that can be effectively computed and which are generically non-zero. (If either of these invariants is zero, then one can usually take different covariants q_i; if all of these fail to give non-zero values, then typically X has a large reduced automorphism group and other techniques can be used.) The key point is that we can minimize the value of R_{q_{λ,μ}}, and by (3.1) the value of disc(C_{q_{λ,μ}}) with it, by using the extended Euclidean algorithm to minimize the combined linear contribution of λ and μ to the linear expression R_{q_{λ,μ}}. This allows us to reduce the discriminant all the way down to gcd(R_{(q_1,q_2,q_3)}, R_{(q_1,q_2,q′_3)}) or beyond. Note that we do not have C_{q_{λ,μ}} = λ² C_{(q_1,q_2,q_3)} + μ² C_{(q_1,q_2,q′_3)}. However, the coefficients of the family of conics C_{q_{λ,μ}} and of the curves H_{q_{λ,μ}} can be quickly found in terms of the invariants and λ, μ by using the same interpolation techniques as in [37, Sec. 2.3].
3.2. Reconstruction of a plane quartic model from the invariants. With these precomputations out of the way, we now search for a binary octic form b_8 whose Shioda invariants come from the first step of the reconstruction algorithm of [42] applied to the Dixmier-Ohno invariants of case 15 (cf. Table 1). Except for this case and case 6, all the other cases give conics C with no rational point, and as such a Galois descent phase is needed to find a rational quartic (cf. Section 3.3). Case 15 is the easiest CM plane quartic that we have to reconstruct. From the associated Shioda invariants, we compute an invariant R_{q_{λ,μ}}. By using the extended GCD algorithm and substituting the result for λ and μ, we are left with R_q equal to 2^61 · 3^18 ⋯ 201049. This factor is almost equal to the Dixmier-Ohno invariant I_12^min, the discriminant of the covariant used in our quartic reconstruction. Indeed, the considerations in [42] show that our reconstruction algorithm fails when I_12^min = 0, and more precisely that this failure occurs when trying to reconstruct b_8 via Mestre's method. Hence the primes which divide I_12^min naturally appear in the discriminants of C_{q_{λ,μ}}. A substantially smaller R_q can therefore not be expected. Now, as we know the factorization of R_q, we can efficiently determine whether the conic C_{q_{λ,μ}} has a rational point. Unexpectedly, it has one, and after a change of variables we map it to the point (1 : 1 : 0). The conic is then C : x² − y² − z² = 0, and the corresponding quartic H has coefficients of approximately 50 digits. Finally, it remains to compute the geometric intersection C ∩ H. This yields the octic b_8. The forms b_0 and b_4 computed by the plane quartic reconstruction algorithm [42] are therefore defined over Q as well. By applying the linear map (ℓ*)^{−1} defined in loc. cit., we get a plane quartic defined over Q too. It remains to reduce the size of its coefficients as explained in Section 3.3 to obtain the equation given in Section 5.
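Since R_{q_{λ,μ}} is literally a Z-linear expression in λ and μ when the invariants are integral, the minimization above is just an extended GCD computation. The following sketch (with made-up values of the two invariants) shows the idea.

```python
# Extended-GCD step of the "conic trick": Bezout coefficients (lam, mu)
# realize the minimal attainable |lam*R1 + mu*R2| = gcd(R1, R2),
# and by (3.1) the discriminant of the conic shrinks accordingly.
def xgcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b)."""
    s0, s1, t0, t1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return a, s0, t0

R1, R2 = 2**10 * 3**4 * 7, 2**6 * 3**5 * 11   # hypothetical invariant values
g, lam, mu = xgcd(R1, R2)
assert lam * R1 + mu * R2 == g
```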
3.3. Galois descent and minimization. Now suppose that we have in this way found a pair (C, H) as above, for which C has minimal discriminant. We can then further optimize this pair by applying the following two steps: (i) minimize the defining equation of C by using the theory of quaternion algebras (implemented in the Magma function MinimalModel); (ii) apply the reduction theory of point clusters [53] to the intersection C ∩ H (implemented in the Magma function ReduceCluster). The second step above is more or less optional; typically it leads to a rather better H at the cost of a slightly worse C. Regardless, at the end of this procedure, we can construct a binary form b_8 over a quadratic extension K of Q by parametrizing the conic C, and we then reconstruct b_4 and b_0 as in [42]. The associated ternary quartic form F is usually defined over a quadratic extension of Q. Since its covariant ρ(F) from [42] is a multiple of y² − xz, we can immediately apply the construction from [59] to obtain an element [M] ∈ PGL_3(K) that, up to a scalar λ, transforms F into its conjugate σ(F):
F.M = λ σ(F),   (3.3)
and the corresponding matrix M then satisfies
M σ(M) = π   (3.4)
for some scalar matrix π. Conjugating this equality shows that in fact π ∈ Q, and taking determinants yields δ σ(δ) = π³, where δ is the determinant of M. Now let M_0 = (π/δ)M. Then we have
M_0 σ(M_0) = (π²/(δσ(δ))) M σ(M) = π³/(δσ(δ)) = 1.
We may therefore assume that M ∈ GL_3(K) corresponds to a lifted cocycle. The Galois cohomology group H¹(Gal(K | Q), GL_3(K)) is trivial; Hilbert's Theorem 90 can be used to construct a coboundary N for M, that is, a matrix in GL_3(K) for which
M σ(N) = N.   (3.6)
After choosing a random matrix R ∈ GL_3(K), one can in fact take
N = R + M σ(R).   (3.7)
We thus obtain a coboundary N corresponding to the cocycle M. If we put F_0 = F.N, then the class [F_0] is defined over Q. The transformed form F_0 itself still need not be defined over Q, but this can be achieved by dividing it by one of its coefficients. A complication is that the determinant of a random matrix N as in (3.7) typically has a rather daunting factorization. These factors can (and usually will) later show up as places of bad reduction of the descended form F_0. It is therefore imperative to avoid a bad factorization structure of the determinant of N. This, however, can be ensured by performing a lazy factorization of this determinant and passing to a next random choice if the result is not satisfactory. After we have obtained a form F_0, one can apply the Magma function MinimizeReducePlaneQuartic; this function combines a discriminant minimization step due to Elsenhans [14] with the reduction theory of Stoll [53]. Typically the first of these steps leads to the most significant reduction of the coefficient size, since it applies a suitable transformation in GL_3(Q) whose determinant is a large prime, whereas the cluster reduction step is a further optimization involving only the subgroup SL_3(Z). As mentioned above, we can save some time in the minimization step by carrying over the primes in the factorization of the determinant of the coboundary N, since these will recur in the set of bad primes of F_0.
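The identity behind (3.6)-(3.7) is elementary to check: if M σ(M) = 1, then N = R + M σ(R) satisfies M σ(N) = M σ(R) + M σ(M) R = N. A toy one-dimensional verification over Q(√2), with sympy rather than the paper's Magma code:

```python
# Toy (1x1) check of the coboundary construction N = R + M*sigma(R).
import sympy as sp

r2 = sp.sqrt(2)

def sigma(e):
    """Conjugation of K = Q(sqrt(2)) over Q."""
    return e.subs(r2, -r2)

M = (1 + r2) / sigma(1 + r2)              # a cocycle: M * sigma(M) == 1
assert sp.simplify(M * sigma(M) - 1) == 0

R = sp.Rational(3, 5) + 2 * r2            # a "random" element of K
N = R + M * sigma(R)
assert sp.simplify(M * sigma(N) - N) == 0  # N is a coboundary for M
```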
All in all, we get the following randomized algorithm, whose heuristic complexity is polynomial in the size of the Dixmier-Ohno invariants if we assume that the factorizations of I_12^min and I_27^min are known, and that det N behaves as a random integer.
Algorithm 3.8.
(i) Repeat the following steps:
(a) Compute the Shioda invariants S associated to the given Dixmier-Ohno invariants (see [37]).
(b) Evaluate the conic C_{q_{λ,μ}} at S and determine (λ, μ) by using the extended Euclidean algorithm (so that disc C_{q_{λ,μ}} ≃ I_12^min, see Section 3.1).
(c) Choose a point P on the conic C_{q_{λ,μ}} and use it to parametrize the conic. (To achieve this, let P be any rational point of C_{q_{λ,μ}} if one is easy to find. Otherwise, intersect C_{q_{λ,μ}} with a random rational line with a defining equation of small height, and let P be the quadratic point defined by this intersection.)
(d) Intersect C_{q_{λ,μ}} and H_{q_{λ,μ}} to obtain the octic b_8, then calculate the forms b_4 and b_0 and reconstruct a quartic F via the map ℓ*.
(e) If F is defined over Q, then set N to be the identity matrix of GL(3, Q); else let N be a random coboundary as in (3.7).
(f) Try to compute a factorization of det N. If this fails within the allocated time, then start over.
(ii) Let F_0 = F.N and divide the result by one of its coefficients, so that F_0 has coefficients in Q.
(iii) Reduce the coefficient size of F_0 (with MinimizeReducePlaneQuartic, using the prime factors of det N and I_12^min).
One important practical speedup for Algorithm 3.8 exploits the fact that if we take R in (3.7) to be integral, then the determinants of the random coboundaries that we compute in Step (i)(e) share the same denominator, namely that of π/δ, where π is as in (3.4) and where δ is the determinant of M. In turn, the quantity π/δ only depends on the choice of the random line in Step (i)(c). A straightforward optimization is thus to loop over Steps (i)(a)-(d) until a lazy factorization of the denominator of det(1 + M) yields its full factorization (note that here M is the cocycle defined by equation (3.3), and 1 + M = R + M σ(R) for R the identity matrix). Once this is done, we can loop over Steps (i)(e)-(f) to test as many coboundaries N = R + M σ(R) built from random integral matrices R as needed, once more until the lazy factorization of the numerator of det N is its full factorization. In the most difficult case, i.e., case 16 (cf. Table 1), the candidates for det N have approximately 500-digit denominators and 700-digit numerators. If we allow less than a second for the lazy factorization routine in Magma, then the total computation in the end takes less than 5 minutes on a laptop. In this case, the descended form F_0 = F.N has coefficients of roughly 1500 digits! Once the discriminant minimization steps from [14] are done for each prime divisor of det N, we are left with a form that "merely" has 50-digit coefficients. Stoll's reduction method [53] then finally yields the 15-digit equation given in Section 5.
Remark 3.9. Bouyer and Streng [7, Algorithm 4.8] show how one can avoid factoring in the discriminant minimization of binary forms. Such a trick enabled them to eliminate the need for a loop like that in Step (i) of Algorithm 3.8 when considering curves of genus 2. It remains to be seen whether a similar trick applies to Elsenhans's discriminant minimization of plane quartics [14]. If it does, then that would greatly speed up the reconstruction.
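The "lazy factorization" test of Step (i)(f) can be sketched as follows: trial-divide up to a small bound and accept only if the remaining cofactor is 1 or prime, otherwise report failure so the caller can draw a new random coboundary. The bound and the sample inputs below are arbitrary choices.

```python
# Sketch of a lazy factorization test as in Step (i)(f).
from sympy import isprime

def lazy_factor(n, bound=10**6):
    n = abs(n)
    factors = {}
    d = 2
    while d <= bound and d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n == 1:
        return factors
    if isprime(n):
        factors[n] = factors.get(n, 0) + 1
        return factors
    return None  # large composite cofactor remains: give up, retry

print(lazy_factor(2**7 * 19**7 * 1000003))          # fully factors
print(lazy_factor(1000003 * 1000033, bound=100))    # None: retry
```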
Remarks on the results
Our (heuristic) results can be found in the next section; here we discuss some of their properties and perform a few sanity checks. The very particular pattern of the factorization of the discriminants is already a good indicator of the correctness of our computations. Note that since the Dixmier-Ohno invariants that we use involve denominators with prime factors in {2, 3, 5, 7}, we will not look at the valuations of our invariants at these primes. As was mentioned in the introduction, one of the motivations for computing this list of curves was to have examples in hand to understand the possible generalization of the results of Goren-Lauter [18] in genus 2 to non-hyperelliptic curves of genus 3. In genus 2, all primes dividing the discriminant are primes of bad reduction for the curve. This bad reduction provides information on the structure of the endomorphism ring of the reduction of the Jacobian. This particular structure allows one, with additional work, to bound the primes dividing the discriminant. Taking this even further allowed Lauter-Viray [35] to find out exactly which prime powers divide the discriminant. Similar bounds on primes dividing invariants have been obtained by Kılıçer-Lauter-Lorenzo-Newton-Ozman-Streng for hyperelliptic [24] and Picard [24,25] curves. Beyond these cases, so for "generic" genus 3 CM curves X, the situation is more involved. Let us fix terminology for a prime by calling it
(i) a potentially plane prime if, after extending the base field, X has good non-hyperelliptic reduction at this prime;
(ii) a potentially hyperelliptic prime if, after extending the base field, X has good hyperelliptic reduction at this prime;
(iii) a geometrically bad prime in the remaining cases.
The first case can be detected easily, but distinguishing the second from the third case from the knowledge of the Dixmier-Ohno invariants is a difficult task and will be the main result of [43]. Applying these forthcoming results to the list of curves of Section 5, it can be proved for those curves that all primes p > 7 dividing I_27^min with exponent 7 or 14 are potentially hyperelliptic, whereas the few primes that are not of this kind are geometrically bad. The primes p > 7 dividing disc(X)/I_27^min for the curves of Section 5 are all potentially plane primes. This profusion of hyperelliptic primes is typical of the CM case. Since the curves that we consider are CM curves, their Jacobians have potentially good reduction at all primes. Therefore, a prime is bad for X if and only if the Jacobian of X reduces to a product of two abelian subvarieties with a decomposable principal polarization. The locus of such abelian threefolds is of codimension 2 in the moduli space of principally polarized abelian threefolds, whereas the locus of Jacobians of hyperelliptic curves has codimension 1. We therefore expect that "most" of the non-potentially-plane primes dividing the discriminant of a CM plane quartic are potentially hyperelliptic primes. It should be mentioned that the results of [43] do not provide a closed formula for the potentially hyperelliptic primes simply in terms of the CM type and polarization. In fact we wish to conclude this section with two remarks on the primes dividing I_27^min which suggest that new phenomena occur for potentially hyperelliptic primes of plane quartics that have no exact equivalent in lower genus and that will require new theoretical developments in order to be fully explained. First, unlike the factorization pattern of the discriminants in the genus 2 CM, hyperelliptic and Picard cases, the factorization pattern of the product b of the potentially hyperelliptic primes seems to fit with that of a random integer of size b. For example, in case 16 below we have b = 19 · 37 · 79 · 13373064392147. Secondly, the following proposition (applied for instance to X_9 at the primes 233 and 857, which are both totally split) shows that the reduction of the Jacobian of X at a potentially hyperelliptic prime can still be absolutely simple.
Proof. By a theorem of Serre and Tate [47], the abelian variety A has potentially good reduction. Extend k so that A has good reduction and so that k contains the reflex field.
The Shimura-Taniyama formula [50, Theorem 1(ii) in Section 13.1] then gives a formula for the Frobenius endomorphism of the reduction as an element π ∈ O_K up to units. A theorem in Honda-Tate theory [56, Théorème 1] then gives a formula for the endomorphism algebra in terms of this π. We did the computation for all possible splitting types of a prime in a cyclic sextic number field and found the above-mentioned endomorphism algebras over some finite extension of F_p. Moreover, we found that the endomorphism algebra from loc. cit. in our cases does not change when taking powers of π (i.e., when extending k and the extension of F_p further), so that these are indeed the endomorphism algebras over F_p. Finally, suppose further that A = J(X) and that X does not have potentially good reduction. Then by [6, Corollary 4.3], we get that the reduction of A is not absolutely simple, which gives a contradiction in cases (i) and (ii).
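For the reader's convenience, here is the standard formulation of the Shimura-Taniyama relation invoked in the proof above, transcribed in LaTeX from memory rather than quoted from [50], so the notation is ours: for the CM type (K, Φ), a prime w of K above p, and q the cardinality of the residue field,

```latex
\[
  \frac{\operatorname{ord}_w(\pi)}{\operatorname{ord}_w(q)}
  \;=\;
  \frac{\#\{\varphi \in \Phi \,:\, \varphi \text{ induces } w\}}
       {\#\{\varphi \in \operatorname{Hom}(K, \overline{\mathbb{Q}}_p) \,:\, \varphi \text{ induces } w\}}.
\]
```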
MORA: an Energy-Aware Slack Reclamation Scheme for Scheduling Sporadic Real-Time Tasks upon Multiprocessor Platforms

In this paper, we address the global and preemptive energy-aware scheduling problem of sporadic constrained-deadline tasks on DVFS-identical multiprocessor platforms. We propose an online slack reclamation scheme which profits from the discrepancy between the worst- and actual-case execution times of the tasks by slowing down the speed of the processors in order to save energy. Our algorithm, called MORA, takes into account the application-specific consumption profile of the tasks. We demonstrate that MORA does not jeopardize the system schedulability, and we show by performing simulations that it can save up to 32% of energy (on average) compared to execution without using any energy-aware algorithm.

Introduction

Context of the study. Nowadays, many modern processors can operate at various supply voltages, where different supply voltages lead to different clock frequencies and to different processing speeds. Since the power consumption of a processor is usually a convex and increasing function of its speed, the slower its speed, the lower its consumption [17]. Among the most recent and popular such processors, one can cite the Intel PXA27x processor family [21], used by many PDA devices [20]. Many computer systems, especially embedded systems, are now equipped with such voltage (speed) scaling processors and adopt various energy-efficient strategies for managing their applications intelligently. Moreover, many recent energy-constrained embedded systems are built upon multiprocessor platforms because of their high computational requirements. As pointed out in [10,11], another advantage is that multiprocessor systems are more energy efficient than equally powerful uniprocessor platforms, because raising the frequency of a single processor results in a multiplicative increase of the consumption while adding processors leads to an additive increase. Supported by this emerging technology, the Dynamic Voltage and Frequency Scaling (DVFS) [15] framework has become a major concern for multiprocessor power-aware embedded systems. For real-time systems, this framework consists in reducing the system energy consumption by adjusting the working voltage and frequency of the processors, while respecting all the timing constraints.

Previous work. There is a large body of research on the uniprocessor energy-aware real-time scheduling problem [5,9,22,23,32]. Among those, many slack reclamation approaches have been developed over the years. Such techniques dynamically collect the unused computation time at each early task completion and share it among the remaining pending tasks. Examples of such approaches include the ones proposed in [5,27,28,33]. Some reclaiming algorithms even anticipate the early completion of tasks for further reducing the CPU speed [5,27], some having different levels of "aggressiveness" [5]. In [15], Kuo et al. present a state of the art of energy-aware algorithms in multiprocessor environments. As mentioned in that survey, many studies (see for instance [14,16,17,18,34,31]) consider the frame-based task model, i.e., all the tasks share a common deadline and this "frame" is indefinitely repeated. Among the most interesting studies which consider this task model, Zhu et al. [34] explored online slack reclamation schemes (i.e., running during the system execution) for dependent and independent tasks. In [18], Kuo et al.
propose a set of energy-efficient scheduling algorithms with different task remapping and slack reclamation schemes. In [17], the authors address independent tasks, where task migrations are not allowed. In [14], the authors provide some techniques with and without allowing task migration, while assuming that tasks share the same power consumption function and that each processor may run at a selected speed, independently from the speeds of the others. In [16], the authors consider that tasks are allowed to have different power consumption functions. In [31], energy-aware multiprocessor scheduling of frame-based tasks was explored for multiprocessor architectures in which all the processors must share the same speed at any time. Finally, the authors of [13] propose a slack reclamation scheme for identical multiprocessor platforms, while considering frame-based tasks for which the distribution of the computation times is assumed to be known. Targeting a sporadic task model, Anderson and Baruah [3] explored the trade-off between the total energy consumption of task executions and the number of required processors, where all the tasks run at the same common speed. In previous work [26], we provided a technique that determines the minimum common offline speed for every task under the global-EDF policy [7], while considering identical multiprocessor platforms. Furthermore, we proposed in the same study an online algorithm called MOTE which was, to the best of our knowledge, the first to address the global and preemptive energy-aware scheduling problem of sporadic constrained-deadline tasks on multiprocessors. The main idea of MOTE is to anticipate at run-time the coming idle instants in the schedule in order to reduce the processors speed accordingly. This algorithm cannot be considered as a slack reclamation scheme since it does not directly take advantage of early task completions, but it can be combined with slack reclaiming techniques (and in particular with MORA) in order to improve the energy savings.

Contribution of the paper. In this paper, we propose a slack reclamation scheme called MORA for the global and preemptive energy-aware scheduling problem of sporadic constrained-deadline real-time tasks on a fixed number of DVFS-capable processors. According to [15] and to the best of our knowledge, this is the first work which addresses a slack reclamation scheme in this context. Although most previous studies on multiprocessor energy-efficient scheduling assumed that the actual execution time of a task is equal to its Worst-Case Execution Time (WCET), such as those in [2,6,14,31] for instance, this work is motivated by the scheduling of tasks in practice, where tasks usually complete earlier than their WCET [5,34]. The proposed algorithm MORA is an online scheme which exploits early task completions by using as much as possible the unused time to reduce the speed of the processors. Although it has been inspired by the uniprocessor "Dynamic Reclaiming Algorithm" (DRA) proposed in [5], the way in which it profits from the unused time is very different from the DRA, since MORA takes into account the application-specific consumption profile of the tasks.

Organization of the paper. In Section 2, we introduce our model of computation, in particular our task and platform model.

Platform model

We consider multiprocessor platforms composed of a known and fixed number m of DVFS-identical processors {P_1, P_2, ..., P_m}.
"DVFS-identical" means that (i) all the processors have the same profile (in term of consumption, computational capabilities, etc.) and are interchangeable, (ii) two processors running at a same frequency execute the same amount of execution units, and (iii) all the processors have the same minimal and maximal operating frequency denoted by f min and f max , respectively. The processors are referred to as independent, with the interpretation that they can operate at different frequencies at the same time [29,24]. Furthermore, we assume that each processor can dynamically adapt its operating frequency (and voltage) at any time during the system execution, independently from each other. The time overheads on frequency (voltage) switching are assumed to be negligible, such as in many researches [4,9,25,32,35]. We define the notion of speed s of a processor as the ratio of its operating frequency f over its maximal frequency, i.e.: s def = f fmax -with the interpretation that a job that executes on a processor running at speed s for R time units completes s×R execution units. When only K discrete frequencies are available to a processor, they are sorted in the increasing order of frequency and denoted by f 1 , . . . , f K . For each frequency f k such that 1 ≤ k ≤ K, we denote by s k the corresponding speed (i.e., s k def = f k fmax ) and by P (s k ) the power consumption (energy consumption rate) per second while the processor is running at speed s k . The available frequencies and the corresponding core voltages of the Intel XScale processor [1] that will be used in our experiments are outlined in Table 1. Notice that, from our definition of the processor speed, s max is fmax fmax = 1 whatever the considered processor. Moreover, due to the finite number of speeds that are available to any practical processor, any speed s computed by any energy-aware algorithm must be translated into one of the available speeds. In this work, this translation is performed by the function S(s) def = min{s i | s i ≥ s}. Application model A real-time system τ is a set of n functionalities denoted by {τ 1 , τ 2 , . . . , τ n }. Every functionality τ i is modeled by a sporadic constrained-deadline task characterized by three parameters (C i , D i , T i ) -a Worst-Case Execution Time (WCET) C i at maximal processors speed s max (expressed in milliseconds for instance), a minimal interarrival delay T i and a relative deadline D i ≤ T i -with the interpretation that the task τ i generates successive jobs τ i,j (with j = 1, . . . , ∞) arriving at times a i,j such that a i,j ≥ a i,j−1 + T i (with a i,1 ≥ 0), each such job has a worst-case execution time of at most C i time units (at maximal processors speed s max ), and must be completed at (or before) its absolute deadline noted D i,j def = a i,j + D i . According to our definition of the processors speed, a processor running at speed s max = 1 may take up to C i time units to complete a job τ i,j and, at a given speed s, its WCET is Ci s . Notice that, since D i ≤ T i , successive jobs of any task τ i do not interfere with each other. We define the density δ i of the task τ i as the ratio of its WCET at maximal speed s max over its deadline, i.e., Di . We assume that this ratio is not larger than 1 for every task, since a task with a density larger than 1 is never able to meet its deadlines (since task parallelism is forbidden in this work). 
Application model

A real-time system τ is a set of n functionalities denoted by {τ_1, τ_2, ..., τ_n}. Every functionality τ_i is modeled by a sporadic constrained-deadline task characterized by three parameters (C_i, D_i, T_i) -- a Worst-Case Execution Time (WCET) C_i at maximal processor speed s_max (expressed in milliseconds for instance), a minimal inter-arrival delay T_i and a relative deadline D_i ≤ T_i -- with the interpretation that the task τ_i generates successive jobs τ_{i,j} (with j = 1, ..., ∞) arriving at times a_{i,j} such that a_{i,j} ≥ a_{i,j−1} + T_i (with a_{i,1} ≥ 0); each such job has a worst-case execution time of at most C_i time units (at maximal processor speed s_max) and must be completed at (or before) its absolute deadline noted D_{i,j} := a_{i,j} + D_i. According to our definition of the processor speed, a processor running at speed s_max = 1 may take up to C_i time units to complete a job τ_{i,j} and, at a given speed s, its WCET is C_i / s. Notice that, since D_i ≤ T_i, successive jobs of any task τ_i do not interfere with each other. We define the density δ_i of the task τ_i as the ratio of its WCET at maximal speed s_max over its deadline, i.e., δ_i := C_i / D_i. We assume that this ratio is not larger than 1 for every task, since a task with a density larger than 1 is never able to meet its deadlines (task parallelism being forbidden in this work). The maximal density δ_max(τ) of the system is defined as δ_max(τ) := max_{i=1..n} δ_i and its total density is defined as δ_sum(τ) := Σ_{i=1}^{n} δ_i. In our study, all the tasks are assumed to be independent, i.e., there is no communication, no precedence constraint and no shared resource (except the processors) between them. At any time t in any schedule S, a job τ_{i,j} is said to be active iff a_{i,j} ≤ t and it is not completed yet in S. Moreover, an active job is said to be running at time t in S if it is executing on a processor. Otherwise, the active job is pending in a ready-queue of the operating system and we say that it is waiting. Furthermore, a job is said to be dispatched at time t in S if it passes from the waiting state to the running state at time t. Although certain benchmarks provide measured power consumption, we should not ignore that different applications may have different instruction sequences and require different function units in the processor, thus leading to different dynamic consumption profiles. As was already done in [30], we hence introduce a measurable parameter e_i for each task τ_i that reflects this application-specific power difference between the applications and the measured benchmark. Accordingly, the consumption of any task τ_i executed for 1 time unit at speed s_k can be estimated by e_i · (P(s_k) − P_idle) + P_idle [30], where P(s) and P_idle are defined as in Table 1. In the remainder of this paper, we denote by E_i(R, s_k) the energy consumed by the task τ_i when executed for R time units at speed s_k, and we define it as E_i(R, s_k) := R · [e_i · (P(s_k) − P_idle) + P_idle]. As we will see in Section 3.3, MORA uses these energy consumption functions in order to improve the energy saving that it provides. This improvement makes MORA very different from the uniprocessor dynamic reclaiming algorithm DRA proposed in [5].
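The energy model lends itself to a one-line implementation. The sketch below is our own reconstruction: the power function P is a generic convex placeholder (the paper uses measured Intel XScale data), and the closed form for E_i follows by multiplying the per-time-unit estimate by R.

```python
P_IDLE = 0.08  # idle power in watts -- a placeholder, not a Table 1 value

def P(s_k):
    """Placeholder convex power model, normalized to 1 W at full speed;
    in the paper, P(s_k) comes from measured Intel XScale data."""
    return s_k ** 3

def E(e_i, R, s_k):
    """Energy consumed over R time units at speed s_k by a task with
    application-specific factor e_i:
    E_i(R, s_k) = R * (e_i * (P(s_k) - P_idle) + P_idle)."""
    return R * (e_i * (P(s_k) - P_IDLE) + P_IDLE)

# Completing the same 4 execution units more slowly can cost less energy:
print(E(1.1, 4.0, 1.0))  # 4 time units at speed 1.0 -> ~4.37
print(E(1.1, 5.0, 0.8))  # 5 time units at speed 0.8 -> ~2.78
```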
Scheduling specifications

We consider in this study the global scheduling problem of sporadic constrained-deadline tasks on multiprocessor platforms. "Global" scheduling algorithms, contrary to partitioned algorithms, allow different tasks and different jobs of the same task to be executed upon different processors. Furthermore, we consider preemptive scheduling and Fixed Job-level Priority (FJP) assignment, with the following interpretations. In the preemptive global scheduling problem, every job can start its execution on any processor and may migrate at run-time to any other processor if it gets meanwhile preempted by a higher-priority job. We assume in this paper that preemptions are carried out with no loss or penalty. Fixed Job-level Priority assignment means that the scheduler assigns a priority to jobs as soon as they arrive and every job keeps its priority constant until it completes. Global Deadline Monotonic and Global Earliest Deadline First [7] are just some examples of such scheduling algorithms.

Notations

During the system execution, every active job τ_{i,j} has two associated speeds noted s_{i,j} and s^off_{i,j}. The speed s_{i,j} denotes the speed that a processor adopts while executing τ_{i,j}. We assume that these execution speeds s_{i,j} can be modified at any time during the system execution, even during the execution of τ_{i,j}, and any modification is instantaneously reflected on the processor speed. On the other hand, the speed s^off_{i,j} is the offline precomputed execution speed of τ_{i,j}, in the sense that the value of s_{i,j} is always set to s^off_{i,j} at the arrival time of τ_{i,j}. These offline speeds s^off_{i,j} are determined before the system execution and always remain constant at run-time. They may simply be set to the maximal processor speed s_max, or they can be determined by an offline energy-aware strategy, such as the one proposed in [26] for instance. These offline speeds must ensure that all the deadlines are met when the set of tasks is scheduled upon the m processors, even if every job of every task presents its WCET. Notice that, since each task generates an infinity of jobs, the method proposed in [26] determines a common speed for every task and assumes that every job τ_{i,j} inherits the offline speed of τ_i at run-time. MORA is based on reducing online (i.e., while the system is running) the execution speed s_{i,j} of the jobs in order to provide energy savings while still meeting all the deadlines. To achieve this goal, MORA detects whenever the speed s_{i,j} of an active job τ_{i,j} can safely be reduced by performing a comparison between the schedule which is actually produced (called the actual schedule hereafter) and the offline schedule defined below. We will see in the remainder of this section that our algorithm MORA always refers to this offline schedule in order to produce the actual one.

Definition 1 (The offline schedule) The offline schedule is the schedule produced by the considered scheduling algorithm in which every job of every task τ_i runs at its offline speed s^off_{i,j} and presents its WCET.

Figure 1(a) depicts an example of an offline schedule and illustrates the notations that will be used throughout the paper. In this picture, a 5-task system is executed upon 2 processors, where only the first job of each task is represented. The characteristics of the tasks are the following (remember that τ_i = (C_i, D_i, T_i)): τ_1 = (6, 14, 30), τ_2 = (6, 15, 35), τ_3 = (8, 16, 40), τ_4 = (2, 17, 45) and τ_5 = (6, 18, 50). Assuming Global-EDF, we have the following priority order: τ_{1,1} > τ_{2,1} > τ_{3,1} > τ_{4,1} > τ_{5,1}. Furthermore, we assume in this example that the offline speed s^off_{i,j} of every job τ_{i,j} is the maximal processor speed s_max = 1.

Figure 1. Offline and actual schedules: (a) offline schedule; (b) actual schedule.

At run-time, whenever any job is dispatched to any processor P in the offline schedule, MORA also dispatches it to P in the actual one. That is, assuming the same set of tasks as in Figure 1(a), Figure 1(b) depicts the actual schedule that is produced if the actual execution times of the jobs τ_{1,1}, ..., τ_{5,1} are respectively 3, 2, 3, 2, 6. At any time t, we denote by rem_{i,j}(t) and rem^off_{i,j}(t) the worst-case remaining execution time of job τ_{i,j} at speed s_max in the actual and offline schedule, respectively. We assume that these quantities are updated at run-time for every active job τ_{i,j}. For instance, in Figure 1 at time t = 3, we have rem_{1,1}(3) = 0 whereas rem^off_{1,1}(3) = 3.

The α-queue

Since the job arrival times are unknown in the sporadic task model, computing and storing the entire offline schedule cannot be done before the system execution. Hence, our algorithm only stores and updates at run-time a sufficient part of the offline schedule. This kind of approach (i.e., using a dynamic data structure embodying a sufficient part of the offline schedule) was previously proposed in [5]. As in [5], we call this data structure the α-queue. The α-queue is a list that contains, at any time t, the worst-case remaining execution time rem^off_{i,j}(t) of every active job τ_{i,j} in the offline schedule.
This list is managed according to the following rules, which are widely inspired by [5].

α-Rule 1: At any time, the α-queue is sorted by decreasing order of the job priorities, with the m highest-priority jobs at the head of the queue.

α-Rule 2: Initially, the α-queue is empty.

α-Rule 3: Upon arrival of a job τ_{i,j} at time t, τ_{i,j} inserts its WCET C_i into the α-queue in the correct priority position. This happens only once for each arrival; there is no re-insertion at return from preemption.

α-Rule 4: As time elapses, the m fields rem^off_{i,j}(t) (if any) at the head of the α-queue are decreased at a rate proportional to the offline speeds s^off_{i,j}. Whenever one field reaches zero, that element is removed and the update continues, still with the m first elements (if any). Obviously, no update is performed when the α-queue is empty.

For the same reasons as those explained in [5], the following observation holds.

Observation 1 At any time t, the α-queue updated according to α-Rules 2-4 contains only the jobs that would be active at time t in the offline schedule. Moreover, the rem^off_{i,j}(t) fields contain the worst-case remaining execution time of every active job τ_{i,j} at time t in the offline schedule.

By consulting the α-queue at any time t, MORA is able to get the required information about any active job τ_{i,j} in the offline schedule, i.e., its worst-case remaining execution time rem^off_{i,j}(t), its next dispatching time disp_{i,j}(t) and the next job dispatching time nextdisp(P, t) on any processor P. Due to space limitations, we omit the implementation details of the procedures which compute disp_{i,j}(t) and nextdisp(P, t). Notice that, as explained in [5], the dynamic reduction of rem^off_{i,j}(t) from α-Rule 4 does not need to be performed at every clock cycle. Instead, for efficiency, we perform the reduction only before MORA modifies a speed, by taking into account the time elapsed since the last update. Formally, if Δt time units elapsed, the m fields at the head of the α-queue are updated as follows: rem^off_{i,j}(t + Δt) ← rem^off_{i,j}(t) − s^off_{i,j} · Δt. This approach relies on two facts. First, as we will see in the next section, the speed adjustment decisions are taken only at job arrival times (i.e., the execution speed of the arriving job is set to its offline speed), at job dispatching times in the offline schedule, and whenever a processor is about to get idle in the actual schedule. Hence, it is necessary to have an accurate α-queue only at these instants. Second, between these instants, each task is effectively executed non-preemptively in the actual schedule.
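The following minimal Python sketch implements the α-queue under α-Rules 1-4; the class layout and names are ours, and priorities are encoded so that a smaller value means a higher priority (e.g., an earlier absolute deadline under Global-EDF).

```python
class AlphaQueue:
    """Minimal sketch of the alpha-queue (our own layout): entries are
    [priority, rem_off, s_off], kept sorted with the highest priority
    (smallest value) first, as required by alpha-Rule 1."""

    def __init__(self, m):
        self.m = m    # number of processors
        self.q = []   # sorted entries (alpha-Rule 2: initially empty)

    def arrive(self, priority, wcet, s_off):
        # alpha-Rule 3: insert the WCET once, at the correct position.
        self.q.append([priority, wcet, s_off])
        self.q.sort(key=lambda e: e[0])

    def elapse(self, dt):
        # alpha-Rule 4: drain the (at most) m head fields at a rate
        # proportional to their offline speeds, removing emptied entries.
        while dt > 1e-12 and self.q:
            head = self.q[:self.m]
            step = min([dt] + [e[1] / e[2] for e in head])
            for e in head:
                e[1] -= e[2] * step
            self.q = [e for e in self.q if e[1] > 1e-12]
            dt -= step

# Example: two jobs on m = 1 processor; after 5 time units at s_off = 1,
# the first entry is exhausted and the second has started draining.
aq = AlphaQueue(m=1)
aq.arrive(priority=14, wcet=4.0, s_off=1.0)
aq.arrive(priority=15, wcet=6.0, s_off=1.0)
aq.elapse(5.0)
print(aq.q)  # -> [[15, 5.0, 1.0]]
```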
Principle of MORA

As explained in Section 3.1, whenever a job is dispatched in the offline schedule, it is also dispatched in the actual one. However, as we will see below, MORA profits from an early job completion by starting the execution of some other jobs earlier in the actual schedule than in the offline one. As a result, when a job (say τ_{k,ℓ}) is dispatched at time t in the offline schedule (and thus also in the actual one), its worst-case remaining execution time rem_{k,ℓ}(t) could be lower than rem^off_{k,ℓ}(t) if it was executed earlier in the actual schedule. For example, Figure 2 depicts the same set of tasks as in Figure 1. At time t = 2, τ_{2,1} completes in the actual schedule on processor P_2 and leaves 4 unused time units. These 4 time units are reclaimed by starting the execution of τ_{5,1} (we will see below how MORA selects the job which profits from the slack time) and therefore, when τ_{5,1} is dispatched to P_2 in the offline schedule at time t = 8, it is also dispatched to P_2 in the actual one and we have rem_{5,1}(8) < rem^off_{5,1}(8). The difference between these remaining execution times is called the earliness of the job, and we denote it by ε_{k,ℓ}(t) := rem^off_{k,ℓ}(t) − rem_{k,ℓ}(t). According to this earliness, whenever any job τ_{k,ℓ} is dispatched in both schedules, its execution speed s_{k,ℓ} may safely be reduced to a speed s̃_{k,ℓ} such that rem_{k,ℓ}(t) / s̃_{k,ℓ} = rem^off_{k,ℓ}(t) / s^off_{k,ℓ}. Indeed, under this speed s̃_{k,ℓ}, τ_{k,ℓ} would complete simultaneously in both schedules if it presents its WCET. This leads to the first rule of MORA.

Rule 1 Any job τ_{i,j} which is dispatched to any processor P at time t in the offline schedule is also dispatched to P at time t in the actual one, and its execution speed s_{i,j} is modified according to

s_{i,j} ← s^off_{i,j} · rem_{i,j}(t) / rem^off_{i,j}(t).   (1)

The main idea of MORA can be summarized as follows. When any job completes in the actual schedule without consuming its WCET, the unused time may be reclaimed by starting the execution of a waiting job earlier; and since this waiting job receives additional time for its execution, it can thereby reduce its execution speed. Using this concept, Figure 2 depicts an example of how MORA takes advantage of an early job completion. When τ_{2,1} completes at time t = 2 in the actual schedule, MORA selects a waiting job (here, τ_{5,1}) and executes it during the 4 time units left by τ_{2,1}. Since τ_{5,1} is granted to use 4 additional time units, MORA reduces its execution speed s_{5,1} so that its worst-case remaining execution time increases by 4 time units. The selected job is the one for which the resulting speed reduction leads to the highest energy saving. Formally, MORA selects a waiting job and decreases its execution speed as described by Rule 2, illustrated by the sketch below.

Rule 2 Whenever a processor P_r is about to get idle at time t:

Step 2. Compute the amount L_{i,j}(t) of additional time units that τ_{i,j} could reclaim in the actual schedule if it was dispatched at time t, i.e., L_{i,j}(t) := nextdisp(P_r, t) − t. In Figure 2, we have nextdisp(P_2, 2) = 6 and L_{i,j}(2) = 4 ∀τ_{i,j}.

Step 3. Compute what would be the resulting execution speed s′_{i,j} if τ_{i,j} was granted to use both its earliness and these L_{i,j}(t) additional time units, i.e., s′_{i,j} is computed so that rem_{i,j}(t) / s′_{i,j} = rem^off_{i,j}(t) / s^off_{i,j} + L_{i,j}(t).

Step 4. Estimate what would be the resulting execution speed s″_{i,j} if τ_{i,j} was not granted to use these L_{i,j}(t) additional time units. According to Rule 1, s_{i,j} will be modified to s″_{i,j} when τ_{i,j} is dispatched in the offline schedule (say at time t′). By assuming that τ_{i,j} will not be executed in the actual schedule until time t′, we will have rem_{i,j}(t′) = rem_{i,j}(t) and, from Expression 1, s″_{i,j} = s^off_{i,j} · rem_{i,j}(t) / rem^off_{i,j}(t′).

Step 5. Compute the energy saving ΔE_{i,j} between execution at speed s″_{i,j} and at speed s′_{i,j}: ΔE_{i,j} = E_i(rem_{i,j}(t)/s″_{i,j}, s″_{i,j}) − E_i(rem_{i,j}(t)/s′_{i,j}, s′_{i,j}).

Step 6. Dispatch the job τ_{k,ℓ} with the largest ΔE_{k,ℓ} to processor P_r. If ΔE_{i,j} ≤ 0 for all the waiting jobs, then dispatch the waiting job τ_{k,ℓ} (if any) with the highest priority in order to complete it earlier and to potentially increase the length of future slack times.

Step 7. If there is a selected job τ_{k,ℓ}, set its execution speed s_{k,ℓ} to the computed one. Otherwise, turn the processor P_r into the idle mode.

Notice that, if a processor is about to be idle in the actual schedule exactly when a job is dispatched in the offline one, only Rule 1 is applied.
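To summarize Steps 2-6 operationally, here is a Python sketch of the Rule 2 selection; the speed formulas are our reconstruction of the garbled expressions above, a job waiting in the offline schedule keeps a constant rem^off (so one value serves Steps 3 and 4), and the discrete-speed translation S is omitted for brevity.

```python
def rule2_select(waiting, t, nextdisp_Pr, E):
    """Sketch of Rule 2, Steps 2-6 (names and formulas are ours): pick the
    waiting job whose early dispatch on the idling processor P_r saves the
    most energy. 'waiting' maps a job id to (rem, rem_off, s_off, e_i);
    E(e_i, duration, speed) is the task energy model."""
    L = nextdisp_Pr - t            # Step 2: reclaimable time units on P_r
    best, best_gain = None, 0.0
    for job, (rem, rem_off, s_off, e_i) in waiting.items():
        s1 = rem / (rem_off / s_off + L)   # Step 3: with the L extra units
        s2 = s_off * rem / rem_off         # Step 4: without them (Rule 1)
        dE = E(e_i, rem / s2, s2) - E(e_i, rem / s1, s1)  # Step 5
        if dE > best_gain:                 # Step 6: largest saving wins
            best, best_gain = job, dE
    return best, best_gain   # best is None if no job yields a saving
```

For the Figure 2 example (τ_{5,1} with rem = rem^off = 6, s^off = 1 and L = 4), Step 3 gives s′ = 6/10 = 0.6, matching the described slowdown.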
Algorithm 1 presents the pseudocode of MORA, and we demonstrate its correctness in the following section.

Algorithm 1: MORA
1: Determine the offline speed s^off_{i,j} of every job τ_{i,j};
2: α-queue ← ∅;
At job arrival (say of τ_{i,j}) at time t:
3: Update the α-queue according to α-Rule 4;
4: Insert the value of C_i into the α-queue according to α-Rule 3;
5: Set s_{i,j} to s^off_{i,j};
Whenever any processor P_r is about to get idle at time t:
6: Update the α-queue according to α-Rule 4;
7: Apply Rule 2;
Whenever any job τ_{i,j} is dispatched to any processor P_r in the offline schedule at time t:
8: Update the α-queue according to α-Rule 4;
9: If a job τ_{k,ℓ} ≠ τ_{i,j} is running on P_r, then preempt τ_{k,ℓ};
10: Apply Rule 1;

Correctness of MORA

In this section, we formally prove that using MORA does not jeopardize the system schedulability.

Lemma 1 Let S be any preemptive and FJP global scheduling algorithm and let τ be any set of real-time tasks. Suppose that τ is scheduled by S while using MORA, and at time t during the system execution we have, ∀τ_{i,j} and ∀ 0 ≤ t′ ≤ t: rem_{i,j}(t′) ≤ rem^off_{i,j}(t′). Then, there is no t′ with 0 ≤ t′ ≤ t such that ∃τ_{i,j} running at time t′ in the offline schedule and waiting at time t′ in the actual one.

Proof The proof is obtained by contradiction. Suppose that at some time t′ such that 0 ≤ t′ ≤ t, ∃τ_{i,j} running in the offline schedule and waiting in the actual one. It implies that at time t′ in the offline schedule, there are at most (m − 1) jobs with a higher priority than τ_{i,j}, whereas there are at least m such jobs in the actual one. In other words, there is at least one job (say τ_{k,ℓ}) at time t′ with a higher priority than τ_{i,j}, such that τ_{k,ℓ} is completed in the offline schedule but not in the actual one. For this job, it holds that rem_{k,ℓ}(t′) > rem^off_{k,ℓ}(t′), leading to a contradiction with our hypothesis. The property follows.

Lemma 2 Let S be any preemptive and FJP global scheduling algorithm and let τ be any set of real-time tasks. Suppose that τ is scheduled by S while using MORA, and at time t during the system execution we have, ∀τ_{i,j} and ∀ 0 ≤ t′ ≤ t: rem_{i,j}(t′) ≤ rem^off_{i,j}(t′). Then, there is no t′ with 0 ≤ t′ ≤ t such that ∃τ_{i,j} running at time t′ in the offline schedule and such that the last speed modification of τ_{i,j} was performed according to Rule 2.

Proof The proof is obtained by contradiction. Suppose that at some time t′ with 0 ≤ t′ ≤ t, ∃τ_{i,j} running at time t′ in the offline schedule and such that the last modification of s_{i,j} was performed according to Rule 2. Let t_actual and t_off be the largest instants before time t′ at which τ_{i,j} was dispatched in the actual and offline schedule, respectively. Notice that the case where τ_{i,j} is not dispatched before time t′ in the actual schedule leads to a contradiction of Lemma 1. Therefore, only two cases may arise: (i) t_actual ≤ t_off; in this case s_{i,j} would have been modified at time t_off according to Rule 1, leading to a contradiction of our hypothesis; or (ii) t_actual > t_off, which leads to a contradiction of Lemma 1. The property follows.

Theorem 1 Let S be any preemptive and FJP global scheduling algorithm and let τ be any set of real-time tasks which is schedulable by S when every job τ_{i,j} is executed at its offline speed s^off_{i,j}. Then, every job deadline is still met when the system is scheduled by S while using MORA.

Proof The proof consists in showing that, while using MORA, ∀τ_{i,j} we have

rem_{i,j}(d_{i,j}) ≤ rem^off_{i,j}(d_{i,j}).   (2)

Indeed, since the offline schedule meets all the deadlines, we have rem^off_{i,j}(d_{i,j}) = 0 ∀τ_{i,j}. Therefore, having rem_{i,j}(d_{i,j}) ≤ rem^off_{i,j}(d_{i,j}) leads to rem_{i,j}(d_{i,j}) = 0 ∀τ_{i,j}, meaning that the actual schedule also meets all the deadlines.
Initially, at time t = 0, we obviously have rem_{i,j}(0) = rem^off_{i,j}(0) ∀τ_{i,j}. Now, let t > 0 be any instant and suppose that ∀τ_{i,j} and ∀ 0 ≤ t′ ≤ t we have rem_{i,j}(t′) ≤ rem^off_{i,j}(t′). We prove in the following that this yields

rem_{i,j}(next(t)) ≤ rem^off_{i,j}(next(t)) ∀τ_{i,j},   (3)

where next(t) denotes the earliest instant after time t such that one of the following events occurs: arrival of a job, deadline of a job, completion of a job in the actual schedule or in the offline schedule, dispatching of a job in the actual schedule or in the offline schedule. Obviously, if Inequality 3 holds then Inequality 2 also holds, since next(t) can denote every job deadline. From the definition of next(t), every processor in both schedules is either idle or executes one and only one job during any time interval [t, next(t)]. In other words, the state (waiting or running) of any active job in any schedule does not change during any time interval [t, next(t)]. As a result, the following relations hold at time t:
• For any waiting job τ_{i,j} in the actual schedule: rem_{i,j}(next(t)) = rem_{i,j}(t).   (4)
• For any waiting job τ_{i,j} in the offline schedule: rem^off_{i,j}(next(t)) = rem^off_{i,j}(t).   (5)
• For any running job τ_{i,j} in the actual schedule: rem_{i,j}(next(t)) = rem_{i,j}(t) − s_{i,j} · (next(t) − t).   (6)

Case 2. τ_{k,ℓ} is running at time t in the actual schedule and the last modification of s_{k,ℓ} was performed according to Rule 2. Therefore, we know from Lemma 2 that τ_{k,ℓ} is waiting at time t in the offline schedule, and since by hypothesis rem_{k,ℓ}(t) ≤ rem^off_{k,ℓ}(t), we know from Equalities 5 and 6 that rem_{k,ℓ}(next(t)) ≤ rem^off_{k,ℓ}(next(t)). The theorem follows.

Simulation results

In this section, we compare the effectiveness of MORA with other energy-aware algorithms. However, it is meaningful to compare MORA only with approaches that consider the same models of computation, and the paper most related to ours is [26], where two methods with the same task and platform model are proposed. However, these two methods do not take into account the application-specific parameter e_i of task τ_i. The first method proposed in [26] (that we denote by OFF hereafter) is an offline speed determination technique for Global-EDF which determines a unique and constant speed s^off for all the processors such that all the job deadlines are met under this speed. In our simulations, this OFF method is used by MORA in order to provide the offline speed s^off_{i,j} of every job τ_{i,j}, i.e., s^off is determined at line 1 of Algorithm 1 and s^off_{i,j} is set to s^off between lines 4 and 5. The second method proposed in [26] is the MOTE algorithm. At run-time, it anticipates the coming idle instants in the schedule and adjusts the speed of the processors accordingly, i.e., it reduces the processors speed in order to minimize the proportion of time during which the system is idle. Since this algorithm is also based on the concept of offline speeds, we consider that OFF is also used to provide them. Although MORA could also be compared with frame-based scheduling algorithms (since the sporadic task model is a generalization of the frame-based task model), we do not perform such comparisons in this paper. In our simulations, we schedule periodic implicit-deadline systems (i.e., ∀τ_i, T_i is here the exact inter-arrival delay between successive jobs and D_i = T_i). The energy consumption of each generated system is computed by simulating three methods: MOTE, MORA and MORAOTE, i.e., a combination of MOTE and MORA.
Indeed, since these algorithms do not interfere with each other, the MOTE rule can be applied on the offline speeds just before applying Rule 1 of MORA (i.e., between lines 9 and 10 of Algorithm 1). Although the implementation details of MORAOTE are omitted here due to space limitations, we will see in our simulation results that this combination always improves the provided energy savings. The consumptions provided by these three methods are compared with the consumption of the MAX method, where all the jobs are executed at the maximal processor speed s_max = 1. That is, we consider that the consumption of MAX is 100% and the consumptions of the other methods are normalized. In every simulation, we generated 100 sets of tasks with a total density δ_sum(τ) within [d, d + 0.05] where d = 0, 0.05, ..., 9.95, leading to a total of 20000 generated task sets for each simulation. The upper bound on δ_sum(τ) (i.e., 10) was chosen in order to cover a large number of systems while keeping the simulation time reasonable. For a given total density, task densities δ_i are uniformly generated within [0.01, D_max] until the total density δ_sum(τ) reaches the expected one (the upper bound D_max on the task densities will be discussed later). Notice that the number n of tasks is not fixed beforehand, i.e., it depends on this step that generates the task densities. Next, the other task parameters C_i, D_i and T_i are randomly generated according to their respective density δ_i. Finally, the application-specific parameters e_i are uniformly chosen in [0.8, 1.2], so that the consumption of the tasks varies between 80% and 120% of the power of the measured benchmark. Once a set of tasks is generated, it is executed during 100 hyper-periods (i.e., the least common multiple of the task periods) by the four methods MAX, MOTE, MORA and MORAOTE. This upper bound on the system execution time was chosen to ensure that every task generates at least 100 jobs (for the same reasons as those mentioned above). During each system execution, the actual execution time of every job τ_{i,j} is uniformly generated in [C_i/10, C_i]. This lower bound C_i/10 was chosen in order to reflect the fact that a job may take up to 10 times less than its WCET. Finally, for every generated task set τ, the number m of processors must be sufficient to schedule τ by MAX without missing any deadline. Hence, we set m to the lowest integer that passes one of the following EDF-schedulability tests: the density-based test [19], the load-based test [12] and the test denoted Test 13 in [8]. Simulations were performed while considering different scheduling algorithms (Global-EDF and Global-DM) and various processor models. However, due to space limitations, we only depict in this paper the results provided by Global-EDF on Intel XScale processors (outlined in Table 1).

Observation 2 The effectiveness of both MORA and MOTE mainly relies on the ratio m/n, but antagonistically.

This observation stems from the fact that MORA saves energy via the waiting jobs whereas MOTE profits from the absence of waiting jobs. When m/n tends to 1, jobs tend to never wait for a free processor and MOTE therefore provides significant energy savings, whereas the effectiveness of MORA is almost null. On the other hand, when m/n tends to 0, processors tend to consecutively execute several distinct jobs and jobs are often waiting.
As a result, MORA is often able to reclaim unused time and provides important energy savings, whereas the effectiveness of MOTE is negligible. According to our task generation process, we are not able to directly set the ratio m/n to any given value. However, the number m of processors is obtained by using a combination of sufficient schedulability tests, and the accuracy of these tests mainly relies on δ_max(τ). Basically, the ratio m/n increases as δ_max(τ) becomes larger and, since the generated task sets are more likely to have a large δ_max(τ) when the upper bound D_max is high, we can indirectly control the ratio m/n via D_max. The Y-axis of Figure 3 represents the ratio m/n. Figure 4 shows that MORA can save up to 32% of energy (on average) over the MAX method (for D_max = 0.1) and that the algorithm MORAOTE provides important energy savings for various values of D_max. Notice that a part of the energy savings is explained by the use of the OFF method, which leads MOTE and MORA to energy savings of about 10% when m/n tends to 0 and 1, respectively. Furthermore, although other processor models and scheduling algorithms led to different average consumptions, the evolution of the consumption with respect to D_max remains similar to that in Figure 4.

Conclusion

In this paper, we propose a slack reclamation scheme called MORA which reduces the energy consumption while scheduling a set of sporadic constrained-deadline tasks by a global, preemptive and FJP algorithm on a fixed number of DVFS-identical processors. According to [15] and to the best of our knowledge, we are the first to address such an approach in this context. The proposed algorithm MORA exploits early job completions at run-time by starting the execution of the next waiting jobs at a lower speed. Compared with other reclaiming algorithms such as the DRA proposed in [5], MORA takes into account the application-specific consumption profile of the tasks in order to improve the energy saving that it provides. Moreover, we proved that using MORA does not jeopardize the system schedulability, and we showed in our simulations that it can save up to 32% of energy (on average) compared to execution without using any energy-aware algorithm. In our future work, we aim to specialize MORA so that it takes into account more practical constraints such as preemption costs, migration costs and the time overheads due to multiple frequency switchings. Moreover, we aim to extend our processor model in order to handle the various idle and sleep modes of the processors and to take into account the energy costs due to frequency switching. In other future work, we also aim to propose a new multiprocessor reclamation scheme which anticipates the early completion of jobs for further reducing the CPU speed. This approach will be based on statistical information about tasks that is assumed to be known a priori. Some uniprocessor energy-aware algorithms already exploit this concept (see the AGR algorithm proposed in [5] for instance).
Methods to improve antibacterial properties of PEEK: A review

As a thermoplastic and bioinert polymer, polyether ether ketone (PEEK) serves as spine implants, femoral stems, cranial implants, and joint arthroplasty implants due to its mechanical properties resembling the cortical bone, chemical stability, and radiolucency. Although there are standards and antibiotic treatments for infection control during and after surgery, the infection risk is lowered but cannot be eliminated. The antibacterial properties of PEEK implants should be improved to provide better infection control. This review includes the strategies for enhancing the antibacterial properties of PEEK in four categories: immobilization of functional materials and functional groups, forming nanocomposites, changing surface topography, and coating with antibacterial material. The measuring methods of antibacterial properties in the current studies of PEEK are explained in detail under quantitative, qualitative, and in vivo methods. The mechanisms of bacterial inhibition by reactive oxygen species generation, contact killing, trap killing, and limited bacterial adhesion on hydrophobic surfaces are explained with corresponding antibacterial compounds or techniques. A prospective analysis of the current studies is given, and dual systems combining osteogenic and antibacterial agents immobilized on the surface of PEEK are found to be the promising solution for a better implant design.

Introduction

Improvements in establishing the standards to control infections in operating rooms and antibiotic treatment during the surgical procedure result in low infection rates. Despite all the precautions, infection is the second most common reason for the revision of orthopedic implants in total knee arthroplasty [1]. Besides the peri-surgical infection control procedures, implants with antibacterial properties have gained importance. Implant design with an antibacterial effect decreases the infection risk and accelerates osteoblast adhesion to the surface, directly impacting the operation's success.

The factors affecting bacterial adhesion and biofilm formation are defined as surface roughness, surface charge, surface free energy, and hydrophobicity. Surface roughness should be above 0.2 µm to promote bacterial adhesion, with a positive correlation. On the other hand, there is no correlation between bacterial adhesion and surface roughness below R_a < 0.2 µm [2]. Hydrophobicity is another factor that affects bacterial adhesion: as hydrophobicity increases, bacterial adhesion decreases [3]. Besides the surface properties, releasing materials with antibacterial effects enhances bacterial inhibition.

Polyether ether ketone (PEEK) is a semicrystalline thermoplastic polymer that is used in industrial applications such as aircraft [4] and turbine blades [5,6], missile connectors and radomes, cable insulation, acid pipelines, valve and pump parts, bearings [6], orthopedic and spine implants [7-9] and, recently, fuel cell membranes when it is sulfonated [10,11]. The distinctive properties that enable such applications can be listed as resistance to temperature, chemicals, radiation, and the environment [6]. Moreover, mechanical properties such as cut-through resistance, fatigue resistance, and abrasion resistance make PEEK a raw material candidate for challenging conditions [6].
PEEK commercially emerged as a biomaterial for implants in 1998 [12]. As a high-performance polymer, it has become an alternative for metal implant components in orthopedics [7] and trauma [13]. In terms of orthopedics, spine implants [8,9], femoral stems, cranial implants [14,15], and joint arthroplasty [16] are the products available on the market. The stability, biocompatibility, radiolucency, and mechanical properties similar to cortical bone make PEEK a good candidate for biomedical applications [17]. It can be formed easily into different shapes by using 3D printing techniques and can be customized [18]. Because of its anti-wear performance, it is used in knee and hip joint replacements [18]. It is a radiolucent polymer; therefore, defects can be observed easily by x-ray compared to metal implants [19].

Clinical studies that use PEEK as an implant material include the cervical area in cervical degenerative disc disease treatment, cranioplasty, and face reconstruction [20]. The comparison between pure PEEK cages and iliac crest autografts showed that PEEK cages serve as substitutes for fusion with an effective restoration of the physiological curvature and the intervertebral height, and a facilitated radiological follow-up [21]. Another clinical application was cranioplasty. Compared to titanium implants, the failure rate decreased from 25% to 12.5% after using pure PEEK cranial implants, according to the retrospective records of patients [22]. Face reconstruction with pure PEEK was applied to four patients, and ease of working and high durability were the main advantages [23]. Another clinical case included 3D printed PEEK grafts for mandibular defects. They provided primary security and decreased the stress shielding effect compared to metallic implants [24,25]. In the field of dentistry, PEEK-based dentures, crowns, and bridges were produced by additive manufacturing [26]. Artificial teeth and double-crown-retained dental prostheses were implemented in patients, and satisfactory results were reported in the clinical cases [26].

As a synthetic polymer, PEEK is suitable for extrusion/drawing-based techniques or additive manufacturing techniques based on powder bed fusion [24,27]. 3D printing makes PEEK a good candidate for the complex geometries of bone implants. Moreover, the properties essential for bone implants can be tailored by processing parameters and functional additives. In terms of the processing parameters of fused filament fabrication of PEEK, nozzle temperature and layer height significantly affected surface roughness, elastic modulus, and ultimate tensile strength [28]. Printing techniques such as selective laser sintering and fused deposition modeling enabled the addition of functional materials such as graphene nanoparticles, carbon nanotubes, graphene oxide, titanium dioxide, aluminum oxide, zirconium dioxide, or hydroxyapatite (HA) into the PEEK structure [24,29]. The composite materials showed good mechanical properties such as tensile strength, compressive strength, and elastic modulus, and increased osteogenic differentiation [24,29]. Composites of PEEK and hydroxyapatite (HA) with percentages of 20 and 40 were produced by fused filament fabrication for better osteogenic properties [30]. The composite increased the cell density and the expressions of RunX2, OCN, ALP, and Collagen Type 1 genes as cell differentiation indicators. In vivo experiments showed that the bone formation volume increased and the gap between the host bone and the scaffold decreased with HA addition [30].
Pure PEEK as a heart valve was simulated, showing high durability and smooth operation. Pure PEEK also became an alternative biomaterial to produce pumps for intracardiac left and right ventricular assistance [20]. The equivalent modulus (0.5-17.3 MPa) and tensile strength (0.7-8.3 MPa) of PEEK costal cartilage produced with the 3D printing method gave results similar to natural costal cartilage (elastic modulus: 8.7-12.6 MPa, tensile strength: 4-7 MPa) [31]. PEEK has −OH groups at its chain endings, resulting in a negative surface charge at pH 7 and an isoelectric point of about 4.5 [32].

Infection control is one factor that defines the success of the intervention. It dramatically impacts the revision, stability, or rejection of the implant by increasing the rates of morbidity, mortality, and medical costs [1]. Therefore, antibacterial properties are essential to prevent implant rejection. Biofilm formation on PEEK showed an exponential increase for bacterial colonies such as S. epidermidis, S. aureus, P. aeruginosa, and E. coli. On the other hand, it showed a linear increase for Enterococcus. PEEK showed the highest biofilm affinity compared to Ti (up to 6.7 times higher) and Si3N4 surfaces (up to 16 times higher for as-fired samples). Similarly, the number of live bacteria is the highest, up to 30-fold for PEEK compared to the Si3N4 as-fired surface [32]. The effects of production techniques on bacterial adhesion in dental applications were also analyzed. Among commercial PEEK dental products, there were no significant differences between injection molded samples and printed samples of PEEK in the adhesion of S. sanguinis. In contrast, pressed PEEK samples showed significantly higher adhesion [33]. On the other hand, another bacterium, S. mutans, showed no differences in adhesion based on the manufacturing technique [33].

Since 1985, studies on PEEK have shown an exponentially increasing trend. When the list of records retrieved from the Web of Science, Scopus, and PubMed indexes based on the keywords 'PEEK', 'Polyether ether ketone', 'Poly-ether-ether-ketone', 'Polyetheretherketone' and 'Poly ether ether ketone' is refined to those records related to the antibacterial properties of PEEK by searching the keywords 'bacteria' and 'microbial', only 3 records are found on the antibacterial properties of PEEK between the years 1996 and 2009. The number of studies started to increase in 2010, and 16 articles were published between 2010 and 2014. In the next five years (between 2015 and 2019), a ∼4.1-fold increase in articles is seen. There are 169 papers published between 2020 and September 2023. It is obvious that, as the share of PEEK as a raw material in the medical device industry grows, the studies related to its antibacterial properties will continue to increase.
The antibacterial properties of orthopedic implants are as essential as their mechanical and osteogenic properties, and PEEK has a high potential for use in orthopedic implants. The review articles published in the last four years (between 2020 and September 2023) included most of the modification methods, especially surface modifications, with detailed techniques grouped into physical, chemical, and biological modifications [34-39]. A few reviews included composite production as a method to increase the antibacterial properties of PEEK [34,39]. Moreover, most of the reviews presented the antibacterial properties under the topic of improvements of osseointegration [18,38]. In terms of clinical perspectives, the antibacterial properties of PEEK have mostly been studied in the scope of dental applications, since biofilm formation has been the main problem for implant integration [34,37,38,40-42]. The development of PEEK in bone tissue engineering for orthopedic surgery has also been covered in terms of clinical perspectives [43]. This review presents a comprehensive, up-to-date overview of the studies focused mainly on the improvements of the antibacterial properties of PEEK for biomedical applications. The improvements are discussed in terms of immobilization of antibacterial materials on the PEEK surface, coating antibacterial material on PEEK, production of composites, and changing the surface texture to obtain an antibacterial property. Testing methods and promising results are summarized to support future studies in the field and to carry the improvements one step further. This review is distinctive in presenting a broad perspective of measurement methods for testing the antibacterial properties of PEEK, and the mechanisms of bacterial inhibition achieved after modification of PEEK are discussed in depth. It covers all modification methods specific to antibacterial properties without limiting the application area in the biomedical field. The design requirements for the best antibacterial properties are discussed as a future perspective.

Measuring strategies of antibacterial activity of PEEK

There are different strategies to observe the antibacterial properties of PEEK. Qualitative methods with different imaging techniques, quantitative methods, and in vivo studies are applied. Table 1 summarizes the definitions of the methods applied to measure the antibacterial properties of PEEK. Although quantitative methods are found adequate to determine the antibacterial rate of a sample in most of the studies, the support of qualitative methods should be considered. For studies in which quantitative analysis is applied in the short term, observing the bacterial cell morphology gives the researcher insight into the later stages of bacterial growth. Therefore, it is better to support short-term quantitative analysis with a qualitative one to obtain valuable information about the time frame, which enables making comments about the race-for-the-surface concept. In this context, antibacterial longevity and kinetic tests deserve considerable attention due to the importance of timing among those methods. After implantation, osteoblasts and bacteria compete for attachment on the surface. If the antibacterial property lasts an adequate time for osteoblast attachment, implant rejection is prevented. S. aureus is the most tested bacterium for observing antibacterial properties. It has been a good choice, since S. aureus and S.
epidermidis form 66% of the pathogenic species among orthopedic clinical isolates of implant-related infections [1]. However, an antibacterial effect with a broad spectrum should be targeted to obtain an effective biomaterial. In terms of measuring methods, colony-forming unit calculation and colorimetric assays have given accurate and quantitative results. Moreover, measuring antibacterial longevity provides information about the loading amount of the antibacterial agent for release to support osteoblasts in the race for the surface.

The most commonly used methods are plate counting and measuring the zone of inhibition, due to their ease of application. The plate counting method enables researchers to discriminate between dead and live bacteria and between adherent and planktonic bacteria. Therefore, a more detailed analysis is obtained in colony forming units (CFU)/ml, a parameter used in the medical device industry to calculate the bioburden; hence, making comparisons with real cases is possible. Intracellular reactive oxygen species (ROS) and glutathione depletion assays are specific to antibacterial mechanisms, and their application area is limited. Phagocytic activation of macrophages is another technique to measure the antibacterial efficiency of PEEK samples. It is helpful regarding the body's reaction to the bacteria and is like an in vivo simulation. Pathogenic gene detection is another method that gives more specific detection of the pathogens and more accurate results in terms of the implant's safety compared to colorimetric assays and plate counting methods.

In vivo studies provide valuable information related to the rate of inflammation after implant replacement; they are essential to comment on the success of the implant. However, due to ethical considerations, high costs, and time limitations, in vivo studies have been applied less often than quantitative and qualitative methods.

Table 1. The definitions of the methods applied to measure the antibacterial properties of PEEK.

Quantitative methods:

- Bacterial attachment: The sterile samples are co-cultured with a specified amount of bacterial suspension for a specified time interval. After removing the non-adherent bacteria, ultrasonication is applied to detach the adherent bacteria. The bacterial colonies are counted after spreading on the agar plate. [3]

- Membrane permeability: The medium is refreshed with an addition of sodium dodecyl sulfate (SDS) (0.1% concentration) after incubation of samples with bacterial cells for a specified time. Optical density is measured at 570 nm. A lower optical density shows a strong ability to rupture the bacterial membrane and higher antibacterial properties. [52]

- Colorimetric assay: The assay is based on staining the cells attached to the sample after culturing with a specified concentration of bacteria. Crystal violet, formazan dyes, and Alamar Blue reagent were used in studies of PEEK. [44,53,54]

- Antibacterial kinetic test: The absorbance of bacterial suspensions at 600 nm is recorded after incubating the samples with a specified number of bacterial solutions at defined time intervals. [55]

- Phagocytic activity evaluation of macrophages: Macrophage cells are cultured with the bacterial solution, including fluorescently dyed bacterial cells. Bacterium-infected cells are plated in a different well plate after flow cytometry. Extracellular bacteria are killed by incubation with gentamicin. The intracellular bacteria are released by using 1% Triton. All collected bacteria are counted by the spread plate method.
- Glutathione depletion assay: The detection of glutathione depletion in the infectious environment indicates oxidative stress. Ellman's assay is used to detect the capability of glutathione breakage. The loss of glutathione percentage is calculated from the absorbances collected at 420 nm as Loss of Glutathione (%) = (A_negative control − A_sample) / A_negative control × 100%, where A is the absorbance of the corresponding sample. [57]

- Antibacterial longevity: The incubation period extends up to 28 d. The samples are collected at specified time points. Colony forming unit calculation is applied after one day of incubation. [58]

- Pathogenic gene detection by real-time polymerase chain reaction (PCR): Real-time PCR is applied to detect pathogenic gene expression. mRNA levels of the Fim gene for P. gingivalis and the Gtf gene for S. mutans were analyzed as pathogenic genes. [59]

Qualitative methods:

- Biofilm formation and bacterial attachment observed by SEM (Scanning Electron Microscopy): The bacteria are seeded on the samples at a specific density. The samples are incubated in tryptic soy broth for a specified time. The bacterial cells are fixed using 2.5% glutaraldehyde. [56]

In vivo methods:

- MRI and micro-CT were used to observe the tissue around the implant with a specified bacterial concentration.
- Staining with hematoxylin, eosin, or Giemsa investigates inflammatory tissue proliferation and colony distribution.
- A periosteum reaction against bacterial infection or an osteomyelitis model system is preferred to observe the infection.
- The tibia and femoral condyle have been the regions studied before. [50,58,61,62]
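As a concrete illustration of two quantitative read-outs that recur throughout this review, the short Python sketch below implements the glutathione-loss formula quoted in Table 1 and the standard CFU-based antibacterial rate; the numeric values in the usage lines are made up for illustration only.

```python
def glutathione_loss(a_negative_control, a_sample):
    """Loss of glutathione (%) from the absorbances at 420 nm (Ellman's
    assay), following the formula quoted in Table 1."""
    return (a_negative_control - a_sample) / a_negative_control * 100.0

def antibacterial_rate(cfu_control, cfu_sample):
    """Standard CFU-based antibacterial rate (%) against an unmodified
    control -- the figure reported as, e.g., '88.4% against S. aureus'."""
    return (cfu_control - cfu_sample) / cfu_control * 100.0

print(glutathione_loss(0.85, 0.30))      # ~64.7 % loss
print(antibacterial_rate(2.4e6, 2.8e5))  # ~88.3 %
```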
Those ions pass through the cell membrane and trigger oxidative stress by creating ROS. PEEK powder and nano-ZnO were melt-blended to obtain a composite as a filler material for artificial joints. The composite showed an antibacterial effect against E. coli (42%) and S. aureus (39%) when nano-ZnO was added at 7.5 wt.% [63]. The mechanism was explained by two routes: contact reaction and photocatalytic reaction. In the contact reaction mechanism, electrostatic interaction occurs between the positively charged zinc ions and the negatively charged bacterial membrane [63]. Another mechanism targets the proteases: after adding 7.5 wt.% to the PEEK composite, the zinc ion inactivates protease, an enzyme that breaks proteins down into peptides, and the physiological activity of the bacterial cells deteriorates [63]. The same antibacterial mechanism was described for PEEK implants coated with dexamethasone-loaded Zn- and Mg-containing metal-organic frameworks for bone graft applications [74]. Likewise, in the composite system, the zinc ions become integrated into the bacterial membrane by binding hydrophobic imidazole, amino, and carboxyl groups and destroy the cell membrane, resulting in leakage of the cell content in the coating system [63, 74]. According to the results, an inhibition rate of 100% against S. aureus and E. coli was obtained for the samples coated with Zn-Mg metal-organic frameworks [74]. Photocatalytic activity involves the activation of ROS through the interaction between ZnO and UV irradiation; the strong chemical activity generated by the ROS kills the bacteria [63].

Ag nanoparticles were proposed as another material acting through both contact reaction and ROS generation [64]. When Ag nanoparticles were decorated onto a PEEK/gelatin blend hydrogel, the antibacterial rate against S. aureus increased from 57.1% to 88.4%; similarly, the antibacterial rate against E. coli increased from 61.8% to 95.7% [64].

There are dual or ternary systems that combine antibiotics with ceramics or nanoparticles to enhance antibacterial properties or to obtain antibacterial and osteogenic properties simultaneously for biomedical applications [75, 76]. For example, in a gentamicin-loaded brushite system, only the gentamicin possessed antibacterial properties [75]. On the other hand, a synergistic effect formed when Ag nanoparticles and gentamicin sulfate (GS) were coated together on sulfonated PEEK [52]. Two mechanisms were proposed to explain the antibacterial property. The first is the ability of these molecules to bind biological elements like DNA, proteins, or cofactors, owing to their high affinity for amines, phosphates, and thiol groups, thereby disturbing cell metabolism; GS binds to the 30S subunit of the ribosome and impairs protein synthesis. Second, the synergistic effect of localized GS and Ag nanoparticles increases ROS production: they scavenge the intracellular reductase enzymes, boosting the catalytic process of ROS production. Ag nanoparticles and GS stimulate nicotinamide adenine dinucleotide oxidation, and hyperactivation of the electron transport chain leads to superoxide formation; ROS formation by the Fenton reaction is then triggered by the ferrous ions formed after superoxide damages the iron-sulfur clusters [52]. Direct interaction of Ag nanoparticles with bacteria results in cytoplasm leakage and bacteriolysis [77].
Cu²⁺ ion immobilization on PEEK via polydopamine (PDA) or magnetron sputtering was used to obtain antibacterial implants for biomedical applications [62, 65]. In one study, magnetron sputtering was used for immobilization, and the amount of sessile methicillin-resistant S. aureus (MRSA) decreased from 84.77 × 10⁵ CFU (PEEK) to 4.93 × 10⁵ CFU (PEEK with immobilized Cu²⁺, 49 µg/l) [62]. Cu²⁺ was also used for ROS-generation and contact-killing mechanisms against MRSA. Contact killing occurred through destruction of the cell membrane by Cu²⁺ ions via their electrostatic interaction with the bacterial membrane; Cu²⁺ ions inside the bacterial cell cause ROS generation by Fenton reactions, inhibition of RNA/DNA replication due to toxicity, protein denaturation, and DNA cleavage after binding to proteins and DNA [62, 65].

It was reported that Au nanoparticles coated on carbon fiber-reinforced PEEK composites by metal-organic chemical vapor deposition and physical vapor deposition, applied to obtain a functionalized implant surface, resulted in inhibition of S. aureus, S. epidermidis, Str. pyogenes, P. aeruginosa, and Ent. faecium [78]. Bacterial colonies grew to more than 700 for carbon fiber-reinforced PEEK, whereas the value decreased to 300-500 colonies after coating with Au [78]. As for Cu²⁺ and Ag⁺, the mechanisms for Au were explained by destruction of the cell membrane and ROS production following the change in membrane charge caused by the interaction of Au nanoparticles with the phospholipids of the bacterial cell wall [78]. Another nanomaterial causing bacterial inhibition through the same two mechanisms is n-TiO₂, whose antibacterial effect was explained by the creation of mechanical stress and the generation of ROS. PEEK and polyglycolic acid were blended with n-TiO₂ powders to obtain a scaffold for bone tissue engineering applications; by adding 5 wt.% of n-TiO₂, an antibacterial rate higher than 85% was obtained against S. aureus and E. coli [79]. The mechanical stress generated by the contact action deformed the bacterial cell membrane; moreover, ROS production was triggered by the reaction between water, oxygen, and n-TiO₂, and the resulting oxidative stress collapsed the bacterial antioxidant defense system [79].

Three mechanisms were proposed for the effect of antibiotics (vancomycin, gentamicin, ampicillin, amoxicillin, etc.): an increase in cell membrane permeability followed by loss of its function, prevention of replication, and inhibition of cell wall synthesis [80, 81]. Resveratrol, an antioxidant, has also been used for its antibacterial properties; the mechanism proposed was similar to that of antibiotics and was defined as an increase in cell permeability and inhibition of cell wall synthesis [82].

A single mechanism based on electrostatic interactions that penetrate the bacterial cell wall was proposed to explain the antibacterial effect of chitosan and peptide-based compounds. A solution including chitosan, hydroxyapatite (HA), and PEEK was applied on stainless steel (316L) by electrophoretic deposition to obtain a composite coating for biomedical applications [83]. Chitosan increased the bacteriostatic percentage to above 80% against S. aureus and E. coli [83]. When chitosan was coated directly onto PEEK by UV-induced graft polymerization and wet chemical methods, the number of E. coli was reduced by about 70%.
In those systems, the −NH₃⁺ groups in the structure of chitosan create osmotic imbalances when chitosan is in contact with the negatively charged bacterial cell wall. When cationic groups such as −NH₂ interact with the negative bacterial cell surface, a surface zeta potential difference is induced, causing damage to the bacterial cell membrane [84]. Another mechanism related to the electrostatic interactions is the deformation of the peptidoglycans in the cell wall; such damage increases the penetration/permeability of vital intracellular molecules such as potassium and low-molecular-weight proteins, leading to their eventual loss [83].

On the other hand, the mechanism of the antibacterial effect of the K-12 protein is related to its amino acid sequence: five positively charged amine groups in its sequence attract negatively charged bacterial cells and disrupt the bacterial cell wall via the charge effect [85-87]. The antibacterial peptide GL13K uses the same mechanism against S. aureus [88].

Other choices for destroying the bacterial cell wall are lactam and lysozyme. Lysozyme is a protein that breaks the β-1,4-glycosidic bond between N-acetylmuramic acid and N-acetylglucosamine, converting insoluble mucopolysaccharide into soluble glycopeptides in bacteria; destruction of the cell wall then occurs [89]. A PEEK surface was coated with PDA-modified nano-hydroxyapatite and lysozyme to obtain orthopedic implants with a functionalized surface, and antibacterial ratios of 98.7% and 96.1% against S. aureus and E. coli, respectively, were obtained through the aforementioned mechanism [89]. The bromine and chlorine content of lactam, in turn, inhibits the biofilm of S. mutans [90]. Lactam was used as a coating constituent combined with PEEK and dip-coated onto a glass-based substrate to obtain an oral implantology biomaterial resistant to biofilm formation; the spectrophotometric absorbance at 630 nm for biofilm formation decreased from 0.09 to 0.01 when lactam was added to the coating [90].

Different mechanisms were proposed for the considerable antibacterial effect of PEEK and nano-fluorohydroxyapatite composites [91]. The antibacterial effect of fluoride was explained by inhibition of the glycolytic enzyme enolase, of the proton-extruding ATPase, and of bacterial colonization and competition. Moreover, some enzymes such as acid phosphatase, pyrophosphatase, peroxidase, and catalase are affected by fluoride ions, and disintegration of the bacteria occurs. Another factor was the positive effect of nano-fluorohydroxyapatite on cell adhesion: if cell adhesion on the implant surface is higher than bacterial adhesion, bacterial colonization is prevented [91].

Black phosphorus inhibited S. aureus solely through the generation of ROS [92]. Similarly, the bacterial reduction with ZrO₂ nanoparticles on the surface stemmed from the formation of ROS, together with an alkaline effect in which hydroxyl groups forming around the ZrO₂ increased the local pH [93]. A change in the pH of the environment affects the bacteria; for example, bioglass 45S5 particles in a PEEK matrix increased the pH to a level at which bacteria could not survive [67]. Adding 0.02 wt.% GO into PEEK increased the antibacterial ratio from 82.10% to 99.56% against S. aureus, which has a single cell wall.
The antibacterial mechanism of GO was explained by ROS generation and the nano-blade effect: in terms of ROS generation, GO induced the formation of hydroxyl radicals, singlet molecular oxygen, and superoxide anions, which damaged DNA, proteins, and intracellular components [94].

Hydrophobicity of the surface

Bacteria are more likely to attach to hydrophilic surfaces, although this tendency changes with the bacterial type, as different bacteria have different surface tensions [95, 96]. A hydrophobic surface is counted as one reason for antibacterial behavior. PEEK is a hydrophobic polymer, and coating it with more hydrophobic materials increases its water contact angle. Ion doping can also alter hydrophobicity: for example, although the Ag ion is hydrophilic, it increased the water contact angle of PEEK coated with a TiO₂/polydimethylsiloxane (PDMS) hybrid structure, depending on the amount of doping [48]. As the Ag content in the structure was increased, the surface roughness was altered, which improved hydrophobicity. Similarly, coatings formed by adding bioglass 45S5 particles into the PEEK matrix showed antibacterial properties against E. coli due to the formation of needle-like structures on the surface that increased hydrophobicity [67].

Trap killing and nano-blade effect

Trap killing and the nano-blade effect are related to the physical interaction between bacteria and the material surface. For example, trap killing was observed when bacteria (size: ∼0.5 µm) were trapped in the porous surface (pore size: ∼1 µm) of Cu²⁺ ion-immobilized surfaces, and proliferation was restricted [62]. ZnO/GO coatings on PEEK have shown a trap-killing mechanism against F. nucleatum: the GO layers trapped the bacteria and prevented biofilm formation [68]. Another mechanism proposed for GO is the nano-blade effect, explained by bacterial membrane destruction after contact with the sharp edges of GO nanosheets [94].

The methods to improve the antibacterial properties of PEEK

As mentioned in the introduction, improving the antibacterial properties of PEEK affects the success of orthopedic operations. The methods to improve the antibacterial properties of PEEK are analyzed and discussed here under four categories based on the production method: (1) immobilization of functional materials and functional groups, (2) coating with an antibacterial material, (3) forming composites and nanocomposites, and (4) changing the surface topography.

Immobilization of functional materials and functional groups

The most widely used method to improve the antibacterial properties of PEEK is the immobilization of functional materials and functional groups on the surface of the material. The compounds immobilized onto the surface include antibacterial drugs (ampicillin and vancomycin), ions (Zn²⁺, Ag⁺, Cu²⁺, F⁻), peptides, functional groups such as SO₃H, NO, and −NH₂, and oxides (GO, ZrO₂).
The most common method for immobilization is dripping a solution of the antibacterial agent onto the functionalized or neat PEEK surface (figure 2). The dripping method is preferred for more precise control over the material; it is easy, and only a small amount of material is needed to observe the antibacterial properties. Similar methods, such as immersion and soaking, provide more surface area for the interaction between bacteria and antibacterial agents. Wet chemical methods are applied by immersion to immobilize functional materials onto PEEK [45]. For example, carboxyl-grafted PEEK was immersed in a 0.1 wt.% EDC (1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride) solution, and the pH of the solution was adjusted to 4.7 with acetic acid before immersion into a solution of chitosan dissolved in acetic acid [45]. Immersion and soaking are easy to apply; however, they require a volume of solution large enough for the substrate to be submerged. Table 2 summarizes the techniques for immobilizing antibacterial materials onto a PEEK substrate. Since PEEK is chemically inert, many studies use PDA to immobilize the functional compounds: first, the PEEK samples are coated with PDA as an adhesive layer, and then the molecule with the antibacterial effect is added [47, 65, 74, 97, 98]. Some studies first produced a composite of the functional material and PDA and then applied the coating [61]. Sulfonation is another technique used to attach molecules to chemically inert PEEK, and it is the first modification applied in plenty of studies. After sulfonation, SO₃H groups are attached to the surface of PEEK, which has an inhibitory effect on S. aureus and E. coli; depending on the process parameters and the amount of sulfur attached, the antibacterial rate changes [50, 99].

According to table 2, Ag⁺ is a powerful antibacterial agent and was shown to increase antibacterial rates in 90% of all studies; it is therefore a widely used ion for adding antibacterial properties to biomaterials. However, the amount of the ion should be adjusted to preserve biocompatibility. The antibacterial properties of the same compound can also be tuned by combining it with PDA and applying phototherapy. For example, GO had only a moderate antibacterial rate when coated alone on a PEEK substrate; combined with PDA and phototherapy, it produced O₂- and OH-derived reactive species, generating much ROS [109].

In some studies, different types of bacteria responded differently; most studies tested both Gram-negative and Gram-positive bacteria. In one study, GO showed moderate antibacterial properties against hydrophobic E. coli but did not affect hydrophilic S. aureus [106]. ZrO₂ was another compound that gave different results for different types of bacteria: a moderate effect was detected for S. aureus, whereas no antibacterial effect was seen for E. coli due to its stronger resistance to ZrO₂ nanoparticles. E. coli is a Gram-negative bacterium with an effective barrier (a complex cell membrane including lipopolysaccharide molecules) [93]. When the antibacterial compound was a quaternary ammonium salt, the length of its alkyl chain caused more effective inhibition of S. aureus than of E. coli, since S. aureus is a Gram-positive bacterium [103]. Vancomycin was also more effective against S. aureus than E. coli [76].
A difference between the antibacterial rates against planktonic and adherent bacteria was observed in some studies. A sulfonation process followed by the immobilization of Cu nanoparticles on the PEEK surface gave a higher antibacterial rate against adherent MRSA than against planktonic MRSA. This was explained by the synergistic effect of the trap-killing and contact-killing mechanisms: since the adherent bacteria were trapped, they had more direct contact with the surface and were more affected by the Cu nanoparticles [62].

Methods of immobilizing functional groups and changing surface texture concurrently

The cold plasma method changed the surface texture by increasing the surface roughness and adding nitrogen-containing groups to the structure [60]. The nitrogen-containing groups increased the positive charges on the PEEK surface, whereas the bacterial membrane is negatively charged; increased bacterial attachment on the PEEK surface due to electrostatic interactions would therefore be expected. However, since the surface texture was changed, and since bacterial attachment depends on many factors such as topography, chemical composition, and hydrophilicity, an increase (∼26%) in antibacterial efficiency was observed after cold plasma treatment with N₂ [60]. The inhibition of bacterial growth in the presence of nitrogen-containing groups was the reason for the increase in the antibacterial rate [60, 111, 112].

Plasma immersion ion implantation is another technique to change the surface texture and composition of PEEK. In a study in which ZrO₂ ions were implanted by plasma immersion ion implantation, the antibacterial reduction of S. aureus was detected as 62.7% [93]. The reduction was explained by ROS formation due to the ZrO₂ nanoparticles on the surface and by the alkaline effect arising from hydroxyl groups forming around the ZrO₂ and increasing the local pH [93].

Sulfonation is a standard method for obtaining a porous structure on PEEK; in addition, this method attaches SO₃H groups to the surface. The SO₃H groups decreased the bacterial viability on the surface depending on the sulfur content [50]. A composite of nano magnesium silicate and PEEK showed no antibacterial property against E. coli and S. aureus; on the other hand, the antibacterial rate increased to 98.29% and 99.76%, respectively, within 24 h after sulfonation [113]. According to the study of Ouyang et al, hydrothermal treatment of sulfonated PEEK samples decreased the sulfur content and the antibacterial efficiency against E. coli [99]: for example, when the sulfur content was decreased from 13.47 wt.% to 0.74 wt.%, the antibacterial efficiency decreased from 100% to 24% [99]. On the other hand, no change in the antibacterial rate was observed for S. aureus; it stayed at 100% [99]. The difference between the antibacterial rates for E. coli and S. aureus stemmed from the different pH endurance of the two bacteria: since E. coli produces gaseous ammonia during the conversion of Gln to Glu, protons are neutralized in the acidic environment and the intracellular pH is increased [99], whereas S. aureus can endure a pH between 4.0 and 7.0 [99]. Another reason for the different antibacterial efficiency was the morphology of the two bacteria: E. coli is rod-shaped with a diameter of about 1 µm and cannot be trapped on the porous surface of sulfonated PEEK [99], whereas S. aureus has a spherical shape with a diameter of 0.5 µm and is trapped easily in the pores of the sulfonated samples [99].
Ion immobilization

Some elements, such as Ag, Cu, and Zn, show antibacterial effects when coated, mixed to form a PEEK nanocomposite, or immobilized on the surface of PEEK. Zn-doped samples increased the antibacterial efficiency by disrupting bacterial nucleic acids and DNA and RNA synthesis; zinc penetrates the cell wall and reacts with −SH and −NH₂ groups [105]. The antibacterial effect of Cu²⁺ ions was studied on sulfonated PEEK samples, with trap killing and contact killing as the mechanisms of bacterial inhibition [62]. In vivo studies showed a 97% improvement in antibacterial effectiveness on incorporating Cu²⁺ ions at 1.40 at.% [62]. Fluoride (F) is another element immobilized on the PEEK surface; the argon plasma immersion ion implantation technique was applied to increase its immobilization efficiency. The effect of F on the antibacterial property of PEEK was explained by the inhibition of proton-translocating F-ATPases [107].

A study on the antibacterial ability of ZIF-8 showed that it had an excellent loading capacity for Ag⁺ ions with steady release behavior. Moreover, gradual degradation of the ZIF-8 occurred in an aqueous environment due to hydration-deprotonation, which released Zn²⁺ ions and improved the antibacterial property [108].

Graphene oxide immobilization

GO is a carbon-based compound with functional groups such as hydroxyl, epoxy, carboxyl, carbonyl, phenol, lactone, and quinone [114]. GO immobilization on sulfonated PEEK increased the antibacterial effect against E. coli [106]. The proposed mechanisms affecting E. coli were explained in terms of acid endurance, shape, membrane structure, and oxidative stress caused by ROS [106]. First, GO neutralizes the surface of PEEK, which decreases the number of E. coli, since E. coli has mechanisms for acid resistance. Second, E. coli has a rod-like shape with a diameter of about 1 µm, which the sharp edges of GO can easily deform, and its thin peptidoglycan membrane decreases its survival rate. The other mechanism is related to ROS: GO generates ROS that cause oxidative stress, and oxidative stress results in rupture, mutation, and changes in the thermal stability of DNA [106].

Drug immobilization

Immobilizing drugs onto coatings on PEEK or sulfonated PEEK samples is another technique to confer antibacterial properties. Moxifloxacin hydrochloride is a drug that inhibits DNA gyrase and topoisomerase IV and blocks DNA replication [50]; although PEEK itself had no antibacterial effect, the release of moxifloxacin hydrochloride coated on PEEK produced an antibacterial effect of about 100% [50]. Vancomycin and ampicillin are antibiotics used to improve the antibacterial properties of PEEK: vancomycin improved the antibacterial results for S. aureus, whereas ampicillin was a good inhibitor of E. coli [76]. An antimicrobial peptide, recombinant mouse beta-defensin-14, was also used to improve the antibacterial property of PEEK. It has a broad spectrum of antibiotic activity, encompassing Gram-positive and Gram-negative bacteria, fungi, viruses, and multi-drug-resistant bacteria, and it avoids immune system responses because of its biological origin [58]. Hinokitiol is another compound of natural origin, with antiviral, antibacterial, antifungal, antitumor, and insecticidal properties and no cytotoxic effects [102]. Hinokitiol loaded on PEEK showed excellent antibacterial properties due to its slow release; its antibacterial mechanism was explained by the degeneration of proteins in the bacterial membrane [102].
Minocycline (Mino) is another antibacterial drug; it breaks the association between aminoacyl-tRNA and the bacterial ribosome and disintegrates bacterial cells [47]. Minocycline-loaded liposomes were immobilized on the PEEK surface to increase the antibacterial effect against S. mutans and P. gingivalis [47].

The effect of berberine as an antibacterial agent was investigated in vivo [61]. The berberine was adsorbed on sulfonated PEEK functionalized with osthole nanoparticles (an extract of cnidium fruit that supports osteogenesis). The results showed that, without berberine, severe edema was observed around the PEEK implants at the end of the second week [61], and severe osteomyelitis was detected at the end of the fifth week [61]. A high degree of inflammatory hyperplastic tissue formed around the femoral condyle, and displacement of the implants occurred [61]. On the other hand, no inflammatory response was detected for the berberine-containing samples, which supported collagen formation and were tightly wrapped with bone collagen [61].

Genistein is a phytoestrogen molecule extracted from soy products. It is a good antioxidant, anti-inflammatory, antimicrobial, and anti-carcinogenic compound, showing good biocompatibility. In one study, sulfonated composites of 40 vol.% Ta and PEEK loaded with genistein showed a bacteriostatic rate above 97%, whereas unloaded samples showed bacteriostatic rates of 68.37% for S. aureus and 61.02% for E. coli [100]. Similarly, genistein loading on tantalum pentoxide/PEEK composites increased the antibacterial rate after sulfonation from 90.27% to 100% for E. coli and from 88.27% to 100% for S. aureus [115]. A combination of vancomycin and ampicillin salts was also loaded on PEEK to obtain antibacterial properties [76].

Immobilization with graft polymerization

The UV-induced graft polymerization technique was used to introduce carboxylic groups onto the surface of PEEK [45], with acrylic acid as the source of the functional groups [45]. The amino groups of chitosan were attached by wet chemical methods after the carboxyl groups were formed; presenting carboxyl groups on the surface of PEEK increased the chitosan grafting degree by 1.4% [45]. Polystyrene sulfonate was another compound immobilized onto PEEK by UV; after one day of incubation, a significant decrease was observed against E. coli, S. aureus, and P. gingivalis for the grafted samples [116].

Another study grafted PEG as an antifouling agent and quaternized poly(dimethylaminoethyl acrylate) as a bactericide onto the PEEK surface. As the molecular weight of the PEG was increased, the hydrophilicity increased, whereas protein adsorption decreased; these changes resulted in the inhibition of cell attachment [104]. The synergistic effect of the bactericidal and antifouling parts was only achieved after grafting PEG with Mn = 2000 g mol⁻¹, since the short quaternized poly(dimethylaminoethyl acrylate) chains then remained exposed to the bacterial suspension [104]. PEG of higher molecular weight caused steric hindrance, and the interaction between the bacterial wall and the quaternized poly(dimethylaminoethyl acrylate) was inhibited by the larger PEG chains [104].
Coating with an antibacterial material

Coating PEEK with an antibacterial material is the second most studied method to improve the antibacterial property of PEEK. Coating techniques can be separated into two groups: self-assembly and classical methods. The details of the coating methods applied to enhance the antibacterial properties of PEEK are summarized in table 3, with their advantages and disadvantages. Examples of layer-by-layer self-assembly coatings are Zn/chitosan, Ag/alginate, and brushite/gentamicin surfaces (figure 3); in the layer-by-layer self-assembly technique, the generation of charges between the solution and the coated layer drives the deposition of the next layer [75, 117]. Surface modifications such as sulfonation, or the application of an adhesive coating with polyethyleneimine [118] and polystyrene sulfonate, were used to overcome the chemical inertness of the PEEK surface. Polydopamine is another compound widely used to form an adhesive interlayer, owing to its large number of free catechol groups [89]. Classical methods for coating PEEK are immersion, dip coating, precipitation, vapor deposition, magnetron sputtering, and radio-frequency co-sputtering [44, 53, 55, 56, 119-126]. There are examples of coating with a single element, such as Ag⁺, Cu²⁺, Mg²⁺, red selenium, and gray selenium, and of dual systems, such as hydroxyapatite combined with drugs, GelMA, or sodium butyrate, and hydrogels combined with bone-forming peptides and chlorogenic acid (figure 3). Since immersion and dip coating are easy and cost-effective, they are frequently preferred for coating.

The antibacterial studies of PEEK coated with different techniques and materials are listed in table 4. According to table 4, coating systems composed of two or more agents are more prominent; the materials in those systems have been chosen to increase osteogenic and antibacterial abilities simultaneously. In most of the studies using a coating process to increase the antibacterial response of PEEK, osteogenic properties and biocompatibility were also investigated [52, 55, 56, 59, 68, 80, 119, 128, 133, 135]. The coating of ZnO/Ag nanoparticles on PEEK showed elongated and overlapping lamellipodia in MG-63 cells, an indicator of healthy cells; compared to Ag-decorated samples, enhanced cell spreading, proliferation, alkaline phosphatase activity, and osteogenesis-related gene expression were obtained [117]. Layer-by-layer coated brushite/gentamicin sulfate on PEEK gave acceptable in vitro biocompatibility results on MG-63 cells, and osseointegration ability in bone healing was detected in in vivo experiments for the samples with six layers [75]. In another study, a coating system incorporating Cu into a PDA adhesive layer was produced; the osteogenic activity of rBMSCs was measured, and angiogenesis was measured using a Matrigel tube-forming assay with HUVEC cells, with both parameters giving superior results [126]. Moreover, illumination-sensitive and pH-sensitive systems target antibacterial and osteogenic abilities at the same time. Systems with PDA-wrapped zeolitic imidazolate framework-8, CuFe₂O₄/GO, and black tantalic oxide resulted in hyperthermia and ROS generation under 808 nm NIR illumination [57, 120, 135]. A pH-responsive system included copper citrate, whose copper release increased as the pH of the environment decreased.
The released copper elevated the intracellular copper content, producing ROS and damaging proteins [137]. In another study, Ag nanoparticles were trapped in PDA layers; as the pH decreased upon bacterial infection, Ag⁺ ion release started [138].

Bioactivity is another parameter that indicates the success of an implant: the formation of an apatite structure on the surface of bone implants increases the sites available for cells to attach, proliferate, and differentiate, and for protein adsorption. Multi-layer coatings with a bioglass 45S5/PEEK composite as the lower layer and a silver nanocluster/silica composite as the upper layer showed the formation of apatite-like crystals due to the bioglass incorporation [125]. In another study, nanoporous magnesium calcium silicate was coated on PEEK with a melting method; compared with uncoated PEEK, better apatite mineralization in simulated body fluid was observed for the coated samples [134]. A coating system of PDA, nano-hydroxyapatite, and lysozyme on PEEK showed apatite-like deposits in simulated body fluid, with the phenol groups in PDA contributing to the biomineralization [89].

Wear properties gain importance especially in artificial joint implants, and coating the implant is one way to add wear resistance to the material. In a study in which hard TaN-(Ag, Cu) nanocomposite films were applied on PEEK, the frictional forces and wear rate decreased after annealing, since the Ag and Cu particles acted as solid lubricants [122].

The adhesion properties of the coating material should also be investigated for coated biomaterials, because cracks formed in the coating material are potential sites for bacterial growth. In one study, a multi-layer coating composed of Ag nanoparticles, silica, bioglass 45S5, and PEEK was produced by radio-frequency co-sputtering with a sputtering time of 15 min; good adhesion properties were obtained, with second critical load values between 12.82 and 17.60 N [125].

In recent studies, two or more constituents have been added to the coating material to enhance various properties of the PEEK substrate. Adding an antibacterial ion such as Ag⁺ was a common technique for this purpose, whereas some studies used more than one antibacterial constituent. For example, a study aiming to coat PEEK with Ag-doped trimagnesium phosphate hydrate showed antibacterial properties depending on the Ag concentration; without Ag, no antibacterial property was reported [139]. Similarly, in the study that combined gelatin and vancomycin as a composite coating on PEEK, the gelatin increased the number of colonies without vancomycin [80].

Among the studies, Ag-containing coatings gave highly effective antibacterial results [3, 52, 117]. Tantalic oxide was used in both coating and composite forms; its antibacterial properties were less pronounced in the composite form and without NIR, whereas in coating form black tantalic oxide showed an antibacterial ratio higher than 90% under NIR illumination [120].
On the other hand, white tantalic oxide stayed at a ratio lower than 50% [120]. Black tantalic oxide is the processed version of white tantalic oxide: structural defects and oxygen vacancies were produced in white tantalic oxide by the magnesium thermal reduction method. Unlike white tantalic oxide, black tantalic oxide has a strong photothermal effect, increasing the local temperature under NIR and killing the bacteria [120]. Compared to other metallic nanoparticles, selenium gave less effective results [53]: biofilm formation could not be prevented entirely and increased over three days; however, the density of P. aeruginosa decreased on grey and red selenium-coated samples compared to uncoated PEEK [53]. Loading antibacterial drugs or compounds into the coating has been an effective way to increase the antibacterial properties of PEEK; for example, the slow release of curcumin (65.17% of the curcumin was released in 336 h) increased the reduction of E. coli from 54.87% to 98.59% and of S. aureus from 48.71% to 99.62% [134].

The response to the coating material also changed according to the type of bacteria. ZnO/GO coatings on PEEK gave higher antibacterial rates against the initial colonizer S. sanguinis (∼97%) and the late colonizer P. gingivalis (∼89%) [68], whereas the middle-stage colonizer F. nucleatum was affected less than the initial- and late-stage colonizers. It is therefore important to study the effects on S. sanguinis and P. gingivalis in biomaterials studies for dental applications [68].

Coating with ions

Silver, copper, and zinc are antibacterial ions that are coated directly or incorporated into the coating material. Silver-containing coatings showed cytotoxicity when the percentage of silver in the compound was 0.21 wt.% [140]. The thickness of the Ag nanoparticle coating is another important parameter for the antibacterial property: a 3 nm coating showed 99.4% and 99.7% antibacterial rates against S. mutans and S. aureus, respectively [3], and when the thickness was increased to 9 nm, a 100% antibacterial effect against the two species was observed [3]. The number of adhered colonies was consistent with the antibacterial rate results [3]. Ag⁺ doping is another technique to provide Ag⁺ ion release. An Ag⁺-doped TiO₂/polydimethylsiloxane (PDMS) hybrid coating was produced for antibacterial purposes [48]; as the Ag⁺ doping amount in the coating was increased, the optical density, the indicator of the S. aureus concentration, decreased [48]. The release of Ag⁺ ions depended on the TiO₂/PDMS content, and total inhibition was seen at 38.4 µl of Ag [48]. S. epidermidis was more sensitive than S. aureus: total inhibition occurred at the same Ag concentration (38.4 µl), but the optical density values decreased dramatically even at a doping volume of 2 µl [48].
Sustained release was obtained in that study, with between 58% and 65% of the Ag released by the end of the 1000th hour; when the loading amount was reduced tenfold, the released amount decreased to between 7% and 10%, and the initial burst was seen in the first 150 h [48]. In another study, PDA was used as a carrier material for Ag coated on PEEK [97]. A release profile with an Ag amount between 5% and 10% was seen over 20 d. Additionally, this study showed that the initial burst of Ag inhibited MC3T3-E1 cells in the first 3 d [97]; therefore, the amount added should be adjusted so that it does not cause cytotoxicity. The carrier material and the antibacterial agent also define the release profile. In one study, cefuroxime sodium salt antibiotic was loaded on hydroxyapatite, and a burst release resulted in a released amount between 86.1% and 96% in 24 h [141]. The high porosity of the carrier and the weak bonds between the antibacterial agent and the carrier resulted in a high initial burst. Similarly, a coating of gentamicin sulfate-loaded brushite on PEEK lost all its gentamicin sulfate before 72 h [75]. Moreover, the form of the antibacterial agent is important: the salt form of cefuroxime was not chemically stable and was suggested for use in local delivery systems [141].

In another study [44], the antibacterial effect of Cu nanoparticles was screened by a C:F thin film, a thin layer used to stabilize the Cu nanoparticles on the PEEK surface. Without the C:F thin film, the Cu layer was washed away; on the other hand, a 40 nm film thickness resulted in a merely bacteriostatic surface, without reduction of bacterial growth [44]. The optimum C:F film thickness was 10 nm, which enabled both stabilization of the Cu nanoparticles and water penetration into the Cu layer [44]. The water penetration led to the dissolution of Cu²⁺ ions and the initiation of the antibacterial effect.

Mg²⁺ is another ion used to obtain antibacterial PEEK. Highly pure Mg was coated on PEEK by a vapor deposition technique; over 21 d, the Mg coating degraded with a 99% antibacterial rate against S. aureus. The mechanism of bacterial inhibition was explained by the formation of a strongly alkaline environment through the release of Mg²⁺ ions [121].

The processing parameters also affect the antibacterial property. In a study investigating dual-ion incorporation as a coating material, Ag and Cu particles were nucleated and grown in a TaN matrix using rapid thermal annealing at 200 °C [122]. As the annealing time increased to 8 min, the antibacterial efficiency increased from ∼55% to 70% for S. aureus and from ∼60% to 80% for E. coli (at a deposition of 2 at.%) [122]. The increased number of Ag and Cu particles on the surface with annealing time explained the improved antibacterial property; Ag⁺ and Cu²⁺ ions destroyed the bacterial membrane. It was also inferred that E. coli was affected more than S. aureus due to its high sensitivity to Ag⁺ ions, which appeared on the surface more quickly than Cu²⁺ ions [122]. In another system, dual-metal organic frameworks were used as an antibacterial coating for PEEK [74]. Besides their drug-carrier ability, they released metal ions (Zn²⁺ and Mg²⁺) and 2,5-dihydroxyterephthalic acid, increasing the pH and forming an alkaline environment on the surface. The pH increase reached 8.4 in 24 h, which inhibited bacteria with about 100% efficiency [74].
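Release profiles like the Ag curves above, an initial burst within the first 150 h followed by a slow approach to a plateau, are often summarized by fitting a simple empirical model to the cumulative-release data. The sketch below fits a burst-plus-first-order model, M(t)/M_loaded = F_b + (M_inf − F_b)(1 − e^(−kt)); the model choice and the data points are illustrative assumptions, not measurements from the cited studies.

```python
# Hedged sketch: fit an empirical burst + first-order release model to
# cumulative-release data. Model and data points are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def release(t, f_burst, m_inf, k):
    # Cumulative fraction released: immediate burst f_burst, then a
    # first-order approach toward the maximum releasable fraction m_inf.
    return f_burst + (m_inf - f_burst) * (1.0 - np.exp(-k * t))

# Hypothetical cumulative Ag release (fraction of loaded Ag) vs time (h),
# loosely mimicking a burst within 150 h and a ~0.6 plateau near 1000 h.
t_h  = np.array([10.0, 50.0, 150.0, 300.0, 600.0, 1000.0])
frac = np.array([0.15, 0.35, 0.45, 0.52, 0.58, 0.61])

popt, _ = curve_fit(release, t_h, frac, p0=[0.1, 0.6, 0.01],
                    bounds=(0.0, [1.0, 1.0, 1.0]))
f_burst, m_inf, k = popt
print(f"burst fraction ~ {f_burst:.2f}, plateau ~ {m_inf:.2f}, k ~ {k:.4f} 1/h")
print(f"predicted cumulative release at 2000 h ~ {release(2000.0, *popt):.2f}")
```

A fit of this kind makes the comparison between carriers quantitative: a high fitted burst fraction flags the cytotoxicity risk noted above, while the rate constant indicates how long the antibacterial action can be sustained.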
Coating with a drug carrier material

The antibacterial effect can also be provided by coating with a carrier of an antibacterial drug. For example, antibiotics can be loaded on coating materials such as HA. In one study [141], HA showed no antibacterial properties compared to the control unless the cefuroxime sodium salt antibiotic was loaded. Another antibiotic coated on PEEK was GS, a broad-spectrum antibiotic with low toxicity for the human body. GS was coated on PEEK by mixing with a ceramic material, brushite [75]. According to fluorescence spectrophotometry at 340 nm, the amounts of GS released from the coating system in simulated body fluid were >10 µg, >60 µg, and >100 µg. These release amounts resulted in 100% inhibition of S. aureus up to day 5, and the antibacterial coating started to lose its activity after the fourth day [75]. Silk was another alternative for obtaining sustained release of gentamicin: when the combination was coated on SrCO₃-PDA-modified PEEK, it significantly improved the antibacterial effect of PEEK and sulfonated PEEK [142]. Nano magnesium silicate was an alternative bioactive ceramic material used to load a natural antibacterial compound, curcumin [134]. Similarly, tobramycin, an antibacterial drug, was used to confer antibacterial properties on PEEK samples; its antibacterial mechanism was explained as preventing translation from mRNA into protein. Modifying the PEEK samples by combining tobramycin with GelMA increased osteogenicity, and the tobramycin screened the bacterial viability effect of GelMA [55].

Butyrate, a fermentation product of the gut microbiota, possesses anti-inflammatory, antimicrobial, and immunomodulatory properties; sodium butyrate increases the phagocytic activity of macrophages by improving the release of ROS [56]. Chlorogenic acid is another compound with antioxidant and anti-inflammatory effects; it has been used in a sodium alginate hydrogel system in combination with bone-forming peptides [123]. Coating PEEK with a crosslinked benzophenone-substituted hydrogel was effective up to a 5-log reduction against MRSA and E. coli, so biofilm formation was significantly reduced; benzophenone is used to design anti-adhesive and antimicrobial surfaces with improved cell viability [143]. In another study, a traditional Chinese medicine-inspired compound, total alkaloids from Semen Strychnine (TASS), was used to enhance antibacterial properties; the bacterial inhibition rate against S. aureus and E. coli increased with TASS content. According to the in vivo studies, TASS also had anti-inflammatory and analgesic effects [144]. A mouse ear swelling test was applied to detect the anti-inflammatory effect by weighing the control group and samples after treatment [145]. The analgesic effect was observed by the formalin test: formalin solution was injected into drug-treated mice, and their reactions were graded after 10, 30, 60, and 90 min [145].

Polyvinyl alcohol (PVA) and PLGA were two polymers used as drug carriers coated on sulfonated PEEK by cyclic freezing and thawing. The initial burst of vancomycin hydrochloride-loaded PVA increased antibacterial activity, whereas the sustained release of dexamethasone-loaded PLGA enhanced osseointegration [146]. In another study, PLGA was used to trap vancomycin and ampicillin salts, giving S. aureus inhibition of over 40% for 30 d [147].
Coating with oxides

Coating with oxides has gained importance in photothermal therapy; such coatings show antibacterial properties under near-infrared (NIR) irradiation. Black bioactive materials became popular for photothermal therapy because they can transform the energy of near-infrared light into heat, so photothermal therapy has been proposed as a solution for bacterial destruction. A black tantalic oxide coating on PEEK showed considerable antibacterial properties under NIR irradiation [120]. A bone channel with a 1.5 mm diameter was formed in the femur of rat samples, a specified amount of S. aureus suspension was injected, and sterilized samples were implanted into the bone channel. NIR irradiation was applied for 5 min at a time over three days, and a thermal imager was used to detect the temperature changes. The implanted samples were removed after 14 d, and the bacteria on the surface of the implanted samples were collected by ultrasonic vibration; a specified amount of diluted bacterial suspension was spread onto an agar plate and incubated at 37 °C for one day. The in vivo studies showed that the temperature increased to 51.8 °C in black tantalic oxide-coated PEEK under NIR irradiation, with a corresponding antibacterial rate of 93.1% [120]. Copper ferrite (CuFe₂O₄)/GO is another coating material showing strong antibacterial properties after photoactivation; the antibacterial effect was provided by localized hyperthermia and ROS generation under 808 nm NIR illumination. GO addition increased the antibacterial rate from 83.85% to 99.94% against S. aureus and from 76.43% to 99.57% against E. coli [57]. In another study, a CuS/GO system was used to obtain antibacterial efficiency under NIR at 808 nm; the local temperature increased to 58.4 °C around the material subcutaneously implanted in mice. The in vivo antibacterial efficiency against S. aureus was detected as 99.9% after applying NIR at 808 nm for 10 min to samples with glucose oxidase immobilization after coating with PDA/CuS/GO [148]. A PEEK surface with immobilized GO and bone-forming protein showed an improved antibacterial effect after treatment with NIR at 808 nm for 10 min; the antibacterial rate reached ∼97.55% and ∼90.57% against S. aureus and E. coli, respectively [109].

Composites with antibacterial properties

Forming composites is another method to improve the antibacterial properties of PEEK. Ceramics are the most common reinforcement materials used in the studies; in addition to ceramics, organic compounds such as gelatin and drugs such as lactam and total alkaloids have been studied (figure 4). Composite production to obtain antibacterial PEEK consists of two steps in most studies. The first step includes powder mixing in a liquid medium such as ethanol to obtain a homogeneous mixture, in which PEEK powder or blends of PEEK and reinforcements in powder form are dispersed [92]. The second step includes heating above the melting temperature of PEEK (∼343 °C) for injection molding, compression molding, selective laser sintering, cold-press sintering, or twin-screw extrusion [51, 54, 144, 149, 150] to obtain composite materials, or electrophoretic deposition for composite coatings [83]. Table 5 lists the composite materials with their antibacterial effects on PEEK.
In some of the studies, it was reported that the reaction of bacteria changed according to their type. For example, a study investigating the antibacterial properties of a GO, carbon fiber, and PEEK composite coating on Ti-6Al-4V showed that the cell membrane structure defined the antibacterial properties. Graphene oxide showed a nano-blade effect and concurrently caused chemical extraction of cell membrane lipids. Compared to Gram-negative E. coli (with a thin peptidoglycan layer and an outer lipid membrane) and Gram-positive S. mutans (with a thick capsule), S. aureus (with a thick peptidoglycan membrane) was more vulnerable to GO and showed a better antibacterial rate [94]. In the case of a PEEK/polyglycolic acid composite loaded with total alkaloids from Semen Strychnine (TASS), an antibacterial drug, E. coli gave superior results compared to S. aureus due to its thinner cytoderm, which enabled TASS infiltration; the cell wall thickness of E. coli is about 10 nm, whereas it varies between 20 and 80 nm for S. aureus [144]. Tantalum-containing systems gave moderate to low antibacterial rates in the studies reviewed [115, 149]; therefore, a combination with genistein was used to improve the antibacterial rate [115]. The mechanism was explained by the electrostatic interaction between the negative charges of the bacterial cell membrane and the positive charges of the tantalum ions on the surface, which caused cell wall leakage and bacterial cell death [115].

Changing the surface properties does not alter the bulk mechanical properties; the composite production method, by contrast, enables tailoring of the mechanical properties of PEEK. PEEK has an elastic modulus of 3-4 GPa, which can be tailored toward that of cortical bone (18 GPa) [8] (a simple rule-of-mixtures estimate is sketched below). An implant elastic modulus that matches the bone reduces stress shielding; this property is important for the initial stability of the implant and directly affects its success. A study analyzed the stress distribution at the implant-bone interface using 3D-FEM: the composition of 8 wt.% TiO₂, 8 wt.% SiO₂, and 84 wt.% PEEK gave the minimum stress distribution with a trapezium thread profile [51].

Wear is another problem that causes implant failure through loosening or debris formation; wear resistance and friction coefficient are essential parameters for artificial joint composites. ZnO nanoparticles were incorporated to obtain a wear-resistant PEEK implant material; at an addition of 5 wt.%, the wear rate decreased by 68% compared to PEEK polymer [63]. Black phosphorus is another reinforcement used in PEEK composites to improve wear properties: the wear rate decreased by 95% when 10 wt.% PTFE and 0.5 wt.% black phosphorus were included in the PEEK structure [92]. Black phosphorus formed a transfer film through van der Waals forces, providing good adhesion with the tribopairs [92]. PEEK composite coatings with carbon fiber (25 wt.%) and GO (0.02 wt.%) improved the wear resistance of Ti-6Al-4V alloy [94]. Composites of PEEK, nano-ZnO (7.5 wt.%), and short carbon fiber (15 wt.%) showed better wear performance and a lower friction coefficient compared to pristine PEEK [150].
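As noted above, composite formulation is the main lever for tuning the modulus of PEEK toward cortical bone. A quick first-pass estimate uses the classical Voigt (upper) and Reuss (lower) rule-of-mixtures bounds; the filler modulus and volume fraction below are illustrative assumptions, not values from the cited studies.

```python
# Hedged sketch: Voigt/Reuss bounds for the elastic modulus of a
# particle-filled PEEK composite. Example numbers are illustrative.

def voigt_modulus(e_matrix: float, e_filler: float, v_filler: float) -> float:
    # Upper bound (isostrain): E = Vf*Ef + (1 - Vf)*Em.
    return v_filler * e_filler + (1.0 - v_filler) * e_matrix

def reuss_modulus(e_matrix: float, e_filler: float, v_filler: float) -> float:
    # Lower bound (isostress): 1/E = Vf/Ef + (1 - Vf)/Em.
    return 1.0 / (v_filler / e_filler + (1.0 - v_filler) / e_matrix)

if __name__ == "__main__":
    E_PEEK = 3.5     # GPa, mid-range of the 3-4 GPa quoted for PEEK
    E_FILLER = 70.0  # GPa, a hypothetical stiff ceramic filler
    VF = 0.30        # 30 vol.% filler, illustrative
    print(f"Voigt upper bound: {voigt_modulus(E_PEEK, E_FILLER, VF):.1f} GPa")
    print(f"Reuss lower bound: {reuss_modulus(E_PEEK, E_FILLER, VF):.1f} GPa")
    # Cortical bone (~18 GPa) lies between these bounds for this filler
    # loading, so such a composite is plausible, but the real modulus
    # depends on particle shape, dispersion, and interface quality.
```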
Antibacterial and osteogenic properties were also investigated concurrently in most of the studies. Observations based on fluorescence staining showed that mouse chondroblast ADTC5 cells had a flat strip-shaped morphology on day seven, an indicator of proliferation and differentiation [63]. Similarly, nano-TiO₂ (addition amounts: 1, 3, 5, and 7 wt.%) enhanced the proliferation and attachment of MG-63 cells when used as a reinforcement in a PEEK/PGA blend [79]. A 40 wt.% addition of nano-fluorohydroxyapatite into PEEK improved the alkaline phosphatase activity and biomineralization activity of MG-63 cells (human osteoblast-like cells) [91]. Silicon nitride/PEEK (50% v/v) and tantalum/PEEK (50% v/v) enhanced MC3T3-E1 cell proliferation and differentiation in vitro and new bone formation and osseointegration in vivo [149].

Antibiotic-based systems are effective against planktonic bacteria; however, once bacteria colonize and form biofilms, the antibiotic concentration required for treatment increases up to a thousandfold [90]. Therefore, antibiofilm efficacy is an important property for an implant material. In a PEEK/lactam composite coating study, prevention of biofilm formation was provided by lactam addition after surface functionalization with sulfonation [90]. Similarly, biofilm formation deteriorated with a 40 wt.% addition of nano-fluorohydroxyapatite into PEEK: many dead S. mutans bacteria were detected on these samples compared to pristine PEEK. Fluoride ions inhibit bacterial metabolism and dental plaque acidogenicity [91].

In the case of nano-ZnO, a gradual increase in antibacterial properties with nano-ZnO addition was seen [63, 154], with the maximum antibacterial property reached at 7.5 wt.% [154]. Modifying the ZnO nanoparticles with a silane coupling agent, APTMS, prevented the release of ZnO nanoparticles. The increasing trend of the antibacterial property with ZnO content was explained by ROS (H₂O₂) generation [154]; the different responses of E. coli and S. aureus to H₂O₂ supported the experimental results, since E. coli is affected more by the ZnO content [154]. The effect of short carbon fiber in the ZnO/PEEK composite structure was also investigated: a 15 wt.% addition of short carbon fiber increased the zone of inhibition from 11.95 mm to 28.9 mm for E. coli and from 11.43 mm to 22.2 mm for S. aureus [150].

For n-TiO₂, the antibacterial effect increased gradually up to 5 wt.% [79]; adding more than 5 wt.% of n-TiO₂ reduced the effective surface-to-volume ratio due to cluster formation, so the interaction between the material and the bacteria decreased [79]. Another reinforcement, Si₃N₄, affected the antibacterial property of PEEK: the reaction between the aqueous environment and Si₃N₄ released ammonia and silicic acid from its surface, and the ammonia increased the extracellular pH and triggered the formation of free radicals, resulting in death of some bacteria [54]. Similarly, in another study, silicon nitride contributed −NH₂ groups and released −NH₃⁺ ions to the environment; the negatively charged bacterial cell wall interacted with the positively charged ions and was destroyed. Moreover, the alkaline environment formed by the −NH₃⁺ ions affected biofilm formation negatively [149].

Besides ceramics, an organic compound, gelatin, was used as a reinforcement. Gelatin killing counts were 32.6% and 39.2% against S. aureus and E. coli, respectively.
PEEK/gelatin nanocomposites studied in hydrogel form increased the bacteria-killing counts from 23.3% (for pure PEEK) to 57.1% against S. aureus and from 34.7% (for pure PEEK) to 61.8% against E. coli; the PEEK/gelatin weight ratio used to obtain the hydrogel was 1.73 [64].

Changing the surface topography

Surface topography is a property that defines surface adhesion; surface features smaller than the bacterial size inhibit bacterial attachment [156-160]. The methods used to texture the PEEK surface for better antibacterial efficiency are cold plasma treatment, sulfonation, plasma immersion ion implantation, and micro/nano-array formation by colloidal lithography, plasma etching, and laser-induced periodic surface structuring (figure 5). Table 6 summarizes the treatments used to change the surface topography to improve the antibacterial property of PEEK. Changing the surface topography is the least studied approach, and relatively less effective antibacterial results have been obtained. The plasma immersion ion implantation method is widely used to change the surface topography; it gave relatively high antibacterial results for the combination of Zn and O ions and for TiO₂ ions [161, 162]. On the other hand, the cold plasma technique with Ar improved the antibacterial property by less than 20%, and the reduction was improved with the contribution of N₂ [60].

Colloidal lithography and plasma etching have produced cones and pillars with nano- and micro-scale dimensions. Antibacterial tests on E. coli showed that the dimensions and density of microstructures gave more effective results than nanostructures; moreover, the cone shape increased the antibacterial rate compared to the pillar shape due to its sharp edges [168]. In another study, a KrF excimer laser was used to obtain ∼100 nm wide stripes on the PEEK surface, and the structure was additionally decorated with Ag nanoparticles. The surface-structured samples showed better antibacterial properties even though they had a lower Ag concentration than the untreated samples [169].

As seen in table 6, zinc and oxygen plasma immersion ion implantation on carbon fiber-reinforced PEEK gave the highest antibacterial results against MRSA, S. aureus, and S. epidermidis in 24 h compared to plasma immersion ion implantation of N₂ [161, 167]. The trap-killing mechanism of the surface topography, which formed micro-pits with dimensions (∼800 nm) that fit the bacteria, gave a reduction higher than 90% in MRSA, S. aureus, and S. epidermidis; in contrast, no antibacterial effect was seen on E. coli and P. aeruginosa [161]. In the case of TiO₂ plasma immersion ion implantation, nanoparticles with a dimension of about 20 nm prevented the bacterial adhesion of S. mutans, F. nucleatum, and P. gingivalis by providing less attachment area [162]. Moreover, antibacterial longevity was tested for 28 d, and antibacterial rates above 80% were obtained. Similarly, the significant difference between the antibacterial rates for S. aureus (71.4%) and E. coli (5.3%) stemmed from their different shapes: E. coli has an elongated shape that reduces direct contact with the surface of PEEK treated with TiO₂ plasma immersion ion implantation and decreases the effect of ROS [166].
PEEK polymer is classified as bioinert; therefore, in most applications, osseointegration is limited. Because of its high chemical resistance, modification techniques gain importance in studies of PEEK as an implanted biomaterial, and techniques similar to those used to increase antibacterial properties are applied to increase the biocompatibility of PEEK. The cold plasma technique with argon and N₂ was shown in vitro to increase biocompatibility [60]; the best results for osteogenic activity were obtained for the N₂-treated samples, which therefore showed an efficient race-for-the-surface property. Similarly, a nitric and sulfuric acid mixture increased bioactivity and cell biocompatibility [84]. When the surface was treated with argon plasma immersion ion implantation and hydrofluoric acid, rat bone mesenchymal stem cell adhesion, spreading, proliferation, and ALP activity were enhanced on the fluorinated PEEK surface [107].

Changing the gas composition in the cold plasma technique resulted in different topographic structures. For example, 25 min of treatment with Ar produced regularly arranged scaly nanoprotrusions, whereas N₂ formed dendritic nanoprotrusions. When the gas composition was set to 90% Ar/10% N₂, an unorganized, scaly texture with the densest and finest nanoprotrusions was observed in the SEM results [60]; accordingly, the highest antibacterial rate was seen for the samples treated with 90% Ar/10% N₂, since the area for bacterial adhesion and the interaction between bacteria and the surface decreased. The morphology of S. aureus on cold plasma-treated PEEK samples was observed with SEM [60]: on untreated samples, the bacteria had a spherical shape with a smooth and continuous cell membrane, whereas on the samples treated with 90% Ar/10% N₂, the bacterial chain length was shorter and destroyed cell membranes were observed. Moreover, roughness and breakage of the cell membrane were observed, indicating better antibacterial properties than the untreated samples [60]. Similarly, SEM and live/dead staining images showed that E. coli and S. aureus had irregular shapes due to bacterial lysis and membrane distortion after Ar-plasma treatment of sulfonated PEEK for 5 and 10 min [46]. Since the plasma treatment formed nanoprotrusions on the surface, the contact area of E. coli (1 µm) and S. aureus (0.5 µm) decreased, and the adhesion of bacteria was inhibited [46]. Ar-plasma treatment also created electrostatic repulsion between the surface and the bacteria by adding carboxyl, hydroxyl, and sulfonic acid groups to the surface and increasing its electronegativity [46].

Pre-annealing (10 °C min⁻¹ up to 180 °C) changed the type of nanolamellae formed by 45 min of argon plasma treatment [171]. Vertical and tilted nanolamellae increased the antibacterial efficiency against S. aureus and E. coli significantly compared to PEEK samples treated with argon plasma for 5 min [171]. Without annealing, vertical nanolamellae were obtained; vertical nanolamellae were more destructive against S. aureus and E. coli than the tilted nanolamellae produced with pre-annealing [171].
Plasma immersion ion implantation

Plasma immersion ion implantation is a technique for changing the surface topography of semiconductors, metals and dielectrics [172]. In this technique, the target is surrounded by the plasma; therefore, the method can be used for 3D geometries [173]. A flux of energetic plasma ions is implanted into the surface using high negative voltages [172]. TiO2, Zn2+, and O2− ions and N2 molecules have been used with this technique to obtain a surface texture with antibacterial properties on PEEK. TiO2 nanopores with 150-200 nm diameters were obtained by plasma immersion ion implantation on carbon fiber and PEEK composites [162]. Since there was no release of Ti ions from the structure, the antibacterial improvement was attributed to the surface structure containing 20 nm nanoparticles [162, 166]. The small dimension of the nanoparticles decreased the contact area between the substrate and bacteria and inhibited bacterial adhesion. Moreover, according to x-ray photoelectron spectroscopy results, possible oxygen vacancies in the nanostructure increased with the depth of the TiO2 [162]. Oxygen vacancies in TiO2 are highly reactive and produced ROS, damaging bacteria [162].

Zinc and oxygen ions were implanted into carbon fiber-reinforced PEEK using the plasma immersion ion implantation method [161]. According to the antibacterial studies, there was no significant change in antibacterial efficiency attributable to the low amount of Zn ion release. The results showed that the topography, which formed micro-pits with a diameter of about 800 nm, was responsible for the antibacterial effect against S. aureus, MRSA, and S. epidermidis [161]. Bacteria of a size similar to the micro-pits (800 nm) were trapped in the pits and could not interconnect with other bacteria; therefore, biofilm formation was inhibited [161]. N2-treated samples prepared with the same method showed a 19% reduction in bacterial growth [167]. The surface roughness of the treated samples was lower than 900 nm, which is within the range that permits bacterial attachment (between 10 nm and 900 nm) [167]. Therefore, an effect of N2-containing functional groups on bacterial activity was hypothesized [167].
Conclusion and prospect

According to the studies aimed at designing new biomaterials with antibacterial properties for infection control in dental and orthopedic surgery applications, immobilization of an antibacterial agent provides a more controllable environment, in which the release profile of the immobilized compounds can be adjusted easily without changing the dimensions of the implant. For example, a release period of about 6 months should be covered to support attachment and avoid early-stage dental implant failure. Moreover, compared to coatings, immobilization eliminates cracking and the bulk release of the antibacterial material. Another critical issue is the timing of osteoblast attachment relative to bacterial attachment: the race-for-the-surface between osteoblasts and bacteria defines the success of the implant. Therefore, antibacterial agents with longevity of release, together with functional compounds enhancing osteoblast attachment, are required. Dual systems, formed by immobilizing an antibacterial agent together with compounds of osteogenic origin, promote osteoblast attachment and inhibit bacteria simultaneously. Therefore, such systems are superior to PEEK nanocomposites and coated PEEK with respect to antibacterial effects. Forming nanocomposites, coating, and changing the surface topography can enhance the immobilization of functional materials and support their antibacterial ability.

The limitations of PEEK used in dental applications as endo-crowns and abutments are loosening and infection [18]. Modifications to prevent loosening and infection include sustained release of antibacterial material, compositions showing high osseointegration, and elastic modulus values close to that of the bone located in reconstruction areas. Regarding orthopedic implants, PEEK is used in cranioplasty and maxillofacial reconstruction implants, fixation devices, spinal implants and joint replacements (knee and hip) [18]. In spinal implants, poor osseointegration is the main problem, while loosening and infection are common for cranioplasty and maxillofacial reconstruction implants, fixation devices, and joint replacements [18].
Mechanical properties can be altered only by producing composites, whereas osseointegration and antibacterial properties can be enhanced by coating, immobilizing molecules on the surface, and changing the surface texture. The elastic modulus can be increased using reinforcements such as TiO2 and SiO2 to provide better osseointegration [51]. Moreover, for coating applications that start with a burst release, a sustained release should follow to support osteoblasts in the race-for-the-surface. A rapid drug release within the first five to seven days should be provided to prevent biofilm formation. Since most studies were conducted at pH 7.4, compounds that are effective over a range of pH values should be used according to the application area; this holds especially for dental applications, because of sudden changes in the pH of the oral environment. A layer-by-layer approach would therefore be a good solution to control the release amount and timing [51, 117]. It is possible to construct a system that provides a burst release of an antibacterial agent from the first layer and a sustained release of an antibacterial agent combined with a compound that supports osseointegration. Ag+ ions gave bacterial inhibition rates higher than 90% in all types of applications [3, 52, 117]; therefore, a compound including Ag+ ions would be a good choice for the burst-release layer. However, cytotoxicity should be considered, because an initial burst of Ag+ inhibited MC3T3-E1 cells in the first 3 d [97]. A high porosity of the carrier and weak bonds between the antibacterial agent and the carrier result in a high initial burst [75].

Photothermal therapy is another approach to regulating antibacterial properties. A sudden increase in the temperature at the infection site can be provided using the photothermal ability of coated materials instead of a burst release of an antibacterial agent [120, 135].

The spectrum of the antibacterial agent is another parameter that should be considered. An antibacterial agent with a broad spectrum is required for dental applications, since the oral environment contains several types of bacteria that cause oral diseases [174]. As a testing model, bacteria representing the initial (S. sanguinis) and late (P. gingivalis) colonization periods should be chosen [68]. On the other hand, S. marcescens becomes prominent in spinal infections [136]. Therefore, drugs with a broad spectrum should be chosen for more efficient antibacterial therapy. The amount of antibacterial agent is also a parameter that regulates the antibacterial properties: a reduction in concentration decreased the release amount [48].
The bioinertness and chemical inertness of PEEK are the challenges researchers face in their studies. Surface modification techniques such as sulfonation and coating with PDA were used in most studies to overcome the chemical inertness of PEEK. The proposed systems should support both antibacterial and osteogenic properties to overcome the bioinertness. The type of antibacterial agent and the optimization of its concentration are crucial parameters. For example, although Ag is a superior antibacterial agent, it causes cytotoxicity; therefore, the amount of Ag should be optimized. The most used modification techniques to obtain antibacterial properties are immobilizing an antibacterial agent and coating PEEK with an antibacterial material. Those modifications are achieved by immersion and dip coating. Although these methods are easy to apply and cost-effective, they can produce non-homogeneous surfaces, which leads to inconsistent results. Another problem is finding a broad-spectrum antibacterial material: as seen in most studies, the response of bacteria changes depending on the antibacterial agent.

Smart biomaterials are another area that has emerged in the field [175]. Since there are already examples of photo-responsive [57, 120, 135] and pH-responsive [137, 138] systems for PEEK, other techniques such as enzyme-responsive, electrical stimuli-responsive, vibration-responsive and magnetic-responsive systems can be developed for PEEK implants to provide a solution-oriented approach and better control over drug release, biocompatibility and antibacterial efficiency [175]. Salivary and bacterial enzymes can be used as stimuli for releasing antibacterial agents. The charges on biofilms are the starting point for electrical stimuli-responsive materials. Combination with Fe3O4 adds magnetic responsiveness to the material, and BaTiO3 gives the material a piezoelectric property that affects biofilms under mechanical stimulation alone [176].

A successful biomaterial for implants should provide the best osseointegration to avoid implant rejection and revision. Although PEEK has an advantage in its mechanical properties, it is a bioinert material and requires modifications to render antibacterial and osseointegration properties. Being a chemically inert biomaterial, PEEK challenges researchers in the field with respect to modification techniques. The most widely used techniques to overcome PEEK's chemical inertness are sulfonation and PDA coating. Bioactive molecules and compounds are attached to the surface of PEEK with the help of these techniques to confer antibacterial and biocompatibility properties on this material.
This review summarizes the strategies to improve PEEK's antibacterial properties. Moreover, the measurement methods and the mechanisms of bacterial inhibition are explained. According to the studies analyzed, the immobilization of functional groups and compounds has been studied the most. This is reasonable, since immobilizing functional groups provides a more controllable release profile for antibacterial purposes. Moreover, the osteogenic properties of PEEK can be tailored by adding different functional groups. Ag+ ions, Ag nanoparticles, and various antibiotics are the agents mainly used for immobilization. Bone-forming and osteogenic growth peptides have been used to add osteogenic properties to the system. The coating materials include Ag, Cu, Zn, and Mg elements, ceramics (HA, brushite), drugs (cefuroxime sodium salt, tobramycin, and GS), red/gray selenium nanorods, and black/white tantalum oxides. Immersion and dripping are the two most used methods for coating and immobilizing functional groups. The blending method is widely used to form PEEK composites with TiO2, SiO2, ZnO, HA, lactam, black phosphorus, and Si3N4 to add an antibacterial effect. The cold plasma technique, plasma immersion ion implantation, and treatment with a strong acid are methods used to change the surface topography to enhance the antibacterial properties of PEEK. ROS generation is the main mechanism for the antibacterial effect, and colony-forming unit counting is widely used as a quantitative measurement method to assess antibacterial properties.

Figure 1. The mechanisms of the antibacterial effect of modified PEEK: (a) ROS generation and contact killing: electrostatic interactions between the bacterial cell membrane and antibacterial ions deteriorate the integrity of the bacterial cell membrane; the antibacterial nanoparticles pass through the cell membrane and trigger ROS generation, which results in protein damage, DNA damage, or mitochondrial damage [52, 62-65]. Adapted with permission from [66]. Copyright (2017) American Chemical Society. (b) Hydrophobicity of the surface: prevents hydrophilic bacteria from attaching to the surface. Hydrophobicity can be provided either by coating with a hydrophobic material or by changing the surface texture [48, 67]. (c) Trap killing: bacteria are trapped and cannot form a biofilm on the porous surface of the material. The sizes of the pores and of the bacteria determine the efficiency of the method [62, 68].

Figure 2. Immobilization of functional groups onto PEEK with the two most common methods, dripping and immersion. The antibacterial effect and osteogenic properties obtained by immobilizing two different materials offer solutions for a better implant design [50, 61].
Figure 4. Composite production to obtain antibacterial PEEK consists of two steps in most studies. The first step includes powder mixing in a liquid medium such as ethanol to obtain a homogeneous mixture; PEEK powder, or blends of PEEK and reinforcements in powder form, are dispersed in the medium [92]. The second step includes heating above the melting temperature (PEEK melting temperature: ∼343 °C) for injection molding, compression molding, selective laser sintering, cold pressing sintering, or twin-screw extrusion [51, 54, 144, 149, 150] to obtain composite materials, or electrophoretic deposition for composite coatings [83]. The injection molding and twin-screw extruder panels are adapted from [151] (CC BY 4.0), the selective laser sintering panel is adapted from [152] (CC BY 4.0), and the cold pressing sintering panel is adapted with permission from [153]; Copyright (2018) American Chemical Society.

Figure 5. The methods for changing PEEK's surface and adding functional groups. The widely used methods are sulfonation, cold plasma treatment, and plasma immersion ion implantation. Sulfonation, other acid treatments, and plasma immersion ion implantation change the surface topography and incorporate ions such as SO3H, NO2, Zn2+, and O2− into the surface [84, 99, 161]. The cold plasma technique changed the surface roughness, and dendritic or scaly nanoprotrusions were obtained on the surface [60]. The cold plasma treatment panel is adapted from [163] with permission of the publisher (Taylor & Francis Ltd, www.tandfonline.com), the plasma immersion ion implantation panel is adapted from [164] (CC BY 4.0), and the colloidal lithography panel is reprinted with permission from [165]; Copyright (2017) American Chemical Society.

Table 1. Measurement methods of the antibacterial property of PEEK.

Table 2. The antibacterial effect of modification via immobilization of functional materials.

Table 3. Coating techniques used to improve the antibacterial properties of PEEK.

Table 4. The antibacterial effect of modification by coating with an antibacterial material.

Table 4. (Continued.) a Cu deposition was applied for 2 min and the C:F thickness was 10 nm. b GAS deposition: a Haberland-type gas aggregation source was used to deposit Cu nanoparticles. c Near-infrared. d Poly(D,L-lactic acid-co-glycolic acid).

Table 5. The antibacterial effect of modification with reinforcements.

Table 6. The antibacterial effect of modification by changing the surface topography.

combined treatment of osteosarcoma and bacterial infection ACS Appl. Mater. Interfaces 12 45891-903
[56] Yang C et al 2019 Sodium butyrate-modified sulfonated polyetheretherketone modulates macrophage behavior and shows enhanced antibacterial and osteogenic functions during implant-associated infections J. Mater. Chem. B 7 5541-53
[57] Zhang J, Gao X, Ma D, He S, Du B, Yang W, Xie K, Xie L and Deng Y 2021 Copper ferrite heterojunction coatings empower polyetheretherketone implant with multi-modal bactericidal functions and boosted osteogenicity through synergistic photo/Fenton-therapy Chem. Eng. J. 422 130094
[58] Yuan X, Ouyang L, Luo Y, Sun Z, Yang C, Wang J, Liu X and Zhang X 2019 Multifunctional sulfonated polyetheretherketone coating with beta-defensin-14 for yielding durable and broad-spectrum antibacterial activity and osseointegration Acta Biomater. 86 323-37
[59] Guo C, Lu R, Wang X and Chen S 2021 Antibacterial activity, bio-compatibility and osteogenic differentiation of graphene oxide coating on 3D-network poly-ether-ether-ketone for orthopaedic implants J. Mater. Sci., Mater. Med. 32 135
[60] Liu C, Bai J, Wang Y, Chen L, Wang D, Ni S and Liu H 2021 The effects of three cold plasma treatments on the osteogenic activity and antibacterial property of PEEK Dent. Mater. 37 81-93
[61] Sang S, Wang S, Yang C, Geng Z and Zhang X 2022 Sponge-inspired sulfonated polyetheretherketone loaded with polydopamine-protected osthole nanoparticles and berberine enhances osteogenic activity and prevents implant-related infections Chem. Eng. J. 437 135255
[62] Liu W, Li J, Cheng M, Wang Q, Qian Y, Yeung K W K, Chu P K and Zhang X 2019 A surface-engineered polyetheretherketone biomaterial implant with direct and immunoregulatory antibacterial activity against methicillin-resistant Staphylococcus aureus Biomaterials 208 8-20
[63] Wu T, Zhang X, Chen K, Chen Q, Yu Z, Feng C, Qi J and Zhang D 2022 The antibacterial and wear-resistant nano-ZnO/PEEK composites were constructed by a simple two-step method J. Mech. Behav. Biomed. Mater. 126 104986
[64] Jiang J, You D, Wang Q and Gao G 2022 Novel fabrication and biological characterizations of AgNPs-decorated PEEK with gelatin functional nanocomposite to improve superior biomedical applications J. Biomater. Sci. Polym. Ed. 33 590-604
[65] Wang L, He H, Yang X, Zhang Y, Xiong S, Wang C, Yang X, Chen B and Wang Q 2021 Bimetallic ions regulated PEEK of bone implantation for antibacterial and osteogenic activities Mater. Today Adv. 12 100162
[66] Li M et al 2017 Construction of functional coatings with durable and broad-spectrum antibacterial potential based on mussel-inspired dendritic polyglycerol and in situ-formed copper nanoparticles ACS Appl. Mater. Interfaces 9 35411-8
[67] Seuss S, Heinloth M and Boccaccini A R 2016 Development of bioactive composite coatings based on combination of PEEK, bioactive glass and Ag nanoparticles with antibacterial properties Surf. Coat. Technol. 301 100-5
[68] Yang S, Yu W, Zhang J, Han X, Wang J, Sun D, Shi R, Zhou Y, Zhang H and Zhao J 2022 The antibacterial property of zinc oxide/graphene oxide modified porous polyetheretherketone against S. sanguinis, F. nucleatum and P. gingivalis Biomed. Mater. 17 025013
[69] Sies H and Jones D P 2020 Reactive oxygen species (ROS) as pleiotropic physiological signalling agents Nat. Rev. Mol. Cell Biol. 21 363-83
[70] Azzam E I, Jay-Gerin J P and Pain D 2012 Ionizing radiation-induced metabolic oxidative stress and prolonged cell injury Cancer Lett. 327 48-60
[71] Maier P, Hartmann L, Wenz F and Herskind C 2016 Cellular pathways in response to ionizing radiation and their targetability for tumor radiosensitization Int. J. Mol. Sci. 17 102
[72] Dwyer D J, Kohanski M A and Collins J J 2009 Role of reactive oxygen species in antibiotic action and resistance Curr. Opin. Microbiol. 12 482-9
[73] Kessler A, Hedberg J, Blomberg E and Odnevall I 2022 Reactive oxygen species formed by metal and metal oxide nanoparticles in physiological media-a review of reactions of importance to nanotoxicity and proposal for categorization Nanomaterials 12 1922
[74] Xiao T, Fan L, Liu R, Huang X, Wang S, Xiao L, Pang Y, Li D, Liu J and Min Y 2021 Fabrication of dexamethasone-loaded dual-metal-organic frameworks on polyetheretherketone implants with bacteriostasis and angiogenesis properties for promoting bone regeneration ACS Appl. Mater. Interfaces 13 50836-50
[75] Xue Z, Wang Z, Sun A, Huang J, Wu W, Chen M, Hao X, Huang Z, Lin X and Weng S 2020 Rapid construction of polyetheretherketone (PEEK) biological implants incorporated with brushite (CaHPO4·2H2O) and antibiotics for anti-infection and enhanced osseointegration Mater. Sci. Eng. C 111 110782
[76] Lau N C, Lai Y C, Chen D W and Cheng K W 2022 Antibacterial activity studies of 3D-printing polyetheretherketone substrates with surface growth of 2D TiO2/ZnO rodlike arrays ACS Omega 7 9559-72
[77] Gao A, Hang R, Huang X, Zhao L, Zhang X, Wang L, Tang B, Ma S and Chu P K 2014 The effects of titania nanotubes with embedded silver oxide nanoparticles on bacteria and osteoblasts Biomaterials 35 4223-35
[78] Dorovskikh S I et al 2022 Biological studies of new implant materials based on carbon and polymer carriers with film heterostructures containing noble metals Biomedicines 10 2230
[79] Shuai C, Shuai C, Feng P, Gao C, Peng S and Yang Y 2018 Antibacterial capability, physicochemical properties, and biocompatibility of nTiO2 incorporated polymeric scaffolds Polymers 10 328
[80] Chen T, Chen Q, Fu H, Wang D, Gao Y, Zhang M and Liu H 2021 Construction and performance evaluation of a sustained release implant material polyetheretherketone with antibacterial properties Mater. Sci. Eng. C 126 112109
[81] Qi D, Wang N, Cheng Y, Zhao Y, Meng L, Yue X, She P and Gao H 2022 Application of porous polyetheretherketone scaffold/vancomycin-loaded thermosensitive hydrogel composites for antibacterial therapy in bone repair Macromol. Biosci. 22 1-10
[82] Cai G et al 2020 Hierarchically porous surface of PEEK/nMCS composite created by femtosecond laser and incorporation of resveratrol exhibiting antibacterial performances and osteogenic activity in vitro Composites B 186 107802
[83] Abdulkareem M H, Abdalsalam A H and Bohan A J 2019 Influence of chitosan on the antibacterial activity of composite coating (PEEK/HAp) fabricated by electrophoretic deposition Prog. Org. Coat. 130 251-9
[84] Ding R, Chen T, Xu Q, Wei R, Feng B, Weng J, Duan K, Wang J, Zhang K and Zhang X 2020 Mixed modification of the surface microstructure and chemical state of polyetheretherketone to improve its antimicrobial activity, hydrophilicity, cell adhesion, and bone integration ACS Biomater. Sci. Eng. 6 842-51
[85] Meng X, Zhang J, Chen J, Nie B, Yue B, Zhang W, Lyu Z, Long T and Wang Y 2020 KR-12 coating of polyetheretherketone (PEEK) surface via polydopamine improves osteointegration and antibacterial activity in vivo J. Mater. Chem. B 8 10190-204
[86] Jacob B, Park I S, Bang J K and Shin S Y 2013 Short KR-12 analogs designed from human cathelicidin LL-37 possessing both antimicrobial and antiendotoxic activities without mammalian cell toxicity J. Pept. Sci. 19 700-7
2024-02-18T06:16:31.527Z
2024-02-16T00:00:00.000
{ "year": 2024, "sha1": "ac7000ed0fc3f2c66e752c59706d89e59b5ae911", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1748-605X/ad2a3d/pdf", "oa_status": "HYBRID", "pdf_src": "IOP", "pdf_hash": "f44ac9adb7c5b6f490065ea916eaff2c8c29a89a", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
56170489
pes2o/s2orc
v3-fos-license
The Relationship between Moral Climate of Sports and the Moral Behavior of Young Athletes: A Multilevel Meta-analysis

Sports are among the most important leisure activities for youth and adolescents. Both positive (i.e., prosocial) and negative (i.e., antisocial) moral behaviors occur on the playing field. To stimulate positive sports experiences, it is important to understand which factors are related to the moral behavior of young athletes; one of these is the moral climate, that is, the socio-moral environment in which sports take place. Little is known about the overall strength of the relationship between moral climate and moral behavior of young athletes, as well as the potential moderating factors of this relationship. A meta-analysis of 27 studies containing 117 effect sizes and N = 7726 young athletes (age < 18 years) was conducted. The results show that there is an overall significant association between these two variables (r = 0.40), indicating that a prosocial moral climate is related to less antisocial and more prosocial behavior, while an antisocial moral climate is associated with more antisocial and less prosocial behavior of young athletes. Two study characteristics significantly moderated this relationship: specifically, stronger associations were found in cross-sectional and in older studies. In addition, the strength of the association between moral climate and moral behavior was stronger for antisocial moral climate compared to prosocial moral climate. Finally, associations for team members were stronger than those for coaches or a broad moral club climate. Implications for further research and sports practice are discussed.

Introduction

Sports represent one of the most popular leisure activities for youth (Ntoumanis et al. 2012). Participation in organized youth sports offers athletes many opportunities for social interactions with peers and adults, which can contribute to the development of moral norms and values (Rutten et al. 2011). Participants are challenged to make moral decisions about behaviors that would increase or thwart their chances of winning, and that are in line with, or contravene, the fundamental morals of sports. Because of the nature of sports activities, sports have the potential to shape the moral behavior of youth for better or worse (Rutten et al. 2008; Shields and Bredemeier 1995). While sports are believed to promote prosocial values, such as sportsmanship and fair play, incidents of antisocial behavior on the playing field have been documented extensively (Fields et al. 2010; Hodge and Lonsdale 2011; Rutten et al. 2011). Experiencing incidents of antisocial behavior within the sports context can have negative consequences for the sports participation of youth and limit the opportunities for effectively using sports activities as a vehicle of moral development (Al-Yaaribi et al. 2016; Fraser-Thomas and Côté 2009). Therefore, it is important to understand which factors contribute to antisocial and prosocial behavior of young athletes, in order to stimulate positive moral development and prevent antisocial transgressions. One of the factors within the sports context that has been linked to moral behavior of athletes is the moral climate, that is, the socio-moral environment in which sports take place (Rutten et al. 2007).
In the broader field of youth studies, the socio-moral environment created by peers, parents, and teachers in which youth develop is believed to be of great importance in shaping the moral behavior of youth (Kohlberg 1984; Piaget 1965). In addition, findings of empirical studies highlight the importance of the moral sports climate for the moral behavior of youth (Shields et al. 2007; Rutten et al. 2011). However, to date, convincing evidence on the overall strength and potential moderators of this relationship is not available. These insights are necessary in order to provide guidelines for creating a sports context that supports the positive moral development of young people. Therefore, the current study aims to examine the relationship between moral climate and moral behavior of young athletes by means of a meta-analysis.

Moral Climate and Moral Behavior

The term moral behavior refers to acts that can have positive or negative consequences for athletes' psychological and physical well-being (Al-Yaaribi et al. 2016; Kavussanu 2012; Kavussanu and Stanger 2017). In the current study, a distinction is made between prosocial and antisocial moral behavior, operationalized using the definitions of Kavussanu and colleagues (e.g., Kavussanu 2008; Kavussanu and Stanger 2017; Sage et al. 2006). Prosocial behavior is behavior intended to help or benefit another, while antisocial behavior is behavior intended to harm or disadvantage another, including aggression. Examples of antisocial sports behavior are intentionally injuring an opponent, intimidating or hurting opponents, verbally abusing team members or opponents, and being rebellious towards the referee (Kavussanu 2008; Sage et al. 2006). In contrast, prosocial behavior is characterized by cooperative behaviors, where people voluntarily help and take care of each other and share their resources and skills through feedback and encouragement (Kavussanu and Boardley 2009). Both prosocial and antisocial behavior can take place before, during or after sports activities. Moreover, a recent meta-analysis (Graupensperger et al. 2018) showed that these behaviors are relatively independent, so it is important that both types of behavior are examined in sports.

The term moral climate is a broad term used to refer to a social environment in sports that has moral connotations; it includes moral atmosphere and caring climate. Moral atmosphere refers to the shared norms and values within the sports context (Shields and Bredemeier 1994), including the shared norms and values of coaches, team members, parents, spectators, and the moral club culture (Rutten et al. 2007; Shields et al. 2005). Caring climate refers to team members' perceptions of interpersonal warmth and support (Newton et al. 2007), and behaviors of "engrossment (listening, accepting, and attending), motivational displacement (honoring interests, supporting and helping achieve goals, empowering), respect (trust, sensitivity)" (Fry and Gano-Overway 2010, p. 295). The current study uses the term moral climate to refer to all of the above. The moral climate can promote prosocial or antisocial behavior; thus, a distinction can be made between a prosocial and an antisocial moral climate. A prosocial moral climate refers to prosocial norms, values, and behaviors within the sports context, including the presence of fair play attitudes and caring behaviors among players. An antisocial moral climate refers to a moral climate that is likely to promote antisocial behavior.
Antisocial moral climates consist of antisocial norms and values shared by the social actors in the sports context, including antisocial team norms, the approval of cheating behaviors by coaches and spectators, the acceptance of injuring or intimidating opponents, and aggressive behaviors on or around the field by team members (Guivernau and Duda 2002).

Kohlberg's (1984) theory of moral development is often used to explain the relationship between moral climate and moral behavior of athletes (Jones and McNamee 2000; Shields and Bredemeier 1994; Stephens and Bredemeier 1996). Kohlberg (1984) proposed that moral reasons or beliefs are important motives for moral behavior: believing that certain behavior is the right or acceptable thing to do has great motivational power to act in concordance with that belief. According to Kohlberg, individual moral beliefs in real life almost always originate in the context of group norms. Therefore, individual moral behavior can be perceived as a function of group norms (Higgins et al. 1984), explaining the association between moral climate and moral behavior in sports (Jones and McNamee 2000; Shields and Bredemeier 1994; Stephens and Bredemeier 1996).

The moral climate in the sports context is considered to be of substantial influence on moral outcomes in young athletes (Guivernau and Duda 2002; Kavussanu and Stanger 2017). The moral climate of sports teams can provide the basis for the moral judgments and related behavior of the team members. Numerous studies have shown that collective team, coach, parent, spectator, and club norms are related to the moral functioning and behavior of individual team members (Arthur-Banning et al. 2009; Guivernau and Duda 2002; Steinfeldt et al. 2011; Stephens et al. 1997). For example, Stephens and Bredemeier (1996) found that the reported likelihood of team members to act aggressively was higher when young football players believed that other team members would play unfairly. Other research has shown that when the moral climate of the sports environment is characterized by prosocial norms, young athletes tend to show more prosocial behaviors (Rutten et al. 2007).

Several empirical studies have examined the relationship between the quality of the moral climate and the moral behavior of young athletes, generally showing significant positive associations between the two variables (e.g., Shields et al. 2007; Rutten et al. 2011). The importance of a prosocial moral climate in youth sports is widely acknowledged in both research and practice. For example, at the national level, organizations have been making efforts to create prosocial sports environments and ban antisocial behavior (e.g., UNICEF 2010). However, primary studies on the relationship between moral climate and moral behavior of young athletes show inconsistencies regarding the strength of this relationship, ranging from small (Bolter and Kipp 2016) to large effect sizes (Kavussanu and Spray 2006). To date, the results of primary studies on moral climate and moral behavior of young athletes have not yet been summarized quantitatively. Thus, the overall strength of the relationship between the moral climate and the behavior of young athletes is unknown. In addition, there is a lack of knowledge on the moderators of the relationship between moral climate and moral behavior of young athletes. Such knowledge would give researchers and practitioners insight into the implications for theory and practice for enhancing the moral development of youth within the sports context.
Potential Moderators

In order to fully understand the relationship between the moral climate and the moral behavior of youth, it is important to assess potential moderators of this relationship. The strength and direction of the association between moral climate and moral behavior may depend on study, sample, sports, moral climate, and moral behavior characteristics.

With regard to study characteristics, the design of the study is a potential moderator, because cross-sectional studies measure the relationship between moral climate and moral behavior at one point in time, whereas longitudinal studies take the developmental aspect of the association into account. Second, the year of publication could be a potential moderator, because the quality of older studies is expected to be generally lower than that of more recent studies, as statistical and methodological knowledge in social research has increased greatly over the last decades. Finally, the publication status of the study (i.e., whether it was published in a peer-reviewed journal) could be a moderator, because this is an indication of whether publication bias is likely or not.

Sample characteristics, such as the proportion of males in the sample, are potential moderators, because previous research has indicated that male athletes are more prone to aggressive behavior and that male players consider aggression and rule-violating behavior more legitimate than female athletes do (Coulomb-Cabagno and Rascle 2006; Shields et al. 2007; Tucker and Parks 2001). Consequently, male athletes are expected to be more vulnerable to antisocial influences, because the threshold for antisocial behavior may be lower for them. The proportion of athletes of Caucasian descent in the sample could also moderate the relationship between moral climate and moral behavior, because it is unknown how well the findings of previous research generalize across ethnic groups (Fredricks and Eccles 2008). Moreover, age could be a potential moderator, because research has shown that prosocial and antisocial behavior increase with age in adolescent male football players, and that similar changes occur in the motivational climate. Also, during different developmental stages, people can be more vulnerable and sensitive to the impact of social relationships and contextual influences (Kohlberg 1984; Strong et al. 2005). Fourth, experience in the sports is a potential moderator, because experience with sports is typically correlated with age (e.g., Kavussanu et al. 2006).

The type of sports may also influence whether the moral climate is related to the moral behavior of youth (Rutten et al. 2007; Shields et al. 2007). For example, Endresen and Olweus (2005) and Vertonghen and Theeboom (2010) describe that the climate in contact sports (e.g., wrestling, power lifting, boxing) is built on beliefs in the value of toughness and consists of violent attitudes towards opponents, which may promote more aggressive behavior both in the sports context and in everyday life. Also, in a contact-sports setting, physical aggression is rewarded with on-field success and increased prestige. Consequently, a masculine and dominant attitude may be encouraged by coaches, peers, and parents in contact sports, causing a more aggressive climate and dominant behavior (Kuśnierz and Bartik 2014). Moreover, stronger effect sizes are expected for team sports, because the social influence within team sports is stronger than in individual sports.
Characteristics of the moral climate could also serve as potential moderators. First, Shields and colleagues (2007) showed that the specific social actor of the moral climate (e.g., coach or team members) can influence the moral behavior of young athletes differently. Second, whether the moral climate is prosocially (e.g., caring climate) or antisocially (e.g., antisocial behavior of team members) oriented may moderate the relation between moral climate and moral behavior, because, in general, antisocial influences have a larger impact on behavior than prosocial influences (Baumeister et al. 2001).

Finally, moral behavior characteristics are potential moderators. For instance, whether the moral behavior is prosocial or antisocial could be a moderator, because there are indications that the etiology of prosocial and antisocial behavior differs (Krueger et al. 2001). Prosocial behavior is believed to be largely learned by (social) reinforcement, while the development of antisocial behavior is more complex and, next to social reinforcement, involves genetic, individual, family, peer, and societal influences (Bortoli et al. 2012; Carlo et al. 1999; Krueger et al. 2001). Further, whether the behavior occurred on-field or off-field could moderate the relationship between moral climate and moral behavior of young athletes. Larger effect sizes are expected for on-field behavior, because off-field behavior takes place in a different context than the sports context, and it is uncertain to what extent behavior transfers to other contexts. In addition, research has shown differences in moral behavior between sports and other contexts such as school (e.g., Kavussanu et al. 2013; Kavussanu and Ring 2016).

The Current Study

As indicated above, the relationship between moral climate and moral behavior in youth has not been quantitatively summarized across studies, and the potential moderators of this relationship have not been examined. This is important because such knowledge will enhance the understanding of the factors that moderate this relationship, with further implications for theory and practice. To fill these gaps in the literature, the current study conducted a multilevel meta-analysis of the relationship between the moral climate of the sports environment and young athletes' moral behavior. A meta-analysis can summarize previous research more adequately and precisely than a narrative review (Lipsey and Wilson 2001) and is an appropriate method to quantify and analyze inconsistencies in primary studies. A multilevel approach that allows the inclusion of more than one effect size per study was used, and comprehensive moderator analyses were conducted to assess the influence of possible moderators on the relationship between moral climate and moral behavior of young athletes (Spruit et al. 2016a, b; Van den Noortgate and Onghena 2003). Multilevel meta-analytic techniques enable the use of all available effect sizes in the analyses, so that all information can be preserved and maximum statistical power can be generated (Assink et al. 2015). This meta-analysis had two aims. The first aim was to examine the strength of the association between the moral climate and moral behavior of young athletes. The second aim was to test the moderating influence of study, sample, sports, moral climate, and moral behavior characteristics on this relationship.

Inclusion Criteria

The studies for this meta-analysis were selected using several inclusion criteria.
First, articles had to examine moral climate in sports teams and measure it with an appropriate scale, for example, team norms, (socio-)moral climate, moral context, caring climate, or moral environment. The scales of moral climate had to focus on the moral behavior or norms of actors in the sports context (i.e., coaches, team members, parents and spectators) or on the broad moral club culture (e.g., Rutten et al. 2007). Second, studies had to report on some type of moral behavior (e.g., prosocial behavior, antisocial behavior, aggression) of the athlete, measured by a scale on past behavior (for example, questions on how often someone engaged in certain behavior in a past period of time) or intended behavior (for example, questions on how youth think they would behave in hypothetical situations). Third, only studies with a sample with a mean age below 18 were included. Finally, the included studies had to provide sufficient statistical information to calculate an effect size. Only studies reporting on bivariate associations between moral climate and moral behavior were included, because in multivariate effect sizes the set of covariates varies greatly among studies; combining and comparing differently adjusted effect sizes therefore limits the ability to estimate the true overall relationship (Mulder et al. 2018).

Selection of the Studies and Handling Publication Bias

All studies examining the relationship between moral climate in the sports context and prosocial or antisocial behavior in youth available until January 2018 were included. Five electronic databases were searched: ScienceDirect, PsychINFO (including Medline), Web of Knowledge (all databases), EBSCOhost (all databases), and Google Scholar. The search string combined four elements: a moral climate element, a behavioral element, an age element, and a sports element. For the moral climate element, the following keywords were used: "moral atmosphere", "moral climate", "caring climate", "team norm*", sportsmanship, and "fair play attitude". For the behavioral element, the following keywords were used: behavio*r, antisocial, prosocial, aggress*, violen*, and "moral functioning". Youth, child*, and adolescen* were used for the age element. The last keyword used was "sport*", to ensure that the search was focused on studies that investigated moral climate in the sports context. Reference sections of relevant articles and publication lists of scholars who frequently publish on this topic were screened for qualifying studies, and these scholars were contacted to ask whether they had any unpublished manuscripts on this research topic.

A common problem in this type of search is that studies that do not find any significant result often remain unpublished. Therefore, it is possible that the studies included in the meta-analysis are not an adequate representation of all previous studies that have been conducted; this creates the so-called "publication or file drawer bias" (Duval and Tweedie 2000). To reduce the risk of publication bias, unpublished studies were searched for by screening all databases, including the American Doctoral Dissertations database in EBSCOhost, and by contacting authors once.

The initial search resulted in 193 studies, selected based on title and abstract. After deletion of duplicate articles, the abstract and methods sections of 149 studies were read. Next, 47 articles were read thoroughly to examine their usability for the meta-analysis.
In total, 27 studies met all the inclusion criteria and were included in this meta-analysis. Figure 1 presents a flow chart of the search for articles, while Table 1 presents the characteristics of the included studies.

Coding the Studies

The included studies were coded according to the guidelines of Lipsey and Wilson (2001). The codebook was developed by the first, third, and last authors, and all studies were double-coded by the third and last authors. In case of differences in coding, the authors discussed the coding until consensus was reached, or they consulted the first author. Potential moderators of the association between moral climate and moral behavior were grouped into study, sample, sports, moral climate, and behavior characteristics. To prevent the problem of multiple testing (Tabachnick and Fidell 2007), only moderators that had theoretical potential (as described in the Introduction) and enough variability among the included studies were included. For example, the informant on the moral behavior outcome was not included, because too few studies used informants other than self-report.

For study characteristics, the design of the study (cross-sectional or longitudinal), the year of publication (continuous variable), and the publication status of the study (i.e., whether it was published in a peer-reviewed journal) were coded. For sample characteristics, the proportion of males in the sample, the proportion of athletes of Caucasian descent in the sample, the mean age of the participants, and the mean number of years of experience in the sports were coded. Further, different sports characteristics were coded as potential moderators. It was first coded whether the sports were contact sports, non-contact sports, or a mix of contact and non-contact sports. Second, a subdivision into individual, team, or mixed sports was made. Because of the limited number of non-contact and individual sports, individual sports were combined with mixed sports, and non-contact sports with mixed sports. With regard to the characteristics of the moral climate, the specific social actor of the moral climate used in the study was coded (e.g., coach, team members, or a mix/broad measure), as well as whether the moral climate was prosocially (e.g., caring climate) or antisocially (e.g., antisocial behavior of team members) oriented. Finally, for the moral behavior characteristics, it was coded whether the moral behavior was prosocial or antisocial, whether the moral behavior measure consisted of 'reported behavior' (e.g., how often did you engage in certain behavior?) or 'intended behavior' (e.g., if situation X would happen, how would you behave?), and whether the behavior occurred on-field or off-field.

Calculation and Analysis

Reported statistics in the primary studies were transformed into the correlation coefficient r, according to formulas from Lipsey and Wilson (2001); they consider r = 0.10 a small effect size, r = 0.25 a moderate effect size, and r = 0.40 a large effect size. Effect sizes r were coded in the expected direction: a positive correlation indicated that a prosocial moral climate was positively related to desirable moral behavior (i.e., more prosocial and less antisocial behavior), and that an antisocial moral climate was positively related to undesirable moral behavior (i.e., less prosocial and more antisocial behavior). A negative correlation indicated that athletes engaged in antisocial behavior in a prosocial moral climate or showed prosocial behavior in an antisocial moral climate. If an article indicated that the relationship was not significant but did not provide any statistical information, the effect size was coded as zero (Lipsey and Wilson 2001). Continuous variables were centered on their mean, and categorical variables were re-coded into dummy variables (Lipsey and Wilson 2001). The effect sizes were checked for extreme values (>3.29 SD from the mean; Tabachnick and Fidell 2007), but no outliers were present. Correlation coefficients r were re-coded into Fisher z-values for the analyses (Lipsey and Wilson 2001); for interpretation and reporting, the Fisher z-values were transformed back into correlation coefficients after the analysis. The standard errors and sampling variances of the effect sizes were estimated based on the formulas of Lipsey and Wilson (2001).
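To illustrate the effect size preparation described above, the following is a minimal sketch in R (the software the authors report using). The data frame and its columns (study, es_id, r, n, flip) and the example values are hypothetical illustrations, not the authors' actual data or code.

```r
# Hypothetical data: one row per effect size, effect sizes nested within studies.
dat <- data.frame(
  study = c(1, 1, 2, 3),                # study identifier (level 3)
  es_id = 1:4,                          # effect size identifier (level 2)
  r     = c(0.35, -0.20, 0.50, 0.28),   # reported correlations
  n     = c(120, 120, 340, 95),         # sample sizes
  flip  = c(FALSE, TRUE, FALSE, FALSE)  # TRUE if the sign must be reversed so that
)                                       # positive = climate related to desirable behavior

# Code all effect sizes in the expected direction
dat$r_dir <- ifelse(dat$flip, -dat$r, dat$r)

# Fisher's r-to-z transformation and its sampling variance 1/(n - 3)
dat$zi <- 0.5 * log((1 + dat$r_dir) / (1 - dat$r_dir))  # identical to atanh(r_dir)
dat$vi <- 1 / (dat$n - 3)
# Equivalently: dat <- metafor::escalc(measure = "ZCOR", ri = r_dir, ni = n, data = dat)

# After the analysis, pooled z-values are transformed back: r = tanh(z)
```

The variance of Fisher's z depends only on the sample size, which is what makes the transformation convenient for pooling effect sizes across studies.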
In most studies, it was possible to extract multiple effect sizes from individual studies. By using a multilevel approach, this meta-analysis accounts for the hierarchical structure of the data, in which effect sizes are nested within studies (Van den Noortgate and Onghena 2003). A three-level random-effects model was applied, which distinguishes three levels of variance: level 1 contains the sampling variance of each effect size, level 2 the variance between effect sizes within a specific study, and level 3 the variance between studies (Assink and Wibbelink 2016). The authors used R (version 3.5.0) with the foreign and metafor packages, employing a multilevel random-effects model (Assink and Wibbelink 2016), as is often done in multilevel analyses (e.g., Assink et al. 2018; Spruit et al. 2016a, b; Ter Beek et al. 2018). To estimate the model parameters, restricted maximum likelihood estimation (REML) was applied (Van den Noortgate and Onghena 2003). The Knapp and Hartung (2003) method was used to test the individual regression coefficients of the models and to calculate the corresponding confidence intervals. Likelihood ratio tests were used to compare the deviance scores of the full model with those of models excluding the variance parameter at level 2 or 3, making it possible to determine whether significant variance is present at these two levels (Assink and Wibbelink 2016). When there was significant variance at these levels, the distribution of effect sizes was considered heterogeneous; this indicates that the effect sizes cannot be treated as estimates of a common effect size, and moderator analyses were then performed. For models including moderators, an omnibus test of the fixed-model parameters was conducted, which tests the null hypothesis that the group mean effect sizes are equal; the test statistics of the moderator analyses were therefore based on the F-distribution.
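A sketch of how such a three-level model can be fitted with the metafor package, continuing the hypothetical data frame from the previous sketch (the moderator column actor is likewise hypothetical). This follows the general approach of Assink and Wibbelink (2016) rather than reproducing the authors' actual code, and assumes a realistic data set in place of the toy one.

```r
library(metafor)

# Three-level random-effects model: sampling variance at level 1,
# variance between effect sizes within studies (es_id) at level 2,
# and between-study variance (study) at level 3.
full <- rma.mv(yi = zi, V = vi,
               random = list(~ 1 | study, ~ 1 | es_id),
               tdist  = TRUE,        # Knapp-Hartung-type t-tests and CIs
               method = "REML", data = dat)
summary(full)

# Likelihood ratio test of the level-3 (between-study) variance:
# refit with that variance component constrained to zero and compare.
no_l3 <- rma.mv(yi = zi, V = vi,
                random = list(~ 1 | study, ~ 1 | es_id),
                sigma2 = c(0, NA),   # fix the study-level component to 0
                tdist  = TRUE, method = "REML", data = dat)
anova(full, no_l3)

# A categorical moderator enters as dummy variables via 'mods';
# the omnibus test of the moderator is then F-distributed.
moderated <- rma.mv(yi = zi, V = vi, mods = ~ factor(actor),
                    random = list(~ 1 | study, ~ 1 | es_id),
                    tdist  = TRUE, method = "REML", data = dat)
```

With the actual data in place, summary(full) reports the pooled effect in the Fisher z metric, which is then back-transformed to r for reporting.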
Results

To assess the relationship between the moral sports climate and the moral behavior of young athletes, a multilevel meta-analysis of 27 independent samples, 117 effect sizes, and N = 7726 participants was conducted. The overall association between moral climate and moral behavior of young athletes, as well as the results of the moderator analyses, are presented in Table 2.

Relationship between Moral Climate and Moral Behavior

Overall, a significant, large positive association between the moral climate in the sports context and the moral behavior of young athletes was found (r = 0.40; 95% CI: 0.33-0.48; p < 0.001). The findings show that a prosocial moral climate is related to more prosocial and less antisocial behavior, whereas an antisocial moral climate is related to more antisocial and less prosocial behavior. The likelihood ratio test comparing models with and without between-study variance (level 3) showed significant variance at the between-study level (σ²_level3 = 0.03, χ²(1) = 24.33; p < 0.0001). The variance between effect sizes within studies (level 2) was significant as well (σ²_level2 = 0.05, χ²(1) = 945.70; p < 0.0001), which indicates a heterogeneous effect size distribution. About 4% of the total effect size variance was accounted for by the sampling variance (level 1), 57% by the variance between effect sizes within studies (level 2), and 39% by the variance between studies (level 3).
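The percentage distribution over the three levels follows directly from the estimated variance components. As a rough back-of-the-envelope check: the level-1 sampling variance is not reported directly, so a typical value of about 0.004 is assumed here, in the spirit of the approach described by Assink and Wibbelink (2016).

```r
sigma2_l1 <- 0.004  # assumed typical sampling variance (level 1; not reported)
sigma2_l2 <- 0.05   # variance between effect sizes within studies (level 2)
sigma2_l3 <- 0.03   # variance between studies (level 3)

total <- sigma2_l1 + sigma2_l2 + sigma2_l3
round(100 * c(level1 = sigma2_l1, level2 = sigma2_l2, level3 = sigma2_l3) / total, 1)
# approximately 4.8%, 59.5%, and 35.7% -- of the same order as the reported
# 4%, 57%, and 39%; the exact split depends on the value assumed for the
# typical sampling variance.
```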
Because the effect size distribution was heterogeneous, the assumption of homogeneous effect sizes was not met. Therefore, a trim and fill procedure to test for publication bias was not performed, because this procedure would not have given a reliable estimate of effect sizes (Terrin et al. 2003). Given this heterogeneity, however, moderator analyses were performed.

Moderator Analyses

Several variables were examined as moderators of the relationship between the moral climate and the moral behavior of young athletes. First, it was investigated whether study characteristics moderate this relationship. The publication year and the type of study significantly moderated the relationship, such that larger effect sizes were found in older and in cross-sectional studies. Publication status was not a significant moderator. Next, sample and sports characteristics were examined, but none proved to be a moderator. The proportion of males, the proportion of athletes of Caucasian descent, the mean age of the sample, and the mean number of years of experience with the particular sports did not moderate the strength of the association between the moral sports context and the moral behavior of young athletes. Whether the sports in the studies were team sports or individual/mixed sports, and contact sports or non-contact/mixed sports, also did not moderate the relationship between moral climate and moral behavior of young athletes. Then, it was examined whether the orientation of the measured moral climate mattered: whether the moral climate was measured as prosocial (e.g., caring climate and prosocial behaviors) or antisocial (e.g., antisocial team norms or cheating behaviors) significantly moderated the association between moral climate and moral behavior, with larger effect sizes for antisocial than for prosocial moral climate. In addition, the actor of the moral climate (i.e., coach, team members, or mix/moral club culture) significantly moderated the relationship between moral climate and moral behavior of young athletes: larger effect sizes were found for team members, compared to coach and a mix/broad moral club culture. Finally, moral behavior characteristics were examined. The type of moral behavior (reported vs. intended behavior, and prosocial vs. antisocial behavior) and whether the behavior was on-field or off-field did not significantly influence the relationship between moral climate and moral behavior of young athletes.

Discussion

Engaging in sports is an important leisure activity for youth and adolescents (Ntoumanis et al. 2012). Because of its social nature, sport provides many opportunities for moral decision making and moral behaviors, resulting in both prosocial and antisocial behaviors (Rutten et al. 2011). Due to the consequences of antisocial acts for the sports experiences of youth, it is important to understand the factors that are associated with the moral behavior of young athletes in order to prevent immoral behavior. Previous research has shown that the moral climate, that is, the socio-moral environment in which sports take place, plays an important role in young athletes' moral behaviors (Guivernau and Duda 2002; Kavussanu and Stanger 2017; Rutten et al. 2008), but to date it was unknown to what extent the moral climate and moral behavior are associated. This multilevel meta-analysis of 27 independent samples, 117 effect sizes, and N = 7726 participants examined the relationship between moral climate in the sports context and moral behavior of young athletes. The strength of the overall relationship between moral climate and moral behavior of young athletes, and potential moderators of this association, were tested.

Overall, a significant, large correlation (r = 0.40) was found, which indicates that a prosocial moral climate is associated with less antisocial and more prosocial behavior, while an antisocial moral climate is associated with more antisocial and less prosocial behavior of youth. These findings suggest that the socio-moral context within which young athletes' behavior takes place can have an important influence on this behavior. In general, this relationship can be explained by Kohlberg's (1984) theory on the influence of group norms on moral behavior (Higgins et al. 1984). Moral decisions in the sports context are made in a social setting, under the influence of peers, coaches, parents, and other social agents. The current study showed that the norms and values of the social actors in the sports context are related to the prosocial and antisocial behaviors of young athletes. The findings are in line with Kohlberg's (1984) theorizing on the importance of group norms in moral decision making and highlight the role of the socio-moral environment in moral behavior in sports.

The majority of the participants in the samples included in this meta-analysis were adolescents. The overall large positive correlation between moral climate and moral behavior in young athletes identified in this study is therefore not directly generalizable to adult populations. Adolescence is a developmental stage that is characterized by enhanced sensitivity to social cues (Blakemore and Mills 2014; Somerville et al. 2010); socio-moral influences in general are thus highly influential in adolescence. Moreover, although parents are often part of the moral sports climate, the moral climate in youth sports is dominated by peers and non-parental role models, such as coaches (Ntoumanis et al. 2012). Adolescence is the time in which youth begin to separate themselves from parents, as they seek emotional support from caring non-familial adults, establish relationships with peers of increasing importance, and feel the need to belong and connect to groups other than the family (Fredricks and Eccles 2008).
Therefore, the finding of the current study that the moral climate is strongly related to the moral behavior of young athletes, and the associated theoretical explanations, may be especially relevant to the adolescent population. Various moderators of the relationship between moral climate and moral behavior of young athletes were found. First, publication year was a significant moderator. Recently published studies yielded smaller effect sizes than older studies. A possible explanation could be that the social environment of youth has changed dramatically over the last 20 years. The social media world nowadays has a major impact on the norms and behavior of youth (Elmore et al. 2017; Lin et al. 2016). With the launch of social media such as Facebook, Instagram, and Snapchat, which allow the creation and exchange of user-generated content and interaction with unfamiliar people on a large scale, unique opportunities to gather normative information from sports celebrities, online peer groups, or certain trends were added to the lives of youth (Elmore et al. 2017), above and beyond the normative influences of parents, coaches, and offline friends. Consequently, the unique contribution of the offline world to the moral behaviors of youth declined, explaining the smaller association between moral climate and moral behavior in recent studies. The design of the study was also a significant moderator of the relationship between moral sports climate and moral behavior. Studies with a cross-sectional design showed larger effect sizes than longitudinal studies. The moral climate in a sports team or club is not a fixed characteristic (Rutten et al. 2010); it may be influenced by different events happening at the club, on the field, or in society, causing the moral climate to fluctuate. The moral climate can therefore be time dependent, making cross-sectional designs more likely to yield significant effects. Third, whether the prosocial dimension (e.g., caring climate and fair play attitudes of coaches) or the antisocial dimension (e.g., aggressive behavior of the coach and antisocial team norms) of the moral climate was measured significantly influenced the relationship between moral climate and moral behavior of young athletes. The antisocial dimension of moral climate yielded larger effect sizes than the prosocial dimension. Baumeister and colleagues (2001) proposed the general law of psychological phenomena in which "bad" influences are stronger than "good" influences. Negatively valenced experiences (for example, antisocial role models, negative communication, and conflict) tend to have a greater impact on behavior than positively valenced experiences (for example, harmonious peer interactions, and being praised and respected; Baumeister et al. 2001). In addition, during adolescence it is common for youth to experiment with risk-taking behaviors, social boundaries, and antisocial behaviors, and it has been argued that these behaviors actually serve developmental purposes in adolescence (Blakemore and Mills 2014; Moffitt et al. 2002). Consequently, adolescents may be more sensitive to antisocial influences than to prosocial influences, explaining why this study found that the antisocial moral climate was more strongly related to moral behavior than the prosocial moral climate in young athletes. Finally, the social actor of the moral climate was a significant moderator of the relationship between moral climate and moral behavior.
This study found larger effect sizes for team members, compared to coaches and a mix or broad moral club climate. This could indicate that the influence of the moral norms and behaviors of team members is larger than that of coaches and the broader club climate, and it underlines the important role peers play during adolescence. Indeed, peer cultures have been shown to be of great importance in shaping the development and behaviors of youth, and adolescence is a time of increased sensitivity to peer influence and decreased sensitivity to adult influences (Knoll et al. 2015; Piaget 1965; van Hoorn et al. 2016). In addition, Piaget (1965) proposed that although adult authority figures can teach children about specific moral values, it is only in the context of peers that youth are truly able to put their morality into practice. This would explain the larger effect sizes for team members in the relationship between moral climate and moral behavior.

Limitations of the Study

Several limitations of this meta-analysis should be mentioned. First, the primary studies included in this meta-analysis did not report on all potential moderators of interest. Therefore, studies with missing information on a particular moderator could not be included in that specific moderator analysis. This may pose a threat to the external validity of those findings, because not all available studies on the relationship between moral sports climate and moral behavior were included. Second, some of the hypotheses on potential moderators could not be fully tested, because there were too few or no primary studies in a particular category. For instance, too few studies reported on the relationship between moral climate and moral behavior in individual and non-contact sports. Third, in the large majority of the primary studies included in the meta-analysis, both constructs were measured by the same informant and the same measurement type (i.e., questionnaires), resulting in the issue of shared method variance (Lindell and Whitney 2001). Also, it is likely that participants "project" their own behavior onto others, so that when they act aggressively they perceive others as acting aggressively, because they want to justify their behavior (Cho and Knowles 2013). These limitations of the primary studies may have inflated the overall effect size in the current meta-analysis. Fourth, some categories of the moderators contained a limited number of studies and effect sizes. As a result, there might be a power problem in identifying moderators with a small influence. For example, the difference between the effect sizes of published and unpublished studies may be considerable (r = 0.12), but it was not identified as a significant moderator. The same was observed for the difference between team sports and a mix of team and individual sports (r = 0.11). A final limitation is the fact that the authors could not check for publication bias, since the assumption of a homogeneous effect size distribution was not met (Terrin et al. 2003). In the case of publication bias, there could be an overrepresentation of significant effect sizes, because especially non-significant findings would be difficult to publish (Thornton and Lee 2000). However, in this study, a publication bias effect is not very likely, because most studies focused on multiple predictors of moral behavior of young athletes, so non-significant results on the moral climate-moral behavior relationship are not expected to influence publication status.
Directions for Future Research and Practice

Despite the limitations, important recommendations for future research can be made from the current study. First, because this study was not able to test all the hypotheses on potential moderators, future studies should focus on answering those questions that are currently left unanswered, for example, by comparing the association between moral climate and moral behavior among different types of sports, in different sports settings, and among different samples. Second, it is important to understand the direction of the association between moral climate and moral behavior. Currently, it is assumed that moral climate has a causal effect on moral behavior, but this has not been tested yet. Studies similar to Sage and Kavussanu (2007) may be insightful. They conducted an experiment in which athletes were randomly assigned to one of two motivational climate conditions (i.e., the extent to which the sports climate is oriented towards promoting task mastery and learning goals or social comparison and performance goals) or a control condition, and examined the effects of motivational climate on the moral behavior of the athletes (Sage and Kavussanu 2007). Although the moral climate may be more difficult to manipulate, and it may not be ethically acceptable to assign youth to an antisocial moral climate condition, it could be possible to follow youth from the start of their sports activities, and investigate whether natural variation in the moral climate influences moral behavior and vice versa. Finally, future studies should examine the predictors of the moral sports climate. For example, all-female teams and good coach-athlete relationships are associated with a prosocial moral atmosphere (Rutten et al. 2007). A predictor of an antisocial moral climate may be teams that consist entirely of youth at risk for antisocial behavior, due to deviancy training. Deviancy training refers to "the interpersonal dynamic of mutual influence during which youth respond positively to deviant talk and behavior" (Dishion and Tipsord 2011, p. 189), and is known for its reinforcing effect on antisocial behavior in at-risk peer groups (Dishion and Tipsord 2011). This is especially important, considering the large influence of team members that was suggested in the current study. In addition, coaches with specific pedagogical skills may be able to create a prosocial moral climate. Understanding the predictors of moral climate may prevent an antisocial moral climate, and therefore antisocial behaviors, in the future. This meta-analysis also offers recommendations for the practice of youth sports. First and foremost, the current study emphasizes the importance of the moral climate in youth sports. More specifically, this study shows that especially an antisocial moral climate is a predictor of less prosocial and more antisocial behavior of young athletes. Therefore, youth sports organizations should make efforts to ban antisocial norms and behaviors within the sports context. Great importance has been given to the coach in preventing both on- and off-field antisocial behavior (Haudenhuyse et al. 2012; Kavussanu and Spray 2006; Martin et al. 2014). Coaches should actively discourage antisocial norms and behaviors of youth (Kavussanu and Spray 2006). To facilitate this, youth sports clubs could consider training coaches to create an "ethical climate" by promoting prosocial behavior, giving effective feedback to youth, and acting as role models of ethical behavior for their athletes.
In addition, this meta-analysis suggested that the influence of peers is even larger than that of coaches. The behaviors and norms of peers could be influenced indirectly by means of training and education of coaches, to make sure that they use appropriate educational techniques when influencing team dynamics and peer interactions (Beauchamp and Eys 2014; Coakley 2011; Conroy and Coatsworth 2006), especially considering that the majority of youth sports coaches do not have formal training in pedagogical coaching strategies or youth development (Wiersma and Sherman 2005). Moreover, youth sports organizations and policy makers could also consider influencing peer cultures directly, for example by developing programs for youth leaders or team captains to enhance positive leadership and ethical behaviors. This is underlined by research showing that adolescents have hyperresponsive neural reward systems to socially desirable peers (Somerville et al. 2010), suggesting that interventions directed at (popular) youth leaders may be especially effective. Although this study implies that team members have a relatively large influence on the moral behaviors of youth, it was also found that the moral behaviors, norms, and values of coaches and the broader club culture are related to the moral behaviors of young athletes. To prevent immoral behavior of young athletes, the moral climate needs to be targeted from a broad perspective, involving all actors of the sociomoral sports environment. According to Fields and colleagues (2010), "effective interventions will likely require multifactorial approaches addressing diverse issues including peer-pressure, coaches' influence, parental examples and expectations, media's influence, sports figures' influence, community and school legislation, referee enforcement of sporting rules, environmental design of sporting venues, etc." (p. 35). In one study, small improvements in the moral team atmosphere and on-field antisocial behavior were found after an intervention using forum theatre performance to address moral behavior in sports for young athletes, coaches, parents, and club managers (Rutten et al. 2010). Although research on interventions aimed at influencing the moral climate and preventing antisocial behavior in youth sports is at its starting point, the current study suggests the importance of such efforts, especially when considering the finding of larger associations between moral climate and moral behavior in cross-sectional studies. It may be that the effect of the moral climate is counteracted when youth are no longer in that environment. Changing the moral climate for the better could lead to improved moral behavior of young athletes.

Conclusions

Despite the important role of sports in the moral development of adolescents, little is known about the factors within the sports context that are associated with the moral behavior of young athletes. One of these factors that has often been linked to athletes' moral behavior is the moral climate, that is, the sociomoral environment in which the sports take place. However, an in-depth understanding of the relationship between moral climate and moral behavior is necessary in order to stimulate prosocial and prevent antisocial behavior of young athletes. The current study is a systematic review that synthesized all available empirical studies on the relationship between moral sports climate and moral behavior of young athletes, by means of a multilevel meta-analysis of 27 independent samples and 117 effect sizes.
Results suggest that the overall association between moral climate and moral behavior is large, indicating a significant role of the moral norms and values of coaches, team members, and the broader club culture in the moral behaviors of young athletes. To promote positive experiences in sports activities, it is important that the sociomoral environment of the sports context is characterized by prosocial norms and values, and by the absence of antisocial attitudes and behaviors. Special attention should be given to team dynamics and peer influences, as the current study showed that team members have a stronger influence on moral behavior, compared to coaches or a broad moral club climate.

Authors' Contribution A.S. conceived of the study, designed the study, participated in the search and coding of the studies, performed the statistical analysis, interpreted the results and drafted the manuscript. M.K. participated in the design of the study and assisted in drafting the manuscript. T.S. and M.I.J. participated in the search and coding of the studies, and critically reviewed the manuscript. All authors read and approved the final manuscript.

Data Sharing and Declaration The datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.

Compliance with Ethical Standards

Conflict of Interest The authors declare that they have no conflict of interest.

Ethical Approval Because the research is a literature review, obtaining approval of an ethical committee was not necessary.

Research Involving Human Participants and/or Animals This research is a literature review, so these statements are not applicable.

Informed Consent This research is a literature review, so these statements are not applicable.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Global Representation of the Fine Structure Constant and its Variation

The fine structure constant, α, is shown to be proportional to the ratio of the quanta of electric and magnetic flux of force of the electron, which provides a new representation that is global across all unit systems. Consequently, a variation in α was shown to manifest due to a differential change in the ratio of the quanta of electric and magnetic flux of force, while a variation in ℏc was shown to manifest due to the common mode change. The representation is discussed with respect to the running of the fine structure constant at high energies (small distances), and a putative temporal drift. It is shown that the running of the fine structure constant is due to equal components of electric screening (polarization of the vacuum) and magnetic anti-screening (magnetization of the vacuum), which cause the perceived quanta of electric charge to increase at small distances, while the magnetic flux quanta decrease. This introduces the concept of the bare magnetic flux quanta as well as the bare electric charge. With regards to temporal drift, it is confirmed that it is impossible to determine which fundamental constant is varying if α varies.

Introduction

The fine structure constant is very important in QED. 1 It was first introduced by Sommerfeld 2 to explain the fine structure in hydrogen atoms, as the ratio of the electron velocity to the speed of light; the meaning of the constant has now evolved to be a measure of the strength of the electromagnetic field. Recent experimental evidence that the fine structure constant may be drifting 3,4 has triggered much interest in theories that account for a drift in fundamental constants. 5-10 It has also provided stimulus to laboratory tests, which aim to improve the precision of measurements of the constancy of the fine structure constant. 11-13 Furthermore, it is well established that the fine structure constant varies with energy (or distance) as one probes close to the electron. 14 In this paper the fine structure constant is represented in terms of only electromagnetic quantities. 15,16 For the SI unit representation, this includes the quantum of electric charge e, the quantum of magnetic flux φ₀, and the permittivity and permeability of free space, ε₀ and μ₀. To solve systems in classical electrodynamics it is common to represent the problem in terms of the co-ordinates of charge or magnetic flux. Both are conserved quantities and are considered dual variables, 17 and one may formulate a problem in terms of either of these quantities and get the same solution. This fact may lead one to consider solving quantum systems in terms of magnetic flux, which has been attempted in the past. Jehle spent a large part of his life developing a theory of the electron and elementary particles based on quantized magnetic flux loops. 18 Also, Dirac suggested the existence of the magnetic monopole to describe the charge of the electron. 19 Both these theories rely on a physical relationship between flux and charge. In the case of the Jehle model, he suggested that the electron was made of quantized flux loops, spinning at the Zitterbewegung frequency. Since it is well known that magnetic flux is quantized, it seems plausible that the quantized value of flux should play a role in the description of particle physics. Jehle developed a series of papers which attempted to do this, based on the quantized flux loop. 18,20-22
The relation between the electric and magnetic properties is fundamental to electrodynamics, as charge in motion produces magnetic flux.

Alternative representation of the fine structure constant

Usually the fine structure constant (or coupling constant) is defined from the static Coulomb force between two charges. The force, F_e, in SI units is given by

$$F_e = \frac{e^2}{4\pi\varepsilon_0 r^2}. \quad (1)$$

Here e is the electric charge in Coulombs, r is the separation between the charges in meters, and ε₀ is the permittivity of free space in Farads per meter. To consider the strength between two static charges in terms of fundamental constants, one must ignore the inverse square nature of the force and just consider the constant of proportionality \(e^2/4\pi\varepsilon_0\). Because this proportionality constant has the dimensions of energy × distance, a dimensionless constant can be constructed by dividing by ℏc, which has the same units. Thus, it is usual to write the fine structure constant in SI units as

$$\alpha = \frac{e^2}{4\pi\varepsilon_0\hbar c}. \quad (2)$$

Similar coupling constants are also written for the strong, weak and gravitational forces. If one analyses these coupling constants it may be seen that they too are represented as dimensionless constants by comparing the constants of proportionality of the force with ℏc. 1 Thus, when we discuss these dimensionless constants relative to one another they represent comparative strengths of the different forces. For example, the strong force coupling constant is approximately one, the electromagnetic is 1/137, the weak is 10⁻⁶, and gravity 10⁻³⁹. The logic of defining (2) above leads many to state that α must be proportional to the strength of the Coulomb (electric) force, as it is proportional to e². However, atomic systems are not just comprised of static charge, as they also exhibit spin and magnetic moments. Thus, one may expect the magnetic nature to be present in the definition as well as the electric. The magnetic nature is actually hidden in the ℏc term that we divided by. For example, all transitions between electron orbit and spin states, when they interact with electromagnetic radiation, are governed by

$$E_{ph} = \frac{hc}{\lambda_{ph}}, \quad (3)$$

where E_ph is the energy of the absorbed or emitted photon, λ_ph is the wavelength, and hc is the constant of proportionality. Also, ℏc is the proportionality constant that relates the Casimir force to the dimensions of two electromagnetically neutral electrodes, and thus incorporates both the electric and magnetic zero-point properties of the vacuum. The key relations between the fundamental constants for classical and quantum electromagnetism in SI units are

$$c = \frac{1}{\sqrt{\varepsilon_0\mu_0}}, \quad (4)$$

$$\phi_0 = \frac{h}{2e}, \quad (5)$$

$$\alpha = \frac{Z_0}{2R_H}, \qquad Z_0 = \sqrt{\frac{\mu_0}{\varepsilon_0}}, \quad R_H = \frac{h}{e^2}. \quad (6)$$

Here μ₀ is the permeability of free space and φ₀ is the quantum of magnetic flux due to the spin of the electron.

Representation in terms of static magnetic and electric flux of force

In this section a representation of the fine structure constant is formulated in terms of the quanta of static magnetic and electric flux of force, which turns out to be global for all unit systems (see Appendix A). First, a simple classical static model of a magnetic flux is introduced, which is similar to a magnetic circuit. Since magnetic fields are solenoidal, magnetic flux may be modeled as a loop (i.e. no matter how complicated the path, it will eventually come back to where it started). A simple circuit model of this type is shown in figure 1. One may consider the flux loop as a circular magnet with a north and a south pole held together by an attractive magnetic force.
In SI units the force between the north and south pole is given by

$$F_\phi = \frac{\phi_0^2}{2\mu_0 A}, \quad (7)$$

where A is the effective cross-section area of the magnetic loop, as shown in figure 1. The magnetic and Coulomb forces given by equations (1) and (7) are not constants. For example, (1) follows the inverse square law and (7) depends inversely on the cross-section area A. These geometric dependencies may be removed by considering instead the total flux of force,

$$\Phi_e = 4\pi r^2 F_e = \frac{e^2}{\varepsilon_0}, \qquad \Phi_\phi = 2A F_\phi = \frac{\phi_0^2}{\mu_0}. \quad (8)$$

Here, Φ_e and Φ_φ may be considered as the flux of electric and magnetic force generated by a static quantized charge, e, and a static quantized magnetic flux loop, φ₀, respectively. Considering equations (4), (5), (6) and (8) one can then show the following relations:

$$\alpha = \frac{1}{4}\sqrt{\frac{\Phi_e}{\Phi_\phi}}, \quad (9)$$

$$hc = 2\sqrt{\Phi_e\Phi_\phi}. \quad (10)$$

Equations (9) and (10) together now portray a representation of α and hc which is symmetric in its electric and magnetic parts. In Appendix A it is shown that equations (9) and (10) are global representations for all unit systems, which is important if one wants to analyze the variation of α in terms of dimensioned constants. In the following sections, this permits an analysis of the running of the fine structure constant, as well as putative temporal variations, which would not otherwise be possible.

Variation of the fine structure constant in terms of electric and magnetic quantities

It is interesting to consider the meaning of (9) and (10) if the fine structure constant varies. Because they are independent of unit representation, they may be implicitly differentiated to obtain

$$\frac{\delta\alpha}{\alpha} = \frac{1}{2}\left(\frac{\delta\Phi_e}{\Phi_e} - \frac{\delta\Phi_\phi}{\Phi_\phi}\right), \qquad \frac{\delta(hc)}{hc} = \frac{1}{2}\left(\frac{\delta\Phi_e}{\Phi_e} + \frac{\delta\Phi_\phi}{\Phi_\phi}\right). \quad (11)$$

Thus, we have succeeded in representing a change in α and hc in terms of the change of the electric and magnetic quantities defined by (8). Here a change in α is due to a differential variation in the strength of the electric and magnetic flux of force, while a change in hc is due to a common mode variation.

The running electromagnetic coupling constant

It is well established that at high energies, as one probes closer to the sub-structure of an electron, the fine structure constant increases. 14 This is due to the perceived screening of the bare electron charge by polarized virtual electrons at low energies, E (or large distance). If (11) is applied to this situation, we can assume hc does not vary and the following must be true,

$$\frac{\delta\Phi_e}{\Phi_e} = -\frac{\delta\Phi_\phi}{\Phi_\phi}, \quad (12)$$

and thus,

$$\frac{\delta\alpha}{\alpha} = \frac{\delta\Phi_e}{\Phi_e} = -\frac{\delta\Phi_\phi}{\Phi_\phi}. \quad (13)$$

Thus at low energies, the density of the curl-free electric lines of force is reduced, due to the vacuum field opposing the field of the bare electron. In actual fact the virtual electrons are also in motion and must possess magnetic properties as well. 27 In contrast, at low energies the density of the divergence-free magnetic lines of force of the bare electron is enhanced. This means the magnetic moments of the virtual electrons align with the magnetic moment of the bare electron, and one can say the vacuum is also magnetized like a paramagnet. This has the effect of the magnetic flux quanta reducing to a lower value at high energies, in the opposite way to the charge, and the magnetic flux quanta is thus anti-screened. This leads to the concept of the "bare magnetic flux quanta" being less than the magnetic flux quanta at low energies. Previously, 28 this concept has also been explained by a 'bare magnetic monopole charge'. However, we point out here that it is not necessary to introduce the concept of the magnetic monopole to describe this phenomenon, when one may define it in terms of closed loops of flux. Also, because of equation (11) it is evident that the running of the fine structure constant is due to equal components of electric screening and magnetic anti-screening.
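Equations (9) and (10) as reconstructed above can be checked numerically. The following short Python snippet (an illustration added here, not from the original paper) evaluates both sides with CODATA values from scipy.constants:

```python
from math import sqrt
from scipy.constants import e, h, c, epsilon_0, mu_0, fine_structure

phi_0 = h / (2 * e)            # magnetic flux quantum, Eq. (5)
Phi_e = e**2 / epsilon_0       # electric flux of force, Eq. (8)
Phi_phi = phi_0**2 / mu_0      # magnetic flux of force, Eq. (8)

print(0.25 * sqrt(Phi_e / Phi_phi))  # ~7.2974e-3
print(fine_structure)                # alpha; Eq. (9) holds
print(2 * sqrt(Phi_e * Phi_phi))     # ~1.9864e-25 J m
print(h * c)                         # Eq. (10) holds
```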
Putative drift of the fine structure constant

It is widely accepted that, when considering problems that deal with time variations of fundamental constants, it only makes sense to consider dimensionless constants. 7,10,29-33 However, because (9) and (11) are global across all unit systems, a putative drift in the fine structure constant may be interpreted as a differential drift between the strength of the electric and magnetic flux of force (which of course have dimension). In this section we show that this finding does not contradict the accepted belief that one cannot determine whether e or c drifts if there is a putative drift in α. 7,10,29-33 In general one could also consider h as well as e or c, but mostly it has been assumed to remain constant. In this work we consider more generally the product hc, as it naturally fits with the magnetic and electric flux of force representation.

Case I: Firstly, it is assumed that the total electromagnetic energy of an electron remains constant. In this case \(\delta\Phi_e/\Phi_e = -\delta\Phi_\phi/\Phi_\phi\), and from (11) the putative drift in α may be written as

$$\frac{\delta\alpha}{\alpha} = \frac{\delta\Phi_e}{\Phi_e} = -\frac{\delta\Phi_\phi}{\Phi_\phi}, \quad (14)$$

while hc remains constant. Physically this means that electric energy would be converted to magnetic energy or vice versa. If this occurred one would then get a drift in α independent of hc, due to only a differential change in the electric and magnetic flux of force.

Case II: Secondly, the magnetic and electric parts are assumed to vary at the same rate (common mode drift). In this case \(\delta\Phi_e/\Phi_e = \delta\Phi_\phi/\Phi_\phi\), and from (11) α will not drift but hc will. In this case another energy process would need to be involved.

Case III: Finally, the case is considered when putative α and hc drifts are related. In this case, the differential and common mode components of (11) must be correlated. This would occur if, for example, another form of energy was converted to only magnetic or only electric energy but not both, such that either \(\delta\Phi_e/\Phi_e = 0\) or \(\delta\Phi_\phi/\Phi_\phi = 0\). In this case

$$\frac{\delta\alpha}{\alpha} = \pm\frac{\delta(hc)}{hc}. \quad (15)$$

Thus, to unequivocally interpret which fundamental constant drifts if α drifts, both the common mode component and the differential component of (11) must be known. However, since hc (the common mode component) has dimension, it makes no sense to try and measure its drift. Actually, the measurement of drift in hc has been attempted previously; for reviews on these measurements see Ref. 1. However, Bekenstein 34 showed these attempts generated null results, as the constancy was actually implied in the analysis. Therefore, we still may conclude that the absolute variation of any of the dimensioned fundamental constants, whether it be e, hc, Φ_e or Φ_φ, cannot be measured. However, despite this, the time variation of α may be interpreted as the differential drift of the strength of the Coulomb force with respect to the magnetic force, even though they have dimension.
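The logarithmic-derivative relations in (11), on which the three cases above rest, follow mechanically from (9) and (10). A minimal symbolic check (added here for illustration) using sympy confirms the ±1/2 coefficients:

```python
import sympy as sp

Phi_e, Phi_phi = sp.symbols("Phi_e Phi_phi", positive=True)
alpha = sp.Rational(1, 4) * sp.sqrt(Phi_e / Phi_phi)  # Eq. (9)
hc = 2 * sp.sqrt(Phi_e * Phi_phi)                     # Eq. (10)

# Logarithmic derivatives: (df/dPhi) * Phi / f for each flux of force.
for f in (alpha, hc):
    coeffs = [sp.simplify(sp.diff(f, v) * v / f) for v in (Phi_e, Phi_phi)]
    print(coeffs)  # alpha -> [1/2, -1/2]; hc -> [1/2, 1/2], as in Eq. (11)
```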
Discussion

The fine structure constant, α, may be represented in many ways depending on the unit system and the selection of fundamental constants. In this paper, a representation which is global across all unit systems has been successfully obtained in terms of the quanta of electric and magnetic flux of force of the electron. It was shown that α is proportional to the square root of the ratio of the quantized electric and magnetic flux of force, while hc was shown to be proportional to the square root of their product. Because the representation is global across all unit systems, an analysis of the variation of the fine structure constant was successfully made. The variation in α was described as a differential change in the flux of force associated with the quanta of electric charge and magnetic flux of the electron. In contrast, a change in hc (and hence the Casimir force) can be described as a common mode change in the same variables. With regards to the running of α at high energies, it was shown that it may be described by both vacuum polarization and magnetization effects, and it introduces the new concept of the 'bare magnetic flux quanta'. It was also shown that it is not possible to determine the physical process behind any putative α drift, as the measurement process does not allow the determination of the common mode drift of the quanta of electric and magnetic flux of force, even if it occurs.

Appendix A. Global unit system

In this appendix equations (9) and (10) are shown to be global across all unit systems. An arbitrary unit system in vacuum may be specified by three constants k₁, k₂ and k₃ (see below). In the general unit system, (6) becomes

$$\alpha = \frac{Z_{0,g}}{2R_{h,g}}, \quad (A3)$$

where the vacuum impedance is given by \(Z_{0,g} = 4\pi\sqrt{k_1 k_2}/k_3\) and the quantum Hall resistance by \(R_{h,g} = 2\phi_{0,g}/e_g\). In the same way, the electric, \(\Phi_{e,g}\), and magnetic, \(\Phi_{\phi,g}\), flux of force of the electron given by (8) can be shown to be

$$\Phi_{e,g} = 4\pi k_1 e_g^2, \qquad \Phi_{\phi,g} = \frac{k_3^2\,\phi_{0,g}^2}{4\pi k_2}. \quad (A4)$$

Combining (A4) with (A3) to eliminate k₁, k₂ and k₃ gives

$$\alpha = \frac{1}{4}\sqrt{\frac{\Phi_{e,g}}{\Phi_{\phi,g}}}, \quad (A5)$$

which is equivalent to (9) as expected. To proceed and show that (10) is also global across all unit systems, the fine structure constant given in (2) can be represented in the global unit system as

$$\alpha = \frac{k_1 e_g^2}{\hbar_g c_g}, \quad (A6)$$

and combining this with (A4) and (A5) gives

$$h_g c_g = 2\sqrt{\Phi_{e,g}\Phi_{\phi,g}}, \quad (A7)$$

as expected. From the above analysis, it is also possible to represent the constants k₁, k₂ and k₃ in terms of only fundamental constants, and this is valid for all unit systems. Examples for four commonly chosen unit systems are shown below in Table I. To uniquely determine an arbitrary unit system in classical electrodynamics and in vacuum, one needs to specify the three constants k₁, k₂ and k₃.† However, in quantum electrodynamics one other constant that specifies the quantum nature must also be specified, along with k₁, k₂ and k₃. For example, one could choose one of \(\hbar_g\), e_g or \(\phi_{0,g}\), and all others would follow by applying equations (A1) to (A8). Another approach would be simply to choose the values of the four fundamental constants c_g, \(\hbar_g\), e_g and \(\phi_{0,g}\); then all other parameters in the unit system, including k₁, k₂ and k₃, could be determined.

Table I. Values of some constants of quantum electrodynamics for some selected unit systems.

† The inclusion of media is left out, as it adds the extra complication of including the polarization and magnetization of magnetic and dielectric media without adding to the discussion. However, if one wants to define the permittivity and permeability within the unit system, it is necessary to consider these details (see Jackson for further details 35).

In theoretical unit systems, constants such as c_g and \(\hbar_g\) are selected to be unity. In contrast, SI and CGS units were developed within a framework that would facilitate relating the standard units of mechanics to electromagnetism. In the SI system, the definition of the absolute ampere and the speed of light determine the parameters c_SI = 299792458 m/s, e_SI = 1.60217733×10⁻¹⁹ C, k₂ = 10⁻⁷ and k₃ = 1. The rest of the parameters listed in Table I can then be determined from these four values. In CGS units a similar approach could be made.
The Efficacy of Integrated Rehabilitation for Post-Stroke Anxiety: Study Protocol for a Prospective, Multicenter, Randomized Controlled Trial

Background Post-stroke anxiety (PSA) remains a challenging medical problem. Integrated rehabilitation involves a combination of traditional Chinese medicine (TCM) and Western conventional rehabilitation techniques. Theoretically, integrated rehabilitation is likely to have significant advantages in treating PSA. Nevertheless, the therapeutic effect of integrated rehabilitation needs to be verified in large-scale trials with sound methodology. Thus, the aim of this trial is to assess the efficacy and safety of integrated rehabilitation for PSA. Methods The study is a prospective, multicenter, randomized, controlled trial involving 188 PSA patients from four clinical centers in China. Eligible participants will be randomly divided into the integrated rehabilitation group or the standard care group. Participants in the integrated rehabilitation group will receive a combination of TCM and Western conventional rehabilitation methods, including acupuncture, repeated transcranial magnetic stimulation, traditional Chinese herbal medicine, and standard care. The primary outcome will be the Hamilton Anxiety Rating Scale (HAM-A). The secondary outcomes will include the Self-Rating Anxiety Scale (SAS), the Activities of Daily Living (ADL) scale, the Montreal Cognitive Assessment (MoCA) scale, the simplified Fugl-Meyer Assessment of motor function (FMA) scale, and the Pittsburgh Sleep Quality Index (PSQI). Outcome measurements will be performed at baseline, at the end of the 4-week treatment, and at the 8-week follow-up. Conclusion The results of this trial will ascertain the efficacy and safety of integrated rehabilitation for PSA, thereby providing evidence regarding integrated rehabilitation strategies for treating PSA. It will also provide up-to-date evidence for patients, clinicians, and policy-makers. Trial Registration ClinicalTrials.gov NCT05147077.

Introduction

Stroke is the second leading cause of death globally. 1 Estimated global lifetime risk data show that the lifetime risk of stroke for people aged above 25 years was 24.9%. 2 China has the highest estimated lifetime risk of stroke (39.3%) in the world, according to the global, regional, and country-specific lifetime risk of stroke and the 2017 Global Burden of Disease Study. 3 Stroke has been the leading cause of death and disability in China in recent years, 4 and PSA is associated with increased disability, mortality, and recurrence rates of stroke. Therefore, it is vital to perform early clinical interventions for PSA to improve the prognosis and restore the social function of stroke patients. To date, there have been no specific and uniform guidelines for the treatment of PSA. Current treatments mainly include pharmacological therapies, psychotherapy, and rehabilitation. In addition, some patients also seek complementary and alternative therapies. 7,8 As it is widely believed that anxiety is associated with increased disability and decreased quality of life, 9 improving movement function is fundamental for PSA. Thus, rehabilitation therapy is an important treatment approach for PSA. However, it is worth noting that the majority of previous studies focus on the efficacy of a specific rehabilitation therapy but ignore the important concept of "integrated rehabilitation", thereby failing to achieve optimal efficacy. Thus, it is of significance to develop an integrated rehabilitation scheme for PSA.
The concept of "integrated rehabilitation" in our study is derived from "integrative medicine", which is defined as "medicine that reaffirms the importance of the relationship between practitioners and patients, focuses on the whole person, and makes use of all appropriate therapeutic approaches, healthcare professionals and disciplines (including conventional and complementary therapies) to achieve optimal health and healing". 10 Notably, in most Asian countries, especially China, with the aim to optimize the therapeutic effect through closer collaboration between traditional Chinese medicine (TCM) and Western medicine, clinicians often adopt an integrative medicine approach to treat a wide range of diseases in hospitals, 11,12 rather than a sole TCM or Western medicine approach. Therefore, the concept of "integrated rehabilitation" in this study can promote the development of a rehabilitation scheme that integrates both TCM and Western medicine, which has promising guiding value in real-world practice. Specifically, our integrated rehabilitation scheme incorporates two TCM therapies (ie, acupuncture and Chinese herbal medicine), repeated transcranial magnetic stimulation (rTMS), and standard care. The rationales for selecting these components are explained in detail as follows. First, acupuncture and Chinese herbal medicine are two important components of TCM, which is an essential part of China's healthcare system. Acupuncture is commonly used for mental diseases such as anxiety disorders, and previous systematic reviews and meta-analyses reveal promising results that favor the efficacy and safety of acupuncture for anxiety disorders. 13,14 Similarly, findings of previous meta-analyses suggest that Chinese herbal medicine has certain beneficial effects on reducing anxiety. 15,16 In addition, with increasing numbers of clinical studies, there is growing evidence that anxiety can be treated with rTMS. 17,18 Lastly, standard care (eg, pharmacological drugs and conventional rehabilitation) is a fundamental treatment for PSA. Taken together, the integrated rehabilitation scheme incorporating the aforementioned therapies is commonly used and has promising application value for PSA in real-world clinical settings, especially in Asia, so we intend to investigate the therapeutic effect of this integrated rehabilitation protocol.

Study Objectives and Hypothesis

The main objective of this trial is to compare the efficacy and safety of integrated rehabilitation with standard care. To further enrich the treatment of PSA in real-world practice, we choose an integrated rehabilitation scheme that combines TCM and Western conventional rehabilitation treatments. We hypothesize that integrated rehabilitation will lead to a more significant improvement in PSA when compared with standard care.

Study Design and Setting

The study is a prospective, multicenter, randomized controlled clinical trial that includes 188 participants from four clinical centers in China. All patients who meet the enrollment criteria will be randomly assigned to either the integrated rehabilitation group or the standard care group. The study duration will be 12 weeks, which includes a 4-week treatment period and an 8-week follow-up period. The research flow chart is shown in Figure 1. The trial schedule of enrolment, treatments, and assessments is displayed in Table 1.
The reporting of this protocol is strictly based on the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statement. 19

Randomization and Allocation Concealment

After signing the informed consent, eligible patients will be randomly divided into the integrated rehabilitation group or the standard care group in a 1:1 ratio through a central randomization system. Personnel permission levels in the randomization system are rigorous, and only the highest-level system administrator has access to the randomization scheme. Once a participant is enrolled, an independent staff member, who is not involved in participant screening, treatment, or outcome assessment, will operate the randomization system to generate a random number to determine the corresponding group that the participant should be assigned to. The researcher who applies for the random number will call the randomization center and enter the subject's information by voice or via a text message. When the central randomization system has received the application for a random number, the patient's random number and group information will be sent to the researcher. Allocation concealment will be maintained by means of central randomization.

Blinding Implementation

Owing to the complexity of the integrated rehabilitation procedures and the distinct differences in interventions between the two groups, it is hard to blind both the manipulators of integrated rehabilitation and the patients in this study. However, to uphold the blinding principles and avoid potential bias as much as possible during our trial, the roles of operators, outcome assessors, and statistical analysts will be separated and performed by different specially assigned persons. The outcome measurement and statistical analyses will be carried out by specially designated researchers who are unaware of the grouping information.

Sample Size Calculation

Participants will be divided into two groups in a 1:1 ratio. Due to the lack of previously published relevant RCTs, the sample size calculation is based on the results of our pilot study. It is assumed that after 4 weeks of intervention, the mean of the HAM-A scores is 12.7 in the integrated rehabilitation group and 14.4 in the standard care group, with a pooled standard deviation of 3.27. According to the sample size formula for comparing two means, to achieve 80% statistical power and keep the type I error rate below 0.05, a total of 156 participants (with 78 in each group) will be required to detect a two-sided significant difference between the two groups. To compensate for an estimated dropout rate of 20%, 188 participants (with 94 in each group) will ultimately be required.
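As an illustrative sketch only (the trial itself uses a dedicated central randomization system, and the exact sample size formula is not reproduced here; the block size and seed below are hypothetical choices), the dropout adjustment and a simple blocked 1:1 allocation can be expressed as follows:

```python
import math
import random

# Dropout adjustment: inflate the 78 participants required per group
# by the anticipated 20% dropout rate, as described in the protocol.
n_required_per_group = 78
dropout_rate = 0.20
n_enrolled_per_group = math.ceil(n_required_per_group * (1 + dropout_rate))
print(n_enrolled_per_group, 2 * n_enrolled_per_group)  # 94 per group, 188 total

# Hypothetical blocked 1:1 allocation list (block size 4, fixed seed);
# the actual trial allocates through its central randomization system.
rng = random.Random(2021)
sequence = []
while len(sequence) < 2 * n_enrolled_per_group:
    block = ["integrated rehabilitation"] * 2 + ["standard care"] * 2
    rng.shuffle(block)
    sequence.extend(block)
print(sequence[:8])
```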
Eligibility Criteria

The inclusion criteria for this trial are as follows:
• Patients meet the diagnostic criteria of cerebral infarction or cerebral hemorrhage with anxiety disorder, and meanwhile meet the TCM syndrome differentiation of the "Liver stagnation transforming into fire" type;
• Conscious, with stable vital signs, the ability to understand and follow instructions, Barthel Index (BI) > 20, FMA (0-95);
• 25 ≤ age ≤ 85 years, male or female;
• The first episode of stroke, with no personal or family history of mental disability before the stroke;
• Anxiety level mild or moderate (ie, HAM-A scores ≥7 and ≤21);
• Anxiety symptoms occur after the stroke in a clear temporal sequence;
• The duration of the disease ranges between 2 weeks and 36 months after the stroke;
• Participants can understand the study protocol, and written informed consent is signed.

The exclusion criteria are as follows:
• Patients with acute brain trauma, brain infection, effusion, or a space-occupying tumor;
• Intracranial metal or other foreign bodies (such as orthopedic materials, arterial clips, etc), cardiac pacemakers, deep brain stimulators, and other electronic devices;
• Previous seizures, including primary and secondary seizures;
• Patients with severe cardiovascular, liver, or kidney complications, or patients with a psychiatric history;
• Significant cognitive impairment (MMSE: illiteracy ≤17, primary school level ≤20, secondary school level (including technical secondary school) ≤22, and college level (including junior college) ≤23 points), or hearing impairment or aphasia;
• Patients who are unconscious;
• Patients who have taken psychotropic drugs or been treated for anxiety in the last month;
• People with unstable vital signs or patients with other mental disorders.

Participant Recruitment

All PSA patients will be recruited from four clinical centers, which include the Third Affiliated Hospital of Zhejiang Chinese Medical University, Zhejiang General Hospital of the Armed Police, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, and The Second Hospital of Jinhua. Recruitment strategies will include posters, public social networks, and advertisements.

Interventions

Standard Care Group

The standard care includes pharmacological drugs, general care, and exercise therapy. Pharmacological drugs mainly consist of anti-anxiety drugs and cerebrovascular drugs. For anti-anxiety treatment, the selective 5-HT reuptake inhibitor escitalopram oxalate (10 mg qd, 4 weeks) will be used for all participants, which has anxiolytic effects as an antidepressant. Other medications follow the Chinese Stroke Association guidelines for the clinical management of cerebrovascular disorders. 20 When appropriate, corresponding drugs that have the effect of lipid regulation, blood sugar control, antihypertension, and anticoagulation will be used depending on patients' individual conditions. In addition, general care and exercise therapy will be provided. General care is the assessment of the patient's general and psychological condition and family situation by the nursing staff. The exercise therapy is based on the Chinese Stroke Association guidelines for the clinical management of cerebrovascular disorders and the 2019 update of the clinical management of stroke rehabilitation. 21 The patient's Brunnstrom stage will be assessed by a professional therapist, and the prescriptions for motor function rehabilitation will be tailored to the changes in muscle strength, muscle tone, and motor function. These include, but are not limited to, good limb positioning, joint release and muscle massage on the affected side, muscle strength training, balance training in sitting and standing positions, gait training, etc. Each individual exercise therapy session lasts 30-45 minutes, once a day, 5 times per week, and the treatment duration lasts 4 weeks.

Integrated Rehabilitation Group

In addition to standard care, participants in the integrated rehabilitation group will receive a combination of TCM and Western conventional rehabilitation methods, including acupuncture, repeated transcranial magnetic stimulation (rTMS), and traditional Chinese herbal medicine.

Acupuncture

Locations of the acupoints applied in this trial are summarized in Table 2. The treatment frequency of acupuncture will be 5 times per week, and the treatment duration will last 4 weeks.
rTMS

We will administer rTMS using a CCY-I-type magnetic field stimulator (Wuhan Yiruide Medical Equipment New Technology Company Limited). Active treatment will consist of 900 pulses per session, delivered over the right prefrontal cortex at 90% of the motor threshold (MT). We will determine the MT at screening and at the beginning of every treatment block (every 5 sessions) using electromyographic measurement of the resting left thumb (left abductor pollicis brevis). Treatment will be delivered at a pulse frequency of 1 Hz with continuous stimulation, and each rTMS treatment session lasts for 15 minutes. Patients will receive 20 sessions of treatment, with a frequency of 5 times per week, and the treatment duration will last 4 weeks.

Traditional Chinese Herbal Medicine

Based on syndrome differentiation, the traditional Chinese herbal medicine "Danzhi Xiaoyao powder" will be selected, and its specific prescription is as follows: Danpi 10g, fried Zhizi 10g, Danggui 12g, Baishao 12g, fried Chaihu 6g, Fuling 10g, fried Baishu 10g, and roasted Gancao 3g. The Chinese herbal medicine will be taken 2 times a day for 4 weeks.

Outcomes Measures

Primary Outcome

The primary outcome is the Hamilton Anxiety Rating Scale (HAM-A). This scale will be assessed before treatment, at the end of the 4-week treatment, and at the end of the 8-week follow-up after treatment. The HAM-A scale is utilized to assess the subjects' anxiety symptoms. The CCMD-3 Chinese Diagnostic Criteria for Mental Disorders 22 lists the HAM-A scale as an important diagnostic tool for anxiety disorders, 23 and it is often used clinically as a recognized tool for the diagnosis and classification of anxiety disorders. 24 It is widely accepted that the HAM-A scale is a valid and reliable tool for stroke patients, 25,26 and it is also the most extensively used semi-structured assessment scale for evaluating the severity of anxiety, 26 so we chose it as the primary outcome.

Secondary Outcomes

Secondary outcomes include Zung's Self-Rating Anxiety Scale (SAS), the Activities of Daily Living (ADL) scale, the Montreal Cognitive Assessment (MoCA) scale, the simplified Fugl-Meyer Assessment of motor function (FMA) scale, and the Pittsburgh Sleep Quality Index (PSQI). These scales will be assessed before treatment and at the end of the 4-week treatment. The ADL and PSQI will be assessed again at the end of the 8-week follow-up. The justifications for selecting them as secondary outcomes are elaborated as follows. The SAS is used to assess the subjective feelings of patients with anxiety. There are 20 items on a four-point scale, and higher scores indicate greater anxiety. 27,28 The SAS has good psychometric credentials, 28 is suitable for all types of psychiatric disorders in which anxiety symptoms are predominant, and can assist in the assessment of patients' anxiety status. The ADL scale consists of the following two components: 1) the Physical Self-maintenance Scale (PSMS), which consists of six items (ie, toileting, eating, dressing, grooming, walking, and bathing); 2) the Instrumental Activities of Daily Living Scale (IADL), which consists of eight items (ie, telephone, shopping, meal preparation, housework, laundry, transportation, medication, and economic self-management). 29,30 Another aim of this trial is to improve the patient's ability to perform activities of daily living, so the ADL is used as one of the secondary indicators.
The MoCA is based on clinical experience and is referenced to the Mini-Mental State Examination; it has been widely used to assess the cognition of test-takers and is more sensitive in screening for mild cognitive impairment. The scale covers eight cognitive domains, including attention and concentration, executive function, memory, language skills, visual structure, abstract thinking, numeracy, and orientation, across 11 items. The total score is 30, and a score of more than 26 is considered to be normal. If the subject has less than 12 years of education, one point is added to the total score. The questionnaire is ideal for experimental use because of its sensitivity, coverage of important cognitive domains, and short testing time. 31,32 It is worth noting that cognitive decline and anxiety symptoms commonly interact and co-occur, 33 which is why we set the MoCA as one of the secondary outcomes. It is widely accepted that the FMA is the gold standard for evaluating motor function in patients with post-stroke hemiparesis, 34 with a maximum score of 100 for motor function (66 for the upper limb and 34 for the lower limb). 35 Therefore, we utilize it as one of the main components of the secondary outcomes. The PSQI is a standardized clinical tool that measures a wide variety of sleep quality markers. 36 It combines the qualitative and quantitative aspects of sleep to assess the quality of the subject's sleep over the last month. The scale can be used to evaluate the sleep behavior and habits of the general population, as well as the sleep quality of clinical patients. Notably, sleep disorders can predict depression and anxiety, and short sleep duration and poor sleep quality are identified as risk factors for PSA. 37,38 Reduced sleep quality is a common symptom in patients with depressive disorders. Thus, the PSQI is adopted as one of the secondary outcomes.

Statistical Analysis

Statistical analysis will be performed using SPSS 25.0 for Windows. Continuous variables in normal distribution will be expressed as means ± standard deviations, while continuous variables in non-normal distribution will be reported as medians with quartiles. In addition, categorical variables will be expressed as frequencies and percentages in each category. For continuous variables in a normal distribution, repeated-measures analysis of variance (repeated-measures ANOVA) will be used to perform within-group and between-group comparisons by assessing changes in continuous variables before and after treatment at different time points, while a non-parametric test will be adopted to conduct within-group and between-group comparisons for continuous variables in non-normal distribution. For categorical variables, between-group differences will be assessed by the chi-square test or Fisher's exact test. All hypothesis tests will be performed as two-sided tests, and differences will be considered statistically significant at P < 0.05. The statistical analysis plan will be completed by professional statisticians who are not involved in other procedures of the trial. After all data entry and review, the statistician will complete the analysis in time and produce a written statistical analysis report.
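As a minimal, illustrative sketch of how the planned group-by-time comparison could be implemented (the protocol specifies repeated-measures ANOVA in SPSS; the linear mixed model below is a closely related formulation, and all data and variable names are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one HAM-A score per subject per visit
# (baseline, week 4, week 12), with group coded 0 = standard care,
# 1 = integrated rehabilitation.
rng = np.random.default_rng(0)
records = []
for subject in range(188):
    group = subject % 2
    for time in (0, 4, 12):
        hama = 18 - 0.4 * time - 0.3 * group * time + rng.normal(0, 3)
        records.append({"subject": subject, "group": group,
                        "time": time, "hama": hama})
df = pd.DataFrame(records)

# Random intercept per subject; the group:time interaction tests whether
# HAM-A declines faster under integrated rehabilitation.
model = smf.mixedlm("hama ~ group * time", df, groups=df["subject"]).fit()
print(model.summary())
```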
Ethical Approval and Study Registration

The trial will be conducted in accordance with the principles of the Declaration of Helsinki. It has been approved by the Medical Ethics Committee of the Third Affiliated Hospital of Zhejiang Chinese Medical University (Approval ZSLL-KY-2021-009-01). Written informed consent will be signed by each participant who is included in the trial. The study protocol has been prospectively registered in the Clinical Trials Registry with the identification code NCT05147077.

Quality Control

A standard operating procedure (SOP) will be developed by the research group. A special training session will be held 1 month before the official launch of the clinical trial, which provides standard training to all research personnel involved in the trial. The training session will focus on the procedure of the trial and ensure that each researcher is familiar with the research process and masters the SOP, thereby guaranteeing the quality of the trial.

Adverse Events

Adverse events (AEs) throughout the study will be assessed and recorded by the investigators in the case report form. After each patient visit, participants will be asked questions related to AEs. In addition, throughout the trial, subjects will also be instructed to spontaneously report any adverse events to the investigators. Investigators will record the occurrence date, degree, duration, and treatment measures of AEs.

Discussion

Previous clinical studies have mainly focused on post-stroke depression; there are very few studies on PSA. Thus, more studies on the pathogenesis and treatment methods of PSA are urgently needed. Currently, although the pathogenesis of PSA remains unclear, it may be related to the following mechanisms: 1) cerebrovascular circulation disorder leads to brain tissue damage, and there is a correlation between areas of hypoperfusion and the morbidity of anxiety, 39 especially in areas of dominant psycho-behavioral functions such as the frontotemporal lobe and cortex 40 ; 2) activated NF-κB in neuroglial cells may mediate anxiety behavior by enhancing nNOS-CAPON-Dexras expression and association 41 ; and 3) impaired neuronal function leads to dysregulation of neurotransmitters such as adrenaline, 5-hydroxytryptamine, and dopamine, triggering depression and anxiety. 42 At present, there is a paucity of clinical research on PSA. Previously available studies mainly adopt a single therapy for PSA, and integrated rehabilitation incorporating both TCM and Western medicine is rarely used in previously published studies in this field. Based on the available literature, current treatment options include new techniques such as the Armeo Spring robot-assisted trainer and the Kinect-based system, 43 acupuncture, 44 aquatic exercise programs, 45 medications (eg, escitalopram, 24 sertraline 46 ), and rehabilitation treatments such as problem-solving therapy, 24 relaxation, 9,47 etc. To further enrich the treatment of PSA, we choose an integrated rehabilitation scheme in this trial, which integrates both TCM and Western medicine. Such an integrated rehabilitation scheme is widely used, and it has significant guiding value in real-world clinical settings, especially in Asian countries. Among the specified components of the integrated rehabilitation protocol, scalp acupuncture can improve anxiety symptoms after stroke and reduce HAM-A scores by promoting cranial nerve recovery, 48 and body acupuncture can also decrease the level of anxiety. 49 Compared to conventional pharmacotherapy, to a certain extent, Chinese herbal medicine or Chinese herbal compounds can significantly relieve anxiety symptoms in PSA patients with fewer adverse events. 15
One retrospective study 50 indicated that acupuncture combined with Chinese herbal medicine has superior effects in alleviating post-stroke anxiety. In addition, the French guidelines 51 for rTMS include anxiety as one of the indications. Another study supported the feasibility and safety of rTMS for improving anxiety symptoms. 52 Therefore, we designed the present study to evaluate the safety and efficacy of integrated rehabilitation in PSA patients. Notably, the investigation of the efficacy and safety of our integrated rehabilitation protocol will have significant guiding value in clinical practice. With increasingly close cooperation between TCM and Western medicine, as well as the popularity of the concept of "integrative medicine", integrated rehabilitation strategies that combine TCM and Western medicine, as in our study, are expected to be widely used in real-world practice, especially in Asian countries. Integrative medicine makes use of all appropriate therapeutic approaches, healthcare professionals, and disciplines (including conventional and complementary therapies) to achieve optimal health and healing. In our study, the treatment strategy focuses on the improvement of the patient's overall symptoms through integrated rehabilitation. We believe that the findings of this study will contribute to promoting the application of integrative rehabilitation strategies for PSA. However, this study has certain limitations. First, limited by research funds, patients could not be followed up for a longer period, making it impossible to observe the long-term effects of the integrated rehabilitation. Second, due to the complexity of the integrated rehabilitation procedures and the distinct differences in interventions between the two groups, double-blinding and placebo controls are not utilized, which will introduce a certain bias and fails to exclude a placebo effect of the integrated rehabilitation. Nonetheless, with the aim of minimizing bias, we have applied blinding to the outcome assessors and statistical analysts. Last, since there is a lack of objective evaluation indicators for PSA, our outcome measures mainly rely on scales that are relatively subjective. Nonetheless, to reduce the bias induced by subjective scales, each center of this multicenter trial will assign a professional physician as a fixed outcome assessor to perform the outcome measurements of the various subjective scales. Moreover, prior to the trial, the outcome assessors from each center will gather to receive intensive training so that outcome assessments are practiced in a consistent manner, improving the reliability of the outcomes.

Conclusions
This study is a prospective, multicenter, randomized controlled trial that aims to assess the efficacy and safety of integrated rehabilitation for PSA. It is expected that its findings will promote a comprehensive treatment scheme incorporating various rehabilitation modalities, thereby improving the therapeutic strategy for PSA.

Data Sharing Statement
Upon completion of the study, supporting data will be available from the corresponding author on request.

Ethics Statement
Ethical approval (number: ZSLL-KY-2021-009-01) has been obtained from the Ethics Committee of The Third Affiliated Hospital of Zhejiang Chinese Medical University.

Author Contributions
All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.
Funding
The trial is funded by the 2021 Special Project for Modernization of Chinese Medicine in Zhejiang Province (No. 2021ZX010) and the Zhejiang Provincial Famous Traditional Chinese Medicine Experts Inheritance Studio Construction Project (No. GZS2021027).
2022-09-09T17:04:12.990Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "c7f70deeea8c40098697cfaef18c509ec796a448", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=83660", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d98abd86b186d2bcb918719d3594a69bbad70288", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
245438842
pes2o/s2orc
v3-fos-license
Observation of quantum spin Hall states in Ta$_2$Pd$_3$Te$_5$
Two-dimensional topological insulators (2DTIs), which host the quantum spin Hall (QSH) effect, are one of the key materials in next-generation spintronic devices. To date, experimental evidence of the QSH effect has only been observed in a few materials, and thus, the search for new 2DTIs is at the forefront of physical and materials science. Here, we report experimental evidence of a 2DTI in the van der Waals material Ta$_2$Pd$_3$Te$_5$. First-principles calculations show that each monolayer of Ta$_2$Pd$_3$Te$_5$ is a 2DTI with weak interlayer interactions. Combined transport, angle-resolved photoemission spectroscopy, and scanning tunneling microscopy measurements confirm the existence of a band gap at the Fermi level and topological edge states inside the gap. These results demonstrate that Ta$_2$Pd$_3$Te$_5$ is a promising material for fabricating spintronic devices based on the QSH effect.

Two-dimensional topological insulators (2DTIs), also known as quantum spin Hall (QSH) insulators, feature a bulk band gap and helical in-gap states at the material boundaries [1][2][3]. The edge states of a 2DTI can serve as one-dimensional conducting channels in which backscattering is forbidden by time-reversal symmetry. Therefore, 2DTIs provide an ideal platform to fabricate low-dissipation spintronic devices. To date, 2DTIs have only been realized in two types of materials. The first type is quantum well systems, including HgTe/CdHgTe [4] and InAs/GaSb [5][6][7]. Realizing QSH states in van der Waals materials, such as WTe2, offers great opportunities to fabricate quantum transport devices, as monolayer or few-layer materials for realizing the QSH effect are very easy to obtain. However, as a prototypical van der Waals material, 1T′-WTe2 was confirmed to be a 2DTI only in the monolayer limit [19]. In bulk form, WTe2 becomes a metal with zero energy gap [21,22]. As a result, QSH states disappear in multilayer WTe2 because of the hybridization of the bulk and edge states. Therefore, QSH devices based on WTe2 suffer from easy degradation of monolayer samples under ambient conditions. Since multilayer materials are typically more inert and have higher tunability via the thickness or twist angles, realizing QSH states in multilayer materials is highly desirable. This requires a semiconducting van der Waals material that hosts an inverted gap similar to that of the monolayer.

In this work, we report the observation of QSH states in the van der Waals material Ta2Pd3Te5 [23], which hosts a band gap in both the bulk and monolayer forms.
We synthesize Ta2Pd3Te5 single crystals and investigate their electronic structures by combined first-principles calculations, transport, angle-resolved photoemission spectroscopy (ARPES), and scanning tunneling microscopy/spectroscopy (STM/STS) measurements. We prove that Ta2Pd3Te5 hosts a band gap at the Fermi level. Because of the weak interlayer coupling, the topmost layer of Ta2Pd3Te5 can be viewed as a monolayer material placed on a single-crystal substrate. As expected, we directly observe topological edge states using STS. These results provide strong evidence for QSH states in Ta2Pd3Te5. The discovery of QSH states in van der Waals materials with a significant band gap could pave the way to realizing practical QSH devices.

Ta2Pd3Te5 single crystals were synthesized by the self-flux method. The starting materials of Ta (99.999%), Pd (99.9999%), and Te (99.9999%) were mixed in an Ar-filled glove box at a molar ratio of Ta:Pd:Te = 2:4.5:7.5. The mixture was placed in an alumina crucible and sealed in an evacuated quartz tube. The tube was heated to 950 °C over 10 h and maintained at this temperature for 2 days. Then, the tube was slowly cooled to 800 °C at a rate of 0.5 °C/h. Finally, the extra flux was removed by centrifugation at 800 °C. After centrifugation, single crystals of Ta2Pd3Te5 could be selected from the remnants in the crucible. To investigate the crystalline structure, single-crystal X-ray diffraction (XRD) was carried out at 273 K using Mo Kα radiation (λ = 0.71073 Å). The crystalline structure was refined by the full-matrix least-squares method on F² using the SHELXL-2018/3 program. Electrical resistivity (ρ) measurements were carried out on a physical property measurement system (PPMS, Quantum Design Inc.) using a standard dc four-probe technique.

ARPES experiments were performed at beamline BL-1 of the Hiroshima Synchrotron Radiation Center [26]. The clean surfaces required for the ARPES measurements were obtained by cleaving the samples in an ultrahigh vacuum chamber with a base pressure of 1.0×10⁻⁹ Pa. Both the cleavage process and ARPES measurements were performed at 30 K. The energy resolution of the ARPES measurements was approximately 15 meV. STM/STS experiments were carried out in a homebuilt low-temperature (∼5 K) STM system with a base pressure of 2×10⁻⁸ Pa. The clean surfaces for STM/STS measurements were also obtained by cleaving the samples in situ at low temperature.

First-principles calculations were performed within the framework of the projector augmented wave (PAW) method [27,28] implemented in the Vienna Ab initio Simulation Package (VASP) [29,30]. The Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA) exchange-correlation functional [31] was employed in the calculations. The cutoff energy for the plane-wave expansion was 500 eV. Spin-orbit coupling was self-consistently taken into account within the second variational method. A 4-unit-cell slab structure (with 20 Å of vacuum) was built to simulate the surface spectrum. From the STM image, the rectangular structure of the bc plane of Ta2Pd3Te5 can also be identified in the fast-Fourier-transformed image in Fig. 1(f). The lattice constants determined from our STM results are ∼3.6 Å and ∼18.7 Å, respectively, which agree well with the lattice constants along the b and c directions.

The temperature dependence of the electrical resistivity of Ta2Pd3Te5 is displayed in Fig. 1(g).
When the temperature is decreased from 300 K to 2 K, the resistivity increases monotonically, indicating semiconducting behavior. The temperature-dependent resistivity can be fitted with the Arrhenius model ρ ∼ exp(ε_act/k_B T), where k_B and ε_act are the Boltzmann constant and the thermal activation energy, respectively. The fitting results are shown by the red line in Fig. 1(g). The fitted ε_act is approximately 14 meV. Therefore, bulk Ta2Pd3Te5 is a narrow-gap semiconductor with a global band gap of ∼14 meV.

Before showing further experimental results for Ta2Pd3Te5, we briefly discuss its topological properties based on our first-principles calculation results. For bulk Ta2Pd3Te5, the symmetry indicators (Z2 × Z2 × Z2 × Z4) are 0. However, it has a nontrivial mirror Chern number in the ky = 0 plane due to the band inversion at the Γ point [23], which indicates the topological nature of bulk Ta2Pd3Te5. In the monolayer limit, Ta2Pd3Te5 becomes a 2DTI [23] with a similar band inversion. Its nontrivial topology has also been confirmed by the one-dimensional Wilson loop method. Figure 2(a) shows the band structure of monolayer Ta2Pd3Te5, where an inverted band gap near the Fermi level can be identified. The calculated gap along the Γ-Z direction is approximately 5 meV. Notably, the gap fitted from our transport measurements on bulk samples (∼14 meV) is larger than the calculated value. We will later show that our STS measurements also indicate a significantly larger gap compared to the calculation results. This inconsistency probably originates from the fact that density functional theory (DFT) calculations may underestimate the band gap of materials.

To study the electronic structure of Ta2Pd3Te5, we performed ARPES measurements on a freshly cleaved surface. An ARPES intensity map of the Fermi surface is displayed in Fig. 2(c), which shows a weak spectral weight along the Γ-Ȳ direction. Because of the large lattice constant along the c direction, we observed four BZs along Γ-Z. With increasing binding energy, an oval-like pocket appears at the Γ point, as shown in Fig. 2(d). The band structure along the Γ-Ȳ direction is shown in Fig. 2(e), which agrees well with our slab calculation results (see Fig. 2(f)). These ARPES results, combined with the DFT calculations, provide strong evidence of the topological band structure of Ta2Pd3Te5.

Now that we have shown the existence of a topological band structure and a band gap in Ta2Pd3Te5, we proceed to studying the topological edge states, which are a key signature of the QSH state in monolayer Ta2Pd3Te5. Because of the weak interlayer coupling, topological edge states are expected to exist at the periphery of the topmost layers. An ideal technique to study the edge states is STS, because the tunneling conductance is proportional to the local density of states (LDOS). Figure 3(a) shows an STM image that contains a step edge.
From the line profile in Fig. 3(b), the step height is 1.4 nm, which corresponds to the lattice constant along the a direction. Figure 3(c) shows the dI/dV curves taken on the flat terrace and near the step edge. On the flat terrace, we observe a band gap at the Fermi level, in agreement with the semiconducting behavior of Ta2Pd3Te5. The estimated gap size is approximately 43 meV, with the valence band top and conduction band bottom at -33 meV and 10 meV, respectively. The band gap shows negligible variation across the flat terrace, despite the height variation of the Te chains, as shown in Supplementary Fig. S1. This indicates the high spatial homogeneity of the surface, which provides further evidence for the global nature of the band gap. Notably, the gap estimated from our STS data is larger than that fitted from the transport measurements. This may originate from the surface sensitivity of the STS technique, which suggests a larger band gap in monolayer Ta2Pd3Te5 than in the bulk material. Near the step edge, however, the LDOS is dramatically enhanced, featuring a V-shape in the energy range between -40 and 40 meV. This indicates the existence of edge states inside the band gap. To better visualize the evolution of the edge states, we present a series of dI/dV curves across the step edge in Fig. 3(d). When the tip approaches the step edge, the tunneling conductance inside the gap gradually increases, as indicated by the red dashed line in Fig. 3(d).

To confirm the spatial distribution of the edge states, we performed real-space dI/dV mapping, as shown in Fig. 3(e). When the bias voltage is set within the band gap (e.g., 0 and -20 mV), the tunneling conductance is dramatically enhanced near the step edge. The enhancement of the tunneling conductance vanishes at bias voltages outside the band gap, resulting in a uniform LDOS over the entire surface (e.g., at ±80 mV). Figure 3(f) shows an averaged dI/dV map at several bias voltages that span the band gap, where the enhancement of the LDOS near the step edge can be clearly seen. The fact that these edge states are located inside the gap agrees well with their topological nature. Notably, the step edge is not straight and contains several different terminations. Nevertheless, edge states always exist, despite slight variations in the details of the STS spectra. This provides strong evidence for the robustness of the edge states, which is also consistent with their topological nature [23]. Similar topological edge states have been reported in several other topological materials, such as ZrTe5 [24] and TaIrTe4 [25].

In summary, our results support the existence of a significant inverted gap and topological edge states in the van der Waals material Ta2Pd3Te5, thus providing strong evidence for QSH states in Ta2Pd3Te5. In stark contrast to WTe2, multilayer and even bulk Ta2Pd3Te5 host a similar inverted gap at the Fermi level. This is beneficial for device applications because multilayer samples are typically more inert and have higher tunability. Therefore, we expect that Ta2Pd3Te5 will become a promising material for fabricating QSH devices.
FIG. 1: Structure and characterization of Ta2Pd3Te5 single crystals. (a) Schematic drawing of the atomic structure of Ta2Pd3Te5. (b) Schematic drawing of the Brillouin zones and high-symmetry points of monolayer Ta2Pd3Te5. (c) X-ray diffraction spectrum on a flat surface of Ta2Pd3Te5. The inset shows a photograph of typical Ta2Pd3Te5 single crystals. (d) Large-scale STM topographic image of the Ta2Pd3Te5 surface (VB=1 V; I=0.05 nA). (e) Zoomed-in STM image showing atomic resolution (VB=50 mV; I=0.05 nA). The atomic structure of the surface Te atoms (red balls) is superimposed on the left part of the image. (f) Fast-Fourier-transformed STM image, which shows a rectangular reciprocal lattice. Red arrows indicate the reciprocal lattice vectors. (g) In-plane resistivity of Ta2Pd3Te5 single crystals as a function of temperature. The red line is the fitting result obtained using the Arrhenius formula.

Ta2Pd3Te5 crystallizes in an orthorhombic structure with the space group Pnma (No. 62). Schematic drawings of the atomic structure and Brillouin zones (BZs) of Ta2Pd3Te5 are shown in Figs. 1(a) and 1(b), respectively. Each unit cell contains two Ta2Pd3Te5 monolayers, which are stacked along the a direction via weak van der Waals interactions. Each monolayer contains a Ta-Pd mixed layer sandwiched between two Te layers. Figure 1(c) shows the XRD spectrum on a flat surface of Ta2Pd3Te5, whereby only (h00) peaks are observed. A photograph of a typical Ta2Pd3Te5 crystal is displayed in the inset of Fig. 1(c). The picture shows that the crystal is as large as 1 mm and has shiny surfaces, indicating the high crystallinity of our samples. The lattice parameters of Ta2Pd3Te5 determined from the XRD data are a = 13.9531(6) Å, b = 3.7038(2) Å, and c = 18.5991(8) Å. Figure 1(d) shows a large-scale STM image of Ta2Pd3Te5 (the bc plane). The surface is slightly corrugated, forming periodic stripes along the b direction. A zoomed-in STM image with atomic resolution is displayed in Fig. 1(e). Each bright protrusion corresponds to a Te atom, which matches well the structure model of Ta2Pd3Te5.

FIG. 2: Topological band structure of Ta2Pd3Te5. (a) Calculated band structure of monolayer Ta2Pd3Te5. (b) Magnified view of the blue-shaded area in (a), showing the existence of a band gap. The calculated gap is approximately 5 meV. (c) and (d) ARPES intensity plots at the Fermi level and EB=0.35 eV, respectively. The blue lines indicate the surface BZs of Ta2Pd3Te5. (e) ARPES intensity plot of the band structure along the Γ-Ȳ direction. (f) Slab calculation results of the band structure along the Γ-Ȳ direction.

FIG. 3: STM characterization of the topological edge states in Ta2Pd3Te5. (a) STM topographic image containing a step edge. Dashed black lines indicate the position of the step edge. (b) Line profile along the solid black line in (a). (c) dI/dV curves taken on the flat terrace (blue) and near the step edge (black). (d) Series of dI/dV spectra taken along the red arrow in (a). The red dashed line indicates the emergence of in-gap states near the step edge. (e) dI/dV maps taken in the same area as (a). The bias voltage is indicated on the left side of each map. Pronounced edge states appear when the bias voltage is in the range of -40 to 20 meV, as indicated by the black dashed lines. (f) Averaged dI/dV map at four different bias voltages: -40, -20, 0, and 20 mV.
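For illustration, a minimal sketch of the Arrhenius analysis behind Fig. 1(g); the resistivity values below are synthetic, generated from an assumed activation energy of 14 meV rather than taken from the measured data, and the fit is done linearly in ln ρ versus 1/T:

```python
# Sketch of an Arrhenius fit, rho ~ exp(eps_act / kB T), on synthetic data.
import numpy as np

KB = 8.617333e-5  # Boltzmann constant, eV/K

# Synthetic "measurement": assumed eps_act = 14 meV plus 2% noise.
rng = np.random.default_rng(1)
T = np.linspace(100.0, 300.0, 40)  # restrict to the thermally activated regime
rho = 1e-3 * np.exp(0.014 / (KB * T)) * rng.normal(1.0, 0.02, T.size)

# ln(rho) = ln(rho0) + eps_act / (kB T): a straight line in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(rho), 1)
eps_act = slope * KB
print(f"fitted activation energy: {1e3 * eps_act:.1f} meV")  # ~14
```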
2020-12-15T02:15:52.928Z
2020-12-14T00:00:00.000
{ "year": 2020, "sha1": "19749e86f0d93cd20d2680f9cbf5fb1bfb76ef37", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2012.07293", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "19749e86f0d93cd20d2680f9cbf5fb1bfb76ef37", "s2fieldsofstudy": [ "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics" ] }
31105927
pes2o/s2orc
v3-fos-license
Structure Collisions between Interacting Proteins
Protein-protein interactions take place at defined binding interfaces. One protein may bind two or more proteins at different interfaces at the same time. So far it has been commonly accepted that non-overlapping interfaces allow a given protein to bind several other proteins simultaneously, without collisions occurring between the bound protein structures. To test this assumption, we performed a comprehensive analysis of structural protein interactions to detect potential collisions. Our results did not indicate cases of biologically relevant collisions in the Protein Data Bank of protein structures. However, we discovered a number of collisions that originate from alternative protein conformations or quaternary structures due to different experimental conditions.

Introduction
Most molecular processes involve interactions between proteins. The physical contact between protein interaction partners is formed at defined binding interfaces, and one protein may bind various interaction partners at the same interface or at different interfaces. Due to the increasing number of protein structures available in the Protein Data Bank (PDB) [1], systematic protein interaction studies integrating structural information have become more and more attractive [2,3,4,5]. It has been a commonly accepted assumption that a protein containing multiple, non-overlapping interfaces can always interact simultaneously with other proteins. As part of a large-scale structural analysis of a protein interaction network in yeast, Kim and colleagues presumed that the number of simultaneous interactions a protein can participate in is determined by the number of its non-overlapping binding interfaces [6]. To this end, the authors gave a structure-based definition of single- and multi-interface proteins and found differences in expression profiles and evolutionary rates. Subsequently, Kim et al. investigated the role of disorder in structural networks and discovered that disordered interface regions are more common in single-interface proteins [7]. Other studies also included structural information in their systematic analyses to increase the informative value of a given network or the reliability of protein interaction predictions [8,9]. Further protein network analyses concentrated on various aspects of single- and multi-interface proteins, ranging from protein interaction partners to interface specificity and interaction motifs. For instance, Keskin and Nussinov studied multispecific interfaces known to bind proteins with different structures [10]. They primarily focused on the ability of one binding interface to form interactions with different proteins and identified key residues potentially responsible for binding. In a related study, Humphris and Kortemme analyzed restrictions imposed on protein sequences to permit multiple binding partners and predicted residues essential for the respective interactions [11]. Aragues and colleagues analyzed hub proteins, i.e., highly connected proteins, in the context of interaction motifs (iMotifs) [12] and compared their results to those previously found by Kim et al. [6]. The iMotif approach is based on the idea that proteins sharing interaction partners most likely interact with them via the same binding sites. Clustering proteins according to their interaction partners showed that the number of iMotifs correlated with the number of protein interfaces reported by Kim et al. [6].
Aragues and coworkers also found that cellular essentiality and gene conservation correlate better with the number of interaction motifs than with the absolute number of interactions. Furthermore, Tuncbag et al. presented a concept integrating the time dimension into protein interaction networks using protein structures and interface information, which was utilized for the characterization of interactions in the p53 pathway [13]. This work highlights the fact that the formation of simultaneous protein interactions depends on various factors, including temporal aspects, which should be considered in the analysis of protein interaction networks. To our knowledge, however, the above-described basic assumption that simultaneous interactions at different interfaces are always spatially possible has never been investigated. In detail, two or more binding partners R and S of a protein P might collide in three-dimensional (3D) space, which would prevent the simultaneous interaction of R and S with P even though the binding sites are non-overlapping (Figure 1). Therefore, we developed a structure collision approach for interactions between protein structure chains in the Protein Data Bank (PDB) to examine spatial conflicts between interaction partners.

Materials and Methods
In this study, we investigated whether a protein P can simultaneously bind two different proteins R and S at distinct binding interfaces. We refer to protein P as the primary protein, while its interaction partners R and S are the secondary proteins. In principle, we regarded all known protein structures that contain an interaction between proteins P and R in one structure and between proteins P and S in another structure, requiring that R and S were bound to P at different interfaces. After the two primary proteins P of the pairwise protein interactions P-R and P-S were superimposed, a collision detection method was applied to identify structure collisions between simultaneously possible interactions of the three proteins (Figure 1). In detail, we first retrieved all protein structure files from the PDB [1]. In the case of NMR entries, we used the first model, since it is regarded as the representative protein structure according to the PDB instructions. We identified the binding interface residues between all pairs of interacting protein structure chains by means of the SPPIDER web service (http://sppider.cchmc.org/) [14]. Then we annotated all PDB chains with UniProtKB accession numbers using the mapping provided by PDBSWS [15]. We used the resulting annotations to identify pairs of protein interactions P-R and P-S, where the UniProtKB accession numbers of the primary protein P were identical for both interactions while the UniProtKB accession numbers of the secondary proteins R and S were different. We compared the binding interface residues of each protein interaction pair to find pairs with overlapping or distinct interfaces. The binding interfaces of P in the interaction pair P-R and P-S were defined to be distinct if all interface residues in P-R were different from those in P-S (analogous to the study by Kim et al. [6]). If at least one interface residue was involved in both interactions, we regarded the interfaces as overlapping and the simultaneous interaction of the three proteins as impossible. This definition is intentionally strict to exclude any potential overlap of the binding interfaces, since we want to detect only collisions of proteins R and S that have clearly disjunct binding interfaces.
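A minimal sketch of this overlap test, which reduces to a set intersection over residue identifiers; the strict no-shared-residue rule and the five-residue minimum applied in the next paragraph are the paper's criteria, while the (chain, residue number) encoding and the example residues are hypothetical:

```python
# Sketch of the strict overlap test: interfaces are distinct only if they
# share no residue at all. Residue identifiers are assumed to be
# (chain, sequence number) tuples from the same primary-protein structure.

def interfaces_distinct(iface_pr, iface_ps, min_residues=5):
    """True if P's interfaces with R and with S are non-overlapping and
    each comprises at least `min_residues` residues."""
    if len(iface_pr) < min_residues or len(iface_ps) < min_residues:
        return False  # too small to be a reliable functional interface
    return len(set(iface_pr) & set(iface_ps)) == 0

# Illustrative residue sets (hypothetical numbering):
p_with_r = {("A", 12), ("A", 13), ("A", 15), ("A", 40), ("A", 41)}
p_with_s = {("A", 80), ("A", 81), ("A", 83), ("A", 85), ("A", 90)}
print(interfaces_distinct(p_with_r, p_with_s))  # True -> candidate pair
```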
To further ensure that the proteins can really establish a functional interaction, we considered only those interaction pairs P-R and P-S in which each interaction comprised at least five interface residues. After all pairs of interactions P-R and P-S that met the described criteria were identified, the primary proteins were superimposed and tested for collisions between the secondary proteins. Even if the UniProtKB accession numbers of two PDB chains are identical, the actual structure may not contain the complete protein, because certain protein regions might not have been structurally determined. Therefore, the primary proteins P had to be aligned with each other to identify their corresponding PDB residues for computing the transformation matrix of the superposition. The alignments were performed using ClustalW [16], and the resultant files were parsed to extract the matching PDB residues. To quantify the extent of the collision between the two secondary proteins, we computed the volume of the overlap of the secondary proteins after superimposing the primary proteins. Cα atoms of the corresponding residues in the primary proteins were superimposed by a rigid-body transformation (translation and rotation) to minimize the RMSD between corresponding Cα atoms. The rotation was determined by Kearsley's quaternion method [17], posing the minimization as an eigenvalue problem, which is solved by a singular value decomposition. After optimal rigid-body superposition of the primary proteins, the overlap volume of the secondary proteins was computed as the difference between the sum of the individual volumes of the secondary proteins and the volume of their union. For the computation of the molecular volumes, we calculated the solvent-excluded volume with MSMS by Sanner et al. [18]. To confirm the results of this collision detection method, we alternatively computed the volume within the solvent-accessible surface using ALPHAVOL [19]. Using these two complementary methods and measures, we filtered out a few cases with numeric irregularities or instabilities. The high correlation of both methods (Figure 2) also confirms that both are suitable for the task of collision detection. We kept only those results in our dataset that were consistently identified by both collision detection methods.

Identification of Colliding Interaction Pairs
The generation of the results proceeded in four main steps (see Figure 3). First, we identified all potential pairs of primary proteins, that is, all pairwise combinations of protein chains with identical UniProtKB accessions that were contained in at least two PDB structures and could serve as the primary proteins P of the interaction pair P-R and P-S. We found 4,832 proteins that were contained in at least two PDB files (out of a total of 17,213 relevant PDB files). This resulted in 1,145,086 possible combinations of potential primary proteins P. However, while the number of pairwise combinations of P is large, the number of involved proteins is much smaller. Many PDB files contain the same proteins, and one PDB file may contain multiple copies of the same protein; thus the number of possible combinations of primary proteins grows quadratically. Second, to obtain the interaction pairs, we filtered for those primary proteins that interact with at least two different secondary proteins.
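Before continuing with the pair counts, a sketch of the superposition and volume steps described in the Methods above. It uses the SVD-based Kabsch algorithm, which minimizes the same Cα RMSD objective as the Kearsley quaternion method used in the paper, and replaces the MSMS/ALPHAVOL surface volumes with a coarse grid-based stand-in; coordinates, radii, and grid spacing are illustrative assumptions:

```python
# Sketch: rigid-body superposition of matched C-alpha coordinates plus a
# voxel-based stand-in for the overlap-volume computation.
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t such that R @ p + t best maps the
    N x 3 point set P onto Q in the least-squares (RMSD) sense."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                    # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def overlap_volume(A, B, radius=1.7, spacing=1.0):
    """Rough overlap volume (A^3) of two atom clouds via voxel occupancy."""
    lo = np.minimum(A.min(0), B.min(0)) - radius
    hi = np.maximum(A.max(0), B.max(0)) + radius
    axes = [np.arange(l, h, spacing) for l, h in zip(lo, hi)]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)

    def occupied(atoms):
        mask = np.zeros(len(grid), dtype=bool)
        for a in atoms:  # atom-by-atom to keep memory modest
            mask |= np.einsum("ij,ij->i", grid - a, grid - a) < radius ** 2
        return mask

    return np.count_nonzero(occupied(A) & occupied(B)) * spacing ** 3

# Usage idea: with matched C-alpha sets P2, P1 of the two copies of the
# primary protein, R, t = kabsch(P2, P1); then move the second secondary
# protein with S2 @ R.T + t and feed both clouds to overlap_volume.
```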
When examining the primary proteins and their respective secondary proteins, we identified a total of 2,309,561 interaction pairs with different secondary proteins according to their UniProtKB accession numbers. Again, as above, the number of interaction pairs is much larger than the number of involved proteins because we combined all protein instances in an all-versus-all approach. Third, we compared the interface residues forming the interactions P-R and P-S in order to remove those interaction pairs with overlapping interfaces. Among the binding interfaces excluded due to our strict definition that requires no shared residues, most overlapping interfaces share at least 20% of the interface residues (average overlap 41%, Figure 4A). After this filtering step, 551,944 interaction pairs with distinct interfaces remained, involving 1,432 primary proteins, which could be assigned to 6,691 PDB structures (see Figure 5A for the molecular functions of these proteins). Finally, all these interaction pairs were used as input for the collision detection method, and the volume overlap of the secondary proteins was computed for each interaction pair.

Refinement of Collisions
We defined a collision to occur if both collision detection methods (MSMS and ALPHAVOL) consistently reported an overlap of the secondary proteins of at least 2000 Å³. Based on this definition, we identified 12,772 interaction pairs with colliding secondary proteins. As can be seen in Figure 2, the correlation of the overlap values produced by the two applied collision detection methods is 0.85, indicating a high reliability of the detected overlaps. The results were further refined, and collisions were retained only if the RMSD of the superposition of the primary proteins was less than 7 Å, to avoid false positives due to improper superposition. For the large majority of the detected collisions, the RMSD was close to 1 Å, which is indicative of only small structural differences between the superimposed primary proteins (Figure 4B). We also excluded results where the sequence lengths of the primary proteins differed by more than 15 residues in order to avoid large structural differences between the primary proteins. Additionally, we required the alignment of the two primary proteins to cover at least thirty amino acids in order to remove interaction pairs where the primary proteins corresponded to small fragments of a full-length protein. These constraints reduced the number of colliding interaction pairs to 4,874, with an average RMSD of 1.23 Å and average overlaps of 2659 Å³ (MSMS) and 7049 Å³ (ALPHAVOL). The results were derived from 244 PDB structures, and 37 different primary proteins as well as 86 different secondary proteins participated in the interactions (see Figure 5B for the molecular functions of the primary proteins). These numbers show that many collisions of interaction pairs involved the same proteins. However, the number of colliding interaction pairs varied substantially with respect to the recurrences of the identified primary proteins, ranging from 1 to 3,777 structural instances. We also observed that, in 98% of the 4,874 interaction pairs, both the primary and the secondary protein chains comprise single SCOP domains [20]. Therefore, almost all collisions occur between single structural units of the participating proteins.
One of the exceptions is illustrated in Figure 6C, where the extracellular domain of the growth hormone receptor contains two SCOP domains and the collision involves both domains. Notably, 97% of the collisions were derived from human interactions (see Tables S1 and S2 for details on all colliding protein interaction pairs).

Analysis of Binding Interfaces
Since protein interactions are often formed by domain-domain interactions, we studied the binding interfaces of the detected interaction pairs in more detail. To this end, we analyzed Pfam-A domains [21] because their interactions are available in domain interaction databases. Our analysis revealed that, for most of our results (4,807 colliding interaction pairs, ∼98%), the interface residues of the primary proteins could not be exclusively assigned to a single Pfam-A domain-coding region. Instead, the interface residues were shared either between one domain and additional unstructured parts of the primary proteins or between more than one domain and unstructured parts. This is particularly interesting since binding residues outside domain regions can additionally stabilize the interaction but are not considered in domain interaction databases. In the collision results, we found only 42 interaction pairs consisting of P-R and P-S where the interface residues of both primary proteins P could be exclusively assigned to the same single domain-coding region. The latter regions included 9 different Pfam-A domain families occurring in up to 13 interaction pairs, of which 5 domain families participate in catalytic activities (see Tables S3 and S4 for details).

Filtering for Biological Interactions
To identify protein interactions that are reported as truly interacting, we used the database 3D Complex [22]. We kept only those results in which both protein interactions P-R and P-S are contained in 3D Complex. This reduced the number of colliding interaction pairs to 219. Of those, 5 collisions included multidomain secondary proteins containing two SCOP domains, but the collisions always occurred in the binding domain. Most of the biological interaction pairs, i.e., 184, involved interactions between hemoglobin protein chains. For the other colliding interaction pairs, the number of instances was below ten. The overrepresentation of hemoglobin likely results from a bias in available PDB protein structures towards certain well-studied protein complexes (see Table S5 for a list of all 219 colliding interaction pairs and their instances). A manual investigation, however, revealed that all of the detected collisions occur as a consequence of non-natural structural conformations due to artificially constructed protein interactions.

Examples of Structure Collisions
In the following, we show three examples of colliding protein interaction pairs (Figure 6). Figure 6A shows the superposition of Rac1 protein chains (primary protein) that are in complex with an Arfaptin fragment or crystallized as a Rac1 trimer (secondary proteins). Regarding the superposition of the Rac1 protein chains, 177 residues were aligned, and the RMSD of the superimposed primary proteins is 1.99 Å. The overlap between the secondary proteins is ∼2215 Å³ according to MSMS and ∼5368 Å³ according to ALPHAVOL. Rac1 is a hub protein that forms part of more than 70 complexes in the PDB and participates in well over 200 different pairwise protein interactions (see BioMyn database at http://www.biomyn.de [23]).
Arfaptin functions as an effector of Rac1 [24]. One chain of the Rac1 trimer collides with the Arfaptin fragment. Rac1 trimerization was experimentally triggered by unnaturally high levels of zinc that do not occur in living cells [25]. Therefore, this trimer complex is not expected to exist in vivo. Figure 6B visualizes the superposition of the primary proteins cyclophilin A, which are in complex with a mutated HIV-1 capsid protein in one PDB structure and with a calcineurin B subunit in the other structure. 164 residues of the cyclophilin A chains could be aligned, resulting in a very precise superposition with an RMSD of 0.61 Å. The detected collision is larger than in the previous example, with ∼2807 Å³ reported by MSMS and ∼5995 Å³ by ALPHAVOL. Cyclophilins are enzymes involved in diverse functions including protein folding, transport, and signaling [26]. They possess both sequence-specific binding and proline cis-trans isomerase activities. Cyclophilin A binds the HIV-1 capsid protein and facilitates virus replication. Calcineurin B participates in signaling for T-cell activation. The interaction between cyclophilin A and calcineurin B is part of a ternary complex with the immunosuppressive drug cyclosporin A. The latter binds to cyclophilin A, enabling both the binding and the inhibition of calcineurin B, and is thus an artificial construct [27]. Figure 6C shows a collision between a growth hormone receptor (GHR) and a growth hormone (GH), which are both crystallized in interaction with the primary protein GHR. GHR was aligned with an RMSD of 1.65 Å over 186 residues. A collision was detected between the second GHR from the dimer and the GH chain from the monomer complex; MSMS reported ∼2330 Å³ and ALPHAVOL ∼6194 Å³. The active signaling complex has a stoichiometry of one GH molecule bound to two copies of its receptor [28]. The detected collision originates from the artificial construct of a GHR monomer in complex with GH (PDB 1a22), which does not exist in vivo [29].

Conclusions
Our structure collision approach enabled the discovery of several cases of protein interaction pairs with colliding protein structures. We did not detect biologically relevant 3D collisions of simultaneously possible protein interactions, but our analysis was limited by the low number of structurally determined protein complexes in the PDB. The identified collisions usually occurred between protein structures that were determined under different experimental conditions to study alternative conformations or quaternary structures of the proteins. Nevertheless, our analysis approach revealed several interesting occurrences of structural collisions. Therefore, it remains important for future studies of protein interaction networks that separate binding interfaces might not imply simultaneously possible protein interactions. The functional implications of spatially colliding interaction partners can be manifold and similar to those of overlapping or identical binding sites, such as the temporal control or inhibition of protein binding. In particular, structure collisions might be due to disease-associated mutations or constitute essential regulation mechanisms for transient protein interactions as they occur in signaling processes [30]. Here, collisions might involve adaptor and scaffold proteins and their interaction partners. These proteins frequently have a greater number of interaction partners than binding interfaces [23].
Thus, the combination of proteins that bind simultaneously to another protein at a specific time point or cellular location needs to be well-defined [31]. Regulatory mechanisms beyond the number of binding interfaces are needed to understand the binding of specific combinations of proteins. Finally, aside from the lack of structural data, there might be other reasons for not observing biologically relevant collisions in our study. For instance, PDB structures often consist of single protein domains as independently folded structural units instead of complete proteins. Therefore, different domains from a multi-domain protein can be found in multiple PDB structure chains. Modeling structural linkers between the domains is still a very difficult task and cannot be performed at large scale yet. Consequently, we might have missed collisions between protein chains that bind the same protein in separate domains. Further issues are the existence of disordered regions and allosteric effects [32,33], i.e., the flexible nature of proteins, which might promote or prevent collisions. However, the required flexibility data on minor and major structural movements have not yet been available for such large-scale analyses as performed by us and other researchers. When more comprehensive structural datasets of protein complexes become available, further work might shed light on the presence and functional relevance of naturally occurring structure collisions.

Supporting Information
Table S1: List of colliding protein interaction pairs. (PDF)

Figure 6: The structures of the primary proteins were superimposed (green arrows), and colliding regions are marked by red arrows. (A) Collision of the secondary proteins Arfaptin (PDB 1i4d, chain A, blue) and Rac1-GDP (PDB 2p2l, chain B, yellow), using Rac1-GDP as primary protein (PDB 1i4d, chain D, and PDB 2p2l, chain C). (B) Collision of calcineurin B subunit isoform 1 (PDB 1mf8, chain B, blue) and HIV-1 capsid protein (PDB 1m9x, chain D, yellow), using cyclophilin A as primary protein (PDB 1mf8, chain C, and PDB 1m9x, chain A). (C) Collisions of growth hormone receptor (PDB 1hwg, chain B, blue) and growth hormone (PDB 1a22, chain A, yellow), using soluble growth hormone receptor as primary protein (PDB 1hwg, chain C, and PDB 1a22, chain B). doi:10.1371/journal.pone.0019581.g006
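For reference, the refinement thresholds stated in the Results above can be collected into a single predicate; the thresholds are the paper's, while the dictionary layout is a hypothetical record format, not taken from the study's code:

```python
# Sketch collecting the paper's refinement thresholds into one predicate.
MIN_OVERLAP = 2000.0   # A^3, required from *both* MSMS and ALPHAVOL
MAX_RMSD = 7.0         # A, superposition quality of the primary proteins
MAX_LEN_DIFF = 15      # residues, sequence-length difference of the primaries
MIN_ALIGNED = 30       # residues, minimum alignment coverage

def is_collision(pair):
    return (pair["overlap_msms"] >= MIN_OVERLAP
            and pair["overlap_alphavol"] >= MIN_OVERLAP
            and pair["rmsd"] < MAX_RMSD
            and abs(pair["len_p1"] - pair["len_p2"]) <= MAX_LEN_DIFF
            and pair["n_aligned"] >= MIN_ALIGNED)

# Hypothetical record using the paper's reported averages:
example = dict(overlap_msms=2659.0, overlap_alphavol=7049.0,
               rmsd=1.23, len_p1=192, len_p2=190, n_aligned=177)
print(is_collision(example))  # True
```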
2018-04-03T01:49:46.411Z
2011-06-02T00:00:00.000
{ "year": 2011, "sha1": "8bf8835ee4374b406642e2f53a547fabfe607655", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0019581&type=printable", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c40e74be7fc73a22644f28312ba34615b23e633e", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
257430608
pes2o/s2orc
v3-fos-license
When the COVID-19 pandemic collides with the obesity epidemic in the United States: a national survey
Background: COVID-19 has disrupted life and put a spotlight on obesity as a risk factor for severe COVID-19 outcomes. Five years ago, we performed a survey exploring ways Americans view obesity and its treatment. We repeated the survey in the COVID-19 era to explore the impact of this once-in-a-century public health crisis on public perception and behavior surrounding obesity.
Objective: To explore if America's views on obesity have changed after more than 2 years of living through COVID-19.
Setting: The national survey was conducted by the National Opinion Research Center (NORC) from December 10 to 28, 2021.
Methods: We revisited some of the questions posed in a survey 5 years ago and added questions asking whether COVID-19 has changed views on obesity. We surveyed 1714 Americans sampled from a probability-based, nationally representative panel. Responses of Americans to questions about obesity were compared with the same or similar questions asked 5 years ago.
Results: COVID-19 has led to a change in how Americans view risks of obesity and benefits of treatment. Nearly one third (29%) of Americans became more worried about having obesity, and this is more pervasive among Black and Hispanic Americans (45%). This heightened concern led an estimated 28 million people to explore treatments not considered before the pandemic, including 6.4 million who thought about bariatric surgery or taking prescription obesity drugs.
Conclusions: COVID-19 may have heightened Americans' worry about obesity. This may present an opportunity for conversations about treatments, including metabolic surgery.

In March 2020, the World Health Organization declared the spread of COVID-19 a pandemic, and since then, the United States has recorded more deaths from the infection than any other country in the world [1]. More than 1 million Americans have died of COVID-19 and related complications [2], and many of these deaths occurred in people with the underlying disease of obesity [3]. According to the U.S. Centers for Disease Control and Prevention (CDC), obesity affects 42.4% of Americans [4]. Studies show obesity can weaken or impair the body's immune system, cause chronic inflammation, and increase the risk for cardiovascular disease, stroke, type 2 diabetes, certain cancers, and many other diseases [5]. Recently, obesity has been tied to severe COVID-19, characterized by an increased need for mechanical ventilation, more hospital admissions, longer lengths of stay, and increased mortality [6,7]. One study showed that 50.8% of emergency room or inpatient visits were with patients who had both COVID-19 and obesity [6]. At the same time, another study found an association between the substantial weight loss achieved with bariatric surgery and improved outcomes of COVID-19 [8]. A little more than 5 years ago, we (the American Society for Metabolic and Bariatric Surgery [ASMBS]) conducted a survey that found 81% of Americans viewed obesity as an extremely or very serious health problem, equal in seriousness only to cancer and even more of a threat than heart disease and diabetes [9]. However, relatively few seemed to know the best ways to combat obesity, overestimating the safety and effectiveness of some treatments and underestimating it in others. Most people said they did not even involve their doctors in their thinking about obesity.
In the current study, we revisited some of the questions posed in this survey to explore if America's views on the risks of obesity and its treatment have changed after more than 2 years of living through the COVID-19 pandemic. We hypothesized that COVID-19 heightened Americans' worry about the threat of obesity. The goal of this study is to heighten awareness of the dangers of obesity and the importance of weight loss interventions such as bariatric surgery and prescription obesity drugs, which are among the most underutilized treatments in medicine.

Methods
The 2022 American Society for Metabolic and Bariatric Surgery (ASMBS)/National Opinion Research Center (NORC) Obesity in America Survey was conducted by NORC at the University of Chicago and funded by the ASMBS and the ASMBS Foundation, a nonprofit dedicated to obesity research, education, and advocacy. Data were collected using AmeriSpeak, NORC's probability-based panel designed to be representative of the U.S. household population. During the initial recruitment phase of the panel, randomly selected U.S. households were sampled with a known, nonzero probability of selection from the NORC National Sample Frame and then contacted by U.S. mail, e-mail, telephone, and field interviewers (face-to-face). The panel provides sample coverage of approximately 97% of the U.S. household population. Interviews were conducted from December 10 to 28, 2021, with adults aged 18 and older representing the 50 states and the District of Columbia. Panel members were randomly drawn, and 1714 completed the survey (1644 via the web and 70 via telephone). Panel members were invited by e-mail or by phone by a NORC telephone interviewer. Interviews were conducted in both English and Spanish, depending on respondent preference. Respondents were offered a small monetary incentive ($2) for completing the survey. Quality assurance checks were conducted to ensure data quality. In total, 108 interviews were removed for nonresponse to at least 50% of the questions asked of them, for completing the survey in less than one third the median interview time for the full sample, or for straight-lining all grid questions asked of them. These interviews were excluded from the data file before weighting. Once the sample had been selected and fielded and all the study data had been collected and finalized, a poststratification process was used to adjust for any survey nonresponse as well as any noncoverage or under- and oversampling resulting from the study-specific sample design. Poststratification variables included age, gender, census division, race/ethnicity, education, housing tenure, and telephone status. Weighting variables were obtained from the 2021 Current Population Survey. The weighted data reflect the U.S. population of adults aged 18 and older.

Results
The final stage completion rate was 22.9%, the weighted household panel response rate was 17.1%, and the weighted household panel retention rate was 75.6%, for a cumulative response rate of 3.0%. The overall margin of sampling error was ±3.3 percentage points at the 95% confidence level including the design effect. In addition, Black and Hispanic respondents were sampled at a higher rate than their proportion of the population for reasons of analysis. The overall margin of sampling error for the 471 completed interviews with Black respondents was ±5.8 percentage points at the 95% confidence level including the design effect.
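The margins of sampling error quoted here follow the usual formula for a proportion, inflated by the design effect of the weighted sample; a small sketch, where the design-effect values are assumptions chosen to roughly reproduce the reported figures:

```python
# Margin of error at 95% confidence for a proportion, inflated by the
# design effect (deff). The deff values below are assumptions chosen to
# roughly reproduce the reported +/-3.3 and +/-5.8 percentage points.
import math

def margin_of_error(n, deff=1.0, p=0.5, z=1.96):
    """Half-width of the 95% CI for a proportion, in percentage points."""
    return 100 * z * math.sqrt(deff * p * (1 - p) / n)

print(f"full sample (n=1714):     +/-{margin_of_error(1714, deff=1.9):.1f} points")   # ~3.3
print(f"Black oversample (n=471): +/-{margin_of_error(471, deff=1.65):.1f} points")  # ~5.8
```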
The overall margin of sampling error for the 438 completed interviews with Hispanic respondents was ±6.6 percentage points at the 95% confidence level including the design effect.

However, nearly 4 in 10 of all Americans view obesity as a larger health risk now than they did before the pandemic, and many more are worried about it personally, particularly Black and Hispanic Americans (45% of Black and Hispanic adults versus 20% of White adults, as seen in Fig. 2). Fig. 3 illustrates that COVID-19 was a motivating factor for 39% of Americans who had attempted weight loss in the past year, a number that grows even higher among Black (43%) and Hispanic adults (51%).

COVID-19 motivates millions to lose weight
Almost 20%, or 28 million, Americans who had attempted weight loss at some point in the last year considered weight loss methods they had not tried before the pandemic. Fig. 4 illustrates the percentages of this group who mentioned which specific methods they considered, including diet and exercise (65%), working with a doctor (32%), taking prescription medications (14%), or having weight loss surgery (14%).

Most Americans think obesity is a more serious problem than COVID-19
After 2 years of living through the pandemic, 68% of Americans consider COVID-19 a serious problem in the United States, and another 19% consider it moderately serious. Those with obesity view COVID-19 as an even bigger problem (73%), as do women (75%) and Black Americans (87%). Nearly two thirds (64%) of Americans say they are paying more attention to their overall health because of COVID-19. Black Americans are more likely to pay more attention to their overall health (78%) than Hispanic (67%) or White adults (60%), who are paying the least attention since the pandemic.

Majority of Americans trying to lose weight
Weight loss remains a struggle for most Americans. Three fourths (76%) have tried to lose weight at some point in their lives, with more than half (58%) currently in the process. The number of people with obesity trying to lose weight is even higher: 91% say they have tried to lose weight in the past and 70% are trying to do so now. Nearly three fourths (73%) consider dieting and exercising to be the most effective method for long-term weight loss, even more effective than involving a doctor (65%) or weight loss surgery (56%), the latter of which has been shown to produce the greatest and most durable weight loss and health benefits among people with obesity and related conditions. Only 23% deemed taking prescription medications effective, followed by dietary supplements (18%). Those with obesity tended to consider methods such as losing weight on their own, with the help of a doctor, dietary counseling with a dietician, and formal exercise and weight loss programs as less effective than did their counterparts without obesity.

Perceptions of weight loss method effectiveness and safety
Weight loss surgery is seen as safe by one third of Americans (33%). More Americans believe someone would have a greater chance of dying from complications of obesity (47%) or COVID-19 (39%) than from weight loss surgery (19%). Nonetheless, more than 85% of those eligible for weight loss surgery based on body mass index have not received a recommendation for it from their doctor.

Reasons for obesity: lack of willpower or genetics?
Americans are split as to whether obesity is the result of lifestyle choices (47%) or of genetic, environmental, and social factors (53%).
Men more than women tend to see obesity as a lifestyle choice resulting from a person's eating habits and lack of exercise (57% versus 38%), whereas women tend to view obesity more as resulting from genetic, environmental, and social factors (62% versus 43% among men). Nearly three fourths (73%) of Americans trying to lose weight cite a lack of willpower as the biggest reason for obesity, despite medical consensus and scientific evidence that genetic, environmental, social, and behavioral factors combined are the primary causes of obesity. Not being able to find healthy foods easily, conveniently, or cheaply was also considered a major or minor barrier by more than half (53%) of those who have tried to lose weight. One in 10 (13%) felt that they could have a better chance of losing weight if weight loss methods were covered by their insurance and considered a lack of insurance coverage a major barrier to treatment.

Medical community and consumers see it differently: is obesity a disease or a risk factor?
The survey found that the public thinks differently about obesity than the medical community. Most Americans view it as a risk factor (61%) for other diseases rather than a disease itself, and nearly three fourths (73%) of those who have tried to lose weight believe obesity is caused by a lack of willpower, percentages that haven't changed much since the 2016 ASMBS/NORC Obesity in America Survey, even though the American Medical Association, the nation's largest physician group, classified obesity as a disease nearly a decade ago [10].

Health professional involvement in addressing obesity
Only 41% report having spoken to their doctor about their weight, with patients much more likely to initiate the discussion than their physicians (60% versus 39%). Nearly 1 in 5 (18%) said COVID-19 increased the chances that they would bring up the issue of their weight, a percentage that grew to nearly a third among Black (28%) and Hispanic (29%) Americans and individuals with obesity (27%).

Discussion
COVID-19 has changed the way millions of Americans think about obesity. Nearly one third say the pandemic has made them more worried than ever about obesity, and 39% of those attempting weight loss said the risks associated with COVID-19 directly contributed to their decision to lose weight. While Americans consider obesity the biggest health threat facing the nation, tied only with cancer, most do not treat the disease as seriously as cancer. Most of those trying new approaches to lose weight during the pandemic turned to diet and exercise alone (65%), which have proven to be insufficient solutions for most people with obesity. Bariatric operations such as gastric bypass and sleeve gastrectomy have been shown to be the most effective and long-lasting treatment for severe obesity [11]. These operations often improve or resolve obesity-related diseases including type 2 diabetes, heart disease, and high blood pressure and lead to significant and durable weight loss. Bariatric surgery has a safety profile comparable to some of the safest and most commonly performed surgeries in the United States, including gallbladder surgery, appendectomy, and knee replacement [12]. Nonetheless, the surgery, also known as metabolic surgery, is among the least utilized treatments in medicine [13]. The COVID-19 pandemic may have changed perceptions and behaviors. Our survey found that about 14%, or 6.4 million, Americans considered having weight loss surgery or using anti-obesity prescription drugs amid the pandemic.
This is a particularly notable finding given that each year only about 1% of eligible patients have weight loss surgery and only 1% to 3% take prescription drugs for obesity. In 2020, the number of bariatric procedures dropped to under 200,000 for the first time since 2015, a decline largely attributable to the COVID-19 restrictions in place at the time on performing what was considered "elective" surgery. During the year prior, an estimated 256,000 procedures were performed, which is still only a fraction of what would be needed to treat the 25 million adults in the United States with severe obesity [14]. One limitation of this study is that it does not represent a true longitudinal cohort study: 2 different populations were surveyed (one 5 years ago and one in 2021). However, both were representative of the U.S. population, as certified by AmeriSpeak, and designed to be representative of American households. Furthermore, the main goal of this paper and survey was to investigate whether COVID-19 has changed the way Americans view obesity and obesity treatments, and the results of our current survey answer this question.

Conclusion

Our hypothesis that the COVID-19 pandemic would heighten Americans' worry about the threat of obesity is supported by our results, as 29% answered that COVID-19 made them worry more about having obesity. COVID-19, as devastating as it has been and continues to be, has created an unprecedented opportunity to turn consideration into action for the millions of people struggling with obesity and thinking about new strategies to address it. We may be on the cusp of a new era in obesity treatment as the increasing dangers of the disease and the lifesaving benefits of evidence-based treatments, such as bariatric surgery, become harder to ignore.
2023-03-11T14:04:04.723Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "4045153ff5183ed6a18c21da26ba460092987d93", "oa_license": null, "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9995298", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "755de9b938d931c6a7fc1e225d670efc1bc4f7dd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
251403274
pes2o/s2orc
v3-fos-license
The Influence of Visual Provenance Representations on Strategies in a Collaborative Hand-off Data Analysis Scenario

Data analysis tasks rarely occur in isolation. Especially in intelligence analysis scenarios, where different experts contribute knowledge to a shared understanding, members must communicate how insights develop to establish common ground among collaborators. The use of provenance to communicate analytic sensemaking carries promise by describing the interactions and summarizing the steps taken to reach insights. Yet, no universal guidelines exist for communicating provenance in different settings. Our work focuses on the presentation of provenance information and the resulting conclusions reached and strategies used by new analysts. In an open-ended, 30-minute, textual exploration scenario, we qualitatively compare how adding different types of provenance information (specifically data coverage and interaction history) affects analysts' confidence in conclusions developed, propensity to repeat work, filtering of data, identification of relevant information, and typical investigation strategies. We see that data coverage (i.e., what was interacted with) provides provenance information without limiting individual investigation freedom. On the other hand, while interaction history (i.e., when something was interacted with) does not significantly encourage more mimicry, it does take more time to understand comfortably, as reflected in less confident conclusions and less relevant information-gathering behaviors. Our results contribute empirical data towards understanding how provenance summarizations can influence analysis behaviors.

INTRODUCTION

Exploratory analysis involves the process of gathering information, identifying patterns, and investigating hypotheses. Due to the open-ended nature of exploratory analysis, there can be uncertainty in understanding the thought processes and the factors contributing to conclusions [5,51]. With multiple ways of working through the data, individuals might arrive at different conclusions. Often this work happens in collaborative sessions or team environments [37,75], so communicating how a discovery is reached is critical to all parties maintaining a shared understanding. By assisting with tracking the analysis history, software can help facilitate collaboration and hand-offs of information across shifts. By tracking the analytic provenance [5,44,52] of an investigation, other analysts can later review the history to reveal what information was considered (or not considered) and how connections in the data led to the development of hypotheses or conclusions. While the potential value of provenance information is strong, core challenges remain with how to process provenance data and design effective representations to support easy human understanding. As has been well documented in the visualization community, the representation of information can have dramatic effects on human interpretation of data [16,20,64,68]. Furthermore, there is relatively limited empirical knowledge of how provenance information is used in hand-off scenarios where a second analyst continues an analysis with provenance records from a prior analyst [75]. And while it is expected that awareness of a previous analyst's thought process would influence the analysis strategies of a new analyst, there is a need to better understand how the approach might be affected.
Additionally, our research studies how different forms of provenance representation might influence continuing analysis. We conducted a user study on how people use provenance information when continuing an analysis after a previous analyst's partial progress. The study is situated in the context of an intelligence analysis scenario with a text data set. To explore different analyst responses to provenance representation, the study compares two types of summarized provenance representations (based on interaction history and data coverage) along with a control condition without explicit provenance information. Through this study, we characterize patterns in participants' analysis findings, interaction behaviors, and approaches to using the provenance information in an information hand-off scenario. We contrast the effects of two provenance representations to show how confidence in one's conclusion relates to the level of detail in handed-off data representations, describe interaction metrics for investigation behaviors, and confirm analysis strategies identified in prior work.

RELATED WORK

Here we describe the nature of sensemaking, the uses of provenance information to represent how users understand problems, and the various techniques used to evaluate and correct for bias in analyst processes.

Collaborative Sensemaking

From the early work of decision theory [10,63], researchers sought to understand how users arrived at a clear conclusion when working with ill-structured data. Researchers continued this work with the study of the sensemaking process, which covers the human tendency to oscillate between foraging for new information and schematizing how it fits with what one already knows [48]. While multiple definitions have evolved from this preliminary work, the general understanding is that sensemaking "involves the ongoing retrospective development of plausible images that rationalize what people are doing" [69]. Of interest to the work discussed in this paper is the Data-Frame model proposed by Klein et al. [34]. In their work, they explain how the way something is introduced can have a direct impact on how users frame and continue their analysis. Intelligence analysis [9,18], medical diagnosis [23,39], and even humble internet research [41,42,49] all share tasks where complex data requires thoughtful consideration and artifact synthesis to arrive at and present a formal conclusion [58]. So, while common analytic settings clearly employ parts of the sensemaking process, they also define operations involving collaborative teaming or hierarchical units [37]. Collaborators can distribute workload, but they need to establish common ground to understand the investigation status and contribute toward the goal [53]. Even without the added complexity of coordination and knowledge sharing among collaborators, sensemaking tasks are often nonlinear by nature. Insights and connections may be discovered independent of a strict method or procedural scaffolding, making it more challenging to systematically describe how concepts build on each other or explain the overall relationships. Add to the combination that collaborating analysts are dealing with multiple layers of uncertainty and trust [55], and the situation becomes even more complicated. Among the many purposes targeted for provenance support in visual analysis applications [50], aid for collaborative work is bolstered by helping people recall what they know, communicate clearly the steps taken, and reproduce past work [58].
As demonstrated by Mathisen et al. [38] in their description of the InsideInsights tool, capturing a user's interactions as they worked and providing an interface for adding annotations improved collaboration and established common ground. This technique of enhancing interaction data with analyst-generated annotations is a common way to help maintain context for an analyst or different audience members [46]. Still, the process of annotating (e.g., writing notes, tagging information) often distracts from the gathering of information because it requires users to synthesize their fuzzy concepts into specific terms that may or may not communicate their exact meaning upon later review [58]. At the same time, the process of exploratory data analysis is inherently dynamic, leading to the requirement for plans to constantly change with the situation [61]. Without the transcription of accurate mental schemes, details can be forgotten, leading to false conclusions and inaccuracies [51,52]. By partnering with computers, human analysts can focus less on annotation tasks, like recording how they arrived at different concepts, and shift their attention toward directing the analysis and hypothesizing relationships between discovered ideas [11,19,49]. In this study, we examine how the representation of a user's process impacts the sensemaking processes of new analysts.

Provenance and Visual Summarization

Analysis tools that capture analytic provenance information aim to improve the understandability [4], reproducibility [47], and transparency [14,28] of insight generation over time by providing the "story" of data exploration and interpretation [46]. Yet, there are a variety of provenance types that serve different purposes depending on the context [29,50]. Provenance can aid in the recall of past work, the verification of others' work, the recovery of past actions, the review and optimization of future analysis, the presentation of new information, and ultimately the communication of insights between collaborators [50]. Visual provenance summarizations describe events that occur over time, and myriad approaches have been demonstrated for both algorithmic summarization and visual representation to ease human interpretation of the workflow, especially in collaborative settings. There is a need for accurate techniques that compress temporal events while matching an appropriate level of temporal granularity and summarization to best serve different audiences. Some techniques focus on preserving the timing and order of events to allow for the review of specific analysis turning points (i.e., History representations) [17,42,75], while others provide a high-level summary of topics reviewed and remove elements of timing completely to make it easy to see what has been explored and what needs further analysis (i.e., Coverage representations) [20,57,68]. Provenance helps collaborators to maintain common ground as they work synchronously [14] or asynchronously [72,75]. To assist in collaborator communication, many provenance visualizations focus on providing a reference to a user's interaction history or data actions using a timeline [75] or branching trees [17,28,41]. Others take a drastically different approach by removing the temporal aspect entirely and instead aim to describe what data was explored rather than when it was explored. By ignoring time in the representation, viewers can focus on the context of data coverage and patterns (e.g., [57,68,72,74]).
These techniques trade a higher level of summarization for less emphasis on analysis step replication and the specific actions taken to arrive at an analysis state. The potential value of provenance summarization features in analysis tools is well justified by prior literature [28,45,50,71,72,74]. However, it is less clear how differences in the way the provenance information is summarized and visualized can affect an analyst's process and decision-making. Numerous studies from the visualization community have demonstrated that differences in visual representation or the addition of new information can influence user bias. For example, Dimara et al. [16] have shown how allowing users to intentionally remove salient data (e.g., outliers) can lead to less bias and more rational decisions. Also, Wall et al. [68] have shown how providing users a summarization of what they have reviewed (i.e., an overview of their data coverage) can increase some users' awareness of unconscious biases, while potentially encouraging others to amplify their biases by intentionally focusing their analysis on specific areas or hypotheses. Considering the common aim of using provenance visualization to support collaboration among multiple analysts [14,56], and the impact that visualization can have on user interpretations [13], there is a need to better understand the ways the availability and representation of provenance information from one analyst may influence another analyst's behaviors, biases, or conclusions. In addressing this gap, our work draws on lessons learned from existing empirical studies of visual design, provenance, and users' analysis choices to further understand how best to provide provenance information to users.

Empirical Studies of Visual Design Influences

Within the visualization community, there is a long history of evaluating the effects of visual tools on user performance, and this extends naturally to provenance representations. Of specific interest are the behavioral effects of making history information available. For example, Zhao et al. [75] focused on the various strategies participants used when examining the work of collaborators, suggesting that different strategies lead to different degrees of investigation completeness. Similar work has evaluated the types of strategies users employ when completing sensemaking tasks [32], and there are concerns that these strategies are influenced by the interface. This becomes quite obvious when considering the types of interactions afforded to users in an interface. For example, tools like Hindsight describe recall provenance through scented widgets to make clear what parts of the data individuals have already explored in more detail [20]. Because the opacity of recently examined parts of the data was lowered, unexplored areas became more prominent, and users were influenced to extend their examination. Xu et al. [72], in a more collaborative setting, similarly invited participants to review what data had been previously inspected and how it was visualized by prior analysts. By spatially distributing others' attempts in a "constellation," finding commonly used data and alternative areas of interest becomes more accessible to the user. This is a direct result of the interface and its influence on user interactions. Consistent with this line of work, multiple studies have shown how cognitive factors (like the anchoring effect [12]) can significantly impact users' insights and exploration in decision-making tasks [15].
It is clear that representations influence analyst behaviors [55]. One of the goals of analytic provenance research is to prevent the effects of belief perseverance [12], base rate biases [40], and misuse of representative heuristics [1], and to help resolve conflicting insights [35]. As an example, visualization techniques have tried to prevent selection bias by showing an overview of how the data has been filtered, along with data type comparisons, to help users recognize when they may be examining a subset of data too closely or with too much emphasis [8]. Similarly, by showing the work already completed and suggesting ways for the analysis to continue, SOMflow helped analysts complete a more thorough investigation [54]. These techniques rely on provenance information to help users recall what has been explored and potentially rectify their cognitive biases. Most applicable to this study, Sarvghad and Tory [56] compared how coverage and timeline representations improved an analyst's accuracy and the amount of data explored when analyzing structured numerical data. To extend their findings, we compare effects in textual data analysis and also inspect the strategies employed. Although there is evidence that provenance representation influences user strategies and performance, there are outstanding questions about how different provenance representations compare, especially in textual data analysis. To understand the influence of provenance representations on continuing open-ended investigations, we conducted a between-subject experiment. Participants were asked to complete an exploratory data analysis task started by a prior investigator, with different representations of provenance information available. We describe the key factors of interest and experimental design in the next section.

Motivation and Study Design

Designs for summarizing analytic provenance can take a wide variety of forms. For the focus of our study, we consider two common classes of provenance summaries as generalizations of designs found in the research literature. Specifically, we distinguish provenance representations into either interaction history or data coverage summaries. Both designs bring uniquely different benefits to analysts in practice. Provenance summaries using an interaction history approach tend to provide a timeline of events or describe which data are processed over time. This type of provenance helps identify critical moments of failure, reproduce results, or provide additional data transparency, but it takes more time to review because there is often more data to make sense of. Unfortunately, while computing systems are capable of capturing interaction events, this process can quickly bloom into large sequences that are challenging to summarize. Many provenance visualization tools do not distill meaning from raw interaction logs, choosing instead to visually represent all interactions or analysis stages in interactive tools to uncover patterns or flows (e.g., [14,28]). In contrast to provenance designs emphasizing the temporal flow of the analysis, we describe a separate generalized class of provenance summary as data coverage designs, which typically represent an overview of what data was explored instead of when it was explored. By compressing time, users can see what has been explored and what remains [56,57,68,72,74]. These techniques provide a higher level of summarization, focusing on providing a sense of context, but lack enough detail to clarify or recreate past work [7].
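To make the distinction between the two classes concrete, the following sketch derives both summary types from the same toy interaction log. This is purely illustrative: the event fields and values are our own assumptions, and the search-based segmentation rule mirrors the delineation described later for the study's History panel rather than any canonical algorithm.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    t: float     # seconds since session start
    kind: str    # "open", "search", or "highlight"
    target: str  # document id, query string, or highlighted term

log = [
    Event(12.0, "search", "arms dealer"),
    Event(20.5, "open", "doc_07"),
    Event(31.2, "highlight", "Nigeria"),
    Event(95.0, "search", "shipment"),
    Event(101.3, "open", "doc_21"),
]

# Data-coverage summary: compress time away and keep only *what* was opened.
coverage = Counter(e.target for e in log if e.kind == "open")

# Interaction-history summary: keep temporal order and cut a new segment
# at every search event, keeping the highlights that occur between searches.
segments, current = [], []
for e in log:
    if e.kind == "search" and current:
        segments.append(current)
        current = []
    current.append(e)
if current:
    segments.append(current)

print(coverage)       # Counter({'doc_07': 1, 'doc_21': 1})
print(len(segments))  # 2 ordered segments
```

The contrast is the crux of the study design: the coverage summary discards order entirely and is cheap to scan, while the history summary preserves order at the cost of leaving more material to review.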
While modern techniques still implement examples of both data coverage and interaction timelines [60,66,75], prior work has placed greater emphasis on using machine learning to extract patterns and assist in the summarization of time [24,27,59,74], and less emphasis on comparative studies investigating the implications of different provenance summaries with people. Questions remain about how best to summarize interaction histories in digestible ways that help analysts by balancing content and cognitive load. Many works have shown that provenance representations influence the analytical behaviors of users [4,13,20,25], yet they do not directly compare the effects of the prototypical provenance representations we discussed. A more direct comparison between interaction history and data coverage would be beneficial, since both techniques summarize and present the past in a digestible way, and future automation techniques would benefit from guidance on selecting the appropriate level of detail for a user's task. Therefore, we designed a between-subject experiment to study how individuals work with different forms of provenance information while completing a textual data investigation. We address the following research questions to help direct our analysis:

1. RQ1: How does the inclusion of analysis history or data coverage influence the conclusions reached by a secondary analyst?
2. RQ2: How do secondary analyst behaviors differ when provided the analysis history or data coverage from a prior analyst?
3. RQ3: How does the inclusion of analysis history or data coverage influence the types of strategies a secondary analyst uses to solve the problem?

As a basis for the study, we used a browser-based, direct manipulation interface to display documents and record participant interactions. Since provenance representations are commonly used in collaborative data analysis scenarios, we simulated a hand-off scenario where users pick up and finish the analysis started by someone else. In an online study, participants were asked to review a set of documents and describe any associations they were able to make. Provenance representations informed by the same prior analyst allowed for comparing analyst conclusions and strategies. Based on prior work with provenance, we hypothesized two main effects on behavior in collaborative hand-off cases. We expected history summaries to encourage the continuing analyst to engage in more verification of the prior analyst's progress due to the inclusion of a more complete record of the prior analyst's work (Hypothesis H1). And since data coverage summaries provide context about what has been explored at a glance, we expected users to quickly understand the prior analysis and explore other topics (Hypothesis H2). With a combination of a think-aloud protocol [43], screen recordings, interaction logs, and semi-structured interviews, we sought to identify differences in the conclusions participants reached and the strategies they used, to better characterize the impact of provenance on collaborative analysis.

Visual Analysis Task and Tool

In collaborative data analysis tasks, users work together to uncover relationships and share results. Frequently, these analysis tasks require users to communicate ill-structured and potentially relevant information for future analysts to recognize and use.
For our study of how a second analyst picks up after a prior analyst makes partial progress, we used a synthetic intelligence analysis scenario based on an existing publicly available dataset from the first mini challenge of the 2010 VAST Challenge data series [26]. This dataset, and others from the VAST Challenge, are commonly used as realistic proxies to simulate analysis tasks for visualization research (e.g., [56,75]). The data consists of fictional phone transcripts, email correspondence, forum posts, newspaper articles, and other intelligence reports about fictional illegal arms traders. Of the 103 documents in the dataset, only 16 contain information relevant to the solution we asked participants to find. Within the tool, users could flexibly move and collapse documents as they explored, similar to other analysis workspace tools [2,33,52]. Specifically, users were tasked with determining if illegal arms traders were responsible for the spread of a mysterious pandemic. Participants could right-click to access a context menu. From this menu, they could trigger a search event for the term under their cursor or type out their own query in a text field. Searches highlighted the set of document title bars that contained exact string matches to the queried text. Critical to this experiment was identifying when and what kinds of information were revealed to users. The tool actively logged user events (e.g., document opens, mouse enters and exits, and searches conducted) as participants explored the dataset. Each logged interaction was recorded with the time since the session began, the event type, the element's identifier, and other relevant information (position on screen, content of search, etc.). All participants were given the same set of documents, instructions, a "Summary for Supervisor" field, and a note from the prior analyst (see Figure 1). The instructions further reiterated the scenario introduced to the participants, and the "Summary for Supervisor" was a blank note where participants would type out their conclusions at the end of the session. We describe the provenance representations and their interactions in the next section.

Conditions

As described in Section 2, we define two general categories of provenance representations based on how they communicate time. To further explore how data coverage and interaction histories impact investigation conclusions, behaviors, and strategies, we examined prototypical representations of each. Below, we describe the different provenance representations used in the experiment. To serve as a Control, all participants were given a textual description designed to mimic the series of conclusions or cursory annotations a prior analyst might string together while completing the same analysis scenario (i.e., the "Notes from Analyst A"). This note served as an easy way to provide context and offer potential entry points for a participant's data exploration. Inspired by similar methodologies [56,75], the note was based on the interaction log of a researcher's pantomimed analysis that followed specific details through documents to arrive at a partially correct conclusion. Several statements were intentionally hedged to provide ample openness for where participants could begin. In the pantomimed interactions, 15 documents were opened, along with 31 searches and 53 highlights. Generally, the referenced documents and approach would uncover the majority of details required to solve the whole solution (i.e., 6/16 documents).
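Stepping back to the tool mechanics for a moment, the exact-match search and the event logging described above are simple enough to sketch. The snippet below is a minimal reconstruction under our own assumptions (all names and record fields are invented); it is not the study tool's actual code.

```python
import time

interaction_log = []

def log_event(session_start: float, kind: str, target: str, **extra) -> None:
    """Record one interaction: time offset since the session began, event
    type, element identifier, and any extra detail (position, query, ...)."""
    interaction_log.append(
        {"t": time.time() - session_start, "kind": kind, "target": target, **extra}
    )

def run_search(session_start: float, query: str, documents: dict) -> list:
    """Log the search, then return ids of documents whose text contains an
    exact string match for the query (the rule used to highlight title bars)."""
    log_event(session_start, "search", "search-box", query=query)
    return [doc_id for doc_id, text in documents.items() if query in text]

# Example: a search issued three minutes into a session.
start = time.time() - 180
hits = run_search(start, "arms", {"doc_07": "...illegal arms traders...", "doc_08": "..."})
```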
These participants were not given additional panels to filter the dataset and had to rely on the search tool described earlier to find information and solve the scenario. We used this same interaction history to construct the other provenance representations discussed below. Some participants were additionally given a Coverage representation (Figure 2 A) of the data explored by the prior analyst. This view showed, as a list with miniature bars, the countries that received the most attention from the prior analyst. Each country's bar displayed the ratio of documents explored by the prior analyst, and the bars were arranged in descending order by the country's mention frequency. The original dataset [26] did not have labels for the countries mentioned in a document, so these had to be hand-labeled by the researchers. Intending to simulate how a hypothetical tool would work, the researchers attributed the first city mentioned in a document to its corresponding country. These hand-labeled countries were also added to the preamble of each document to help balance the conditions; this way, the other conditions could also filter for specific countries with the search tool. The Coverage panel made these country-focused searches conveniently accessible by displaying them as a list. By clicking a country in the list, a "tool-use" event would be logged, and the affiliated documents would be revealed by coloring the documents' title bars, distinguishing those explored by the prior analyst from those not reviewed. Clicking a selected country again removed the document coloring. Participants could select a country to filter the documents and help direct their investigation. Other participants were provided a History representation (Figure 2 B) that summarized the steps the prior analyst took to complete their task. This view gathered and displayed the searches and highlights of analyst A as different segments of the analysis. The interactions from the baseline log were augmented with a handful of additional highlight terms or relevant search terms to provide a bit more content for the users of the History panel. Participants could scan through the History panel to get a feel for the general terms (searches) and evidence (highlights) the prior analyst cared about over time. While the segmentation was done manually, it followed a systematic rule: segments were delineated based on a search event and contained the corresponding highlight events that took place between searches. In total, there were 11 segments, each labeled with the number of documents reviewed, as well as a small timeline in the right corner of each segment to visualize its duration and placement in the prior analysis. Much like the Coverage representation, when a participant clicked a segment, the corresponding documents from that segment would be revealed by changing the color of the affiliated document titles. Clicking a search or highlight term would run the interface's search command and simulate the results the prior analyst would have seen. Clicking a segment again removed the applied colors. Participants could select segments to filter the documents and help review aspects of the prior analysis.

Procedure

This research was approved by the organization's institutional review board (IRB). Participants joined a virtual meeting room and were asked to complete a demographic questionnaire capturing age, gender, academic program, and self-report measures of their ability to complete analysis tasks and their communication abilities.
The experimenter then explained the web application interface and its functions to participants using a set of slides via screen-share. Within the tutorial slides, participants were introduced to the think-aloud protocol and then asked to demonstrate the technique with a short, irrelevant document. The researcher offered feedback and ways to help improve their think-aloud (e.g., they were asked to read aloud and verbalize their plans for what they would do next). Also, because it is known that first impressions can have a large degree of influence over what people focus on [62], all participants were read the same starting scenario (see Section 3.2) to control for variations in wording. Participants were invited to ask questions about interactions throughout the tutorial, and a researcher was present with them in the interface to answer questions about interface functions as they worked. Participants were not told to explicitly "strategize," but rather to "choose what to read with intention" because "they would not have enough time to read everything." With a screen recording underway, participants were given 30 minutes to complete the task and asked to think aloud [21] as they read and contemplated what associations they saw, with the expectation that reading aloud would encourage deeper reflection [6]. At 10, 20, 25, and 30 minutes, participants were warned about the time elapsed and reminded of their task: "Try to identify what associations may exist and prepare your summary for your supervisor." After the analysis concluded, a post-task interview (included in the session recording) helped capture their mental model and opinions of the tools they used. With their final analysis workspace still visible, participants were asked a series of semi-structured interview questions to further specify their understanding. These questions asked participants to describe the relationships they were aware of, identify the retrospective strategies they used to arrive at their conclusions, and share their thoughts about the prior analyst's work and the provenance views as applicable. Critically, they were asked to make a judgment on the relationship of arms dealing to the Nigerian disease (referred to as their conclusion).

Participants

We recruited 41 undergraduate university students from an upper-level computer science course as participants in the study. Participants were compensated with course credits. Data from five participants were excluded from analysis due to technical problems, communication uncertainty because of language barriers, or misunderstanding of task instructions. The remaining 36 participants were initially distributed randomly; researchers then assigned later participants to the smallest groups to finish with experimental groups of equal size (12 participants per condition). Fourteen participants (38%) identified as female, and 1 (3%) identified as non-binary/third-gender. All but 6 students (83%) were completing a degree in computer science, computer engineering, or software engineering. The majority of participants were between 18 and 24 years of age (77%), 5 participants (14%) were within the 25-34 age category, and 3 participants (8%) were older than 35. We asked participants to self-report their ability to complete data analysis on a scale from 1-10; an average reported score of 3.14 signified general inexperience in completing analysis tasks, indicating the participant sample might be considered analogous to novice analysts with no or limited experience.
RESULTS

In this section, we present the results and insights drawn from our quantitative and qualitative data analysis. How people find and interact with the data can directly impact how they arrive at their conclusions and what conclusions they can reach. To understand the ways people use provenance representations, we looked at the analysis behaviors of participants who picked up from a prior analyst's progress. We studied behaviors captured through video recordings, think-aloud comments, interaction logs, and post-study interview responses.

User Findings and Confidence

We studied whether the availability of the provenance views affected the participants' findings and final conclusions (RQ1) to look for signs of early bias influencing analysts' ability to correct their preconceived expectations. A single author scored each participant's written conclusions using a four-level rubric according to four factors: the accuracy of their reported findings; the recognition of errors made in the prior analysis; the number of findings and amount of detail provided; and the depth of relationships or connections among entities and events in the data. These were qualities in which we expected differences based on provenance representation, yet the scores varied greatly. Due especially to the open-ended nature of user-directed analysis, the diversity of user aptitude with analysis, and the various ways participants could write their findings, our analysis did not find systematic differences in participants' written conclusions caused by the provenance conditions. For example, when summarizing key findings, some participants formatted a formal report in prose, while others opted for a series of evidentiary bullet points. Often the written conclusion left out parts of the analysis and therefore was not a fair representation of the areas a participant explored. For a similar reason, we do not report on participant encounters with the 16 solution-relevant documents, because we wanted to understand what information "stuck" and was reported in their concluding thoughts. Due to the range of individual differences among participants and personal styles of reporting, the quality and completeness of written conclusions varied greatly. We did not find meaningful differences in analysts' written conclusions. Therefore, we prioritized the analysis of data from the personalized post-study interview questions as a basis for identifying differences in participant conclusions and analysis behavior. As part of this analysis, we assessed participants' confidence when describing their final answer in the post-task interview. High confidence implies that the user has convinced themselves of a specific relationship, and we wanted to see if that behavior varied systematically with the experimental conditions. One author coded participants' interview responses to the question regarding their final answer about the possible existence of a relationship in the data (i.e., the main investigation goal in the analysis scenario). With support from a second author to review the handful of edge cases, we separated participant responses into two categories (high and low confidence) based on the number of hedge statements in their verbal conclusion. To do this, in the interview, we asked participants if they identified a relationship between the concepts they were investigating. Participants who stated clearly that there was or was not a relationship were designated as high confidence (55%).
Alternatively, those who suggested only a potential relationship, expressed a need for additional time to review the data, or uttered more than two hedged phrases (e.g., "there might be...," "I guess...," "I think...") were placed in the low confidence category (45%). Upon first inspection, Figure 3 shows a strong difference in participant confidence. We see a main effect by condition with Fisher's exact test (p < 0.05). Yet, post hoc comparisons with Bonferroni correction fail to identify the specific differences between conditions. While we cannot state that one condition varied significantly from the others, those with a Coverage representation tend to exhibit conclusions more similar to the Control than those with the History representation. This is to say that the majority of conclusion qualities are not dramatically impacted by the representation of provenance information. We do see some impact on participants' confidence when they verbally explain their conclusion, but without significant interaction effects, we instead turn our attention to the behaviors exhibited by participants to better explain how their analyses differed.

Quantitative Indicators of Behaviors

The way individuals interact with an interface indicates how they make sense of the data. Given the variety of user activities, we chose to use quantitative indicators to more clearly describe behavior patterns in participant approaches (RQ2). As captured in their interaction logs, we turn to the various analysis events and actions participants executed in the interface as a quantitative proxy for their understanding and to tease out the influence of provenance representation on user analysis behaviors. We look at three key measures of interest: the degree of similarity to the prior analyst's review, the degree of difference from it, and the rate of filtering for specific information. Of concern with the inclusion of provenance information is its influence on how much repetition occurs in subsequent analyses, or the amount of similarity to the prior analyst. While verification can be beneficial when auditing the veracity of a result, in most cases repeated work is not encouraged. We wanted to compare the analysis of each participant to the referenced prior analysis to understand if the provided provenance representation influenced how they addressed the problem of "continuing the analysis." If participants chose to look into the same concepts as the prior analyst, how similar was their review (overlap), and if they looked at different things, how unique was their investigation (independence)? To quantify the amount of overlap and independence in participants' behaviors, we looked at which documents were opened in a participant's interaction history. Since critical information in the underlying dataset is separated into individual documents, we can determine how similar one investigation was to another by examining the set of documents opened. Both provenance representations were based on the same set of 15 documents, and we calculated two separate but similar ratios (overlap and independence) from the sets of documents participants reviewed. We calculated a participant's overlap ratio as the size of the intersection between the set of documents the participant reviewed and the set reviewed by the prior analyst, divided by the number of documents the prior analyst reviewed: overlap = |{User} ∩ {Analyst}| / |{Analyst}|. An overlap ratio of 1.0 implies that a participant saw all of the documents that analyst A reviewed.
The independence ratio captures the proportion of documents a participant reviewed that were different from the set reviewed by the prior analyst: independence = |{User} − {Analyst}| / |{User}|. An independence ratio of 1.0 implies that a participant only reviewed documents that the prior analyst did not. Although these ratios examine similar properties of participant behavior, they are not inverses of each other because they normalize over different document sets. When we compare the degree of overlap among participants (Figure 4a), we do not see a strong difference between the conditions, implying that the amount of overlap a participant may have with a prior analysis has more to do with individual differences in approach and investigation intentions. But some expected trends are visible. For example, those in the Control condition straddle the center (∼50%), as though they were unaware of which documents were reviewed by the prior analyst. We also see more spread in the Coverage condition, likely because these participants could filter documents and choose to follow or avoid the prior analysis. Finally, the History condition has the highest median overlap ratio, likely because their representation emphasized how the prior analyst worked through documents. We see much different behavior among participants' ratios when we examine how independent their analyses were from the prior analyst (Figure 4b). A Kruskal-Wallis test revealed a significant difference in independence ratios (H(2) = 7.01, p < 0.05). For those in the Control condition, more than half of the documents they reviewed had not been explored by the prior analyst. This result can be explained by the fact that they had no idea which documents were specifically reviewed by the prior analyst, and they opened many more documents on average. With more documents opened, the likelihood of a document belonging to the prior analyst's set goes down, leading to a higher independence ratio because many more of the opened documents had not been reviewed by the analyst. We also see a long tail for those in the History condition. A pairwise post-hoc Dunn test with Bonferroni adjustments was significant only for History versus Control (p < 0.05), suggesting that History participants generally spent their sessions reviewing mostly the same documents as the prior analyst. They also looked at fewer documents overall. With fewer documents reviewed, and an emphasis on documents opened by the prior analyst, these participants were more likely to maintain lower independence ratios. Overall, we see that while the degree of overlap is not significant, there appear to be some differences in the diversity of documents participants are exposed to when given various provenance representations. This is to say that the affordances provided by different tools can influence which information participants review. Another aspect of our analysis relates to how participants direct their investigation. We looked at the total number of interactions and divided it by the length of a participant's session to determine an average rate of interaction. The interaction rate (see Figure 4c) can serve as a proxy for the level of control a participant has over the interface and their investigation. Participants with exceptionally low interaction rates may be taking a long time to review documents or working very methodically, whereas exceptionally high interaction rates may imply they have opened numerous documents at once or are not spending enough time understanding each document.
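Both ratios, and the interaction rate, reduce to a few lines of set arithmetic; the sketch below also shows the kind of Kruskal-Wallis comparison reported above for the independence ratios. All document sets and per-condition numbers here are invented for illustration.

```python
from scipy.stats import kruskal

def overlap(user_docs: set, analyst_docs: set) -> float:
    """Share of the prior analyst's documents that the participant also opened."""
    return len(user_docs & analyst_docs) / len(analyst_docs)

def independence(user_docs: set, analyst_docs: set) -> float:
    """Share of the participant's documents that the prior analyst never opened."""
    return len(user_docs - analyst_docs) / len(user_docs)

def interaction_rate(n_interactions: int, session_seconds: float) -> float:
    return n_interactions / session_seconds

analyst = {"d01", "d02", "d03", "d04"}
participant = {"d01", "d02", "d09"}
print(overlap(participant, analyst))       # 0.5  (2 of the analyst's 4 documents)
print(independence(participant, analyst))  # 0.33 (1 of the participant's 3 documents)

# Per-condition independence ratios compared with a Kruskal-Wallis test:
control  = [0.70, 0.65, 0.80, 0.55, 0.75]
coverage = [0.50, 0.60, 0.45, 0.66, 0.58]
history  = [0.30, 0.25, 0.45, 0.35, 0.40]
H, p = kruskal(control, coverage, history)
```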
For example, the Control condition opened the most documents at the median (41.5 documents) while maintaining the slowest median interaction rate (0.95 interactions/sec). This seemingly contradictory pattern may suggest that those in the Control condition opened many more documents in an attempt to understand the data, but worked through the documents and the task slowly. Still, uncovering more descriptive interpretations requires the review of additional metrics. For example, we can turn to the frequency of filtering events to understand how participants search and reduce the data space (see Figure 4d).

Fig. 4. Filtering events refer to interactions that help users find documents and are the combination of clicks in a provenance representation and search events. Interaction rate is calculated as the ratio of total interactions over the length of analysis (i.e., interactions/sec).

While the differences are not significant, those in the History condition completed more filtering events at the median (18.0) while also looking at the fewest documents (28.5). This suggests that those in the History condition were more likely to be methodical and consistent, since they were not exposed to as many documents and spent more of their session filtering and refining their investigation criteria. On the other hand, those in the Coverage condition appear to have overall investigation behaviors most similar to the Control, but also have the fastest interaction rate. The Coverage representation may have allowed participants to work faster and more independently. To further describe the ways participants were exposed to information, we examined the timing of users' searches and provenance representation usage. In this case, we define the term filtering to describe the sum of searches and provenance panel usage per participant. Although those in the Control condition did not have access to a provenance representation, they relied more heavily on search to find information. This is to say that search and panel usage both helped to reduce the search space in the dataset, and their combination provides a more universal comparison. We examine when these behaviors occur to understand when users are selecting and directing their investigations. Since each participant filtered the data a different number of times, we calculate a cumulative filter percentage over time. All but 2 participants eventually reached 100% of their filtering events within their 30-minute session, but ultimately, we draw attention to the rate of change. In Figure 5, there is substantial overlap in when participants completed their filtering behaviors. To clarify the trends, we fit a fourth-degree polynomial, since it gave the highest r² coefficient (0.745) and helps characterize the behaviors observed.

Fig. 5. The thicker lines represent a fourth-degree polynomial of trends by condition. Although the History condition takes some time to get started, these participants conduct more filtering earlier before tapering off later, whereas the Coverage and Control conditions used more filtering later in time, and some continued to filter right up to the end.

As evidenced in the modeled regression, participants from the History condition complete about 50% of their filtering events by about 12 minutes, while it takes about 17 and almost 19 minutes for the Coverage and Control conditions, respectively, to conduct half of their filtering events. We counted the number of participants who had completed at least half of their filtering events by the halfway point in the investigation (15 minutes) as a proxy for how actively participants were selecting data to review.
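Before turning to the significance tests, the filtering-curve analysis just described can be sketched compactly: normalize each participant's cumulative filter count, average per condition, and fit a fourth-degree polynomial. The timestamps below are invented, and numpy's polyfit stands in for whatever fitting implementation the authors used.

```python
import numpy as np

def cumulative_filter_pct(event_times, session_len=1800.0, bins=61):
    """Fraction of a participant's filtering events completed by each time step."""
    t = np.linspace(0.0, session_len, bins)
    events = np.sort(np.asarray(event_times, dtype=float))
    return t, np.searchsorted(events, t, side="right") / len(events)

# Invented filter-event timestamps (in seconds) for two sessions in one condition.
sessions = [[60, 200, 350, 700, 900], [120, 400, 500, 650, 1500, 1700]]
t = np.linspace(0.0, 1800.0, 61)
mean_curve = np.mean([cumulative_filter_pct(s)[1] for s in sessions], axis=0)

coeffs = np.polyfit(t, mean_curve, deg=4)  # fourth-degree trend line
trend = np.polyval(coeffs, t)
r2 = 1 - np.sum((mean_curve - trend) ** 2) / np.sum((mean_curve - mean_curve.mean()) ** 2)

# The 15-minute proxy: had at least half of the filtering happened by 900 s?
halfway_by_15min = mean_curve[t.searchsorted(900.0)] >= 0.5
```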
Because the provenance representations provided a more convenient (i.e., single-click) way to filter the data, we see that both the History and Coverage conditions completed the majority of their filter interactions before those in the Control condition. A Fisher's exact test confirmed that there were differences in the rate of filtering among the different conditions (p < 0.001). Further post hoc pairwise Fisher's exact tests with Bonferroni corrections confirm that the amount of filtering completed by those in the History condition differed from the Control (p = 0.001) and Coverage conditions (p < 0.001), but no difference was detected between the Coverage and Control conditions at 15 minutes. More specifically, those in the History condition complete the majority of their filtering behaviors before the other conditions (and in the first half of the session). We believe this difference is due to the interaction pattern required for the use of the History representation. Since information about which documents were reviewed is only accessible after clicking a segment, participants from the History condition would commonly click through multiple segments at once in pursuit of a subset of documents to review, instead of intentionally selecting a segment to review or independently typing out their own search query (discussed more in Section 6). This pattern of clicking through each segment in time, or scanning the resulting documents to find something previously reviewed, likely led to an increase in filtering events overall (median 18.0 events) and also a higher frequency of events earlier in a session, while participants were still gathering information. On the other hand, it appears that the addition of Coverage information did not significantly change when participants would be selecting and filtering data (as compared to the Control). Ultimately, we see some quantitative differences in participant interactions, including how little content those in the History condition were able to review and how frequently they were filtering the dataset. We now take these quantitative differences and construct some qualitative definitions for user strategies.

Qualitative Analysis of Strategies

The strategies employed by participants during an open-ended evaluation likely depend on the kinds of information they are presented with (RQ3). Comparing and analyzing the strategies users take when approaching their exploration can shed light on how provenance information is used when investigating relationships in the data. In alignment with the work on situated planning [61], of the participants who verbalized a preparatory plan, most plans were vague and often consisted of only 2-3 steps. These early plans were interesting to us, as we wanted to see if the availability of provenance information would influence how plans were made and adapted (see RQ3). With analysis plans being constantly adjusted and renegotiated as new information is acquired, we ultimately simplified our analysis by focusing on initial plans and the actions observed in the first 10 minutes of activity.
From participants' think-aloud and retrospective interviews, we tried to reconstruct participants' intentions as they began their analysis. To do this, one author analyzed the data by conducting two rounds of open coding to establish a set of emergent features from user actions and analysis foci. In a similar data analysis task, Zhao et al. [75] defined a set of analysis strategies derived from their own qualitative coding. Their work compared the strategies participants used when constructing knowledge graphs with or without an interactive state timeline. Our work differs in that we compare different provenance representations, while they only evaluated differences with or without provenance features. While their study offered participants different data analysis affordances, we borrow similar qualitative analysis steps in our work. From their identified strategy definitions, we cross-referenced our most common tags and refined a set of similar analysis strategies. In contrast, ours are more focused on the amount of similarity to the original analyst's investigation as well as our users' commitment to their investigation plan. We clarify and define our categories below.

• Keyword Browsing - These participants set a plan for how they wanted to understand the data before they began or shortly after reviewing the note provided by the prior analyst. They made an intentional plan, maintained attention on the planned areas of interest, and frequently hypothesized different relationships. They had a medium amount of overlap with the prior analyst and tended to have a higher independence score, since they were trying to extend the analysis and were more likely to take an active role in the investigation.

• Random Access - These participants had less structure to their investigations, often working without stating how they wanted to systematically approach the problem, and spent the majority of their time gathering information instead of synthesizing hypotheses. These participants bounced around the dataset with about equal amounts of overlap with, and independence from, the prior analyst.

• Reviewing Origin - These participants expressed a plan to verify the work of the prior analyst. Often this was motivated by a lack of trust in the conclusions made in the analyst's summary; others started verifying the prior analyst's work and ran out of time for their own investigation. These participants therefore have noticeably lower independence scores and higher amounts of overlap.

• Starting Over - These participants explicitly stated that they wanted to work independent of any influence from the prior analyst. They were explicitly worried about bias or interested in later comparing their own understanding with the work by the prior analyst. These participants intentionally closed the guidance from the prior analyst and waited until at least 15 minutes into the task before they reviewed the analyst's summary or the provenance panels.

Fig. 7. There are some interaction patterns among the participants in the various strategy types. Due to the limited number of participants in the starting over group (n=4), we do not compute statistical differences and instead focus on visual analysis and descriptive statistics. For details on the factors described, see Figure 4.

As reported in Figure 7, there are some interesting differences among users and their strategies.
With only a handful of participants in the starting over group (n=4), we rely on descriptive statistics and visual analysis to describe the groups and their differences instead of standard statistical methods. To begin, all groups appear to open the same number of documents, except for those in the starting over group (45.5 document open events). Those in the starting over group also had the highest median interaction rate (1.20 interactions per second). Much of this is likely due to the small number of participants in the group, but most of these participants worked without a stated plan and typically focused on gathering information generally. Similar behaviors were seen in the random access group. Without a plan, these participants also had high median interaction rates (1.07 interactions per second) and the highest interquartile range (0.90-1.74 interactions per second). This can be explained by the lack of intention and purposeful activity these participants appeared to exhibit during the analysis, and the reduced specificity of what they were looking for. Frequently, these participants had bursts of activity: they would gather information slowly until they found a recognizable piece of information, then open multiple documents they had already reviewed to find interrelationships and similarities. Without a strong plan set at the beginning, and with generally less filtering to help find the information of interest, these participants were most likely to require more review of the data. The other two categories had more explicit intentions for what they wanted to review. Those in the reviewing origin group completed the most filtering events at the median (23.0 events). Likely due to the high proportion of participants from the History condition, this characteristically high number of filtering events is a direct result of using the History panel to review various segments of time; users had to trigger a filtering event each time they wanted to examine the documents reviewed in each segment of the prior analysis. These participants also maintained a high degree of overlap with the prior analyst (80%) and the lowest median independence (45%). On the other hand, those in the keyword browsing group also had a stated plan but a much lower degree of overlap with the prior analyst (only 40%). Most of these participants intended to explore key aspects of the data described by the prior analyst and tended to open more documents the prior analyst had not touched. We noticed that those who used a keyword browsing strategy generally uncovered a more complete and accurate picture of the dataset. We believe this is because they independently verified aspects of interest and followed keywords instead of the work of the prior analyst. Peculiarly, while there is not a significant difference, the distribution of strategies almost appears to follow the three conditions (see Figure 6): keyword browsing was mostly associated with those in the Coverage condition, random access mostly with members of the Control condition, and the reviewing origin group mostly with the History condition. Yet, without strong evidence that provenance representations influence the type of strategy participants employ when planning their analysis, we cannot conclude that these strategies are determined by the affordances of the interface.

DISCUSSION

Our work studies how providing provenance summaries can influence future investigators who pick up a partial analysis from a prior analyst.
DISCUSSION
Our work studies how providing provenance summaries can influence future investigators who pick up a partial analysis from a prior analyst. We see evidence that: (1) listing interaction histories can result in a secondary sensemaking task of trying to understand the earlier analysis, (2) individual differences introduce a large amount of variation in how people choose to utilize provenance summaries, and (3) the types of strategies exhibited by the continuing analysts show some similarities with analytic strategies predicted by prior work. Considering the possible strategies participants adopted, we expected that those who reviewed the prior analyst's progress would start at the beginning of the prior analyst's work and then select relevant information to take into their own analysis and establish their own understanding. Yet, the observed results show the opposite trend. For example, the majority of participants in the reviewing origin group were also from the History condition. Interestingly, our interviews and observations of participants in the History condition found they often felt overwhelmed by the extra available information, and we saw that the majority of the History condition had low confidence in their conclusions (see Figure 3). This corresponds with the frequent triggering and review interaction pattern required to see the documents reviewed by the prior analyst. This is also evidenced by the group's more frequent filtering behavior prior to 15 minutes, which may be due to the amount of information they were tasked to review in the limited time. We find supporting evidence for H1, as it appears to take more time to construct an accurate mental model because participants were not only exploring the events in the data but also working to understand how the prior analyst approached the problem. The broader implication is that History information may exaggerate the task's difficulty by making more content accessible for review without summarizing enough to serve users hoping to pick up a prior analysis. On the other hand, providing Coverage information to users does not appear especially beneficial beyond what was provided by the Control. While a greater proportion of participants with a Coverage representation may maintain keyword browsing strategies, there are no significant deviations from the Control, thus rejecting H2. In open-ended analysis tasks, there are no clear paths that lead to a solution. Among the various metrics collected in our study to characterize analysis, we see the breadth of approaches through the large degree of variation and spread in the data. While some significant differences among conditions emerge (i.e., for the degree of investigation independence), many cross-condition differences may be hidden by the high variability in individual preferences and the way participants adapted their approaches in situ. Though the experiment provided the same instructions for the analysis tasks for all participants, four distinct types of approach clearly emerged. While a trend toward a specific strategy appears to align with the conditions (e.g., in Figure 6 we see 6 participants from each condition associated with different strategies), due to the lack of significance we cannot conclude a direct relationship between provenance usage and strategy employed. A participant's choice to verify the prior work (i.e., reviewing origin) or follow their own set of keywords (i.e., keyword browsing) is likely a result of their personal experience completing analysis tasks, their interest in referencing provenance information, and other factors, and not of how provenance information is provided.
Knowledge of this topic may be expanded by future work that considers users' preconceptions and other factors that influence how users develop and enact their strategies. Finally, among the set of strategies we see, there is evidence that the techniques are in alignment with earlier work. We found participants' initial strategies were markedly similar to the groupings found by Zhao et al. [75]. In a similar data hand-off task with an interaction-history-like provenance representation, they identified five typical strategies. Their strategies "random access," "tracing from origin," and "starting over" closely align with our random access, reviewing origin, and starting over categories, respectively. The key difference is that we have combined their "naive browsing" and "hubs and bridges" categories into one group of keyword browsing. While they described how some participants adjusted and shifted their strategies, we did not see as many transitions in our shorter 30-minute analysis session, likely due to the limited time participants had to complete the task. Yet, the definitions they used to describe the various interaction strategies and techniques were in alignment with the set of strategies we observed our participants employ. Since their set of categories was also based on traditional, non-collaborative sensemaking strategies, our work further reinforces the idea that hand-off strategies are similar to other data analysis and sensemaking techniques in other settings [32]. Drawing on work on creative collaboration, one way we may learn more from these analysis tasks would be to observe strategies employed in more creative scenarios and to study user interactions longitudinally [22]. We also see further evidence in support of the work by Sacha et al. [55]. They identified how skeptical users will only begin verifying another's work if they see anomalies or have hypotheses about where mistakes were made. Those who used a reviewing origin strategy were often intrigued by some aspect of the prior analysis and sought to resolve these needs for evidence by reviewing many of the same documents the prior analyst had already reviewed. This led to higher ratios of overlap and less independence from the prior analysis. With strategies similar to those identified in prior work [75], our findings further reinforce that a handful of common strategies exist across different analysis tasks that seem to be based on a user's situated approach.
Limitations
Our study of how users pick up an analysis with provenance information from prior collaborators was based on a single analysis scenario and targeted participants with limited data analysis experience. To build further knowledge on the topic, follow-up research is needed with additional data sets and with participants with varying backgrounds from different collaborative data analysis communities like intelligence operators, medical teams, and academic researchers. Studies could also consider how differences in participants' abilities, problem-solving aptitude, or preferences for particular analytic strategies might influence different approaches or interaction patterns. A challenge of working with open-ended sensemaking tasks is that a user's conclusions may not fit within certain bounds of available conclusions and captured data metrics. We see this in capturing participant conclusions.
We set out to capture participant accuracy, error correction, findings made, amount of detail, and depth of relationships, but the task complexity and our measures lacked the fidelity to capture meaningful differences. In future work, greater care and tighter constraints should be applied to more formally compare these factors and the influence of provenance. In this work, the researchers described the tool and scenario to participants. Due to the natural variation in speech, there are potential confounds introduced by different inflections being interpreted differently by different users. While all participants were read the same statements, future work could better control for these effects by using a prerecorded procedure or expanding to additional datasets with different contexts to help generalize findings. While our work introduced the comparison of two prototypical techniques used in the literature, there are more ways of representing past work whose influence on user performance ought to be explored. For example, some work focuses on generating textual summaries [31,70], while others show branching timelines [8,17,65,67] that communicate how analysis adapts, transforms, and evolves. Still others use graphs and networks as a way of providing concept maps [36,73,75], while others design comics as a technique for summarizing segments of time [3,30]. Questions remain not only about the representation provenance should take, but also about the level of detail at which provenance should be maintained. How these techniques compare in their influence on user performance, as well as variations in the level of detail, ought to be explored in the future.
CONCLUSION
In this paper, we examine the effects of provenance representations on future investigator behaviors. In an open-ended textual data analysis task, users were given different kinds of provenance visualizations and asked to pick up the analysis. We examine the downstream effects of two prototypical provenance representations for collaborative sensemaking (i.e., Coverage and History). In line with the findings of Kang et al. [32], while provenance representations do not appear to have significant impacts on user strategy, our work suggests that there are benefits to providing a succinct representation of provenance to help users pick up where other users left off. While both representations take users time to situate their understanding, we see evidence that the History representation takes more time to comprehend. Because it does not simplify the prior analysis as well as the Coverage representation, the History representation appears to introduce an additional sensemaking task for participants. This appears to be especially true when the purpose of provenance is to collaboratively communicate [50]. It would be interesting to see how these prototypical representations behave when different analysis tasks are evaluated. Our results imply that provenance summaries that reduce the complexity of the analysis will be beneficial in hand-off analysis scenarios. To prevent information overload, perhaps dynamic, user-defined levels of detail in provenance would shed additional light on design guidelines for the analytic provenance community. These results contribute to the refinement of design guidelines for provenance representations and further emphasize the need for provenance summary techniques.
2022-08-09T01:16:30.644Z
2022-08-08T00:00:00.000
{ "year": 2022, "sha1": "b4fae1d8013e7b003d6fb664decded5901f5b870", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b4fae1d8013e7b003d6fb664decded5901f5b870", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
1660483
pes2o/s2orc
v3-fos-license
Understanding the Degradation of Hominid Gene Control
Peter D. Keightley, Martin J. Lercher, Adam Eyre-Walker
Recently, two groups have examined the level of sequence constraint in noncoding DNA flanking mammalian genes, and appear to have found conflicting results. By comparing 500-bp blocks in mice and rats, we found that mean nucleotide divergence within 2 kb of the start and stop codons of protein-coding genes is substantially lower than that of introns, and decreases when approaching the coding sequence [1]. If nucleotide changes within introns are largely free from selection, this implies that noncoding blocks close to genes evolved under selective constraints, presumably because they contain gene expression control regions. In contrast, we find that upstream sequences in hominids do not evolve slower than introns, while downstream regions are under about half of the constraint seen in murids [1]. By analysing a similar set of noncoding DNA sequences, Bush and Lahn also found that the mean level of selective constraints in upstream regions between humans and chimpanzees is very low. However, their slightly more complex main analysis was to search for 16-bp sequences within upstream regions that are strongly conserved between humans, mice, and either dogs or chickens. They then examined the divergence between humans and chimpanzees at the flanking nucleotides, finding substantially reduced divergence compared with the genomic mean. This demonstrated selective constraints at certain upstream sequences in hominids. An analogous analysis of mouse–rat sequences showed that the selective constraints are about twice as strong in murids as in hominids [2]. These two findings—on one hand, a near absence of selective constraints in blocks upstream of hominid genes [1], and on the other, evidence for strong selective constraints in these regions [2]—appear to contradict each other. How can we square the two sets of results? The answer is rather simple—windows with high conservation scores are relatively rare, and they contribute little to the mean calculated over 500-bp windows (unfortunately, Bush and Lahn do not tell us the fraction of 5′ alignments with high conservation scores). Bush and Lahn also suggest that the apparent discrepancy "likely results from the fact that in large 500-bp blocks, functional elements that are under constraint are mixed with large sections of nonfunctional DNA, which are not under constraint" [2]. We believe that this interpretation, while formally correct, obscures important and interesting information that can be gained from combining the two studies. Some sequences outside the conserved 16-mers identified by Bush and Lahn are also likely to be functional, since the same 500-bp regions (largely "nonfunctional" according to Bush and Lahn) show strong evidence of evolutionary constraints between mice and rats [1]. Bush and Lahn also note that constraint, on either side of conserved windows, is greater in murids than in hominids. However, they observe a much smaller difference than that seen in our analysis. This is deceptive because by concentrating attention on regions that are conserved between humans, mice, and dogs, they ignore the fact that there might be many more highly conserved regions in murids than there are in hominids. In summary, there is no conflict between our results and those of Bush and Lahn; they concentrate their attention on a preselected subset of the sites we considered and so have a different perspective on the problem.
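As a reading aid, the standard way constraint is quantified in comparisons like these is as the fractional reduction in divergence relative to a putatively neutral reference such as introns. The divergence values in the snippet below are made up for illustration and are not taken from either study.

```python
def constraint(d_region, d_neutral):
    """Selective constraint: the fraction of mutations in a region removed
    by selection, estimated as the fractional reduction in divergence
    relative to a neutral reference (e.g., intronic divergence)."""
    return 1.0 - d_region / d_neutral

# Illustrative divergences only (substitutions per site):
d_intron, d_upstream = 0.18, 0.12
print(f"constraint = {constraint(d_upstream, d_intron):.2f}")  # 0.33
```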
What is clear from both studies is that there is a qualitative difference in the level of conservation in the 5′ flanking sequences between murids and hominids. We have argued that this is likely to be due to the fixation of slightly deleterious mutations in hominids that are otherwise selectively eliminated in rodents. Differences in constraints between hominids and murids demonstrate that the overwhelming majority of changes at upstream regulatory sites have only small effects on fitness. This has counterintuitive consequences: to obtain a comprehensive list of human regulatory sites, it might be better to examine conservation in murid rather than hominid genomes.
Authors' Reply
In their letter responding to our recent paper in PLoS Computational Biology [1,2], Keightley et al. provide a clear summary of the similarities and differences between the method used in their study [3] and that which was used in ours. They correctly point out that our study supports their conclusion that compared with rodents there has been an increase in sequence divergence rate in hominid noncoding sequences upstream of genes. They are also correct to say that the two studies are looking at slightly different populations of upstream noncoding sites. In their study, they calculate divergence in large blocks that include many different kinds of sites. Among these are nonfunctional sites, sites conserved among primates, and sites conserved among all mammals. As a result, their method can be thought of as broad but low resolution. In contrast, our method considers only sites that are likely to be conserved among all mammals, making it more restricted but higher resolution. Our difference in focus allows us to make an important clarification of their earlier results. We find that despite the overall increase in divergence rate in hominid noncoding regions, significant constraint remains at some sites. In their letter, Keightley et al. acknowledge this point, but argue that such sites are likely to be relatively rare. To respond to this, we can calculate frequency values for different conservation scores from Table S1 of our paper. Windows with a score of 13 or higher constitute 2.7% of the total. (Sites next to these have an average hominid divergence of 0.0086, which is significantly constrained compared with the genome-wide average divergence rate of approximately 0.012.) This means that an average 10-kb upstream noncoding region would have hundreds of bases of this type. This is not a trivial number, and suggests that there are many highly conserved noncoding sites in hominids.
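As a back-of-the-envelope restatement of this calculation (our own arithmetic, using only the figures quoted in the reply above):

```python
frac_conserved = 0.027  # windows scoring >= 13, per the reply's Table S1 figure
region_bp = 10_000      # an average 10-kb upstream noncoding region
print(frac_conserved * region_bp)      # 270.0 -> "hundreds of bases"

# Divergence next to conserved windows vs. the genome-wide average:
d_near, d_genome = 0.0086, 0.012
print(f"{1 - d_near / d_genome:.2f}")  # ~0.28, i.e., roughly 28% reduced divergence
```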
On the other hand, we agree that the method of Keightley et al. includes many sites that we ignore, and may reveal things that our method misses. These sites include functional sites that are not conserved in mouse or dog. Such sites might show an especially high divergence rate in hominids. It would be very interesting to quantitate the hominid divergence rate specifically at such sites, and compare it with the corresponding divergence rate in other mammals. We would also like to take this opportunity to bring up a cautionary note that applies equally to both studies. Comparing human-chimpanzee divergence with mouse-rat divergence raises a number of complex technical issues because human-chimpanzee divergence is more than one order of magnitude smaller. Such issues include back mutations and varying contributions of polymorphisms and sequencing errors. To extend the work by Keightley et al. and our group, a "cleaner" future study might be to compare hominids with two closely related rodents (or other mammals) whose divergence is on par with human-chimpanzee divergence. It would be ideal to look at several such species pairs with varying population sizes, which may help one to assess whether the difference in divergence rate between hominids and rodents can be attributed to smaller historical population size in hominids. Finally, whereas relaxation of selective constraint is a favored explanation for the higher divergence rate in hominids, it is by no means the only explanation. In the longer term, we also look forward to studies that quantitatively address the extent to which the higher hominid divergence rate is due to relaxation of functional constraint, positive selection, or other selective forces that are as of yet poorly characterized, such as compensatory mutations.
2014-10-01T00:00:00.000Z
2006-03-01T00:00:00.000
{ "year": 2006, "sha1": "b7579fb07c75f5f1ea76017c47ab35e479406c69", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.0020019&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f2e09b11ab936970df5369da54fa3fe1f9e11760", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Computer Science", "Medicine", "Biology" ] }
269008629
pes2o/s2orc
v3-fos-license
Exploring the microbial savanna: predator-prey interactions in the soil
Soils host complex multi-trophic communities with diverse, mostly microbial, predator and prey species, including numerous bacterivorous protists and bacterial prey. The molecular mechanisms underlying microbial predator-prey interactions have thus far mainly been explored using reductionist methods, outside the soil environment and independent from the broader life history strategies that microbes display in soils. In this Comment, we advocate for an integrative research approach, combining molecular systems biology and microbial ecology, to investigate how predator-prey interactions shape microbial life history strategies and thereby population dynamics in natural soil communities.
"If I could do it all over again, and relive my vision in the 21st century, I would be a microbial ecologist. Ten billion bacteria live in a gram of ordinary soil, a mere pinch held between thumb and forefinger. They represent thousands of species, almost none of which are known to science. Into that world, I would go with the aid of modern microscopy and molecular analysis. I would cut my way through clonal forests sprawled across grains of sand, travel in an imagined submarine through drops of water proportionately the size of lakes, and track predators and prey in order to discover new life ways and alien food webs." (Wilson, 1994)
In his autobiography, the naturalist E. O. Wilson eloquently described the foreign ecology of soils, which, despite their importance, remain among the most elusive ecosystems on the planet: soil communities are incredibly diverse, strongly structured in both time and space, and largely hidden from our sight, making it practically impossible to directly observe how microbes are interacting. Over the last decade, many studies have used sequencing-based approaches to map out soil diversity, revealing communities that are largely centered around bacterial and fungal growth. Soil communities also contain legions of predators, including protists, nematodes, springtails, and mites (Potapov et al, 2022). These predators differ in size, activity, locomotion, and diet, and engage in many unexpected interactions, from nematode-trapping fungi to pack-hunting amoebae (Thakur and Geisen, 2019). Together with their prey, predators give rise to a 'microbial savanna', a complex food web that drives belowground community dynamics.
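As a quantitative caricature of the population-level coupling in such a food web, the sketch below integrates the classic Lotka-Volterra predator-prey model. This is our own illustration with arbitrary units and invented parameter values, not a model taken from the literature cited in this Comment.

```python
# Classic Lotka-Volterra model: bacterial prey B and protistan predators P.
# Parameter values and units are arbitrary and purely illustrative.
def prey_range(b=40.0, p=9.0, r=0.5, a=0.02, e=0.5, d=0.3,
               dt=0.001, steps=50_000):
    """Euler-integrate dB/dt = r*B - a*B*P and dP/dt = e*a*B*P - d*P,
    returning the (min, max) prey density seen along the trajectory."""
    b_min = b_max = b
    for _ in range(steps):
        db = (r * b - a * b * p) * dt      # prey growth minus predation losses
        dp = (e * a * b * p - d * p) * dt  # predator growth on prey, minus death
        b, p = b + db, p + dp
        b_min, b_max = min(b_min, b), max(b_max, b)
    return b_min, b_max

# A wide (min, max) range reflects the predator-prey cycles that link
# bacterial and protistan population dynamics.
print(prey_range())
```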
Bacterivorous protists are among the most diverse predators in the soil. Being higher up in the food chain, these protists are far less abundant than their prey, but their impact on bacterial populations can nonetheless be enormous. A single gram of soil easily contains tens of thousands of bacterivorous protists, each of which can consume hundreds to thousands of bacterial cells per hour, and protistan populations can grow rapidly, with many protists dividing every few hours under favorable conditions. Such strong predation pressures limit bacterial growth and alter the community composition, while conversely, changes in bacterial densities directly affect protistan growth, linking population dynamics in both predator and prey (Thakur and Geisen, 2019). In search of their prey, protists employ diverse offense mechanisms that promote the detection, ingestion, and digestion of bacterial cells, as apparent from their diverse feeding strategies (Esteban and Fenchel, 2020). Raptorial protists, for example, actively move towards their prey, like amoebae that can sense minute changes in chemotactic cues and capture prey using their pseudopodia (Fig. 1A). Other protists, including many ciliates and dinoflagellates, are filter feeders, ingesting large volumes of water to capture prey species. These predators can either be sessile (attached to soil particles) or planktonic (suspended in water-filled pores), and frequently express specialized organelles or cell extensions to capture and kill their prey (Leander, 2020) (Fig. 1B,C). With the expanding number of omics approaches, our understanding of the molecular mechanisms underlying these diverse feeding strategies is advancing rapidly. After ingestion, protists digest their prey through a common process of phagocytosis, which is at the heart of their predatory lifestyle. During phagocytosis, bacteria are exposed to a series of stressors that mediate their digestion, including acidification, enzymatic digestion, oxidative stress, metal deprivation, and metal poisoning (Fig. 1D). The cocktail of enzymes (e.g., hydrolases and proteases), as well as their effectiveness in digesting different bacterial species, varies between protists: some bacteria are more resilient to digestion than others, thereby increasing the time and energy needed for their consumption. One would therefore expect that the stressors expressed during phagocytosis are tailored to digest the bacterial species that each predator is likely to encounter in the soil. The diversity of offense mechanisms in protists is mirrored by diverse defenses in their bacterial prey (Jousset, 2012 and references herein). Similarly to offenses, defense mechanisms can coarsely be divided between pre- and post-ingestion mechanisms. Bacteria can, for instance, secrete small molecules that kill or repel protistan predators, like toxins, reactive oxygen species, and biosurfactants (Fig. 1B). Based on the number of biosynthetic gene clusters, bacteria can, in principle, secrete a vast potential of secondary metabolites. However, the ecological function of most of these secondary metabolites remains unknown. Bacteria can also prevent ingestion by outrunning or outsizing predators: bacterial cells can swim away, become filamentous, and form small aggregates or adhesive surface-bound biofilms
(Fig. 1C). In other cases, bacteria might evade detection or lower detection rates by masking their cell envelope through modified lipopolysaccharides. Even after ingestion, bacteria can express defenses, which either actively or passively prevent digestion. Pathogenic bacteria can, for example, interfere with the phagocytic process by hijacking the phagosome for replication, as seen in Legionella pneumophila, which modulates its vacuole by secreting effector proteins into the protistan host using the type IV secretion system, akin to the development of Legionella-containing vacuoles in human phagocytes (Fig. 1E). Opportunistic pathogens like L. pneumophila can resist predation by many protistan species (Park et al, 2020). As such, protistan predation is often thought to promote the emergence of bacterial virulence in humans. But not all defenses actively interfere with phagocytosis. Passive resistance occurs when bacteria resist phagocytic stressors without manipulating the host (Jousset, 2012). Bacillus subtilis spores can, for example, resist phagocytosis by Tetrahymena thermophila through the formation of an impenetrable spore coat (Fig. 1F). Given the multitude of offense and defense mechanisms, it is not surprising that offenses in protists and defenses in bacteria are often combined. Bacteria can, for example, simultaneously produce biofilm, form filaments, and secrete biosurfactants. The combination of defenses determines the effective predation rate of protistan predators. To coordinate expression, defenses are often co-regulated in response to, for example, quorum-sensing signals or environmental cues. Similarly, protists undergo strong gene expression changes in response to the available prey species and can modulate their offenses, like the digestive enzymes expressed during phagocytosis. Although response mechanisms are widespread in both predator and prey, their molecular underpinnings, as well as ecological impact, often remain unknown.
How do bacteria assess the predation risk in their environment and, conversely, how do protists optimize their feeding strategies in response to the available prey species? There is a limit to the number of offense or defense mechanisms microbes can express. Many mechanisms are costly and affect important life history traits, like growth, dispersal, and survival. Biofilms, for instance, protect bacteria against predation, but can simultaneously lower growth rates and limit dispersal by trapping cells in adhesive extracellular polysaccharides. By reducing the predation risk through biofilm formation, bacteria might thus limit their ability to find new resources. Similarly, the time and energy protists invest in digesting bacterial prey might strongly differ between prey species. Protists might therefore benefit from consuming more digestible prey species first, depleting those species from the community. Feeding strategies can also directly affect the survival chances of protists themselves. For example, the agility of amoebae makes it possible to engulf prey in micron-scale pores, but can also make them susceptible to predation by larger predators, like nematodes (Fig. 1C). Testate amoebae reduce this risk by hiding in a protective shell. Understanding life history trade-offs is central to understanding the repertoire of offense or defense mechanisms microbes express. The life history strategies of protistan predators and bacterial prey can only be fully understood in the context of the soil environment, whose strong spatial structure inevitably affects the spatial and temporal scale at which microbes interact (Erktan et al, 2020). Ecology should therefore be leading in studying predator-prey interactions. This poses a major challenge because microbes are difficult to monitor directly in heterogeneous soil environments. In addition, to quantify how offense and defense mechanisms affect life history strategies, we need to perturb them and determine their effect on growth, dispersal, and survival. Only in this way can we understand how life history trade-offs shape the repertoire of expressed mechanisms. We think this calls for an integrative research approach, combining methods from molecular systems biology and microbial ecology, which starts from the offense and defense mechanisms predator and prey species express, and systematically studies their impact on microbial life history strategies and community dynamics in soils.
Recent technological advances strongly promote such an integrative research approach. Advances in gene engineering, like CRISPR, make it possible to create large-scale mutant libraries in an increasing number of species, thereby paving the way to perturb offense and defense mechanisms in a high-throughput fashion (Stewart et al, 2022). In addition, advances in parallel culturing make it possible to study microbial interactions across hundreds of culturing conditions and community compositions. These approaches are complemented by microfluidic experiments with synthetic soils, where we can directly monitor how microbes interact at a spatial and temporal scale that mimics that of a soil environment (Erktan et al, 2020). Such experiments can offer key insights into the spatiotemporal community dynamics affecting microbial life history strategies. To date, many of the above methods are applied to either bacteria or protists in isolation, but in the future, we think these efforts could easily be expanded to study the predator-prey interactions that dominate natural soil environments. By connecting approaches across fields, from molecular systems biology to microbial ecology, we can investigate how life history strategies of protistan predators and bacterial prey arise and evolve. These strategies ultimately shape microbial communities and, thereby, the belowground microbial savanna; a savanna filled with "new life ways and alien food webs" (Wilson, 1994) that await to be discovered.
Figure 1. Offense and defense mechanisms in the soil's microbial savanna. The interplay between protistan predators and bacterial prey within the soil environment is reflected in their diverse lifestyles. (A) An ameba can stretch its thin pseudopodia between micropores to reach its bacterial prey, attracted by chemotactic cues produced by the microbial community. However, bacteria can protect themselves by masking or minimizing the production of such cues. Additionally, being agile predators, naked amoebae are themselves exposed to other predators, such as nematodes. (B) A flagellated protist uses its flagellum to detach bacterial cells from the soil's surface. But these can, in turn, express toxins or surfactants to drive the predator away. (C) The cilia of protists can create water currents that direct bacteria towards their cytopharynx, an invagination acting as a "mouth", promoting their ingestion. However, bacterial differences in size and shape, like filamentous bacteria or biofilms, can withstand the current or outsize the cytopharynx, thereby preventing ingestion. (D) After ingestion and during phagolysosome maturation, bacteria are exposed to a series of stressors, including digestive enzymes, toxic metals, and high acidity. (E) While non-pathogenic bacteria are often easily digested during phagocytosis, intracellular pathogens, such as L. pneumophila, can secrete effector proteins through their type IV secretion system and replicate within the phagosome. (F) Spores can passively resist digestion within the phagosome. Figure inspired by Erktan et al, 2020, and with modified protist figures from Keeling and Eglit, 2023.
2024-04-10T06:17:44.920Z
2024-04-08T00:00:00.000
{ "year": 2024, "sha1": "b2409aaecc4d04aabb6ecea33a08e230ec9940b9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1038/s44320-024-00033-w", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8b95943060e9cc685f46aab74ec7057234efde12", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
54968050
pes2o/s2orc
v3-fos-license
A Family Affair: Caring in Teaching and Implications for Teacher and Researcher Preparation
The purpose of this study was to explore how perceptions of remembered instances of teacher caring in K-College impacted the motivation of a college student. Implications for teacher preparation programs and educational research were then drawn from these perceptions. The first part of the title "A Family Affair" stems from the fact that the authors are members of the same family – Father, Mother, and Son. Both the father and mother had prior knowledge of some (not all) of the instances of caring and non-caring described by their son and thus shared a privileged insider position that offered unique insights, while cooperative peer checking was used both during and after the interview to help promote the trustworthiness of findings. It was found that the degree of caring shown by teachers had a profound influence on the participant's willingness to put forth effort, especially in those courses that were not his favorite subjects, which suggests that a strong connection exists between caring and student motivation. An important implication of this study is that teachers and those responsible for teacher preparation programs would benefit by being aware of the impact of caring on students' engagement and attitude toward learning. If the ultimate purpose of educational research is to contribute to effective teaching, then the "soft variable" of caring should be considered an important component of researcher preparation. It is hoped that readers will find this study to be transferable to the degree that it resonates with their own experience as teachers, students, and parents, which we refer to as "experiential validity".
"I wish I could care what you do or where you go but I can't…My dear, I don't give a damn." (Gone With The Wind, 1939)
"For I don't care too much for money, for money can't buy me love" (Can't Buy Me Love, The Beatles, 1964)
"The longer we consider and examine the present day methods of education, the more clearly we recognise that children lack the care and consideration which would be in accord with their present and future needs, a care which considers equally the child's mental and physical needs and capacities. We notice that if children are not given the care which takes their stage of human development into consideration, they will lack the foundation for the task ahead in school and for their later lives in general." (Friedrich Froebel, Founder of Kindergarten, 1782-1852; Moore, 1991)
"They don't care what we know until they know that we care." (Madeline Hunter, 1982)
The above array of quotes (where care and caring have been emphasized) conveys some of the meanings and nuances that can be used to define these terms. In this article, the concept of caring in education was investigated from the perspective of a Son-Researcher (SR) and further elucidated by the Mother-Researcher (MR). SR was the primary focus in this study since he was asked by the Father-Researcher (FR) to share, during a group interview, his experiences and perspectives related to remembered instances of caring and non-caring from his elementary, high school, and college years. The following Research Questions guided data collection:
RQ1. What instances of caring and non-caring does SR readily recall from elementary and high school and to what extent are these recalled instances perceived by SR as having a continuing impact on motivation to do well in college?
RQ2.
What instances of caring and non-caring does SR readily recall from college during his Freshman and Sophomore years and to what extent are these recalled instances perceived by SR as having a continuing impact on his motivation to do well in college?
RQ3. Based on recalled instances of caring and non-caring in elementary school, high school, and college, what suggestions would SR make to help teachers and professors make a stronger impact on their students in terms of their motivation to do well in school and to pursue their career aspirations?
RQ4. What perspectives are offered by MR in relation to RQ1 to RQ3?
FR envisioned these questions as a fertile field for transforming data via "description, analysis, and interpretation" (Wolcott, 1994), including both tacit and propositional knowledge (Polanyi, 1962), in order to understand the role of caring in relation to teaching, learning, and research. It was anticipated that themes would emerge from intrafamily dialogue where the interactions among MR, SR, and FR were seen as positive contributions toward understanding the influence of caring on learning and motivation rather than as a source of "bias". FR used dialogue coupled with empathetic understanding and reflection as a type of "member checking" to promote the trustworthiness of data and to integrate tacit knowledge, including emotion, intuition, and body language, with propositional knowledge that was conveyed via straightforward language captured in the oral responses. The ultimate aim was to get to the heart of the matter regarding the impact of caring on students and therefore its potential role in both teacher and researcher preparation programs.
The Qualitative and Quantitative Traditions
Is it not sometimes the case that we qualitative researchers believe that our quantitatively-oriented colleagues do not care as much as we do about participants and that these colleagues think that qualitative research is inferior to quantitative research? One older and one more recent source may weaken this belief. Bausell (1994) introduces his book Conducting Meaningful Experiments: 40 Steps to Becoming a Scientist by first saying that "meaningfulness" can be defined in different ways by different people but that "I happen to define a meaningful research study as one that has the potential of actually helping people and improving the human condition" (p. 1). Pilcher and Cortazzi (2016) interviewed 17 researchers who leaned quantitatively and found that most of them not only did not deprecate qualitative approaches but found these approaches to be valuable to scientific inquiry. So, even if we differ with our quantitatively-oriented colleagues in terms of epistemology and methods, it might be beneficial to remind ourselves once in a while that we are all on the same team, and we hope that this article is received in this same spirit!
Purpose of Education
Purpose is behind everything that we do. In the case of education, aspiring teachers are typically required to arrive at their own "philosophy of education" that captures their values and beliefs.
Here is our suggested statement of educational purpose: The purpose of education is to help students grow artistically, cognitively, emotionally, morally, and socially within a safe, encouraging, and caring environment, to leave as lifetime adventurers who are ever ready to question and learn about themselves, others, and their world, and to meaningfully contribute to the interrelated welfare of self, others, and world throughout their lives. Purpose is described here in terms of growth and caring in relation to five inborn capacities, and this statement provided a context for both data collection and data analysis in this study. We additionally suggest that this purpose statement can provide a sound basis for developing and improving teacher and researcher preparation programs.
Framework and Methods
A group interview was conducted on August 4, 2016, and FR tried to keep the stated educational purpose firmly in mind during this interview. Based on the process of Oral Coding (Bernauer, 2015a), both analysis and interpretation were refined in the weeks that followed. Oral Coding was used to analyze interview data because it relies on an aural-oral approach for making sense of data. FR introduced the process of Oral Coding in a study that focused on higher education (Bernauer, Semich, Klentzin, & Holdan, 2013) where one of his co-authors used phenomenological coding while FR used Oral Coding to analyze data that were collected during focus group sessions. It was found that there was substantial agreement across findings and interpretations. FR then used Oral Coding in a study that explored the remembrances of graduates of Catholic schools over several decades and found that Oral Coding helped to preserve the unique voices of participants as he went about transforming raw data into written text (Bernauer, 2015b). Finally, FR attempted to codify Oral Coding more explicitly into seven steps (Bernauer, 2015a). While analyzing data for this current study, FR found that, while he adhered to the spirit of the seven steps, he drifted from them, especially in the way that the three technologies overlapped and intersected: GarageBand on the Mac laptop that was used to initially record the interview, QuickVoice Pro on the iPhone that served as the secondary recording device, and Dragon Dictate on the Mac that was used to transcribe from voice to text in Microsoft Word. Consequently, it would be difficult to describe the exact sequence of using these technologies because FR found himself "jumping around" among them and would be hard-pressed to try and present a linear account of a very non-linear process. Nonetheless, FR intends to continue experimenting with this method of transforming data, which will hopefully serve to generate useful information for those who are interested in using this method in their own work. In addition to the foundational role of educational purpose, Cooper and Garner (2012) stress that the sequence of Relationships-Relevance-Rigor is critical since developing relationships with students lays the necessary prerequisite for the other two components of effective teaching. In addition, both Noddings (2005) and Heshusius (1996) provide persuasive arguments for positioning caring into any discussion of teaching and learning.
FR also tried to follow Wolcott's (1994) suggestion to transform qualitative data into a written account through description, analysis, and interpretation while recognizing that, in the actual practice of making sense of data, these processes often overlap. FR also tried to incorporate the ideas related to critical thinking in qualitative data analysis (Bernauer, Lichtman, Jacobs, & Robertson, 2013). These sources offer a framework for making sense of qualitative data, and it was within this framework that FR utilized "Oral Coding" to analyze and interpret data in relation to the question of caring in education, using a multi-phase process that relied on voice recordings to transform data into a written account rather than on verbatim transcriptions of oral data. Finally, while there was only one primary informant (SR) and two secondary informants (MR and FR), it is hoped that readers will identify their own points of connection based on their personal experiences as teachers, students, and/or parents. We refer to these connections as exemplifying experiential validity and offer this concept in lieu of "external generalizability" under the quantitative tradition.
Description, Analysis, and Interpretation
SR was a first-semester junior in college at the time of the interview, and although major psychological and emotional changes occur as a result of "going away to college", the interview was only about two years removed from when SR graduated from high school, so the memories of caring and non-caring from his high school days were still relatively fresh. Probably more importantly, if we subscribe to the notion that "we may forget what people say but we will never forget how they made us feel", then remembrances that had an emotional impact on us (for good or ill) are still readily available to us even if these remembered instances were from our younger years. It is within this context that the responses from SR should be understood. What follows are first the conversation prompts derived from the purpose of this study, followed by description, analysis, and interpretation of the data that SR provided in response to these prompts. It should also be pointed out that while both SR and MR knew that the purpose of this group interview was to recall instances of caring and non-caring in school and to tease out impacts on motivation and learning, no specific examples were discussed in advance. Rather, the construct of caring was allowed to emerge as SR reflected upon these conversation prompts.
Prompt 1.1: Thinking about your days in elementary and high school, what instances of teacher caring or non-caring do you recall that had an effect on your motivation to do well in school?
Unfortunately, SR immediately related negative perceptions about his religion teacher in high school (SR attended a Catholic school). He said that during a typical 45-minute class, the teacher talked for 30 minutes about his own opinions and took no questions. SR also said that when a student got a question wrong on a test and asked about this question, the teacher's response was simply that "the answer was wrong and there was no more discussion or explanation." SR went on to say that this was probably the worst class that he ever had because it was simply "somebody standing up there telling students what he thought." SR then related another instance of non-caring in high school when he described his English teacher from his junior (third) year.
He said that whereas the religion teacher suffered from too much "self dialogue", his English teacher was almost the opposite. He described a typical class as one where students took turns reciting sections of literature by going up and down the rows. SR indicated that he felt like students were still being treated like they were in elementary school instead of individuals who were now capable of independent thinking. According to SR, this teacher sat at her desk, presumably listening to students read, while she did other work and then assigned homework for them to do at the end of the class. As SR harkened back to elementary school, he recalled his fourth-grade teacher as another example of non-caring similar to his examples from high school, but he added that "at least we had a recess in elementary school whereas we did not in high school!" This elementary teacher was described not so much in terms of classroom practice but rather her demeanor and behavior in general. As noted above, SR indicated that unlike high school, where he had to sit through long boring classes with no break, recess was a regular feature in elementary school. However, he said that when recess had to be held in the classroom because of bad weather, the teacher would sit outside the door to the hallway, and if any student's foot inadvertently went beyond the threshold of the door, the student was immediately given a detention. SR indicated that "this type of teaching whether in elementary school, high school, or college results in students feeling like the teacher really does not care about them as individuals and so very little learning results." In addition, SR referred to a "lack of respect" as being the major feeling he experienced during these times when caring was not demonstrated. It seems, based on these perceptions, that the personal characteristics of the teacher and the teacher's relationship to students were inextricably connected to student learning. Consequently, it may very well be that if teachers are not perceived as caring, then their instructional methods may also be perceived as inadequate. When encouraged to recall positive instances of caring in elementary school and high school, SR immediately said that Mr. A (a teacher in 8th grade) "is probably the most effective teacher I have had through my sophomore year of college." He described effectiveness in terms of the way that Mr. A interacted with him and his classmates and the respect that he showed towards their ideas and perspectives. SR described this interaction as one where Mr. A did not place himself in a position of authority but rather where students felt like he listened to them as if "he was talking to another adult". SR said that consequently he and his classmates felt more "grown up" with this teacher and also felt like they mattered to him as individuals. He also identified "Mrs. D.", who taught math in high school, as a positive influence. SR confessed that he is not a "math person" but that Mrs. D. not only took the time to explain concepts to him that were unclear but did so on her own personal time, such as before and after school or during lunch.
Prompt 1.2: To what extent do any of these instances of caring and non-caring continue to have an effect on you in terms of your motivation to do well in college?
Both of the instances of caring from high school described above regarding Mr. A. and Mrs. D.
were not only deeply felt by SR, but he said that they continue to have a positive impact on his learning in college. SR then abruptly posed this question: "How can teachers in large lecture halls connect with their students?" and related this barrier to his negative elementary and high school experiences. The question about large lecture halls was actually the only connection to non-caring that the participant offered. He then went on to talk about three instances of caring that he has experienced as a college freshman and sophomore, described next under Prompt 2. This was a welcome change from his predominant focus on instances of non-caring in elementary and high school.
Prompt 2: What instances of teacher caring or non-caring do you recall from your experiences thus far in college that have had a strong effect on your motivation to do well during the remainder of your college years and on your career aspirations?
The first instance described by the participant offered a great counterpoint to the negative experiences he described in high school. SR said that his college instructor for philosophy and theology, rather than lecturing about his own thoughts and experiences like his high school teacher, so engaged him in the content "that it didn't matter how much time and work was needed; I was motivated to learn because this teacher was very much interested in what he was teaching and also his students". SR went on to say that because of the obvious enthusiasm of the instructor and his concern for students, he was motivated to learn a subject that previously held little interest for him. During the interview he said that even though he was not Catholic (which was news to his parents), he experienced valuable learning in this college class related to a deeper meaning of religion, and that what he learned in grade school and high school was a sham. [Both FR and MR tried to console themselves by classifying his response as a normal part of rebellion against all things parental.] He reiterated that while there was a lot of work required in this college class, he didn't care; he stressed that this college professor is the only religion teacher he ever liked because he cared and wanted them to learn. The second instance of caring in college described by SR related to his Spanish teacher, who required that assignments given every Monday, Wednesday, and Friday be submitted at the following class session. Regarding these assignments, he said that while they were hard, he gladly did them because he knew that the teacher really cared that he learned Spanish. He contrasted this positive experience with his negative high school English experience described earlier: "It was a lot different than doing mindless work at the end of the day in order to assign a grade." He also cited high teacher expectations (for both his religion and Spanish teachers) as something that had a very positive effect on him, although these expectations presented formidable challenges. However, he went on to say that the "personal care that these two teachers exhibited dwarfed the challenge of the work". He concluded his evaluation of these two teachers by saying that they were passionate about what they taught and passionate about their students.
[At this point, FR engaged in some "private speech" about the "Three R's" (Relationships-Relevance-Rigor) as the necessary sequence for creating a "learning classroom" where students' engagement in learning is the primary focus (Cooper & Garner, 2012).] When asked about negative instances of caring in college, his immediate response (again) was "lecture halls don't work", but he added that in college "you almost are treated as an adult" [this elicited a discreet smile from both MR and FR!]. SR then recalled a third instance of caring during college that did not involve a college instructor per se but rather the owner of a café named the "Spirited Goat" located near his college. SR identified this individual as a person who had a very positive impact on him because of the respect and caring he felt as well as the wisdom he shared. He went on to relate an experience where he had such a great conversation with the owner that he forgot to pay him, and the owner also forgot to ask him to pay. SR related that he was so moved by the genuine caring shown by the owner "that I went out of my way to go back to the shop to pay what I owed". Although this instance was not "academic" in the sense of formal schooling, it reinforced the construct of caring and also demonstrates that valuable learning experiences should be recognized as extending "beyond the walls" of the school. SR characterized all three of these individuals (the religion teacher, the Spanish teacher, and the café owner) by saying that "all three of them had a balance of relationships, professorship, and setting expectations and who took the time to form a relationship that formed the basis for my learning." SR also said that he wasn't sure where he picked up the following quote -- "they don't care what you know until they know that you care" -- but he said that it epitomized these three teachers. [FR knew that the quote was from Madeline Hunter and that he must have shared it earlier -- kids really do listen sometimes!] Prompt 3. Based on all of the examples you provided from elementary school, high school, and college, what would you suggest to teachers so that they could have a stronger positive effect on their students in terms of their wanting to do well in school and to pursue their career aspirations? When asked this question SR immediately responded with "don't get comfortable". He then went on to say that his current experience working part time as a supervisor for a nutrition company taught him that he needs to "talk to the new person in the room" because everyone is learning something new, including him, and that both teachers and students should consider themselves "perpetual students". He then said that teachers and professors should teach as if the labels of "teacher" and "student" are removed in order to create an environment where teachers are seen as fellow learners. He added that when he presents to a group of people it's like he is "teaching and learning for the first time". When asked about the concept of caring, he connected it to the idea of us all being perpetual students "because teachers are human beings just like their students and that by learning right along with their students it comes across as genuine caring." The final research question applies to the Mother Researcher (MR) in terms of her reactions to the responses of SR. Prompt 4. Based on what you have heard during this interview, what reactions do you have?
While MR intentionally did not interject during the interview (except for saying that she really did not know how deeply negative experiences had affected SR), she now joined SR in discussing the ill effects of large lecture halls in college. She especially resonated with his feeling that he was treated like a number and that many students are lost, both physically and mentally, in such a large classroom environment. Because MR had experienced the same kind of environment in college, she commiserated with SR about this experience. In addition, because she owns her own company to educate adult students in using computer software, she was especially sensitive to the load on the teacher in such a large classroom, since it prevents the teacher from developing any kind of relationship with students. She also agreed with SR that the workload for a teacher with such a large number of students would be quite overwhelming and again would prevent personal relationships and communication. This conversation between SR and MR ended the interview.

Implications of Caring for Researcher and Teacher Preparation: FR's Reflections

What is disturbing about SR's elementary and high school remembrances is not only the perceived lack of caring, but that these instances were foremost in the memory of SR rather than more positive experiences. It is also important to note the close connection between the teaching process and the impact on student perceptions of teachers caring about their students, including their views and perspectives. In addition, what became most apparent during this interview was that SR automatically linked his interest and commitment to put forth effort in a subject matter, regardless of his prior interest, to the passion of teachers for their subject and the concern that they evidenced for their students. Hopefully someday positive remembrances will overshadow the negative instances in his mind. I recognize that our more quantitatively-inclined colleagues would point out that there is too much "noise" or confounding of variables to be able to deem the findings valid, and therefore quite problematic for generalizing beyond this admittedly small sample size of n = 1. On the first count of noise and confounding of variables I plead "guilty", and happily so, because I don't think that the complexity of human beings, including learning and teaching, is amenable to the partitioning of variables. As for the second charge of findings not being valid, I also admit that, judged by the quantitative criteria of validity or even by the criteria of credibility put forward by some qualitative researchers, the findings from this study would not be considered trustworthy, especially since they are based on data obtained from just one primary participant. However, I appeal to readers to evaluate the findings of this study not based on the traditional criteria of internal or external validity, but rather by drawing upon their own life's experiences in order to come to their own conclusions about trustworthiness -- we refer to this as experiential validity. To carry this idea of experiential validity a bit further, I ask you to think back to those teachers who made a real difference in the way that you felt about a subject, yourself, or your future. Was it not those teachers whom you perceived as caring about you as an individual whom you still remember as having a positive impact on your life, even if the particular way that it was expressed may have varied among these teachers?
Whenever we find ourselves nodding in agreement as we think back to our own experiences throughout our lives, aren't we giving silent assent to the validity of what we hear now? In fact, whenever two or more individuals really agree on a perspective based on their own history and interpretation of this history, does this not constitute a type of validity because of a shared perceptual understanding of phenomena? And, while this shared understanding cannot be shown to be statistically generalizable, I venture to hypothesize that this generalizability could be confirmed through large-scale interviews. So if our solitary participant does in fact speak for many of us, then what might schools of education that prepare future teachers, and those departments that prepare researchers, take from these findings? I would suggest that caring, although not a variable that is typically used in prediction equations, is critically important in helping students to grow and achieve academically, emotionally, and socially. One of the things that I believe must happen is that education must shed the paradigm where scores on achievement tests, which are concerned primarily with right and wrong answers, serve as the primary indicators of student, teacher, or school success (Cooley & Bernauer, 1991; Powell, Bernauer, & Agnihorti, 2011). Rather, if we begin with the belief that every student has unique interests, motivations, capabilities, potential, and ability to learn, should we not first discover what motivates our students to learn and then adapt our teaching approaches based on this recognition? If schools of education should include "soft" concepts such as caring in their preparation programs, what about programs that prepare students to conduct research in schools? While I was trained to become an educational researcher, it was strictly in line with the quantitative paradigm, where the search was on for "variables" that could be used to help "explain and predict" student cognitive achievement (as measured on tests) using techniques such as regression analysis and factor analysis. It is the ability to compartmentalize in order to arrive at cause-effect statements and then to generalize these statements that is at the heart of the quantitative paradigm. It seems that it would be both problematic and unwise to somehow transform the construct of caring into a variable that could be analyzed quantitatively. On the other hand, the qualitative paradigm can readily admit soft variables such as "caring" into its methodologies, since its ontology is not anchored in a stable reality that can be parceled into variables but rather is embedded in multiple perceptual realities. A solution to this apparent problem may be for teacher preparation and educational research programs to not only talk about "mixed methods" but to also recognize that, when it comes to the complexity of teaching and learning, while quantitative methodologies can be used to investigate the impact of some aspects of the educational system (such as SES, expenditures, school size, etc.), they are of limited usefulness when investigating the intimacy of teaching and learning. Rather, it is those very things that promote student motivation and interest in learning, such as caring and high expectations, that more than likely hold the real key to doing well academically.
On the other hand, when it comes to assessing the impact of social and economic factors on schools, the sophisticated techniques employed by quantitative researchers admirably fit the bill. It is therefore suggested, based on the findings of this study and this researcher's own sense of "experiential validity", that educational research should start with the learner and then move outward to those influences that are more peripheral to teaching and learning. I am quite sure that there is a place for every type of educational inquiry as we search for ways to create a more caring and effective educational environment. Looking to literature as another manifestation of experiential validity, Thomas Mann (1952), in his novel The Magic Mountain, speaks through his characters thus: One day all the world would realize that our system, which had developed out of the cloister school of the Middle Ages, was a ridiculous bureaucracy and anachronism, that nobody in the world any longer owes his education to his schooling, and that a free and public instruction through lectures, exhibitions, cinematographs, and so forth was vastly to be preferred to any school course (p. 519). Although dated both historically and geographically, I wonder how many of us might agree with this position when viewed through the lens of experiential validity; that is, when we as adults look inward and backward to discover what really mattered in our cognitive, emotional, moral, and social growth, is it not those teachers who demonstrated caring, as well as those individuals who, while not a part of the formal school system, such as the owner of the "Spirited Goat" café, have had a lasting positive impact on us as learners? Noddings (2005) writes persuasively that the primary purpose of education is not cognitive but rather moral, and that caring and growth should be of primary concern rather than a focus on achievement. This thesis matches well the purpose statement offered earlier. However, she "complicates" the issue of caring when she notes that even if teachers try to be caring, these efforts must be perceived as such in order to have a positive impact on students. In fact, it could very well be the case that some of the teachers that SR perceived as non-caring may have been well-intentioned. However, perceptions can become quite complex within the swirl of the school milieu, where power differences intermix with the ever-changing chemistry of peer relations and "growth pains". Again, this suggests that the traditional way of conducting educational research is not well-suited for identifying and appreciating these complexities. Heshusius (1996) begins her article by describing her experiences as a graduate student in special education with a somewhat humorous (but mostly sad) account of her first course in an American university. She was late for her "Learning Theories for Educators" course and thought, when she heard the professor talk about rats, that she was in the wrong classroom. However, she discovered that she was indeed in the right classroom and then adds, "needless to say, at the end of the course we were still talking about rats and pigeons doing very strange and silly things to get a pellet of food into their half-starved bodies" (p. 50). The fact that this anecdote revolves around behavioral learning theory is no coincidence, since the tenets of behavioral theory, with its focus on measurable performances, fit perfectly with the quantitative paradigm.
Again, it would appear to be a very difficult task to see how the attributes of caring in the classroom fit into this educational paradigm. The question also arises as to how we would gain any important information by quantifying something that is important in its own right and that would lose its essence if we tried to quantify it. It seems that trying to do so would be similar to trying to quantify the characteristics of poetry, drama, or art. The fact that the experience Heshusius (1996) described was meant to help prepare aspiring teachers to work with students with learning differences makes it especially depressing, since it treats these students not as individuals with unique emotional, social, and moral needs but only as students who need to be "fixed" so that they can function in the real world. And, if teachers go into classrooms steeped in this mindset, then the construct of caring must indeed "take a backseat" to the more quantifiable and measurable aspects of learning. We should not let this happen.
"Flux+Mutability": A Conditional Generative Approach to One-Class Classification and Anomaly Detection Anomaly Detection is becoming increasingly popular within the experimental physics community. At experiments such as the Large Hadron Collider, anomaly detection is at the forefront of finding new physics beyond the Standard Model. This paper details the implementation of a novel Machine Learning architecture, called Flux+Mutability, which combines cutting-edge conditional generative models with clustering algorithms. In the `flux' stage we learn the distribution of a reference class. The `mutability' stage at inference addresses if data significantly deviates from the reference class. We demonstrate the validity of our approach and its connection to multiple problems spanning from one-class classification to anomaly detection. In particular, we apply our method to the isolation of neutral showers in an electromagnetic calorimeter and show its performance in detecting anomalous dijets events from standard QCD background. This approach limits assumptions on the reference sample and remains agnostic to the complementary class of objects of a given problem. We describe the possibility of dynamically generating a reference population and defining selection criteria via quantile cuts. Remarkably this flexible architecture can be deployed for a wide range of problems, and applications like multi-class classification or data quality control are left for further exploration. Introduction Nuclear and particle physics are often characterized by problems where one needs to identify particles or events that (i) belong to or (ii) deviate significantly from a specific 'reference' arXiv:2204.08609v1[cs.LG] 19 Apr 2022 class.In the first case we refer to one-class classification (OCC) -to identify objects of the reference class amongst all objects, in the latter to anomaly detection (AD) -which can leverage on OCC to detect abnormal data points compared to the reference class. Examples of OCC can be found in an extensive literature review provided by [1].As for AD, there is a growing number of applications that span from accelerator operations to physics analyses, the latter being of great interest for example at the Large Hadron Collider (LHC) since new physics beyond Standard Model (BSM) remains elusive (as discussed, e.g., in [2][3][4][5]). 1 In both cases, one typically deals with multiple features that vary as a function of the phase space of the final state particles reconstructed in the detector. In this paper we introduce a novel approach to cope with OCC and AD that leverages two different stages that we call "flux" and "mutability" (F+M).In the first stage, flux, we learn the distributions of the reference class data, and in doing that we utilize a combination of conditional autoencoders (cAE) and flow-based models, particularly conditional masked autoregressive flows (cMAF), which are conditioned to the kinematics of the particles or events we are trying to distinguish.As we will describe herein, this allows us to augment the space of the features with a gain in classification performance.The second stage, mutability, consists of addressing if the data in the inference phase -which undergoes a forward pass in the cAE has significantly deviated from the reference class.In other words, at a given kinematics one can dynamically generate a reference cluster in the augmented space and measure if an object belongs to the reference cluster or not. 
Hierarchical-based clustering (namely, Hierarchical Density-Based Spatial Clustering of Applications with Noise, HDBSCAN [6]) is used to fit these data and come up with a probability cut that provides the confidence level for an object to belong (or not) to the reference class.

In this work we focus on two applications: i) distinguishing neutrons from photons in the Barrel Calorimeter (BCAL) of the GlueX experiment [7], where neutrons in certain kinematic regions are difficult to simulate or isolate from real data and photons are therefore used as the reference class; and ii) identifying possible rare BSM dijet events from QCD dijet background events at the LHC [3], the latter representing our reference class. One of the advantages of our novel and flexible architecture described in the following sections is that it relies only on the 'reference' class and remains agnostic to the class of objects complementary to the reference class during both the training and inference phases. The two classes can be thought of as 'signal' and 'background' in physics applications. When using strictly supervised methods instead, the model typically requires both signal and background as input in order to learn the feature space and produce a binary output. While these algorithms can be efficient and accurate, they are limited by the quality of the data we inject. That is to say, they are prone to any bias we may introduce when constructing our training samples. This can be critical when our control over one of the two classes (e.g., the background) is limited and we need to rely on the other class (e.g., the signal), or vice versa. Using an architecture that relies on the information of one class removes any assumptions we must make about the complementary class and can be extremely important for different applications: for example, it can be utilized for anomaly detection of rare events as well as a method to increase the purity of samples when the original set of real data is characterized by two classes only.

The remainder of the paper is structured in the following way: in Sec. 2 we describe the developed architecture and provide a detailed discussion of the training and inference phases. In Sec. 3 we introduce the two problems that our architecture has been applied to: detection of neutrons within the GlueX BCAL, and tagging of Z′ → t t̄ from QCD dijet background. In Sec. 4 we present our analysis and results. Finally, in Sec. 5 we conclude with a summary and perspectives on future work.

Flux + Mutability

The F+M approach can be broken into three components, namely: (i) a cAE, (ii) a cMAF, and (iii) a clustering algorithm.

(i) The cAE is trained to reconstruct features as a function of kinematic parameters. In this paper we will show two examples: a) identifying single neutral showers that depend on 14 reconstructed observables which vary as a function of the shower energy and location in the BCAL calorimeter at GlueX; and b) analyzing topologies of events at the LHC characterized by 2 jets, which are described by 15 reconstructed observables that depend on the transverse momentum of the jets. Note that in both cases the kinematic variables are continuous. The reader can find more details about these datasets in Sec. 3. This model is trained first, independently of the cMAF, deploying the Huber loss (Eq. 1):

$$\mathcal{L}_{\delta}(x, x') = \begin{cases} \frac{1}{2}\,(x - x')^2 & \text{if } |x - x'| \le \delta, \\ \delta\left(|x - x'| - \frac{1}{2}\delta\right) & \text{otherwise.} \end{cases} \qquad (1)$$
Using this trained model we forward pass all training samples and obtain both the reconstructed vectors (x′) and the residual vectors (x − x′), which are then combined into an augmented space. Namely, the augmented space will consist of 28 dimensions for the GlueX problem and 30 dimensions for the LHC problem, respectively.

(ii) This augmented dataset is then used to train the cMAF. Let x ∈ X denote an element from the set of input vectors within the training dataset, k ∈ K the conditional vector for the kinematics, and z ∈ Z the transformed Gaussian vector given by the invertible bijection f. A conditional flow with N layers can be described by:

$$z = f(x; k) = f_N \circ f_{N-1} \circ \cdots \circ f_1(x; k) \qquad (2)$$

The logarithm of the transformed probability is then given by Eq. 3, where π denotes the probability under a Gaussian distribution:

$$\log p(x \mid k) = \log \pi\big(f(x; k)\big) + \sum_{i=1}^{N} \log \left| \det \frac{\partial f_i}{\partial f_{i-1}} \right| \qquad (3)$$

The loss function is then given by the negative log-likelihood:

$$\mathcal{L} = -\frac{1}{|X|} \sum_{x \in X} \log p(x \mid k) \qquad (4)$$

It has been found that using the features reconstructed by the cAE (x′) instead of the original features (x) allows for a better separation of classes at inference. We also observed that an augmented space made of the reconstructed features and the residuals (x − x′) increases the separation power. As will be shown in Sec. 4, our reference class data at each kinematics can be represented by a cluster in the feature space that is normalized on a hypersphere. It turns out that features provide localization in space, while residuals push events deemed "anomalous" radially outward; in other words, the augmented space with residuals allows the extraction of clusters originally nested within the main population. Hence, the cMAF is trained on the augmented space of residuals and reconstructed features. The conditions remain the kinematic variables discussed prior, although now we must normalize them on the interval (0,1) to allow better convergence of the flow network. Normalization of the conditions for the cAE is not mandatory, although if the conditions exist over a large domain they should be normalized prior to injection. The flow network will be used as the conditional generator to form the reference cluster in the augmented space as a function of the kinematics.

(iii) The last part of the architecture consists of clustering based on HDBSCAN. This allows us to fit the objects in the inference phase with respect to the reference cluster on an object-by-object basis, i.e., on a particle-by-particle basis for the neutral shower identification problem of GlueX, and on an event-by-event basis for the LHC jet problem, as described in Sec. 4.

The following provides more details on the approach taken to deal with certain aspects that characterize the F+M architecture. Sec. 2.1 describes the continuous conditional generation; Sec. 2.2 covers the separation via clustering and the choice of the dynamic cuts that are applied; finally, Sec. 2.3 provides a global overview of the workflow during the inference phase, with Fig. 1 depicting the connection of the components (i), (ii), and (iii) described in this section.
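The mapping from injected features to the augmented space can be sketched in a few lines of code. The following is a minimal PyTorch illustration, not the authors' exact network: the layer widths, the latent size, and the helper names (ConditionalAE, train_step, augmented_space) are assumptions made for this example, with dimensionalities chosen to match the GlueX case (14 features; conditions E and z).

```python
import torch
import torch.nn as nn

class ConditionalAE(nn.Module):
    """Minimal conditional autoencoder: features are encoded and
    decoded together with the kinematic condition vector k."""
    def __init__(self, n_feat=14, n_cond=2, n_latent=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(n_feat + n_cond, 64), nn.ReLU(),
            nn.Linear(64, n_latent))
        self.dec = nn.Sequential(
            nn.Linear(n_latent + n_cond, 64), nn.ReLU(),
            nn.Linear(64, n_feat))

    def forward(self, x, k):
        z = self.enc(torch.cat([x, k], dim=1))
        return self.dec(torch.cat([z, k], dim=1))

model = ConditionalAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
huber = nn.HuberLoss(delta=1.0)  # the loss of Eq. (1)

def train_step(x, k):
    """One optimization step of the cAE against the Huber loss."""
    opt.zero_grad()
    loss = huber(model(x, k), x)
    loss.backward()
    opt.step()
    return loss.item()

def augmented_space(x, k):
    """Forward pass a trained cAE and build the 2N-dimensional
    augmented vector of reconstructed features and residuals."""
    with torch.no_grad():
        x_rec = model(x, k)
    return torch.cat([x_rec, x - x_rec], dim=1)
```

The same construction would apply to the LHC problem with 15 features and a single transverse-momentum condition.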
Continuous conditional generation

Continuous conditions give rise to the problem of sparsity within the dataset, meaning low numbers of events per condition, i.e., in different kinematic domains. The obvious method to circumvent this issue is pre-binning of conditions such that they become discrete. We instead choose to take a different approach, in which we allow conditions to remain continuous but enforce sampling from restricted domains. This is achieved using Kernel Density Estimation (KDE) to model the transformed probability distribution of the training data in kinematic bins. That is to say, for each bin in the conditional space, we form a density estimation object which we can call upon to sample sets of latent vectors from restricted domains. In inference, we use the inference particle's kinematics to map to a KDE model, fit on the transformed distribution of the training samples. This model then generates a sample in the 2N augmented space of transformed features and residuals, where N is the dimensionality of the feature space. We then concatenate the original inference kinematics to each generated 2N Gaussian vector, forming a vector of size 2N + L, and forward pass this vector through the cMAF to generate our augmented space.
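A minimal sketch of this restricted-domain sampling, assuming scikit-learn's KernelDensity with illustrative bin edges and bandwidth (the tuned KDE settings are not quoted in the text):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_bin_kdes(z_latent, conditions, e_edges, z_edges, bandwidth=0.2):
    """Fit one KDE per (E, z) kinematic bin on the cMAF-transformed
    (latent) representation of the training data."""
    kdes = {}
    e_idx = np.digitize(conditions[:, 0], e_edges) - 1
    z_idx = np.digitize(conditions[:, 1], z_edges) - 1
    for i in range(len(e_edges) - 1):
        for j in range(len(z_edges) - 1):
            sel = (e_idx == i) & (z_idx == j)
            if sel.sum() > 10:  # require a minimally populated bin
                kdes[(i, j)] = KernelDensity(bandwidth=bandwidth).fit(z_latent[sel])
    return kdes

def sample_latent(kdes, e, z, e_edges, z_edges, n=1500):
    """Draw latent vectors from the restricted domain of the bin
    containing the inference kinematics (e, z)."""
    key = (int(np.digitize(e, e_edges)) - 1, int(np.digitize(z, z_edges)) - 1)
    return kdes[key].sample(n)
```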
Separation via clustering and dynamic cut

We introduce an outlier score which is obtained by comparing each object candidate (e.g., either a particle or a physics event) to the reference cluster. This can be thought of as a probability of being an outlier with respect to the reference class. In other words, it corresponds to the complementary probability of being an inlier:

$$O = 1 - P_{\text{inlier}} \qquad (5)$$

This metric is used to decide if the object is more likely to belong to the reference class or not. Every point of the reference cluster is characterized by an inlier probability and therefore by an outlier score. In what follows we detail the process of generating the outlier distribution for the reference cluster. We will then compare the outlier score of the candidate object to the reference distribution.

The following is a brief and concise description of the HDBSCAN algorithm. The reader can find more details about the algorithm in [9,10]; for our implementation in particular we utilized the documentation in [6]. HDBSCAN utilizes the mutual reachability distance between the points to form a weighted graph, which is in turn used to build the clustering hierarchy. Heuristically, the density corresponds to the inverse of a distance, λ = 1/distance: the smaller the distance between points, the higher the density. In order to get more information on the clustering, a condensed tree is formed. This tree contains clusters, and each cluster contains underlying leaf clusters; this stems from the hierarchical nature of the algorithm. The user has control over hyperparameters such as "min samples", which sets a lower limit on the number of points needed to be considered a core point and for the algorithm to perform any mutual reachability calculation. The algorithm prioritizes regions of high density, eventually merging less dense regions with the main cluster if they are reachable under some threshold. Persistence is introduced by defining a notion of membership based on how long the point was retained in the cluster, i.e., its position in the spanning tree, in order to compare the relative distance scales between a fixed cluster and a point in question. We also want to consider a density-based notion of membership. This is done via a modification of the Global-Local Outlier Score from Hierarchies (GLOSH) algorithm [10], which allows us to compare the point's membership persistence with the maximum persistence of the cluster, in order to get a measure of how much of an outlier the point is relative to the fixed cluster.

Combining notions of both distance and density, we can now obtain a membership distribution for the reference cluster at each kinematics. This is used to define an outlier metric when classifying new data points. This metric is dynamic in that it requires generating a cluster representative of the reference population at any kinematics. One can therefore define a quantile threshold, i.e., some outlier score value corresponding to, e.g., keeping 95% of the population.

It should be stressed that the quantile cut defines the outlier score of the candidate, that is, the probability that an object is an outlier with respect to the reference class. Thus, when we classify data via our quantile metric, we define an outlier score cut that corresponds directly to a certain confidence level in the data. This metric allows us to remain completely agnostic with respect to the complementary class, removing the need for semi-supervised methods -- which require an example signal in mind during training, see, e.g., [11] -- in defining the optimal selection threshold.
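The dynamic quantile cut can be sketched with the hdbscan package, whose outlier_scores_ attribute implements a GLOSH-based score. The paper combines this density-based notion with a persistence-based membership, so the sketch below, which relies on the library's GLOSH scores alone and on an illustrative min_samples value, is only an approximation of the metric described here:

```python
import numpy as np
import hdbscan

def outlier_decision(reference, candidate, quantile=0.95, min_samples=10):
    """Cluster the generated reference population together with one
    candidate point; flag the candidate as an outlier if its score
    exceeds the chosen quantile of the reference-score distribution."""
    X = np.vstack([reference, candidate[None, :]])
    clusterer = hdbscan.HDBSCAN(min_samples=min_samples).fit(X)
    scores = clusterer.outlier_scores_
    threshold = np.quantile(scores[:-1], quantile)  # reference points only
    return scores[-1] > threshold, scores[-1]
```

Appending the candidate to the reference population before a single clustering pass mirrors the speed optimization described in the workflow below.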
Workflow at the inference phase

The workflow of F+M is depicted in Fig. 1, which describes the flow of an individual object (a shower in the GlueX BCAL case or a dijet event in the LHC case) on which we wish to perform inference. The object is initially fed through the cAE, producing both the reconstructed and residual feature vectors to be augmented. Subsequently, the kinematics of the input object are mapped to a KDE instance -- pre-fit on the transformed distribution of training samples -- which in turn produces a set of Gaussian vectors from a restricted domain. It is worth mentioning again that using KDE is necessary, given that we are using continuous conditions. For a given condition vector k, there are not enough data to reliably sample a full-width Gaussian, so we must instead restrict the cMAF sampling domain at the generation stage, for which using KDE is an effective approach. The inference kinematics are concatenated to the vectors and fed to the cMAF for a forward pass. The model is then forced to interpolate over the restricted domain at the generation stage. The generated data are normalized (either on a hypersphere or by applying a standard scaler) and fed to an HDBSCAN clustering instance, forming the reference cluster for the given kinematics. The inference object is added to the cluster, and classification is performed via the dynamic quantile cut. We directly include the inference object in the initial reference cluster, since this greatly improves the speed of the algorithm (saving a second clustering). In doing so we are careful to keep the true reference population large such that an individual point has no influence on the quantile metric.
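Tying the three components together, a single inference pass could be orchestrated as follows. This is a schematic sketch under the same assumptions as the previous snippets: it reuses the hypothetical sample_latent and outlier_decision helpers, and cmaf_sample stands in for the conditional generation step of the flow (mapping restricted-domain latent vectors plus conditions to the augmented space).

```python
import numpy as np
import torch

def classify(x, k, cae, cmaf_sample, kdes, e_edges, z_edges, quantile=0.95):
    """One F+M inference pass for a single object, following Fig. 1."""
    # (i) cAE forward pass -> reconstruction + residual (augmented vector)
    xt = torch.as_tensor(x, dtype=torch.float32)[None, :]
    kt = torch.as_tensor(k, dtype=torch.float32)[None, :]
    with torch.no_grad():
        x_rec = cae(xt, kt)
    aug = torch.cat([x_rec, xt - x_rec], dim=1).numpy()[0]

    # (ii) restricted-domain latent sample -> conditional generation
    z = sample_latent(kdes, k[0], k[1], e_edges, z_edges, n=1500)
    reference = cmaf_sample(z, np.tile(np.asarray(k), (len(z), 1)))

    # normalize reference cluster and candidate on the unit hypersphere
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    # (iii) cluster and apply the dynamic quantile cut
    return outlier_decision(unit(reference), unit(aug), quantile)
```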
Physics applications and corresponding datasets

Our approach can be applied to a plenitude of problems in different research areas. We selected two examples in particular: one related to the identification of neutral showers caused by neutrons in the GlueX BCAL, the other to BSM dijet events that significantly differ from the SM background at the LHC.

Neutral showers in the GlueX BCAL

The GlueX experiment, located in Hall D at Jefferson Lab, aims to confirm the predictions of Lattice Quantum Chromodynamics, searching for a class of particles known as exotic hybrid mesons [12]. The theory predicts multiplets of exotic mesons with different quantum numbers, and the unambiguous establishment of exotic hybrids requires the full mapping of the hybrid multiplet spectrum. This mapping demands the identification of neutral and charged particles in the final state in several topologies and the validation of the results through consistency checks between different decay modes of the same hybrid meson. Production of charged exotic mesons implies a particle other than a proton must be produced in the reaction. This limits the resulting products to be either a ∆ baryon or a neutron. Charge exchange can also occur, in which the proton provides its charge to a positively charged exotic meson, resulting in the production of a neutral ∆ and a neutron. In practice, ∆ baryons are difficult to work with. This is due to a large underlying physics background, accompanied by difficulties describing their kinematics, which are necessary for analysis purposes. Ideally, we would like to detect and isolate the neutrons, as they do not require detailed modeling of production and decays, and they provide constraints to theoretical predictions.

We focus on the Barrel Calorimeter of GlueX [13], a 400 cm long (about 115° in polar angle coverage) electromagnetic calorimeter designed primarily for photon detection. The detector consists of scintillating fibres compressed between thin layers of lead (see Fig. 2). The detector is segmented into 48 azimuthal modules. Each module is partitioned into four readout channels, consisting of double-ended readout using Silicon Photo-Multipliers (SiPM). The GlueX photon beam is incident on a liquid hydrogen target (γ + p), which results in many different final states (termed "reaction channels"). Many of these particles leave the target and strike the BCAL, creating electromagnetic showers within the detector. It is from these showers that we attempt to classify particles based on their profiles. Low-level and high-level observables reconstructed in the BCAL are injected in the algorithm as input features (the reader can find a detailed description of the features obtained from the BCAL detector response in Appendix A).

Dealing with neutrons is typically more complicated, in part due to the BCAL detector's response being more difficult to extract compared to a photon. As a result, the extraction of reconstructed neutrons can be affected by sizable uncertainties. Calibrations from real data are also easier for photons, since one can rely on large samples of standard "candles" like π0 decaying into photons, which are abundantly produced at GlueX and are detected in the BCAL. With this novel architecture, we are able to limit the assumptions we impose on neutrons in the training phase of our algorithm, as we rely on half the information of supervised algorithms; namely, the training will be based only on detected photons, as they typically provide clear signatures of neutral showers. For the proof of principle of our architecture, we will focus on isolated neutral showers, so anything that is not recognized as a photon is then classified as a neutron.

The neutral shower reconstruction problem is characterized by 14 features. The dimensionality of this space will be augmented utilizing residuals. The detector response depends on the kinematics of the particles, that is, with which energy and at which position the particle interacts in the calorimeter. The dependence on the particle kinematics is encoded in our approach through the conditional cAE and cMAF, as explained in Sec. 2. In fact, as should be clear from Fig. 2, the BCAL design has a cylindrical geometry, so the two main kinematic parameters that characterize the reconstruction of the neutral shower are the energy E of the particle and the z position at which it strikes the innermost layer.

Training and testing data consist of simulated samples, in which generation of only a specific particle type occurs via a Geant4 particle gun [14], allowing by construction high-purity samples in both sets (neutrons and photons). Particle samples are simulated in such a way as to be approximately flat in reconstructed energy E = 200−2200 MeV and in z = 162−262 cm within the BCAL. This corresponds to a region of the phase space that is highly active within the calorimeter.

We initially deploy fiducial cuts on the reconstructed widths of showers, namely, the width in z, the radial width, and the width in time. Doing so removes any artifacts left over from the reconstruction process after simulation, in which we are careful to apply loose cuts such that we do not impose restrictions on the data and lose sensitivity to the tails of the data distributions. A strict requirement of one shower per event is also imposed to further eliminate any other interactions (for example, split-offs in the photon sample).
Since we are interested in only those neutrons that mimic photons to a high degree, i.e., those not easily separated via rectangular cuts, we deploy a tight pre-selection on the radius of a shower (it must be within the first 3 BCAL layers) and the amount of energy within the BCAL 4th layer (less than 0.1 GeV). The 14 features are comprised of detector response variables, and their definitions can be found in Appendix A. The entire training dataset consists of ≈ 1.8M photons, of which we reserve about 10% for validation. Testing samples of photons and neutrons are generated independently and each contain about the same number as the validation set. We condition the model on continuous values of E and z, modeling the cMAF latent representations in bins of 4 cm and 40 MeV, values that correspond to a coarse representation of the BCAL photon resolution. The kinematics of the photons and neutrons detected by the BCAL are displayed in Appendix B. A comparison of the 'original' feature distributions injected in the cAE, those 'reconstructed' by the cAE, and those dynamically 'generated' by the cMAF can be found in Appendix C, along with a comparison of the residuals and their corresponding generations.

BSM dijet events at LHC

Our architecture can be utilized in other problems too. Despite the multiple searches for physics beyond the Standard Model (BSM) conducted at the LHC, new physics remains elusive as of today. In the last few years many novel approaches have been developed for AD in order to detect signal events which would stand out as anomalous with respect to a reference background: these span from new unsupervised AD techniques leveraging neural density estimation [4] to tag-and-train techniques that can be applied to unlabeled data, thus being less sensitive to subtle features of jets which may not be well modeled in simulation [15]. In this context, our architecture can be utilized to characterize how anomalous an individual event is with respect to background events while remaining agnostic with respect to the individual events being analyzed.

In this paper we consider QCD dijet events as background and we look for BSM dijet events from the decay of a Z′. We utilized a suite of jets for SM and BSM particle resonances which is available on Zenodo [16], provided by the authors of [17]. Primarily, we isolate Z′ → t t̄ jets (anomalous signal) from SM QCD dijets (background) in order to remove the varying-length feature vectors seen in other BSM datasets, such as W′ → W + jj. The datasets have been generated with MADGRAPH [18] and PYTHIA8 [19]. The DELPHES [20] framework has been used for fast detector simulation. For a detailed description of the dataset we refer the reader to reference [16]. The simulated QCD [21] and BSM dijets [16] are produced with the same selection criteria. Clustering of the jets was done using FASTJET [22], deploying the anti-kT algorithm [23] with a cone size of R = 1.0. As stated in [16,21], events within the datasets must meet the requirement of the leading jet having transverse momentum pT > 450 GeV and the sub-leading jet having pT > 200 GeV.

In this case we use only a single conditional, namely the leading jet transverse momentum, and form a fixed-length feature vector consisting of the remaining 4-vector properties of the leading jet, its n-subjettiness variables, the sub-leading jet 4-vector, and its n-subjettiness variables.
This feature vector then gets augmented with its residual vector from the cAE, resulting in a vector of 30 features at inference, including residuals. We apply a further condition on the datasets, requiring the leading jet to have pT < 800 GeV in order to provide sufficient data as a function of the conditional parameter. We model the cMAF's transformed space in bins of 1 GeV.

The architecture is trained on ≈ 600k QCD dijet events and validated on ≈ 50k, retaining around 50k for testing. We use only a single top-jet file from [16] (mt = 174 GeV), providing 50k anomalous events. More details on the feature distributions of both classes can be found in Appendix D, which also includes a comparison of the 'original' feature distributions injected in the cAE, those 'reconstructed' by the cAE, and those dynamically 'generated' by the cMAF, along with the residuals and their corresponding generations.

Analysis and results

In what follows, we deploy our architecture on the two different physics scenarios introduced in Sec. 3.

Neutral showers classification with the GlueX BCAL

We demonstrate the potential of the model as an OCC method for GlueX photons, which in turn allows us to tag neutron candidates from the sample of isolated neutral showers described in Sec. 3.1. As already explained, there are specific regions in the phase space of the BCAL where simulating the detector response to neutrons is challenging because it is characterized by large uncertainty.

Our strategy aims at isolating neutrons by applying cuts on the photon showers, the latter taken as the reference class. As described in Sec. 2, our approach is unsupervised and agnostic to the neutrons; it allows us to dynamically generate a reference cluster in the augmented space of the features as a function of the particle kinematics. The reference cluster is used to establish if a new particle is more likely to belong to the photon class or the neutron class. A quantile cut is applied on an outlier score to determine the probability of a particle being a member of the cluster or an outlier. This approach can be useful when the uncertainty on the distributions of the complementary class (neutrons) is expected to be large compared to the reference class (photons), and the distributions of neutrons and photons cannot be easily separated by standard rectangular selections. In such scenarios, fully supervised approaches become less reliable without a proper assessment of the uncertainty quantification. Our approach allows us to select the true positive rate (TPR) of the reference class which, by construction, is consistent with the quantile cut on the outlier score chosen for the selection. The outlier score of each particle corresponds to a probability of not belonging to the photon reference class, according to Eq. (5); given that we work with isolated neutral showers which can only be either a photon or a neutron, the outlier score is interpreted as how confident we are to have identified a neutron. Fig. 3 depicts the average value of the outlier score in bins of E and z.
It is easily seen that the average outlier scores for neutrons are much higher across the entire phase space when comparing both plots. It is also apparent that the outlier score of photons is flat and close to zero as a function of the kinematics, and for neutrons it is large in value and rather uniform in distribution too, even though the reconstructed features largely depend on the kinematics of the particles (as displayed in Figs. B1, B2). This means the architecture has provided approximately uniform and good separation power in the phase space we are covering. Deploying a 95% quantile cut, we obtain a True Positive Rate (TPR) for photons and a True Negative Rate (TNR) for neutrons of 95.09% and 52.40%, respectively, as summarized in Table 1. The TPR and TNR are quite large and, to our knowledge, exceed by far results obtained with traditional rectangular selections [25]. We note that by assuming both photon and neutron training datasets are reliable, deploying a fully supervised model like XGBoost [26] would result in a TPR and TNR of about 92% each, but again this is not the scenario we are tackling here.

Efforts have been made to improve the prediction head of the algorithm, in which we replaced the clusterer with a One-Class Support Vector Machine (OC-SVM). The conditional generations (see Fig. C1 for an example of generations produced via the cMAF) are then used to fit the OC-SVM and predict outliers (i.e., neutrons). In smaller-scale tests it was found that the TPR of photons suffered drastically, at around 56%, while offering a slight increase in the TNR of neutrons at around 90%. These results are most comparable to the results obtained deploying a 68% quantile cut with clustering, yet pose no real performance increases. Using the OC-SVM also limits our ability to employ a quantile cut. The SVM-based method was deemed inferior to the clustering method used in our approach and was not explored further.

Uncertainty in the neutron sample

We further showcase the utility of our architecture when large uncertainty affects the class complementary to the reference class. In our case, the complementary class is that of the neutron candidates, and we know that neutron simulations in the phase space of interest described in this study can differ significantly from real data. We show how this affects a fully supervised approach such as XGBoost and make a comparison to our OCC approach, which benefits from the usage of residuals.

While the original injected neutron distributions overlap to a great extent with the photon distributions to begin with, we push this overlap to an extreme case: we artificially create an extra testing dataset of neutrons in which we 'scale' the neutron features in such a way as to make them highly resemble photons. We pretend this new sample to be the actual data observed in an experiment, representative of the true detector response, while we consider the original sample to be the simulated data in which we assume there exists a ∼ 10−15% difference from actual data. Fig. C2 shows a comparison between photons and neutrons of the injected feature distributions. For the neutrons, we also show the case of the 'scaled' (i.e., actual) distributions. Similarly, Fig. C3 shows a comparison for the distributions of the features as reconstructed by the cAE; Fig. C4 shows the same comparison for the residuals. Finally, Fig. C5 shows a comparison for the distribution of the outlier scores.
XGBoost is then trained on the simulated neutron sample, and we compare performance on both simulated and actual (scaled) samples in the inference phase. In this study we use the TPR obtained with XGBoost to define the quantile cut for our architecture and make a comparison. Table 2 shows the results of this study. The XGBoost TNR performance drops from about 92% to 79% when deployed on what we consider the actual data in this example. This is the result of a decision boundary obtained in the training phase using an inaccurate neutron sample. On the other hand, F+M is an OCC approach that relies on photons only and is agnostic to the neutron sample, which is 'seen' only during the inference phase. The TNR performance increases from 60% to 83% when deployed on the "actual" neutron sample. The reason for the increase is likely due to learning correlations in the kinematics in the augmented space. In fact, while the new neutron features are artificially made more photon-like by scaling their distributions, and hence look harder to separate from photons, their correlations with the kinematics -- which the cAE is able to pick up -- changed, and the resulting residuals are easier to separate. This is further confirmed via the results obtained using the feature space only, in which performance drops with respect to the unperturbed neutron sample.

A result provided by XGBoost in this example would give a large TNR of 92%, but it is affected by a large systematic, as indeed the true performance would be 79%. F+M training does not depend on the neutron sample, and therefore the TNR performance is not affected by the same large uncertainty; it actually provides in this particular case a larger value of 82%. We note that:

• The result of the TNR for F+M depends on how the actual data look in the inference phase, i.e., the opposite result applies by switching labels in Table 2;
• OCC is agnostic to the complementary class, which is unlabeled. External physical information can be used to label neutron data; e.g., this may depend on the event topology containing the neutron;
• The outlier score of each particle represents a proxy for the confidence level of its classification; it is worth recalling that the quantile cut is defined by the photon sample and is dynamical in that it depends on the kinematics. We notice that the neutron's outlier score increased on average for the new distributions, meaning that the uncertainty on each individual classified neutron is also decreased. More details can be found in Fig. C5.

Anomaly detection of dijet events at LHC

We deploy our algorithm for anomaly detection of BSM dijets with respect to a background of QCD dijet events. We consider the datasets described in Sec. 3. The performance of our algorithm is compared to other works [5,17] that used the same dataset and considered as a metric the area under the receiver operating characteristic curve (AUC). However, it should be noted that the AUC by construction is not agnostic to the BSM signal in the anomaly detection problem: in fact, it provides model performance as a function of threshold cuts. In order to pick the optimal threshold one must have prior knowledge of the anomalous class itself, which is not always possible, and if it is, it implies a bias towards the model, which in turn becomes weakly supervised. There are of course other methods to identify a suitable cut while remaining agnostic towards the anomalous sample, yet these values may be far from optimal. Thus the AUC can be an inflation of true model performance.
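The quantile-sweep construction of the ROC curve described next can be illustrated with a short numpy sketch. The helper name and the trapezoidal integration over the sampled quantile range are assumptions of this example, not the exact fitting procedure used for Table 3:

```python
import numpy as np

def auc_from_quantiles(ref_scores, scores_bkg, scores_sig):
    """Sweep quantile cuts on the reference outlier-score distribution,
    accumulate (FPR, TPR) pairs, and integrate with the trapezoid rule.
    `ref_scores` come from the generated reference cluster; `scores_bkg`
    and `scores_sig` are outlier scores of QCD and anomalous events."""
    fpr, tpr = [], []
    for q in np.arange(0.02, 0.99, 0.02):
        cut = np.quantile(ref_scores, q)
        fpr.append(np.mean(scores_bkg > cut))  # background mistagged as anomalous
        tpr.append(np.mean(scores_sig > cut))  # anomalous events kept
    order = np.argsort(fpr)
    return np.trapz(np.asarray(tpr)[order], np.asarray(fpr)[order])
```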
The AUC is obtained by fitting our ROC curve. For each quantile between (0.02, 0.98) in steps of 0.02, we compute the TPR and FPR on a random sample of size 1k (approximately 50/50 of each class), propagating the uncertainty in both efficiencies. We report the AUC values in Table 3 and compare to the best values obtained by [5] and [17] using the same dataset. In [5,17] different AUC scores are listed based on different settings, loss functions, etc. Some knowledge of the anomalous class is utilized in order to define the optimal threshold. The AUC score of our architecture is slightly larger than in [5] and consistent within the uncertainty with that of [17]. It should be noticed that our architecture can be further optimized by tuning its hyperparameters. This will possibly further improve the results shown in Table 3.

        Ours             From [5]   From [17]
AUC     0.885 ± 0.003    0.87       0.89

Table 3: AUC score comparison between our architecture and two methods, Fraser et al. [5] and Cheng et al. [17]. Our architecture performs on par with [17] within the uncertainty and slightly better than [5]. It should be noticed that our architecture can be further optimized by tuning its hyperparameters, which will possibly further improve the results.

In the following we calculate TPR and TNR. The idea is that of remaining agnostic to a potential BSM dijet signal, and changing the quantile cut on the QCD dijet background. For example, one may consider setting the threshold of our architecture to ≈ 100% for QCD dijets, very naively letting only largely anomalous BSM samples stand out from the selected distributions. For completeness we show different scenarios (including also low quantile cuts) in Table 4.

Table 4: Summary of dijet results at LHC: results are obtained using different quantile cuts on the outlier score. Fluctuations in TPR are within the error. For comparison, we also include a baseline rectangular selection based on loose fiducial cuts on each feature, defined in such a way as to select, when combined, 99% of the SM QCD dijet events.

Fig. 4 depicts the outlier scores of both QCD and BSM dijets at LHC as a function of the leading jet transverse momentum, in which the isolation of the tail of the BSM distribution is visible with respect to the 99% quantile threshold. One may be tempted to instead opt for a global cut within the outlier space; however, this explicitly demands prior knowledge of the complementary class, directly violating the conditions of anomaly detection frameworks. As is clear in all our tables, the TPR is consistent by construction with the quantile cut applied to the outlier score obtained using the generated reference cluster.

Benefits of the augmented space: residuals

We have demonstrated the good performance of our architecture for different physics problems. In order to illustrate the performance increase via residuals, we follow the same inference procedures discussed before, except we remove the residuals as input to the cluster. Table 5 shows a comparison between results obtained using the feature space and the augmented space of features plus residuals. The comparison is done for both problems (neutron identification in the GlueX BCAL and dijet anomaly detection at LHC). In both cases, we observe a systematic improvement in the TNR by using the augmented space.
We deem the residuals to be highly valuable in terms of separation, and they provide further evidence that features localize the space while the residuals push nested clusters radially outward to be more easily extracted and seen as outliers. This interpretation is also represented in Fig. 5, which shows a t-SNE representation of the feature and augmented spaces for the GlueX problem. As shown in the figure, augmenting the space with residuals produces a better distinguishing power between photons and neutrons.

Table 5: Features vs augmented space for GlueX BCAL and LHC problems. The benefits of residuals can be concluded via comparison of the TNR at equal thresholds. Note the consistency between threshold and TPR by design, in which fluctuations across differing spaces are within error.

Architecture specifications and computing resources

During the inference phase, in both problems, we are limited by the generation speed of the cMAF and the clustering. HDBSCAN is in fact not optimized to run on GPU, and the fitting procedure dominates the computing time. Considering these limitations, we generate only 1.5k reference samples per particle in inference. We show in both cases that the statistics are sufficiently high to run our analyses. The architecture has not undergone a rigorous optimization and we expect better results could be obtained. Table 6 contains the hyperparameter settings used throughout our experiments. One can imagine optimizing the pipeline end-to-end under a Bayesian process, in which we rely on the signal class only. Key hyperparameters to tune are those of the …

Figure 5 (caption, recovered in part): (middle row) the same problem using 'scaled' feature distributions for neutrons; (bottom row) the QCD dijet problem at LHC (15 dimensions, 30 with augmentation). The residuals create nested clusters of higher density within the data space that are pushed radially outward from the main primary-class cluster (middle column), thus allowing more accurate separation of the two classes at inference. There still exist nested clusters within the reconstructed feature space (left column), yet it is apparent that these are not as easily extracted, which explains the performance increase via the inclusion of residuals in the augmented space (right column).

The number of parameters corresponds to the GlueX BCAL problem. Additional technical details on the architecture can be found in Table 7.
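A visualization like Fig. 5 can be produced with scikit-learn's t-SNE; the array names and plotting choices below are assumptions of this illustrative sketch:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne_compare(feat, aug, labels):
    """Embed the feature space and the augmented space side by side to
    visualize the extra separation provided by the residuals."""
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    for ax, X, title in zip(axes, [feat, aug],
                            ["features only", "features + residuals"]):
        emb = TSNE(n_components=2, perplexity=30).fit_transform(X)
        for lab, col in [(0, "tab:blue"), (1, "tab:red")]:
            ax.scatter(emb[labels == lab, 0], emb[labels == lab, 1], s=4, c=col)
        ax.set_title(title)
    plt.show()
```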
Summary and conclusions

We have developed a novel architecture that consists of two steps called "flux" and "mutability" (F+M). In the first stage we learn the distribution of a reference class. In the second stage we address if data significantly deviate from the reference class. The backbone of this architecture consists of: (i) a conditional autoencoder that allows us to reconstruct the injected features and calculate the residuals, which combined with the former make an augmented space; (ii) a conditional Masked Autoregressive Flow that, combined with a kernel density estimation, allows us in the inference phase to dynamically generate the reference class in the augmented space; (iii) a hierarchical-based clustering algorithm that allows us to estimate if an object belongs to the reference class by producing an outlier score, which can also be seen as an estimate of how confident we are about the object belonging to the reference class. We demonstrated its capability as a one-class-classification method when dealing with isolated neutral showers at the GlueX BCAL, providing good separation between photons (reference class) and neutrons (complementary class) while relying only on information related to the reference class. We then proved the advantage of this approach, which is agnostic to the neutron sample, in particular when the latter is affected by large uncertainty.

We also showed the capability of the algorithm as an anomaly detection method, isolating possible BSM Z′ → t t̄ dijet topologies from the SM QCD dijet background (reference class) at the LHC. We demonstrated that our model performs on par with other architectures using the same dataset, yet we are able to remain truly agnostic towards the complementary class using our final quantile metric. A possible extension of this problem that has been left for future exploration consists in conditioning on both pT and the invariant mass of the leading jet. We also note in general that our architecture has not undergone rigorous optimization of the hyperparameters, and we therefore expect a further increase in performance in doing so.

In both cases, we have demonstrated an increased performance via inclusion of the residuals. We have concluded that an augmented space (features + residuals) is ideal for inference. The features localize the space for a given kinematics, and the residuals push the complementary class radially outward in the hypersphere used for clustering. This allows nested clusters existing in the data space to be more easily extracted and increases the distinguishing power.

The inference time per particle is fairly slow. As such, the architecture is best suited for offline analysis purposes in its current state, in which it can be optimized via parallelization. A reduction in inference time can potentially be obtained via the use of a different flow network, as MAF is generally known for its slower generation speeds. Other NF models have not yet been tested and are left for further exploration. A large bulk of the inference time remains at the clustering stage, as HDBSCAN is not optimized to run on GPU. Other clustering methods, or an improved GPU-optimized HDBSCAN build, could potentially solve this issue. In the future we plan to extend this work to an application for data quality monitoring in an experiment, in which significant deviations from the expected quantiles of the reference distributions could determine if a new calibration/alignment is needed.
Appendix A. GlueX BCAL feature definitions
All showers are DBCALShower objects in the GlueX software package. We denote these showers as S, and label the quantity we use via a subscript (S_x denotes the x position of the shower object, for example). R denotes the inner radius of the BCAL (64.3 cm radially outwards from the center line of the target) and T_z denotes the center z position of the hydrogen target (65 cm from the upstream edge of the BCAL). Showers are comprised of points, and we define our features using the energy weighting of these points. The per-layer energy is Layer_M E = Σ_i E_i, where M ∈ {1, 2, 3, 4} is the layer number and E_i is the energy of the i-th reconstructed point in the layer. The remaining features are energy-weighted quantities built from the following point-level inputs: E_i and z_i, the energy and z position of the i-th point in the shower; ∆r_i = (R − r_i), with E_i and r_i the energy and radial position of the i-th point; E_i and t_i, the energy and timing information of the i-th point; E_i and θ_i, the energy and polar angle (from the target center) of the i-th point; and E_i and φ_i, the energy and azimuthal angle of the i-th point. The final feature is the position at which the particle hits the inner radius of the BCAL.

Appendix B. GlueX kinematic plots
Plots contained in this section illustrate the 14 features (see Appendix A) as a function of kinematics for both simulated photons and neutrons, as described in the captions. These plots are integrated over the entire phase space used in the analysis. The functional dependence of the features on the phase space (E, z) can be clearly seen in the simulated samples, more so for the photons, as the simulation of a neutron interaction is more difficult. As one moves to training on real data from standard "candles" such as π0 → γγ, the dependencies become more pronounced. The distributions of the features with respect to the z position and energy in the BCAL are shown in Figs. B1 and B2 for photons and neutrons, respectively. For neutrons the dependence differs from that of the photons, but it overlaps in certain regions of the phase space, making separation of the two classes more difficult.

Appendix C. GlueX generations
We conditionally generate data using the cMAF in a select kinematic region to demonstrate the quality of the data produced. A central region of the phase space we are working with (1 GeV < E < 1.4 GeV, 206 cm < z < 218 cm) is chosen and used to compare three different quantities, namely the original injected features, the reconstructions from the cAE and the generations of the cMAF. For each original point within the phase space, we generate ten artificial data points; as such, the generated distributions are an order of magnitude larger in terms of sample size but have been normalized. Since we are generating the reconstructed space of the cAE, found to be beneficial due to the skewing of out-of-distribution
(OOD) samples, one may argue that the use of a Conditional Variational Autoencoder (cVAE) would be appropriate. We have developed a similar algorithm using cVAEs, although it is not optimal for a few reasons, namely the quality of the generations and the problem of dead nodes (referring to vanishing gradients, in which outputs tend to zero) during training. Due to the small values in the residual space, the training process can be very tricky and can lead to Dirac delta distributions at zero in some variables if dead nodes occur. Using an NF instead was found to be more reliable, and it can be seen from Fig. C1 that the distributions are consistent. On a particle-by-particle basis we scaled the neutron features through the empirical formula x′ = x + S · P · (x − x_min) · (x_max − x), where S is the sign that controls the direction of the nudge, P is the scaling effect in percentage (we used a 10% effect, i.e. a factor of 90% or 110% depending on the sign S), and x_min, x_max are the minimum and maximum physically allowed values of each feature. We applied this scaling to all features with the exception of the shower R, for which it was neglected. Fig. C2 shows the injected distributions for photons, neutrons and scaled neutrons; Fig. C3 shows the corresponding reconstructed features, whereas Fig. C4 shows the residuals; Fig. C5 shows the corresponding outlier score distributions obtained with F+M. The perturbed neutron outlier scores are on average higher than the original ones, given that the cAE is able to detect the kinematic discrepancies introduced via the perturbation.

Appendix D. LHC generations
We conditionally generate data using the cMAF in a select kinematic region to demonstrate the quality of the data produced. A central region of the phase space we are working with (600 GeV < pT_j1 < 650 GeV) is chosen and used to compare three different quantities, namely the original features, the reconstructions from the cAE and the generations of the cMAF. For each original point within the phase space, we generate ten artificial data points; as such, the generated distributions are an order of magnitude larger in terms of sample size but have been normalized. We notice that in this dataset the generation quality is not as good as for GlueX in Appendix C; this is due to the relatively low training sample size and the resulting large kinematic bins (1 GeV in transverse momenta) in the KDE modeling phase. The training data set should ideally be denser (in terms of the continuous conditionals), which would allow finer modeling with the KDE and overall a more robust learning phase for the cMAF. We can see that when the training data are sufficient, the generations are extremely accurate (see Appendix C). The discrepancies in some variables undoubtedly affect performance, although, based on the performance obtained, it can still be seen that the clusterer is able to efficiently use the injected data at the inference phase.

Figure D1: Features of QCD dijet events at LHC: Original, reconstructed and generated features (top three rows), residuals and generated residuals (bottom three rows), for 600 GeV < pT_j1 < 650 GeV. The cMAF is trained to generate the reconstructed feature space of the cAE as it was found to give better separation power due to the skewing of OOD samples (i.e. top jets). The cMAF matches the distributions of the reconstructed QCD dijets to a very high degree.

Fig. D1 shows the distributions of the SM QCD dijet events at the LHC for the injected, reconstructed and generated data; Fig. D2 shows a comparison between the SM QCD and the BSM feature distributions.
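For concreteness, here is a minimal sketch of the Appendix C neutron-scaling formula applied exactly as written above; the feature ranges and example values are illustrative placeholders. Note that the quadratic factor vanishes at the range endpoints, so the physically allowed endpoints are preserved.

```python
# Minimal sketch of the per-feature neutron perturbation from Appendix C,
# applying x' = x + S * P * (x - x_min) * (x_max - x) verbatim;
# ranges and example values are illustrative placeholders.
import numpy as np

def scale_feature(x, x_min, x_max, sign=+1, p=0.10):
    """Nudge a feature by a ~10% effect modulated by a quadratic envelope
    that vanishes at the physically allowed range endpoints."""
    x = np.asarray(x, dtype=float)
    return x + sign * p * (x - x_min) * (x_max - x)

# Example: nudge a normalized energy-like feature upward within [0, 1].
vals = np.array([0.2, 0.5, 0.8])
print(scale_feature(vals, x_min=0.0, x_max=1.0, sign=+1))  # [0.216 0.525 0.816]
```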
Figure 1: Flowchart of the architecture in the inference phase: The flowchart is described from top to bottom, where the augmented object produced by the left column is compared to the reference cluster produced by the right column. (A): the inference object is sent through a cAE, in which the conditional learning as a function of the kinematics is done via concatenation. The cAE produces a reconstructed vector used to construct a residual vector (x − x̂). Reconstructed features and residuals are concatenated into a new augmented space. (B): the cMAF is previously trained on the augmented space as a function of continuous conditions. The kinematic vector is mapped to a KDE functional, used to model sub-spaces of the flow-transformed distributions as a function of the kinematics. In the inference phase, the flow network is then fed a sample of Gaussian vectors produced via the KDE. The data produced by the flow network are used to form the augmented reference cluster. (C): The comparison of the object with the reference cluster produces an outlier score used for classification.

Figure 2: Sketch of the barrel calorimeter readout: (a) BCAL schematic; (b) a BCAL module side view; (c) end view of the BCAL showing all 48 modules and (d) an end view of a single module showing the readout segmentation in four rings (inner to outer) and 16 summed readout zones demarcated by colors. More details can be found in [13].

Figure 3: Outlier scores: Average outlier scores (probability of not belonging to the photon class) in bins of 40 MeV and 2 cm. Signal particles (photons, left) display much lower outlier scores (color) on average than background (neutrons, right) across the grid of the phase space (x axis: z position, y axis: energy). There are no apparent kinematic dependencies of the separation power.

Figure 4: Outlier scores as a function of leading jet transverse momentum: Outlier score as a function of the leading jet transverse momentum for QCD dijets (left) and BSM dijets (right) at the LHC. The 99% quantile threshold is overlaid. Opting for large values of the quantile results in isolating the tails of the complementary class distribution if the features overlap to a high degree. The 1σ band has been calculated from reference clusters with 1.5k generated objects.

Figure 5: Dimensionally reduced representation of the reconstructed feature, residual and augmented spaces: t-SNE [27] is used to provide a 2D representation of the reconstructed feature, residual and augmented spaces. (top row) The γ/n classification in the GlueX BCAL (14 dimensions, 28 with augmentation); (middle row) the same problem using 'scaled' feature distributions for neutrons; (bottom row) the QCD dijet problem at the LHC (15 dimensions, 30 with augmentation). The residuals create nested clusters of higher density within the data space that are pushed radially outward from the main primary class cluster (middle column), thus allowing more accurate separation of the two classes at inference. There still exist nested clusters within the reconstructed feature space (left column), yet it is apparent that these are not as easily extracted, which explains the performance increase via the inclusion of residuals in the augmented space (right column).
Figure B1: 2D histograms of photon features (x-axis) and z position (or energy) (y-axis): We see a clear dependence on z and energy for most of the features within the space. (top two rows) Features such as T Width and φ Width display less of a z dependence on average. (bottom two rows) Any feature corresponding to energy deposition within layers has a large dependence on z and E, along with the width variables. By conditioning on z and E, we are able to capture the functional dependence of the detector response at the generation stage.

Figure B2: 2D histograms of neutron features (x-axis) and z position (or energy) (y-axis): (top two rows) We see a lesser degree of z position dependence for the neutron features in comparison to that of the photons. Features with high dependence in the photon sample no longer exhibit the same degree of functional relation in the neutron sample due to their differing interactions. (bottom two rows) We see a clear dependence on energy within the feature space: any feature corresponding to energy deposition within layers has a high dependence, along with the width variables. The dependence differs from that of photons, but overlaps at certain regions in the phase space, making separation of the two classes more difficult.

Figure C1: Features of reconstructed photon showers in the GlueX BCAL: Original, reconstructed and generated features (top two rows), residuals and generated residuals (bottom two rows), for 1 GeV < E < 1.4 GeV, 206 cm < z < 218 cm. The cMAF is trained to generate the reconstructed space of the cAE as it was found to give better separation power due to the skewing of OOD samples (i.e. neutrons). The cMAF closely matches the distributions of the sampled, reconstructed photons.

Figure C2: Photon and neutron distributions: Original and scaled neutron distributions are also shown for comparison.

Figure C3: cAE-reconstructed photon and neutron distributions: Photon and neutron distributions for the features reconstructed by the cAE. Original and scaled neutron distributions are also shown for comparison.

Figure C4: cAE residuals for photon and neutron distributions: Photon and neutron distributions for the residuals obtained with the cAE. Original and scaled neutron distributions are also shown for comparison.

Figure C5: Outlier score distributions: Photon, neutron, perturbed neutron and quantile cut (coinciding with the TPR of XGBoost equal to 92.15%) distributions. The perturbed neutron outlier scores are on average higher than the original ones, given that the cAE is able to detect the kinematic discrepancies introduced via the perturbation.

Figure D2: Features of QCD dijet and BSM dijet events at LHC: Feature distributions are integrated over the entire phase space. The features overlap to a high degree, yet the resulting means of the distributions occupy different regions within the space. The resulting differences in kinematic correlations are able to be exploited via the residuals.
Table 2: Neutron sample study: The table shows a comparison of the performance obtained using two neutron samples, one assumed to be simulated and the other considered as the "actual" detector response to neutrons. Unlike those of neutrons, photon simulations are more accurate and in agreement with real data. TPR is the true positive rate for photons and TNR the true negative rate for neutrons. XGBoost is trained on simulated photons and neutrons; F+M relies only on simulated photons. The XGBoost TNR performance drops when deployed on actual data. The increased TNR of F+M depends on the residual space produced by the cAE, which captures a different kinematic dependence of the neutron features compared to that of the photons in the actual case.

Table 6: Baseline hyperparameters of the F+M architecture: these values are not optimal, but have shown to be reliable as an initial starting point. Thorough optimization utilizing Bayesian processes is likely to further improve performance.

Table 7: Specs of the F+M architecture: the table reports the average inference time per particle, the inference memory and the training memory, i.e., the GPU memory required by the network during the inference and training phases. Training was done utilizing Google services on Nvidia A100-SMX4-40GB and Nvidia V100-SMX2-16GB cards with TensorFlow 2.8.0 and TensorFlow-Probability 0.16.0 builds. Inference was supported via Compute Canada on an Nvidia P100 PCIe 16GB card. The number of parameters corresponds to the GlueX BCAL problem; the GlueX γ/n configuration has slightly larger numbers due to extra layers in the cMAF needed to reproduce the multi-modal distributions in the photon sample.
2022-04-20T01:15:41.971Z
2022-04-19T00:00:00.000
{ "year": 2022, "sha1": "713f960cd7a88e34a5ce585e695752e4acedf16b", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/2632-2153/ac9bcb/pdf", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "713f960cd7a88e34a5ce585e695752e4acedf16b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
17984846
pes2o/s2orc
v3-fos-license
Perspectives on Surgical Data Science The availability of large amounts of data together with advances in analytical techniques afford an opportunity to address difficult challenges in ensuring that healthcare is safe, effective, efficient, patient-centered, equitable, and timely. Surgical care and training stand to tremendously gain through surgical data science. Herein, we discuss a few perspectives on the scope and objectives for surgical data science. Introduction Surgical data science is an emerging discipline with the objective of enabling safe, effective, patient-centered, efficient, equitable, and timely surgical care by means of data acquisition, modeling, and analytics to empower clinical decisionmaking. Surgical data science relies for data capture technology upon mechanical, electrical, and electronics engineering, and for its analytic methods upon other data-intensive disciplines such as computer science, statistics, mathematics, information theory, and epidemiology. Data science involves extracting generalizable knowledge from large amounts of data, which are often unstructured or complex, and yields actionable products for decision-making [1]. Surgical disciplines have yet to benefit from the data science revolution mainly because of limited data capture techniques. Surgical data science has now become feasible because of parallel developments in two domains -technology to seamlessly capture surgical data at scale, and statistical techniques and computational algorithms to analyze large corpora of complex data from multiple sources. While a data science approach is meaningful for any clinical discipline, we emphasize surgery because it is an indispensable and integral component of healthcare [2]. Globally, more than 234 million major surgical procedures are performed each year [3]. Still, about 5 billion humans lack access to high-quality surgical care [4]. Delivering safe, effective, patient-centered, efficient, equitable, and timely surgical care is impeded by several challenges, some of which we propose can be addressed through a data science approach. In this article, we discuss a proposed scope for surgical data science and the challenges we anticipate in integrating data science into surgical care and education. Surgical training Effective and efficient surgical training is a necessity regardless of available resources. Although poor surgical skill is associated with a higher frequency of readmission, reoperation, and death [5], surgical education is inequitable, feedback provided to surgeons during training is inconsistent, and assessment of skill and competence remains unreliable [6,7,8]. Data-driven technologies for automated assessment of performance, diagnosis of skill deficits, targeted feedback through demonstration and examples, and individualized training are the next frontier for surgical education and credentialing across the globe. Technology for automated assessment may be focused on surgical technical skill and competence, non-technical skills, or clinical examination skills [8,9]. But gaps exist in research on automated technologies for surgical training. First, most such research has emphasized technical skill although non-technical skills such as situation awareness, teamwork, and decision-making are critical for safe and effective surgical care [10]. Second, technology to automate diagnosis of specific skill deficits and adaptive individualized feedback require learning from larger amounts of data than those typically utilized in research thus far. 
Third, algorithms to predict skill have yet to be translated into data products that can be integrated into surgical training curricula.

Automated assistive technologies for surgical decision-making
Technology affords tremendous opportunity to enhance patient safety because it is seamlessly integrated into surgical care, for example, with laparoscopic, endoscopic, or robotic techniques. Technical errors accounted for patient injuries more than half the time in a study of surgical malpractice claims in the United States, of which a majority were manual errors and resulted in permanent disability or death [11]. Automated assistive surgical technologies, such as an automated coach, may mitigate certain technical errors. Manual coaching by an expert surgeon has been shown to improve surgical performance and reduce errors [12]. Automated assistive surgical technologies may also serve other purposes such as surgical decision-making and context-aware surgical systems that detect patient workflow and surgical activity. But automated intra-operative assistive surgical technologies are far from being adequately developed for deployment in the operating room because of two major limitations. First, although some algorithms perform with reasonable accuracy in recognizing surgical phases, input from other stakeholders such as caregivers and patients is needed to transform the algorithms into tools with utility in the operating room. Second, techniques have yet to be developed that efficiently assimilate data from multiple sources (e.g., tool motion, video, environmental cues, physiologic monitoring) and enable development of applications using the extracted information. A collaborative data science approach has the potential to establish the role of assistive surgical technologies in pre-, intra-, and post-operative patient care.

Variability and value of surgical patient care
Variability in healthcare in general, and surgical care in particular, has had a large influence on the evolution of policy. Landmark observations about geographic variation in patient outcomes have spurred an emphasis on quality of patient care [13]. Most such efforts have focused on achieving certain benchmarks for a few patient outcomes at the population level, e.g., 30-day mortality [13]. But patient outcomes alone may not be adequate to develop a health system that delivers efficient, equitable, and timely care. Studying variability in patient care processes can yield improvements in efficiency and value of care, which is defined as outcomes achieved per monetary unit spent [14]. A data science approach can facilitate understanding and optimizing patient care processes in the pre- and post-operative settings because data in these contexts are often unstructured and available from disparate sources. Data-driven insights into optimizing patient care workflow before and after surgery can minimize variation in care patterns and patient outcomes. Algorithms to model the relationship between patients' clinical course following surgery and their outcomes can be used to develop tools for timely detection of patients at risk for poor outcomes, and for efficiently allocating system resources.

Effectiveness of surgical treatments for decision-making
The effectiveness of surgical interventions is highly influenced by the skill with which they are delivered, unlike non-surgical treatments that involve simply ingesting a medication at a precisely titrated dose.
Thus advances in surgical care through shared quantifiable knowledge require systematic, objective, and valid measures of surgical skill, both technical and non-technical. For example, setting up fair comparisons in randomized controlled trials or establishing registries to enable discovery of effective surgical treatments is impeded by lack of standardized, objective, and valid measures of skill and outcomes [15]. Automated valid assessment of technical and non-technical surgical skill will also allow development of technological tools to support surgical decision-making in patient care, for example to individualize the choice of optimal surgical technique (laparoscopic vs. robotic) or treatment approach (e.g., use of mesh vs. natural tissue repair for a hernia). Challenges to assimilate surgical data science A few major challenges that need to be addressed for surgical data science to reach fruition are mentioned here. First, surgical data are not routinely captured at scale. Integrating a culture of routine data collection into surgical care, and consistent data capture and processing infrastructure are necessary to promote surgical data science through uniform data repositories across institutions. Such repositories are essential to accelerate the pace of discoveries from data through collaborative analytics. Second, there is considerable heterogeneity in how surgical procedures are described across the world. Standardized ontologies to describe surgical procedures can not only facilitate collaborative analytics across data repositories but also promote discovery of shared knowledge across surgical procedures. In addition, uniform ontologies can catalyze the adoption of data products by healthcare decision-makers. Finally, current research using machine learning in surgery is geared towards optimizing algorithms to solve focused technical problems, e.g., surgical phase detection. Realistic data products, together with solutions for socio-technical issues of understanding how to integrate these new information sources into organizations in ways that allow their effective use, are needed to promote data science in healthcare decision-making. Conclusion Surgical data science is a scientific discipline whose time has come. A data science approach to identify and address challenges in the surgical context will transform how data are used to advance patient care and surgical training. Educating key constituents, standardizing data infrastructure to maximize collective utility of disparate repositories, and promoting collaborations through dedicated funding mechanisms are necessary to realize the full potential of surgical data science.
2016-10-13T22:06:46.000Z
2016-10-13T00:00:00.000
{ "year": 2016, "sha1": "132d26656f264567bdd615201bd3ee2023a55c18", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "132d26656f264567bdd615201bd3ee2023a55c18", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
17340081
pes2o/s2orc
v3-fos-license
The role of macrophage polarization in predicting the prognosis of radically resected gastric cancer patients
Tumour-associated macrophages (TAM) present two different polarizations: classical (M1), characterized by immunostimulatory activity and tumour suppression, and alternative (M2), characterized by tumour promotion and immune suppression. In this retrospective study, we evaluated the correlation between the two forms of TAM and survival time in radically resected gastric cancer patients. A total of 52 chemo- and radio-naive patients were included. Two slides were prepared for each patient and double-stained for CD68/NOS2 (M1) or CD68/CD163 (M2), and five representative high-power fields per slide were evaluated for the TAM count. The median density of the two macrophage populations and the median value of the M1/M2 ratio were used as cut-offs. The 27 patients with an M1 density above the median had a significantly higher survival compared to those below the median. The 26 patients with an M1/M2 ratio above the median showed a median OS of 27.2 months compared to 15.5 months for the patients below the median. No association between M2 macrophage density and patients' outcome was found. In multivariate analysis, the M1/M2 ratio was a positive independent predictor of survival. The M1 macrophage density and the M1/M2 ratio, as confirmed in multivariate analysis, are factors that can help in predicting patients' survival time after radical surgery for gastric cancer. Specifically, it has been reported that TAM infiltration into tumour tissue correlates significantly with tumour vascularity in human oesophageal and gastric cancers [15], and a direct association has also been found between the degree of TAM infiltration and the depth of tumour invasion, nodal status and clinical stage in gastric cancer [16]. Nevertheless, no study has evaluated the correlation between M1/M2 tumour infiltration and overall survival (OS) in gastric cancer patients. Against this background, we decided to evaluate the prognostic role of TAM infiltration in patients affected by radically resected gastric cancer.

Study population
This study was approved by the Institutional Review Board of Campus Bio-Medico University, Rome, Italy. The procedures to obtain human gastric cancer tissues and follow-up information are in accordance with the Ethical Principles for Medical Research Involving Human Subjects as formulated in the World Medical Association Declaration of Helsinki (revised in 2008). All specimens were retrospectively obtained from the archives of formalin-fixed, paraffin-embedded tissue blocks in the Departments of Pathology of Campus Bio-Medico University, Rome, and of the Medical University of Pesaro-Urbino, Italy. The gastric cancer tissues were collected from surgeries performed from May 2000 to August 2004. The patients were followed up until December 2011 through outpatient visits and/or correspondence with family members. The inclusion criteria were complete follow-up data, available paraffin blocks, R0 radical surgery, and no preoperative chemotherapy or radiotherapy. All of the cases that satisfied the inclusion criteria were included in this study. The patients were selected without knowledge of their tumour macrophage counts. Histological evaluation was based on the World Health Organization criteria [17]. Pathological findings (tumour size, spread and lymph-node status) were obtained from the pathologists' original reports.
Tumour-node-metastasis (TNM) classification was reassessed using the seventh edition of the UICC/AJCC classification [18].

Statistical analysis
The OS time was calculated as the period from the date of surgery until death. OS was determined by the Kaplan-Meier product-limit method. Moreover, the differences in terms of OS according to the prognostic variables were evaluated by the log-rank test. Finally, the Cox proportional hazards model was applied for the multivariate survival analysis [19]. SPSS software (version 19.00, SPSS, Chicago, IL, USA) was used for the statistical analysis. A P value of less than 0.05 was considered to indicate statistical significance.

Analysis and validation of immunostaining
Five representative high-power fields (×400 magnification) of the deepest infiltrative tumour areas per tissue section were selected using an Eclipse 80i Nikon microscope (Nikon, Tokyo, Japan). Acquisition was carried out using the NIS-Elements imaging software (Nikon). Cell counts were performed on collected composite images, in which the signal from each fluorochrome had been assigned a different pseudo-colour (green for 488, red for 568 and blue for DAPI). Adobe Photoshop CS5 (version 5.5) software was used to generate composite images from the three different fluorochrome signals. The number of nucleated cells with positive staining for the phenotype marker in each image was then counted manually. Evaluation was performed simultaneously by two investigators (MA, PB) who were blinded to which group the specimens belonged and to patient outcome. The investigators analysed M1 macrophages, M2 macrophages and the M1/M2 ratio, counting the absolute macrophage numbers for all high-power fields per section (Fig. 1).

Patient characteristics
Fifty-two patients were included in this retrospective study. All of the patients had complete follow-up information, and the pathological diagnosis was confirmed by a pathologist prior to inclusion in this study. No patients received chemotherapy and/or radiotherapy before or after surgery. The overall cumulative survival rates were 77% (40 patients) at 1 year, 42% (22) at 2 years, 15% (8) at 3 years, 13% (7) at 4 years and 11% (6) at 5 years. The clinicopathological characteristics are summarized in Table 1.

Correlation between the M1/M2 macrophage density and the M1/M2 ratio and clinicopathological characteristics
We found no statistically significant association between the M1 macrophage density, the M2 macrophage density or the M1/M2 ratio and clinicopathological characteristics such as tumour stage, histology, lymph-node invasion and grade (P > 0.05).

Correlation between the M1/M2 macrophage density and the M1/M2 ratio and survival time
To assess whether the M1 and M2 macrophage densities have any value in predicting prognosis, the median density of each of the two populations was used as a cut-off point to dichotomize the 52 patients into groups with a macrophage density above or below the median value, according to previous studies [20,21]. Kaplan-Meier survival curves were plotted to investigate further the association of cell densities with survival. The log-rank test was used to compare survival rates.
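For readers who wish to reproduce this kind of analysis, the following is a minimal sketch of the median-split survival workflow in Python's lifelines package (the authors used SPSS); the file and column names are hypothetical placeholders.

```python
# A minimal sketch of the median-split survival analysis in lifelines
# (the original analysis was done in SPSS); column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("gastric_tam.csv")  # hypothetical per-patient data file

# Dichotomize at the median M1/M2 ratio, as in the paper.
df["high_ratio"] = df["m1_m2_ratio"] > df["m1_m2_ratio"].median()
hi, lo = df[df.high_ratio], df[~df.high_ratio]

# Kaplan-Meier curve and median survival for the above-the-median group.
kmf = KaplanMeierFitter()
kmf.fit(hi["os_months"], event_observed=hi["death"], label="M1/M2 above median")
print(kmf.median_survival_time_)

# Log-rank comparison of the two groups.
res = logrank_test(hi["os_months"], lo["os_months"],
                   event_observed_A=hi["death"], event_observed_B=lo["death"])
print(res.p_value)

# Multivariate Cox model with T, N, G and the M1/M2 split as covariates.
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "T", "N", "G", "high_ratio"]],
        duration_col="os_months", event_col="death")
cph.print_summary()
```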
We found that the 27 patients with above-the-median M1 macrophage density had a 1-year survival rate of 81% (22 patients), a 2-year survival rate of 55% (15), a 3-year survival rate of 26% (7), a 4-year survival rate of 22% (6) and a 5-year survival rate of 18% (5), with a median survival of 25.6 months; these rates were significantly higher than the corresponding survival rates of 72% (18), 24% (6), 4% (1), 4% (1) and 4% (1), with a median survival of 17.1 months, in the 25 patients with below-the-median M1 macrophage density (P = 0.041, P < 0.05) (Fig. 2, Table 2). In contrast, the 27 patients with above-the-median M2 macrophage density had a 1-year survival rate of 70% (19), a 2-year survival rate of 37% (10), a 3-year survival rate of 15% (4), a 4-year survival rate of 11% (3) and a 5-year survival rate of 11% (3), with a median survival of 17.6 months; these rates were lower, but not significantly so, than the corresponding survival rates of 84% (21), 48% (12), 16% (4), 16% (4) and 12% (3), with a median survival of 22.3 months, in the 25 patients with below-the-median M2 macrophage density (P = 0.724, P > 0.05) (Fig. 3). The median value of the M1/M2 ratio was used as a cut-off point to dichotomize the 52 patients into groups with an M1/M2 ratio above or below the median. The 26 patients with an above-the-median M1/M2 ratio had a 1-year survival rate of 88% (23), a 2-year survival rate of 65% (17), a 3-year survival rate of 27% (7), a 4-year survival rate of 23% (6) and a 5-year survival rate of 15% (4), with a median survival of 27.2 months; these rates were significantly higher than the corresponding survival rates of 65% (17), 19% (5), 3% (1), 3% (1) and 3% (1), with a median survival of 15.5 months, in the 26 patients with a below-the-median M1/M2 ratio (P = 0.001, P < 0.05) (Fig. 4, Table 2). To determine whether the macrophage density is independently associated with patients' survival time, multivariate Cox proportional hazards analysis was used. The extent of the tumour (T), spread to regional lymph nodes (N) and differentiation grade (G) were included in the multivariate analysis along with the M1 macrophage density and the M1/M2 ratio. In the multivariate analysis, we found that only the M1/M2 ratio was a positive independent predictor of patients' survival time (P = 0.001; hazard ratio 0.410) (Table 2). The M1 macrophage density had no statistically significant association with patients' survival time in the multivariate analysis (P > 0.05).

Discussion
Macrophages are one of the major populations of tumour-infiltrating immune cells. In most solid tumours, however, the existence of TAM is advantageous for tumour growth and metastasis [1]. It has been demonstrated that tumour escape is linked with a switch from M1 activation in the early tumour-initiation process towards an M2-like phenotype during tumour progression [22]. The M1 and M2 subsets differ in terms of phenotype and functions. M1 cells have high microbicidal activity, immunostimulatory functions and tumour cytotoxicity. On the other hand, M2 cells produce interleukin 10 (IL-10) and transforming growth factor β (TGF-β), leading to a suppression of general antitumour immune responses, promoting tumour neoangiogenesis through the secretion of pro-angiogenic factors and shaping the invasive microenvironment to facilitate tumour metastasis and dissemination [23].
At present, however, there is conflicting evidence regarding the role of the tumour macrophage infiltrate in influencing patients' survival. In many human neoplasms, including lung, breast, cervix, bladder, ovary and pancreas cancers, the presence of an extensive TAM infiltrate correlates with poor prognosis. In other tumours, including those of the brain and prostate, there is conflicting evidence regarding the role of macrophages in survival outcome [1,[24][25][26][27]. The basis for these conflicting data may be explained by considering that in these studies tumour-associated macrophages were detected only by the immunohistochemical analysis of CD68+ cells. As a matter of fact, CD68 expression is shared between the M1 and M2 phenotypes, and the use of CD68 as the sole marker cannot reliably evaluate the real impact of two subtypes with almost opposite biological properties. Only two studies have analysed the relationship between M1 and M2 macrophages and prognosis using double IHC staining to better characterize the two subsets of TAM. These reports demonstrated a significant direct correlation between the M1 phenotype infiltrate and the survival time of patients affected by non-small-cell lung cancer (NSCLC) [20,21]. In our study, we used double-staining for CD68/NOS2 as markers for M1 macrophages and CD68/CD163 as markers for M2 macrophages, in accordance with most of the previously published studies that performed a phenotypic characterization of macrophage polarization [20,25,[28][29][30][31][32][33][34]. The haemoglobin scavenger receptor, CD163, is expressed on tissue macrophages and monocytes after M2 polarization [35]. Conversely, macrophages M1-polarized by exposure to interferon (IFN)-γ or LPS up-regulate inducible nitric oxide synthase (iNOS) to produce nitric oxide which, combining with oxygen radicals, leads to the formation of cytotoxic peroxynitrite [36]. These markers are not absolutely specific: for example, CD68 has been found in immature CD1a-positive dendritic cells [37,38], CD163 is also expressed in some dendritic cells [39], and iNOS is expressed by endothelial cells [40] as well as by arterial-wall smooth muscle cells [41]. For these reasons, we paid particular attention to cell morphology to minimize these potential biases: to this end we used DAPI as a nuclear morphological marker and counted as positive only nucleated cells, avoiding the possibility of counting the same cell more than once. At present, gastric cancer has a poor overall survival at 5 years. The therapeutic approach is multidisciplinary, and surgery plays a central role [42], even if the overall survival after radical surgery remains poor. Only 29% of patients undergoing a D2 lymphadenectomy and 21% of those undergoing a D1 lymphadenectomy are alive at 15 years from surgery. Moreover, radical surgery in gastric cancer is often associated with important comorbidities [43]. The discovery of new and reliable biomarkers able to select the patients who really benefit from surgery is an urgent need in clinical practice. Currently, the only proven prognostic indicators are clinicopathological factors such as age, sex, gastric wall infiltration, locoregional nodal involvement, Lauren's histology and margins. Furthermore, in recent years growing attention has been paid to new molecular prognostic markers related to cancer cell biology, while few data are available about the role of the tumour microenvironment in predicting patients' outcome.
Two recent works demonstrated that gastric cancer patients with a high TAM count showed a poorer surgical outcome than those with a low TAM count [16,44]. In contrast, this is the first study investigating the two forms of the macrophage population in gastric cancer. We demonstrated that the M1 macrophage density is a prognostic factor in univariate analysis, in accordance with previous reports on the association between M1 infiltrate and survival time in patients affected by non-small-cell lung cancer [20,21]. More interestingly, in our experience, only the M1/M2 ratio was an independent prognostic factor. Therefore, the cellular and molecular interactions between the M1 and M2 populations appear to play a major role in determining the prognosis of gastric cancer patients. These data support future, larger, prospective confirmatory studies investigating the prognostic role of TAM polarization in radically resected gastric cancer, with the aim of discriminating the patients who could benefit the most from surgery. Finally, these results highlight new therapeutic horizons involving strategies that aim to reverse the TAM phenotype in gastric cancer.
2016-05-02T06:50:51.340Z
2013-11-01T00:00:00.000
{ "year": 2013, "sha1": "f330a844c45530632f0fafb5f8f4fcebf6927112", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1111/jcmm.12109", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f330a844c45530632f0fafb5f8f4fcebf6927112", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261143030
pes2o/s2orc
v3-fos-license
ROLE OF PREDATION AND ABUNDANCE OF BIOLOGICAL CONTROL AGENTS (ORDO HEMIPTERA, FAMILY REDUVIIDAE) AT SUBANG PALM OIL PLANTATION EXPERIMENT: The order Hemiptera, family Reduviidae, has an important role in an ecosystem, namely as a natural enemy (predator) of insects, of the hemipteran group itself and of other arthropods. This study aims to determine the role of predation and the abundance of natural enemies of the order Hemiptera, family Reduviidae, on oil palm plantations. The research was carried out in the area of a productive oil palm plantation in the village of Karapyak, Cintamekar Serangpanjang, Subang, Jawa Barat. This is an exploratory study conducted from September 2022 to mid-December 2022. Sampling was carried out using the letter-'S' transect method with hand picking assisted by fishing nets. Identification was carried out in the laboratory of the Kampus Politeknik Kelapa Sawit Citra Widya Edukasi, Cibuntu, Cibitung, Bekasi. Data are presented in the form of tables and plot design drawings in the field, as well as images of the sample species in the field. The abundance of species of the order Hemiptera, family Reduviidae, was analyzed using the Shannon-Wiener index, and evenness was calculated according to the Pielou formula; the role of natural enemies was determined based on the behaviour description and mouthpart type of the order Hemiptera, family Reduviidae. The study yielded 37 individuals belonging to 7 genera. The diversity index (H') of the fauna of the order Hemiptera, family Reduviidae, on productive oil palm plantations is 0.59, which falls in the low category. The evenness index (E') of the fauna of the order Hemiptera, family Reduviidae, on productive oil palm plantations is 0.39, which indicates a depressed (stressed) community.

INTRODUCTION
Sustainable natural resources, and sustainable palm oil in particular, are among the most important resources in a tropical country such as ours. The oil palm plantation agribusiness industry contributes greatly to regional development as an important means of raising the level of the economy through cultivation and the processing of its downstream sector (Sudrajat, 2020). According to Purba & Sipayung (2018), from this area Indonesia is capable of producing 42,883,631 tons of palm oil per year. Such a large level of production comes from the oil palm plantations of agro-industrial companies, state plantations and smallholder plantations. The extent of the existing oil palm plantation area is able to represent, as a whole, several existing types of ecosystems, especially perennial plantation ecosystems. Indonesia, as a strategic place for the cultivation of oil palm, has a wealth of biodiversity that ranks 2nd in the world after Brazil. According to Najib (2014), most of the diversity in Indonesia is dominated by insects when compared to other animals. The disturbing insects that exist in the oil palm plantation area are cosmopolitan fauna that greatly suppress FFB (fresh fruit bunch) production in various ecosystems. Insects dominate terrestrial ecosystems because of their high adaptability. The high diversity and adaptability of the order Hemiptera to environmental change allow the species and populations of the family Reduviidae to adapt and thrive in oil palm plantation areas.
The order Hemiptera, family Reduviidae, influences the balance of ecosystems, so it is very important to pay attention to and examine its essential role as a balancer of population levels in an ecosystem. This is emphasized by Price et al. (2011), who stated that the faunal group of the order Hemiptera, family Reduviidae, besides its role in maintaining the balance of the ecosystem, also acts as a biological control agent. Aside from being a counterweight to other organisms, the order Hemiptera, family Reduviidae, is also a component of biodiversity in the ecosystem chain of a plantation. So far, there have not been many research studies on the faunal diversity of the order Hemiptera, family Reduviidae, in oil palm plantations. It is therefore necessary to study the role of predation and the abundance of biological agents of the order Hemiptera, family Reduviidae, in the Kebun Percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat. The objectives of this research are: 1. To identify the fauna of the order Hemiptera, family Reduviidae, found in the oil palm plantations at the Kebun Percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat. 2. To determine the abundance of the fauna of the order Hemiptera, family Reduviidae, found there. 3. To determine the role as natural enemies of the fauna of the order Hemiptera, family Reduviidae, found there.

METHODS
This research was carried out over 3.5 months, from May to mid-September 2021. The implementation and sampling stages were carried out at the Kebun Percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat, and the samples were then analyzed at the laboratory of the Kampus Politeknik Kelapa Sawit Citra Widya Edukasi. The key of Borror et al. (1996) was used to identify the insect samples found, and the collected samples were preserved. Research implementation: to make it easier to determine the observation location, the research site was selected based on various considerations such as time, distance and cost. The main consideration was the sustainability of the technical management of cultivation at the oil palm plantation in the Subang experimental garden, Jawa Barat; this garden has been cultivated in a sustainable manner since 2012. On this basis, the research location was laid out using a letter-'S' transect 300 m long and 100 m wide. Along the 'S' transect, 5 square plots were installed, each 20 m per side. Each plot has 5 circular sampling areas with a diameter of 7 m. Sampling of the fauna of the order Hemiptera, family Reduviidae, was carried out by collecting specimens that met the biological requirements within the predetermined plot areas at the Politeknik Kelapa Sawit Citra Widya Edukasi Subang. The collection of faunal samples of the order Hemiptera, family Reduviidae, was carried out using the hand-picking method assisted by insect nets.
The layout/scheme for the placement of the letter-'S' sampling transect (300 m long and 100 m wide) and the installation of the square plots (20 m per side) are shown in Figure 1.

Identification of Samples of the Order Hemiptera of the Family Reduviidae
Sample identification is based on morphological characteristics; physically, members of the order Hemiptera, family Reduviidae, have a body composed of three main regions: a. the front part of the body, called the head (caput); b. the middle segment, called the thorax; c. the back part, called the abdomen. Samples found in the field that fulfilled the requirements were grouped according to sampling location, preserved in 60% alcohol, and brought to the campus laboratory to be determined and identified by observing their outer shape (morphology). Determination was carried out using a microscope, a loupe and the help of a flashlight. Identification was carried out using the sixth edition of the book An Introduction to the Study of Insects (Borror et al., 1996).

Data analysis
The species of the order Hemiptera, family Reduviidae, that were obtained were analyzed qualitatively and descriptively and displayed in the form of graphs, tables and photos, while the species counts were analyzed using the Shannon-Wiener diversity index (as cited in Pelawi, 2009): H′ = −Σᵢ pᵢ ln pᵢ, where pᵢ = nᵢ/N is the proportion of individuals of the i-th species (nᵢ) out of the total number of individuals (N). Assessment criteria based on species diversity: H′ ≤ 1: low diversity; 1 < H′ ≤ 3: moderate diversity; H′ > 3: high diversity. Diversity covers two main things, namely variation in the number of species and in the number of individuals of each species in an area. If the number of species and the variation in the number of individuals of each species are relatively small, this indicates an ecosystem imbalance caused by disturbance or pressure (Jumar, 2000). According to Pelawi (2009), a community is said to have high species diversity if it is composed of many species with the same, or nearly the same, abundance. Conversely, if a community is composed of very few species, and if only a few species are dominant, the species diversity is low. High diversity indicates that a community has high complexity, because such a community also has many species interactions. Thus, in a community with high species diversity, species interactions involving energy transfer (food webs), predation, competition and niche partitioning will theoretically be more complex. The evenness index (E) is determined by the following formula (Barbour et al., 1987): E = H′ / ln S, where S is the number of species observed. This index describes the average distribution of individuals among the species of organisms that make up the community. Assessment criteria based on evenness: E′ < 0.50: the community is in a state of stress; 0.50 < E′ ≤ 0.75: the community is in an unstable condition; 0.75 < E′ ≤ 1.00: the community is in a stable condition.

A. Fauna of the Order Hemiptera, Family Reduviidae
The faunal community diversity of the order Hemiptera, family Reduviidae, in a particular habitat is closely related to environmental factors and also to agro-ecosystems (Tindall, 2004). In addition, the order Hemiptera, family Reduviidae, is a natural enemy that acts as a predator, with a function related to controlling insect populations, especially lepidopteran groups, and their regulation therein.
(Yuliadhi & Pudjianto, 2015) stated that the order Hemiptera family reduviidae has a role in stabilizing the ecosystem, including in the ecosystem of a monoculture plantation such as an oil palm plantation. Differences in the type of land in a plantation will shape the vegetation structure and ecological functions which are of course specific and can affect the community structure of the abundance of the order Hemiptera, family Reduviidae. (Sahid & Natawigena, 2018) The abundance of fauna of the order Hemiptera of the reduviidae family is often used as an indicator of ecosystem stability because it acts as a natural enemy (predator) for arthropod groups and their presence is related to the structure and composition of vegetation in a certain area and the level of damage to the ecosystem. The aims of the study were (1) to determine the fauna of the order Hemiptera family reduviidae found on oil palm plantations at the kebun percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat, (2) to determine the fauna diversity of the order hemiptera family reduviidae found on kebun percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat, and (3) to find out the role of biological control agents in the order hemiptera fauna of the reduviidae family found in perkebunan kelapa sawit kebun percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat. Fauna of the order Hemiptera family reduviidae found in the kebun percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat will go through a search process in a predetermined sample area, then be explored and identified. The results of exploration and identification of several species of the Araneae order in the experimental garden area of the Citra Widya Edukasi Oil Palm Polytechnic, Subang, West Java, resulted in 37 individuals belonging to 7 genera, namely: Rhinocoris, Cosmolestes, Zelus, Arilus, Sycanus, Rhiginia dan Apiomerus. Types of fauna of the order Hemiptera family reduviidae based on the role of biological control agents as predators found in the kebun percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat were then taken and put in plastic bags. The results of the identification of the types of the order hemiptera family reduviidae based on their role as predators found in the Citra Widya Edukasi Oil Palm Polytechnic experimental garden which were found can be seen in Table 3. B. Diversity Index (H') and Evenness (E) The results of the calculation of the index of diversity and evenness of the order hemiptera of the reduviidae family found in the kebun percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat are in Table 4. Table 4 shows that the diversity index (H') type ordo hemiptera family reduviidae at Kebun Percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat is 0,59. This shows that the criteria for species diversity of ordo hemiptera family reduviidae at kebun kelapa sawit included in the low category (H´ ≤ 1). The three criteria for species diversity index values are, if H' < 1 means diversity is low, if H' = 1-3 means that the diversity is moderate, if H` > 3 means diversity is high (Pelawi, 2009). 
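The index calculations above are simple enough to verify directly; the sketch below computes H′ (natural-log Shannon-Wiener) and Pielou's E from raw counts. The per-genus counts used in the example are hypothetical, since they are not tabulated here.

```python
# Minimal sketch of the Shannon-Wiener diversity (H') and Pielou evenness
# (E) computations described above; the per-genus counts are hypothetical.
import math

def shannon_pielou(counts):
    n = sum(counts)
    ps = [c / n for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in ps)            # H' = -sum p_i ln p_i
    e = h / math.log(len(ps)) if len(ps) > 1 else 0  # E  = H' / ln S
    return h, e

# 37 individuals spread over 7 genera (illustrative counts only):
h, e = shannon_pielou([22, 6, 3, 2, 2, 1, 1])
print(f"H' = {h:.2f}, E = {e:.2f}")
```

Note that the numerical scale of H′ depends on the base of the logarithm (natural log here), so reported values are only comparable across studies that use the same base.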
The diversity index at the Kebun Percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat, falls in the low category. This is because the experimental garden is a plantation ecosystem, and the oil palm plantation community is a monoculture cultivation system. The routine maintenance of the experimental garden area also certainly affects colony formation in the fauna of the order Hemiptera, family Reduviidae. This is supported by Buchori (2014), who states that natural ecosystems have high diversity compared to oil palm plantation ecosystems. The diversity index tends to be high in older communities and tends to be low in communities where there are frequent human activities and clearing of cultivation areas. Each of the 7 genera found was represented by a varying number of individuals; these varying numbers cause the value of the genus diversity index to vary as well. The diversity index increases along with the evenness of the species abundance. From an ecological point of view, the number of species in a community is important because species diversity appears to increase when the community is stable and undisturbed. Species diversity is a characteristic level in a community based on its biological organization, which can be used to describe the structure of the community (Tarumingkeng, 2009). A community is said to have high diversity if it is composed of many species with the same, or almost the same, abundance. Conversely, if a community is composed of a few species, and if only a few species are dominant, the species diversity is low. The calculated evenness index (E') of the order Hemiptera, family Reduviidae, at the Kebun Percobaan Politeknik Kelapa Sawit Citra Widya Edukasi, Subang, Jawa Barat, is 0.39, which shows that the evenness of the genera in the oil palm plantation corresponds to a depressed (stressed) condition. According to Pelawi (2009), there are three criteria for a community based on the value of evenness: when E′ < 0.50, the community is in a state of stress; when 0.50 < E′ ≤ 0.75, the community is in an unstable condition; and when 0.75 < E′ ≤ 1.00, the community is in a stable condition. The evenness index value (E') can describe the stability of a community. The smaller the value of E′ (closer to zero), the more unequal the distribution of organisms in the community, which is then dominated by certain types; conversely, the greater the value of E′ (closer to one), the more evenly the organisms in the community are spread.

C. The Role of Biological Control Agents
Within the order Hemiptera, which plays an ecological role as a biological control agent, members can become natural enemies of insect pests; one such group comes from the family Reduviidae (Shanker et al., 2016). Zelus renardii is a common species found in oil palm plantations. This species is a natural enemy of oil palm pests; the pests that usually fall prey to it belong to the fire caterpillar group. This type of caterpillar is commonly found in the undergrowth of oil palm plantations. The general characteristics of this species are a body dominated by brown, an abdomen that merges with the thorax and looks small, and a female imago with a brighter colour than the male imago (Yaherwandi & Diratika, 2020).
Reduviidae is a family of the order Hemiptera in which all members act as natural enemies, especially as predatory insects. Reduviids are polyphagous and can prey on more than one species of prey. The reduviids of the order Hemiptera are capable predators that suppress insect pest populations on various types of plants, such as oil palm and other cultivated crops. This family preys on the larvae that destroy plant leaves, especially the oil palm leaf-eating caterpillars (UPDKS). For example, R. fuscipes of the family Reduviidae can prey on armyworm pests. This biological control agent is very quiet and sluggish in its behavior compared with the plant-sucking bugs. The predator R. fuscipes frequently stops and then takes up a stance waiting for its prey to pass, like the praying mantis, which is endowed with modified front legs to catch prey (Sahid et al., 2018). When the prey comes close enough, the front legs of R. fuscipes shoot forward as fast as lightning and the prey is already in its grip. After the prey is gripped, the predator R. fuscipes slowly jabs its needle-like mouthpart (stylet) into the body of the prey at a soft spot between its body segments (Farehan et al., 2013).

Another example from the family Reduviidae is Sycanus annulicornis, likewise classified in the order Hemiptera, family Reduviidae (Liu et al., 2012). S. annulicornis is one of the potential predator groups in the field. Jamjanya et al. (2014) stated that this predator is able to live in a wide range of agro-ecosystems, in food crops, vegetables, and plantations alike, and has a wide prey range, especially from the order Lepidoptera. S. annulicornis lays its eggs in groups, forming elongated egg packets. The eggs are oblong, brown in color, and are laid in packages arranged in several rows. The eggs are coated with a liquid that glues them together into packages and also glues the packages to the substrate surface. The coating additionally protects the eggs from attack by disturbers and other natural enemies (Sahid & Natawigena, 2018).

D. Utilization of Biological Control Agents as Part of Integrated Pest Management

The concept of sustainability carries the policy recognition that the use of chemicals to control pests in oil palm plantation areas is risky and can also be a threat to biodiversity. Therefore, many plantation companies are beginning to adopt environmentally friendly development methods and approaches that integrate pest monitoring and control through a practical Integrated Pest Management (IPM) approach.

Figure 5. Newly hatched nymphs of Sycanus annulicornis already actively search for prey (Source: personal documentation, 2022).
If a prey item is accepted by a predator, the predator will continue to eat that prey to support its own growth, development, and reproduction. If the prey is not suitable, however, the reaction differs from predator to predator: (1) the predator immediately regurgitates the prey; (2) the predator dies instantly because of toxins in the prey; or (3) the predator survives but grows and develops very slowly, and if it manages to reach the imago stage its life span will be short and its fecundity and fertility very low (Ambrose et al., 2007). The IPM system implemented combines natural control, biological control, and technical (biological and chemical) control (Maredia et al., 2003). Technical control is used as a last resort when natural and biological control can no longer suppress pest populations significantly. Natural and biological control relies on natural enemies (predators, parasitoids, and entomopathogens), which are able to suppress pest populations naturally and reduce the risk of environmental damage from pesticide use (Buchori, 2014).

CONCLUSION

Based on the research conducted in the oil palm plantation at the Citra Widya Edukasi Oil Palm Polytechnic, Subang, West Java, it can be concluded that a total of 37 individuals belonging to the order Hemiptera, family Reduviidae, were found, distributed over 7 genera, namely Rhinocoris, Cosmolestes, Zelus, Arilus, Sycanus, Rhiginia, and Apiomerus. The diversity index (H') of the fauna of the order Hemiptera, family Reduviidae, in the oil palm plantation area is 0.59, which is classified as low diversity, while the
“Opus trium deorum”: A Note on the Origin of Annotationes ex Scriptis Karoli Episcopi Arosiensis

imitate Adam of Bremen, whose text belonged to the central ones in the controversy. The invitation of famous scholars from the continent was an important contribution to the rise of Swedish education and scholarship in the 17th century. However, most of the foreign guests stayed there for just a few months or years, earning their renown outside Sweden. There are just two prominent exceptions, namely the historian Johannes Loccenius (1598-1677), who, being born in Holstein, was active in Sweden for half a century, and his son-in-law, Johannes Schefferus (1621-1679), from Strasburg. The final episode in the career of the latter is what I am going to deal with in this article. Schefferus got an impressive education in his home town, and upon entering the university, was sheltered by the prominent historian Johann Boeckler. Still, despite some early academic merits (in particular, a monograph on ancient shipbuilding), he did not manage to obtain a position at the university (Schefferus 1915, 13-17). In 1648, another scholar from Strasburg, Johannes Freinshemius, who was employed in Sweden as professor Skytteanus at Uppsala University, became Queen Christina's librarian, leaving his position vacant. Boeckler, with whom Freinshemius got in touch, recommended Schefferus for the professorship. So he undertook a long and fascinating journey to the North and visited several European centres of learning on his way, in particular Leiden and Sorø (Schefferus 1915, 17-18). After arriving in Uppsala, he stayed in Sweden for the rest of his life. Schefferus is regarded as the first Swedish classical philologist (although he founded no "school" around him). The position as professor Skytteanus implied teaching political eloquence. In the Early modern period this task was mainly connected with commenting on relevant classical authors. From Schefferus' autobiography we learn that during his first year in Uppsala he lectured on Pliny the Younger, next year on Livy, then on Pacatus and so on (Schefferus 1915, 21-22). At the same time he was actively working on editions of the classics; in particular, worth mentioning are his editio princeps of Pseudo-Mauricius (1664) and the edition of Trimalchio's Feast (1665) provided with an article in support of its authenticity. Schefferus was also interested in the history and culture of his new home country. Perhaps his most well-known book is Lapponia (1673), soon after the publication translated from Latin into several European languages. Memorabilia Sueticae gentis exempla (1671), modelled after Valerius Maximus, is an enjoyable reading until now. Svecia literata, published posthumously in 1680, was the first full-scale bibliographical reference work in Sweden and has not yet completely lost its value. Unfortunately, the interest in Swedish history together with a sober scholarly attitude to it made inevitable a confrontation with academic circles around Olof Rudbeck. The latter was in the process of preparing for publication a monumental treatise, Atlantica, where he argued that Swedish civilization must have been several thousand years old and identified it with Plato's Atlantis. The conflict with Rudbeck's close ally Olof Verelius (1618-1681) eventually gloomed Schefferus' last years. It was Schefferus' historical-topographical work Upsalia (1666) that triggered the quarrel.
Verelius, who is particularly known as a pioneer of scholarship on Old Norse literature, exposed some of Schefferus' claims to harsh criticism in a commentary to Hervarar saga, published in 1672. Verelius was especially unhappy with identifying the oldest settlement in Uppsala with the modern position of the city (formerly "Östra Aros"), not with the village of Gamla Upsala (i. e. "old Uppsala") several kilometers to the North. Schefferus considered the name of Gamla Upsala to have emerged as late as in the 13 th century and not to be paid credit to. Rudbeck, on the other hand, with his fanciful archeological methods was just on the way to find the traces of the heathen temple, mentioned by Adam of Bremen, in the fundament of Gamla Upsala church, and any doubts in this historical link, utterly important for the whole Rudbeckian theory, were to be confronted bitterly. (It should be noted that in the matter of fact Verelius was ultimately right. Although it is still highly questionable whether the heathen temple was situated precisely on the spot of Gamla Upsala church and, moreover, whether it existed at all, it is now commonly accepted in historical scholarship that Gamla Upsala was the original settlement, considerably older than what now is Uppsala. Schefferus relied on his medieval sources too much.) Several polemical writings were printed in the discussion between Schefferus and Verelius in the 1670s. Finally, in the spring of 1677, Magnus Gabriel De la Gardie, the Chancellor of the Realm, issued an interdict against further replies in the quarrel, an interdict that was rather favourable for Schefferus, who was the last to answer. Nevertheless, just a year later a remarkable document that could be decisive in the discussion fell into the hands of Verelius -according to his own testimony, he received it from Rudbeck. It was not forbidden to publish historical sources, and the document was printed. Let us quote it in full: Annotationes ex scriptis Karoli Episcopi Arosiensis excerptae. Ex MS. o membraneo vetusto nunc primum in lucem prolatae. MCXXX. Nicolaus The edition was provided with an introduction (containing renewed attacks against Schefferus) and a short commentary. Some days afterwards a second edition appeared, this time without an introduction, but with an affirmation of the print's exact correspondence to the manuscript, signed by professors Andreas Norcopensis and Johan Gartman (both were allies of Rudbeck as well) and a report of some German soldiers who, led by Rudbeck, had inspected the basis of Gamla Upsala church and claimed to have seen clear traces of the former heathen temple. Just weeks later Schefferus published his critical review of the Annotationes, entitled De excerptis annotationibus ex scriptis Caroli episc. Arosiensis per adversarios expressum judicium. This analysis is a true masterpiece of Early Modern textual criticism, quite unique for Sweden in particular. In the preface Schefferus laments the need to take part in a lengthy emotional quarrel despite his multiple merits for Swedish scholarship -and to do it now, when he is aged and sickly. Still, both Verelius' preface to the Annotationes and their very contents make it necessary to continue the discussion, albeit "nullo scribendi pruritu". He promises to confine himself to the analysis of the text, putting aside his personal wounds (Schefferus 1678, 7). Schefferus' arguments against the authenticity of Annotationes can be divided into several groups. 
(1) The circumstances of the publication The document emerged with the help of Verelius' own sympathizers and appeared at the moment when it was most needed. Its contents make one think that the medieval bishop or at any rate the author of the excerpts had foreknown the discussion of Schefferus and Verelius and thus constructed his very short "chronicle" of the events connected with the foundation and the transfer of the churches in Uppsala, without forgetting to mention the heathen temple (Schefferus 1678, 11-12, 18). Nobody had had any notion of bishop Charles' writings before, although it is hardly believable that Rudbeck's father Johannes Rudbeckius, a bishop of Västerås himself, could have missed such an important historical source preserved in his own library. Finally, the nation-wide ordinance to look for medieval historical documents and to send them to the Collegium Antiquitatum in Stockholm had been issued a decade before, in 1667 -but Rudbeck made no haste to find the manuscript, and Verelius chose for some reason to publish it himself instead of submitting it to the expert evaluation (Schefferus 1678, 12-14). (2) Literary details The plural 'scriptis' in the title looks strange. Firstly, the contents of the Annotationes rather suggest that they had been based on one single work. Secondly, had bishop Charles been such a prolific author as to ascribe to him a series of works, he would hardly have been as completely forgotten as he was (Schefferus 1678, 15-16). And isn't that awkward, the bishop of Västerås writing exclusively about Uppsala? Verelius himself is aware of this difficulty in his commentary and suggests that the one who made excerpts was from Uppsala -but why, then, the existence of the writings had been ignored in Uppsala just like elsewhere for centuries? Besides, it is difficult to understand why a certain person decided to make excerpts of a chronicle, which, so far one can judge from a comparison with other works from the period, must itself have been utterly concise (Schefferus 1678, 14). On the other hand, a series of exact dates raises suspicions: "Nimirum rem noverunt omnes receptam apud veteres, consignare breviter vocabula hominum et res gestas, annos atque tempus silentio praeterire. Quod nisi hospes in Antiquitatibus Svecorum negare nemo poterit" (Schefferus 1678, 22). The last record in the Annotationes is noteworthy, as bishop Charles turns out to quote himself, almost verbatim repeating a sentence from a real medieval letter (Schefferus 1678, 34), published by Schefferus several years before (Schefferus 1673, 28). ( 3) Chronological details The dates given in the document diverge from or conflict with the commonly accepted ones on several occasions. The crassest example is the burial of Saint Eric in 1153, who actually died in 1160 (Schefferus 1678, 29). (4) Lexical details In the record under the year 1150 the Assumption of the Virgin Mary into Heaven is called ascensio, whereas the correct catholic term is assumptio: to ascend by his own force was a privilege of Christ. A medieval bishop could not fail to know that. The mistake is perhaps due to the Swedish language using the same word ('himmelsfärd') for both phenomena. Another strange detail is the reference to the heathen temple, which is both pleonastic ("Quid enim si paganicum, necesse fuit addere vetustum?") and lexically dubious (Schefferus 1678, 23-24; see the discussion below). 
(5) Paleographical details Schefferus informs us that he had got the opportunity to look at the manuscript, 1 although "brevissimo temporis spatio". It only confirmed his doubts (Schefferus 1678, 41). The manuscript is a sheet of approximately duodecimo size, torn apart from a Swedish-language manuscript of medical and botanical contents. The handwriting has not much in common with medieval Latin manuscripts and rather resembles Icelandic manuscripts donated to the Collegium Antiquitatum by M. G. De la Gardie (Schefferus 1678, 42). The ink is not pale enough to be several centuries old, but on the other hand it is diluted with water -"ad praeferendam vetustatem". The letter þ occurs in the manuscript, which is unthinkable for a medieval Latin text, and the form of the letter F imitates… the Gothic Codex argenteus, donated by De la Gardie to the Uppsala university in the 1660s. Finally, to fill a lacuna with a dash, as is done in the manuscript, is a modern habit, whereas the medieval scribes either do not signalize the lacuna at all or leave a space on its place (Schefferus 1678, 43). It should be admitted that not every argument by Schefferus taken alone is decisive, but the overall conclusion looks quite safe: Annotationes are a forgery, and of a poor quality. Still, its further history in the scholarship is remarkable. Verelius himself, accused in this way of publishing a forged document and even questioned on this matter by the university consistory, does not seem to have answered any of the objections in particular, only complaining about the calumny he had been exposed to. Later on (beginning with Klas Örnhiälm's Historia Sveonum Gothorumque ecclesiastica from 1689) voices has been raised both in favour and against the authenticity of the Annotationes (Kumlien 1967, 6-8). At the end of the 19 th century Claes Annerstedt, a famous specialist in the history of the Uppsala university, published an article on the quarrel between Schefferus and Verelius (Annerstedt 1891), and devoted considerable attention to the discussion of the Annotationes. Completely agreeing with Schefferus' arguments, Annerstedt adds some more, pointing at the inconsequent spelling of the proper names and at the dilettantish usage of medieval-styled abbreviations in the manuscript (Annerstedt 1891, 144-145). Nevertheless, in 1967 Kjell Kumlien published a monograph entitled Biskop Karl av Västerås och Uppsala ärkesätes flyttning, where he endorsed the authenticity of the Annotationes. Kumlien admits that the quality of the document does not allow to regard it as medieval (Kumlien 1967, 59), but does not consider it necessary to suppose a deliberate forgery. Kumlien suggests that bishop Charles indeed wrote some historical account about the church in Uppsala and this account fell into oblivion, but an unknown person in the 16 th or the 17 th century produced some excerpts of this account. It was these excerpts (of very poor quality) that fell into Rudbeck's and later Verelius' hands in the late 1670s. Kumlien blames Schefferus for his methodological near-sightedness, whereas Verelius and his later supporter Eric Benzelius are praised for a "more comparative" approach (Kumlien 1967, 65-66). Kumlien's hypothesis, presupposing a series of hardly plausible assumptions and principally denying a difference between "poor quality" and conscious stylization (see in particular Kumlien 1967, 44), is rather suitable as a plan for a postmodern novel than as a serious contribution to the textual history of Annotationes. 
Nevertheless, its conclusions were enthusiastically adopted by large parts of Swedish archeological community, bishop Charles was brought back to life and has become a frequent point of reference in scholarly literature (see a long list in Sävborg 2017, 66 nn. 10-12), penetrating as far as into Reallexikon der Germanischen Altertumskunde (Duczko 1998, 410). It took several decades before some surveys showing the lack of validity in Kumlien's arguments were published. Henrik Janson and Magnus Alkarp have demonstrated the incorrectness of the claim (going back to Eric Benzelius and of crucial importance for Kumlien's hypothesis) that the very existence of bishop Charles had been unknown at the time when Annotationes were published, and confirmed only later (Janson 2001, 48-50;Alkarp 2009, 204-208). The arguments adduced by Schefferus and Annerstedt are enriched by Janson with the words "S. Papae" in the last record. One can only explain them as "Sancti Papae", and such a way to refer to the Pope is unthinkable for a catholic. It can only have been invented by a person who wants to imitate a catholic but is chronologically situated too far from the Swedish Reformation to perceive the impossibility of such a phrase (Janson 2001, 59 n. 45). In another recent article, Daniel Sävborg analyses the Icelandisms in the Annotationes (Sävborg 2017, 76-78) and remarks that king Inge elsewhere is called Yggemundus (cf. the record under the year 1138) in one single source, namely one of the manuscripts of Hervarar saga -not the best one, but the very manuscript that was used by Verelius when preparing the edition of 1672 (Sävborg 2017, 74-75). The most relevant research area regarding Annotationes is, in my opinion, the question of the forger's 2 sources and way of working. Sävborg's article has been a significant contribution in this direction. Still, I suppose that there are more details to discover. As mentioned above, Schefferus was suspicious about the phrase "opus trium deorum" used to refer to the heathen temple (Schefferus 1678, 23-24): Kumlien rejects this remark as "uncomprehensible" ("obegriplig") and in an astonishingly ignorant way claims that it must be unproblematic to use the word 'opus' here, as it sometimes, according to what he found in a dictionary, means "building" (Kumlien 1967, 48). Of course, no examples are hereby adduced of 'opus' with a genitive of an animate noun that is not a subjective genitive. I have not managed to find such examples in classical or in later Latin either -so Schefferus seems to be (unsurprisingly) right: the expression is extremely odd. However, the choice of the forger here, unlucky as it is, turns out to be not a random one. Typically for a forgery, Annotationes are on a phrasal level sewn together from a series of sources that were to some extent relevant for their contents and chronologically close to the pretended date of their composition. We have seen that the forger has used a letter by a real bishop Charles and name forms from Icelandic literature. The ultimate source of knowledge about the heathen temple in Uppsala was Gesta Hammaburgensis ecclesiae pontificum by Adam of Bremen. He was present in the controversy between Schefferus and Verelius all the time, and it is surprising that nobody has so far drawn attention to its textual influence in the Annotationes. 3 The temple is not simply "opus trium deorum", it is "paganicum" as well. 
The adjective 'paganicus' is extremely rare compared to its synonyms 'paganus' and 'gentilis', and this is true not least of medieval Latin. One of the few authors who had it, so to say, in his active word-stock is precisely Adam of Bremen. He uses 'paganicus' three times, one of them in a context dealing with Swedish conditions. The same author, on a different occasion, writes about a fire of the St. Peter church in Bremen (II. 78): Is, conflagratione templi audita, mox pedem retorsit, iactisque sequenti aestate fundamentis, ad formam Coloniensis ecclesiae disposuit huius nostrae magnitudinem perducere. Et profecto credimus, si longiorem sibi vitam fato concesserint, omne opus ecclesiae finiturus erat paucis annis. It is here that we, in my opinion, can discern a source for "opus trium deorum". The author of Annotationes found a similar context (building up a church after a fire; note the expression "iactisque fundamentis" as well, a juncture used twice in the Annotationes) in a work he must have been well acquainted with 4 and replaced the genitive 'ecclesiae' with the genitive 'trium deorum'. The lack of lexical compatibility was hardly a detail that the forger paid attention to.
Novel Routing Method Using Slime Mold Algorithm Corresponding to Movement of Content Source in Content-Oriented Networks

Content-oriented networks are proposed as a novel network architecture in which routing is conducted using the content ID instead of an IP address. In content-oriented networks, there is a problem that users cannot discover contents when the mobile communication device that holds the contents moves about. Therefore, many studies have been undertaken to solve this problem. In conventional research, by applying ant colony optimization (ACO), Manome and Asaka tracked the movement of contents by adding a pheromone to the route of the mobile device itself. This method improved the discovery rate after movement compared with the case without the pheromone, indicating the effectiveness of their method. However, since the path searching by ACO depends on the movement behavior of the mobile device, the route to the contents takes a detour in some cases. To solve these problems, we used the true slime mold algorithm instead of ACO. In the simulation, we compared our results with those of the conventional ACO method to evaluate the performance of our proposed method.

Introduction

Recent years have been called an era of information overload: many users send and receive information in the form of written material and moving images through various services such as YouTube and SNS. As a result, most of the traffic flow on the Internet is occupied by the content we use. Internet contents are delivered on the basis of IP addresses, which include location information. Accordingly, the communication model is considered a location-oriented architecture, or client-server model. This communication model has various problems. For example, when too many users access a server, there is a risk of the server crashing. Content delivery networks (CDNs) [1] and peer-to-peer (P2P) [2] networks have been developed to avoid this. However, since these networks are still built on location-oriented networks, it is necessary to change the network architecture itself.

A content-oriented network (CON), in which content IDs are used for routing, has been studied. In a CON, each communication terminal holding content advertises its content information to all nodes by flooding. Each node creates a routing table based on this information. However, when the content moves, inappropriate routing is executed until the previous routing table has been rewritten, and it becomes impossible to obtain the content. Therefore, a routing control method using ACO to acquire the contents held by a mobile device was proposed [3]. In ACO, the path to the content is determined by a layered pheromone. A pheromone is added to each route, and each agent (ant) sent out from the node that requests the content searches for a route to the node that holds the content by using the pheromone information. By adding a pheromone to the route along which the mobile terminal passed, it becomes possible to establish a route even if the terminal moves. This method improved the discovery rate after the movement compared with the case where no pheromone was left, indicating the effectiveness of the method. However, since route searching by ACO depends on the movement behavior of the mobile device, there are problems such as detouring along the route to the contents and too many control packets.
In addition, the discovery rate decreases when the time to live (TTL) is low. To solve these problems, we use a slime mold algorithm instead of ACO for route searching. The slime mold algorithm is used to solve combinatorial optimization problems by exploiting ecological characteristics of slime molds. This algorithm has attractive features: it reliably finds the shortest route, the route search is quick, and its adaptability for re-establishing the shortest route after information is updated is high. Therefore, in this research, we propose a routing control method that follows the mobile content source by means of its own flow rate and conductivity on the basis of a CON. In the simulation, this proposed method is compared with the conventional method [4], and we show a comparative evaluation of its performance.

Conventional Method

In conventional research [3], by applying ACO, Manome and Asaka tracked the movement of contents by adding a pheromone to the route of the mobile device itself. ACO is based on the ant system proposed by Dorigo et al. [5]. This method consists of the renewal of the pheromone information on the links and the behavior of I-ants and D-ants. Here, we describe the procedure of the conventional method. Figure 1 shows the concept of the conventional method.

Firstly, users send I-ants from node 0, as shown in Fig. 1 (Step 1). The I-ants search for the objective contents by laying pheromones. The probability p_j of selecting a certain path from the present node i to the next node j is defined by Eq. (1), where τ_ij is the quantity of pheromones between nodes i and j, N is the number of ants sent from the request node, and L is the set of neighbor nodes that have not yet been passed. The ants do not select nodes that have already been passed. Secondly, if the I-ants discover the desired content, as shown in Fig. 1 (Step 2), the quantity of pheromone Δτ_ij is given by Eq. (2), in which λ is the number of hops: a predetermined quantity of pheromone S is laid on a shorter path and a smaller quantity is laid on a longer path. Thirdly, when N I-ants reach the content source, the content source sends D-ants back to the users, as shown in Fig. 1 (Step 3). The quantity of pheromones τ_ij is laid on each trail along which the N I-ants have passed, and the renewed quantity of pheromones τ_ij is given by Eq. (3). Then, when the mobile content source moves, the voluntary quantity of pheromones Δφ_ij that the content source itself lays on the trail from its prior location to its present one renews τ_ij. Finally, the pheromones between all the nodes evaporate at rate ρ every fixed time frame, as shown in Fig. 1 (Step 4); τ_ij is renewed according to Eq. (4), where ρ is the evaporation rate of the pheromones and is defined to be between 0 and 1. The paths become narrower because the pheromones on paths along which packets do not often pass evaporate.

The I-ants, D-ants, and "Interest" are deleted when their number of hops becomes greater than the TTL, which is the number of hops allowed from start to finish. After deletion, the next node cannot be selected when all the neighbor nodes have already been passed. A large quantity of pheromones is laid on a shorter path and a small quantity of pheromones is laid on a longer path. Then, the pheromones on the longer paths evaporate and gradually disappear through the repetition of Steps 1 to 4. Moreover, since the contents are detected after they move in Step 3, "Interest" can follow along the trajectory from their prior location to the present location.
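Since Eqs. (1)-(4) are only described in words in this excerpt, the following Python sketch assumes the standard ACO forms consistent with that description: selection probability proportional to pheromone, a deposit of S divided by the hop count, and exponential evaporation at rate ρ. All names are illustrative, not the authors' code.

```python
import random

def select_next_node(tau, i, visited):
    """Eq. (1)-style choice: pick neighbor j with probability tau[i][j] / sum."""
    candidates = [j for j in tau[i] if j not in visited]
    if not candidates:
        return None
    weights = [tau[i][j] for j in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

def deposit(tau, path, S):
    """Eq. (2)/(3)-style deposit: lay S / lambda on every link of a found path."""
    lam = len(path) - 1                 # number of hops to the content source
    for i, j in zip(path, path[1:]):
        tau[i][j] += S / lam

def evaporate(tau, rho):
    """Eq. (4)-style evaporation: tau_ij <- (1 - rho) * tau_ij each time frame."""
    for i in tau:
        for j in tau[i]:
            tau[i][j] *= (1.0 - rho)

# tau is a nested dict: tau[i][j] = pheromone on link i -> j, initialised to 1.0.
```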
Therefore, users can acquire the objective content sources using the shorter paths and detect them even after the content moves. This method led to an improved discovery rate after movement compared with the case of not leaving a pheromone, indicating the effectiveness of the method. However, since path searching by ACO depends on the movement behavior of the mobile device, the route to the contents takes a detour in some cases. In addition, this method has the problem that too many control packets are generated. Moreover, the discovery rate decreases when the TTL is low.

Proposed Method

Figure 2: Concept of the proposed method (the content source itself lays "conductivity" and "flow rate" on the route from its previous place to its present place).

To solve the above problems, we use a true slime mold algorithm [4] instead of ACO. Tero et al. reported that this algorithm can quickly find the shortest path and is highly adaptable to reconstructing the shortest path after movement information is updated. In this paper, we propose a route control method in a CON using the slime mold algorithm. The content source lays a flow rate and a conductivity along its moving trail (Fig. 2). This shortest-path search algorithm is called the Physarum solver. The conductivity and flow rate of each path are found by solving the flow-conservation equations of the Physarum solver. The largest flow rate Q among the connected nodes is used in the next route selection. When the content moves, the conductivity is conventionally initialized to 0 and the shortest-path search is performed again. In our method, by contrast, the previous conductivity is kept on each path as the initial value, so the shortest route can be found faster. In addition, when the content moves, both the flow rate Q_ij and the conductivity D_ij are updated: the initial value of the conductivity D0_ij is 1, and D^n_ij is then updated by Eq. (7). "Interest" is deleted when the number of hops exceeds the TTL. A large flow is assigned to a short path and a small flow to a long path; by repeating this, the flow rate on the long path gradually decays. Since "Interest" selects a route based on the flow rate, the content source is detected using Eq. (9) when the content moves. Therefore, users can acquire content sources using the shortest path and detect the sources even after the mobile device holding the content moves.
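The paper's Eqs. (5)-(9) are not reproduced in this excerpt, so the Python sketch below assumes the standard Physarum-solver formulation of Tero et al.: node pressures are obtained from a Kirchhoff-type linear system, flows follow Q_ij = D_ij (p_i - p_j) / L_ij, and the conductivity feeds on the absolute flow. Names, the network example, and the simple update rule are illustrative assumptions.

```python
import numpy as np

def physarum_step(D, L_len, source, sink, I0=1.0):
    """One Physarum-solver iteration on a network with link lengths L_len."""
    n = D.shape[0]
    # Link conductances g_ij = D_ij / L_ij on existing links, 0 elsewhere.
    g = np.where(L_len > 0, D / np.where(L_len > 0, L_len, 1.0), 0.0)
    # Kirchhoff (Laplacian) system for node pressures, with p[sink] fixed to 0.
    A = np.diag(g.sum(axis=1)) - g
    b = np.zeros(n)
    b[source] = I0
    b[sink] = -I0
    keep = [k for k in range(n) if k != sink]
    p = np.zeros(n)
    p[keep] = np.linalg.solve(A[np.ix_(keep, keep)], b[keep])
    Q = g * (p[:, None] - p[None, :])   # flow on each directed link
    D_next = np.abs(Q)                  # conductivity grows with flow, decays otherwise
    return Q, D_next

# Example: 4-node ring, user at node 0, content at node 2.
L_len = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
D = (L_len > 0).astype(float)           # D0_ij = 1 on existing links, as in the text
for _ in range(20):
    Q, D = physarum_step(D, L_len, source=0, sink=2)
```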
Simulation Results

We describe the simulation model and the results of evaluating the proposed method in this section. Table 1 shows the parameter values used in this simulation. The network model is shown in Fig. 3. Node 0 is the user node that sends out the flow and "Interest". First, we set Node 99 as the node holding the desired content. After 1000 units of time elapse, the content moves to Node 9. We compared the proposed method with the conventional method using ACO.

The results of a comparison of the discovery rate are shown in Fig. 4. The green line shows the time at which the content moves. In our proposed method, the discovery rate was 100%. In the conventional method, on the other hand, the discovery rate was not always 100% even when the correct route had been learned, and the discovery rate after movement was markedly low. This is because many ants could not find the content within the TTL. Figure 5 shows the number of hops for the two methods (conventional method vs. proposed method). In the proposed method, when the content moves, it is first acquired via a momentary detour; after that, the shortest path is learned very quickly. This is because route searching is unaffected by the previous flow rate. In the conventional method, it took a long time to learn the shortest route. This is because the content was discovered after detouring, by following the pheromone stored at each node.

Conclusion

In this paper, we proposed a routing information management method using the slime mold algorithm to track the movements of mobile content sources. In the simulation, we showed that the proposed method enables efficient retrieval even if the mobile content moves in a network. However, since only the movement of the mobile device holding the content was verified in this simulation, it is necessary to verify the method's validity in an environment where both the content-requesting device and all other nodes move. In addition, it is necessary to perform simulations with various parameter values and network models as future work.
Impact of Artificial Intelligence on Regional Green Development under China’s Environmental Decentralization System—Based on Spatial Durbin Model and Threshold Effect Artificial intelligence (AI) is the core technology of digital economy, which leads the transition to a sustainable economic growth approach under the Chinese-style environmentally decentralized system. In this paper, we first measured the green total factor productivity (GTFP) of 30 Chinese provinces from 2011 to 2020 using the super-efficiency slacks-based measure (SBM) model, analyzed the mechanism of the effect of AI on GTFP under the environmental decentralization regime, and secondly, empirically investigated the spatial evolution characteristics and the constraining effect of the impact of AI on GTFP using the spatial Durbin model (SDM) and the threshold regression model. The findings reveal: a U shape of the correlation of AI with GTFP; environmental decentralization acts as a positive moderator linking AI and GTFP; the Moran index demonstrates the spatial correlation of GTFP; under the constraint of technological innovation and regional absorptive capacity as threshold variables, the effect of AI over GTFP is U-shaped. This paper provides a useful reference for China to accelerate the formation of a digital-driven green economy development model. Introduction The work undertaken by Chung et al. (1997) was the initiative to incorporate undesirable outputs into the measurability of productivity for the first time [1], on which the notion of green total factor productivity (GTFP) was formulated. The current trend of global warming and resource constraint is becoming more and more serious, and the tension between Chinese economic progress, environmental conservation, as well as depletion of resources is becoming more and more prominent [2]. In October 2022, the 20th National Congress of the Communist Party of China reported that we should promote the high-end, intelligent, and green development of the manufacturing industry, promote green and low-carbon economic and social development, which is a key link to achieving high-quality development, improve the market-based allocation system of resource and environmental factors, and accelerate the research, development, and application of energy-saving and carbon-reducing advanced technologies. The access towards achieving sustainability of economic expansion lies in enhancing GTFP [3,4], which is a valid and comprehensive indicator of quality economic development. Artificial intelligence (AI) is often broadly described in terms of systems that accurately interpret exterior data, has the capability of learning from that data, in addition to using that acquisition to implement particular goals and missions by means of adaptive agility [5]. The Ministry of Science and Technology 2022 issued a "notice on supporting the construction of a new generation of artificial intelligence demonstration application scenarios" that requires giving full play to the role of artificial intelligence to empower economic and social development, and building a whole chain and process of artificial intelligence industry application ecology. 
The application of artificial intelligence in the chain makes it easier to diminish the overall influence over the environment by eliminating the need for excessive use of resources [6], and artificial intelligence drives the evolution of the economy from labor-and capital-intensive to technology-intensive [7], changing the way society and the economy, as a whole, operate [8]. Artificial intelligence can extract the value of production data, identify energy consumption bottlenecks, reduce production costs, and refine clean production management [9]. The greatest advantage of exploring the link between AI and GTFP is the combined impact of economic growth, energy efficiency, and the environment that can be derived [7]. With greenery having emerged as the foundation of China's current high-level construction [10], the Chinese government leads the way towards the strategic deployment of environmental protection, proposing to establish a government-led, social organizational and publicly engaged multi-governance, source-prevention environmental governance system [11]. As a part of environmental governance system reform [12], the scientifically sensible allocation of administrative power of environment across central and local governments [13] serves as the systematic basis for improving the efficiency of green development, while environmental decentralization determines the structure and utilization effectiveness of investment on environment pollutant management [14], so environmental decentralization plays an important role in influences exerted by AI technology on GTFP. Many studies have been conducted to test the implications of artificial intelligence as a powerful driver on green economic growth [15], green human resource management [6], technological innovation performance [8,16], and green finance [17]. found a remarkable "U-shaped" contribution of AI to GTFP and found that increasing AI levels in resource-rich areas improved GTFP [18]. The research on AI and government environmental governance systems is mostly explored in terms of the application of AI technologies in environmental modernization [19][20][21], ensuring that emerging technologies such as AI are fully utilized by building new frameworks, policies, and governance [22]. Current scholarship on GTFP still focuses on policy evaluation and production drivers [23]. Artificial intelligence is an important way to alleviate the current pressure on resources and the environment [24]. Although current research has widely discussed AI, however, there is still a lack of investigation into the implications of AI upon regional greener development in the context of an environmental decentralization system with Chinese characteristics. We are endeavoring to navigate the mysteries of China's peculiar environmental decentralization regime with respect to the connection with AI and GTFP. This paper, therefore, addresses the following questions and provides a marginal contribution to research in related fields: what is the impact of AI on regional green development under a Chinese-style environmental decentralization system? Is there heterogeneity in the impact of AI, environmental decentralization, and GTFP across provinces? Is the GTFP spatially correlated and is there a spatial spillover effect of AI on the GTFP? Do technological innovation and regional absorptive capacity pose constraints on the process by which AI affects GTFP? 
The answers to the above questions in this study are of guiding significance for better ecological and economic co-wins in the digital economy. In the current environment of decentralization of environmental management affairs between the central government and local governments in China, studying the integration of artificial intelligence and green total factor productivity provides a power source to promote highquality development and transformation and upgrading of China's economy, and relying on the integration of the two can form a new pattern of green development. The rest of this paper is organized as follows: Section 2 is a literature review, summarizing the current studies' status and shortcomings; Section 3 explains the study methodology of this paper, including model setting, variable description, and sources of data; Section 4 is the interpretation of the empirical test outcomes; and Section 5 outlines and generalizes the findings of the paper, along with policy recommendations accordingly. Literature Review GTFP is a new total factor productivity accounting system based on the traditional green total factor productivity by further adding resource consumption to the input factor and incorporating non-desired output represented by pollutant emissions into the output factor [25], which is then calculated as GTFP. GTFP is the residual value of output growth after deducting the growth of various factor inputs and emissions [26]. Current research regarding GTFP concentrates mostly upon the influence elements and measurement models, and input and output indicators are consistent or similar across scholars' regions [27,28]. Research on the factors influencing GTFP has focused on two perspectives [29], one exploring the impact of economic [30], technological [31], and resource [32] factors on the green development of national economies [33], and the other exploring the utility in shaping GTFP by government policies and legislative frames [34][35][36]. In terms of GTFP measurement, the dominant methodologies typically encountered are parametric, semi-parametric, and nonparametric processes [37]. To address the radial and directional bias in GTFP growth, the super slacks-based measure (SBM) model suggested by Tone [38] has become the most widespread tool in assessing green productivity growth [29]. Theories, methods, and techniques that help machines analyze, simulate, exploit, and explore human thought processes and behavior can be considered as artificial intelligence [39]. With the rise of Industry 4.0, the use of big data and artificial intelligence in creating value is becoming more and more widespread [40]. In their book Data Science Applied to Sustainability Analysis, Dunn and Balaprakash (2021) mention that data science and technology have become central to addressing sustainability challenges, and this role will only expand in the future [41]. In the era of Industry 4.0 artificial intelligence is important for fostering new competitive advantages, changing the energy consumption structure [42], promoting industrial transformation, as well as the advancement towards the middle and higher levels of the industrial value chain, which, in turn, exerts a significant influence upon sustaining the growth of China's economy [43] and is one of the core factors for the development of GTFP. 
Artificial intelligence can influence GTFP in three ways: improving resource utilization in the production process, improving pollution treatment and pollution control, and fostering green industries and promoting green energy development [44]. However, there are few studies on the association between AI and GTFP. Bai and Sun (2021) studied the impact on total factor carbon productivity from the overall perspective of Internet development (including Internet+, big data, cloud computing, AI, etc.) and found that Internet development significantly contributed to the enhancement of total factor carbon productivity [45]. Zhao's study (2022) is concerned with inclusive economic growth and focuses mainly on the analysis of the intrinsic drivers of inclusive economic growth [46]. Li and Song's study (2022) constructs a green evaluation system based on three dimensions, economic, environmental, and social, and investigates its intrinsic mechanisms [47]. Compared with the studies of Zhao (2022) and Li and Song (2022), this paper tends to study the extrinsic influences on GTFP. The study in [16] confirmed the U-shaped effect of AI on GTFP using a nonlinear dynamic panel regression model; it discussed the effect of regional resource heterogeneity but did not conduct an analysis of spatial spillover effects between regions. Lyu et al. (2022) confirmed the U-shaped relationship between the digital economy and GTFP, using a comprehensive indicator built from four dimensions, digital economy infrastructure, digital talent, digital application, and digital economy development environment, to measure the digital economy [7]. In this paper, we refine the research object and select only one of the digital economy infrastructure indicators to explore whether AI on its own has a different impact on GTFP. A substantial body of existing research concerns the mechanisms through which fiscal decentralization affects GTFP [48,49]. Song et al. (2020) studied the effect of decentralized fiscal authority on GTFP from a revenue perspective and an expenditure perspective, respectively [50]. Significant political heterogeneity arises from differences in political status and political treatment among regions, which gives regions with higher political status more power over resource allocation [51]. However, fiscal decentralization is not equivalent to environmental decentralization. Environmental decentralization refers to the rational distribution of environmental protection rights among all levels of government [52], with local governments choosing the type of environmental protection policy [53] that is appropriate to their local preferences and seeking the optimal allocation of environmental protection functions among all levels of government [54]. Fiscal decentralization emphasizes the attribution of economic rights between the central and local governments, while environmental decentralization emphasizes the delineation of environmental regulation rights [55]. There is also a lack of exploration of the implications of environmental decentralization for artificial intelligence. To summarize, the current exploration of the connections among AI, environmental decentralization, and GTFP by domestic and foreign scholars is still insufficient, and the mechanism linking AI and GTFP remains ambiguous, with the implications of China's environmental policies for AI being ignored.
Therefore, this article estimates the GTFP of 30 Chinese provinces from 2011 to 2020, adds the interaction term of environmental decentralization and AI to probe the moderating role of environmental decentralization, tests the heterogeneity across the eastern, central, and western regions of the country, adds an SDM to study the spatial correlation empirically, and uses a threshold regression model to study the threshold constraints imposed by technological innovation and regional absorptive capacity.

Equation Setting

To verify the effect of AI on GTFP, the regression equation for panel data given in Eq. (1) is developed. In Equation (1), AI is the core explanatory variable, standing for artificial intelligence. GTFP is the explained variable and signifies green total factor productivity. X is the vector of control variables, i and t indicate the i-th region and t-th period, respectively, μ_i serves as an individual fixed effect, λ_t serves as a time fixed effect, and ε_it is a random disturbance term. To verify the potential nonlinear effect of AI on GTFP, Equation (2) introduces the squared term of AI; AI_sq is the square of artificial intelligence. In order to further verify the moderating role ascribed to environmental decentralization in the correlation between AI and GTFP, this paper introduces the interaction terms of AI and of squared AI with environmental decentralization, namely AI*ED and AI_sq*ED, and Equation (3) is constructed. In Equation (3), ED is the moderating variable, representing environmental decentralization. AI*ED represents the interaction term between AI and ED, and AI_sq*ED represents the interaction term between AI_sq and ED.

The GTFP of a region may depend not merely on the level of AI within the locality, but also on the level of AI in its neighboring areas. Therefore, two spatial econometric equations, Eqs. (4) and (5), are set up to test the spatial spillover effects of AI and of the environmental decentralization moderating mechanism; Eq. (5) reads

GTFP_it = β W GTFP_it + δ_1 W AI_it + δ_2 W AI_sq_it + δ_3 ED_it + δ_4 W (AI_it * ED_it) + δ_5 W (AI_sq_it * ED_it) + δ_6 X_it + μ_i + λ_t + ε_it,   (5)

where W in Equations (4) and (5) is the spatial weight matrix. Following Hansen's (1999) study [56], we take technological innovation and regional absorptive capacity as threshold variables to detect the effects of AI on GTFP at different levels of technological innovation and regional absorptive capacity, respectively, and construct the threshold regressions of Equations (6) and (7), where I(·) is the indicator function and q is the threshold value.
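The bodies of Eqs. (1)-(3) and of the threshold specification are not reproduced in this excerpt. Based on the variable definitions above, a plausible rendering is the following sketch; the coefficient symbols and exact layout are assumptions, not the authors' notation.

```latex
% Plausible forms of Eqs. (1)-(3) and a single-threshold Eq. (6),
% reconstructed from the variable definitions in the text.
\begin{align}
GTFP_{it} &= \alpha_0 + \alpha_1 AI_{it} + \alpha_2 X_{it} + \mu_i + \lambda_t + \varepsilon_{it} \tag{1}\\
GTFP_{it} &= \alpha_0 + \alpha_1 AI_{it} + \alpha_2 AI\_sq_{it} + \alpha_3 X_{it}
             + \mu_i + \lambda_t + \varepsilon_{it} \tag{2}\\
GTFP_{it} &= \alpha_0 + \alpha_1 AI_{it} + \alpha_2 AI\_sq_{it} + \alpha_3 ED_{it}
             + \alpha_4 AI_{it} \times ED_{it} + \alpha_5 AI\_sq_{it} \times ED_{it}
             + \alpha_6 X_{it} + \mu_i + \lambda_t + \varepsilon_{it} \tag{3}\\
GTFP_{it} &= \theta_1 AI_{it}\, I(q_{it} \le \gamma) + \theta_2 AI_{it}\, I(q_{it} > \gamma)
             + \theta_3 X_{it} + \mu_i + \lambda_t + \varepsilon_{it} \tag{6}
\end{align}
```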
Variable Selection

Explained variable: green total factor productivity (GTFP). The study by Feng and Zhang (2017) measured GTFP using the weak directional distance function (W-DDF) model, the strong directional distance function (S-DDF) model, and the SBM model, respectively, and the results showed that the SBM model corresponds better to the true meaning of GTFP [57]. Since the super-efficient SBM model evaluates static efficiency rather than GTFP, the Malmquist-Luenberger index derived from the super-efficient SBM model is chosen for computing the GTFP [58]. Motivated by the foregoing analysis, this article draws on the non-radial super-efficient SBM-Malmquist-Luenberger method commonly used by previous scholars [59,60], incorporates energy consumption and environmental pollution into a unified analytical framework, and measures the GTFP of 30 Chinese provinces (excluding Tibet, Hong Kong, Macao, and Taiwan in view of data availability) from 2011 to 2020. Each province is treated as a decision-making unit (DMU), with a total of n DMUs. Each DMU has m input factors x_i (i = 1, 2, ..., m) and produces s_1 desired outputs y^d and s_2 undesired outputs y^b. The super-efficient SBM model solves the following program [58,59,61]:

ρ* = min [ (1/m) ∑_{i=1}^{m} x̄_i / x_ik ] / [ (1/(s_1+s_2)) ( ∑_{r=1}^{s_1} ȳ^d_r / y^d_rk + ∑_{t=1}^{s_2} ȳ^b_t / y^b_tk ) ].

The output indicators of GTFP include two parts, expected output and non-desired output; the input metrics comprise three parts, labor, capital, and energy inputs. The measurement method for each specific indicator is shown in Table 1.

Explanatory variable: artificial intelligence (AI). As the new technological revolution, with the new generation of information technology at its core, unfolds globally, it has triggered a new round of industrial change and a profound convergence of artificial intelligence and industry [45], which will affect regional GTFP. Following Borland and Coelli (2017) [64], this study uses the logarithm of the annual whole-society fixed asset investment in the information technology (IT), computer services, and software sectors in each province to capture the degree of regional AI development. Fixed asset investment measures, in monetary terms, the construction and acquisition of fixed assets completed within a given period and makes a decisive contribution to the formation and advancement of industries within a region [65]; the whole-society fixed asset investment in the IT, computer services, and software sectors therefore directly determines the extent of AI development within an area, and this study uses this indicator to represent AI. The squared term of AI is also included to verify the possible nonlinearity of the association between AI and GTFP.

Moderating variable: environmental decentralization (ED). Most earlier investigations gauge environmental decentralization indirectly through indicators such as dummy variables, the proportion of local autonomous regulations, or fiscal decentralization [13,66], which cannot precisely capture the substance of Chinese-style environmental decentralization. The allocation of staff between the central and local environmental protection systems, by contrast, can reflect the degree of environmental decentralization [14], so this paper draws on Zhang and Li (2022) [12] and Ran et al. (2020) [67] and uses the staffing of the central and local environmental protection systems as a measure of the sharing of environmental authority.

Threshold variables: technological innovation and regional absorptive capacity. Technological innovation (TI) is recognized as a crucial engine of the green economy, since it facilitates the emergence of innovative markets and enhances the efficiency of energy allocation while reducing pollutant emissions [68,69]; it is measured in this study by the logarithm of the number of domestic applications for the three types of patents received per 10,000 people in China.
Regional absorptive capacity (RAC) not only enhances the ability to identify, transform, and apply knowledge and technology in the region, but also promotes innovation capacity, which in turn promotes regional green development [70]. Research and development (R&D) expenditure is the core element of regional absorptive capacity [71], so this study uses the logarithm of the internal R&D expenditures of research and development institutions affiliated with local departments in each region.

Control variables. Industrial structure (IS): the secondary sector tends to be dominated by energy-intensive industries that release considerable quantities of CO2, in contrast to the high output and negligible CO2 emissions of the tertiary sector [72]; increasing the share of the tertiary sector relative to the secondary sector gives the industrial structure a more important role in boosting economic performance and reducing pollutants [73], contributing to the improvement of China's environmental performance. In this paper, the ratio of tertiary-sector to secondary-sector output is measured annually for each region. Foreign direct investment (FDI): the halo effect of FDI and its technological spillovers contribute to the improvement of environmental quality [74], so this article uses the logarithm of the actually utilized FDI in each year. Government intervention (GOV): fiscal decentralization gives local governments a moderate degree of fiscal independence and promotes the rational allocation of resources [4], but it can also lead to GDP-only local governments that generate negative environmental externalities [75] and thus act as a disincentive to GTFP. In this paper, we use the share of annual fiscal budget revenue in regional GDP for each region as the measure. Energy consumption structure (ECS): traditional fossil energy is a major energy source for China's economic development [76] and has a significant detrimental influence on the environment, so an energy consumption structure dominated by fossil fuels has a profound impact on GTFP. This paper measures the energy consumption structure as the proportion of annual electricity use in total energy use by region, following China's General Rules for Calculating Comprehensive Energy Consumption, GB/T 2589-2020. Population density (POP): the population density of each region reflects the degree of agglomeration of regional labor factors, which brings a scale effect to a certain extent and provides an engine for regional GTFP development. This article uses the number of people per unit of land area in each region in each year as the measure. Descriptive statistics for all variables are displayed in Table 2.
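Before turning to the results, a minimal Python sketch shows how a two-way fixed-effects specification with a quadratic AI term (in the spirit of Model 2 and Eq. (2)) could be estimated and how the U-shape turning point follows from the coefficients. The data file and column names are hypothetical; this is an illustration, not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per province-year with GTFP, AI, and controls.
df = pd.read_csv("panel_30_provinces_2011_2020.csv")

# Entity and time fixed effects via dummies; column names are assumptions.
formula = ("gtfp ~ ai + I(ai**2) + is_ratio + fdi + gov + ecs + pop"
           " + C(province) + C(year)")
fit = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["province"]})

b1 = fit.params["ai"]
b2 = fit.params["I(ai ** 2)"]
print("turning point of the U shape:", -b1 / (2 * b2))
```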
The squared term of AI was added in Model 2 to probe a potential nonlinear influence of AI on GTFP; the coefficient of AI is negative and the coefficient of AI squared is significantly positive, which indicates that the influence of AI on GTFP is nonlinear and may follow a U-shaped correlation. This article uses the utest [77] to corroborate this; the utest result shows p > |t| = 0.006, verifying that the influence of AI on GTFP is indeed a U-shaped association. Figure 1 reports the U-shaped relationship between GTFP and AI. This is in agreement with the conclusions reached by [18] and Lyu et al. (2022) [7]. The difference with that earlier study lies in the choice of econometric model: the coefficients of AI and its quadratic term are not significant in its fixed-effects model but are significant in its GMM model. The inflection point of the U-shaped relationship is calculated to be 4.50, which implies that AI promotes GTFP when AI is above 4.50 and acts as a disincentive when it is below 4.50. The possible reason for this phenomenon is that the input cost of AI is high in the early stage and GTFP cannot compensate for the production cost and, since there is a lag from capital investment to the utilization of new technology [78], AI plays a facilitating role for GTFP only after the inflection point. Most provinces (except Shanxi, Liaoning, Hainan, Gansu, and Ningxia) exceed the inflection point value in 2020 and are in the right-hand phase of the U-shaped curve.
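A sketch of the quadratic specification and of the logic behind the utest (a Lind and Mehlum style check) follows; it reuses the hypothetical panel DataFrame from the previous sketch and reproduces only the turning point and the slope signs at the sample extremes, not the Stata routine itself.

```python
import statsmodels.formula.api as smf

panel["ai_sq"] = panel["ai"] ** 2
model_2 = smf.ols(
    "gtfp ~ ai + ai_sq + is_ + fdi + gov + ecs + pop + C(province) + C(year)",
    data=panel,
).fit()

b1, b2 = model_2.params["ai"], model_2.params["ai_sq"]
turning_point = -b1 / (2 * b2)                      # reported as 4.50 in the text

# A U shape requires a negative slope at the lower extreme of AI and a positive
# slope at the upper extreme, with the turning point inside the data range.
slope_low = b1 + 2 * b2 * panel["ai"].min()
slope_high = b1 + 2 * b2 * panel["ai"].max()
u_shaped = slope_low < 0 < slope_high and panel["ai"].min() < turning_point < panel["ai"].max()
print(turning_point, u_shaped)
```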
Analysis of Moderating Effects
Lyu et al. (2022) conducted a mechanism test on the digital economy and GTFP and found that the impact of the digital economy on GTFP is mainly realized through factor market distortion and industrial structure upgrading [7]. In this paper, we choose another perspective for the mechanism test, namely the moderating effect of environmental decentralization. Model 3 in Table 3 reports that the interaction term for environmental decentralization is significant at the 1% level, signifying that environmental decentralization has a moderating effect on the influence of artificial intelligence on GTFP. The U-shaped relationship with moderating effects is further analyzed by drawing on Haans and Pieters (2016) [79] in order to clarify the moderating effects. The coefficient of the interaction term between the square of AI and environmental decentralization in Model 3 is significantly positive at the 1% level, indicating that environmental decentralization exerts a positive moderating effect on the U-shaped correlation between AI and GTFP, and the relationship between AI and GTFP tends to steepen as the degree of environmental decentralization increases. This shows that the environmental decentralization system gives local governments more abundant resources to govern the environment and to increase innovation output, which leads to a closer relationship between AI and GTFP. The most interesting point is that the signs of the coefficients of both AI and its squared term become inverted after adding the moderating effect, and the curve shifts from a U shape to an inverted U shape under the moderation of environmental decentralization, as can be seen in Figure 2. This is further explained by the fact that the inverse U-shaped correlation linking AI to GTFP is significantly strengthened in the case of high environmental decentralization. Referring to Haans and Pieters (2016) [79], the flip of the U-shaped relationship may be accounted for by the following mechanism: as the interaction between the strength of environmental decentralization and the intensity of AI is affected by the rise in technological dynamism, the U-shaped curve gradually bends toward the middle to the point where its curvature exceeds the curvature of the cost curve, so that the relationship changes from U-shaped to inverse U-shaped. After adding the moderating function of environmental decentralization, the inflection point of the inverse U-shaped correlation between AI and GTFP becomes 4.58; the inflection point of the inverted U-shaped curve relocates to the right, which coincides with the tendency of the curve change in Figure 2.
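Reading the moderation in the Haans and Pieters (2016) manner can be sketched as below: with gtfp = b1·ai + b2·ai² + b3·ai·ed + b4·ai²·ed + controls, the turning point at a given decentralization level ed0 is −(b1 + b3·ed0)/(2·(b2 + b4·ed0)), and the curve flips from a U to an inverted U where b2 + b4·ed0 changes sign. Variable names continue the hypothetical panel above.

```python
import statsmodels.formula.api as smf

model_3 = smf.ols(
    "gtfp ~ ai + ai_sq + ai:ed + ai_sq:ed + ed"
    " + is_ + fdi + gov + ecs + pop + C(province) + C(year)",
    data=panel,
).fit()

def turning_point_at(ed0):
    b = model_3.params
    return -(b["ai"] + b["ai:ed"] * ed0) / (2.0 * (b["ai_sq"] + b["ai_sq:ed"] * ed0))

# Curvature of the moderated quadratic: a negative value means an inverted U shape.
curvature = model_3.params["ai_sq"] + model_3.params["ai_sq:ed"] * panel["ed"].mean()
print(turning_point_at(panel["ed"].mean()), curvature)   # the text reports 4.58 after moderation
```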
Regional Heterogeneity Analysis
Models 4 to 9 in Table 4 report the differences in the effects of AI on GTFP and in the environmental regulation mechanism in the eastern, central, and western regions of China, respectively, and the results reveal significant regional heterogeneity. China is a vast country, and there is a certain degree of uneven development between the eastern, central, and western regions; the differences in the regression results across the models can thus be attributed to the local industry mix, the development goals of each region [80], the level of economic development, and labor quality. The specific analysis for the different regions is as follows. In Model 4, the primary term of AI is statistically significantly negative and the squared term is statistically significantly positive, with a utest p-value of 0.034, so there is a U-shaped association between AI and GTFP in the east. Model 5 becomes inverted U-shaped after adding the moderating effect of environmental decentralization, which is the same trend as for the nation overall. The absolute values of the AI and AI-squared coefficients in Model 4 and of the interaction term coefficients in Model 5 are larger than the national ones, indicating that the effect of AI on GTFP, as well as the moderating function of environmental decentralization, is more pronounced in the east. The explanation could be that the influence of AI on GTFP is more significant in the east as a developed region, with faster technological development and industrial transformation [81], a more advanced industrial structure, a higher level of local government environmental decentralization, and wide application of AI in emerging and tertiary industries. In Model 6, the coefficients of AI and its squared term are greater than the national level yet smaller than in the east, illustrating that the influence of AI on GTFP in the central region is above the national level but still lags behind the east. After adding the moderating effect of environmental decentralization in the central region, the correlation between AI and GTFP remains U-shaped, which indicates that the degree of environmental decentralization in the central region does not yet support a flip in the U-shaped relationship.
From Model 8, it can be seen that the coefficient of the primary term of AI is statistically significantly negative at the 10% level, while the quadratic term of AI is nonsignificant, indicating that AI has a simple linear relationship with GTFP and considerably suppresses the development of GTFP in the west. The interaction term between environmental decentralization and AI is added in Model 9; the interaction term is significantly negative and the coefficient of AI is positive, implying that environmental decentralization in the western region acts as a negative moderator between AI and GTFP and that, after moderation, AI plays a facilitating role for GTFP. The reason may be that, before regulation, AI and GTFP follow the cost hypothesis: the investment in AI does not achieve the effect of promoting higher GTFP and the costs outweigh the benefits. After regulation, environmental decentralization prompts local governments to conduct R&D, following the Porter effect [82], producing a win-win situation.
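The regional heterogeneity exercise amounts to re-estimating the same moderated specification on the east, central and west subsamples, for example as in the short sketch below (the region labels form an assumed column of the hypothetical panel used in the earlier sketches).

```python
import statsmodels.formula.api as smf

for region, sub in panel.groupby("region"):          # "east", "central", "west"
    m = smf.ols(
        "gtfp ~ ai + ai_sq + ai:ed + ai_sq:ed + ed"
        " + is_ + fdi + gov + ecs + pop + C(province) + C(year)",
        data=sub,
    ).fit()
    print(region, m.params[["ai", "ai_sq", "ai:ed", "ai_sq:ed"]].round(3))
```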
Analysis of Spatial Correlation
Spatial econometric analysis presupposes the existence of spatial correlation [83]; in this paper, the global Moran index is computed on the basis of the economic proximity weight matrix [84] for the GTFP of the 30 provinces from 2011 to 2020 to verify the spatial correlation. Table 5 demonstrates that the z-statistics of the global Moran index of GTFP from 2011 to 2020 all passed the significance test at the 1% level, and that there are both positive and negative global Moran indices, illustrating that GTFP exhibits positive spatial correlation in some years and negative spatial correlation in others, and demonstrating that GTFP shows a spatial aggregation pattern across China. Based on the above analysis, local indicators of spatial association (LISA) [85] are further calculated and Moran scatter plots are drawn in order to explore the local spatial correlation, i.e., spatial heterogeneity. In this paper, the data of 2011, 2014, 2017, and 2020 are selected and the Moran scatter plots are shown in Figure 3. As reflected in the figure, the positions of the provinces basically do not change over the four selected years, which demonstrates the stability of the spatial dependence of China's GTFP. GTFP shows an overall trend of shifting to the left during 2011-2020, indicating that the spatial dispersion of GTFP becomes more pronounced between 2011 and 2020. The four quadrants of the Moran scatter plot correspond to high-high, low-high, low-low, and high-low aggregation, in that order [86]; it can be seen that the number of high-high and high-low aggregation provinces decreases, while the number of low-high and low-low aggregation provinces increases between 2011 and 2020, indicating a notable local spatial correlation in GTFP. The number of low-productivity provinces increases, reflecting both a widening gap between some low-GTFP provinces and their neighboring provinces and a strengthening of the similar-value aggregation effect in another part of the low-GTFP provinces. The provinces stably located in the high-low agglomeration, with a high level of GTFP and neighbors with a low level of GTFP, are mainly Shanghai, Beijing, Jiangsu, Zhejiang, Chongqing, Hebei, etc. These areas have obvious advantages in terms of economic development level, advanced science and technology, and labor quality, so their GTFP can reach a higher level.
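The global and local Moran statistics can be computed with the PySAL stack along the lines below. The construction of the economic proximity weight matrix from inverse per-capita GDP gaps is an assumption about the matrix of [84], and the column names remain hypothetical.

```python
import numpy as np
import libpysal
from esda.moran import Moran, Moran_Local

# Economic-proximity weights: inverse absolute gap in average GDP per capita (assumed form).
gdp_pc = panel.groupby("province")["gdp_per_capita"].mean()
gap = np.abs(gdp_pc.values[:, None] - gdp_pc.values[None, :])
w_dense = 1.0 / (gap + 1e-9)
np.fill_diagonal(w_dense, 0.0)
w = libpysal.weights.util.full2W(w_dense, ids=list(gdp_pc.index))
w.transform = "r"                                     # row-standardise

for year, g in panel.groupby("year"):                 # global Moran index per year (Table 5 style)
    y = g.sort_values("province")["gtfp"].values
    mi = Moran(y, w)
    print(year, round(mi.I, 3), round(mi.z_norm, 2), round(mi.p_norm, 3))

# Local Moran (LISA) quadrants for one year, as used for the Moran scatter plots.
y_2020 = panel.loc[panel["year"] == 2020].sort_values("province")["gtfp"].values
print(Moran_Local(y_2020, w).q)                       # 1 = HH, 2 = LH, 3 = LL, 4 = HL
```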
Spatial Econometric Analysis
After the Moran index test mentioned above, this article introduces a spatial econometric model for the empirical analysis of spatial correlation. Based on the Hausman test, a spatial fixed-effects model is used. To determine the optimal spatial econometric model, the Lagrange multiplier (LM) test [87], the likelihood ratio (LR) test, and the Wald test are conducted, in turn, for the spatial error model (SEM), the spatial lag model (SAR), and the spatial Durbin model (SDM). First, the LM test was performed. Table 6 shows that the Lagrange multipliers of both SEM and SAR were significant and could not clearly favor either SEM or SAR, so the LR and Wald tests were undertaken. Table 6 reports that the p-values of both the LR test and the Wald test are significant, so the SDM does not degenerate into SEM or SAR, and the choice of the SDM is robust. Finally, to determine whether to choose the time fixed-effects, individual fixed-effects, or two-way fixed-effects model, the LR test was performed again; the individual fixed effect could not reject the null hypothesis, while the time fixed effect rejected it, so the time fixed-effects specification was chosen. Table 7 reports the baseline regression outcomes for the SDM; Model 12, Model 13, and Model 14 report the decomposition into direct, indirect, and total effects; and Model 15 reports the spatial assessment of the moderating influence of environmental decentralization. The R2 in Model 11 is 0.44 and the log-likelihood is 111.82, indicating a good model fit and credibility. In Model 12 and Model 13, the direct effect of the squared term of AI passed the 1% significance test and the indirect effect was negative and insignificant, demonstrating that AI promotes the enhancement of GTFP within the region, while there is no significant spillover effect onto the surrounding areas. The interaction term of environmental decentralization and AI squared in Model 15 is not significant under the spatial weight matrix, indicating that environmental decentralization does not exert a moderating function in the process of AI's influence on GTFP in this spatial setting. Note: standard errors are in parentheses, * p < 0.1, ** p < 0.05, *** p < 0.01.
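A single-cross-section illustration of the spatial Durbin specification (a spatial lag model augmented with spatially lagged regressors WX) could look like the sketch below, using spreg's maximum-likelihood lag estimator. The paper's panel SDM with time fixed effects and its LM/LR/Wald battery are not reproduced here; this only shows the structure of the estimation, reusing the weight matrix w and the hypothetical panel from the previous sketches.

```python
import numpy as np
from spreg import ML_Lag

g = panel.loc[panel["year"] == 2020].sort_values("province")
y = g[["gtfp"]].values
X = g[["ai", "ai_sq", "is_", "fdi", "gov", "ecs", "pop"]].values

W_dense = w.full()[0]                  # dense row-standardised weight matrix
WX = W_dense @ X                       # spatially lagged regressors (the Durbin terms)

sdm = ML_Lag(y, np.hstack([X, WX]), w=w, name_y="gtfp")
print(sdm.rho)                         # spatial autoregressive parameter
print(sdm.betas.ravel()[:3])           # intercept and the two AI terms
```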
MAUP Test
The modifiable areal unit problem (MAUP) refers to the fact that the delineation of the areal units of the study object may affect the results of the analysis, and a particular aggregation at a particular scale can produce results that are valid only for that particular delineation [88,89]. The spatial effect is further explored in this paper by referring to the MAUP test used by Contreras (2022) [90]. The entire eastern region was therefore chosen to investigate whether a different study area size would affect the previous findings. First, the spatial autocorrelation test was conducted for the eastern region, and the global Moran index was significant (p = 0.000 from 2012 to 2020). Next, the local Moran index test was performed and Figure 4, the Moran scatter plot, was drawn. Figure 4 reports the spatial distribution of GTFP for each province in the eastern region in 2011, 2014, 2017, and 2020, respectively. It can be seen from the figure that the positions of the eastern provinces in the quadrants have changed: the vast majority of the provinces are concentrated in the H-H and L-H regions, and the overall trend of migration from the H-H region to the L-H region from 2011 to 2020 indicates a widening of the GTFP gap between the provinces in the eastern region and a polarization phenomenon. Shandong and Hainan have remained in the second quadrant, but the position of Shandong Province has been shifting to the right, while the position of Hainan Province has not changed much. Tianjin and Fujian have slowed their pace of development over the decade and their GTFP gap with surrounding provinces has increased. Under the aforementioned condition of significant spatial correlation, this paper conducted a spatial econometric analysis using the spatial Durbin model. The results of the spatial regression are reported in Table 8. In Model 16, it can be seen that, under the economic distance weight matrix, the primary and secondary coefficients of AI are significantly positive and negative at the 1% level, respectively, so there is still a U-shaped relationship between AI and GTFP in space, and both the primary and secondary coefficients of AI are larger than those in Model 11. This indicates that the effect of AI on GTFP is greater in the eastern region, which is consistent with the actual situation. At a finer spatial scale, labor and capital flows and trade are more important [91]. The eastern region has advantages in the level of technological development, the quality of human resources, and the level of advanced industrial structure, and AI is more widely used there, so the enhancement effect of AI on the development of GTFP in the eastern region is greater.
Models 17 to 19 report the decomposition of the spatial Durbin model results; the direct effect of AI on GTFP in the eastern region passes the test at the 1% level, indicating that there is no spatial spillover effect in the eastern region either. The data of Model 20 show that the coefficients of the AI primary term and its square both pass the test at the 1% level, with the AI primary term positive and the squared term negative, so the effect of AI on GTFP becomes inverted U-shaped under the moderation of environmental decentralization. The direction of the coefficients is the same as in Model 15, but the absolute values of the coefficients of the primary and quadratic terms of AI are larger in Model 20. A point of interest is that, under the economic distance weight matrix, the cross-product terms of environmental decentralization with the AI primary and squared terms are both significant at the 1% level, which indicates that, in the eastern region, AI can affect GTFP only through the moderating effect of environmental decentralization.
Robustness Test
To exclude the effect of extreme values and outliers, a 1% trimming of the data was performed [92], and the findings for the baseline regression, the moderating effects, and the SDM direct, indirect, and total effects are reported in Models 21 through 25 in Table 9, respectively. Models 26 to 30 in Table 10 add further control variables to test robustness [93], adding outward foreign direct investment (OFDI); China makes full use of OFDI to learn from and adopt advanced technology at home and abroad and to enhance GTFP through reverse technology spillovers from OFDI [94,95]. This study uses the ratio of gross exports and imports of merchandise to GDP, measured by region according to the location of the operating units. Except for a mild change in the absolute values of the coefficients, the signs and significance of the coefficients of AI and its square did not change, and Model 22 and Model 27 report that environmental decentralization still positively moderates and that the association between AI and GTFP turns from U-shaped to inverted U-shaped, signifying that the aforementioned findings are robust.
Threshold Regression Analysis
This paper further employs a threshold regression model to explore whether the influence of AI on GTFP is constrained by technological innovation and regional absorptive capacity. Table 11 reports the threshold regression test values: the F-statistic of technological innovation is statistically significant at the 1% level and that of regional absorptive capacity is significant at the 5% level; both satisfy the single-threshold test and reject the dual-threshold test, indicating the existence of one threshold each for technological innovation and regional absorptive capacity. Note: ** p < 0.05, *** p < 0.01. Table 12 presents the estimated thresholds of technological innovation and regional absorptive capacity; the threshold for technological innovation is 4.27 and the threshold for regional absorptive capacity is 12.15.
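The single-threshold estimation can be sketched as a Hansen-style grid search: for each candidate value of the threshold variable, the AI effect is split into two regimes and the value minimising the residual sum of squares of the within regression is kept (the bootstrap F-test for the number of thresholds is omitted). Variable names continue the hypothetical panel above.

```python
import numpy as np
import statsmodels.formula.api as smf

def single_threshold(df, qvar, trim=0.05, grid=100):
    candidates = np.quantile(df[qvar], np.linspace(trim, 1 - trim, grid))
    best_ssr, best_gamma, best_coefs = np.inf, None, None
    for gamma in candidates:
        d = df.copy()
        d["ai_lo"] = d["ai"] * (d[qvar] <= gamma)     # regime below the threshold
        d["ai_hi"] = d["ai"] * (d[qvar] > gamma)      # regime above the threshold
        m = smf.ols("gtfp ~ ai_lo + ai_hi + is_ + fdi + gov + ecs + pop"
                    " + C(province) + C(year)", data=d).fit()
        if m.ssr < best_ssr:
            best_ssr, best_gamma, best_coefs = m.ssr, gamma, m.params[["ai_lo", "ai_hi"]]
    return best_gamma, best_coefs

gamma_ti, coefs_ti = single_threshold(panel, "ti")     # the text reports 4.27 for technological innovation
gamma_rac, coefs_rac = single_threshold(panel, "rac")  # and 12.15 for regional absorptive capacity
print(gamma_ti, coefs_ti.round(3))
```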
Table 13 presents the threshold regression effects; since both technological innovation and regional absorptive capacity have a single threshold, each divides the effect of AI on GTFP into two intervals. When technological innovation is less than or equal to 4.27, the effect coefficient is −0.02, significant at the 5% level, while when technological innovation exceeds 4.27, the effect coefficient is 0.5, significant at the 1% level. When regional absorptive capacity is less than or equal to 12.15, its impact coefficient is −0.03, which passes the significance test at the 5% level; when regional absorptive capacity is greater than 12.15, its impact coefficient is 0.01, which does not pass the significance test. The coefficient of AI changes from negative to positive after crossing each of the two thresholds, indicating that the influence of AI on GTFP is U-shaped on either side of the thresholds of technological innovation and regional absorptive capacity. The reason may be that productivity is low when the threshold is not crossed, owing to the crowding-out effect [96], and that, after crossing the threshold, with the advancement of regional technological innovation and regional absorptive capacity, the level of learning, absorption, and internalization of knowledge in the region is enhanced, promoting the development of AI and, thus, GTFP. Note: t-test statistics in parentheses, * p < 0.1, ** p < 0.05, *** p < 0.01.
Conclusions and Policy Implications
The paper empirically examines the effect of AI on GTFP in 30 Chinese provinces under the environmental decentralization system from 2011 to 2020 using the super-efficient SBM model, the spatial Durbin model, and a threshold regression model. The following findings were obtained: (1) the impact of AI on GTFP is first inhibiting and then promoting, following a U shape with an inflection point of 4.50; (2) environmental decentralization serves as a positive moderator between AI and GTFP, and the impact of AI on GTFP changes from a U shape to an inverse U shape after moderation, with the curve becoming steeper and the inflection point shifting to the right; (3) the results of the regional heterogeneity test reveal that the influence of AI on GTFP in the east is U-shaped and, under the positive regulation of environmental decentralization, flips to an inverse U shape, the same trend as the national change; the influence of AI on GTFP in the central region is U-shaped and remains U-shaped under the positive regulation of environmental decentralization, which strengthens the influence of AI on GTFP; in the western region AI has a marked suppressive effect on GTFP and, after adding the moderating variable environmental decentralization, AI shows a significant promoting effect on GTFP; (4) the Moran index suggests that GTFP exhibits spatial correlation, and the empirical findings of the SDM demonstrate that AI has a direct effect on GTFP within the region, with no spatial spillover effect on the surrounding regions; (5) with technological innovation and regional absorptive capacity as threshold variables, the influence of AI on GTFP appears U-shaped under the respective constraints of both. Combining the foregoing findings, this article proposes the following policy insights. General considerations can be derived from the Chinese sample [97].
Firstly, the advantages of AI application should be fully utilized to promote technological innovation. The empirical study in this paper finds that the impact of AI on GTFP shows a U shape and that most provinces have crossed the inflection point and are at the stage of positive correlation between the two. Great importance should therefore be attached to the role of AI in GTFP: take technological innovation as the core driving force, strengthen the construction of regional intelligent supporting facilities, accelerate the innovation and upgrading of AI-related technologies, guide the penetration of AI technologies into the value and industrial chains of production, circulation, configuration, integration, and sewage treatment, and develop advanced pollution control and emission reduction technologies to achieve sustainable economic and social development. The second is to strengthen the government's guidance through environmental governance policy, appropriately increase the decentralization of environmental rights, give full play to the initiative of local governments, draw on local advantages, and give AI a relaxed and supportive development environment. In view of the spatial correlation of GTFP, inter-regional co-operation should be strengthened, resource allocation optimized, and the full flow of factors between regions realized, thus promoting the synergistic development of different regions. The third is to develop differentiated AI development strategies according to local conditions, dissolve regional barriers, and promote synergistic development across regions. The eastern and central regions should continue to deepen reforms on the existing foundation, enhance the diversification of industrial agglomeration, optimize the industrial structure, and develop low-pollution industries. The western region should rely on its own resource advantages, learn from the advanced experience of the eastern and central regions, and innovate economic development models to create new points of economic growth while protecting the ecological landscape. The fourth is to strengthen regional absorptive capacity, promote inter-industry and inter-regional sharing of resources and knowledge, promote the formation of knowledge spillover and agglomeration effects across regions, take advantage of external opportunities, and focus on cultivating the ability to identify and absorb resources, technologies, and talent and turn them into substantial benefits. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
2022-11-13T16:20:01.158Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "1ef2b3288a485124120af1321443288fb93ce880", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/19/22/14776/pdf?version=1668068128", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "58fcdeb97bd2214534008de289a629f8ecf294fe", "s2fieldsofstudy": [ "Environmental Science", "Economics", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
226629131
pes2o/s2orc
v3-fos-license
A FINITE ELEMENT METHOD FOR NEUTRON NOISE ANALYSIS IN HEXAGONAL REACTORS The early detection of anomalies through the analysis of the neutron noise recorded by incore and ex-core instrumentation gives the possibility to take proper actions before such problems lead to safety concerns or impact plant availability. The study of the neutron fluctuations permits to detect and differentiate anomalies depending on their type and possibly to characterize and localize such anomalies. This method is non-intrusive and does not require any external perturbation of the system. To effectively use the neutron noise for reactor diagnostics it is essential to accurately model the effects of the anomalies on the neutron field. This paper deals with the development and validation of a neutron noise simulator for reactors with different geometries. The neutron noise is obtained by solving the frequency-domain two-group neutron diffusion equation in the first order approximation. In order to solve this partial differential equation a code based on a high order finite element method is developed. The novelty of this simulator resides on the possibility of dealing with rectangular meshes in any kind of geometry, thus allowing for complex domains and any location of the perturbation. The finite element method also permits automatic refinements in the cell size (h-adaptability) and in its polynomial degree (p-adaptability) that lead to a fast convergence. In order to show the possibilities of the neutron noise simulator developed a perturbation in a hexagonal two-dimensional reactor is investigated in this paper. INTRODUCTION Being able to monitor the state of nuclear reactors while they are running at nominal conditions is a safety requirement. The early detection of anomalies gives the possibility to take proper actions before such problems lead to safety concerns or impact plant availability. The CORTEX project [1], funded by the European Commission in the Euratom 20162017 work program, aims at developing an innovative core monitoring technique that allows detecting anomalies in nuclear reactors, such as excessive vibrations of core internals, flow blockage, coolant inlet perturbations, etc. The technique is based on using the inherent fluctuations in neutron flux recorded by incore and ex-core instrumentation, referred to as neutron noise, from which the anomalies will be detected and differentiated depending on their type, location and characteristics. The method is non-intrusive and does not require any external perturbation of the system. To be able to detect, localise and quantify a perturbation in real-time, an automatic algorithm based on machine learning has to be provided with a large set of simulation data [2]. As the number of experiments to effectively train the machine learning algorithms is huge, these experiments must be carried out in a time efficient manner, i.e. fast running techniques are required to carry out the simulations. One useful technique to solve the effect of a perturbation in the neutron noise is to resolve the frequency-domain first-order neutron noise equation in the diffusion approximation. This is a partial differential equation with complex numbers. This work presents a neutron noise simulator developed with the finite element method, called FEMFFUSION. It can deal with any kind of geometry allowing complex domains and any location of the perturbation. 
In other words, it computes the same quantities as the frequency-domain code CORE SIM [3], but allows any kind of geometry and a more adaptable structure. The finite element method also permits automatic refinements in the cell size (h-adaptability) and in its polynomial degree (p-adaptability), which lead to an exponentially fast convergence.
THE NEUTRON DIFFUSION EQUATION
In the two energy group approximation, the time-dependent neutron diffusion equation with one group of delayed neutrons, where matrices are denoted by [ ], is defined as [4]

[v^{-1}] ∂Φ/∂t + [L] Φ = (1 − β) [M] Φ + λ_d [χ] C,    (1)
∂C/∂t = β [νΣ_f1  νΣ_f2] Φ − λ_d C,                    (2)

where the cross-section matrices are defined as

[L] = [ −∇·(D_1∇) + Σ_a1 + Σ_s12 , 0 ; −Σ_s12 , −∇·(D_2∇) + Σ_a2 ],  [M] = [ νΣ_f1 , νΣ_f2 ; 0 , 0 ],  [χ] = (1, 0)^T.

The main unknown of the neutron diffusion equation is the space- and time-dependent neutron flux, in its usual separation into the fast and thermal energy groups, Φ = (φ_1(r, t), φ_2(r, t))^T, and the neutron precursor concentration C(r, t). All other quantities have their usual meaning [4]. Static problem. For a given transient analysis of a reactor core, a static configuration of the reactor is usually considered as the initial condition. Associated with the time-dependent neutron diffusion equations (1) and (2), the static solution takes the form

[L] Φ = (1/λ) [M] Φ.    (3)

This problem is known as the Lambda Modes problem for a given configuration of the reactor core. To solve problem (3), a spatial discretization of the equations has to be selected. In this work, a high order continuous Galerkin finite element method is used, leading to an algebraic eigenvalue problem associated with the discretization of equation (3) with the block structure

[A_11 0 ; A_21 A_22] (Φ_1, Φ_2)^T = (1/λ) [B_11 B_12 ; 0 0] (Φ_1, Φ_2)^T,    (4)

where Φ_1 and Φ_2 are the algebraic vectors of weights associated with the fast and thermal neutron fluxes. More details on the spatial discretization used can be found in [5]. The resulting algebraic eigenvalue problem is solved using the Block Inverse-Free Preconditioned Arnoldi Method (BIFPAM) [6] or a Newton iteration solver [7].
FIRST-ORDER NEUTRON NOISE THEORY
The first-order neutron noise theory is based on splitting every time-dependent term X(r, t) into its mean value X_0, which is taken as the steady-state solution, and its fluctuation around the mean value, δX,

X(r, t) = X_0(r) + δX(r, t).    (5)

The fluctuations are assumed to be small compared to the mean values. This allows second-order terms to be neglected, δX(r, t) × δX(r, t) ≈ 0. Also, the fluctuations of the diffusion coefficients are neglected and δD_g ≈ 0 is assumed. This approximation was demonstrated to be valid for light water reactor applications [8]. Thus, the first-order neutron noise equation can be written in the frequency domain as [9]

∇·([D] ∇ δΦ(r, ω)) + [Σ_dyn(r, ω)] δΦ(r, ω) = δS(r, ω),    (6)

where [D] = diag(D_1, D_2) and [Σ_dyn(r, ω)] is a frequency-dependent matrix collecting the static cross sections, the iω/v_g terms and the delayed neutron contribution. The perturbation source term δS(r, ω) is given in the frequency domain by the changes in the cross sections:

δS(r, ω) = (δS_1(r, ω), δS_2(r, ω))^T = φ_s δΣ_s12 + φ_a (δΣ_a1, δΣ_a2)^T + φ_f (δΣ_f1, δΣ_f2)^T,    (7)

where the matrices φ_s, φ_a and φ_f are built from the static fluxes φ_1 and φ_2. By comparing with Eqs. (3), it can be seen that the neutron noise equation is an inhomogeneous equation with complex quantities that has to be solved after the steady-state solution is obtained, because φ_1 and φ_2 represent the steady-state fast and thermal neutron fluxes, respectively. The related static eigenvalue problem must be solved with the same spatial discretization as the frequency-domain neutron noise equation to get coherent results. Applying the continuous Galerkin finite element discretization to Eq. (6) leads to an algebraic linear system of equations with the same block structure, where δΦ = (δφ_1, δφ_2)^T are the algebraic vectors of weights associated with the fast and thermal neutron noise fluxes.
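The two numerical tasks just described, the static eigenvalue problem and the complex linear system for the noise, can be sketched in a few lines. This is only an illustration under assumed inputs: the sparse FEM matrices are taken as assembled elsewhere, and a plain power iteration stands in for the BIFPAM or Newton solvers named above.

```python
# A minimal sketch (not the FEMFFUSION implementation) of the two solves described
# in the text, assuming the sparse FEM blocks have already been assembled:
#   A : loss operator (diffusion + absorption + scattering), real
#   B : fission production operator, real
#   A_noise, dS : frequency-domain noise operator and source, complex
import numpy as np
import scipy.sparse.linalg as spla

def fundamental_mode(A, B, tol=1e-10, max_it=500):
    """Power iteration for A*phi = (1/lambda)*B*phi, i.e. the dominant mode of A^{-1} B."""
    solve_A = spla.splu(A.tocsc())             # factorise the loss operator once
    phi = np.ones(A.shape[0])
    phi /= np.linalg.norm(phi)
    lam = 1.0
    for _ in range(max_it):
        psi = solve_A.solve(B @ phi)            # psi = A^{-1} B phi
        lam_new = np.linalg.norm(psi)           # eigenvalue estimate with ||phi|| = 1
        psi /= lam_new
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, psi
        phi, lam = psi, lam_new
    return lam, phi

# keff, phi0 = fundamental_mode(A, B)             # static lambda-mode problem
# delta_phi = spla.spsolve(A_noise.tocsc(), dS)   # complex linear system for the noise
```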
NUMERICAL RESULTS To study the possibilities of the FEMFFUSION code, a two-dimensional hexagonal reactor (2D IAEA benchmark) was considered. This benchmark has a 1/12 reflective symmetry but as the inserted perturbation is not symmetrical, the whole reactor is solved. The fuel assembly pitch is 20.0 cm. Table 1 shows the cross section data for the this reactor. Figure 1 shows the materials layout. A perturbation in the fuel assembly marked with material 5 of 10% of its cross sections reference values needs to be solved inserted to verify the developed noise simulator: δΣ a1 = 0.00095042, δΣ a2 = 0.00750058, δΣ s12 = 0.00177540, δΣ f 1 = 0.00058708, δΣ f 2 = 0.00960670. Table 2 shows the convergence of the solution depending on the polynomial degree used in the FEM shape functions (FED). We have defined the following error indicators where values with * represent reference results extracted from [10] for the steady-state results and a very fine FEM computation with a finite element polynomial degree equals to 7 and each cell refined into 16 cells for the neutron noise results. φ * a,g and δφ * a,g are the mean flux and the average noise flux, respectively, at assembly a. N A is the number of assemblies in the reactor. Figure 2 represents the mean assembly flux values for the steady state solution using FED = 3. Figure 3 presents the neutron noise magnitude and Figure 4 displays the neutron noise phase. The results shows that the fast neutron noise has ab influence over a wider region. On the other hand, the thermal neutron noise is mostly localized. Also, for this perturbation, the phase of the neutron noise is similar throughout the entire reactor. CONCLUSIONS This work presents a neutron noise simulator developed with the finite element method. It can deal with different kinds of geometry allowing complex domains as hexagonal reactors and any location of the perturbation. Also, the finite element method permits automatic refinements in the cell size and in its polynomial degree that leads to fast spatial convergence. This code will permit to train machine learning algorithms to detect perturbations in real-time in operating nuclear reactors.
2020-11-14T08:17:54.358Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "05721c20c848070d5c5e7e714fdc5ba8e4888aa1", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2021/01/epjconf_physor2020_21007.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1c660209b7ab7c1270e3c2873e724231a5eb03c5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
267706617
pes2o/s2orc
v3-fos-license
Investigating the use of snags and downed logs by wild animals in Federal College of This study examined the uses of snags and downed logs in the Federal College of Wildlife Management, New Bussa. The research aimed to identify the kinds of trees that produce snags and deadwood logs in the study region as well as the ways that wild animals utilize these snags and deadwood logs in the study area. The methodology employed involves the use of plot sampling method. Field observation of plant species that have turned into dead trees was carried out. The data obtained were analyzed using descriptive statistics (tables and charts). The results showed that a significant number of plant species were identified as snags and down logs; the finding indicates that Terminelia glaucocens having (12.5%) occurrence and Pterocarpus erinaceus (10.71) was the predominant species in the study area. These dead trees species are used by wild animals in a variety of ways; for instance, 50% of wild animals use the snags and down logs for perching, 17.44% use them as foraging sites, and 4.65% use them as nesting sites. The most common users are squirrels, accounting for 13.95% of the total, followed by francolins birds (11.63%) and hawks (1.16%). Since many wildlife species rely on these trees to survive, it is imperative to protect these tree species inside the college estate to stop the extinction of wild animals. It is not advisable to remove snags. INTRODUCTION The value of a given forest for conservation is partly determined by how it is managed.For instance, numerous studies have discovered that, at the stand level, the preservation of structural features like dead wood and snags, as well as the ability of native plant species to tolerate succession in the understory, have a significant impact on the population of fauna in tree plantations (Hartley 2002).Snags are the standing trees dead for a natural process.They are an important environmental element and are essential for maintaining biodiversity in forest ecosystems (Ferris, and Humphrey 1999;Tavankar, et al., 2011).Snags are originated by any possible factor that contributes to tree mortality, such as lightning, storm breakage, fire, disease, insects, drought, flooding, forestry practices, and so on (Wolf et al. 2004;Bendix and Cowell, 2010).Snags are not only the base of a food-chain but also, they provide microhabitats for many living organisms, including fungi, epixylic lichens, bryophytes, invertebrates, birds, mammals, reptiles, and amphibians (Russell et al. 2006;Nascimbene et al. 2013). Cavities in trees and snags provide suitable nesting sites for birds, bats and other wildlife species.In forest areas, an adequate and continuous availability should be ensured for preservation purposes (Tremblay et al., 2010).The potential benefits to wildlife from deadwood are dependent on several factors, size, species, level of decay, and location.Increasingly, snags have been studied in managed forests to determine snag dynamics (Chambers and Mast, 2005;Russel and Weiskittel, 2012).Most ground-dwelling wildlife species uses downed logs as cover, a way to escape from predators, a way to traverse through areas, and, in the case of amphibians, a potentially comfortable microclimate.Moreover, a variety of predators, both vertebrate and invertebrate, feed on the profusion of arthropods that fallen wood harbors (Lohr et al., 2002;Ulyshen and Hanula, 2009). 
Birds comprise the most conspicuous and best-known animal group using tree cavities as nest sites.The abandoned cavities resulting from their excavations are used by many other mammal and bird species, including American martens (Martes americana) and Vaux's swifts (Chaetura vauxi) (Bull and Holthausen, 1993).If only forest birds are considered, the proportion of cavity-nesting species is about 30% in northern Europe (Siitonen, 2001), 35% in central Europe (Wesołowski, 2007), 40% in North America and 20-30% in tropical Central America (Gibbs et al., 1993), while we expect to find the highest diversity of treecavity nesters in the tropics (e.g., woodpecker diversity is highest in the Asian, South American and African tropics (Mikusiński, 2006;Winkler & Gusenleitner, 2015), The probability of cavity trees being occupied by vertebrates increases with the number of cavities per tree, diameter and crown senescence (Koch et al., 2008b).Most cavities used by vertebrates have an entrance diameter of at least 10 cm and are at least 30 cm deep Stokland et al. (2012).However, different species of arboreal marsupials show clear preferences in their choice of cavities.Similar to cavity-nesting birds, the preferences may vary in terms of entrance diameter and cavity volume, height above the ground, position on the tree (main stem or branches) and number of cavities per tree (Gibbons and Lindenmayer, 2002).Cavity volume is also an important factor in explaining cavity use because it can affect reproductive success (Martin et al., 2004).Cavities with large internal size may allow for better thermoregulation and reduce competition for space among siblings.Therefore, the ideal cavity to maximize fecundity and minimize predation is a large-volume cavity with a small entrance (Martin et al., 2004).The high proportion of cavity nesters is a compelling indication of the importance of living mature and dead standing cavity trees in natural forest ecosystems.The snags have become a major conservation issue in managed forest ecosystems.While managing quality saw timber with the single tree selection system often reduces the number of cavity trees and snags, because they are removed under an intensive timber management regime (Larrieu et al. 2012;Perry and Till, 2013). There is an abundance of existing literature supporting the importance of snags and downed logs cavities to wildlife abundance and diversity (Stokland et al., 2012).Unfortunately, most of this information has not been generated through study of the region.The objectives of the study are to identify the types of trees forming snags and downed log in the study area and to determine the uses of snags and downed log by wild animal in the study area, thereby providing relevant information required to meet the research needs of resource managers and conservationists. 
Study Area Federal College of Wildlife Management is in New Bussa, the administrative headquarters of Borgu Local Government Area of Niger sate, it covers a total land mass of about 16,200km 2 and it is situated between latitude 9°53'N and longitude 4°31'E, it has a total population census of 171, 965 people.The length of the rainy season is about 175 to 190 days (5 -6 months) during which 1000mm -1250mm rainy is recorded annually.The rainy season normally comes in April accompanied by strong wind and thunderstorm reaching its peak in July to August and declines in September.Generally, the temperature is high during dry season just before the rain.It declines during the rainy season from June to October and rises again in November and drop slightly in December and January due to Harmattan in the dry season.The mean maximum temperature is 35°C -40°C but minimum temperature ranges between 14°C -15°C in the Harmattan (Ekeke and Stopfords, 1984). Study Design Two plots of 100 m 2 were mapped out in the study area with measuring tape namely, thicket woodland (Site 1) open plantation woodland (Site 2), snags and downed logs ≥10m in height was identified and uniquely marked with numbered nylon tags, allowing us to distinguish existing snags from new snags when re-sampling plots.Also, wildlife species that uses the snags and downed logs were enumerated. The study plots were located within the protected area of the College with minimal human activities.The study was carried out for a period of six ( 6) months (January to June 2019).Each site was visited ten (10) days in the month.The period of visit was between 6:00am -9:00am in the morning and 4:00pm -6:00pm in the evening.The size study area is 250 km 2 . Data Analysis The data obtained were analyzed using Descriptive Statistics in form of frequencies and percentages (Tables and Charts). RESULTS Table 1 displays the types of trees that formed snags or downed logs, together with the size of their cavities and the percentage of occurrence in the research area.The table reveals that a total of 17 different tree species produced snags or downed logs in the study area.The most common species is Terminalia glaucescens, with a cavity size of 9 to 10.3 cm and a frequency of 12.5%.Pterocarpus erinaceus is next, with a frequency of 10.71% and a cavity size of 5 to 7.2 cm, and Vitelaria paradoxa snag with a frequency of 10.71%. Table 2 shows the Kinds of wild animals and the trees snag/ down log they occur in FCWM.Results show that Snake, Vinaceus dove, and Squirrel are found using Acacia gourmaensis Snag/ Down log, while Francolin, Grey hornbill, Giant rat and Hare are found using Pterocarpus erinaceus Downed logs.The results in Figure 1 illustrate the total number that use snags and downed logs in the study area (%).Squirrels are the main users, with 13.95 percent, followed by francolins birds with 11.63%, and hawks with 1.16%.3 shows the purposeful uses of snags and downed log by wild animals in the study area.Results show that wildlife uses snags and downed logs for specific purposes.The majority of wildlife uses snags and downed logs for perching, followed by 17.44% for feeding sites, 4.65% for nesting, and 4.65% for territorial displays. 
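The descriptive statistics used here (frequencies and percentages of the kind reported in Tables 1 to 3 and Figure 1) could be tabulated as in the short sketch below; the file and column names are assumptions rather than the authors' actual field records.

```python
# Hypothetical sketch: one row per field observation of a snag/downed-log use.
import pandas as pd

obs = pd.read_csv("snag_observations.csv")   # assumed columns: tree_species, animal, use_type

tree_pct = obs["tree_species"].value_counts(normalize=True).mul(100).round(2)
animal_pct = obs["animal"].value_counts(normalize=True).mul(100).round(2)
use_pct = obs["use_type"].value_counts(normalize=True).mul(100).round(2)

print(tree_pct.head())    # e.g. Terminalia glaucescens 12.50, Pterocarpus erinaceus 10.71
print(animal_pct.head())  # e.g. squirrel 13.95, francolin 11.63
print(use_pct)            # perching, foraging, nesting, territorial display
```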
DISCUSSION The study found that many wildlife species rely heavily on using dead and dying trees, particularly fallen logs, for their survival.Snag contributes to the ecosystem's food chain, which in turn sustains the large number of animals by giving them food, cover, and safety.In the study region, a wide variety of tree species produced snags or downed logs.Terminalia glaucescens, Pterocarpus erinaceus, and Vitelaria paradoxa are the most common species.Terminalia glaucescens has a high cavity size of 9 to 10.3 cm, while Pterocarpus erinaceus has a cavity size of 5 to 7.2 cm.The likelihood of a cavity being used increases with hollow depth and size at the level of individual cavities.This supports the claim put forth by Gibbons and Lindenmayer (2002) that birds that nest in cavities might have distinct preferences regarding the size, diameter, and volume of cavities in each tree.This is also consistent with the results of Koch et al. (2008b) and Gibbons et al. (2002) which show that the majority of the cavities used by vertebrates have an entrance diameter of at least 10 cm and a depth of at least 30 cm.Nonetheless, distinct species of arboreal marsupials exhibit distinct preferences when it comes to the cavities they choose.Mammals are more likely than cavity-nesting birds to engage in den-swapping behavior, which involves them routinely moving between denning sites and using many cavities on a regular basis.Accordingly, a decline in the variety and accessibility of cavities may result in a fall in the density of species that rely on them, which eventually may lead to the local extinction of these species (Gibbons and Lindenmayer, 2002).Snags and downed logs have been used in the research region to identify a large number of wildlife species.In addition to growing mushrooms and being colonized by insects, deadwood also attracts a variety of animal species, such as snakes, vinaceous doves, squirrels, francolin, grey hornbills, giant rats, and hares.These animals come to the area to take shelter, make nests, and get protection from predatory enemies.According to Stokland et al. (2012), a large number of both vertebrate and invertebrate species use snags, logs, and cavities in trees for breeding and other purposes.This bolsters the findings of Russell et al. (2006) and Nascimbene et al. (2013), who state that in addition to serving as the base of a food chain, snags offer microhabitats for a wide variety of living things, such as fungi, bryophytes, insects, birds, mammals, reptiles, and amphibians. 
It appears that dependency on cavities is as common in mammals as it is in birds, although the percentage of cavity users varies significantly between locations. Many wild animals in the study area, especially birds, use snags and fallen logs for a variety of purposes, including breeding, roosting, foraging, perching, and territorial displays. The findings show that the primary users of snags and downed logs in the research region are squirrels, followed by francolins and other birds. This is consistent with research by Bull and Holthausen (1993), which revealed that many bird and mammal species use tree cavities. These species include American martens (Martes americana) and Vaux's swifts (Chaetura vauxi), while mammals from squirrels (including flying squirrels) (Sciuridae) and several mainly tropical families like New World porcupines (Erethizontidae) and scaly-tailed squirrels (Anomaluridae) (MacDonald, 2001), to mice (McCay, 2000) and bears (Wong et al., 2004), also use tree cavities and logs for cover and nesting. CONCLUSION This study on downed logs and snags has provided information on the significance of fallen logs and snags as a habitat for wildlife. The study gives evidence that maintaining structural features like snags and dead wood has a major effect on the variety of species found in a woodland environment. Thus, to improve the conservation and protection of the species both locally and throughout Nigeria, it is imperative to establish effective monitoring of the changes in the tree community within the research zone. The study's findings lead to the following recommendations: promoting environmental protection and the need to preserve deadwood logs and snags is crucial; fallen logs and snags should be safeguarded and managed in a manner that keeps wild animal species from going extinct; people with broad interests in forests and the preservation of the natural environment will be able to learn more about the species that inhabit woodlands; and the removal of snags should be discouraged. Figure 1: Total number of wild animals that use snags and downed logs in FCWM. Table 1: Types of trees that formed snags/downed logs, their cavity size and percentage occurrence in FCWM. Table 2: Kinds of wild animals and the tree snags/downed logs in which they were found in FCWM. Table 3: Purposeful uses of snags and downed logs by wildlife in the FCWM area (%).
2024-02-17T16:15:10.118Z
2024-02-13T00:00:00.000
{ "year": 2024, "sha1": "51465df947afeddcdc80d33a671513ca9625bf0b", "oa_license": "CCBYNCSA", "oa_url": "https://www.ajol.info/index.php/jagrenv/article/view/264810/249908", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "18b7f427bb774b138e7363526b68611cec256260", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
221595043
pes2o/s2orc
v3-fos-license
Influence of Heat Treatment on the Microstructure and Corrosion Behavior of Thixo-cast Mg-Y-Nd-Zr

The influence of semisolid metal processing (SSM, also called thixoforming) and T6 heat treatment (HT) on the microstructure and corrosion behavior in chlorides of the Mg-Y-Nd-Zr (WE43B) magnesium alloy was investigated. The as-cast microstructure is composed of α-Mg grains with the size of 52.8 ± 1.9 μm surrounded by eutectic precipitations enriched in rare-earth elements (Y, Nd). The thixo-cast microstructure contained α-Mg globular grains with the size of 65.5 ± 2.1 μm surrounded by a fine eutectic mixture in the volume of 26.6%. The T6 HT (saturation at 525°C/5 h, cooling in H2O and aging at 190°C/48 h) caused an increase of yield strength to 180 MPa and tensile strength to 280 MPa at the hardness of 105 ± 4 HV5. Next, the electrochemical response was investigated in 0.1 M NaCl using the global and local LSV (linear sweep voltammetry) and EIS (electrochemical impedance spectroscopy) methods. The EIS method suggests the same mechanism for the processes occurring at the electrode/electrolyte interface and shows higher values of the polarization resistances of treated samples after 24-h immersion tests. In particular, better corrosion resistance in chlorides is observed in the alloy after SSM compared to the SSM/HT specimen, which has also been confirmed by the LSV tests performed after 24-h immersion. By using a local technique, a higher susceptibility of the matrix of the SSM and SSM/HT samples to pitting corrosion has been revealed.

Introduction

Magnesium belongs to the group of active metals with a relatively low electrochemical potential (E° = −2.4 V vs. SHE), indicating a very high tendency to undergo oxidation (Ref 1-5). This feature of pure Mg and its alloys results in electrochemical activity, which is the reason for susceptibility to corrosion in highly conductive environments, like water solutions. On the other hand, magnesium alloys are promising as structural materials due to their low weight and environmental friendliness (Ref 6, 7). Nevertheless, their application is still relatively limited because of their low ductility and strength. At present, studies are focused on developing new chemical compositions and their further optimization, as well as on new forming technologies (Ref 8, 9). Rokhlin et al. (Ref 10) indicated that the addition of rare-earth elements (RE), such as Gd, Y, Nd and mischmetal, leads to significant improvement in specific strength at both room and elevated temperatures, as well as in creep resistance. There are many types of conventional magnesium-forming technologies (e.g., casting or plastic deformation, like rolling or extrusion), which bring about an improvement in mechanical properties and, at the same time, have a significant impact on corrosion behavior (Ref 11-16). A new alternative to the above-mentioned forming methods is semisolid metal processing (SSM), which utilizes the thixotropic behavior of metal alloys containing 15-80% of liquid fraction. This is possible when the microstructure in the solidus-liquidus range consists of globular grains surrounded by a uniformly distributed liquid (Ref 17, 18). The generation of a globular morphology within the alloy microstructure is an essential step in all SSM technologies (Ref 19). Depending on the method employed, a different grain size and a different fraction of the liquid phase are obtained (Ref 17, 20).
In the case of Mg alloys, the SIMA (strain-induced melting-activated) (Ref 21-23) and RAP (recrystallization and partial remelting) (Ref 24) methods are most commonly used. Another familiar method is modification, in which the heterogeneous nucleation of solid solution grains is applied (Ref 25). The alloy composition and the forming technology affect the corrosion behavior (Ref 26-40). The corrosion properties are particularly important when the material can be used as an implant. It is well known that Mg alloys possess a Young's modulus similar to that of human bones; thus, they are promising materials for elements used in bone surgery. Because magnesium is nontoxic, its alloys can be used as temporary implants, which after some time dissolve without harming the human body. In addition, some Mg-RE (rare-earth element) alloys can be used as biomedical materials (Ref 44-46).

The scientific goal presented in this paper was to analyze how the SSM process (thixo-casting, thixoforming) and the T6 heat treatment influence the microstructure, mechanical properties and corrosion behavior of the WE43 magnesium alloy. This has been carried out due to a lack of knowledge about the possible use of the WE43 alloy as a biomedical material, where its properties (like mechanical strength and corrosion resistance) after SSM should be explained in more detail.

Material and Sample Preparation

A magnesium-based alloy with additives (4.2 wt.% Y; 2.5 wt.% Nd; and 0.53 wt.% Zr), which corresponds to the specification for Elektron WE43B (AMS 4427), was used in the present study. The alloy was supplied in the form of as-cast ingots by Magnesium Elektron Ltd. The semisolid metal processing (also called thixoforming) of the WE43B magnesium alloy was conducted using a specially built press prototype, as described in previous papers (Ref 19, 47). The ingot was heated to 625°C at a rate of about 70°C/min; this temperature corresponds to about 25% of the liquid phase. The pressing process was executed at a pressing force of 300 kN.

Analysis of Microstructure and Mechanical Properties

A light microscope (Nikon Eclipse L100) was used for the optical observations of the surface of the WE43 magnesium alloy, and also in combination with the electrochemical microcell technique to investigate local corrosion behavior. The microstructure was also examined with an FEI XL30 scanning electron microscope (SEM) working in back-scattered electron (BSE) or secondary electron (SE) modes. The chemical composition of selected microareas was established using the energy-dispersive x-ray spectroscopy (EDS) technique with an EDAX Apollo XP spectrometer. Five measurements were made for each phase, and the mean value and standard deviation were calculated. The phase analysis was carried out with a Philips PW 1710 diffractometer and Co Kα filtered radiation in scan mode over the 2θ range of 20-100° at an anode voltage of 40 kV and a current of 40 mA. The PDF-4+ crystallographic database (International Centre for Diffraction Data), Panalytical Data Viewer, HighScore Plus and Match programs were used to identify the phases. The tensile strength test was performed using an INSTRON 3382 machine in accordance with the PN-EN ISO 6892-1:2016-09 standard. The Vickers hardness was measured using a Zwick/ZHU 250 tester under a load of 5 kg in accordance with ASTM E92.
Electrochemical Measurement Methodology

Corrosion behavior studies were performed using a classical electrochemical three-electrode system (Ref 48, 49). The tested specimens acted as working electrodes with 0.5 cm² of exposed surface, the reference electrode was Ag/AgCl (containing 3 M KCl electrolyte), and a platinum plate worked as a counter electrode. All electrochemical tests were conducted in a 0.1 M NaCl water solution at room temperature (about 20°C). To investigate the corrosion behavior of the samples on the global scale, three basic electrochemical methods were employed: first, the open circuit potential (OCP) measurement during 24 h of specimen immersion; second, linear sweep voltammetry (LSV) with a sweep rate of 1 mV s⁻¹, starting from −2 V vs. Ag/AgCl and sweeping toward anodic potentials; and third, electrochemical impedance spectroscopy (EIS) at OCP (with a sinusoidal perturbation signal of 10 mV amplitude and a 10 kHz to 5 mHz frequency range). The above-mentioned electrochemical microcell technique (EMT) was also employed to investigate the local corrosion behavior (Ref 50-54). EMT allows electrochemical measurements to be performed on the microscale and local corrosion behavior to be characterized. This technique uses a glass microcapillary filled with an electrolyte (Fig. 1a). The microcapillary tip was sealed to the specimen surface with a layer of silicone rubber. The microcell (connected with the capillary) was mounted on a microscope for precise positioning of the microcapillary on the surface (Fig. 1b). The tip of the microcapillary used during the studies of the WE43 magnesium alloy was about 50 μm in diameter. The counter electrode was a platinum wire, and the reference electrode was Ag/AgCl (3 M KCl). The LSV electrochemical method was used during the local studies.

Microstructure and Mechanical Properties

The scanning electron microscopy analysis of as-cast WE43B (ingot) is presented in Fig. 2(a) and (b). Grains of α-Mg surrounded by the eutectic can be seen; the eutectic gives a light-gray diffuse contrast, while a strong bright contrast (3.1 vol.%) comes from the phases that solidified last. The circularity index of the solid solution grains was 0.72 ± 0.1, and the average size was 52.8 ± 1.9 μm. The chemical composition from the middle part of the grains (point 1, Table 1, Fig. 2a) confirmed the presence of a predominant amount of magnesium with a small amount of RE. The areas with the gray diffuse contrast occupied 19.3 vol.% of the image surface, and their concentration of RE elements was significantly higher (point 2, Fig. 2(a), Table 1), suggesting the segregation of RE-rich phases at grain boundaries and the formation of the eutectic (a morphology typical for this phase). The EDS analysis of the areas with white contrast (points 3-5 in Fig. 2, Table 1) confirmed the high content of Y and Nd, suggesting that the eutectic contains phases enriched in RE elements. The addition of RE elements, such as Gd and Nd, has a significant impact on the microstructure, leading to grain refinement by the heterogeneous nucleation of α-Mg. The Zr addition also results in the formation of Zr-rich particles that act as nuclei during solidification, which leads to grain refining. ZnZr2 precipitations on the grain boundaries limit microstructure coarsening (Ref 55, 56).
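The grain size, circularity index and volume fractions quoted above come from quantitative image analysis of the micrographs. As a rough illustration, the Python sketch below computes the equivalent-circle diameter and circularity (4πA/P², equal to 1 for a perfect circle) with scikit-image, assuming a binary mask of the α-Mg grains is already available; the segmentation step and all names are illustrative rather than taken from the paper.

```python
import numpy as np
from skimage import measure

def grain_descriptors(binary_mask, um_per_px):
    """Mean/std of equivalent-circle diameter and circularity per grain.

    binary_mask : 2D bool array, True inside alpha-Mg grains (assumed given).
    um_per_px   : image scale in micrometres per pixel (assumed known).
    """
    labels = measure.label(binary_mask)
    diam, circ = [], []
    for region in measure.regionprops(labels):
        area = region.area * um_per_px ** 2           # grain area, um^2
        perim = region.perimeter * um_per_px          # grain perimeter, um
        if perim == 0:                                # skip single-pixel specks
            continue
        diam.append(2.0 * np.sqrt(area / np.pi))      # equivalent-circle diameter
        circ.append(4.0 * np.pi * area / perim ** 2)  # 1.0 for a perfect circle
    return np.mean(diam), np.std(diam), np.mean(circ), np.std(circ)
```

The area fraction of the eutectic follows directly as the mean of a eutectic mask over the analyzed field.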
According to the binary diagrams developed by Massalski (Ref 57), it can be deduced that for the Mg-Nd, Mg-Gd and Mg-Y systems, the solubility of neodymium, gadolinium and yttrium in magnesium is about 4.2 wt.% at 535°C, 3.75 wt.% at 566°C and 11.4 wt.% at 567°C, respectively. This shows that these metals have limited solubility in the solid state, and the T6 heat treatment is required to dissolve the RE in the Mg matrix in order to improve the properties by precipitation hardening and solution hardening. Zr-enriched precipitations were present around the grain areas, limiting their coarsening (Ref 58). The x-ray analysis (Fig. 3a) confirmed the presence of the predominant α-Mg phase together with intermetallic phases, including Mg3Nd, located inside the eutectic areas, which lie at the grain boundaries.

The WE43B magnesium alloy ingot was used as the feedstock for semisolid processing to assess the influence of the technology on the microstructure and the mechanical and corrosion properties. As confirmed in several works (Ref 59, 60), SSM allows a unique structure to be obtained thanks to the specific conditions of the process, for example, high shearing during the flow of the alloy suspension, rapid cooling, pressure exerted during solidification and, consequently, the formation of a strongly nonequilibrium state of the material. In Fig. 4, the DSC curve (solid line) shows the endothermic effects which occurred during heating at a rate of 20 K/min. Additionally, the dashed double-dotted curve in Fig. 4 shows the calculated amount of the liquid phase as a function of temperature. The melting point is determined as the onset of the endothermic peak. The start of melting is set at the temperature at which the heat-flow curve departs from the tangent line; the end of melting is set by the onset point on the other side of the peak. The enthalpy (DSC peak area) of the melting process is −246.7 μVs/mg. The semisolid processing was carried out at 625°C, which corresponds to 25% of the liquid phase in accordance with the DSC analysis (the point marked in Fig. 4). The SEM-BSE microstructure of the thixo-cast WE43B revealed the presence of α-Mg globular grains with an average size of 65.5 ± 2.1 μm, surrounded by the eutectic mixture and secondary solid solution (Fig. 5a). Based on image analysis, the liquid fraction in the semisolid range (the area around the grains) amounted to 25.5 vol.%. The XRD analysis confirmed the presence of α-Mg, Mg24Y5 and Mg3Nd and, at lower content in comparison with the ingot structure, of Mg41Nd5, which partially dissolved in the magnesium matrix (Fig. 3b). The results of the EDS point analysis in the grain (point 1 in Fig. 5b) as well as in the eutectic (points 2 and 3 in Fig. 5b) are presented in Table 1. Significant microstructural changes in comparison with the WE43B ingot are visible. This is connected with the sample remaining in the semisolid state, which leads to diffusion-controlled phenomena, such as an increase in the average grain size as well as an increase in the concentration of RE elements in the grains (point 1 in Fig. 2(a) and (b) versus point 1 in Fig. 5(a), Table 1). This suggests that, after SSM, the alloy grains are partially saturated while the eutectic is refined through rapid cooling in the steel die. Additionally, small bright areas inside the grains, in the volume of 1.1%, are visible. The EDS analysis (point 4, Fig. 5, Table 1) confirmed their increased concentration of RE elements.
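The liquid-fraction curve in Fig. 4 is obtained by partial integration of the DSC melting endotherm: the fraction of liquid at a temperature T is the area under the baseline-corrected peak up to T, divided by the total peak area. Below is a minimal sketch of this calculation, assuming baseline-corrected heat-flow data; the onset/end temperatures and all names are illustrative, not values from the paper.

```python
import numpy as np

def liquid_fraction(temp_C, heat_flow, t_onset, t_end):
    """Cumulative melted fraction vs. temperature from a DSC heating scan.

    temp_C    : temperatures of the scan, monotonically increasing.
    heat_flow : baseline-corrected endothermic heat flow (>= 0 inside the peak).
    """
    inside = (temp_C >= t_onset) & (temp_C <= t_end)
    T, q = temp_C[inside], heat_flow[inside]
    # trapezoidal cumulative integral of q over T
    partial = np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(T))))
    return T, partial / partial[-1]      # 0 at the onset, 1 at the end of melting

# Reading off the processing temperature for ~25% liquid (illustrative limits):
# T, fL = liquid_fraction(temp, flow, t_onset=540.0, t_end=645.0)
# T_25 = np.interp(0.25, fL, T)          # should land near 625 C for WE43B
```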
According to references (Ref 24, 61), these bright areas are micro-eutectics, which formed inside the grains due to segregation phenomena. These regions of micro-eutectics decrease the effective liquid fraction present between the globules, thus negatively affecting the alloy flow behavior during semisolid processing. The T6 heat treatment of the thixo-cast samples was performed in the next stage of the study. The saturation temperature, aging temperature and process times were determined based on DSC results (not presented here) and our previous studies (Ref 47). The SEM microstructures after saturation at 525°C/5 h, cooling in H2O and aging at 190°C/48 h are presented in Fig. 6(a) and (b). Because of the high diffusion of elements at 525°C, the intermetallic phases almost completely dissolved into the magnesium solid solution (resulting in a decrease in their volume) with simultaneous coarsening of the α-Mg solid solution. Additionally, the SEM image with the EDS analysis revealed the chemical composition of the α-Mg (point 1 in Fig. 6b, Table 1) and of the eutectic in the form of bright precipitations (point 2 in Fig. 6b, Table 1). From comparison with the ingot as well as the thixo-cast chemistry, it can be seen that the α-Mg was enriched in RE elements. The x-ray analysis also confirmed the nearly complete dissolution of the Mg3Nd intermetallic phases after T6 (Fig. 3c), visible as the lack of the corresponding peak intensities (the minimum detection level is about 3 vol.%) in comparison with the precursor and the thixoformed sample. However, small peaks coming from Mg41Nd5 as well as Mg24Y5 are visible. Aging for 120 h at 190°C resulted in hardening of the alloy. It is generally accepted that for Mg-RE alloys aged isothermally at 120-260°C, the strengthening phases appear due to the presence of Y and Nd, which are mainly responsible for the precipitation of the metastable β′ phases on the prismatic planes of the α-Mg matrix (Ref 10, 62-64). This was also confirmed by the increase in the mechanical properties presented in the next paragraph.

Mechanical Properties

Tensile strength tests were carried out to determine the deformation behavior of the WE43B magnesium alloy after the various methods of processing. The stress-strain plots of the ingot (dash-dotted curve; feedstock for SSM), of the alloy directly after thixoforming (continuous curve) and after T6 (dashed curve) are presented in Fig. 7. The ingot had the lowest tensile strength of 153 MPa, at a hardness of 54 ± 3 HV and a plasticity of 12.5%. This was connected with the presence of the coarse eutectic and intermetallic components, and with the lack of nanoprecipitates in the α-Mg matrix. The thixo-cast WE43B revealed a plastic strain of 6% and a tensile strength of 186 MPa at a yield strength of 110 MPa. The hardness was 67 ± 3 HV. The relatively better mechanical properties result from refining of the eutectic by rapid quenching in the steel die. The sample contained 26.6% of liquid fraction during the SSM. Due to the fragmentation of the eutectic phase, the strengthening of the interglobular space was higher than that of the input material (ingot). Still better properties were achieved in the tensile tests of the samples after heat treatment (yield strength of 180 MPa and tensile strength of 280 MPa), owing to the formation of the nanoprecipitations responsible for strengthening. The increase in the YS value of the thixo-cast alloy after T6 (by 35%) can be connected with an increased number of β′ precipitates (due to the application of the solution treatment).

Corrosion Behavior

3.3.1 The Influence of SSM and T6 Heat Treatment on the Electrochemical Response.
In order to verify the influence of the SSM processing and the T6 heat treatment on the electrochemical behavior of the WE43B magnesium alloy, linear sweep voltammetry (LSV) was recorded after 24-h immersion of the samples in the 0.1 M NaCl water solution (Fig. 8). The electrochemical impedance spectroscopy (EIS) measurements (Nyquist and Bode plots are presented in Fig. 9(a), (b) and (c) for the as-cast, thixo-cast (SSM) and thixo-cast plus T6 heat-treated (SSM/HT) specimens, respectively) were obtained after immersion for 1, 3, 6 and 24 h in order to follow the course of the corrosion processes on the surfaces of the tested samples. These measurements were made using a sinusoidal signal with an amplitude of 10 mV in the frequency range from 10 kHz to 5 mHz. The scattered points are the measurement results, and the solid lines show the fitted calculations, the results of which are collected in Table 3. In Fig. 8, the global potentiodynamic polarization curves (1 mV/s scan rate) recorded after 24-h immersion of the tested specimens in the 0.1 M NaCl solution are presented. The electrochemical parameters (Ecorr and Icorr) are given in Table 2. These results show that all three specimens undergo corrosion; nevertheless, thixo-casting has a strong positive effect and slows down the corrosion rate of the WE43 magnesium alloy. The current density values are much lower for the specimens after SSM and SSM/HT (0.012 mA/cm² and 0.032 mA/cm², respectively) compared to the initial state (the ingot has 0.145 mA/cm²), which indicates a significant improvement in the corrosion behavior. However, the T6 HT slightly worsens the corrosion parameters relative to the thixo-cast specimen. This will be discussed in the next paragraphs. As can be seen in Fig. 9(a), (b) and (c), the EIS diagrams are composed of two time constants in the first period of immersion. Later, the electrochemical behavior changes and one time constant remains. The impedance plots have been fitted using the equivalent circuits A (for two time constants) and B (for one time constant) shown in Fig. 10. The numerical data from the EIS simulations are included in Table 3. Fitting the impedance diagrams allows the determination of the electrolyte resistance (Rel); the polarization resistance/charge-transfer resistance (R1), which is a measure of the corrosion rate (the resistance of the oxide layer or corrosion products which cover the surface of the specimen); the constant-phase-element properties (CPE1) associated with the double layer (in the present investigations, the capacitance C was replaced with the element CPE, whose impedance is Z = 1/[Y0(jω)^u], which improved the simulation of the impedance spectra); and a second constant phase element (CPE2), associated with the layer of corrosion products on the surface. In the first period of immersion, the EIS response shows two time constants (capacitive loops), where the first peak observed in the Bode diagram (in the high-frequency range) is related to the charge-transfer resistance, which can be used to determine the corrosion rate (Ref 65-67). The second time constant is visible in the lower frequency range, indicating slow processes (like the diffusion of Mg²⁺ ions through the porous corrosion product layer) occurring at the electrode/electrolyte interface. The second time constant is present until the corrosion processes (substrate dissolution) have started, preceded by dissolution of the layer of corrosion products. It disappears after 3 h of immersion for both the as-cast and thixo-cast specimens and after 1 h of immersion for the thixo-cast/T6 specimen.
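For readers who want to reproduce this kind of fit, the impedance of a constant phase element and of a two-time-constant circuit can be written down directly. The sketch below assumes the simplest series arrangement of two parallel R||CPE branches after the electrolyte resistance; the exact topology of circuits A and B is defined in Fig. 10 of the paper and may differ, and all parameter values are illustrative.

```python
import numpy as np

def z_cpe(f, Y0, u):
    """Constant phase element: Z = 1 / (Y0 * (j*2*pi*f)**u)."""
    return 1.0 / (Y0 * (2j * np.pi * f) ** u)

def z_two_time_constants(f, R_el, R1, Y1, u1, R2, Y2, u2):
    """R_el in series with two parallel R||CPE branches (circuit-A-like)."""
    z1 = 1.0 / (1.0 / R1 + 1.0 / z_cpe(f, Y1, u1))   # charge transfer // double layer
    z2 = 1.0 / (1.0 / R2 + 1.0 / z_cpe(f, Y2, u2))   # corrosion product layer
    return R_el + z1 + z2

f = np.logspace(4, np.log10(5e-3), 60)               # 10 kHz -> 5 mHz, as measured
Z = z_two_time_constants(f, R_el=50, R1=1100, Y1=2e-5, u1=0.9,
                         R2=600, Y2=1e-4, u2=0.4)
# Nyquist plot coordinates: Z.real vs -Z.imag; a complex nonlinear least-squares
# fit of this model to the measured spectrum yields entries like those in Table 3.
```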
An inductive loop is observed in the EIS diagrams in the lowest frequency range, indicating the adsorption of corrosion products induced by pitting and dissolution processes (breakdown/dissolution of the corrosion product layer) (Ref 68). As can be seen in Table 3, the EIS calculations revealed that for the ingot sample the resistance R1 reached 355 Ω cm² after 24-h immersion. The SSM process (before heat treatment), which modifies the structure of the WE43B alloy and improves the mechanical properties, also increases its corrosion resistance: the R1 value for the thixo-cast sample rose to 1101 Ω cm². After the same immersion time, the corrosion resistance was lower for the thixo-T6 alloy and reached 586 Ω cm². Moreover, the drop of the R1 value was observed earlier (after 1 h) than for the ingot and thixo-cast specimens. This indicates a shorter time of immunity of the thixo-T6 specimen in chlorides. There are also variations in the CPE1 value observed for the tested samples during the 24-h electrochemical test. In the first period of immersion, CPE1 does not change significantly. However, in the case of the ingot specimen, a slight decrease in the CPE1 value during the first 6 h is seen, which can be related to a thick corrosion product film forming on the surface. After 24 h, there is a growth in the CPE1 value for the as-cast specimen, which reaches 3.6 × 10⁻⁵ Ω⁻¹ cm⁻² s^u, the highest value among the tested specimens. The highest CPE1 values indicate that the corrosion product film and the substrate are strongly dissolved. After 24-h immersion, the lowest value of CPE1 was registered for the thixo-cast specimen, suggesting that there are fewer corrosion products which may be dissolved. This is related to the previously discussed chemical composition of the matrix and to the precipitation network of the second phase (eutectic), which is wide and causes the separation of grains. The wide eutectic network and the matrix enriched in RE are the reasons that the specimen is less electrochemically active; thus, it is less covered by corrosion products. The exponent CPE1-u values are close to 1 and suggest capacitive properties of the oxide layer on all three samples. After 24 h, the CPE1-u values become slightly lower for all specimens, which is related to the formation of a thicker layer of corrosion products. This suggests that the character gradually changes from capacitive to diffusive. The highest values of CPE2 for the as-cast specimen suggest the thickest MgO layer and/or the widest distribution of its surface area. Moreover, the exponent CPE2-u values are between 0 and 0.5, which suggests a resistive/diffusive character. It is well known that MgO exhibits semiconductive properties.

3.3.2 Mechanism of Corrosion in the WE43 Alloy After SSM and SSM/T6 Heat Treatment.

The optical images of cross sections are shown in Fig. 11, together with corroded parts of the surfaces of the tested samples after 24-h immersion in the electrolyte. Deep damage can be observed in the matrix of the as-cast specimen after 24-h immersion. The precipitates seem to be untouched, and the dissolution processes bypass them. This is a well-known effect in nonferrous metallic alloys, where the precipitates act as local cathodes that host the cathodic reactions on the surface, while the surrounding matrix dissolves (Ref 69).
Moreover, the second-phase precipitations are resistant to chlorides, as can be observed, for example, in the AZ91 magnesium alloy; thus, they do not undergo dissolution (Ref 39, 70). The thixo-cast sample is presented analogously in Fig. 11(b). Because of the presence of the eutectic, which covers a relatively large area of this specimen and appears as a continuous network, the corrosion penetration is not as deep as in the previous case. It can be concluded that large eutectic precipitations inhibit the progress of dissolution. A similar effect was observed in the AZ91 magnesium alloy (Ref 33). The T6 heat treatment led to microstructural changes, which affected the corrosion behavior. It is visible in Fig. 11(c) that the surface of the thixo-cast/T6 specimen is also "ragged" after 24-h immersion in the electrolyte. The above-mentioned electrochemical measurements revealed a slightly detrimental effect of the T6 HT on the corrosion rate. A dual influence of the microstructure can be observed. First, it is related to the chemical composition of the matrix, which is enriched in RE elements (Fig. 6, Table 1). This means that the dissolution of the matrix is shallower than in the previous cases. Moreover, the eutectic mixture remnants are much smaller and finer after HT, which leads to a more homogeneous surface of the thixo-cast/T6 specimen. On the other hand, the discontinuous second-phase precipitations (eutectic remnants) promote the corrosion processes in magnesium alloys.

Fig. 11 Cross-section optical images of the surface (corroded areas) after 24-h immersion in the electrolyte: (a) ingot specimen, (b) thixo-cast, (c) thixo-cast after heat treatment.

The SEM images (1000× magnification) together with the EDS analysis results (Table 4) are shown in Fig. 12 for all three tested specimens after 24-h immersion in the electrolyte. It can be seen that significantly higher amounts of oxygen were registered for all three tested specimens. This is related to the oxidation of the alloy surface, which has been explained above. It should be noted that the lowest value of oxygen was registered for the thixo-cast specimen. As the intermetallic phases present in the alloy are enriched in RE (SEM analysis; Fig. 2, 5, 6 and 12), they exhibit a strongly cathodic character relative to the matrix (which is anodic). This corrosion mechanism (galvanic corrosion) is very well known and occurs in many ferrous and nonferrous metallic alloys in high-conductivity environments (for example, NaCl water solution). The electrochemical response strongly depends on the chemical composition, size and distribution of the precipitates of the intermetallic phases. Thus, the black areas surrounding the precipitates (visible in the SEM images after the corrosion tests, Fig. 12) confirm the presence of crevices at the matrix/precipitate interface resulting from galvanic corrosion and strong electrochemical reactions in these regions. In order to support the above observations and discussion, a local technique based on microcapillaries (EMT) has been employed. The EMT studies were conducted after mechanical polishing of the surface of the tested specimens. In Fig. 13(a), the local polarization curves (1 mV s⁻¹ scan rate) obtained by using EMT (with a microcapillary of about 50 μm in diameter, Fig. 1) are presented. LSV was recorded in the matrix of the ingot (black curve), of the thixo-cast (blue curve) and of the thixo-cast after T6 HT (magenta curve) (Fig. 13b, c and d, respectively).
It is worth noting that, in order to obtain the characteristics of the local corrosion behavior in the matrix of the alloy under the different treatment conditions, the local LSV curves need to be repeated a few times. The results revealed differences in the corrosion behavior of the three tested specimens, as can be seen in Fig. 13(a). The electrochemical response from the matrix of the ingot specimen shows a more heterogeneous behavior of the α-Mg phase in this sample, as confirmed by less reproducible results. Some places have lower current densities, while others have higher values in both the cathodic and anodic ranges. The same observations can be made when the corrosion potential is considered. There are some sites where the corrosion potential is much lower, while others exhibit values similar to those registered for the thixo-cast and thixo-T6 specimens. This indicates nonhomogeneous corrosion behavior, which results from the nonequilibrium state of the ingot's microstructure. In addition, the anodic current densities are higher compared to the thixo-cast and thixo-T6 specimens, which confirms the previous observations (see the discussion of the EIS results), i.e., more intense anodic (oxidation) reactions in the matrix of the ingot. Thus, it can be concluded that the intensity of the oxidation (corrosion) increases: a thicker corrosion product layer is formed on the sample surface, which, in turn, gives a higher corrosion resistance in the first period after immersion. (Note that no breakdown potential is observed in the case of the ingot's matrix, contrary to the thixo-cast and thixo-T6 specimens.) In the case of the thixo-cast specimen, the local electrochemical response shows, first of all, more homogeneous corrosion behavior. The current densities and corrosion potentials registered at different sites of the matrix are almost the same. The much lower anodic current densities suggest a lower rate of the corrosion processes in the thixo-cast matrix; however, this results in a thinner corrosion product layer, which causes a shorter time of immunity in the first period after immersion. The course of the local LSV of the matrix of the thixo-T6 specimen shows electrochemical behavior similar to that observed in the thixo-cast specimen. It is worth noting that the presence of small precipitates enriched in RE in both the thixo and thixo-T6 matrices can be responsible for a higher susceptibility to pitting compared to the ingot's matrix (indicated by the white arrows in Fig. 5 and 6). Analogously to the previous experiments, the local polarization curves recorded at sites with precipitates of intermetallic phases are shown in Fig. 14(a). A higher local electrochemical activity of the ingot specimen can be observed, where a lower corrosion potential and higher anodic currents were registered. The results also indicate heterogeneous corrosion behavior that depends in particular on the size of the precipitate. In detail, higher corrosion potentials and lower anodic currents are seen in the case of the thixo and thixo-T6 specimens, showing their lower local electrochemical activity. Moreover, the measured values are more homogeneous and reproducible. It is worth noting that the breakdown potential is visible in all cases. It can be concluded that the SSM and SSM/HT processes result in lower electrochemical activity both in the matrix and at sites with precipitates (lower anodic currents). A simplified model of the corrosion processes is presented in Fig. 15.
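The reaction equations referred to as (1)-(4) in the following paragraph appear only in Fig. 15. For reference, the standard scheme for Mg corrosion in aqueous chloride media, consistent with the description in the text (the numbering is an assumption, not taken from the figure), is:

MgO + H2O → Mg(OH)2 (1) (hydration of the native oxide film)
Mg → Mg²⁺ + 2e⁻ (2) (anodic dissolution)
2H2O + 2e⁻ → H2 + 2OH⁻ (3) (cathodic hydrogen evolution)
Mg(OH)2 + 2Cl⁻ → MgCl2 + 2OH⁻ (4) (local breakdown of the hydroxide layer by chlorides)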
The magnesium oxide covers the surface of the alloy in the initial state (just before immersion, Fig. 15a). When the specimen comes into contact with the electrolyte, magnesium hydroxide is formed on the surface according to chemical reaction (1). Simultaneously, the electrochemical reactions (2) and (3) occur (Fig. 15b). Two time constants are observed in the EIS diagrams, where the first (in the high-frequency range) is associated with the charge-transfer resistance of the Mg²⁺ ions. The second time constant is attributed to the diffusion of Mg²⁺ through the corrosion product layer of Mg(OH)2 formed on the surface. This process continues until the Mg(OH)2 is dissolved somewhere (reaction (4)), and the corrosion processes then start in the weakest regions, usually at sites where the intermetallic phases are present. This has been proved by the observations and the local LSV (Fig. 11, 12, 13 and 14). Therefore, as the layer of corrosion products becomes thicker, the time needed to initiate the corrosion processes increases. The thickness of the corrosion products formed on the Mg alloy surface is strongly dependent on the microstructure, as was explained above by the experiments and observations.

Conclusions

The proposed conditions of the SSM process and T6 heat treatment conducted on the WE43 magnesium alloy allowed a material to be obtained with improved mechanical parameters and corrosion resistance in a chloride environment (0.1 M NaCl). The thixoforming procedure (at a temperature of 625°C, which corresponds to about 25% of the liquid phase) induced microstructural changes: next to the α-Mg phase in the form of globular grains with a size of 65.5 ± 2.1 μm, a wide eutectic mixture is present in the volume of 26.6 vol.%. The T6 heat treatment of the thixo-cast specimen (saturation at 525°C/5 h, cooling in H2O, and aging at 190°C/48 h) caused an increase in yield strength to 180 MPa and in tensile strength to 280 MPa at a hardness of 105 ± 4 HV5. Both SSM and T6 HT mean that the time for the degradation processes to start in chlorides is slightly shortened compared to the as-cast condition. On the other hand, SSM processing led to changes in the microstructure (a net-shaped eutectic appears) which slow down the degradation processes (inhibiting the corrosion processes that are already running). The T6 heat treatment slightly worsens the corrosion resistance of the WE43 magnesium alloy compared with the specimen directly after SSM. The mechanism of the corrosion processes seems to be the same in all three cases; however, the initiation of corrosion, the electrochemical activity and the degradation rates depend significantly on the microstructure.
2020-09-11T14:14:02.559Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "37254178f59fa2127924a585043ad85de8da4305", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11665-020-05085-1.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "37254178f59fa2127924a585043ad85de8da4305", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
226205332
pes2o/s2orc
v3-fos-license
Effect of plasma polyunsaturated fatty acid levels on leukocyte telomere lengths in the Singaporean Chinese population

Background: Shorter telomere length (TL) has been associated with poor health behaviors, increased risks of chronic diseases and early mortality. Excessive shortening of telomeres is a marker of accelerated aging and can be influenced by oxidative stress and nutritional deficiency. The plasma n6:n3 polyunsaturated fatty acid (PUFA) ratio may impact cell aging. Increased dietary intake of marine n-3 PUFA is associated with reduced telomere attrition. However, the effect of plasma PUFA on leukocyte telomere length (LTL) and its interaction with genetic variants are not well established. Methods: A nested coronary artery disease (CAD) case-control study comprising 711 cases and 638 controls was conducted within the Singapore Chinese Health Study (SCHS). Samples were genotyped with the Illumina ZhongHua-8 array. Plasma n-3 and n-6 PUFA were quantified using mass spectrometry (MS). LTL was measured with a quantitative PCR method. Linear regression was used to test the association between PUFA and LTL. The interaction between plasma PUFAs and genetic variants was assessed by introducing an additional term (PUFA × genetic variant) in the regression model. Analysis was carried out in cases and controls separately and subsequently meta-analyzed using the inverse-variance weighted method. We further assessed the association of PUFA and LTL with CAD risk by the Cox proportional-hazards model and tested whether the effect of PUFA on CAD was mediated through LTL by using structural equation modeling. Results: A higher n6:n3 ratio was significantly associated with shorter LTL (p = 0.018) and increased CAD risk (p = 0.005). These associations were mainly driven by plasma total n-3 PUFAs, especially eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) (p < 0.05). There was a statistically significant interaction for an intergenic single nucleotide polymorphism (SNP), rs529143, with plasma total n-3 PUFA and DHA on LTL beyond the genome-wide threshold (p < 5 × 10⁻⁸). Mediation analysis showed that PUFA and LTL affected CAD risk independently. Conclusions: A higher plasma n6:n3 PUFA ratio, and lower EPA and DHA n-3 PUFAs, were associated with shorter LTL and increased CAD risk in this Chinese population. Furthermore, genetic variants may modify the effect of PUFAs on LTL. PUFA and LTL had independent effects on CAD risk in our study population.

Introduction

Telomeres are complexes at the ends of eukaryotic chromosomes, which consist of tandem-repeat DNA sequences ((TTAGGG)n in humans) and associated proteins [1]. Telomeres protect the genome from degradation, interchromosomal fusion, unnecessary recombination and being recognized as a double-strand break by DNA repair proteins [2]. Telomeres shorten progressively during cell divisions and eventually result in cellular senescence or apoptosis when telomere length (TL) reaches a critical limit [3]. A growing number of epidemiologic and clinical studies have shown that TL is associated with chronic diseases, including cancer [4], osteoporosis [5] and cardiovascular diseases [6]. These observations have led to telomere length being proposed as an important marker of biological age, independent of chronological age [7], and as a prognostic marker of chronic disease risk, progression and premature mortality [8, 9]. Dietary intake is a significant contributor to determining cellular TL.
Intakes of both omega-3 (n-3) and omega-6 (n-6) polyunsaturated fatty acids (PUFAs) can influence inflammation [10], which may affect the telomere attrition rate both in vitro [11] and in vivo [12]. A higher n-3 PUFA concentration in plasma may have an anti-inflammatory effect [13], while n-6 PUFA shows pro-inflammatory and pro-thrombotic potential through the synthesis of oxidized metabolites [14, 15]. There is competition between n-3 and n-6 PUFA for desaturation and elongation enzymes. The ratio of plasma n-6 to n-3 PUFA (n6:n3 ratio) may hence contribute to the inflammatory profile and health status of an individual [16]. Genetic studies have identified several loci associated with leukocyte telomere length (LTL) [17, 18]. However, the contribution of these variants, even in combination, to the overall heritability of LTL is modest. Interaction between genes and lifestyle factors may also contribute to LTL heritability. The aims of this study were to investigate the association between plasma PUFA levels and LTL in the Chinese population and to evaluate genetic variants that may modify this effect.

Study population

The Singapore Chinese Health Study (SCHS) is a long-term population-based prospective cohort study focused on dietary, genetic and environmental determinants of cancer and other chronic diseases in Singapore [19]. From April 1993 to December 1998, a total of 63,257 Chinese individuals (Hokkien or Cantonese dialect group) aged 45-75 years were recruited. At recruitment, all the study subjects were interviewed in person by an interviewer with a structured questionnaire. From April 1994, a total of 28,439 participants donated blood specimens. The study was approved by the Institutional Review Boards of the National University of Singapore and the University of Minnesota, and all study subjects gave written informed consent. The current study was conducted in a coronary artery disease (CAD) case-control study nested within the SCHS, including 744 incident acute myocardial infarction (AMI) cases and 744 matched controls. Both cases and controls were SCHS participants who had donated blood specimens and were without a prior history of CAD or stroke at the time of blood collection. The cases were incident nonfatal or fatal AMI that occurred during follow-up, from the blood draw through December 31, 2010. The controls were alive and free of CAD at the time of the AMI diagnosis or death of the index case. The matching criteria included gender, dialect group (Hokkien, Cantonese), date of birth (±5 years), date of recruitment (±2.5 years) and date of blood collection (±6 months) [20].

Measurement of leukocyte telomere length

DNA of the SCHS study subjects was extracted from peripheral blood collected prior to CAD events, using QIAamp DNA Blood kits (Qiagen, Valencia, CA). Relative LTL was measured using a validated monochrome multiplex quantitative PCR (qPCR) method [21]. This method expresses LTL as the ratio (T/S) of the telomere repeat signal (T) to the copy number of a single-copy gene, albumin (S), relative to a reference sample. The LTL of each sample was measured in duplicate, and the average T/S ratio was used for subsequent analyses. A detailed description of the LTL measurement in the SCHS, including standard curve generation, PCR conditions and coefficients of variation, was published previously [18].
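As an illustration of how a T/S ratio can be derived from qPCR readouts, the sketch below uses the simple ΔΔCt approximation with an assumed 100% PCR efficiency. The published monochrome multiplex protocol instead reads the T and S signals off per-plate standard curves, so this is a simplified stand-in rather than the study's actual pipeline, and all names are illustrative.

```python
def relative_tl(ct_tel, ct_alb, ct_tel_ref, ct_alb_ref):
    """Relative telomere length as a T/S ratio, normalised to a reference DNA.

    ct_tel, ct_alb : Ct values of the telomere and albumin (single-copy gene)
                     amplifications for the sample (means of duplicates).
    *_ref          : the same quantities for the common reference sample.
    Assumes ~100% PCR efficiency for both amplicons (a simplification).
    """
    d_ct_sample = ct_tel - ct_alb       # delta-Ct for the sample
    d_ct_ref = ct_tel_ref - ct_alb_ref  # delta-Ct for the reference
    return 2.0 ** (-(d_ct_sample - d_ct_ref))   # T/S = 1.0 for the reference
```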
Measurement of plasma PUFA

Plasma n-3 and n-6 PUFA were quantified from baseline specimens collected prior to CAD events, in a targeted mode using gas chromatography-tandem mass spectrometry (GC-MS/MS) on an Agilent 7890 GC system (Shanghai, China) equipped with a G7000B QQQ triple-quadrupole mass detector and an automatic sample injector. The free and esterified (triglycerides, phospholipids, cholesterol esters) FA fractions were measured in total. Samples were analyzed in 76 batches, with cases and matched controls included in the same batch. Pooled human plasma was used for quality control (QC). The experimental details and the coefficients of variation of the measured FAs were published elsewhere [20].

Genotyping and imputation

Study samples were genotyped on the Illumina HumanOmni ZhongHua-8 BeadChip. After QC procedures [20, 22-24], 711 cases and 638 controls with complete information for both genotypes and plasma PUFA measurements were included in the current study. Imputation of additional autosomal single nucleotide polymorphisms (SNPs) was performed with IMPUTE2 [25], and genotype calls were based on the phase 3 1000 Genomes cosmopolitan panels.

Statistical method

The main demographic and clinical characteristics of the study subjects were compared between CAD cases and controls. Normally distributed quantitative traits, including age, LTL, total plasma n-6 PUFA, linoleic acid (LA) and arachidonic acid (AA), were presented as mean ± SD (standard deviation), and differences in means between cases and controls were compared by t-test. Non-normally distributed variables, including the n6:n3 ratio, total plasma n-3 PUFA, α-linolenic acid (ALA), eicosapentaenoic acid (EPA), docosahexaenoic acid (DHA), γ-linolenic acid (GLA) and dihomo-γ-linolenic acid (DGLA), were presented as median with interquartile range, and differences between groups were determined by the Mann-Whitney U test. Categorical variables, including gender and SNP genotypes, were presented as numbers of individuals, and differences in their frequencies between groups were determined by Pearson's χ² test, which was also used for checking significant departures of genotype frequencies from Hardy-Weinberg expectations (HWE). Linear regression was used to investigate the main associations of LTL with plasma PUFA and SNPs. The Cox proportional-hazards model was used to assess the associations of PUFA and LTL with CAD risk, with age, gender and the first three principal components (PCs) included as covariates. Mediation analysis was conducted using structural equation modeling (SEM) to assess whether the effect of plasma PUFA on CAD risk was mediated through LTL. Non-normally distributed variables were normalized by z-score transformation. Genome-wide interaction analyses were also performed using linear regression by additionally introducing the interaction term (plasma PUFA × SNP), with PUFA and SNP included as covariates in the same regression model. The analysis was first carried out in cases and controls separately and subsequently meta-analyzed using the fixed-effects inverse-variance weighted method. Cochran's Q test was used to measure heterogeneity, and a Q p-value cut-off of < 0.05 was used to identify SNPs with between-study heterogeneity [26]. The genome-wide interaction analysis was carried out using an additive model in ProbABEL [27]. Common SNPs with a minor allele frequency (MAF) above 3% were included in the current study.
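To make the interaction and meta-analysis steps concrete, here is a minimal Python sketch of the stratified interaction model and the fixed-effects inverse-variance-weighted combination of the two strata; the column names, covariates and SNP coding are illustrative, not the study's exact specification.

```python
import numpy as np
import statsmodels.formula.api as smf

def fit_interaction(df):
    """OLS of LTL on PUFA, SNP dosage (0/1/2) and their product, plus covariates."""
    m = smf.ols("ltl ~ pufa * snp + age + sex + pc1 + pc2 + pc3", data=df).fit()
    return m.params["pufa:snp"], m.bse["pufa:snp"]   # interaction beta and SE

def ivw_meta(betas, ses):
    """Fixed-effects inverse-variance-weighted meta-analysis of stratum estimates."""
    w = 1.0 / np.asarray(ses) ** 2
    beta = np.sum(w * np.asarray(betas)) / np.sum(w)
    return beta, np.sqrt(1.0 / np.sum(w))            # z = beta/se gives the p-value

# b1, s1 = fit_interaction(df_cases)      # fitted separately in cases ...
# b2, s2 = fit_interaction(df_controls)   # ... and in controls
# beta_meta, se_meta = ivw_meta([b1, b2], [s1, s2])
```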
All other statistical analyses were carried out using STATA 15.0 (StataCorp, College Station, TX), and a 5% type I error was set to indicate statistical significance (two-tailed) in all analyses.

Results

The main demographic characteristics of the study subjects are presented in Table 1. Cases had significantly lower plasma total n-3 PUFA levels (p = 0.013), EPA levels (p = 0.002) and DHA levels (p = 0.020) compared to controls. No significant difference was observed between cases and controls for age, gender, LTL, plasma n6:n3 ratio, ALA, total n-6 PUFA and the n-6 PUFA subtypes.

Association between plasma PUFA and LTL

A higher plasma n6:n3 ratio was significantly associated with shorter LTL (T/S ratio) (β = −0.015, p = 0.018, Table 2). When analyzing the individual effects of plasma n-3 and n-6 PUFA on LTL, only n-3 PUFA showed a significant association; each 1-SD increase in total n-3 PUFA was associated with a 0.014 increase in relative LTL (β = 0.014, SE = 0.006, p = 0.024). We further analyzed the associations between specific n-3 and n-6 PUFA subtypes and LTL. Both EPA and DHA showed significant associations with LTL, while ALA did not. Each 1-SD increase of EPA and DHA was associated with a 0.016 (β = 0.016, SE = 0.006, p = 0.011) and 0.015 (β = 0.015, SE = 0.006, p = 0.017) longer relative LTL, respectively (Table 2). The n-6 PUFA subtypes were not associated with LTL (Table 2).

Association between LTL, plasma PUFA and CAD

A higher plasma n6:n3 ratio was significantly associated with increased CAD risk [HR (95% CI) = 1.114 (1.034, 1.200), p = 0.005, Table 3]. When analyzing the individual effects of plasma n-3 and n-6 PUFA on CAD risk, n-3 PUFA showed a significant protective effect [HR (95% CI) = 0.885 (0.820, 0.955), p = 0.002] but n-6 PUFA did not (Table 3). We further analyzed the associations between specific n-3/n-6 PUFAs and CAD risk; both EPA and DHA showed significant associations with decreased CAD risk (Table 3). The n-6 PUFA subtypes were not associated with CAD (Table 3). We also observed that longer LTL had a protective effect on CAD risk [HR (95% CI) = 0.664 (0.481, 0.917), p = 0.013, Table 3]. Additionally, we evaluated whether the effect of PUFA on CAD was mediated by LTL but did not find strong evidence for this in our dataset (Supplemental Table 4).

Interaction between genetic variants and plasma PUFA on LTL

In the assessment of the interaction between plasma PUFAs and genetic variants on LTL, an intergenic SNP, rs529143, was found to modify the effect of plasma n-3 PUFA and DHA on LTL in both CAD cases and controls. After meta-analysis, the interaction reached the genome-wide level of significance (Table 4, Supplemental Table 1). Although the main effect of rs529143 on LTL was not significant (p = 0.252, Supplemental Table 2), an interaction existed between rs529143 and n-3 PUFA on LTL. Stratification by tertiles of plasma n-3 PUFA levels indicated that individuals carrying the minor C allele had shorter LTL in the lower tertile group, while those in the higher tertile group had longer LTL (Fig. 1). Similar results were observed for the interaction between DHA and rs529143: minor CC homozygous subjects had shorter LTL in the lower plasma DHA tertile group and longer LTL in the higher tertile group (Fig. 2). We further tested whether rs529143 interacted with dietary intake of PUFAs to affect LTL in the extended SCHS dataset with complete information for both genotype and diet (N = 21,828), but no significant interaction was detected (Supplemental Table 3).
Discussion

In this prospective nested case-control study of Singaporean Chinese, we observed an inverse association of the plasma n6:n3 ratio with LTL and a positive association with CAD risk. The association was driven by total plasma n-3 but not n-6 PUFA. When studying the associations between specific n-3 PUFAs and LTL, higher plasma levels of both EPA and DHA were associated with longer LTL and decreased CAD risk. However, the effects of PUFA and LTL on CAD risk were independent in our study population. We further found a genome-wide interaction between an intergenic variant, rs529143, and n-3 PUFA as well as DHA on LTL. To the best of our knowledge, our study represents the first investigation of the effect of plasma PUFA on LTL and its interaction with genetic variants in a Chinese population.

Studies of the association of PUFA, either dietary or in plasma, with LTL have largely shown inconsistent results. Most have found telomeric attrition to be attenuated by higher plasma n-3 PUFA levels or increased marine n-3 intake [28, 29], which is consistent with the finding that plasma n-3 PUFA concentration is associated with low pro-inflammatory markers and high anti-inflammatory markers [30]. In contrast, a large cross-sectional study comprising the controls of the Nurses' Health Study found no association between n-3 PUFA and LTL. Instead, the study reported increased n-6 PUFA intake, specifically LA intake, to be inversely associated with LTL [31]. In our study, LTL was significantly associated with plasma n-3 but not n-6 levels. One possible explanation for such discrepancies may be the varied dietary intakes (and possibly other lifestyle or environmental factors) between the study populations, which may impact LTL attrition rates. Additionally, the ratio of plasma n6:n3 levels has not been evaluated extensively in these previous studies of LTL associations. In a randomized controlled trial, there were no significant differences in LTL changes among the groups receiving n-3 supplementation; however, an increase of LTL was observed with decreasing n6:n3 ratio [10]. N-3 and n-6 PUFAs compete for key enzymatic pathways, and thus their relative balance is of health interest [16]. A higher plasma n6:n3 ratio has been associated with higher inflammatory markers, such as TNF-α and IL-6 [32]. Oxidative stress and inflammation may result in LTL attrition [33]. These data, together with the finding in our study, suggest that rather than just considering the absolute amount of n-3 or n-6 PUFA individually, the background n6:n3 ratio should also be taken into account in clinical studies and in the evaluation of nutritional interventions. When we tested the associations between specific n-3 FAs and LTL, we observed significant associations for EPA and DHA but not ALA. Although ALA can be converted to EPA and DHA, the conversion process is inefficient in humans. A previous study showed that the same dosages of ALA produced physiological responses different from those of EPA and DHA in decreasing risk factors for metabolic syndrome, while the physiological responses to EPA and DHA were similar. This result strongly suggests that ALA exerts its own independent effects in metabolic syndrome [34]. A randomized double-blind nutritional intervention study also showed that ALA has effects on cardiovascular risk markers in healthy elderly subjects different from those of EPA and DHA [35].
Previous studies have shown an inverse association between long-chain n-3 PUFAs and CAD risk [36], while adipose tissue AA, an n-6 PUFA, was associated with a higher risk of AMI [37, 38]. Similar findings on the relationship between plasma PUFA and CAD in the SCHS have been reported previously [20]. In this SCHS data subset with genetic information, a higher plasma n6:n3 ratio was associated with shorter LTL and increased CAD risk. The association was driven mainly by total plasma n-3 but not n-6 PUFA, especially EPA and DHA. Since PUFA and TL are both related to oxidative stress and inflammation, which contribute significantly to the pathogenesis of CAD, we investigated whether the effect of PUFA on CAD is mediated through LTL. However, we did not find sufficiently strong evidence for this (Supplemental Table 4), and it is likely that PUFA and LTL have independent effects on CAD risk. Our interaction analysis indicated that an intergenic SNP, rs529143, could modify the association between n-3 PUFA/DHA and LTL. Carriers of the minor C allele with low n-3 PUFA/DHA (lower tertile) had shorter LTL, while those with high n-3 PUFA/DHA (higher tertile) had longer LTL. Regional genes (within 100 kb) around rs529143 include multiple phospholipase genes, such as PLA2G2D and PLA2G2F, which have strong relevance to phospholipid metabolism [39]. Functional annotation of this SNP with expression quantitative trait loci (eQTL) data indicated that rs529143 may affect the expression level of AKR7A3 (p = 4.57 × 10⁻⁶) in transformed fibroblasts [40, 41]; AKR7A3 is involved in the detoxification of aldehydes and ketones. Enzymes of the aldo-keto reductase (AKR) superfamily have also been reported to play important roles in nuclear receptor signaling, cellular metabolism, inflammatory responses, osmoregulation, endobiotic and xenobiotic detoxification, and hormone synthesis [42, 43]. Moreover, a genome-wide yellow fluorescent protein complementation screen has shown that AKR7A3 can interact with the DNA-binding transcription factor Ras-related protein 1 (RAP1), one of the core telomeric proteins, to regulate telomeres [44]. The interaction observed in our study might act through the effect of AKR7A3 on telomere length. Our study has several potential limitations. First, the measurements of LTL in our study were mean TLs in leukocytes and therefore may not reflect TL dynamics in other tissues [45]. Measurements of TL in vascular cells could be more informative for the mediation analysis of CAD effects [46]. However, there is evidence that, within an individual, LTL is likely to be correlated with tissue-specific TL [47, 48]. Second, although the associations between plasma PUFA levels and LTL were significant in the meta-analysis, when examining the associations in cases and controls separately, they were significant only in the latter. Nevertheless, the direction of the association was consistent across the datasets, and the between-study heterogeneity examined by Cochran's Q test was not significant (p > 0.05).

Conclusions

We report in this study an inverse association of the plasma n6:n3 ratio with LTL and a positive association with CAD risk, and that this association was mainly driven by total plasma n-3 but not n-6 PUFA. Higher plasma levels of both EPA and DHA were associated with longer LTL and decreased CAD risk. We additionally identified an intergenic genetic variant, rs529143, that was observed to modify the association between plasma n-3 PUFA/DHA levels and LTL.

Additional file 1: Table S1.
Interaction between genetic variants and plasma PUFA on telomeres in SCHS CAD cases and controls. Table S2. Association between genetic variants and telomeres. Table S3. The mediation effect of telomeres on the association between plasma PUFA and coronary artery disease. Table S4. Interaction between genetic variants and PUFA intake on telomeres in SCHS (N = 21,828).
2020-10-31T13:51:20.339Z
2020-10-30T00:00:00.000
{ "year": 2020, "sha1": "289f98fa889c3c03de2d60ad2b77efd2c015392f", "oa_license": "CCBY", "oa_url": "https://nutritionj.biomedcentral.com/track/pdf/10.1186/s12937-020-00626-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "289f98fa889c3c03de2d60ad2b77efd2c015392f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
265588178
pes2o/s2orc
v3-fos-license
Limited clinical validity of univariate resting-state EEG markers for classifying seizure disorders

Abstract

Differentiating between epilepsy and psychogenic non-epileptic seizures presents a considerable challenge in clinical practice, resulting in frequent misdiagnosis, unnecessary treatment and long diagnostic delays. Quantitative markers extracted from resting-state EEG may reveal subtle neurophysiological differences that are diagnostically relevant. Two observational, retrospective diagnostic accuracy studies were performed to test the clinical validity of univariate resting-state EEG markers for the differential diagnosis of epilepsy and psychogenic non-epileptic seizures. Clinical EEG data were collected for 179 quasi-consecutive patients (age > 18) with a suspected diagnosis of epilepsy or psychogenic non-epileptic seizures who were medication-naïve at the time of EEG; 148 age- and gender-matched patients subsequently received a diagnosis from specialist clinicians and were included in the analyses. Study 1 is a hypothesis-driven study testing the ability of theta power and peak alpha frequency to classify people with epilepsy and people with psychogenic non-epileptic seizures, with an advanced machine learning pipeline. The next study (Study 2) is data-driven: a high number of quantitative EEG features are extracted, and a machine learning approach similar to that of Study 1 assesses whether previously unexplored univariate EEG measures show promise as diagnostic markers. The results of Study 1 suggest that EEG markers that were previously identified as promising diagnostic indicators (i.e. theta power and peak alpha frequency) have limited clinical validity for the classification of epilepsy and psychogenic non-epileptic seizures (mean accuracy: 48%). The results of Study 2 indicate that identifying univariate markers that show good correlation with a categorical diagnostic label is challenging (mean accuracy: 45–60%). This is due to a considerable overlap in neurophysiological features between the diagnostic classes considered in this study, and to the presence of more dominant EEG dynamics such as alterations due to temporal proximity to epileptiform discharges. Markers that were identified in the context of previous epilepsy research using visually normal resting-state EEG were found to have limited clinical validity for the classification task of distinguishing between people with epilepsy and people with psychogenic non-epileptic seizures. A search for alternative diagnostic markers uncovered the challenges involved and generated recommendations for further research.

Introduction

Considerable clinical challenges are involved in the diagnosis of seizure disorders; the most common problem involves differentiating epilepsy from epilepsy mimics such as psychogenic non-epileptic seizures (PNES) and syncope. Misdiagnosis rates for adults with epilepsy can be as high as 26% when the diagnosis is made by non-specialist medical professionals.1 Similarly, epilepsy mimics such as PNES are frequently misdiagnosed as epilepsy, and on average, people spend 7 years receiving inappropriate management and treatment before the diagnosis of PNES is reached.2
Considerable effort has been devoted to identifying measurable biological characteristics that could aid clinicians in the diagnosis of seizure presentations. Quantitative analysis of resting-state EEG signals has been implemented to understand whether epilepsy and PNES are associated with alterations in spontaneous electrophysiological activity. Our recent systematic review summarized 26 studies exploring group differences or diagnostic accuracy of markers extracted from interictal, visually normal EEG segments of adults with idiopathic epilepsy (i.e. non-lesional epilepsies of unknown or presumed genetic aetiology) or PNES.3 Results suggested that the resting-state EEG in idiopathic epilepsy is characterized by increased theta power as compared to controls, and shows a pattern of EEG slowing, as indicated by a shift of power and the power peak towards lower frequencies. Conversely, no clear pattern could be identified from the few studies comparing PNES cohorts to controls. The latest evidence continues to support these findings.4,5 This encourages an exploration of these markers as potential discriminators between a diagnosis of epilepsy and one of PNES.

Notably, only two studies so far have directly compared cohorts with idiopathic epilepsy and PNES on resting-state EEG measures. Bernasconi et al.6 investigated delta (1-4 Hz) amplitude, reporting no group differences. Cao et al.7 explored group differences and diagnostic accuracy of correlation and coherence measures, reporting 55-70% classification accuracy for a set of selected markers. However, methodological considerations (e.g. lack of control for antiseizure medication effects and circadian variations) and analytical considerations (e.g. use of a case-control design, circular analysis and data leakage between training and test sets) limit the interpretability and generalizability of these findings. The question of whether people with epilepsy and PNES can be differentiated on resting-state EEG characteristics remains underexplored.

We conducted two sequential studies. In a first, theory-driven diagnostic accuracy study, we explored whether resting-state EEG markers identified through our systematic review (theta power and peak alpha frequency)3 have the potential to aid in the classification problem of distinguishing epilepsy and PNES. In a second study, a data-driven exploratory approach was implemented; a large number of features were computed using a recently developed feature extraction tool.8 No a priori hypotheses were made as to individual features' relevance to the diagnostic classification. By working with a large pool of measures based on different theoretical frameworks, we aimed to identify markers that correlated well with diagnostic labels in our cohort and might therefore be clinically meaningful.

Faiman et al.3 highlighted the need to control for sources of bias and address shortcomings in study designs to improve the analytical and clinical validity of diagnostic markers. Towards this end, this study implemented validated automated pipelines to maximize the generalizability and reproducibility of EEG analyses. Reporting complies with reproducibility, diagnostic prediction models and STROBE guidelines.9,10
Efforts were made to control for common sources of bias such as EEG artefacts, differences in demographic characteristics, medication effects, alertness, circadian variation and incorporation bias. The study design was optimized for assessing the clinical validity of diagnostic accuracy indices; all patients suspected of having epilepsy or PNES and meeting inclusion criteria were quasi-consecutively selected. The EEG data extracted for analysis were recorded from unmedicated patients prior to diagnosis. A diagnosis was subsequently reached by specialist clinicians for most patients. Unlike case-control designs, the diagnostic accuracy design implemented here mimics the population in which EEG markers would be used to inform the diagnostic decision.11

Participants

We retrospectively identified a quasi-consecutive cross-sectional sample of people presenting to King's College Hospital specialist clinics with a suspected seizure disorder (see Supplementary Material Section 1 for the patient identification strategy). Inclusion criteria were age over 18 and not taking any CNS agents at the time of EEG [including antiseizure medications (ASMs), antidepressants, anxiolytics, opioids, etc.]. Exclusion criteria were acute symptomatic seizures, abnormal CT/MRI, eventual diagnosis of concurrent epilepsy and PNES, and a relevant history of other neurological, neurodevelopmental or severe mental health disorders (details in Supplementary Material Section 1). Age and gender matching for people eventually diagnosed with epilepsy or PNES was achieved as described in Supplementary Material Section 1. An established diagnosis of epilepsy was a clinical diagnosis in accordance with operational clinical definitions.12 An established diagnosis of PNES was considered either a video-EEG supported diagnosis in accordance with ILAE definitions,13 or, in the absence of video-EEG confirmation, a strong clinical indication of PNES following review by both a specialist neurologist and a neuropsychiatrist. A minimum sample size of 130 participants was defined a priori.14 The study was approved by the London Queen Square Research Ethics Committee (REC: 20/LO/0784, IRAS ID: 265164, 28/09/2021). Patient consent for analysis of anonymized data was not required.

EEG data acquisition

Clinical EEG data were acquired at King's College Hospital Department of Neurophysiology in unshielded and artificially lit rooms. Twenty-one scalp electrodes (Hurev, silver disk, 1.5Ø DIN connector) were placed according to the Modified Maudsley Configuration.15,16 This is similar to the 10-20 system, with slightly different positioning (∼20 mm lower) of the temporal electrode chain for optimal frontal and temporal coverage.17 Impedances were kept below 5 kΩ (or 10 kΩ under particular circumstances), checked at electrode placement and throughout the appointment. The reference channel was typically placed on the midline anterior to Pz. Single-channel ECG was simultaneously recorded. EEG data were acquired using either the NicoletOne EEG System v44 (Viasys Healthcare, with Viasys Healthcare NicoletOne M40 Amplifier, 40-channel, sampling rate 256 Hz and analogue filter bandwidth 0.053-500 Hz, with additional acquisition filters set at 0.5-70 Hz) or the Xltek EEG32U system (Natus, with Nicolet v32 Amplifier, 32-channel, sampling rate 500 Hz and analogue filter bandwidth 0.053-500 Hz).
EEG data selection

The first available EEG recording during which the patient was aged over 18 and was not taking any CNS agents was reviewed for inclusion. The resting-state EEG data used for the analyses were inspected (consistently using a common average and then a longitudinal bipolar montage) and selected by a trained band 6 EEG technician at King's College Hospital (N.C.) with 2.5 years of clinical experience who was blinded to diagnosis, referral question and any patient details. Data segments of minimum 20 s were selected from the controlled baseline period (awake, eyes-closed, relaxed recordings in a lying position) that were normal on visual inspection, i.e. did not include any slowing, interictal epileptiform discharges (IEDs), non-specific or dubious abnormalities, periods of drowsiness or sleep, or major artefacts.

EEG data preprocessing

Data acquired with the Xltek EEG32U system were downsampled to 256 Hz, high-pass filtered at 0.5 Hz and low-pass filtered at 70 Hz (Hamming windowed sinc FIR filter) to match the sampling rate and acquisition filters of the NicoletOne EEG system. The PREP pipeline was implemented to remove externally generated experimental artefacts (line noise and bad channels), to interpolate bad channels (spherical interpolation: EEGLAB 'eeg_interp()' function; mean 1.9 channels (SD 1.5) interpolated per segment)18 and to calculate a reliable and robust average reference.19 Independent Component Analysis (ICA) was run using the extended Infomax algorithm.20 The number of ICs generated was rank-adjusted (18 ICs generated on average)21 and IClabel22 was then implemented to remove ICs that had ≥80% probability of being generated by muscles, eyes, heart, line noise or channel noise (1.9 ICs removed on average).23,24 All data were visually inspected and then segmented into 20 s non-overlapping segments to maintain consistency with previous studies.3 One 20 s segment per participant was selected at random for analyses.

EEG features extraction

Study 1

The 20 s segments were divided into 2 s epochs. Spectral information was obtained for each of the 21 EEG channels by means of Fast Fourier Transformation using the FieldTrip toolbox for EEG analysis (http://fieldtriptoolbox.org)25 in MATLAB (R2021a, Natick, MA: The MathWorks Inc.), implementing a multi-taper method with a single (Hanning) taper window to reduce spectral leakage.26 A sliding window was set to 50 ms. The frequency resolution was 0.5 Hz. Power values were averaged over epochs to obtain a single power density spectrum (μV²) for each 20 s segment.27 Each value was converted from μV² to decibels (dB): y_dB = 10 * log10(y). The logarithmic transformation creates a close-to-normal distribution that is otherwise positively skewed due to the 1/f scaling of the power spectrum.28 Mean power was computed for each channel for each of the following frequency bands: delta (1-3.5 Hz), theta (4-7.5 Hz), alpha (8-12.5 Hz) and beta (13-30 Hz).9

The peak alpha frequency (PAF) is the frequency of the highest point in the spectral peak occurring within the alpha bandwidth.29 A global PAF index was computed for each 20 s segment by means of the MATLAB (R2021a) restingIAF open-source package v1.0.3 (https://github.com/corcorana/restingIAF).30 To derive the power spectrum (1-45 Hz), this implements pwelch, normalization and Savitzky-Golay filter smoothing (11 bins, polynomial degree of 5). PAF was identified using 21 EEG channels, with a minimum of three valid channels (i.e.
with reliable peak, independently of spatial location) needed to support the global PAF estimation, and an alpha peak search window between 7 and 13 Hz, as per guidelines.30 To ensure that the EEG signal used for statistical analyses contained oscillatory activity over and above the aperiodic 1/f activity, EEG segments without a detectable alpha peak on a minimum of three channels (as detected by the restingIAF function) were excluded from further analyses.31 All patients had at least one usable 20 s segment.

Study 2

The highly comparative time series analysis (hctsa) software tool8,32 was implemented in MATLAB R2021a and used to compute a large number of features (i.e. measures to be used as predictor variables, n = 7729) from the time-series (EEG) data. Hctsa is a univariate method, so the full set of 7729 features was extracted for each patient, separately for each of the 21 EEG channels. Any features that had <100% valid values were removed, resulting in a total of 6425 valid features (Fig. 1). Further details on hctsa and the retained features are in Supplementary Material Section 3.

Model fitting

The objective of model fitting was to determine whether the features extracted could discriminate between the diagnostic classes (epilepsy/PNES). The key steps of the model fitting process are outlined below; in-depth technical details are in Supplementary Material Sections 2 and 3.

Study 1

To test whether theta power (in 21 channels) and global PAF could predict the binary diagnostic class, a Support Vector Machine (SVM) model was fit to the data33 using the scikit-learn software in Python v3.10.4.34 The whole dataset was divided at the patient level into a training set (80%) and a test set (20%), stratifying by diagnosis (Fig. 2). Features were normalized, and an SVM model with a radial basis function kernel35 was implemented on the training set. We optimized hyperparameters via grid search, followed by 5-fold cross-validation to identify the optimal kernel parameter combination. The final model was built and assessed on the test set. This process was iterated five times on different training and test sets. Average model performance is reported.

Study 2

We evaluated whether the 6425 features extracted could predict the diagnostic class. The same analysis pipeline as described for Study 1 was followed, with the addition of a feature selection step on the training set (Supplementary Fig. 11). This step used a filter-based feature selection method, the minimum Redundancy-Maximum Relevance algorithm (mRMR),36 to identify informative features. These were normalized and retained for evaluation on the test set. The rest of the analysis pipeline is identical to Study 1. For Study 2, the whole procedure was repeated independently for each of the 21 EEG channels. Features that were repeatedly selected as informative are reported (Supplementary Fig. 11).

Subgroup analyses

To explore whether the model achieved higher accuracy for certain patient subgroups, a series of analyses were run for both studies measuring performance for patients grouped by: 1) method of confirmation of diagnosis (video-EEG confirmation / clinical diagnosis only); 2) epilepsy type (focal / generalized / unclassified); and 3) overall outcome of the EEG examination studied (normal / abnormal with non-specific findings / abnormal with epileptiform features). Accuracy indices for each subgroup were calculated based on the observed and predicted diagnosis for each patient.
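For concreteness, the following is a minimal sketch of the Study 1 classification pipeline as described above, written with scikit-learn. The feature matrix X (theta power in 21 channels plus the global PAF), the label coding and the hyperparameter grid are illustrative assumptions, not the study's actual code or search space.

```python
# Minimal sketch of the Study 1 pipeline (not the authors' code):
# stratified 80:20 split, feature normalization, RBF-kernel SVM with
# grid-searched hyperparameters and 5-fold cross-validation on the
# training set, repeated five times with different splits.
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def fit_and_score(X, y, seed):
    # X: (n_patients, 22) array of theta power (21 channels) + global PAF
    # y: binary diagnostic labels (hypothetical coding: 0 = epilepsy, 1 = PNES)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    pipe = Pipeline([("scale", StandardScaler()),
                     ("svm", SVC(kernel="rbf"))])
    grid = {"svm__C": [0.1, 1, 10, 100],          # illustrative grid values
            "svm__gamma": ["scale", 0.01, 0.1, 1]}
    search = GridSearchCV(pipe, grid, cv=5)       # 5-fold CV on the training set
    search.fit(X_tr, y_tr)
    return accuracy_score(y_te, search.predict(X_te))

# scores = [fit_and_score(X, y, seed) for seed in range(5)]  # mean is reported
```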
Control and post hoc analyses

For both studies, control analyses were performed on different split ratios (70:30 and 90:10), as training set size can influence machine learning algorithm accuracy.37 To ensure results are not dependent on the choice of EEG segment, analyses were repeated for two different randomly selected 20 s segments per participant (hereafter named random segments 2 and 3). For the minority of participants with fewer than three 20 s segments (n = 28 with two; n = 10 with one), random segments 1 or 2 were reused as control segments (Supplementary Material Section 4).

Study 1

In a further control analysis, the performance of a model built solely on theta power measures, and of one built solely on PAF, was reported. Multiple independent-samples t-tests were performed post hoc to report on group differences for each predictor. As there have been variable reports of EEG power being higher in people with epilepsy for other frequency bands (delta, alpha, beta),3 we also reported the prediction accuracy of power measures for these frequency ranges.

Study 2

A series of post hoc analyses were conducted to aid interpretation of results, including synthetic-dataset code validation, testing alternative feature selection methods, selecting different numbers of features, assessing intra-patient feature stability, running control analyses to rule out overfitting, testing the appropriateness of the classifier to represent the feature space, and low-dimensional visualizations. Detailed methodology can be found in Supplementary Material Section 3.

Population descriptive statistics

To estimate the significance of group differences for descriptive categorical variables, a χ² test38 was used, or Fisher's exact test39 in the presence of low cell counts (n < 5).40 The Kolmogorov-Smirnov test was implemented to assess the normality of distributions.41 To estimate the significance of group differences for continuous variables, independent-samples t-tests were used in the presence of normal distributions, and a Mann-Whitney U test was used for other distributions.42

To control for circadian effects, the time of day at which the EEG recording was taken was converted to radians (as per e.g. Abela et al.43) using the hms2rad function in the R package 'astroFns'. The Watson-Williams test was used to compare group means of circular data (radians).44 Due to the retrospective nature of the study, some descriptive variables had missing data; these were handled by listwise case exclusion in descriptive statistics, and the resulting sample size is reported.

Data availability

Anonymized data are available upon request for collaborative purposes. The code used for the main analyses can be found in Supplementary Material Sections 4 and 5.

Population characteristics

A total of 148 people (epilepsy n = 75; PNES n = 73) received a diagnosis by the time of data analysis and were included in the study; people with an uncertain diagnosis (n = 29) were excluded (Supplementary Figs 1 and 2). A significantly higher number of people with PNES were left-handed or ambidextrous (Table 1). All patients were diagnosed by a consultant neurologist specialized in epilepsy and/or by a consultant neuropsychiatrist specialized in PNES. The diagnosis was supported by video-EEG in 80% of people with PNES (typical event captured whilst recording normal EEG) and in 57% of people with epilepsy (epilepsy-specific abnormalities captured) (Table 1).
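As a small illustration of the circadian control described above, the sketch below converts a clock time to radians, mirroring the role played by hms2rad in 'astroFns'; the hh:mm:ss input format is an assumption.

```python
# Sketch: convert an EEG recording's clock time to radians for circular
# statistics (the role of hms2rad described above). Input format assumed.
import math

def time_to_radians(hms: str) -> float:
    h, m, s = (int(x) for x in hms.split(":"))
    seconds = h * 3600 + m * 60 + s          # seconds since midnight
    return 2 * math.pi * seconds / 86400.0   # map 24 h onto [0, 2*pi)

print(time_to_radians("09:30:00"))  # a morning recording maps to ~2.49 rad
```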
Relevant disorder characteristics were similar between groups, except for a higher monthly seizure frequency in PNES (Table 1). The epilepsy aetiology was unknown for the whole epilepsy sample. Epilepsy type was recorded as focal in 44% (n = 33), generalized in 27% (n = 20) and unknown/undefined in 29% (n = 22) of the sample.

The EEG recordings used were performed between April 2007 and August 2021 in different clinical EEG settings; the acquisition systems used were balanced across groups (Table 1). Groups did not differ in the time of day during which recordings were taken, making confounding effects due to circadian rhythms unlikely (Table 1). Patients did not report any epileptic seizures in the 24 hours prior to EEG recordings (excluding PNES n = 32 and epilepsy n = 9 with unspecified latency from the last seizure). At the time of EEG, the median number of days since the last event was 60 (IQR: 75.25) for people with epilepsy and 7 (IQR: 20.0) for people with PNES (excluding missing data). For 78% of people with PNES and 29% of people with epilepsy, the whole EEG examination was overall normal; for the remaining patients, some EEG abnormalities were recorded elsewhere in the same recording session (Table 1); the EEG segments analysed contained no abnormalities.

Study 1

The SVM classifier was built to optimize accuracy for the two groups based on theta power in 21 channels and a global PAF index from one randomly selected EEG segment per patient. This resulted in a sensitivity of 57%, a specificity of 38% and an accuracy of 48%, indicating that the model predicts patients' diagnoses no better than chance (Table 2).

Study 2

For each channel, an SVM classifier was built to test the ability of the selected feature sets to predict the binary diagnostic class. Results indicated that the models had poor predictive ability in all channels (Table 3). A number of features tended to be repeatedly selected across different split folds and across different channels; the most consistently selected was the P-value of the z-test applied to the time series (Supplementary Table 12).

Study 1

Control analyses showed that poor classification performance was not dependent on the proportion of data included in the training and test sets, nor on the specific 20 s segment selected, nor on specific feature sets (Supplementary Tables 1-3). Post hoc independent t-test analyses revealed no statistically significant differences between groups in PAF or theta power at any electrode location (Supplementary Fig. 3; Supplementary Table 4). Classification accuracy was also poor for delta, alpha and beta power (Supplementary Table 5), with only a trend towards significant differences on post hoc t-tests in some channels for alpha and beta power (Supplementary Figs 4-6; Supplementary Table 6).
Study 2

Poor classification performance was not dependent on the proportion of training data nor on the 20 s segment selected (Supplementary Tables 13-16). The features that were most commonly selected (>6 times) across folds and across channels were highly variable across different random segments, with only two features being selected commonly across multiple segments (HT_HypothesisTest_ztest and SB_BinaryStats_iqr.meanstretchdiff; Supplementary Tables 17 and 18). Post hoc analyses confirmed that no performance improvement was possible by implementing alternative feature selection methods or alternative classifiers, or by simplifying the feature set or the model implemented (Supplementary Section 3 and Supplementary Tables 19-25). Results of the intra-patient feature stability analysis indicated that informative features were moderately stable over the course of the same EEG recording session. In half of the predictions tested, features identified as informative in one 20 s segment remained predictive in the subgroup of patients with detected epileptiform abnormalities (67-83% accuracy) when performance was measured in two different 20 s segments; however, diagnostic class prediction was poor at the whole-group level (49-57% accuracy; Supplementary Table 23).

Study 1

Classification performance was comparable for people with and without diagnostic video-EEG confirmation, and for people with different epilepsy types (Table 4). Classification performance was poor both for those with an overall normal EEG examination outcome and for those with an EEG classified as abnormal with non-specific findings. However, 70% accuracy was achieved for the subgroup of people whose visually normal EEG segment was sampled from recordings that subsequently contained epilepsy-specific abnormalities (71% sensitivity; 50% specificity; Table 4).

To further explore this finding, independent t-test analyses were performed post hoc comparing people with epilepsy for whom epilepsy-specific abnormalities were captured (total n = 28; focal n = 16; generalized n = 10; unclassified n = 2) and people with PNES with a normal EEG examination (n = 57). When these groups were considered, a trend towards significantly higher theta power was observed in the epilepsy group over most of the channels (Supplementary Table 7, Supplementary Fig. 7). Subgroup analyses for other frequency bands are in Supplementary Tables 8-11 and Supplementary Figs 8-10; for the alpha band, these show similar findings to the theta band findings described above.
Study 2

In five of the channels (Pz, Cz, F8, P3, T4), 67% to 77% accuracy was achieved for the subgroup of people whose visually normal EEG segment was sampled from recordings that subsequently contained epilepsy-specific abnormalities (Supplementary Table 26). 69% accuracy was achieved in C3 for the subgroup of people whose visually normal EEG segment was sampled from recordings that subsequently contained non-specific abnormalities; this included a mixture of people with epilepsy and people with PNES. Classification performance was poor for all other subgroups tested across all channels (Supplementary Table 26). Subgroup analyses for random segments 2 and 3 showed similar results (random segment 2: 67-73% accuracy in Cz, Fp2 and C4 for the subgroup with epilepsy-specific abnormalities captured, 72-74% accuracy in T6 and P4 for the subgroup with non-specific abnormalities captured; random segment 3: 69-74% accuracy in F7, Fp2 and A1 for the subgroup with epilepsy-specific abnormalities captured; 67-70% accuracy in A2 and C3 for the subgroup with non-specific abnormalities captured).

Discussion

Two original diagnostic accuracy studies were conducted that sought to determine whether resting-state EEG markers identified through our recent systematic review (theta power and PAF)3 have the potential to aid in the classification problem of distinguishing epilepsy and PNES (Study 1), and whether any other markers showing good correlation with the diagnostic labels could be identified in a data-driven fashion from a large, multidisciplinary feature set (Study 2). The visually normal segments of resting-state EEGs of 148 medication-naïve people were analysed; groups were matched by age and gender.

Study 1

Results of the first study indicated that measures of theta power and PAF predict patients' diagnoses no better than chance (mean accuracy: 48%) and therefore have poor clinical validity for the diagnostic classification of PNES and epilepsy. Control analyses demonstrated that results were generalizable across different EEG segments and training-test set proportions.

To demonstrate the applicability of our model's results to different patient subgroups, subgroup analyses were performed. Prediction accuracy was relatively higher (70%) for the subgroup of patients who had epilepsy-specific abnormalities captured elsewhere in the course of the EEG recording. A possible explanation is that, for this patient subgroup, our model may be picking up features associated with IEDs that are not detectable on visual inspection. This would be in line with previous evidence showing that the magnitude of resting-state EEG alterations depends on the temporal proximity to epileptiform abnormalities.48 An ability to predict these cases would offer limited advantage to the diagnostic process. Implementing such a model in clinical practice could, however, bring some benefit by ensuring that IEDs are not missed during an EEG recording session (e.g. by extending the session if a patient is flagged as likely to show abnormalities), thereby minimizing diagnostic delays. Concurrent implementation of clinical tools such as meticulous history taking, seizure semiology and pre-test probability would remain essential for the diagnostic decision.

A number of factors can account for the discrepancy between our findings and those reported by previous studies, including differences in sampling strategies, the specific population under study, and the influence of CNS agents such as ASMs.

Regarding differences in sampling strategies, previous studies all implemented a case-control design whereby a group of patients with a known diagnosis is compared to a control group without the condition (typically healthy controls).
The present study consecutively enrolled all patients suspected of having non-lesional epilepsy or PNES over a specific time period. Only a small subset (24 patients) had to be identified outside the sequential recruitment approach to achieve age and gender matching. Such a selection more closely reflects the population in whom the marker under study would be used to inform diagnostic decision-making, thereby mitigating sampling bias.

In an attempt to mimic a case-control design and explore the effect of sampling on results, only the 'more extreme' cases on our patient spectrum were selected in post hoc analyses: people with PNES who had a completely normal EEG examination, and people with epilepsy with epilepsy-specific abnormalities captured elsewhere in the EEG session. Upon comparison, we replicated or approximated the finding of increased power in epilepsy, in line with previous case-control studies.3 However, our results suggest that the measures studied would have poor applicability in clinical practice, as they are not discriminatory in the context of a richer variability of data and clinical presentations.

A second factor relates to the specific population under study. Whilst previous research has largely implemented control groups composed of healthy volunteers, this is one of the first studies directly comparing non-lesional epilepsy and PNES cohorts. Since publication of our systematic review,3 one additional study reported increased low-frequency power in PNES as compared to healthy controls,49 in line with findings from a previous study,50 whilst a third study reported no differences.51 It is possible that low-frequency power is elevated in PNES, similarly to epilepsy, and this complicates the classification problem.

A third relevant factor is the effect of ASMs. Consumption of certain ASMs is associated with an increase in delta and theta power and/or a decline in the dominant (alpha) rhythm frequency.52-57 The EEG data analysed in this study were recorded whilst patients were not taking any CNS agent, whilst almost all previous studies reporting alterations in epilepsy included patients who were already on ASM therapy.3 Two exceptions were Schmidt et al.58 and Clemens,59 who compared unmedicated epilepsy patients to healthy controls. In accordance with our findings, Schmidt et al.58 showed very poor discriminative ability for a measure of the alpha power peak. Clemens59 still reported increased power in epilepsy. Future studies are needed to disentangle the relative contributions of ASMs and genetic factors in modulating EEG alterations in non-lesional epilepsy of unknown aetiology.

Study 2

Results of the second, data-driven study indicated that identifying resting-state EEG markers that correlate well with the diagnostic label is challenging. A series of features were consistently flagged as informative by multiple feature selection algorithms and under different analytical circumstances. In some EEG channels, an SVM classifier built on selected features predicted the diagnostic label of novel observations with 67-77% accuracy, but only if patients had epilepsy-specific abnormalities captured elsewhere during the EEG recording. The classifier's ability to predict the diagnosis at the level of the whole sample (including people without epileptiform abnormalities captured) was poor (45-60% accuracy).
Control analyses excluded the possibility that poor performance was due to overfitting, as simplifying the feature set or the model implemented did not improve results. Poor performance was not due to a failure of the SVM classifier to appropriately represent the feature space, as implementing a different supervised classification approach (Random Forest) did not improve results. Subgroup analyses excluded the possibility that other factors might account for the results, such as the absence of video-EEG diagnostic confirmation for some patients, or the inclusion of patients with different epilepsy types. Analyses of two different resting-state EEG segments sampled at different times during the recording session produced similar results.

Overall, the results of Study 2 suggest that the features were selected based on information that was predominantly present in the EEG of people who were in closer temporal proximity to epileptiform abnormalities. The models learnt to utilise this information to solve the classification problem. When presented with unseen data, the classification accuracy was good for the specific subgroup of people with detected epileptiform abnormalities, but poor for the rest of the sample.

Results of the intra-patient feature stability analysis indicated that, in half of the cases, features selected at one time remained predictive when applied to segments taken at different times, but there was very low consistency between the exact features selected across different segments. The EEG channels that were most predictive also varied across time. This suggests that informative features were likely related to transient and spatially evolving EEG dynamics, rather than to spatially and temporally stable traits. The most consistently selected feature was the P-value of the z-test applied to the time series. The z-test tests the null hypothesis that the data come from a normal distribution with zero mean and unit variance. P-values reflect the deviation of the time-series mean from zero: the greater the deviation, the lower the P-value. In time series, such a measure of deviation from normality may provide information on the degree of 'anomalies' in the data. It is possible that this, in conjunction with other metrics, was relevant for classifying the subgroup of people who had visibly detectable epileptiform abnormalities elsewhere in the recording session.

The inability of the present study to identify a feature set that is useful for predicting the trait 'diagnostic class' (rather than conceivably transient EEG states dependent on the proximity to epileptiform abnormalities) does not imply that such a feature set does not exist. However, the results of this study highlight that features capturing relevant yet transient dynamics from certain patient subgroups might be 'stronger' (i.e. contribute to the variance in the outcome to a greater extent, and are therefore more easily detectable) than features containing information on what we think of as stable traits, such as a categorical diagnostic label, especially in patients whose epilepsy has only just begun to manifest with seizures.
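To make the most consistently selected feature concrete, the sketch below computes a two-sided z-test P-value under the interpretation given above (a test of zero mean with unit variance); it follows the description in the text and is not necessarily hctsa's exact implementation.

```python
# Sketch of the z-test feature discussed above: the P-value of a test that
# the time series has zero mean and unit variance. Follows the text's
# interpretation, not necessarily hctsa's HT_HypothesisTest_ztest code.
import numpy as np
from scipy.stats import norm

def ztest_pvalue(x: np.ndarray) -> float:
    # Under H0 (mean 0, variance 1), the sample mean has std 1/sqrt(n),
    # so the test statistic is mean(x) * sqrt(n).
    z = np.mean(x) * np.sqrt(len(x))
    return 2 * norm.sf(abs(z))   # two-sided P-value; small P = larger "anomaly"
```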
It is possible that a mismatch exists between the rigid diagnostic constructs that we seek to define for their practical utility and the rich reality of neurophysiological presentations, which might lead to an overlap between diagnostic classes due to shared genetic or environmental factors. For example, ∼30% of the sample had a family history of epilepsy in both the epilepsy and PNES cohorts. The possibility of an overlap between diagnostic classifications is supported by the low-dimensional feature visualizations implementing t-Distributed Stochastic Neighbour Embedding (t-SNE; Supplementary Fig. 12). These suggest that the feature space constitutes a single continuum of observations, without noticeable clusters representing distinct diagnostic categories.

Limitations and applicability

Limitations include the possibility of misdiagnosis for some cases, as epilepsy-specific abnormalities or a typical episode were never captured on video-EEG in 43% of our epilepsy sample and 21% of our PNES sample, respectively. However, the PNES patients were reviewed by a specialist consultant neuropsychiatrist who assigned a diagnosis of PNES due to strong clinical features, and further reviewed by the study Principal Investigator (P.S.). Cohorts without video-EEG diagnostic support did not have lower model accuracy in subgroup analyses, suggesting that diagnostic errors, if present, are not noticeably biasing the model. It cannot be excluded that some patients might have had concurrent epilepsy and PNES that went undetected.

Due to analytical requirements, people with an unconfirmed diagnosis at the time of data analyses were excluded. A proportion of these were 'difficult-to-diagnose' cases, and it is expected that their inclusion would further complicate the classification task.60 Applicability of the findings is also limited to people without major neurological and psychiatric comorbidities. Patient sampling was performed retrospectively; however, this is not associated with an inaccurate estimation of accuracy indices as compared to prospective diagnostic accuracy studies.45,46 EEG data were acquired in clinic and analysed in sensor space. Measures of power and peak frequency can be confounded by the intercept or slope of the aperiodic component of the power spectra; analytical solutions have recently emerged to disentangle their contributions.61

For Study 2, analyses performed using a synthetic dataset indicated that the ability of feature selection methods to identify informative features decreases as a function of dataset complexity. With a small sample size and a large feature set, feature selection methods struggle to reliably find a feature set with low error from which a good classifier can be designed.62,63 We acknowledge the possibility that some of the identified features might be related to the outcome variable spuriously. Increasing the sample size would improve the reliability of feature selection algorithms.

Future directions

Expanding the length of the EEG segments analysed (e.g. to 5-10 minutes) might decrease the influence of transient dynamics on the features extracted; this could improve their temporal stability (which in the present study was only observed for the subgroup of patients with detected epileptiform abnormalities) and maximize the chances of extracting trait-related information. If diagnosis-related information is present in the resting-state EEG of undiagnosed cohorts, it is likely to be more subtle than information related to proximity to epileptiform abnormalities.
Analyses could be repeated excluding people who had epileptiform abnormalities detected elsewhere during the recording session. However, increasing the size of study samples should be attempted, both to account for these exclusions and to address the large variability of neurophysiological presentations observed across the diagnostic classes. This study explored the predictive potential of univariate features, i.e. features derived from individual channels. It is possible that diagnostically relevant information is contained in multivariate features representing the relationships between channels. Relevant information might also be present in recordings taken during other states, such as sleep or task execution, and these should be explored in future work.

Conclusions

Two diagnostic accuracy studies were run on visually normal segments of resting-state EEG recordings.

Study 1 investigated the ability of pre-selected markers to distinguish between a diagnosis of epilepsy and a diagnosis of PNES. Contrary to expectations, the results indicate that measures of theta power and peak alpha frequency have limited clinical validity, as they predict patients' diagnoses no better than chance (mean accuracy: 48%). Factors that could account for the discrepancy with the results of previous studies include sampling strategies, the specific population under study, and the influence of ASMs.

In Study 2, a data-driven exploration was carried out on a large number of features. It was not possible to identify a feature set that predicted people's diagnosis better than chance. Different feature sets were identified that were predictive of the diagnosis for a subgroup of people who had epileptiform abnormalities detected elsewhere in the course of the EEG recording session. This suggests that EEG markers containing information on the proximity to epileptiform abnormalities contribute to the variance in the outcome variable (diagnosis) to a greater extent than any markers associated with the diagnostic class itself. It remains possible that a good feature set related to diagnostic class exists. Investments in sample size and innovation in study design are promising determinants of advances in the field.

Figure 1: Visual representation of hctsa feature extraction from the EEG signal (Study 2) for a sample channel (FP1). For each patient (y-axis), the left subplot displays the first second of the 20 s EEG data segment analysed. The right subplot displays the normalized values for each of 6425 features (operations) computed from each segment, on a colour scale ranging between zero (blue) and one (red). Rows and columns are ordered by feature clusters for visualization purposes.

Figure 2: Support vector machine pipeline for Study 1. cv, cross-validation; PPV, positive predictive value; NPV, negative predictive value.

Table 1: Demographic, clinical and EEG characteristics of the included population, with statistical group comparison. (a) Non-specific transient abnormalities refer to slowing or other non-specific transients (sharpened waves). (b) Epilepsy-specific transient abnormalities refer to spikes, sharp waves, generalized spike-and-wave discharges, generalized polyspike-waves, and generalized 3-4 Hz spikes. (c) Seizures refer to epileptic seizures for the epilepsy group and non-epileptic seizures for the PNES group.
Table 2: Classification performance for theta power in 21 channels and peak alpha frequency (PAF) for one randomly sampled 20 s EEG segment. Reported are mean (SD) across 5 cross-validation folds (80:20 proportion for training and test sets). PPV, positive predictive value; NPV, negative predictive value; AUC, area under the curve.

Table 3: Classification performance for selected features in 21 channels for one randomly sampled EEG segment. Reported are mean (SD) across 5 cross-validation folds (80:20 proportion for training and test sets). PPV, positive predictive value; NPV, negative predictive value; AUC, area under the curve.

Table 4: Results of subgroup analyses. Displayed are classification indices for different subgroups of patients based on the true and predicted scores across five cross-validation test sets. vEEG, video-EEG; PPV, positive predictive value; NPV, negative predictive value; AUC, area under the curve.
Real-time Strategy Game Tactical Recommendation Based on Bayesian Network

Real-time strategy (RTS) games simulate battles between large numbers of units and, as complex adversarial domains, pose significant challenges to artificial intelligence. In the field of RTS games, adversarial planning under uncertainty remains an unsolved challenge. There are two main types of uncertainty in RTS games. The first arises from the partial observability of the game. The second arises from the adversarial nature of the game: the player cannot predict what the opponent will do. This paper uses the Bayesian model as a logical framework for handling the uncertainty caused by the adversarial game. The experimental results show that the proposed model can effectively solve the tactical recommendation problem in real-time strategy games.

The article uses Bayesian programming to formalize the Bayesian network. Since this form is based only on inference rules that require probability calculus, it is very versatile. This paper proposes a tactical recommendation method for RTS games that formally models the environmental information and unit information of the game. The experimental results show that the proposed method can effectively solve tactical recommendation in RTS games.

The remainder of the paper is organized as follows. The second part introduces the basic background, including RTS games and Bayesian theory. The third part presents the Bayesian tactical recommendation model for real-time strategy games proposed in this paper. The fourth part describes the experiments that verify the validity and strength of the model. The fifth part concludes.

RTS games

The real-time strategy game is essentially a simplified military simulation. RTS games involve long-term goals and often require multiple levels of abstraction and reasoning. Even compared with Go, a currently active field of artificial intelligence research, the complexity of RTS games is dramatically higher. The state space is so large that traditional heuristic-based search techniques have so far failed on all but the most restricted sub-problems of RTS AI. A meta-function determines the winner of the game; the result can be that the game is still running, that one player wins, or that the game is a draw. The initial state s_init ∈ S is a designated element of the finite state set S.

Bayesian network

The Bayesian network, also known as the belief network or the directed acyclic graphical model, is a probabilistic graphical model. First proposed by Judea Pearl [7] in 1985, it is a model for uncertainty processing that simulates causality in human reasoning. A Bayesian network is formed by drawing the random variables involved in a system as nodes in a directed graph according to their conditional (in)dependence relations. It is mainly used to describe the conditional dependencies between random variables, using circles to represent the random variables X_1, X_2, ..., X_n (which can be observable variables, hidden variables, unknown parameters, and so on) and arrows to indicate their dependencies. Variables or propositions that are considered causally related (i.e. not conditionally independent) are connected by arrows; in other words, an arrow connecting two nodes indicates whether the two random variables are causally related or conditionally independent. If two nodes are connected by a single arrow, one node is the "parent" and the other is the "child", and the pair is associated with a conditional probability value.
Let G = (V, E) denote a DAG, where V represents the collection of all nodes in the graph and E represents the set of directed edges. Each random variable X_i is represented by a node in the directed acyclic graph, and the joint probability of the nodes can be expressed as

P(X_1, ..., X_n) = ∏_{i=1}^{n} P(X_i | pa(X_i)),

where pa(X_i) denotes the set of parents of X_i. In other words, for any set of random variables, the joint probability can be obtained by multiplying the respective local conditional probability distributions.

The Bayesian network combines graph theory and statistical knowledge to provide a natural representation of causal information. Compared with other knowledge discovery and decision modelling methods (e.g. rule representations, decision trees, artificial neural networks, etc.), it has the following advantages:

- Bayesian networks can uncover knowledge implicit in the data. After learning a Bayesian network from data, the network can be used for reasoning and interpretation, yielding the desired knowledge, concepts and decision information. Moreover, explicit knowledge can be extracted from the learned network model, avoiding the black-box character of models such as neural networks. Conditional probabilities can express the correlations between information elements even when the information is limited and incomplete, allowing learning and reasoning under uncertainty.
- The Bayesian network is itself an uncertain causal-correlation model. Unlike other decision models, a Bayesian network is a probabilistic knowledge representation and reasoning model that visualizes multivariate knowledge. It closely captures the causal relationships and conditional correlations between the network's node variables, which is convenient for analysing action sequences, the results of actions, the interaction with observations, and the expected effects of actions, so that planning and decision making can proceed under uncertainty.
- Bayesian networks have parallel reasoning capabilities and global update capabilities. Given the values of some node variables in the network, probabilistic reasoning can obtain the posterior probability of any other node variable, achieving a global update.
- The compact representation and conditional independence of Bayesian networks save storage space, simplify the process of knowledge acquisition and domain modelling, and reduce the complexity of the reasoning process. At the same time, the graphical representation makes efficient reasoning possible; using distributed belief updates greatly improves computational efficiency and avoids the repeated interaction of causal relationships.

Bayesian programming

Bayesian programming is used to formalize the Bayesian network. Since this form is based only on inference rules that require probability calculus, it is very versatile. Bayesian programming is a formal method used to fully describe a Bayesian model, which includes Bayesian networks and Bayesian maps; in fact, it is equivalent to a probabilistic factor graph. There are two main problems in working with a Bayesian network: the first is how to describe and compute the joint distribution, and the second is how to answer queries against it. The description includes specifying the associated variables {X_1, ..., X_n} and using the existing prior knowledge π and data δ to determine the joint distribution that explains the dependencies between them.
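As a worked illustration of the factorization above, consider a minimal two-node network X1 → X2; all probability values below are invented for illustration.

```python
# Illustrative example of the Bayesian-network factorization:
# for a two-node network X1 -> X2, P(X1, X2) = P(X1) * P(X2 | X1).
# All probability values here are made up for illustration.
p_x1 = {True: 0.3, False: 0.7}                      # prior on the parent X1
p_x2_given_x1 = {True: {True: 0.9, False: 0.1},     # CPT: P(X2 | X1)
                 False: {True: 0.2, False: 0.8}}

def joint(x1: bool, x2: bool) -> float:
    return p_x1[x1] * p_x2_given_x1[x1][x2]

# Marginal P(X2 = True), obtained by summing the joint over the parent:
p_x2_true = sum(joint(x1, True) for x1 in (True, False))
# = 0.3 * 0.9 + 0.7 * 0.2 = 0.41
```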
3. TACTICAL RECOMMENDATION MODEL

Hagelbäck and Johansson [8] found that "tactics are one of the most successful indicators of whether a player is a human being" when studying the intelligent features of RTS games. Tactics sit between strategy (high level) and micro-management (low level), covering where to attack and how to attack. A good human tactical decision maker needs to consider many questions when choosing tactics: Is there a flaw in the defense? Which location is more worth attacking? How many units am I attacking with here? Is the terrain (the fortified point) favourable for me? And so on. The problem this article addresses is to consider the tactical issues in real-time strategy game adversarial planning as comprehensively as possible and to make the most effective intelligent recommendation. The resulting intelligent recommendation system plays the role of a commander directing the collaboration of unmanned platforms.

Different units have different abilities, which leads to different possible tactics. Consider three different combat units here: a light combat unit (marine), a heavy combat unit (firebat) and a ranged combat unit (ghost). Attacking enemy bases is an important tactical action that directly affects the course of operations. In the intelligent recommendation of tactics, it is first necessary to predict the opponent's tactics. On this basis, intelligent recommendation is carried out during the tactical game to optimize the game result; that is, taking into account everything that can happen, the system makes the most effective tactical decisions and recommendations. The complexity of tactics has not been specifically studied in the existing literature. Tactics correspond to where a player moves a group of combat units and how they move and act. Combat units have different capabilities, which leads to different possible tactics.

For attack tactics, the Bayesian program uses the following variable definitions:

(1) A_i indicates whether the opponent is attacked in area i: true indicates that the opponent is attacked in the area, and false indicates that the opponent is not attacked in the area.

(2) E_i is the economic value of the defender in area i. That is, if there is a defender base in the area, the economic value of the area is high; in an area with no defender base, the economic value of the area is no.

The model uses the following distributions:

(1) P(A_i), the prior probability that the player is performing an attack in this area. In this model, it is set to the proportion of the combat units, out of the total units, that perform attacks in this area. Given learning data, it should be uniformly initialized so as to gradually learn the opponent's preferred attack areas, predict the areas an opponent might attack in the future, and learn the areas where our attack decisions failed or succeeded.

(2) P(E, T, TA | A), a joint probability distribution table over economics and tactics (defender tactics T and attacker tactics TA), indicating the score at the time of the attack. Laplace's law of succession is used to estimate the probabilities from the training set, and maximum likelihood learning is performed on the probability distribution table.

Experiment setup

This article uses the MicroRTS platform for verification. Developed by Santiago Ontañón [9], MicroRTS is a simple RTS game designed to support artificial intelligence research while minimizing the amount of work required to get started.
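To indicate how the tactical model above could be implemented, the sketch below scores an attack region by combining the prior P(A_i) with a conditional table estimated via Laplace's law of succession. The data structures, variable encodings and configuration count are hypothetical, not the paper's implementation.

```python
# Sketch of the attack-region scoring implied by the model above:
# P(A_i = true | E, T, TA) is proportional to P(A_i = true) * P(E, T, TA | A_i = true),
# with the conditional table estimated from training battles using Laplace's
# law of succession. Counts and variable values here are hypothetical.
from collections import Counter

counts = Counter()   # counts[(e, t, ta, attacked)] accumulated from battle logs
totals = Counter()   # totals[attacked] = number of observations per class

def p_cond(e, t, ta, attacked, n_configs):
    # Laplace's law of succession: (count + 1) / (total + number of configurations)
    return (counts[(e, t, ta, attacked)] + 1) / (totals[attacked] + n_configs)

def attack_score(prior_attack, e, t, ta, n_configs=8):
    # Unnormalized posterior that area i will be attacked, given its
    # economic value e and the defender/attacker tactics (t, ta).
    return prior_attack * p_cond(e, t, ta, True, n_configs)
```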
Robust cosmological inference from non-linear scales with k-th nearest neighbor statistics

Abstract

We present the methodology for deriving accurate and reliable cosmological constraints from non-linear scales (< 50 Mpc/h) with k-th nearest neighbor (kNN) statistics. We detail our methods for choosing robust minimum scale cuts and validating galaxy-halo connection models. Using cross-validation, we identify the galaxy-halo model that ensures both good fits and unbiased predictions across diverse summary statistics. We demonstrate that we can model kNNs effectively down to transverse scales of r_p ~ 3 Mpc/h and achieve precise and unbiased constraints on the matter density and clustering amplitude, leading to a 2% constraint on sigma_8. Our simulation-based model pipeline is resilient to varied model systematics, spanning simulation codes, halo finding, and cosmology priors. We demonstrate the effectiveness of this approach through an application to the Beyond-2pt mock challenge. We propose further explorations to test more complex galaxy-halo connection models and tackle potential observational systematics.

INTRODUCTION

The spatial distribution of galaxies presents one of the most powerful probes of the fundamental properties of the universe. A new generation of wide-area spectroscopic surveys, such as the Dark Energy Spectroscopic Instrument (DESI; Levi et al. 2013), the Subaru Prime Focus Spectrograph (PFS; Takada et al. 2014), the ESA Euclid satellite mission (Laureijs et al. 2011), and the NASA Roman Space Telescope (WFIRST; Spergel et al. 2013) will enable us to study the 3D large-scale structure (LSS) of the universe with unprecedented precision. This precision promises insights into critical questions, including the cause of the universe's accelerated expansion and the theory of gravity, the nature of dark matter and neutrinos, the physics of the primordial universe, and the detailed process of galaxy formation.

Over recent decades, LSS analyses have seen significant advancements. On large scales (> 50 h⁻¹ Mpc), concise analytic models based on perturbation theory (of the density contrast) are sufficiently accurate. Given that the density field on large scales is well approximated by a Gaussian random field, the 2-point correlation function (2PCF, or its Fourier pair the power spectrum) captures its complete information content. Consequently, large-scale 2PCF analyses using perturbative methods have become pivotal in modern cosmology. However, there is a wealth of additional information on smaller, non-linear scales. These scales contain complex features in the density field and offer the highest signal-to-noise clustering measurements, but pose significant analytical challenges due to the intermingling of cosmology and galaxy physics.

★ E-mail: sihany@stanford.edu

Addressing these challenges and harnessing the full potential of small-scale information necessitates alternative modeling frameworks. While gravity-only N-body simulations can predict the matter density accurately down to smaller scales, the sheer computational expense and the precision requirements of modern surveys mean we need large-volume, high-resolution simulations. To bridge this gap, surrogate models or emulators interpolate between a limited set of simulated cosmologies, providing efficient predictors for arbitrary cosmologies (e.g. DeRose et al. 2019b; Nishimichi et al. 2019; Maksimova et al. 2021). This method, termed simulation-based modeling, has been successfully implemented in various recent studies (e.g. Lange et al. 2022; Yuan et al.
2022a; Kobayashi et al. 2022; Chapman et al. 2022; Zhai et al. 2023).

Still, the credibility of these models on smaller scales remains a concern. Large-scale perturbative models use a sequence of bias parameters, keeping them physics-agnostic, to marginalise over the connection between galaxies and dark matter. On smaller scales, however, simulation-based methods require well-motivated models linking galaxies to dark matter haloes. The most commonly used approach is the Halo Occupation Distribution (HOD; Peacock & Smith 2000; Scoccimarro et al. 2001; Berlind & Weinberg 2002; Berlind et al. 2003; Zheng et al. 2005, 2007) model, due to its speed and flexibility. In its simplest form, only halo mass is used, and the HOD can be summarised as P(N_galaxy | M_halo); a commonly used form of this model is summarised by Zheng et al. (2007) (also see section 3.2). More recent studies using additional data and summary statistics have found the need to include a secondary halo property (e.g. Yuan et al. 2021; Wang et al. 2022; Beltz-Mohrmann et al. 2023; Contreras et al. 2023), an effect known as galaxy assembly bias (Gao et al. 2005; Wechsler et al. 2006). Although the HOD model has undergone extensive refinement, its inherently empirical nature and simplicity indicate that it may fail in detail when confronted with ever more precise data. Thus, a pivotal question is ensuring that the model connecting galaxies to dark matter is sufficiently flexible to avoid systematic biases in the cosmology inference.

To date, most simulation-based studies have adopted a fiducial model for the galaxy-halo connection without conducting extensive tests to demonstrate its robustness. Some studies have shown convergence between a few different HOD models (Yuan et al. 2022a), or for a wider range of models for the galaxy-halo connection (Reddick et al. 2014), but they have not typically demonstrated that such models span the necessary model space. This is essential in the era of precision cosmology, given that different galaxy assembly bias assumptions have been shown to impact cosmological parameters (Lange et al. 2019). Here, we present new tests that can significantly increase our confidence in the assumed galaxy model.

Beyond the question of robustness, modeling non-linear scales requires advanced summary statistics. While the large-scale density field is adequately represented by the 2PCF, non-linear scales require a more complex approach. The k-th nearest neighbor statistics (kNN; Banerjee & Abel 2021a; Banerjee et al. 2022; Yuan et al. 2023a) have emerged as a promising solution due to their informative nature, computational ease, and interpretative clarity.

In this work, we describe in detail the methodology we use to derive precise yet dependable cosmology constraints with kNNs from non-linear scales in the Beyond-2pt blind mock challenge. The paper is structured as follows. Section 2 describes the blind mock data. Section 3 details our simulation-based modeling framework, including simulations, the HOD, our summary statistics, and emulation. Our results, accompanied by methods for selecting scale cuts and validating HOD modeling, are presented in section 4. We discuss our methodology's limitations and forecast potential advancements in section 5. Conclusions are drawn in section 6.
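To make the statistic concrete, the following is a minimal sketch of one common kNN summary in the spirit of Banerjee & Abel (2021a): the empirical CDF of distances from volume-filling random query points to their k-th nearest galaxy. The function and parameter choices are illustrative; the measurements in this work are made on transverse (projected) scales, whereas this sketch shows the basic 3D construction.

```python
# Minimal sketch of a kNN summary statistic: the empirical CDF of distances
# from volume-filling random points to their k-th nearest data (galaxy)
# point, in a periodic box. Box size, k and n_query are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def knn_cdf(galaxies, boxsize, k, n_query=100_000, seed=0):
    rng = np.random.default_rng(seed)
    queries = rng.uniform(0, boxsize, size=(n_query, 3))
    tree = cKDTree(galaxies, boxsize=boxsize)    # periodic boundary conditions
    dist, _ = tree.query(queries, k=k)           # distances to the 1..k NNs
    d_k = dist[:, -1] if k > 1 else dist         # keep the k-th neighbor distance
    d_sorted = np.sort(d_k)
    cdf = np.arange(1, n_query + 1) / n_query    # empirical CDF evaluated at d_sorted
    return d_sorted, cdf
```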
BLIND MOCK DATA To test our cosmology inference framework, we use the blind mock galaxy catalogs described in .The significance of this mock lies in that the true cosmology deviates significantly from Planck and is kept secret until all analyses were done and finalised.The galaxy model is also kept secret and will be hidden in the future to enable future participation in the blind challenge.In context, this is the first blind mock challenge specifically designed for analyses pipelines that target non-linear scales, with sufficient volume to match the new generation of cosmology surveys. The mock catalogs are created by populating galaxies in -body simulation halo catalogs at = 1, following an HOD prescription designed by the creators of the Beyond-2p challenge.While the mocks assume a flat ΛCDM cosmology in which the CMB acoustic scale ★ is fixed to the value adopted in AbacusSummit (see section 3.1), the input values of other cosmological parameters and the HOD model are kept blind.For creating the halo catalogs, dark-matter-only simulations are run in (2 ℎ −1 Gpc) 3 boxes using the GINKAKU -body solver (Nishimichi et al. prep).The initial conditions are generated using the second-order Lagrangian perturbation theory (Crocce et al. 2006) at = 49, and haloes are identified using the Rockstar halo finder (Behroozi et al. 2013).The redshift-space distortions are implemented by modulating each galaxy's position along the -axis with the redshift-space displacement determined by its velocity.An additional ten (2 ℎ −1 Gpc) 3 boxes are generated with the same cosmology and HOD, but different initial phases for covariance calculations. It is important to point out that these mocks are tuned to roughly produce the correct spatial number density and linear bias of DESI Luminous Red Galaxies, but we were not given any additional details about the underlying model beyond the fact that it is an HOD.We do not know the form of the HOD, the value of any HOD parameters, or the inclusion of any non-vanilla extensions such as galaxy assembly bias.The fact that the HOD is implemented on top of a new simulation and halo code means that there will be significant and complex differences between the mock HOD and any existing implementations.A key aim of this paper is to quantify to what extent we can marginalise over these differences and uncertainties about the model. METHODOLOGY We use a simulation-based model for this analysis.In this section, we introduce the various layers of the model, including the simulation suite, the HOD model, the NN and 2PCF summary statistics, and our neural-net-based emulator. AbacusSummit simulations The AbacusSummit simulation suite (Maksimova et al. 2021) is a set of large, high-accuracy cosmological N-body simulations using the Abacus N-body code (Garrison et al. 2019(Garrison et al. , 2021)), designed to exceed the cosmological simulation requirements of the Dark Energy Spectroscopic Instrument (DESI) survey (Levi et al. 2013).Abacus-Summit consists of over 150 simulations, containing approximately 60 trillion particles at 97 different cosmologies.For this analysis, we use exclusively the "base" configuration boxes in the simulation suite, each of which contains 6912 3 particles within a (2ℎ −1 Gpc) 3 volume, corresponding to a particle mass of 2.1 × 10 9 ℎ −1 ⊙ . 1 The AbacusSummit suite also uses a specialised spherical-overdensity based halo finder known as CompaSO (Hadzhiyska et al. 2022c). 
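The quoted particle mass can be cross-checked from the box size and particle count alone. A short back-of-the-envelope sketch in Python (assuming a Planck-like matter density of Omega_m ≈ 0.315 for the fiducial cosmology, which is an assumption here rather than a number stated in this paragraph) reproduces the 2.1 × 10^9 Msun/h figure:

```python
# Plausibility check of the AbacusSummit "base" box particle mass.
# Omega_m ~ 0.315 is assumed (Planck-like); it is not quoted in the text above.
RHO_CRIT = 2.775e11      # critical density [h^2 Msun / Mpc^3]
OMEGA_M = 0.315          # assumed fiducial matter density parameter
L_BOX = 2000.0           # box side [Mpc/h]
N_PART = 6912            # particles per dimension

m_particle = OMEGA_M * RHO_CRIT * L_BOX**3 / N_PART**3
print(f"particle mass ~ {m_particle:.2e} Msun/h")   # ~ 2.1e9 Msun/h
```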
Here we mainly use the AbacusSummit cosmology grid, a set of 85 base boxes run at 85 distinct cosmologies, at fixed initial phase.The 85 cosmologies are tagged c000-181 non-consecutively and visualised in Figure 1 (see Maksimova et al. 2021 for details).The details of each cosmology are described on the AbacusSummit website. 2 While we only conduct our analysis in ΛCDM cosmology space in this paper, we train the emulator in a larger CDM+ eff +running space with 8 parameters: the baryon density = Ω ℎ 2 , the cold dark matter density cdm = Ω cdm ℎ 2 , the amplitude of structure 8 , the spectral tilt , running of the spectral tilt , the density of massless relics eff , and dark energy equation of state parameters 0 and (() = 0 + (1 − ) ). A set of observational constraints were followed in the design of the parameter grid.For example, is varied while holding the equation of state at = 0.333 constant, so that the low-redshift cosmic c100-126 is a linear derivative grid set up around c000, with symmetric pairs along all 8 parameter axes.The grid also includes additional pairs along cdm , 8 , , 0 , and with smaller step sizes. c130-181 forms an emulator grid that provides a wider coverage of the 8-dimensional parameter space.This was done by placing electrostatic points on the surface of an 8-dimensional ellipsoid, whose extent was chosen to be 3 to 8 standard deviations beyond current constraints from the combination of CMB and large-scale structure data.The grid excludes anti-podal reflected points and includes extra excursion along 8 . These 85 cosmologies span the CDM+ eff +running parameter space and form the basis of our simulation-based model.It is worth pointing out that while we train our emulators over this broader cosmology model space, the analysis presented in the rest of this paper only considers the ΛCDM space, fixing the other parameters to their Planck2018 values.Additionally, the neutrino mass is fixed at 60meV and the Hubble parameter ℎ is fully degenerate with matter density by fixing the acoustic scale across all 85 cosmologies.We refer the readers to Maksimova et al. 2021 for full description and justification of the cosmology design. The Halo Occupation Distribution (HOD) The galaxy-halo connection model we use for generating the realistic mocks and for the forward model is known as the Halo Occupation Distribution (HOD; e.g.Zheng et al. 2005Zheng et al. , 2007)), which probabilistically populates dark matter haloes with galaxies according to a set of halo properties.For a Luminous Red Galaxy (LRG) sample, the HOD is well approximated by a vanilla model given by: where the five vanilla parameters characterizing the model are cut , 1 , , , .The parameter cut sets the halo mass at which the mean central occupation is 0.5, while 1 characterises the typical halo mass that hosts one satellite galaxy. describes the steepness of the transition from 0 to 1 in the number of central galaxies, is the power law index on the number of satellite galaxies, and cut gives the minimum halo mass to host a satellite galaxy.We have added a modulation term nLRG cent () to the satellite occupation function to remove satellites from haloes without centrals3 .We have also included an incompleteness parameter ic , which is a downsampling factor controlling the overall number density of the mock galaxies.This parameter is relevant when trying to match the observed mean density of the galaxies in addition to clustering measurements.By definition, 0 < ic ⩽ 1. 
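As a concrete illustration of the vanilla HOD just described, the sketch below implements the commonly used error-function central occupation and power-law satellite occupation (in the spirit of Zheng et al. 2007 and the AbacusHOD parameterisation), including the central-occupation modulation of the satellites and the incompleteness factor f_ic. The functional form shown here is a standard choice consistent with the description above, not necessarily the paper's exact equation, and all parameter values are placeholders.

```python
import numpy as np
from scipy.special import erfc

def n_cen(M, logM_cut, sigma, f_ic=1.0):
    """Mean central occupation: a smoothed step in log halo mass around M_cut."""
    return 0.5 * f_ic * erfc((logM_cut - np.log10(M)) / (np.sqrt(2.0) * sigma))

def n_sat(M, logM_cut, logM1, kappa, alpha, sigma, f_ic=1.0):
    """Mean satellite occupation: power law above kappa*M_cut, modulated by the
    central occupation so that haloes without centrals host no satellites."""
    M_cut, M1 = 10.0 ** logM_cut, 10.0 ** logM1
    base = np.clip((M - kappa * M_cut) / M1, 0.0, None) ** alpha
    return base * n_cen(M, logM_cut, sigma, f_ic)

# Populating a toy halo catalogue: centrals as Bernoulli draws, satellites as Poisson.
rng = np.random.default_rng(42)
halo_mass = 10.0 ** rng.uniform(12.0, 15.0, size=100_000)      # toy masses [Msun/h]
p = dict(logM_cut=12.8, logM1=14.0, kappa=0.5, alpha=1.0,
         sigma=0.3, f_ic=0.9)                                   # placeholder values
has_cen = rng.random(halo_mass.size) < n_cen(halo_mass, p["logM_cut"], p["sigma"], p["f_ic"])
n_sats = rng.poisson(n_sat(halo_mass, **p))
```

In the actual analysis these draws are performed by the optimised AbacusHOD code referenced later in this section.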
In addition to determining the number of galaxies per halo, the standard HOD model also dictates the position of velocity of the galaxies.For the central galaxy, its position and velocity are set to be the same as those of the halo center, specifically the L2 subhalo center-of-mass for the CompaSO haloes.For the satellite galaxies, they are randomly assigned to halo particles with uniform weights, each satellite inheriting the position and velocity of its host particle. To model redshift-space distortion, we also include an additional HOD extension known as velocity bias, which biases the velocities of the central and satellite galaxies.This is shown to to be a necessary ingredient in modeling BOSS LRG redshift-space clustering on small scales (e.g.Guo et al. 2015;Yuan et al. 2021).Velocity bias has also been identified in hydrodynamical simulations and measured to be consistent with observational constraints (e.g.Ye et al. 2017;Yuan et al. 2022b). We parametrise velocity bias through two additional parameters: vel,c is the central velocity bias parameter, which modulates the peculiar velocity of the central galaxy relative to the halo center along the line-of-sight (LoS).Specifically in this model, the central galaxy velocity along the LoS is thus given by where L2,z denotes the LoS component of the central subhalo velocity, ( LoS ) denotes the Gaussian scatter, and vel,c is the central velocity bias parameter.By definition, vel,c = 0 corresponds to no central velocity bias.We also define vel,c as non-negative, as negative and positive are fully degenerate observationally. The second parameter is vel,s , the satellite velocity bias parameter, which modulates how the satellite galaxy peculiar velocity deviates from that of the local dark matter particle.Specifically, the satellite velocity is given by where p,z denotes the line-of-sight component of particle velocity, and vel,s is the satellite velocity bias parameter. vel,s = 1 indicates no satellite velocity bias, i.e. satellites perfectly track the velocity of their underlying particles.So far, the baseline HOD is parameterised by 8 parameters, cut , 1 , , , , vel,c , vel,s , and ic .We now introduce three physically motivated extensions in addition to the baseline HOD: • cent or sat are the concentration-based secondary bias parameters for centrals and satellites, respectively.Also known as galaxy assembly bias parameters. cent = 0 and sat = 0 indicate no concentration-based secondary bias in the centrals and satellites occupation, respectively.A positive indicates a preference for lower-concentration haloes, and vice versa. • cent or sat are the environment-based secondary bias parameters for centrals and satellites, respectively.The environment is defined as the mass density within a env = 5ℎ −1 Mpc tophat of the halo center, excluding the halo itself. cent = 0 and sat = 0 indicate no environment-based secondary bias.A positive indicates a preference for haloes in less dense environments, and vice versa. • is the baryon feedback parameter, which modulates how the radial distribution of satellite galaxies within haloes deviates from the radial profile of the halo (mimicking baryonic effects). = 0 indicates no radial bias, i.e. satellites follow the dark matter distribution. 
> 0 indicates a more extended (less concentrated) profile of satellites relative to the dark matter, and vice versa.It has also been shown that galaxy selection based on luminosity and SFR can bias the satellite profile (Orsi & Angulo 2018).Thus, the parameter can additionally marginalise over some selection effects. Throughout the rest of this paper, we conduct fits with three different HOD models, each including a subset of the three extensions.To keep future participants blind, we refer to these models as models A, B, and C. A key objective of this paper is to compare these three different models and demonstrate that one model is validated by the data while the other two are not (section 4.2).For completeness, we also tested a fourth model that includes all aforementioned extensions, but we do not see any improvements in goodness-of-fits and model evidence so we omit that from this paper.It is important to keep the HOD model compact so as to not dilute the cosmological information in the data. For computational efficiency, we adopt the highly optimised Aba-cusHOD implementation, which significantly speeds up the HOD calculation per HOD parameter combination (Yuan et al. 2021).The code is publicly available as a part of the abacusutils package at https://github.com/abacusorg/abacusutils.Example usage can be found at https://abacusutils.readthedocs.io/en/latest/hod.html. 2D-𝑘NN The -th nearest neighbor (NN) are highly informative statistics that summarise the spatial distribution of data points by tabulating their distances to a set of random or uniform volume-filling query points.Banerjee & Abel (2021a,b) showed through Fisher forecasts that NNs are highly informative on cosmological parameters and are sensitive to all orders of -point correlation functions.Operationally, for each , we identify each query point's -th nearest data points and record their distances , where loops through all the query points and represents the order.These s can then be summarised with a cumulative distribution function (CDF) as a function of length.The ensemble of these CDFs for all s forms our NN summary statistics.Yuan et al. (2023a) generalised the standard NN to 2D to disentangle the redshift-space distortion features from the projected galaxy clustering.Specifically, we decompose the distance between each query-data pair into a and a component, and we bin both projections into a 2D histogram.Then we calculate a 2D CDF where each bin accumulates the counts from all bins with smaller and .Finally, we normalise the cumulative counts by the total number of query points.Conceptually, the 2D NN-CDF is exactly analogous to the default 1D NN-CDF, except we tabulate distances and the cumulative statistics in 2D.Yuan et al. (2023a) showed that the 2D NNs are richly informative and fully derive other summary statistics such as the 2-point correlation function (2PCF), counts-incells, and the void probability function (VPF).The study also showed that the 2D NNs place tight constraints on HOD parameters in a realistic setting. For this analysis, we adopt the 2D NN statistics to comprehensively extract information about galaxy density and clustering on non-linear scales.Specifically, we separately analyze both the querydata NN (RD-NN) and the data-data NN (DD-NN).Yuan et al. 
(2023a) described two flavors of NNs in detail, but we briefly describe them here.RD-NN represents the standard setup, where we put down a volume-filling set of uniform or random query points and we tabulate their distances to the data points.With DD-NN, we use the data points as query points, thus we tabulate the distance between data points and data points. Although the two statistics might seem similar at first glance, they reveal distinct facets of the galaxy field.The RD-NN is primarily a density statistic, capturing density counts in specific volumes (counts-in-cells) and detailing the size and distribution of voids.In contrast, the DD-NN enumerates the galaxy pairs in group and cluster regions.It also serves as a generating function for the 2PCF and closely relates to high-order correlation functions.The unique information provided by the two statistics becomes apparent in their respective constraints.We also conduct cross-validation tests between the two statistics to assess the robustness of our results.For computational efficiency, we adopt the parallel NN implementation written in Rust4 and wrapped for Python5 . We set up our 2D NN data vector as follows: we use 8 logarithmic bins along the direction between 0.32ℎ −1 Mpc and 63ℎ −1 Mpc, and 5 logarithmic bins along the direction between 1ℎ −1 Mpc and 32ℎ −1 Mpc.We include = 1, 2, 3, ..., 9.For RD-NN, we set up the query points as a uniformly spaced grid with cell length 10ℎ −1 Mpc.We also remove bins where the CDF values are less than 0.05 or greater than 0.95 as those bins are noisy and contain little physical information.As a result, the DD-NN data vector is of length 143 whereas the RD-NN data vector is of length 114.The minimum transverse scale after removing these bins is approximately 3ℎ −1 Mpc for RD-NN and 0.5ℎ −1 Mpc for DD-NN. We justify our scale choices as follows.Our methodology is focused on non-linear scales.The maximum scale in projected separation is chosen to obtain some power from linear scales but not rely on well-known large-scale features such as the BAO.The modest maximum projected separation also reduces any potential biases in the jackknife covariance calculation.The maximum separation along the line of sight is designed to capture the finger-of-god and largescale Kaiser effects.The fiducial minimum scale reaches into the 1-halo regime, which is well measured given the volume but which we expect to be significantly affected by halo definition and baryon feedback.After removing noisy bins, the RD-NN should be largely insensitive to 1-halo scale physics, but the DD-NN remains sensitive to very small scales.Thus, it is essential that we validate our modeling of the smallest scales for this statistic. Figure 2 and Figure 3 visualise the 2D NN statistics up to the first four orders.We show the measurement on the redshift-space mocks as well as the measurement on an unclustered Poisson random sample of the same size for comparison.The difference between the solid and dashed contours denotes the informative features in the statistics.The corresponding covariance matrices are calculated from 1250 jackknife volumes cut from the provided boxes.Each box is of length 400ℎ −1 Mpc, sufficient for the scales considered.The covariance matrices are visualised in Figure A1 and Figure A2, respectively. 
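To make the construction above concrete, the following sketch tabulates a 2D kNN-CDF with a k-d tree: for each query point it finds the k-th nearest data point, splits that separation into transverse and line-of-sight components, and accumulates a normalised 2D CDF. This is a simplified illustration of the procedure described in the text (the production pipeline uses the parallel Rust implementation referenced above); the binning shown loosely follows the quoted choices, and all variable names are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_cdf_2d(data, queries, k, rp_bins, pi_bins, boxsize):
    """2D kNN-CDF: fraction of query points whose k-th nearest data point lies
    within given transverse (rp) and line-of-sight (pi) separations."""
    tree = cKDTree(data, boxsize=boxsize)                 # periodic box
    _, idx = tree.query(queries, k=k)                     # k nearest data points
    kth = data[idx[:, -1]] if k > 1 else data[idx]        # keep only the k-th
    # minimum-image separation vector, line of sight taken along the z-axis
    dvec = (kth - queries + 0.5 * boxsize) % boxsize - 0.5 * boxsize
    rp = np.hypot(dvec[:, 0], dvec[:, 1])
    pi = np.abs(dvec[:, 2])
    counts, _, _ = np.histogram2d(rp, pi, bins=[rp_bins, pi_bins])
    # accumulate counts from all bins with smaller rp and pi, then normalise
    return np.cumsum(np.cumsum(counts, axis=0), axis=1) / len(queries)

# Illustrative binning, loosely following the choices quoted above
rp_bins = np.logspace(np.log10(0.32), np.log10(63.0), 9)   # 8 bins in rp [Mpc/h]
pi_bins = np.logspace(0.0, np.log10(32.0), 6)               # 5 bins in pi [Mpc/h]
```

For the RD-kNN the queries would be a uniform grid of points (10 Mpc/h spacing in the text); for the DD-kNN the galaxies themselves serve as queries, which is why the first meaningful order is k = 2 (a data point used as its own query is its own nearest neighbour).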
For comparison, we also make several references to the 2-point correlation function (2PCF) in this analysis. Specifically, we make use of the redshift-space 2PCF ξ(r_p, π), which can be computed using the Landy & Szalay (1993) estimator:

ξ(r_p, π) = (DD − 2 DR + RR) / RR,

where DD, DR, and RR are the normalised numbers of data-data, data-random, and random-random pair counts in each bin of (r_p, π); r_p and π are the transverse and line-of-sight separations in comoving units. The redshift-space 2PCF ξ(r_p, π) in principle represents the full information content of the 2PCF.

For the rest of the paper, we adopt a Gaussian likelihood function for both kNN statistics, specifically

log L = −(1/2) (D − M)^T C^{-1} (D − M) + const,

where D is the kNN data vector, M is the model prediction, and C is the covariance matrix. We do not include a mean density term in the likelihood, as the mean density information is naturally captured by the kNNs.

We test the assumption of a Gaussian likelihood as follows. If the likelihood of the summary statistic is Gaussian distributed, the χ² values should also follow a χ² distribution with degrees of freedom determined by the number of bins. Figure 4 shows the comparison of 1883 independent realizations of mock kNN statistics, calculated on the AbacusSummit small boxes, with analytic χ² distributions. Both of our statistics are consistent with a χ² distribution, and thus our Gaussian assumptions are valid. There is a high-χ² tail that exceeds the analytic prediction in the DD-kNN case; the additional scale cuts employed in the following sections remove this excess.

Forward model and emulator

Having defined the summary statistics and likelihood function, we now describe the modeling methodology that predicts the relevant summary statistics given arbitrary input parameters in cosmology and HOD. Specifically, we use a forward model as follows. Starting from dark-matter-only simulations parameterised by cosmological parameters, we populate the simulated haloes with galaxies using parameterised HOD models. We then compute the kNN statistics on the resulting mock galaxy density field. We compute the likelihood by comparing the mock-predicted kNN with the kNN measured on the target sample, incorporating the jackknife covariance matrix and assuming a Gaussian likelihood function.

Due to the high computational cost of running large, high-resolution simulations, we cannot run arbitrary cosmologies on the fly; instead, we rely on the 85 cosmologies in the AbacusSummit suite to build an approximate emulator model. To achieve this, we populate each cosmology box with a few thousand HOD models and record the resulting kNNs to form a large training set. Specifically, we follow the approach of Yuan et al. (2022a): we take advantage of the high efficiency of the AbacusHOD code and run MCMC chains in the HOD parameter space against the target data vector at each cosmology. We stop the MCMC chains after 20,000 evaluations in each box and select samples whose log-likelihood satisfies log L > −9000. This approach limits the subsequent emulator training to a compact region in the cosmology+HOD parameter space, reducing the effect of outliers and improving the emulator precision.
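Before moving on to the emulator, here is a minimal sketch of the Gaussian likelihood above and of the χ²-based Gaussianity check used for Figure 4. Function and variable names are illustrative.

```python
import numpy as np
from scipy import stats

def gaussian_loglike(data_vec, model_vec, cov):
    """Gaussian log-likelihood for a kNN data vector (up to an additive constant)."""
    resid = np.asarray(data_vec) - np.asarray(model_vec)
    return -0.5 * resid @ np.linalg.solve(cov, resid)

def chi2_of_realisations(mock_vectors, cov):
    """chi^2 of each independent mock realisation about the sample mean;
    for a Gaussian-distributed statistic these should follow a chi^2
    distribution with dof ~ number of bins (cf. Figure 4)."""
    resid = mock_vectors - mock_vectors.mean(axis=0)
    cinv = np.linalg.inv(cov)
    return np.einsum("ij,jk,ik->i", resid, cinv, resid)

# e.g. compare np.histogram(chi2_of_realisations(mocks, cov), density=True)
# against stats.chi2(df=mocks.shape[1]).pdf(x)
```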
Having selected the training sample, we now build an emulator, or surrogate model, that takes the cosmology and HOD parameters as inputs and outputs the desired kNN statistics. For the emulator model, we adopt a fully connected neural network with 5 layers and 500 nodes per layer, using Randomised Leaky Rectified Linear Unit (RReLU) activations. We train the network with a mini-batch routine, the Adam optimiser, and a mean-squared loss function in which the diagonal terms of the covariance matrix serve as bin weights.

To test the performance of the emulator, we remove 9 cosmologies (c001-004 and c171-175) from the training set and reserve them as out-of-sample tests. Figure 1 visualises the distribution of the training and test sets in ΛCDM parameter space. At each of the 9 cosmologies, we randomly sample 400 HODs with log L > −9000 and combine them to form a test set of size 3600. We compute the mean absolute error of the emulator on all 9×400 test samples and find a mean error that is sub-dominant to the data error, approximately 30-60% of the jackknife error. We further conduct tests to ensure the emulator is not suffering from systematic biases towards the mean (see Figure 10 of Yuan et al. 2023b). Finally, we summarise the emulator errors by computing an emulator covariance matrix from the hold-out test samples and adding it to our jackknife covariance matrices. We provide the emulator correlation matrices in Figure B1 and Figure B2. The final covariance matrix entering our likelihood function is the combination of the data sample variance (C_jackknife, which also accounts for the phase difference between the data and the model) and the emulator error (C_emulator):

C = C_jackknife + C_emulator.

To summarise, our model starts from the AbacusSummit simulations, and we model the galaxy distribution with AbacusHOD, invoking several extensions to the standard HOD model. We then compute the desired summary statistics and emulate them as a function of cosmology and HOD using a neural-net model. We finally confront our forward model with the mock data by sampling the parameter posteriors and deriving constraints.

RESULTS

In this section, we present the ensemble of results obtained with different analysis choices and then describe the validation tests and model-selection tests carried out to identify our final submitted results. We adopt flat priors for each parameter, and Table 1 summarises the prior bounds. For posterior sampling, we employ the efficient nested sampling package dynesty (Speagle & Barbary 2018; Speagle 2020). We initiate each nested sampling chain with 2000 live points and a stopping criterion of Δ log Z = 0.01, where Z is the evidence.

We note that all analyses in this section were conducted blind and all results were finalised before unblinding. We do not present or make reference to any analysis done post-unblinding. We also do not omit blind analyses that might be deemed unsatisfactory post-unblinding. However, the text presented in this section was written post-unblinding and makes reference to the true values for the sake of discussion.
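A minimal sketch of the nested-sampling setup just described, using dynesty with flat priors, 2000 live points, and the quoted stopping criterion. The prior bounds, toy data, and the stand-in "emulator" below are placeholders for the actual pipeline (the real bounds are in Table 1).

```python
import numpy as np
from dynesty import NestedSampler

# Toy setup: flat priors on a small parameter vector (bounds are placeholders,
# not the values in Table 1), and a stand-in "emulator" returning a model vector.
lo = np.array([0.25, 0.70])           # e.g. lower bounds on (Omega_m, sigma_8)
hi = np.array([0.40, 0.90])           # e.g. upper bounds
data = np.array([0.31, 0.81])         # toy data vector
cov = np.diag([0.01, 0.01]) ** 2      # toy covariance

def emulator(theta):
    return theta                      # placeholder for the trained kNN emulator

def prior_transform(u):
    """Map the unit cube to the flat prior box."""
    return lo + u * (hi - lo)

def loglike(theta):
    resid = data - emulator(theta)
    return -0.5 * resid @ np.linalg.solve(cov, resid)

sampler = NestedSampler(loglike, prior_transform, ndim=len(lo), nlive=2000)
sampler.run_nested(dlogz=0.01)        # stopping criterion on the remaining evidence
results = sampler.results             # posterior samples and weights
```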
Cosmology posteriors Figure 5 presents the 2D marginalised posteriors when fitting the RD-NN in the ΛCDM cosmology space.Note that for this analysis, we fix 0 = −1 and = 0.The Hubble parameter ℎ is fully degenerate with matter density by fixing the acoustic scale.The three colours correspond to constraints from the different HOD models.The legend summarises the goodness-of-fits, and the contours show the 1-2 constraints.For the RD-NN data vector, we use = 1, 2, 3, ..., 9, covering scales 3ℎ −1 Mpc < < 63ℎ −1 Mpc and 0.5ℎ −1 Mpc < < 31.6ℎ−1 Mpc.The fainter solid lines correspond to maximum likelihood points.We omit ℎ as it is fixed to the acoustic scale.The three contours show good consistency with each other and the input values.The contours correspond to 68% and 95% confidence intervals. We see that all three models correctly recover the true cosmology and derive mutually consistent cosmology.This shows that the RD-NN is a powerful statistic that captures non-linear information without being necessarily sensitive to the details of HOD modeling.This is perhaps not surprising as we have configured it with a query spacing of 10ℎ −1 Mpc, thus making it mostly sensitive to densities on scales of a few Megaparsecs or larger and removing much of the sensitivity to 1-halo physics.It is also worth reminding that the RD-NN is a density statistic that measures the configuration of mass around volume-sampled queries, meaning that it is more sensitive to the mass distribution around voids.This distinguishes RD-NN from clustering statistics (DD-NN and 2PCF) that measure galaxygalaxy distances, which sample mostly highly clustered regions that are strongly affected by 1-halo physics. In Figure 6, we show the cosmology posteriors obtained from fitting the DD-NN.The three models are mutually in up to 2 tension with each other.The large goodness-of-fit values indicate that our model is not flexible enough to produce the features in the DD-NN.Given these discrepancies, we either need more flexible models or must apply further scale cuts to achieve reliable results.Note that this decision was made before unblinding, and was based primarily on observed tensions among the models and the poor 2 value.Without access to additional information about the target galaxy sample -such as spectral energy distributions, selection criteria, or imaging data -that would typically be available in a real survey, we do not have straightforward ways to decouple the galaxy formation model from cosmology.Hence, we choose to use additional scale cuts to improve the fit. Figure 7 shows the best-fit 2 as a function of minimum scale cuts.The first two columns show the fiducial cuts used for the RD-NN and DD-NN data vectors.The last two columns show best-fit 2 when progressively larger minimum scale cuts are applied to the DD-NNs.We see that we only achieve a good fit 2 /d.o.f.≈ 1 at > 5ℎ −1 Mpc. Figure 8 shows the DD-NN cosmology posterior when an additional cut > 5ℎ −1 Mpc is applied.We see that indeed, the posterior constraints of the three models are now consistent with each other.Thus, we conclude that our current HODs can only reliably model DD-NN at scales greater than 5ℎ −1 Mpc, and we only report our DD-NN constraints at > 5ℎ −1 Mpc for the blind challenge.However, we still include the full-scale constraints in our discussions to motivate future work. The fact that the current HOD models failed to fit DD-NNs on very small scales suggests that the DD-NNs are highly informative on HOD modeling.In Yuan et al. 
(2023a), we demonstrated that the DD-NN is significantly more constraining on HOD and assembly bias than the standard redshift-space 2PCF.We showed that the 2PCF is a strict marginalization of the DD-NN.Conceptually, DD-NN captures extra information because while the 2PCF encodes the average number of neighbors of a galaxy at a certain distance, the DD-NN additionally encodes the ordering of neighbors at any distance.Thus, DD-NN additionally captures the phase space and topological configuration information. There are several potential reasons why our model cannot produce the DD-NN below 5ℎ −1 Mpc.First of all, without additional information on the galaxy sample that would otherwise be available in a realistic survey, we cannot meaningfully improve our galaxyhalo modeling.For example, the photometric selection allows us to assess the completeness of the sample at different magnitudes.The colours and morphology would inform us of the galaxy type.The spectra can additionally give us statistical descriptions of key galaxy properties such as stellar mass, star formation rate, and metallicity.All these sources of information are useful in building up priors on the galaxy model.For example, Wang et al. (2022) showed that a magnitude-limited sample in SDSS can be robustly described by an HOD model that includes assembly bias.A series of recent studies have also combined realistic selections with state-of-the-art hydrodynamical simulations and semi-analytic models of galaxy formation to build up priors on the appropriate galaxy-halo connection model (e.g.Xu et al. 2021;Hadzhiyska et al. 2022b,a;Yuan et al. 2022b). An additional systematic effect is that the target mock is generated with a different simulation code and different halo finder, which can all produce small-scale features that the DD-NN is potentially sensitive to.This is an issue that we would need to be able to marginalise over in our forward model if we want to exploit 1-halo scales.We propose to assess this point in a future post-unblinding analysis where we can disentangle our uncertainty about the true HOD and the systematic effects associated with simulations and halo finding. To summarise, the DD-NNs are potentially highly sensitive to small-scale effects such as the details of galaxy-halo connection modeling, halo finding, and simulation codes.For the sake of this analysis, we can protect ourselves from these systematics by applying a relatively conservative scale cut of > 5ℎ −1 Mpc.In contrast, the RD-NNs appear to be significantly less sensitive to these effects as long as we choose appropriate query spacing. Figure 9 summarises the cosmology constraints from the different analysis choices.These include all blind analyses we carried out for the challenge.The -axis reports the difference between the inferred and true values, with the green lines denoting 0. 
The blue lines show the posterior mean and 1 constraints.The orange triangles show the maximum likelihood points.Immediately, all our results are remarkably consistent with each other.We largely derive unbiased cosmology constraints, except in when we use the full-scale range in DD-NN.The occasional disagreement between the maximum likelihood points and the posterior constraints indicates potential prior volume / projection effects.This warrants more careful investigation in future application of our methods.Finally, we highlight the stringent constraints on 8 from both NN statistics.The full DD-NN constrains 8 error bars to less than 0.01.Even after the scale cuts, the 1 constraints are still below 0.02 for both RD-NN and DD-NN, significantly stronger than existing ∼ 5% constraints from galaxy clustering in BOSS/SDSS (e.g.Alam et al. 2017;Lange et al. 2022;Kobayashi et al. 2022;Yuan et al. 2022a;Zhai et al. 2023).While this extra precision comes directly from the increased volume in this mock challenge relative to BOSS, what we have demonstrated is that we can effectively take advantage of the extra volume on non-linear scales, which is essential for taking full advantage of the data that will be imminently available with DESI.These constraints also demonstrate the ability of our novel statistics to capture small-scale information.Assuming we are able to construct more informed galaxy-halo connection models with DESI, this should enable us to push our analysis down to smaller scales and is expected to yield even tighter constraints. Validation of galaxy-halo connection modeling To derive reliable cosmology constraints, it is essential to demonstrate that the galaxy-halo connection model is sufficiently flexible to describe the relevant features in the galaxy-halo physics, but not so flexible that it dilutes the cosmology constraints.There are several ways one can inform and validate the galaxy-halo connection model.We summarise them here: • Simulated models: Simulated galaxy models such as hydrodynamical simulations and semi-analytic models are excellent sandboxes to gain physical intuitions about the galaxy sample and identify necessary ingredients in their galaxy-halo connection models.However, hydrodynamical simulations are currently too small to calibrate our models to the precision necessary for next-generation surveys.Semi-analytical models are cheaper and can be generalised to larger volumes, but they are still too expensive right now to fairly sample the model space while maintaining cosmological volumes (see Perez et al. 2023 for the current state-of-the-art). • Galaxy observations: Leveraging existing observations and information about the galaxy sample is a powerful and data-driven way to understand and constrain the galaxy-halo connection.For example, one can directly constrain the halo mass given observed galaxies via gravitational lensing.There are also patches of the sky that are observed by a variety of facilities (such as the COSMOS field), yielding deep images across a wide wavelength domain and spectra.One can leverage these datasets to understand the completeness, mass function, and other galaxy properties.These observations can be richly informative but require careful modeling to connect to theory models of dark matter. 
• Mock challenges: Showing unbiased cosmology constraints on simulated galaxy mocks is perhaps the most direct way to show robustness.This was the primary motivation for the Beyond-2p challenge.However, the mocks are still generated relying on some assumptions about galaxy formation physics or the galaxy-halo connection.Thus, while these tests help develop confidence (the larger the variety of galaxy models, the better), it does not rule out the possibility that unforeseen galaxy features in the real Universe can bias the inference. • Statistical validations: Statistical tests such as goodness-of-fit and cross-validation are model-agnostic ways to show consistency with data.The idea is to confront the model with all relevant aspects of the data and show that the model can describe the data without internal tensions.In this section, we will first test the flexibility of our HOD models with goodness-of-fit metrics.Then we select the "correct" HOD model via cross-validations, where we break the data into multiple parts that contain different information and demonstrate that the model calibrated on one part can consistently predict another part.A sufficiently flexible model with correct physical assumptions should achieve good fits and self-consistently predict all relevant aspects of data.In the current work, because we have access to multiple summary statistics that access different subsets of the information content of the galaxy field, we can conduct this crossvalidation test by fitting the model on one summary statistic and checking if the constrained model can predict another statistic. In the context of this mock challenge, we cannot leverage simulated galaxy models or galaxy observations to learn the galaxyhalo connection model, but we can conduct statistical tests to assess model validity.To demonstrate this point, we employed three different HOD models in our analysis, adding different combinations of assembly bias and baryon feedback prescriptions to the vanilla HOD.The objective of this section is to compare the three models with goodness-of-fit metrics and cross-validation tests to show that one of the three models is preferred by the data and dependable on the relevant scales. First, we compare the three models on the basis of their 2 /d.o.f.(see Figure 7 and the legends of Figure 5-8).For RD-NN, all three models achieve good fits to the mock data, with models A and B showing slightly better 2 .For DD-NN, models B and C yield significantly better fits on small scales.With the > 5ℎ −1 Mpc cut, the models A and B result in better fits.These comparisons are not conclusive but show that model B performs consistently well in configurations.We do not show joint fits of the DD-NN and RD-NN due to limitations in modeling the joint covariance matrix. 
The reduced 2 metric gives a rough characterisation of the goodness-of-fit but offers a poor assessment of possible over-fitting.Instead, we perform cross-validation tests where we take the fits on one summary statistic to predict another summary statistic, relying on the notion that the model with the right physical assumptions should extrapolate to predict additional data.Our summary statistics are particularly well suited for this test because the different summary statistics capture disjoint information in the density field.Specifically, while the DD-NN and the 2PCF capture the clustering information and emphasise the structure of density peaks on small scales, the RD-NN capture the density information and transition between voids and density peaks (Yuan et al. 2023a).It is thus particularly useful to conduct cross-validations between a clustering statistic such as (DD-NN or the 2PCF) and a density statistic (such as RD-NN).This idea is not completely new.A similar cross-validation was done in Wang et al. (2022) between clustering and counts-in-cell statistics to validate galaxy assembly bias models.A similar cross-validation scheme was also used in recent assessments of potential tensions between galaxy clustering and galaxy-galaxy lensing statistics (e.g.Leauthaud et al. 2017;Yuan et al. 2020;Lange et al. 2020;Contreras et al. 2023). In Figure 10, we show the RD-NN predictive distribution derived when the models are fit to the DD-NNs.In particular, the -axis shows the difference between the RD-NN prediction with the three models and the true RD-NN on the blind mock, normalised by the jackknife errors.The three shaded lines represent the 1 predictions for the three HOD models.Model B successfully predicts the correct RD-NN, while the two other models fail.This indicates that model B is both sufficiently flexible to describe the relevant aspects of the data (good 2 /d.o.f.) and carries the right physical assumptions such that it can predict additional statistics it is not fit on.It is also worth noting that all three models make reliable predictions at the beginning of each sub-block (shown by vertical dashed lines), corresponding to small separation, but fail at larger values.Thus, it appears that model B might be better at capturing the correct redshift-space distortion compared to the other models.All three models make consistent predictions at large (as in NN, not spatial frequency), which makes sense because large orders probe larger scales and contain less information about the galaxy-halo connection. Figure 11 shows the redshift-space 2PCF ( , ) predictive distribution when the models are fitted to the RD-NNs.The -axis on the left side shows the difference between the predicted ( , ) and the true ( , ), normalised by the jackknife errors.We have broken the 2D ( , ) data vector into different rows, each row corresponding to a .We see that all three models can predict the 2PCF at large , but none of the three models correctly predict the 2PCF all the way down to = 1ℎ −1 Mpc.However, comparing the three models, model B predicts the 2PCF consistently down to = 2ℎ −1 Mpc whereas the other two models deviate from the true values at larger scales. To summarise, model B is strongly preferred by the mock data, both because of good 2 /d.o.f. 
and its superior ability to predict statistics that it was not fit on.Model B performed better in both The trend in the disagreement shown in Figure 11 also shows that the current failure to model scales smaller than < 3ℎ −1 Mpc might be due to insufficient flexibility in modeling the small-scale finger-of-god.While we have included velocity bias in both central and satellite galaxies in our velocity model, the parameterisation only allows for a constant fractional shift relative to the underlying dark matter.There exists evidence that a more sophisticated velocity bias model is needed.For example, Ye et al. (2017) reports significant mass dependencies in hydrodynamical models.Additionally, the differences between simulation codes and halo finders can introduce significant complexities in the galaxy velocities.This highlights the need for more flexible velocity modeling in our pipeline.Several other possible reasons for poor fits at very small scales include unrealistic models used in the mock production and underestimated covariances.It is particularly important to further validate our covariance matrices due to their large size and significant off-diagonal contribution.We reserve that for a future paper. DISCUSSION So far, we have presented the methodology and results of applying our NN analysis pipeline to the Beyond-2p mocks.While we derive strong and unbiased constraints on cosmology, we highlight a few caveats and provide outlook for future work in this section. Addressed and unaddressed systematics The fact that we derive unbiased cosmology from deeply non-linear scales is significant for a couple of reasons.We are among the first to show that a simulation-based method for non-linear scales can recover unbiased cosmology results when confronted with mock data generated with a completely different simulation.This shows that our modeling framework can be resilient against several simulationrelated systematics, including gravity codes, simulation resolution, and halo finding (e.g.Heitmann et al. 2008;Knebe et al. 2011;Grove et al. 2022).We attribute this resilience to the careful choice of HOD models and scale cuts.This conclusion is made stronger by the fact that our mock data was blind and the true cosmology was far from Planck, which shows that we are not biased by priors. However, one key systematic that this exercise has not fully addressed is galaxy-halo connection modeling.Both the models and the mock data are generated from HOD models.We first note that the HOD itself is actually a broad framework and there is a large amount of potential systematics that can arise from different implementations of HODs (e.g.Hearin et al. 2016).In its most general interpretation, the HOD model simply implies a probabilistic occupation based on halo properties (see Wechsler & Tinker 2018 for a review).Thus, there is a large amount of freedom in terms of the specific function form of the HOD, the probability distribution, and the halo properties used.Moreover, the fact that the mock data and the models used different simulations and halo finders means that the behavior of the same HOD can be different in complex ways.All this is to say that even within the HOD framework, it is non-trivial to recover unbiased cosmologies from two different HOD implementations on top of two different halo finders. Nevertheless, the HOD is an empirical model that is necessarily wrong in detail when compared to the real Universe (e.g.Hadzhiyska et al. 2020;Xu et al. 2021;Beltz-Mohrmann et al. 
2023).Thus, it is important to repeat this exercise with blind mocks constructed from more sophisticated galaxy-halo connection models.One good and available option is to use subhalo abundance matching (SHAM Vale & Ostriker 2006;Lehmann et al. 2017), which relies on the assumed correlation between some galaxy property with some subhalo property to assign galaxies.SHAM is meaningfully different from the HOD as it is based on subhalos instead of halos, and it does not rely on analytic forms.However, SHAM has two drawbacks.First, it is similar to HODs in the sense that it is still commonly based on halo mass or other proxies of mass.Second, it also requires large high-resolution simulations with effective subhalo finding, which can be expensive.Another option is to use machine learning models to paint galaxies over from hydrodynamical simulations or semi-analytic models (e.g.Delgado et al. 2022;Xu et al. 2021;Lovell et al. 2022;McGibbon & Khochfar 2022;Chittenden & Tojeiro 2023).These approaches automatically incorporate a variety of subtleties in galaxy modeling and can be a powerful tool to generate complex and realistic mocks.A third option is to use physically motivated empirical models such as UniverseMachine and Diffstar (Behroozi et al. 2019;Alarcon et al. 2023).These models do not rely on assumptions built into external simulations and provide a self-consistent way of modeling galaxy assembly.Both the second and third options should be highly differentiated from the HOD and would serve well to test against galaxy-halo modeling systematics. Finally, this analysis does not address observational systematics such as survey incompleteness due to survey windows and fiber collisions, or evolution effects over finite redshift ranges.These effects are non-trivial to correct for when calculating novel summary statistics such as the NNs.However, in principle we can forward model these effects using simulation lightcones (Yuan et al. 2023b;Hahn et al. 2022).Specifically, in Yuan et al. (2023b), we present both a flexible and performant framework to explore HOD and redshift evolution models on lightcones, and techniques to correct for survey boundary and fiber collision effects.Similar realistic lightcone mocks were also constructed and tested for large photometric surveys in DeRose et al. (2019a), MacCrann et al. (2018), and To et al. (2021).Combining the forward modeling framework and the modeling techniques laid out in this paper will be powerful in exploiting non-linear scale information in upcoming spectroscopic surveys. The impact of scale cuts In section 4.1, we chose a fiducial scale cut of > 5ℎ −1 Mpc for the DD-NN statistics.Such a scale cut is indeed necessary for this specific analysis as our galaxy-halo connection model is not sufficiently flexible to self-consistently predict features on smaller scales.However, we also pointed out that in a realistic survey, we would have considerably more information on the target galaxy sample to better inform the galaxy-halo connection model.Thus, it is interesting to discuss how our constraints would improve as we push down to smaller scales with improved models. 
Figure 12 shows the 1D marginalised constraints on cosmology as we include smaller in DD-NNs.Clearly, the constraining power of the DD-NN statistics depends strongly on the minimum scale cuts.At > 0.67ℎ −1 Mpc, we see a 4 times improvement in the 8 and constraints and a 2-3 times improvement in the constraints on the density parameters.This demonstrates that while we can already derive highly competitive constraints with our fiducial scale cut of > 5ℎ −1 Mpc, the true power of the DD-NN statistics lies in the smaller scales.This strongly motivates the need for a more informed yet flexible model of galaxy-halo connection in the advent of nextgeneration surveys such as DESI.We note that these constraints are derived with a fixed galaxy-halo connection model that we already showed to be insufficient to describe the smaller scales.Thus, while the improvement in constraining power is informative of the next priority in further developing simulation-based models, it should not be interpreted as a detailed forecast for future analysis.The exact constraining power will depend on the modeling specifics. It is also important to briefly discuss the impact of maximum scale cuts.In this analysis, we are exclusively interested in nonlinear scales, so we chose a modest maximum scale cut of < 63ℎ −1 Mpc, which reaches into the linear regime but does not capture key large-scale observables such as the BAO or the matter-radiation equality scale.In principle, we can gain sensitivity to those scales by extending the range and including much higher orders.We expect this expansion to significantly improve our constraints on the mass density parameters but also to have minimal effect on our 8 and constraints, as those come predominantly from small scales and RSD.We reserve a full-scale NN analysis for a future paper. CONCLUSIONS The key to unlocking the cosmological information on non-linear scales lies in employing simulation-based models and high-order summary statistics.In this paper, we present our methodology for recovering robust cosmology constraints from non-linear scales, leveraging two flavors of the NN statistics, one capturing the density field (RD-NN) and the other capturing the clustering (DD-NN).We confront a blind mock with three different HOD models, including varied prescriptions of assembly bias and baryonic effects built on top of the AbacusSummit cosmology suite.We demonstrate several key points in this paper: • This is the first time a simulation-based model has been shown to derive strong and unbiased cosmology constraints in a blind mock challenge of comparable volume to a modern spectroscopic survey.The power of this result is strengthened by the fact that the mock uses different simulation and halo codes than the input data, and by the fact that the underlying cosmology is significantly different than Planck. • We demonstrated the flexibility and resilience of our approach using multiple HOD models by employing a mix of goodness-of-fit metrics and cross-validation tests.These tests give a clear preference for one of the models, which is able to reproduce a second statistic after being fit to a first.On real data, more information may be available to further inform the HOD model tests. 
• We use goodness-of-fit metrics to establish minimum scale cuts for our analysis.The RD-NN statistic emerged as less prone to nonlinear scale modeling errors.However, the DD-NN proved to be considerably more constraining when we employ improved models of small scales.This work underscores the potential of using beyond-2p statistics on non-linear scales for cosmology, especially in the context of the next generation of spectroscopic surveys.We argue that integrating direct insights on the galaxy-halo connection from data and simulations would likely enable analyses on even smaller scales. Moving forward, it would be valuable to re-do this analysis with a variety of more sophisticated galaxy-halo connection models that are not based on HODs.That would more convincingly demonstrate the credibility of simulation-based modeling approaches for cosmology.It is also important to test against observational systematics such as survey incompleteness and redshift evolution, for example, using a lightcone-based forward modeling approach like in Yuan et al. (2023b) and applying these systematics directly.One can also resort to realistic non-HOD mocks such as in DeRose et al. (2019a), MacCrann et al. (2018), and To et al. (2021).We look forward to applications of such approaches in future analyses. Figure 1 . Figure 1.Training and test cosmologies in ΛCDM parameter space.Top panels in each row show the distribution of each parameter.Note that we omit ℎ as it is fixed to the acoustic scale. Figure 2 . Figure 2. RD-NN statistics calculated on the redshift-space Beyond-2p mock, averaged over 10 realizations.Each panel corresponds to a order.In each panel, the colour gradient represents the CDF as a function of projected separation and line-of-sight separation.The solid lines show the contours of the CDF.The dotted lines show the contours of a CDF of an un-clustered Poisson random sample.The difference between the solid and dotted contours indicates the signatures of clustering.We show only the first 4 orders for brevity. Figure 3 . Figure 3. Visualizations of the DD-NN statistics calculated on the redshift-space I mock, averaged over 10 realizations.The lines are colours are definely similarly to Figure 2. Note that we have defined DD-1NN = 1, thus = 2 is the first meaningful order. Figure 4 . Figure 4.A qualitative assessment of the Gaussianity of the likelihoods for RD-NN (top panel) and DD-NN (bottom panel).The histograms show the distribution of 2 values as measured from 1883 AbacusSummit small boxes.The solid line shows an analytic 2 distribution with degrees of freedom set to the number of bins, as we expect from Gaussian statistics.Both statistics are consistent with the analytic distributions and with our assumption of a Gaussian likelihood. and Figure B1.The final covariance matrices going into our likelihood function is a combination of data sample variance (C jackknife , accounting for the Figure 5 . Figure5.The ΛCDM cosmology posteriors inferred with the RD-NN data vector using three different HOD models.The axes show the difference between the inferred and true values.The three HOD models are summarised in section 3.2.For the RD-NN data vector, we use = 1, 2, 3, ..., 9, covering scales 3ℎ −1 Mpc < < 63ℎ −1 Mpc and 0.5ℎ −1 Mpc < < 31.6ℎ−1 Mpc.The fainter solid lines correspond to maximum likelihood points.We omit ℎ as it is fixed to the acoustic scale.The three contours show good consistency with each other and the input values.The contours correspond to 68% and 95% confidence intervals. 
Figure 6 . Figure6.The ΛCDM cosmology posteriors inferred with the DD-NN data vector using three different HOD models.The axes show the difference between the inferred and true values.For the DD-NN data vector, we use = 2, 3, ..., 9, covering scales 0.67ℎ −1 Mpc < < 63ℎ −1 Mpc and 0.5ℎ −1 Mpc < < 31.6ℎ−1 Mpc.The three contours show mild disagreement at the 2-3 level.The goodness-of-fit as shown in terms of 2 /d.o.f. is poor.The contours correspond to 68% and 95% confidence intervals. Figure 7 . Figure7.The best-fit 2 /d.o.f.when using different minimum scale cuts.Different colours refer to the three different HOD models.The -axis iterates through different scale cuts (in units of ℎ −1 Mpc), showing the fits grouped vertically by data vector.DD-NN only achieves a good fit at > 5ℎ −1 Mpc; RD-NN achieves a good fit at smaller scales.This highlights the DD-NN's high sensitivity to galaxy bias and the details of the HOD model on small scales. Figure 9 .Figure 10 .Figure 11 . Figure9.The marginalised ΛCDM cosmology posteriors inferred with the two NN data vectors and using three different HOD models.The blue error bars indicate the (16, 50, 84)% posterior constraints, and orange triangles show the maximum likelihood point.The -axis iterates through the HOD models, with the fits vertically grouped by data vector.The parameter values are blinded.There is excellent agreement between the RD-NN and DD-NN ( > 5ℎ −1 Mpc).The full-scale DD-NN ( > 0.67ℎ −1 Mpc) also yields reasonably consistent constraints, albeit with significantly tighter error bars, and moderately biased results in some parameters. Figure 12 . Figure12.The marginalised ΛCDM cosmology posteriors when using different minimum scale cuts for the DD-NN statistics.For this test, we have fixed the HOD model to model B. The constraining power of DD-NN strongly depends on minimum scale cut.We do not vary the minimum scale of RD-NN as it would require extensive re-computation. Figure A1 .Figure A2 . Figure A1.Visualizations of the RD-NN correlation matrix.The covariance matrix is computed from 1250 jackknife regions.The "bins" represents the flattened indices of NN bins along , , and axes, in orders of outer to inner cycles.For ease of visualization, we separate the bins into blocks via the vertical and horizontal white lines, where each block corresponds to a .Within each block, the increasing bin number cycles through values at each fixed . Figure B1 . Figure B1.The correlation matrix of the RD-NN emulator covariance matrix.This plot is constructed similarly to Figure A1, where we use a white grid to denote different s.The bin indices also assume the same order. Figure B2 . Figure B2.The correlation matrix of the DD-NN emulator covariance matrix. Table 1 . Prior bounds for the NN analyses, for the cosmology parameters, HOD parameters, and assembly bias parameters.
Effect of Propolis on Bone Quality and Cortical Bone Thickness of Ovariectomized Female Wistar White Rats as A Model for Osteoporosis Estrogen deficiency increases the rate of osteoporosis, especially in menopausal women, by altering the bone tissue microarchitecture. Propolis has compounds that could be used as an alternative therapy to treat estrogen deficiency and to protect against bone damage. This study aims to determine the effect of propolis on bone quality and cortical bone thickness of femoral metaphysis in ovariectomized female Wistar white rats as a model for menopausal osteoporosis. The rats were divided into five groups: negative control group (not subjected to ovariectomy), sham group (subjected to ovariectomy), and treatment groups that were subjected to ovariectomy and given propolis orally at a dose of 180 mg/ kg BW, 360 mg/kg BW, and 720 mg/kg BW for 30 days. Bone quality and cortical bone thickness testing were undertaken on the 31st day. The osteoblast and osteoclast cell examination was evaluated using an Olympus BX 51 light microscope at 400x magnification for bone quality and the Betaview program, Beta 3.1MP Sony Exmor CMOS Sensor camera at 40x magnification for cortical bone thickness. Data were analyzed using the one-way ANOVA and continued with Duncan’s multiple range tests. It was found that propolis had a significant effect on the ratio of osteoblast and femur bone osteoclasts (p0.05). The administration of propolis at a dose of 180 mg/kg BW, 360 mg/kg BW, and 720 mg/kg BW had an effect in decreasing the ratio of osteoblasts and metaphysical osteoclast cells of femoral metaphysics. However, propolis administration did not affect the thickness of the femoral metaphysical cortical bone. INTRODUCTION Osteoporosis is a disease characterized by low bone mass and microarchitecture damage to bone tissue, causing bone fragility and increased fracture risk (NIH, 2018;Liu et al., 2017). According to WHO data, around 200 million people worldwide have osteoporosis, and it is classified as a "silent disease" that causes secondary health problems to death (Sozen et al., 2017;Kemenkes RI, 2015). An imbalance in bone formation and bone resorption during the remodelling process causes osteoporosis (Geng et al., 2019), which is influenced by two essential factors; aging factors and a decrease in gonadal function (Sihombing et al., 2012). Osteoporosis is more commonly found in menopausal women. In Indonesia, two out of every five women are at a higher risk of osteoporosis (Kanis et al., 2000;Ginta et al., 2015). Women are more likely to lose bone mass than males due to a reduction in estrogen hormone production, particularly in women who have gone through menopause (Ermawati, 2018;Gambacciani et al., 2018;Thulkar et al., 2015). Estrogens are antiresorptive agents and stimulators of bone formation. It works by reducing apoptosis and osteoclast differentiation, inhibiting the reduction of osteoblasts and osteocytes, and reducing oxidative stress. The deficiency of postmenopausal hormone estrogen can increase the rate of bone resorption and cause an imbalance in the bone remodelling process (Sihombing et al., 2012;Mescher, 2012;Modder et al., 2011;Okman, 2015). Loss of bone tissue is characterized by an increase in the number of osteoclasts (increased resorption) or a decrease in the number of osteoblasts (deficiency of estrogen) (Grabowski, 2015). In animal experiments, ovariectomy was performed as a postmenopause modelling (Yudaniayanti et al., 2019). 
Nevertheless, osteoporosis therapies such as estrogen supplements and MHT (menopause hormone therapy) have side effects on the cardiovascular system and increase the risk of breast and uterine cancer on longterm use (Geng et al., 2019;Fait, 2019). Therefore, we need other alternative therapies, especially natural compounds with minimal side effects, one of which is propolis that is widely found in nature. Propolis is a natural substance collected by bees from the buds and plant exudates, then mixed with the secretion of saliva and the enzyme β-glycosidase (Segueni et al., 2011). The chemical content of propolis are flavonoids, flavonols, phenolic, flavones, benzoic acid, and its derivatives, such as benzaldehyde derivatives, cynamil alcohol, cinnamic acid and its derivatives, tannins, saponins, nicotinic acid, amino acids, esters, vitamins, and minerals (Ishtiaq et al., 2018). Furthermore, flavonoids stimulate osteoblastogenesis by increasing the production of nitric oxide (NO) and osteoprotegerin (OPG). OPG will bind to RANKL and reduce its function for bone resorption (Luka et al., 2015). Meanwhile, NO can prevent osteoporosis by increasing the proliferation and differentiation of osteoblasts and increasing bone mass through the reduction of osteoclasts and thus decreasing bone resorption (Joshua et al., 2014). With these mechanisms, propolis can prevent osteoporosis and could be a candidate for a natural compound to treat estrogen deficiency. Therefore, this research was conducted to see how the protective effect of propolis on osteoporosis treatment caused by menopause by observing bone quality (the ratio of osteoblast and osteoclast cells) and cortical bone thickness of ovariectomized female rats. Ethical Approval This research was approved by the Animal Ethics Committee of the Faculty of Medicine, University of Andalas (No. 185 / KEP / FK / 2020). Experimental Animals This study used 20 female Wistar rats weighing 150-200 grams at three months of age. The rats were acclimatized for seven days. During the acclimatization process, experimental rats were given standard feed and drinking water sufficiently. The ovariectomy procedure was performed on 16 rats. Observations from the ovariectomy procedure were carried out for 24 hours. The experimental rats were randomly divided into five groups: group I, negative control that was only given food and drink; group II, a sham group that was subjected to ovariectomy and was not given propolis; group III, subjected to ovariectomy and was given propolis orally at a dose of 180 mg/ kg BW; group IV, subjected to ovariectomy and given propolis orally at a dose of 360 mg/kg BW; and group V, subjected to ovariectomy and given propolis orally at a dose of 720 mg/kg BW. All groups were treated for 30 days. Ovariectomy Procedure Ovariectomy was performed to make female rats become menopausal (Yudaniayanti et al., 2019) by removing the ovaries of female rats. The experimental rats fasted for six hours. Ketamine 50 mg/kg BW and xylazine 10 mg/ kg BW were given intramuscularly (IM) as anaesthesia. The mouse was placed in its supine position, and the hair on the left flank area was shaved with a length of ± 15 cm; then, the shaved area was cleaned with a clean cotton swab and alcohol 70%. Povidone-iodine was rubbed into the surgical incisions. During the surgery, the subcutaneous skin and linea alba from cranial to caudal direction were incised with a length of ± 4 cm, followed by ligation of the ovaries and removal of the left and right ovaries. 
The cut end of the cervix was inserted back into the abdomen, and the alba line and subcutaneous skin were sewed again. Gentamicin was given to avoid infection and leave one day after the surgery to heal the incision wound. Femoral Bone Preparations On the 31 st day, the rats were sacrificed and dissected to remove the left femur for cortical thickness analysis and bone quality. The steps in making bone preparations were fixation of rats femur bone tissue with 10% formalin, then decalcification in 8% HCl. Furthermore, it was processed into paraffin blocks, cut with a microtome with a thickness of 4 µm, and placed on the slide. The slides were deparaffinized with xylene for 2x5 minutes, then were rehydrated with graded alcohol starting with ethanol 100%, 96%, 70%, and distilled water for 5 minutes per solution. The slides were then stained with hematoxylin for 8 minutes and rinsed with distilled water for 10 minutes. Subsequently, the slides were dehydrated with 70% alcohol for 5 minutes and 96% alcohol for 5 minutes and immersed in eosin solution for 2 minutes. Then, the slides were rinsed in ethanol 96% and 100% for 5 minutes each. Lastly, the slides were cleared in xylene for 2 x 5 minutes and mounted in the deck glass with entellant for examination under a microscope (Liu et al., 2017). Histological Analysis of Bone and Cortical Bone Thickness Measurements were made by capturing hematoxylineosin slides. The osteoblast and osteoclast cell examination was evaluated using an Olympus BX 51 light microscope at a magnification of 400x. The assessment of cortical bone thickness was evaluated using the Betaview program, Beta 3.1MP Sony Exmor CMOS Sensor camera at 40x magnification. Photomicrographs were taken in a representative area. Data Analysis The qualitative data was represented as microscopic images of bones tissue. The cortical bone thickness data and the ratio of osteoblast cells and osteoclasts were analyzed using a one-way analysis of variance (ANOVA) to see the relationship with the dose. Duncan's multiple range test was carried out to see the significant differences between all groups. The Ratio of Osteoblasts and Osteoclasts Cells Based on the results of the histology analysis, as seen in Figure 1, there was a difference in the histological features of the femur. In the negative control group, osteoblasts were seen regularly arranged on the bone surface with few osteoclasts. Both osteoblasts and osteoclasts are small with a small amount of cytoplasm, suggesting a stable bone homeostasis process. The sham treatment showed a bone area with an increased osteoclast population with larger cells and a slight decrease in the osteoblast population. Interestingly, the administration of propolis could affect the population and morphology of osteoblasts and osteoclasts more like negative control rats. One-way ANOVA statistical test results showed that the treatment group had a significant effect on the ratio of osteoblasts to the osteoclasts in the femoral metaphysis (p<0.05). The average value of the ratio of osteoblasts and osteoclasts in the negative control group had the highest value, about 12.89 J/mm 2 . In contrast, the sham group had an average value of 6.71 J/mm 2 . 
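A minimal sketch of the statistical workflow described under Data Analysis (one-way ANOVA across the five groups, followed by a pairwise post-hoc comparison) is given below. The group values are hypothetical, not the study's measurements, and Tukey's HSD is used as a stand-in because Duncan's multiple range test is, to our knowledge, not provided by scipy or statsmodels.

```python
# Sketch of the analysis pipeline: one-way ANOVA over five groups, then a
# pairwise post-hoc test. Group values below are hypothetical ratios of
# osteoblasts to osteoclasts, used only to show the mechanics.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "negative_control": [12.1, 13.4, 12.9, 13.0],
    "sham":             [6.2, 7.1, 6.5, 7.0],
    "propolis_180":     [7.0, 7.8, 7.3, 7.6],
    "propolis_360":     [10.2, 11.4, 10.8, 10.9],
    "propolis_720":     [8.3, 9.0, 8.6, 8.8],
}

f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Post-hoc comparison of all group pairs (Tukey HSD as a stand-in for Duncan's test)
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```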
The results in the treatment groups showed an increase in the average ratio of the number of osteoblasts to osteoclasts compared to the sham group, with the dose of 360 mg/kg BW gave the greatest increase ion the average ratio of osteoblasts to osteoclasts 10.83 J/mm 2 , which statistically have the same subset with the negative group in Duncan's multiple range test (p>0.05). Meanwhile, the doses of 180 mg/kg BW and 720 mg/kg BW were 7.43 J/mm2 and 8.69 J/mm 2 , respectively, as shown in Figure 2. Cortical Bone Thickness The cortical bone thickness results were presented in Table 1 and Figure 3. Our data showed that the cortical bone thickness in the negative control group was 201.82 µm and in the sham group was 175.92 µm. Surprisingly, the bone thickness in the treatment groups was not increased linearly as expected following the dose increment, with a dose of 360 mg/kg BW giving a slight decrease with 168.95 µm compared to a dose of 180 mg/ kg BW and 720 mg/kg BW that showed 187.42 µm and 207.35 µm, respectively. One-way ANOVA statistical test results showed no effect of propolis administration on the cortical bone thickness of the femoral metaphysis (p>0.05). DISCUSSION Our study aimed to evaluate the effect of propolis on the ratio of osteoblasts and osteoclasts in ovariectomized female Wistar white rats. Ovariectomy is a menopausal golden standard in experimental animals as a model for osteoporosis. The ovaries, which are the main part of the producer of estrogen, are removed from the rats to mimic the pathophysiology of humans (Soplhocleous and Idris, 2014;Yousefzadeh et al., 2020). Estrogen is essential for maintaining bone formation. The mechanism in bone metabolism is to reduce apoptosis of mature osteoblast cells (osteocytes), thereby reducing the process of bone turnover (anti-remodelling). This hormone also promotes osteoclast apoptosis escalation and RANKL differentiation reduction, thus resulting in decreasing bone resorption. Moreover, it is necessary to reduce oxidative stress, where oxidative stress reactions play a role in postmenopausal bone loss. In contrast, the loss of estrogen increases the production of cytokines such as interleukin-1 (IL-1), interleukin-6 (IL-6), interleukin-7 (IL-7), and tumor necrosis factor-alpha (TNFα). TNFα can increase osteoclast formation, and IL-7 can increase osteoclastogenesis, which both of them lead to bone loss (Mödder et al., 2011;Okman, 2015). An elevated number of osteoclasts and a lower number of osteoblasts are signs of bone loss, which subsequently drives a significant change in the architecture of the cortical bone and trabecular bone due to an imbalance in the bone remodeling process (Grabowski, 2015). The ratio of osteoblasts to osteoclasts illustrates the higher number of osteoblasts in the bone formation process as opposed to the number of osteoclasts that are accountable for breakdown cells in the bone resorption process. Our study showed that the sham group significantly decreased the ratio of osteoblast cells metaphysis femoral cells (p> 0.05) compared to the negative control group, suggesting a change in the bone remodeling process dominant bone resorption process. The decrease in estrogen hormone causes a reduction in osteoclast apoptosis and promotes the differentiation of RANKL induction. RANKL will bind to the receptor activator of nuclear factor-kappa-Β (RANK) and stimulate osteoclast precursors to differentiate and fuse into multinucleated osteoclasts. 
In addition, there was an increase in osteoblast apoptosis and oxidative stress and an increase in osteocyte apoptosis. When this process continues, it leads to osteoporosis (Okman, 2015). As expected, the results in the treatment groups showed an increase in the average ratio of osteoblasts to osteoclasts compared to the sham group. It indicates an increase in forming new bone compared to the resorption process due to an increase in osteoblasts and a decrease in osteoclasts. Surprisingly, the dose of 360 mg/kg BW gave the most significant increase, which amounted to 10.83 J/mm 2 . The ratio of osteoblasts and osteoclasts at this dose is close to the ratio in the negative control group, thus making the most optimal dose to increase the ratio of osteoblasts and osteoclasts. On the other hand, the doses of 180 mg/kg BW and 720 mg/ kg BW were much lower, which accounted for 7.43 J/ mm 2 and 8.69 J/mm 2 , respectively. One of the possible mechanisms of the propolis effect could be the ability to improve fracture healing by clearing up free oxygen radicals or ROS. The antioxidant properties of propolis as exogenous antioxidants may aid the endogenous antioxidant system in combating oxidative stress in the presence of high-level free oxygen radicals, conditions such as infection, inflammation, and trauma (Guney et al., 2011). Antioxidants can overcome tissue damage and play a role in bone formation by preventing ROS activity. ROS increases osteoclast formation and activity and, together with TNF-α, suppresses osteoblast differentiation. Therefore, the inhibition of ROS is necessary to reduce bone resorption. Consequently, the number and activity of osteoclasts decrease while the osteoblast differentiation process continues (Luka et al., 2015;Darmadi and Mustamsir, 2016). Apart from the antioxidant mechanism, quercetin (one of the flavonoid compounds) could suppress bone resorption by inhibiting osteoclast differentiation and activation. Quercetin suppresses the activation of the transcription factors NFkB and activator protein-1 (AP-1). These transcription factors play a role in modulating osteoclast differentiation, survival, and activation to decrease osteoclast formation. Quercetin also inhibits osteoclast formation by acting on osteoclast progenitors (inhibiting preosteoclast tartrate-resistant acid phosphatase (TRAP) activity). In addition, propolis can stimulate osteoblastogenesis and thus increase the formation of OPG and NO. OPG is an anti-osteoclastogenesis similar to RANKL, and NO can increase the proliferation and differentiation of osteoblasts and reduce osteoclasts (Luka et al., 2015;Joshua et al., 2014). Interestingly, we found that the cortical bone thickness of the femoral metaphysis gave no significant difference between treatment groups (p> 0.05). We assumed it was due to a short period of experimental design, which was only 30 days of treatment after ovariectomy. The earliest change in the width of the cortical femoral bone could be seen between 90 and 120 days after ovariectomy and takes 180 days or more to reach a steady state. Even though Pei-Yu Hsu et al. research in 2016 reported a decrease in cortical thickness at the femoral neck three months after ovariectomy, this result was not statistically significant (Hsu et al., 2016). Cortical bone has an outer surface called the periosteal and an inner surface called the endosteal. The periosteum is a sheath of fibrous connective tissue that surrounds the outer surface of the cortical bone. 
It contains blood vessels, nerve fibers, osteoblasts, and osteoclasts. The periosteum also protects, nourishes, and aids the process of bone formation. Therefore, the periosteum plays a vital role in the growth and repair of fractures. In addition, the decrease in cortical bone thickness caused by ovariectomy sometimes occurs due to highendocortical resorption. Trabecular bone resorption also results in fewer endocortical and trabecular connections. Endocortical and trabecular resorption would eventually lead to loss of endocortical and trabecular connections, where this connectivity plays an essential role in bone strength. This process is due to the deficiency of estrogen, which increases the resorption process beyond the formation process. However, these changes usually occur 2-3 months after ovariectomy (Hsu et al., 2016). Therefore, further study needs to be considered by extending the observation period to see the actual decrease of bone thickness by estrogen deficiency, which mimics osteoporosis progression in menopausal women. Since we only investigated the histological analysis and the cortical thickness of the femoral bone, the following study, including scanning electron microscopy (SEM) examination, in vitro assay, more specimens, power tests, and in vivo scans, need to be taken to assess osteoclastogenesis, bone microarchitecture, and morphology. If these limitations are resolved in the future, the mechanism of propolis in preventing osteoporosis in menopausal women could be fully elucidated, and we could further extend this potent effect by confirming it on the human body. CONCLUSION The administration of propolis orally at a dose of 180 mg/kg BW, 360 mg/kg BW, and 720 mg/kg BW had an effect in decreasing the ratio of osteoblasts and metaphysical osteoclast cells of femoral metaphysics in ovariectomized female Wistar white rats as menopause modelling. Propolis at dose 360 mg/kg BW gave the highest ratio of osteoblasts and osteoclasts, which is statistically the same with the negative control group. However, propolis administration did not affect the thickness of the femoral metaphysical cortical bone.
2022-01-09T16:15:08.350Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "94ac6a868d743ba363bb9027015c34364b97090c", "oa_license": "CCBYNC", "oa_url": "https://scholarhub.ui.ac.id/cgi/viewcontent.cgi?article=1214&context=psr", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "eccd1ac131ac627795a5a68bc91241ec868af43e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
213199238
pes2o/s2orc
v3-fos-license
Impact of Intellectual Capital on Financial Reporting Quality of Selected Banks in Nigeria The study examined the impact of intellectual Capital on Financial reporting quality of selected Banks in Nigeria. The specific objective of the study was to investigate the impact of Intellectual capital on financial reporting quality on selected Banks. A total of ten banks were selected for the study from 2006-2017. Regression analysis was used to do the analysis. They study made use of Value Added Intellectual Coefficient (VAIC) to ascertain the extent of intellectual capital indices while Fincial reporting quality is proxy by accrual which was calculated using Dechow and Dichev’s(2002) model. The result indicated a positive impact on Fincial reporting quality. The study therefore concludes that Banks should pay more attention to the three intellectual capital variables to improve their financial reporting quality. The study recommends that the three variables of intellectual capital should be well handled in other to have higher quality of financial reporting quality and also provide enabling environment needed to achieve a vital human capital in their system. Keywords : Intellectual Capital, Financial Reporting Quality, Nigeria Banks, Value Intellectual Coefficient, Dechow and Dichev model (2002) DOI : 10.7176/EJBM/11-16-09 Publication date :June 30 th 2019 Introduction One deterring factor of companies' success in the 21 st century is intellectual capital. Recently new economies are shifting towards a knowledge based economy or knowledge economy (KE) in which companies competitiveness and sustainability are increasingly dependent of knowledge based resource. The dramatic shift from material source to knowledge, from hard ware to software is actually experiencing by companies across the world. Sudibyo and Basuki(2017) with the advent of knowledge-based economy knowledge or intellectual capital become more important in company with other production factors such as land ,capital and machinery, so that in this economy, knowledge is considered as the most important production factor and it is named as the most important completion advantage of organisations, Darabi,Radand Heideribali(2017). There are various definitions for intellectual capital. Researchers have tried to define and explore different factors of intellectual capital. But the general indication is that intellectual capital is a non-monetary assets without physical existence but possesses value that can generate future earnings, Deepa and Subka(2015). Darabi , Rad and Heidaribali(2012) define intellectual capital as the development and application of source of knowledge in the companies. Therefore investors and creditors are interested in the intellectual capital and its components (human capital, capital employed and structural capital) which play an important role in decision making. The search for the most appropriate method of measuring Intellectual capital, led Pulic Ante to develop the most popular method that measures the efficiency of value added by corporate intellectual ability (Value Added Intellectual Coefficient -VAIC) Pulic (1998Pulic ( , 2000a The VAIC method measures the efficiency of three types of input: Capital employed (physical and financial), human capital and structural capital (Puntilo 2009, Williams, 2003 The combination of financial report with intellectual capital improves the quality of financial reports. 
The quality of financial report, the accrual and validity to operations of companies as well as declaring all assets of companies including intangible assets and intellectual capital, inform users of no baryasis . According to Jone and Balanced (2000) financial reporting quality are fair and transparent financial information that is not designed to obfuscate or misled users. Financial reporting needs to provide useful data to help potential investors make logical decision. Therefore, intellectual capital in Organisation leads to good quality of financial statements, thus it will be really necessary for organisation to consider intellectual capital. This study therefore investigates the impact of intellectual capital on the financial reporting quality of banks in Nigeria. According to Ekwe (2013) the choice of the banking sector is because in every country the banking sector plays a pivotal role in setting the economy in motion and in its development processes. Banks promote growth and success of business in both developed and developing countries and Nigeria banks have been noted for favouring graduates with second class honours degree(upper division) in their employment policies thereby giving weight to the fact that it is the intellectual capital that determines the quality of financial report. According to Kamath(2007), the banking sector is an ideal area for intellectual capital research because the banking sector is intellectually intensive and its employees are intellectually more homogeneous than those in other sector. Owing to the level of intellectually transformation programmes and improvements in the Nigeria banking sector, this study examines the impact of intellectual capital on the financial reporting quality of selected banks in the banking sector. Objective of the study. The general objective of this study is to ascertain the impact of intellectual capital on financial reporting quality of Banks in Nigeria. There is no significant capital efficiency and financial reporting quality. Ho3 There is no significant and positive relationship between capitals employed efficiency and financial reporting quality. Review of Related Literature. According to IAS(38) intellectual capital has been define to include expenditure on advertising and marketing research and developmental activities, human resources expenditure, copy rights, Franchises , future interest, licenses, operating rights, patent, record master, secret processes, trademarks and trade names, organisational structure and values that come from brand names. Edrinsson and Malone (2013) define intellectual capital IC as possession of knowledge applied experience, Information technology , customer relationship and professional skills that provide a company with a competitive edge in the market. In the word of Roos(2013) define I.C as the sum of company's member's knowledge and practical translations of this knowledge. To summarize, intellectual capital lacks physical form, it cannot exist on its own but derives value from network effect, and it is claim on future assets. Intellectual Capital and Its components. Intellectual capital is said to have the following components  Human Capital (HC)  Structural Capital (SC)  Customer/Relational Capital (RC) Human Capital Human Capital represents Knowledge of the individuals of an organization. 
Human Capital is interested as employed values creating potential depicted in the knowledge competencies, skills, experience, abilities and talents of firm's employees and managers .According to Mehrjardi(2016) describe it as mental agility which enables the individual to change the practices and thinking about innovative solutions for issues. Structural Capital (organizational) This is defined as knowledge assets that indeed company's property and includes intellectual property such as patent, copyright and trademarks processes methodologies, models; documents and other knowledge such as computer network and software, administrative system etc. Customer/Relational Capital This capital represents the value of current and ongoing relationship with individuals or organization that provides them with serves. Customers' capital indicators include market share, customer maintenance and profit .Irananed, Moeinaddin, Shahmoradi and Heyrani(2014). It is the knowledge embedded in relationship with customer, supplier's industry associations or any other stakeholder that influence the organisation's life. Empirical Reviews This study is to investigate the impact of intellectual capital on financial reporting quality. Vol.11, No.16, 2019 Darabi, Rad and Heidaribali (2012) carried out a work on the impact of intellectual capital on financial reporting quality: evidence from Tehran Stock Exchange. A sample of 184 accepted companies in Tehran stock Exchange that work in deference industries between 2004 and 2009 was selected. Co-relational analysis and multiple linear regressions were used for the study. The result of the study shows that two components of Intellectual capital-Capital employed efficiency and human capital efficiency have significant positive effect on the dependent variable of financial reporting quality while structural capital efficiency has a significant negative effect on the financial reporting quality. Payam Mojtakedi(2013) did a work on the impact of intellectual capital on earnings quality Evidence from Malaysian firms , the study made us of 100 Malaysian firms during the year 2000 and 2011.Mutiple regression and panel data analysis was used for the analysis. The study find out that intellectual capital has a positive and significant impact on Earnings quality. Ghasempour and Yosof (2014) carried out a work on quality of Intellectual capital and Human resource disclosure on the firm Valuation. The study made use of 65 companies listed on Tehran stock Exchange in the period of 2005 to 2012. The result of the study showed that voluntary disclosure of intellectual capital and human resources information had a significant and positive impact on firm value Hardeep and Purnima (2016) did a work on measurement of intellectual capital in the Indian Banking sector, sing 144 branches of 21 public and seven private commercial banks operating in India, all the three dimensions were found to significantly contribute to the intellectual capital among which relational capital contributed relatively more followed by human capital and structural capital. The research findings can help bank mangers in deterring how to generate value using human, structural and relational capital. Abubakar,(2011) carried out a work on Human resources Accounting and quality of financial Reporting of quoted service companies in Nigeria. Descriptive and field survey research methods were employed in the study. 
Data were collected through interviews, the administration of questionnaires, and the financial statements of selected quoted service companies. The data were analysed using Kendall's coefficient of concordance (KCC). The study found that human resource accounting has a significant impact on the financial reporting of quoted service companies in Nigeria. Chen, Tang, Jiang and Lin (2010) examined the quality of accounting for firms that are members of the EU before and after the adoption of international financial reporting standards in 2005. The study showed that the highest degree of accounting quality was associated with the period after adopting international financial reporting standards. Akinlo and Olayinola (2017) carried out a study on human capital reporting and corporate earnings with evidence from Nigeria. The study used secondary data from 2007 to 2014 collected from the annual reports and accounts of 50 listed manufacturing companies, analysed with pooled least squares. The results show that total earnings have a positive relationship with the components of human capital, and a significant one with salaries and wages and labour turnover; the study suggests that capitalising corporate investment in human resources has the potential to increase the total earnings of quoted manufacturing companies in Nigeria. Darabi, Rad and Ghadiri (2012) studied the relationship between intellectual capital and earnings quality. The study was conducted with 158 companies on the Iran stock exchange. The results show that intellectual capital and its human capital component have a significant positive impact on earnings quality, which leads to the conclusion that intellectual capital plays a positive role in financial practice and reporting. Methodology. This section of the paper identifies and describes the proxies used for both the dependent and independent variables. The regression equation is outlined in the latter part of the section. Data were computed from the annual reports and accounts of the banks under study for the period 2006-2017. Description of the Dependent Variable. Owing to the relative importance of intellectual capital in organisational productivity, financial reporting quality is the dependent variable adopted in this paper, measured using the accrual model. Financial Reporting Quality. Accruals consistent with Dechow and Dichev's (2002) model are used to measure the quality of financial statements. The model is presented below:

cAC_it = β0 + β1 CF_i,t-1 + β2 CF_i,t + β3 CF_i,t+1 + β4 ΔS_i,t + β5 PPE_i,t + ε_i,t

where cAC is current accruals, ΔS is the change in sales, CF is the cash flow from the operating activities of firm i in years t-1, t and t+1, PPE is the book value of property, plant and equipment (cost less accumulated depreciation), and ε is the accrual estimation error. Description of the Independent Variables. The Value Added Intellectual Coefficient (VAIC) methodology developed by Ante Pulic in 1998 and 2000 formed the underlying basis for the independent variables in the study.
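To make the two constructions above concrete, the sketch below shows one way the VAIC components and the Dechow-Dichev accrual residuals could be computed from a bank-year panel. It is a minimal illustration under assumed data, not the authors' estimation code: the column names (output, input_cost, hc, ce, cacc, cf, sales, ppe) and the helper functions are hypothetical.

```python
# Sketch: VAIC components (Pulic) and Dechow-Dichev accrual residuals as a proxy
# for financial reporting quality. Column names and data layout are hypothetical.
import pandas as pd
import statsmodels.api as sm

def vaic_components(df):
    """VA = output - input; HCE = VA/HC, SCE = (VA - HC)/VA, CEE = VA/CE, VAIC = sum."""
    va = df["output"] - df["input_cost"]     # value added
    hce = va / df["hc"]                      # human capital efficiency
    sce = (va - df["hc"]) / va               # structural capital efficiency
    cee = va / df["ce"]                      # capital employed efficiency
    return pd.DataFrame({"HCE": hce, "SCE": sce, "CEE": cee, "VAIC": hce + sce + cee})

def accrual_residuals(panel):
    """Regress current accruals (cacc) on lagged, current and lead operating cash
    flow, the change in sales and PPE, following the accrual model written above;
    the residuals then serve as the financial-reporting-quality proxy."""
    panel = panel.sort_values(["bank", "year"])
    g = panel.groupby("bank")
    X = pd.DataFrame({
        "cf_lag":  g["cf"].shift(1),
        "cf":      panel["cf"],
        "cf_lead": g["cf"].shift(-1),
        "d_sales": g["sales"].diff(),
        "ppe":     panel["ppe"],
    })
    data = pd.concat([panel["cacc"], X], axis=1).dropna()
    model = sm.OLS(data["cacc"], sm.add_constant(data.drop(columns="cacc"))).fit()
    return model.resid
```

The hypothesis tests reported in the next section would then regress the accrual-based quality measure on each efficiency component in turn.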
Most intellectual capital methods are criticised because they measure subjectively and cause many problems during measurement (Sveiby, 2000; Williams, 2001). Ho1: Human capital efficiency has no significant effect on financial reporting quality. The regression results showed that the estimated coefficients of the regression parameters have only positive signs. This means that the dependent variable is positively influenced by the independent variable; the constant term represents what financial reporting quality would be without the independent variable in the model. The value 0.603119 for β1 implies that, holding all other factors constant, a unit increase in human capital efficiency will lead to a 0.603119 increase in Y, which is financial reporting quality. The R² gives the percentage of variation in financial reporting quality explained by human capital efficiency. By implication, the value 0.313263 means that about 31% of the total variation in the dependent variable results from changes in the independent variable, while 69% is unexplained. This remaining percentage could be caused by other factors or variables not built into the model. Since the Durbin-Watson statistic is near 2, there is no evidence of first-order autocorrelation. The estimated F-value is significant at the 5% level (the p-value is 0.005448). With this, we can reject the null hypothesis that human capital efficiency has no significant effect on financial reporting quality and conclude that it has a significant effect. Ho2: There is no significant relationship between structural capital efficiency and financial reporting quality. The regression results showed that the estimated coefficients of the regression parameters have only positive signs. This means that the dependent variable is positively influenced by the independent variable and the relationship is significant. The value of 10.52228 for c represents what financial reporting quality would be without the independent variable in the model. The value 0.405355 for β1 implies that, holding all other factors constant, a unit increase in structural capital efficiency will lead to a 0.405355 increase in Y, which is financial reporting quality. The R² gives the percentage of variation in financial reporting quality explained by structural capital efficiency. By implication, the value 0.290979 means that about 29% of the total variation in the dependent variable results from changes in the independent variable, while 71% is unexplained. This remaining percentage could be caused by other factors or variables not built into the model. Since the Durbin-Watson statistic is near 2, there is no evidence of first-order autocorrelation. The estimated F-value is significant at the 5% level (the p-value is 0.005293). With this, we can reject the null hypothesis that structural capital efficiency has no significant effect on financial reporting quality and conclude that it has a significant effect. Ho3: Capital employed efficiency has no significant effect on financial reporting quality. The regression results showed that the estimated coefficients of the regression parameters have only positive signs. This means that the dependent variable is positively influenced by the independent variable and the relationship is significant. The value of 6.328989 for c represents what financial reporting quality would be without the independent variable in the model.
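The quantities quoted in these hypothesis tests (the slope, R², the Durbin-Watson statistic and the F-test p-value) can all be read off a fitted ordinary least squares model. The sketch below assumes a hypothetical data frame with a frq column and one column per efficiency measure; it illustrates the reporting only and is not the authors' code.

```python
# Sketch: simple OLS of financial reporting quality (frq) on one VAIC component,
# printing the diagnostics reported in the text. The data frame is hypothetical.
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

def run_hypothesis_test(df, predictor):
    X = sm.add_constant(df[[predictor]])
    model = sm.OLS(df["frq"], X).fit()
    print(f"constant       = {model.params['const']:.6f}")
    print(f"beta1          = {model.params[predictor]:.6f}")
    print(f"R-squared      = {model.rsquared:.6f}")
    print(f"F-test p-value = {model.f_pvalue:.6f}")
    print(f"Durbin-Watson  = {durbin_watson(model.resid):.3f}")  # ~2 means no first-order autocorrelation
    return model

# e.g. run_hypothesis_test(df, "HCE") for Ho1, "SCE" for Ho2, "CEE" for Ho3
```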
The value 0.935203 for β1 implies that, holding all other factors constant, a unit increase in capital employed efficiency will lead to a 0.935203 increase in Y, which is financial reporting quality. The R² gives the percentage of variation in financial reporting quality explained by capital employed efficiency. By implication, the value 0.339880 means that about 34% of the total variation in the dependent variable results from changes in the independent variable, while 66% is unexplained. This remaining percentage could be caused by other factors or variables not built into the model. Since the Durbin-Watson statistic is near 2, there is no evidence of first-order autocorrelation. The estimated F-value is significant at the 5% level (the p-value is 0.004643). With this, we can reject the null hypothesis that capital employed efficiency has no significant effect on financial reporting quality and conclude that it has a significant effect. Conclusion and Recommendation. The results of the data analysis showed that the effect of intellectual capital on financial reporting quality was statistically significant. From the above results it is fair to conclude that the Nigerian banking sector makes use of and applies intellectual capital in its financial reports. The results further showed that banks differ in both their intellectual capital and their financial reporting quality. Constant and regular training of employees is also highly recommended, because regular training positively affects employees' performance and service delivery, thereby improving financial reporting quality.
2019-09-16T03:38:17.160Z
2019-06-01T00:00:00.000
{ "year": 2019, "sha1": "7e6b3ad6100799cafb3ae1b47f35f4f90b0fac3d", "oa_license": "CCBY", "oa_url": "https://www.iiste.org/Journals/index.php/EJBM/article/download/48378/49977", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "514e3acae31bdcf555b32d4e7cc9bc2162431a89", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
55803136
pes2o/s2orc
v3-fos-license
Experimental and theoretical study on emission spectra of a nitrogen photoionized plasma induced by intense EUV pulses Spectral lines of a low-temperature nitrogen photoionized plasma were investigated. The photoionized plasma was created by irradiating N2 gas with laser-plasma EUV radiation pulses. The source was based on a 10 J/10 ns Nd:YAG (λ = 1064 nm) laser system and a gas puff target. The EUV radiation pulses were collected and focused using a grazing-incidence multifoil EUV collector. The emission spectra were measured in the ultraviolet and visible (UV/Vis) range. It was found that the plasma emission lines in the lower region of the UV range are relatively weak. Nonetheless, part of the spectrum contains strong molecular bands in the 300-430 nm range, originating from band transitions of the second positive and first negative systems of nitrogen. These molecular band transitions were identified using LIFBASE, a code for studying diatomic molecules. The vibrational bands of the ∆v = 0 and ±1 transitions were more strongly populated than those of the ∆v = ±2 and ±3 transitions. A comparison of the calculated and measured spectra is presented. Under the assumption of local thermodynamic equilibrium (LTE), the vibrational temperature was determined from the integrated band intensities with the help of the Boltzmann plot method and compared to the temperature predicted by SPECAIR and LIFBASE simulations. A summary of the results and the variations in the vibrational temperatures is discussed. Introduction Photoionization of molecular nitrogen was performed, and the emission spectra resulting from the photoionization were measured. The plasma was produced by a laser-produced plasma (LPP) extreme ultraviolet (EUV) source. The experimental source was based on an Nd:YAG laser system delivering a 10 J, 10 ns pulse at the fundamental wavelength, λ = 1064 nm. In this regime, a laser power density of the order of 10^12 W/cm^2 can easily be achieved in the interaction region. Such an intensity is sufficient to produce a laser plasma in xenon, and the EUV radiation collected from this plasma was used to ionize nitrogen gas up to the first few ionization states. Spectral lines from molecular band transitions in the ultraviolet and optical wavelength region were the dominant features in the spectra. The molecular emission spectra were found to correspond to the second positive (2PS) (3Π-3Π) and first negative (1NS) (2Σ+-2Σ+) systems of molecular nitrogen. The identification of the molecular band transitions was performed using software for the simulation of diatomic molecules, LIFBASE [1]. The plasma parameters and conditions can be studied on the basis of these molecular band transitions by evaluating the vibrational temperature under the assumption of local thermodynamic equilibrium. The vibrational temperature provides insight into the molecular vibrational processes, while the rotational temperature usually represents the gas temperature and governs the reaction rates of many molecular processes. Under thermodynamic equilibrium conditions, the rotational and vibrational temperatures are often determined by assuming that the vibrational levels are populated according to the Boltzmann distribution. For nitrogen plasma, the band transitions from the upper to the lower electronic states in the 2PS and 1NS systems and their characteristics make it possible to estimate the vibrational temperatures from emission intensities. Spectroscopic investigations of nitrogen plasmas or nitrogen-containing plasmas have been
studied using the band spectra recorded in the various plasmas at different experimental conditions.Of these, include a compact helicon plasma source [2], nitrogencontaining plasmas [3] as well as in highly constricted plasma and applying the fitting methods [4].Further, the plasma emission lines from 2PS are well-known and the best candidate used for estimation of the vibrational temperature in a nitrogen or nitrogen-containing plasma.Provided that if an assumption of the thermodynamic equilibrium between the ground and excited electronic states is achieved [5].In this case, the required condition is the radiative rates are dominated by the collision rates to ensure the LTE condition in plasma.Knowledge of these population rates will provide more information about the plasma emissions processes and control conditions.However, if the plasma is far from LTE, determinations of vibrational temperature is more sophisticated unless the high-resolution spectra were used.In this case, the vibrational temperature inferred from the Boltzmann plot methods still give insight into the plasma condition.Though, the 2PS of nitrogen molecule was widely used to study the gas temperatures in nitrogen and nitrogen-containing plasmas [6][7][8][9][10]. In this work, the emission lines of nitrogen photoionized plasma created by laser plasma EUV radiation pulse is investigated.The electron vibrational temperature is determined using the integrated line intensities of the band transitions in the second positive system of nitrogen.This can be done with the help of the Boltzmann plot of vibrational and rotational levels.In this case, especially for the molecular spectroscopy, much attention must be paid to ensure the highresolution of the molecular band spectrum which enables us to accurately determine the band head wavelength of a transition.On the other hand, the determination of the vibrational and rotational temperature can be predicted using software for the molecular spectral simulation, LIFBASE.In this case, the temperature was set manually in acquiring the simulated.The more accurate data on electron vibrational or rotational temperature can be utilized by direct fitting of the measured spectra [2].The most applicable software used for this purpose is the commercial software SPECAIR [11]. 
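As a rough illustration of the Boltzmann-plot method mentioned above, the sketch below fits ln[I(v',v'')/(ν^4 q(v',v''))] against the upper-level vibrational energy and extracts T_v from the slope of the fitted line. The band intensities, band origins, Franck-Condon factors and term energies used here are placeholders, not the measured data of this work.

```python
# Sketch of a Boltzmann-plot estimate of the vibrational temperature T_v from
# integrated band intensities of the N2 second positive system. All numerical
# inputs below are placeholders; replace them with measured intensities and
# tabulated Franck-Condon factors / term energies.
import numpy as np

K_B = 0.695035  # Boltzmann constant in cm^-1 per K

def vibrational_temperature(intensity, wavenumber, franck_condon, upper_energy_cm):
    """Fit ln[I / (nu^4 q)] = const - E(v') / (k_B T_v) and return T_v in kelvin."""
    y = np.log(intensity / (wavenumber**4 * franck_condon))
    slope, _ = np.polyfit(upper_energy_cm, y, 1)
    return -1.0 / (slope * K_B)

# Placeholder inputs for a handful of 2PS bands (two bands per upper level v'):
intensity       = np.array([1.00, 0.65, 0.40, 0.28, 0.18, 0.11])              # integrated intensities (a.u.)
wavenumber      = np.array([29654, 28520, 31640, 30460, 33580, 32370], float) # band origins, cm^-1
franck_condon   = np.array([0.45, 0.33, 0.39, 0.29, 0.22, 0.20])              # q(v',v'')
upper_energy_cm = np.array([0, 0, 2047, 2047, 4037, 4037], float)             # E(v') above v'=0, cm^-1

print(f"T_v ~ {vibrational_temperature(intensity, wavenumber, franck_condon, upper_energy_cm):.0f} K")
```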
Experimental setup In the experiments, an Nd:YAG laser system delivering a maximum energy of 10 J in a 10 ns pulse at a 10 Hz repetition rate was used. In the present measurements, an energy of about 5.6 J per pulse was employed. The laser pulse was focused onto a double-stream gas-puff target created synchronously with the laser pulse. The gas-puff target was formed by pulsed injection of a high-Z (argon or xenon) gas into a hollow stream of low-Z helium gas using a double-nozzle set-up. The nozzle consists of a small central orifice surrounded by a ring-shaped outer nozzle and is equipped with an electromagnetic valve system. A more detailed experimental description of the double-stream gas-puff target can be found in [12,13]. Irradiation of nitrogen gas (4 bar), injected into the vacuum interaction chamber synchronously with the radiation pulses from the laser-plasma EUV source, resulted in a photoionized plasma. The focusing conditions were adjusted so as to obtain the maximum power density in the interaction region. The maximum intensity is estimated at around 10^8 W/cm^2 at an EUV fluence of ~300 mJ/cm^2. The EUV radiation was focused using a gold-plated grazing-incidence multifoil EUV collector. The spectral analysing instrument was an Echelle Spectra Analyzer ESA 4000 spectrograph equipped with an ICCD Kodak KAF 1001 camera. The spectrometer system allowed simultaneous measurement of complex spectra within the wide UV/Vis (200-780 nm) spectral range with a high spectral resolution of λ/∆λ ≈ 20000. A schematic view of the EUV photoionization experimental setup is shown in Fig. 1. Vibrational temperature The spectrum of the generated photoionized plasma is dominated by the emission of the second positive N2 (C 3Πu - B 3Πg) and first negative N2+ (B 2Σu+ - X 2Σg+) systems of molecular nitrogen. Under the assumption that the upper-state populations follow Boltzmann distributions with vibrational and rotational temperatures T_v and T_r, respectively, the line intensity of a molecular band between the upper (v',J') and lower (v'',J'') rovibrational levels can be expressed as [14]

I(v',J'; v'',J'') = C R_SP ν^4 [q(v',v'') s(J',J'') / (Q_vib Q_rot)] exp[-hc G(v')/(k_B T_v)] exp[-hc F(v',J')/(k_B T_r)].   (1)

Here, C is a scaling factor related to the collection solid angle and the sample volume of the optical emission, R_SP is the spectral response function, ν is the transition wavenumber, s(J',J'') is the line strength function, G(v') and F(v',J') are the vibrational and rotational spectral terms of the upper level, Q_vib and Q_rot are the vibrational and rotational partition functions, respectively, and h, c and k_B denote the Planck constant, the speed of light and the Boltzmann constant. q(v',v'') is the Franck-Condon factor of the vibrational transition (v'-v'') [15]. For simplicity, T_v can be obtained from a Boltzmann plot using the integrated band intensities I(v',v'') as a function of the upper vibrational energies of different transitions in the second positive system. In that case, as can be seen from equation (1), the total intensity is given for both vibrational and rotational levels. To deduce the vibrational temperature, we can use the vibration-dependent portion as follows [16,17]:

ln[ I(v',v'') / (ν^4 q(v',v'')) ] = C1 - hc G(v')/(k_B T_v),   (2)

where C1 is a constant, so that a plot of the left-hand side against the upper vibrational energy yields T_v from the slope of the fitted line. Results and discussion Identification of the emission lines in a plasma is of great importance and is the first step in specifying the plasma species and their charge states. Fig.
2 shows the measured and simulated emission spectra of molecular nitrogen.The dominant features in the spectrum are the molecular band transitions in 2PS and 1NS systems of N 2 in the ultraviolet (UV) wavelength range.Most of the emission lines were corresponding to the 2PS in the 300-400 nm range.The measured and SPECAIR simulation results were shown a good agreement with relatively different in their intensities, where the experimental emission lines are more strength.Although simulated with the T v = 3358.9K obtained from linearity of the Boltzmann plot, the spectra computed using LIFBASE also showed the strongest band transitions, (0-0) and (0-1) in 1NS (B-X).It has been noticed that even for the T v of 2000 K, the results do not show any significant difference in terms of molecular band transitions rather in the line intensity.The best T v can be found by fitting the measured spectra with an appropriate fitting function. In Fig. 3, the experimental (top) and simulated (bottom) spectra corresponding to the (1-0) and (1-2) vibrational band transitions of the 2PS (C 3 Π u -B 3 Π g ) acquired at a pressure of 4 bar at a pulse repetition rate of 10 Hz with the energy of 5.6 J per pulse.A good agreement was found between the measured and simulated spectra for a temperature of 2000 K.This spectra is mainly part of the spectrum in Fig. 2 and has been taken between 354 nm to 359 nm to clearly demonstrate the measured and calculated vibrational spectra. To evaluate electron vibrational temperature, Fig. 4 shows the Boltzmann plot of the integrated band intensities versus the upper vibrational energy levels corresponding to the transitions in 2PS.We have used six transitions in 2PS to infer the T v because the spectra dominated mostly with molecular band transitions in the 2PS band system, Fig. 2. The fitted line to the experimental points gives rise an approximate vibrational temperature of 3358.9 ± 665 K.This value is a bit higher compared to the best value of the T v (2000 K) that used to obtain the best fit between the simulated and measured spectra with SPECAIR software under the present experimental conditions.Despite the temperature differences, the measured and calculated spectra using SPECAIR were found to be in good agreement.Such discrepancies are expected whether associated with experimental errors or attributed the use of a small number of vibrational transitions, as it is always recommended to include much more lines in constructing the Boltzmann plot.In the other hand, when the determined vibrational temperature 3358.9K is used to simulate the measured spectrum with LIFBASE software, the computed spectra contain molecular bands mainly from 1NS (B-X), (0-0) and (0-1) emissions.In Fig. 
5 the vibrational population and its dependence on the T v are examined using emission spectra in the 1NS.That calculated vibrational populations at a lower T v , 2000 K, are found to be more vibrationally populated than that calculated at a higher T v , 3358.9 K.This can be explained as at lower T v , most of a gas molecule within molecule will be vibrated at their fixed vibrational levels.With Enhancing T v the individual amplitude of each molecule increases before the molecules undergo the translational motion, this resulted in less populated states at higher vibrational temperatures.Therefore, the vibrational bands with the ∆v = 0 and ±1 transitions in 2 Σ + - 2 Σ + system are significantly populated than of that with ∆v = ±2 and 3 transitions.As the plasma in LTE, the gas temperature can be considered as the rotational temperature T r that measured in the upper states in a molecular transition.In contrast, for non-LTE plasmas, measurements of the gas temperature is possible via analyzing the rotational molecular emission provided that the rotationaltranslational relaxation is sufficiently fast to equilibrate the gas and rotational temperatures [18]. Conclusions In summary, laser-plasma EUV source was used to create a low-temperature photoionized nitrogen plasma.High-resolution emission lines from photoionized plasma in the UV/Vis spectral range were recorded using echelle spectral analyzer ESA 4000.The dominant molecular band transitions in plasma were the electronic transitions from 2PS ( 3 Π- 3 Π) and 1NS ( 2 Σ + - 2 Σ + ) systems.The diatomic molecular simulation programs, LIFBASE and SPECAIR have been applied to reproduce the experimental spectra theoretically and interpretation.A comparison of the measured and calculated emission lines is given.With the assumption of the plasma in the LTE condition, the linearity of the Boltzmann plot is obtained using the integrated band intensities in 2PS vibrational transitions.A vibrational temperature of 3358.9K is inferred from the used ∆v = 0,±1,±2 and 3 transitions.The T v calculated with the aid of the Boltzmann plot is comparable to the results obtained using the intensity ratio of two consecutive vibrational bands of the same sequence in 2PS from a helium microwave gas discharge [19]. The inconsistency between the simulated and measured vibrational temperatures was expected.The differences can be explained due to the departure of the plasma from the local thermodynamic equilibrium and low degree of ionization.The method of direct fitting the experimental emission spectra may yield reasonable results of vibrational temperature in plasma in the case where the Boltzmann plot is not accurate either for low intensity of the band emission or due to the weak radiative transition rates of a line. Fig. 1 . Fig. 1.Schematic view of the experimental setup for the EUV photoionization experiments. Fig. 2 . Fig. 2. Measured (top) and simulated UV/Vis emission spectra belong to the 1NS (B-X) and second positive 2PS (C-B) systems of nitrogen induced by the intense laser plasma EUV pulses. Fig. 3 .Fig. 4 . Fig.3.Comparison of measured (black) and calculated (red) emission spectra of a second positive ( 3 Π- 3 Π) band transitions in plasma over a wavelength range from 354 nm to 359 nm Fig. 5 . Fig. 5.Vibrational populations of non-thermal mode vibrational levels of 2PS, obtained at the end of the simulations for two different temperatures (T v ,T r ).
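As a side illustration of the population comparison discussed in connection with Fig. 5, the sketch below evaluates Boltzmann vibrational population fractions of the upper 2PS state at the two temperatures considered in the text. The spectroscopic constants are approximate, representative values and should be replaced with tabulated constants for the N2 C state.

```python
# Sketch comparing Boltzmann vibrational population fractions of the upper 2PS
# state at the two temperatures discussed in the text (2000 K and ~3359 K).
# The vibrational constants below are approximate placeholder values.
import numpy as np

K_B = 0.695035                       # cm^-1 per K
OMEGA_E, OMEGA_EXE = 2047.0, 28.4    # approximate vibrational constants, cm^-1

def populations(t_vib, levels=4):
    v = np.arange(levels)
    g_v = OMEGA_E * (v + 0.5) - OMEGA_EXE * (v + 0.5) ** 2   # vibrational term values G(v)
    weights = np.exp(-(g_v - g_v[0]) / (K_B * t_vib))        # Boltzmann factors relative to v = 0
    return weights / weights.sum()

for t in (2000.0, 3358.9):
    frac = populations(t)
    print(f"T_v = {t:6.1f} K: " + ", ".join(f"v'={i}: {p:.3f}" for i, p in enumerate(frac)))
```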
2018-12-12T03:43:09.331Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "2c491f21688dcf2c5ffef8562d73276ed32f624d", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2018/02/epjconf_ppla2018_03006.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2c491f21688dcf2c5ffef8562d73276ed32f624d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
25605349
pes2o/s2orc
v3-fos-license
Prevalence of Trichomonas vaginalis in women of reproductive age at a family health clinic Introduction: Trichomonas vaginalis is considered the most prevalent curable sexually transmitted infection, and its occurrence exceeds that of gonococcal and chlamydia infections. This parasite has been identified as responsible for the increased risk of transmission of HIV and has also been associated with prostate and cervical cancer. Many carriers of T. vaginalis are asymptomatic and, when experiencing a health problem, they most often have nonspecific symptoms. The aim of this research was to estimate the presence of T. vaginalis and the associated factors in women of childbearing age at a primary health care clinic in the Federal District of Brazil. Methodology: A cross-sectional study was conducted with consecutive sampling of an outpatient population of women of childbearing age (excluding minors and pregnant women). The women answered a questionnaire and were examined. After vaginal pH measurement and whiff testing, a vaginal secretion sample was obtained for inoculation in TYM, a specific T. vaginalis culture medium. Results: The presence of T. vaginalis was identified in 16% of the sample. Fewer lifetime sexual partners and consistent condom use were identified as factors of protection against the infection. Complaints of dyspareunia were proportionally higher among women with positive cultures for T. vaginalis. Conclusions: The prevalence of T. vaginalis infection was high in the sample studied. The infection was positively associated with the number of lifetime sexual partners, and consistent condom use was a protective factor. Vaginal complaints were more common among women with T. vaginalis, but only dyspareunia had significant association. Introduction Trichomonas vaginalis is a sexually transmitted flagellated protozoan extracellular parasite of humans [1,2].It is estimated to be the most common curable sexually transmitted infection (STI) in the world [3]. T. vaginalis infection can impair the reproductive success of men and women since it is associated with the occurrence of pelvic inflammatory disease [4,5] and can reduce sperm viability [6].The parasite is also associated with premature rupture of membranes, preterm labor, and low birth weight [7,8].Studies have also reported an association with increased risk of HIV transmission [9], prostate cancer [10], and cervical neoplasia [11]. It is estimated that the occurrence of T. vaginalis is twice the estimated incidence of infections caused by Chlamydia trachomatis [12], and it may be considered the most prevalent curable sexually transmitted infection [3,13].In contrast to other STIs, T. vaginalis more frequently affects people in older age groups [14,15]. The prevalence of T. vaginalis in each age group depends on several factors, including the technique used to assess this prevalence [17][18][19][20].For example, research using culture, wet mount, and Pap smear found a prevalence of 2.6% in women attending primary health clinics in Uberlândia, MG, in the southeast region of Brazil [18].The prevalence was highest among women 40 years of age or older.This finding was consistent with that reported by other authors [21].A survey that included more than 140,000 women in the Federal District, Midwest region of Brazil, found a prevalence of 7.3% [22]. Although there are different diagnostic techniques to detect the presence of the parasite, the diagnosis is often only presumptive, based on complaints registered by women.T. 
vaginalis infection increases the risk of HIV transmission and acquisition [9,[23][24][25][26][27][28][29], and this protozoan is a suspected potential vector for other pathogens, since T. vaginalis can harbor Mycoplasma hominis [30,31].Furthermore, some T. vaginalis strains are infected by a dsRNA virus from the genus Trichomonasvirus, named TVV [32].The presence of TVV and M. hominis may affect the pathogenicity of the parasite [33,34]. Various techniques have been used to diagnose the infection, including direct microscopy and cultures of vaginal secretion.Additionally, recent studies report the use of molecular techniques [18,35] and serology [36]. The Brazilian Federal District Health Department has no established routine for the laboratory diagnosis of T. vaginalis.Cases are diagnosed clinically, using syndromic management, or even by the occasional finding of the parasite in a Pap smear.Although the Pap smear has specificity for T. vaginalis, there are sensitivity limitations that prevent the use of this technique for the diagnosis of infection [37].The parasite has also been identified, occasionally, in urine samples. The syndromic approach to sexually transmitted diseases involves a diagnostic and treatment strategy recommended by the World Health Organization [38] and adopted by the Brazilian Ministry of Health.This approach aims to increase the sensitivity of the identification of these diseases and the consequent reduction in their incidence.Diagnostics and treatment flowcharts are used based on the patients' complaints.It is important to note that a considerable number of STIs are asymptomatic or have nonspecific symptoms.In the case of T. vaginalis, for example, more than half of infected women may not exhibit any symptoms [16]; they would, therefore, not be included in the flowcharts.Moreover, there is no information about those who are diagnosed with T. vaginalis, since it is not a reported disease. Prevalence studies may be useful to health professionals in their daily routines as well as for health management decisions regarding primary care.The aim of this study was to estimate the prevalence of T. vaginalis and its associated factors in women of childbearing age attended to by a team of family health specialists in the Federal District of Brazil. Methodology A cross-sectional study with consecutive sampling of women was conducted between November 2014 and March 2015.The study was conducted in the primary health care clinic of one of the 31 administrative regions of the Federal District of Brazil. Calculation of the sample size was made taking into account a finite population of 700 individuals, a number slightly higher than the number of women of reproductive age registered in the unit.Estimating a prevalence of 10% and assuming a 5% confidence interval, the calculation indicated a sample of 116 women for the study.Predicting a dropout rate of 20%, the final sample comprised 139 women.All women between 18 and 49 years of age who visited the clinic were invited to participate.The invitation was made regardless of the reason that had led to the visit.Pregnant women and virgins were excluded.A total of 201 of 219 eligible women agreed to participate in the study.Of these, 193 were examined. Ethical considerations The research ethics committee of the Federal District Health Department approved the study (CAAE 28186514.5.0000.5553). 
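One standard way to arrive at the sample sizes quoted in the Methodology above (116 women, extended to 139 for an expected 20% drop-out) is a proportion-based calculation with a finite-population correction. The sketch below is an assumed reconstruction of that arithmetic, not necessarily the exact formula the authors used.

```python
# Sketch of a finite-population sample-size calculation: N = 700, expected
# prevalence 10%, 5% precision, 95% confidence, 20% anticipated drop-out.
import math

def sample_size(population, prevalence, precision, z=1.96, dropout=0.0):
    n0 = z**2 * prevalence * (1 - prevalence) / precision**2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)                       # finite-population correction
    return math.ceil(n), math.ceil(n * (1 + dropout))

base, with_dropout = sample_size(population=700, prevalence=0.10, precision=0.05, dropout=0.20)
print(base, with_dropout)   # ~116 and ~139, matching the figures reported above
```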
The women in the sample responded to a structured questionnaire administered by a trained female interviewer. They were later referred for the collection of biological samples. During the collection, vaginal pH was tested using scaled strips with a range of 3.6 to 6.1, at intervals of 0.3 units (pH-Fix, Macherey-Nagel, Düren, Germany). Simultaneously, a small amount of vaginal secretion was harvested from the fornix and placed on a glass slide with a drop of 10% potassium hydroxide to perform the whiff test. The test was considered positive when a characteristic fishy odor was noted [39].

Continuing the testing process, a sterile swab was applied against the posterior side wall of the vagina and seeded in TYM (trypticase-yeast extract-maltose) culture medium, as proposed by Diamond [40], comprising tryptone (Difco, Sparks, USA), yeast extract (Biobrás, Montes Claros, Brazil), maltose (Merck, Darmstadt, Germany), L-cysteine hydrochloride (Synth, Diadema, Brazil), ascorbic acid (Vetec, Rio de Janeiro, Brazil), K2HPO4, KH2PO4, and distilled water. Plates were read after 24, 48, and 72 hours. After 96 hours, the culture media were washed with 5 mL of 0.9% saline solution, which was then transferred to test tubes and centrifuged at 2,500 rpm for 5 minutes. The supernatants were discarded and the remaining pellets were examined [41]. For each sample, 100 µL of the pellet was placed on a slide and examined using an optical microscope at 40× magnification. A second slide was made and allowed to dry at room temperature. This second slide was fixed with methanol, stained with Giemsa (Newprov, Pinhais, Brazil), and examined under a microscope at 100× magnification. The result was considered positive when the presence of the parasite was verified on either of the slides.

Blood samples were also collected from the women and tested for syphilis, HIV, hepatitis B, and hepatitis C. Syphilis serology was performed in the regional laboratory using non-treponemal (venereal disease research laboratory [VDRL]) and treponemal (TPHA) tests, according to the manufacturer's recommendations (WAMA Diagnóstica, São Carlos, Brazil). Serology for HIV and for hepatitis B and C was carried out at the Central Laboratory (LACEN) of the Federal District using the electrochemiluminescence (ECL) technique (Roche Diagnostics GmbH, Mannheim, Germany).

Data analysis

Data were entered into Epi Info 7 (version 7.1.5.0, Centers for Disease Control and Prevention) and analyzed using SPSS version 22.0 (IBM, Armonk, USA). Associations between categorical variables were tested using the chi-squared test and measured by odds ratios (OR). Statistical significance was set at p < 0.05. Differences between proportions were tested using the one-tailed Z test.

Results

A total of 219 women were invited to take part in the study. Of these, 201 agreed to participate, answered the questionnaire, and provided blood for serology. Of these, 193 were examined by pH measurement, the whiff test, and sample collection for T. vaginalis culture. The results below apply to the 193 participants who were completely examined.
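As an illustration of the association measures described under Data analysis, the sketch below computes a chi-squared test and an odds ratio with a Woolf (log-scale) 95% confidence interval from a 2×2 table. The cell counts are hypothetical, and the Woolf interval is an assumption, since the paper does not state how its confidence intervals were obtained.

```python
import numpy as np
from scipy.stats import chi2_contingency

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI for the 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = np.exp(np.log(or_) - z * se), np.exp(np.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical table: rows = exposed/unexposed, columns = Tv-positive/negative
table = np.array([[12, 38], [18, 125]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(odds_ratio_ci(*table.ravel()), f"chi2 = {chi2:.2f}, p = {p:.3f}")
```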
The mean age of participants was 34 years (95% CI = 30.7-37.3), and most were of mixed ethnicity (57%). More than half of the women (53%) had eight or fewer years of schooling. Information regarding age at first sexual intercourse, smoking status, and use of hormonal contraceptives is detailed in Table 1. Thirty-three women (17%) reported having had some sexually transmitted disease (STD) at some time in their lives (Table 2). Gynaecological complaints, predominantly lower abdominal pain (52%; 101/193), were common among participants. These were followed by reports of vaginal discharge (48%; 92/193) (Table 3).

Thirty samples (30/193) tested positive for T. vaginalis (16%; 95% CI = 11-21). Among these women, the most frequent complaint was vaginal discharge (57%). Among the women with positive cultures, 27 (90%) had at least one complaint and 3 (10%) had none (Table 3). Regarding age, skin color, and age at first sexual intercourse, women with positive cultures did not differ from those with negative results. In relation to the complaints, only dyspareunia was significantly more frequent among those with positive results.

Among the 193 women examined, 12 had positive results for hepatitis B serology (anti-HBc), but none were in the acute phase; all had negative HBsAg serology. Of these 12, two had positive results for the T. vaginalis culture. Three women had positive results on the VDRL test, all confirmed by TPHA; these three participants were negative for T. vaginalis. Three others had positive results for HIV serology, and all three were positive for T. vaginalis. None of the women had positive anti-HCV tests. The serology results are shown in Table 2.

The results demonstrated an association between infection with T. vaginalis and the total number of lifetime sexual partners. Having had 10 or fewer partners was a protective factor (OR = 0.14). Consistent condom use was also an important protective factor (OR = 0.064). There was no association between positive results and the following variables: age, use of hormonal contraceptives, smoking, or gynaecological complaints. Association measures, confidence intervals, and pH values are detailed in Table 4.

Discussion

Although the Federal District is the capital of Brazil, its social and economic inequalities are quite similar to those of the rest of the country. The operational area of the present research was, until recently, a region considered a slum. Only in the last seven years has the region been developed with urban facilities (sewage system and paved streets). The results should be considered in light of this context.

The prevalence of T. vaginalis infection in this sample was 16% (95% CI = 10-21). This finding was higher than the authors expected. A study conducted in the Federal District (Simões-Barbosa et al., 2002) found a prevalence of 7.3% in a sample of 142,158 women; infection with T. vaginalis was present in 10% of the inflammatory lesions [22]. In that research, the methodology used was the Pap smear,
which has lower sensitivity for T. vaginalis detection than the culture method. The disagreement between the prevalence rates reported in these two studies may be partly explained by the different methodologies. The different prevalence rates between the two populations could also have an alternative explanation: the high prevalence detected in our study applies to an outpatient population and may not reflect the prevalence in the general population. Outpatient populations may consist mostly of people with complaints or with a higher perceived risk of STI, which could explain the high prevalence.

An association between education and STIs has been reported by various authors [16], but educational level was not associated with positive results for T. vaginalis in our sample. However, more than half of the women studied had eight or fewer years of formal education, and the homogeneity of this variable could conceal a possible association.

Infection with T. vaginalis has also been associated with variables such as age [21], smoking [18], and use of hormonal contraceptives [42]. In our study, however, there was no association between positive results for T. vaginalis and any of these variables.

The proportion of women with vaginal discharge was higher among those who had positive cultures (57% versus 46%); however, this difference was not statistically significant. Similarly, the complaints of pruritus, burning, and malodorous secretion, although not significant, were also proportionally more common among women with positive results for T. vaginalis. On the other hand, the difference in the proportion of those who mentioned pain during sexual intercourse (dyspareunia) was significant. Generally, genital complaints were frequent among the culture-positive women, but this was also true among the culture-negative women. In other words, although infected women were not completely asymptomatic, the symptoms reported were not specific to T. vaginalis and may not be attributable solely to this infection. Moreover, 10% of women with positive cultures were completely asymptomatic.

pH values tend to be above 4.5 in women infected with T. vaginalis, and the whiff test is usually positive [43]. In our study, 50% of infected women had pH values above 4.5, and 40% were positive on the amine (whiff) test. Although the prevalence of infection was higher in women with pH values above 4.5 (18% versus 14%), the association was not significant. A similar observation applied to the whiff test: the prevalence among whiff-positive and whiff-negative women was 20% and 13%, respectively, without statistical significance. A potential limitation of our study is the fact that a large proportion of the participants (43%) made regular use of vaginal douches, which may have compromised the results of the pH and whiff tests. Another potential limitation was the sample size.

There was no association between self-reported prior STI and positive cultures. Interpretation of this result requires some care for two reasons: 1) among women who reported having had an STI, nearly one-third did not reveal the diagnosis, and some common genital changes may be misclassified as an STI; and 2) women with an STI do not always have an adequate perception of their condition, so there may have been STIs among those who denied a prior STI history.

Few women had positive results for syphilis, hepatitis B, and HIV, which limits the power of statistical tests.
An association between T. vaginalis infection and the number of lifetime sexual partners was found in another study [44] and is not surprising for this STI. Similarly, the protective effect of consistent condom use was expected [45] and corroborates the recommendation for condom use, especially among people who have no steady partner or who have more than one sexual partner.

Conclusions

The prevalence of T. vaginalis infection was considered high in the sample examined. The infection was positively associated with the number of lifetime sexual partners, and consistent condom use was a protective factor. Vaginal complaints were more common among women with T. vaginalis, but only dyspareunia was significantly more frequent.

Replication of this study in other administrative regions of the Federal District may provide important information for understanding the magnitude of this disease in populations; its distribution across genders, age groups, and social strata; and its impact on quality of life. Such information can be useful to the managers responsible for public health in the Federal District.

Table 1. Distribution of participating women based on sociodemographic variables, 2015. Tv: Trichomonas vaginalis; total N is 201, except as otherwise indicated.

Table 2. Distribution of women based on history of sexually transmitted diseases and results of serology, 2015. Tv: Trichomonas vaginalis; STD: sexually transmitted disease; VDRL: venereal disease research laboratory; total N is 201, except as otherwise indicated.

Table 3. Distribution of women based on complaints, vaginal pH values, and whiff test, 2015. Tv: Trichomonas vaginalis; total N is 201, except as otherwise indicated.

Table 4. Distribution of T. vaginalis-infected women based on risk factors, 2015.
Lung inflammation induced by silica particles triggers hippocampal inflammation, synapse damage and memory impairment in mice

Background: Considerable evidence indicates that a signaling crosstalk between the brain and periphery plays important roles in neurological disorders, and that both acute and chronic peripheral inflammation can produce brain changes leading to cognitive impairments. Recent clinical and epidemiological studies have revealed an increased risk of cognitive impairment and dementia in individuals with impaired pulmonary function. However, the mechanistic underpinnings of this association remain unknown. Exposure to SiO2 (silica) particles triggers lung inflammation, including infiltration by peripheral immune cells and upregulation of pro-inflammatory cytokines. We here utilized a mouse model of lung silicosis to investigate the crosstalk between lung inflammation and memory.

Methods: Silicosis was induced by intratracheal administration of a single dose of 2.5 mg SiO2/kg in mice. Molecular and behavioral measurements were conducted 24 h and 15 days after silica administration. Lung and hippocampal inflammation were investigated by histological analysis and by determination of pro-inflammatory cytokines. Hippocampal synapse damage, amyloid-β (Aβ) peptide content and phosphorylation of Akt, a proxy of hippocampal insulin signaling, were investigated by Western blotting and ELISA. Memory was assessed using the open field and novel object recognition tests.

Results: Administration of silica induced alveolar collapse, lung infiltration by polymorphonuclear (PMN) cells, and increased lung pro-inflammatory cytokines. Lung inflammation was followed by upregulation of hippocampal pro-inflammatory cytokines, synapse damage, accumulation of the Aβ peptide, and memory impairment in mice.

Conclusion: The current study identified a crosstalk between lung and brain inflammatory responses leading to hippocampal synapse damage and memory impairment after exposure to a single low dose of silica in mice.

Introduction

Mounting evidence indicates that a crosstalk between peripheral and central inflammation can lead to brain dysfunction and neurodegeneration [1-3]. For example, chronic kidney disease has been associated with cognitive impairment, delirium, encephalopathy, and dementia [4,5]. Type 2 diabetes and obesity, characterized by peripheral inflammation and insulin resistance, are risk factors for dementia [6-8] and for major depressive disorder [9]. Gut microbiota has also been implicated in brain-periphery crosstalk [10]. Exposure to microbial amyloids in the gastrointestinal tract can accelerate alpha-synuclein aggregation in the gut and brain, and lead to enhanced microgliosis and astrogliosis, suggesting that bacterial amyloid may function as a trigger to initiate brain inflammation and alpha-synuclein aggregation in synucleinopathies [11,12]. Robust evidence further indicates that acute peripheral inflammatory conditions, including viral and bacterial infections, can trigger brain inflammation and dysfunction, resulting in cognitive decline or neuropsychiatric conditions [13,14]. For example, recent studies demonstrate that systemic inflammation induced by SARS-CoV-2, the etiologic agent of COVID-19, activates brain toll-like receptors (TLRs) and upregulates brain tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6) signaling, triggering synapse damage and leading to depressive and cognitive symptoms in COVID-19 patients [15].
Collectively, multiple lines of evidence indicate that both acute and chronic, low-grade peripheral inflammation can trigger brain inflammation and progressively lead to brain dysfunction underlying cognitive decline and dementia [13-17].

Silicosis is a major occupational lung disease, with a significant impact on health systems and on the quality of life of workers in many industries [18,19]. Exposure to silica (SiO2) particles induces chronic lung inflammation, including immune cell infiltration, macrophage activation and release of pro-inflammatory cytokines, e.g., TNF-α, interleukin-1β (IL-1β) and IL-6, resulting in tissue fibrosis, alveolar collapse and lung dysfunction [19-21].

Clinical and epidemiological evidence points to an association between pulmonary function and cognitive impairment. A recent meta-analysis of longitudinal studies of individuals with impaired pulmonary function found an increased risk of dementia in such individuals [22], and this appears more pronounced for restrictive pulmonary impairment than for obstructive lung disease [23]. Recent studies indicate that both neurodegeneration and vascular brain lesions may underlie the association between pulmonary dysfunction, memory impairments and dementia [24,25]. However, the mechanistic underpinnings of the connection between lung and brain dysfunction remain unclear. Here, we investigated the crosstalk between lung inflammation, brain inflammation and memory in a mouse model of silicosis. We report that lung inflammation in mice exposed to silica particles is accompanied by hippocampal inflammation, synapse damage and memory impairment.

Animals

Experiments were performed on 8- to 10-week-old male Swiss mice. Animals were housed in groups of five per cage with free access to food and water, under a 12-h light/dark cycle with controlled room temperature (21 ± 2 °C). Mice were randomly divided into two groups. In the control group (Ctrl), mice received an intratracheal (i.t.) administration of 0.05 mL sterile saline solution (0.9% NaCl). Silica-exposed animals (Si) received an intratracheal administration of 2.5 mg silica (SiO2; particle size: 500 nm-10 μm, 80% of the particles between 1 and 5 μm; S5631, Sigma Chemical Co., St. Louis, USA) suspended in 0.05 mL saline, as previously described in murine models of acute silicosis [26,27]. Animal behavior was analyzed 24 h or 15 days after saline or silica administration. This study was approved by the Ethics Committee on the Use of Animals, Health Sciences Center, Federal University of Rio de Janeiro.

Tissue collection and preparation

Mice were anesthetized with 1.5 mL/kg of a solution containing 10% ketamine and 2% xylazine immediately after the behavioral tests. Bilateral hippocampi and lungs of Ctrl and Si animals were collected, immediately frozen in liquid nitrogen, and stored at −80 °C until use in Western blotting or ELISA assays.

Lung histology

Histology was performed as previously described [20,26]. Morphometric analysis was performed in granuloma-free tissue areas using an integrating eyepiece with a coherent grid of 100 points and 50 lines of known length, coupled to a conventional light microscope (Axioplan; Zeiss). Analysis was performed in 10 random, non-overlapping fields at 200× magnification. For cellularity analysis, the total numbers of mononuclear (MN) and polymorphonuclear (PMN) cells in granuloma-free lung tissue areas were counted in each animal across 10 random, non-overlapping microscopic fields at 1000× magnification.
Data are presented as cell count percentages and cell numbers/tissue area.

Behavioral tests

Open field

Mice were placed at the center of an open field arena (30 cm × 30 cm × 45 cm) for habituation, and their activity was recorded for five minutes. Total distance traveled and time spent in the central square were automatically quantified using the Any-maze video-tracking system (Stoelting Inc., Kiel, WI). The arena was cleaned with 20% ethanol between trials to eliminate olfactory cues.

Novel object recognition test (NOR)

The novel object recognition test was performed in the open field arena with objects fixed to the box using tape. The test was video recorded for behavioral readout [28]. During training and test sessions, animals were placed at the center of the arena, and exploratory behavior toward both objects was recorded for 5 min. The arena was cleaned with 20% ethanol between trials to eliminate olfactory cues. The training session was performed in the presence of two identical objects. One of the two objects used in the training session was then replaced by a novel object for the test session, carried out one and a half hours after training. Sniffing and touching the object were considered exploratory behavior, and the time spent exploring each object was registered by a trained researcher [28].

Statistical analyses

All datasets were submitted to the Shapiro-Wilk normality test. The specific statistical tests employed are given in the figure legends. Datasets showing a normal distribution were analyzed by ANOVA or Student's t-test. Histological datasets were analyzed by ANOVA, and differences were considered statistically significant at p < 0.05. Data from the NOR task were analyzed using a one-sample Student's t-test, comparing the exploration time of the novel object to the fixed value of 50% (chance level), as previously described [28-31]. All analyses were performed using GraphPad Prism 8 (GraphPad Software; La Jolla, CA). Results are expressed as means ± SEM, with corresponding t-values.

Intratracheal administration of silica induces alveolar collapse and lung infiltration by polymorphonuclear (PMN) cells

Lung histology showed that intratracheal administration of SiO2 (silica) particles caused a significant increase in the percentage of collapsed alveoli compared to control, saline-instilled mice (F(3, …)).

The decreases in pre- and post-synaptic marker proteins in the hippocampi of silica-instilled mice were similar to previous observations in Alzheimer's disease (AD) mouse models [31]. This led us to investigate whether hippocampal inflammation in silica-instilled mice could be associated with changes in levels of the amyloid-β peptide (Aβ), a neurotoxin that accumulates in AD brains and is implicated in brain inflammation and synapse damage in AD [8,30,32-34]. Aβ was indeed elevated in the hippocampi of Si mice compared to Ctrl animals [t(1,14) = 2.157; p = 0.0488] (Fig. 4C). To examine a possible mechanism underlying the increased hippocampal Aβ in silica-instilled mice, we measured protein levels of β-secretase (BACE1), a key protease involved in the cleavage of the amyloid precursor protein (APP) to release Aβ [35,36]. No difference was observed in BACE1 immunoreactivity between Ctrl and Si mice [t(1,12) = 0.2663; p = 0.410] (Fig. 4D). However, APP was significantly increased in the hippocampus 15 days after administration of silica [t(1,11) = 2.174; p = 0.05] (Fig. 4E),
suggesting that the increase in Aβ might be related to increased substrate (APP) availability.

Recent studies have established that inflammation-associated inhibition of brain insulin signaling plays an important role in neurodegenerative mechanisms leading to synapse damage and cognitive impairments in AD [8] and sepsis [13,37]. To determine whether a similar mechanism might be induced by silicosis-associated hippocampal inflammation, we examined pSer473-Akt levels as a proxy of activity of the insulin signaling pathway. pSer473-Akt was reduced in Si mice compared to Ctrl animals [t(1,12) = 2.124; p = 0.0551], indicating that brain inflammation induced by intratracheal administration of silica is accompanied by inhibition of hippocampal insulin signaling.

Intratracheal administration of silica induces memory impairment in mice

Finally, we hypothesized that the impact of silicosis on hippocampal pro-inflammatory cytokines, synaptic markers, and phosphorylation of Akt could result in memory impairments in mice. Control open field tests revealed no differences in total distance traveled or time spent at the center of the arena between Ctrl and Si mice, indicating that silica instillation did not affect locomotor/exploratory behavior or induce anxiety in mice (Fig. 5A).

Discussion

We initially established that a single dose of silica (2.5 mg/kg; particle size range: 500 nm-10 μm) administered into the trachea induced alveolar collapse, lung infiltration by PMN cells, and increased lung pro-inflammatory cytokines, all indicative of the induction of silicosis in mice. We further found that lung silicosis induced by this low dose of silica [38] was followed by an increase in hippocampal pro-inflammatory cytokines, synapse damage, accumulation of the Aβ peptide, and impaired memory.

In addition to an acute (24 h) examination of the impact of intratracheal administration of silica particles on the lung and brain, we analyzed lung and brain inflammation 15 days after administration of silica. The 15-day time window is frequently used to study the outcomes of exposure to SiO2 (e.g., Reiser et al. [39], Faffe et al. [26], and Yang et al. [40]). Numerous studies have demonstrated significant pathological alterations induced by silica particles using a 15-day experimental window. The experimental design of our current study was thus based on a validated model in the silicosis field.

Silicosis is a restrictive disease that causes alveolar collapse, an increase in lung elastic tissue, and lung infiltration by inflammatory cells [26]. The increase in collapsed alveoli indicates tissue damage and impairment of pulmonary function [41]. As a result, cytokines are secreted into the lung, triggering pulmonary remodeling processes and resulting in the production of connective tissue fibers [41]. As expected, we found morphometric and cellularity alterations in the lung following intratracheal administration of silica. The percentage of collapsed alveoli was increased 24 h after instillation and remained elevated 15 days later, indicating that tissue damage persisted over time. We further observed an increase in the number of PMN cells, but not in MN cells, in the lung 24 h after silica instillation (but not 15 days after instillation), indicating an acute lung infiltration by peripheral immune/inflammatory cells.
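As an aside, the point-counting morphometry described in the Methods lends itself to a simple computation. The sketch below derives the percentage of collapsed alveoli per field from a 100-point grid and averages over 10 fields; the counts shown are hypothetical, and reporting as mean ± SEM follows the convention stated under Statistical analyses.

```python
import numpy as np

def percent_collapsed(points_on_collapsed, total_points=100):
    """Per-field percentage: grid points over collapsed alveoli / total points."""
    return 100.0 * np.asarray(points_on_collapsed, dtype=float) / total_points

# Hypothetical counts from 10 random, non-overlapping fields
counts = [18, 22, 15, 20, 25, 17, 19, 23, 21, 16]
pct = percent_collapsed(counts)
mean, sem = pct.mean(), pct.std(ddof=1) / np.sqrt(pct.size)
print(f"collapsed alveoli: {mean:.1f} +/- {sem:.1f} %")
```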
Previous studies employing administration of higher doses of silica (20 mg/kg) demonstrated that histological alterations in the lung were accompanied by robust increases in pro-inflammatory cytokines [42,43]. Thus, we sought to determine whether the lower dose of silica employed here elicited a similar response. In accordance with previous results [3], both IL-1β and IL-6 were significantly increased in the lung 24 h after administration of silica. However, in contrast with the increase in lung TNF-α observed in mice instilled with a higher dose of silica [3], under our conditions TNF-α was reduced at 24 h in Si mice compared to Ctrl animals. The different TNF-α response may be related to the low dose of silica administered or to the use of Swiss mice in the current work, as opposed to a higher silica dose and BALB/c mice in previous studies [3]. In addition, evidence suggests that TNF-α release is time-dependent in silica-induced toxicity [44], and lipoxins may regulate this release, as discussed next. In line with the reduction in PMN cell number, IL-6 and TNF-α returned to baseline levels 15 days after silica instillation.

Modulation of cytokine levels in silicosis may further be related to the upregulation of lipoxin A4 (LXA4) signaling. An increase in LXA4 has been found to contribute to the protective effect of apolipoprotein A1 (ApoA1) against fibrosis in an experimental model of lung silicosis [45]. The mechanism of LXA4 protection has been reported to involve attenuation of the release of pro-inflammatory cytokines and chemokines, such as TNF-α and macrophage inflammatory protein-2 [46]. In addition to its anti-inflammatory activity, LXA4 stimulates macrophages to perform phagocytosis of apoptotic immune cells without the release of pro-inflammatory cytokines [47].

Administration of micro- and nano-sized particulate matter to animals via the lung has been shown to trigger systemic inflammation and lesions of the spleen, heart, and kidney [40,48], as well as autoimmunity [49]. Humans exposed to silica have displayed increased levels of systemic inflammatory markers [50], rheumatoid arthritis, or systemic scleroderma [51]. The central nervous system can also be impacted by silica. After inhalation, SiO2 nanoparticles penetrate the epithelium of the respiratory tract and are translocated to the brain via either the circulatory system or the olfactory nerve [52]. Following exposure to silica nanoparticles, mice showed neuropathology, degeneration, and synapse damage [3]. Silica nanoparticles were found to be primarily deposited in the frontal cortex and hippocampus [3]. Because the hippocampus is centrally implicated in memory and cognition, and pathological deregulation of hippocampal function underlies cognitive and memory impairments in neurological disorders, we investigated the possibility that lung inflammation might be followed by hippocampal inflammation and memory loss in silicosis.

Increasing evidence supports the notion that both acute and chronic peripheral inflammation can trigger brain inflammation, neurodegeneration, and cognitive deficits [2,30,37]. Significantly, recent studies have revealed that restrictive pulmonary diseases, such as silicosis, are risk factors for cognitive impairment and dementia [23]. However, the mechanisms underlying the association between lung and brain dysfunction remain unknown. The potential crosstalk between lung inflammation and brain dysfunction in the mouse silicosis model was initially evaluated by measuring hippocampal cytokines.
Results showed no significant differences in hippocampal IL-1β, IL-6, or TNF-α 24 h after administration of silica. However, hippocampal IL-1β and IL-6 were significantly increased 15 days after instillation, suggesting that hippocampal inflammation was triggered by, and temporally followed, the initial lung inflammation caused by silica.

Increased brain levels of pro-inflammatory cytokines trigger synapse damage and affect neuroplasticity [6], resulting in cognitive impairments [2,32]. We found that hippocampal pre- and post-synaptic markers (synaptophysin and PSD-95, respectively) were significantly decreased 15 days after administration of silica, indicating synapse damage and loss. Numerous studies have established synapse loss as a hallmark of AD [33]. Soluble forms of the Aβ peptide accumulate in the AD brain [35,36] and activate pro-inflammatory pathways leading to synapse loss and neuronal damage [7,53]. This prompted us to investigate whether hippocampal inflammation in silica-instilled mice was associated with elevated Aβ levels. Hippocampal Aβ levels were indeed higher in silica-instilled than in control mice. While we did not detect any changes in BACE1, one of the secretases responsible for cleavage of APP and production of Aβ [54], APP was found to be elevated in the hippocampi of mice that received silica. It is thus possible that the increase in hippocampal Aβ resulted from increased substrate (APP) availability for cleavage by secretases in silica-instilled mice. The results suggest that inflammatory signaling from the lung triggers a deleterious process involving elevated hippocampal cytokines and Aβ, leading to synapse damage in the hippocampus. Communication via the vagus nerve might be an additional mechanism involved in the crosstalk between peripheral and central inflammation in silicosis, and further studies appear warranted to investigate this possibility.

Inhibition of brain insulin signaling is thought to play a major role in the pathogenesis of neurodegenerative disorders [7,8,37] and in sepsis [24,25], and to be a leading mechanism underlying cognitive impairment. A key component of the insulin signaling pathway is Akt, which becomes phosphorylated at Ser473 in response to insulin [8]. Disruption of Akt signaling is detrimental to the insulin response and may lead to brain dysfunction [7]. Our results showed a decrease in hippocampal pSer473-Akt in silica-instilled mice, suggesting a disruption of insulin signaling such as that observed in AD and other neurodegenerative disorders.

Fig. 6 Schematic representation of the current study findings. Hippocampal inflammation temporally follows the initial lung inflammation induced by silica particles. Cytokine (IL-1β and IL-6) levels first increase in the lung 24 h after intratracheal administration of silica, coinciding with lung infiltration by phagocytes (PMNs). The initial lung inflammatory response is followed by increased hippocampal IL-1β and IL-6 15 days after silica instillation. This, in turn, is accompanied by damage to synapses, leading to memory impairments.

Finally, our findings of hippocampal inflammation, synapse damage, and impaired insulin signaling prompted us to assess the impact of the administration of silica on memory. Remarkably, we found that intratracheal instillation of a low dose of silica impaired memory in the NOR test.
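The NOR analysis described under Statistical analyses (a one-sample t-test of novel-object exploration against the 50% chance level) can be sketched as follows. The per-animal percentages are hypothetical placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical percentage of exploration time devoted to the novel object
novel_pct = np.array([62.1, 55.4, 58.9, 49.7, 60.3, 57.2, 61.5])

# One-sample t-test against the fixed chance level of 50%
res = ttest_1samp(novel_pct, popmean=50.0)
print(f"t = {res.statistic:.3f}, p = {res.pvalue:.4f}")
# p < 0.05 with a mean above 50% indicates the group discriminated the novel object
```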
In conclusion, the current study identified a crosstalk between lung and brain leading to hippocampal inflammation, synapse damage, and cognitive impairment after a single intratracheal instillation of a low dose of silica in mice. Figure 6 schematically represents this sequence of events. Interestingly, a recent study showed a link between intranasal exposure to silica, α-synuclein aggregation, and neurodegeneration in a Parkinson's disease model [55]. In light of recent clinical and epidemiological studies connecting pulmonary dysfunction with cognitive impairments and dementia [22-25], and considering the prevalence of silicosis as an occupational disease, investigation of potential neurological outcomes in patients appears warranted. Our findings further suggest that early intervention to attenuate peripheral pro-inflammatory signaling may be important to preserve brain health in silicosis.
Hydrogen sulfide alleviates chlorobenzene toxicity in soybean seedlings

As a gas signaling molecule, endogenous hydrogen sulfide (H2S) plays a crucial role in the plant stress response. However, the role of H2S in the response to organic pollutants specifically has not been studied. Here, the effects of H2S addition on the tolerance of soybean (Glycine max) seedlings to 1,4-dichlorobenzene (1,4-DCB) were investigated. Under 1,4-DCB stress, the growth of soybean seedling roots and stems was inhibited, while L-/D-cysteine desulfhydrase (LCD/DCD) activity was induced and endogenous H2S increased. When applied jointly with sodium hydrosulfide (NaHS), an H2S donor, root growth inhibition was effectively alleviated. Pre-treatment of seedlings with 0.4 mmol L-1 NaHS reduced the malondialdehyde (MDA) and reactive oxygen species (ROS) contents, significantly mitigating root cell toxicity. Further experiments confirmed that NaHS enhanced peroxidase (POD) and superoxide dismutase (SOD) enzyme activities in soybean seedlings. In contrast, these effects were reversed by hypotaurine (HT), an H2S scavenger. Therefore, H2S alleviated 1,4-DCB toxicity in soybean seedlings by regulating antioxidant enzyme activity to reduce oxidative damage to cells.

INTRODUCTION

Hydrogen sulfide (H2S) has recently been discovered to act as a gaseous transmitter. It is therefore the third discovered gaseous signaling molecule, following nitric oxide (NO) and carbon monoxide (CO). In plants, H2S is mainly produced via the L-/D-cysteine desulfhydrase enzymes (LCD/DCD), and it has been reported to be involved in stress responses to drought, heat, heavy metals, and salt during seed germination and seedling growth (ZHANG et al., 2010a; JIN et al., 2011; LAI et al., 2014; LI et al., 2014). Application of H2S could also significantly improve root morphology, chlorophyll content, and photosynthetic activity under Pb stress (ALI et al., 2014).

Chlorobenzene dissolved in an organic solvent is an important raw material and intermediate in the pharmaceutical, leather, dye, and other industries, where it is widely used. With the discharge of industrial waste, chlorobenzene, which is non-biodegradable and can accumulate in the environment, enters ecosystems in different ways, through polluted air, soil, and water, potentially causing lasting harm. For example, the presence of 1,2,4-trichlorobenzene reduced maximum root length, plant height, tillers per hill, and shoot and root dry weight in rice plants, as a result of changes in the activity of antioxidant enzymes, reactive oxygen species (ROS), malondialdehyde (MDA), and membrane lipid peroxidation (DING et al., 2014). Chlorobenzene also inhibited maize cell division and seedling growth, and the oxidative stress response increased in proportion to the compound's degree of chlorination (MIGUEL et al., 2012).
H2S has been identified to play an important role in diverse physiological processes in plants. However, the relationship between H2S and plant chlorobenzene stress tolerance remains poorly characterized, and whether or not oxidation reactions are involved in the process is not clear. The aim of this study was to provide more evidence for potential mechanisms of H2S-related mitigation of the developmental inhibition caused by the organic pollutant 1,4-dichlorobenzene (1,4-DCB). Changes in root and stem elongation, MDA and ROS contents, and antioxidant enzyme levels were investigated. The results revealed that the presence of H2S alleviated 1,4-DCB-induced stress in soybean seedlings by restoring oxidative balance.

MATERIALS AND METHODS

Plant materials, growth conditions and processing methods

Seeds of the soybean (Glycine max) variety Zhong Huang 57 were surface-sterilized with a 10 min soak in 5% NaOCl. After being rinsed with deionized water, the seeds were soaked in an appropriate amount of deionized water for 24 h to initiate germination. Uniform seedlings were selected and planted in seedling trays containing treated soil. The soil was purchased from Kaifeng Seed Company and contained at least 28% total organic matter and 2% total nitrogen, phosphorus, and potassium. Varying concentrations of sodium hydrosulfide (NaHS, an H2S donor) or hypotaurine (HT, an H2S scavenger) solutions were added to the dry soil mix, increasing the total water content to 30%, and 1,4-DCB dissolved in acetone was then added to make up the different treatments, with acetone alone for the controls. Soybean seedlings were maintained at 25 °C ± 1 °C with a 14 h photoperiod (light intensity 200 µmol m-2 s-1). One-week-old seedlings were harvested for analysis.

Endogenous H2S content determination and L-/D-cysteine desulfhydrase activity assay

The methylene blue method was used to determine the H2S content in soybean seedling roots, following ZHANG et al. (2008), while root cysteine desulfhydrase activity was determined using the method of JIN et al. (2011). The activity of L-cysteine desulfhydrase was established from H2S release (including L-DTT). D-cysteine desulfhydrase activity was measured in the same way, but replacing L-cysteine with D-cysteine and using pH 8.0 Tris-HCl buffer. Control and treatment data were expressed as means plus standard errors of three independent experiments.

MDA, H2O2 and O2·- content determination

The MDA concentration was calculated using the formula of ZHANG et al. (2011). H2O2 content and the rate of O2·- generation were determined using assay kits produced by the first branch of the Nanjing Jiancheng Biological Engineering Institute.

Histochemical staining and tissue sections

Evans Blue dye was used to detect the viability of root tip cells in the soybean seedlings, following the method of BAKER & MOCK (1994). A 2-3 cm root tip was stained for 10 minutes in a 0.25% Evans Blue solution at room temperature. After a wash with double-distilled water, the root tip was photographed.

Antioxidant enzyme activity assay

POD activity was measured using the guaiacol method, where a 0.01 change per minute at A470 was calculated as one enzyme activity unit (U). CAT activity was determined according to the method of LANG & ZHU (2014). SOD activity was determined using the nitro blue tetrazolium chloride (NBT) photochemical reduction method.
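The POD unit definition above (one unit per 0.01 change in A470 per minute) translates directly into code. The absorbance readings in the example are hypothetical, and the sketch omits any normalization (e.g., per gram fresh weight or per milligram protein), which the text does not specify.

```python
def pod_activity_units(a470_initial, a470_final, minutes):
    """POD activity in units (U), where 1 U = 0.01 change in A470 per minute."""
    delta_per_minute = (a470_final - a470_initial) / minutes
    return delta_per_minute / 0.01

# Example: A470 rises from 0.120 to 0.420 over a 3 min assay -> 10.0 U
print(pod_activity_units(0.120, 0.420, minutes=3))
```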
RESULTS AND DISCUSSION

1,4-DCB inhibited soybean seedling growth and promoted the synthesis of H2S

In order to assess toxicity symptoms in soybean seedlings under chlorobenzene stress, different concentrations (0, 4, 8, 16, and 24 mmol L-1) of 1,4-DCB were applied in this study. 1,4-DCB significantly inhibited the elongation of soybean seedling roots and stems in a dose-dependent manner, with greater inhibition at increasing 1,4-DCB concentrations. The maximum response occurred at a dose of 24 mmol L-1, where the lengths of seedling roots and stems were 33.3% and 34.7%, respectively, of controls (Figure 1A, B). In order to investigate whether H2S was associated with this process, endogenous H2S concentrations were measured in soybean seedlings. H2S content increased with 1,4-DCB concentration; when the 1,4-DCB concentration was between 8 and 24 mmol kg-1, the endogenous H2S content in treated seedlings differed significantly from controls (Figure 1C). Therefore, a concentration of 8 mmol kg-1 1,4-DCB was selected to induce a stress response for further experiments. H2S is mainly produced from L-/D-cysteine in plants (JIN et al., 2015). Here, DCD activity increased with the concentration of 1,4-DCB, and the activity of LCD was significantly elevated at a 1,4-DCB concentration of 8 mmol kg-1; however, as 1,4-DCB levels rose above 8 mmol kg-1, the total activity of LCD did not differ from the control (Figure 1D). Thus, these results indicated a possible inter-relationship between LCD/DCD-related H2S homeostasis and the chlorobenzene response in soybean seedlings. This H2S pathway is similar to the generation of H2S in stomatal closure (HOU et al., 2011).

NaHS alleviated 1,4-DCB-induced toxicity in soybean seedlings

NaHS has been used as an H2S donor in previous research (LAI et al., 2014). Therefore, NaHS was applied in this study to examine the effects of H2S on soybean seedlings under 1,4-DCB-induced stress. Experiments were carried out on soybean seedlings planted in soil with different concentrations of NaHS (0, 0.2, 0.4, 0.6, 0.8, 1.0 mmol L-1) and 8 mmol kg-1 1,4-DCB. The root length inhibition observed under 1,4-DCB stress was mitigated progressively as the NaHS concentration increased; this effect might be enhanced even further above 1 mmol L-1 NaHS. However, there was no detoxification effect of NaHS on stem growth (Figure 1E). This finding agreed with previous results showing that pretreatment with NaHS could improve heat tolerance (LI et al., 2015), but differs from results showing that high concentrations of NaHS produce toxic effects (LAI et al., 2014). This variation in results might be due to differing physiological concentrations of H2S, arising from variation in the plant material, developmental stage, tissue and organs, and growth environment studied. In addition, this variation may reflect the relationship between NaHS concentration and the release characteristics of different H2S donors. Pretreatment with increasing doses of NaHS caused a considerable decrease in the level of 1,4-DCB-induced MDA (Figure 1F). This effect occurred above NaHS concentrations of 0.4 mmol L-1; hence this concentration of NaHS was used in the follow-up experiments.
H2S scavengers aggravated the toxic effects of 1,4-DCB in soybean seedlings

In order to clarify the role of endogenous H2S homeostasis in regulating the stress response of soybean seedlings to 1,4-DCB, HT was applied in further experiments. When HT was applied (at 0.4 mmol L-1), soybean seedlings showed the opposite phenotype to the NaHS treatment. The growth of roots and stems was arrested, and in the presence of both chlorobenzene and HT, this growth inhibition was further aggravated, with root and stem lengths only 26.5% and 37.4%, respectively, of controls (Figure 2A, B); roots also became thicker.

H2S scavengers aggravated the toxic effect of 1,4-DCB on cells

H2S reduced the impact of 1,4-DCB stress on plants mainly by lowering the ROS content, which helped to maintain the integrity of the cell membrane (ZHANG et al., 2015). Under 1,4-DCB stress, pretreatment with HT significantly increased the MDA content (Figure 2C), the hydrogen peroxide (H2O2) content, and the rate of superoxide radical (O2·-) generation (Figure 2E). This implied that large amounts of lipid peroxides were produced, which could eventually damage the cell membrane. In contrast, these effects were reversed by NaHS, indicating that such damage had been alleviated. Similar phenomena have been observed previously; for example, H2S maintained low concentrations of MDA and H2O2, thereby alleviating aluminum stress (ZHANG et al., 2010b). This effect was confirmed here using tissue staining. When there was no stress treatment, the root staining was relatively light. Under 1,4-DCB stress, pretreatment with HT enhanced the chlorobenzene-related staining pattern, as indicated by a stronger deposition of blue-colored precipitates in root tips (Figure 2D). This result agreed with the effects seen on soybean seedling phenotypes. Hence, the presence of the H2S scavenger HT aggravated the toxic effects of 1,4-DCB on cells.
H2S restored redox equilibrium

To reduce free radical damage to the cell membrane, plants may activate an antioxidant defense system. In this study, soybean seedlings were subjected to 1,4-DCB stress and the activities of different antioxidant enzymes were measured. POD was the most sensitive to the treatment: even at low 1,4-DCB concentrations (4 mmol kg-1), POD activity increased rapidly. With increasing 1,4-DCB stress, POD activity gradually increased further; at the highest concentration of 1,4-DCB, POD activity was more than 8 times greater than in the control. As a key regulator of H2O2, CAT exhibited the opposite trend: as the 1,4-DCB concentration increased, CAT activity decreased, and at high 1,4-DCB concentrations this decline was significant. The activity of SOD, which scavenges the superoxide anion, was more stable (Figure 3A). In order to examine the relationship between H2S and antioxidase levels, the H2S scavenger HT and the donor NaHS were applied experimentally. The activities of the three antioxidases were then observed and, as expected, HT acted to further reduce POD and SOD activity. In contrast, NaHS increased the activity of POD and SOD (Figure 3B). The activity levels of CAT, POD, and SOD were thus adjusted in soybean seedlings so as to limit the amount of active oxygen species, in order to mitigate the damage from 1,4-DCB. H2S also helped alleviate 1,4-DCB-induced injury by increasing the activity of antioxidant enzymes, maintaining a balance of active oxygen. This finding was consistent with previous research in peas showing that H2S increased soluble protein content, as well as the activity of APX, POD, and SOD, but decreased CAT activity in root tissues (LI et al., 2010). It is also similar to results showing that higher CAT and SOD activity relieved heat and salt stress (LI et al., 2014; LAI et al., 2014). Although the role of antioxidant enzymes in the stress response versus the reaction to H2S was not the same, antioxidant enzymes acted in all cases to maintain stability in the levels of oxygen free radicals, keeping their concentration within a low enough range to reduce cellular damage. The results of this study are also consistent with the finding that, through increased activity of antioxidant enzymes, plants resisted the effects of aluminum, boron, and tissue hypoxia stresses (WANG et al., 2010; CHEN et al., 2013; CHENG et al., 2013). However, owing to the ubiquity of H2S and its versatile properties, the enhancement of antioxidant capacity to decrease ROS accumulation is unlikely to be the only mechanism involved.

CONCLUSION

This study revealed that the presence of endogenous H2S, associated with the total activity of LCD/DCD, effectively improved the tolerance of soybean seedlings to 1,4-DCB. The results further illustrated the role of endogenous H2S homeostasis in the stress response to 1,4-DCB.

Figure 1 - Effects of 1,4-DCB and NaHS treatments on soybean seedlings. The root and stem lengths (A), endogenous H2S content (B), and activities of DCD (C) and LCD (D) were measured in soybean seedlings under 1,4-DCB stress. NaHS treatments of different concentrations on root length and stem length (E) and intracellular MDA (F) in soybean seedlings under 1,4-DCB stress. Note that values are means ± SE of three independent experiments, and that different letters indicate significant differences (P < 0.05), in all figures.
Figure 2 - Effects of HT and NaHS treatments on the root length and stem length of soybean seedlings (A & B), intracellular MDA content (C), H2O2 content, and the rate of O2·- generation (E) in soybean seedlings under 1,4-DCB stress. The apical part of treated roots was stained (D).

Figure 3 - Effects of 1,4-DCB (A) and of HT and NaHS (B) treatments on the activities of antioxidant enzymes in soybean seedlings.
Pulmonary rehabilitation to improve physical capacity, dyspnea, and quality of life following pulmonary embolism (the PeRehab study): study protocol for a two-center randomized controlled trial

Background: Recently, a large group of patients with persistent dyspnea, poor physical capacity, and reduced health-related quality of life (HRQoL) following pulmonary embolism (PE) has been identified and clustered under the name "post pulmonary embolism syndrome" (PPS). These patients seem good candidates for pulmonary rehabilitation. The aim of the study is to explore whether a pulmonary rehabilitation program can improve physical capacity, dyspnea, and HRQoL in PPS patients.

Methods: A two-center randomized controlled trial (RCT) is being performed at Østfold Hospital and Akershus University Hospital in Norway. Patients with PPS are 1:1 randomized into an intervention or a control group. The intervention consists of a supervised, outpatient rehabilitation program twice weekly (1 h) for 8 weeks provided by experienced physiotherapists. The intervention involves individually adapted exercises based on existing pulmonary rehabilitation programs (relaxation, interval, and resistance training), and an educational session including topics such as normal anatomy and physiology of the respiratory and circulatory systems, information on PE/PPS, breathing strategies, and the benefits of exercise/physical activity. Patients randomized to the control group receive usual care without specific instructions to exercise. Participants in the intervention and control groups will be compared based on assessments conducted at baseline, 12 weeks, and 36 weeks after inclusion using the incremental shuttle walk test (primary outcome) and endurance shuttle walk test (exercise capacity), the Sensewear activity monitor (daily physical activity), the modified Medical Research Council scale and the Shortness of Breath Questionnaire (dyspnea), and the EQ-5D-5L and the Pulmonary Embolism Quality of Life Questionnaire (HRQoL). Recruitment of 190 patients is currently ongoing.

Discussion: Results from this study may provide a currently untreated group of PPS patients with an effective treatment resulting in reduced symptoms of dyspnea, improved exercise capacity, and better HRQoL following PE.

Trial registration: ClinicalTrials.gov NCT03405480. Registered prospectively in September 2017. Protocol version 1 (from original protocol, September 2017). The study protocol has been reported in accordance with the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) guidelines (Additional file 1).

Background

Pulmonary embolism (PE) occurs when an embolus blocks a pulmonary artery, resulting in acute symptoms, such as dyspnea and chest pain, which usually subside gradually, with the majority of patients regaining normal function within 3-6 months [1]. However, long-term complications following PE can include recurrent venous thromboembolism (VTE), bleeding, and chronic thromboembolic pulmonary hypertension (CTEPH) [2,3]. Several studies have shown that up to 50% of patients complain of various grades of persistent unexplained dyspnea many years after the diagnosis of PE [4,5]. Furthermore, patients who reported dyspnea had reduced exercise capacity as measured by the 6-min walk test (6MWT) compared to patients with no dyspnea [6]. Additionally, those suffering from persistent dyspnea had impaired health-related quality of life (HRQoL) compared to both the normative population and PE patients without dyspnea [4].
These findings have recently been confirmed by a prospective study showing that half of PE patients have an exercise limitation at 1 year post-PE which negatively influences walking distance and reduces HRQoL [7]. Some of these patients have persistent pathological findings, such as right ventricular dysfunction, pulmonary hypertension, or residual perfusion defects causing dead space ventilation, which may explain, at least in part, the persistent symptoms. The majority of patients, however, have no detectable cardiopulmonary sequelae and merely suffer from deconditioning. The foregoing research has led to the recognition of patients with the so-called post-PE syndrome (PPS), defined as new or progressive dyspnea, exercise intolerance, and/or diminished functional status following PE without an apparent non-PE alternative explanation [8]. Guidelines provide clear recommendations for the management of CTEPH, the most severe presentation of PPS, affecting about 4% of patients following PE [2]. Studies focusing on adequate treatment of other PPS presentations to improve functionality and decrease symptoms are, however, lacking, and guidelines make no mention of this large patient group. Because it is likely that physical deconditioning is responsible for at least a part of the disease burden, it has been hypothesized that patients with PPS may benefit from pulmonary rehabilitation [7,9].

Pulmonary rehabilitation is a core component of the management of chronic lung disease and is mostly utilized by patients with chronic obstructive pulmonary disease (COPD). Programs typically consist of patient-tailored therapies such as exercise training, education, and behavioral change, based on a thorough assessment of the patient, with the goal of improving physical and psychological condition and promoting long-term adherence to health-enhancing behaviors [10]. Rehabilitation is a cost-effective intervention and has demonstrated a reduction in respiratory symptoms such as the perception of dyspnea and improved physical function and HRQoL in patients with COPD, as well as reduced hospital admissions and improved mortality rates [10]. Recently, there has been an increased focus on the benefits of rehabilitation for other types of patients experiencing similar respiratory symptoms and reduced exercise capacity, such as those with lung cancer, pulmonary hypertension, and cystic fibrosis [10]. Moreover, a study from 2016 investigated the feasibility of a breathlessness rehabilitation program for patients with both respiratory and cardiac disease, suggesting that rehabilitation should focus on the symptoms and limitations that patients experience rather than on traditional disease-focused rehabilitation [11].

To our knowledge, few studies have addressed the effect and safety of rehabilitation and exercise after PE or deep vein thrombosis (DVT). One retrospective study evaluated the safety of rehabilitation after PE, showing that it is safe to start exercising following PE [12]. One small randomized controlled trial (RCT) objectively measured the effect of exercise and behavioral weight loss after VTE, demonstrating that early initiation of exercise was safe and resulted in improvements in physical activity and fitness [13]. Both studies pointed out the need for large prospective RCTs. Furthermore, a recently completed study randomized patients with newly diagnosed PE, regardless of the presence of persistent dyspnea, to a home-based training program or a control group [14].
This study concluded that home-based exercise training and nurse consultations did not improve exercise capacity or symptoms of dyspnea following PE. However, this study included all PE patients, rather than only those with PPS, thus including patients who had recently been diagnosed with PE and in whom a spontaneous improvement in symptoms can be expected from the natural course of the disease. No current studies have provided rehabilitation to patients suffering from PPS. Previous research has indicated that the HRQoL impairment in patients with PPS is driven by reduced physical capacity [4], suggesting a possible receptivity to an intervention such as pulmonary rehabilitation, including exercise training, in order to reduce breathing discomfort and improve HRQoL and exercise capacity. The aim of this study is to explore the effect of pulmonary rehabilitation on exercise capacity, dyspnea, and HRQoL in patients with PPS.

Hypothesis

The primary hypothesis is that a structured, outpatient, hospital-based, 8-week pulmonary rehabilitation program will lead to increased exercise capacity, fewer symptoms of dyspnea, and improvements in HRQoL in patients with PPS as compared to a control group receiving no active intervention.

Study design

A two-center RCT is being performed at the outpatient departments of Østfold Hospital Trust (ØHT) and Akershus University Hospital (AHUS) in Norway. Patients with PPS are 1:1 randomized into two arms, an intervention arm and a control arm, using sealed envelopes. The allocation sequence will be computer generated and, to ensure balanced recruitment during the study, will be performed in blocks of 10 (a sketch of this allocation procedure is given after the eligibility criteria below). The allocation sequence will not be available to the person enrolling participants, and the randomization code will be kept inside sealed opaque envelopes. The generation of the allocation sequence has been performed by the statistician at ØHT. The enrollment process and assignment of interventions will be performed by the PhD candidates. In addition, a group of patients without PPS following PE will be examined at baseline to compare patients with and without persistent dyspnea after PE in terms of exercise capacity, daily physical activity, dyspnea, and HRQoL.

The primary study objective is to explore the short-term change in exercise capacity from baseline to 12 weeks after inclusion between groups, as measured by the incremental shuttle walk test (ISWT). The secondary objectives are to explore the long-term effect of the rehabilitation program on exercise capacity 36 weeks after inclusion between groups (ISWT), as well as changes in exercise endurance (endurance shuttle walk test, ESWT), subjective symptoms of dyspnea, daily physical activity levels, and HRQoL from baseline to 12 and 36 weeks after inclusion.

Eligibility criteria

Patients diagnosed and treated for PE 6 months to 6 years previously at ØHT or AHUS are identified from ØHT's thrombosis registry (TROLL registry-NSD 28435/3/LMR) (ØHT only) or via ICD-10 discharge codes (AHUS). Patients are invited to participate by postal mail. Inclusion criteria are age 18-75 years, objectively diagnosed symptomatic PE (greater than isolated subsegmental PE) by computed tomography pulmonary angiography (CTPA) 6 months to 6 years prior to inclusion in the study, persistent dyspnea defined as modified Medical Research Council (mMRC) breathlessness scale grade ≥ 1 that appeared or worsened after the diagnosis of PE, and the ability to provide written informed consent.
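For concreteness, the computer-generated 1:1 allocation in blocks of 10 described under Study design could look like the minimal sketch below. The seed is illustrative, and 19 blocks of 10 matches the 190 participants mentioned in the abstract; the protocol does not disclose its actual generator, so this is an assumed implementation.

```python
import random

def block_allocation(n_blocks, block_size=10, arms=("intervention", "control"), seed=2017):
    """1:1 permuted-block allocation: each block holds block_size//2 of each arm."""
    rng = random.Random(seed)   # seed shown for reproducibility only
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)      # permute within the block
        sequence.extend(block)
    return sequence

allocation = block_allocation(n_blocks=19)  # 19 x 10 = 190 planned participants
print(allocation[:10])                      # first block of sealed envelopes
```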
Exclusion criteria include pulmonary diseases (such as COPD GOLD ≥ 2 or restrictive pulmonary diseases, lung cancer, or pleural disease), heart failure, CTEPH, significant valvular heart disease, a condition that would interfere with the ability to comply with the study protocol or to give informed consent (e.g., history of drug abuse, excessive alcohol consumption, cognitive dysfunction, or severe psychiatric disease), active malignancy or recurrent, metastatic, or inoperable disease, life expectancy less than 3 months, and pregnancy.

Blinding

The investigators performing the walking tests at follow-up are blinded to the patients' group allocation. Due to the nature of the intervention, blinding of the participants and the physiotherapists providing the intervention is not possible. The statistician who will perform the data analysis will be blind to group allocation.

Rehabilitation group

Patients in the intervention group are allocated to a basic pulmonary rehabilitation program consisting of a supervised, outpatient exercise program for 1 h twice weekly for 8 weeks. Experienced physiotherapists construct an individually adapted exercise program based on existing pulmonary rehabilitation programs (combining relaxation, interval training at moderate intensity measured with the Borg scale, and resistance training); the program also includes an educational session provided by a medical doctor and a physiotherapist. The educational session includes topics such as the normal anatomy and physiology of the respiratory and circulatory systems, information on PE and PPS, breathing strategies, and the benefits of exercise/physical activity. Training attendance is documented, and patients will be given a simple home-based exercise program, consisting of resistance exercises that can be performed without equipment, to be performed once or twice weekly during the intervention period. Implementing the supervised, outpatient rehabilitation program will not require alteration of other usual care pathways (including use of any medication, in particular anticoagulation), and these will continue for both trial arms. Minimal actions will be taken to improve adherence; for example, only one telephone call will be made in the case of poor attendance.

Control group

Patients randomized to the control group will receive usual care without specific instructions to exercise (no active intervention). All patients are treated and followed up according to international guidelines [15]. The participants randomized to standard care in the control group will not receive any structured exercise or information as part of the current study, but continue their routine follow-up at the outpatient clinic. However, if they already perform regular physical activity at the time of inclusion, they are encouraged to continue doing so.

Outcome measures

Primary outcome measure

The primary endpoint of the study is improvement in physical capacity as measured by the ISWT. This walking test was developed to assess exercise capacity and is valid, reliable, and responsive in a number of study populations, including patients with cardiac and respiratory diseases [16]. The patient walks between two shuttles along a 9-m track at a tempo guided by audible signals that increase in speed every minute, for a maximum of 12 min. The test ends when the patient cannot manage to keep the correct speed or has to stop because of symptoms (such as dyspnea or fatigue). Standardized instructions will be provided before the test commences.
In order to exclude a learning effect, the ISWT is performed twice at baseline with at least 15 min between tests. Peripheral oxygen saturation is recorded, and patients will report their subjective experience of dyspnea during exertion using the Borg scale before and immediately after the test [17]. The Borg scale is commonly used for assessing perceived exertion during field walk tests. The minimal clinically important difference (MCID) for the ISWT is 70 m in patients with cardiac disease and 48 m in patients with COPD [18,19].

Secondary outcome measures

Endurance shuttle walk test

The endurance shuttle walk test (ESWT) is a derivative of the ISWT. The patient walks between two shuttles along a 9-m track at a predefined speed, usually at 85% of the maximum speed derived from the ISWT. The test ends when the patient cannot continue because of symptoms (such as dyspnea or fatigue) or after a maximum of 20 min (test completion). The outcome of the ESWT is usually reported as time (minutes and seconds), although in some studies the distance completed (meters) has been used. Studies suggest that the ESWT is more sensitive to change after rehabilitation than the 6-min walk test (6MWT) and the ISWT [20,21]. However, compared to our primary endpoint (ISWT), there is less evidence on using the ESWT, and there are no reference values for the PPS population. The MCID for the ESWT has been demonstrated to be 174 to 279 s in COPD after pulmonary rehabilitation [22].

Modified MRC dyspnea scale

The mMRC scale is a widely used tool for evaluating the limitation of activities due to dyspnea. This short questionnaire consists of five statements describing the patient's respiratory disability, ranging from 0 ("not troubled by breathlessness except on strenuous exercise") to 4 ("too breathless to leave the house or breathless when dressing or undressing"). The MCID for the mMRC is 0.5 points [23].

The Shortness of Breath Questionnaire

The Shortness of Breath Questionnaire (SOBQ) is a patient-reported outcome measure which assesses subjective symptoms of dyspnea associated with activities of daily living (ADL). The SOBQ includes 24 items, each scored on a scale from 0 ("not at all") to 5 ("maximal/unable to do because of breathlessness"). Total scores range from 0 to 120, with a higher score indicating a higher degree of dyspnea.

Sensewear activity monitor

Daily physical activity is measured using a Sensewear activity monitor. The participants will wear the monitor for 1 week before and 1 week after the intervention period to investigate whether the intervention changes daily physical activity, measured as the number of steps taken per day and the time spent at different activity intensities. Sensewear is a multisensor activity monitor built around a triaxial accelerometer and has been shown to be a reliable and valid tool for measuring physical activity in people with respiratory disease [24,25].

EQ-5D-5L

The EQ-5D-5L has been developed by the EuroQol Group as a patient-reported outcome measure to assess generic health status and HRQoL in 5 different dimensions: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. Each dimension has 5 possible answers ranging from 1 to 5, with a higher score indicating a worse health state; these scores can be aggregated to a utility score on a 0-1 scale using a tariff of preferences derived from a general population [26].
In addition, the patient subjectively scores their general HRQoL on a visual analogue scale from 0 ("worst imaginable state of health") to 100 ("best imaginable state of health").

Pulmonary Embolism Quality of Life Questionnaire

The Pulmonary Embolism Quality of Life Questionnaire (PEmb-QoL) is a disease-specific patient-reported outcome measure to assess HRQoL following PE [27]. The PEmb-QoL has 40 items over 6 domains, which assess symptom frequency, the time of day when complaints are at their worst, and the effect of pulmonary-specific symptoms on ADL and work-related problems. Scores for each domain range from 0 to 100, with the average score of all six domains used to calculate the total score. A lower score indicates better HRQoL. The MCID for the PEmb-QoL is 15 points [28].

The Hospital Anxiety and Depression Scale

The Hospital Anxiety and Depression Scale (HADS) is a patient-reported outcome measure assessing symptoms of depression and anxiety. The HADS provides a total score with a 0-42 range, with a higher score indicating that the patient is more symptomatic. Scores of ≥ 19 points indicate symptoms corresponding to cases of anxiety and depression, whilst scores between 15 and 18 points suggest possible symptoms of anxiety and depression. It is also possible to calculate a score for anxiety or depression only (range 0-21 points); scores of ≥ 11 points indicate symptoms that can be compatible with anxiety/depression, and 8-10 points suggest possible symptoms of anxiety/depression. The MCID for the HADS has been suggested to be a reduction of 1.3 to 1.8 points in COPD patients undergoing pulmonary rehabilitation [29].

Data collection

All outcome measures are completed at baseline and 12 and 36 weeks after inclusion (Figs. 1 and 2). In addition, a complete baseline evaluation is performed on all participants, including a full history and medical examination, routine blood tests and biobanking (10 ml EDTA plasma, 10 ml citrated plasma, 10 ml serum, and 10 ml in PAXgene), ventilation and perfusion scintigraphy, pulmonary function testing (including spirometry, whole-body plethysmography, and carbon monoxide diffusing capacity of the lung), and transthoracic echocardiography. In addition, cardiac magnetic resonance imaging is performed on 50 participants without PPS and 50 participants with PPS before and after rehabilitation. Finally, patients will be asked to complete questions on self-reported physical activity and exercise habits. Data collected during the course of the research will be kept strictly confidential and will be stored on the secure research server at ØHT, to which only the project investigators have access. Participants will be allocated an individual trial identification number and only de-identified data will be analyzed. The identification key will be stored in a separate file on the secure research server. Only the project investigators and the statistician analyzing the data will have access to the data set. Anonymized data may be shared with other researchers to enable international prospective meta-analyses.

Data management and analysis

The results will be analyzed according to the intention-to-treat principle. Baseline characteristics will be described by mean and standard deviation, median and interquartile range, or number and proportions as appropriate. The effect of the intervention on the primary outcome (ISWT) will be assessed by comparing the change in exercise capacity after 12 weeks.
The primary analysis, based on the baseline data and the data after 12 weeks, will be conducted as a linear regression. The subsequent analysis, which will include data after 36 weeks, will have three measurements per individual to assess the short-term and long-term effect of the intervention and will therefore be analyzed using a linear mixed model. This variant of multiple linear regression allows for addressing the correlation between repeated measurements within the same individual, as well as adjusting for possible confounders such as age, body mass index, sex, and treatment center. HRQoL, general activity, and the mMRC breathlessness scale will be compared between the 2 groups at 12 weeks using appropriate statistical tests depending on the normality of the data. In addition, the model will also account for missing values. We will apply a post hoc sensitivity analysis to get an indication of potential bias caused by comparing potentially unequal groups with respect to time since PE at the time of inclusion. One way to achieve this is by the use of resampling techniques.

Sample size calculation

There are currently no data on the physical capacity of PPS patients as measured by the ISWT. Therefore, in concurrence with the Danish study that was ongoing when we designed our protocol [30], we have based our sample size calculations on the mean improvement in ISWT previously reported in patients with cardiac and respiratory disease. Rolving et al. assumed that the achieved difference would be around 70 m, i.e., comparable to cardiac patients. Based on 6MWT results from a previous study in patients with PPS [6], patients walked between 413 and 480 m on the 6MWT, which is closer to the cardiac population. Therefore, we assume the baseline ISWT for PE patients to be 390 m. Based on calculations, our clinical experience, and previous findings, an improvement of 60 m or more on the ISWT will be considered to be of clinical relevance. Given these assumptions, to test for this effect size with a type 1 error of 5% and a type 2 error of 20%, 86 patients are needed in each study arm. Allowing for 10% attrition, the required sample size is a total of 190 patients (see the illustrative sketch below). No interim analysis will be performed.

Discussion

This study is understood to be the first study exploring the effect of structured pulmonary rehabilitation on exercise capacity in patients with PPS. Results from this study may therefore increase the knowledge regarding the management of persistent dyspnea in this patient group, as well as providing a currently untreated group of patients with a treatment potentially resulting in reduced chronic symptoms following PE. In comparison to previous studies, patients with chronic, persistent dyspnea from 6 months post-PE are included in the present study, in whom no spontaneous improvement may be expected. The ISWT is validated, commonly used in clinical practice, and was the chosen primary outcome for the study by Rolving et al. [30]. Thus, the ISWT was chosen as the primary endpoint in the present study to enable generalization to clinical practice as well as comparison to the findings by Rolving et al. The definition of PPS is somewhat unclear and several different definitions have been used in previous studies. The study group has chosen to identify PPS patients based on the presence of subjective symptoms of persistent dyspnea, which started or worsened at the time of PE diagnosis, compared to other studies that have defined PPS as the presence of dyspnea and/or reduced functional capacity and/or reduced HRQoL.
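Returning to the sample size calculation above, the following minimal sketch cross-checks the reported figures. The protocol states 86 patients per arm for a 60 m difference at a two-sided 5% type 1 error and 80% power, but does not report the assumed standard deviation; in the standard two-sample normal-approximation formula, an SD of roughly 140 m reproduces that figure, so the SD below is a back-calculated assumption for illustration rather than a protocol value.

from math import ceil
from scipy.stats import norm

delta = 60.0   # clinically relevant ISWT improvement (m), from the protocol
sd = 140.0     # assumed common SD (back-calculated illustration, not stated in the protocol)
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided 5% test
z_beta = norm.ppf(power)            # ~0.84 for 80% power

# standard two-sample normal-approximation sample size per arm
n_per_arm = ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)
n_total = ceil(2 * n_per_arm * 1.10)  # add 10% for attrition
print(n_per_arm, n_total)             # 86 per arm, 190 in total

The repeated-measures analysis could be sketched analogously with a linear mixed model (for instance, statsmodels' mixedlm with a random intercept per patient), though any variable names in such a sketch would likewise be assumptions rather than the study's actual analysis code.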
The main inclusion criterion is the presence of a PE event within a period of 6 months to 6 years prior to inclusion. Although this timeframe may be considered wide, and may potentially result in heterogeneity in the sample population, the study group considered it important that the time since PE should be long enough to prevent the occurrence of spontaneous improvements in dyspnea and physical function following PE, as described by Kahn et al. [7]. Our previous research has shown that patients may present with symptoms of dyspnea many years following an acute PE episode; thus, we did not want to deny patients with long-standing dyspnea a therapeutic option that may improve their complaint. Further, based on our experience and current research on the effect of pulmonary rehabilitation in patients with chronic dyspnea, the study group chose to include patients with more chronic symptoms as well as those who had experienced PE relatively recently, in order to explore any potential differences in treatment effect between patients with recent PE and those with more chronic symptoms. In addition, the majority of the participants will be recruited from ØHT's Thrombosis registry, where few patients will have experienced a PE more than 2 years prior to recruitment; thus, the mean time since PE will be shorter. Results from this study may have clinical significance by increasing the understanding of the background, assessment, treatment, and prevention of PPS and may change treatment standards in this patient group. The study may also increase the awareness of pulmonary rehabilitation as a feasible treatment for patients with respiratory symptoms similar to those of COPD and other well-documented respiratory diseases.

Data monitoring

The study will be monitored by the research department at ØHT. Any adverse effects will be reported. The trial steering committee is made up of the supervisors of the three PhD candidates and a selection of experts within the field of PE. The role of the steering committee is to ensure the quality of the trial and that sufficient progress is being made. The trial steering committee will meet twice a year to review the progress of the study and address potential challenges and obstacles during the course of the trial. The PhD candidates are responsible for setting up the committee meetings. The group providing day-to-day support for the progression of the trial is made up of the PhD candidates and their main supervisors, as well as research nurses and research advisors at ØHT, who ensure the performance of the trial and practical tasks such as testing patients following the intervention (blinded to randomization). The PhD candidates are responsible for all aspects of local organization, including identifying potential recruits and collecting informed consent. The PhD supervisors and advisors at the research department at ØHT are responsible for supervising the trial and meet regularly. As this trial does not involve any pharmaceutical drug or medical device, no formal safety and monitoring board has been established. However, the conduct and progress of the study will be regularly overseen by the leader of the group, Professor Waleed Ghanima.

Trial status

The trial is currently ongoing; recruitment began in January 2018 at ØHT and in August 2019 at AHUS. Recruitment is expected to be complete in late 2020 to early 2021.
Measurement of quality of certification services to reduce wastage of non-value-added activity (journal review)

Increasingly consumptive consumer demand means that the service industry, especially certification services that guarantee products, is receiving many requests related to product legality, with high demand for product assurance both domestically and abroad. The expectation is that, under such a system, the certification process should be faster, easier, more transparent, and more flexible. In practice, certification services still often face problems, with numerous complaints from customers applying for certification because the duration of the certification process falls far short of the standard set by the company. Service failures are frequently caused by waste in the service process, such as delays, duplication, unnecessary movement, unclear communication, wrong inventory, errors, and lost opportunities. The length of the certification process was found to be due to many activities being carried out repeatedly, owing to the absence of standards for assessing prospective customers and a lack of employee training. Several approaches are explained in this paper as a proposal for designing a process flow that eliminates waste, that is, activities in a process that provide no added value (non-value-added activity), especially in the service industry.

Introduction

Wastage in Japanese is called muda: anything that is done without producing value. Taiichi Ohno, a Toyota executive, was the first to articulate the seven kinds of waste; Liker later added one type of waste to these seven [1]. A very good method of reducing waste is Lean Service. Lean Service is a systematic approach to identifying and eliminating waste through a series of improvement activities [2]. A problem that often occurs in service companies is the amount of waste in the time spent serving customers, caused by inefficient or non-value-added activities. Non-value-added activities include those in the process of supplying raw materials from suppliers, material flow from the initial process to the final process, movement of tools and machines that does not fit capacity, waiting, and rework [3]. Service failure occurs when the customer does not receive the service as expected. The factors that cause service failure, namely human resources, the environment, equipment, methods, and management, are important factors in running the service process itself. Finding ways to reduce service failure is therefore an important issue for a business. It is important for service designers to identify potential failure modes and take appropriate action to prevent such failures. With limited resources, managers or corporate leaders should be able to prioritize potential failure modes in service delivery and provide improvements before the services are rendered [4]. The solution requires continuous improvement to create standardized work, concentrating on achieving continuous process flow by identifying the value in each step of an activity. Any steps that fail to add value can then be omitted and the root cause of each delay identified; this is represented in the form of a value stream mapping (VSM) map [5]. This paper focuses on how to analyze the activities that affect the effectiveness of the certification process, drawing on the literature of several relevant international journals.
The literature study conducted over the last 15 years shows several approaches by researchers to solving service quality problems and eliminating non-value-added activities.

The Concept of Reducing Wastage

Lean is a continuous effort to eliminate waste that occurs in a process and to increase the added value of a product or service in order to deliver value to the customer (customer value). Lean aims to continuously increase value to customers through continuous improvement in the ratio of value added to waste. In 2006, the value-to-waste ratio in Japanese companies was around 50%, at Toyota Motor about 57%, and in the best companies in North America (the United States and Canada) about 30%, while the best value-to-waste ratio among companies in Indonesia was only about 10%. A company can be considered Lean if its value-to-waste ratio has reached a minimum of 30%. If the company is not Lean, it may be called an Un-Lean Enterprise and categorized as a traditional company [6]. Lean is a business philosophy based on efforts to minimize resource use in a company's processes. Lean focuses on identifying and eliminating non-value-added activities in design, production or service operations, and supply chain management directly related to customer satisfaction [7]. At present, the lean concept has been used to minimize waste. In this context, value stream mapping (VSM) is used to map and further investigate a particular segment of the value stream, or the value stream as a whole. Although the VSM approach originally comes from the Toyota Production System, it is possible to extend VSM to the entire supply chain. The VSM approach uses a variety of tools to increase the value of a process.

Techniques in Reducing Wastage

The main tools and techniques in lean are value stream mapping, waste elimination, and the 5 Whys. Value Stream Mapping is one of the Lean techniques commonly used to analyze the current flow of materials and information needed to bring a product or service to the consumer. Value Stream Mapping originates from Toyota, and the technique is also often called Material and Information Flow Mapping. In waste elimination there are 2 main categories of waste, namely Type One Waste and Type Two Waste. Type One Waste is a work activity that does not create added value in the process of transforming inputs into outputs along the value stream, but which is currently unavoidable for various reasons. For example, inspection and sorting activities are, from a Lean perspective, non-value-added activities and hence waste, but nowadays companies generally still require inspection and sorting because the machines and equipment used are so old that their reliability is reduced. Similarly, supervision of people is a non-value-added activity from Lean's perspective, but companies still have to perform it when a person has just been recruited and therefore has no experience. In the long run, Type One Waste should be eliminated or reduced. Type One Waste is often referred to as Incidental Activity or Incidental Work, which belongs to the non-value-adding activities (non-value-adding work or activity); Type Two Waste, by contrast, creates no added value at all and can be eliminated immediately. The 5 Whys analysis is a simple question-and-answer technique for investigating the causal relationships at the root of a problem. The technique is the practice of asking "why" five times when a technical problem occurs, in an effort to determine the root cause of the damage or problem.
This technique was developed by Sakichi Toyoda and was later used within the Toyota Motor Corporation. In the 1970s, the 5 Whys strategy was popularized by the Toyota Production System. This method is now also used as one of the methods in the Six Sigma strategy.

Servperf (Service Performance)

According to Cronin and Taylor (1994), as cited by Dharmayanti, Service Performance is the performance of the service as received by the consumers themselves, who assess the quality of the service they actually experience [12]. In contrast to the SERVQUAL method, SERVPERF has the advantage of providing information on which service quality attributes are more important to improve, so that the relationship between wants and interests becomes more visible in the analysis of service quality attributes [13] (Remba et al., 2008). This is reinforced by the statement of Alford and Sherrell (1996), quoted from Dharmayanti (2006), that service performance will be a good predictor of service quality [14]. Service performance is better able to answer the problems that arise in determining the quality of services, because consumers can only judge the quality they receive from a particular producer, not their perception of the quality of service in general (Bolton and Drew, 1991; Teas, 1993; Gotlieb, Grewal and Brown, 1994), quoted from Dharmayanti (2006). It has further been stated that for service industries with "a lot of little goods and services", such as supermarkets, SERVQUAL is better to apply; however, for environments in which elements of service are important, such as electronics sellers, SERVPERF is more suitable [17].

Lean Service

Lean service is a set of tools and methods designed to eliminate waste, reduce waiting times, improve performance, and reduce costs. According to other sources, lean means eliminating waste and creating customer value, and consists of several principles on which its philosophy is based [18,19]. Lean is an ongoing effort to eliminate waste and increase the value added of products (goods and/or services) in order to deliver value to customers (customer value) [6]. There are five basic principles of Lean Service:
1. Specify the exact value of the product desired by the customer.
2. Identify the value stream for each service process.
3. Eliminate all waste in the service process flow (moments of truth) so that value flows unimpeded.
4. Establish an error-proofing system for every service process to avoid waste and delays.
5. Pursue excellence to achieve perfection (zero waste) through radical continuous improvement.
According to Ciarapica et al. (2016), Value Stream Mapping (VSM) is a standard method for documenting and mapping a process and its flows, whether physical or purely informational, applied in a systematic way to analyze an activity or process, with a focus on identifying the waste present in every activity. Grewal (2008) states that lean processes are described by identifying the activities that add value in a value stream and eliminating unnecessary waste. VSM was originally developed to focus on analyzing the activities flowing through a process in a manufacturing environment. VSM allows a company to see both its current processes and the future processes it wants, in an effort to identify and eliminate waste, simplify the work process, reduce waiting time, reduce costs, and improve quality. The first step in implementing VSM is to draw a map of the current situation.
The current-state process map is drawn up in this phase, and the various value-added and non-value-added activities are identified. The current-state map is usually drawn by a cross-functional, multi-disciplinary team to document how the work is actually done. The next step is to develop a map of the future condition; to do so, the current-state map must first be analyzed. The team needs to identify gaps or areas of improvement (e.g., large inventory, long lead times) and give reasons why the identified activities do not add value. For each gap found, the team proposes what must be changed in the process, method, and organization. The final step is to analyze the results after applying the proposed changes. This must be quantified in terms of reduced lead time, reduced cycle time, reduced inventory, etc. In addition, the team needs to develop a plan that provides the action steps necessary to support the proposed changes. The VSM steps are shown in Figure 1.

Figure 1. Implementation of VSM

Lean's approach, particularly in services, has a significant impact on quality, cost, and time, and on satisfaction for both employees and consumers. Research results show improvements in tangible dimensions, such as reduced processing or waiting times, improved quality through the reduction of errors, and cost reduction, as well as in intangible factors such as increased employee motivation and satisfaction and increased customer satisfaction. In health care, for example, Lean focuses on the ongoing assessment of clinical processes to identify and eliminate waste for patients, on the ability of employees to examine their work environment, and on improving quality, safety, and efficiency in the process. Lean works on the mindset of medical and administrative employees to create better service capacity and to establish new, effective, and efficient rules and methods for service delivery.

Activity-Based Management (ABM)

Researchers have defined Activity-Based Management (ABM) as a systematic method for planning, controlling, and improving activities and the related indirect costs. The principle of this method is to allocate all activities to the cost drivers of each of these activities. ABM uses activity-based costing (ABC) information to control activity costs based on the underlying assumptions mentioned earlier. ABM has proven to be very efficient in controlling activity in both the services and the production of a company. Cost analysis carried out using a multidimensional approach to cost drivers in companies has provided a broad background for the ABM method and is very useful for understanding and controlling costs in most companies, including steel industry companies. According to ABC, the cost of repairs has a single cost driver, such as output, but becomes variable with respect to other cost drivers, such as the output mix or product range. In other words, costs are not fully fixed or variable; their behavior depends on their relationship with the cost driver. The ABM principle can be used to regroup different activities into activities with common cost drivers, which can facilitate cost control.

Conclusions

Customer demand regarding the legal assurance of products will continue to increase as industry grows and the population and the economy continue to expand.
Especially in developing countries such as Indonesia, the performance of the certification services industry still does not meet expectations. This creates risks for the community, such as loss of public confidence, because the bureaucratic process remains too long owing to wasteful activity flows in the certification process, thereby degrading the company's quality of service. Failures like this lead to the loss of customers. These problems have not yet been seriously addressed by companies, resulting in decreased productivity. This paper presents several analytical techniques that can measure the extent to which service quality needs to be evaluated. The approaches above are expected to provide a long-term solution to the problem of the length of the certification process. Although the steps for identifying work activities that do not create added value in the process of transforming inputs into outputs along the value stream have been described theoretically, in practice the most suitable combination of techniques to apply must be chosen according to the needs and conditions of the industry itself. Through appropriate approaches and analytical techniques, the problem of non-value-added activities can be reduced, thereby improving the quality of service in the future.
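To make the value-to-waste benchmark used throughout this review concrete, the short sketch below classifies a handful of certification-process activities into value-added and non-value-added time and computes the resulting ratio. The activity names and durations are illustrative assumptions, not data from any of the reviewed studies.

# Minimal sketch of the value-to-waste ratio discussed above.
# The activities and durations are hypothetical examples.
activities = [
    ("application review",        40, True),   # (name, minutes, value_added)
    ("waiting for approval",     120, False),
    ("on-site product audit",     90, True),
    ("rework of assessment form", 60, False),
]

value_added_time = sum(minutes for _, minutes, va in activities if va)
total_time = sum(minutes for _, minutes, _ in activities)
ratio = value_added_time / total_time

# Per the 30% benchmark cited in this paper, a ratio below 0.30 would
# mark the process as "Un-Lean".
print(f"value-to-waste ratio: {ratio:.0%}")  # 42% in this toy example

In a full VSM exercise, the same classification would be carried out step by step on the current-state map, and the future-state map would target the non-value-added entries first.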
Frequency and characteristics of falls in people living with and without multiple sclerosis during the COVID-19 pandemic: A cross-sectional online survey

Background

Public health responses to Coronavirus Disease 2019 (COVID-19), including lockdowns, may negatively impact physical and mental functioning in clinical populations. People living with multiple sclerosis (MS) may be more susceptible to physical function deterioration while practicing social distancing. Recent reports have suggested that about 50% of people with MS (pwMS) decreased their leisure physical activity during COVID-19, and upwards of 30% reported decreased physical fitness levels. However, the impact of social distancing on adverse health-related outcomes such as falls has not received much scrutiny. Therefore, we explored the frequency and characteristics of falls experienced by people living with and without MS during the COVID-19 pandemic.

Methods

Two hundred and thirty-nine individuals, including 106 pwMS (median age: 59 years) and 133 people living without MS (median age: 66 years), were recruited for this cross-sectional study. A snowball sampling strategy was used for online recruitment. Participants completed a customized falls questionnaire, and the number of falls experienced (if any) during COVID-19 was recorded. Fall-related characteristics such as the timing, locations, activities undertaken before falling and consequences, as well as self-reported physical activity, were also recorded.

Results

Overall, participants reported 232 falls (1.67 falls/person in pwMS and 0.41 falls/person in non-MS participants). PwMS had a significantly higher frequency of falls (58.5% vs 21.8%; p < 0.001) and recurrent falls (45.3% vs 9.8%; p < 0.001) compared to non-MS participants. Additionally, pwMS reported a significantly higher proportion of in-home falls (83.9% vs 54.2%; p = 0.004), as well as a higher proportion of overall injuries (44.3% vs 12.5%, p < 0.001), fractures (5.7% vs 0.8%, p = 0.048), and healthcare utilization (9.4% vs 1.6%, p = 0.007) compared to non-MS participants. A similar proportion of pwMS (49.1%) and non-MS participants (52.2%) reported lower physical activity levels during COVID-19.

Conclusion

This cross-sectional study revealed that pwMS remain at high risk of falls and fall-related outcomes during COVID-19. The high number of falls experienced by pwMS is of clinical concern considering the current strain on the healthcare system. Findings from this study highlight the importance of monitoring falls and the potential for telerehabilitation in persons with MS during COVID-19.

Introduction

Coronavirus disease 2019 (COVID-19), resulting from the novel coronavirus SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2), began in late 2019 in Wuhan, China. The World Health Organization (WHO) declared a global health emergency on January 30, 2020 and a pandemic on March 11, 2020 (WHO, 2020). As of January 2021, over 92 million individuals have been diagnosed with COVID-19 worldwide. COVID-19 was first reported in the United States (U.S.) on January 21, 2020 and, to date (January 2021), it has caused the highest death toll worldwide, with more than 390,000 deaths. Prior to the wide-spread availability of a vaccine, the limited medical interventions for COVID-19 and its high infection rate led to the recommendation and adoption of social distancing measures.
Although social distancing is an effective infection risk mitigation strategy, it potentially has unintended consequences on physical and mental functioning. For instance, a recent large-scale descriptive study (> 455,000 participants) revealed that, within 10 days of the WHO pandemic declaration, people reduced the number of daily steps by 5.5%, and within 30 days, the number of steps was reduced by 27.3% (Tison et al., 2020). This reduction in physical activity raises potential concerns due to the well-established detrimental effects of sedentary behavior on multiple domains of physical health and function (Cecchini et al., 2010). It has recently been proposed that people living with neurological impairments, such as multiple sclerosis (MS), may be more susceptible to physical function deterioration while practicing social distancing (Pelicioni et al., 2020). Although MS and disease-modifying therapies do not appear to increase COVID-19 infection risk (Willis and Robertson, 2020), the significant comorbidity burden in people living with MS (pwMS) seemingly places this population at greater risk of severe COVID-19 related outcomes and decreased well-being (Brownlee et al., 2020; Motl et al., 2020). Recent reports have also suggested that about 50% of pwMS decreased their leisure physical activity during COVID-19, and upwards of 30% reported decreased physical fitness levels (Kalron et al., 2020). Despite the preliminary evidence of decreased physical activity observed in pwMS during COVID-19, the impact of social distancing on adverse health-related outcomes such as falls has not received much scrutiny. PwMS are at high risk of falls and adverse consequences (Nilsagård et al., 2015). Previous research has suggested that most falls (62%) experienced by pwMS occur indoors (Gunn et al., 2014). The domestic environment poses several potential environmental hazards for falls and, more importantly, an altered interaction of physiological, cognitive and environmental stressors can lead to a higher risk of falling inside the house (Lord et al., 2006). It is logical to speculate that adoption of the social distancing practices recommended during COVID-19, such as the commonly named "lockdown", could perturb this interaction through physical function deterioration, worsening of depressive symptoms and modification of activity-related behaviors (Kalron et al., 2020). In turn, these relatively sudden changes could result in a higher risk of falling and fall-related outcomes. Therefore, understanding the indirect impact of COVID-19 on fall prevalence and fall-related circumstances is paramount to mitigating the potential risks for falls during the COVID-19 pandemic. The relevance of this research is even stronger in light of the current strain on the healthcare system, which has led to the reduction of non-essential services including rehabilitation and community-based fall prevention. The objective of this study was to provide data on the frequency and characteristics (i.e. timing, location, precipitating activities and consequences) of falls in people living with and without MS during the COVID-19 pandemic in the U.S. As a secondary objective, we aimed to explore the relationship between self-reported physical activity during the lockdown and falls in pwMS. We hypothesized that self-reported reductions in physical activity would be associated with falls/fall-related injuries.
Study design and setting

In this study, we implemented a cross-sectional online survey aiming to explore the frequency and characteristics of falls during COVID-19 in a convenience sample of people living with and without MS. The survey was conducted between mid-June 2020 and August 2020 in the U.S. (Fig. 1). We utilized this timeline to 1) examine falls experienced during the first U.S. lockdown, or stay-at-home orders, which occurred between March 2020 and June 2020, following the WHO pandemic declaration on March 11, 2020, and 2) limit the recall bias in retrospective fall-reporting (Ganz et al., 2005). The ethics of the study were reviewed and approved by the Office for the Protection of Research Subjects at the University of Illinois by means of Exempt Determination (#20,894).

Participants

Participants were invited to take part in the study if they were over 18 years of age, male or female, and fluent in English. Participants were excluded if they were not living in the U.S. between January 2020 and June 2020.

Procedures

Participants without MS were recruited through a combination of online announcements and email distribution, while pwMS were recruited online with the support of relevant organizations. Supporting organizations based in the U.S. were contacted via e-mail, and those who agreed to help with recruitment advertised the study on their social media channels and websites. A snowball sampling approach was used to further boost recruitment online. The survey was hosted by Qualtrics (Qualtrics International Inc, Provo, UT) and was accessible by clicking a link to the survey website. People wishing to take part in the study were asked to provide consent by clicking on the "Agree" box, confirming they had read and understood the participant information sheet and met the inclusion criteria. The following pages included sociodemographic (e.g. age, gender, education, marital and occupational status) and clinical (e.g. medications, comorbidities) questions, as well as other physical activity- and fall-related questionnaires. Participants completed a customized falls questionnaire (Supplementary information file 1) asking about the number of falls experienced (if any) during COVID-19. Falls were operationally defined as unintentional events in which one comes to rest on the ground or other lower level (Lamb et al., 2005). In addition to the number of falls, participants were asked to report fall-related circumstances such as the timing, locations and activities undertaken before falling. The consequences of falls (i.e. injuries and healthcare utilization) were also recorded. Overall injuries were defined as the occurrence of bruising and/or cuts/lacerations and/or ligament sprains and/or head injury and/or fractures as a result of falling (Gunn et al., 2014). In order to determine whether falls were experienced before or after the WHO pandemic declaration, participants were also asked to provide the date(s) around which they fell. Participants were classified as "fallers" if they reported at least one fall in the survey and as "recurrent fallers" if they experienced two or more falls (Vassallo et al., 2002; Dai et al., 2018; Bartosch et al., 2020). In addition to the sociodemographic and clinical questions, pwMS also completed the self-reported Expanded Disability Status Scale (SR-EDSS) as a measure of disability (Learmonth et al., 2013). Information on the type of MS (e.g. relapsing-remitting, primary or secondary progressive) and year of MS diagnosis were also recorded.
Lastly, we examined the impact of COVID-19 on physical activity status by asking participants to indicate whether their physical activity levels during COVID-19 were lower, higher or about the same as their normal activity levels (Kalron et al., 2020). Self-reported reduction in physical activity was dummy coded as "about the same or higher physical activity during COVID-19" = 0 and "lower physical activity during COVID-19" = 1.

Data analyses

Statistical analyses were performed with SPSS (Version 26 for Windows, SPSS Inc., Chicago, IL). The Kolmogorov-Smirnov test was used to assess whether data were normally distributed. Individual missing data were excluded on a case-by-case basis from the analysis. Falls experienced before January 2020 (i.e. prior to COVID-19) were excluded from the calculation of fall frequency. Sociodemographic and clinical data are presented as mean ± standard deviation or median and interquartile range based on normal distribution assumptions. Differences between pwMS and non-MS participants were explored by means of independent t-tests and Mann-Whitney U tests for continuous variables, as appropriate, or through chi-square tests/Fisher's exact test for categorical variables. Negative binomial and logistic regression analyses were used to explore the relationship between self-reported reduction in physical activity and the number of falls/overall fall-related injuries (yes or no). Statistical limits for interpretation of all analyses were set at an alpha level of p = 0.05.

Participants

Two hundred and seventy-nine individuals agreed to participate and completed the online survey. Forty (14.3%) participants had multiple missing items and were therefore removed from the final analyses by means of listwise deletion. A total of 239 individuals, including 106 (44.4%) pwMS and 133 (55.6%) non-MS participants, provided complete answers to the survey and were therefore included in the study. The sociodemographic and clinical characteristics of these 239 participants are summarized in Table 1 and Table 2, respectively. Overall, participants from 40 states took part in the study, with Illinois being the most represented in both pwMS (24.5%) and non-MS participants (67.7%).

Frequency of falls

The distribution of falls reported by participants is displayed in Fig. 2. Overall, participants reported 232 falls, of which 177 (76.3%) were experienced by pwMS and 55 (23.7%) by non-MS participants. The crude fall rates for pwMS and non-MS participants were 1.67 falls/person and 0.41 falls/person, respectively. Considering the whole sample, 91 (38.1%) participants reported at least one fall and were classified as fallers. People living with MS had a significantly higher proportion of fallers (n = 62, 58.5% vs n = 29, 21.8%; p < 0.001) and recurrent fallers (n = 48, 45.3% vs n = 13, 9.8%; p < 0.001) compared to non-MS participants. Overall, 18 (19.8%) fallers could not remember the date around which they fell, while the majority of fallers (80.2%) reported that falls were experienced between late January 2020 and mid-June 2020. Five (5.5%) participants reported falling at least once between late January and February 2020, while 71 (78%) reported falling between March and mid-June 2020.
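As a minimal, hedged illustration of the descriptive statistics above, the sketch below reproduces the crude fall rates and the chi-square comparison of faller proportions from the counts reported in this section. The commented lines indicate how the negative binomial analysis described under Data analyses could be specified; the data frame and variable names there are assumptions for illustration, not the study's actual SPSS analysis.

from scipy.stats import chi2_contingency

# Crude fall rates from the reported counts
falls_ms, n_ms = 177, 106
falls_non, n_non = 55, 133
print(falls_ms / n_ms, falls_non / n_non)  # ~1.67 and ~0.41 falls/person

# Fallers vs non-fallers by group: 62/106 pwMS, 29/133 non-MS
table = [[62, 106 - 62],
         [29, 133 - 29]]
chi2, p, dof, expected = chi2_contingency(table)
print(p)  # p < 0.001, consistent with the reported group difference

# The count model could be specified along these lines (hypothetical
# data frame "df" with columns n_falls and reduced_activity):
# import statsmodels.api as sm
# import statsmodels.formula.api as smf
# model = smf.glm("n_falls ~ reduced_activity", data=df,
#                 family=sm.families.NegativeBinomial()).fit()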
Characteristics of falls

Out of the 91 participants who reported falls, five (5.5%) did not provide information on the characteristics of these falls and were therefore excluded from this sub-analysis. The majority of fallers reported falling at home at least once (n = 65, 75.6%), with pwMS reporting a significantly higher proportion of in-home falls compared to non-MS participants (n = 52, 83.9% vs n = 13, 54.2%; p = 0.004, ϕc = 0.31). Seven (8.1%) participants reported falling inside other types of buildings, while 35 (40.7%) reported falling outside. People living with MS and non-MS participants did not differ in the proportion of such falls (0.275 ≤ p-values ≤ 1.000). The most common activities undertaken before falling were associated with ambulation, as 63 (73.3%) participants reported falling at least once while walking or turning. Other activities frequently reported by participants included transferring (n = 17, 19.8%), stair climbing (n = 13, 15.1%), and standing (n = 12, 14%). No differences between pwMS and non-MS participants in terms of activities undertaken before falling were detected (0.100 ≤ p-values ≤ 1.000).

Discussion

In this cross-sectional study, we aimed to explore the frequency and characteristics of falls experienced by people living with and without MS during the COVID-19 pandemic. The online survey revealed a higher proportion of fallers among pwMS compared to non-MS participants (58.5% vs 21.8%). In addition, pwMS reported a higher number of falls (Fig. 2) and had a crude fall rate about four times higher than those living without MS. Moreover, pwMS experienced more adverse outcomes, as evidenced by the higher proportion of overall injuries (44.3% vs 12.5%), fractures (5.7% vs 0.8%), and fall-related healthcare utilization (9.4% vs 1.6%). Notably, the prevalence of fallers among pwMS seems to be aligned with the meta-analysis by Nilsagård et al. (2015), who concluded that about 56% of pwMS fall at least once in any three-month period. Additionally, the crude fall rate observed in pwMS (1.67 falls/person) is also in agreement with the current literature on falls in the general MS population (range between 1.6 falls/person-year (Kasser et al., 2011) and 18.4 falls/person-year (Gunn et al., 2014)). In this regard, our findings are more similar to those of Kasser et al. (2011), while the apparent discrepancy with the study by Gunn et al. (2014) is probably ascribable to the lower SR-EDSS scores and proportion of patients with primary or secondary progressive MS in our study (43.4% vs 69.6%). Moreover, the high fall rate reported by Gunn et al. (2014) is potentially inflated as a result of the relatively short follow-up of falls (three months) used in their study. Overall, the current observations do not seem to suggest that the COVID-19 pandemic, at least during the sampled time period, has impacted fall prevalence in persons with MS. Overall, pwMS and non-MS participants exhibited similar characteristics for sociodemographic and clinical variables that could be linked to an increased fall risk, such as female gender and comorbidities (Tables 1 and 2). Interestingly, pwMS were significantly younger than non-MS participants (median age = 59 vs 66 years old, p < 0.001). This reinforces previous observations that, regardless of age, individuals with MS experience a higher number of falls compared to people living without MS (Mazumder et al., 2014).
Importantly, the current MS sample seemed to be representative of the worldwide MS population in terms of clinical subtype (Table 2), with 49.1% of participants reporting relapsing-remitting MS and a median MS duration of 16 years (Wallin et al., 2020). The sub-analysis of fallers (Table 3) revealed that a significantly higher proportion of pwMS reported in-home falling compared to non-MS participants. On the one hand, this observation reflects the higher prevalence of disability in pwMS (Table 1) and the subsequent limitation in terms of activities undertaken outside the house. On the other hand, it is plausible that, due to their greater vulnerability, pwMS tended to spend more time at home to increase the effectiveness of social distancing and decrease the risk of COVID-19 infection. About 49% of pwMS reported lower levels of physical activity since the beginning of the pandemic, which is in agreement with the findings of Kalron et al. (2020). Although we did not observe a significant association between self-reported reduction of physical activity and falls/fall-related injuries in either group, the borderline p-values highlighted by the negative binomial (RR = 2.21, 95% CI: 0.97-5.02, p = 0.058) and logistic regression analyses (OR = 2.15, p = 0.055) in pwMS raise the question as to whether the non-significant findings may be related to the limited accuracy of self-reported physical activity data and limited statistical power. Therefore, we cannot ultimately exclude that a relationship between COVID-related physical activity curtailment and falls may exist. Pelicioni et al. (2020) recently postulated that sedentary behavior during COVID-19 may contribute to falls through physical deconditioning in pwMS. Conversely, experiencing a fall-related injury while practicing social distancing at home may lead to fear of falling syndrome, which could be responsible for further reductions of physical activity (Pelicioni et al., 2020). It should also be noted that our survey was circulated during the early stages of the COVID-19 pandemic in the U.S. (June through August 2020). It is quite possible that unmitigated physical deterioration arising from prolonged inactivity during COVID-19 may place pwMS at higher risk of severe fall-related outcomes in the longer term. Overall, findings from this study suggest that, during COVID-19, falls remain of clinical concern in pwMS, which is worrisome in light of the current strain on the healthcare system. At the same time, the observation that about one in two participants decreased their normal physical activity levels during the pandemic reiterates the importance of sitting less and adopting an active lifestyle during and after COVID-19 (Kalb et al., 2020). Additionally, the use of telehealth rehabilitation has shown great potential as an alternative platform for exercise delivery (Tuckson et al., 2017) and has recently gained popularity in response to the need to practice social distancing. Nevertheless, the utility and safety of this approach require further research in people with neurological impairment (Pelicioni et al., 2020).

Study limitations

The current study is not without limitations. First of all, we should acknowledge that, due to the cross-sectional design, participants were asked to report falls experienced up to seven months earlier, which can increase the recall bias and lead to misreporting of falls (Ganz et al., 2005).
Another limitation related to the study design is that we could not provide more detailed information on the characteristics of the falls experienced by participants. Specifically, we recorded which proportion of participants reported predefined fall characteristics (e.g. location, activities, etc.) rather than providing a detailed account of fall-related characteristics for every single fall experienced. We should also acknowledge that the data concerning physical activity status were limited in our study. In particular, we did not collect more detailed information such as the frequency and duration of activities performed in the last week, which would have been useful to more reliably describe physical activity levels in pwMS and non-MS participants, as well as to evaluate the impact of activity behaviors on fall rates during the lockdown. Lastly, due to the relatively small sample size achieved, we did not explore the effects of possible COVID-19 infection on fall rates in the study participants. As some researchers have postulated, people who suffered from COVID-19 infection may be at higher risk of falls due to mobility and balance deterioration (Pelicioni et al., 2021).

Conclusions

This cross-sectional study revealed that, despite the physical activity reduction, pwMS remain at high risk of falls and fall-related outcomes during the COVID-19 pandemic. People living with MS had a fall rate about four times higher, and a higher proportion of in-home falls and fall-related injuries/healthcare utilization, compared to non-MS participants. The high number of falls experienced by pwMS is of clinical concern considering the current strain on the healthcare system. We recommend that providers emphasize fall-monitoring during the COVID-19 pandemic, as part of standard care, and engage patients in developing a tailored plan to ensure adherence to current active lifestyle recommendations.

Declaration of Competing Interest

None. This work was supported by a Mentor-based Rehabilitation Research Post-doctoral fellow grant of the National Multiple Sclerosis Society (MB-1807-31633). The funders had no role in study design; in the collection, analysis and interpretation of data; in the writing of the report; or in the decision to submit the article for publication.
Identification work: Ambivalence, qualms and resistance in social workers' identification of trafficking victims

Social workers play a pivotal part in the implementation of human trafficking policies, not least in the identification of victims. When assessing who is and who is not a trafficking victim, boundaries are drawn between different groups of people and the human trafficking definition is operationalised. However, the actual practice of trafficking identification has not been sufficiently explored. Based on 12 qualitative interviews with social workers in Norway and taking an institutional ethnographic approach, I argue that a framing of identification as identification work underlines the ongoing assessments and actions that comprise identification, as well as ethical tensions in social workers' identification practice.

Introduction

Over the past couple of decades, combatting human trafficking has become an important goal in international and national policies. Human trafficking has become a central framing for understandings of certain types of exploitation of people, originally and most notably in the field of prostitution/sex work, but with an increasing focus on labour exploitation and other forms of exploitation. Simultaneously, a growing body of research criticises simplistic understandings of vulnerability, migration, and exploitation in anti-trafficking policies and 'modern slavery' discourses (Brace and Davidson, 2018; Charnley and Nkhoma, 2020; Chuang, 2015; Kempadoo, 2017). Examination of discursive and ideological elements of human trafficking policies has raised important discussions, not least about the power and control directed towards those assumed to be trafficking victims (Dottridge, 2007; FitzGerald, 2016; Kempadoo et al., 2015). However, there is a lack of broader analyses that examine how institutional responses are shaped in practice (Jahnsen and Skilbrei, 2017), a sparsity of empirical investigations of anti-trafficking policy implementation (Gozdziak and Graveline, 2015), and, not least, of attention to how both human trafficking and policy responses vary with context (Weitzer, 2007). Social workers can play a pivotal part in anti-trafficking practice: in their direct contact with possible victims in risk groups, in service provision and assistance, and not least through identification of trafficking victims (Alvarez and Alessi, 2012; Hodge, 2014). Through the process of identification of human trafficking, boundaries are drawn between different groups of people (Aradau, 2004) and the definition of trafficking is given its real-world content. This article seeks to illuminate how these boundaries are drawn in practice. Based on 12 qualitative interviews with social workers in three organisations in Norway, I examine how the human trafficking category is applied and understood by social workers in their interactions with female sex sellers: What are the actual processes by which someone is defined as a trafficking victim?
Literature Literature in the health and social work fields on trafficking identification often presents indicators of trafficking and advice on how they can be used to uncover trafficking in meetings with patients and clients (see, for example, Chisolm-Straker et al., 2019;Tortolero, 2020).Among reported barriers to identification are exploited persons' unwillingness to self-identify (Baldwin et al., 2011), as well as lack of knowledge about human trafficking among practitioners (Donahue et al., 2019).Practice-oriented literature tends to a lesser degree to discuss some inherent definitional challenges in anti-trafficking work.Human trafficking is often associated with obviously coercive situations (involving for instance abductions, violence and threats).However, a wide variety of exploitative relationships can be classified as human trafficking under its international definition, established in the United Nations Trafficking Protocol (United Nations, 2000).It is therefore not a straightforward issue to describe what human trafficking is and what it is not. Article 3a of the Trafficking Protocol defines human trafficking as a set of actions ('[. ..] the recruitment, transportation, transfer, harbouring or receipt of persons') by a certain set of means ('[. . .] the threat or use of force or other forms of coercion, of abduction, of fraud, of deception, of the abuse of power or of a position of vulnerability or of the giving or receiving of payments or benefits to achieve the consent of a person having control over another person') for the purpose of exploitation.The definition also renders a victim's consent to exploitation irrelevant (article 3b) if any of the means listed in article 3a have been used.(A very similar definition is found in the Norwegian Penal Code, paragraphs 257 and 258 [Penal Code, 2005], and in other states that have ratified the Trafficking Protocol, meaning that the issues discussed in this article have relevance beyond the case of Norway.) A definitional challenge arises in that both what constitutes vulnerability and what constitutes the abuse of such a position is vague, contextual and situational (Chuang, 2014).Therefore, it is not entirely clear in which situations a person's consent to their situation should be disregarded (Gallagher and McAdam, 2013;Skilbrei, 2010).This complicates identification in practice and necessitates much discretion on the part of those identifying trafficking, since a person's own opinion about whether they have been coerced is not necessarily decisive. A substantial body of literature criticises gendered and simplified understandings of trafficking victims in law and policy, and a privileging of 'innocence' in the overarching human trafficking discourse (Harrington, 2005;Srikantiah, 2007).This favours victims who appear young, innocent, unknowing and powerless, that is, 'the ideal victim' (Christie, 1986;Hoyle et al., 2011).However, there is limited empirical analysis of whether or how these simplified and ideal victim images manifest in the actual identification of victims, particularly by social workers engaging with people in potential risk situations. 
The small empirical literature on identification practices has demonstrated that stereotyping and assumptions about innocence can influence who is identified as a victim of trafficking.Stereotypical depictions of trafficking victims can skew understandings of what to look for, for instance someone 'looking sad and scared' rather than there being a presence of debt, coercion and other more relevant aspects of exploitative situations (Spanger, 2011).Different institutional tasks and professional roles beyond responding to human trafficking also shape understandings and practice (Hoyle et al., 2011;Pickering and Ham, 2014).Human trafficking can be interpreted and understood very differently by, for example, representatives of the justice sector and social workers (Bjelland, 2016;Di Nicola, 2007;Skilbrei, 2010).The actual practice of identification needs more careful empirical attention and contextualisation, in turn casting light on what are common issues and challenges across national contexts. As such, this article contributes to the research literature on human trafficking through empirical examination of social workers' practices and dilemmas.I suggest that framing this as identification work highlights the ongoing assessments, actions and interactions that enter into these processes.I analyse identification practice in the light of discussions of ideology and (ideal) victimhood in anti-trafficking efforts. The Norwegian context While human trafficking in Norwegian debates is commonly framed as an issue of international migration, the legal definition does not exclude domestic trafficking.However, nearly all cases identified as trafficking in Norway have involved international migration.The latest available data that give some indication of identification prevalence are from 2016, when 262 persons received assistance as possible victims of trafficking, the majority of whom were female (79%) and trafficked for prostitution or sexual exploitation (72% of all victims) (Politidirektoratet, 2017).Victims of trafficking are entitled to temporary residence permits, housing, social support and medical services (Brunovskis, 2016).Being defined as a possible trafficking victim releases a set of rights, opening more options for assistance (and for social work) than are available for persons who are not identified as trafficked.However, the extent of these rights is conditional, and residence permits and assistance beyond the first 6 months depend on cooperation with the police in investigations.The willingness and ability of victims to cooperate therefore also play a great part in what assistance will be available in the long term (Brunovskis & Skilbrei, 2016). 
Method and analytic framework The analysis in this article is based on 12 qualitative interviews with social workers in three organisations working with sex sellers and victims of human trafficking in Norway.Two of these organisations (one municipal and one non-governmental) provide assistance in different forms to sex sellers generally.Important points of contact are outreach work in street prostitution and through advertisements.The organisations also offer medical check-ups, consultations with social workers, and social activities.The third organisation, also non-governmental, specialises in assisting victims of trafficking, and receives referrals from the two other organisations, and others who encounter possible trafficking victims.Respondents were recruited by approaching the organisations with a request for interviews with social workers with identification experience.All organisations agreed to participate.In the interviews, I asked about my informants' day-to-day work, what assessments they made and what actions they took in interacting with possibly trafficked persons.To ensure that the interviews provided sufficient detail on practices and assessments, I also asked specifically about the five latest instances where social workers had developed a suspicion of human trafficking, and what were their step-by-step assessments and practice in these cases.This interview strategy was chosen to offset one central limitation in the methodological approach, in that asking respondents to describe their practice retrospectively might elicit a rather different picture than what might have been accessible through observation.However, observation was deemed not to be a feasible strategy in this study.One ethical consideration was whether possible trafficking victims would be able to give free and informed consent for observation to take place, being in a position of dependency and vulnerability in their interactions with social workers.Ethical considerations were also tied to the possibility that a researcher's presence during social workers' consultations could negatively influence the willingness of possible trafficking victims to share information about exploitative situations.Also, in practical terms, while suspicion of trafficking is serious, it is also a relatively rare occurrence, meaning that it would be very difficult to time observation to be able to observe social workers' relevant practice.All interviews were recorded with the respondents' consent, transcribed and analysed using NVivo 10 software. 
Taking a starting point in the everyday practice of my respondents, I draw on institutional ethnography, developed by Dorothy E Smith (1987, 2005, 2006) and others (Campbell, 1998; DeVault, 2013; DeVault and McCoy, 2006). In an institutional ethnographic approach, the researcher starts from the point of people's day-to-day experience, knowledge and local situatedness, with a goal to understand 'how things happen' (Campbell, 2003), and ultimately to map out what institutionally shapes and organises subjects' experience (Campbell, 2016). Institutional ethnography is an effective approach to producing knowledge that can inform social work practice (O'Neill, 1998). As a complementary perspective, I propose that by understanding identification work as a series of social classifications that are related to different social 'levels', some of the complexity involved in social workers' experiences of identification work can be untangled. Analytically separating between classifications as a matter of (a) individual (self-)identification, identity and selfhood; (b) interpersonal relationships and how we understand each other; and (c) the institutional allocation of administrative labels (Jenkins, 2000: 10) illuminates the considerations that go into identification work. While identification-as-detection takes place at the individual and interpersonal levels, identification-as-status assignation involves the allocation of an administrative label.

Findings: Identification of human trafficking in social workers' daily practice

My basic question going into this research was how social workers identify victims of trafficking. A partial answer is 'not easily'. Translating abstract legal terms that define human trafficking into observable elements in people's lives is not straightforward: What is vulnerability? What is exploitation, in concrete terms? Most people would agree that taking all of someone's money from selling sex is exploitation, but what about taking some, in exchange for its organisation? Is 50 percent exploitation? 20 percent? Boundaries are fluid and not easily drawn in unregulated contexts.

Frontline social workers generally go through three stages in their identification work. The first stage is suspicion and concern, or a feeling that 'something is wrong' and that this 'wrong' may or may not include human trafficking. The second stage involves trying to elicit more information from the women they suspect might be trafficked, and they may try to discuss the issue of possible exploitation with them. In the third stage, social workers consider how (or whether) to recommend seeking assistance for a person they define as a trafficking victim, that is, going from detection to formal victim status in the identification process. I elaborate on each of these stages below.
The first step -Suspicion and concern Only in a very few cases did women approach the social workers asking for help to leave an exploitative situation.When they did, it was generally when something had happened to make the situation untenable, for example, violence or threats.One woman who sought help said she had paid more than 20,000 Euros to a 'madam' and was still being told her debts were unchanged.Since cases of asking for help were rare, identification of trafficking was more often initiated by the social workers after something caused concern -for instance in what the women said, or in their demeanour and behaviour, or background.In a few cases, elements of control appeared obvious, including having a man intervene and stop conversations between the women and social workers and taking away information leaflets about assistance options.In other cases (and more commonly), control or exploitation were less obvious, but something still seemed 'off', particularly in issues relating to money, travel or living arrangements.Some examples included, If they say they come from a very poor region, but that they flew here directly, then I think that it takes some doing just to get on a plane and go to Norway, like completely out of the blue . . .There was this young girl, she was really reserved, and I could get no eye contact and she seemed very jumpy.She said she slept at the central train station.And then I think, there's something . . .It's not common to sleep at the train station, they usually have a [social] network or someone that they can stay with. Sometimes, if they say they have a boyfriend, I might ask, so what's he like, what is his job?Do you live here together?And then if she says, no, he's back home.Oh, is he, I may say, and you send all your money to him?And then she might say -yes, I give all my money to him.Then I think [that something's off]. Issues that caused concern were thus related to money (whether the women paid money to someone else), the journey (whether someone else had arranged it), to what extent the women seemed to be in control of their situation (appearing to be under surveillance or subjected to violence) and whether or to what extent they displayed different vulnerabilities (e.g.unable to communicate in English, appearing to be very young, not being able to identify Norway on a map).Stories that appeared unusual or strange sometimes also induced suspicion.It was not enough for one of these elements to be present for social workers to suspect human trafficking as such, but more an issue of the total balance adding up to someone possibly being in a situation they could not get out of, and therefore to social workers trying to get more information. It is worth noting that the 'consent issue' was not pivotal in the development of a suspicion of trafficking.While clearly coercive situations and women who explicitly had not consented to selling sex were readily recognised as trafficking, suspicion could equally arise in situations where women had consented to selling sex but were seen as vulnerable.Social workers were also fundamentally negative to depictions of 'ideal victimhood' in awareness raising campaigns or media coverage, calling them 'irresponsible'.They were also concerned about victims being expected to behave in a specific way (e.g.'looking scared') according to so-called indicators of human trafficking (see, for example, UNODC, n.d.) 
and noted that being in a coercive situation could just as often cause someone to react with anger as with sadness or other easily recognisable expressions of fear. Underlining the complexity and fallibility in recognising signs of trafficking, one respondent said it was generally difficult to distinguish by observation alone, between organised prostitution and trafficking exploitation.She said, There are many women here now from [one country], we can never reach them when we try to call them on the phone, we've understood that they all live in this one apartment, and the wording of the ads is very similar.So, there are a number of issues that indicate that this is organised.But whether it's organisation as in help with getting to Norway, or as in having a pimp, or as in trafficking, we have no idea. Visible indicators of trafficking are thus not particularly useful in trying to assess whether someone might fit into the trafficking victim category, and most commonly, social workers would therefore try to get a more comprehensive image of what the situation was, which is the second step in their identification work. The second step -Communicating about trafficking, doubts and dilemmas One issue was immediately flagged by social workers as the main challenge in identification work: in their experience, persons who might fit within the trafficking victim category rarely saw themselves as victims, neither in a general sense, nor of human trafficking specifically.This points to a divide between how potentially exploitative relationships are understood by those defined as possible victims and by those who are assigned with the task to define them as such.This can complicate communication.Even in cases where women had explicitly wanted help, reported their exploiters to the police and where exploiters had been sentenced for trafficking, social workers said that the women involved did not necessarily agree that they had been trafficked as such, but had been more concerned with being subjected to individual crimes such as violence or threats. In one such case, a social worker had been in close contact and worked with a woman over several years, from the point of identification and through two trials.In the view of the social worker, the woman had never felt comfortable with some of the criminal charges against her exploiters.Among the elements that had contributed to this case being prosecuted as human trafficking was her vulnerable situation when she was 'recruited' and transported to Norway.However, the social worker's impression was that the focus on these elements had been alienating to the woman, who was irritated with repeated questions in police interviews and on the witness stand relating to the initial contact and travel with those later convicted of her trafficking. 
The discrepancy between what might be defined as trafficking in the Norwegian bureaucracy and legal system, on the one hand, and the perception of many of the women concerned, on the other hand, meant that the social workers did not lean solely on the women's own understanding of their situation.Rather they tried to uncover aspects of the women's past and present circumstances that could comply with the human trafficking definition.Approaching the issue of possible exploitation, however, was presented by social workers as tricky and they needed to be careful, wary of jeopardising trust.Questions about travel arrangements, money, accommodation and so on -which were relevant to assess trafficking risk -were sometimes seen by the women as strange and irrelevant and could quickly shut down the conversation. Categorising women as victims of trafficking when this did not mesh with the women's own perception led to some discomfort and ambivalence among my respondents.They questioned their right, as privileged women with education and employment in one of the richest countries in the world, to define the women's lives in a very different way to what the women did themselves.This was particularly the case when some of the women presented selling sex, even under exploitative conditions, as an improvement on their previous lives, or as part of a long-term strategy towards economic security in a situation where they saw few other alternatives. Social workers would thus try to arrive -on some levels at least -at a common understanding of the situation of the women and approached it in terms of whether they might be in a situation they wanted to get out of, or whether they might need help in other ways.This was partially a question of providing information but was not a straightforward process in most cases.Some women were in relationships with men that social workers saw as manipulative and were seldom receptive to the idea that they were exploited.In yet other cases, social workers had experienced that some women had been in such extreme situations and been so traumatised that they had been unable to absorb and process information in general.There are thus several very varied reasons for why it can be complicated to 'negotiate' a common understanding of the situation in terms of whether the women are trafficked or not. The third step -Recommending assistance In the first two steps -the emergence of suspicion and eliciting further information -social workers try to assess whether there may be indications of exploitation and get an understanding of the social relationships of the women they work with.They highlighted ambivalence and grey areas and indicated that the trafficking category, for them, is far from binary but rather a continuum.They expressed that it was not a simple question of whether someone was or was not trafficked, and in many cases, it was difficult to draw a line. A palpable shift happens when social workers try to assess whether a person could be eligible for assistance within the trafficking-specific system, and also, whether and to what extent they should recommend such assistance -that is, initiate a process of formal identification.While suspicions of trafficking rest on complex understandings and much ambivalence about what can and should be classified as human trafficking, other concerns come into play in considering the next steps. 
Results of police investigations are crucial to the further outcomes for the women, most notably in whether they are eligible for a residence permit in Norway.Therefore, social workers' experiences of past cases in the justice system enter into assessments of whether assistance will be beneficial in each particular case.This means that the women's assistance needs are weighed against assessments of whether the system will be able to meet these needs.This largely hinges on whether social workers believe that the women have a case that is likely to succeed in the justice system. It is important to note that social workers say that they always inform the women about the availability of trafficking-specific assistance and the potential eligibility for a temporary residence permit, as it is the right of any possible victim to receive such information.Social workers also arrange for legal advice for the women if there is indication that someone has been trafficked. There is, however, a distinction between informing about and actively recommending trafficking-specific assistance, and the two were framed by the social workers as involving different ethical considerations and pressures.They expressed an ethical obligation to ensure that the women they worked with knew their rights and options, but equally, a duty not to create unrealistic expectations.Social workers expressed that they had, over time, become more reluctant to recommend such assistance.Their experience was that it was, for many, not particularly beneficial in the long term and could instil false hopes of residence permits, with a corresponding setback when hopes of a new life were quashed: We have identified fewer and fewer victims of trafficking, and I think that we focus on it less, on some level.And also, not that many people are really all that interested in being identified as a victim of trafficking. Another expressed a related sentiment, in that she and her colleagues had previously been enthusiastic about trafficking assistance options, but had, over time, lost much of their belief that this assistance was beneficial.This introduces an element of resistance against the official push to identify trafficking: Previously, we recruited people [for assistance], because we thought -'we can help you, we have the reflection period, a safe place to stay, money for food, and then you can think for a while about whether you want to go to the police'.And that has become more and more difficult to 'sell' because we don't believe in it to the same extent that we used to. The scepticism towards trafficking-specific assistance had developed gradually and several social workers echoed that individual outcomes were far too unpredictable, and sometimes even damaging for the women involved.They were particularly concerned about cases where women had reported their traffickers to the police, but it had not resulted in prosecution: . . .we have seen in some cases that the police have arrested the traffickers, or the suspects, and let them out again, and what that means for the woman who has 'snitched' -it's really horrid.They are left without any rights whatsoever, and they come back to us and they're re-trafficked and they're bruised from being beaten with baseball bats.Really . . 
.I sometimes think that they have nothing to gain by reporting to the police.The only consequence is that their traffickers are even more pissed off.This is a very serious example of an outcome that has gone extremely wrong, and cases like this profoundly undermine social workers' trust that the justice system will work to trafficked women's advantage.This translates into considerable ethical qualms in assessments of whether to recommend assistance that rests on the women cooperating with the police.This further shows how the institutional framework that comprises the justice system and its logic becomes entangled with social work assessments, because the outcomes of investigations and legal proceedings to a large extent also determine the long-term assistance outcome.Taking on a criminal justice perspective, social workers would try to assess likely outcomes of investigations, trying to assess for instance whether the trafficker was in the country, whether the woman's information could be sufficient for the police to investigate, and whether she was motivated to go through multiple trials.Not least, social workers were concerned that women who presented inconsistencies in their stories or were telling only partial truths would not be seen as credible victims and could thus suffer harm rather than being helped by going to the police.In these cases, social workers would generally aim to find other ways of assisting the women.However, depending particularly on their migration status and associated rights situations, assistance outside of the trafficking system can be considerably less comprehensive (e.g. in terms of access to legal assistance, temporary residence permits, certain health services (Brunovskis & Skilbrei, 2016)). Social workers see the justice system as primarily working for those who have a clear understanding of themselves as victims and who present their story consistently and comprehensively from the outset.By implication, this is more likely if they have not consented to their exploitation, or that their understanding is that they have been exploited.And yet, as discussed in the background section of this article, the international definition of human trafficking specifies that consent to exploitation can be disregarded, if a position of vulnerability has been exploited.This is an understanding that is well reflected in social workers' initial suspicions and assessment of whether someone is trafficked, but it is also an understanding that is, to a degree, filtered out by default when social workers need to consider whether trafficking-specific assistance is beneficial.Those who identify less with a victim role and are less sure about whether or how they want to move forward are perceived by social workers as less likely to succeed in the justice system, and thus less likely to benefit from trafficking-specific assistance in the long term. 
Discussion Understanding the processes described above as identification work underlines the complex assessments through the three phases of suspicion, information gathering and decisions about recommending assistance.All phases involve discretionary assessments and predictions of likely outcomes.These stages also involve very different applications of human trafficking understandings and social workers' considerations shift along the process.This supports, but also adds to, previous research that has highlighted the importance of institutional frameworks and contexts for implementation of human trafficking policies (Wan Ismail et al., 2017) and the significance of the professional roles and mandates in how human trafficking is understood (Bjelland, 2016;Di Nicola, 2007;Skilbrei, 2010).What this article adds to these perspectives are the challenges introduced to social workers' identification work by the shifting institutional frameworks that become relevant at different stages of the process.While initially relating to a social work and welfare perspective, the necessity to consider criminal justice perspectives in relating to assistance needs can create ethical pressures and considerable qualms relating to social workers' ideal of beneficence to the people they work with (Brunovskis & Surtees, 2019). The dilemmas in identification work and the background for these different applications of the human trafficking term are illuminated by an understanding of identification as work processes relating to social classifications that are active on different levels: individual, interpersonal and institutional/administrative (Jenkins, 2000), and with shifting institutional frameworks that become relevant.Individually, the women rarely see themselves as victims, nor does the trafficking terminology particularly resonate with them in most cases.Interpersonally, in describing one-to-one exchanges and assessments, social workers see complexity and underline the inherent difficulties in drawing a line between trafficked and non-trafficked.They also to some extent questioned their own right to classify someone as exploited, especially when that person might see their situation as their best shot at a better life.Social workers employed at this stage an understanding that most human trafficking has its basis in personal or structural vulnerability, that victims might indeed have consented to both prostitution and to their working conditions and still classify as victims, and that notions of ideal victimhood were counterproductive. 
However, assessments were different when it came to administrative classification: those who conformed to the idealised victim stereotype recognised themselves as victims and/or felt that the human trafficking terminology was relevant and appropriate to describing their situation, were seen as more likely to succeed in the justice system.This also meant that they would more likely benefit from trafficking-specific assistance, which is more extensive.The identification practice of social workers thus appears separate from their (ideological) understanding of human trafficking.At the interpersonal level, social workers have room for thinking about trafficking as taking place along a continuum and for recognising the complexities in women's experiences of violence and exploitation.At the level of administrative categorisation, this room disappears, because the logic of the justice system has such a dominant role in further outcomes.Since the women and their own presentation of their situation need to fit with the binary logic of the legal justice system, the room for continuum and grey areas evaporates.Where at the individual and interpersonal levels there is space for potentially trafficked women's ambivalence and changes in perception and self-understanding over time, the administrative category of trafficking victim rests on binaries: vulnerability versus non-vulnerability, cooperation versus non-cooperation, telling the truth versus lying.Social workers thus appear to shift their perspective and try to see the cases from a criminal justice point of view, and further, to try to predict, in each case, the chances of a successful investigation. Based on a social work tenet of beneficence to their clients, social workers do sometimes consider that 'helping' would be more likely to lead to 'harming'.They also expressed that this had led to them identifying fewer and fewer victims in recent years -not necessarily because they observed less exploitation, but because they feared that identification might be harmful.Social workers do not start out from an understanding that only 'ideal victims' should be defined as trafficked or receive assistance.On the contrary, this is an idea that they strongly object to.Ironically, though, 'ideal victims' have a higher chance of succeeding in the justice system, and social workers have a stronger belief that such individuals will also benefit from being identified as a trafficking victim, leading them to be more likely to recommend trafficking-specific assistance in such cases. As discussed above, in some cases possible trafficking victims did receive assistance, though not as victims or within the trafficking-specific assistance system.As such, it would perhaps not appear so important whether someone is identified as a trafficking victim.However, apart from this often being less comprehensive assistance, social workers also worried about the effect of falling registered numbers of identification at an accumulated and policy level.These numbers are given political significance and taken as indicative of developments (Chuang, 2014;Feingold, 2010;Weitzer, 2014).There was therefore a concern that falling numbers would be taken as a sign of the success of Norwegian anti-trafficking work in terms of presenting an image that there was less trafficking, and not be considered reflective of what social workers saw as the underlying issuean assistance system and legislation that were failing all but a small group of victims. 
Notions of ideal victimhood and the privileging of innocence have been criticised and become a cause for concern in human trafficking studies.The propensity to more often identify victims that appear innocent and unknowing is often assumed to be based in ideological and gendered discourses of innocence and deservedness (Rodríguez-López, 2018;Srikantiah, 2007).On closer examination, it appears that a greater chance of 'ideal victims' being identified can in some instances also be more complex.In the case of social workers in Norway, it is paradoxically a concern for the welfare of trafficking victims who are not 'ideal' that can lead to their non-identification, and not an idea that they are in any way less 'deserving' of assistance.Social workers believe that not only are these women less likely to benefit from being identified but that it may, in some cases, even harm them.Thus, the same result -more ideal victims are identified -can rest on different premises and processes.This illustrates the importance of examining day-to-day interventions and interactions if the intention is to understand what shapes policy in practice, not least if the aim is to improve policy responses. In terms of implications for social work practice, bringing these issues to light can help contextualise qualms that social workers may have about their identification work and challenges they face, underlining the institutional processes at play rather than framing difficulties in trafficking identification primarily as a question of social workers' competence and skills.The findings in this article to some extent challenge the idea that more knowledge about trafficking will facilitate the identification of possible victims (see, for example, Burt, 2019;Havig and Mahapatra, 2021).All social workers interviewed for this study were very knowledgeable and had several years of experience in the trafficking field, knew the legal and institutional landscape well, and they were more than able to recognise signs that someone might be exploited.However, barriers in their identification work were, in addition to institutional shortcomings, also related to the women's own perceptions of their situations.This underlines that trafficking identification is not a one-way procedure in which possible victims are passive agents to be found and helped, but rather a negotiated process that is also shaped by institutional constraints for all parties involved.This is not to say that awareness and knowledge about trafficking are not necessary, but far more important is the existence of effective and attractive protection and assistance systems with more foreseeable individual outcomes so that the beneficence of trafficking identification clearly outweighs potentially negative consequences. Conclusion Identification of victims is not merely a matter of victims' self-identification or understanding of their situation, nor of their interactions with social workers and the social workers' interpretation of the women's situation.This article has also demonstrated how identification is not just about ticking off a list of criteria that decide whether someone administratively 'fits' or not.All three levels are at play at once and impact on each other.Tension and friction between the levels sometimes leads to considerable ambiguity and unease for the social workers, as well as resistance (both active and passive) to the requirements made on them by anti-trafficking policies. 
Important questions are therefore who is formally identified as a trafficking victim and, not least, what do the reported numbers of assisted victims of trafficking in Norway and elsewhere reflect? Reported numbers of trafficking victims represent institutional processes and priorities, rather than the actual level of exploitation in different sectors. The process of identification, or assignation of the victim of trafficking label, is highly fluid and responds to institutional, bureaucratic and legal developments that directly and indirectly come to define in practice the criteria for assignation of a victim label.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Long-Term Complications of COVID-19 Infection in Adolescents and Children

Purpose of Review: Compared to adults, children uncommonly develop post-COVID-19 symptoms, and these symptoms have not been thoroughly evaluated. This review summarizes the literature on persistent symptoms in children and adolescents after SARS-CoV-2 infection.
Recent Findings: Children were less likely to develop long COVID when compared to adults. Older children (e.g., adolescents) and those who had symptomatic COVID-19 had a higher probability of long COVID.
Summary: Families and health care providers need to be aware of a new constellation of long COVID symptoms in the pediatric population. More evidence and time are needed to better understand the potential effects of long COVID in children and adolescents. In comparison to adults, children are less likely to have persistent COVID-19 symptoms.

Introduction

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was identified as a novel human pathogen in December 2019 and has since led to a worldwide pandemic that has threatened human health and public safety [1]. To date, approximately 257 million cases of SARS-CoV-2 have been detected worldwide, yielding a total of 5.1 million deaths. Common symptoms of SARS-CoV-2 infection, commonly referred to as coronavirus disease 2019 (COVID-19), include shortness of breath, fever, cough, sore throat, malaise, myalgias, anorexia, nausea, diarrhea, anosmia, and ageusia [2]. COVID-19 cases in children and adolescents were initially identified as early as January 2020, but the incidence and severity of the condition are much lower when compared to adults [2]. Up to 20% of children will not manifest acute COVID-19 symptoms, and symptomatic cases are typically mild, do not require hospital admission, and are rarely fatal [2]. Recently, however, there has been an emergence of children and adolescents presenting with long-term symptoms after SARS-CoV-2 infection [3]. Persistent symptoms after COVID-19 have multiple names including "post-COVID conditions," "long-COVID," "long-haul COVID," "post-acute COVID-19," "long-term effects of COVID," and "chronic COVID." For simplicity, this review will be utilizing the term "long COVID." According to the Centers for Disease Control and Prevention (CDC), long COVID has been defined as new, returning, or ongoing symptoms that develop during or after a SARS-CoV-2 infection and continue for four or more weeks [4]. The range of symptoms associated with long COVID is vast and includes fatigue, muscle and joint pain, headache, insomnia, respiratory problems, heart palpitations, gastrointestinal problems, nausea, dizziness, seizures, hallucinations, and testicular pain (Table 1). Currently, the literature reporting long COVID has focused on the adult population [3,5]. Among the first studies describing pediatric long COVID was one from investigators at Gemelli University Hospital in Rome, Italy. According to Buonsenso et al., more than half of a sample of 129 children had at least one symptom lasting more than 120 days [6••]. Herein, we highlight the manifestations of long COVID in the pediatric and adolescent population across several organ systems.

Acute COVID Manifestations

Prior to discussing long COVID symptoms, we will provide a brief overview of the presentation of SARS-CoV-2 infection in children. A recent review article by Galindo et al.
cited that children with COVID-19 typically manifest with fever, and upper respiratory symptoms with/without gastrointestinal symptoms [7]. Likewise, the symptomatic profile of fever, cough, diarrhea, vomiting, rhinorrhea, and headache was also portrayed in the sentinel review by Castagnoli and colleagues [8••]. In a previous paper, we also describe the classic findings of children diagnosed with COVID-19 [9]. Figure 1 depicts acute symptoms of SARS-CoV-2 infection in more than 7000 children. General Symptoms In a prospective study of school-aged children from the UK, investigators sought to capture illness duration and symptom profiles after SARS-CoV-2 infection [10••]. Seventyseven of 1734 COVID-19-positive children had symptoms that lasted at least 28 days. At a rate of 84.4%, fatigue was the most reported symptom, followed by headache (77.9%, n = 60). Twenty-five of 1379 children had symptoms that lasted > 56 days; headache occurred in 20 children (80%). Likewise, a follow-up study of 518 Russian children revealed that fatigue (10.6%) and sleep disturbance (7.2%) were frequent long COVID symptoms [11 ••]. After fatigue and insomnia, an investigation of long COVID symptoms in Italian children revealed that lack of concentration and weight loss took place in 10.1% and 7.7% of individuals, respectively [6••]. Psychiatric In a study by Zhang and colleagues, the psychological impact of COVID-19 on adolescents who were in continuous treatment for major depressive disorder (MDD) was compared to adolescents with MDD [12]. Using the Diagnostic and Statistical Manual of Mental Disorders IV diagnosis of post-traumatic disorder (PTSD), the investigators found that a higher proportion of adolescents with MDD had increased arousal, avoidance, and flashbacks after the start of the pandemic. Moreover, junior high school students were more likely to exhibit avoidance, while high schoolers experience more intrusive symptoms. In a meta-analysis of 80,879 children, the pooled prevalence of depression and anxiety symptoms during COVID-19 has doubled compared to prepandemic estimates [13]. These findings are consistent with previous studies that have shown that compared to adults, children and adolescents are at higher risk for depression, anxiety, and PTSD after natural disasters [14]. Otolaryngologic Patterning the adult population, anosmia has been described as an otolaryngologic manifestation of pediatric COVID [15,16]. The previously stated investigation by Molteni and colleagues described anosmia as the second most common symptom (77.9%) during the entire illness of children with long COVID. While headache, fatigue, and sore throat presented earlier in the disease, anosmia was a later finding in the course of illness [10••]. Furthermore, anosmia was the most common symptom (84.0%) in a subgroup of children who had long COVID for more than 56 days (n = 25). Changes to smell was an initial presenting symptom of COVID-19 in 14% of children [11 ••]. This symptom remained for more than one month, and in some cases more than 5 months, in 22 of 467 children (4.7%). Dizziness as a long COVID symptom was less common at rate of 2.1%. Problems with balance or fainting were rare to none. Dysphagia and dysphonia have also been described as a long COVID symptom in children. 
In a retrospective cohort study of 50 pediatric patients with multisystem inflammation after COVID-19, nine patients required specialist assessment for dysphonia and/or dysphagia (one patient had dysphagia alone, three patients had dysphagia and dysphonia, and five patients had dysphonia alone) [17]. Management [20]. Echoing the above studies, a prospective cohort of 52 children and adolescents from the UK described the spectrum of neurological and psychiatric manifestations linked to COVID-19. Thirty-three percent of children had some degree of neurologic disability 1-6 months after COVID-19 hospital discharge. Parents and clinicians should be aware of these potential devastating neurologic symptoms in children after COVID-19 and it is recommended that they expedite formal neurologic evaluation and multidisciplinary management to improve outcomes. Furthermore, children presented to the emergency department or primary care providers with new onset neurologic symptoms should undergo SARS-CoV-2 testing. Symptoms lasting more than a month were low (9%) and lasting more than 12 weeks occurred in only 4 individuals. Chest tightness was a long COVID-19 symptom that was reported in 1 child and persisted for more than three months [21]. In another study, clinicians surveyed 518 children and their families regarding symptoms that persisted months after COVID-19 infection [11••]. Palpitations and variations in heart rate were reported in 7/472 (1.5%) and 10/494 (2.0%), respectively. Similarly, in a case series of 129 children, palpitations, chest pain, or chest tightness was described in 3.1 to 6.2% of the cohort after the diagnosis of COVID-19 [22]. Although cardiovascular symptoms can be seen in children after COVID-19, they seem to be a rare phenomenon. Pulmonary In an Australian study of 171 children, 12 (7.0%) had a long COVID symptom (PMID: 33,891,880). Specific to the respiratory system, 6 children (50%) had a post-viral cough that ranged from 3 to 8 weeks from the time of symptom onset. The cough resolved at a later follow-up. In the previously mentioned Italian study [6••], persistent cough was appreciated in 5.4% of the population. Patterning the previous two studies (Osmanov et al.), persistent cough was documented in 5 of 503 children (1.0%) [11••]. A different study found that rhinorrhea was among the most common respiratory symptoms that persisted after acute COVID-19 (52.4%) [10 ••]. Feelings of chest tightness or difficulty breathing have also been reported in children [11••, 21]. Gastrointestinal Nausea, abdominal pain, and diarrhea have been stated as long COVID symptoms in children. In younger children, abdominal pain can be a presenting feature of COVID-19 in up to 27.7%; however, less than 5% of children experience this symptom > 4 weeks [10••, 21]. Of more concern is a child who had COVID-19 and three to six weeks later presents with an acute abdominal picture combined with high fever, rash, and conjunctival injection. This clinical presentation is more concerning for multisystem inflammatory syndrome in children (MIS-C) or also referred to as pediatric multi-system inflammatory syndrome temporally related to SARS CoV-2 (PIMS-TS). A summary of the findings discussed in this review is summarized in Fig. 2. Predictors for Long COVID A study analyzing 4182 adult cases of COVID-19 reported 558 (13.3%) participants with symptoms lasting ≥ 28 days [23]. The authors created a model that could discriminate which adults were more likely to have long COVID. Sudre et al. 
concluded that the number of symptoms in the first week combined with age and sex reached an area under the curve for predicting long COVID of 76.7%. Comparable studies in children may lead to the creation of models that could help clinicians track symptoms, and more importantly test interventions that may reduce symptom burden. At present, pathophysiologic reasons that may explain why some children are more susceptible to long COVID are unknown. Pathophysiologic Proposals Related to SARS-CoV-2 Manifestations Below is a synopsis of SARS-CoV-2 interactions according to different organ systems. Multisystem Inflammatory Syndrome in Children (MIS-C) Amidst the current COVID-19 pandemic, a novel syndrome affecting children and adolescents rose to great attention across the globe in the latter half of April 2020 [30,31,32]. Although MIS-C does not fit the definition of long COVID, many of the sequelae from this devastating disease can result in long-term childhood symptoms/ complications. According to the Center for Disease Control and Prevention, MIS-C is defined by (1) an individual aged less than 21 years old; with (2) clinical criteria of a minimum of 24-h history of subjective or objective fever ≥ 38.0 °C, severe illness necessitating hospitalization, and involvement of two or more organ systems; (3) laboratory evidence of inflammation; (4) laboratory or epidemiologic evidence of SARS-CoV-2 infection; and (5) no alternative diagnosis. The signs and symptoms of MIS-C include fever, abdominal pain, vomiting, diarrhea, skin rash, mucocutaneous lesions, hypotension, and cardiovascular and neurologic compromise [33]. Reviews of complications secondary to MIS-C have been described elsewhere [34,35]. Conclusion Although the manifestations of long COVID are new in the field of pediatrics, evidence suggests that the symptoms are variable and can span across multiple organ systems. During the initial stages of the pandemic, the pediatric population was often overlooked as they had less rates of infection and poor outcomes were rare. This new wave of COVID-19 is resulting in a rise of cases in the unvaccinated pediatric populations and may provide more data and insight into the long-term effects of COVID-19 in children and adolescents. Time will provide an advanced understanding for the prevention and appropriate care of long COVID in the pediatric population.
Kinetic Brownian motion on the diffeomorphism group of a closed Riemannian manifold

We define kinetic Brownian motion on the diffeomorphism group of a closed Riemannian manifold, and prove that it provides an interpolation between the hydrodynamic flow of a fluid and a Brownian-like flow.

Introduction

Kinetic Brownian motion is a purely geometric random perturbation of geodesic motion. In its simplest form, in $\mathbb{R}^d$, the sample paths of kinetic Brownian motion are $C^1$ random paths run at unit speed, whose velocity is a Brownian motion on the unit sphere, run at speed $\sigma^2$, for a speed parameter $\sigma > 0$. More formally, it is a hypoelliptic diffusion with state space $\mathbb{R}^d \times S^{d-1}$, solution to the stochastic differential equation
$$dx^\sigma_t = v^\sigma_t\,dt, \qquad dv^\sigma_t = \sigma\, P_{v^\sigma_t}({\circ}dW_t),$$
for $P_a : \mathbb{R}^d \to a^\perp$ the orthogonal projection on the orthogonal complement of $a$, for $a \neq 0$ in $\mathbb{R}^d$, and $W$ a standard $\mathbb{R}^d$-valued Brownian motion. If $\sigma = 0$, we have a straight line motion with constant velocity. For a fixed $0 < \sigma < +\infty$, we have a $C^1$ random path, whose typical behavior is illustrated in Figure 1 below. For $\sigma$ increasing to $\infty$, the exponentially fast decorrelation of the velocity process $v^\sigma$ on the sphere implies that the process $x^\sigma$ converges to the constant path $x_0$, if the latter is fixed independently of $\sigma$. One has to rescale time and look at the evolution at the time scale $\sigma^2$ to see a non-trivial limit. It is indeed elementary to prove that the time-rescaled position process $(x^\sigma_{\sigma^2 t})_{0\leq t\leq 1}$ of kinetic Brownian motion converges weakly in $C([0,1], \mathbb{R}^d)$ to a Brownian motion with an explicit generator, a constant multiple of the Laplacian. This result was extended in [Li16] to non-compact manifolds subject to a growth condition on their curvature tensor. In [ABT15], Angst, Bailleul and Tardif gave the most general result, assuming only geodesic and stochastic completeness, using rough paths theory as a workhorse to transport a rough path convergence result about kinetic Brownian motion in $\mathbb{R}^d$ to the manifold setting. See also [Li18] for further results in homogeneous spaces, and [Per18] for a generalization of the homogenization result of [ABT15] to anisotropic kinetic Brownian motion, or more general Markov processes on $T^1M$. Note that the dynamically obvious convergence of the unrescaled kinetic Brownian motion to the geodesic motion has been studied from the spectral point of view in [Dro17], for compact manifolds with negative curvature, showing that the $L^2$ spectrum of the generator of the unrescaled kinetic Brownian motion converges to the Pollicott-Ruelle resonances of $M$. Other examples of homogenization results for Langevin-type processes include works by Hottovy and co-authors, amongst others; see e.g. [BVW17, HV16, BW18, LWL19] for quantitative convergence results. See also [Sol95, Kol00, AHK12, Gli11] for other works on Langevin dynamics in a Riemannian manifold. This kind of homogenization result certainly echoes Bismut's program about his hypoelliptic Laplacian [Bis05, Bis15], whose probabilistic starting point is a similar interpolation result for the Langevin process in $\mathbb{R}^d$ and its Cartan development on a Riemannian manifold. The dynamics is lifted to a dynamics on the space of differential forms to take advantage of the correspondence between the cohomology of differential forms and the homology of $M$, via index-type theorems. See [Bis11, Bis15, Bis16, She16] for a sample of the deep results obtained by Bismut and co-authors on the hypoelliptic Laplacian.
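As a concrete illustration of the flat model above, here is a minimal numerical sketch of kinetic Brownian motion in $\mathbb{R}^d$, not taken from the paper: it discretizes the velocity equation with an Euler scheme and renormalizes the velocity onto the unit sphere at each step. The step size, dimension, and the renormalization device are illustrative choices, not the authors' scheme.

```python
import numpy as np

def kinetic_brownian_motion(d=3, sigma=1.0, T=1.0, n_steps=10_000, rng=None):
    """Simulate one sample path of kinetic Brownian motion in R^d.

    The velocity performs a Brownian motion on the unit sphere S^{d-1},
    run at speed sigma^2, and the position integrates the velocity.
    Simple Euler scheme with renormalization, for illustration only.
    """
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = np.zeros((n_steps + 1, d))
    v = np.zeros((n_steps + 1, d))
    v[0, 0] = 1.0                                  # initial unit velocity
    for k in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=d)
        # P_v dW: project the noise increment onto the tangent space v^perp
        tangent_noise = dW - np.dot(v[k], dW) * v[k]
        v_new = v[k] + sigma * tangent_noise
        v[k + 1] = v_new / np.linalg.norm(v_new)   # stay on the sphere
        x[k + 1] = x[k] + v[k] * dt
    return x, v

if __name__ == "__main__":
    # sigma = 0 gives a straight line; large sigma (with time rescaled by
    # sigma^2) looks Brownian, illustrating the interpolation.
    for sigma in (0.0, 1.0, 10.0):
        x, _ = kinetic_brownian_motion(sigma=sigma, rng=0)
        print(f"sigma = {sigma:5.1f}, |x_T| = {np.linalg.norm(x[-1]):.3f}")
```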
Note also that kinetic Brownian motion is the Riemannian analogue of its Lorentzian counterpart, introduced first by Dudley in [Dud66] in Minkowski spacetime in the 60's. See the far reaching related works [FLJ07,Bai10,FLJ11,BF12], on relativistic diffusions in a general Lorentzian setting. No homogenization result is expected for these purely geometric diffusion processes, unless one has an additional non-geometric ingredient, e.g. in the form of a relativistic fluid flow, like in [AF07]. The object of the present work is to define and study kinetic Brownian motion in the diffeomorphism group M , or volume preserving diffeomorphism group M 0 , of a closed Riemannian manifold M . As in the finite dimensional setting, we prove that it provides an interpolation between the geodesic flow and a Brownian flow, as the noise intensity parameter σ ranges from 0 to ∞. For σ = 0, the motion in each diffeomorphism group is geodesic, and it corresponds to the flow of the solutions of Euler's equation in the case of M 0 , after the seminal works of Arnold [Arn66] and Ebin & Marsden [EM69]. When considered in the setting of volume preserving diffeomorphisms, the Eulerian picture of kinetic Brownian motion provides a family of random perturbations of Euler's equations for the hydrodynamics of an incompressible fluid. There has been much work recently on random perturbations of Euler's equations, following Holm's seminal article [Hol15]. See [GBH17, CHR18, CFH18, DH18, BdLHLT19] for a sample. The structure of the noise in these works is intrinsically linked to the group structure of the diffeomorphism group, and it amounts to perturbe Euler's equation for the velocity field by an additive Brownian term, with values in a space of vector fields on the fluid domain M . Our point of view is purely Riemannian, and does not appeal to the group structure of the diffeomorphism group of the fluid domain M . As in the above finite dimensional setting, we define kinetic Brownian on the diffeomorphism group as the Cartan development of its 'flat' counterpart. Unlike the group-oriented point of view, where the running time diffeomorphism is sufficient to describe its infinitesimal increment from the noise, we need here a notion of frame of the tangent space of the running diffeomorphism to build its increment from the noise. We prove that each component of the energy spectrum of the Eulerian velocity field is ergodic, and give an explicit description of its invariant measure. We also have an explicit description of the invariant measure of the energy of the Eulerian velocity field. On the technical side, we use rough paths theory to transport a weak convergence result for the flat kinetic Brownian motion taking values in the tangent space to the configuration space M , or M 0 , to a weak convergence result for the solution of a differential equation controlled by that flat kinetic Brownian motion. We use for that purpose the continuity of the Itô-Lyons solution map to a controlled ordinary differential equation, in the present infinite dimensional setting. This allows to bypass a number of difficulties that would appear otherwise if using the classical martingale problem approach, as in [Li12,Li16]. All we need about rough paths theory is recalled in Section 2.4. From a geometric point of view, the tangent space to the configuration space can naturally be seen as an infinite dimensional Hilbert space. 
For this reason, we define and study in Section 2 kinetic Brownian motion on a generic infinite dimensional Hilbert space H. We provide an explicit description of the invariant measure of the velocity process in Section 2.1, and we establish exponential decorrelation identities for the latter in Section 2.2. The invariance principle for the position process associated to the time-rescaled H-valued kinetic Brownian motion is then established in Section 2.3. With the rough paths tools introduced in Section 2.4, Section 2.5 is devoted to the proof of the fact that the canonical rough path above the time-rescaled position process converges weakly as a rough path to the Stratonovich Brownian rough path of a Brownian motion with an explicit covariance. Elements of the geometry of the configuration spaces M and M 0 are recalled in Section 3. We develop in particular in Section 3.3 and Section 3.4 the material needed to talk about Cartan development operation as solving an ordinary differential equation driven by smooth vector fields. The final homogenisation result, proving the interpolation between geodesic and Brownian flows on the configuration spaces, is proved in Section 4 using the robust tools of rough paths theory. Appendix A contains the proof of a technical result about Cartan development in M 0 . Notations. We gather here a number of notations that are used throughout the article. • The letter γ stands for a Gaussian measure on a Hilbert space H, with covariance C γ : H * × H * → R, and associated operator C γ : H → H. The scalar product and norm on H are denoted by (·, ·) and · , respectively. • We denote by H the Cameron-Martin space of the measure γ. • We endow the algebraic tensor space H ⊗ a H with its natural Hilbert norm. This amounts to identify H ⊗H with the space of Hilbert-Schmidt operators on H. • We use the notation A À p B for an inequality of the form A ≤ cB, with a constant c depending only on p. Recall that a Gaussian probability measure on H is a Borel measure γ such that * γ is a real Gaussian probability on R, for every continuous linear functional : H → R. Fernique's theorem [Fer70] ensures that H exp a x 2 γ(dx) < ∞, for a small enough positive constant a. It follows that the covariance Kinetic is a well-defined continuous bilinear operator on H * × H * . One can then define a continuous symmetric operator C γ : H → H, by the identity for all h, k ∈ H. It has finite trace equal to Conversely, one can associate to any trace-class symmetric operator C : H → H, a Gaussian measure γ on H whose covariance C γ ( , ) = C( , ), for all ∈ H. Since C γ is compact, there exists an orthonormal basis (e n ) of H, such that C γ (e n ) = α 2 n e n , for non-negative and non-increasing eigenvalues α n with α 2 n < ∞. We define a Hilbert space H by choosing α n e n as an orthonormal basis for it. The space H is continuously embeded inside H. Let (X n ) stand for a sequence of independent, identically distributed, real-valued Gaussian random variables with zero mean and unit variance, defined on some probability space (Ω, F, P). Then the series n X n α n e n converges in L 2 (Ω, H), and has distribution γ. Fix a positive time horizon T ∈ (0, ∞]. An H-Brownian motion in H, on the time interval [0, T ) is a random H-valued continuous path W on [0, T ), with stationary, independent increments such that the distribution of W 1 is a Gaussian probability measure γ on H. 
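As an illustration of this definition, one can truncate the eigenbasis (e_n) and represent W_t by its coefficients; the series construction spelled out in the next paragraph then reduces to summing independent scalar Brownian motions weighted by the α_n. The following sketch is ours, with a finite truncation and illustrative names; it is not part of the paper.

```python
import numpy as np

def h_brownian_motion(alphas, T=1.0, n_steps=1000, rng=None):
    """Finite truncation of an H-Brownian motion, represented by its coefficients
    in the eigenbasis (e_n) of the covariance operator C_gamma:
    W_t = sum_n alpha_n * W^n_t * e_n, with independent scalar Brownian motions W^n."""
    rng = np.random.default_rng() if rng is None else rng
    alphas = np.asarray(alphas, dtype=float)
    dt = T / n_steps
    dW = rng.normal(scale=np.sqrt(dt), size=(n_steps, alphas.size))
    W = np.vstack([np.zeros(alphas.size), np.cumsum(dW, axis=0)])
    return W * alphas                 # row t holds the coefficients of W_t in the basis (e_n)

# eigenvalues alpha_n^2 = n^{-2} give a trace-class covariance (sum alpha_n^2 < infinity)
alphas = 1.0 / np.arange(1, 201)
W = h_brownian_motion(alphas)
# W[-1] is (approximately) distributed as the Gaussian measure gamma with covariance diag(alpha_n^2)
```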
A simple construction is provided by taking a sequence (W n t ) of independent, identically distributed, real-valued Brownian motions, and setting Denote by S the unit sphere of H, and let P a : H → H stand for the orthogonal projection on a ⊥ , for a = 0. The H-spherical Brownian motion v σ t on S is defined as the solution to the Stratonovich stochastic differential equation associated to a given initial condition v σ 0 ∈ S; it is defined for all times. The speed parameter σ is a non-negative real number. Write Z for H 1 u γ(du). Theorem 2.1. The image under the projection u → u/ u of the measure 1 Z 1 u γ(du) in the ambiant space H is a probability measure µ on S that is invariant for the dynamics of v σ t , for any positive speed parameter σ. This statement generalizes Proposition 1.1 of [Per18] to the present infinite dimensional setting. The above description of the invariant measure µ as an image measure under the projection map actually coincides with the finite dimensional description given in the latter reference. Proof. When written in Itô form, the stochastic differential equation (1) defining the process (v σ t ) t≥0 reads Alternatively, one can bypass computations and argue using Malliavin calculus as follows. Denote by L the infinitesimal generator of the process (u σ t ). Set V (u) := u/ u 2 for u = 0, and let ∆ γ denote the Laplace operator associated with the covariance C γ with weights (α 2 n ),. We then have for any test function f and any u ∈ H One then has for any test function f , with usual notations D for the gradient and δ for the divergence, We prove in Section 2.2 that the velocity process (v σ t ) converges exponentially fast in Wasserstein distance to the invariant probability measure µ of Theorem 2.1, for any initial velocity v 0 , despite the possible lack of strong Feller property of the associated semigroup. An invariance principle for the time-rescaled position process (x σ σ 2 t ) is obtained as a consequence in Section 2.3. We recall in Section 2.4 what we need from rough paths theory in this work, and prove in Section 2.5 that the canonical rough path associated to the time-rescaled process (x σ σ 2 t ) converges weakly as a rough path to an explicit Stratonovich Brownian rough path. Exponential mixing of the velocity process We consider in this section the mixing properties of the spherical process (v σ t ) t≥0 with unit speed parameter σ = 1. To simplify the expressions, we drop momentarily the exponents σ from all our notations. Our objective is to show that the spherical process t ) t≥0 is exponentially mixing. Recall that the 1 and 2-Wasserstein distances are defined for any probability measures µ, ν on S by the identities where the infimum is taken over all couplings P of X ∼ λ and Y ∼ ν, and the supremum over all 1-Lipscthiz functions f : S → R. The first two equalities are definitions, the last one is the Kantorovich-Rubinstein duality principle. Note that W 1 ≤ W 2 . Proposition 2.2. Assume that There exists a positive time τ such that for any probability measures λ and ν on the unit sphere S of H, we have W 2 (P * t λ, P * t ν) ≤ e −t/τ W 2 (λ, ν), for all t ≥ 0. In particular, the invariant measure µ is unique, and for any probability measure λ on the sphere S, and t ≥ 0, we have The role of the trace condition (21) will be clear from the proof. If we have the freedom to choose the covariance C γ of the Brownian noise, this is not a constraint. 
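The description of the invariant measure in Theorem 2.1 lends itself to a direct Monte Carlo check: since µ is the image of (1/Z)‖u‖ γ(du) under u ↦ u/‖u‖, expectations under µ can be computed by reweighting samples from γ. The sketch below is ours (truncated basis, illustrative names) and only illustrates this change-of-measure identity.

```python
import numpy as np

def expectation_under_mu(f, alphas, n_samples=200_000, rng=None):
    """Monte Carlo estimate of E_mu[f(v)] using the description of Theorem 2.1:
    mu is the image of (1/Z) ||u|| gamma(du) under u -> u/||u||, hence
    E_mu[f(v)] = E_gamma[ ||u|| f(u/||u||) ] / E_gamma[ ||u|| ]."""
    rng = np.random.default_rng() if rng is None else rng
    alphas = np.asarray(alphas, dtype=float)
    u = rng.normal(size=(n_samples, alphas.size)) * alphas    # samples from gamma (truncated basis)
    norms = np.linalg.norm(u, axis=1)
    v = u / norms[:, None]                                    # projection onto the unit sphere
    return np.sum(norms * f(v)) / np.sum(norms)               # importance weight ||u||

alphas = 1.0 / np.arange(1, 51)
# e.g. the second moment of the first coordinate of the velocity under mu
m2 = expectation_under_mu(lambda v: v[:, 0] ** 2, alphas)
```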
Note that the rougher the noise, that is the more slowly the sequence of the eigenvalues α n converges to 0, the easier it is to satisfy condition (21). We shall see in Section 4 that it holds automatically in a number of relevant examples of random dynamics in the configuration space of a fluid flow. Proof. Denote by P the law of the Brownian motion (B t ) with covariance C γ , and by P v the law of the solution of Equation (1) with σ = 1, starting from v ∈ S. Denote by E and E v the associated expectations operators. Recall that the notation (a, b) stands for the scalar product of a and b in H. Fix v 0 , w 0 ∈ S, and consider the two diffusion processes (v t ) and (w t ), started from v 0 and w 0 , respectively, and solutions of the Itô stochastic differential equations Comparing with Equation (2), it is clear that (v t ) has law P v0 and (w t ) has law P w0 . Moreover, Itô's formula yields or equivalently, setting we get Now remark that since the sequence (α n ) is non-increasing, we have for any v ∈ S. Taking the expectation under P in equation (5), we have from Grönwall inequality The conclusion of the statement follows. Remark that E µ [v t ] = 0, as a consequence of the symmetry properties of the invariant measure µ. The process (v t ) is stationary if v 0 has distribution µ; it can then be extended into a two sided process defined for all real times. Denote by (F t ) t∈R the complete filtration generated by (v t ) on the probability space where it is defined. Set F ≤0 := σ F t ; t ≤ 0 and F ≥s := σ F t ; t ≥ s , for any real time s. Recall that the mixing coefficient α(s) of the velocity process v is defined, for s > 0, by the formula The following fact will be useful to get for free the independence of the increments of the limit processes obtained after proper rescalings of functionals of (v t ). Corollary 2.4. The mixing coefficient α(s) tends to 0 as s increases to ∞. Proof. As a preliminary remark, recall the definition of the lift (u σ t ) to H of (v σ t ), introduced in the proof of Theorem 2.1. This process is strong Feller, as it can be seen to satisfy a Bismut-Li integration by parts formula. See e.g. Peszat and Zabczyk' seminal paper [PZ95], and Wang and Zhang's extension [WZ10] to unbounded drift and diffusivity. The velocity process (v σ t ) is thus itself a strong Feller diffusion, and if one denotes by (P t ) its transition semigroup, the functions P 1 g, for g measurable, bounded by 1, are all Lipschitz continuous, with a finite common upper bound L for their Lipschitz constants. Now, it follows from the Markovian character of the dynamics of (v t ), and the Feller property of its semigroup, that it suffices to see that tends to 0 as s goes to ∞, for any real-valued continuous functions f, g on the unit sphere S, with null mean with respect to the invariant measure µ. Writing further for s > 1, and using the strong Feller property of the semigroup of the diffusion process (v t ), we can further assume that the function g in (6) is L g ∞ -Lipschitz continuous. Let w g stand for its uniform modulus of continuity. For each s, denote by (v s , v s ) a W 1 -optimal coupling of the measures P * s δ v0 and µ, for a deterministic v 0 , so we have Using the fact that gdµ = 0, one then has so the statement follows from Proposition 2.2. Invariance principle for the position process We assume in all of this section that the initial condition v 0 of the velocity process of kinetic Brownian motion is distribued according to its invariant probability measure µ, from Theorem 2.1. 
Pick 1/3 < α ≤ 1/2. We prove in this section that the distribution in C α ([0, 1], H) of the time-rescaled position process (x σ σ 2 t ) converges to the distribution of a Brownian motion in H with an explicit covariance, given in identity (7) of Proposition 2.5 below. The usual invariance principles in Hilbert spaces consider weak convergence in C 0 ([0, 1], H), so we need an extra tightness estimate provided in Section 2.3.1 to complete the program. To make the most out of the convergence results from Section 2.2, set , the spherical Brownian motion run at speed σ 2 = 1. 2.3.1. Tightness in Hölder spaces. We dedicate this section to proving the following uniform estimate. Proposition 2.6. For any p ≥ 2, we have It follows from Kolmogorov-Lamperti tightness criterion that the laws of X σ form a tight family in C α ([0, 1], H), for any 0 < α < 1/2. Note that for T = σ 4 (t − s) > 0, we have so Proposition 2.6 is a consequence of the estimate We translate our problem in discrete time, writing to work with the correlations between different integral slices, and compare this sequence to martingale differences. There is an abundant literature on the subject; we follow here the approach of C. Cuny [Cun17]. Let (Ω, F, P) be a probability space with a filtration (F n ) n≥n0 , where −∞ ≤ n 0 ≤ 0, and let (X n ) n≥n0 be H-valued random variables such that each X n is measurable with respect to F n . Recall that (X n ) n≥0 is said to be a martingale difference with respect to (F n ) if each X n is integrable and E X n+1 |F n = 0, for all n ≥ n 0 . The following result is an elementary consequence of the Burkholder-Davis-Gundy and Jensen inequalities. Lemma 2.7. Let X be an H-valued martingale difference with moments of order p ≥ 2. Then In particular, if X is stationary, then Assume from now on that we are given a sequence (X n ) ≥n0 of integrable H-valued random variables on (Ω, F, P). For j ∈ Z, and k ≥ 0, define the σ-algebra F j−1 . (It may not make sense for all j, k, depending on how far in the past the σ-algebras (F n ) are defined.) Note that is a stationary martingale difference with respect to the filtration F ( +1) j j≥0 . We use the classical martingale/co-boundary decomposition to prove the next result. Lemma 2.8. Fix p ≥ 2, and assume that F n is defined for n ≥ −2 k+1 , then Proof. For any 0 ≤ j ≤ k, set n j := 2 k−j ; note that n k = 1. We have for j < k the identity By induction we get We also know that Putting it all together, we obtain Proof of Proposition 2.6. It is enough to prove that we have for any T ≥ 1 and p ≥ 2, the estimate Fix the integer k such that T /2 ≤ 2 k < T , and define Since we assume that v 0 is distributed according to an invariant probability measure, we can actually have our process started for a time arbitrarily far in the past, so we can assume that F j is well-defined for any j ≥ −2 k+1 . We can then write The first sum is a stationary martingale difference with respect to the σalgebra (F j ) j≥0 ; the second is the subject of the previous lemma. One then has the estimate with the notations of Lemma 2.8. In our setting, Note that we have from Corollary 2.3 We can insert this in the upper bound for the integral to obtain 2.3.2. Convergence in Hölder spaces. We are ready to prove Proposition 2.5 on the weak convergence of X σ in any Hölder space C α ([0, 1], H) to the Brownian motion in H with covariance given by formula (7). Proof of Proposition 2.5. 
From the tightness result in C α ([0, 1], H) stated in Proposition 2.6, it is sufficient to show that X σ converges weakly in C 0 ([0, 1], H) to the above mentionned Brownian motion. If suffices for that purpose to see that for a finite sequence of t 1 < · · · < t n , and a small enough positive delay , the random variables X σ ti− − X σ ti−1 converge to finitely many independent Gaussian random variables, with corresponding covariances (t i − − t i−1 ) times the covariance (7). One can for instance use Dedecker and Merlevède conditional central limit theorem, from Theorem 1 in [DM03], to see the convergence to a Gaussian limit with the expected covariance operator. One checks that the four conditions (a)-(d) from Theorem 1 in [DM03] hold true in our setting. We denote by ∈ H * a continuous linear form on H. (a) We have from the decorrelation result in Corollary 2.3 that The decorrelation result in Corollary 2.3 justifies the use of dominated convergence to justify that converges in L 1 as T goes to ∞. The limit β 2 is constant, from the ergodic behaviour of the velocity process. (c) The L p estimate (8) with any p > 1 shows that the family is uniformly integrable. (d) Last we have, by stationarity of the velocity process, that converges indeed to a finite limite as T goes to ∞. This limit is given by the finite sum i≥0 β 2 i , where ( i ) stands for an orthonormal basis of H * . One reads the independence of the limit Gaussian random variables corresponding to different time intervals [t i−1 , t i − ] on their null correlation; the latter is a direct consequence of the decorrelation property of Corollary 2.3. The statement of Proposition 2.5 follows then from the conclusion of Dedecker and Merlevède convergence result. The identification of the covariance (7) is a consequence of the corresponding statement, Proposition 3.4, in the finite dimensional setting of [Per18]. We aim now at improving the weak invariance principle of Proposition 2.5 into a weak invariance principle for the canonical rough path associated with X σ . This will be crucial in Section 4 when defining kinetic Brownian motion in a diffeomorphism space as the solution of a differential equation driven by X σ , and proving the interpolation results of Theorem 4.3 and Theorem 4.4 by a continuity argument. We recall in the next section all we need to know from rough paths theory. The flavor of rough paths theory It is not our purpose here to give a detailled account of rough paths theory. We refer the reader to the lecture notes [CLL07,FH14,Bau14,Bai15b], for introductions to the subject from different point of views. The following will be sufficient for our needs here. Rough paths theory is a theory of ordinary differential equations controlled by non-smooth signals h ∈ C α ([0, 1], R ). The point z t moves here in R d , where we are given sufficiently regular vector fields V i . Young integration theory [You36,Lyo94] allows to make sense of the integral · 0 V (y s )dh s , for paths y, h that are α-Hölder, for α > 1 2 , as an R d -valued α-Hölder path depending in locally Lipscthiz way on y and h. This allows to formulate the differential equation (9) as a fixed point problem for a contracting map from C α ([0, 1], R d ) into itself, and to obtain as a consequence the continuous dependence of the solution path on the driving control h. 
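A minimal numerical counterpart of the Young regime may help fix ideas: for a driver of Hölder regularity better than 1/2, the naive left-point (first-order Euler) scheme for equation (9) converges, and its output depends continuously on the driver. The code below is only a sketch with made-up vector fields and driver; it is not an implementation used in the paper.

```python
import numpy as np

def controlled_ode_euler(V, z0, h):
    """Left-point Euler scheme for dz_t = V(z_t) dh_t on a partition:
    z_{k+1} = z_k + V(z_k) (h_{k+1} - h_k).  In the Young regime (driver better
    than 1/2-Hölder) this converges, and the limit is continuous in the driver h."""
    z = np.array(z0, dtype=float)
    out = [z.copy()]
    for k in range(len(h) - 1):
        z = z + V(z) @ (h[k + 1] - h[k])
        out.append(z.copy())
    return np.array(out)

# toy vector fields V_1, V_2 packed as the columns of a state-dependent matrix
V = lambda z: np.array([[1.0, np.sin(z[1])],
                        [np.cos(z[0]), 1.0]])
t = np.linspace(0.0, 1.0, 2001)
h = np.stack([np.sin(2 * np.pi * t), t ** 0.75], axis=1)   # driver better than 1/2-Hölder
z = controlled_ode_euler(V, [0.0, 0.0], h)
```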
Lyons-Young theory cannot be used for α-Hölder controls with α < 1 2 , as even in R, with one dimensional controls, there exists no continuous bilinear form on C α ([0, 1], R) × C α ([0, 1], R) extending the Riemann integral 1 0 y t dh t , of smooth paths y, h; see Propositon 1.29 of [CLL07]. (This can be understood from a Fourier analysis point of view as a consequence of the fact that the resonant operator from Littlewood-Paley theory is unbounded on [BCD11].) Lyons' deep insight was to realize that what really fixes the dynamics of a solution path to the controlled differential equation (9) is not only the increments dh t , or h t − h s , of the control, but rather the increments of h together with the increments of a number of its iterated integrals. This can be understood from the fact that for a smooth control, one has the Taylor-type expansion for any real-valued smooth function f on R d . (We use Einstein' summation convention, with integer indices in [1, ].) We consider here the vector fields V i as first order differential operators, so we have for instance The usual first order Euler scheme is refined by the above second order Milstein scheme whose one step error is given explicitly by the above triple integral, of order |t − s| 3 , for a C 1 control h. The iterated integrals are however meaningless for a control h ∈ C α ([0, 1], R ), when α ≤ 1/2. A prough path X above h, with 2 ≤ p < 3, is exactly the datum of h together with a quantity, indexed by (s ≤ t), that plays the role of these iterated integrals. Set [0, 1] ≤ := (s, t) ∈ [0, 1] 2 ; s ≤ t , and recall that R ⊗2 stands for the set of × matrices. Definition 2.9. Fix 2 ≤ p < 3. A p-rough path X over R , is a map for a C α ([0, 1], R ) path h, and X satisfies Chen's relations for all 0 ≤ s ≤ u ≤ t ≤ 1. The 1/p-Hölder norm on X, and the 2/p-Hölder norm on X, define jointly a complete metric on the nonlinear space RP(p) of p-rough paths. Chen's relation accouts for the fact that for a C 1 path h, one has indeed for any 0 ≤ s ≤ u ≤ t ≤ 1, and any indices 1 ≤ j, k ≤ . One has also in that case, by integration by parts, the identiy A p-rough path X such that the symmetric part of X ts is equal to 1 2 X ts ⊗X ts , for all times 0 ≤ s ≤ t ≤ 1, is called weakly geometric. The set of weakly geometric p-rough paths is closed in RP(p). For a C 1 path h defined on the time interval [0, 1], setting X ts := h t − h s and for all 0 ≤ s ≤ t ≤ 1, defines a weak geometric p-rough path, for any 2 ≤ p < 3, called the canonical rough path associated with h. Let B stand for an -dimensional Brownian motion. The Stratonovich Brownian rough path B = (B, B) is defined by It is almost surely a weak geometric p-rough path, for any 2 < p < 3. Definition 2.10. Let C 3 b vector fields (V i ) 1≤i≤ on R d be given, together with a weak geometric p-rough path X over R . A path (z t ) 0≤t≤1 is said to be a solution to the rough differential equation if there is an exponent a > 1, such that one has for any smooth real-valued function f on R d , and any times 0 ≤ s ≤ t ≤ 1. The above O(·) term is allowed to depend on f . Importantly, the solution of a rough differential equation driven by the Stratonovich Brownian rough path coincides almost surely with the solution of the corresponding Stratonovich differential equation; see e.g. the lecture notes [FH14,Bai]. Theorem 2.11 (Lyons' universal limit theorem). The rough differential equation (10) has a unique solution. It is an element of C 1/p ([0, 1], R d ) that depends continuously on X. 
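To make Definition 2.9 concrete, the following sketch (ours; finite partitions and an illustrative path) computes the canonical level-two lift of a piecewise-linear path, that is its increments together with the iterated integrals evaluated exactly on each linear piece, and checks Chen's relation numerically.

```python
import numpy as np

def canonical_lift(X):
    """Level-two lift of a piecewise-linear path X of shape (N, ell):
    first[n]  = X_n - X_0,
    second[n] = int_0^{t_n} (X_s - X_0) (x) dX_s, computed exactly on each linear piece."""
    N, ell = X.shape
    first = X - X[0]
    second = np.zeros((N, ell, ell))
    for n in range(1, N):
        dX = X[n] - X[n - 1]
        second[n] = second[n - 1] + np.outer(X[n - 1] - X[0], dX) + 0.5 * np.outer(dX, dX)
    return first, second

def chen_defect(X, s, u, t):
    """Chen's relation at second level: X_{st} = X_{su} + X_{ut} + X_{su} (x) X_{ut}."""
    lift = lambda a, b: tuple(arr[-1] for arr in canonical_lift(X[a:b + 1]))
    Xsu, XXsu = lift(s, u)
    Xut, XXut = lift(u, t)
    Xst, XXst = lift(s, t)
    return np.linalg.norm(XXst - (XXsu + XXut + np.outer(Xsu, Xut)))

grid = np.linspace(0.0, 1.0, 501)
X = np.stack([np.sin(3 * grid), np.cos(5 * grid)], axis=1)
print(chen_defect(X, 0, 250, 500))   # ~ 1e-16: Chen's relation holds exactly for the canonical lift
```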
The map that associates to the driving rough path the solution to a given rough differential equation, seen as an element of C 1/p ([0, 1], R d ), is called the Itô-Lyons solution map. If (X n ) is a sequence of random geometric p-rough path in R , converging weakly to a limit random geometric p-rough path X, the continuity of the Itô-Lyons solution map gives for free the weak convergence in C 1/p ([0, 1], R d ) of the laws of the solutions to Equation (10) driven by the X n , to the law of the solution of that equation driven by X. The theory works perfectly well for dynamics with values in Banach spaces or Banach manifolds, and driving rough paths X = (X, X), with X taking values in a Banach space E. One needs to take care in that setting to the tensor norm used to define the completion of the algebraic tensor space E ⊗ a E, as this may produce non-equivalent norms, and that norm is used to define the norm of a rough path. Note that families of vector fields (V 1 , . . . , V ) are then replaced in that setting by one forms on E with values in the space of vector fields on the space where the dynamics takes place. See e.g. Lyons' original work [Lyo98] or Cass and Weidner's work [CW16] for the details. See e.g. [Bai15a] for a simple proof of Lyons' universal limit theorem in that general setting. The vector fields in Definition 2.10 and Theorem 2.11 are required to be C 3 b . This is used to get solution of equation (10) that are defined on the whole time interval [0, 1]. Only local in time existence results can be obtained when working with unbounded vector fields, or on a manifold. The Taylorlike expansion property (11) defining a solution path is then only required to hold for each time s, for t sufficiently close to s. One still has continuity of the solution path with respect to the driving rough path, in an adapted sense. See e.g. Section 2.4.2 of [ABT15]. This continuity property is sufficient to obtain the local weak convergence of the laws of the solution path to the corresponding limit path, for random driving weak geometric p-rough paths converging weakly to a limit random weak geometric p-rough path. See Definition 4.2 for the definition of local weak convergence. So far, we have defined kinetic Brownian motion (x σ t , v σ t ) in H from its unit velocity process v σ . We have seen in Proposition 2.5 that its time rescaled position process (X σ t ) := (x σ σ 2 t ) is converging weakly in C α [0, 1], H to a Brownian motion with explicit covariance (7), for any α < 1/2. We prove in the next section that the canonical rough path X σ associated with X σ converges weakly as a weak geometric p-rough path to the Stratonovich Brownian rough path associated with the Brownian motion with covariance (7), for any 2 < p < 3. This convergence result will be instrumental in Section 4 to prove that the Cartan development in diffeomorphism spaces of the time rescaled kinetic Brownian motion in Hilbert spaces of vector fields converge to some limit dynamics as σ increases to ∞. This will come as a direct consequence of the continuity of the Itô-Lyons solution map. Remark 2.12. The idea of using rough paths theory for proving elementary homogenization results was first tested in the work [FGL13] of Friz, Gassiat and Lyons, in their study of the so-called physical Brownian motion in a magnetic field. That random process is described as a C 1 path (x t ) 0≤t≤1 in R d modeling the motion of an object of mass m, with momentum p = mẋ, subject to a damping force and a magnetic field. 
Its momentum satisfies a stochastic differential equation of Ornstein-Uhlenbeck form for some matrix M , whose eigenvalues all have positive real parts, and B is a d-dimensional Brownian motion. While the process (M x t ) 0≤t≤1 is easily seen to converge to a Brownian motion W , its rough path lift is shown to converge in a rough paths sense in L q , for any q ≥ 2, to a random rough path different from the Stratonovich Brownian rough path associated to W . A number of works have followed this approach to homogenization problems for fast-slow systems; see [ABT15, KM16, KM17, BC17, CFK + 19] for a sample. Rough paths invariance principle for the canonical lift As in Section 2.3, we assume in all of this section that the initial condition v 0 of the velocity process of kinetic Brownian motion is distribued according to its invariant probability measure µ, from Theorem 2.1. Let X σ = (X σ , X σ ) stand for the canonical rough path associated to the random C 1 path X σ , where we recall that Recall that the tensor space H ⊗ H is equipped with its natural complete Hilbert(-Schmidt) norm. It follows in particular from Proposition 2.6, Lemma 2.13 and the known Kolmogorov-Lamperti criterion for rough paths that the family of laws L(X σ ) is tight in RP(α −1 ), for any 1/3 < α < 1/2. Proof. The statement of the lemma is a consequence of the estimate for T ≥ 1; we prove the latter. We use for that purpose the same kind of multiscale martingale/coboundary decomposition as in the proof of Lemma 2.8. Let k the unique integer such that As above, we can assume without loss of generality that F j is defined for all j ≥ −2 k+1 , as v 0 is assumed to be distributed according to the invariant probability measure of the velocity process. Then the integral rewrites as The first sum is a martingale difference with respect to ( F n ) n≥0 , albeit not stationary, Each term is controlled using Lemma 2.6, and the fact that |v t | = 1, so the L p norm of the first sum in (12) is bounded above by 2 k , up to a constant depending only on p. The second sum in 12 is treated as in the proof of Lemma 2.8. Set here One has and we are left with the study of the moments of the Z (n) j . These variables are the conditional expectation of a double integral, which can be decomposed at time (j − 1)2 n δ + δ as follows. Because the conditioning is from a distant past, the first term is controlled using the exponential mixing and the estimate of Lemma 2.6. When dealing with the second term, we use the stationarity of v to write Now we have, for each 0 ≤ n ≤ k and 0 ≤ j < 2 k−n , This last sum is convergent, so the L p norm of the second term in (12) is no greater than a constant multiple of 2 k . Convergence in rough path space. We are now ready to state and prove the main result of this section. Theorem 2.14. Pick 1/3 < α < 1/2. The processes X σ converge in law in RP(α −1 ), as σ goes to ∞, to the Stratonovich Brownian rough path with covariance Let X be a random weak geometric α −1 -rough path with distribution an arbitrary limit point of the family of laws of the X σ . Write X = (B, X), with B a Brownian motion with the above covariance. Denote by X the projection of X on the finite dimensional space generated by the first d vectors of the basis (e i ) from Section 2.1 -we use below the associated coordinate system. Using a monotone class argument and the tightness result stated in Lemma 2.13, the statement of Theorem 2.14 is a consequence of the following result, given that d ≥ 1 is arbitrary. Lemma 2.15. 
The d-dimensional random rough path X is a Stratonovich Brownian rough path with associated covariance matrix diag(γ 1 , · · · , γ d ), with Proof. Let G 2 d stand for the step-2 nilpotent Lie group over R d . We prove that the process (X t0 ) 0≤t≤1 is a G 2 d -valued Brownian motion by showing that it has stationary, independent, increments. The stationarity is inherited from the stationarity of the X σ . The independence of the increments of X on disjoint closed intervals is a consequence of Corollary 2.4 on the convergence to 0 of the mixing coefficient of (v t ). Continuity of X allows to extend the result to adjacent time intervals. We identify the generator of the G 2 d -valued Brownian motion (X t ) as the generator of the d-dimensional Stratonovich Brownian rough path following the method of [Per18]. We recall the details for the reader's convenience. Note that we only need to consider the joint dynamics of B t and the antisymmetric part (A t ) of (X t ); the former takes values in the Lie algebra g 2 d of G 2 d -a linear space. Denote by A B the antisymmetric part of Stratonovich Brownian rough path associated with B. We then have, for any smooth real-valued function The conclusion follows by multiplying by t −1 and taking expectation, sending t to 0, after recalling that A t and A B t are centered, and recalling the uniform estimates from Proposition 2.13 under the form 3. Geometry of the configuration space is an isomorphism between H s (F ) × H s (G) and H s (F × M G). 4. Omega lemma. Given a smooth fiber bundle morphism Φ : Figure 3. An infinitesimal rigid object x is moving along a path. It has position y and velocity v at some time. Its orientation at that time is given by an isometry e : T x M → T y M , and its velocity v is given in its initial reference frame by w. is a smooth bundle morphism, it follows from items 3 and 4 above, that it induces a smooth map from H s (F (w,e) ) into H s (F (v) ). Similarly, the smooth map We refer the reader to the classic textbook [Ros97] for the following elementary facts from functional analysis about the Laplace operator ∆ on vector fields on M . We take the convention that −∆ is a non-positive symmetric operator on L 2 (T M ). This operator has compact resolvant, so one has an eigenspaces decomposition with finite dimensional eigenspaces E λn , with corresponding non-positive eigenvalues λ n ↓ −∞. Eigenvectors of −∆ are smooth, from elliptic regularity results. We recover the space H s (T M ) described above setting The 0-eigenspace is finite dimensional. Any choice of Euclidean norm · on it defines the topology of H s (T M ), associated with the norm Weak Riemannian structure on the configuration space Denote by Vol the Riemannian volume measure on (M, g), and by exp : T M → M , its exponential map. The configuration space M is endowed with a smooth weak Riemannian structure, setting for any ϕ ∈ M and X(ϕ), Y (ϕ) ∈ T ϕ M , This formula defines by restriction a weak Riemannian metric on the space M 0 of H s maps from M into itself preserving the volume form. In that setting, notice that if X(ϕ) = X • ϕ and Y (ϕ) = Y • ϕ, for some vector fields X, Y on M , then the change of variable formula gives so the scalar product is in that case the L 2 scalar product of the vector fields X and Y. The fact that the topology on M induced by the scalar product is weaker than the H s -topology makes non-obvious the existence of a smooth Levi-Civita connection. 
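On the flat torus the eigenspace description of the H^s topology takes a completely explicit spectral form, which we record in the following illustrative sketch (ours, not from the paper): the Sobolev norm is a weighted ℓ^2 norm of Fourier coefficients, the weights being powers of the Laplacian eigenvalues shifted by one.

```python
import numpy as np

def hs_norm_flat_torus(coeffs, s):
    """Sobolev H^s norm of a field on the flat torus T^2 from its Fourier coefficients,
    using the spectral characterisation ||X||_{H^s}^2 = sum_k (1 + |k|^2)^s |x_k|^2,
    i.e. the flat-torus specialisation of the Laplacian eigenspace decomposition."""
    n = coeffs.shape[0]
    freq = np.fft.fftfreq(n, d=1.0 / n)              # integer frequencies
    k1, k2 = np.meshgrid(freq, freq, indexing="ij")
    weight = (1.0 + k1 ** 2 + k2 ** 2) ** s
    return np.sqrt(np.sum(weight * np.abs(coeffs) ** 2))

n = 64
freq = np.fft.fftfreq(n, d=1.0 / n)
k1, k2 = np.meshgrid(freq, freq, indexing="ij")
coeffs = 1.0 / (1.0 + k1 ** 2 + k2 ** 2)             # decay |k|^{-2}: lies in H^s for s < 1 in 2d
print(hs_norm_flat_torus(coeffs, 0.5), hs_norm_flat_torus(coeffs, 2.0))
```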
Ebin and Marsden have proved that • the L 2 metric (14) is a smooth function on M , • it has a smooth Levi-Civita connection ∇, with associated exponential map Exp well-defined and smooth in a neighbourhood of the zero section; it is explicitly given by Exp ϕ (X)(m) = exp ϕ(m) X(m) . The geodesics of (M , ∇) are defined for all times. Denote by ∇ the Levi-Civita connection of (M, g). For smooth right invariant vector fields X, Y on M , with X(ϕ) = X • ϕ and Y (ϕ) = Y • ϕ, one has The L 2 -scalar product is right invariant on the group M 0 , from the change of variable formula. The Levi-Civita connection of the L 2 metric on the volume preserving configuration space M 0 is explicitly given in terms of the Hodge projection operator P on divergence-free vector fields on M . Denote by R ϕ the right composition by ϕ. For any ϕ ∈ M 0 , the map is indeed the orthogonal projection map from T ϕ M into T ϕ M 0 , and its depends smoothly on ϕ ∈ M 0 . So the Levi-Civita connection ∇ 0 on M 0 is given by it is a smooth map. Its associated exponential map is no longer given by the exponential map on T M , due to the non-local volume preserving constraint. Geodesics are not defined for all times anymore. Denote by Id the identity map on M . For smooth right invariant vector fields X, Y on M , with X(ϕ) = X • ϕ and Y (ϕ) = Y • ϕ, for vector fields X, Y on M , one has ∇ 0 X Y (Id) = P ∇ X Y . V.I. Arnol'd showed formally in his seminal work [Arn66] that the velocity field u : [0, T ] → H s (T M ) of a geodesic ϕ t in M 0 , with u t :=φ t • ϕ −1 t , is a solution to Euler's equation for the hydrodynamics of an incompressible fluid. Ebin and Marsden gave an analytical proof of that fact in their seminal work [EM69]. (Besides that classical reference, we refere the reader to Arnold and Khesin's book [AK98], or Smolentsev's thourough review [Smo07] for reference works on the weak Riemannian geometry of the configuration space.) The flat two-dimensional torus T 2 offers an interesting concrete example. Its symplectic structure allows to identify a Hilbert basis (A k , B k ) k∈Z 2 \0 of T Id M 0 from an eigenbasis for the Laplace operator on real-valued functions on T 2 ; see e.g. Arnold and Khesin's book [AK98], Section 7 of Chap. 1. Denote by ∂ 1 , ∂ 2 the constant vector fields in the coordinate directions, and k = (k 1 , k 2 ) ∈ Z 2 . One has One can see in the following simulations the image of axis circles by the time 1 map of the associated flow in T 2 , corresponding to different inital conditions for u 0 , with ϕ 0 = Id. The simulations were done using an elementary finite dimensional approximation for the dynamics, using the explicit expressions for the Christoffel symbols first given by Arnold in [Arn66]. We come back Denote by V 2 F (v) the vertical space in T F (v) for the canonical projection map One defines a smooth one form on We Proposition 3.1. Given a path ϕ t (·); e t (·), v t (·) 0≤t≤1 in H s (F (e,v) ), one has pointwise for every X ∈ T Id M . The next two propositions give a description of parallel transport in M and M 0 , respectively, in terms of the vector field H (v) on H s (F (v) ). Proposition 3.2. Let ϕ t (·), v t (·) 0≤t≤1 be a T M -valued path. Then Proof. Given (y, v) ∈ T M , the following map identifies T y M with the vertical subspace of T (y,v) T M , one then has For an H s (F v )-valued path ϕ t (·), v t (·) , one then has the splitting The result follows because composition by V v (y, v; ·) is one-to-one. 
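The Hilbert basis (A_k, B_k) of T_Id M_0 on the flat torus can be written down explicitly by rotating the gradients of the Laplace eigenfunctions. The sketch below is ours and only illustrates this construction: it builds two such fields on a grid and checks numerically that they are divergence-free.

```python
import numpy as np

def A_B_fields(k, X, Y):
    """Divergence-free Fourier vector fields on T^2 = [0, 2*pi)^2 for k in Z^2 \\ {0}:
    A_k ∝ (k2, -k1) cos(k.x),  B_k ∝ (k2, -k1) sin(k.x)  (rotated gradients of eigenfunctions)."""
    k1, k2 = k
    phase = k1 * X + k2 * Y
    perp = np.array([k2, -k1], dtype=float) / np.hypot(k1, k2)
    return perp[:, None, None] * np.cos(phase), perp[:, None, None] * np.sin(phase)

n = 128
xs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing="ij")
A, B = A_B_fields((2, 1), X, Y)

# spectral check that div A = 0
freq = np.fft.fftfreq(n, d=1.0 / n)
kx, ky = np.meshgrid(freq, freq, indexing="ij")
div = np.fft.ifft2(1j * kx * np.fft.fft2(A[0]) + 1j * ky * np.fft.fft2(A[1])).real
print(np.max(np.abs(div)))   # ~ 1e-13
```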
Recall that P stands for Hodge projector on divergence-free vector fields. Proof. Write T M0 M for the section of T M above M 0 , and write Q := id−P : T M0 M → T M0 M , for the projection on the orthogonal in T M of T M 0 . Note that the differential dP of P identifies to P in the fibers, since it is linear. The identification is up to an isomorphism which is exactly the composition by V v , in the sense that for any v, v ∈ T ϕ M . As we work with a T M 0 -valued path (ϕ t , v t ), one has Q(v t ) = 0, at all times, so differentiating this identity with respect to t gives dQ(v t ) = 0. Since P + Q = id, we can conclude with the decomposition (16), by rewriting the expression for the time derivative under the form Cartan and Lie developments Explosion may happen before time 1. This path in M depends not only on m 0 but also on e 0 . Conversely, given any C 1 path (m t ) 0≤t≤1 in M and z 0 = (m 0 , e 0 ) ∈ OM above m 0 , parallel transport of e 0 along the path (m t ) 0≤t≤1 defines a path (z t ) 0≤t≤1 in OM , and setting x t := t 0 e −1 s (ṁ s ) ds, defines a path in R d whose Cartan development is (m t ) 0≤t≤1 . Geodesics are Cartan's development of straight lines in R d . = (a 1 , . . . , a d We recast the definition of Cartan development given above in a finite dimensional setting in the following form well suited for the present infinite dimensional setting. at all times where ϕ t is well-defined. This definition conveys the same picture as above. The map e t , named 'frame', is transported parallely along the path (ϕ t ), whileφ t is given by the image by e t ofẊ t . The existence of a unique Cartan development for a path (X t ) in T Id M is elementary in that case. It follows from Proposition 3.2 that equation (17) is equivalent to requiring that the H s F (e) -valued path (ϕ t , e t ) satisfies the equation Since the one-form H e is smooth, this equation has a unique solution until its possibly finite explosion time. Here is now the form of Cartan development dynamics in M 0 . Recall T Id M 0 is the set of H s divergence-free vector fields on M . Definition 3.5. Let a C 1 path (X t ) in T Id M 0 be given. An M 0 -valued path (ϕ t ) is the Cartan development of (X t ) if there exists a family e t : T Id M 0 → T ϕt M 0 , of bounded linear maps, with e 0 = id, such thaṫ at all times where ϕ t is well-defined. The proof of existence of a unique solution to Cartan's development system (19) in M 0 is not fundamentally different from the case of M , and uses Proposition 3.3 instead of Proposition 3.2. It is however more technical, and full details are given in Appendix A. The system is recast as a controlled ordinary differential equation in the state space with generic element (ϕ, e), f , and dynamics of the form driven by a smooth vector field-valued one form on T Id M 0 . We use Cartan's development map in the configuration manifolds M and M 0 in the next section. We conclude this section by a brief comparison between Cartan development and the Lie group notion of development, commonly used to define the stochastic Euler equation. Let G stand for a finite dimensional Lie group with Lie algebra Lie(G). 
Lie's development operation provides another way of constructing paths with values in G from paths (x t ) 0≤t≤1 in R d , by identifying T g0 G and R d via a linear map ι 0 , and solving the ordinary differential equatioṅ In such a group setting, Malliavin and Airault [AM02] gave a correspondance between the Cartan and Lie notions of development, although this was certainly known to practitioners before; see also [CFM07]. Choose an orthonormal basis of the Lie algebra of G, and denote by c n k, the structure constants, so the Christoffel symbols are given by Γ n k, = 1 2 c n k, − c k ,n + c n,k . Write Γ k for the antisymmetric endomorphism with matrix Γ · k,· in the chosen basis, for 1 ≤ k ≤ d, and consider Γ as a linear map from R d into the set of antisymmetric endomorphism of the Lie algebra. Denote by OLie(G) the orthonormal group of Lie(G). Proposition 3.6. Let (w t ) 0≤t≤1 be a C 1 path in the Lie algebra of G. The path (g t ) 0≤t≤1 solution to the OLie(G) × G -valued equation is the Cartan development of the path (w t ). (The system (20) is reminiscent of the equation in from Appendix A, recasting Cartan's development dynamics in M 0 .) The geodesic started from the identity of G, with direction ω ∈ Lie(G), is in particular given in the Lie picture as the solution (g t ) 0≤t≤1 to the equatioṅ Note that exp tΓ(ω) (ω) ∈ Lie(G). Note also that it is the fact that the Christoffel symbols are constants that allows to reduce the second order differential equation for the geodesics on a generic Riemannian manifold into a first order differential equation, in a Riemannian Lie group setting. Following Euler's picture, it is this group-oriented point of view that has been considered so far in the geometric viewpoint on fluid hydrodynamics, deterministic or stochastic. The naive implementation of Cartan's machinery in terms of Lie development runs into trouble in the infinite dimensional setting of M or M 0 . This can be seen on the example of the two dimensional torus and the volume preserving diffeomorphism group as a consequence of the fact that Christoffel symbols define antisymmetric unbounded operators that have no good exponential in the orthonormal group of T Id M 0 . The problem comes from the fact that M of M 0 have a fixed regularity. See Malliavin's works [Mal99,CFM07] for a quantification of the loss of regularity of Brownian motion in the set of homeomorphisms of the circle, as time increases. The Lie development picture of Cartan's development map can however be used for numerical purposes for simulating kinetic Brownian motion in M 0 . It corresponds to havingẇ t a Brownian motion on the unit sphere of the H s space of divergence-free vector fields on M ; see Section 4. Kinetic Brownian motion on the diffeomorphism group with the notations of Section 2.1. We assume that the trace condition holds true. Note that the faster λ i goes to ∞, the lesser there is noise in W . The extreme case corresponds to only finitely many non-null α i . On the other extreme, the bigger the multiplicity of α 2 1 is, the more noise there is in W . The trace condition (21) holds automatically as soon as α 2 1 has multiplicity three. Definition 4.2. A sequence (Q n ) n≥0 of probability measures on Ω 0 , F is said to converge locally weakly to some limit probability Q if the sequence Q n •T −1 R of probability measures on C([0, 1], B R ) converges weakly to Q•T −1 R , for every R > 0. 
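The correspondence above can be checked on a small finite-dimensional example. The sketch below is ours: it uses the standard expression of the Christoffel symbols of a left-invariant metric in an orthonormal basis, Γ^n_{kℓ} = ½(c^n_{kℓ} − c^k_{ℓn} + c^ℓ_{nk}) (which we take as the intended reading of the formula quoted above), specialised to so(3), and verifies that each Γ(ω) is an antisymmetric endomorphism, as required in equation (20).

```python
import numpy as np

# structure constants of so(3) in the standard orthonormal basis: [e_i, e_j] = sum_k c[i, j, k] e_k
c = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    c[i, j, k], c[j, i, k] = 1.0, -1.0

# Christoffel symbols in an orthonormal basis of a left-invariant metric:
# Gamma[n, k, l] = Gamma^n_{k,l} = 1/2 * (c^n_{kl} - c^k_{ln} + c^l_{nk})
Gamma = np.zeros((3, 3, 3))
for n in range(3):
    for k in range(3):
        for l in range(3):
            Gamma[n, k, l] = 0.5 * (c[k, l, n] - c[l, n, k] + c[n, k, l])

def Gamma_of(omega):
    """Gamma(omega): the endomorphism with matrix sum_k omega^k Gamma_k, where Gamma_k = Gamma[:, k, :]."""
    return np.einsum("k,nkl->nl", omega, Gamma)

omega = np.array([0.3, -1.2, 0.5])
G = Gamma_of(omega)
print(np.allclose(G, -G.T))          # True: Gamma(omega) is antisymmetric
# for so(3) with its bi-invariant metric, Gamma(omega) reduces to 1/2 * ad_omega = (1/2)[omega, .]
```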
We proved in Theorem 2.14 that the canonical rough path lift X σ of x σ σ 2 t 0≤t≤1 , converges weakly in the space of weak geometric p-rough paths in H, to the Stratonovich Brownian rough path B = (B, B), with covariance operator Since one can rewrite Equation (22) as a rough differential equation driven by the rough path X σ d dt (ϕ σ t , e σ t ) = H e ϕ σ t , e σ t ; dX σ t , the continuity of the Itô-Lyons solution map gives the following theorem. Recall that the solution of a rough differential equation driven by the Stratonovich Brownian rough path coincides almost surely with the solution of the corresponding Stratonovich differential equation. Theorem 4.3. The M -valued part (ϕ σ t ) of kinetic Brownian motion is converging locally weakly to the projection on M of the H s (F (e) )-valued Brownian motion (ϕ t , e t ) solution to the stochastic differential equation We remark here that the stochastic homogenization methods that X.-M. Li used in [Li16] to prove the homogenization result for kinetic Brownian motion in a finite dimensional, complete, Riemannian manifold, require a positive injectivity radius and a uniform control on the gradient of the distance function over the whole manifold. It is unclear that anything like that is available in the present infinite dimensional setting, or in the setting of volume-preserving diffeomorphisms investigated in the next section, especially given the fact that M or M 0 have infinite negative curvature in some directions. The robust pathwise approach of rough paths allows to circumvent these potential issues. Kinetic Brownian motion in M 0 Let H 0 stand for the closed subspace of H of divergence-free vector fields on the fluid domain M . It is the tangent space at the identity map of the closed submanifold M 0 of M of diffeomorphisms that leave invariant the Riemannian volume form of M . The intersection H s+a is smooth, the problem reduces to the following question. Let a Banach manifold A and a Hilbert space H, be given together with a smooth map F : A × H → H, that is linear with respect to its second argument. Denote by a and b generic elements of A. Prove that the curryfication Cur F : a ∈ A → F (a, ·) ∈ L(H) is well-defined and smooth. Write d for the differential operator. We show that d(Cur F ) = Cur (∂ a F ). This will be enough, since we can then bootstrap the construction to show that d n (Cur F ) = Cur (∂ n a F ), is differentiable for any n. Because the result is local, we can assume without loss of generality that A an open set of a Banach space. Fix a ∈ M , and let U × B(0, ε) be a convex neighbourhood of (a, 0) in A × H, such that ∂ 2 a F ∞ < 1 + ∂ 2 a F (a, 0) . Then for all b ∈ U and |w| < 1, one has The conclusion follows from the fact that we have in particular the estimate for a positive constant c independent of b. Choose now a C 1 path (X t ) with values in T Id M 0 , and zero initial condition. Let (ϕ t , e t ), f t ) be the solution in Theorem A.3. The path (ϕ t ) takes values in M 0 , and coincides with the Cartan development of (X t ). We further haveφ t = e t f t (Ẋ t ) , so the dynamics (23) does not depend on the extension P of the Hodge projector P used in the definition of H f . Proof. Let Y ∈ T Id M 0 , be a fixed divergence-free vector field on M . We need to show that ∇ 0φ t e t (Y) = 0, on the whole time interval [0, ζ). From Proposition 3.3, this is equivalent to showing that we have d dt ϕ t , e t f t (Y) = dP H (v) ϕ t , e t f t (Y) ;φ t . We have We prove that e t (Y) is divergence-free. 
Define for that purpose the subset I ⊂ [0, ζ) of times t such that e t (Z) is divergence-free for all Z ∈ T Id M 0 , and ϕ t preserves the volume form. It is a non-empty closed subset of [0, ζ). Fix t 0 ∈ I. It suffices to prove that t 0 is in the interior of I for a well-chosen extension P of P , possibly different from P . We choose for P any smooth extension of P defined on a neighbourhood of ϕ t0 , such that P • P = P . Set Q := id − P : T M → T M , so for a fixed Z ∈ T Id M 0 , the quantity This differential equation satisfies the classical Picard-Lindelöf assumptions, so it has a unique solution with given initial condition. Since Z 0 = 0 and the constant zero vector field is a solution to the equation, Z t is identically zero, and e t (Z) is divergence-free. This holds true for any Z, in a time interval independent of Z. It follows in particular thatφ t = e t f t (Ẋ t ) is locally divergence-free, and ϕ t preserves the volume form, in a neighbourhood of the time t 0 . The interval I is thus both closed and open, so I = [0, ζ). The statement of Theorem A.3 follows, since P e t (f t (Y)) = e t f t (Y) , so we get d dt ϕ t , e t f t (Y) = dP F d dt (ϕ t , e t ), f t (Y) = dP d ds s=t ϕ s , e s f t (Y) = dP H (v) ϕ t , e t f t (Y) ;φ t , using Proposition 3.1 in the last equality.
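For the reader who wishes to experiment with the volume-preserving setting, we note that the Hodge projector P used throughout admits a simple spectral expression on the flat torus. The following sketch is ours and purely illustrative; it projects a vector field onto divergence-free fields mode by mode and checks the result.

```python
import numpy as np

def leray_project(u):
    """Hodge (Leray) projection of a vector field on the flat torus T^2 onto
    divergence-free fields, computed spectrally:  (P u)^(k) = u^(k) - k (k . u^(k)) / |k|^2."""
    n = u.shape[-1]
    freq = np.fft.fftfreq(n, d=1.0 / n)
    kx, ky = np.meshgrid(freq, freq, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                      # avoid 0/0; the k = 0 mode carries no divergence anyway
    u1, u2 = np.fft.fft2(u[0]), np.fft.fft2(u[1])
    kdotu = kx * u1 + ky * u2
    p1 = np.fft.ifft2(u1 - kx * kdotu / k2).real
    p2 = np.fft.ifft2(u2 - ky * kdotu / k2).real
    return np.stack([p1, p2])

# sanity check: the divergence of the projected field vanishes (up to round-off)
n = 64
rng = np.random.default_rng(0)
u = rng.normal(size=(2, n, n))
p = leray_project(u)
freq = np.fft.fftfreq(n, d=1.0 / n)
kx, ky = np.meshgrid(freq, freq, indexing="ij")
div = np.fft.ifft2(1j * kx * np.fft.fft2(p[0]) + 1j * ky * np.fft.fft2(p[1])).real
print(np.max(np.abs(div)))   # ~ 1e-13
```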
Bond-forming and electron-transfer reactivity between Ar2+ and N2 Collisions between Ar and N2 have been studied using a coincidence technique at a centre-of-mass (CM) collision energy of 5.1 eV. Four reaction channels generating pairs of monocations are observed: Ar + N2 , Ar + N, ArN + N and N + N. The formation of Ar + N2 + is the most intense channel, displaying forward scattering but with a marked tail to higher scattering angles. This scattering, and other dynamics data, is indicative of direct electron transfer competing with a ‘sticky’ collision between the Ar and N2 reactants. Here Ar + is generated in its ground (P) state and N2 + is primarily in the low vibrational levels of the CSu + state. A minor channel involving the initial population of higher energy N2 + states, lying above the dissociation asymptote to N + N, which fluoresce to stable states of N2 + is also identified. The formation of Ar + N by dissociative single electron transfer again reveals the involvement of two different pathways for the initial electron transfer (direct or complexation). This reaction pathway predominantly involves excited states of Ar (D and S) populating N2 * in its dissociative CSu , 2Pg and D Pg states. Formation of ArN + + N proceeds via a direct mechanism. The ArN is formed, with significant vibrational excitation, in its ground (XS ) state. Formation of N + N is also observed as a consequence of double electron transfer forming N2 . The exoergicity of the subsequent N2 2+ dissociation reveals the population of the APu and D Pg dication states. Introduction Doubly charged positive ions (dications) are found in a variety of energised media including the ionospheres of planets and their satelites. [1][2][3][4][5][6][7][8] As demonstrated in several studies, both atomic and molecular dications exhibit significant bimolecular reactivity following collisions with neutral species. [9][10][11][12][13] Indeed, the lifetimes of atomic dications in planetary ionospheres are expected to be primarily determined by such collisional processes. 14 This significant dicationic reactivity suggests that dication chemistry can play a role in ionospheric processes; 15 for example, dications are proposed to be involved in the chemistry of complex molecule assembly through carbon chain-growth. 14,[16][17][18][19] Atomic dications have been detected in planetary ionospheres. 20 However, it is difficult to unambiguously detect many ionospheric molecular dications using simple mass spectrometry, the usual sampling technique. This difficulty arises because there are often monocations with the same mass to charge ratio as the target dication present in these environments. 9 The lack of definitive detection of ionospheric molecular dications may account for the historical neglect of these species in models of ionospheric chemistry. 14 In order to identify dication reactions of ionospheric interest, laboratorybased experiments to probe dicationic reactivity, along with spectroscopic identification techniques, are vital. 21 The value of such laboratory work is shown by experiments that have identified the role molecular dications play in atmospheric erosion processes. [22][23][24][25] Following our recent study of the reactions of Ar 2+ + O 2 , 26 this paper presents a detailed investigation of the interactions between Ar 2+ and N 2 . 
This work both further elucidates the energetics, reactivity and reaction mechanisms of dications and also allows a better understanding of the relevance and influence of Ar 2+ /N 2 collisions in planetary environments. Argon constitutes B1% of the Earth's atmosphere and is also found in the atmospheres of the Moon, Mercury and Mars. [27][28][29][30][31] In the upper reaches of these atmospheres, the formation of the Ar 2+ dication is likely, as recognised by Thissen et al. 14 The bimolecular reactivity of Ar 2+ with a variety of rare gases and simple molecules has been studied previously. [32][33][34][35][36][37][38] Most of the early investigations of Ar 2+ -neutral collisions were carried out at 0.1-20 keV collision energies. At these significant collision energies only single-electron transfer (SET) and double-electron transfer (DET) channels were observed. In contrast, more recent experiments, utilising lower collision energies (o100 eV) revealed bond-forming chemistry following the interactions of Ar 2+ with various neutral species. 26,[39][40][41][42][43][44] The formation of Ar-X (X = O, N, C) bonds, detected in the above studies, confirms the bimolecular reactivity of rare gas dications as an effective route to the formation of unusual chemical species. Nitrogen (N 2 ) is the dominant species in the atmospheres of the Earth and Titan, and is present in the atmospheres of other planets and satelites. 14,15,[28][29][30][31][45][46][47] The reactions resulting from collisions of Ar 2+ with N 2 have been the subject of previous investigation. As noted above, at high collision energies (keV), SET and DET pathways were identified, as expected, although these studies did not probe the reactivity at an electronic stateselective level. 32,[34][35][36]48,49 However, in 1999, Tosi et al. 39 observed the formation of ArN 2+ following the collisions of Ar 2+ with N 2 , demonstrating a more complex chemistry in this collision system than the earlier studies had indicated. Indeed, molecular ions of ArN have attracted interest due to their rare gas bond and ArN + is a well-known contaminant in plasma-based mass spectrometry. [50][51][52] The formation of ArN + has also been observed as a product of monocation-neutral 53,54 and dication-neutral reactions. 43 In the latter case, the production of ArN + and ArNH + was observed following reactions of Ar 2+ with NH 3 ; the reaction proceeding via the formation of a collision complex [ArNH 3 ] 2+ . Computational investigations predict ArN 2+ to be kinetically stable 55 whilst ArN + is found to have the highest binding energy of the ArX + (X = Li-Ne) species. 56 The stability of ArN n+ species, and the facility of dication-neutral reactions to form new bonds, suggests that there is perhaps a richer chemistry resulting from the collisions of Ar 2+ and N 2 than has been previously reported. In this investigation we study collisions between Ar 2+ and N 2 , at a centre-of-mass (CM) collision energy of 5.1 eV, using position-sensitive coincidence mass spectrometry (PSCO-MS). The PSCO-MS technique involves coincident product detection via time-of-flight mass spectrometry using a position-sensitive detector. This experimental technique has been shown to provide comprehensive information on the dynamics and energetics of dicationic bimolecular reactions that generate pairs of monocationic products. 
9,26,37,57 For the Ar 2+ /N 2 collision system our experiments reveal the dynamics and energetics of the SET and DET channels, including both dissociative and non-dissociative SET reactions. We see clearly that the dissociative SET reaction proceeds via two mechanisms: a long-range direct process, and a process involving the formation of a collision complex, [Ar-N 2 ] 2+ . We also report, for the first time, to the best of our knowledge, a bond forming channel that generates ArN + + N + via a direct mechanism. Experimental Coincidence techniques involve the simultaneous detection of two or more products from a single reactive event. Bimolecular reactions of dications with neutral species often generate pairs of monocations and these pairs of ions are detected in coincidence in the PSCO-MS experiment. The PSCO-MS apparatus used in this study has been described in detail in the literature. [57][58][59] Briefly, a pulsed beam of dications is directed into the field-free source region of a time-of-flight mass spectrometer (TOF-MS) where the dications interact with a jet of the neutral reactant. Subsequent application of an extraction voltage to the source region allows the TOF-MS to detect the cation pairs generated from the dication-neutral interactions. The detection of these ions involves recording their arrival time, and position, at a large microchannel-plate detector. From this raw data, a list of flight times and arrival positions of the ions detected in pairs, a two-dimensional mass spectrum can be generated revealing the different reactive channels. The positional data accompanying the ionic detections shows the relative motion of the products of each reactive event, providing a detailed insight into the mechanisms of each reactive channel. 59 In this work the Ar 2+ ions are generated, along with Ar + , via electron ionisation of Ar (BOC, 99.998%) by 100 eV electrons in a custom-built ion source. The positively charged argon ions are extracted from the ion source and pass through a hemispherical energy analyser to restrict the translational energy spread of the final Ar 2+ beam to B0.3 eV. The continuous beam of ions exiting the hemispherical analyser is then pulsed, using a set of electrostatic deflectors, before being accelerated and focussed into a commercial velocity filter. The velocity filter is set to transmit just the 40 Ar 2+ (m/z = 20) ions. The resulting pulsed beam of energy-constrained Ar 2+ ions is then decelerated to less than 10 eV in the laboratory frame before entering the source region of the TOF-MS. In the source region the beam of dications is crossed with an effusive jet of N 2 (BOC, 99.998%). Single-collision conditions 60 are achieved by employing an appropriately low pressure of N 2 and, hence, most dications do not undergo a collision and only a small percentage experience one collision. Such a pressure regime ensures no secondary reactions, due to successive collisions with two N 2 molecules, influences the Ar 2+ reactivity we observe. An electric field is applied across the TOF-MS source when the dication pulse reaches the centre of this region. This electric field accelerates positively charged species into the second electric field (acceleration region) of the TOF-MS and then on into the flight tube. At the end of the flight tube, the cations are detected by a position-sensitive detector comprising a chevron-pair of microchannel plates located in front of a dual delay-line anode. 
57 The voltage pulse applied to the source region also starts the ion timing circuitry, to which the signals from the detector provide stop pulses. The experiments in this work employed both high (183 V cm À1 ) and low (28.5 V cm À1 ) TOF-MS source fields. As discussed in more detail below, the lower source field results in better energy resolution in the resulting PSCO-MS data. However, in these low field spectra ions with high transverse (off-axis) velocities do not reach the detector. Signals from the detector are amplified and discriminated before being passed to a PC-based time-to-digital converter. If two ions are observed in the same TOF cycle, a coincidence event is recorded and each ion's arrival time and impact position on the detector are stored for off-line analysis. The use of single-collision conditions ensures 'false' coincidences are kept to a minimum. The ion pairs data can be plotted as a 2D histogram, a 'pairs spectrum', where the time of flights (t 1 , t 2 ) of each ion in the pair are used as the (x, y) co-ordinates. Peaks in the pairs spectrum readily identify bimolecular reaction channels that result in a pair of positively charged product ions. Each such peak, the group of events corresponding to an individual reaction channel, can then be selected for further off-line analysis. As shown in previous work, the positional and time of flight information for each ion of a pair can be used to generate their x, y and z velocity vectors in the laboratory frame; here the z-axis is defined by the principal axis of the TOF-MS. 57 The x and y velocity vectors of an ion are determined from the associated positional information and flight time; the z vector is determined from the deviation of the observed TOF from the expected TOF of the same ion with zero initial kinetic energy. The laboratory frame velocities are then converted into the CM frame using the initial dication velocity. 57 Often the pair of monocations resulting from the reaction between a dication and a neutral are accompanied by a neutral product: a threebody reaction. A powerful feature of the PSCO-MS experiment is that the CM velocity of such a neutral product can be determined from the CM velocities of the detected ionic products via conservation of momentum. 57 To reveal the dynamics of a given reaction channel, a CM scattering diagram (Fig. 1) can be generated from the velocities of the product ions. Such CM scattering diagrams are radial histograms that, for each event collected for a given reaction channel, plot the magnitude of the products' CM velocity |w i | as the radial co-ordinate and the scattering angle y between w i and the CM velocity of the incident dication as the angular coordinate. In the kinematics that apply in our experiment, where the dication is heavier and markedly faster than the neutral, the velocity of the incident dication is closely oriented with the velocity of the centre of mass. In our CM scattering diagrams, since 01 r y r 1801, the data for one product can be shown in the upper semi-circle of the figure and the data for another product in the lower semi-circle, as the scattering of each ion is azimuthally symmetric. For three-body reactions, internal-frame scattering diagrams can be a powerful aid in interpreting the reaction dynamics. In this class of scattering diagram |w i | is again the radial coordinate, but the angular coordinate is now the CM scattering angle with respect to CM velocity of one of the other product species. 
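The kinematic reconstruction just described can be summarised in a few lines. The sketch below is illustrative only: the variable names and the synthetic input velocities are ours and do not represent measured data. It converts lab-frame product velocities to the CM frame, recovers the velocity of the undetected neutral from momentum conservation, and evaluates the total kinetic energy release.

```python
import numpy as np

AMU = 1.66053906660e-27     # kg
EV = 1.602176634e-19        # J

def cm_analysis(masses_amu, v1_lab, v2_lab, m_dication_amu, m_neutral_amu, v_dication_lab):
    """Three-body kinematics for a dication-neutral collision: lab-frame velocities (m/s)
    of the two detected ions -> CM-frame velocities w1, w2, the CM velocity w3 of the
    undetected neutral (momentum conservation), and the total kinetic energy release T (eV)."""
    m1, m2, m3 = (m * AMU for m in masses_amu)
    M_dic, M_neu = m_dication_amu * AMU, m_neutral_amu * AMU
    v_cm = M_dic * np.asarray(v_dication_lab, float) / (M_dic + M_neu)   # neutral reagent ~ at rest
    w1 = np.asarray(v1_lab, float) - v_cm
    w2 = np.asarray(v2_lab, float) - v_cm
    w3 = -(m1 * w1 + m2 * w2) / m3
    T = 0.5 * (m1 * w1 @ w1 + m2 * w2 @ w2 + m3 * w3 @ w3) / EV
    return w1, w2, w3, T

# synthetic example for a three-body channel Ar2+ + N2 -> Ar+ + N+ + N (numbers are illustrative)
w1, w2, w3, T = cm_analysis((40.0, 14.0, 14.0),
                            v1_lab=(5.0e3, 5.0e2, 0.0),
                            v2_lab=(1.0e3, -4.0e3, 0.0),
                            m_dication_amu=40.0, m_neutral_amu=28.0,
                            v_dication_lab=(7.0e3, 0.0, 0.0))
# the exoergicity then follows as Delta E = T - E_com (see the expression given below)
```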
From the CM velocities of the product species the total kinetic energy release (KER) T for a given reactive event can also be determined. 57 The exoergicity of the reaction ΔE can then be determined from T and the CM collision energy, E_com:

ΔE = T − E_com = −(E_products − E_reactants)

where E_products and E_reactants are the relative energies of the product and reactant states respectively. If the products lie lower in energy than the reactants, the resulting exoergicity will be positive. Performing this analysis for all the events collected for a given reaction channel provides a histogram of the exoergicity of the detected reactive events. From knowledge of the available electronic states of the reactants and products the exoergicity spectrum can reveal the electronic states involved in the reaction.

Results and discussion

PSCO-MS spectra were recorded following the collisions of Ar²⁺ with N₂ at E_com = 5.1 eV. The 'pairs' spectrum revealed the four reaction channels shown in Table 1. The most intense channel (Rxn. I) is a non-dissociative single electron transfer process (ND-SET), producing Ar⁺ + N₂⁺. A dissociative SET (DSET) reaction, forming Ar⁺ + N⁺ + N, is also observed (Rxn. II) with a slightly lower intensity than the ND-SET channel. A bond-forming channel (Rxn. III) is also observed, producing ArN⁺ + N⁺. To our knowledge, the formation of ArN⁺ from the interactions of Ar²⁺ and N₂ has not been previously observed. Finally, double electron transfer (DET) is observed, resulting in the formation of N⁺ + N⁺ via N₂²⁺ (Rxn. IV). PSCO-MS experiments were also repeated at a low TOF-MS source field to yield a higher energy resolution in the exoergicity spectrum (E_com = 4.5 eV). As discussed below, these low source field experiments reveal a minor, low energy release, pathway in the ND-SET channel (Rxn. I).

Fig. 1 shows the CM scattering diagram for the Ar⁺ + N₂⁺ product ions observed from the ND-SET reaction, Ar²⁺ + N₂ → Ar⁺ + N₂⁺. A forward-scattering pattern, typical of that reported before for this class of reaction, is observed. 37,61,62 Forward scattering indicates that the velocity of the Ar⁺ product ion is predominantly oriented in the same direction as the velocity of the reactant Ar²⁺, w(Ar²⁺), while the velocity of the N₂⁺ product ion is directed anti-parallel to w(Ar²⁺). This scattering pattern is typical of a direct process, where the electron transfer occurs at a relatively large interspecies separation (3-6 Å), and is generally well represented by a Landau-Zener (LZ) formalism. 59,63-65 The scattering angles of the Ar⁺ ion (the angle between the velocity of the reactant dication w(Ar²⁺) and the velocity of the Ar⁺ product ion) are shown more clearly in Fig. 2. Fig. 2 reveals that whilst the scattering is dominated by θ < 90°, the scattering is not concentrated as intensely at lower angles as might be expected for a typical forward-scattered ND-SET reaction. 37,61,62 For example, in the SET reaction between Ne²⁺ + Ar, also investigated with PSCO-MS, the Ne⁺ product was forward scattered with an angular distribution peaked at ∼15°. 65 There is also a tail in our data, to higher scattering angles, manifested in the scattering diagram (Fig. 1) by the extra 'bumps' involving higher velocity ions scattered between 70° < θ < 110°.
Both of these observations hint strongly that there is a distinct contribution to the scattering in this channel involving a longer-lived association, or a 'sticky collision', between the reactant species, in addition to the usual direct (LZ) mechanism. As we will see below, the analysis of the N₂⁺ electronic states populated in this ND-SET channel, and the dynamics exhibited by the DSET channel, also point towards a contribution from such a non-direct reaction pathway.

Fig. 3 shows a histogram of the exoergicities recorded in the ND-SET reaction channel, Ar²⁺ + N₂ → Ar⁺ + N₂⁺. In the exoergicity distribution, there is a maximum centred around 5.8 eV, with a full width at half maximum (FWHM) from 4.1-7.2 eV. To interpret the exoergicity spectrum for this channel, we need to consider the accessible electronic states of the reactant and product species. For these species the relevant energetic data are readily available. The Ar²⁺ beam used in this experiment has been shown to be composed of ions in the three electronic states derived from the Ar²⁺ p⁴ configuration (³P, ¹D and ¹S), with relative abundances that are approximately statistical. 26,66,67 There are two energetically accessible electronic states for the Ar⁺ product (²P and ²S). 68 The reactant N₂ molecule, admitted as an effusive beam, will be in its ground vibronic state, X¹Σg⁺, v = 0. The ground state of N₂⁺ (X²Σg⁺) lies 15.58 eV above the ground state of N₂. 69,70 The lowest energy dissociation asymptote of N₂⁺ (N⁺(³P) + N(⁴S)) lies at 24.3 eV relative to N₂ (X¹Σg⁺), which corresponds to the energy of N₂⁺(C²Σu⁺, v = 3). 68,71 Photoionisation studies have shown that N₂⁺ states generated with a higher internal energy than 24.3 eV have dissociation lifetimes less than the timescale of our experiment and therefore will not contribute to the N₂⁺ counts observed in this channel. 71-74

From the above energetic considerations, we find that there are four possible ND-SET reaction pathways, (a)-(d), that match the range of exoergicities 69,70 shown in Fig. 3. Additionally, whilst pathway (e) has an exoergicity (8.9 eV) clearly outside of the observed range, if the N₂⁺(B²Σu⁺) product is formed with significant vibrational excitation, it could yield exoergicities in accord with our experimental observations.

Non-dissociative SET
(a) Ar²⁺(³P) + N₂(X¹Σg⁺) → Ar⁺(²P) + N₂⁺(C²Σu⁺)
(b) Ar²⁺(¹D) + N₂(X¹Σg⁺) → Ar⁺(²P) + N₂⁺(C²Σu⁺)
(c) Ar²⁺(³P) + N₂(X¹Σg⁺) → Ar⁺(²P) + N₂⁺(D²Πg)
(d) Ar²⁺(¹D) + N₂(X¹Σg⁺) → Ar⁺(²P) + N₂⁺(D²Πg)
(e) Ar²⁺(³P) + N₂(X¹Σg⁺) → Ar⁺(²P) + N₂⁺(B²Σu⁺)

The match of the calculated exoergicities of pathways (a)-(e) with the experimental spectrum is good, particularly when allowing for potential vibrational excitation of the N₂⁺ product ion. Pathways (a)-(e) are all spin-allowed and involve the formation of the Ar⁺ ion in its ground ²P state; their exoergicities are indicated in Fig. 3. Reviewing what is known of the N₂⁺ electronic states involved in these pathways is instructive. Photoelectron spectra show low intensities for the formation of N₂⁺(D²Πg) from N₂ in this energy range due to Franck-Condon effects; in fact the D state is only stable to dissociation at significantly longer bond lengths than that of the neutral molecule. Pathway (e) involves ground state Ar²⁺(³P) and forms N₂⁺ in its B²Σu⁺ state. As noted above, populating the B²Σu⁺ state and giving an exoergicity within the observed range necessitates the state being formed with a high vibrational quantum number.
The potential energy surface of the N₂⁺(B²Σu⁺) state has a deep well and therefore could support vibrational excitation; however, photoelectron spectra show that the first two vibrational levels, v = 0 and v = 1, are predominantly populated in a vertical transition. 69,70 In our spectra it is not possible to resolve the different N₂⁺ channels potentially involved in this ND-SET reaction. Pathways (c)-(e) involve the formation of N₂⁺(D²Πg) or vibrationally excited levels of N₂⁺(B²Σu⁺). As noted above, such transitions are not favoured in a vertical transition from N₂(X¹Σg⁺), and previous experiments studying dicationic electron transfer have shown that the ionising transitions in the neutral are often vertical in nature. 75,76 However, ionising transitions in the neutral collision partner that produce monocations in vibrational states well outside the Franck-Condon zone have also been reported. 77 Additionally, the longer-lived association observed between the reactant species in this channel (identified above) will facilitate the formation of N₂⁺ states away from the equilibrium geometry of N₂. However, in contrast to pathways (c)-(e), pathways (a) and (b) involve the population of the lower vibrational levels of N₂⁺(C²Σu⁺), transitions which are favoured in the photoelectron spectrum of N₂, and are inherently more probable than transitions to the higher vibrational levels of the B²Σu⁺ state or the D²Πg state. Thus pathways (a) and (b), involving N₂⁺(C²Σu⁺), are most likely the dominant pathways in the ND-SET reaction, but a minor contribution from pathways (c)-(e) is also possible.

As discussed before in the literature, higher resolution energetic information is obtainable from the PSCO-MS experiment using a low TOF-MS source field. 37,57,65 In low source field experiments conducted as part of this study, the counts where the Ar⁺ ions were forward scattered relative to Ar²⁺ were masked by reactions occurring away from the source region. Thus, only events where the Ar⁺ ions were backward scattered could be selected for analysis. Fig. 4 shows the resulting exoergicity spectrum of these back-scattered events for the ND-SET channel. As previously noted, low source field experiments do not collect product ions with high transverse velocities. Therefore, exoergicity spectra from low source field experiments discriminate in favour of events with lower exoergicities. The exoergicity of the back-scattered events in the low source field experiment (Fig. 4) ranges from ∼2.5 eV to 5.5 eV. The range of exoergicities revealed in Fig. 4 is clearly present at the low energy extreme of the exoergicity distribution generated by the high source field experiment (Fig. 3). One way to account for these low exoergicities is to invoke the population of higher energy, long-lived, N₂⁺ states than those involved in pathways (a)-(e). However, given the extensive studies of N₂⁺, it is unlikely that there are previously unknown long-lived metastable states of N₂⁺ lying above the dissociation asymptote to N⁺ + N. The formation of stable states of N₂⁺ (X, A) in processes involving low exoergicities (2.7 eV, 1.6 eV) is possible from reactions of Ar²⁺(¹S) with N₂, if Ar⁺(²S) is generated as the atomic monocation. However, such processes cannot account for the signals around 3 eV in Fig. 4.
Indeed, Ar²⁺(¹S) is a minor component of the dication beam, and formation of Ar⁺(²S) from Ar²⁺(¹S) involves a two-electron transition, usually a strong indication of a disfavoured process. Thus, we do not feel such reactions can explain the form of the exoergicity spectrum (Fig. 4) at low exoergicities. A more likely explanation of these low exoergicity processes, generating long-lived N₂⁺ ions, is that the N₂⁺(C²Σu⁺) state is formed with an energy above the first dissociation limit, before fluorescing to a bound state of N₂⁺, most likely X²Σg⁺. Populating these higher vibrational levels of the C state will result in the reduced exoergicity we observe. Several of the electronic excited states of N₂⁺ higher in energy than X²Σg⁺, including the N₂⁺(C²Σu⁺) state, are known to fluoresce to lower-lying electronic states. 78-80 Since our energetic analysis above clearly shows population of bound levels of the C state, it is not unreasonable to propose that higher levels of the C state are also populated, and these levels then, in competition with their dissociation, fluoresce to result in long-lived N₂⁺ ions. In Fig. 4, there is perhaps a hint of fine structure that could result from the vibrational structure of the N₂⁺ state populated in this low exoergicity region. The spacings of these features (Fig. 4) appear to be of the order of ∼0.25 eV, which is the vibrational spacing of the N₂⁺(C²Σu⁺) state. 69 The competition between fluorescence and predissociation has been studied in depth for N₂⁺(C²Σu⁺). 71,81,82 Predissociation dominates over fluorescence when N₂⁺ is formed with more energy than the lowest energy dissociation asymptote (24.3 eV). 82-84 However, predissociation of the C state will not generate counts in this ND-SET channel, but instead contributes to the counts in the DSET channel, Rxn. II, as discussed below. So although the yield of the C state fluorescence is low, the long-lived N₂⁺ ions resulting from this emissive process will be sensitively detected in the low-field spectrum.

To summarise, the ND-SET reaction forming Ar⁺ + N₂⁺ is the dominant channel resulting from the collisions of Ar²⁺ and N₂. A broadly forward-scattering dynamic was observed, indicative of a direct, long-range electron transfer, but with a significant tail to higher scattering angles, indicative of a competitive mechanism involving a longer-lived association between the reactant species. After the electron transfer, Ar⁺ is generated in its ground (²P) state and N₂⁺ is likely predominantly generated in its C²Σu⁺ state, with perhaps a minor contribution from the B²Σu⁺ and D²Πg states. The high dissociation threshold of N₂⁺ and the involvement of the most abundant Ar²⁺ states present in the beam explain why this ND-SET reaction is the most intense product channel in the Ar²⁺ + N₂ collision system.

Dissociative single electron transfer

The pairs spectrum we record following collisions of Ar²⁺ with N₂ shows a clear peak corresponding to the formation of Ar⁺ + N⁺: a DSET reaction. The general mechanism for dicationic DSET reactions has been well investigated, 37,64,85-87 and involves an initial LZ-style single electron transfer, populating a product cation in a dissociative state (e.g., N₂⁺*), followed by subsequent dissociation of that ion. In the CM scattering diagram these dynamics result in strong forward scattering (Fig. 5a),
with the velocity of the Ar⁺ product, w(Ar⁺), strongly oriented with w(Ar²⁺). The scattering angles of the Ar⁺ ions are shown in more detail in Fig. 6, which reveals a bimodal distribution: a large peak at low scattering angles, consistent with a direct mechanism, along with an additional broad peak at higher scattering angles. This secondary peak has a broad maximum close to 90°, typical of processes involving isotropic scattering associated with a longer temporal association between the Ar²⁺ and N₂ species. That is, the involvement of a collision complex, as also suggested by the ND-SET data discussed above. Again, it seems clear that both a direct mechanism and a mechanism involving complexation are operating in this channel. Fig. 5b shows the internal frame scattering of the N⁺ and N products, relative to the velocity of the Ar⁺ product. The N⁺ and N fragments are clearly both back-scattered, away from the Ar⁺ product ion, confirming that any complex between the N₂ and Ar²⁺ initially dissociates into N₂⁺* + Ar⁺. Fig. 5b also clearly shows that the N⁺ ion flies away from the Ar⁺ ion with a greater velocity than the N product. Such a signature has been observed before in DSET reactions, 26 and arises because the dissociation of N₂⁺* takes place close enough to the departing Ar⁺ ion that Coulomb repulsion preferentially accelerates the charged N⁺ fragment. 26

The experimentally determined total exoergicity of the DSET reaction forming Ar⁺ + N⁺ + N (see Fig. S1 in the ESI†) has a peak at 6.5 eV with a FWHM from 4.4 eV to 8.0 eV. The bulk of the counts in this spectrum can be accounted for by contributions from the first and second excited states of Ar²⁺ (¹D and ¹S) forming N⁺ + N at the three lowest energy dissociation limits of N₂⁺, together with Ar⁺(²P). 68,88 The three channels involving Ar²⁺(¹S) result in nominal exoergicities of 7.4 eV, 5.5 eV, and 5.0 eV, in good accord with the bulk of the exoergicity distribution. Additionally, minor structure towards lower exoergicities could point to the involvement of Ar²⁺(³P) or N⁺(¹S). Considering the higher energy events in the exoergicity spectrum (Fig. S1, ESI†) we note that, as previously discussed, if N₂⁺ is formed with an energy of over 24.33 eV relative to the ground state of N₂ (equivalent to N₂⁺(C²Σu⁺, v = 3)) and does not fluoresce, it will dissociate within the lifetime of our experiment and therefore can contribute to the DSET channel. 71-74 The maximum energy that can be released from Ar²⁺(¹S) accepting an electron to form the ground state monocation, Ar⁺(²P), is 31.75 eV. Therefore, the maximum exoergicity in this channel is 7.42 eV if we restrict ourselves to the p⁴ states of Ar²⁺. There is a significant number of counts observed in this channel above this theoretical maximum of 7.4 eV (∼30%, see Fig. S1, ESI†). These higher energy events are too numerous, and extend to too high an energy, to be explained by the spread in the translational energy of the Ar²⁺ ions in the beam (FWHM = 0.3 eV). One possible source for these higher energy events is higher-lying excited Ar²⁺ energy states in the beam. However, we see little evidence of such states in other channels in this collision system, or in our previous work involving Ar²⁺. 26,37,67 However, the clear observation of a significant complexation pathway in this reaction channel provides an explanation for these high energy events.
Specifically, if the translational energy of the Ar²⁺ in the beam can be coupled into the reaction, a process that is not normally involved in the direct SET mechanism 9 but is perfectly feasible when complexation is involved, exoergicities of up to ∼12.5 eV are perfectly possible. The Ar⁺ scattering angle distribution of the high exoergicity events (>7.4 eV) is dominated by the peak centred at 90°, indicating a link with the complexation pathway. Thus, it seems highly likely that the high energy tail in the exoergicity distribution is yet another signature of complexation competing with direct electron transfer in this collision system.

If we consider the DSET reaction to be predominantly stepwise, the exoergicity of the initial electron transfer step (forming Ar⁺ and N₂⁺*) can be estimated using the N₂⁺* precursor velocity. The N₂⁺* precursor velocity is determined, on an event-wise basis, via conservation of momentum from the Ar⁺ velocity. Using this method, which neglects any small contribution to the Ar⁺ velocity from interaction with the final N⁺ product, we find the exoergicity for the initial electron transfer step to have a broad peak centred at 5.0 eV and with a FWHM from 2.9 eV to 6.4 eV, as shown in Fig. 7. Exoergicity distributions for such primary electron transfer reactions of dications are commonly peaked between 2 and 6 eV, due to such exoergicities favouring the net curve-crossing probability as predicted in the LZ model. 64,89 As discussed above, this DSET channel, producing Ar⁺ + N⁺ + N, mostly involves Ar²⁺ (¹D and ¹S), and results in the formation of N₂⁺* in a dissociative state. The dissociative states of N₂⁺* that best fit the exoergicity data in Fig. 7 are: the C²Σu⁺ state (v > 2), the 2²Πg state, and the continuum of the D²Πg state (E ≈ 26 eV), all of which lie in the Franck-Condon region of the N₂ ground state. 70 Photoelectron spectra from Baltzer et al. 70 show that the 2²Πg state overlaps with the C²Σu⁺ state around the Franck-Condon region, overlying the D continuum, and these states are therefore indistinguishable in our experiment. We detail in Table 2 the possible pathways contributing to this channel, the exoergicities of which are marked on Fig. 7. Pathways (i) and (j) match well with the peak of the observed experimental exoergicity distribution, and involve the formation of N₂⁺(C²Σu⁺) and N₂⁺(D²Πg) respectively. Pathway (h) also involves the formation of D²Πg, with Ar²⁺(¹D). There are also possible smaller contributions from pathways (f) and (g), which populate the higher-lying F²Σg⁺ and G²Πu or H²Πu states of N₂⁺; structures that hint at these reactions can be seen in the exoergicity spectrum (Fig. 7). Additionally, pathway (k) could contribute to this channel, involving Ar²⁺(¹S) generating N₂⁺(C²Σu⁺). Of course, the observed exoergicities will be broadened by the population of the N₂⁺ species in a range of vibrational states.

[Fig. 6 caption: Histogram of the CM scattering angles for the product Ar⁺ ion, relative to w(Ar²⁺), for the reaction Ar²⁺ + N₂ → Ar⁺ + N⁺ + N at a CM collision energy of 5.1 eV. The error bars represent two standard deviations of the counts.]
[Fig. 7 caption, fragment: ... Table 2. The error bars represent two standard deviations of the associated counts.]

The exoergicity of the final N₂⁺* dissociation can also be evaluated by determining the velocities of the N⁺ and N products, on an event-by-event basis, in the frame of the N₂⁺* precursor velocity. 26
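As a rough illustration of this stepwise analysis, the sketch below (again illustrative Python, not the authors' code; the function and variable names are hypothetical) estimates the primary-step exoergicity and the secondary-dissociation KER from the measured CM velocities, under the same approximation of neglecting the perturbation of the Ar⁺ velocity by the final N⁺ product.

import numpy as np

AMU = 1.660539e-27  # kg per atomic mass unit
EV = 1.602177e-19   # J per eV

def stepwise_energetics(w_Ar, w_Np, e_com_eV, m_Ar=40.0, m_N=14.0):
    """Event-wise energetics for Ar2+ + N2 -> Ar+ + N2+* -> Ar+ + N+ + N.
    w_Ar, w_Np: CM-frame velocity vectors (m/s) of the detected Ar+ and N+.
    Returns (dE_step1, ker_step2) in eV."""
    m_N2 = 2.0 * m_N
    # Undetected neutral N from total momentum conservation (CM frame):
    w_N = -(m_Ar * w_Ar + m_N * w_Np) / m_N
    # The N2+* precursor recoils against Ar+ in step 1:
    w_prec = -(m_Ar / m_N2) * w_Ar
    # Step 1 KER and exoergicity (dE = T - E_com):
    T1 = 0.5 * AMU * (m_Ar * (w_Ar @ w_Ar) + m_N2 * (w_prec @ w_prec)) / EV
    dE1 = T1 - e_com_eV
    # Step 2: dissociation KER, evaluated in the precursor frame:
    u_Np, u_N = w_Np - w_prec, w_N - w_prec
    ker2 = 0.5 * AMU * m_N * ((u_Np @ u_Np) + (u_N @ u_N)) / EV
    return dE1, ker2

Histogramming dE1 over all DSET events gives a distribution of the kind shown in Fig. 7, while histogramming ker2 gives the dissociation spectrum discussed next (Fig. 8).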
This exoergicity spectrum (Fig. 8) has a maximum at 0.9 eV, with a FWHM extending from 0.1 eV to 2.3 eV. To interpret this exoergicity, we must consider previous studies of N₂⁺ dissociation. As noted above, the dissociation threshold, corresponding to the lowest energy N⁺ + N asymptote, L1 (N⁺(³P) + N(⁴S°), Table 3), lies at ∼24.3 eV above the molecular ground state and corresponds to N₂⁺(C²Σu⁺, v = 3). 71-73 At energies above the second dissociation limit, L2 (N⁺(¹D) + N(⁴S°), ∼26.2 eV, Table 3), there is competition between dissociation to L1 and L2. 88,90,91 In studies of N₂ excitation, at the energies involved in the processes we see in our experiment (24.3-32 eV), the C²Σu⁺ state is the dominant state populated in photoelectron spectra, and dissociation to the three lowest energy dissociation asymptotes is observed, with lifetimes of the order of nanoseconds. 69,70,74,79,92-95 The predissociation of N₂⁺(C²Σu⁺) is thought to occur via several mechanisms, including by spin-orbit coupling to the ²Σu⁻ state and then transition to the continuum of the ⁴Πu state. 71,93,96-98 In previous experiments probing the dissociation of N₂⁺*, produced via electron impact or photoionisation, kinetic energy releases of 0.5 eV-8 eV were observed. 90,99 The maximum theoretical exoergicity for N₂⁺* dissociation in this channel, under the energy constraints of the current study, is 7.4 eV, arising when Ar²⁺(¹S) is involved and N₂⁺* dissociates to the lowest energy dissociation limit, L1. Considering the maximum theoretical exoergicity available in this system (7.4 eV), the exoergicity observed in this study matches nicely with the previous experiments characterising N₂⁺* dissociation.

The shape of the exoergicity spectrum we see for the dissociation of N₂⁺* (Fig. 8) can be associated with the pathways shown in Table 3. The main contributions are clearly from the C²Σu⁺/2²Πg or the D²Πg states dissociating to L1, in satisfying accord with the assignment made above that the initial electron transfer step populates these ionic states. Additionally, there are potentially minor contributions from the involvement of some of the higher energy excited states of N₂⁺. These states were also implicated in the above analysis of the initial electron transfer, showing that a coherent description of the electron transfer state selectivity is emerging.

To summarise, dissociative single electron transfer is the second most intense channel following the collisions of Ar²⁺ and N₂ at a collision energy of 5.1 eV. The scattering angles of the Ar⁺ product ion (Fig. 6) show that two mechanisms are involved in the initial electron transfer: a direct, Landau-Zener process where the electron transfer occurs at long range, and a process involving the formation of a complex, [Ar–N₂]²⁺. In this channel, electron transfer predominantly involves N₂ and Ar²⁺ (¹D and ¹S), forming Ar⁺(²P), and N₂⁺* formed in the dissociative C²Σu⁺, 2²Πg and D²Πg states. These N₂⁺* ions then fragment, a dissociation slightly perturbed by the field of the Ar⁺ product, primarily to the lowest energy dissociation asymptote, N⁺(³P) + N(⁴S°). The lifetime of N₂⁺* before it dissociates was determined to be ∼100 fs, comparable to estimates for N₂⁺* generated in similar experiments. 85
There is a spread in the observed exoergicities due to minor contributions from the involvement of Ar²⁺(³P), N₂⁺(F²Σg⁺) and N₂⁺(G²Πu or H²Πu), and higher energy dissociation limits of N⁺ + N.

Chemical bond formation

Fig. 9 shows the CM scattering of the ArN⁺ and N⁺ products observed from the previously unreported bond-forming channel, Rxn. III. Fig. 9 shows that the ArN⁺ product ion is scattered with a marked bias towards lower scattering angles. This bias can be seen more clearly in the histogram of ArN⁺ scattering angles, shown in Fig. 10. This form of the scattering suggests a stripping-style mechanism where an N⁻ is transferred between the N₂ and Ar²⁺ species at a relatively large interspecies separation. This style of direct mechanism is similar to that found in our previous work with the analogous channel in the Ar²⁺ + O₂ system, forming ArO⁺. 26 The more usual mechanism observed for a chemical bond-forming reaction between a dication and a neutral species involves a 'long-lived' association between the reactant species, with a lifetime of at least several rotations of this collision complex. 59,61 However, direct mechanisms for bond-forming reactions between dications and neutral species have been previously reported. 37 The scattering data shown here show little evidence for a long-lived association between the reactants. If such a complex survived for long enough to undergo several rotations, the relationship of the direction of approach of the reactant species would be scrambled and both product fragments would be scattered effectively isotropically about the CM, as has been observed before in other collision systems. 62,100 It is interesting that formation of ArN⁺ from the Ar²⁺ + N₂ system proceeds via a direct mechanism rather than complexation, particularly given the clear evidence of complexation observed in the SET channels. The formation of new chemical bonds via direct processes is well-established in dication reactions, and the experimental data clearly imply that complexation does not provide a viable route to populate long-lived states of ArN⁺.

Fig. 11 shows the experimental exoergicity distribution observed for the bond-forming reaction (Rxn. III). The exoergicity maximum is at 5.5 eV, and the FWHM is from 3.5 eV to 9.0 eV. To interpret this exoergicity we note that several states of ArN⁺ have been identified theoretically. 51,56,101 The ground state, X³Σ⁻, and first excited state, A³Π, are both lower in energy than the Ar(¹S₀) + N⁺(³P₀) dissociation asymptote, and their formation has been reported from the reactions of N₂⁺ + Ar and Ar⁺ + N₂ respectively. 53 Here we will consider just the ground state, X³Σ⁻, which is well bound with a significant dissociation energy (∼2.1 eV). The minimum of the ArN⁺(A³Π) state lies just below the Ar(¹S₀) + N⁺(³P₀) dissociation asymptote. Thus, we would not expect to populate long-lived, and hence detectable, ArN⁺(A³Π) states with the level of vibrational excitation that we expect to result from a long-range N⁻ abstraction from N₂ by Ar²⁺. From consideration of the calculated ArN⁺ energies and literature values for known N⁺ and Ar²⁺ states, reaction pathways (r)-(t) provide a very good match to the exoergicity distribution observed for this channel. 56,68 Pathways (s) and (t) result from the production of ArN⁺(X³Σ⁻) and N⁺ in its ground state (³P) from the two lowest energy Ar²⁺ states in our beam (³P and ¹D).
Pathway (r) results in the formation of N⁺ in its first excited state, ¹D. Note that these pathways are all spin-allowed. Of course, the formation of vibrationally excited ArN⁺, which we expect due to the long-range N⁻ abstraction from N₂ by Ar²⁺, will act to spread (decrease) the nominal exoergicity of the reaction, in accord with the spread in the exoergicity data in Fig. 11.

(r) Ar²⁺(³P) + N₂(X¹Σg⁺) → ArN⁺(X³Σ⁻) + N⁺(¹D), ΔE = 4.7 eV
(s) Ar²⁺(³P) + N₂(X¹Σg⁺) → ArN⁺(X³Σ⁻) + N⁺(³P), ΔE = 6.6 eV
(t) Ar²⁺(¹D) + N₂(X¹Σg⁺) → ArN⁺(X³Σ⁻) + N⁺(³P), ΔE = 8.3 eV

The formation of ArN⁺ from the collisions of Ar²⁺ and N₂ has not been previously reported, to the best of our knowledge. This study therefore offers another potential source for the formation of the ArN⁺ species detected in Ar/N₂ plasmas. 50,51 The scattering shows that, unusually, this reaction proceeds via a direct mechanism. The relative intensity for this channel is high (4.3%) compared with typical bond-forming dication-neutral reactions, showing an affinity to form the Ar-N bond. 41-43,102-107

Dissociative double electron transfer

Rxn. IV from the Ar²⁺/N₂ collision system results in the formation of N⁺ + N⁺ and has a relative intensity of 10.8%. From the dynamics it is clear that this channel originates from double electron transfer (DET), via the formation of N₂²⁺, as the N⁺ + N⁺ ions are effectively isotropically scattered about the velocity of the N₂ reactant. Such DET reactions are commonly observed in dicationic collision systems, 37,108,109 where two electrons transfer from N₂ to the Ar²⁺ ion and the nascent N₂²⁺ ion then dissociates. As discussed in more detail in our previous work, 26,37 dicationic DET usually favours a concerted mechanism in which the product and reactant asymptotes lie close in energy (<1 eV). 37 The Ar²⁺ ground state (³P) and first two excited states (¹D and ¹S) have energies of 43.4 eV, 45.1 eV and 47.5 eV above the ground state of Ar, respectively. 68 There are several dissociative states of N₂²⁺ that lie at a comparable energy to these Ar²⁺ states relative to the ground state of N₂. 69,110,111 Therefore, concerted DET would be expected to occur in the Ar²⁺ + N₂ system.

The dissociation of N₂²⁺ into N⁺ + N⁺ has been well studied. In 1996, Lundqvist et al. 110 reported the kinetic energies of N₂²⁺ dissociation, revealing energy releases of 6.7-7 eV corresponding to the v = 7-10 levels of the A¹Πu state dissociating to the lowest energy N⁺ + N⁺ asymptote, D1 (N⁺(³P) + N⁺(³P)). Lundqvist et al. also observed peaks at 7.6 and 7.7 eV, corresponding to the N₂²⁺(D³Πg) v = 0 and v = 1 levels dissociating to D1. In Lundqvist's study of the dissociation of N₂²⁺, the dominant contribution is from the lowest energy N⁺ + N⁺ dissociation asymptote. From analysis of the N⁺ ion velocities, we see the exoergicity for the dissociation of N₂²⁺ in the DET channel has a maximum centred at 7.2 eV with a FWHM from 6.2-8.6 eV, as shown in Fig. 12. The experimental exoergicity distribution is a good match with the observations of Lundqvist et al. 110 (see Fig. 12) and also agrees well with energy releases reported in other studies of N₂²⁺ dissociation. 112-116 Therefore, it seems clear that the nascent N₂²⁺ is generated in the A¹Πu and D³Πg states, which predominantly dissociate to form pairs of N⁺(³P) ions.
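The dissociation kinematics for this channel are particularly simple, since the two fragments have equal mass. A minimal sketch (illustrative only; not the authors' code) of the event-wise KER for N₂²⁺ → N⁺ + N⁺ is:

import numpy as np

AMU = 1.660539e-27  # kg per atomic mass unit
EV = 1.602177e-19   # J per eV

def det_dissociation_ker(w_N1, w_N2, m_N=14.0):
    """KER (eV) for N2(2+) -> N+ + N+, from the CM-frame velocity vectors
    (m/s) of the two detected N+ ions. For equal-mass fragments the
    precursor velocity is the mean of the two, so the KER reduces to
    (1/2) * mu * |v_rel|^2 with reduced mass mu = m_N/2."""
    v_rel = w_N1 - w_N2
    return 0.25 * m_N * AMU * (v_rel @ v_rel) / EV

A histogram of this quantity over the DET events yields a spectrum of the kind shown in Fig. 12, peaking near 7.2 eV.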
Conclusions

Collisions between Ar²⁺ and N₂ have been studied using a coincidence technique at a CM collision energy of 5.1 eV. Four reaction channels generating pairs of monocations are observed, producing: Ar⁺ + N₂⁺, Ar⁺ + N⁺, ArN⁺ + N⁺ and N⁺ + N⁺. The formation of Ar⁺ + N₂⁺ is the most intense channel, displaying forward scattering but with a marked tail to higher scattering angles. This scattering is indicative of direct electron transfer competing with a 'sticky' collision between the Ar²⁺ and N₂ reactants. After the electron transfer, Ar⁺ is generated in its ground (²P) state and N₂⁺ is primarily in the low vibrational levels of the C²Σu⁺ state, with contributions from the B²Σu⁺ and D²Πg states. The exoergicity distribution in this channel also indicates a minor contribution to the formation of N₂⁺ via the initial population of higher energy N₂⁺ states, lying above the dissociation asymptote to N⁺ + N, which fluoresce to stable states of N₂⁺. The formation of Ar⁺ + N⁺ results from dissociative single electron transfer. The scattering in this channel again reveals the involvement of two different pathways for the initial electron transfer: a long-range direct process, and a process involving the formation of a complex, [Ar–N₂]²⁺. Satisfyingly, the operation of these same pathways was extracted from the data for the non-dissociative channel. Despite the differing dynamics, the electronic states involved in this dissociative electron transfer reaction appear the same for both routes. That is, the excited states of Ar²⁺ (¹D and ¹S) are involved in the initial electron transfer, populating N₂⁺* in its dissociative C²Σu⁺, 2²Πg and D²Πg states. The nascent N₂⁺* then quickly dissociates, primarily to the lowest energy dissociation asymptote, N⁺(³P) + N(⁴S). We also observe the formation of ArN⁺ + N⁺, which has not been previously reported. The scattering shows that this bond-forming reaction proceeds via a direct mechanism. The molecular ion ArN⁺ is formed, with significant vibrational excitation, in its X³Σ⁻ state. Finally, the formation of N⁺ + N⁺ is observed, resulting from double electron transfer that initially generates N₂²⁺, which subsequently dissociates. The exoergicity of the N₂²⁺ dissociation is in good agreement with previous studies of the dissociation of the isolated dication, formed in a vertical transition from the neutral molecule, which involve the dissociation of the A¹Πu and D³Πg dication states.

Conflicts of interest

There are no conflicts to declare.
2021-05-07T06:22:55.366Z
2021-05-06T00:00:00.000
{ "year": 2021, "sha1": "2805534570ad4b3507400b0d01657127f4dc3e95", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/cp/d1cp00918d", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "7f6963860183c44d327c36568d6ea5fcbe489601", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
261406530
pes2o/s2orc
v3-fos-license
Quality of Life in Patients After Acute ST-Segment Elevation Myocardial Infarction

Abstract

Background: ST-segment elevation myocardial infarction (STEMI) is the acute coronary syndrome with the highest severity and mortality. It can affect the physical health and well-being of patients, and consequently their quality of life (QoL). Objective: To describe the QoL of patients at 30 days and 180 days after STEMI, focusing on sex differences and repercussions on physical and mental dimensions. Methods: Observational study with 174 STEMI patients included in the study on STEMI conducted in the city of Salvador, Brazil (PERSISST). The QoL of patients at 30 days (D30) and 180 days (D180) after the coronary event was assessed using the 12-item short form health survey (SF-12). Physical and mental components of QoL were calculated using the SF-12 OrthoToolKit. Descriptive analysis of data was made using the IBM SPSS software, version 25.0. Results: Mean age of participants at D30 and D180 was 57.1±11.4 years and 60.5±10.9 years, respectively, with a higher prevalence of men (55.8% and 56.8%). In general, patients had a poor QoL at both time points (scores 49.1±8.9 and 49.9±8.4, respectively). Analysis by sex, however, showed that men had a good QoL at both D30 (score 51.8±7.4) and D180 (score 51.3±7.7), whereas a poor QoL was found among women at these time points (45.7±9.6 and 48.1±9.0, respectively). Men showed higher physical and mental health scale scores than women at both D30 and D180, and there was a greater impairment of the physical component in both sexes. Conclusion: Patients had poor QoL at 30 days and 180 days after STEMI, with a greater impairment of the physical component and a worse QoL perception among women than men at both time points.

an increased risk of depression, anxiety and/or fear of a new event. 9 Therefore, coronary involvement may lead to a reduction in quality of life (QoL), health status and vitality, and impairment of function, social relationships and mental health. 10 Considering that the presence of STEMI may affect patient QoL, it is important to assess this effect 30 days and 180 days after the coronary event, considering sex differences and repercussions on physical and mental dimensions, and this is the objective of the present study.

Methods

This was an observational, descriptive study, derived from the PERSISST study, 11,12 which was a study on STEMI conducted in Salvador, Brazil, created from the regional integrated network for the management of this condition. In 2009, this network was implemented due to the low availability of reperfusion treatment in the public health system in the city of Salvador, aiming to increase the rates of this therapy and reduce morbidity and mortality. 10,11 In 2017, the study on STEMI conducted in the city of Salvador, Brazil (PERSISST) was started with the objective of assessing the flow of STEMI patients who received ambulance services according to the AMI protocol or were treated in public health centers in the Salvador metropolitan area, and the outcome of these patients. The target population of this study was patients with a diagnosis of STEMI who received medical services according to the AMI protocol, met the inclusion criteria for the PERSISST (Table 1) and were interviewed from May to October 2020. A convenience sample was recruited from those patients who answered the Short Form Health Survey (SF-12) 13 at 30 days (D30) and/or 180 days (D180) after the coronary event.
Patients who declined to sign the consent form were not included in the interview. Also, patients who could not be contacted by telephone, and those who signed the informed consent form but whose questionnaires were answered by someone else, were not included in the study. Following the PERSISST project, at 30 and 180 days after the coronary event, the SF-12 13 was applied to assess self-reported QoL. The SF-12 comprised 12 items, divided into two domains: a physical component, composed of physical functioning, physical performance, pain and general health, and a mental component, composed of emotional performance, mental health, social functioning, and vitality. Data collection was performed by trained medical school students, responsible for patient follow-up. The questionnaires were administered prospectively, in person or by telephone, based on patient preference. Most (90%) questionnaires were administered by telephone, and 10% were administered in person by a cardiologist, in the waiting room of the health care center. This was because of restrictions imposed by the COVID-19 pandemic; despite this, both methods of administration strictly followed the same instructions and procedures. After data collection, two groups were defined for data analysis, D30 and D180; it is important to point out that some patients did not participate at both time points, and their QoL could not be compared. Therefore, the groups were composed of different individuals, who answered the survey at 30 days or at 180 days after the infarct. The results were analyzed using the SF-12 OrthoToolKit, which calculates the physical and the mental scores. Scores above 50 indicated a good QoL and scores below 50 indicated a poor QoL. The study met the requirements of the 466/12 and 510/16 norms of the Brazilian National Health Council and was approved by the ethics committee of the Bahia Secretariat of Health (CAAE 58949416.7.0000.0052, approval number 4.330.336).

Statistical analysis

A descriptive analysis was made using the IBM SPSS software, version 25.0. First, normality of data distribution was assessed and confirmed by the Shapiro-Wilk test. Then, numerical variables were expressed as mean and standard deviation (SD), and categorical variables as absolute and relative frequencies. A p<0.05 was set as statistically significant.

Results

A total of 195 patients with STEMI were identified as eligible for the study. However, 10.8% of these patients did not answer the questionnaire and the final sample was composed of 174 patients, 49.3% of them at D30 and 50.6% at D180. There was a higher prevalence of men at both time points, and mean age was not different between the groups (Table 2, Central Figure). The SF-12 scores (Table 2, Figure 1) indicated poor QoL of participants at both D30 and D180, with higher mental and physical scores among men than women. A greater impairment of physical QoL was observed in both men and women. Women had a worse perception of QoL than men at both time points, and satisfactory mental scores at D180 only. Among men, the highest scores were obtained from the SF-12 mental component at both D30 and D180 (Table 2, Figure 1).

Discussion

The present study assessed the QoL of STEMI patients, participants in the PERSISST study, at 30 days and 180 days after the event, focusing on differences between sexes and on physical and mental repercussions of the disease.
Regarding the sex and age distribution of our patients, most patients were men, corroborating the data described by Alves and Polanczyk, 2 who reported a predominance of male patients among those who experienced a coronary ischemic event, aged between 56 and 64 years, as described by Costa.

QoL is associated with individuals' perception of their role in life, regarding the cultural context and value system, and with their objectives, expectations, standards and worries. 15 Such perception is hence individualized and non-transferable, and its evaluation after STEMI would allow the identification of difficulties faced by patients during rehabilitation. It is believed that STEMI can directly affect patients' lives, in terms of both socioeconomic and physical aspects, consequently influencing their work performance. This, in turn, causes a negative impact on interpersonal and financial relationships, and consequently on QoL perception. 16 Altogether, these data may explain our results showing that at 30 days and 180 days after STEMI, patients have a poor QoL, which may also have been influenced by the physical limitations of these individuals, including pain, limitations in household and work activities, difficulties in climbing stairs, among others. This is in line with results obtained by Thomas et al., 17 who assessed AMI patients in outpatient care and observed satisfactory QoL in both the control (seen in a conventional cardiology outpatient clinic) and intervention (referred to an outpatient clinic for secondary prevention of coronary artery disease) groups.

It is worth pointing out that the physical component of QoL may reflect issues related to the patient and the first clinical visit. It invites us to review the entire care chain of these patients, with special attention to a multidisciplinary approach to provide an educational program targeted at lifestyle changes and knowledge about the disease. This would lead to improvements in cardiovascular rehabilitation, currently a rare modality of treatment in the public health system. 18,19

In addition to physical limitations, other risk factors for the development of AMI include unfavorable psychosocial conditions, like anxiety, stress, social isolation 20 and depression. 21 According to the American Heart Association, depression is responsible for worse cardiovascular outcomes. Concerning the mental component of QoL, in our study, women showed satisfactory values at 180 days only, but still lower than those of men, which corroborates previous studies. 9,22,23 Serpytis et al. 9 assessed anxiety and depression after AMI and showed that women are at higher risk of developing anxiety and depression, compromising QoL after the coronary event. Similar results were reported by Rafael et al., 22 who assessed patients hospitalized for cardiac rehabilitation after AMI. Figueiredo et al. 23 showed that women had a 3.5 times higher risk of developing depressive disorder than men.

It is important to highlight that, in the present study, data were collected during the COVID-19 pandemic, which may have negatively affected patient adherence to the study protocol. Pandemic restrictions had an impact on patients' daily lives, including reduced access to medical services, 24 which may have made it difficult for patients to return to their habits after the coronary event. Besides, with these restrictions, different methods for the questionnaire administration had to be developed, but several measures were used to minimize possible biases.
Training of examiners, standardization of procedures and continuous monitoring of data collection were implemented to ensure the consistency and quality of the results obtained. Potential limitations of the study include difficulties in obtaining the signature of the consent form in person due to the COVID-19 pandemic restrictions; difficulties in contacting the patients for the interview, due to the lack of a telephone number and unanswered calls; and difficulties in performing a long-term QoL follow-up. Consequently, different groups were formed, making it difficult to draw statistical inferences and generalize the results. Not all patients completed the SF-12 at both time points (30 days and 180 days), which hampered the comparison of results over time, the collection of detailed information about the clinical course of patients, and consequently a more in-depth analysis.

Potential Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Sources of Funding

There were no external funding sources for this study.

Study Association

This study is not associated with any thesis or dissertation work.

Ethics Approval and Consent to Participate

This study was approved by the Ethics Committee of the Secretaria da Saúde do Estado da Bahia (SESAB) under the protocol number 58949416.7.0000.0052. All the procedures in this study were in accordance with the 1975 Helsinki Declaration, updated in 2013. Informed consent was obtained from all participants included in the study.
2023-09-01T15:10:32.619Z
2023-09-15T00:00:00.000
{ "year": 2023, "sha1": "1c6cfb85f4c1c577e032597d161cb5dee0496f38", "oa_license": "CCBY", "oa_url": "https://ijcscardiol.org/wp-content/uploads/articles_xml/2359-4802-ijcs-36-e20230041/2359-4802-ijcs-36-e20230041.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "62cf2171a87de345b41c0191b1d99975cbe2c1e8", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [] }
233676295
pes2o/s2orc
v3-fos-license
Earth Scientists and Sustainable Development: Geocomputing, New Technologies, and the Humanities

This opinion paper discusses some of the challenges and opportunities that earth scientists face today in connection with environmental problems. It focuses on aspects that are related to the role of geocomputational approaches and new technologies for geoenvironmental analysis in the context of sustainable development. The paper also points out a "data imbalance" effect, a key issue in the analysis of environmental evolution and of geosphere-anthroposphere interactions in the long term. In connection with this, it stresses the importance of geoenvironmental information which can be derived from the environmental humanities and related disciplines, such as history and archeology. In this context, the complexities and potentialities of a dialogue between earth sciences and the humanities are outlined.

Introduction

Sustainable human development, from the perspective of the maintenance of the Earth system in a resilient state (e.g., [1,2]), presents relevant challenges, involving science, technology and socio-economic aspects. Global and local sustainability require the analysis and understanding of multiple interacting environmental processes, acting across multiple spatiotemporal scales. Moreover, the analysis and management of human perturbations on the earth system require an interdisciplinary approach, capable of analyzing and modeling geosphere-anthroposphere interactions. Earth scientists play a key role in many aspects of the sustainability challenges, including the definition and implementation of human development related policies. From this perspective, the Sustainable Development Goals (SDGs), as defined by the United Nations [3], influencing the policies of many countries, e.g., the European "Green Deal", are emblematic. Several of the SDGs are in fact related to the environment and, as such, more or less directly address the work of earth scientists and acknowledge the societal value of their research. In this context, geocomputational approaches and new technologies play a pivotal role in the earth sciences. On the one hand, the acquisition and the quantitative analysis of geoenvironmental data are fundamental for understanding the complex dynamics of the earth system and its interactions with the anthroposphere. On the other hand, the gathering of geoenvironmental data, their correct comprehension and modeling, as well as their use, have taken on an unprecedented political meaning as they have become fundamental for responsible decision making. The validation of data and their interpretation have ostensibly reached out from the ivory towers of science to the public sphere as matters of shared concern in civil society. New technologies and computational methods are available to the earth scientist, but they need to be reassessed in order to be correctly, and critically, employed towards the achievement of global goals. Moreover, the epistemological question of how to integrate information, knowledge and approaches stemming from different disciplines is far from settled, in a time in which the urgency to address the connection between earth-system processes and cultural phenomena is evidenced by the deep anthropic transformation of our planet.
In the field of the history and philosophy of sciences, there is growing interest in studying the interactions between humankind and the environment (e.g., [4]), as well as between methodologies descending from the natural sciences and the humanities [5]. Our considerations are particularly pertinent to soil science and geoenvironmental research, especially when focused on topics such as land degradation/management, water resources protection/management and multiple geoengineering-related issues. First, these issues directly involve the Critical Zone (e.g., [6]), the surficial earth layer characterized by a high intensity of interactions between the geosphere (intended in a broad sense, including the hydrosphere), the biosphere and the anthroposphere. Second, new technologies (e.g., remote sensing) for the collection, analysis and modeling of geoenvironmental data have a strong impact in this context. Third, valuable information on the complex interactions between humans and the environment can be extracted from humanities-related informative sources (e.g., [7]).

The themes covered in this essay are related to a wide range of topics, many of which are in continuous evolution owing to the fast developments that characterize technology and geocomputational approaches. Accordingly, the present discussion is inevitably partial and covers only those aspects that we consider worth highlighting in view of a conscious and critical use of geocomputational methodologies and available informative sources. Section 2, "Geosphere-anthroposphere interlinked dynamics", discusses the difficult conceptualization of the geosphere-anthroposphere dynamics from an interdisciplinary perspective that brings the earth sciences and humanities into dialogue. The role of technology and geocomputation for communicating environmental dynamics and human impacts is then introduced. Section 3, "Technological innovation and geoenvironmental data", focuses on the role of technology for geoenvironmental data retrieval and analysis, including the challenges related to the growing complexity and heterogeneity of informative sources. Section 4, "Geocomputing and the earth scientists", outlines the relevance of expert knowledge in geocomputation for explorative and predictive analysis; then, it discusses the impact of technology on the development and diversification of geocomputational tools, posing opportunities and challenges for the earth scientists. Section 5, "Data imbalance at the crossroads of geocomputing, new technologies and historical information", discusses the "data imbalance" that frequently characterizes the analysis of geoenvironmental dynamics in the long term. The need to consider humanities-related informative sources, both for compensating the data imbalance as well as for studying geosphere-anthroposphere interactions, is introduced. Finally, the necessity to improve the dialogue between earth sciences and humanities is outlined.

Geosphere-Anthroposphere Interlinked Dynamics

The relevance of local and global sustainability challenges will likely increase in the coming years due to multiple factors, among which is global population dynamics [8]. Not only will the global population likely continue to grow, with an estimated population of more than 9 billion by 2050 [8], but it is also marked by a relevant imbalance, both from the geographical as well as socio-economical viewpoints, among the regions of the globe.
The polarization of population in and around big/mega cities (will this trend be changed by pandemic outbreaks?), with more than half of the world population living in cities [9,10], is another relevant factor. These basic demographic considerations suggest a future increase in interactions between the human and the geoenvironment. Such interactions have well-known multiple manifestations, both from the perspective of anthropic impacts and natural impacts: land-use changes, natural hazards, pollution, ecological alteration, climate change, natural resources depletion, geoengineering issues, etc. The increasing relevance of geosphere-anthroposphere interlinked dynamics for society and science is also marked by various research pathways focused on this issue. It is worth mentioning research areas that have become particularly visible in recent times: the "Anthropocene" related debates and the study of the "Critical Zone", that is, the superficial geological layer of maximum human-geological-biological interactions [6]. The Anthropocene issue [11] has come to cover a wide range of debates, ranging from those in the humanities to the artistic scene and environmental activism (as can be evidenced by approaches as different as [4,12-14]). The fact that the concept of the Anthropocene stems from geology [15] and geological observations is representative of the intense anthropic signature on our planet. Then, the proposal to formally define a new geological epoch, involving strict stratigraphic requirements, currently based mainly on geochemical considerations, is still more emblematic of this concept. The formal definition of an Anthropocene epoch, in the stratigraphical sense, depends on the results of the ongoing work of the Anthropocene Working Group, which was created as part of the Subcommission on Quaternary Stratigraphy of the International Commission on Stratigraphy in 2009 [16]. However, even independently from the stratigraphical definition of the Anthropocene, the concept is valid in its essence. Both research areas (the Anthropocene and the Critical Zone) are explicitly focused on interactions between humans and the geoenvironment. For both, the collection and analysis of geoenvironmental data play a pivotal role. Moreover, it is important to stress that in these research pathways, historical and archeological information is extremely relevant, especially for the analysis of the evolution of geoenvironmental systems on long time scales. In this context, policies for sustainable development, such as the SDGs and the European "Green Deal", are a first step toward maintaining the "stability" of the Earth system, as proposed for example by [2]. However, the definition and implementation of sustainable development policies imply a detailed and objective knowledge of the geoenvironmental system and a continuous monitoring of its dynamics, including the interactions with the anthroposphere. Unfortunately, the level of geoenvironmental knowledge required for sustainable development is not easily achievable. First, due to the complexity and heterogeneity of geoenvironmental systems, a full parameterization of the system, including knowledge of governing processes/factors and boundary conditions, requires huge quantities of data, with high spatiotemporal sampling densities and coverage. Second, reasoning from the perspective of the implementation of sustainable policies, the potential reflexive dynamics (e.g., [17-19]) that characterize human-geoenvironmental systems should be considered.
This reflexive behavior often develops according to circular, self-reinforcing and path-dependent patterns, through perception-related human actions. This implies that both the social sciences and history should contribute to untangling the complex interactions between humans and the geoenvironment. Third, the analysis and understanding of the geosphere-anthroposphere dynamics often require the study of geoenvironmental processes over extended periods of time, often considering centuries or even millennia (e.g., for studying human forcing on the climate system). As discussed further in the paper, this represents a critical point due to the "data imbalance" effect: firstly, this is represented by the progressive deterioration in data, i.e., spatiotemporal density and accuracy, going back in time; secondly, this is also marked by a general inhomogeneity in data properties (e.g., [20]).

The need to consider, in addition to traditional geoenvironmental proxies (e.g., based on dendrochronology, sedimentology, paleontology, geochemistry, etc.), humanities-related informative sources opens new challenges for the earth scientists, who need to consider specific characteristics of the human sciences and their investigative approaches. With reference to the human sciences, it should be stressed that their theory and knowledge (in the realms of sociology, anthropology, economy and, more generally, cultural studies) transform the subject matter they target due to a reflexive loop effect. According to a semiotic "principle of indeterminacy" (which we could more simply call the "observer effect"), all inquiry into human reality affects and transforms its object of inquiry [21] (pp. 28-29). This awareness has even led to the identification of social reality with its 'representation' in some of the most influential trends of sociological investigation [22,23]. To neglect the observer's positioning can lead to a sort of "ideological fallacy", that is, to assume the objective neutrality of the humanities and social sciences, as if the agendas and motivations that inform their specific form of knowledge could be separated from their content (this is like positing a form of pure "knowledge for the sake of knowledge" on the basis of which the knower does not want to undertake anything and wishes to leave reality untouched) [21]. As a consequence of these methodological premises, culture has been seen as a structured symbolic system (or "semiosphere") which results from a process of selective abstraction (which, in turn, is dependent on codification-and-interpretation codices) [24]. This semiotic abstraction should by no means be confused with reality itself, as inclusion in and exclusion from the system is a processual matter [25,26]. Moreover, all abstraction, no matter how accurate, complex and systematic, emerges out of cognitive and historical processes and exerts societal functions towards specific goals [27]. If cultural studies and, more specifically, the human sciences are to be included in a program of geo-anthropological inquiry, the objective-subjective tension that characterizes them ought to be taken into account as a constituent of the resulting interdisciplinary paradigm (in Kuhn's structuralist sense of paradigm [28]), which cannot by any means be transcended or circumvented, not even by automated processes of data elaboration.
In short, it is hard for earth scientists to extract unbiased environmental information from sources related to the human sciences without the contribution of scientists/scholars in the humanities and social sciences; on the other hand, it can be difficult for humanities-based scientists/scholars to analyze human-environmental interactions without contributions from earth and natural scientists. New technologies and geocomputational approaches contribute strongly to the communicative approaches of earth scientists, permitting us to highlight human impacts on the geosphere at multiple scales, including the global one. For example, the remote-sensing-based "Black Marble" map of NASA [29], reporting light pollution on the globe (Figure 1), furnishes a sharp and self-evident picture of how humankind is overrunning the planet Earth. Light pollution, apart from being an impact itself, is a proxy for urbanization and land-use changes, with all the inherent, direct and indirect geoenvironmental and ecological implications, including the systemic disruption of multiple ecosystem services. From this viewpoint, the various geographical informative layers and maps reported in the "Atlas of the Human Planet" [9,10] are even more convincing. The atlas has been built by means of advanced geocomputational approaches based on machine learning, permitting the integrated use of different sources of information, including remote sensing technologies. It offers an updated and quantitative analysis of urbanization across the globe, with interesting outcomes. Ultimately, the communicative power of images and maps plays a key role from the perspective of social perception and serves as a key informative instrument in the hands of geoscientists. Quantitative maps of environmental variables (for example, reporting the pollution of air, water and soil) unequivocally display the human impacts on the geosphere. The air pollution maps of the globe derived via remote sensing technology, such as those from the ESA Tropomi instrument mounted on Sentinel-5P (https://www.esa.int/Applications/Observing_the_Earth/Copernicus/Sentinel-5P/Nitrogen_dioxide_pollution_mapped, accessed on 7 February 2021), or the maps of cesium deposition in Europe after the Chernobyl accident (e.g., [30,31]) are emblematic. As a further instance, one can mention cold-war atomic tests (over 500 detonations in the 1950s and 1960s), which left an even broader and lasting "bomb spike" that is currently under examination as a possible Anthropocene marker with disturbing ethical connotations [32,33]. Finally, recent public health and epidemiological studies are revealing that, almost surreptitiously (e.g., [34]), the pollution of air (e.g., [35]), soil (e.g., [36,37]) and water (e.g., [38]) is significantly affecting the health and well-being of humans; the potential societal and economic impact for the coming years could be worse than what is expected from climate change. The potential interactions between pandemic events, environmental pollution and socio-economic processes are another area to be further investigated (e.g., [39,40]). To be sure, many more examples could be presented, in connection with climate change, ecological impact, land-use changes and other geoenvironmental aspects.
The point is that technology and geocomputational approaches are fundamental not only for researchers studying and managing the environment, but also for increasing awareness among the wider public and policy makers of environmental issues and controversies over "consensus on consensus" (beginning with the paradigmatic case of human-caused global warming, as discussed by [41]). In these cases, statistical graphs, images and maps (e.g., [42]) capturing environmental processes are a formidable communication tool. However, statistics, graphs and maps should always be accompanied by information on the informative sources and on their inherent limits (e.g., spatial resolution, uncertainty, underlying assumptions, etc.), to correctly assess their objective content. This allows us to avoid useless controversies that misleadingly embrace an opposition between data and models (which are, in fact, generally interdependent, as discussed by [43]). It also helps us to counter widespread forms of anti-science skepticism, a growing problem in public opinion whose recent manifestations have their roots in the 'constructivist' sociology of science (in particular, the thesis that scientific truths are socially constructed) [44]. Criticism stemming from the sociology of scientific knowledge needs to be counter-balanced by a renewed trust in the validity and objectivity of knowledge content, albeit with an awareness of the function of such content and its methodological and technological limitations. Otherwise, mounting skepticism can become an instrument of irresponsible economic agendas and populist politics, and deeply affect scientific work and discredit expertise, as has been evidenced by recent post-truth debates [45-47]. Technological Innovation and Geoenvironmental Data Geoenvironmental data are fundamental in order to objectively handle the challenges that our planet and humanity are facing. They also play a pivotal role in communicating geoenvironmental issues to a non-expert audience. A fundamental characteristic of geoenvironmental data is that the spatial and temporal dimensions are an inherent property. In fact, the geographical position and temporal reference of environmental data (e.g., the concentration of a pollutant) are an integral part of the available information; this aspect has a decisive impact on all processes related to data collection, data analysis and the dissemination of environmental information. Technological (hardware and software) and methodological developments, regarding both field and laboratory procedures, have a strong impact on geoenvironmental information retrieval, management and exploitation. The set of methodologies and tools that can be deployed to parameterize the environment is extremely wide and is characterized by continuous advancements. This contributes to the extreme heterogeneity in the characteristics of the available geoenvironmental data. In fact, geoenvironmental data can be characterized by significant differences in many aspects, such as (e.g., [48,49]): typology of information (e.g., continuous, categorical, compositional, hard, soft, etc.), spatiotemporal support of measurement, spatial coverage (fragmentary versus exhaustive information) and uncertainty.
Probably, at least concerning the analysis of earth-surface geoenvironmental processes from a global perspective, the most evident progress in geoenvironmental data retrieval is related to the remote sensing technologies (e.g., [50-52]) mounted on space platforms outside the atmosphere. Remote sensing imagery, representing spatial data with an exhaustive coverage of the studied domain, has the capability to capture the dynamics of environmental processes in action, for wide areas and with relatively high spatial and temporal resolution. Series of imagery reporting atmospheric or oceanic circulation are an example of this capability. Another relevant example is represented by the improvements in satellite gravimetry (e.g., [53,54]). National and international space agencies are making serious efforts aimed at developing new sensors and platforms and at improving easy access to data. From this viewpoint, it is worth mentioning the efforts of the European Union, with the European Space Agency (ESA) and Copernicus, in developing new satellite sensors and making satellite data available to the public via various web portals and software tools (https://earth.esa.int/eogateway/, accessed on 7 February 2021). Remote or, more generally, contactless sensing (e.g., proximal sensing) includes not only sensors mounted on satellite platforms but also on all other platforms, manned or unmanned, whether terrestrial, marine or aerial (e.g., [55-57]). Moreover, active sensing technologies (e.g., [51]) such as Light Detection and Ranging (LiDAR) and Synthetic Aperture Radar (SAR) have revolutionized the way in which we can study earth processes. An instance of this is the possibility of deriving, by means of airborne LiDAR, high-resolution digital terrain/surface models that make the detection of fine-scale morphology and the study of multiple aspects of surface roughness feasible (e.g., [58,59]; a minimal computational sketch of a roughness metric is given at the end of this paragraph). Moreover, these high-resolution terrain models, when collected on a multitemporal basis, are fundamental to monitor specific processes such as landslides, glaciers and coastal morphology (e.g., [57,60]). Concerning SAR technology, the possibility of monitoring ground deformation over wide areas has a specific value for monitoring geoenvironmental processes such as land subsidence (e.g., [61]) or ground deformation after strong earthquakes (e.g., [62]). Unfortunately, remote sensing technologies are useful for gathering information about the earth surface but furnish limited information on geoenvironmental processes and factors in the subsoil or below the water surface. In this context, geophysical methodologies, strongly connected with remote sensing technologies and coupled with geocomputational tools, are fundamental to improve our understanding of earth subsurface processes (e.g., [63-65]). Geophysical methodologies have seen relevant developments in recent years, with the main trend toward the development of easy-to-deploy and low-cost technologies to be applied to a wide set of issues and in a wide range of settings, including urban contexts. For example, seismic, geoelectrical and georadar technologies are currently intensively applied to multiple geoenvironmental and geoengineering issues.
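To make the LiDAR point above concrete, the following minimal sketch computes one common, simple roughness metric, the local standard deviation of elevation within a moving window after window-scale detrending, from a gridded digital terrain model. This is an illustrative assumption of one possible metric (the window size and the synthetic DTM are invented for the example), not the specific workflow of the studies cited above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_roughness(dem, window=5):
    """Roughness as the local standard deviation of elevation within a
    moving window; the windowed mean acts as a simple detrending."""
    mean = uniform_filter(dem, size=window)           # window-scale trend
    mean_sq = uniform_filter(dem ** 2, size=window)   # window-scale E[z^2]
    var = np.clip(mean_sq - mean ** 2, 0.0, None)     # Var = E[z^2] - E[z]^2
    return np.sqrt(var)

# Synthetic 100 x 100 cell DTM: a gentle slope with one rougher patch
rng = np.random.default_rng(0)
dem = np.fromfunction(lambda i, j: 0.01 * i, (100, 100))
dem[40:60, 40:60] += rng.normal(scale=0.5, size=(20, 20))
rough = local_roughness(dem, window=5)
print(rough[50, 50] > rough[10, 10])  # True: higher roughness in the patch
```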
The role of technological developments in this context is emblematically described by the case of passive seismic methodologies, where the development of flexible and easy-to-use tromographs fueled a veritable explosion of environmental-seismology-related research (e.g., [66]), focused, for example, on seismic microzonation and bedrock-sediment transition mapping. Technological development has improved the collection of geoenvironmental data useful for a wide range of earth science disciplines, in the context of field as well as laboratory equipment. Focusing on field sensors and the related data loggers, the improvements have been impressive in multiple fields: geochemical sensors for environmental monitoring (soil, water and atmosphere), physical sensors (pressure, temperature, strain, conductivity, etc.) for hydrological and hydrogeological monitoring, proximal sensing, tracers, etc. In general, the trend is toward the development of low-cost, rugged, customizable sensors requiring minimal maintenance. Moreover, modern sensors coupled with web technologies become smart, and "geosensor webs" become possible (e.g., [67]); within this framework, each sensor is capable of adapting to the registered signal and also of taking into account the feedback from the other sensors on the web, making the construction of self-adaptive monitoring networks feasible. This framework directly relates to the Internet of Things (IoT), a technology opening up new opportunities [68] but also implying potential cybersecurity threats [69] that can be critical when the environmental monitoring is devoted to important economic and strategic assets such as natural resources. Another typology of sensor that has seen strong improvement is the "human sensor", through Citizen Science approaches (e.g., [70,71]). In particular, developments in Information and Communication Technologies (ICT), both on the software side (e.g., the web) and the hardware side (e.g., smart phones, microcontrollers, etc.), facilitate the collection of environmental data by means of participative approaches and directly in the field by means of digital technologies (e.g., digital geological mapping and related references). In regard to participative geoenvironmental data collection, many examples can be reported; some of these are related to civil protection activities, ecological monitoring and post-disaster mapping, such as the post-Fukushima radioactivity monitoring network (https://safecast.org/, accessed on 7 February 2021). Citizen Science approaches could play an active role in increasing transparency in environmental monitoring and improving awareness of environmental issues. ICTs play a fundamental role in all segments of the productive chain of geoenvironmental information, including information retrieval, management and analysis (e.g., [72]). Cloud storage services are fundamental to manage the bewildering quantity of remote sensing data available from space agencies; for example, the private firm Amazon, with the "Amazon Sustainability Data Initiative" (https://registry.opendata.aws/collab/asdi/, accessed on 7 February 2021), manages remote sensing data from multiple sources.
Cloud services play a pivotal role even in the context of computing and participative programming; "Google Earth Engine" for remote sensing (https://earthengine.google.com/, accessed on 7 February 2021) or the "TensorFlow" platform (https://www.tensorflow.org/, accessed on 7 February 2021) for machine learning are examples of how computing power and the possibility of developing algorithms in collaboration with multiple researchers are widening the potentialities of environmental data analysis, but they are also raising concerns regarding free access to data. Corporate ownership and selling of data, especially those related to human activities, and their embedment in algorithmic systems used for the reorganization and automation of labor and policing raise legal, ethical and political concerns [73-75]. Even in the context of geographical information systems (GIS), there is a continuous push, from proprietary as well as open-source solutions, toward WebGIS services and online GIS. Many environmental agencies, research institutions, associations and other entities collect and manage environmental data by means of cloud storage services, generally following various standards on data and metadata (e.g., INSPIRE). Moreover, big data related to human activity and consumption play a key role in studying possible geosphere-anthroposphere interconnections. The complexity and quantity of available geoenvironmental information are ever-growing; this is also accompanied by continuous improvements and diversification of data analysis tools and increasing computing power. However, there is the feeling that these developments have grown much faster than our capability to fully, safely and robustly exploit the available information. In order to "mine" the core information from multiple sources and huge quantities of data, which are not always qualitatively homogeneous, it is often necessary to adopt a big-data perspective and data-mining approaches. In this context, specific strategies for information validation become a key element. Relevant efforts should then be spent to formalize and explicate expert-based choices in the process of retrieving and analyzing data, given that user-based decisions impact many segments of the productive chain of environmental information. A final and perhaps obvious remark is that the dependency on online services for data storage and analysis could be risky, especially if based on infrastructures owned by private firms for which profit is the inherent target, since they can change their policies at any time or they can also fail. Geocomputing and Expert Knowledge Geocomputational methodologies play a key role in the context of sustainability challenges. They are fundamental for the quantitative analysis of the main processes, and their interactions, characterizing the earth system (e.g., [1]). The analysis and modeling of geoenvironmental data are crucial for the detection of early-warning signs of geoenvironmental-system instabilities at local or global scales. Moreover, the importance of geoenvironmental intelligence tasks for economic investments and policy making has grown significantly, adding a 'prescriptive dimension' to the collection and modeling of geodata. This more than ever emphasizes the need for a conscious and transparent use of geocomputational methodologies. Geoenvironmental information is generally exploited by means of supervised or unsupervised learning approaches for achieving two main tasks: data exploration and prediction.
In this discussion, the term "prediction" is used in a broad sense, i.e., the action of evaluating the value of an environmental property or the state of an environmental system at a specific location of the spatiotemporal domain of interest, where information is lacking or incomplete (e.g., [48,49,76,77]). Clearly, prediction and data exploration are two interlinked and complementary tasks, often marked by fuzzy boundaries. In data exploration, the main aim is to find some "interesting" underlying structure which is potentially capable of shedding light on the studied phenomena (e.g., detection of forcing factors) and of guiding the predictive approaches subsequently adopted. The "interesting" structure can be related to multiple aspects, e.g., spatial and temporal auto- and cross-correlation, trends (in space and/or time), periodicities and multiscale analysis (Fourier, fractal, wavelets, etc.), causality relationships, clustering, fractal analysis, tipping points, variable reduction, pattern analysis, etc. In prediction, the main aim is to estimate the value (continuous) or the state (discrete) of an environmental property (or of an ensemble of environmental properties) at "locations" of the spatiotemporal domain of interest where measurements are missing or incomplete. Ultimately, from a practical perspective, one of the most important tasks is to obtain a spatiotemporally exhaustive "mapping", static or dynamic, of the environmental variables of interest in a given spatiotemporal domain. The reconstructed spatiotemporal mapping should be characterized by low uncertainty and should be "realistic", i.e., compatible with the available data and with our expert knowledge. Following this perspective, under the term "predictive" we can include not only explicitly predictive approaches (e.g., spatial interpolators, regression, etc.) but also numerical modeling approaches (e.g., groundwater models). In particular, the adoption of a specific approach depends on the quantity of available data, the complexity of the studied phenomena and the level of knowledge of the involved physicochemical processes. Accordingly, we could classify the predictive approaches in terms of the balance between available data and expert knowledge in influencing the analysis. When data are dominant with respect to expert knowledge, statistical predictive approaches such as geostatistics, Bayesian modeling and machine learning can be adopted [76] (a minimal interpolation sketch is given at the end of this paragraph). In these approaches, the expert knowledge influences the analysis in a semiquantitative way, for example, during the phases of exploratory data analysis and in the selection of critical user-defined settings (e.g., selection of the domain of analysis, selection of a specific anisotropy parameter, etc.). Then, in those settings characterized by a general balance between available data and expert knowledge, the latter codifiable only semantically, the prediction can be performed via a set of expert-based rules. In this typology, the approaches based on fuzzy logic [78] are emblematic.
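As a minimal, concrete instance of the data-dominant end of this spectrum, the sketch below implements inverse-distance-weighted (IDW) interpolation, one of the simplest spatial predictors. The power parameter stands in for the kind of critical user-defined setting (analogous to an anisotropy parameter in kriging) through which expert knowledge enters semi-quantitatively; the data are synthetic and the whole example is illustrative rather than a recommended workflow.

```python
import numpy as np

def idw_interpolate(xy_obs, z_obs, xy_targets, power=2.0, eps=1e-12):
    """Inverse-distance-weighted prediction of z at target locations from
    scattered observations; `power` controls how fast influence decays."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)              # nearer samples weigh more
    return (w * z_obs[None, :]).sum(axis=1) / w.sum(axis=1)

# 50 scattered "pollutant" measurements, predicted on a 20 x 20 grid
rng = np.random.default_rng(1)
xy_obs = rng.uniform(0, 100, size=(50, 2))
z_obs = np.sin(xy_obs[:, 0] / 20) + 0.1 * rng.normal(size=50)
gx, gy = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 100, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_hat = idw_interpolate(xy_obs, z_obs, grid)
print(z_hat.shape)  # (400,): an exhaustive "map" of the variable
```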
In other circumstances, the spatiotemporal density of the data can be too low to describe the true heterogeneity, while, at the same time, the physical-chemical processes governing the studied phenomena are identified and numerically modelable: in this setting, typical for example of groundwater modeling (e.g., [79]), the data are used for calibrating the numerical model, and the spatial fields of the geoenvironmental property of interest can be derived by means of forward modeling (e.g., the dispersion of a pollutant) or by means of inversion (e.g., hydraulic conductivity). Finally, a data assimilation approach can be adopted when a continuous flow of information is available and the physical-chemical processes governing the studied phenomena are identified and numerically modelable. In this setting, typical of meteorology and oceanography, the numerical models are continuously updated as new data flow into the model. The role of expert knowledge becomes particularly intricate in explorative analysis focused on finding interesting structures in data. In some way, this typology of explorative analysis is related to the ancestral human inclination to find an underlying meaning in the surrounding environment. For example, the inference of causation from data in Earth system sciences, including by means of machine learning, has received new attention [80] in the wake of novel trends toward the formalization of causal thought. Such trends are backed by claims that automated causal inferences constitute a breakthrough with respect to earlier vetoes against the derivation of causation from correlation, which led to a general ban on causal explanation from statistics [81]. The difficulty of shifting from an 'epistemology of correlation' to one 'of causation' is not unprecedented, as can be evidenced by important historical developments of the natural sciences. In astronomy, for instance, it took a very long time before a proper celestial mechanics could emerge [82]. This emergence firstly presupposed the collection of 'big observational data' (from Babylonian times throughout the Middle Ages) and, secondly, a shift from predictions based on the recognition of the recurrences of planetary motions to a geometrical 'pattern recognition' (beginning in Greek antiquity, cf. [83,84]). Eventually, causality could enter the arena of mathematical astronomy in modern times, when Kepler and, more maturely, Newton introduced forces as the causes from which the geometrical regularities of celestial physics should be derived. But this causal leap looked like an epistemological break rather than a causal inference. Even today's keenest supporters of causal inference, Pearl and Mackenzie, acknowledge that causes depend on belief assumptions 'beyond the data'. Causal surmises can be confirmed and selected, yet they are not extracted from a hypothesis-free tabula rasa [85]. In the human sciences, the problem is tantalizing. The fathers of modern economics, especially after David Ricardo, already recognized that the source of the wealth of nations rests on labor, whose structuring and organization in specific societal formations depend on variable social and political factors [86]. Predictions based on the determination of historical causality are dubious, since patterns can always be structurally altered by "black swans" that irreversibly change the paradigms to be modeled [87].
Renewed attention to the impact of ideas in the form of political and juridical theories that justify and produce societies' developments (that is, the importance of political and ideological factors over economic and technical ones) is at the basis of a rebirth of historical approaches in economics, in which the importance of natural language as a fundamental complement to mathematical and statistical language has taken center stage [88]. Such epistemological remarks, far from being destructive skeptical arguments, call for a sober recognition of the intrinsic limitations in the modeling of societal phenomena, which depend on the historically contingent character of the reality they map. Geocomputation and Technology The cited approaches have seen growing applicability in recent years due to the increase in data availability, continuous software development and augmented computational power. In the context of software, the evolution of programming languages and programming environments is a key ingredient for the applicability and development of computational approaches. Programming paradigms that go beyond old-fashioned procedural programming, currently based on object-oriented, generic and functional programming, make it feasible to write geocomputational software more easily and efficiently than in the past. Moreover, modern code can be easier to understand and is better suited for participative collaboration and development. In this regard, the availability of open-source solutions and common standards is fundamental. An example is represented by the Python language (https://www.python.org/, accessed on 7 February 2021), used both as a scripting language for automation and customization in domain-specific software (e.g., GIS packages) and as a standalone programming language with its own scientific libraries. Other examples are represented by mathematical or statistical programming environments where the development of new algorithms is straightforward. In this context, the open-source statistical programming environment R [89] is emblematic of the potential of open-source programming in science. Even domain-specific software (free/open-source or commercial) has seen astonishing developments, which can have an impact in the direction envisioned by the SDGs. The possibility of adopting free and open software solutions is fundamental to promote proper (perhaps more 'democratic') environmental management, as for groundwater resources (e.g., [90]). In analogy to what has already been reported in regard to data-source availability, the available set of algorithms and related software seems to grow faster than the capability to select and use the right tools for specific tasks. For example, in the context of statistical predictive algorithms, such as geostatistics and machine learning approaches, the quantity of available software packages and algorithms, and of related papers, is bewildering. It is extremely difficult for an earth scientist, especially at the beginning of her/his career, to find a clear pathway among the multiple options available today. Moreover, the scientific literature may be of little help if the reader does not adopt a critical view; it is not rare to review or read scientifically unsound or at least inaccurate papers, for example those naively applying interpolation methodologies.
Nevertheless, the wide set of available methodologies for data analysis and modeling, including technologies fostering participative approaches, represents an opportunity to develop collective-intelligence approaches. These, improving transparency and plurality of perspectives, are necessary to shed light on the multiple aspects of the earth system and its interactions with the anthroposphere. In order to move safely among the many options available today and the new ones of the future, there is the need to improve many aspects related to geocomputation. First, an intense and generalized demystification campaign should be conducted to clarify the key concepts, the assumptions (often hidden) and the limits of the available methodologies. This is particularly true for approaches that seek to find 'interesting' structures in data, such as tipping points, chaos and causality relationships. Moreover, complex formulas and formal mathematical expressions should always be accompanied by a clear explanation in plain language; specialist terminology should always be explained and jargon should be avoided. In the same Enlightenment spirit, there is the need to highlight connections and analogies between different approaches, especially when the differences are mainly related to the traditions and specificities of the different disciplines. There are multiple examples of this kind, e.g., the connection between kriging and objective analysis (e.g., [91]); the analogy between orthogonal regression and principal components (e.g., [92]); the analogies between autocorrelation analysis in time series and geostatistics (a short numerical illustration of the second analogy is given at the end of this section). These first two targets are fundamental to obtain the third one: selecting the right tools for a specific task, preferring the simpler ones. Then, it is always worth highlighting the essential role of explorative data analysis and of expert knowledge in predictive approaches, even when adopting machine learning algorithms and other supposedly automatic/black-box methodologies. In this context, the main issue lies in transparently documenting how user-based decisions influence the results of the analysis.
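As the short numerical illustration promised above, the sketch below shows the analogy between orthogonal regression and principal components: the first principal axis of a centred two-variable cloud is exactly the total-least-squares line, i.e., the line minimizing perpendicular rather than vertical distances. The data are synthetic and the example is generic, not tied to any package cited in the text.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(scale=0.5, size=200)   # noisy linear relation

# Orthogonal (total least squares) regression via PCA: the first right
# singular vector of the centred cloud is the first principal axis.
X = np.column_stack([x, y]) - [x.mean(), y.mean()]
_, _, vt = np.linalg.svd(X, full_matrices=False)
slope_tls = vt[0, 1] / vt[0, 0]                 # perpendicular distances

slope_ols = np.polyfit(x, y, 1)[0]              # vertical distances
print(f"TLS slope {slope_tls:.3f} vs OLS slope {slope_ols:.3f}")
```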
Data Imbalance Technological developments and growing awareness of geoenvironmental issues are fueling a continuous growth in geoenvironmental data collection. This reverberates in the astonishing imbalance in data quantity, spatiotemporal density and spatial coverage between datasets related to currently or recently monitored phenomena and datasets related to past dynamics. If we focus on the analysis of surface geoenvironmental processes, a sharp and exponential increase in data coverage and spatiotemporal density can be seen starting in the 1970s, corresponding to the beginning of the NASA Landsat program [50]. The more we dig into the past, the less environmental data are available, and the data imbalance becomes more evident. The deterioration in information density, coverage and quality going back in time, which for simplicity is referred to as a "data imbalance", is clearly not new, and it is a recurring curse in many disciplines such as history, archeology and, of course, geology. What is really new is the bewildering explosion of new environmental informative sources and data collection capabilities of the last decades, which are growing day after day. The "bloom" in geoenvironmental data is particularly extreme in the context of earth surface processes; consequently, another extreme data imbalance is also present when dealing with subsurface data, characterized by a sharp deterioration in information moving downward. The data imbalance becomes relevant when studying geoenvironmental dynamics over long time spans (or in 3D), for example in studying temporal trends of specific environmental variables (e.g., atmospheric temperature, sea level, subsidence, etc.). The imbalance is critical when the study of environmental dynamics is oriented to specific tasks, e.g., the analysis of variability, the analysis of extreme events or the detection of causality relationships (e.g., [80]). From the perspective of sustainable development, and given the complex and reflexive relationships between humankind and the geosphere, the data imbalance is a critical issue, and it is not easily surmountable. Earth scientists need to take the data imbalance into careful consideration in their analyses. A favorable condition is that the current capability to obtain a detailed and exhaustive spatial mapping of the environmental variables of interest, even if limited to the earth surface, permits us to reasonably estimate the impact of severe undersampling and/or of a deterioration in data quality (a toy simulation of this idea is sketched at the end of this section). Moreover, it is possible to derive an almost exhaustive and detailed overview of the complexity and dynamicity of many environmental processes, including the presence of abrupt transitions of system state. However, considering human-environment interactions, we have to reflect on how an oversimplified modeling of societal phenomena hinders their very comprehension, as modeling might lead us to forget the discontinuity of human history, which is marked by shifts and breaks that can elude the framework of a given societal-cultural formation and that can therefore escape the possibility of necessary deduction from preexisting conditions.
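As the toy simulation promised above, the sketch below treats an exhaustive synthetic field as a stand-in for a modern, spatially complete map, degrades it to progressively sparser samplings (a stand-in for older, data-poor records), reconstructs the field and measures how the error grows. All numbers are invented for illustration; this is not an established protocol from the cited literature.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
field = np.sin(4 * gx) * np.cos(3 * gy)        # "true" exhaustive map
pts = np.column_stack([gx.ravel(), gy.ravel()])

for n_samples in (500, 100, 20):               # progressively sparser data
    idx = rng.choice(pts.shape[0], n_samples, replace=False)
    recon = griddata(pts[idx], field.ravel()[idx], pts, method="nearest")
    rmse = np.sqrt(np.mean((recon - field.ravel()) ** 2))
    print(f"{n_samples:4d} samples -> reconstruction RMSE {rmse:.3f}")
```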
Earth Science and the Humanities The data imbalance discussed above and the need to analyze geosphere-anthroposphere interlinked dynamics in time amplify the relevance of considering, in addition to well-known geology-related methodologies, historical sources. The derivation of geoenvironmental information from historical records (e.g., documents, paintings, architecture, etc.) is fundamental to reconstruct past geoenvironmental conditions (e.g., local sea levels), the occurrence of peculiar environmental events (e.g., exceptionally cold winters, earthquakes, eruptions) and the changing relationships between humankind and the environment (e.g., landscape engineering). For example, the use of historical records of earthquakes for seismic hazard evaluation is well established (e.g., [93,94]) and relies on the possibility of directly appropriating historical records on the perception of seismic effects. Other examples can be found in the context of geomorphological changes (e.g., [95]), landslides (e.g., [96,97]), volcanic activity (e.g., [98]) and many other geoenvironmental phenomena. The derivation of quantitative proxy data from historical records is not straightforward, although some pioneering works already exist. Camuffo et al. [99] worked on the reconstruction of temperatures in the Mediterranean Sea over 500 years through the combination of more and less recent data derived from instrumental observation and from historical sources for times that preceded modern scientific measurements. Camuffo et al. [100] could also derive evidence of extremely cold winters in the lagoon of Venice from local documentary sources, including not only archival documents but also the visual arts and early printed books. The most daring proposal has been to derive biological proxies for the past sea levels of the lagoon of Venice, from 1350 to 2014, from early-modern depictions of green algae in Venetian canals, and to integrate them with information about past sea levels inferred from the height of the stairs of historical palaces on the main city canal, the Canal Grande [7]. One of the main difficulties in using such sources rests on the correct historical evaluation of the material and cultural contexts of works of art and the uses of architecture. Moreover, not only did scientific concepts change with time, but so did the units of measurement that were once used and the meaning of their referents. To take just one example, it is not easy to translate measurements of river flows in the past if, even after the Renaissance mathematization of the principles of water flow thanks to the Galilean school, the quantity of measured water was referred to the volume of a geometrical construction rather than to the modern concept of flow rate [101,102]. Additionally, the study of historical and archeological records also sheds light on how science, politics and socio-economic factors interacted from the perspective of geoenvironmental policies and adaptation to ever-changing environmental conditions (e.g., [103]). To remain with the Venetian case, archival administrative, technical and political documents could provide a mine of geological and environmental data, provided the correct interpretative methods are employed. Such documents also offer historical cases that help us reflect on geoenvironmental politics. The proper methods, in this case, include archival competence, philological skills and historical training, which are rarely united with a sufficient preparation in the earth and natural sciences. Moreover, the use of digital tools for the extraction of information in the humanities (e.g., for textual analysis or the comparison of corpora of texts) is less developed than in the earth sciences, although the digital humanities is a rapidly growing field of research (e.g., [104]). New Professionalism? The derivation of quantitative geoenvironmental information from historical and archeological records is not an easy task and requires a truly holistic approach, in which earth scientists (e.g., geologists, soil scientists, ecologists, geographers, etc.) work in teams with historians, archeologists, philologists, historians of architecture and philosophers of science. Hence, it is necessary to establish more interdisciplinary research networks and collaborations. A new hybrid profession is needed. Part of its work ought to be devoted to clarifying the historical meaning of scientific categories and their transformations, as well as to clarifying the varying goals that shaped the sciences of the past, which is typically a task for intellectual historians. Most importantly, this hybrid professional figure should bridge different academic cultures and disciplines, thus mediating between different outlooks and different uses of concepts, which sometimes only superficially look identical. For instance, it is not yet clear, from a geo-anthropological perspective, how historical time and geological 'deep' time intersect and co-determine each other from the viewpoint of the disciplines that deal with them.
Problems of geocomputing, linked to the data imbalance and to the problems of integrating historical data, not to mention the modeling of societal data, are highly relevant in politics and decision making. Quantitative abstractions of cultural practices and the tracing of natural processes and human actions have ostensible advantages in terms of management but run the risk of preparing new forms of exploitation and authoritarian politics. In fact, the inclusion of geoenvironmental models in economic computations, although pursued in the name of a "green economy", reconceptualizes and operationalizes natural and cultural phenomena in terms of resources, services and capital (e.g., [105-107]). Far from being mere metaphors, such linguistic uses connected with quantitative abstractions ensure the possibility of economic valorization in the framework of a sort of "hyperrealism" which, at the level of individual and collective psychology, creates the illusion that the profit economy constitutes the insurmountable, naturalized horizon of history and human relations [108]. Conclusions The increasing availability of data and geocomputational tools is continuously amplifying our capability to understand and model the environment. However, we are, maybe luckily, far from relying on purely automatic "meat mincer" approaches capable of searching and assimilating all available environmental data for the problem at hand and outputting the core information. Rather, data exploration, possibly according to a plurality of perspectives, along with expert knowledge, plays a key role in understanding and modeling earth system dynamics. Now more than ever, there is the need for earth scientists characterized by a balanced alchemy of geoenvironmental knowledge and geocomputational capabilities, together with advanced field-related skills. Field interpretation of the geoenvironment, even with the contribution of digital technologies, will continue to play a pivotal role. Present-day geological and environmental challenges require us to look more closely at human history, in particular the history of economics, technology and science. These studies can help dig out environmental data and offer cases of the complex and reflexive dynamics between humans and their environments. Moreover, by strengthening the awareness of the historicity of human society, culture and knowledge, science studies (the wide spectrum of disciplines which reflect on science at philosophical, historical and sociological levels) foster critical thought, which is particularly important in order to find a balance between the political and techno-scientific components that are co-implicated in sustainable development policies. Acknowledgments: The design of this paper is related to a keynote presentation (Trevisani, 2019), "Geocomputing, New Technologies and Historical Analysis: Tools for a Changing Planet", given during the international conference "TerraEnvision 2019: toward sustainable development goals" (https://terraenvision.eu/, accessed on 7 February 2021), held in Barcelona on 2-6 September 2019. Some technology-related considerations were inspired by the various meetings of the Geosciences & Information Technology group (Section of the Italian Geological Society). The authors are grateful to the Max Planck Society for the funding of the Max Planck Partner Group in Venice, The Water City, in order to further investigate the themes of this opinion paper.
The authors would like to acknowledge the blind referees for their comments and suggestions and Jonathan Regier for his valuable support with the final revision.
Common ground for biodiversity and ecosystem services: the "partial protection" challenge New global initiatives require clarity about similarities and differences between biodiversity and ecosystem services. One argument is that ecosystem services capture utilitarian values, while biodiversity captures intrinsic values. However, the concept of biodiversity equally emerges from anthropogenic use values. Measures of biodiversity indicate broad option values, and so provide different information about future uses and benefits. Such differences nevertheless can be the basis for "common ground" for biodiversity and ecosystem services. Systematic conservation planning and related frameworks acknowledge such differences through effective trade-offs and synergies among different values of society. The early work on regional biodiversity trade-offs includes a little-explored aspect that could enhance this common ground. Regional planning here takes into account the "partial protection" of biodiversity provided by some land uses. Common ground will be promoted by better integrating the ecosystem services and biodiversity conservation offered by ecosystems at the "natural end of the spectrum" with the partial protection and other benefits/services provided by more intensively transformed places. Introduction "Biodiversity" and "ecosystem services" increasingly travel together as companion terms. Examples include the new "Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services" (IPBES), the new Strategic Plan of the Convention on Biological Diversity (CBD), and the emerging Global Biodiversity Observation Network (GEO BON). These new initiatives require clarity about the similarities and differences between biodiversity and ecosystem services. Some distinctions naturally emerge from our basic definitions: "biodiversity" refers to living variation, and "ecosystem services" refers to benefits to humans from natural ecosystems. However, biodiversity also has traditional links to benefits/values, and here comparisons with ecosystem services continue to raise important issues. Biodiversity sometimes is characterised as all about intrinsic, non-anthropogenic values, with ecosystem services then providing the links to human well-being. For example, Haines-Young and Potschin [1] argue: "Biodiversity has intrinsic value and should be conserved in its own right. However, the utilitarian arguments which can be made around the concept of ecosystem services and human well-being are likely to become an increasingly central focus of future debates about the need to preserve 'natural capital'". Similarly, Hardy [2] argues: "The idea of ecosystem services allows for acknowledging more than the "intrinsic" value of biodiversity by expanding the breadth of the conservation argument to include the "utilitarian" values of nature". Thus, an argument is that only through ecosystem services do we move beyond biodiversity's intrinsic values to also consider utilitarian values. Common ground A recent statement by Reyers et al. [3] that "the concept of biodiversity emerges from an intrinsic context" echoes earlier studies, including the previous assertion by Reyers and colleagues [4] that "biodiversity and ecosystem services are associated with different values (intrinsic vs. utilitarian)" (see also [5]). However, Reyers et al. [3] do suggest "common ground" based on biodiversity's additional links to ethical, spiritual, and religious values.
They argue that, because these are ecosystem services, conservation of ecosystem services sometimes captures biodiversity and its values (see also [6,7]). In a response to Reyers et al., Faith [8] points out that the concept of biodiversity equally emerges from anthropogenic use values, citing the early calls for conservation of diversity to ensure benefits "for present and future use" [9], and the early references [10] to "option values" (the value of biodiversity in providing uses, often unanticipated, for future generations; see also [11,12]). Thus, in contrast to recent perspectives, there is no requirement to add in ecosystem services considerations in order to build a case for biodiversity conservation based on human-use values. Reyers et al. [13] agree that the concept of biodiversity emerges from anthropogenic values. However, they object to Faith's observation [8] that biodiversity and ecosystem services "may differ in how well they capture current and future uses". Reyers et al. correctly argue that ecosystem services include future uses. However, Faith argues that option values of biodiversity are broad in reflecting unknown benefits, including those from unknown elements or services [14]. In contrast, ecosystem services typically focus on option values related to possible future use of known services (e.g. future timber from a forest area). For example, DIVERSITAS links option value to the "availability of a particular service for use in the future". Broader option values are measured by estimating biodiversity (for discussion see [14,15]). Thus, biodiversity by its nature arguably contributes something additional, something different, concerning potential future uses. Reyers et al.'s [13] conclusion that "some scientists focus on differences while others focus on similarity and common ground" therefore is a concern. It implies that proposing differences is counter-productive to finding "common ground". However, I think any truly useful "common ground" for biodiversity and ecosystem services will build on differences. This is apparent in decision-support frameworks related to systematic conservation planning [16] and "regional sustainability analysis" [17] that seek trade-offs and synergies among the different values associated with biodiversity, ecosystem services, and other needs of society. Part of that common ground framework is now well established. Measures of regional biodiversity are used to identify places with high versus low biodiversity marginal gains ("complementarity" values [16], which vary depending on other allocations in the region). For a given locality, high complementarity, combined with high co-benefits (or "negative costs" [18,19]) and low opportunity costs of conservation, implies priority for conservation over alternative land uses having higher costs and smaller co-benefits (for related work, see [20-27] and the box "Insights from an Australian planning framework for biodiversity and ecosystem services"). Partial protection The early foundations of that regional biodiversity-plus-costs framework [17,28-30] include some little-explored aspects that could enhance the common ground of biodiversity and ecosystem services: here, planning includes land/water uses offering ecosystem services or other benefits, combined with only "partial protection" of biodiversity (implying some lower complementarity value) [17]; a toy numerical sketch of this idea follows below.
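To make the idea concrete, the toy sketch below ranks candidate allocations by complementarity gain per unit cost, letting each site go either to full conservation or to a "partial protection" land use that covers its species with a reduced weight at a lower opportunity cost. It is a deliberately simplified greedy heuristic with made-up data, sketched here only as an illustration; it is not the DIVERSITY-ED or TARGET procedures cited in the text.

```python
import numpy as np

# Rows = sites, columns = species; 1 means the species occurs at the site.
occ = np.array([[1, 1, 0, 0],    # site A
                [0, 1, 1, 0],    # site B
                [0, 0, 1, 1]])   # site C
cost_full = np.array([3.0, 2.0, 4.0])     # opportunity cost, full reserve
cost_part = np.array([1.0, 1.0, 2.0])     # cheaper partial-protection use
w_partial = 0.5                           # reduced protection weight

covered = np.zeros(occ.shape[1])          # protection level per species
choices = {}
for _ in range(occ.shape[0]):
    best = None
    for i in range(occ.shape[0]):
        if i in choices:
            continue
        for label, cost, lvl in (("full", cost_full[i], 1.0),
                                 ("partial", cost_part[i], w_partial)):
            new = np.minimum(1.0, covered + lvl * occ[i])
            score = (new.sum() - covered.sum()) / cost  # marginal gain/cost
            if best is None or score > best[0]:
                best = (score, i, label, lvl)
    _, i, label, lvl = best
    choices[i] = label
    covered = np.minimum(1.0, covered + lvl * occ[i])
print(choices, covered)  # which sites get full vs partial protection
```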
Early examples [17,19,28] illustrate cases where a partial protection option is allocated, and other cases where the non-conservation land use in a given place is preferred over the partial protection option because it maximises regional net benefits (see the box "Partial degrees of protection and regional sustainability analysis"). The Millennium Ecosystem Assessment [31] (MA) highlights this approach in the context of biodiversity policy options: "...an integrated biodiversity trade-offs framework (Faith et al. 2001a, 2001b) [32,33] suggests how such partial protection (for example, from private land) can contribute to the region's trade-offs and net benefits". However, the MA also observes that "The great uncertainty is about what components of biodiversity persist under different management regimes, limiting the current effectiveness of this approach" [31]. As more information of this kind becomes available, case studies should explore applications, and evaluate interesting variants of the partial protection framework. Variants now include extensions to the original DIVERSITY-ED [17,28-30] and TARGET (e.g., [19]) partial protection approaches.
Nitrogen- and Fluorine-Doped Carbon Nanohorns as Efficient Metal-Free Oxygen Reduction Catalyst: Role of the Nitrogen Groups: The search for active, stable and low-cost catalysts for the oxygen reduction reaction (ORR) is crucial for the extensive use of fuel cells and metal–air batteries. The development of metal-free catalysts, instead of platinum-based materials, can dramatically reduce the cost and increase the efficiency of these devices. In this work, carbon nanohorns (CNHs) have been covalently functionalized with N-containing heterocycles by the Tour reaction protocol and tested as metal-free ORR catalysts. The insertion of N-functionalities favored the complete reduction of oxygen to hydroxyl ions, while their absence favored the production of hydrogen peroxide. With the aim of determining the N-species responsible for the ORR activity of CNHs, photoemission and electrochemical measurements were combined. Results suggest that protonated N is the main species involved in the ORR process, facilitating the adsorption of oxygen, with its consequent reduction to neutral hydrogenated N species. Introduction Alkaline fuel cells (AFCs) and rechargeable metal–air batteries (MABs) are next-generation energy devices for clean power generation [1-4]. However, the improvement of their efficiencies as well as the reduction in their costs are crucial for their extensive use, both in stationary and mobile applications, which would facilitate the transition to a more sustainable future. Both devices have in common the electrochemical reaction that takes place at the cathode side during the discharge process, which is the oxygen reduction reaction (ORR) in an alkaline environment. The alkaline character of the electrolyte allows for the use of non-noble metals or even metal-free catalysts, instead of platinum-based materials, which can dramatically reduce the costs and increase the efficiency of the electrochemical devices. Carbon materials have been extensively studied as a new class of metal-free ORR catalysts [5-7]. There exists a wide range of carbon materials with different properties, such as activated carbon, graphite, graphene, fullerene, carbon black, carbon nanotubes (CNTs), etc., and most of them have been tested as ORR catalysts [8-10]. In particular, doped carbon materials have attracted much attention since it has been demonstrated that the doping of carbon materials with more electronegative atoms, such as nitrogen, phosphorus, and sulfur, creates a positive charge density on adjacent carbon atoms that facilitates oxygen adsorption and charge transfer [10-12]. Nitrogen, in particular, has been widely studied due to its suitable atomic size and electronegativity [9,13,14]. Nitrogen-doped carbon materials usually contain different nitrogen species, which are expected to present different ORR activity. The role of these species has already been investigated; however, there is still considerable controversy over which species are involved in the ORR mechanism. In the literature, some groups argue that pyridinic N species are the active ones [13-16], whereas other groups attribute the activity of this type of material to graphitic N species [17-19].
In this work, we have studied carbon nanohorns (CNHs) covalently functionalized with N-containing heterocycles as metal-free ORR catalysts. CNHs are conical carbon nanostructures constructed from an sp² carbon sheet. Their special morphology and graphitic structure provide them with suitable properties for electrochemical applications [20]. However, they have been less investigated than other carbon materials, such as CNTs or graphene [14,21,22]. We have decorated the CNHs with nitrogen functionalities by chemical functionalization through the Tour reaction protocol [23]. As reported by Giambastiani and coworkers, this is an effective method to prepare metal-free carbon-based catalysts with excellent ORR catalytic activity [13]. In addition to the physicochemical and electrochemical characterization of the functionalized carbon materials, here we present an in situ study of the evolution of nitrogen functionalities under ORR conditions in order to shed light on their role and involvement in the ORR mechanism. Our results support that protonated N species facilitate oxygen adsorption and are the ones involved in the actual ORR. Materials All reagents and solvents were purchased from Sigma-Aldrich (Milan, Italy) and used as received if not otherwise specified. Single-walled CNHs were purchased from Carbonium s.r.l. (Padua, Italy) and present a dahlia-type shape with a diameter of 60-120 nm. Electron microscopy evidence of the structure and morphology of the CNHs used in this work can be found in the literature [24-26]. CNH-Py-F. As-purchased CNHs (9.3 mg, 0.77 mmol of C) were dispersed in CHP (7.0 mL) through pulsed sonication for 10 min and transferred into a two-necked round-bottomed flask. A solution of 4-amino-2-fluoropyridine (93.4 mg, 0.83 mmol) in 3 mL of CHP was added, and the mixture was heated to 80 °C under a nitrogen atmosphere. Then, isopentyl nitrite (100 µL, 0.74 mmol) was added carefully. After 15 min of continuous stirring, the reaction mixture was diluted with cold methanol (100 mL) and filtered through a PTFE membrane filter (Fluoropore, 0.22 µm pore size). The collected solid was washed with methanol (4 × 20 mL) and dried under an IR lamp for 15 min. Physicochemical Characterization Each product was dispersed in CHCl₃ (7 mL) through pulsed sonication (5 min) followed by centrifugation (4500 rpm, 15 min, 20 °C). The supernatant was recovered and characterized. Thermogravimetric analysis (TGA) was carried out using a Q5000IR TGA (TA Instruments, New Castle, DE, USA) under nitrogen, with an isotherm at 100 °C for 10 min followed by heating at a 10 °C/min rate up to 1000 °C. DLS measurements were carried out on a Zetasizer Nano S (Malvern Instruments, Malvern, UK), setting the material as polystyrene latex (RI = 1.590, Abs = 0.010) and the measurement angle at 173°, backscatter (NIBS default). DLS analyses were carried out in CHCl₃ at 25 °C (equilibration time = 120 s), performing 3 measurements of 11 runs with the run duration set at 10 s for each measurement, using quartz cuvettes, appropriate to the solvent, with a 1 cm optical path. Kinetic analysis was performed by collecting 60 measurements (11 runs with run duration set at 10 s each) with a 600 s delay between measurements. Raman spectra of samples drop-cast on precleaned glass micro slides (Corning, Corning, NY, USA) and dried under vacuum were recorded using an inVia (Renishaw, Wotton-under-Edge, UK) Raman microspectrometer (50× objective) using the 633 nm line of a He-Ne laser at room temperature with a low laser power (<1 µW).
X-ray photoelectron spectroscopy (XPS) measurements were acquired in a custom-made UHV system working at a base pressure of 10^−10 mbar, equipped with an EA125 electron analyzer (Omicron, Taunusstein, Germany) and an X-ray source with a dual Al-Mg anode. Core-level photoemission spectra (C 1s, N 1s, O 1s, F 1s, and Br 3d regions) were collected in normal emission at room temperature with a non-monochromatized Mg Kα X-ray source (1253.6 eV). Single spectra were acquired using 0.1 eV steps, 0.5 s collection time, and 20 eV pass energy. In order to analyze the single components of the C 1s and N 1s regions, the spectra were separated into chemically shifted components. For the C 1s region, an asymmetrical line shape was used for the sp2 component, whereas symmetrical Voigt functions were used for the sp3 component and the C-O functional groups. For the N 1s region, symmetrical Voigt functions were used.
Electrochemical Characterization
Ex situ electrochemical characterization was conducted in a conventional three-electrode cell using a rotating disk electrode (RDE). A glassy carbon rod and an Ag/AgCl (KCl sat.) electrode were used as counter and reference electrodes, respectively. Measurements were carried out in a 0.1 M KOH solution saturated with Ar or O2 at room temperature. The working electrodes were prepared by drop-casting. A catalyst ink was prepared by mixing 1 mg of catalyst and 10 µL of Nafion dispersion (5 wt.%, Sigma-Aldrich, Milan, Italy) in 500 µL of ultrapure water (Millipore Milli-Q system). An aliquot of 15 µL of the suspension was deposited onto a 3 mm diameter glassy carbon disk. First, cyclic voltammograms (CVs) were recorded in Ar- and O2-saturated electrolyte. Subsequently, linear sweep voltammetries (LSVs) were recorded at different rotation rates at 10 mV s^−1 in O2-saturated 0.1 M KOH. The electron number (n) transferred during the reaction was calculated employing the Koutecky-Levich formalism.
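The electron number quoted later in the Results comes from the slope of a Koutecky-Levich plot. As a rough illustration of that arithmetic, the sketch below fits 1/|j| against ω^(-1/2) and converts the slope into n; the rotation rates, current densities, and the O2 solubility, diffusivity, and viscosity values are illustrative, literature-style assumptions, not the data or constants used by the authors.

```python
# Hedged sketch: estimating the number of electrons n transferred in the ORR
# from RDE data via the Koutecky-Levich equation,
#   1/j = 1/j_k + 1/(B * omega**0.5),  B = 0.62 * n * F * C_O2 * D_O2**(2/3) * nu**(-1/6)
# Rotation rates, currents, and transport constants below are illustrative placeholders.
import numpy as np

F = 96485.0      # C mol^-1, Faraday constant
C_O2 = 1.2e-6    # mol cm^-3, O2 solubility in 0.1 M KOH (assumed literature value)
D_O2 = 1.9e-5    # cm^2 s^-1, O2 diffusion coefficient (assumed literature value)
nu = 0.01        # cm^2 s^-1, kinematic viscosity of the electrolyte (assumed)

rpm = np.array([400.0, 900.0, 1600.0, 2500.0])      # rotation rates
omega = 2.0 * np.pi * rpm / 60.0                    # rad s^-1
j = np.array([-1.1e-3, -1.5e-3, -1.9e-3, -2.2e-3])  # A cm^-2 at one fixed potential (example)

# Koutecky-Levich plot: 1/|j| versus omega^(-1/2); the slope equals 1/B
x = omega ** -0.5
y = 1.0 / np.abs(j)
slope, intercept = np.polyfit(x, y, 1)

B = 1.0 / slope
n = B / (0.62 * F * C_O2 * D_O2 ** (2.0 / 3.0) * nu ** (-1.0 / 6.0))
print(f"Koutecky-Levich slope = {slope:.1f}, estimated n = {n:.2f}")
```

With these made-up currents the fit returns n of roughly two; the same construction, applied to the measured LSVs, is what yields the shift from two- to four-electron reduction discussed in the Results.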
In-Line Photoemission and Electrochemical Measurements
Measurements were performed in an ultrahigh vacuum (UHV) system that consists of two independent UHV chambers, the analysis (XPS) and the electrochemical (EC) chambers, connected through a transfer system. The UHV-EC transfer system, which consists of two manipulators (horizontal and vertical), is connected to the main preparation chamber through a gate valve. The horizontal manipulator is used to transfer the sample from the analysis chamber to the EC chamber, whereas the vertical one allows the sample to be raised to couple it to the electrochemical cell, which is connected to the EC chamber from the top. A custom-made PEEK (polyether ether ketone) cell was used for the electrochemical measurements. A Pt wire was used as counter electrode, and an Ag/AgCl/Cl− (3 M KCl) electrode placed in a Luggin capillary was used as reference electrode. A 0.1 M KOH solution, prepared from high-purity reagents (Sigma-Aldrich) and saturated with Ar or O2, was used as the electrolyte. The electrolyte was pumped into the EC cell through a tubing system using a syringe pump (N-1010, Pump Systems Inc., Farmingdale, NY, USA), which allows for accurate control of the flow. The electrolyte inlet consists of a capillary tube (diameter ca. 0.35 mm) placed in the center of the cell, whereas the outlet is constituted by eight holes (diameter 0.5 mm) placed around the central capillary. Prior to the EC measurements, the tubing system was purged with Ar to remove the oxygen and then it was filled with the electrolyte. All the experiments were carried out at RT using a flow rate of 1 mL min^−1. First, CVs were recorded in a small potential window (−0.6 V to +0.2 V vs. Ag/AgCl) to identify the working potentials and, subsequently, chronoamperometric curves were recorded at those potentials for 1200 s. The potentials studied were −0.1 V, −0.35 V, and −0.6 V vs. Ag/AgCl. After each EC treatment, the sample was transferred back to the analysis chamber to determine the chemical changes induced by the electrochemical work using XPS. Photoemission data were obtained as described above using the non-monochromatized Al Kα X-ray source (1486.7 eV) and using 0.1 eV steps, 0.5 s collection time, and 20 eV pass energy.
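For readers who want to reproduce the kind of peak deconvolution described in the XPS section above (symmetric Voigt components for the N 1s region), a minimal numerical sketch is given below. The binding energies, widths, amplitudes, and the synthetic "spectrum" are placeholders chosen to echo the component positions discussed later in the paper; they are not the measured data, and the authors' actual fitting software is not specified.

```python
# Hedged sketch: decomposing an N 1s spectrum into symmetric Voigt components.
# All numbers (positions, widths, amplitudes, noise) are illustrative assumptions.
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

def n1s_model(E, *params):
    # params: repeated groups of (amplitude, center, sigma, gamma), one per component
    y = np.zeros_like(E)
    for i in range(0, len(params), 4):
        amp, center, sigma, gamma = params[i:i + 4]
        y += amp * voigt_profile(E - center, sigma, gamma)
    return y

E = np.linspace(396.0, 406.0, 400)   # binding energy axis (eV)
guess = [1.0, 398.7, 0.5, 0.3,       # pyridinic N
         2.0, 399.8, 0.5, 0.3,       # hydrogenated pyridinic N
         1.0, 401.2, 0.5, 0.3,       # protonated pyridinic N
         0.3, 402.5, 0.5, 0.3]       # oxidized N
# Synthetic noisy "spectrum" built from the same model, only for demonstration
spectrum = n1s_model(E, *guess) + 0.02 * np.random.default_rng(0).normal(size=E.size)

popt, _ = curve_fit(n1s_model, E, spectrum, p0=guess)
weights = [popt[i] for i in range(0, len(popt), 4)]   # Voigt amplitudes ~ component areas
print([f"{100 * w / sum(weights):.0f}%" for w in weights])
```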
Synthesis and Physicochemical Characterization of CNH Derivatives
Following the previously reported CNH functionalization procedure [26], we explored the possibility of synthesizing two CNH derivatives bearing different pyridyl moieties: CNH-Py-Br and CNH-Py-F (see Scheme 1).
Scheme 1. General strategy for the functionalization of CNHs with pyridine derivatives via diazotization reaction (CHP = 1-cyclohexyl-2-pyrrolidone).
The synthetic approach, based on our previous modifications [27,28] of the Tour reaction [23,29], was carried out in 1-cyclohexyl pyrrolidone (CHP) in the presence of 4-amino derivatives of 2-substituted pyridines and isoamyl nitrite (see experimental details in Section 2). This functionalization approach has the advantage of preventing relevant alterations of the structure and morphology of the basal plane, thus retaining its shape and electronic properties [27].
Thermogravimetric analysis (TGA) of the CNH derivatives (Figure 1) shows an increased weight loss below 400 °C, compared to pristine CNH, due to the decomposition of organic moieties. Functionalization degrees (FD) obtained from TGA measurements [26,30] (FD: CNH-Py-Br = 1.2%; CNH-Py-F = 0.9%) refer to the fraction of all C atoms of the CNH that are functionalized, while only the more exposed atoms can react. The effective functionalization of CNH is confirmed by the concentration of stable solutions in CHCl3 (solubility: CNH-Py-Br = 1.4 mg/mL; CNH-Py-F = 1.1 mg/mL; see Section 2 for details), while pristine CNH did not afford stable solutions given the strong π-stacking interactions established within the individual CNH. Indeed, the organic moieties introduced with functionalization provide solubility to the nanostructure by limiting the tendency of CNH to aggregate.
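For orientation only, an FD value of the kind quoted above can be related to a TGA weight loss through a simple mole-ratio estimate. The weight-loss fraction and the molar mass of the grafted moiety in the sketch below are assumed, illustrative numbers; the authors' actual FD calculation follows refs. [26,30].

```python
# Hedged sketch: converting a TGA weight loss into a functionalization degree (FD),
# i.e. the fraction of CNH carbon atoms carrying a grafted moiety.
# Both input numbers are illustrative assumptions, not the values measured in this work.
M_C = 12.011        # g mol^-1, carbon
M_moiety = 96.1     # g mol^-1, e.g. a fluoropyridyl fragment (assumed)
weight_loss = 0.07  # fraction of sample mass lost below ~400 degC (assumed)

mol_moiety = weight_loss / M_moiety        # mol of grafted groups per gram of sample
mol_C = (1.0 - weight_loss) / M_C          # mol of CNH carbon per gram of sample
FD = 100.0 * mol_moiety / mol_C            # percent of C atoms functionalized
print(f"Estimated functionalization degree: {FD:.1f}%")
```

With a weight loss of about 7% and a ~96 g/mol fragment, this estimate lands near 1%, i.e. in the same range as the FD values quoted above, which is only meant to show the order of magnitude involved.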
Raman spectra of pristine and functionalized CNHs are reported in Figure 2. The ratio between the intensities of the D (1320 cm^−1) and G (1600 cm^−1) bands is almost unaffected (D/G: CNH = 1.25; CNH-Py-Br = 1.23; CNH-Py-F = 1.15), indicating that the native structure of CNH is substantially preserved upon functionalization. In this sense, we can infer that the functionalization is high enough to improve dispersibility in liquid media, but also low enough to likely preserve the electronic properties of the pristine carbon nanostructure.
DLS analysis (Figure 3) provides an estimate of the solvodynamic diameter (SD) of the particles present in solution. SD values (CNH-Py-Br: 172 nm; CNH-Py-F: 88 nm) are compatible with the dimensions of functionalized CNHs [29] and suggest, in the case of CNH-Py-Br, the possible formation of a multilayered aryl coating expanding the size of the nanostructure, as previously observed with CNTs [27]. A radical process involved in the diazotization reaction may be responsible for both the growth of this coating and the loss of bromine, as suggested by XPS measurements (vide infra).
The introduction and the nature of nitrogen, fluorine, and bromine species were investigated using XPS (Figure 4). Figure 4a shows the successful introduction of N species in both the CNH-Py-Br and CNH-Py-F samples. The analysis of the N 1s region shows that, in addition to pyridinic N species at 398.7 eV, another three components at 399.8 eV, 401.2 eV, and 402.5 eV are also present. Unexpectedly, the main component is the one at 399.8 eV, which could be attributed to azobenzene bonds formed through a diazo coupling reaction, as already observed for products of the Tour reaction [27]. However, it is more likely to assign this component to hydrogenated pyridinic N species formed under reaction conditions [31]. The component at 401.2 eV can instead be attributed to protonated pyridinic N species [31]. These results therefore suggest the sole presence of pyridinic N species (hydrogenated, protonated, or not), confirming the success of the functionalization process. Finally, the small amount of oxidized N species (<5 at.%) can be attributed to the oxidation of N species in contact with air [31].
The presence of the halogen was only confirmed for the CNH-Py-F sample, as shown in Figure 4b. The analysis of the F 1s region suggests two different fluorine environments. The component at lower binding energy can be associated with the 2-fluoropyridine moiety, while the one at higher binding energy is ascribed to organic fluorine [13]. On the contrary, bromine was not detected in the CNH-Py-Br sample. This result could be attributed to radical addition mechanisms during the functionalization process, which could induce the breakage of the carbon-bromine bond.
Electrochemical Characterization
The pristine and functionalized CNHs were tested as metal-free catalysts towards the ORR. Figure 5a-c show the results obtained for the three materials. In Ar-saturated electrolyte, the currents observed were associated with the double layer of the carbon material, i.e., the charges accumulated on the electrode surface due to chemical and/or electrical interactions. However, in the presence of oxygen, a sharp negative current at around −0.3 V vs. Ag/AgCl, attributed to the reduction of oxygen, was observed for all the materials. This value is similar to that reported in the literature for carbon materials [13,14], indicating that these materials are active towards the ORR. No differences in the onset potential were observed for the different functionalized samples (Figure 5g), although higher current densities were delivered by the functionalized materials, suggesting that the functional groups have an important effect on the activity of the materials.
In order to elucidate whether the functionalization has any effect on the ORR mechanism, LSVs at different rotation rates were performed in O2-saturated electrolyte. The results are reported in Figure 5d-f. By applying the Koutecky-Levich analysis, the number of exchanged electrons was determined (Figure 5h). Pristine CNHs mainly reduced oxygen to hydrogen peroxide, since they show a value of n around two in the entire potential window studied. The functionalization of the CNHs favored the complete reduction of oxygen to hydroxyl ion, as deduced from the increase in the n values from two to four. No differences were observed for the two functionalized samples, suggesting that the change in the ORR mechanism from two to four electrons can be mainly attributed to the N species, and that the presence of fluorine does not have a significant effect.
In-Line Photoemission and Electrochemical Measurements
In order to investigate the role of the functional groups and the active species involved in the ORR, we combined XPS and EC measurements to determine the changes undergone by the CNH-Py-F sample under ORR conditions. First, CVs in Ar- and O2-saturated electrolyte were performed in order to select the potentials of interest (Figure 6a). Three potentials were selected: −0.10 V (pre-catalytic conditions), −0.35 V (beginning of catalysis), and −0.60 V (catalytic conditions). Then, the sample was polarized at those potentials (Figure 6b), as detailed in Section 2.5. After each electrochemical treatment, the sample was analyzed by XPS.
Figure 6c shows the analysis of the N 1s photoemission line. The comparison of the spectra before and after the addition of Nafion shows the presence of a new component associated with N-F interactions, as well as an increase in the protonated pyridinic component and a decrease in the pyridinic one. This result suggests an interaction between the N groups of the sample and Nafion. In particular, it has already been reported that pyridinic groups could preferentially interact with the F atom of Nafion, which would explain the decrease in this component and the increase in the N-F and protonated ones. This hypothesis agrees well with the data reported in the literature for the fluorination of N-doped carbon materials, where it has been demonstrated that the component at around 400.4-401.2 eV corresponds to the attachment of a fluorine atom in the meta position to a pyridine-like nitrogen atom [32].
Under precatalytic conditions, at −0.10 V, no significant changes were observed, suggesting that the sample is stable under these conditions. Under catalytic conditions, on the contrary, as the potential decreased, the component associated with the NOx groups decreased, since they are reduced under ORR conditions. In addition, the hydrogenated N species (399.8 eV) increased, whereas the protonated ones (401.2 eV) decreased. The component associated with the N-F interaction remained unmodified over the whole range of potentials studied. The analysis of the N 1s region is reported in Table 1.
In the literature, pyridinic N species have mainly been associated with the activity of nitrogen-doped carbon materials [15,16]; however, some other groups attribute the activity of this type of material to graphitic N groups [17][18][19]. Therefore, there is still considerable controversy in the literature in this regard.
Our results indicate that the protonated N species are reduced under ORR conditions to neutral NH species when oxygen is adsorbed on an adjacent carbon atom. This reduction explains the decrease in the protonated component (401.2 eV) and the increase in the hydrogenated one (399.8 eV). Therefore, the presence of protonated species favors the adsorption of oxygen, which is the initial ORR stage. The adsorption of oxygen on a carbon atom adjacent to the protonated N species is accompanied by the simultaneous reduction of such species, as suggested in the literature [33].
Conclusions
In this work, we synthesized N-doped carbon nanostructures by functionalizing CNHs with pyridine derivatives, bearing respectively a fluorine or a bromine atom in the ortho position with respect to nitrogen, through a diazotization reaction. According to XPS analysis, bromine was lost during the functionalization process, while fluorine was retained. Three different pyridinic species (pyridinic, hydrogenated pyridinic, and protonated pyridinic N species) were present in the functionalized materials. The functionalized CNH derivatives were tested as metal-free ORR catalysts and compared to pristine CNHs. The effective N-doping achieved in both derivatives enabled the complete reduction of oxygen affording hydroxyl ion, while hydrogen peroxide evolution was obtained with pristine CNHs. From the combined XPS-electrochemistry study, we found that protonated pyridinic N is the main N species involved in the ORR mechanism for our CNH derivatives, favoring the adsorption of oxygen on one adjacent carbon atom. However, the presence of fluorine did not induce any difference in the catalytic behavior. The participation of protonated pyridinic N species in the ORR process induces their reduction to hydrogenated pyridinic N species.
Figure 1. Overlay of the thermograms of pristine CNH (black), CNH-Py-Br (red), and CNH-Py-F (blue) in a nitrogen atmosphere.
Figure 4. (a) N 1s XPS region of the CNH-Py-Br and CNH-Py-F samples, and (b) F 1s XPS region of the CNH-Py-F sample. The deconvolution into single chemical components is included in (a,b).
Figure 5. (a-c) Cyclic voltammograms recorded in Ar- and O2-saturated 0.1 M KOH; (d-f) linear sweep voltammograms recorded at different rotation rates in O2-saturated 0.1 M KOH; (g) comparison of the ORR activity at 1600 rpm; and (h) number of electrons exchanged as a function of the potential determined by the Koutecky-Levich analysis for the pristine CNH (green), CNH-Py-Br (red), and CNH-Py-F (blue) samples.
Figure 6. (a) Cyclic voltammograms recorded in Ar- and O2-saturated 0.1 M KOH of the CNH-Py-F in the XPS-EC cell; (b) current-time curves obtained at the different potentials indicated in (a); and (c) deconvolution of the N 1s photoemission line into single chemical components before and after the EC measurements.
Table 1. Analysis of the single chemical components of the N 1s photoemission line for the CNH-Py-F sample before and after the EC measurements.
2023-07-12T05:26:14.872Z
2023-07-08T00:00:00.000
{ "year": 2023, "sha1": "115c4f311ba3c1775a258f1eb06a3392bc961506", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2571-9637/6/3/15/pdf?version=1688802110", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "580ca0453df2aea335f0828e85343a1eea0b3a27", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
263109981
pes2o/s2orc
v3-fos-license
Comparing the effect of transcutaneous electrical nerve stimulation and massage therapy on post laparoscopic shoulder pain: a randomized clinical trial
Background: Shoulder pain is a common clinical problem after laparoscopic surgeries. The use of non-pharmacological massage and transcutaneous electrical nerve stimulation (TENS) as an adjunct to routine treatment is increasing to provide optimal pain relief. Therefore, we aimed to determine the effect of TENS and massage therapy on post laparoscopic shoulder pain (PLSP).
Methods: This study was conducted on 138 patients who underwent laparoscopic cholecystectomy. Patients were randomly divided into three groups: massage plus conventional pharmacological treatment (n = 46), TENS plus conventional pharmacological treatment (n = 46), and conventional pharmacological treatment alone (n = 46). Massage and TENS were performed three consecutive times after the patients regained consciousness in the inpatient wards. The intensity of shoulder pain was evaluated using a visual analog scale before and 20 min after each treatment.
Results: Both massage therapy and TENS led to a significant reduction in the intensity of PLSP compared to the control group at all three measured times (p < 0.001). However, no significant difference was observed between TENS and massage at any of the three time points.
Conclusions: This study's findings demonstrated that massage and TENS techniques could reduce PLSP.
Trial registration: Registered in the Iranian registry of clinical trials (www.irct.ir) on 05/02/2022 with the following code: IRCT20200206046395N1.
Introduction
Laparoscopic cholecystectomy is the standard procedure in gallbladder surgery [1]. Although laparoscopy has advantages over open surgery, such as better cosmetic results, rapid discharge, and less pain at the incision site, post laparoscopic shoulder pain (PLSP) is one of the most common complications [2]. The prevalence of PLSP varies from 30 to 90%, and this relatively high rate is a challenging issue [3]. Although the underlying cause of PLSP is unknown, tissue manipulation, diaphragmatic pressure due to pneumoperitoneum, and carbonic acid production are possible causes [4].
Various approaches have been investigated to relieve PLSP following laparoscopic cholecystectomy. In some studies, different surgical techniques, drainage, respiratory approaches [5], and drugs such as lidocaine [2], duloxetine [6], and clonidine [7] were used. However, these studies are few, their results are inconclusive, and some findings are controversial.
Multifaceted methods for pain relief are essential due to the complex pain mechanism after laparoscopic cholecystectomy [8]. Non-pharmacological methods used for these purposes include acupuncture, music, rhythmic breathing, relaxation, meditation, TENS, and massage [9].
"Therapeutic massage is the manipulation of the soft tissue of whole body areas to bring about generalized improvements in health, such as relaxation or improved sleep, or specific physical benefits, such as relief of muscular aches and pains" [10]. It is a non-invasive, easy-to-apply method used in various clinical centers to improve the quality of surgical care [11]. The benefits of massage in pain relief are supported by multiple studies on colorectal surgery [12], coronary artery bypass grafting [13], cesarean section [14], and cholecystectomy [15].
In addition, the use of transcutaneous electrical nerve stimulation (TENS) as a strategy in clinical interventions for analgesia is increasing, owing to unique benefits such as ease of use, efficiency, low cost, and low risk. TENS uses a battery-powered device and skin electrodes to deliver alternating current at an adjusted frequency and pulse width for various therapeutic purposes [16]. TENS has been useful for postoperative analgesia in hernia repair surgeries [17], arthroscopy [18], gynecological laparoscopy [19], and liposuction [20].
Although various techniques have significant effects in reducing PLSP, to our knowledge no previous research has compared these two interventions, massage therapy and TENS, for the relief of shoulder pain after laparoscopic cholecystectomy. Considering the 63% prevalence of shoulder pain after laparoscopic cholecystectomy and the increasing number of patients who complain of PLSP [21], it is necessary to ensure adequate pain management, because insufficient and poor management leads to delayed recovery, adverse clinical outcomes, and increased treatment costs [22]. Therefore, we aimed to determine the effect of TENS and massage therapy on PLSP. We hypothesized that massage and TENS could reduce PLSP in the first hours after cholecystectomy.
Design
This study is a prospective, randomized, single-blind, parallel controlled, triple-arm clinical trial. It was conducted on 138 patients with gallstones treated with laparoscopic cholecystectomy at Imam Reza Hospital, affiliated with Kermanshah University of Medical Sciences, Iran. Eligible patients were selected from February 2022 to August 2022. They were randomly divided into three groups: massage (A), TENS (B), and control (C). The ethics committee of the Vice-Chancellor for Research at Kermanshah University of Medical Sciences approved the study (Code: IR.KUMS.REC.1400.757). It was registered in the Iranian Registry of Clinical Trials with identification number IRCT20200206046395N1 (registration date: 05/02/2022).
Inclusion and exclusion criteria
The inclusion criteria for patients were:
• Undergoing laparoscopic cholecystectomy surgery.
• Visual Analogue Scale (VAS) shoulder pain score at baseline (within 3 h after surgery) ≥ 3.
• Class I or II physical status based on the American Society of Anesthesiologists (ASA) classification.
• No history of mental, motor, or cognitive disorders according to medical records.
• Lack of shoulder pain before surgery.
The exclusion criteria were:
• Age under 18 years or over 60 years.
• Addiction to painkillers and sedatives.
• Ulcers, phlebitis, trauma, or arthritis in the massage area.
• Contraindications to TENS, such as pregnancy or an internal pacemaker.
Sample size calculation
After a pilot study on 30 participants (10 in each group), the minimum number of samples was identified based on a 10-point pain scale 4 h after cholecystectomy, using the means and standard deviations (3.83 ± 1.94 in the control group and 5 ± 1.67 in the intervention group). By accepting α = 0.05 and a power of 80%, the standard sample size for each group was calculated as 38 patients (114 in total). Considering a 20% drop-out rate, 46 participants were included in each group (138 patients in total) (Fig. 1).
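The per-group sample size quoted above can be cross-checked with the standard two-sample formula for comparing means. The short sketch below reproduces the figure of 38 patients per group from the pilot means and standard deviations; pooling the two pilot SDs is an assumption, since the authors' exact formula is not stated.

```python
# Hedged sketch: reproducing the per-group sample size from the pilot data
# (mean pain 3.83 +/- 1.94 vs 5.00 +/- 1.67), alpha = 0.05 (two-sided), power = 80%.
# Pooling the two pilot SDs is an assumption; the authors' exact formula is not reported.
from scipy.stats import norm

m1, s1 = 3.83, 1.94
m2, s2 = 5.00, 1.67
alpha, power = 0.05, 0.80

z_a = norm.ppf(1.0 - alpha / 2.0)    # ~1.96
z_b = norm.ppf(power)                # ~0.84
sigma2 = (s1 ** 2 + s2 ** 2) / 2.0   # pooled variance (assumed)
delta = m2 - m1

n_per_group = 2.0 * (z_a + z_b) ** 2 * sigma2 / delta ** 2
print(round(n_per_group))            # ~38 patients per group
```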
Randomization and blinding
After evaluating the admitted patients based on the inclusion and exclusion criteria and completing the informed consent form, they were placed in the massage, TENS, or control group using a random block size of 6. A randomization sequence was generated using a random number generator. Twenty-three opaque envelopes and 138 cards were created for the random allocation concealment process. The envelopes were opened at the same time as the individuals were assigned to the groups. Before random sampling, neither the researcher nor the participant knew to which group each individual would belong. Concealment was also maintained using this method. Participants were aware of the intervention, but the data collector and the statistical analyst were not. In addition, the form related to the baseline assessments was completed before assigning each person to a group. The participants were kept separated and had no contact, to prevent bias.
Pain assessment
Patients were asked to rate their pain in the shoulder area by marking a cross on a 10 cm horizontal VAS. It is one of the standard tools for measuring pain in studies with a clinical approach. In the study of Al-Ghadir et al. [23], the reliability of this tool for pain, assessed by the intra-class correlation coefficient, was reported as 0.97, confirming it as one of the most reliable tools for measuring pain. On this scale, zero indicated no pain and ten indicated unbearable pain. Pain intensity in this study was categorized into four groups: 0 = painless, 1 to 3 = mild, 4 to 7 = moderate, and 8 to 10 = severe pain.
Intervention
On the morning of surgery, each patient was taught during a training session, using understandable and straightforward sentences, how to rate pain on the VAS. After transferring patients to the postoperative ward and recovery of consciousness, participants in the intervention groups received TENS or massage in addition to conventional pharmacological treatment. No intervention was applied to the control group, who only received conventional pharmacological treatment. Acetaminophen (paracetamol) was prescribed at 15 mg/kg/dose every 8 h according to our standard organizational protocol to relieve postoperative pain. In addition, patients received intramuscular pethidine 25 mg/mL at regular intervals in case of more severe pain.
Both the massage therapy techniques and TENS were performed at regular intervals of 4, 8, and 12 h after surgery by a trained individual under the supervision of a physiotherapist. The interventions lasted 30 min each, in a quiet environment away from noise (3 times, 90 min in total).
In addition, to avoid bias, in all groups the researcher attended at the mentioned times during the study and asked the participants to lie on the bed for 30 min in a supine position with the head of the bed raised to 30 degrees.
Massage
The massage was performed in the first intervention group using the following protocol. The masseur washed his hands thoroughly and dried them with a towel. He applied massage to the patient's dominant hand using kneading, friction, and petrissage techniques. The wrist was held in one hand, and with the other hand, the massage started with direct pressure from the base of each finger and continued circularly to the tip. The palm was kneaded with C-like movements and firm pressure in alternating directions, then the back of the hand was stretched with forward movements from the wrist to the base of each finger. Around the wrist bones, the massage continued with circular movements of the thumbs, clockwise and then counterclockwise. While the patient's arm was lying flat on the bed, the therapist's palm was used for upward movements from the wrist to below the elbow and downward movements from the elbow to the wrist. Finally, the massage ended with gentle twisting movements of the forearm (Fig. 2).
Patients were asked to inform the researcher of any risk factor, to avoid any problem during the massage therapy. After the first set of massage, and ensuring the patient's comfort, the second set was performed similarly on the non-dominant hand, without using any special equipment. The massage procedure followed a coherent program and was continued with the appropriate rhythm and pressure.
Petrissage massage: applying direct pressure slowly and rhythmically with the fingertips.
Friction massage: rubbing small areas in a circle, with the masseur pressing the area with the front of the fingertips or the front of the palm.
Kneading massage: similar to twisting, with the compression direction changed in succession [24].
TENS
Patients in the second intervention group received TENS. Initially, the researcher inspected the skin of the scapula and shoulders to ensure that no foreign objects obstructed the attachment of the four electrodes (5 cm × 10 cm). Then the electrodes were placed on the skin in the area of the trapezius muscle and scapula, on both sides of the shoulders, parallel to the spine (Fig. 2). After placing the self-adhesive electrodes 5 cm apart in these areas, a TENS current with a frequency of 150 Hz and a pulse width of 75 microseconds [25] was applied. During the intervention, the intensity of the TENS device was gradually increased by the researcher to the highest level at which the tingling sensation was maximal without causing discomfort or muscle contraction (the maximum sensory threshold below pain).
Data collection
Participants' data, including demographic and clinical characteristics, were collected using a two-part form with multiple-choice questions.
In its first part, age, sex, education, and previous history of abdominal surgery were recorded one day before the operation.
In the second part, duration of surgery, duration of anesthesia, the amount of analgesia used during the intervention, and the severity of PLSP were assessed by the outcome assessor, who was unaware of the type of intervention and the randomization. According to the beginning of the first, second, and third interventions at 4, 8, and 12 h after cholecystectomy, pain was recorded six times: T1: 20 min before the start of the first intervention; T2: 20 min after the first intervention; T3: 20 min before the start of the second intervention; T4: 20 min after the second intervention; T5: 20 min before the start of the third intervention; T6: 20 min after the third intervention.
Statistical methods
Data analysis was conducted using SPSS 25.0 software. In the present study, data normality was confirmed using the Kolmogorov-Smirnov test. One-way analysis of variance (ANOVA) was used to compare quantitative variables among the three groups. The variables of gender, education, and history of previous abdominal surgery were evaluated with the Chi-square test. In each group, repeated-measures ANOVA was used to compare the trend of changes in the severity of PLSP at 4, 8, and 12 h after surgery. A p-value < 0.05 was considered significant.
Results
This study was conducted on 138 patients with a mean age of 43.19 ± 11.3 years after laparoscopic cholecystectomy (Fig. 1). The majority of them were women (68.8%). Table 1 shows no difference between the demographic and clinical characteristics of the participants in the massage, TENS, and control groups (p > 0.05).
The results of the one-way analysis of variance are summarized in Table 2 and Fig. 3. Levene's test confirmed the assumption of homogeneity of variances (p > 0.05). Before the interventions, the results showed that the mean PLSP was not significantly different among the groups. After the interventions, the level of PLSP in the massage therapy, TENS, and control groups decreased at all measured time points, and the differences were statistically significant (p < 0.001).
The mean difference in the PLSP decrease in the three groups before and after the interventions was significant (p < 0.001) (Table 3).
Comparing the groups two by two regarding changes in PLSP intensity showed that the effect of massage therapy and TENS was greater than that of the control group (p < 0.001). However, there was no significant difference between the two interventions, massage therapy and TENS, at any time (Table 4). In addition, Mauchly's test confirmed the assumption of sphericity of the data. ANOVA with repeated measurements showed a significant difference between the three groups in the average intensity of PLSP after the intervention. Hence, the relief of pain in the intervention groups was greater than in the control group (p < 0.001) (Table 2).
Discussion
The present study evaluated the effectiveness of classical hand massage in reducing PLSP compared to TENS. To our knowledge, no previous research has compared these two interventions to relieve PLSP. Therefore, the current randomized trial was the first such study following laparoscopic cholecystectomy. Based on our findings, TENS significantly reduced PLSP when added to multimodal treatment methods. In other studies on different types of surgery, the effect of active TENS has been compared to placebo [18,20,25], no treatment (control) [17,26], and pharmacological drugs [19,27], and significant pain relief was identified in the active TENS group.
In a previous study, Asgari et al. [27] compared the effect of TENS with fentanyl on reducing PLSP in patients undergoing laparoscopic gynecologic surgery who had spinal anesthesia. They indicated that in the short-term evaluation, five minutes after the fentanyl intervention, there was a greater effect than with TENS. Nevertheless, TENS was more effective after 30 min. Although their overall results showed no inferiority of fentanyl to TENS, both methods reduced PLSP.
In line with our study, Borges et al.
[25] evaluated the effect of active TENS and placebo TENS compared to a control group on pain after laparoscopic cholecystectomy. The authors found that active TENS for 30 min significantly reduced pain at the umbilical, subcostal, epigastric, and abdominal incision sites compared to placebo TENS and the control group. This study showed the superiority of active TENS over placebo TENS and the control group that did not receive any intervention. Also, Platon et al. [19] reported the effect of TENS on pain after laparoscopic gynecological surgery. One group received TENS and the other group received opioids; the results of this study showed significant pain relief in both groups after discharge from the recovery ward and hospital.
In another study, 52 patients undergoing inguinal hernia surgery were divided into active TENS and control groups (26 people in each group) to investigate the effects of TENS on postoperative pain. The intervention period was 24 h, with TENS applied five times after the operation for 30 min each. In the experimental group, TENS was used with a frequency of 100 Hz and a pulse width of 100 microseconds. In the other group, TENS was applied without electrical stimulation. Similarly, the results indicated a significant reduction in pain in the active TENS group [17].
However, in some studies, no evidence confirmed the effect of TENS on pain. In this regard, Kurata et al. [28] investigated the effects of TENS on pain after a cesarean section. In this clinical trial, 180 women were randomly divided into three groups: active TENS, placebo TENS, and control. The results showed that pain after the cesarean section was not significantly different between the groups. The findings of this study did not confirm our results. Differences in the target population may be the reason for this difference. Also, in three other studies, no effect of TENS on pain at rest (static pain) was observed after abdominal surgery [29], hysterectomy [30], and hip fracture [31]. The main reasons for this may be differences in sample size, TENS application method, and follow-up period. Although our findings were consistent with only a limited number of previous studies, the difference in surgical procedures and other factors influencing the perception of pain intensity should be considered.
We also found that classic hand massage significantly reduced PLSP. The present finding can improve our understanding of the value of massage therapy techniques for pain.
In a systematic review and meta-analysis, evidence was presented that short-term or long-term massage is an effective method for reducing shoulder pain [32]. Consistent with our study, a case report by Zerkle and Gates (2020) examined the effect of 25 min of massage after laparoscopic abdominal surgery and reported that massage therapy significantly reduced PLSP [33]. Similarly, Sözen and Karabulut (2020) found that a 30-minute hand massage relieved pain after laparoscopic cholecystectomy [15].
In another study, 105 patients with rheumatoid arthritis were divided into three groups (massage, reiki, and control) to investigate the effects of hand massage and reiki on pain. The interventions were repeated six times, each time for 30 min. Similar to the current study, the results indicated a significant reduction in pain in both groups [34].
Rasooli et al.
[14] used 15-minute hand massage techniques to relieve headaches after cesarean section, and it was found that the VAS score decreased significantly in the hand massage group. Their results are consistent with the present findings.
In two clinical trials on patients undergoing heart surgery, examination of the effect of hand massage on postoperative pain showed that, in both studies, massage significantly reduced patients' pain compared to those who received only routine care [35,36].
In the present study, there was no significant difference in the dose of pethidine used between the three groups. Contrary to the current findings, some studies reported that massage and TENS reduced analgesic use compared to control or placebo groups [15,37]. These differences can be attributed to differences in samples, pain intensity, methods, and study designs.
Several mechanisms have been proposed by which massage therapy and TENS relieve pain [38,39]. According to the gate control theory, thick sensory fibers (A-β) stimulated by various factors such as massage and TENS are faster than the thin fibers (A-δ and C) that transmit pain. These two modalities stimulate mechanoreceptors and thick sensory fibers and eventually slow down the transmission of pain signals to the brain. Also, by stimulating thick sensory fibers, pain-inhibiting neurons are activated at the level of the dorsal horn. The pain is minimized by reducing the transmission rate of the projection neurons [40]. In addition, it has been shown that massage therapy and TENS can effectively relieve pain through the release of endogenous opiates [38,39].
Limitations
Despite the strengths of the present study, our research has the following limitations:
1. Although the surgical procedure and anesthesia were the same for all patients, the patients underwent surgery under the supervision of different anesthesiologists and surgeons.
2. There was insufficient evidence on the best time to start the TENS and massage interventions to improve shoulder pain, which may affect the results somewhat.
3. Shoulder pain was assessed in all patients only at rest, and no separate pain measurement was performed at rest and in motion.
4. The primary source of patients' pain (incision or shoulder pain) was not assessed when they requested analgesia.
5. The effect of the interventions on patients' shoulder pain was not assessed over a long period and was limited to the initial hours after surgery.
6. Finally, the sample size was small, and it is unclear whether the findings would differ if the interventions were used in multiple centers, in different environments, and with larger sample sizes.
Conclusion
It seems that massage therapy and TENS can be effective in relieving PLSP. Considering that shoulder pain is often expected after laparoscopy, the frequency of this clinical problem indicates the need to pay more attention to this matter. It is recommended to use repeated massage therapy and TENS with a longer duration in patients having shoulder pain after other laparoscopic surgeries, so that their effect on the amount of analgesia used by patients can be assessed.
Fig. 3. Mean scores of shoulder pain severity after cholecystectomy.
Table 1. Demographic and clinical characteristics of participants in the massage, TENS, and control groups.
Table 2. Comparison of mean PLSP intensity scores in the massage, TENS, and control groups. *ANOVA; **repeated-measures ANOVA.
Table 3. Comparison of the mean difference in shoulder pain intensity scores before and after the intervention in the massage, TENS, and control groups. *One-way ANOVA.
Table 4. Two-by-two comparison of groups regarding shoulder pain intensity at 4, 8, and 12 h after laparoscopic cholecystectomy. *Post hoc analysis using Tukey's method.
2023-09-28T13:20:27.397Z
2023-09-28T00:00:00.000
{ "year": 2023, "sha1": "b67bd3ed3346a638a71701ca7148db2e0bcff661", "oa_license": "CCBY", "oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/counter/pdf/10.1186/s12891-023-06905-w", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7c898dcb14a7029c2ab24bf5e719e4bcbef153c9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234573098
pes2o/s2orc
v3-fos-license
Roundtables on Performance Research, Developing Cultural Ecologies, and Artistic Research Networking in the Asia-Pacific
Three sessions of international and local participants from a July 2019 conference created active ecosystems which generated living examples of intercultural improvisation, performance research, cultural ecologies, and artistic research in Thailand. Summarized and assessed in this article, these sessions revealed some of the first fruits of Thailand's work in these areas through engagement with other practitioners in the region. Besides offering creative improvisation among Thai artists and artist-centered critical assessments of their work, the article captures active thinkers seeking to reimagine the "festival" format for performance research, and it seeks ways to continue future regional collaboration in artistic research. The article embodies the ecological aspects of live collective thinking in the arts.
The conference "From Performance Research to Cultural Ecologies: Creating Sustainable Artistic Communities" was held on July 19-20, 2019 at Chulalongkorn University in Bangkok, the first of its kind in Thailand. Beyond featuring more than a dozen presentations by international and Thai scholars on new modes of creative research for communities, the conference included afternoon roundtables on each of its two days and a follow-up networking session on the morning of July 21. These roundtables focused on the underlying premise of the conference, namely that performance research works best when it recognizes performance as part of specific cultural ecosystems. Using this premise, the conference explored how research in the performing arts could generate new creative synergies and social innovation for local urban communities. In doing so, the roundtables identified key elements of the rationale for the projects central to Chulalongkorn University's new research cluster in the arts and culture begun in late 2018. The interactions of artists and researchers on projects involving experts from different backgrounds enable the production of new knowledge, skills, and social value.
To organize this new thinking on how to ground performance in knowledge to benefit participants and communities, discussion centered on how researchers could adopt the "festival" format in innovative ways in urban communities.
The roundtables achieved the conference goals and embodied the processes of the new research cluster. They provided opportunities for participants to reflect, engage, and inspire, and to take on the conference themes - performance research and cultural ecologies - both of which are new modes of research in the arts for Thailand. By facilitating interactions of senior and junior scholars, artists and academics, administrators and organizers, these roundtables helped to stimulate, challenge, and encourage those present in these topics, especially younger colleagues, by exposing them to the ideas and experiences of a range of top senior scholars, creative practitioners, and managers. These special intellectual events provided opportunities for cross-fertilization with different types of expertise, involving notable figures engaged in international modes of artistic research, creating festivals, intellectual fora, and exchange networks. Such roundtables are valuable for the academy, forming an interactive system for creating and testing new ideas and practices for future work in cutting-edge research, and thus a nascent cultural ecosystem. This article summarizes the three roundtable sessions for this issue of Manusya, while sketching the agenda of Chulalongkorn University's new research platform.
The roundtable on July 19 followed the improvisational performance Thai Dance Now and focused on "Insights from Performance Research for Today's Traditions." On July 20, a roundtable on the theme of "Artistic Research for Cultural Ecologies in the University, Now and Anon" concluded the conference. A shorter informal roundtable session on "Realizing Performance Ecologies: Networking & Workshop" on the morning of July 21 explored how to continue regional cooperation among researchers, cultural workers, and artists in performance, as a creative workshop for students was held next door. Summaries of the roundtables held on July 19, 20, and 21 follow.1 The roundtables fostered new thinking and practical knowledge on performance research and festival creation that focus on combining academic and artistic work to engage urban communities.
1 July 19: Thai Dance Now and "Insights from Performance Research for Today's Traditions"2

After the opening day of the conference, a special performance, Thai Dance Now, by the dancers Saran Suwanachot and Thammanit Nikomrat and the composer-musician Sinnapa Sarasas led into the roundtable on "Insights from Performance Research for Today's Traditions." The collaborative intercultural performance by Saran (Fon Cheong), Thammanit (Nora) and Sinnapa (modern music arranger), called "Klong Hong 2019" (Catching the Swan 2019), was a new improvisational piece by artist-researchers from different trf projects conducted in 2017 and 2018 outside Bangkok.3 Since performance research is about live performance and not just about research papers, they were asked to come up with a new dance that involved knowledge and skill from different regions of Thailand.4 They decided to do the "Klong Hong" (Catching the Swan) dance.

The performance remixed styles from Northern and Southern Thailand in a modernized soundscape by Bangkok musicians using traditional instruments. Saran's work in Chiang Mai and Thammanit's and Sinnapa's work in Pattani and Songkhla let them show the creative side of artistic research. In this new artistic interaction with a new audience, the performers of "Klong Hong 2019" aimed to show intercultural artistic collaboration in action in a new environment. By letting the artists' living and fresh artistic knowledge and skill interact in a new living performance ecology for the conference, they formed a performative starting point for the roundtable that followed.

A short three-minute video of three trf performance research projects done in 2017 and 2018 introduced the artists and scholars from three research projects outside of Bangkok. They shared their insights and thoughts on their projects and responded to questions from those in attendance on issues tied to performance research.

1 Full transcripts of the roundtable sessions will be available on the Thai Performance Practice As Research (ppar) section of Facebook (https://www.facebook.com/pg/ThaiPerformancePracticeResearch), based at the Department of Dramatic Arts of Chulalongkorn University. More online information on this initiative can be found at http://www.findglocal.com/TH/Bangkok/1632830297026383/Thai-Performance-Practice-as-Research. The roundtable summaries identify quotations from the roundtables by the use of italics.
2 See the photos for this article at https://doi.org/10.6084/m9.figshare.13265585.
3 The original dance by Thammanit and Saran was accompanied by improvised music on traditional instruments by Sinnapa, the central Thai drummer Sa-Ngiam Lertjiraratong, and Torpong Samerjai, who played a northern-style flute.
4 The two masters of dance were first planning to present papers for the conference. But that is not the core of their performance research, and it does not show them at what they are best at. Their performance was the best way to show their performance research as a living form of knowledge, so the organizers decided to have them show themselves at their best, at their most creative and artistic: how they creatively dance and play music. Sinnapa had never played for Saran before, but she used music to support the two performers, Saran (dancing to new music as a northern "Fon Choeng" martial arts dancer) and Thammanit, in their dance. Their mutual trust permitted them to communicate how to use their skills to make new music for the dance. Like performance research more generally, the work brought different artists together and let them come up with something new to show them at their best.
The three underlying research projects have been performed many times in local communities and in universities since they were created in 2017. Pachaya Akkapram from Khon Kaen University in Northeastern Thailand developed a puppet performance piece tied to the Isan Sinsai legends with local artists in the Nongmontai community of Mahasarakham province over seven months from late 2017. His students learned local folk culture from the ground up, developing a new performance, the Sinsai Ru jai Ton, to benefit both the students and the community. Thammanit Nikomrat from Thaksin University worked with Sinnapa Sarasas from Silpakorn University to develop a new musical piece, starting at Thaksin University, Songkhla, combining three different musical traditions of Southern Thailand: Nora, Rong Ngeng and Digir Hulu, and Chinese drumming. They created the piece Hoamroang Sam Prasarn to celebrate Pattani's Lim Gor Neaw in 2017. Saran Suwanachot from Chiang Mai worked with Pornrat Damrhung from Chulalongkorn University on a project to produce a new performance which staged a traditional tale, Pu Nang: The Great Ancestor, at the Lanna Wisdom School in Chiang Mai to celebrate its 20th anniversary in 2018.

The roundtable was skillfully moderated by Dr. Charlene Rajendran from Nanyang Technological University in Singapore and Dr. Lawrence Ross from the University of Malaya in Kuala Lumpur, Malaysia. Researchers from the above three projects were the centerpiece of the discussion in this session, and they provided lively exchanges on the work of performance research, including responses from those in the audience. Charlene Rajendran opened the session by asking about performance research. Professor Pornrat Damrhung provided background on her vision for introducing performance research to Thailand and on the three projects presented in this roundtable. The moderators sought responses from participants on four issues based on their artistic research work: 1) how did they deal with artistic and cultural differences in their projects? 2) did the tensions and synergies of the projects lead to new skills for creative collaboration? 3) what would the future of local artistic cultures and intercultural work be like? 4) what new artistic traditions would emerge in Thailand? Following their responses, five questions from the floor led to further discussion: 1) did these projects open new modes of collaborative creative intercultural work? 2) how to assess the value of artistic research in performance, like improvisation and creative interplay, since it depends on subtle forms of knowledge and skill that do not easily fit into academic frameworks or forms and are often intangible and hard to articulate? 3) how can this type of creative work continue, that is, how can it remain "sustainable"? 4) how to make this type of creative and intercultural work interesting to young people? 5) can this type of work make money?
Charlene first reminded everyone that live performance is engaging, powerful and meaningful, but also ephemeral and hard to evaluate. While all performances are based on research, this research is often hard to articulate through standard research approaches. This is partly because performances are part of an ongoing process of producing embodied and collective knowledge, skills and sensibilities. Given this lived, intangible, processual aspect of performance research, what did Pornrat Damrhung, who ran the projects and organized this conference, see as the main aims and results of the projects discussed at this roundtable?

Pornrat contextualized the projects that conference attendees had heard about and seen earlier in the day. Besides being based outside Bangkok, they contributed to the larger aims of the umbrella project by developing performance research approaches as viable forms of artistic research for artists, universities and researchers in Thailand. She then introduced the three artist-scholars central to the projects mentioned above, noting how they produced "a living platform of performance research at sites outside of Bangkok." Their work helped the artists discover their passion and design suitable performance research methodologies for participants and partners to work in communities across Thailand. The projects involved the interaction of local artists of different backgrounds and university researchers working in diverse local communities, seeking to be cross-disciplinary forms of artistic collaboration in a contemporary vein which could develop performances in local communities by combining different traditions and art forms and connect with their diverse audiences. As the three projects evolved, they required adjustments to solve the problems they faced as they moved from imagining, practicing and rehearsing to performing creativity in their local communities. All three projects also required their participants to remember and relearn neglected aspects of their cultural identities by re-engaging local cultural life, and then to include their efforts, experiences and insights in this research project as it evolved. Significantly for this conference and the new research cluster at Chulalongkorn University, she began seeing their research as consisting of dynamic networks of interacting components tied to performance and rooted in local socio-cultural environments. She saw the projects as part of living cultural ecosystems.

The artists' work produced new performances and durable relations among artists and communities, along with new knowledge and skills. But the artists often found that writing up their research was hard. Communicating their thoughts, experiences and goals challenged the artists to write in an academic mode based on their performance practice and collaborative work in local communities. By putting performance to work in new contexts and new communities, though, the artists, students, communities, audiences and scholars could better see their dreams realized. Classical and traditional arts went to work in contemporary spaces. In helping participants understand themselves and take pride in their artistic practice, the projects let them better realize their cultural identities and social values in today's world.
When young people practice the arts and become part of new artistic projects that their masters do on a different platform, they learn to adjust and rework their way of working together. Each of the above artists noted the importance of trust and respect in working with those who are different from them. Time spent working together helps them to create trust and understanding of each form of knowledge, which is the key to sustainable creative work together. In developing performance research projects, the university became a key platform, an interactive space for these experimental projects, and enabled innovative work through the interaction of young and old artists, academics and local communities, fostering the bonds of trust, respect and self-confidence that participants needed to produce sustainably creative work beyond what they thought they could do.

There were varied responses to Lawrence Ross's question on the creative challenges and opportunities of working with those from different traditions, as he wondered how participants "dealt with aesthetic and cultural difference" in their projects. Sinnapa, Thammanit, Pachaya and Saran replied differently to the question, reflecting their diverse backgrounds and goals, their roles in the research and the local contexts of their individual projects. Sinnapa's long experience in this type of intercultural creative work in music, rooted in Thai instruments and musical sensibility, had prepared her for this project, but she noted that it was much more challenging to work with young students and more effective to work with professional musicians. Thammanit's previous work with Sinnapa let him see her systematic approach to using Thai instrumentation to produce new music, permitting him to see for the first time how Thai music could be modern. Their bonds of trust formed the basis of their creative collaboration in this project. For Pachaya, local Isan culture was something he had avoided, so re-engaging it required sustained effort, with him and his students living for long periods of time with artists in local communities. Their immersion into the world of the local master artist Preecha Karoon and villagers in Mahasarakham Province allowed them to do this, permitting him, his students, his university, and the local community and its artists to create a working system of artist interaction, community engagement and effective education. Saran aimed to strengthen local Lanna culture so it could survive in our complex contemporary culture, using his own knowledge and expertise from the region's traditional performance culture and involving young people. He sought "to make a place for his own old Lanna art and culture so it can thrive in the modern world."

When Charlene Rajendran asked whether any new skills and attitudes had emerged from the collaborative intercultural work across boundaries and from doing creative research, and what skills or attitudes were needed to root traditional art in contemporary work and realize their research goals, the researchers provided diverse responses.
Pornrat stressed that two mindsets facilitated the development of new skills: intercultural trust among artists and confidence in experimenting beyond what artists know best. Both required listening to and respecting others in the project in order to recognize and deal with their differences. When artists and researchers developed these mindsets, new work and new skills could develop, expanding the horizons of artistic familiarity and the creative possibilities of their knowledge. She first understood this from her work with Thammanit, when sustained interaction among artists from different backgrounds permitted new understanding of the role of experimentation and new possibilities for creative thinking and artistic work in the future. Pachaya wanted to help his students develop life skills and self-understanding by grounding them in local Isan community knowledge. This immersive learning gave them self-confidence, pride in their cultural roots, teamwork skills, social intelligence, and new abilities to communicate with people different from them through practical action with others. Saran worked to revitalize old Lanna artistic skills and sensibilities as forms of "powerful knowledge" which, lived and felt, provided participants with strong experiential and affective understanding grounded in local culture. This knowledge is contemporary, local and personal, forming a very particular Chiang Mai form of lived culture, rooted in the embodied knowledge, skills and roots shared in the area. His work permitted him to retain "our skills in the embodied, practiced, felt cultural knowledge we've kept for a time as a powerful knowledge alive in our culture." Sinnapa stressed how her skills in classical Thai dance and music let her work with traditional artists. "What's crucial is that you keep the key characteristics of the dance or music in new work. The challenge is to find ways to let traditional music breathe in fresh air, to give it a new house and a fresh, fluid sensibility." Never changing her approach or goal, since that would erase its distinctive Thai features, she seeks to show "how Thai instruments, music, and dance can be contemporary," a task easier for folk arts since they are more open, even if the artists do not realize this.

Lawrence Ross asked two questions about the future of the local arts in Thailand. One related to what Dr. Suradech Chotiudompant had said earlier in the day: "What is contemporary now will become tradition, so parts of what we are experiencing now as the current or 'contemporary' will crystallize into what is traditional." Things are changing so much all around us in the mix of traditional and contemporary arts. How will the local arts look 50 years from now? Sinnapa, Thammanit, Saran and Pachaya all believed that creative intercultural contemporary work grounded in the traditional arts would continue, with some challenges, based on their work with young people in this project.
Presenting contrasting views, Saran first stressed that Lanna's embodied cultural knowledge would survive. Although it is powerful enough to withstand imported culture, outside culture, which often arrives with a massive support infrastructure, can sometimes overwhelm local cultural life. So he wanted to strengthen local knowledge, skills and expertise using the tools and sensibility the region already has. By developing creative work with the local forms of culture which are known, artists can make and share something with others and pass it down. By contrast, Pachaya suggested that Northeastern Thailand would become more urban and interconnected with the rest of the modern world, with Northeastern cities bigger, more complex and globally interconnected, like Bangkok today. The cities of Khon Kaen, Udonthani, Ubonratchathani and Nakhonratchasima will produce more complex, transnationally focused cultures.

Lawrence followed up by asking what new traditions would emerge in Thailand, such as in those new cities. Pachaya said that young people would likely seek fame in a cultural celebrity form, but this would not replace Isan culture, since young people remain interested in Isan folk singing and can help shape the culture for the future. Pornrat stressed that although cultural life changes, it never quite disappears, and new work can come from the old. There are always forms of engagement through remembering, re-learning or remixing "lost" or neglected forms of culture to revive them, by listening to masters and letting them play with their traditions. This type of intercultural mixing can create new alternatives through the mixing of different local and regional Thai cultures, since artists and audiences both value things that are familiar and new. While "contemporary" work is often seen to ignore, if not reject, "traditional" things, the artist's vision, skills and the context matter here. It depends on the world the artist lives in. Sometimes this means not working with the artist so much as providing a space for him or her to work, as in her experience with Saran. As someone rooted deeply in tradition, he wanted to do traditional work in a contemporary style, not something Western or even modern. She learned from Thammanit and Saran that she was a modern or Western theatre-based person, so she needed to let the performance emerge from the cultural worlds of the artists she worked with. "I am already mixed up, so my tradition might not be the same as those of others involved in these performance research projects. Other artists are mixed up differently from me. Working with them made me realize there will always be alternatives, different from what exists now, if you have the eyes, ears, and feel for them. Interaction permits the creativity of multiple interacting artists to produce new work."

Charlene offered a wonderful set of reminders of key aspects of performance research. First, "artists have always been researchers. It is the work of the artist to keep questioning and thinking: what else? what if? why not? how else? who else?
why else? These kinds of questions push artists to break the rules and play with difference, play with risk, and play with ideas and practices that sometimes will feel like they are not ready, or too much, or too soon, or sometimes not yet at the right time. It is not always linear, past-present-future, things happening in a straight line. There is also a continuing cyclical, spiral movement of action, feeling and knowledge, what unfolds from what is here and now in surprising ways. And this unfolding provides a valuable kind of knowledge, the artist's way of knowing, the embodied and felt knowing, knowing through the embodied skills of their craft, through both vision and body, and in the interaction with others in a shared time and space. This is what matters for performance research. So when we ask these questions as researchers, sometimes we want to make it very neat and clean and consistent and coherent. Our artistic research needs to admit that sometimes it is the messiness and the chaos and the uncertainty of our work that is a highly valuable way of knowing, and these tenuous forms of knowledge are how we try to capture a little bit of understanding. Platforms like this conference permit us to work and talk with these diverse modes of knowing in and through the performing arts."

Five questions came from the floor after the performance researchers spoke in the roundtable. The questions dealt with: 1) how has work in these projects opened new ways of doing creative work together? 2) how to assess the value of research tied to performance, since performance, like improvisation and creative interplay, often depends on and is tied to producing distinctive subtle forms of knowledge and skill that do not easily fit into academic frameworks or forms, and are often intangible and hard to articulate? 3) is the type of creative work considered here "sustainable"? 4) how to make cultural diversity and intercultural work interesting to young people? 5) can this type of work make money?
Lowell Skar wondered if the artists had learned to think about doing things differently, or doing new things, as a result of working on these projects. Pornrat first noted that the research led to new productions and new insights among the artists into their arts and traditions, and that their new knowledge could help them solve new problems. But this required effort. She needed to listen a lot and learn to understand others, to re-educate herself in order to figure out, to understand and respect, other artists and their way of working. We always work with people who are different from us. "Even if we are all Thai, we are so different. Recognizing this difference challenges one to reflect on and to learn about your own self and how to deal with difference." Pachaya said his project helped him discover how to use local lived and practical artistic knowledge in the classroom and in creative work outside the classroom as a base for understanding beyond books. This knowledge became practical tools he could use to recover more understanding of himself and his relationship to Isan culture, including how to live it, to work with it, and to research it, which he continues to do. Sinnapa stressed how, as a musician, it was challenging to write a research paper. She saw her work as a musician as producing value from the relations with the musicians and the music they make, not from the research paper. Her focus on collaborating with musicians led her to help them do their best for the project, to make them shine and get the best out of them. This requires their trust, so they are willing to do more than they think they can do, opening themselves up to new music-making in the collaborative work. Since the musicians are people, the human dimension is key. Developing trust with good musicians "is heaven" for her. Thammanit pointed out how his work changed by working with people from different artistic backgrounds, like Sinnapa, Saran, Pornrat and Pachaya. Sinnapa helped him learn how to create new music for Nora. While doing the research project, after seeing how she worked with musicians, he tried doing the same in Nora performances in Songkhla. While improvisation is normal in Nora, sometimes students learn to create or improvise a bit more, or differently, from their master. He learned more about improvisation and uses it now with his students, but it is difficult to use consistently.
Charlene then wondered how to evaluate research tied to performance, since performance, stressing improvisation and creative interaction, often depends on subtle forms of knowledge and embodied skills that do not easily fit into academic frameworks or forms, and are often intangible and hard to articulate. Yet none of this underlying artistic effort, this knowledge and skill, enters into a top journal article. The kinds of knowledge and insight, the talents and capacities that form this work go beyond how we tend to evaluate and assign value in universities, as a number or a percentage or a score, yet they convey enormous, difficult-to-measure value. This unrecognized knowledge, skill and effort in the performing arts challenges the modes of assessing value in our institutions and our academic work, since those modes often ignore or streamline the hard-to-measure or hard-to-enumerate ways of knowing and evaluating we normally work with in the arts. The performance on July 19 could not be evaluated in a data-driven way. How to assess innovative knowledge and creative capacities with the same kind of confidence and clarity and legitimacy and validation that lets us say "yep, that's good for a tier-one journal"? Saran offered some hope by saying that when he moved out of his own safe space in his own northern Lanna culture, he saw that his interests and efforts were not alone. After the two years of this project, he met other researchers and worked with his students, making bigger performances, one of which finished last week (in July 2019). His project with Pornrat helped him step out of his comfort zone so he could proudly present his contemporary Lanna way in the arts to a wider public. He also found out that he was part of a group of other like-minded artists doing their own distinctive contemporary work. Mark Teh from the Five Arts Centre in Kuala Lumpur added a comment on relevance, suggesting that the anxieties Charlene highlighted often affect artists doing folk, traditional or classical work in new ways. But relevance, and being relevant, in contemporary work often seems so disposable. It can be mass-produced, shaped by social media, which form another set of metrics. So defining relevance, and relevance for whom, is crucial. Danny Butt brought up the transnational solidarity produced by the Bandung Conference of 1955, asking how it is relevant now for us to think about our situation and about connecting across different cultures and societies, seeking authentic ways of being relevant. Sometimes examples from the past are relevant not just for nostalgic reasons, but because they highlight a forgotten solidarity in the past and possibilities for the future. Earlier events can remind us of something neglected or forgotten. The musician and scholar Dr.
Anan Nakkong wondered how to sustain the feeling of being part of a diverse but ongoing common project after it ends. How to keep this kind of work sustainable? Pornrat sees sustainability as based on the relationships created by working together. Previous work generated durable bonds of trust, and these provide the basis for working together again. Developing bonds of trust through collaborative work will permit the work to continue, even though we may not know exactly when, where or in what circumstances this will happen. This uncertainty is part of the fragility of all sustainable systems. But sustainability requires attentive care, and this care by friends of artistic collaboration is what makes the work sustainable, helping to bring together artists and communities interested in the arts. Getting young people to know what the older generation has to share, through exchanging with and supporting others, is what matters. So sustainability means knowing we can work together in new ways on projects as they arise, or as we create them, based on our mutual trust in one another. Pachaya agreed that sustainability emerges from the bonds of trust formed in previous work and from the new skills for working with people from different backgrounds. He is now confident to go out of the university to work with other artists or primary schools in local areas around Khon Kaen and to network with others, and confident in how he can learn, use and develop knowledge about theatre, working and collaborating with other musicians, artists and workers in communities. This kind of work is fun and permits him to share his experience and to work together on something that benefits communities in Northeastern Thailand. Saran pointed out that new chances to work together often depend on luck. But collaboration can increase the chances of both future and better work together, since those who have worked together know each other better. Since they are all strong friends now, it is only a matter of finding the space and time to walk together again in the future. Thammanit also saw sustainability in the inexplicable connections among good friends and artists who have worked together. This will permit him to continue his artistic work and to contact others here if he has questions, problems or opportunities in his work. He can trust them to provide him with their artistic knowledge and to respect his knowledge, and when they have new projects he hopes to be part of them again, so he can include his embodied knowledge in them. He will also ask them to help in big projects in his native Songkhla, too. Sinnapa concurred, since they all have the same mind and head. Without trust, you don't click, so we would not be here. Having the same mind is key.
Lawrence added that sustainability in collaborative work in the arts does not just evaporate. Parting after a collaborative event or project does not erase or undo all the work done in preparation for and in the activity of the event, since participants carry these things with them as shared experience and knowledge, including what they get from one another and from their teachers. This ties back to the issue of relevance Mark Teh discussed, since what is relevant is what one can pass on and share. Sustainability is a way of saying we are willing and able to share so that the arts continue to grow in groups and communities into the future. To do otherwise would be selfish. How can we help society? We still carry all of our teachers' work as dancers or musicians and as researchers. By seeing our work in this longer endeavor, we can go on in our work, but we cannot really claim anything as our own alone. This is the essence of sustainability: continuing to be involved in work with others that we consider to be relevant.

Responding to how to involve young people in this type of work, Pornrat said that getting young people more involved in the arts required that they see the classical performing arts as relevant to them, informing not just their bodies, but also their minds and hearts. This relevance must be alive, embodied and felt. Young people respond best to these traditions when they are made easy and fun to become involved in. Classical traditions need to be simplified, connected to things learners can play around with by using improvisation and engagement. In culturally mixed Bangkok, it is important to make the culture relevant to the people living here, to show how it is alive now, while respecting those you learn it from and those you do it for. This will help young people understand how they can enjoy traditional culture, make it their own, and contribute to it in new ways. What they do and how they do it might not take the same form as the old tradition, but it will be easier to become involved in. "Doing so will ensure there will always be someone performing Thai culture into existence."

Responding, finally, to Anan Nakkong's question on whether the projects could make money, Pornrat said she "does not see making money as bad, but money-making is not the goal of this kind of performance research." The artists focused on exploring, learning, creating, and sharing their beautiful embodied inventiveness, what they love, with other artists and their communities. The pilot experimental pieces sought to create value by enriching the artists' knowledge and their communities rather than by making money.
The July 19 roundtable on performance research highlighted the variety of approaches the researchers needed to develop their projects and achieve their results. Central to these diverse approaches is their focus on the importance of local contexts for every project, the complexity of the performance-making process, and the liveness of each performance event. These approaches examine "the engaged social-environmental production of systems and the cultural production of flexible research ecologies wherein tacit understandings, inferred practices and theoretical assumptions can be made explicit and can, in turn, be queried and contested" (Kershaw and Nicholson 2011, 2). The ecological grounding of performance research points toward both the second roundtable and the underlying rationale and guiding principles of the new research cluster in the arts and culture at Chulalongkorn University: cultural ecologies.

2 July 20 Roundtable: "Artistic Research for Cultural Ecologies in the University, Now and Anon"

A key insight emerging from the performance research projects conducted between 2017 and 2019 around Thailand is that performance lives in cultural ecosystems. Moreover, the goal of this research is "to create diverse and dynamic research ecologies" (Kershaw and Nicholson 2011, 2). This research also revealed how universities and local communities were two key ecological niches where the creative interactions of performing arts practitioners could be put to work in ways that sustainably give performance cultures new life. Researchers in these projects involved artists from different backgrounds who generated new modes of creative collaboration and community engagement by reimagining performance traditions in innovative contemporary work. Their artistic collaborations interacted in specific spaces, involving specific forms of cultural knowledge and skills, tools and equipment that connected universities to local communities, and developed inventive and durable artistic assemblages.

Discovering that these performance projects were embedded in cultural ecologies, what Ann Markusen describes as "the complex interdependencies that shape the demand for and production of arts and cultural offerings," provided a useful way to think about how to better involve performing arts knowledge and skills in connecting universities to local communities, adding value to both.5 Viewing performance as living in cultural ecosystems which produce creative knowledge and add social value to communities gave rise to a research program in the new Chulalongkorn University Research Cluster on Art and Culture that started in late 2018 and led to the July 2019 conference underlying this special issue of Manusya. This program seeks to integrate and leverage the collective knowledge in the performing arts from four different Chulalongkorn University faculties in diverse projects that engage and help revitalize local urban communities around the university. Central to the diverse projects in this research program was the theme of performance for life. Performing arts practitioners from the Faculties of Arts, Fine Arts, Communication Arts, and Education use their knowledge in the performing arts to collaborate with experts in diverse fields (psychiatry, education, medicine, management, etc.)
to develop new ways to enhance individual and community lives. These creative collaborative projects using performance to improve life mostly developed in new off-stage sites and venues outside theatres, from late 2019 to mid-2020.

One key outcome planned for this research program was a series of public performance-focused events, most of which were designed for non-theatre spaces. These events, planned to be spread over eight months or so from late 2019 to the summer of 2020 at various off-stage sites around and off campus, were collectively called a "festival" and given the name Life | Performance. This extended "festival" was imagined as a kind of nascent cultural ecology for performance that could enhance the synergies of inter-faculty cooperation, urban university-community engagement, and transdisciplinary knowledge-making that would not only improve the lives of participants but offer innovative forms of social entrepreneurship. Although the early stages of this distributed festival took place in late 2019 as designed, the COVID-19 outbreak disrupted plans for the remaining parts of the festival planned for 2020. Some of the planned festival events were moved online or postponed until later in the summer of 2020. In addition, new forms of online engagement helped to continue the research projects.

This roundtable session followed the conference presentations on July 20. It focused on the role of research and creative activities tied to the urban university ecosystem, especially the idea of a "festival" as a key type of cultural ecology. The festival creates interaction spaces which enable the performing arts to engage and add sociocultural value to the urban university and its surrounding communities.

Charlene Rajendran once more moderated the session, which included Danny Butt from the Victorian College of the Arts, University of Melbourne, Mark Teh from the Five Arts Centre in Malaysia,6 and Norihiko Yoshioka, Director of the Japan Foundation in Bangkok, Thailand, along with Sukanya Sompiboon, Parida Manomaiphibul, Dangkamon Na-Pombejra, Premmarin Millinthasoot and Lowell Skar, all from Chulalongkorn University. Chetana Nagavajara, Pornrat Damrhung and Lawrence Ross provided many insights to this discussion from the floor, too. What follows is a summary of the roundtable, which considered its role in the university's new Research Cluster in Arts and Culture. It focused on imagining the future of performance research in relation to urban cultural ecologies, especially how it could be linked to the idea of a "festival" based on the "performance for life" theme, one key aspect of the new research cluster, with special attention given to how a flexible but powerful notion of a "festival" could be used to frame the projects in this new research cluster.

Charlene Rajendran began with keywords and key points from the rich conference, which marked a transition or turning point for the research team, moving from earlier work on performance research to new thinking and work with performance as part of cultural ecologies, which would be the focus of the new research cluster. "So the question before us now is 'where do we go from here?'" What is the new research agenda of the research cluster? How does it relate performance research to cultural ecologies?
In addressing this question, the roundtable dealt with several important topics. First, Premmarin pointed out how the research of the cluster would relate performance and ecology through a set of collaborative research projects which mixed theatre methods with other types of expertise and participation from different university faculties, linking them to nearby urban communities. Linking performance with those working in fields like psychiatry, education, local communities and dance, in events from late 2019 into 2020, would focus on the connections, extensions and inclusion of performance on and off stage: not only performances for theatre audiences on stage, but also acting and performing in new off-stage spaces and initiatives. The research projects and the festival would seek to engage diverse artists, experts, participants and audiences from both within the university and the surrounding urban community. In this dynamic process, the paths taken by each research project, and the design of the overall structure for presenting them to the public and the cultural world, were always changing, so flexibility in a changing situation was crucial.

Mr. Yoshioka's experience in the Japan Foundation made him familiar with cultural and artistic dynamism, possibility and uncertainty. But he also stressed that funders still need to make decisions about and justifications for specific projects, and to do so in relation to wider cultural and social aims, even while recognizing that things might not turn out as expected for any given project. When seen over time, and collectively, funding decisions do seem to contribute to changes that are both noticeable and noteworthy. His main point was that involving people in the community depends on a clear strategy for developing a festival for the groups you want to include. As a university, involve people who can stick with the project, as good part-time work, for at least five years; if a one- or two-time result does not seem successful in terms of numbers, that does not mean it is unsuccessful. In organizing tpam, the Tokyo Performing Arts Meeting, in Yokohama each February, the design is a "meeting" of performers and producers. It is not called a "festival." Only about 300 people attend tpam each year. This often raises questions about its effectiveness, but the organizers do not seek big audiences. It differs from other "festivals," and it needs to differentiate itself from them, to show why it keeps doing festival-type work at its meeting. Since the "festival" considered here for communities in and around Chulalongkorn University is also not typical, aiming not for a general theatre-going public but more for ordinary people who may not normally care about theatre, it will be important to figure out how to reach them. This will depend on its direction, on making it stand out from alternatives, and on being the best at what it does. If it is not designed for many diverse groups of people, then try to figure out which groups to reach, looking for a main target group and figuring out how to reach and appeal to them. Develop a coordinated effort to find a target group, go for a wide target group, and continue doing the same for many months. After working through this process for
five years, you will be able to judge if things are working or if it is a success. Even if it is stable and not growing, it could be seen as a success. If it becomes less well attended over time, then maybe something is wrong. Be sure to start it, and don't give up too early, since then it won't mean anything.

Lowell Skar added some thoughts on linking the research projects and the festival to "ecology," since he was involved in some of the thinking on the cluster. He pointed out that Chulalongkorn's support for the cluster made possible an ecosystem that could link research tied to performance to different areas of expertise and diverse communities in and around the university. Seeing performance culture projects as parts of ecosystems could encourage a kind of "experimental aesthetics" done out of the lab and in the field. This would seek to produce new knowledge, skills, insights and perspectives on what is going on through creative interactions, with all of their surprises. Increasingly, research is being done outside of labs, in the field, in various ecological zones or niches, where there is less knowledge of and control over what goes on. In these marked-off niches, complex intra-actions, to use Karen Barad's term, among different types of living, inert and nonliving things bring phenomena into existence through their performativity. Developing an open "festival" distributed over a long period, like the one being planned here, can bring performance out from the theatre lab into complex interactions with the rest of the urban world. This will let performance cultures evolve in many different urban ecological niches. "Experimental aesthetics" points to trials being done in the field, seeking to produce a sustainable cultural ecology, with unexpected results. These experiments will help in planning for what might come, even though it is never sure what the results will be, or whether the performance environment itself will change. Chulalongkorn University is providing a platform where that experimental aesthetic ecology can grow. This first phase of support for sustained research in the humanities and the arts and culture provides the arts and culture with opportunities to develop value for society. Through the platform we have now and here, there is a space for creative interaction and experimentation with many other communities. It will permit producing new synergies, intra-actions, connections and possibilities within and beyond the institution, as experimental aesthetics at work in urban cultural niches rather than in the theatre.
As a director of research in a university arts organization in Melbourne, Danny Butt reminded us that universities only recently became a platform for the arts. The arts and culture face three significant challenges for funding by governments and universities, especially forms of cultural ecologies or sustainable artistic communities tied to traditional or new forms of the arts like those being considered here. First, innovative work in the arts is often still seen as potentially risky, with unsure outcomes which are difficult to evaluate in standard ways, such as "knowledge-making" or economic metrics. Second, creative reflection on the nation-state or society often makes relations between the university and the nation-state problematic. Third, new technologies, especially digital technologies, also put in question the relations of the arts to universities and the wider society. In this uncertain environment, one place to look for support is in intergenerational connection between people of different ages where cultural or artistic forms are alive. This type of community learning produces knowledge that is flexible and enables people to adapt themselves to new situations better. The ecologically rooted and organically produced knowledge and practice tied to the traditional arts provides an alternative style of work amid a changing urban environment increasingly focused on always-on, internet-connected living as the norm. In this light, the work being planned by this new research cluster and shown in the conference is promising.

Mark Teh used his own experiences with the Five Arts Centre in Kuala Lumpur to remind everyone that there are a variety of ways to organize festivals and to link them to the groups or collectives producing work for these various types of festivals. In his recent work he has travelled the world discussing Malaysia's postcolonial history. He wondered why anyone would be interested in what happened in Malaysia in 1955.7 At first the contributing artists, activists and educators refused to show their projects in Europe, due to concerns about how the work would be interpreted outside of the Malaysian context and about the different sense of the politics of showing work in Europe. As they learned the vocabulary and dynamics of the "festival tour," however, since about 2015 they have toured to different places, and they began to see how their work relates to the festival format. Festivals typically have their own structure, duration and flow, with 15-30 shows over several days or a week, and every show needs to fit into a particular niche in the festival schedule. Their work has been framed as "Asian," "Southeast Asian" or "urban Asian," so joining a festival required working with these kinds of categories and learning how to fit into them, even though they are often not productive ways to talk or think about individual works in the festival. They found other platforms or festivals more interesting, such as one in Greece on the relationships between theatre, archaeology and history, with a focus on the theme of "emergency," a key term for describing what has been happening there since 2009. When they showed their work at a festival on communist history in Kerala, India, the first democratically elected communist state government in the world, people there could not believe
it was possible to ban communists, as was stressed in the Baling project, so it produced many interesting questions. In Indonesia, after the September 30th incident in 1965, many people who were labelled communists, suspected communists or communist sympathizers were basically massacred and eliminated in the mid-1960s. They went to Jakarta just to present the work, knowing there would be some discussion. That helped open some topics tied to communism across Southeast Asia.

Given this variety of festival formats, Pornrat Damrhung and Premmarin offered their thinking on what the festival for the new research cluster, tied to performance in cultural ecologies, aimed to do, along with some discussion of the festival theme of "performance for life." The starting point was developing ways to involve creative people in projects in the university. What projects could an urban university in a big city do to involve different parts of the university's creative arts and link them to communities and audiences outside the campus? How could the campus develop as a platform to integrate the Faculties of Arts, Fine Arts and Communication Arts, so they could start talking and working more with each other? How could they collaborate in ways that would open them up to something else outside of the theatre, the classroom, the stage?

The research theme of the festival, "performance for life," would focus on people and activities not only prepared in or for a theatre or a stage, but also on projects involving different aspects of everyday life that are performed but not normally staged. Some of these performances were once done but are not common today in Bangkok, things still around but rare, and maybe hard to recognize or remember anymore. They should be fun to see, to do, or to be a part of again. They are tied to the different ways that performance knowledge, techniques and skills can help to highlight new aspects of life or to contribute new skills or perspectives tied to life. Premmarin noted the theme was also chosen for its planned use of performance-related skills and knowledge with diverse types of experts, including arts practitioners, ordinary people in urban communities, and diverse community audiences, to actively involve these different groups in a project and to work outside of the theatre, as part of "real life," so to speak. It is about seeing how performance can be used in new arenas and recognizing that we perform as ordinary people. Not things that are already in scripts, but open performances which are lively and grounded in daily life. These will form the core parts of the festival. Things are always changing, so until the deadline they need to adjust. "We are making a new type of festival, and things are changing all the time."8

To provide more detail on the performance for life theme, three of its researchers, Sukanya Sompiboon, Parida Manomaiphibul and Dangkamon Na-Pombejra, introduced themselves and the projects they would do for the cluster and the festival. Sukanya would work with people in the Patumwan district close to the Chula campus, using the Likay form and collecting community stories, then designing productions based on the stories or making a film. Parida would work with high school teachers to develop their playwriting through writing workshops, which they could then use to help their students write scripts and stage small productions based on their work, which would be part of the festival. Dangkamon, Dean of Students, would link life
and performance by using the theatre arts, working with a psychiatrist and drama therapist to help some students live better lives despite facing crises with their families, friends, or in their mental or emotional condition, through acting lessons that help them express their feelings and thoughts better.

Charlene thanked the researchers for their projects and noted the importance of understanding a festival as the full process of making the events, their role in the structure as a whole, and how the individual component parts came to be. It is a complex and multi-dimensional process and set of practices that involves negotiation, a willingness to deal with discomfort, and a willingness to have these dreams. This is all important in understanding the ecology of the festival, and how it emerges as a prolonged process rooted in diverse, evolving research projects.

Professor Chetana Nagavajara offered some advice on the plans considered here, based on his long observation of theatre in Bangkok and on his own research projects during a changing time. His eloquent statement suggested that success would come from keeping things small, informal, flexible and focused on performance for life, in ways that would be open to as wide a participation as possible. Taking a cue from Thailand's "lean theatre" troupes, which lack spaces to rehearse or perform but were able to create the Bangkok Theatre Festival, he noted how that festival grew not from the "festival" idea but because small theatre groups attracted enough people and interest to show their work by the river in Banglumpu. Like them, he stressed making daily life as artistically rich as possible.

In final remarks, Lawrence Ross said this roundtable and the conference provided some good ideas, but they needed to be put to work locally. Danny Butt returned to the issue of mental health discussed by Dangkamon. The university can help develop a different relationship to knowledge by providing people with a voice and storytelling, helping to find useful knowledge in neglected places. People's stories are everywhere, in our lives, our media, and on the street. But they often become invisible, covered over with fancy paving and walls. So it is important to be open to the voices of others and their stories in the everyday, since they are tied to understanding the larger ecosystem we live in. This would help to move towards a more inclusive and holistic social body, to include those often neglected or missing from the scene and the invisible parts of normal lives. It is also worth remembering that art is something quite artificial, not natural. It requires effort and creativity to learn it or work with it. We make art because it is something new that offers some unique potential or something different. The evolutionary ecologist Jaap de Roode sees art as a kind of prosthesis. If you need an artificial leg, it is very key to your life, but it is artificial. It is not alive, but it allows you to live in an everyday way. This is a useful way to think about the artificiality of art and its role in our lives. Art is not alive. It is not natural. But it helps us to live in everyday life.
Lowell Skar stressed ways of learning to live with the complex cultural ecologies we have are part of, and stressed the performative, intra-active aspect of performance ecologies.To involve more students and ordinary people from the neighboring communities around the university could be two ways to do this.Following Prof. Chetana's suggestion, big projects could be smaller and multiplied to get more involvement with things and people.So part of the festival could involve more people who are tied to the university to gather more ideas and projects and mini projects around those who live around the university. Overall, this roundtable fruitfully considered some key aspects of a future research agenda of the performance area of Chulalongkorn University's new research cluster in arts and culture.By seeking to integrate theatre knowledge and performance practice into urban cultural ecologies within and around the university, its diverse research projects would involve from people from several faculties and areas of expertise.The research would develop innovative knowledge and creative skills that addressed issues of everyday life and living well.Their varied projects would develop to the point of being shown in a structured set of public presentations or performances which would be called a "festival" planned for a 6-8 months period spanning from late 2019 to mid-2020.Although most of the planned projects for this "Life | Performance" festival were modified and moved to online venues due to the 2020 covid 19 outbreak, the individual projects have continued and there has been significant online engagement.short morning session considered how to think about continuing the connections and the work begun in the international conference on July 19 and 20.9 The researchers in this cluster project at Chulalongkorn University considered ways to keep sharing and connections with one another, including through universities and institutions in the Asia-Pacific. The fourteen contributing scholars at the international conference discussed, expressed, and assessed how the conference tried to first embed Thailand's experience in performance research into the working of cultural ecologies, and then to extend this works to international friends in Asia.Participants sought to listen to what is happening to find ways to continue their old and new friendships.This writeup includes just a few of these reflections.What all attendees stressed was their interest in continuing and extending what was begun here, their certainty that these connections would continue, but to continue them first by keeping the relations flexible and informal. Ritirong, whose role as Director of Thai Studies and long involvement in the Cultural Management program at Chulalongkorn University, first provided wonderful opening thoughts about how networks seem to be working now in the 21st century.He stressed that it could be good to keep the network informal so that connections could benefit those who share the same mind and interests.Keeping the connections loose and open in a new platform would enable everyone to support those with the same interests, while helping open up the new platform to other related alliances.This looseness helps encourage connections while being able to do new things.It would be a good starting point to encourage ongoing involvement. 
Charlene Rajendran from Singapore pointed out how her role as moderator allowed her to find out more about the many different interesting participants and projects with an open mind. The presence of artists and scholars at this small international conference compressed so much into two days, producing a small interactive community, a kind of aesthetic ecology, permitting the discovery of new attitudes and insights. The environment of free and insightful exchanges with scholars and artists was refreshing and made her interested in being part of something that would continue, and in looking for ways to do so.

Danny Butt from the Victorian College of the Arts, University of Melbourne, encouraged regional networking by introducing the Asia Pacific Artistic Research network as a basis for cooperation in the future. It could help generate a wider arena and a loose inter-university alliance.

On the morning of July 21, 2019, conference participants met to offer their final thoughts, to consider how to extend the experience of the conference, and to continue their artistic efforts and research collaboration in the future. Discussing these issues over a light breakfast, the guests included Associate Professor Ritirong Jiwakanon, Director of the Institute of Thai Studies at Chulalongkorn University, conference participants and other members of Chulalongkorn University's new Research Cluster in Arts and Culture.

7 This refers to the Baling project done by the Five Arts Center from Kuala Lumpur. It is about the December 1955 "Baling Talks", which sought to negotiate peace in a Malayan peninsula devastated by the Emergency. This documentary performance reconstructs the talks between Tunku Abdul Rahman, David Marshall and Chin Peng and how they helped explore different visions for building a new nation. Using publicly available transcripts, the performer-researchers reconsider what terms like freedom, loyalty, terrorism, reconciliation, surrender, sacrifice and independence could mean, and look at how history still haunts today.
2020-12-31T09:05:25.023Z
2020-11-30T00:00:00.000
{ "year": 2020, "sha1": "c7db30a305c8d8f1ab8be32fa16973c4be120449", "oa_license": "CCBYNC", "oa_url": "https://brill.com/downloadpdf/journals/mnya/23/3/article-p450_450.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "15a6629615fef41c19d7cc7000ff4e21a55f6dcb", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Sociology" ] }
155512833
pes2o/s2orc
v3-fos-license
Contrasting motif preferences of platinum and gold nanoclusters between 55 and 309 atoms The atomic structure of size-selected Pt clusters in the range 10–600 atoms is investigated with aberration-corrected scanning transmission electron microscopy and reveals significantly different behaviour from the existing data for Au clusters. The Pt clusters show a dominance of the FCC motif from relatively small sizes, whereas traditionally for Au multiple motifs – the icosahedron, decahedron and FCC motifs (and related structures) compete. The new data motivates a comprehensive computational investigation to better understand similarities and differences in the structures and energetics of the two different metallic clusters. Low energy structures of Pt and Au clusters with 55, 101, 147, 228 and 309 atoms (±2%) are identified using a global optimisation algorithm, and the relative energies found by local minimisation using density functional theory. Our computational results support the experimental observations; for Au clusters all motifs are comparably stable over the whole size range, whereas for Pt, the motifs only compete at the smallest sizes, after which the FCC motif is the most stable. Structural analysis suggests the greater tendency of Au towards amorphisation enables the icosahedron and decahedron to remain competitive at larger sizes. Introduction Nanoclusters are nding increasing use in a wide range of elds from sustainable energy production and storage 1 to biomedical applications. 2The properties of nanoclusters are highly dependent on size and atomic-scale morphology.For example, despite being inert in the bulk, 3 Au nanoclusters present interesting catalytic properties such as high activity for CO oxidation [4][5][6] and selective oxidation of hydrocarbons. 7Pt nanoclusters are used extensively as catalysts in the green energy sector; in particular in fuel cells 8,9 and electrochemical hydrogen formation. 10The recently developed synthetic technique of cluster beam deposition enables precise control over size and deposition energy such that a population of supported clusters with a narrow size distribution can be generated. 11,12To understand the properties of these nanoclusters it is necessary to elucidate their atomic-scale structure. A number of experimental investigations have sought to probe the morphology of Au clusters.While many small (<100 atoms) Au clusters are rather disordered, 13 a high-symmetry, tetrahedral isomer of Au 20 has been identied (amongst others). 14,158][19] For larger clusters (several hundred atoms) higher symmetry structures are observed, such as the truncated decahedral motif, identied by X-ray powder diffraction 20 and HRTEM. 21For Au 309 , observed clusters can be assigned to distinct structural classications commonly observed in nanoclusters, namely decahedral (Dh), octahedral (FCC) or icosahedral (Ih) motifs. 22ll three motifs have also been observed at 561, 742 and 923 atoms, although the proportion of metastable Ih clusters can be limited by control of the cluster source parameters. 23For 561 atoms, at high temperatures both Dh and FCC clusters are observed but at lower temperatures the metastable Dh isomer transforms to FCC. 24 From the high-temperature data, the energy difference between the isomers is determined experimentally to be only 0.040 AE 0.020 eV. 
Relatively little is known experimentally about the morphology of Pt clusters, particularly bare or unsupported clusters. Non-crystalline to crystalline transitions have been observed using HRTEM and EXAFS for supported Pt clusters. 25 The size at which the transition occurs is dependent on the support but is in the vicinity of 1.5 nm. Dh nanoclusters were rarely observed. A preference for ordered structures has been observed for polymer-capped Pt 26 and Pt-Pd core-shell particles, where the presence of Pt atoms in the core tended to facilitate ordering. 26

Experimental determination of the most thermodynamically stable structures of deposited clusters is not a trivial task. The process can be hampered by heating under the electron beam as well as kinetic trapping during growth. Gas phase formation conditions can be tuned to reduce the formation of far-from-equilibrium structures, 23 but this does not completely eliminate other factors. Theoretical calculations can aid in deducing the equilibrium structure of clusters by quantifying the relative energies of clusters, as well as by explicitly considering the effects of temperature and by simulating growth processes. 27,28

A variety of calculations have been performed for both Au and Pt clusters, from survey calculations over a large cluster size range using an empirical potential 29 to detailed global optimisations at a few selected sizes. 17,18,30,31 Many studies have provided support for the notion of a crossover (either gradual or abrupt) from Ih to Dh to FCC as the cluster size is increased. 20,29,32-36 However, it should be noted that both Dh and FCC have been shown to be competitive structures over a wide range of sizes; 29,37 recently, it has been found that Dh and FCC motif preference continues to oscillate even at sizes up to 4000 atoms. 38 Furthermore, small changes to the theoretical description can have a large impact on the relative energies of these motifs.

Detailed theoretical studies of Au clusters at selected sizes have shown the close competition between the Dh and FCC structures. Global optimisations of Au clusters with 315 ± 15 atoms found predominantly Ih-like clusters, although FCC-like clusters were also observed. 39 This study also found that energy differences between motifs were small (<1 meV) and that the lowest energy structural motif changed over the size range considered (300-330 atoms). A wider size range was considered by Bao et al. (13-318 atoms); again, it was found that the most stable motif often changed with the addition of a single atom. 40 Molecular dynamics simulations have recently been employed to study the growth of Au clusters from 13 to 923 atoms at several growth temperatures. 27 Here it was observed that, after solidification, the vast majority of clusters retained their underlying motif for the remainder of the simulation. This suggests that the structures observed experimentally for Au clusters are usually due to kinetic trapping, rather than energetic stability.

Theoretical calculations for Pt are relatively scarce. An early global optimisation study of Pt clusters, with up to 60 atoms using an empirical potential, found low-symmetry clusters to be the most stable. 41 This was confirmed using density functional theory (DFT), where a reconstruction of the five-fold vertices of an Ih cluster results in a more stable cluster than the perfect Mackay Ih cluster. 42 DFT calculations have investigated the energetics of competing isomers of Pt clusters, largely focussed on clusters with fewer than 38 atoms. 43
FCC structures were observed for sizes smaller than 21 atoms, then again from 24-38 atoms. The study is less comprehensive for clusters larger than 38 atoms but suggests continued dominance of the FCC motif.

In the current study, we present experimental results showing the FCC motif dominance of Pt clusters, which contrasts with the rather different behaviour of Au clusters documented in the literature. A systematic computational investigation into both Au and Pt clusters is therefore performed. Low energy structures for a range of cluster sizes are determined using a combination of global and local optimisation techniques using empirical potentials and density functional theory. The result is a comprehensive overview of the structures and energies of Pt and Au clusters within a common theoretical framework. Analysis of the calculated energies and structures rationalises some of the observed experimental structural similarities and differences between these two kinds of clusters.

2 Methodology

2.1 Experimental methods

2.1.1 Production of nanoclusters. Pt clusters were produced with a magnetron sputtering, gas condensation beam source 44,45 and selected in size with a lateral time-of-flight mass selector 46 prior to deposition, with a resolving power of M/ΔM ≈ 20. Two kinds of samples were generated: clusters of a single mass (55, 147 and 309 atoms) and also a size range (10-600 atoms) for which the mass selector was run over size-selected ranges. Argon gas was used to create a DC plasma that sputters Pt atoms from a pure Pt target (99.95%, from PI-KEM, U.K.) at an average pressure of 10^-7 Torr. The hot atoms were condensed into clusters in a mixture of argon and helium gas, in a condensation region whose walls were cooled by liquid nitrogen. Ar and He flow rates were both 200 sccm. Clusters were produced under "slow growth" conditions 23 (250 mm condensation length, 150 W magnetron power) and deposited onto holey carbon TEM films supported on copper grids. The deposition energy was approximately 1 eV per atom to ensure the clusters were soft-landed.

Initially, samples covering a range of cluster sizes were made so that the distribution of cluster motifs could be investigated across a wide mass range. Specifically, three samples were made covering the 10 to 600 atom range: 10-147, 147-309 and 309-600 atoms. 700 clusters were imaged in total and there was an approximately even distribution of each cluster size. Three samples were made (rather than one) so that the density of clusters could be kept low, preventing aggregation, whilst still having enough clusters of each size to image. For these samples, the cluster size was determined based on the HAADF intensity (vide infra). To investigate the structures in more detail, single-sized clusters were produced with 55, 147 and 309 atoms, where the cluster size was determined by the mass selector; the mass resolution was approximately ±2.5%.

2.1.2 Characterisation using STEM. The imaging of the clusters was performed using a 200 keV (JEOL 2100F) STEM with a spherical aberration corrector (CEOS) and a resulting probe size of 0.1 nm. The HAADF detector's inner and outer collection angles were 62 and 164 mrad, respectively. This enabled the intensity of the images of the clusters to be used to calculate the number of atoms in each cluster, 47 whilst using the background measurements to normalise between grids.
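To make the intensity-based size estimate concrete, the following is a minimal sketch (not the authors' code) of how a cluster's atom count can be inferred from its integrated HAADF signal, assuming the integrated intensity scales approximately linearly with the number of atoms and that a mass-selected reference cluster of known size on the same grid is available for calibration. The region-of-interest mask, background level and count rates below are all invented for illustration.

```python
import numpy as np

def integrated_intensity(image, background, mask):
    """Sum the background-subtracted HAADF counts over the cluster region."""
    return float(((image - background) * mask).sum())

def estimate_atom_count(intensity, ref_intensity, ref_atoms):
    """Assume integrated intensity scales ~linearly with atom number (illustrative)."""
    return ref_atoms * intensity / ref_intensity

# Synthetic example: a mass-selected 309-atom cluster calibrates the counts per atom,
# after which an unknown cluster on the same grid is sized. All numbers are invented.
rng = np.random.default_rng(1)
background = 100.0                                    # measured support/vacuum level
ref_img = background + rng.poisson(6.0, (64, 64))     # frame containing the reference cluster
unk_img = background + rng.poisson(2.5, (64, 64))     # frame containing the unknown cluster
roi = np.zeros((64, 64), dtype=bool)
roi[16:48, 16:48] = True                              # hypothetical region of interest

i_ref = integrated_intensity(ref_img, background, roi)
i_unk = integrated_intensity(unk_img, background, roi)
print(f"estimated size: {estimate_atom_count(i_unk, i_ref, 309):.0f} atoms")
```

In practice the normalisation between grids uses the measured background level of each grid, and the calibration would be checked against several mass-selected sizes rather than a single reference.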
Multiple electron scattering simulations were performed for model Ih, Dh and FCC clusters with 55, 147, 309 and 561 atoms using the QSTEM software. 48 Simulations included the full range of possible orientations of the clusters, generating a "simulation atlas". 49 Imaged clusters were then compared with the simulation atlas to determine the atomic structure of the cluster. Distinctive patterns were noted, e.g. the images in the FCC atlas exhibit parallel lines, while the Ih atlas shows five-fold symmetry, often with rings present. These characteristic features made it possible to identify the structural motifs of clusters that did not have exactly the same size as the simulation atlas.

2.2 Computational methods

Clusters at the so-called magic number sizes of 55, 147 and 309 atoms were chosen as a focus for study, as well as two cluster sizes between these, 101 and 228 atoms. The magic numbers represent cluster sizes at which a closed-shell structure is possible for each of the three motifs (Dh, FCC and Ih). The intermediate regions were chosen to explore size regimes between the sizes that have traditionally dominated the literature, as well as to allow for the possibility of clusters that are structurally different from those at the magic sizes. Clusters with masses within approximately ±2% of each chosen size were considered, to reflect the size variation typical in an experiment. The studied cluster sizes were: 55 ± 1, 101 ± 2, 147 ± 3, 228 ± 4 and 309 ± 6 atoms.

In this study we employ a workflow in which we (i) manually construct an ensemble of low-energy starting clusters, (ii) explore the surrounding potential energy surface using a global optimisation algorithm and then (iii) refine the resulting clusters using DFT. This yields low energy, often asymmetric or distorted Ih, Dh and FCC clusters. All calculations here neglect the influence of temperature and support.

2.2.1 Generating the initial ensemble of structures. The ensemble of initial structures was constructed using a recently-developed interpolation scheme 38 to estimate the lowest energy clusters of each motif for any size. This scheme is based on the premise that asymmetric FCC, Dh or Ih clusters that may have incomplete (stepped/kinked) surfaces or present adatoms are relatively stable if their "parent", symmetric clusters are themselves stable. For a given number of atoms, surrounding symmetric clusters were constructed and the requisite number of atoms removed to reach the desired cluster size. This was repeated for all nearby symmetric structures, yielding an ensemble of asymmetric clusters of a given size. More details can be found in the ESI.† All ensemble structures were generated using the Atomic Simulation Environment 50 (ASE) version 3.9.0 and locally minimised using the software package EON 51 and the RGL interatomic potential with parameters from Baletto et al. 29 (hereafter referred to as the empirical potential (EP) approach). The RGL potential, and this particular parametrisation, was chosen as it is widely used for metallic clusters.
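A rough illustration of this ensemble-generation and empirical-potential relaxation step is sketched below. It builds symmetric FCC and Ih parent clusters with ASE, trims the atoms furthest from the centre to reach a target size, and locally relaxes the result. Because the RGL potential and the EON package are not bundled with ASE, the built-in EMT calculator is used here purely as a stand-in for the empirical potential, and the simple trimming rule is a surrogate rather than the published interpolation scheme of ref. 38; the parent sizes, convergence criteria and all other parameter choices are illustrative.

```python
import numpy as np
from ase.cluster import Octahedron, Icosahedron
from ase.calculators.emt import EMT        # stand-in for the RGL empirical potential
from ase.optimize import BFGS

def trim_to_size(cluster, n_target):
    """Remove the atoms furthest from the cluster centre until n_target remain.

    A crude surrogate for generating asymmetric children of a stable symmetric
    parent; the published scheme removes atoms more systematically."""
    while len(cluster) > n_target:
        d = np.linalg.norm(cluster.positions - cluster.positions.mean(axis=0), axis=1)
        del cluster[int(np.argmax(d))]
    return cluster

def make_candidates(symbol, n_target):
    """Build one FCC and one Ih candidate near n_target and relax each locally."""
    parents = {
        "fcc": Octahedron(symbol, length=9, cutoff=3),   # truncated-octahedron parent
        "ih": Icosahedron(symbol, noshells=5),           # 5-shell Mackay icosahedron
        # a decahedral parent could be added analogously with ase.cluster.Decahedron
    }
    relaxed = {}
    for motif, atoms in parents.items():
        atoms = trim_to_size(atoms, n_target)
        atoms.calc = EMT()
        BFGS(atoms, logfile=None).run(fmax=0.05, steps=200)  # local minimisation
        relaxed[motif] = (atoms, atoms.get_potential_energy())
    return relaxed

# Example: Pt candidates in the 303-315 atom bracket (illustrative target of 305 atoms)
for motif, (atoms, energy) in make_candidates("Pt", 305).items():
    print(f"{motif}: {len(atoms)} atoms, E = {energy:.2f} eV (EMT)")
```

Each relaxed candidate would then serve as a starting point for the global optimisation described next.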
2.2.2 Global optimisation. Global optimisation (GO) calculations were performed using the recently-developed "global optimisation using saddle traversals" (GOUST) algorithm 52 as implemented in EON, with the RGL potential for the interatomic interactions. The method explores the potential energy surface by locating first-order saddle points and their connected minima, applying the minimum-mode following method 53,54 with the bowl breakout extension. 55 From the initial local minimisation carried out on each of the ensemble structures, GO calculations were performed for each of the lowest energy Ih, four lowest energy Dh and four lowest energy FCC nanoclusters, and run until at least 130 minima were found in each case. A second GO run was initiated for each of the lowest energy structures, with a larger initial displacement, to attempt a wider exploration of the potential energy surface near each initial structure. Verification of the GOUST algorithm was undertaken through comparison with results obtained from a genetic algorithm, where it was found to perform equal to or better than the genetic algorithm. For discussion and validation of the global optimisation techniques, see the ESI.†

2.2.3 DFT refinement. Structures from all GO runs for each cluster motif of a given number of atoms were combined, and the three lowest energy clusters of each motif, FCC, Dh and Ih, were then re-optimised using DFT. The DFT calculations were performed with the PBE 56 exchange-correlation functional. Each cluster was placed in a cubic supercell and separated from its nearest replica by at least 10 Å in each dimension. The Brillouin zone sampling was restricted to the gamma point only. A plane wave basis set with an energy cutoff of 229 eV was used to describe the valence electrons, while the core electrons were treated using the PAW 57,58 representation. All DFT calculations were performed using the VASP software package. 59

3 Results and discussion

3.1 Experimental results for Pt clusters soft-landed on carbon films

The identification of the cluster structures was performed by comparison of the experimental images to simulated QSTEM atlases, allowing characteristic features in the experimental clusters to be identified and the clusters to be assigned a motif. This laborious manual process is more effective than automated image recognition at the present time. FCC clusters exhibit parallel planes of atoms that show up as parallel lines in the QSTEM atlas. Dh clusters often show curved lines as well as areas of more distinct dots, while Ih clusters show more circular contours, often with the patterns radiating out from the centre of the cluster. These distinct patterns allow clusters of any size to be identified, which avoids the need to generate a QSTEM atlas for every size of cluster investigated, and allows us to assign clusters over a wide size range. If no positive identification can be made, the cluster is classed as unidentified/amorphous (UI/A). Clusters that are UI/A do not match any of the three candidate structures considered, but do sometimes exhibit distinctive features.

The proportion of Pt clusters assigned to each motif between 10 and 600 atoms is shown in Fig. 1 and tabulated in the ESI.† At large sizes (>300 atoms) FCC is the dominant motif (66%), while 31% of the clusters are UI/A, 3% were found to be Dh and no Ih structures were found. However, for smaller clusters (<300 atoms) most clusters are UI/A (80%). The FCC motif was also present (19%) but only a minimal appearance of the Dh structure was seen (1%); again, no Ih clusters were seen. Due to the high proportion of UI/A clusters in this region, the exact isomer proportions in Fig. 1 should be treated with caution, as our assignments are rather conservative.

To investigate in more detail the high proportion of UI/A clusters, size-selected clusters containing 55, 147 and 309 atoms were produced and imaged; representative images can be found in Fig. 2.
When these clusters were imaged, a more conservative approach was taken in motif assignment when comparing to QSTEM atlases. For both 55 and 147 atoms, all clusters were identified as UI/A, but some distinct features could be seen, such as parallel lines, characteristic of FCC clusters (Fig. 2a), and the appearance of rings with dots, characteristic of Ih clusters (Fig. 2d). These patterns suggest the possibility of ordered regions within glassy overall structures, which do not extend sufficiently to match the simulations and so do not allow assignment of these clusters to a traditional FCC, Dh or Ih motif. For the 309 atom clusters, 34% of clusters were assigned as UI/A. However, some of these could be assigned as twinned FCC clusters, with non-continuous planes (Fig. 2f).

3.2 Theoretical results for Pt and Au clusters

Cluster structures were found using global optimisation methods with an empirical potential (EP) and then refined using DFT before reporting the final structures and energies. Refinement of the EP structures was found to be necessary as the EP often predicted (i) the wrong ordering of the energies of clusters within a structural motif (e.g. the wrong truncation of a given motif) and (ii) the wrong energy ordering between the Ih, Dh and FCC motifs. Furthermore, notable differences in the structures were also observed between EP starting structures and relaxed DFT structures. For a full discussion of the results obtained using the EP, see the ESI.† Henceforth, only DFT structures and energies will be discussed. While this study refined the energies of several low-lying clusters of each motif, rather than just the lowest energy isomer, the use of an EP followed by refinement with DFT is not equivalent to direct exploration of the DFT potential energy surface; therefore some low energy isomers may be missed by the present approach. Regrettably, DFT-based global optimisation is prohibitive for clusters of this size.

3.2.1 Cluster morphology. Clusters were assigned to a given motif by eye, based on a majority of structural features, such that none of the clusters in the computational study were assigned to be UI/A.

Fig. 1 Overview of the dominant motif for Pt clusters between 10 and 600 atoms, classified as either FCC (blue), Dh (red), Ih (black) or unidentified/amorphous (UI/A, green). 700 clusters were imaged in total. The bin size is chosen for ease of viewing and does not represent the experimental uncertainty in cluster size.

3.2.1.1 Icosahedral clusters. Examples of the low energy Ih structures calculated are shown in Fig. 3. For Pt and Au clusters in the 54-56 atom bracket, the structures are rather distorted (Fig. 3a and b, respectively), especially for Au, in agreement with previous literature. 60 We assign these structures to the Ih motif here due to the appearance of Ih-like features, but we note that the heavy distortion of the clusters makes this assignment somewhat subjective.

For clusters between 144 and 150 atoms, the lowest energy Ih clusters for both Pt and Au exhibit predominantly ordered structures with some surface reconstruction; perfect Ih are not the most stable structure for either metal. A common reconstruction is the so-called rosette reconstruction, where six atoms form a ring with a central atom slightly depressed within the ring (Fig. 3d). 42 The rosette feature is typically more pronounced in Au than in Pt. Other surface reconstructions include the appearance of small (100)-like facets. Previous computational studies have also concluded that distorted Ih structures are more stable than perfect Ih structures at a cluster size of 147 atoms.
30,61For the largest clusters considered between 303 and 315 atoms, similar reconstructions are also observed but they tend to be more localised and the remainder of the cluster is ordered. The 99-103 atom and the 224-232 atom size brackets are interesting size ranges as they lie directly between magic sizes, i.e. they are as far as possible from a closed shell Ih.Accordingly, the Ih structures that are observed in these regions are distorted.A common feature is a severely truncated Ih, where a signicant portion of the cluster is missing.This causes the Ih features of these clusters to distort and strong Dh features can also be seen, such as (100) facets and re-entrant edges typical of a Marks decahedron (Fig. 3c). 3.2.1.2Decahedral and FCC clusters.The calculated low energy Dh and FCC structures of Au and Pt are common across both metals and are typically quite ordered.Across all size brackets, the features observed within each motif for both metals form three distinct categories: (1) perfect, closed-shell clusters (Fig. 4a and d) (2) open-shell clusters with incomplete facets and/or with adatoms (Fig. 4b and e) or (3) clusters with a stacking fault, which may, in addition to this, be open-shell (Fig. 4c and f).The perfect, closed-shell structures (1) are not necessarily symmetric with respect to the size of the facets/ depths of the corners but all facets are complete and the surfaces do not contain adatoms, nor do these clusters have any stacking faults. Stacking faults (Fig. 4c and f) are quite common in the smaller clusters but are rarely observed for clusters greater than 100 atoms.Stacking faults are a bulk (rather than a surface) defect and have an associated energy penalty that increases with the amount of bulk. 62Due to the high surface area to bulk ratio of small clusters, bulk energy penalties are of little consequence at small sizes.As the size of the cluster increases, bulk effects dominate more and stacking faults destabilise the clusters, explaining their absence in the low-energy structures of the larger clusters. For all size brackets, the most stable structures within a given motif are very similar for both metals.For example, the lowest energy Dh clusters around 147 atoms, for both Pt and Au, are based on a Dh cluster with (100) facets that contain 6 atoms and Marks edges with a depth of 1 atom.This further emphasises the similarity in structure of the Pt and Au clusters.The change in structure upon increasing the cluster size by one atom is typically the simple addition of one more atom to a (100) facet of the same base structure. 3.2.2Cluster energetics.The calculated energy of the most stable cluster of each motif (Ih, Dh and FCC) for each metal is in Fig. 5, for all of the size ranges considered here, enabling comparison of the calculated motif dominance for Au and Pt clusters as a function of size. 
For both Pt and Au, the smaller clusters (54-56 and 99-103 brackets) show competition between the (distorted) Ih and FCC motifs.In these size brackets, Pt shows more FCC dominance than Au, while for Au, Ih is more stable more oen.The Dh motif is only found to be the lowest energy motif once, for Au 100 .The dominance of Ih clusters for Au at small sizes is expected from general arguments of cluster stability; Ih clusters have the most favourable surface packing of the three motifs and at small sizes the high internal strain of the Ih is not severe enough to offset this.The stability of Pt FCC clusters at small sizes is less intuitive as it has the least favourite surface packing.However, previous theoretical work by Kumar and Kawazoe found FCC clusters to be the dominant motif for Pt from sizes of around 40 atoms. 43They postulated that the stability of the FCC isomer was due to growth from stable triangular Pt 6 and square planar Pt 9 clusters, such that larger clusters rich in these features (such as the FCC motif) are particularly stable. For medium-sized clusters (144-150 atom size bracket), Au exhibits close competition between all three motifs, suggesting the presence of multiple motifs for clusters generated in this size range.Au does, however, have a slight preference for Dh at this size, with 5 out of 7 of the clusters in this bracket nding the Dh motif to be the most stable.For Pt, within this same size bracket, the Dh and FCC motifs compete while the Ih clusters are very unstable. For Au clusters of 224-232 atoms, the energetics of Ih clusters lie well above the realm of the lowest energy clusters.However, between the FCC and Dh motifs there remains strong competition, with the lowest energy clusters uctuating between the two motifs.For Pt clusters in this same size bracket, there is a strong preference for the FCC motif.The Dh clusters in this bracket are relatively unstable (with the possible exception of Pt 227 ), and the instability of the Ih clusters persists. For the largest Au clusters considered (303-315 atoms), the competition between Dh and FCC motifs, as witnessed for medium-sized clusters, is still apparent, with slight preference once more for the Dh motif for all except Au 303 and Au 304 .The Ih gain stability in this bracket, such that the energy difference between all three motifs is small, particularly for the larger clusters (>309 atoms).Conversely, Pt behaves very differently to Au in this bracket.The separation of energy between the motifs is now clear; FCC is the most stable structure while the Dh and Ih clusters are very unstable in contrast. Comparison of experiment and theory 3.3.1 Structure.The calculated cluster structures presented here can be compared with experimental observations of structural features in Au and Pt clusters.The lowest energy structure of Au 55 found here is in good agreement with previous computational [17][18][19]31 and experimental 16 studies that identify the lowest energy isomers of Au 55 to be a family of distorted, chiral clusters. Th exact structure of the global minimum depends highly on the theoretical description of the atomic interactions (see ESI † for detailed discussion). 
For larger clusters, literature on the specific structural features of Ih structures is scant, and we are unable to compare our calculated structures to experiment for these sizes. In the intermediate size brackets (99-103 and 224-232 atoms), our calculated Ih structures were generally rather distorted or severely truncated, and strong Dh features were observed. Mixed Ih-Dh Au structures have also been observed experimentally, albeit synthesised with non-size-selected chemical methods on a carbon support. 63

Three families of Dh and FCC clusters were identified in the calculations: (i) closed-shell clusters, (ii) open-shell clusters with adatoms or incomplete facets and (iii) clusters with stacking faults. Examples of all three of these types of clusters have been observed experimentally; Dh Au 923 clusters have been observed to exhibit partial Marks corners 23 and FCC Au 923 clusters have demonstrated adatoms on both (111) and (100) facets. 64 Stacking faults are possibly observed in our experimental Pt data, in which some of the UI/A experimental Pt 309 clusters exhibited features that could be due to twinned particles or particles with stacking faults (see Fig. 2f for an example).

3.3.2 Energetics. Comparison of experimental and calculated data on the relative energies of different cluster motifs is somewhat difficult for many of the clusters considered here. To our knowledge, there is no size-selected experimental data on motif preference for Au at sizes 99 and 103 and 224 and 232 atoms. For Au 147 clusters relatively little experimental data exists. Available data is either for specific sized clusters synthesised by ligand templating 65 or for clusters with a rather large size distribution. Hence, we refrain from making any specific comparisons between our theoretical motif dominance and experimental observations for Au clusters smaller than 300 atoms.

Experimentally, size-selected Au clusters with >300 atoms have been studied extensively. An early study found that, for Au clusters with 309 ± 6 atoms, no single motif dominates the observed distribution of clusters; Dh clusters were found to make up about a third of the observed clusters, FCC about a quarter and Ih less than 10%. 22 The remaining clusters could not be confidently assigned. However, it should be recognised that clusters are inherently metastable and the observed isomeric population is not necessarily representative of the equilibrium structures; instead, the structures observed can be due to kinetic trapping/growth effects. Wang and Palmer developed a method of deducing the most stable isomer by irradiating the cluster sample under the electron beam, i.e. annealing inside the electron microscope. 49 Less stable clusters are transformed into more stable clusters under the electron beam. The structure of Au 309 was recently revisited using electron beam irradiation experiments.
66For this size, rather than observing the majority of clusters transforming to one motif, the structures tended to oscillate back and forth between motifs under the electron beam.Frame-by-frame analysis, 14,16 was therefore used to monitor the relative occurrence of each isomer; it was found that FCC clusters were observed most oen (56%), followed by Dh (37%) and Ih clusters (7%), respectively, concluding that the FCC motif was the most stable, followed by Dh and Ih.Our computational results show that the energies of all motifs are rather close in energy, which is reected in the ease with which the clusters transform back and forth between motifs under the electron beam.Our simulated results indicate that the Dh clusters are the most stable, rather than FCC as seen in the experiments.However, there were a large number of clusters that were not able to be identied in the experiments, which could affect the exact proportions of each isomer observed. Comparison of calculated and experimental motif dominance for the smaller (<250 atoms) Pt clusters is difficult.Experimentally, most of the clusters in this region could not be assigned to any particular motif.Those structures that could be identied were determined to be FCC.Our calculated results support the absence of Ih clusters and presence of FCC clusters, but the large proportion of UI/A clusters complicates the situation somewhat.To further identify some of the UI/A clusters, QSTEM atlases were generated using some of the low energy clusters obtained from the GO simulations.These clusters were either distorted Ih clusters or Dh and FCC with stacking faults and/or incomplete facets.This allowed some additional clusters to be assigned to one of the motifs.For example, for the 55 atom clusters, this led to 15 more clusters from a total of 100 clusters being positively identied (6 Ih, 4 Dh and 5 FCC) and an additional 7 clusters from a total of 100 for the 147 atom clusters (6 Dh, 1 FCC, see ESI † for the new QSTEM atlases and reassigned clusters).While the majority of clusters remained unidentied this demonstrates how inclusion of imperfect clusters in the QSTEM atlas can aid in identication of clusters that are further from the traditional perfect motifs. For Pt clusters with >300 atoms, the calculated data are in broad accordance with the experimental results.FCC clusters make up 66% of the observed clusters in the experiments, with a small proportion (3%) of Dh clusters and no Ih clusters.Indeed, our calculated data in this region show that Ih are very unstable and that FCC is clearly the dominant structure, save for few specic sizes where Dh clusters are competitive. We acknowledge that our calculations are temperature free, the inclusion of which would likely alter the precise proportions of each motif.It should also be noted that the experimental clusters are so-landed on carbon lms.However, the low deposition energy means that clusters are not fragmented or coalesced on impact 22 and the conducting nature of the carbon lm leads to efficient dissipation of energy and charge. 67urthermore, the electronic structure of clusters, 68 as well as the melting point 69 have been shown to be only minimally perturbed by a carbon support. 
Rationalisation of differences between Pt and Au Both the experimental and theoretical results for Pt and Au in this work reveal clear differences in the trends in motif dominance with size.Trends in cluster motifs are commonly rationalised in terms of competition between surface and bulk effects.Ih is favoured at low sizes due to low surface energy.However, it has high internal strain, which becomes more signicant as the size of the cluster increases.Therefore, Dh is favoured next before eventually, FCC, which is stable in the bulk limit.From Fig. 5, it can be seen that Pt clusters roughly follow this trend, where the Ih clusters are the rst motif to become unstable, followed by the Dh clusters until, at the largest size considered ($309 atoms), the FCC motif is the only stable motif.In contrast, for Au the Dh and, to a lesser extent, Ih motifs remain competitive at $309 atoms.This is a counterintuitive result as one would expect the FCC motif to gain in dominance as size is increased, which is clearly not the case here.Furthermore, experimental results for Au clusters of larger sizes suggest that the motif competition between Dh and FCC isomers of Au persists even up to at least 900 atoms. 23,24,27This is clearly in contrast to the more expected behaviour of Pt clusters. To gain some insight into these contrasting trends, the structures of selected clusters are examined in more detail.Low energy closed shell clusters around 309 atoms, locallyminimised with DFT, are considered.These are the 309 atom Mackay Ih, a 318 Marks Dh and a 314 atom truncated octahedron FCC cluster.The relative energies of these clusters are similar to the relative energies of the global minimum Ih, Dh and FCC clusters in the 303-315 atom size bracket (see ESI †), but without the added complexity of large-scale structural distortions.At the smallest size ($55 atoms) there is more difference in the relative energies between the global minimum and closed-shell clusters, which is discussed in more detail in the ESI.† the aforementioned importance of internal strain for cluster stability, in Fig. 6 the radial distribution functions (rdf) of the nearest neighbours in the core region of the closed shell clusters are analysed.For Pt, the rdf reveals a very ordered core for all motifs; only a few sharp peaks are observed.In contrast, for Au, disorder in the core region is evident (especially for the Ih) where the peaks in the rdf are broadened.It has been shown that amorphisation of the core of Au clusters leads to a signicant lowering of the energy of the core atoms, thus reducing the strain. 70This lowering more than compensates the concomitant gain in energy of the surface atoms due to increased disorder, leading to a more stable cluster. 42,70Accordingly, we propose that for Au, the disorder evident in the core of the Ih and Dh clusters reduces the internal strain and stabilises these clusters, thus allowing them to remain competitive with the FCC clusters at the largest size here.For Pt there is a lesser tendency towards amorphisation as shown in the ordered core atoms and consequently the Ih and Dh clusters are rather unstable at the largest size considered here. The tendency of Au to amorphise has been discussed in detail previously by Soler et al. 
70 There, the tendency of metallic bonds to contract at the surface of clusters, and the low energy cost to bond length and coordination disorder, were cited as key factors to favouring amorphisation.These were quantied by comparing the ratio of the elastic energy to the enthalpy of melting.Pt has a higher elastic energy, which should therefore favour amorphisation.However, this is offset by the enthalpy of melting, which is signicantly smaller in Au, thereby enabling more facile amorphisation for Au than for Pt.Factors favouring amorphisation are enhanced from le to right and down the dblock such that Au has the highest tendency towards amorphisation. 70However, the metal with the next highest tendency towards amorphisation is Pt, which has also been noted in another study. 42This explains why, while the nal motif trends are somewhat different between Pt and Au, there are still very strong structural similarities between the two types of clusters.Notably, the morphology of the most stable clusters of Pt and Au are oen very similar, with similar defects observed.Furthermore, somewhat disordered clusters are always lower in energy than their closed-shell counterparts. Conclusions This study investigates the structural differences observed between Pt and Au nanoclusters.Experimental Pt clusters up to 600 atoms were synthesised in a cluster beam source, so landed on carbon lms and imaged using aberration-correction scanning transmission electron microscopy.The resulting clusters were found to be largely amorphous/glassy at sizes less than 200 atoms.From approximately 200 atoms, a large proportion of FCC clusters is seen.Very few Dh clusters are observed experimentally over the range studied.In contrast, Au clusters show close competition between Ih, Dh and FCC motifs at all sizes up to $300 atoms. We have used a combined computational approach to explore the energies of Pt and Au clusters of sizes 54-56, 99-103, 144-150, 224-232 and 303-315 atoms.First, an ensemble of clusters was generated using recently developed methodology that estimates the structure of low energy asymmetric clusters of a given size based on the energies of nearby symmetric clusters.These clusters were then subjected to a global optimisation that located the lowest energy structures of each competing motif using the RGL empirical potential description of the atomic interactions.Finally the energies of the most stable clusters were rened using DFT.This renement was necessary as the empirical potential was oen found to predict the wrong energetic ordering of clusters both within a given motif and between motifs. 
None of the most stable calculated Ih clusters were the high-symmetry Mackay icosahedron; all were somewhat distorted, for both Pt and Au, and in the smallest size bracket (54-56 atoms) the distortion was rather pronounced. At larger sizes closer to the traditional "magic" sizes (144-150, 303-315), Ih structures are less distorted, with surface reconstruction being the predominant feature. Ih clusters in the size ranges between the magic sizes (99-103 and 224-232) were interesting as they exhibited both Ih and Dh features. FCC and Dh clusters generally fell into one of three categories: (i) closed-shell, (ii) open-shell with adatoms or incomplete facets and (iii) clusters with stacking faults. In terms of individual structures, Pt and Au showed strong similarity.

Despite the observed structural similarity of Au and Pt clusters within each motif, energetic analysis of the two types of clusters revealed notable differences. For Au we find all three motifs to be competitive in energy at all sizes. Conversely, for Pt all three motifs are competitive only at small sizes (54-56 and 99-103), after which the distinction between motifs becomes clear. From sizes of 144 atoms the Ih motif is not competitive and its relative instability increases with size. Dh is competitive in the 144-150 size bracket, but from the 224-232 size bracket it also becomes unstable, such that by the 303-315 bracket the FCC motif is clearly the most dominant. These results are in broad accordance with the experimental results.

Structural analysis reveals possible reasons for the differences in motif dominance of the two types of clusters. Au clusters show a greater tendency towards amorphisation, which is able to lessen the energy penalty arising from the strained core of the non-crystalline Ih and Dh clusters. As a result, these motifs remain competitive for Au, whereas for Pt they quickly become disfavoured due to the lesser tendency towards amorphisation.

Fig. 2 Representative HAADF STEM images of Pt clusters. (a) UI/A Pt 55 cluster showing parallel lines; (b) UI/A Pt 55 with no discernible structure; (c) UI/A Pt 147 cluster showing parallel lines; (d) UI/A Pt 147 cluster showing a ring-dot pattern; (e) UI/A Pt 147 cluster with no discernible structure; (f) Pt 309 cluster showing a possible twin plane; (g) FCC Pt 309 cluster; (h) Pt 309 electron scattering simulation, corresponding to the cluster seen in (g).

Fig. 3 Examples of low energy icosahedral (Ih) clusters as calculated using DFT. (a) Distorted lowest energy Ih structure of Pt 55 ; (b) distorted lowest energy Ih structure of Au 55 ; (c) distorted Ih cluster common in the 99-103 atom bracket for both Pt and Au, exhibiting both Ih (pink) and Dh features (red); (d) distorted Ih Au 147 with a highlighted rosette surface feature (orange). Grey: Pt; yellow: Au; pink: structures common to both Au and Pt.
Fig. 4 Examples of low energy decahedral (Dh) and octahedral (FCC) clusters typical of both Au and Pt, in all size brackets, as calculated with DFT. (a) Closed-shell FCC: deeper truncation at one corner but all exposed facets are complete; (b) open-shell FCC: missing atoms on a (100) facet; (c) FCC with stacking fault; (d) closed-shell Dh: Marks Dh with one (100) facet larger than the others; (e) open-shell Dh: atoms missing on a (100) facet; (f) Dh with stacking fault. Dark red atoms indicate either facets that are incomplete (b and e) or are used to illustrate the two parts of the cluster on either side of the stacking fault (c and f). Dashed white circles indicate missing atoms and arrows indicate stacking faults.

Fig. 5 Lowest energy structures from each structural motif for Au (top) and Pt (bottom) clusters, as calculated using DFT; FCC (blue), Dh (red), and Ih (black). Stability of clusters is represented by the quantity Δ = (E_tot − N E_bulk)/N^(2/3), where E_tot is the energy of the cluster, N the number of atoms and E_bulk is the energy per atom in the bulk FCC crystal. Δ essentially divides the excess energy of the cluster by the approximate number of surface atoms, thereby enabling facile comparison of the stability of clusters of different sizes. However, this has no effect on the relative stability of the motifs (see ESI†). Values of Δ are given relative to the most stable cluster for each metal (Au or Pt). Due to the high instability of the Ih clusters > 300 atoms, only selected Ih clusters were calculated in this region.

Fig. 6 Top: radial distribution function of core atoms in closed-shell (a) Ih 309 , (b) Dh 318 and (c) FCC 314 Au and Pt clusters, locally minimised with DFT. Interatomic distances are normalised with respect to the nearest neighbour distance in the bulk, r 0 . Cluster structures are shown in the insets to each plot, where the atoms defined as the core are opaque. For FCC 314 the core is defined as the innermost 6 atoms, for Dh 318 it is 19 atoms and for Ih 309 it is 13 atoms. The core is defined as the smallest region that has the same symmetry as the overall cluster.
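As a worked illustration of the stability measure Δ = (E_tot − N E_bulk)/N^(2/3) used in Fig. 5, the short snippet below computes Δ for a set of hypothetical cluster energies and reports each motif relative to the most stable one. The numerical energies are invented solely for illustration and do not correspond to values reported in this work.

```python
def delta(e_tot, n_atoms, e_bulk):
    """Excess energy per (approximate) surface atom: (E_tot - N*E_bulk) / N**(2/3)."""
    return (e_tot - n_atoms * e_bulk) / n_atoms ** (2.0 / 3.0)

# Hypothetical DFT total energies (eV) for three 309-atom Pt isomers and a bulk
# reference energy per atom; all numbers are made up for illustration.
e_bulk = -6.10
isomers = {"FCC_309": -1830.4, "Dh_309": -1828.9, "Ih_309": -1825.1}

deltas = {name: delta(e, 309, e_bulk) for name, e in isomers.items()}
d_min = min(deltas.values())
for name, d in sorted(deltas.items(), key=lambda kv: kv[1]):
    print(f"{name}: delta = {d:.3f} eV, relative = {d - d_min:.3f} eV")
```

Because every isomer at a given size shares the same N and E_bulk, the relative ordering of the motifs is unchanged by this normalisation; Δ only makes clusters of different sizes easier to compare on one axis.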
2019-05-17T13:55:28.024Z
2019-05-03T00:00:00.000
{ "year": 2019, "sha1": "5c50e11fac2322063e225dcba8dd1aa9ffefe1a9", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/na/c9na00122k", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bbc1e4a2d2c66ca0b0fcbf0d0392efd9e3b5ff66", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Materials Science" ] }
129114990
pes2o/s2orc
v3-fos-license
Constructed Ponds and Small Stream Habitats : Hypothesized Interactions and Methods to Minimize Impacts Extensive research has been conducted on how large impoundments and reservoirs affect hydrologic, geomorphologic and ecological processes in downstream ecosystems. Surprisingly, few studies have addressed the effects of smaller impoundments and constructed ponds. Pond construction has been considered an important tool for managers seeking to reduce sediment, nutrient and pollutant loads, and increase habitat heterogeneity in streams in an effort to conserve or enhance aquatic species diversity. However, we lack information on the interaction between ponds and stream habitats, which may compromise the efficacy of conservation efforts. The objective of this review is to outline possible physical and biological changes to stream ecosystems resulting from pond construction. Greater understanding of how ponds influence watershed processes at various spatial scales is crucial to evaluating the effects of constructed ponds on stream ecosystems. Introduction Research on the effects of large impoundments on downstream ecosystems has largely focused on reservoirs and rivers.Numerous studies have shown that impoundments cause drastic changes in ecosystem structure throughout watersheds and even continents by changing numerous ecological, hydrologic, and geomorphic processes [1,2].At smaller scales, stream ecologists have long been interested in the effect of wetlands and beaver impoundments on stream fishes, aquatic macroinvertebrates, water chemistry and geomorphic characteristics [3,4].Lakestream interactions have been recently reviewed in an attempt to guide future research on incorporating lakes into the river continuum [5].However, little research has addressed the interaction of human-constructed ponds and adjacent streams, despite the global proliferation of small impoundments and diversions and an increase in the number and geographic extent of anthropogenic ponds [2]. Ponds number in the millions worldwide [2].At the continental scale, ponds may play a measurable role in the global carbon cycle [6] and sediment movement [2].At regional and watershed scales, ponds can reduce stream sediment loads and nutrient concentrations [7].It is well documented that pond construction can benefit regional biodiversity by increasing freshwater habitat heterogeneity [8].Furthermore, ponds can support a range of recreational activities for humans, be used to improve water quality, and provide other important ecosystem services.Overall, the construction of ponds within highly degraded or biologically depauperate watersheds can be a beneficial prescription.Yet, we know little of how ponds alter stream ecosystem dynamics, especially in relatively undisturbed watersheds. 
Despite the recent proliferation of artificial ponds within watersheds throughout the United States, there is limited literature examining the effect of these ponds on in-stream habitat.The current lack of understanding of pond-stream interactions underscores the need to provide a synthetic framework to guide future research and management of watersheds with respect to the construction, placement, and maintenance of constructed ponds.In this synthesis, we provide a broad examination of the effects of constructed ponds on in-stream habitat.We focus this discussion on hypothesized alteration of physical and biotic processes in adjacent streams because stream habitat is formed, maintained, and altered by the reciprocal interactions of these processes.This review is purposely interdisciplinary because of the inherent complexity of the topic.Our primary goal is to step back from the details of in-pond dynamics in order to call attention to broader patterns of pond-stream interactions and to identify specific points of relevance to conservation and management. Streams integrate landscape properties in a hierarchical fashion, moving from network, to stream, to reach, to habitat [9,10].Our vision of the interaction of ponds and streams applies hierarchical patch dynamic [11] and network dynamic [12] perspectives of the river continuum.Specifically, we suggest that the effect of constructed ponds on streams depends heavily on pond density at the network scale and individual pond design at the stream, and habitat scales (Figure 1).While pond design is designated by landowners and tends to lie within the confines of regional and federal regulations, pond density within a stream network depends on the dominant land use and the aggregative (and possible cumulative) actions of many landowners. Terminology Constructed ponds are highly diverse with respect to their purpose, design, water storage capacity, catchment characteristics (i.e., surrounding land use, vegetation) and biota.As a result of this diversity, ponds are difficult to define.We define constructed ponds as man-made water bodies with areas between 10 m 2 and 60,000 m 2 that hold water throughout the year.To further restrict the types of waterbodies discussed here, we will focus on those ponds that gain water through the direct impoundment or diversion of surface flow, rather than solely rainfall and/or groundwater.Diversion-fed ponds are often built by landowners and government agencies to serve a variety of purposes, including water supply for livestock, sediment trapping, erosion control, nutrient removal, recreation, and aesthetic improvement [13]. In this context, we can place constructed ponds in one of two categories; 1) on-stream ponds-built by impounding the existing stream channel and causing an abrupt shift from lotic to lentic habitat where the stream enters the pond and from lentic to lotic at the impoundment, and 2) off-stream ponds, which require the diversion of part of total stream discharge and are located in the floodplain adjacent to the stream channel (Figure 1). 
On-stream ponds can be viewed as a single patch within a stream network, as conceptualized by hierarchical patch dynamic perspective of the river continuum [11].According to this perspective, a stream is comprised of hierarchically nested patches arranged longitudinally in space.Patches have unique community and biogeochemical structures and functions that vary with time, although the dynamics of individual patches are not independent of other patches.Therefore, biological and chemical fluctuations within an upstream patch can alter the dynamics of downstream patches. Off-stream ponds alter stream network structure by removing flow at the point of diversion, much like a braided channel, and creating a confluence at the point where the effluent discharge channel joins the stream.Here, we apply a network dynamics perspective because braids and confluences can cause locally abrupt changes in water chemistry and sediment flux while also altering channel and floodplain characteristics [12].At the network scale, the number and arrangement of off-stream ponds interact with the natural stream to influence the diversity of stream habitat patches by the accumulation of local changes to biogeochemical and geomorphological characteristics. Flow Regime The residence time and amount of water in ponds may reduce temporal variation in stream discharge, mimicking the flow regimes downstream of mountain lakes and beaver ponds.Residence time is calculated as the volume of the pond divided by inflow per unit time.In snowmelt dominated lake-stream systems, the retention of water in a lake reduces the magnitude and increases the duration of over-bank flooding events downstream [14].The downstream flow regime is dictated by the water level in the lake.Beaver ponds also dampen peak flows and increase low flows [15].Despite the highly variable outlet structures of on-stream ponds, water level relative to outlet elevation remains the dominant control of discharge.Off-stream ponds should have more variable effects on flow regimes, mainly the magnitude and duration peak flows.We predict the degree to which off-stream ponds alter flow regimes to be a function of the percentage of flow diverted into the pond and the ratio of undiverted flow to effluent discharge. At the watershed scale, the ratio of constructed pond/ wetland area to total watershed area is one of the most important factors determining the capacity of a watershed to decelerate high stream flows [16].At the scale of individual ponds and the adjacent stream, deceleration of flow and sediment trapping capacity depends on residence time of the pond.Variation in residence times among ponds of similar volume and inflow rate is partially attributed to differences in hydraulic efficiency, defined as the efficiency with which the volume of the pond is utilized [17].Features that improve the hydraulic efficiency of constructed ponds include an elongated shape, submerged terraces, baffles and islets [17]; all of which act to distribute the energy of inflowing water throughout the pond and minimize sediment re-suspension events during high flows [18].With respect to suspended sediments, higher residence times in the pond allows more suspended sediments to settle out of the water column.Pond designs with long residence times coupled with high hydraulic efficiencies should have the largest effect on natural flow regimes and sediment loads, with consequences for substrate characteristics and habitat complexity in receiving streams. 
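To make the residence-time definition above concrete, the snippet below computes the nominal residence time V/Q for a hypothetical off-stream pond, along with the split between diverted and bypassed flow. The pond volume, diversion fraction and discharge values are invented solely for illustration, and a real pond will deviate from the nominal value according to its hydraulic efficiency.

```python
def residence_time_days(volume_m3, inflow_m3_per_s):
    """Nominal residence time = pond volume divided by inflow rate, in days."""
    seconds = volume_m3 / inflow_m3_per_s
    return seconds / 86400.0

# Hypothetical off-stream pond: 5,000 m^3 volume, stream discharge 0.10 m^3/s,
# with 30% of the stream diverted through the pond.
stream_q = 0.10                  # total stream discharge (m^3/s)
diverted_fraction = 0.30
pond_inflow = stream_q * diverted_fraction
undiverted_q = stream_q - pond_inflow

print(f"pond inflow: {pond_inflow:.3f} m^3/s")
print(f"nominal residence time: {residence_time_days(5000.0, pond_inflow):.1f} days")
print(f"ratio of undiverted flow to pond effluent (steady state): "
      f"{undiverted_q / pond_inflow:.2f}")
```

Under these assumptions the nominal residence time is roughly two days; features that raise hydraulic efficiency (elongated shape, baffles, islets) make the effective residence time approach this nominal value, while short-circuiting flow paths reduce it.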
Sediment Load and Habitat Complexity By decelerating flow and allowing suspended particles to settle out of the water column, ponds reduce the load of fine sediments downstream by acting as a sediment sink. Discontinuity of sediment loads due to lentic habitat patches can have profound effects on physical attributes of downstream segments. For example, a study characterizing the influence of glacial lakes on the physical form and function of mountain streams in central Idaho, USA, found that sediment size, channel shape, and sediment entrainment are best described by the location of sediment sources (e.g., hillslopes and tributaries) and sinks (e.g., lakes) because fine sediments are removed and the downstream flow regime is altered [19]. Because elevated sediment loads in streams often have negative effects on stream organisms [20], ponds have been prescribed as a conservation tool to lower suspended sediment concentrations in watersheds disturbed by agricultural development, road construction, or fires [18]. In relatively undisturbed streams, the relationship between suspended sediment load and discharge is viewed as a dynamic equilibrium that can be disrupted by changes in either sediment load or discharge [21]. Constructed ponds can alter the suspended sediment-discharge relationship by 1) changing the discharge regime by diverting flow and increasing water residence time at the pond location, 2) changing channel morphology by building impoundments, diversion and effluent channels, and the pond itself, and 3) slowing the downstream transport of bed load materials and suspended sediment from sources upstream in the watershed. While used as a measure of success for restoration projects in many disturbed watersheds, drastically lowered sediment loads in undisturbed watersheds disrupt the relationship between the load and size of sediments (i.e., supply) and stream power, a function of streamflow discharge, water surface slope, and the specific weight of water. Channel degradation, or the localized removal of channel bed material by stream water without adequate deposition, occurs when stream power exceeds the sediment supply [21], and can lead to channel incision and streambed armoring [28]. The magnitude and extent of channel degradation depend heavily on the location of new sources of sediment downstream. Channel incision below off-stream ponds can be easily avoided by constructing a low-gradient effluent channel with a low water surface slope. Streambed armoring occurs when only small particles are entrained and the resulting streambed consists of large substrates that are immobile during bank-full flows [19]. We predict that armoring below on-stream ponds and some off-stream ponds is more likely to occur than channel incision because it is more difficult for landowners to maintain the supply of small sediments to the downstream streambed at pre-pond levels. In most cases, small sediment supply will be negligible immediately below on-stream ponds and greatly reduced below off-stream ponds with high ratios of diverted stream discharge to un-diverted stream discharge and of effluent discharge to receiving stream discharge.
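Stream power, as invoked above, has a conventional per-unit-length definition that makes the degradation argument explicit; the formula below is that standard definition and only restates the relationship described in the text:

Ω = γ Q S,

where Ω is stream power per unit channel length (W/m), γ is the specific weight of water (about 9,810 N/m³), Q is discharge (m³/s) and S is the water-surface slope. Degradation is expected where Ω, and hence transport capacity, exceeds what the upstream sediment supply can satisfy, which is why lowering the effluent-channel slope S, as suggested above, directly reduces the risk of incision.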
Temperature The direct discharge of high-temperature water from a pond into a stream can increase temperatures of the receiving stream to above normal levels [22]. Excellent literature reviews exist that explore the fundamental controls on stream water temperature [23] and biological responses to temperature variation [24]. We suggest four important factors that interact to influence in-pond water temperatures and the consequent effects of effluent discharge on the temperature regimes of receiving streams: 1) the placement of a pond within a watershed, 2) the surface area to volume ratio of the pond, 3) light penetration into the pond, and 4) the residence time of stream water in the pond. Pond surface area to volume ratio (SA:V) governs the efficiency of radiant heating and water column mixing [25] because the exposure of a pond to radiant energy and wind increases with surface area. Pond depth influences light penetration and whether wind mixing will affect the hypolimnion. Differences in light penetration among ponds lead to variation in radiant heating of ponds with similar SA:V. Light penetration is influenced primarily by aquatic macrophyte coverage and pollution [26]. The heated water is then discharged from the pond back into a stream, altering downstream temperatures during summer months when low flows and high temperatures already may cause harm at a critical time in the life histories of stream organisms [27]. In general, ponds fed by headwater streams tend to be much cooler than ponds fed by larger streams in the same system [28]. Because ponds have a greater thermal inertia than small streams, warmer water temperatures may persist in ponds into the autumn months. The steepest temperature gradients in a stream-pond network occur at pond-stream transitions, and the gradient between upstream and downstream temperature is steeper for on-stream ponds than for off-stream ponds. An example of the former is described by Maxted et al. [22], who observed higher mean daily stream temperatures throughout the summer and autumn in streams below on-stream ponds in New Zealand, especially below ponds lacking riparian canopy cover. Similarly, Jones and Hunt [29] found that off-stream ponds acted as point sources of thermal pollution. In some ponds, however, substantial input of stable-temperature groundwater can reduce diel temperature ranges and keep ponds cooler than the adjacent stream regardless of a pond's location in a watershed (Ebel and Lowe, unpublished data).
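A first-order sense of how much pond effluent warms a receiving stream, a point developed further below, comes from a flow-weighted mixing estimate. This is our simplification, it ignores in-channel heat exchange, and the discharges and temperatures used are purely illustrative:

T_mix = (Q_s T_s + Q_e T_e) / (Q_s + Q_e),

where Q_s and T_s are the discharge and temperature of the stream upstream of the outfall and Q_e and T_e are those of the pond effluent. For example, a 100 L/s stream at 14 °C receiving 10 L/s of 22 °C effluent warms to about 14.7 °C, whereas the same effluent entering a 20 L/s headwater stream raises it to roughly 16.7 °C, consistent with the expectation that small, low-flow streams are the most sensitive to thermal loading.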
Changes in water temperature have consequences for lotic communities and ecosystem processes by limiting dissolved oxygen concentrations, influencing the feeding and metabolic rates of stream organisms and altering microbially mediated nutrient cycling [30]. In streams dominated by species that require low, stable water temperatures, like many in the Northern Rocky Mountains, water temperatures exceeding 20°C lead to decreased growth rates and survival of salmonids [31]. On-stream ponds will likely cause an abrupt change in temperature and biotic community structure [1] along a longitudinal gradient encompassing the pond and adjacent upstream and downstream reaches. The direction of the temperature gradient depends on whether pond effluent is released from the surface (warmer effluent) or from the bottom of the pond (colder effluent). The effect of off-stream ponds with regulated headgates [32] will likely be smaller than that of on-stream ponds, and should vary seasonally depending on the amount of inflow, the residence time of water in the pond, and the ratio of effluent flow and temperature to mainstem flow and temperature [33]. We hypothesize that discharge from off-stream ponds affects stream temperatures more when those ponds are located along lower-order headwater streams with lower flows and higher pond-to-stream temperature ratios. The most effective strategy to prevent artificial increases in stream temperature from pond effluent is to prevent pond temperatures from reaching levels beyond the tolerances of native stream organisms. This can be accomplished by designing ponds with low surface area to volume ratios, altering outlet structures such that effluent is drawn from the lower strata of the pond, or minimizing the ratio of effluent discharge to receiving stream discharge. Alteration of Chemical Conditions The creation of standing water along a stream changes the downstream transport of nutrients. For this reason, pond construction is a common technique used by environmental engineers and stream restoration practitioners [7]. Summarizing the nutrient chemistry of ponds is made difficult by the range of pond designs, the varying sources of nutrients from the atmosphere and watershed, the intricacies of nitrogen and phosphorus dynamics within ponds, and widely differing water retention times. The details of these processes have been extensively reviewed elsewhere [34]. It is well understood that ponds can improve the water quality of streams degraded by intensive mining and sewage treatment plant effluent; however, altering the chemistry of currently unimpacted streams may disrupt microbially mediated chemical cycling, with unintended consequences for intact lotic food webs, and must be considered during watershed planning. Overall, the degree to which a constructed pond will influence streamwater chemistry is design and region dependent, requiring pond architects to have in-depth knowledge of watershed characteristics if in-stream effects are to be minimized.
Phosphorus (P) sorption and desorption in aquatic systems are governed by the structure of sediment particles, the degree of phosphate saturation, and sensitivity to environmental changes [35]. Pond sediments can be a temporary phosphorus sink and provide an important ecosystem service, especially in agricultural areas, by reducing phosphate concentrations downstream. A Swedish study estimated that a constructed, open-water wetland retained 17% of mean annual P load over 4 years, with 78% of retained P held in sediments close to the inlet [36]. Retained P in this wetland was predominately in potentially mobile forms, i.e., organic P and P associated with iron or aluminum. Potentially mobile implies that P can be released from sediments. P release from sediments into pond effluent depends on the sediment phosphate capacity and varies seasonally because warming water temperatures can promote phosphate release through the increasingly anoxic conditions at the sediment-water interface associated with increased microbial activity [37]. Released phosphates can be exported to adjacent streams through pond effluent. Such pulses of phosphates can be especially harmful to headwater systems, where phosphate concentrations will depend mainly on effluent discharges since background concentrations are close to detection limits [35]. In headwater streams, the majority of energy driving the system is derived from allochthonous sources [28]. Impoundments greatly depress the ratio of coarse particulate organic matter to fine particulate organic matter as the downstream movement of detritus is blocked and phytoplankton and fine sediments are discharged from the pond. Nutrient loading of ponds and nutrient discharges can stimulate in situ production by in-pond phytoplankton communities and in-stream biofilms, possibly altering the trophic state of the recipient stream [38]. Discharge of pond autotroph biomass (either dead or alive) can change the energy dynamics of downstream communities by shifting the predominant source of utilizable carbon from recalcitrant leaf litter to labile algal photosynthates. This shift in major energy source would cause changes in downstream macroinvertebrate community assemblage and production. Well-placed, well-designed, and well-managed ponds can play a critical role in the removal of nitrogen from streamwater, but can discharge retained nitrogen under some conditions. Pond construction can stimulate nitrogen removal by providing the anaerobic conditions necessary for denitrification pathways. Total nitrogen removal often varies between 40% and 55% of total input depending on the design of the pond or wetland and the nitrate loading rate. Nitrate not removed through denitrification is transformed and retained by burial and sorption to sediments and is thus available to be discharged during high flow events, pond dredging, or outflow control failures. Such nitrogen pulses can cause changes in downstream biotic communities. For example, Selong and Helfrich [39] found that overall macroinvertebrate species richness decreased below constructed trout ponds. The reduction in overall species richness was accompanied by a decrease in the ratio of mayflies, stoneflies, and caddisflies to chironomids, and in the ratio of shredders to total insects. They attributed the shift in the macroinvertebrate community to increased nitrogen levels caused by high concentrations of nutrients in off-stream pond effluent during pond-dredging events and by pond nutrient enrichment from trout feeding loads.
Biotic Changes Research on ponds and biodiversity is rapidly increasing, showing that ponds increase regional biodiversity and promote rare aquatic species [40]. Specific management techniques to maximize pond biodiversity have been reviewed [8]. The arrangement and size of constructed ponds within a watershed can influence the effectiveness of pond construction as a tool for biodiversity conservation. For example, Oertli et al. [41] found that many small ponds in close proximity to each other can support greater regional biodiversity than a few large ponds. Furthermore, the creation of areas of high pond density that provide increased habitat complexity and maintain high among-pond connectivity can help to sustain persistent metapopulations of rare species [41]. However, the benefits of pond construction aimed at protecting rare lentic species should be weighed against the possible costs of destabilizing intact stream ecosystems. Habitat alterations and subsequent biological invasions are cited as two major drivers of biodiversity loss worldwide. Clearly, the excavation of a pond causes a drastic change in the local biotic community on the floodplain by creating lentic habitat where it did not previously exist. Increases in invader abundance typically follow any natural or anthropogenic disturbance [42], and disturbed floodplains and streams provide a prime opportunity for invader establishment. In addition to the susceptibility of newly constructed ponds to biological invasions, constructed ponds also can pose a threat to native species assemblages in adjacent streams by decreasing the ability of stream communities to resist invasions. An invader of increasing importance to the health of streams in the western USA is Myxobolus cerebralis, an invasive myxosporean parasite identified as the cause of whirling disease in salmonid species. Allen and Bergersen [43] found the highest densities of Tubifex tubifex, the invertebrate host of M. cerebralis, in the Poudre Rearing Unit (PRU) of the Cache la Poudre River in Colorado, a trout rearing facility consisting of a series of off-stream ponds. T. tubifex densities were three orders of magnitude higher in the PRU than in all alcoves and eddies along the stream reach. Similar results were found on the Salt River, Wyoming [44], and the Fryingpan River, Colorado [45], where ponds served as point sources of infectious triactinomyxons (TAMs; spores emitted into the water column by the invertebrate host). Measures of juvenile rainbow trout infection and recruitment rates, as well as densities of infected T. tubifex, indicate that the complete infection of all salmonids in a system requires very few T. tubifex and TAMs. Outbreaks of whirling disease greatly reduce salmonid recruitment in cold, oligotrophic, sediment-poor, high-gradient streams [43], especially during times of decreased stream flow [46]. Several techniques have been used to decrease the concentrations of TAMs in pond effluent, the most efficient being sand filtration [45]. Because whirling disease poses a severe threat to wild salmonid populations across the western United States, more information is needed on the role of on- and off-stream ponds in the spread and severity of whirling disease outbreaks.
Increases in the physical heterogeneity of stream systems can cause variation in the colonization, persistence and dispersal of both vertebrates and invertebrates. For example, Schlosser [4] concluded that discharge-mediated interactions between beaver ponds and streams benefited creek chub populations (Semotilus atromaculatus) by controlling fish dispersal, fish diversity, fish predation, and macroinvertebrate community composition (e.g., densities in riffles vs. pools during high and low discharge). Knutson et al. [47] came to a similar conclusion, relating the persistence of amphibian populations to the increase in breeding habitat as a result of pond construction. In contrast, Olsson et al. [48] found that the physical habitat alteration associated with constructed ponds caused increased mortality of migrating brown trout smolts (Salmo trutta) in Sweden. They attributed this increase in smolt mortality to the shift to standing-water habitat, which changed downstream migration speed and exposed smolts to increased levels of predation. In this case, anthropogenic changes to the physical heterogeneity of streams had a negative effect on recruitment because the specific population of brown trout had not evolved to survive in standing-water habitats. Conclusions and Future Challenges Constructed ponds have situation-specific effects on adjacent streams; a single pond may alter physicochemical and biotic conditions at scales ranging from local habitats to entire stream segments. In lower-order drainages with high pond density, the cumulative effects of on-stream and/or off-stream ponds may alter entire watersheds. Decisions on whether to construct ponds along streams and how to manage existing ponds must be evaluated based on the broader goals of human communities within the watershed, as well as the management and conservation objectives of local, state, and federal governmental entities. There is a mismatch between the proliferation of constructed ponds and the current state of empirical knowledge regarding the myriad possible consequences for stream ecosystems. Small-scale and short-term studies are inadequate because they fail to accurately identify threats and critical scales of management [49]. Pond construction and management must delicately balance recreational and economic needs while minimizing alterations or disturbances to adjacent stream ecosystems. Before effective policy for pond construction can be implemented, it is imperative that we answer the following questions: 1) What types of streams and geographical regions are most susceptible to the negative effects of constructed ponds? 2) Can pond design and management protocols overcome negative effects on sensitive streams or regions? What designs and protocols are most effective? 3) How many ponds along a stream are necessary to achieve restoration or mitigation goals while minimizing negative or cumulative effects on in-stream habitats? Although the design and placement of a pond is watershed-specific, a well-designed pond should achieve two objectives: 1) increase the biodiversity of a watershed by providing increased aquatic and riparian habitat complexity and valuable ecological services (e.g., sediment trapping, nutrient retention) and 2) minimize negative effects on natural stream processes and habitat.
Many questions about pond-stream interactions remain unanswered, illustrating our limited understanding of interactions among aquatic ecosystems and of how these interactions should influence management decisions. Although some researchers suggest that the construction of ponds should be encouraged because they can facilitate the persistence of rare aquatic species and increase the biodiversity of a watershed [8], pond construction can also pose serious threats to the overall ecological health of a watershed. Processes resulting from the interaction of constructed ponds and nearby streams are varied and complex. Our understanding of these interactions would benefit from quantification of downstream changes in the physical, chemical and biological variables discussed in this review within a Before-After-Control-Impact study framework [50]. Furthermore, the recognition that material processing within pond and stream habitats is spatially dependent will provide the means to place constructed ponds in the context of entire landscapes [51]. Connecting the ecological processes of constructed ponds to stream habitats is essential to pond management practices that maintain the biological integrity of watersheds. Figure 1. Multiple scales of physical and chemical consequences of constructing on-stream (hatched) and off-stream (black) ponds for small stream habitats.
Structure of Wheat and Corn Farming: A Survey on Amik Plain Farmers Received: 02/09/2019 Accepted: 10/08/2020 This study was conducted to identify the current problems of producers of cereal crops such as wheat and corn, and to suggest solutions for overcoming those problems, in the Amik Plain (Antakya, Kirikhan, Kumlu and Reyhanli districts) of Hatay province, Turkey. In this study, the primary data were obtained through a face-to-face survey of 100 cereal producers in the Amik Plain. Categorical variables are given as frequencies and percentage distributions, and numerical variables as means. The survey assessed the education level of the grain producers, the number of individuals in the farm household, record keeping, social security, the growing areas of the crops (wheat and corn), yield, sowing and harvest date ranges, use of owned and leased land, cultural practices and grain production. The data were analysed using simple statistical methods (frequencies, averages and percentage distributions). The results indicated that about 50% of the cereal producers had a high school or university education. Producers had an average of 12.3 ha of wheat and 15 ha of corn cultivation area. Moreover, the cultural practices of cereal production are well established in the study area. The main problem reported by grain producers is low cereal prices. In addition, the Turkish Grain Board (TMO) does not purchase the crop at harvest time. High production costs and the irrigation of corn are further problems that cereal producers face. Introduction Cereal crops such as rice (Oryza sativa), wheat (Triticum aestivum), corn (Zea mays), barley (Hordeum vulgare), sorghum (Sorghum vulgare), oat (Avena sativa), pearl millet (Panicum miliaceum), and foxtail millet (Setaria italica) are grown on about half (720.6 million hectares) of the world's 1.4 billion hectares of cultivated area (FAO, 2014). Wheat and corn are produced as annual cereals under a wide range of climatic and soil conditions around the world (Barutcular et al., 2017; Hossain et al., 2018; Yildirim et al., 2018). Wheat covers the largest cultivated area among the cereals, taking first place with 220.1 million hectares of planting area. Corn is the world's largest cereal crop, playing a significant role in guaranteeing global food security (Abdelaal et al., 2017). Corn holds the second position after wheat, covering 183 million hectares of cultivated area, followed by rice with 163 million hectares. Globally, rice is one of the major cereals grown all over the world (Anis et al., 2019). The production figures for maize, rice and wheat are about 1 billion tons, 741 million tons and 729 million tons, respectively (FAO, 2016). In terms of planting area in Turkey, 7.7 million ha of land are used for wheat production (21.5 million tons), which takes first place, followed by barley and corn covering 2.72 million ha and 639,084 ha, respectively. The production figures for wheat, barley and maize on those areas are 21.5, 8.0 and 5.9 million tons, respectively (TUİK, 2018). The average yields of wheat and maize in Hatay, Turkey, are 391 kg/da and 1,087 kg/da, respectively (TUIK, 2017), while the national average is 907.5 kg/da and the average yield in the world is 566.4 kg/da (FAO, 2014). Cereals play a central role in feeding humans and animals; in a previous study, Kırtok (1997) concluded that cereal crops play a significant role in food and feed.
People living in developing countries often cannot access sufficient protein-rich food sources, which causes malnutrition in the populations of those countries. In this case, people tend to rely on vegetable proteins such as wheat and corn. Various studies have previously been carried out on agricultural cultivation and the economic situation in the region (Koc et al., 2011; Parlakay et al., 2015a; Parlakay et al., 2015b; Yılmaz et al., 2015). In the future, rising temperatures will cause large water losses from soils and plants, and water shortage will result in a significant decrease in crop productivity. Therefore, the main aim of this study was to identify problems in wheat and maize production and to propose solutions to these problems for the Amik Plain of Hatay province, Turkey. Material The main material of this study consists of questionnaire data collected from agricultural enterprises engaged in cereal cultivation in the research area. The data were collected through face-to-face interviews, and the survey was conducted in spring 2015. Data from FAO, TUIK and the Ministry of Food, Agriculture and Livestock were used as secondary data. Information from the Ministry of Food, Agriculture and Livestock for Hatay province (Kırıkhan, Antakya and Reyhanlı), as well as national and international sources, was taken into consideration for the research. Methods For the determination of the sample, villages that represent the research area in terms of agricultural techniques and natural conditions, and where cereal production is common, were selected by non-random sampling, and the survey within these villages was carried out with randomly selected enterprises. The data obtained by questionnaire from Amik Plain grain (corn and wheat) producers were analysed with the SPSS 22.0 program. Numerical variables are given as averages and categorical variables as percentage distributions. The sample size was planned to be at least 100 farmers, and the questionnaire survey was conducted taking into account the size of the agricultural area in the study location. In a previous study on maize agriculture in the same region, the sample size was 140 (Gözübenli et al., 2000), and in another study it was 67 (Öztekin, 2017). Indeed, a sample of 100 well-selected enterprises operating in a region with similar characteristics is considered sufficient in agricultural management surveys (Yang, 1964). Data of the Hatay Province The surface area of Hatay province is 5,827 km² excluding lakes; 46.1% of the provincial lands consist of mountains, 33.5% of plains and 20.4% of plateaus. The Amik Plain covers 105,388 ha, which constitutes 73% of the total plain area of Hatay province. The main crops grown in the Amik Plain are wheat, corn, cotton, potato, onion, sunflower and soybean (Anonymous, 2016). Kaya and Budak (2018) revealed that cotton, wheat and corn have an important place in the Amik Plain; Kaya (2017) also stated that melon, onion, carrot, olive, apricot, spinach, tomato, vetch, potato, nectarine and parsley form part of the production pattern of the region. Regarding the product-based distribution of cultivation area among the surveyed producers, the average wheat cultivation area was 123 da, ranging from 10 to 500 da, and the average corn planting area was 150 da, ranging from 15 to 600 da.
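Since areas are reported in decares (da) here but in hectares in the abstract, a brief conversion may help; it relies only on the definition of the units:

1 da = 1,000 m² = 0.1 ha, so 123 da × 0.1 = 12.3 ha and 150 da × 0.1 = 15 ha,

which is how the average wheat and corn areas quoted in the abstract follow from the figures above. Likewise, yields given in kg/da convert to kg/ha by multiplying by 10 (e.g., 391 kg/da = 3,910 kg/ha).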
The average planting area was 62 da, ranging between 20 and 162 da. The average fruit planting area was 41 da, ranging between 10 and 95 da, and the average cropping area of other crops was 135 da, ranging between 3 and 700 da. It was determined that the majority of producers (60%) were members of an agricultural organization, about 70% of them use credit, and all receive product support (Table 2). The average cultivation area of the 100 producers growing corn in the Amik Plain is 185 da, the yield is 1,332 kg/da and the price was determined as 0.71 TL/kg (Kaya and Budak, 2018). The rotation distribution of the grain farmers is given in Table 1. In total, 44% of the producers stated that they preferred a wheat-cotton-corn-vegetable rotation, 19% wheat-corn, 6% only corn, 1% only wheat and 1% wheat-forage crops. Producers usually prefer bread wheat varieties, and Sagittario, Basribey, Masaccio, Adana 99, Victoria, Stendal and Golia are the most popular soft wheat varieties. They also stated that the local durum wheat Karakılçık is generally grown for bulgur, a traditional food. Sowing Time The dates of the wheat planting period are presented in Table 3. According to the table, 63% of the producers sow in the period 1-15 November, 33% on 15-30 November, and 4% on 1-30 December. It was determined that the sowing of maize was largely completed before May: 38% of producers reported sowing corn during 10-20 March, 24% during 1-10 March, 7% during 20-30 March, 17% during 1-10 April and 14% during 10-20 April (Table 3). Gözübenli et al. (2010) stated that the most suitable sowing time for maize under Hatay ecological conditions is May. This indicates that the sowing time of corn as the main crop in the Amik Plain should be reconsidered in the coming years. Irrigation Irrigation systems such as drip, furrow and sprinkler irrigation are used in the wheat and maize production area. In the surveyed area, wheat was irrigated once during the production period, generally in the reproductive stage. While 57% of the wheat producers in the region do not irrigate their wheat fields, 43% do. Producers obtain irrigation water from their private wells (56%), the dam (27%), the irrigation cooperative (14%) and neighbours (3%) (Table 4). Kaya (2017) reported that 53% of the Amik Plain producers use wells, 23% the river, 14% the dam, and 10% both wells and the dam for irrigation. In Hatay, 206,553 hectares of the 275,578 hectares of agricultural land are suitable for irrigation; however, only 176,515 hectares of this suitable land can actually be irrigated (Anonymous, 2014). Demirtaş et al. (2017) stated that 65% of producers are satisfied with irrigation in Hatay province. Fertilization The results showed that growers use chemical fertilizer (N, P, K) during the production period. Soil analysis was performed by 79% of corn producers, while 21% did not analyse their soil. Before sowing, producers prefer 6 kg/da of 15:15:15 (NPK); after emergence they apply 12 kg/da of ammonium nitrate at the V2-V3 stage and, lastly, 5 kg/da of N as ammonium sulphate at the V6-V7 stage. Nitrogen fertilizer (250.48 kg/ha) and P fertilizer (28.32 kg/ha) were applied during June, July and August. Similarly, 79% of wheat producers performed soil analysis, while 21% did not.
Producers applied about 6 kg/da of phosphorus, potassium and nitrogen before sowing, and then made two further applications after tillering: first 12 kg/da of N as ammonium nitrate and then 5 kg/da of N as ammonium sulphate. Weed Control Almost all wheat producers carried out weed control (Table 6). For maize, 91% of farmers used herbicides: 4% applied them before sowing, 86% after sowing and 10% after sowing and emergence. Around 87% of the wheat producers controlled insects; of these, 93% targeted aphids, 3% cereal bugs, and 4% both aphids and cereal bugs. Of the wheat producers, 97% controlled diseases: 24% fought septoria, 19% rust, 42% both septoria and rust, 1% riding and rust, and 14% septoria, rust and riding. It was determined that the vast majority of farmers controlled diseases because of the associated product losses. Grain Yield Of the wheat producers, 47% reported an average yield between 500-600 kg/da, 40% between 600-700 kg/da, 5% between 400-500 kg/da, and 3% between 300-400 kg/da (Table 9). It is noted that the average wheat yield in the Amik Plain is higher than the national and world averages. Main-crop corn grain yields of 1000-1200 kg/da, 1200-1400 kg/da and above 1400 kg/da were recorded by 33%, 35% and 28% of the farmers, respectively (Table 9). Second-crop maize grain yields of 900-1000 kg/da and 700-800 kg/da were recorded by 60% and 40% of the farmers, respectively. Marketing Management Of the wheat producers, 82% sell their product after harvest, while the other 18% store it. In Turkey, maize policy was introduced in 1941, when the Turkish Grain Board (TMO) was assigned this task. Among wheat producers in Hatay, 69% of the product was sold to merchants, 14% to the TMO, 16% to others and 1% to a feed factory (Table 10). Among the maize producers, 24% of the product was sold to merchants, 23% to the TMO, 13% to both merchants and the TMO, and the rest to starch factories. In wheat production, 51% of producers cited rotation, 34% ease of cultivation, and 32% both rotation and ease of cultivation as reasons for growing the crop. In maize production, 56.9% of producers cited ease of farming, 3.1% high profit, 29.2% rotation, 4.6% following neighbouring farmers, and 6.2% all of these reasons. When asked about their expectations for the future, cereal producers mentioned increases in product prices, access to irrigation water, improvement in the prices of electricity and irrigation water, reduction of input costs, and improvement of product sales prices (Table 11). Hatay province has an important role in agriculture, with good soil and climatic conditions. Many different crops are grown in the region, and wheat and maize are the product group with the largest sowing area. All wheat producers harvest with combine harvesters. Conclusion The farm family consists of five people on average. About 50% of the farmers have a high school or university education. Owing to the high education level and large field sizes, grain yields are higher than the averages for Turkey and the world. However, only 10% of the farmers keep livestock.
Generally, farmers use conventional methods for cereal production. More than 90% of producers have social security and are in the active labour force. On average, producers use 210 decares of field land and 61 decares for garden crops, and those with both field and garden crops have 190 decares. All of the area can be irrigated, but in recent years irrigation has not been possible for maize, especially second-crop maize. In some years, due to heavy rainfall, wheat fields are left under water. Farmers in Hatay produce wheat on an average of 123 decares; the maize area is 150 decares, the fruit area is 41 decares, and other crops cover 135 da. Farmers prefer soft wheats, namely Sagittario, Basribey, Masaccio, Adana 99, Victoria, Stendal and Golia. Besides these, Karakılçık, a durum wheat variety, is also grown. More than 60% of producers sow wheat between 1 and 15 November. Wheat harvest takes place on 20-31 May for 65% of producers and in June for 35%. Almost 60% of the producers do not irrigate wheat, while 43% apply one or two irrigations depending on drought conditions. Around 56% of the producers take irrigation water from their own wells; production costs are increased by the high cost of electricity for pumping irrigation water from deep wells. In order to reduce the cost of cereal production, the Reyhanlı Dam, which is at the construction stage, should be completed as soon as possible. The problems that wheat growers face in growing and trading wheat are the product price, the ineffective announcement of the product price, the low profit margin and storage problems with the associated percentage of waste. Of the maize farmers, 76% prefer the main crop, generally sowing in March, with harvest between 15 and 30 August. In 60% of cases, the second-crop harvest takes place between 15 and 30 October. The main crop is irrigated 5-8 times, and for 70% of producers the second crop is irrigated 7-9 times. Weed control is good: about 91% of farmers use a herbicide, and all producers use insecticides to control insects. Grain yields are higher than those of Turkey and the world. Almost all producers sell their products.
Clinical characteristics of keratoconus patients at the University of KwaZulu-Natal eye clinic Background: Patients with keratoconus, which is a common corneal ectasia, often present to specialised clinics for management. Understanding the clinical characteristics of keratoconus patients can help improve knowledge of the presentation and management of this corneal ectasia and predict the needs of the clinic providing care for affected individuals. Aim: To describe the clinical characteristics of keratoconus patients attending the University of KwaZulu-Natal (UKZN) eye clinic. Setting: University of KwaZulu-Natal eye clinic. Methods: The study used a retrospective research design by reviewing the clinical record cards of patients attending the UKZN contact lens eye clinic over a 4-year period (January 2014 to December 2017). Data related to age, clinical characteristics and method of management of the keratoconus patients were extracted and analysed using descriptive statistics. Results: Just less than one-quarter of all patients (n = 1210) attending the UKZN contact lens eye clinic had keratoconus, which was most often bilateral. The mean age at presentation was 25.2 ± 9.6 years with 74% of the sample being younger than 30 years. More than 90% (n = 419) of the sample reported refractive reasons as the primary reason for presenting to the clinic. The majority of the sample had severe keratoconus (n = 257) and rigid contact lenses were most commonly used for management of keratoconus patients. Conclusion: Keratoconus presents at an early age with a more severe grade and it is most commonly managed using rigid contact lenses. These findings should be considered for keratoconus screening, diagnosis and treatment programmes in KwaZulu-Natal. Introduction Keratoconus is a common progressive corneal ectasia characterised by corneal thinning and protrusion that leads to the development of myopia and irregular astigmatism. [1][2][3] Consequently, patients with keratoconus frequently complain of blurred and distorted vision, halos and/or frequent changes in their spectacle prescriptions. 2,4,5 In the early stage of keratoconus, spectacles and sometimes soft contact lenses are often used to correct the resulting refractive error. 6,7 However, as this corneal ectasia progresses, the irregular astigmatism that results is better managed with the use of rigid (corneal and scleral) contact lenses as they serve to provide a uniform anterior ocular surface and consequently improve vision. 8 Surgery is also considered in the management of keratoconus and this includes collagen cross-linking and intra-stromal ring segment implants. Penetrating keratoplasty is advised for the advanced stage of keratoconus when there is contact lens intolerance, poor fitting of contact lenses, sub-optimal visual outcomes and/or frequent displacement of the rigid contact lenses. 2,4 Data related to the incidence and prevalence estimates of keratoconus vary in different populations worldwide, and this may be attributed to the varying diagnostic criteria, definitions and methods of investigation that have been used in different studies. 3 An early study reported the incidence of keratoconus in the general population to be between 50 and 230 per 100 000 individuals with a prevalence of 54.5 per 100 000 individuals. 1 Even though the exact aetiology of keratoconus is unknown, certain factors have been associated with this corneal ectasia. Some studies have reported a link between excessive eye rubbing and atopy with keratoconus.
3,9,10 It is also suggested that the incidence of keratoconus is higher in countries that have hot, dry and dusty climates. 8 Despite being largely an isolated condition, keratoconus may be associated with certain systemic (Down's syndrome and Marfan's syndrome) and ocular (retinitis pigmentosa, aniridia and vernal keratoconjunctivitis) conditions. 2,11 Although an early study 1 reported that keratoconus affects all ethnic groups similarly, some researchers have shown differences between Asian and Caucasian individuals 12 as well as between different Asian subpopulations, 13 suggesting that genetic and/or environmental factors may have an influence on the development and progression of keratoconus. The clinical characteristics of patients with keratoconus have been reported from several non-African countries. [9][10][11]14,15 In contrast, there are only a few studies that have reported on keratoconus patients in Africa. 16,17 Overall, these studies have reported on characteristics such as age, gender, corneal power, severity of keratoconus, visual acuity (VA) and methods of management for patients with keratoconus. [9][10][11][14][15][16][17] The purpose of this study is to provide an analysis of a large cohort of keratoconus patients presenting at the University of KwaZulu-Natal (UKZN) contact lens eye clinic. This clinic serves as one of the referral clinics that assesses and manages patients with keratoconus in KwaZulu-Natal. This study is important as it reports on the clinical characteristics of keratoconus patients in KwaZulu-Natal. This study thus aids in better understanding the clinical characteristics of keratoconus patients to improve knowledge of the presentation, diagnosis and management of this corneal ectasia at the UKZN contact lens clinic. Methodology Gatekeeper approval was also obtained from the Academic Leader at the UKZN Discipline of Optometry to access the record cards of all patients presenting at the eye clinic between January 2014 and December 2017. The study involved a retrospective review of the record cards of all patients presenting for ocular examination at the UKZN contact lens eye clinic during the study period. The study population included keratoconus patients who were presenting for assessment and fitting of contact lenses and/or follow-up. These patients were either previously diagnosed with keratoconus or newly diagnosed with keratoconus during the study period. The ocular examination at the UKZN contact lens eye clinic includes case history, VA, corneal power measurements, anterior ocular health examination using slit lamp, spectacle refraction and contact lens evaluation (fitting and assessment). The keratoconus diagnosis was made based on the results of the ocular examination and included clinical signs associated with this corneal ectasia as determined using slit lamp examination such as corneal thinning, Munson's sign, Fleischer's ring, Vogt's striae, Rizzuti's sign and/or corneal scarring.
2,6 Other clinical signs associated with the irregular corneal surface in keratoconus included the presence of scissors reflex during retinoscopy as well as distortion of the mires during keratometry and/or corneal topography. 2,4,6 The study entailed reviewing the clinical record cards of all keratoconus patients where data related to the study variables were collected and analysed. These variables included age at presentation to the eye clinic, the main reason for the visit, the presence of allergies, the best corrected spectacle distance VA with a Snellen acuity chart as well as the steepest and flattest corneal power measurements determined using either keratometry or corneal topography. At the UKZN eye clinic, corneal power is measured along the two principal meridians using either keratometry with a one position keratometer and/ or corneal topography with the Oculus Keratograph. Data related to the final method of management, and if keratoconus was present in only one eye (unilateral) or both eyes (bilateral), were also recorded. Keratoconus patients with ocular diseases other than keratoconus and a previous history of ocular surgery and/or trauma were excluded from the study. Moreover, keratoconus patients with any missing data in their clinical record cards, in terms of age, main reason for visit, history of allergy, surgery and/or trauma, corneal power measurements, slit lamp examination and final method of management, were also excluded from the study. There are different classification systems that may be used to grade the severity of keratoconus such as the Amsler-Krumeich classification, ABCD classification and the Collaborative Longitudinal Evaluation of Keratoconus (CLEK) study classification. In the present study, the CLEK classification system was used to grade the severity of keratoconus in each eye. According to this classification, which is based on the steepest corneal power measurement, keratoconus may be graded as either mild (corneal power less than 45 D), moderate (corneal power between 45 D and 52 D) or severe (corneal power more than 52 D). The keratoconus patients' eyes were also grouped into the different levels of visual impairment, depending on the best corrected spectacle distance VA, as defined by the World Health Organization. 18 Mild or no visual impairment was defined as VA of equal to or better than 6/18, moderate visual impairment was defined as VA worse than 6/18 but better than 6/60, severe visual impairment was defined as VA worse than 6/60 but better than 3/60 and blindness that was defined as VA worse than 3/60 but better than no light perception. 18 Data were captured and analysed using the Statistical Package for Social Sciences (SPSS) version 25. Four researchers independently screened all the contact lens clinical record cards to determine whether they satisfied the criteria for the study. Only one researcher was responsible for data capturing for standardisation. To ensure accuracy of data capturing, another researcher double-checked all the data entries and any inconsistencies were resolved by cross-checking against the patient's clinical record card. The analysis comprised descriptive statistics, including mean values, standard deviations, ranges, medians, frequencies and percentages. The Shapiro-Wilk's test and histogram were used to evaluate normality of the data related to age. A probability value of less than 0.05 (p < 0.05) was considered as statistically significant. 
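The grading and visual-impairment rules described above reduce to simple threshold logic. The sketch below is only an illustration of those rules and is not part of the study's analysis, which was performed in SPSS; the function names are hypothetical and visual acuities are assumed to be Snellen fractions expressed as decimals.

```python
def clek_grade(steepest_k: float) -> str:
    """Grade keratoconus severity from the steepest corneal power (dioptres),
    following the CLEK bands summarised above."""
    if steepest_k < 45.0:
        return "mild"
    if steepest_k <= 52.0:
        return "moderate"
    return "severe"


def who_category(va: float) -> str:
    """Classify best corrected distance VA (decimal Snellen, e.g. 6/18 = 0.33)
    into the WHO visual-impairment categories used in the study."""
    if va >= 6 / 18:
        return "no or mild visual impairment"
    if va >= 6 / 60:
        return "moderate visual impairment"
    if va >= 3 / 60:
        return "severe visual impairment"
    return "blindness"


# Example: an eye with a steepest keratometry reading of 54.2 D and VA of 6/12.
print(clek_grade(54.2))      # -> severe
print(who_category(6 / 12))  # -> no or mild visual impairment
```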
Ethical considerations Ethical clearance to conduct the study was obtained from the Biomedical Research and Ethics Committee at the UKZN (clearance number: BE240/18). Results A total of 1210 patients were examined at the UKZN contact lens eye clinic during the study period and of these, 293 (24.2%) patients had keratoconus. In this population, 209 (71.3%) patients presented with bilateral keratoconus, while 84 (28.7%) patients presented with unilateral keratoconus. This implies that the population comprised 502 eyes that were identified with keratoconus. However, 48 eyes of 30 patients were excluded as they failed to meet one or more of the study criteria. Consequently, the study sample consisted of 454 eyes of 263 patients with keratoconus. In the sample, 191 (72.6%) and 72 (27.4%) patients were found to have bilateral and unilateral keratoconus respectively. The age of the keratoconus patients ranged from 10 to 64 years with a mean and median of 25.2 ± 9.6 and 23 years respectively. Approximately 74% of the sample was younger than 30 years, with 137 eyes from patients who were younger than 20 years and 198 eyes from patients who were aged between 20 and 30 years. Only a small proportion of 39 eyes (8.6%) were from patients who were older than 40 years. The age data showed a non-Gaussian distribution (p < 0.05) and were positively skewed towards younger ages (Figure 1). The majority of the sample (n = 419, 92.3%) reported refractive reasons, which included blurred and/or distorted vision, frequent change in refraction and/or progression of refractive error, as the main reason for presenting at the UKZN contact lens eye clinic. Other reasons reported for presenting at the UKZN contact lens eye clinic included cosmetic (n = 2) and therapeutic (n = 4) reasons, and some patients (n = 29) reported two or more of the above reasons (refractive, cosmetic and/or therapeutic). There was a positive history of allergies for 189 eyes (41.6%) in the sample. After spectacle refraction, 284 eyes (91.6%) had no or mild visual impairment, 24 eyes (7.7%) had moderate visual impairment and one eye each had severe visual impairment and blindness. It was not possible to determine the spectacle refraction and resulting best corrected VA in 144 eyes (18 eyes with mild keratoconus, 51 eyes with moderate keratoconus and 75 eyes with severe keratoconus). In terms of the CLEK classification of keratoconus, 41 eyes (9.0%) had mild keratoconus, 156 (34.4%) had moderate keratoconus and 257 (56.6%) had severe keratoconus (Figure 2). Overall, the mean of the corneal power measurements was 54.16 D. The mean values and standard deviations for age and corneal variables in the three grades of keratoconus are presented in Table 1. Patients with severe keratoconus were slightly younger than those with mild and moderate keratoconus. In spite of this finding regarding the mean age of keratoconus patients, the median age was similar in these three groups (23 years, 24 years and 22 years in the mild, moderate and severe groups respectively). As the grade of keratoconus increased, the mean corneal power along the two principal meridians and the amount of corneal astigmatism also increased (Table 1). In terms of management, approximately 95% of the eyes in the sample were managed with some form of optical correction, with the majority (n = 418) being fitted with different types of rigid contact lenses (in terms of lens materials and designs) and only 13 eyes from nine patients receiving spectacles.
Some patients received no treatment as the contact lens evaluation revealed that the eyes of these patients needed referral for surgery (n = 7), dry eye therapy (n = 11) or further investigation for other ocular pathologies (n = 5). Discussion Understanding the clinical characteristics of a disease is necessary to predict the profile of affected individuals and the needs of any clinic providing care for these individuals. 14 Even though keratoconus has been well documented in large-scale studies, 15 including in countries such as Saudi Arabia, 14 Singapore 9 and India, 8 few studies have been conducted in Africa 16,17 and only limited studies in South Africa specifically. Abdu et al. 16 argued that knowledge of the clinical characteristics and management of keratoconus patients is important as this can inform interventions that will aid in improving the quality of life of affected individuals. This study reports on the clinical characteristics and management of keratoconus patients attending the UKZN eye clinic in KwaZulu-Natal, South Africa. Over the 4-year study period, almost 25% of all patients attending the UKZN contact lens eye clinic had keratoconus. It is well known that keratoconus is a bilateral yet asymmetrical ocular condition where the fellow eye is at the greatest risk of progressing towards keratoconus within the first 5-6 years after the onset of this corneal ectasia. 2,5 Studies have shown that the prevalence of bilateral keratoconus varies between 56.0% and 98.3%. [9][10][11]15,17 In the present study, more than 70% of the patients had bilateral keratoconus and this finding is in agreement with the results of previous studies. 10,16,17 For example, in the two studies conducted in Africa, Abdu et al. 16 found that 77.9% of their patients had bilateral keratoconus, while Rashid et al. 17 reported that 98.3% of their sample had bilateral keratoconus. Studies that have reported on keratoconus patients attending hospitals and clinics in countries outside Africa have also reported the same trend whereby the majority of patients have bilateral keratoconus. Naderan et al. 10 showed that 88.3% of the keratoconus patients attending a tertiary eye-care centre over a 2-year period had bilateral keratoconus. Ameerh et al. 11 reviewed the record cards of patients attending a university hospital and showed that more than 90% of their sample presented with bilateral keratoconus. An early study by Lim and Vogt 4 asserted that keratoconus rarely has a unilateral presentation, and this has been shown in other studies whereby low percentage values (4.3%-22.1%) of unilateral keratoconus have been reported in clinic-based and/or hospital-based studies. 9,10,16 In the present study, just less than one-quarter of the sample (24.4%) had unilateral keratoconus, which is consistent with findings in the literature. 9,10,16 The mean age of patients in this study was 25.2 ± 9.6 years and this is slightly older than the mean age of keratoconus samples reported in studies conducted in African countries, including the Sudan 16 (21.4 ± 8.2 years) and Kenya 17 (21 ± 11.1 years). The mean age in this study is more comparable with that reported for keratoconus patients in Scotland 5 (24.1 ± 8.9 years), Singapore 9 (25.4 years) and Malaysia 13 (26.1 ± 7.4 years). In a recent study focused on keratoconus patients in South Africa, Chetty and Rubin 19 reported a mean age of 24.0 ± 8.5 years, which compares well with the mean age reported in the present study.
In spite of these slight differences in mean age, it can be seen that patients with keratoconus usually present at clinics and/or hospitals for care mostly during their second decade of life. In the present study, the distribution of the age showed a non-Gaussian distribution and was skewed towards younger age as has been shown for patients with keratoconus attending a specialist contact lens clinic in the study conducted by Rashid et al. 17 This is an interesting finding, as despite differences in the mean age between the present study and that by Rashid et al. 17 (reported above), the distribution of data related to age was similar. In the present study, approximately three-quarters of the sample were younger than 30 years, suggesting that the patients with keratoconus who attended the UKZN contact lens eye clinic are generally adolescents and young adults. This is consistent with reports in the literature regarding the age of presentation being puberty and during the second decade of life for keratoconus patients. 8,9 Mahadevan et al. 8 reported that 90% of their sample of keratoconus patients, who attended a tertiary eye-care centre in South India, were aged between 10 and 30 years. Khor et al. 9 reported that 61% of their keratoconus patients in a hospital-based study in Singapore were aged between 21 and 30 years. It should be noted that the mean age being described in this and the two studies conducted in South India and Singapore relates to the age of the sample when they attended the contact lens eye clinic and not the age corresponding to the onset of keratoconus. Despite this, one can infer that the age of onset of keratoconus was most likely during the teenage years or very early second decade of life for most of the patients in these samples which also corroborates with the literature related to age and onset of keratoconus. [1][2][3] In the present study, only a small proportion of the sample (8.6%) was older than 40 years and this is probably related to keratoconus usually stabilising in the third or fourth decade of life. 2,4 This finding is not unique for this study as previous studies have also shown that relatively few keratoconus patients are diagnosed and/or present to clinics and/or hospitals after the age of 40 years. 5,11 It is for this reason that Mohd-Ali et al. 13 asserted that having more severe grades of keratoconus at an older age is less likely than during younger age. The sample showed increased severity of keratoconus as most of the patients were classified with severe keratoconus (56.6%), followed by moderate keratoconus (34.4%) and mild keratoconus (9.0%). This finding, in which the largest number/percentage of patients presented with severe keratoconus and the number/percentage of patients decreased as the grade of keratoconus decreased, is consistent with the trend seen in other hospital-based studies reporting on keratoconus patients. 11,19,20 This finding is not surprising and may be explained by the study sample and setting comprising keratoconus patients presenting at a university clinic that serves as a referral clinic in KwaZulu-Natal. Consequently, it is likely that keratoconus patients who present at this clinic may be individuals who are no longer being fully managed by eye-care personnel (optometrists and ophthalmologists) and are therefore referred for further management. This is similar to the explanation provided by Ameerh et al. 11 that keratoconus patients are often referred only when spectacles fail to provide reasonable vision. 
Furthermore, contact lens fitting and assessment in patients with moderate to severe stages of keratoconus are challenging and require specialised equipment and expertise. Thus, it is likely that the equipment and the wider range of contact lenses available at specialised contact lens clinics, such as the UKZN eye clinic, may be more suitable to provide for the needs of patients with more advanced stages of keratoconus. Previous studies have reported the same trend of a higher proportion of patients with severe keratoconus attending specialised contact lens clinics and hospitals. 6,8,17 Saini et al., 6 who studied keratoconus patients attending a tertiary eye-care facility and used the same keratoconus classification system as the present study, reported that 67.2% and 32.8% of their patients' eyes had severe and moderate keratoconus respectively. Mahadevan et al. 8 also found that the largest proportion of their keratoconus patients from a tertiary eye-care centre (42%) had advanced keratoconus, defined as a corneal curvature value greater than 52 D. Taken together, the findings of this and previous studies are probably biased, as they contain an over-representation of patients with severe keratoconus because the data were extracted from specialised contact lens clinics and hospitals to which these specific patients are most likely referred.

Several clinic-based and hospital-based studies have reported that contact lenses (soft and rigid) have been used as the primary method of correction for their samples of keratoconus patients. 4,9,15,16,17,21 This may be the case as contact lenses allow for good visual outcomes in keratoconus patients and delay the need for surgery such as penetrating keratoplasty. 20 In the CLEK study, 74% of the patients were prescribed either soft or rigid contact lenses, while only 16% received spectacles as the primary method of vision correction. 21 A more recent study by Rashid et al. 17 showed that 98% of their sample was managed with spectacles or rigid contact lenses, with only 2% receiving no treatment. Fathima et al. 20 conducted a retrospective analysis of keratoconus patients attending a hospital in India and reported that 79.5% of these individuals were managed with rigid contact lenses. A similar trend was observed in this study, where the most common method of management was rigid contact lenses of different materials and designs. This finding is most likely explained by the majority of the sample having moderate to severe keratoconus, where prescription of spectacles or soft contact lenses may not allow for optimal vision correction because of the irregular corneal surface. 3 Consequently, rigid contact lenses may be preferred as they result in better vision correction by providing a more uniform optical surface for the refraction of light rays. 2,13 In the UKZN contact lens clinic, different materials and designs of rigid contact lenses are used to fit the keratoconus patients using three fitting philosophies (three-point touch, apical clearance and apical bearing). Khor et al. 9 speculated that the reliance on the use of contact lenses is due to the severity of keratoconus that patients present with when they are assessed and managed in specialised contact lens clinics. This is probable as it is well known that moderate to severe stages of keratoconus are difficult to manage with the use of spectacles. 3
Overall, only a few patients were prescribed spectacles as the method of management, which is in agreement with the findings of the CLEK study. 21 Some of the possible reasons for keratoconus patients opting for spectacles could be that they had early stages of this corneal ectasia and/or could not tolerate rigid contact lenses. 15

The findings related to reasons for reporting to the clinic and history of allergies in this study are consistent with literature reports. More than 90% of the sample reported refractive reasons as the main reason for presenting at the clinic, and this finding is in agreement with the reports from previous studies. 4,8,9,17 In this and previous studies, refractive reasons were defined as poor vision (blur and/or distortion) not adequately corrected by spectacles, progression of refractive error and/or frequent changes in spectacles. 4 The complaint of refractive reasons is most likely explained by a large proportion of the keratoconus patients' eyes (n = 257) being classified with severe keratoconus, which is associated with poorer visual function than mild and/or moderate keratoconus. 20 More than 40% of the sample reported a positive history of allergies, which is consistent with the results of the CLEK study, where approximately 53% of the sample reported positively for allergies. 7 Although there are reports of a possible association between the presence of allergic conditions and keratoconus, 22 the exact mechanism by which these allergic conditions influence the pathogenesis of keratoconus is still unknown. It is for this reason that, rather than the presence of allergic conditions only, the behavioural pattern of excessive eye rubbing in individuals with a history of allergic conditions has been associated with the development and progression of keratoconus. 5,13

The comparison of age and corneal clinical characteristics among the three grades of keratoconus patients revealed interesting findings. In terms of age, the severe keratoconus patients were slightly younger than the patients with moderate and mild keratoconus, which is consistent with the results of a previous study. 6 When describing a possible relation between age and severity of keratoconus, Naderan et al. 10 observed that patients who had an earlier onset of keratoconus presented with more severe grades, and this may be because of the progressive nature of this corneal ectasia. The findings that corneal power in the two principal meridians and the amount of corneal astigmatism increased as the grade of keratoconus increased are not unexpected and are probably attributable to keratoconus being associated with progressive structural changes in terms of corneal thinning and protrusion. 3

Even though an earlier study suggested that keratoconus has no gender predilection, 1 there are contradictory reports in the literature regarding which gender group is more affected by keratoconus. The majority of studies have shown that there are gender differences in the proportion of keratoconus, with more males than females being affected by this corneal ectasia. 8,9,11,15,16,17 However, Saini et al., 6 who reported on keratoconus patients attending a tertiary eye-care facility in North India, found that there were slightly more females than males in their sample. Despite these inconsistent results, it has been shown that there are no gender differences for the mean corneal power along the two principal meridians and the severity of keratoconus. 13
In the present study, it was not possible to report on gender and its influence on keratoconus as the current clinical record cards at the UKZN contact lens eye clinic do not allow for the recording of this demographic characteristic. It should be noted that this limitation of the current recording system has been addressed in the revised computerised clinical record cards that will be implemented at the UKZN eye clinic. Moreover, it was not possible to report on the prevalence and/or incidence of keratoconus in the sample or in different gender groups, as this study only included the record cards of keratoconus patients who presented at the UKZN contact lens clinic for assessment and fitting of contact lenses and/or follow-up.

Limitations of this study include its retrospective design and the fact that the contact lens ocular examinations were performed by different clinicians at the UKZN contact lens eye clinic. As the study site was a university eye clinic, the findings may not necessarily describe the keratoconus patients presenting at private and/or public optometric practices for routine assessment and management. Moreover, the inherent selection bias in this study because of the study site should be considered, as the keratoconus patients included in the study were only those who presented for assessment and management at this specific eye clinic in KwaZulu-Natal. Consequently, it is possible that some keratoconus patients would have been referred for surgery before being referred to the UKZN contact lens clinic and that some patients with mild and/or incipient keratoconus would be under-represented in the sample as they are being managed by other optometric personnel. As a consequence of the study design, it was not possible to determine the incidence, prevalence and age of onset of keratoconus. Despite the sample including a large number of keratoconus patients at the UKZN contact lens eye clinic, it is recommended that future prospective studies be undertaken at other sites within KwaZulu-Natal to better understand the nature, clinical presentation and management of this corneal ectasia.

Conclusion
To the best of the researchers' knowledge, this is the first study to report on the clinical characteristics of keratoconus patients at the UKZN contact lens eye clinic. The results of this study show that keratoconus patients who present at an early age have a more severe grade of this corneal ectasia and are most commonly managed with rigid contact lenses. These findings should be considered for keratoconus screening, diagnosis and treatment programmes in KwaZulu-Natal, as the early detection and management of keratoconus can enhance the quality of life of affected individuals.
2020-02-13T09:05:02.414Z
2020-02-10T00:00:00.000
{ "year": 2020, "sha1": "41435bcb03355105c339260d036fd76fb4f9d75b", "oa_license": "CCBY", "oa_url": "https://avehjournal.org/index.php/aveh/article/download/528/1162", "oa_status": "GOLD", "pdf_src": "Unpaywall", "pdf_hash": "41435bcb03355105c339260d036fd76fb4f9d75b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
196562408
pes2o/s2orc
v3-fos-license
Rapid Isolation and Identification of Pneumonia-Associated Pathogens from Sputum Samples Combining an Innovative Sample Preparation Strategy and Array-Based Detection.

With this study, an innovative and convenient enrichment and detection strategy for eight clinically relevant pneumonia pathogens, namely, Acinetobacter baumannii, Escherichia coli, Haemophilus influenzae, Klebsiella pneumoniae, Moraxella catarrhalis, Pseudomonas aeruginosa, Staphylococcus aureus, and Streptococcus pneumoniae, is introduced. Bacteria were isolated from sputum samples with amine-modified particles exploiting pH-dependent electrostatic interactions between the bacteria and the functionalized particle surface. Following this, an asymmetric polymerase chain reaction as well as a subsequent stringent array-based hybridization with specific complementary capture probes were performed. Finally, results were visualized by an enzyme-induced silver nanoparticle deposition, providing stable endpoint signals and consequently a straightforward readout. The assay was optimized using samples of artificial sputum spiked with different strains of the abovementioned bacterial species. Furthermore, actual patient sputum samples spiked with S. pneumoniae were successfully analyzed. The presented approach offers great potential for meeting the urgent need for a fast, specific, and reliable isolation and identification platform for important pneumonia pathogens, covering the complete process chain from sample preparation up to array-based detection within only 4 h.

■ INTRODUCTION
According to the World Health Organization (WHO), pneumonia is the most common infectious disease worldwide. It is associated with very high morbidity and mortality, especially among patients with a compromised immune system and small children. 1 In 2015, pneumonia killed 920,136 children under the age of five, accounting for 16% of all deaths of children under five years. 2 Pneumonia can be caused by bacteria, viruses, or fungi. 3 Most often, bacterial pneumonia is induced by an infection with Streptococcus pneumoniae. 4 Currently, the gold standard for diagnosing this disease is the microbiological cultivation of the pathogens along with an X-ray examination of the lung and a blood cell count. 5,6 Unfortunately, this approach is very time-consuming, and not all pathogens can be successfully cultivated. Accordingly, for therapy, a broad-spectrum antibiotic is frequently administered before the result of the cultivation is even known. This unspecific and random use of antibiotics is highly problematic regarding the increase in drug-resistant bacteria. A delayed initiation of the appropriate therapy, however, is also not ideal, as an effective treatment of pneumonia requires early intervention. 7 At present, only one-third of children with bacterial pneumonia receive the right antibiotics. 8 This illustrates that there is an urgent unmet medical need for fast and highly specific assays for diagnosing pneumonia. In principle, this need has already been recognized, and efforts were made to develop alternatives to a culture-based diagnosis. Generally, assays based on the polymerase chain reaction (PCR) have great potential for detecting pathogens causing pneumonia because of the speed and high specificity of this method. Kim et al. developed a multiplex PCR assay capable of differentiating between virulent S. pneumoniae and the ubiquitous normal flora Streptococcus mitis and Streptococcus oralis in respiratory samples. 9
Li et al. combined real-time quantitative PCR (qPCR) with an immunoassay to improve the detection of Mycoplasma pneumoniae infections in children with pneumonia. 10 Furthermore, a real-time PCR assay targeting eight different bacteria and six viruses relevant for respiratory tract diseases was introduced by Edin et al. 11 Recently, Gadsby et al. established two real-time multiplex PCR assays enabling the quantification of eight respiratory pathogens. 12 While qPCR provides both high specificity and sensitivity, even with the possibility of quantitation, major obstacles preventing its use in routine clinical diagnostics are the high costs as well as the rather complicated associated experimental procedures. Regarding point-of-care applications, isothermal amplification has proven to be less demanding in terms of equipment and financial burden. For example, Gotoh et al. successfully applied loop-mediated isothermal amplification (LAMP) for detecting Mycoplasma pneumoniae. 13 Huang et al. integrated a LAMP-based detection scheme into a microfluidic system, allowing a parallel and automated nucleic acid analysis of 24 samples. 14 Moreover, Renner et al. designed a user-friendly cartridge in which isothermal amplification enables detecting various clinically relevant bacterial species. 15 To the best of our knowledge, the BIOFIRE FILMARRAY (Pneumonia Panel) currently offers the widest range of respiratory pathogen identification based on multiplex PCR, including fully automated sample preparation, with results available within only 1 h. Next to PCR-based approaches, matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) can be used to identify bacterial species from blood cultures within 1 h. Even though MALDI-TOF MS is an attractive tool for the identification of bacteria, a distinct disadvantage within the context of pneumonia diagnosis is that a precultivation step is necessary in order to generate sufficient amounts of biomass for the analysis. 16 Alternatively, immunoassays are a further option for the specific detection of pneumonia pathogens. Li et al. developed an immunochromatographic assay which allows a rapid and sensitive detection of M. pneumoniae from sputum samples or throat swabs. 17 In practice, urinary antigen tests, such as ImmuView UAT and BinaxNOW, are frequently applied for detecting S. pneumoniae and Legionella pneumophila in urine samples. 18 Furthermore, Wang et al. recently reported an electrochemical immunosensor to detect the pneumococcal surface protein A, enabling ultrasensitive detection of S. pneumoniae. 19 While it is encouraging that various promising attempts at a fast diagnosis of bacterial pneumonia have already been made, some challenges still remain. For instance, many approaches only encompass the amplification and detection steps, omitting the sample preprocessing that is necessary for the application to real samples. Others only address single species. The aim of the present study is to cover the entire process chain, starting with an efficient isolation and enrichment of the bacteria from patients' sputum samples up to the specific detection of eight pneumonia-relevant pathogens in parallel (Figure 1). With regard to the potential application in point-of-care testing, we made special efforts to incorporate cost-efficient materials and to keep the assay as simple and convenient as possible.
For sample preparation, we employed expanded glass beads (EGB), which are an intermediate product occurring during the recycling process of glass. Our assay begins with the liquefaction of the sputum samples, followed by the isolation of the bacteria from the complex sample matrix using EGB. These feature an amine modification that allows a pH-dependent adhesion and desorption of cells. After a lysis step, the target DNA is exponentially amplified and labeled using specific primers. Finally, the PCR products are further analyzed using array-based hybridization and enzyme-induced silver deposition. The silver nanoparticles generated in the case of a positive reaction serve as robust endpoint signals allowing an immediate visual readout by the naked eye.

■ RESULTS AND DISCUSSION
The main objective of this study was the development of a PCR-based assay enabling the fast and reliable detection of bacterial pneumonia from sputum samples. We addressed the whole process chain and considered sample preparation as well as a user-friendly detection scheme. Prior to the amplification of the target DNA, we utilized a bead-based approach for the isolation of the bacterial cells from the sputum matrix. As it is our goal to address multiple species, we chose amine-modified particles for the sample preparation. Based on electrostatic interactions between the bacterial cell wall and the bead surface, this type of surface modification allows directed binding or detachment of the bacteria depending on the pH value of the surrounding buffer. The particle surface will exhibit a positive charge because of protonated amine groups in the acidic to neutral pH range, causing an attractive force for the bacteria, which generally have a negative charge on their surface. By raising the pH to the alkaline region, the amine groups will be deprotonated. Accordingly, the bacterial cells are no longer drawn to the particle surface and can be detached. This principle is highly interesting for assays that aim at isolating several pathogens, because various bacterial species can be captured with only one type of surface modification. 20,21 The expensive and often troublesome surface modification of particles with antibodies can be avoided.

Array-Based Detection of Eight Bacterial Species Relevant for Respiratory Diseases. We tested two types of amine-functionalized particles, namely, EGB and PEI-modified PE particles, for the isolation of pneumonia-relevant bacteria from sputum samples. In order to optimize the protocol for the particle-based isolation of the pneumonia pathogens from sputum samples, we used an artificial sputum matrix spiked with defined cell numbers of eight different species (Acinetobacter baumannii, Escherichia coli, Haemophilus influenzae, Klebsiella pneumoniae, Moraxella catarrhalis, Pseudomonas aeruginosa, Staphylococcus aureus, and S. pneumoniae) to a final concentration between 10⁵ and 10⁶ cfu/mL. Each sample was divided into two equal volumes and the isolation was performed with both types of particles. After that, a heat lysis of the cells was performed, followed by the amplification of the target DNA. In order to verify each single step of the process chain, the successful amplification of all pathogens was also examined on agarose gels (shown exemplarily in Figure S1). Using the biotinylated primers (see Figure 1B) resulted in biotin-labeled PCR products. These were finally hybridized on polymer chips modified with specific capture probes for the pathogens of interest.
By incubating the chips with a streptavidin enzyme conjugate, the successful hybridization can be visualized exploiting the enzyme-catalyzed reduction of silver ions: the streptavidin enzyme conjugate will bind to the biotinylated, hybridized PCR products and induce the reduction of the silver ions to elemental silver, resulting in the formation of silver nanoparticles (see Figure 1C). The silver nanoparticles are visible to the naked eye as black or gray spots on the chip surface and can be further investigated using gray value analysis (see Figure 1D). The results of the corresponding analysis for the samples processed either with EGB or polymer particles are summarized in Figure 2. For clarity, we only displayed the gray values at the positions of the correct capture probes for each species. Both sample preparation methods enabled the PCR-based detection of all eight investigated species. For each sample, the results of the array-based hybridization were specific, meaning that no false positive signals were obtained at the positions of the noncomplementary probes. The respective diagrams can be found in the Supporting Information (Figure S2 for EGB and Figure S3 for polymer particles). Generally, we found that the signal intensities were slightly higher for the EGB, with the exception of the samples with H. influenzae and S. aureus. With these results, we were able to demonstrate that our assay enables detecting each of the eight investigated respiratory species from sputum samples. The time required for completing the whole process chain is only 4 h. As the handling of the glass beads in the highly viscous sputum matrix turned out to be even more convenient than the use of the polymer beads, we decided to employ the glass beads for all future experiments.

Detection of Patient Isolates. In order to investigate the applicability of the abovementioned assay, the optimized protocol was further tested with several strains of S. pneumoniae, S. aureus, and H. influenzae isolated from patients of the University Hospital Jena. For this purpose, artificial sputum was spiked with ten different strains to a final concentration of approximately 10⁵ cfu/mL. Subsequently, the complete protocol, starting from the isolation of the pathogens with EGB, was conducted. The results of the gray value analysis of the polymer chips after performing the hybridization assay are depicted in Figure 3. Because no unspecific signals at the noncomplementary probe positions were detected, we only included the values of the correct species-specific probes for better comprehensibility. The detailed evaluation of each chip can be found in the Supporting Information (see Figure S4). Overall, the results are in good agreement with the previously investigated strains. We were able to unambiguously detect all ten patient strains. For the two isolates of H. influenzae, the intensities of the gray values were even more pronounced than those of the ATCC 9007 strain. The same trend was observed for the three S. pneumoniae isolates, the gray values of which all exceeded 80%, while the type strain DSM 20566 typically yielded intensities of approximately 55%. For the five isolates of S. aureus, we obtained signal intensities between 40 and 80%, while the type strain DSM 20231 displayed a mean intensity of 55% in comparable experiments.
These results indicate that the established protocol is capable of covering several different pneumonia-relevant bacterial species with only one set of parameters for the isolation protocol. Even though the signal intensities vary, depending on the species and also the strain, a distinct identification is enabled in each case. One factor impacting the gray values of the chip-based hybridization protocol is the isolation yield in the sample preparation. The attractive interaction between the particles and the cells is influenced by the composition of the bacterial cell surface and the presence of functional groups contributing to the surface charge of the cells. The constitution of the cell surface is, of course, affected by the species (Gram-positive, Gram-negative bacteria), the strain, and even the growth conditions. Accordingly, it would be possible to optimize the isolation protocol for each species or strain. However, we refrained from this approach, as it was our goal to access multiple species with one protocol. Based on the presented results, it can be concluded that the introduced approach shows great promise of being robust enough to cope with the cell surface variations that occur within one species from strain to strain.

Determination of the Limit of Detection. A critical parameter for the application of an assay in a clinical setting is its sensitivity. Because S. pneumoniae is the most common causative agent of pneumonia, we performed the corresponding experiments with the type strain of S. pneumoniae. Briefly, samples of artificial sputum in the concentration range between 10² and 10⁶ cfu/mL were prepared and the full process chain was completed. Down to a concentration of 10³ cfu/mL, we were able to detect the type strain of S. pneumoniae, as can be seen in Figure 4. For the sample of 10² cfu/mL, we did not obtain a significant gray value in the hybridization assay. In contrast, the analytical gel showed distinct bands only for 10⁶ and 10⁵ cfu/mL (see Figure S5). Once again, the results of the array-based hybridization were specific and no signals were detected at the noncomplementary probe positions (data not shown). The sensitivity of our assay is sufficient for investigating sputum samples, as typical bacterial loads range from 10³ to 10⁷ cfu/mL. 22 Comparable results were reported by Gillespie et al. They also targeted the autolysin gene lytA of S. pneumoniae and were able to detect the pathogen down to a concentration of 10⁴ cfu/mL in a simulated sputum matrix using conventional PCR. 23 The sensitivity can be further improved by employing qPCR. Werno et al. achieved reproducible and quantifiable results down to a level of 10² cfu/mL, also addressing the lytA gene. 22 Ikegame et al. investigated clinical sputum specimens to evaluate the performance of an antigen kit for the C-polysaccharide of the bacterial cell wall of S. pneumoniae. They found that 10⁵ cfu/mL was required for a positive test result. 24 Overall, the sensitivity of our assay is comparable or superior to other PCR-based detection schemes for S. pneumoniae in complex samples.

Investigation of Real Sputum Samples. Finally, we challenged our assay by processing actual patient samples. From the University Hospital in Jena, we received three leftover samples, which had previously been characterized via routine microbiological methods. In each of the samples, only the standard microflora was detected.
We spiked each sample with the type strain of S. pneumoniae to a concentration of 10⁶ cfu/mL and performed the whole process chain from the isolation of the bacterial cells to the array-based hybridization. In general, we were able to detect S. pneumoniae in all samples, as can be seen in Figure 5. For the samples R002 and R003, we obtained signal intensities over 80%. For the sample R001, however, only a comparably low intensity of 20% was achieved. The lower yield can probably be ascribed to difficulties during sample preprocessing. Sample R001 displayed a very high viscosity compared to samples R002 and R003; accordingly, the mixing of the beads and the sample was apparently less efficient than in the other samples. Nevertheless, the outcome of the experiment is encouraging and proves the general applicability of our assay to real samples.

■ CONCLUSIONS
We developed a fast and convenient PCR-based assay enabling the detection of eight different pneumonia-relevant pathogens within 4 h. Our approach covers the complete chain of analysis from sample preparation to array-based detection. We included robust and cost-efficient materials wherever possible, for example, EGB, which are a byproduct of glass recycling. Our sample preparation strategy relies on amine-functionalized particles, which are capable of isolating a broad range of different Gram-negative and Gram-positive species, as demonstrated in this study. The species-specific identification was achieved with a hybridization assay on low-cost polymer chips. An additional advantage of these chips is the simple functionalization with capture probes, without the need for any additional surface modification. The detection method based on enzyme-catalyzed silver deposition is not only robust but also entails great potential for an automated readout and analysis, for example, via smartphone. We successfully detected eight species in an artificial sputum matrix and could also show that the previously optimized parameters work well for different patient strains. Regarding sensitivity, we found a detection limit of 10³ cfu/mL for the type strain of S. pneumoniae, which is sufficient for the vast majority of sputum samples. Furthermore, we were able to successfully analyze real sputum samples spiked with S. pneumoniae. Overall, our results make an important contribution toward the development of culture-independent, sensitive, and specific point-of-care assays for the reliable detection of pneumonia. Future work will be aimed at an automated readout of the hybridization assay and at implementing the sample preparation in a cartridge for more user-friendliness.

■ EXPERIMENTAL SECTION
Ethics Considerations. The study was approved by the ethics committee of the Jena University Hospital (Germany), accession number: 4672-01/16.

Cultivation of Bacteria. In this study, we investigated eight different bacterial species, namely, A. baumannii, E. coli, H. influenzae, K. pneumoniae, M. catarrhalis, P. aeruginosa, S. aureus, and S. pneumoniae. The strains were either purchased from the Leibniz Institute DSMZ-German Collection of Microorganisms and Cell Cultures (DSMZ, Brunswick, Germany) or were isolates from patients' samples provided by the Jena University Hospital (UKJ, Jena, Germany). All species, except for H. influenzae, which was raised on chocolate agar with Vitox (Oxoid Deutschland GmbH), were cultivated on Columbia sheep blood agar (Oxoid Deutschland GmbH). The spiking of the sputum samples was carried out with cells from the exponential growth phase.
To that end, 5 mL of the liquid growth medium (for further information, see Table 1) was inoculated with a small amount of biomass from an agar plate and incubated overnight at the appropriate temperature. On the next day, 2-3 mL of the overnight culture was added to 50 mL of fresh growth medium and incubated until the exponential phase was reached. The progress was monitored by measuring the optical density at 600 nm (OD600) using a biophotometer (Eppendorf AG, Hamburg, Germany). Bacterial cells were harvested when the OD600 reached 0.4-0.6. Then 1 mL of the bacterial suspension was centrifuged at 10,000 rcf for 3 min. The supernatant was discarded and the remaining cell sediment was resuspended in 1 mL of phosphate-buffered saline (PBS). Afterward, a serial dilution from 1:20 up to 1:200,000 was performed and aliquots of 50 μL were plated in order to determine the colony-forming units (cfu) per mL.
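The back-calculation from a countable plate to the density of the undiluted suspension is simple arithmetic; the following minimal Python sketch illustrates it (the colony count used here is a made-up example value, not a measurement from the study).

```python
# Back-calculating cfu/mL from a plate count in a serial dilution.
# Dilution factor and plated volume follow the protocol above; the
# colony count of 47 is an invented example value.

def cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float) -> float:
    """Cell density of the undiluted suspension in cfu/mL."""
    return colonies * dilution_factor / plated_volume_ml

# Example: 47 colonies on the plate of the 1:20,000 dilution,
# plated as a 50 uL (0.05 mL) aliquot.
density = cfu_per_ml(colonies=47, dilution_factor=20_000, plated_volume_ml=0.05)
print(f"{density:.2e} cfu/mL")  # 1.88e+07 cfu/mL
```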
Expanded Glass Beads and Polymer Beads. The EGB were purchased from Liaver GmbH & Co KG in Ilmenau, Germany. The size of the particles ranges from 0.5 to 1.2 mm and they have a density of 0.9 g/cm³. Their surface was modified with primary amine groups. The modification was a three-stage coating: first with 3-(aminopropyl)triethoxysilane (Sigma-Aldrich, Munich, Germany), followed by lysine diisocyanate ethyl ester (LDI, Actu-All Chemicals, Oss, The Netherlands), and finally with a polyetheramine (JEFFAMINE T403, Huntsman Corp., Bergkamen, Germany), each as a solution in toluene. The polyethylene particles (Ticona GUR 2122) had a size distribution between 100 and 200 μm and were functionalized with Lupasol WF (BASF, Ludwigshafen, Germany) and succinic anhydride as previously described. 20 Scanning electron microscopy (SEM) images of the particles can be found in the Supporting Information (see Figure S6).

Primers and probes for the target sequences of basC, gad, khe, and ecfX were designed using consensus areas of all target genes and their alleles. For the target genes fucK, copB, nuc, and lytA, the primer sequences had already been published, 12 and only the hybridization probes were designed within the aligned amplicon sequences. Abbott's in-house primer design software (Abbott (Alere Technologies GmbH), Jena, Germany) was used to design the primers and hybridization probes. Special attention was given to highly conserved regions of each species marker gene in order to cover all alleles known at the time the assay was designed (July 2017). Briefly, all GenBank entries for any given target were retrieved and one proofed and published entry was selected as the reference sequence (see accession numbers, Table 2). The resulting BLAST hits were reannotated and archived in a local database. Sequences were classified into paralogues and allelic variants based on their similarity. All matching regions from the alignments were used for the design of probes and primers, and sequences with similar GC content, length, and melting temperature were selected. Afterward, all designed sequences were re-blasted against all available target sequences to rule out false negative or cross-reactive binding events. Subsequently, the probes were modified with an amino-C6 linker and all forward primers were biotin-labeled for the staining procedure after hybridization on the chip. The primers and probes were synthesized by Eurofins Genomics GmbH (Ebersberg, Germany). Detailed information about the primers and probes used (sequences, melting temperatures, and amplicon sizes) as well as the target genes (accession numbers) is provided in Table 2.

Asymmetric PCR. In the present study, an asymmetric PCR was conducted to amplify a fragment within the respective target gene region of each bacterial species (Table 2). The ratio between biotinylated and nonbiotinylated primers was 0.2:0.05 μM, as determined experimentally (data not shown). A single reaction mix contained 1 μL cell lysate, 1× PCR buffer, 2.5 mM MgCl2, 0.4 mM dNTPs, the primers at the abovementioned concentrations, and 0.1 U/μL Taq polymerase (innuTaq DNA Polymerase Kit, Analytik Jena AG, Germany) in a final volume of 30 μL. All PCR reactions were carried out with the thermocycler FlexCycler (Analytik Jena AG, Jena, Germany). The PCR was performed with the following cycle profile: initial denaturation at 95 °C for 300 s, followed by 45 cycles of denaturation at 94 °C for 30 s and annealing/elongation at 58 °C for 30 s. Successful DNA amplification was verified on a 2% (w/v) agarose gel. For visualization, the DNA was stained with GelRed (Biotium, Fremont, CA, USA) according to the recommendations of the manufacturer. The molecular weight markers "GeneRuler 100 bp DNA Ladder" and "pUC19 DNA/MspI (HpaII) Marker, ready-to-use" were purchased from Thermo Fisher Scientific (Waltham, MA, USA).
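Assembling such a reaction is the usual C1·V1 = C2·V2 dilution arithmetic. The sketch below illustrates it for one 30 μL reaction using the asymmetric primer ratio stated above; the stock concentrations are typical catalogue values and are assumptions, since the paper reports only the final concentrations.

```python
# Volumes for one 30 uL asymmetric PCR reaction via C1*V1 = C2*V2.
# The stock concentrations below are assumed typical values; only the
# final concentrations are taken from the protocol above.
FINAL_VOLUME_UL = 30.0

# component: (stock concentration, final concentration), unit noted per line
components = {
    "PCR buffer":              (10.0, 1.0),   # x
    "MgCl2":                   (25.0, 2.5),   # mM
    "dNTPs":                   (10.0, 0.4),   # mM
    "biotinylated primer":     (10.0, 0.2),   # uM (excess strand)
    "non-biotinylated primer": (10.0, 0.05),  # uM (limiting strand)
    "Taq polymerase":          (5.0, 0.1),    # U/uL
}

volumes = {name: FINAL_VOLUME_UL * final / stock
           for name, (stock, final) in components.items()}
volumes["cell lysate"] = 1.0  # fixed template volume from the protocol
volumes["water"] = FINAL_VOLUME_UL - sum(volumes.values())

for name, vol in volumes.items():
    print(f"{name:25s} {vol:5.2f} uL")
```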
Array-Based Hybridization and Signal Detection. White planar polypropylene (PP) sheets (Modulor GmbH, Berlin, Germany) were cut into chip size (17 × 17 mm) by laser and successively cleaned in an ultrasonic bath with acetone, ethanol, and water for 10 min each. The capture probes (Eurofins MWG Operon, Ebersberg, Germany) were dissolved in 1× micro spotting solution (ArrayIt Corporation, Sunnyvale, USA) to a final concentration of 20 μM and spotted (Nanoplotter 2.1, GeSim, Grosserkmannsdorf, Germany) in an array format on the PP chips (for the layout, see Figure S7 of the Supporting Information). A biotin-labeled noncomplementary probe was immobilized as a process control to verify the binding of the streptavidin enzyme and the subsequent silver deposition (positive control). Another noncomplementary probe was spotted to exclude unspecific binding (negative control). The specific biomolecule interaction on the chip was realized as previously described by Schwenkbier et al. 25 Briefly, 70 μL of hybridization solution (20 μL asymmetric PCR product in 3× SSC/0.5% SDS) was incubated on the PP chip surface for 1 h at 50 °C, followed by 30 min of incubation with streptavidin-horseradish peroxidase (Sigma-Aldrich Chemie GmbH, Taufkirchen, Germany; diluted 1:1000 in 1× PBS with TWEEN 20, PBST). Afterward, the PP chips were washed with PBST and tap water. Finally, the enzymatic silver deposition was performed by applying the EnzMet HRP detection kit (Nanoprobes Inc., Yaphank, USA; components A-C). The silver nanoparticles generated in the case of a positive reaction served as robust endpoint signals allowing an immediate visual readout by the naked eye. In addition, the amount of silver deposited was quantified by its gray values. The respective spots were scanned (ProScan 7200, reflecta GmbH, Rottenburg, Germany) and analyzed using the software ImageJ (National Institutes of Health, USA). After inversion of the image, the gray value could range from 0 (white), indicating a clear chip surface without any deposition of silver, to 255 (black). The gray value was determined as a mean gray value (over at least 10 spots with a size of 2 pixels each), subtracting the measured background values from the sample values. The mean gray value of the biotin positive control was set to 100%, and the gray value signals of the capture probes are presented as percentages of the positive control. The mean gray value of the negative controls plus three times the standard deviation was used to set the threshold for a positive test result.
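This evaluation is easy to reproduce computationally. The following numpy sketch mimics the published workflow on a synthetic scan: invert the image, average spot pixels, subtract the nearby background, normalize to the biotin positive control (100%), and call a probe positive when it exceeds the mean of the negative controls plus three standard deviations. The synthetic image, spot coordinates, and window sizes are illustrative assumptions, not the actual ImageJ procedure.

```python
# A minimal numpy re-implementation of the gray-value evaluation described
# above, run on a synthetic 8-bit scan. Spot positions and sizes are invented.
import numpy as np

rng = np.random.default_rng(0)
scan = rng.integers(200, 256, size=(64, 64)).astype(float)  # bright chip surface
scan[10:14, 10:14] = 40.0   # strong silver deposit: biotin positive control
scan[30:34, 10:14] = 120.0  # a hybridized capture-probe spot
scan[50:54, 10:14] = 235.0  # negative control: (almost) no deposition

inverted = 255.0 - scan  # after inversion: 0 = clear surface, 255 = black

def spot_value(img, row, col, size=4, bg_offset=8):
    """Mean inverted gray value of a spot minus the nearby background."""
    spot = img[row:row + size, col:col + size].mean()
    background = img[row:row + size, col + bg_offset:col + bg_offset + size].mean()
    return spot - background

pos_ctrl = spot_value(inverted, 10, 10)
probe = spot_value(inverted, 30, 10)
neg_ctrls = np.array([spot_value(inverted, 50, 10)])  # one spot here; use several in practice

signal_pct = 100.0 * probe / pos_ctrl  # probe signal as % of positive control
threshold = neg_ctrls.mean() + 3.0 * neg_ctrls.std()
print(f"signal: {signal_pct:.1f} % of positive control, positive: {probe > threshold}")
```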
Supporting Information. The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsomega.9b00904: spotting layout for the capture probes, analytical agarose gel of the eight investigated respiratory species, results of the gray value analysis for samples preprocessed with EGB and PEI beads, results of the gray value analysis for patient strain samples, analytical agarose gel of the amplification of S. pneumoniae in a concentration range between 10² and 10⁶ cfu/mL, and SEM images of EGB and PEI beads (PDF).

We thank the Institute of Medical Microbiology (University Hospital Jena) for kindly providing all investigated bacterial strains and Prof. Dr. Marie von Lilienfeld-Toal from the Department of Hematology and Medical Oncology of University Hospital Jena for kindly providing the sputum samples. We also want to thank Dr. Jan Dellith from the Leibniz IPHT, Jena, for acquiring the SEM images of the particles used in this study.
2019-07-15T22:29:23.607Z
2019-06-14T00:00:00.000
{ "year": 2019, "sha1": "97375f2c2f50537a5a034d136e8208c903eba032", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.9b00904", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "588aa11d136eae11f32b9761a7d49c19ef69f133", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
235290902
pes2o/s2orc
v3-fos-license
Design and implementation of spatial analysis module for forestry ecological engineering monitoring system

Spatial analysis (SA) is the core technology of GIS and the capability that distinguishes GIS from other information systems. In this paper, a spatial analysis module is designed as a sub-module of a forestry ecological engineering monitoring system. Given that forests are spatially dispersed, vast and dynamic, and have long growth periods, the module's functions are designed to include digital terrain analysis, geometric analysis of spatial characteristics, statistical analysis, etc. The paper briefly introduces the method and technical requirements for realizing each function module; the result is suitable for forestry workers to apply in spatial data processing and analysis.

Introduction
The monitoring and evaluation technology for major national forestry ecological engineering integrates application systems on a uniformly planned and designed technical platform to provide technical support for information resource sharing and technology sharing in the construction of major forestry ecological engineering. This technology can release project information regularly, provide dynamic monitoring of project progress, support project planning, and assist benefit evaluation and decision making, thereby providing technical support and services for extensive forestry ecological engineering management and monitoring. Thus, it serves the planning, management, monitoring, evaluation, analysis and decision making of major forestry ecological engineering construction in China. [1] Each information system has different needs for, and places different emphasis on, GIS functions depending on its target objects. For a forestry ecological engineering monitoring system, given that forests are dispersed and vast in spatial distribution, dynamic in temporal distribution, and characterized by long growth periods, the key lies in data acquisition, modeling, spatial analysis and decision analysis. This paper focuses on the design and implementation of the spatial analysis module. Spatial analysis is the core technology of GIS and is unique to it; the analysis of forest monitoring data helps decision makers by providing accurate, reliable and dynamically changing indicator and surface data on the current situation and demands of ecological environment construction, which provides the basis for planning, statistical analysis, evaluation and auxiliary decision making, and reduces the cost of research and management.

2.1. The module objectives
The spatial analysis module designed for the forestry ecological engineering monitoring system should place particular emphasis on spatial overlay analysis, buffer analysis, digital terrain analysis and so on, besides inheriting the basic spatial analysis functions of a common geographic information system. This spatial analysis module is designed as a tool for solving geographical space problems. To solve spatial problems with this module, the problems should first be spatialized and modeled, and then solved to support decision analysis.

2.2. Module Functions
Spatial overlay analysis: when data files of polygon features covering the same area at the same scale are overlaid, polygons carrying the attributes of all inputs are produced, on which statistical analysis of the attributes over the polygon ranges can be performed. For example, by overlaying a soil map on a forest map, the soil type of each forest area can be determined and the area of a certain type of soil can be counted (a brief sketch of such an overlay, using open-source tools, follows this function overview).
Digital terrain analysis: a digital representation of the continuous variation of spatial relief. Its main contents include contour line generation and analysis, generation and analysis of topographic elements, section (profile) map analysis, three-dimensional display and calculation, etc.

Buffer analysis: buffer analysis studies point, line and surface entities based on the GIS database and automatically establishes buffer polygons within a certain range around them. There are three types of buffer. The first is the buffer based on a point element, usually a circle with the point as its centre and a certain distance as its radius. The second is the buffer based on a line element, a parallel strip polygon with the line as its central axis at a certain distance from that axis. The third is the buffer based on a polygon, in which a new polygon is generated by extending inward or outward by a certain distance. Buffer analysis is widely used in the analysis of returning farmland to forest and in forest management, for example, in the delimitation of natural forest reserves around rivers and lakes and the demarcation of the core areas and buffer zones of nature reserves. [3]

Spatial statistical analysis: because of the raw nature of the data stored by GIS, users can extract and analyze the data for different purposes to obtain the required information. Spatial variables in the same or different research areas are compared and corrected, and new features of GIS data, such as spatial models, spatial generalization, spatial association, spatial clustering and classification, are explored and developed.
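As an illustration of the overlay and statistical functions just listed, the sketch below uses the open-source geopandas/shapely stack rather than the ArcGIS Engine interfaces on which the module itself is built; the two toy layers and their field names (SOIL_TYPE, STAND_ID) are invented for the example.

```python
# Overlaying a soil layer and a forest layer, then computing the forest
# area per soil type; an open-source analogue of the module's overlay
# function, with invented layers and attribute fields.
import geopandas as gpd
from shapely.geometry import box

soil = gpd.GeoDataFrame(
    {"SOIL_TYPE": ["loam", "clay"]},
    geometry=[box(0, 0, 10, 5), box(0, 5, 10, 10)],
    crs="EPSG:32633",  # a projected CRS, so .area is in square metres
)
forest = gpd.GeoDataFrame(
    {"STAND_ID": [1, 2]},
    geometry=[box(2, 2, 8, 8), box(8, 0, 10, 4)],
    crs="EPSG:32633",
)

# Intersection overlay: every output polygon carries both a soil type
# and a forest stand attribute.
pieces = gpd.overlay(forest, soil, how="intersection")

# Statistical analysis on the overlaid polygons: forest area per soil type.
pieces["area_m2"] = pieces.geometry.area
print(pieces.groupby("SOIL_TYPE")["area_m2"].sum())
```

The same pattern with how="union" or how="difference" reproduces the other overlay variants mentioned above.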
It should be noted that AO (ArcObjects) is not an independent application product, but a software development kit attached to the ArcGIS Desktop product. ArcGIS Engine includes a development kit for building custom applications: it embeds GIS functions in applications by adding controls, tools, menu bars and object libraries to the development environment. For example, a programmer could build an application that includes an ArcMap thematic map, some mapping tools from the ArcGIS Engine, and other custom features. The ArcGIS Engine development package includes three key parts: controls, toolbars and tools, and the object library.

2.3. The functional structure of the module
Controls are an integral part of the ArcGIS user interface that you can embed and use in your applications. A toolbar is a collection of GIS tools used in applications to interact with maps and geographic information. The object library is a collection of programmable ArcObjects components, including a range of libraries from geometry to mapping, GIS data sources, and the Geodatabase. Using these libraries on Windows, UNIX, and Linux platforms, programmers can develop a wide range of customized applications, from low-level to advanced.

3.1. The realization of spatial overlay analysis
The data files of two or more groups of polygons covering the same area at the same scale are superimposed through overlay analysis, and polygons with multiple attributes are established according to the intersection points of the boundaries of the two groups of polygons, or statistical analysis of the attribute characteristics over the resulting polygon ranges is carried out. For example, the land use status map of an afforestation demonstration area and the afforestation planning and design map can be analyzed by three-dimensional overlay, so as to understand more intuitively the changes in the attributes of afforestation sub-compartments (small classes) and the changes in land types before and after afforestation. ArcGIS Engine provides a powerful object library. For spatial overlay analysis, Engine provides the IBasicGeoprocessor interface, whose members include Clip, Dissolve, Merge, Union, Intersect and others. Consider, for example, the realization of the Dissolve function. From the basic concept of Dissolve, the operation merges adjacent polygons sharing the same value of the fusion field and eliminates their common boundaries. Before the fusion operation, the fusion field must be specified by the user. We adopt an interactive approach: when the user selects a data layer for fusion analysis, all attribute fields of that layer are listed in a combo box; the user selects the required attribute field from the drop-down list (the selection is highlighted in blue), and the system then performs the IBasicGeoprocessor.Dissolve operation according to the user's selection, realizing the fusion (Dissolve) function. Figure 1 shows the comparison before and after fusion; in this operation the fusion field SUB_REGION was selected. Users can achieve an objective simplification of complex polygons by fusing on an attribute field of a data layer, so as to make corresponding decisions and analyses. As shown in the figure, with SUB_REGION as the fusion field, the fused layer has changed from more than 500 polygons to more than 10 polygons; each output polygon is combined from the polygons that share a common value of the SUB_REGION attribute field. The implementation of functions such as Clip, Merge, Union and Intersect uses the same interface as Dissolve, and the user interface should be kept as simple and clear as possible during the design.

3.2. The implementation of the buffer analysis function
The ITopologicalOperator interface provides the buffer function in ArcGIS Engine. The system design idea is as follows: when the user selects a point, line or polygon entity in a layer (shown highlighted), the user specifies the buffer radius and the buffer style (inward, outward, or in both directions), and the buffer is finally generated through ITopologicalOperator.Buffer. Buffer analysis is widely used in ecological monitoring systems. For example, in forestry planning, the cutting area of a forest needs to be planned at a certain distance from rivers to prevent soil erosion; in the evaluation of wildlife protection, the range of activity of many animals is limited to a certain range around their habitats (rivers, caves, nests). In each of these cases, a buffer of some range must be created along these points, lines and surfaces. Buffers are new polygons that do not contain the original point, line, or polygon elements. [3]
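For readers without an ArcGIS Engine licence, both operations implemented above can be reproduced with open-source tools. The following sketch is an analogue, not the module's actual code: a dissolve on a fusion field, followed by the three buffer types of section 2.2; the layer contents are invented for the example.

```python
# Open-source analogue of IBasicGeoprocessor.Dissolve and
# ITopologicalOperator.Buffer using geopandas/shapely; all data invented.
import geopandas as gpd
from shapely.geometry import LineString, Point, box

regions = gpd.GeoDataFrame(
    {"SUB_REGION": ["north", "north", "south"], "AREA_HA": [3.0, 2.5, 4.0]},
    geometry=[box(0, 5, 5, 10), box(5, 5, 10, 10), box(0, 0, 10, 5)],
    crs="EPSG:32633",
)

# Dissolve: merge adjacent polygons sharing the same fusion-field value,
# eliminating their common boundaries and summing numeric attributes.
dissolved = regions.dissolve(by="SUB_REGION", aggfunc="sum")
print(dissolved[["AREA_HA"]])  # two polygons remain: north and south

# The three buffer types: around a point, a line, and a polygon.
river = gpd.GeoSeries([LineString([(0, 2), (10, 3)])], crs="EPSG:32633")
nest = gpd.GeoSeries([Point(5, 8)], crs="EPSG:32633")
riparian_strip = river.buffer(1.0)      # strip polygon around the line
habitat_zone = nest.buffer(2.0)         # circle around the point
outward = regions.geometry.buffer(0.5)  # polygon extended outward
inward = regions.geometry.buffer(-0.5)  # polygon shrunk inward
print(riparian_strip.area.iloc[0], habitat_zone.area.iloc[0])
```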
3.3. The realization of the digital terrain analysis function
This function includes spatial interpolation analysis, slope and aspect analysis, topographic section (profile) analysis, 3D topographic display and analysis, watershed analysis, etc. ArcGIS Engine provides DTM analysis modules: GRID model analysis and TIN model analysis. The Grid model analysis function can be used to produce a local "Grid three-dimensional map rendering" of the forest area, which visually displays the terrain and landforms of the demonstration area. On this basis, the registration and superposition of topography and afforestation types can be realized to reflect the spatial distribution of forestland after afforestation. In addition, TIN model analysis can be used to produce a local "slope analysis map" of the forest area and superimpose it on the afforestation classes; the slope distribution of each afforestation class can then be seen, providing information for tree species selection, afforestation construction and other work.

Conclusion
Through the design of the spatial analysis module of the forestry ecological engineering monitoring system, this paper has realized the transformation of complex raw information into graphical spatial information. The module also enhances the objectivity and correctness of analysis and decision making, and provides a better basis for the decision making and management of forest resources.
2021-06-02T23:45:31.919Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "612f0f08b225a50ddc65f08cd278815d596865de", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1903/1/012018", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "612f0f08b225a50ddc65f08cd278815d596865de", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
270240893
pes2o/s2orc
v3-fos-license
Review Study on Mechanical Properties of Cellular Materials

Cellular materials are fundamental elements in civil engineering, known for their porous nature and lightweight composition. However, the complexity of their microstructure and of the mechanisms that control their behavior presents ongoing challenges. This comprehensive review aims to confront these uncertainties head-on, delving into the multifaceted field of cellular materials. It highlights the key role played by numerical and mathematical analysis in revealing the elasticity of these structures. Furthermore, the review covers a range of topics, from the simulation of manufacturing processes to the complex relationships between microstructure and mechanical properties. This review provides a panoramic view of the field by traversing various numerical and mathematical analysis methods. Furthermore, it reviews cutting-edge theoretical frameworks that promise to redefine our understanding of cellular solids. By providing these contemporary insights, this study not only points the way for future research but also illuminates pathways to practical applications in civil and materials engineering.

Introduction
Cellular solids have attracted significant attention and made substantial advancements across various disciplines because of the variety of their porous, periodic or irregular microstructures and their lightweight properties. These materials possess desirable attributes such as high strength-to-weight ratios [1-5], high energy absorption capabilities [6-8], and thermal insulation properties [9], making them particularly valuable in civil and mechanical engineering. Their applications in this field range from structural components and insulation systems to lightweight filling materials [10-15].

Natural cellular materials provide innovative solutions for construction by replicating efficient and strong structures found in nature, such as bones and plants. These materials offer strength, lightweight properties, and energy-absorbing capabilities. By mimicking these natural designs, building materials with higher performance and sustainability can be developed, including applications such as lightweight panels, load-bearing structures, and insulation systems. The use of renewable and biodegradable materials is consistent with environmentally friendly building practices. Ongoing research in this area promises to transform buildings with high-performance and environmentally friendly alternatives [16-18].
Cellular structures, in both their microscopic and macroscopic variants, are integral to numerous applications due to their unique properties. Microcellular structures are characterized by cell sizes in the nanometer to micron range and possess properties such as a high surface-area-to-volume ratio and customized functionality due to their complex microscale structure. They have a wide range of uses in catalysis, filtration, and biomedical engineering, where precise control at the nanoscale is crucial. In contrast, macroscopic honeycomb structures, with larger cell sizes ranging from millimeters to centimeters, offer advantages such as lightweight construction, efficient energy absorption, and thermal insulation. They are widely used in the aerospace, automotive, and construction sectors, in applications such as sandwich panels, thermal insulation, and impact-absorbing structures. A comprehensive understanding of the design principles, fabrication methods, and performance characteristics of micro- and macro-cellular structures is critical to maximizing their potential across disciplines, fostering innovation, and pushing technological boundaries [21-24].

In the fields of civil and materials engineering, the impact of cellular solid materials on the global economy has been analyzed quantitatively, revealing insights that distinguish them from traditional materials. Mexico is a notable case study where research has illuminated significant avenues for cost reduction. For example, investigations into innovative cellular concrete mixes have shown significant reductions in annual energy expenditures, ranging from 15% to 28%, compared to conventional concrete practices [19]. In addition, the use of cellular concrete beams has shown significant economic advantages, with potential material cost savings per beam ranging from 40% to 63% compared to conventional box beams [20]. These findings highlight the clear economic benefits of adopting cellular solid materials in civil engineering, not only in terms of improved energy efficiency but also in terms of significant reductions in material expenditures.

Mechanical properties of cellular materials such as honeycombs, foams, or networks, including increased specific strength and stiffness and a high energy absorption capacity, make them suitable for applications involving impacts, collisions, or explosions [25-33]. Recently, network architectures within cellular structures have gained attention for their ability to outperform traditional designs in absorbing impulsive loads, allowing ample space for plastic deformations and efficient dissipation of impact forces. However, creating geometrically accurate cellular network structures remains a challenge and an open research problem. Advances in additive manufacturing now enable the fabrication of intricate structures using customizable materials, offering exciting opportunities for design and optimization across various geometrical scales.
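The specific stiffness and strength advantages discussed above are commonly estimated with the classical Gibson-Ashby scaling laws, in which the relative Young's modulus of a bending-dominated open-cell foam grows with the square of the relative density and the plastic collapse strength with its 3/2 power. The review itself does not spell these formulas out, so the sketch below is a standard textbook illustration with assumed solid-phase properties, not a result taken from the cited studies.

```python
# Classical Gibson-Ashby scaling estimates for open-cell foams; the
# proportionality constants and solid-phase properties are assumed
# textbook values, not data from the studies cited in this review.
import numpy as np

E_SOLID = 70e9    # Pa, cell-wall Young's modulus (e.g. aluminium, assumed)
SIGMA_YS = 200e6  # Pa, cell-wall yield strength (assumed)

def relative_modulus(rel_density, c1=1.0):
    """E*/E_s ~ C1 * (rho*/rho_s)^2 for bending-dominated open cells."""
    return c1 * rel_density**2

def relative_plastic_strength(rel_density, c2=0.3):
    """sigma_pl/sigma_ys ~ C2 * (rho*/rho_s)^(3/2) for open cells."""
    return c2 * rel_density**1.5

rel_rho = np.array([0.05, 0.10, 0.20])
for rho, e_rel, s_rel in zip(rel_rho,
                             relative_modulus(rel_rho),
                             relative_plastic_strength(rel_rho)):
    print(f"rho*/rho_s = {rho:4.2f}:  E* ~ {e_rel * E_SOLID / 1e9:6.3f} GPa,"
          f"  sigma_pl ~ {s_rel * SIGMA_YS / 1e6:5.2f} MPa")
```

Such closed-form estimates are the usual starting point for the numerical homogenization and probabilistic studies surveyed in the remainder of this review.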
Honeycomb structures are significant in various fields owing to their structural capabilities, mechanical properties, and wide range of uses. They are known for their hexagonal cell composition, which gives them an excellent strength-to-weight ratio, making them ideal for lightweight components in the aerospace, automotive, and construction industries. Their ability to efficiently absorb energy and their high tensile strength make them suitable for applications requiring impact resistance. Ongoing research and development programs aim to enhance the mechanical properties, manufacturing processes, and environmental sustainability of honeycomb structures. Understanding the complex design principles and functions of these structures is critical to improving their efficiency and promoting technological advancement in different engineering fields [34][35][36][37][38].

Polymeric, metallic, and ceramic cellular materials have specific structural properties that make them ideal for a variety of technical applications. Polymeric cellular solids, such as polymethacrylimide (PMI) foams [39], are lightweight and have high energy absorption and thermal insulation capabilities, making them well suited to the automobile and aerospace sectors [40]. Metallic cellular structures [41], such as aluminum and titanium foams, provide high strength-to-weight ratios and excellent mechanical performance under load, which are essential for structural and impact-resistant applications. Ceramic cellular materials, such as silicon carbide foams, have excellent thermal stability, high-temperature resistance, and chemical inertness, making them ideal for use in thermal protection and filtration systems [3,42]. Advanced production processes, such as additive manufacturing and high-precision molding, allow for fine control over the microstructure and porosity of these materials, improving their performance properties.

Engineers can tailor cellular materials to meet specific performance requirements, leading to innovations in aerospace, automotive, defense, and civil engineering. Mathematical and numerical techniques facilitate the simulation and analysis of these materials, enabling rapid prototyping and optimization, and thus accelerating the development and deployment of advanced cellular solid solutions in real-world applications. Some research methods enabling the analysis of such media and their application areas [38,[43][44][45] are summarized in Table 1.

Table 1. Research articles on the specification of cellular materials.
| Cellular Material | Structure | Mechanical Parameter | Performance | Reference |
| --- | --- | --- | --- | --- |
| shape memory polymers and shape memory alloys | periodic cellular solids | tensile and creep mechanical properties | modification, reversibility, repeatability, and control of imperfection | [38] |
| artificial acoustic metamaterial (3D) | truss-lattice structures | bulk modulus, buckling strength, and stiffness-to-mass ratio | bulk modulus and buckling strength increase with radius, while stiffness decreases | [43] |
| 3D Archimedean solid | folded flat using paper-folding origami and kirigami principles | geometrical parameters of spherical and Bennett linkages, dihedral angles | potential engineering applications of foldable cellular structures, such as space habitats and robot arms | [44] |
| polycrystalline materials | lognormal or gamma distributed volumes | volume, surface area, mean width, and number of facets | difference between lognormal and gamma distributed tessellations | [46] |
| - | hexagonal structure | strain localization and micro-buckling | microstructural instabilities and localization bands | [35] |
| polystyrene (EPS) foam | - | compressive and shear stresses | effects of foam density and strain rate | [47,48] |
| metal foams | 3D Voronoi structure | strain, stress ahead of the shock front, and dynamic stress-strain relationship | loading-rate sensitivity mechanism of cellular materials | [41] |
| photopolymer Objet VeroBlue FullCure 840 material | open-cell structure | Young's modulus and Poisson's ratio | good agreement between numerical simulations and experimental results | [49] |
| various types of foams (EPP, PUR (Bayfill EA), EPS, and PPO/PS (Noryl GTX)) | structural foams | elastic properties | introduces new empirical formulations and identifies density-dependent laws | [50] |
| - | hexagonal honeycomb structure | buckling behavior | establishes a homogenization theory and derives conditions for microscopic symmetric bifurcation | [34] |
| polymethacrylimide (PMI) foam | 3D microstructures | strength-density and stress-strain curves, buckling strength index of cell walls, and deformation banding | deeper understanding of the mechanical behavior | [39] |
| piezoelectrically active cellular solids | hexagonal, tetragonal, and triangular structures | elastic, dielectric, and piezoelectric properties | relationship between cellular structure, deformation modes, and electromechanical properties | [36] |

The mechanical properties of cellular structures can also be discussed in the context of possible uncertainty in their geometrical parameters and/or in the mechanical properties of the original solid serving as the skeleton of the given unit cell [51][52][53]. Using various stochastic and probabilistic methods, scientists can predict and identify the behavior of cellular materials under various conditions in terms of basic probabilistic characteristics such as expectations and variances, as well as the probabilistic entropy concept. Probabilistic models provide statistical information about the probability of various outcomes, including failure modes, fatigue life, and deformation characteristics, offering valuable input for reliability-based design optimization and risk assessment [54][55][56]. Numerous studies have explored the mechanical properties of cellular materials through stochastic and probabilistic analyses; some of them are collected in Table 2.
Table 2. Stochastic and probabilistic studies of the mechanical properties of cellular materials.

| Method | Material | Investigated Property/Application | Reference |
| --- | --- | --- | --- |
| finite element method with random fields | porous aluminum | tensile modulus and yield strength | [61] |
| artificial neural networks for porosity prediction | wood fiber | elastic moduli | [62] |
| probability density functions for modeling nanoporous cellular materials | open-porous cellular materials | compressive response | [63] |
| stochastic lattice creation techniques | cellular materials | properties for heat exchangers and mechanical components | [64] |
| stochastic tessellation via the Aboav-Weaire law | cellular materials | classification and selection (biomimetic approach) | [65] |
| stochastic modeling based on µCT-image analysis | open-cell foam | microstructure complexity and mechanical performance | [66] |
| stochastic models of imperfection | Ti6Al4V cellular structures | elastic modulus and strength of 2D structures | [67] |
| finite element models | metal foams | dynamic stress-strain relationships and energy absorption | [41] |
| lognormal distribution | polymethacrylimide (PMI) foam | analytical constitutive model | [39] |

It is seen that researchers are integrating mathematical and numerical models to clarify the underlying mechanisms that govern the behavior of cellular materials, thereby paving the way for advances in design optimization, predictive modeling, and material development. One of the most important observations from Tables 1 and 2 is that, contrary to many Stochastic Finite Element Method (SFEM) studies of other solids and structures, non-Gaussian variables and processes now become very important, which calls for more advanced computational algorithms.

Open-cell foam is characterized by interconnected pores that facilitate the free flow of fluids and gases within the material. Open-cell foam modeling involves capturing the complex network of struts and ligaments that make up its structure, where geometric parameters such as pore size distribution and cell morphology can significantly affect the mechanical properties. Computational models play a crucial role in simulating various aspects of open-cell foam behavior, including deformation, compression, and failure mechanisms under different loading conditions. In contrast, closed-cell foams have isolated cells, requiring modeling approaches that account for their distinct morphology and the presence of gas encapsulated within the cells. Closed-cell structures typically exhibit higher stiffness and strength than open-cell structures, which is attributed to the gas trapped within the pores. Computational models are used to analyze the elastic deformation, collapse behavior, and energy absorption capacity of closed-cell foam under different loading conditions. The differences between open-cell and closed-cell foam modeling in various research studies are demonstrated in Table 3.

Table 3. Differences between open-cell and closed-cell foam modeling in various research studies [70][71][72][73][74]. Examples include numerical and experimental studies of open-cell foams for the characterization of heat exchangers; open-cell foam is less dense and offers better sound absorption, whereas closed-cell foam is denser and offers higher strength and insulation.
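Several of the entries in Table 2 reduce, at their core, to sampling geometric descriptors such as cell volume from an assumed probability distribution. The following minimal Python sketch illustrates this ingredient for the lognormal and gamma cases; the distribution parameters are illustrative assumptions, not values taken from the cited studies.

```python
import numpy as np
from scipy import stats

# Sample cell volumes from lognormal and gamma distributions, both
# tuned here to a unit mean volume (arbitrary units, assumed values).
rng = np.random.default_rng(1)
sigma = 0.4  # assumed lognormal shape parameter
v_logn = stats.lognorm(s=sigma, scale=np.exp(-sigma**2 / 2)).rvs(10_000, random_state=rng)
v_gamma = stats.gamma(a=6.25, scale=1 / 6.25).rvs(10_000, random_state=rng)

for name, v in (("lognormal", v_logn), ("gamma", v_gamma)):
    print(f"{name}: mean {v.mean():.3f}, CoV {v.std() / v.mean():.3f}")
```

Both samples have comparable mean and coefficient of variation, so differences in the resulting tessellations stem from the shape of the distributions, in line with the comparison reported in [46].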
Finite element analysis (FEA) has become an important tool for understanding and predicting the mechanical behavior of porous solids, providing crucial insights for engineering applications. This review provides a comprehensive overview of recent advances in applying FEA to various aspects of porous solids, including 3D structures with open-pore configurations, polymethacrylimide (PMI) foam microstructures, and piezoelectrically active porous solids with different geometries. We also discuss the integration of stochastic methods, such as the stochastic finite element method (SFEM), for probabilistic analysis, the coupling of Voronoi structures with FEA models to study complex geometries, and the analysis of programmable cellular solids. Furthermore, we discuss the implementation of the finite element method in software packages such as ABAQUS (e.g., version 2023) and its impact on modeling the mechanical behavior of cellular solids. Through these advancements, FEA continues to play a key role in optimizing the design and performance of porous materials and structures for diverse engineering applications.

The main aim of this review is to elucidate the mechanical properties of cellular materials and solids within the framework of numerical and mathematical models. This work also explores the microstructure formation of cellular solids. In addition, it examines computer-based models including 3D Additive Manufacturing (AM) structures, Laguerre tessellations, 2D and 3D Voronoi diagrams, ABAQUS-based models, tetrakaidecahedral (Kelvin) structures, in situ X-ray tomography scanning, finite element modeling, and Bravais lattice systems to explain mechanical properties through homogenized equations. Furthermore, this review considers computational simulations and experimental results of cellular materials to gain a comprehensive understanding of their mechanical behavior.
This review is subdivided into several sections, each addressing specific aspects. Section 1 provides an introductory overview of cellular materials and tabulates their specifications, explained by various approaches to defining their mechanical properties. Section 2 discusses the widely applied microstructures of cellular solids through numerical and computational models. These include 3D acoustic metamaterial (AM) lattice structures with specific geometric patterns for controlling sound wave propagation, random tessellations of polycrystalline materials, Voronoi tessellations involving regular and perturbed hexagons, 3D Voronoi structures coupled with finite element (FE) models, tetrakaidecahedral (Kelvin) models of cellular solids based on FEA, polymethacrylimide (PMI) foam analysis via in situ X-ray computed tomography, Archimedean solids such as square- and hexagonal-faceted folded octahedrons, and programmable materials featuring stretching-dominated and bending-dominated honeycomb unit cells. Section 3 delves into the homogenization of mechanical properties within cellular solids, focusing on formulation aspects. Section 4 explores experimental investigations conducted on cellular solids, encompassing mechanical properties of programmable materials, buckling strength of 3D AM-based lattice unit cells, band frequency analyses, tessellation outcomes (gamma distribution), and the development of phenomenological and micro-mechanical models. Section 5 discusses computer simulations applied to cellular solids, including finite element analysis (FEA) of programmable cellular solids and 3D AM-based lattice structures, along with the computation of mean and standard deviation of surface areas and the analysis of transmission loops in kinematic cells. Section 6 proposes mitigation strategies in the face of uncertainties associated with cellular materials. Section 7 consolidates discussions on numerical, mathematical, and computational simulations, and Section 8 concludes with a summary of findings and insights gained from the numerical applications in this study of cellular solids.
Microstructure of Cellular Material

In industrial applications, tuning the stiffness of cellular materials by selectively removing material is a key strategy for achieving desired mechanical properties. Engineers utilize a variety of methods to this end, including advanced computational tools such as finite element analysis (FEA) combined with topology optimization algorithms. Through iterative simulations, material is strategically removed from the honeycomb structure to achieve an optimized stiffness distribution while ensuring mechanical integrity. In addition, gradient-density structures are employed, where spatial variations in material density allow for local stiffness tuning. Additive manufacturing (AM) technologies play a key role, enabling the fabrication of complex honeycomb geometries with precise control of material distribution. Auxetic structures exhibit unique negative-Poisson's-ratio behavior, expanding laterally when stretched and providing enhanced stiffness and energy absorption. Engineers also utilize layered designs and hybrid material combinations to further fine-tune stiffness properties. Iterative design methods involving prototyping and testing ensure that honeycomb materials meet stringent mechanical requirements while optimizing weight and cost. Together, these strategies have led to the creation of lightweight, customized honeycomb materials that meet the specific needs of different industrial sectors, from automotive to aerospace to infrastructure.

Using contemporary techniques, the creation of the micro- and macrostructures of cellular materials can be thoroughly understood. The arrangement and connectivity of cells or pores define structures at the microscopic scale, and these structures are frequently modeled using sophisticated computational methods such as digital image correlation (DIC) [75,76] and finite element analysis (FEA) [77][78][79][80]. By simulating and visualizing interior geometries accurately, these techniques shed light on how microstructural characteristics affect the overall properties of materials. Contemporary approaches additionally integrate additive manufacturing technologies, such as three-dimensional printing [81,82], to fabricate intricate, regulated microstructures that emulate native biological materials. At the macroscale, homogenization procedures are used to close the gap between bulk material properties and microstructural behavior. This involves predicting the mechanical response of the entire material by averaging its microscale properties. By combining these multiscale methods and verifying them experimentally using imaging methods such as computed tomography (CT) scanning [83] and mechanical testing, cellular materials can be fully understood and optimized for a range of engineering uses. These techniques help to clarify the connection between mechanical properties and microstructure, while also making it easier to design materials with tailored characteristics in response to particular building requirements.
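One common way to formalize the selective removal of material described above is a density-based interpolation of the SIMP type, in which each element carries a design density x in [0, 1] and intermediate densities are penalized. The snippet below is a schematic illustration of this interpolation only, not a reproduction of any specific optimization workflow cited here.

```python
# SIMP-style penalized modulus: p > 1 pushes the optimizer toward
# clear solid (x = 1) or void (x = 0) regions of the cell walls.
def simp_modulus(x: float, E0: float = 1.0, Emin: float = 1e-9, p: float = 3.0) -> float:
    return Emin + x**p * (E0 - Emin)

for x in (0.0, 0.25, 0.5, 1.0):
    print(f"x = {x:.2f} -> E = {simp_modulus(x):.4f}")
```

The penalization exponent p makes intermediate densities structurally inefficient, which is what drives topology optimizers toward crisp solid/void cell-wall layouts.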
The literature devoted to the creation of cellular material microstructures is addressed in this section. Cellular materials are usually produced using two main techniques. One approach involves processing techniques, wherein objects with honeycomb or lattice patterns are designed using 3D printing and materials like polymers, metals, or ceramics. Foaming is a different processing method that creates cellular materials by adding gas or bubbles to a liquid or molten form before solidifying [45,84]. Research work that defines types of cellular materials and methods of microstructure generation is shown in Table 4. The second and most popular approach uses computational techniques, such as topology optimization and finite element analysis, to create cellular materials virtually. Using these techniques, the behavior of materials under various conditions is simulated, and their microstructure is optimized to produce desirable characteristics such as stiffness, strength, or thermal conductivity [46,[90][91][92]. Research studies that used computational techniques for the microstructure formation of cellular materials are shown in Table 5; reported approaches include, for example, 3D CAD models of a 3D open-cell structure [49] and digital volume correlation (DVC) of polymethacrylimide (PMI) foam [39].

The engineering materials known as "acoustic metamaterials" can stop elastic or acoustic waves from propagating within a given frequency range [43,100,101]. Localized elastic resonances are generally produced by a softer material encircling a dense core. This idea was modified to create a straightforward body-centered cubic lattice structure, as seen in Figures 1 and 2. The lattice structures produce low-frequency resonances by creating a discontinuity in the unit cell's cross-struts. The three primary components of this unit cell are the resonator, inner rods, and outer frame, as shown in Figure 2. This innovative approach allows the fabrication of lattice structures with low-frequency bandgaps, preventing elastic wave propagation within the material [43].

As space-filling arrays of non-overlapping convex polytopes, random tessellations are frequently employed as models for polycrystalline or cellular materials [46]. In foam structures, the facets of these tessellations depict closed-cell foam and the edges represent open-cell foam. Unlike deterministic models such as the Weaire-Phelan foams [102] and the Kelvin approach [103], they frequently fall short of adequately representing the variety of cell sizes and shapes found in solid foam structures. For modeling foam structures, non-overlapping sphere packings provide random Laguerre tessellations, which show great promise. Tessellations generated from random sphere arrangements blend the regular and random features found in actual foams. Grain growth simulations for sintered materials demonstrate how polycrystalline materials can be modeled using Laguerre tessellations. The relevance of mosaics created by sphere fillings, seen in Figure 3 [46], was further highlighted by the early stages of sintering processes, which are characterized by models based on random sphere fillings.
Figure 3. Visualization of the edge system of a Laguerre tessellation of a dense packing of spheres, together with some of the generating spheres [46].

Regular hexagonal honeycomb structures of a representative material cell may be used for numerical analysis; an example with wall length l = 1 mm and thickness t = 0.1 mm was analyzed by Nguyen & Noels (2014) [35] in this context. However, random generation is frequently used to create such microstructures. As an example, the Voronoi tessellation technique is popular and was utilized to generate both regular and perturbed hexagons, as shown in Figure 4; a minimal generation script is sketched below.
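A perturbed-hexagonal generation of this kind can be written in a few lines of Python with SciPy; the grid size and perturbation amplitude below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
nx, ny, l = 10, 10, 1.0   # grid size and nominal wall length (mm)
delta = 0.3               # perturbation amplitude as a fraction of l

# Seeds on a triangular lattice: their Voronoi cells are regular hexagons.
pts = np.array([(i * np.sqrt(3) * l + (j % 2) * np.sqrt(3) * l / 2, 1.5 * l * j)
                for j in range(ny) for i in range(nx)])

# Randomly perturbing the seeds yields irregular (perturbed) hexagons.
pts += delta * l * rng.uniform(-1.0, 1.0, pts.shape)

vor = Voronoi(pts)        # cell walls are the Voronoi ridges between seeds
print(f"{len(vor.ridge_vertices)} candidate cell walls generated")
```

Setting delta to zero recovers the regular hexagonal honeycomb, so a single parameter sweeps continuously from ordered to disordered microstructures.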
In the work presented by Zargarian et al. [49], the elastic properties of cellular solids were discussed using finite element analysis (FEA) for the tetrakaidecahedral (Kelvin) unit cell. This unit cell is a 14-sided polyhedron composed of 6 square faces and 8 hexagonal faces, with 36 edges and 24 vertices. The joints in the unit cell are formed by two lines parallel to the center line of each edge, as represented in Figures 6 and 7.

Chai et al. (2020) [39] noted that polymethacrylimide (PMI) foams have high stiffness and strength compared to other foams. In situ X-ray computed tomography was used to examine the 3D microstructure under quasi-static compression. The computed tomography of cylindrical samples of two densities, 52 and 75 kg m−3, is shown in Figure 8.
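The face, edge, and vertex counts quoted above for the Kelvin cell can be checked with Euler's polyhedron formula; a short verification is given below.

```python
# Tetrakaidecahedral (Kelvin) cell: 6 squares + 8 hexagons.
faces = {"square": 6, "hexagon": 8}
F = sum(faces.values())                                # 14 faces
E = (4 * faces["square"] + 6 * faces["hexagon"]) // 2  # each edge is shared by two faces
V = 2 - F + E                                          # Euler's formula: V - E + F = 2
print(F, E, V)                                         # -> 14 36 24, as quoted from [49]
```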
Three basic cellular solid structures (hexagonal, tetragonal, and triangular) were studied by Iyer et al. [36] to illustrate piezoelectrically active cellular solids, highlighting their bending and stretching advantages. Their study focuses on a two-dimensional honeycomb piezoelectric porous solid, as shown in Figure 9, using a comparative analysis based on finite element modeling. Figure 9 presents the longitudinally poled structures (a-c), in which the porosity is aligned with the poling direction (i.e., the three-direction), and the transversely poled structures (d-f), in which the porosity is orthogonal to the poling direction, for the honeycomb, tetragonal, and triangular structures [36].

Another manufacturing approach involves creating universal weave designs for various fabric structures, such as triangular (TR), trapezoidal (TPZ), rectangular spacer single layer (RECTSL), rectangular spacer structure with double layer (RECTDL), and sandwich structure connected with core piles (SPY). These structures consist of two layers of skin fabric and a center fabric layer, or pile yarn (in the case of SPY). The generalized weave design is modified to achieve the desired fabric structure by calculating the number of picks required for each fabric section based on the required geometric parameters. The resulting fabric structure is shown in Figure 11 [104].
Rajakareyar et al. (2023) [37] discussed periodic cellular lattice structures classified based on the shape of the cell envelope. The envelope has an orthorhombic, tetragonal, or cubic shape when it belongs to the Bravais lattice system shown in Figure 10; conversely, non-orthogonal envelopes with irregular shapes correspond to triclinic, monoclinic, or hexagonal systems. The representative volume element (RVE) of the lattice is the basic unit cell, consisting of the unit-cell envelope and the lattice structure. Their work describes various representations of the RVE, including orthogonal and non-orthogonal bases, using the honeycomb lattice shown in Figure 10. Furthermore, it presents a discrete 2D hexagonal geometry with non-orthogonal cell envelopes, highlighting three types of voxels. Among them, green voxels represent voids within the RVE envelope that contribute to volume calculations but are not included in calculations involving periodic node pairs and stiffness-matrix determination.

There have been significant advances in spacer fabrics in recent years, resulting in a range of specialized textile products suitable for different applications. These technical textile innovations have led to the emergence of spacer fabric as a versatile material capable of replacing traditional materials such as polyurethane foam in a variety of applications, including car seats, wheelchairs, sofas, and mattresses. The increasing prominence of technical textiles has driven the adoption of spacer fabrics due to their superior performance and added value. These 3D spacer structures are typically manufactured using braiding, weft-knitting, and warp-knitting techniques, as shown in Figure 12.

Fernandes et al. (2015) [106] state that Peti-bol (EPP and EPS), Sofalca (EC), and CORKSRIBAS (AC), all Portuguese companies, provided samples for testing the mechanical properties of synthetic EPP and EPS in comparison to natural cork cellular material, as shown in Figure 13.
Creating an Archimedean solid involves square and hexagonal facets that can be folded into an octahedron [44]. Its edges were sliced to make it fold flat. The square facets are cut away in two stages, and the edges connecting them are trimmed. With only one degree of freedom, the resulting foldable shape rigorously converts from three dimensions to two. During fabrication, a 0.3 mm thick card is used, machine-cut creases and contours are created, and double-sided tape is used for assembly. The construction process of the truncated octahedron is shown in Figure 14 [44].

A very specific case of the cellular solid is a microstructure having a longitudinal direction, in which the material distribution is constant and the cross-section perpendicular to this direction contains specific, regular geometrical patterns (Figure 15). Such structures are similar to fiber-reinforced structures in composite materials engineering, but they are formed using a single material. Here, a periodic cellular solid is made up of repeating unit cells [84,[107][108][109], which are the Representative Volume Elements (RVE) frequently discussed in the context of the homogenization method. They can be formed by so-called programming procedures, and some of the programming imperfections appearing in different research studies are shown in Table 6.
According to the model delivered by Restrepo et al. (2016) [38], two types of programmable materials are a stretching-dominated honeycomb with a kagome unit cell and a bending-dominated honeycomb with a hexagonal unit cell, as shown in Figure 16. The base material used was a shape memory polymer. Shape memory polymers can return to their original shape after being deformed when exposed to a specific stimulus, such as heat [110,111]. The study also considers an aluminum honeycomb made from sheets of 5052 alloy that are 0.004 inches thick; this honeycomb has nominally hexagonal cells that are 1/4 inch in size [38].

Figure 13. Programmable aluminum base material: (a) before programming, (b) after programming in the X1 direction, (c) after programming in the X2 direction [38].

Table 6. Imperfections in cellular materials.
Homogenization Methodology

It is widely known that the calculation of the effective (homogenized) material characteristics of cellular solids may proceed in a way similar to that of periodic composite materials. One may use the equality of deformation energy for the homogenized and original solid to determine effective bulk and shear moduli, or directly determine the components of the effective elasticity (material) tensor. This homogenization procedure is even simpler for cellular media, owing to the programmable distribution of a skeleton made of a single isotropic elastic solid. The two types of programmable material of Restrepo et al. (2016) [38], the hexagonal (H) and kagome (K) material systems, are most frequently considered in the literature. Their effective properties, namely the effective mass density ρ, the effective elastic moduli in two perpendicular directions (X1, X2), and the yield strength, can be defined by analytical expressions in which E_s is the elastic modulus, σ_y^s is the yield strength, ρ_s is the density of the solid, t is the wall thickness, l is the wall length, and θ is the angle between joints shown in Figure 16. Similarly, in the case of the kagome system, the effective elastic modulus and density are expressed in terms of E_s and ρ_s [38], where a specific value of the joint modulus K_h = 8.5754 N·m/rad can be determined by FEM simulations.

An et al. [43] observed that the static properties of the 3D acoustic metamaterial (AM) lattice structures shown in Figure 2 can be defined analytically through the relative density [43]. The mechanical properties of the 3D acoustic metamaterial were calculated following [116], with the strain energy k_ε obtained from the deformation components [43]; here the volume of the unit cell is V = a^3, k_uc is the stiffness matrix of this cell, a is its edge length, and R is the radius marked in Figure 2. The bulk modulus k* and the buckling strength of such a 3D acoustic metamaterial can then be defined in terms of k_ucσ, the stiffness matrix in global coordinates [43].

Further, Redenbach [46] proposed a statistical approach in which the cell volume v_S may follow a lognormal and/or gamma probability distribution. The lognormal density reads

g(v_S) = 1/(v_S σ √(2π)) exp(−(ln v_S − μ)^2 / (2σ^2)), v_S ≥ 0, (12)

while in the gamma case

f(v_S) = v_S^(k−1) exp(−v_S/θ) / (Γ(k) θ^k), v_S ≥ 0,

where k and θ are the shape parameters. Further, for the kirigami-inspired foldable 3D cellular structures, the loop-closure equation for the spherical linkage is engaged [44], supplemented by an additional kinematic equation [44] and the applicable transformation matrix proposed in [44], with φ being a dihedral angle.
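Although the original closed-form expressions of [38] are not reproduced here, effective honeycomb properties of this family can be sketched with the classical Gibson-Ashby in-plane relations; the snippet below assumes that form and uses the wall dimensions of the Nguyen & Noels example (l = 1 mm, t = 0.1 mm) with illustrative aluminum-like solid properties.

```python
import numpy as np

def hexagonal_honeycomb(E_s, rho_s, t, l, h=None, theta_deg=30.0):
    """Classical Gibson-Ashby in-plane estimates for a hexagonal honeycomb."""
    h = l if h is None else h
    th = np.radians(theta_deg)
    rel = (t / l) * (h / l + 2.0) / (2.0 * np.cos(th) * (h / l + np.sin(th)))
    # X1-direction modulus, dominated by cell-wall bending:
    E1 = E_s * (t / l) ** 3 * np.cos(th) / ((h / l + np.sin(th)) * np.sin(th) ** 2)
    return rel * rho_s, E1

rho_eff, E1 = hexagonal_honeycomb(E_s=70e9, rho_s=2700.0, t=0.1e-3, l=1.0e-3)
print(f"effective density {rho_eff:.0f} kg/m^3, E1* = {E1 / 1e6:.0f} MPa")
```

The cubic dependence of E1* on t/l makes explicit why small geometric imperfections of the walls translate into large scatter of the effective stiffness.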
Close to these analytical models, a variety of numerical approaches have been delivered in this area. In particular, Nguyen & Noels [35] addressed the issue of microscopic and macroscopic instabilities in cellular materials, focusing specifically on the microstructure of hexagonal honeycombs. Their study employs the discontinuous Galerkin method on the macroscopic scale and the finite element method on the microscopic scale. The instability in both cases is resolved using an arc-length path-following technique, formulated at the macroscopic scale in terms of ΔU_{n+1}, the macroscopic load-correction parameter, Δμ_{n+1}, the macroscopic displacement-correction parameter, δu, the macroscopic correction increment, and the stiffness matrix K; analogous statements hold at the microscopic scale with ΔU_{n'+1}, Δμ_{n'+1}, the corresponding correction increment δu, and the microscopic stiffness k'.

Further, some elastoplastic models have been developed (Ling et al. [47]), requiring a definition of the von Mises yield stresses of expanded polystyrene (EPS) in both 3D and 2D models, where σ_c denotes the compressive and σ_v the shear stress.

Zheng et al. [41] proposed another approach based on the conservation of mass and momentum across the shock front according to continuum-based stress theory; combining these relations, Eqs. (23) and (24), results in Eq. (25), where φ(t) is the shock-front speed, v_A, ε_A, and σ_A are the physical quantities ahead of the shock front, and v_B, ε_B, and σ_B are those behind it.

Later on, Zargarian et al. [49] introduced a model in which a mechanical property of the cellular material is expressed in terms of density as

P*/P_s = C (ρ*/ρ_s)^n, (26)

which for Young's modulus reads

E*/E_s = C (ρ*/ρ_s)^n. (27)

An analytical derivation provides Young's modulus as a function of the relative density of the tetrakaidecahedral unit cell (Eq. (28)), together with an equation for the Poisson's ratio (Eq. (29)) [49]. Here P* and P_s are the properties of the cellular and bulk material, ρ* and ρ_s are the densities of the cellular and bulk material, and C and n are constants that depend upon the topology of the cell and the shape of the walls.

Some nonlinear models have been developed recently and are reported in [50]. The constitutive equations are composed of three different contributions, namely (i) the linear elastic region, (ii) the plateau region, and (iii) the densification region. Further, the Rusch model [50] assumes a power function, whereas the Gibson proposition [50] is based on a different representation. A new empirical model has also been recommended [50], and the specific energy W and the efficiency E can be expressed analytically as functions of strain [50],

W(ε) = ∫_0^ε σ dε', E(ε) = W(ε)/σ(ε),

where σ is the engineering stress, ε is the engineering strain, σ_yield is the yield stress, and ε_D is the strain value characteristic of the densification phase; A and B are density-dependent parameters, while m and n are not.

Moreover, Ohno, Okumura and Noguchi [34] used an updated Lagrangean formulation, in which the microscopic velocity field is decomposed into contributions of the macroscopic strain rate, the macroscopic spin, a periodic fluctuation, and a rigid translation,

u̇_i(y, t) = (Ḋ_ij + Ẇ_ij) y_j + u̇⁰_i(y, t) + ċ_i(t),
where u̇⁰_i(y, t) is the periodic fluctuation field of the Lagrangean formulation and ċ_i(t) is the rigid-translation rate. The principle of virtual work for the unit cell of an infinite periodic material under macroscopically uniform deformation is provided in this approach, where the integral of π̇_ji δu̇_{i,j} over the cell represents the microscopic virtual work rate and Π̇_{i,j} is the macroscopic work rate.

According to Chai et al. [39], the polymethacrylimide (PMI) foam cell morphology was characterized by the gyration tensor,

G_αβ = (1/N) Σ_i (r_αi − m_α)(r_βi − m_β),

where r_αi are the coordinates of voxel i and m is the barycenter of the cell. The dependence of the effective properties on the relative density is represented by the scaling law E*/E_s = c (ρ*/ρ_s)^n, where c and n are fitting parameters, E* is the property of the cellular solid, E_s is the property of the constituent material, and ρ*/ρ_s is the relative density.

Very recently, Rajakareyar et al. (2023) [37] delivered an asymptotic homogenization process based on double-scale expansion theory [117][118][119][120][121][122]. The homogenized macroscopic elasticity tensor C^H_pqrs is expressed as the volume average, over the base RVE of volume V, of the locally varying stiffness tensor E_pqrs acting on the difference between the prescribed macroscopic strain and the locally periodic microscopic strain ε_pq. For 2D and 3D voxel models, the corresponding expressions are based on displacement fields χ^ij, found by solving the elasticity equations with prescribed macroscopic strains, where v is the virtual displacement field. Theoretically, the homogenized elasticity tensor C^H_ij in Voigt notation for an orthotropic lattice cell takes a diagonal-block matrix form [37].

Let us note, by the way, that if a given programmed microstructure is too complex for the development of analytical formulas for the homogenized isotropic material, it is possible to use a Finite Element Method system to build a 2D or 3D discretization with solid elements, to apply kinematic periodicity conditions on the outer edges of the unit cell, and to simulate uni- or bi-directional extension as well as shear deformation of this periodic element. This methodology has been extensively studied in the context of various composite materials and may be applied here without modification.

FEniCS is a popular open-source computing platform for solving partial differential equations (PDEs) and can be used to model cellular materials. In FEniCS, problems involving different materials are handled by defining subdomains within a domain; FEniCS also supports complex subdomains and variable coefficients [123]. According to Bleyer (2020), the macroscopic stiffness of the material unit cell follows from the homogenization formula

[k](x, y) = [[12⟨μ⟩s^2, 0], [0, 12⟨E_oe⟩s^2]], (46)

where E_oe is the oedometric modulus and C_hom is the macroscopic stiffness.

Topology optimization of cellular material microstructures has been investigated using the Harmony Search (HS) algorithm in conjunction with the Bi-directional Evolutionary Structural Optimization (BESO) approach. The bulk (K) or shear (G) modulus optimization problem for the cellular material is defined in terms of the homogenized elasticity matrix D^H [124]. The elastic modulus, determined by the Gibson-Ashby scaling law, is formulated as a function of the mechanical properties and relative density of the solid structure in Equation (50),

E_lat/E_sol = (ρ_cell/ρ_sol)^n, (50)

where E_lat and E_sol are the elastic moduli of the lattice structure and bulk material, ρ_cell/ρ_sol is the relative density, and n is a coefficient. According to this theory, the mechanical properties of porous structures depend on whether they exhibit bending- or stretch-dominated mechanical responses [125].
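In practice, the constants C (or c) and n of Eqs. (26), (27), and (50) are identified from simulations or experiments by a linear fit in log-log space. The sketch below shows this fitting step on made-up placeholder points, not on values from any cited study.

```python
import numpy as np

rel_density = np.array([0.05, 0.10, 0.20, 0.30])      # rho*/rho_s (hypothetical)
rel_modulus = np.array([0.004, 0.018, 0.075, 0.160])  # E*/E_s    (hypothetical)

# log(E*/E_s) = log(C) + n * log(rho*/rho_s): a straight line in log-log space.
n, logC = np.polyfit(np.log(rel_density), np.log(rel_modulus), 1)
print(f"C = {np.exp(logC):.2f}, n = {n:.2f}")  # n near 2 suggests bending-dominated
```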
An isotropic, nonlinear material model called hyperfoam is used to describe elastomeric foams that behave in a hyperelastic manner. Elastic deformation up to 90% compressive strain is permitted, and the model is intended for finite-strain applications. Its elastic behaviour is determined by the strain energy function [106]

U = Σ_{i=1}^{N} (2μ_i/α_i^2) [λ_1^{α_i} + λ_2^{α_i} + λ_3^{α_i} − 3 + (1/β_i)(J^{−α_i β_i} − 1)],

where N is the polynomial order, λ_i are the principal stretches, J is the elastic volume ratio, μ_i are the shear moduli, and α_i and β_i are non-integral curve-fitting exponents.

One also notices that having closed-form analytical equations describing effective properties is very attractive when optimizing (programming) a cellular solid microstructure, and it is equally important for further uncertainty analysis. Such equations may be implemented directly in computer algebra systems with statistical libraries or probabilistic features. One can then introduce experimentally motivated probability distributions of the cellular solid design parameters and carry out analytical integration if the probability integrals exist; otherwise, Monte-Carlo simulation always allows for statistical estimation (cf. Table 2).
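As a minimal illustration of the Monte-Carlo route just mentioned, the sketch below propagates a lognormal uncertainty in relative density through the scaling law E* = E_s C (ρ*/ρ_s)^n; every numerical value is an illustrative assumption, not data from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(42)
E_s, C, n = 3.0e9, 1.0, 2.0  # assumed solid modulus (Pa) and assumed law constants

# Lognormal relative density centered near 0.10 with ~10% scatter (assumed).
rel_density = rng.lognormal(mean=np.log(0.10), sigma=0.10, size=100_000)

E_star = E_s * C * rel_density**n
print(f"mean E* = {E_star.mean() / 1e6:.1f} MPa, "
      f"CoV = {E_star.std() / E_star.mean():.2%}")
```

Because n > 1, the output coefficient of variation exceeds that of the input density, a simple numerical confirmation of the uncertainty amplification discussed above.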
Experimental Works and Some Manufacturing Method

Experimental work was conducted on programmable cellular materials, including the thermo-mechanical properties of the epoxy-based shape memory polymer (SMP) NGDE2, using a universal testing machine and extensometer as per ASTM D638. The measured mechanical properties are shown in Table 7 [38]. The mean and standard deviation for the kagome (K) and hexagonal (H) cellular materials are shown in Table 8 [38]. The mechanical behavior of the hexagonal material programmed at various θ is shown in Tables 9 and 10, for programming in direction X1 and testing in direction X2 (P1T2) and for programming in direction X2 and testing in direction X2 (P2T2) [38]. For reprogramming, the in-plane compression moduli for various materials corresponding to direct programming are shown in Table 11. Similarly, for the kagome (K) material, programming in direction X2 and testing in direction X1 (P2T1) is reported in Table 12, with rows such as: Programmed 2%: 107.5 ± 6.9, 17.9 ± 0.03, 0.44 ± 0.01, 0.26 ± 0.06, 0.10 ± 2.5 × 10^−3; Programmed 5%: 92.3 ± 10.2, 8.3 ± 0.18, 0.31 ± 0.02, 0.28 ± 0.02, 0.13 ± 3.7 × 10^−3 [38].

Table 13. Modulus for in-plane compression (E1*) from the reprogramming trials on a specimen from the K material system [38].

The stress-strain graphs demonstrate that the synthetic materials (EPP and EPS) exhibit a higher Young's modulus than agglomerated cork, allowing them to densify at higher strains and reach the plateau zone with less deformation, as shown in Figure 17 [106]. An optimal response for energy absorption consists of a protracted plateau at moderate stress, followed by densification at high strain. In comparison to cork, synthetic foams perform poorly under repeated impacts. Young's modulus is lower for cork than for synthetic foams, depending on the relative density, as shown in Figure 17 [106].

The mechanical properties of expanded polystyrene (EPS) foam under combined compression and shear loading were examined, as shown in Figures 18 and 19. That study successfully measured EPS foam strain with a DIC strain-field measurement system using a specialized device integrated with an INSTRON testing machine. The results show that the yield shear stress is lower than the compressive stress, and pure compression tests of four different densities of EPS foam show different compressive stress-strain behavior [48].

A comparison between finite element method (FEM) predictions of Young's modulus and experimental measurements is shown in Figure 20, where the values are normalized by the modulus of the lowest solid fraction. FEM predicts an increase of 38.2%, while experiments show an increase of 33.1%, which is in good agreement. This experiment was designed to observe and verify the effect experimentally. Although the sample size has an impact on the experimental results, the goal was to validate the numerical simulations and demonstrate that redistributing material to the vertices significantly increases stiffness. The authors recommend expanding the experimental program, if facilities permit, to further explore the elastoplastic properties and failure modes [49].
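The plateau-then-densification response described above is exactly the regime in which the specific energy W(ε) = ∫σ dε and the efficiency E(ε) = W(ε)/σ(ε) introduced earlier are useful. The sketch below evaluates both on a schematic, synthetic stress-strain curve, not on measured data.

```python
import numpy as np

strain = np.linspace(0.0, 0.8, 400)
stress = np.where(strain < 0.05, 20.0 * strain,        # elastic rise (MPa)
         np.where(strain < 0.60, 1.0,                  # plateau at ~1 MPa
                  1.0 + 80.0 * (strain - 0.60) ** 2))  # densification

dW = 0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)  # trapezoidal rule
W = np.concatenate(([0.0], np.cumsum(dW)))               # specific energy, MJ/m^3
eff = np.divide(W, stress, out=np.zeros_like(W), where=stress > 0)

k = int(np.argmax(eff))
print(f"peak efficiency {eff[k]:.2f} at strain {strain[k]:.2f}")  # near densification onset
```

The efficiency peaks close to the onset of densification, which is why this measure is commonly used to select the working strain range of energy-absorbing foams.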
The fatigue response of additively manufactured titanium scaffolds with different unit-cell geometries and relative densities under cyclic compressive loading has also been investigated. The failure criterion adopted in the numerical simulations is based on the rapid increase in accumulated macroscopic strain in the representative volume element (RVE), as shown in Figure 21. S-N plots illustrating the fatigue life at different relative densities for each scaffold geometry were generated (Figure 21). Despite slight differences between the simulated and experimental relative densities, there is substantial agreement between the simulated and experimental results. Notably, the predicted fatigue life shortens significantly with increasing stress level, which is consistent with the experimental results. Both the simulated and experimental S-N curves exhibit linear behavior on a logarithmic scale (Figure 21), indicating a power-law relationship between uniaxial fatigue strength and fatigue life [126].
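The linear log-log behavior noted above corresponds to σ = A N^b; fitting this power law is a one-line regression, shown here on hypothetical sample points rather than data digitized from [126].

```python
import numpy as np

N = np.array([1e3, 1e4, 1e5, 1e6])          # cycles to failure (hypothetical)
sigma = np.array([60.0, 42.0, 30.0, 21.0])  # stress amplitude, MPa (hypothetical)

# Straight line on log-log axes: log10(sigma) = log10(A) + b * log10(N).
b, logA = np.polyfit(np.log10(N), np.log10(sigma), 1)
print(f"sigma = {10**logA:.0f} * N^({b:.3f}) MPa")  # negative exponent b
```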
The experimental samples of the 3D acoustic metamaterial were molded into a solid and fabricated from photosensitive resin with the help of laser-sintering-based 3D printing technology. Compressive tests were conducted on a 30 kN Instron (USA) testing machine at a loading rate of 1 mm/minute for the printed unit cells. The change of buckling strength with radius is shown in Figure 22 [43]. The bandgap frequency ranges for different sizes of resonators were calculated by the finite element method and are reported in Table 14 [43].

Table 14. The bandgap frequency ranges calculated by the finite element method for different sizes of resonators (unit cell L in cm) [43].

All simulation cells were included in the statistics, and a total number of cells equal to 50,000 was taken to investigate each set of parameters; the tessellation result is shown in Figure 23 [46]. The reconstruction visualizations of the two types of foam (open aluminum foam and open polymer foam) are shown in Figures 23 and 24.
All simulation cells were included in the statistics, and a total number of cells equal to 50,000 was taken to investigate each set of parameters; the tessellation result is shown in Figure 23 [46]. The reconstruction visualizations of these two types of foam (open aluminum foam and open polymer foam) are shown in Figures 23 and 24.
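As an illustration of how such tessellation statistics can be collected, the sketch below builds a 2D Voronoi tessellation of random seed points with scipy.spatial (which wraps the Qhull library mentioned later) and reports the mean and standard deviation of the bounded cell areas. It is a simplified 2D analogue, not the 3D Laguerre tessellation of [46]:

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(0)
points = rng.random((2000, 2))            # random seed points in the unit square
vor = Voronoi(points)                     # Qhull-backed Voronoi tessellation

areas = []
for region_index in vor.point_region:
    region = vor.regions[region_index]
    if len(region) == 0 or -1 in region:  # skip unbounded boundary cells
        continue
    # Voronoi cells are convex, so the hull of the cell vertices is the cell itself;
    # in 2D, ConvexHull.volume is the polygon area.
    areas.append(ConvexHull(vor.vertices[region]).volume)

areas = np.array(areas)
print(f"cells kept: {areas.size}, mean area: {areas.mean():.5f}, "
      f"std: {areas.std():.5f}")
```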
A specific manufacturing process for automatically producing cellular media is the folding process. The kinematics of the folding linkages can be described with the matrix method of Denavit-Hartenberg (DH) notation, as shown in Figure 25 [44,127]. The physical models of the seven types of folding processes are shown in Figure 26 [44].
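For readers unfamiliar with DH notation, the sketch below assembles the standard 4 x 4 homogeneous transform from the four classic DH parameters and chains it around a hypothetical four-link loop; the parameter values are placeholders, not those of the foldable truncated octahedron in [44,127]:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links using classic
    Denavit-Hartenberg parameters (joint angle theta, offset d,
    link length a, twist alpha), all angles in radians."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Chain the transforms around a hypothetical 4-link loop. For parameters that
# describe a genuinely closed linkage the product returns to the identity; the
# placeholder values below are not tuned to satisfy that closure condition.
params = [(np.pi / 3, 0.0, 0.0, np.pi / 2)] * 4
T = np.eye(4)
for theta, d, a, alpha in params:
    T = T @ dh_transform(theta, d, a, alpha)
print(np.round(T, 3))
```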
Avalle, Belingardi, and Ibba (2007) [50] provided two categories of cellular solids models: phenomenological models and micro-mechanical models. Phenomenological models focus on accurately representing experimental mechanical behavior without directly relating to the physics of the phenomenon. On the other hand, micro-mechanical models analyze the deformation mechanisms of the micro-cell structure under loading. A cubic sample of 50 mm in side length and a cylindrical sample of 100 mm in diameter and 35 mm in height were taken into account for the testing procedure shown in Figure 27.

The stress-strain behavior of the two foams is shown in Figure 28. Positive stress represents compression, and positive strain represents contraction. The solid line with stress drops is derived from in situ CT testing with pauses and coincides with the dashed line from continuous loading. Three stages were identified: pre-collapse, collapse, and densification. Young's modulus and collapse resistance increase with increasing initial density of the foam, while the initial densification strain decreases. Collapse strength is defined as the stress at the inflection point between the pre-collapse stage and the collapse stage [39].

Computer Simulations

Mechanical properties of programmable cellular materials (such as the Hexagonal (H) and Kagome (K) systems) may now be determined using dedicated numerical simulations [38]. The key problem is the adequate definition of the Dirichlet and Neumann boundary conditions. In the case of the Hexagonal (H) material system, such boundary conditions may be imposed along the directions x1 and x2, as depicted in Figure 29 [38].
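As a minimal illustration of imposing a Dirichlet (prescribed-displacement) boundary condition and extracting an effective stiffness from the reaction force, the sketch below uses a 1D chain of springs as a stand-in for a cellular unit cell; the stiffness values are arbitrary and the procedure is far simpler than the 2D homogenization of [38]:

```python
import numpy as np

# Effective stiffness of a 1D chain of springs under a Dirichlet boundary
# condition (prescribed displacement on the loaded boundary); spring constants
# are arbitrary placeholder values.
k = np.array([2.0, 1.0, 3.0, 1.5])       # spring stiffnesses between 5 nodes
n = k.size + 1
K = np.zeros((n, n))                      # global stiffness matrix
for e, ke in enumerate(k):
    K[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])

u_applied = 0.01                          # prescribed displacement (Dirichlet BC)
fixed = {0: 0.0, n - 1: u_applied}        # left end clamped, right end displaced
free = [i for i in range(n) if i not in fixed]

# Partition K u = f: solve for the free DOFs, then recover the reaction force.
u = np.zeros(n)
for i, val in fixed.items():
    u[i] = val
rhs = -K[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
reaction = (K @ u)[n - 1]                 # force conjugate to the applied displacement
print("effective stiffness:", reaction / u_applied)
```

The same partition-and-solve logic carries over to 2D or 3D unit cells, where the prescribed displacements encode the macroscopic strain and the reactions give the homogenized stress.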
Similarly, in the case of the Kagome (K) material system, the numerical analysis was based on the boundary conditions presented in Figure 30 [38]. The computed band structure of the considered lattice structure is shown in Figure 31, and the mechanical properties of the 3D acoustic metamaterial-based lattice were also calculated using the FEM approach [116]. The simulated and experimental transmission spectra for the 3 x 3 x 3-unit-cell 3D acoustic metamaterial-based lattice are shown in Figure 32.
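To give a flavor of how a locally resonant band gap shows up in a dispersion relation, the sketch below evaluates the textbook 1D mass-in-mass chain and reports the frequency intervals in which no real wavenumber exists; it is only a 1D stand-in with arbitrary parameters, not the 3D lattice band structure of [43,116]:

```python
import numpy as np

# 1D "mass-in-mass" chain, a textbook stand-in for a locally resonant acoustic
# metamaterial; all parameter values are arbitrary placeholders.
m1, m2 = 1.0, 0.5       # outer (host) and inner (resonator) masses
k, k2 = 100.0, 20.0     # host spring and resonator spring stiffnesses

omega = np.linspace(1e-3, 30.0, 2000)                # angular frequency sweep
m_eff = m1 + k2 * m2 / (k2 - m2 * omega**2)          # effective dynamic mass
cos_qa = 1.0 - m_eff * omega**2 / (2.0 * k)          # dispersion relation: cos(qa)

in_gap = np.abs(cos_qa) > 1.0                        # no real wavenumber -> stop band

# identify contiguous stop-band intervals
edges = np.flatnonzero(np.diff(in_gap.astype(int)))
bounds = np.concatenate(([0], edges + 1, [omega.size]))
for lo, hi in zip(bounds[:-1], bounds[1:]):
    if in_gap[lo]:
        print(f"stop band: {omega[lo]:.2f} - {omega[hi - 1]:.2f} rad/s")
```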
In the simulation procedure of the microstructure of cellular materials, random packings of cells may also be considered, and the cell volumes may follow lognormal or gamma statistical distributions. The results for both cases are shown in Figures 33 and 34 [46]; the QHull software, 31 August 2020 (8.0.2), was used to compute the results in the form of mean values and standard deviations of the surface area [128].

The periodic arrangement of truncated octahedrons in cellular assemblies can be conceptualized as a series of closed-loop mechanisms, each consisting of four interconnected truncated octahedrons. This mechanism is called a kinematic unit and represents the entire cell assembly. By kinematic analysis of this unit alone, one can derive a kinematic model of the entire assembly, specifically focusing on a closed loop consisting of four foldable truncated octahedrons labeled Cell A, Cell B, Cell C, and Cell D. Cells A-B and C-D are connected by a type 5 connection, while cells B-C and D-A are connected by a type 4 connection. The kinematic cell is shown in Figure 35 as a closed loop, together with the corresponding transmission loop [44].
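As a small sketch of how sphere (or cell) volumes can be drawn from lognormal and gamma distributions with a matched mean and coefficient of variation, which is the kind of input the tessellation study above starts from (the numerical values are illustrative only):

```python
import numpy as np

# Draw cell volumes from gamma and lognormal distributions with the same mean
# and coefficient of variation (CV); numbers are illustrative placeholders.
mean, cv = 1.0, 0.5
rng = np.random.default_rng(1)

# gamma: mean = shape * scale, variance = shape * scale**2
shape = 1.0 / cv**2
scale = mean * cv**2
gamma_volumes = rng.gamma(shape, scale, size=50_000)

# lognormal: if log V ~ N(mu, sigma^2), then CV^2 = exp(sigma^2) - 1
sigma = np.sqrt(np.log(1.0 + cv**2))
mu = np.log(mean) - 0.5 * sigma**2
logn_volumes = rng.lognormal(mu, sigma, size=50_000)

for name, v in [("gamma", gamma_volumes), ("lognormal", logn_volumes)]:
    print(f"{name:9s} mean = {v.mean():.3f}, cv = {v.std() / v.mean():.3f}")
```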
Uncertainty Analysis Aspects

According to Restrepo et al. (2016) [17], the mechanical properties of programmed materials present uncertainties arising from several factors during manufacturing and testing. In manufacturing, defects such as broken walls, misalignments, and thickness variations lead to a ±2% uncertainty in the modulus of elasticity due to broken walls. Sample size effects can result in uncertainties ranging from ±1% to ±5% in the effective properties, particularly when sample dimensions are smaller than the critical buckling mode wavelengths. Changes in material symmetry resulting from programming can introduce ±3% uncertainty in certain mechanical properties. The controlled introduction of morphological imperfections during programming can generate up to ±4% uncertainty in the effective moduli. Although analytical models have well-known limitations in predicting the behavior of manufactured samples, finite element simulations can reduce uncertainties by up to ±2%. Furthermore, exogenous factors, such as frictional boundary conditions and strain rate sensitivity, can cause variations of up to ±3% in the effective mechanical properties.
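As a hedged illustration of how such percentage uncertainties could be combined, the Monte Carlo sketch below treats them as independent multiplicative perturbations of a nominal effective modulus (an assumption, since the sources listed above are not necessarily independent):

```python
import numpy as np

# Monte Carlo sketch of how independent percentage uncertainties could combine
# into the spread of an effective modulus; nominal modulus and uncertainty
# magnitudes are illustrative assumptions, not values from [17].
rng = np.random.default_rng(2)
E_nominal = 1.0                                  # normalized effective modulus
sources = {                                      # assumed 1-sigma relative errors
    "broken walls": 0.02,
    "sample size": 0.05,
    "material symmetry": 0.03,
    "imperfections": 0.04,
}

n = 100_000
samples = E_nominal * np.ones(n)
for sigma in sources.values():
    samples *= 1.0 + rng.normal(0.0, sigma, n)   # multiplicative perturbations

print(f"mean = {samples.mean():.4f}, std = {samples.std():.4f} "
      f"({100 * samples.std() / samples.mean():.1f}% relative spread)")
```

For small, independent relative errors this is roughly equivalent to adding the individual variances in quadrature.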
The efficiency and dependability of the suggested periodic truss-lattice constructions were based on the acoustic metamaterial local resonance process [43]. Although full bandgaps were produced in the three-dimensional lattice structures, the precise mechanisms responsible for bandgap production remain unclear. Although the exact correlations between the geometric parameters and the bandgap width were unclear, parametric studies suggest that modifications of some geometric parameters can enlarge the bandgaps [43]. While the impacts of unit-cell size and material properties on the bandgaps were studied, their precise consequences are still unknown [43]. The suggestion of using composite lattice structures to achieve larger bandgaps raises further questions about their efficacy and applicability. Although there was agreement between the simulation findings and the experimental validation of the proposed designs, possible limits or discrepancies still need to be fully addressed. Overall, while the work's results provide new insights into reducing elastic waves and vibrations, questions about the mechanisms regulating bandgap generation and the viability of the suggested designs still need to be answered [43]. More investigation and testing are required to resolve these uncertainties and guarantee the practical effectiveness of the proposed lattice architectures.

According to Redenbach (2009) [46], the variations seen in Laguerre tessellations are produced by packed spheres with different volume distributions [46]. While tessellations from gamma and lognormal distributions differ little when sphere volumes vary slightly, these differences become more noticeable as sphere volumes vary significantly. Moreover, disparities in geometric properties appear when Laguerre tessellations are fitted to actual foam structures; these differences may result from production-related restrictions in the foam structure [46]. Examining these restrictions may help us better understand and provide better suggestions for choosing pertinent traits. Additionally, models of foam structures that are helpful for mimicking macroscopic material properties can be created by dilating the tessellations to meet the appropriate volume proportion. The modeling method is made more complex and unpredictable by its capacity to reproduce local variations in structure thickness using locally adaptable dilations.

The work on foldable arrays [44] investigates arrays made of truncated octahedrons with facets that have either finite or zero thickness. The arrays were kinematically identical despite differing mechanisms, e.g., Bennett 4R linkages or spherical 4R connections, guaranteeing continuous correlations among dihedral angles along creases. Although the work concentrates on particular connection types, there are possible applications in many other scientific and engineering domains, such as robotic arms and deployable habitats. While this study focused on truncated octahedrons [44], the idea applies to other polyhedrons as well, such as octahedrons and cuboctahedrons. During deployment, this extension presents issues with physical interference and connection conditions. Examining the application of alternative polyhedrons offers a fascinating direction for further theoretical and computational studies.

The uncertainty of cellular materials/solids addressed with modern computational techniques is summarized in Table 15.
Table 15. Computer simulation techniques versus uncertainty thresholds of cellular materials/solids.

Cellular automata simulations of microstructure evolution: uncertainty quantification in multiscale material modeling [129].
Finite Element Analysis (FEA): investigates the impact of functionally graded porosity on the energy absorption of metal lattices [130].
Coupled FEA and Discrete Element Method (DEM): develops a multiscale modeling framework to predict the mechanical response of polymer cellular solids under impact [40].
Thresholding segmentation: thresholding can be used to reduce segmentation errors and uncertainty in material imaging, especially when high contrast is present; proper selection of threshold values is important, and heterogeneous thresholding may be needed for cellular materials with complex microstructures [131].
Multiscale optimization framework considering microscale material uncertainties: designs optimized assuming spatially varying material uncertainties are up to 74% more robust (in terms of standard deviation of compliance) than designs optimized with uniform uncertainties when subjected to spatially varying uncertainties [132].
Ensemble methods for molecular dynamics simulations: ensemble methods play a key role in uncertainty quantification for molecular dynamics simulations of materials; using 25 replicas with 4 ns of simulation each can often achieve accurate and reproducible results [133].
Probabilistic cellular automata simulations and metamodeling: quantitative assessment of uncertainties is essential; advanced statistical methodology is needed for uncertainty propagation and sensitivity analysis [134].
Generative Adversarial Networks (GANs): utilize machine learning to optimize the design of cellular materials for bone implant applications [135].
Finite Difference Time Domain (FDTD) method: investigates bioinspired cellular designs for manipulating sound waves [136].

Discussion

A three-dimensional truss lattice structure is customized for effective absorption of low-frequency and broadband elastic waves. With the incorporation of local resonance mechanisms, these structures introduce band gaps that effectively hinder elastic wave transmission, which is facilitated by radius-hopping discontinuities in the cross braces of each unit. This innovative approach has practical advantages over traditional acoustic metamaterials. Studying the band structure of these lattice structures can provide insights into the influence of unit cell composition on band gap formation while also proposing composite structures aimed at enlarging the band gap width. In addition, the discussion addresses the enhancement of mechanical properties through targeted adjustments of cross-brace dimensions within the unit. Experimental verification highlights the effectiveness of this design strategy, paving the way for the development of grid truss structures capable of meeting bearing and vibration damping requirements.

Laguerre tessellation is derived from random sphere packing as a potential model for the microstructure of porous or polycrystalline materials, focusing on hard sphere packing with lognormal or gamma-distributed volumes. The geometric characteristics of Laguerre cells vary with the volume fraction of the sphere packing and the coefficient of variation of the volume distribution. Polynomial descriptions of certain element characteristic moments are provided, allowing tessellation models to fit real materials without the need for additional simulations.
Inspired by origami and kirigami, 3D cellular components with rigid faces that can be folded flat with a single degree of freedom were created. By identifying seven connection types between foldable truncated octahedrons, various 3D honeycomb arrays were constructed, characterized by spherical 4R links or a hybrid of spherical and Bennett 4R links. A thorough kinematic analysis confirmed the single degree of freedom of the array and the kinematic equivalence of thin and thick surfaces. Physical models validated the designs, highlighting their potential applications in deployable space habitats and robotic arms. This innovative approach extends the application of traditional folding technology to advanced, practical structures.

Cellular materials are homogenized using multiscale computational homogenization methods focusing on microbuckling and localization zones during macroscopic loading, using advanced methods including the discontinuous Galerkin method and classical finite element resolution. The aim of the review is to enhance the understanding of cellular material responses and provide valuable insights for structural design optimization.

The dynamic compression and shear tests on EPS foams of varying densities were limited to finite strain rates and loading angles due to experimental constraints. Future work should explore higher strain rates and a wider range of loading angles to better understand foam deformation behavior. Addressing these limitations and expanding the scope of this study could enhance understanding of the mechanical properties of EPS foams for different applications.

Regarding the dynamic impact behavior of metal foams, particularly their shock-like response at high loading rates, limitations in existing experimental methods are addressed and a virtual test method is proposed to explore rate sensitivity in cellular materials. By analyzing stress-strain states and deformation modes, this study reveals distinct behavior in cellular materials compared to dense metals. It identifies a unique curve representing dynamic stress-strain states and improves experimental techniques for characterizing rate sensitivity in real-world applications.

Finite element analysis (FEA) has been applied to study the elastic properties of cellular solids. Similarly, in situ X-ray computed tomography was used to explore the 3D microstructure of PMI foams under compression. Cellular structures were also studied to highlight their piezoelectric properties. Other work focuses on periodic cellular lattice structures based on Bravais lattice systems, providing insights into representative volume elements (RVEs) and their mechanical behavior.

The discussions include the development of fabric structures for various applications, advances in spacer fabrics, and modeling of programmable materials such as hexagonal and kagome unit cells. The mechanical properties of these materials were determined through numerical simulations and experimental verification. Restrepo et al.
(2016) [38] studied the uncertainty in the mechanical properties of programmed materials due to manufacturing defects and changes in material symmetry, highlighting the importance of finite element simulations in reducing these uncertainties. This review also deals with the fatigue response of additively manufactured titanium scaffolds and the mechanical properties of expanded polystyrene (EPS) foam under different loading conditions. The agreement between finite element predictions and experimental measurements emphasizes the validity of the numerical models used in these studies.

Overall, this review synthesizes theoretical models, numerical simulations, and experimental data to provide a comprehensive understanding of the mechanical properties of cellular materials and aims to guide future research and applications in this field.

Concluding Remarks

In conclusion, this review highlights the innovative potential of programmable cellular solids in engineering materials, providing tunable mechanical properties through strategic defects in the unit cell. The exploration of periodic lattice structures designed using acoustic metamaterial principles highlights their suitability for vibration control, as well as their lightweight and strong properties. Through band structure analysis, the influence of unit cell composition on band gap formation, as well as the influence of structural and material parameters, is clarified. Stochastic analysis within homogenization holds the promise of improving lattice structure performance by resolving uncertainties associated with manufacturing and environmental factors. Additionally, the research delves into 3D foldable cellular components inspired by origami and kirigami, demonstrating their diverse applications in areas such as space habitats and robotics.

This study provides a comprehensive overview of Laguerre tessellations, demonstrating their potential as microstructural models of cellular or polycrystalline materials. These tessellations can provide insights into material geometry by employing random sphere packings with lognormal or gamma-distributed volumes. Polynomial descriptions can be fitted to real materials such as open polymer and aluminum foams, highlighting changes in geometric properties under different filling parameters. Stochastic analysis helps evaluate the variability of these properties, which is critical to understanding factors such as sphere volume distribution and packing density. Furthermore, the homogenization technique establishes a link between microscopic geometry and macroscopic behavior, providing avenues for future research in mechanical property analysis. Stochastic homogenization helps predict the statistical distribution of mechanical parameters, aiding design optimization and reliability assessment. Integration of stochastic analysis and homogenization promises to enhance foam structural design, resulting in improved performance, robustness, and reliability.
We conclude that the second-order computational homogenization framework for cellular materials addresses microbuckling and macroscopic localization problems by integrating the Mindlin strain gradient continuum at the macroscale and utilizing the discontinuous Galerkin method. It goes beyond classical multiscale schemes and shows effectiveness in capturing microbuckling and localization bands, as demonstrated by uniaxial compression testing of hexagonal honeycomb specimens. Despite the limited drop height and mass, the deformation characteristics of EPS foam can be effectively analyzed using the novel drop-weight tower system, providing valuable insights into its mechanical performance under complex loading conditions. Integration of the INSTRON testing machine with the DIC strain field system shows that shear deformation significantly reduces the compressive strength of EPS foam under combined compression and shear loading. Furthermore, a 3D Voronoi-based finite element model successfully captured the dynamic behavior of metal foams under impact, demonstrating its efficacy in reproducing the impact-induced mechanical response. Finally, finite element simulations highlighted the influence of solid distribution on the elastic properties of open-cell porous materials. Adjustments in solid distribution affect the elastic modulus and Poisson's ratio, especially at different relative densities. Experimental validation supports these simulation results, emphasizing the importance of solid distribution in optimizing the mechanical properties of porous materials.

With the future integration of stochastic analysis into homogenization [134,135], there is potential to improve understanding and prediction of how morphological imperfections embedded in periodic cellular solids influence their effective mechanical characteristics. This stochastic approach offers insights into the variability and uncertainty associated with programmable materials, promoting more robust design and manufacturing methodologies. Furthermore, homogenization techniques allow the examination of the macroscopic behavior of these materials, covering imperfections and their influence on mechanical properties, thus advancing the understanding of customized applications.

Figure 1. (a) Sectional view of the unit-cell of a 3D acoustic metamaterial, (b) design of the unit-cell of a 3D AM-based lattice structure [43].

Figure 3. Visualizations of the edge system of a Laguerre tessellation of a dense packing of spheres together with some of the generating spheres [46].

Figure 4. Voronoï diagram of the hexagonal honeycomb: (a) regular control points; (b) generated regular hexagons; and (c) coordinate perturbation at each control point i [35]. Zheng et al. 2014 [41] used the Finite Element Method implementation in ABAQUS to study a cellular specimen with a volume of 30 × 20 × 20 mm and 600 nuclei, as shown in Figure 5.
Figure 5. (a) 3D Voronoi structure; (b) corresponding cell based on the FE model; and (c) middle section perpendicular to the 2nd direction [41].

Figure 8. (a) CT characterizations of two foams with initial density ρ0 = 52 and 75 kg m−3; (b) cell size (equivalent diameter) distribution. Symbols denote experimental data, and dashed lines are fits with the Gaussian function [39].

Figure 9. Schematics illustrate the piezoelectric cellular structures with nodal connectivity (α) of 3, 4, and 6, respectively (d-f), representing the honeycomb, tetragonal, and triangular structures studied in the present work. In the longitudinally poled structures (a-c) the porosity is aligned with the poling direction (i.e., the three-direction), while in the transversely poled structures the porosity is orthogonal to the poling direction [36].

The resulting fabric structure is shown in Figure 11 [104].

Figure 12. The construction of a foldable truncated octahedron. (a) A truncated octahedron. (b) First cutting step. (c) Second cutting step. (d) Folding process of the foldable truncated octahedron. (e) Pictures of a foldable truncated octahedron made of 0.3 mm thick card [44].

Figure 13. (a) Black EPS with a density of 90 kg/m3; (b) white EPP with a density of 60 and 90 kg/m3; (c) agglomerated with a density of 199 and 2016 kg/m3; and (d) expanded black with a density of 159 kg/m3 [106].
Figure 15. Programmable aluminum base material (a) before programming, (b) after programming in the X1 direction, and (c) after programming in the X2 direction [38].

Figure 16. Cellular material honeycombing: (a) bending-dominated honeycomb with a hexagonal unit cell and (b) stretching-dominated honeycomb with the kagome unit cell [38].

Figure 21. Numerical simulation of the accumulated strain vs. cycles at different alternating stress levels and S-N curves of titanium scaffolds with different relative densities compared to experimental results for rhombic, diamond, and truncated cuboctahedron structures [126].

Figure 22. Buckling strength of the 3D AM-based lattice unit-cells with different radii of the resonators [43].

Table 14. The bandgap frequency ranges calculated by the finite element method for different sizes of resonators [43].

Figure 23. Sections of realizations of the tessellations for the gamma case with parameters c = 0.2 and 2.0, and VV = 60%, c = 0.2 and 2.0 (from left to right) [46].
Figure 25. (a) A portion of a spatial linkage with coordinate systems under DH notation, (b) a portion of a spatial linkage with coordinate systems under DH notation, and (c) the setup of coordinate systems on the foldable truncated octahedron [44,127].

Figure 26. (a) Four types of connecting methods for two foldable truncated octahedrons in the horizontal direction and (b) three types of connecting methods for two foldable truncated octahedrons in the vertical direction [44].

Figure 28. In situ CT testing of the two types of foams (ρ0 = 52 and 75 kg m−3). (a) Stress-strain curves. The solid curves with stress drops are from the in situ CT test with pauses (step-hold), while the dashed curves are from continuous loading. (b,c) Volume renderings at different axial strains for the 52 kg m−3 and the 75 kg m−3 foam samples, respectively. Color-coding refers to axial strain fields obtained via DVC. Positive strain refers to contraction [39].
Figure 29. The effective properties for (a) material system H under programming and testing modes P1T2 and (b) material system H under programming and testing modes P2T2.

Figure 30. Effective properties for material system K under programming and testing mode P2T1.

Figure 31. FE simulation for the 3D AM-based lattice structure made of epoxy [43].

Figure 32. (a) FE simulation and (b) experimental results for the transmission spectrum of the graded AM-based lattice structure. The gray-shaded regions indicate the frequency ranges where vibration attenuates [43].

Figure 35. A loop composed of four thick-panel cells. (a) The setup of coordinate systems. (b) The schematic diagram of the mobile assembly of spherical 4R linkages and Bennett linkages [44].

Table 2. Modern approach status of stochastic approaches in cellular materials.

Table 3. Difference between open- and closed-cell foam.

Table 4. Processing techniques for the microstructure formation of cellular materials.

Table 5. Computational techniques for the microstructure formation of cellular materials.

Table 6. Imperfections in cellular materials.

Table 9. Summary for the H material system in mode P1T2. Mean ± standard deviation. The sub-index 2 indicates the testing direction [38].

Table 11. Modulus for in-plane compression (E2*) from the re-programming trials on a specimen from the H material system [38].

Table 13. Modulus for in-plane compression (E1*) from the reprogramming trials on a specimen from the K material system [38].

Table 15. Computer simulation techniques versus uncertainty thresholds of cellular materials/solids.
Vitamin Intake Reduces the Risk of Gastric Cancer: Meta-Analysis and Systematic Review of Randomized and Observational Studies

Aim

The association between vitamin intake and gastric cancer (GC) has been widely debated due to the relatively weak evidence. In this study, a meta-analysis of prospective and well-designed observational studies was performed to explore this association.

Methods

MEDLINE, the Cochrane Library, and ScienceDirect were searched for studies of vitamin consumption and gastric cancer. This produced 47 relevant studies covering 1,221,392 human subjects. Random-effects models were used to estimate the summary relative risk (RR). Dose-response, subgroup, sensitivity, meta-regression, and publication bias analyses were conducted.

Results

The RR of gastric cancer in the group with the highest vitamin intake was compared to that of the lowest intake group. For total vitamin intake, the RR was 0.78 (95% CI, 0.71-0.83). In 9 studies in which individuals were given doses at least 4 times above the tolerable upper intake level (UL) of vitamins, the RR was 1.20 (95% CI, 0.99-1.44). However, in 17 studies in which individuals received doses below the UL, the RR was 0.76 (95% CI, 0.68-0.86). Dose-response analysis was conducted for increments of intake of different vitamins (vitamin A: 1.5 mg/day, vitamin C: 100 mg/day, vitamin E: 10 mg/day), each associated with a significant reduction in the risk of gastric cancer: 29% for vitamin A, 26% for vitamin C, and 24% for vitamin E.

Conclusion

This meta-analysis clearly demonstrated that low doses of vitamins can significantly reduce the risk of GC, especially vitamin A, vitamin C, and vitamin E.

Introduction

Gastric cancer (GC) is the second leading cause of cancer-related mortality worldwide, with an estimated 989,600 new cases and 738,000 deaths in 2011 [1]. Despite the decrease in overall incidence, the total survival rate for GC patients has not improved significantly over the past two decades [2]. The only potentially curative treatment for GC is surgery, but only about 20-40% of patients can undergo radical resection. GC has become one of the main contributors to the total cancer burden in many parts of Asia [3]. Effective primary prevention strategies for GC, especially vitamin intake, have drawn considerable attention. For example, vitamins have been reported to play an important role in the prevention of GC in many studies [4,5]. Some in vitro studies have also suggested that vitamins may prevent GC through different processes, such as scavenging nitrite in the stomach, reducing oxidative stress, and inhibiting nitrosation. Since the 1970s, the association between vitamin intake and GC has been assessed in a large and rapidly expanding body of literature [6-8]. However, most RCTs (randomized, placebo-controlled trials) included were not designed primarily to investigate the relationship between vitamin consumption and GC and were performed in high-risk individuals. The current study is the first high-quality analysis of both prospective and retrospective studies to explore the relationship between vitamin intake and the risk of GC.

Search Strategy and Study Selection

MEDLINE, the Cochrane Library, and ScienceDirect were searched for studies of vitamin consumption and GC that were published only in English and performed on human participants, from inception to February 2, 2014. Search terms were as follows: (vitamin OR supplement OR food OR diet OR dietary) AND (gastric OR stomach) AND (cancer OR neoplasm OR carcinoma).
The reference lists of the articles identified were scanned manually for further potentially relevant studies. Authors were asked whether they knew of any useful additional information (S1 Table and S2 Table in S1 File). A study was included if it met the following criteria: 1) original article; 2) placebo-controlled, case-control, or cohort design; 3) vitamin intake as the exposure of interest; 4) GC occurrence reported; 5) odds ratio (OR) or RR, with the corresponding 95% confidence interval (CI). Animal studies, mechanistic studies, and non-peer-reviewed articles were excluded. This meta-analysis was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement checklist (Checklist S1).

Data Extraction and Quality Assessment

Four authors independently assessed the retrieved studies and extracted all data according to the pre-specified selection criteria. Disagreements were resolved by discussion. The following information was collected from each study: the last name of the first author, year of publication, study design, location, participant age, participant sex, study period, type of control subjects in case-control studies, sample size, type of vitamins evaluated and type of intake, the OR or RR with corresponding 95% CI for each category, and adjustments for confounders. When several articles discussed the same study, only the most recent or the one with the most complete data was included here. An evaluation system based on the Newcastle-Ottawa Scale (NOS) was used to estimate the quality of observational studies. The studies included here were evaluated for three major factors: selection, comparability, and exposure/outcome assessment. The perfect score was 10 stars, and studies with 7 or more stars were defined as high-quality. Because RCTs of low or inadequate methodological quality risk overestimating beneficial intervention effects, we also assessed the methodological quality of the RCTs in the following domains: allocation sequence, allocation concealment, blinding, follow-up, and other apparent biases.

Statistical Analysis

All analyses were performed with RevMan version 5.2 and STATA 12.0. P < 0.05 was considered significant. ORs or RRs were extracted from the studies included here, and their standard errors (SEs) were calculated from their respective CIs. A random-effects model was used to quantify the relationship between vitamin intake and the risk of GC, accounting for both within- and between-study variability (τ²). The measure of effect of interest was the RR with 95% CI. Because the absolute incidence of GC was low, the RR was mathematically similar to the OR in the studies included here. For this reason, all results were reported as RR for simplicity. Heterogeneity among studies was evaluated with the χ² test and the I² statistic [9]. To assess heterogeneity across all included studies, the variables of study design, geographic area, method of evaluation of vitamin intake, and dose were further examined in a meta-regression model. Subgroup stratification analyses were performed to assess variations in the influence of these variables on the overall results. Because the characteristics of the subjects, the method of assessment of vitamin intake, and the adjustments for confounders differed across studies, a sensitivity analysis was performed to assess any possible causes of heterogeneity and to evaluate the impact of different exclusion criteria on the overall outcome.
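As an illustration of the random-effects pooling described above, the sketch below implements a standard DerSimonian-Laird estimator on the log-RR scale for a handful of made-up study results; the paper does not state which estimator its RevMan/STATA analyses used, and these numbers are not the data of this meta-analysis:

```python
import numpy as np

def dersimonian_laird(rr, ci_low, ci_high):
    """Pool relative risks with a DerSimonian-Laird random-effects model.
    rr, ci_low, ci_high: per-study relative risks and 95% CI bounds."""
    y = np.log(rr)                                    # work on the log scale
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    v = se**2
    w = 1.0 / v                                       # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)                # Cochran's Q
    k = y.size
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / C)                # between-study variance
    w_star = 1.0 / (v + tau2)                         # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se_re = 1.0 / np.sqrt(np.sum(w_star))
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
    pooled = np.exp(y_re)
    ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
    return pooled, ci, tau2, i2

# illustrative made-up study results (highest vs. lowest intake RRs with 95% CIs)
rr = np.array([0.70, 0.85, 0.65, 1.05, 0.78])
lo = np.array([0.55, 0.70, 0.45, 0.80, 0.60])
hi = np.array([0.89, 1.03, 0.94, 1.38, 1.01])
print(dersimonian_laird(rr, lo, hi))
```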
The influence of each single study on the results was evaluated by removing each study from consideration one at a time. For the dose-response meta-analysis, only studies that reported the following data were analyzed: the number of case and control subjects, the RR or OR with its 95% CI, and at least three quantitative exposure categories. For each included study, the mean vitamin intake of each quantitative exposure category was assigned to the corresponding RR. Publication bias was assessed using funnel plots and Egger's test [10,11].

Quality scores of the observational studies are summarized in S4 Table and S5 Table in S1 File. Quality scores ranged from 7 to 10. The average score was 8 for case-control studies and cohort studies. In this way, all observational studies were found to be of high quality according to the NOS evaluation system. RCT quality scores were also evaluated (S6 Table in S1 File). Twenty-two studies were excluded because they did not report usable data. Four papers were excluded because they reported the same study. Eight studies were excluded because they did not investigate the association between vitamin intake and GC risk. Non-cohort studies and 142 reviews were also excluded.

Vitamin Dose

Analysis by vitamin dose showed that dosages below the UL (low dose) were associated with a lower risk of GC (Fig. 4). In 9 studies (n = 152,848), individuals were given doses at least 4 times above the UL (high dose), as summarized in Table 2.

Sensitivity Analyses and Meta-regression

Sensitivity analyses were conducted to explore possible causes of heterogeneity, and the effect of various exclusion criteria on the overall result was examined (data not shown). Sixteen studies that were not adjusted for total energy intake or dietary factors were omitted. Meta-regression analysis demonstrated that study design (P = 0.075), vitamin dosage (P = 0.006), and method of assessing vitamin intake (P = 0.006) were significant sources of heterogeneity. Study design alone explained 8.49% of the τ² in the meta-regression analyses, vitamin dosage explained 24.54% of the τ², and assessment of vitamin intake explained 23.84% (S8 Table in S1 File).

Publication Bias

The funnel plot did not show any obvious asymmetry (S6 Figure in S2 File). No publication bias was detected using Egger's test (P = 0.254).

Discussion

In this study, data were available for more than 1.2 million individuals and more than 11,000 GC events. This work provides convincing evidence that vitamin intake is associated with a reduced risk of GC, especially at low doses. This relationship between vitamin intake and GC risk was apparent and consistent across a wide range of stratified subgroups. The dose-response meta-analysis indicated that appropriate increases in vitamin intake (vitamin A: 1.5 mg/day, vitamin C: 100 mg/day, vitamin E: 10 mg/day) were associated with statistically significant decreases in the risk of GC: 36% for vitamin A, 35% for vitamin C, and 32% for vitamin E, respectively. In fact, since the 1970s, many observational studies and RCTs have evaluated the relationship between vitamin intake and the risk of GC, though results have been mixed. Zheng and Carman have provided evidence that higher vitamin intake may be relevant to the prevention of cancers of the upper digestive organs [59,64]. An interesting study from China also reported that higher circulating vitamin levels were associated with a reduced risk of incident GC [65]. However, other investigators concluded that supplementation with vitamins has no major impact on the occurrence of GC [49,55].
The discrepancy has several possible explanations, including differences in study design and type of vitamin intake (dietary or supplemental), differences in the vitamin dosages used, differences in the assessment of vitamin intake, and potential biases in each study. The lack of a statistically significant outcome in the clinical trials may reflect methodological limitations such as short follow-up periods and the high vitamin doses used. Several meta-analyses of RCTs have also analyzed the effect of vitamins on the prevention of gastrointestinal cancer [66-69]. In a meta-analysis, Wu found vitamin A intake to be inversely associated with GC risk, [66] while other researchers came to the opposite conclusion, finding that antioxidant vitamin supplements cannot prevent GC and may even increase overall mortality [67-69]. However, those meta-analyses had several limitations. First, the RCTs they included used doses higher than those typically obtained by individuals eating a balanced diet, and some trials used dosages well above the recommended UL [7, 40, 44, 45, 48, 50-52, 55, 56] (S9 Table in S1 File); the doses considered in the present study are more realistic. Second, prior articles excluded many retrospective case-control studies on this topic, even though those studies strongly suggested that vitamin intake can prevent GC. In fact, most RCTs included in previous meta-analyses were not designed primarily to investigate the relationship between vitamin consumption and GC, which led to a lack of adjustment for the main confounders of GC. Moreover, most of these RCTs were performed in high-risk individuals, such as long-time smokers [40,44,50,55,56] and subjects with a history of premalignant lesions, [8,42,54] and may therefore not reflect vitamin intake in normal-risk populations. Thus, the total number of subjects in previous meta-analyses was modest, and their conclusions should be treated with caution. The present paper includes many well-designed observational studies that were conducted in normal-risk populations and are closely related to the topic. Indeed, it should not be assumed that RCTs always provide high-quality evidence for therapy; [70] high-quality observational studies are also important sources of powerful evidence in meta-analyses [71].

Some studies have examined non-antioxidant vitamins that may affect GC prevention, [8,33,39,54] while others have focused on antioxidant vitamins (vitamin A, vitamin C, and vitamin E) [45,53,56]. In the daily diet, however, it is difficult to draw distinctions between non-antioxidant and antioxidant vitamins. In this study, we combined them and demonstrated that vitamin intake can reduce the risk of gastric cancer. The results of this meta-analysis indicate that relatively low doses of vitamins can prevent the occurrence of GC. Dose and method of administration are often clinically important and can be manipulated to prevent cancer [72]. For example, in the well-known ATBC clinical trial, [56] the long-term use of vitamin A (4 years) at a high dose (7.5 mg/day, about 2.5 times the UL) showed no benefit with respect to preventing lung cancer in high-risk individuals (smokers). By contrast, in an HCC study conducted in southwestern France, the authors emphasized that dietary vitamin A (2 mg/day, less than the UL) might have a distinct and important protective effect against lung cancer [73].
Some high-quality retrospective analyses have indirectly shown that relatively low doses of vitamins (below the UL) prevent cancer more effectively [74]. These conclusions are similar to ours. Notably, the dose-response analysis revealed that relatively low doses of vitamin A, vitamin C, and vitamin E can significantly reduce the risk of GC (vitamin A: 1.5 mg/day, vitamin C: 100 mg/day, vitamin E: 10 mg/day). These values are promising candidates for recommended vitamin intake dosages for GC prevention. However, the mechanism by which low doses of vitamins reduce cancer risk is still unknown, and some researchers have reported that long-term administration of mega-doses of vitamins can produce many adverse effects.

The current study also draws attention to the fact that vitamins from food (plant or animal) contribute more to reductions in GC risk than synthetic vitamin supplements. Some investigators have noted that the bioavailability of vitamins differs depending on whether the vitamin comes from food or is synthetic, which could explain this result. For example, Carr reported differences in bioavailability between synthetic and kiwifruit-derived vitamin C in a randomized crossover pharmacokinetic study [75]. In subgroup analyses by vitamin type, vitamin A, vitamin B, vitamin C, and vitamin E produced similar outcomes, but vitamin D did not. Vitamin D is not really a vitamin: it is the precursor to the steroid hormone calcitriol and plays an important role in determining cancer risk [76]. Accumulating results from preclinical and clinical studies strongly suggest that vitamin D deficiency increases the risk of developing cancer, and vitamin D supplements might be an economical and safe way to reduce the incidence of cancer and improve cancer prognosis and outcome. However, in the current meta-analysis, only 5 case-control studies explored the association between vitamin D and GC risk, [8,20,29,37,39] which might explain the discrepancy. Over the past 3 decades, many studies have reported mechanisms by which different types of vitamins may reduce the risk of GC. These include vitamins that act by becoming irreversibly oxidized, vitamins that reduce the concentration of nitrite in the stomach, and vitamins that limit free radical-mediated damage to the stomach epithelium [75]. In addition, some studies have indicated that vitamin E is a potent lipid-soluble antioxidant and might contribute to GC prevention by reducing oxidative stress [77].

Study Strengths and Limitations

The current study has several strengths. First, it addresses both non-antioxidant and antioxidant vitamins and covers a large number of human subjects (1,221,392), which considerably increased the statistical power of the analysis. Second, the results are less likely to be explained by recall and selection bias because 18 prospective studies (11 RCTs and 7 cohort studies) were included. Third, a statistically significant association was observed in most of the subgroups that adjusted for confounders, and these subgroups produced results similar to those of the other subgroups. Fourth, the current study included not only RCTs but also many other high-quality observational studies, which helped to identify the relationship between vitamins and GC. Fifth, a significant dose-response relationship was observed between vitamin intake and GC risk (Table 2). Finally, compared with earlier studies, this is the first to discuss the influence of dosage on this relationship and the effects of all types of vitamins.
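As a minimal illustration of the per-dose trend estimation behind the dose-response findings cited above, the sketch below fits a log-linear trend of RR on intake by weighted least squares through the reference category. It is a simplification that ignores the within-category correlation handled by the full Greenland-Longnecker method, and the category data are hypothetical rather than taken from any included study.

import numpy as np

def dose_response_slope(dose, rr, lcl, ucl):
    """Weighted log-linear trend: log(RR) = beta * dose, with the reference
    category pinning log(RR) = 0 at dose = 0. Doses are differences from the
    mean intake of the reference category."""
    dose = np.asarray(dose, float)
    y = np.log(rr)
    se = (np.log(ucl) - np.log(lcl)) / (2 * 1.96)   # SE from the CI width
    w = 1.0 / se**2
    # No-intercept weighted least squares
    beta = np.sum(w * dose * y) / np.sum(w * dose**2)
    se_beta = np.sqrt(1.0 / np.sum(w * dose**2))
    return beta, np.exp(beta), (np.exp(beta - 1.96 * se_beta), np.exp(beta + 1.96 * se_beta))

# Hypothetical categories for one study: extra vitamin C intake (mg/day) vs category RR
dose = [60.0, 100.0, 180.0]
rr   = [0.90, 0.75, 0.65]
lcl  = [0.70, 0.58, 0.48]
ucl  = [1.16, 0.97, 0.88]
beta, rr_per_unit, ci = dose_response_slope(dose, rr, lcl, ucl)
print(f"RR per 100 mg/day: {np.exp(beta * 100):.2f}")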
Several limitations should also be acknowledged. First, the included studies were conducted in different countries from the 1980s onward, and some had flawed designs, were not designed primarily to study vitamin consumption, or lacked stratification, which makes combining them in a random-effects model problematic. Second, the quality and power of any meta-analysis depend on the quality and comparability of the data from the included studies; the analysis would be more convincing if original data were available, making adjusted estimates possible. We attempted to contact the authors of the original studies to obtain more detailed information, but it is very difficult to obtain all the original data for published studies. Third, the range between the lowest and highest vitamin intake differed among studies, which introduced heterogeneity into the pooled analysis. Fourth, relatively few studies were eligible for the dose-response analysis, comprising only a small number of cohort and case-control studies; further in-depth studies are needed.

Implications

The current findings may have several implications. First, vitamin intake can reduce the risk of GC, but excessive and long-term intake might disturb this antitumor function. Second, dietary vitamins might prevent GC more effectively than supplements. Third, according to the results of the current meta-analysis, overall vitamin intake can reduce the risk of GC by 23%; this reduction could translate into as many as 169,740 fewer GC deaths and 227,608 fewer new cases per year worldwide [1]. Last, the desired low but sufficient level of vitamin intake may be achieved through fruit and vegetable consumption, consistent with results indicating that fruit and vegetable intake is inversely associated with the incidence of GC [78].

Conclusions

In summary, unlike earlier studies, this article included well-designed observational studies conducted in normal-risk populations and examined the influence of dosage on the relationship and the effects of all types of vitamins. It shows clearly that low doses of vitamins, especially vitamin A, vitamin C, and vitamin E, can significantly reduce the risk of GC. However, because of potential bias and confounding factors, these results should be treated with caution. More and better-designed large clinical trials using appropriate doses of vitamins are needed to establish the association between vitamin intake and the risk of GC more clearly.

Supporting Information

S1 PRISMA Checklist. Preferred Reporting Items for Meta-Analyses (PRISMA) statement checklist. doi:10.1371/journal.pone.0116060.s001 (DOC)

S1 File. Supporting Information Tables. S1 Table. Search strategy in PubMed and Cochrane Library. S2 Table. Search strategy in ScienceDirect. S3 Table. Characteristics of the included studies. S4 Table. Methodological quality of case-control studies included in the meta-analysis. S5 Table. Methodological quality of cohort studies included in the meta-analysis. S6 Table. Methodological quality of RCTs included in the meta-analysis. S7 Table. Dose-response analysis. S8 Table. Meta-regression analysis. S9 Table. Tolerable upper intake levels of vitamins. doi:10.1371/journal.pone.0116060.s002 (DOCX)
S2 File. Supporting Information Figures. S1 Figure. Subgroup analysis: forest plot by vitamin type. CI, confidence interval; df, degrees of freedom; I², the percentage of total variation across studies that is caused by heterogeneity rather than by chance. Squares or diamonds to the left of the solid vertical line indicate benefit with each type of vitamin intake; this is conventionally significant (P < 0.05) only if the horizontal line or diamond does not overlap the solid vertical line. Relative risks are analysed with a random-effects model. S2 Figure. Subgroup analysis: forest plot by Lauren's classification (intestinal). CI, confidence interval; df, degrees of freedom; I², the percentage of total variation across studies that is caused by heterogeneity rather than by chance. S3 Figure. Subgroup analysis: forest plot by Lauren's classification (diffuse). CI, confidence interval; df, degrees of freedom; I², the percentage of total variation across studies that is caused by heterogeneity rather than by chance. S4 Figure. Subgroup analysis: forest plot by location (cardia). CI, confidence interval; df, degrees of freedom; I², the percentage of total variation across studies that is caused by heterogeneity rather than by chance. Relative risks are analysed with a random-effects model. S5 Figure. Subgroup analysis: forest plot by location (non-cardia). CI, confidence interval; df, degrees of freedom; I², the percentage of total variation across studies that is caused by heterogeneity rather than by chance. Relative risks are analysed with a random-effects model. S6 Figure.
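As a quick consistency check on the Implications figures reported above (a 23% risk reduction mapped to 169,740 deaths and 227,608 new cases averted per year), the worldwide baselines implied by those numbers can be recovered by simple inversion; the arithmetic below uses only the paper's own figures and assumes nothing else.

# Invert the Implications figures to recover the implied worldwide GC baselines.
reduction = 0.23
deaths_averted, cases_averted = 169_740, 227_608
print(f"implied annual GC deaths: {deaths_averted / reduction:,.0f}")    # 738,000
print(f"implied annual new GC cases: {cases_averted / reduction:,.0f}")  # 989,600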
Ethyl butanoate from butanoic acid and ethyl alcohol esterification catalyzed by the lipase CALB

Nádia Ligianara Dewes Nyari, nadialigianara@hotmail.com, http://orcid.org/0000-0003-0237-5116, Universidade Regional Integrada do Alto Uruguai e das Missões, URI, Erechim, Rio Grande do Sul, Brasil.
Alessandro Rogério Paulazzi, alessandro.paulazzi@hotmail.com, Universidade Regional Integrada do Alto Uruguai e das Missões, URI, Erechim, Rio Grande do Sul, Brasil.
Raquel Vera Zamadei, raquel_zamadei@hotmail.com, Universidade Regional Integrada do Alto Uruguai e das Missões, URI, Erechim, Rio Grande do Sul, Brasil.
Jamile Zeni, jamile.zeni@uricer.edu.br, Universidade Regional Integrada do Alto Uruguai e das Missões, URI, Erechim, Rio Grande do Sul, Brasil.
Rogério Marcos Dallago, dallago@uricer.edu.br, Universidade Regional Integrada do Alto Uruguai e das Missões, URI, Erechim, Rio Grande do Sul, Brasil.

The esterification of butanoic acid with ethyl alcohol catalyzed by Candida antarctica lipase B immobilized on polyurethane was studied in mechanically stirred and ultrasonic solvent-free systems. Maximum ethyl butanoate esterification activities of 666.40 and 585.84 U/g were obtained, with 6 reuse cycles, under mechanical agitation (160 rpm) and in the ultrasonic system (maximum power 1800 A, 40 kHz, 132 W), respectively, after 180 minutes of reaction time. The process was considered efficient, offering a significant reduction in reaction time, low instrumental requirements, and improved bioprocess performance. To date, no studies in the open literature have reported ester synthesis catalyzed by lipase immobilized on polyurethane as support in an ultrasonic system. Moreover, as an environmentally friendly and economically viable technology, it can be applied in the cosmetics, pharmaceutical, and food industries.

INTRODUCTION

Butanoate esters, in particular ethyl butanoate obtained via esterification of butanoic acid and ethyl alcohol, are short-chain esters with a characteristic pineapple-like smell and flavor. They are of great commercial importance as additives in many production processes, especially in the pharmaceutical, cosmetics, beverage, and food industries (ZHANG AND NAKAJINA, 2015; MATTE et al., 2016). According to Markets and Markets (Food Flavors Market and Type) (2020), the food flavor market is projected to grow at a CAGR (Compound Annual Growth Rate) of 5.4% from 2015 to 2020. The production of this kind of compound has traditionally been achieved by chemical synthesis, since extraction from plant materials or production by fermentation is at present too expensive for commercial exploitation. Aroma esters are obtained by two main processes: chemical synthesis and extraction from natural sources. The classical process for ester production occurs through acid catalysis, in which the fatty acid and catalyst residues must be removed by alkaline treatment; this increases the environmental impact, through the emission of non-biodegradable waste, as well as the energy cost of production. Furthermore, chemical synthesis can lead to the formation of undesirable secondary products and impurities, limiting its use mainly in the food industry (DHAKE et al., 2012; ROMERO et al., 2005; DOS SANTOS et al., 2017). The enzymatic route using lipase as biocatalyst has certain advantages: mild reaction conditions, a wide pH range, and operation in the absence of organic solvents, thus reducing residues and the environmental impact.
In addition, the products carry the connotation of natural-like aromas, for which demand is growing rapidly in response to consumer interest in new ingredients (KAPOOR AND GUPTA, 2012; MADARÁSZ et al., 2015; HIRATA et al., 2016; NARWAL et al., 2016; SJÖBLOM et al., 2016; DOS SANTOS et al., 2017). In this sense, enzymatic catalysis has been used as an alternative route to aromatic esters such as ethyl butanoate, reducing undesirable side reactions and costs and not requiring the use of solvents (JIN et al., 2015; PALUDO et al., 2015; ZHANG AND NAKAJINA, 2015; MATTE et al., 2016). One inexpensive and commercially available microbial lipase is Candida antarctica lipase B, which offers high thermostability, high selectivity and specificity, mild reaction conditions, a wide pH range, and activity in anhydrous reaction mixtures, as demonstrated for esterification and transesterification reactions. This allows products of high purity to be obtained with fewer co-products and/or toxic wastes, consequently reducing the environmental impact (TOMIN et al., 2010; DHAKE et al., 2013; HIRATA et al., 2016; ĆOROVIĆ et al., 2017). The ultrasonic system, a still little-explored green technology, is an alternative to conventional mechanical agitation that provides significant reductions in processing time and can increase esterification yields. These characteristics can be explained by improved mass transfer between substrates and enzyme (ABOU-OKEIL et al., 2010; MATTE et al., 2016). In this context, the present study aims to maximize the esterification of ethyl butanoate using CALB immobilized in polyurethane, under mechanical and ultrasonic agitation in a solvent-free system, and to analyze catalyst reusability over repeated cycles.

ESTERIFICATION OF ETHYL BUTANOATE

The esterification of butanoic acid and ethyl alcohol to ethyl butanoate was carried out in triplicate (n = 3) in 50 mL glass flasks, keeping the substrate mass constant at 5 g. Aliquots of 500 μL, taken in triplicate, were withdrawn from the reaction mixture, and 15 mL of acetone-ethanol solution was added to each sample. Titration with NaOH 0.05 mol L⁻¹ until the system reached pH 11 was used to determine the amount of butanoic acid that had reacted. Blank samples were made by mixing 500 μL of the standard mixture with 15 mL of acetone-ethanol solution. The immobilization of lipase CALB on the polyurethane (PU) support was performed as described by Nyari et al. (2016). One enzyme activity unit was defined as the amount of catalyst able to convert 1 µmol of fatty acid per minute, calculated by equation (1):

Esterification (U/g) = ((Vb − Va) × M × 1000 × Vf) / (t × m × Vc)   (1)

Where: Va: volume of NaOH consumed during the sample titration (mL); Vb: volume of NaOH consumed during the blank titration (mL); M: molarity of the NaOH solution; Vf: final volume of the reaction medium (mL); t: time (min); m: mass of free or immobilized catalyst (g); Vc: aliquot volume of the reaction medium withdrawn for titration (mL).

MECHANICAL AND ULTRASONIC SYSTEMS

A kinetic study was conducted to evaluate the effect of reaction time (0 to 400 min) on ethyl butanoate esterification. For the system with mechanical agitation, the variables studied were the butanoic acid:ethyl alcohol molar ratio (1:1.64-1:8.36), catalyst mass (0.036-0.564 g), and reaction temperature (24.8-75.2 °C), keeping the mechanical stirring at 160 rpm and the reaction time at 180 min.
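As a minimal sketch of the titration-based activity calculation of equation (1), consistent with the variable definitions above, the following Python function may help; the titration volumes in the example call are hypothetical values, not measurements from this work.

def esterification_activity(va_ml, vb_ml, m_naoh, vf_ml, t_min, m_cat_g, vc_ml):
    """Activity in U/g: micromoles of butanoic acid consumed per minute per gram of catalyst.

    va_ml: NaOH volume for the reaction sample (mL); vb_ml: NaOH volume for the blank (mL);
    m_naoh: NaOH molarity (mol/L); vf_ml: final reaction volume (mL); t_min: time (min);
    m_cat_g: catalyst mass (g); vc_ml: aliquot volume titrated (mL).
    """
    # (Vb - Va) mL x mol/L = mmol of acid consumed in the aliquot; x1000 -> umol;
    # Vf/Vc scales the aliquot back up to the whole reaction medium.
    acid_consumed_umol = (vb_ml - va_ml) * m_naoh * 1000.0 * (vf_ml / vc_ml)
    return acid_consumed_umol / (t_min * m_cat_g)

# Hypothetical readings for illustration only:
print(esterification_activity(va_ml=2.1, vb_ml=3.4, m_naoh=0.05,
                              vf_ml=5.0, t_min=40, m_cat_g=0.3, vc_ml=0.5))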
For the ultrasonic system, the variables studied were the butanoic acid:ethyl alcohol molar ratio (1:1.64-1:8.36), reaction temperature (24.8-75.2 °C), and ultrasonic power (26.4-93.6%, relative to the maximum power of 1800 A, 40 kHz, 132 W), keeping the catalyst mass constant at 0.5 g; the reaction time was fixed at 180 min (NYARI et al., 2018).

OPERATIONAL STABILITY

The number of operating cycles for the immobilized catalyst used in the ethyl butanoate ester synthesis was evaluated under the optimized condition from the CCRD. After each reaction, the catalyst was filtered to remove the reaction medium and reused in a new reaction. This process was repeated until esterification fell below 50% of the initial activity. The results were expressed in terms of esterification, taking the initial esterification as 100%.

STATISTICAL ANALYSIS

Each experiment was done in triplicate. Data were expressed as means ± standard deviation and subjected to one-way analysis of variance (Tukey) using Statistica 8.0 (StatSoft) software. A significance level of 95% (p < 0.05) was used.

MECHANICAL SYSTEM

Figure 1 shows the evolution of the esterification of butanoic acid and ethyl alcohol to ethyl butanoate over 400 minutes, for the free and immobilized catalyst, under the conditions designed by the full DCCR 2³ (Table 1). NOTE: * Fixed parameters: substrate mass 5 g, temperature 50 °C, molar ratio 1:5, enzyme mass 0.3 g, and mechanical agitation at 160 rpm. From these results, maximal esterifications of 358.78 U/g and 456.03 U/g were observed for the free and immobilized catalyst, respectively, after 180 minutes of reaction time. Table 1 shows the full DCCR 2³ for the esterification of butanoic acid and ethyl alcohol as a function of the studied variables: acid:alcohol molar ratio, catalyst mass, and temperature (°C). According to the results, the maximum esterification in terms of butanoic acid was obtained in assays 15, 16, and 17: 457.15 and 666.40 U/g for the free and immobilized catalyst, respectively, after 180 minutes of reaction time. The results (Table 1) show that esterification was directly proportional to the positive effect of the reaction temperature for the free catalyst (assays 1 and 5, 2 and 6) and for the immobilized catalyst (assays 1 and 5, 2 and 6, 3 and 7, 4 and 8, 9 and 10). This positive effect of temperature is consistent with the endothermic (ΔH > 0) nature of esterification reactions, which are reversible and thus governed by a chemical equilibrium: since the reaction absorbs heat, increasing the temperature shifts the equilibrium toward the products, increasing the reaction yield. Another variable of major importance, combined with temperature, was the catalyst mass (Table 1; assays 1 and 2, 3 and 4, 5 and 6, 7 and 8, 13 and 14) for the free and immobilized catalyst, respectively. In general, all tests using the immobilized catalyst showed higher esterification of butanoic acid than the free catalyst. This is the main advantage of an efficient immobilization method, such as the one presented in our study, besides the possibility of reuse and the reduced inactivation through distortion of the native structure under the influence of temperature, pH, and solvents.
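The factor ranges reported for both systems are consistent with a central composite (DCCR) design with axial points at ±1.68. As a small sketch, the coding below maps actual levels to coded levels; the centers and unit steps are back-calculated from the reported ranges, which is an assumption, since the paper does not tabulate them explicitly.

def to_coded(x, center, step):
    """Map an actual factor level to its coded CCRD level (axial points at +/-1.68)."""
    return (x - center) / step

# Assumed centers and unit steps back-calculated from the reported ranges:
# temperature: 24.8-75.2 C     -> center 50.0, step 15.0   (50 - 1.68*15 = 24.8)
# molar ratio: 1.64-8.36       -> center 5.0,  step 2.0    (5  - 1.68*2  = 1.64)
# catalyst mass: 0.036-0.564 g -> center 0.30, step 0.157
for x, center, step in [(75.2, 50.0, 15.0), (8.36, 5.0, 2.0), (0.564, 0.30, 0.157)]:
    print(round(to_coded(x, center, step), 2))   # each prints ~ +1.68 (axial level)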
According to Nyari et al. (2016), this performance involves factors such as: all the enzyme added in the immobilization process adheres to the support; there is no leaching caused by the reaction medium; and the interaction of the support material with the active center of the enzyme leads to the opening of the hydrophobic lid, leaving the active site exposed and increasing the esterification activity. According to Orellana-Coca et al. (2005) and Colombo et al. (2015), an excess of catalyst is necessary to keep the enzyme active throughout the reaction time. A positive effect of the molar ratio, combined with temperature, was also observed for the free catalyst (assays 2 and 4, 11 and 12) and the immobilized catalyst (assays 1 and 3, 2 and 4, 5 and 7, 6 and 8, 11 and 12) in the butanoic acid esterification. According to Martins et al. (2014), this can be linked to two factors: displacement of the chemical equilibrium and reduction of the acidity of the medium. According to Azudin et al. (2013), in a reversible reaction (esterification), the excess of alcohol (nucleophile/acyl acceptor) leads to high esterification levels due to the excess of nucleophile available for transfer to the substrate. It also acts positively by displacing the equilibrium toward the products; in addition, the ethyl alcohol, being of branched structure, can hinder micelle formation around the immobilized catalyst, thus facilitating product solubility and mass transfer in the reaction system. From these results, it was possible to relate the initial reaction velocity to the type of agitation. According to Liu et al. (2008), the reduction in reaction time can be linked to the increase in reaction rate obtainable with an ultrasonic agitation system, mainly due to the formation of microscopic droplets in the system, which increases the interfacial area and the surface contact, reducing mass-transfer limitations between substrate and catalyst. Equations 2 and 3 present the second-order coded models that describe the ethyl butanoate esterification as a function of the independent variables analyzed (butanoic acid:ethyl alcohol molar ratio, temperature, and catalyst mass) within the studied range, for the free and immobilized catalyst, respectively. Where: R = acid:alcohol molar ratio, M = catalyst mass, and T = temperature (°C). The correlation coefficients obtained (0.94), with Fcal (6.07) > Ftab (4.01) and Fcal (7.57) > Ftab (4.01), allowed the construction of the response surfaces presented in Figure 2 for the free (a) and immobilized (b) catalyst. Figure 2 shows the contour plots for the interactions between the variables acid:alcohol molar ratio, catalyst mass, and temperature for the batch reaction under mechanical stirring. The highest esterification of ethyl butanoate was achieved, for the free catalyst (Figure 2(a)), in the temperature range 45-55 °C, catalyst mass range 0.3-0.55 g, and acid:alcohol molar ratio range 1:6-1:7 (central point condition), and, for the immobilized catalyst (Figure 2(b)), in the temperature range 40-60 °C, catalyst mass 0.3-0.55 g, and acid:alcohol molar ratio 1:5-1:7 (central point condition). Figure 2. Response surfaces for the free (a) and immobilized (b) catalyst in terms of ethyl butanoate esterification in the mechanical system. Figure 3 shows the evolution of the ethyl butanoate esterification of butanoic acid and ethyl alcohol over 400 minutes for the free and immobilized catalyst under the conditions designed by the full DCCR 2³ (Table 2). From these results, a maximal esterification of 520.11 U/g was observed after 180 minutes of reaction time.
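Although the coefficients of equations (2) and (3) could not be recovered here, fitting such a second-order coded model is straightforward. The sketch below fits the full quadratic model by least squares on a 17-run DCCR 2³ layout (8 factorial, 6 axial, and 3 center points) and reports R²; the response values are invented placeholders, not the data of Table 1.

import numpy as np

def fit_quadratic(X, y):
    """Least-squares fit of a full second-order model in coded variables:
    y = b0 + sum b_i x_i + sum b_ii x_i^2 + sum b_ij x_i x_j."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                  # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]                             # quadratic terms
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]  # interactions
    A = np.column_stack(cols)
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    yhat = A @ b
    r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return b, r2

# Hypothetical coded design matrix (R, M, T) and responses, not the paper's Table 1:
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [-1.68, 0, 0], [1.68, 0, 0], [0, -1.68, 0], [0, 1.68, 0],
              [0, 0, -1.68], [0, 0, 1.68], [0, 0, 0], [0, 0, 0], [0, 0, 0]])
y = np.array([310, 340, 355, 390, 365, 400, 410, 450,
              320, 420, 330, 430, 340, 440, 660, 655, 670], float)
coefs, r2 = fit_quadratic(X, y)
print(coefs.round(1), round(r2, 2))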
ULTRASONIC SYSTEM

Table 2 shows the full DCCR 2³ for the esterification of ethyl butanoate in the ultrasonic system as a function of the studied variables: temperature (°C), acid:alcohol molar ratio, and ultrasonic power (%). The highest esterification, 585.84 U/g, was observed in assays 15, 16, and 17 (central point: 50 °C, acid:alcohol 1:7, and ultrasonic power of 60%). In general, as in the study conducted with mechanical agitation, the variables evaluated had a positive effect when analyzed independently. Increases in temperature (Table 2; assays 1 and 5, 2 and 6, 3 and 7, 4 and 8, 9 and 10), molar ratio (assays 1 and 3, 2 and 4, 5 and 7, 6 and 8, 11 and 12), and ultrasonic power (assays 3 and 4, 5 and 6, 7 and 8, 13 and 14) all favored increased ethyl butanoate esterification. As in the mechanical system, the positive temperature effect is consistent with the endothermic, reversible nature of the esterification reaction, with the additional benefit of a reduction in the system viscosity, which lowers the mass-transfer limitation. Likewise, the positive effect of the butanoic acid:ethyl alcohol molar ratio can be attributed to displacement of the chemical equilibrium and reduction of the acidity of the medium (MARTINS et al., 2014), and to the excess of alcohol (nucleophile/acyl acceptor) driving the reversible reaction toward high conversion levels (AZUDIN et al., 2013), as discussed above for the mechanical system. Regarding ultrasonic power, Choudhury et al. (2013) observed that increasing the power increases the esterification: the cavitation bubbles increase the solubility of the molecules and consequently the reaction rate, while keeping energy use low (KWIATKOWSKA et al., 2011; MARTINS et al., 2013). The literature reports several considerations regarding the application of the ultrasonic system as a tool in ester synthesis reactions. According to Nyari et al. (2018), temperature usually influences the chemical equilibrium of endothermic systems through diffusional effects, while the enzyme content increases the number of active sites in the reaction medium; regarding ultrasonic power, its increase also increases the acid conversion. Equation 4 presents the second-order coded model that describes the ethyl butanoate esterification as a function of the independent variables analyzed (acid:alcohol molar ratio, temperature, and ultrasonic power) within the studied range. Where: T = temperature (°C), R = acid:alcohol molar ratio, and P = ultrasonic power (%). The correlation coefficient obtained (0.94), with Fcal (6.79) > Ftab (4.01), allowed the construction of the corresponding contour plot.
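Once a coded model such as equation (4) is in hand, the optimum region described below can be located by scanning the fitted surface over the coded cube. The coefficient vector in this sketch is an invented placeholder (the actual coefficients of equation (4) were not recoverable), so it illustrates only the procedure.

import numpy as np

# Placeholder coefficients, ordered: b0, bT, bR, bP, bTT, bRR, bPP, bTR, bTP, bRP.
b = np.array([585.0, 20.0, 15.0, 10.0, -60.0, -45.0, -30.0, 5.0, 3.0, 2.0])

grid = np.linspace(-1.68, 1.68, 41)
T, R, P = np.meshgrid(grid, grid, grid, indexing="ij")
yhat = (b[0] + b[1] * T + b[2] * R + b[3] * P
        + b[4] * T**2 + b[5] * R**2 + b[6] * P**2
        + b[7] * T * R + b[8] * T * P + b[9] * R * P)
i = np.unravel_index(np.argmax(yhat), yhat.shape)   # grid point with the highest prediction
print("coded optimum (T, R, P):", T[i], R[i], P[i],
      "predicted U/g:", round(float(yhat[i]), 1))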
The highest esterification for the synthesis of ethyl butanoate was achieved in the region corresponding to the temperature range 35-65 °C, acid:alcohol molar ratio range 1:6-1:1.9, and ultrasonic power range 40-80% (central point condition).

OPERATIONAL STABILITY

Figure 5 shows the operational stability (number of reuse cycles) for the esterification of ethyl butanoate under mechanical agitation and in the ultrasonic system. Both systems (Figure 5) sustained 6 cycles while retaining 50% of the initial activity. The reduction in esterification observed over the reuse cycles may be related to the loss of biocatalyst mass between cycles, to leaching of the enzyme from the support, and to catalyst denaturation (CARVALHO et al., 2015; BANSODE AND RATHOD, 2014; WAGHMARE et al., 2015). According to Nyari et al. (2016, 2018), the reduction in acid conversion during sequential reuse cycles may likewise be related to the loss of CALB mass after reuse, as well as to leaching of the enzyme from the support and catalyst denaturation.

CONCLUSIONS

The maximum esterification of butanoic acid with ethyl alcohol using the immobilized catalyst was 666.40 U/g in the mechanical system and 585.84 U/g in the ultrasonic system, with 6 reuse cycles, after 180 minutes of reaction time, using Candida antarctica lipase B immobilized on polyurethane in a solvent-free system. The process was considered efficient, offering a significant reduction in reaction time, low instrumental requirements, and improved bioprocess performance. To date, no studies in the open literature have reported ester synthesis catalyzed by lipase immobilized on polyurethane as support in an ultrasonic system. Moreover, as an environmentally friendly and economically viable technology, it can be applied in the cosmetics, pharmaceutical, and food industries.