Möbius bands, unstretchable material sheets and developable surfaces

A Möbius band can be formed by bending a sufficiently long rectangular unstretchable material sheet and joining the two short ends after twisting by 180°. This process can be modelled by an isometric mapping from a rectangular region to a developable surface in three-dimensional Euclidean space. Attempts have been made to determine the equilibrium shape of a Möbius band by minimizing the bending energy in the class of mappings from the rectangular region to the collection of developable surfaces. In this work, we show that, although a surface obtained from an isometric mapping of a prescribed planar region must be developable, a mapping from a prescribed planar region to a developable surface is not necessarily isometric. Based on this, we demonstrate that the notion of a rectifying developable cannot be used to describe a pure bending of a rectangular region into a Möbius band or a generic ribbon, as has been erroneously done in many publications. Specifically, our analysis shows that the mapping from a prescribed planar region to a rectifying developable surface is isometric only if that surface is cylindrical with the midline being the generator. Towards providing solutions to this issue, we discuss several alternative modelling strategies that respect the distinction between the physical constraint of unstretchability and the geometrical notion of developability.

1. Introduction

In mechanics, an unstretchable material sheet can sustain only deformations corresponding to isometric mappings. Although every surface obtained by bending a flat unstretchable material sheet is developable, a mapping of a prescribed planar region into a developable surface is not necessarily isometric. In spite of this, non-isometric mappings of planar regions into developable surfaces have frequently been used in misguided attempts to describe equilibrium configurations of two-dimensional bodies made from allegedly unstretchable material sheets. The primary purpose of this paper is to clarify the salient issues.
Although we focus on Möbius bands, our conclusions apply equally well to orientable twisted strips. This paper is organized as follows. Some background on the modelling of Möbius bands, as initiated by Sadowsky [1-6], is provided in §2. An expanded discussion of the issues related to the modelling of unstretchable material sheets, and of the flaws resulting from restricting attention to non-isometric mappings that generate developable surfaces, is contained in §3. Basic ideas associated with the parametrization of a surface and the associated notion of a mapping from a planar region to a surface in three-dimensional Euclidean point space are reviewed in §4. The concepts of stretch and curvature, as they relate to the in-plane deformation and out-of-plane bending of a flat material sheet identified with a planar region, are discussed in §5. Developable surfaces and isometric mappings from planar regions to surfaces in space are reviewed in §6. Our main result is established in §7. There we show that, while an isometric mapping always deforms a planar region to a developable surface, a mapping from a planar region to a developable surface need not be isometric. To demonstrate this point, we explicitly work with a class of mappings that have been used frequently to describe deformations of rectangular regions into ribbons and Möbius bands. Any such mapping describes a ruled surface that lies on the rectifying developable of its midline. Most importantly, we establish that the mappings in question are not isometric, except in the trivial case wherein the mapped surface is cylindrical with the midline being the generator. A concrete illustration of our result is presented in §8. In that illustration, a rectangular region is mapped to a helical ribbon that lies on the surface of a cylinder. In addition to showing that the mapping in question is not isometric, we identify an isometric mapping that takes a parallelogram to the same helical ribbon. The latter mapping is directly relevant to Sadowsky's [1,2] construction showing that it is possible to bend a rectangular region into a Möbius band without stretching. In §9, we show that a non-isometric mapping from a planar region to a developable surface of the type discussed in §7 can be expressed as the composition of a non-isometric mapping between two planar regions and an isometric mapping to a surface. This is achieved by a straightforward change of independent variables. Finally, a few alternative modelling strategies that respect the distinction between the physical constraint of unstretchability and the geometric notion of developability are discussed in §10.

2. Background

In 1930, Sadowsky [1,2] published an appealingly simple constructive proof showing that a rectangular region can be bent, without stretching or tearing, into a Möbius band. In essence, his proof amounts to smoothly joining three helical ribbons bent from parallelograms with three isosceles trapezoids (two or more of which may degenerate to isosceles triangles) to form a surface with the requisite one-sided spatial connectivity. Recognizing that a strip of paper bent to adopt the shape of his construction would change shape in the absence of externally applied loads, Sadowsky also proposed a variational strategy for determining stable equilibrium shapes of Möbius bands made from unstretchable sheets.
That strategy begins by identifying a Möbius band with a developable surface S endowed with a bending energy proportional to the integral

    ∫_S H² da,  (2.1)

of the square of the mean curvature H over S. For a band of infinitesimal width, this energy is proportional to the integral

    ∫_C κ²(1 + η²)² ds  (2.2)

over the midline C of S, where η = τ/κ is the ratio of the torsion τ and the curvature κ of C and ds is the element of arclength along C. While he made no attempt to find minimizers of the functional (2.2), Sadowsky [1,2] computed the bending energy for the infinitesimal width-to-length version of his construction, which thus serves as an upper bound for the minimum of the bending energy. Absent from the discussion leading to (2.2) but present in two contemporaneous papers by Sadowsky [3-6] is the crucial provision that C be locally unstretchable. This postulate is consistent with the absence of the Gaussian curvature K of S in (2.1). Sadowsky [3-6] also emphasized that S should lie on the rectifying developable of its midline C. Given an arclength parametrization γ of C, the rectifying developable of C is the envelope of its rectifying planes, each of which is spanned by the tangent and binormal vectors of the Frenet frame of C. If S lies on the rectifying developable of its midline C of non-vanishing curvature, then it must admit a parametrization determined by an invertible mapping x̂, from the rectangular region D = [0, l] × [−b, b] of length l > 0 and half-width b > 0 to S, of the particular form

    x̂(s, t) = γ(s) + t(b(s) + η(s)γ′(s)),  (s, t) ∈ D,  (2.6)

where b denotes the unit binormal of the Frenet frame of C. To describe the midline of any smooth closed band, orientable or otherwise, γ and the unit tangent t = γ′ must satisfy the closure conditions γ(l) = γ(0) and t(l) = t(0); for the band to be one-sided, the Frenet frame must, in addition, reverse orientation on traversing C, so that

    b(l) = −b(0),  (2.8)

which requires that C possess an odd number of frame switching points. The assumed invertibility of the mapping x̂ ensures, without loss of generality, that 1 + tη′(s) > 0 for each (s, t) belonging to D. Additionally, the mean curvature H and area element da of S are given by

    H = κ(1 + η²)/(2(1 + tη′))  and  da = (1 + tη′) ds dt.  (2.9)

When reducing the problem of minimizing (2.1) to that of minimizing (2.2) for an infinitesimally narrow band, Sadowsky [3,4] referred to an expression κ(1 + η²(s))/2 for the mean curvature, which is simply the restriction to C of the expression for the mean curvature of S that appears in (2.9). In what is perhaps the first published paper that recognizes Sadowsky's contributions to the mechanics of Möbius bands, Wunderlich [7,8] noticed that (2.9) can be used without approximation to yield a bending energy proportional to

    ∫_C κ²(1 + η²)² (1/(2bη′)) ln((1 + bη′)/(1 − bη′)) ds,  (2.10)

which reduces to (2.2) in the limit b → 0. Subsequent to the papers of Sadowsky [1-6] and Wunderlich [7,8], parametrizations of the form (2.6) were adopted in numerous efforts to determine the equilibrium shapes of ribbons and Möbius bands. A remarkable exception to this trend is the comparatively early work of Halpern & Weaver [24], who used homotopy methods to prove that a flat rectangular strip of half-width b and length l admits an isometric immersion as a Möbius band in three-dimensional Euclidean point space if and only if πb < l and, moreover, conjectured that such a strip can be isometrically embedded as a Möbius band in three-dimensional Euclidean point space only if the more restrictive inequality 2√3 b < l holds. It is noteworthy that Sadowsky [1,2] imposed the latter inequality to ensure the viability of his construction.
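To make the relation between (2.2) and (2.10) concrete, the following Python sketch (an illustration added here, not part of the original analysis; the proportionality constants, the helical test data and the requirement |bη′| < 1 are assumptions of the illustration) evaluates both integrands numerically for a midline with prescribed curvature κ(s) and torsion τ(s).

    import numpy as np

    def bending_energy_densities(s, kappa, tau, b):
        """Sadowsky and Wunderlich integrands on the arclength grid s."""
        eta = tau / kappa                        # eta = tau/kappa; kappa > 0 assumed
        deta = np.gradient(eta, s)               # eta'(s) by finite differences
        sadowsky = kappa**2 * (1 + eta**2)**2    # integrand of (2.2)
        x = b * deta                             # must satisfy |b*eta'| < 1
        with np.errstate(divide="ignore", invalid="ignore"):
            factor = np.where(np.abs(x) < 1e-12, 1.0,
                              np.log((1 + x) / (1 - x)) / (2 * x))
        return sadowsky, sadowsky * factor       # integrand of (2.10)

    # Helical test midline: eta = tau/kappa is constant, so eta' = 0 and the
    # Wunderlich factor equals 1, making the two densities coincide exactly.
    s = np.linspace(0.0, np.pi, 200)
    kappa = np.full_like(s, 0.5)
    tau = np.full_like(s, 0.5)
    sad, wun = bending_energy_densities(s, kappa, tau, b=0.25)
    print(np.allclose(sad, wun))                 # True

For a midline with non-constant η, the two densities differ, with the Wunderlich factor diverging as |bη′| → 1.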
3. Critique

It appears to have gone unnoticed that an unstretchable flat rectangular sheet identified with a region D = [0, l] × [−b, b] cannot in general sustain deformations described by mappings of the form (2.6). In §7 of the present paper, we show that a mapping x̂ defined in (2.6) is not isometric unless τ = 0 at each point along the midline C, which must therefore be planar. Any claim that a parametrization of the form (2.6) generally provides an isometric embedding, into three-dimensional Euclidean point space, of a rectangular region is consequently false. It is important to recognize that, if τ = 0 on a closed curve C, the unit binormal b of the Frenet frame of C must be perpendicular to the plane in which C resides and C must have an even number of frame switching points. This being so, (2.6) cannot parametrize a Möbius band.

The oversight described above might stem from a misinterpretation of the commonly encountered characterization of a developable surface, which states that any such surface can be mapped isometrically to a planar region. Any surface parametrized in accordance with (2.6) is indeed developable and can, therefore, be mapped isometrically to a planar region. However, setting aside the degenerate case where τ vanishes at each point along the midline C, the isometric flattening of the surface S parametrized by a mapping x̂ from D to S of the form defined in (2.6) need not be D. This error inevitably undermines any effort to determine stable equilibrium shapes of Möbius bands by minimizing Wunderlich's functional (2.10). As the isometric flattenings of two surfaces parametrized by different mappings of the form (2.6) for a given region D generally differ, comparing the corresponding values of the bending energy (2.10) amounts not to comparing the energies of two possible isometric bendings of an unstretchable rectangular region of half-width b and length l but rather to juxtaposing the energies of two differently shaped flat regions mapped into developable surfaces. Hence, such a comparison cannot generally yield useful information towards determining the equilibrium shape of a Möbius band made from an unstretchable sheet of given prescribed shape. Moreover, as using x̂ defined in (2.6) to map D to a surface S must inevitably involve in-plane stretching, the influence of the associated stretching energy on shape would generally be non-negligible. Unfortunately, that influence has been neglected in all computations based on minimizing only the bending energy (2.10).

Our conclusions regarding the family of mappings defined in (2.6) are independent of whether the condition (2.8) needed to ensure that x̂ describes a Möbius band is satisfied. Specifically, such mappings cannot generally be used to reliably determine the shape of a surface made by bending an unstretchable rectangular sheet. Quantifying the magnitude of the error incurred by using (2.6) to approximate an isometric mapping of a narrow ribbon remains an interesting question. A satisfactory answer to this question would provide useful guidelines for numerical strategies based on minimizing the bending energy functional (2.10), such as those used by Starostin & van der Heijden [22] and Shen et al. [23]. Even so, there is no reason to believe, a priori, that the class of mappings (2.6) is rich enough to include all possible equilibrium shapes of such a sheet.
4. Parametrization of a surface: deformations and associated gradients

Consider a surface S, in three-dimensional Euclidean point space, with parametric representation

    x = x̂(s, t),  (4.1)

where x denotes a generic point on S and where s and t are parameters. A parametrization of a surface is not unique and need not be tied to a physical process. If, however, the surface represents a configuration occupied by a material sheet, then the situation changes markedly. Under such circumstances, each ordered pair (s, t) serves to label a unique material particle of the sheet in some (possibly flat) reference configuration and x̂ encodes the changes in particle positions needed to surjectively map the sheet from that configuration onto the surface S. This is the view taken in many works, the present one included. Among other things, adopting this view makes it possible to apply classical kinematical notions from mechanics to describe changes in the geometry of a surface. In mechanics, the mapping x̂ in (4.1) is an example of a deformation: a physical process that, for example, changes the shape of a two-dimensional rectangular region into a Möbius band.

We adopt a more compact notation by using a vector r to denote the independent variable on which the mapping x̂ depends, with the parameters s and t being interpreted as the components of r with respect to an orthonormal basis. This appears to be consistent with the formulation of many authors who treat the parameters s and t as the coordinates of a point in the planar region relative to an orthogonal Cartesian coordinate system. We denote the vectorial translation spaces associated with E² and E³ by V² and V³, respectively. Introducing a planar region R, which we identify with a flat material sheet, in E², we thus replace (4.1) by

    x = x̂(r),  r ∈ R.  (4.2)

The gradient of x̂ on R, denoted by F, is called the deformation gradient. Each value of F is a linear transformation that maps V² into V³. We assume that x̂ is locally invertible onto its range, so that each value of F has maximal rank. To describe the notion of curvature, it is helpful to introduce the second-order deformation gradient G, which can be viewed as a linear transformation from the space Lin(V²) of linear transformations on V² to V³. For brevity, we refer to G as the 'second gradient'.

5. Stretch and curvature

The stretch associated with a deformation x̂ of a planar region R to a surface S and the curvature of S can be described in terms of the deformation gradient F and the second gradient G.

(a) Stretch

We employ the usual Euclidean norms and the corresponding inner products in the domain R and codomain E³ of the mapping x̂. Of central kinematical importance is the polar decomposition

    F = RU  (5.1)

of F, where R is a linear transformation from V² to V³ and U is a linear transformation (called the stretch tensor) from V² to V². For completeness, we provide a derivation of (5.1). The Cauchy-Green deformation tensor C is defined by

    C = FᵀF,  (5.2)

where Fᵀ: V³ → V² is defined in the usual way through inner products involving the domain and codomain of F. It follows from (5.2) that C: V² → V² is symmetric. By the assumption that x̂ is locally invertible, C is positive-definite and therefore has a square root U: V² → V², which satisfies

    U² = C  (5.3)

and is itself symmetric and positive-definite. In mechanics, U is called the stretch tensor. We now define a tensor R by

    R = FU⁻¹.  (5.4)

For arbitrary elements a and b of V², we may use (5.2)-(5.4) to find that

    (Ra) · (Rb) = a · b,  (5.5)

where the inner products on the left- and right-hand sides of the equation are those defined on V³ and V², respectively. Using conventional arguments, we find from (5.5) that R preserves the length of any line segment in R and the angle between any two line segments in R. In this sense, R can be thought of as a linear transformation that rotates vectors in V² into vectors in V³.

A mapping x̂ from a planar region R in E² to a surface S in E³ is isometric if it preserves the length of an arbitrary curve {r = r̂(u), u₀ ≤ u ≤ u₁} traced on R while mapping the curve into a space curve on S, so that

    ∫ from u₀ to u₁ of |F(r̂(u))r̂′(u)| du = ∫ from u₀ to u₁ of |r̂′(u)| du.  (5.6)

Using the particular choice

    r̂(u) = r₀ + (u − u₀)a  (5.7)

shows that (5.6) holds only if

    |Fa| = |a|  (5.8)

for each a in V², or, equivalently,

    C = I.  (5.9)

Equations (5.8) and (5.9) are also sufficient for x̂ to be isometric. Thus, for x̂ to be an isometric mapping, the associated Cauchy-Green deformation tensor C, and hence the stretch tensor U, must be the identity tensor. The material sheet identified with R is said to be unstretchable if it is capable of sustaining only deformations for which C = I or, equivalently, U = I.
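The isometry criterion (5.9) is easy to test numerically. The following Python sketch (our addition; the two trial mappings are hypothetical choices, not taken from the paper) approximates F by central differences and checks whether C = FᵀF is the identity.

    import numpy as np

    def cauchy_green(xhat, r, h=1e-6):
        """C = F^T F at r = (r1, r2), with F approximated by central differences."""
        F = np.column_stack([(xhat(r + h*e) - xhat(r - h*e)) / (2*h)
                             for e in np.eye(2)])
        return F.T @ F

    def wrap(r):       # wraps the plane onto a unit cylinder: lengths preserved
        return np.array([np.cos(r[0]), np.sin(r[0]), r[1]])

    def stretch(r):    # doubles lengths along the second coordinate direction
        return np.array([np.cos(r[0]), np.sin(r[0]), 2.0*r[1]])

    r = np.array([0.7, -0.3])
    print(np.round(cauchy_green(wrap, r), 6))     # identity: isometric
    print(np.round(cauchy_green(stretch, r), 6))  # C22 = 4: not isometric

Note that both trial mappings take the plane to the same cylinder, a developable surface; only the first is isometric, anticipating the distinction drawn in §§6 and 7.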
(b) Curvature

Given linearly independent elements c and d of V², a unit normal vector to S is determined by

    n = (Fc × Fd)/|Fc × Fd|.  (5.10)

The second fundamental form of the surface S can be represented by a tensor D: V² → V² given in terms of n and G by

    D = nG,  (5.11)

where nG is defined so that a · ((nG)b) = n · G(a ⊗ b) for all a and b in V². The curvatures of the surface S can be expressed in terms of the first and second fundamental forms. For completeness, a brief derivation of the relevant expressions is provided next. Given an element a of V², the normal curvature κ of the surface S in the direction of Fa is defined by

    κ = (a · Da)/(a · Ca).  (5.12)

The principal curvatures κ₁ and κ₂ of S correspond to the minimum and maximum values of κ for all a in V², and can be found by solving the equation dκ/da = 0, which leads to

    Da = κCa,  (5.13)

or, by the local invertibility of x̂,

    C⁻¹Da = κa,  (5.14)

from which it follows that κ₁ and κ₂ are the eigenvalues of C⁻¹D. Additionally, the mean and Gaussian curvatures H and K of S are defined in terms of κ₁ and κ₂ by

    H = (κ₁ + κ₂)/2  and  K = κ₁κ₂,  (5.15)

so that

    H = tr(C⁻¹D)/2  and  K = det(C⁻¹D) = det D/det C.  (5.16)

Finally, introducing the linear transformation

    L = FC⁻¹DF⁻¹,  (5.17)

where F⁻¹ denotes the inverse of the restriction of F to its range, substituting (5.17) into (5.13), using (5.2), and defining an element v of V³ by v = Fa, we find that

    Lv = κv,  (5.18)

from which it follows that κ₁ and κ₂ are also the eigenvalues of L.

6. Developable surfaces and isometric mappings

A surface is said to be developable if its Gaussian curvature vanishes everywhere. It is common knowledge that the image of a planar region under an isometric mapping must be a developable surface. As a basis for the ensuing discussion, we provide a simple proof of this fact. Let x̂ be an isometric mapping defined on some planar region R. Then, by (5.2) and (5.9), the associated deformation gradient F must satisfy

    FᵀF = I,  (6.1)

or, equivalently,

    Fa · Fb = a · b  (6.2)

for all a and b in V². On taking the gradient of (6.2), we find that

    G(a ⊗ c) · Fb + Fa · G(b ⊗ c) = 0  (6.3)

for all a, b and c in V², from which we obtain the conditions

    Fa · GA = 0  (6.5)

for all a in V² and all A in the space Lin(V²) of all linear transformations on V². Next, choosing orthonormal elements a and b of V² (so that |a| = |b| = 1 and a · b = 0) and defining elements ℓ and m of V³ by ℓ = Fa and m = Fb, we find, as a consequence of (5.11) and (6.5), that the second gradient admits a representation of the form

    GA = (D · A)n,  n = ℓ × m,  (6.6)

where D · A denotes the trace inner product tr(DᵀA). Next, taking the gradient of (6.3), we find that

    f(a, b, c, d) + f(b, a, c, d) + G(a ⊗ c) · G(b ⊗ d) + G(a ⊗ d) · G(b ⊗ c) = 0  (6.8)

for all a, b, c and d in V², where f(a, b, c, d) = Fa · H(b ⊗ c ⊗ d), with H being the third-order deformation gradient. Taking advantage of the various symmetries of G and H, we find from (6.8) that

    (G(a ⊗ a) · G(b ⊗ b) − G(a ⊗ b) · G(a ⊗ b))/2 = f(a, a, b, b) − (f(a, b, a, b) + f(b, a, b, a))/2.  (6.9)

Since H is symmetric in its three arguments, f(a, b, a, b) = f(a, a, b, b) and f(b, a, b, a) = f(b, b, a, a), while setting b = a in (6.8) yields f(a, a, c, d) = −G(a ⊗ c) · G(a ⊗ d); hence the right-hand side of (6.9) vanishes. Next, using the representation (6.6) for G in (6.9), we obtain

    (Da · a)(Db · b) − (Da · b)(Da · b) = 0.  (6.10)

Finally, recalling that a and b are orthonormal, we recognize the left-hand side of (6.10) as the determinant of D and, with reference to the definition (5.16) of the Gaussian curvature, we conclude that

    K = det D/det C = det D = 0.  (6.11)

We have shown that if a mapping x̂ from a given planar region R in E² to a surface S in E³ is isometric, then S must be developable. However, the converse is not true. It is possible to map a planar region R to a developable surface S non-isometrically. For instance, the entire plane can be mapped into itself non-isometrically. A less trivial example, which we will next explore in detail, is provided by the particular class of mappings x̂ defined in (2.6), which has prevailed in the literature in efforts to model ribbons and bands. Our analysis leads unambiguously to the conclusion that the mappings belonging to this class are not generally isometric and, therefore, are unsuitable for modelling unstretchable material sheets.
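The defining formula K = det D/det C of (5.16) can be computed symbolically. The following sketch (our addition; the cylinder and sphere parametrizations are illustrative choices) confirms that a cylinder, a developable surface, has K = 0, whereas a unit sphere has K = 1.

    import sympy as sp

    r1, r2 = sp.symbols("r1 r2", real=True)

    def gaussian_curvature(x):
        """K = det(D)/det(C), as in (5.16), for a parametrized surface x(r1, r2)."""
        x1, x2 = sp.diff(x, r1), sp.diff(x, r2)
        n = x1.cross(x2)
        n = n / n.norm()                                   # unit normal, cf. (5.10)
        C = sp.Matrix([[x1.dot(x1), x1.dot(x2)],
                       [x2.dot(x1), x2.dot(x2)]])          # first fundamental form
        D = sp.Matrix([[n.dot(sp.diff(x, r1, r1)), n.dot(sp.diff(x, r1, r2))],
                       [n.dot(sp.diff(x, r2, r1)), n.dot(sp.diff(x, r2, r2))]])
        return sp.simplify(D.det() / C.det())

    cylinder = sp.Matrix([sp.cos(r1), sp.sin(r1), r2])     # developable image
    sphere = sp.Matrix([sp.cos(r1)*sp.cos(r2),
                        sp.sin(r1)*sp.cos(r2), sp.sin(r2)])
    print(gaussian_curvature(cylinder))   # 0
    print(gaussian_curvature(sphere))     # 1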
7. Mappings from planar regions to rectifying developable surfaces are typically not isometric

Consider a surface S parametrized by a mapping x̂ of the form (2.6), which is recast in the current notation as

    x̂(r) = γ(r₁) + r₂(b(r₁) + η(r₁)t(r₁)),  (7.1)

where r₁ = r · e₁ and r₂ = r · e₂ are the components of r with respect to a positively oriented orthonormal basis {e₁, e₂} for V²,

    D = {r ∈ E²: 0 ≤ r₁ ≤ l, |r₂| ≤ b}  (7.2)

is a rectangular region in E², γ represents a unit speed curve C in E³,

    t = γ′,  p = t′/|t′|  and  b = t × p  (7.3)

are the tangent, normal and binormal vectors of C, respectively, κ = |t′| and τ = t · (p × p′) are the curvature and torsion of C, respectively, and η is the ratio

    η = τ/κ.  (7.4)

The deformation gradient F of x̂ is

    F = (1 + r₂η′)t ⊗ e₁ + (b + ηt) ⊗ e₂,  (7.5)

where we have made use of (7.4) and the Frenet-Serret relations t′ = κp, p′ = −κt + τb and b′ = −τp. Importantly, by invoking (7.4), we have tacitly assumed that the curvature κ of C is non-vanishing. In this regard, it is worth observing that if the curvature κ of C vanishes on an interval of [0, l], then the unit normal p and unit binormal b of the Frenet frame of C are undefined on that interval. Under these circumstances, the notion of a rectifying developable, and thus a mapping of the form (7.1), becomes completely irrelevant. The second gradient G of x̂ is given by

    G = (κp + r₂(η″t + κη′p)) ⊗ e₁ ⊗ e₁ + η′t ⊗ (e₁ ⊗ e₂ + e₂ ⊗ e₁).  (7.6)

Since Fe₁ × Fe₂ = −(1 + r₂η′)p and since it follows, from the stipulated invertibility of x̂, that 1 + r₂η′(r₁) > 0 for all r in D, (5.10) yields

    n = −p,  (7.7)

whereby the unit normal of S is directed opposite to the unit normal of the Frenet frame of the midline C. As a consequence of (7.6) and (7.7), it follows that

    D = nG = −κ(1 + r₂η′) e₁ ⊗ e₁.  (7.8)

Thus, det(nG) vanishes and we infer from (5.16) that the corresponding Gaussian curvature vanishes,

    K = 0.  (7.9)

A mapping x̂ of the form defined in (7.1) therefore deforms the rectangular region D to a developable surface S.

Although the image of a mapping x̂ of the form (7.1) is a developable surface, the mapping x̂ is not isometric unless it satisfies some highly restrictive conditions that rule out the intended utility of (7.1). To verify the foregoing assertion, we first use (7.5) in (5.2) to give

    C = FᵀF = (1 + r₂η′)² e₁ ⊗ e₁ + η(1 + r₂η′)(e₁ ⊗ e₂ + e₂ ⊗ e₁) + (1 + η²) e₂ ⊗ e₂.  (7.10)

Inspection of (7.10) then reveals that the Cauchy-Green tensor C corresponding to a mapping x̂ of the form (7.1) coincides with the identity tensor I if and only if η, and hence the torsion τ, vanishes at each point of C. If κ and τ obey κ > 0 and τ = 0 at each point of C, then C must be planar and the unit binormal b of the Frenet frame of C must be perpendicular to the plane in which C resides. In other words, the rectangular planar region D must be mapped to a cylindrical surface, with the midline C perpendicular to the generators of the cylinder. Only in this degenerate case is the mapping x̂ defined in (7.1) isometric. Importantly, as the condition (2.8) concerning frame switching points cannot be met in this special case, x̂ can never be used to describe a pure bending of a planar region into a Möbius band.
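The representation (7.10) and the conclusion drawn from it can be verified symbolically. In the following sketch (our addition), the partial derivatives of (7.1) are expanded in components relative to the orthonormal Frenet frame {t, p, b}, using the Frenet-Serret relations, so that the Cauchy-Green tensor can be read off from the frame components.

    import sympy as sp

    s, r2 = sp.symbols("s r2", real=True)
    kappa = sp.Function("kappa", positive=True)(s)
    tau = sp.Function("tau", real=True)(s)
    eta = tau / kappa

    # Frenet components of the partial derivatives of (7.1), obtained with
    # t' = kappa*p, p' = -kappa*t + tau*b, b' = -tau*p:
    d1 = sp.Matrix([1 + r2*sp.diff(eta, s),   # t-component of dx/dr1
                    r2*(eta*kappa - tau),     # p-component (vanishes identically)
                    0])                       # b-component
    d2 = sp.Matrix([eta, 0, 1])               # dx/dr2 = eta*t + b

    C = sp.simplify(sp.Matrix([[d1.dot(d1), d1.dot(d2)],
                               [d2.dot(d1), d2.dot(d2)]]))
    print(C)                                  # reproduces (7.10)
    print(sp.solve(sp.Eq(C[1, 1], 1), tau))   # [0]: C = I forces tau = 0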
It is noteworthy that the requirement necessary to ensure that a mapping x̂ of the form (2.6) is isometric, namely that τ vanishes identically, is also necessary to preserve the lengths of the lines of constant r₁ in the region D and, thus, to ensure that the isometric flattening of the ruled surface S parametrized by x̂ is rectangular. To verify this assertion, it suffices to notice that the ruling vector b + ηt is of magnitude √(1 + η²). Thus, if η(r₁) ≠ 0 at some r₁ in [0, l], then the corresponding line in the planar region determined by the isometric flattening of S is of length 2√(1 + η²(r₁)) b > 2b. It follows that x̂ elongates material filaments along e₂ unless τ ≡ 0.

Randrup & Røgen [12] and Sabitov [14] considered an alternative to the parametrization (7.1) for a surface S on the rectifying developable of its midline C. Their alternative involves a mapping x̄ of the form

    x̄(r) = γ(r₁) + r₂(b(r₁) + η(r₁)t(r₁))/√(1 + η²(r₁)),  (7.11)

which, in contrast to a mapping x̂ of the form (2.6), would preserve the lengths of the lines of constant r₁ in the rectangular region D defined in (7.2). Despite this, calculations entirely analogous to those leading to (7.10) show that the Cauchy-Green tensor associated with x̄ coincides with the identity tensor if and only if η = 0 (or, equivalently, τ = 0) at each point of C.

Generally, a surface that lies on the rectifying developable of its midline has a parametrization of the form

    x̄(r) = γ(r₁) + r₂(α(r₁)b(r₁) + β(r₁)t(r₁)),  (7.12)

of which (7.1) and (7.11) are special cases. It is readily shown that such a surface is developable if and only if α(ατ − βκ) = 0, and that the mapping (7.12) is isometric if and only if α = 1, β = 0 and b is constant. This is in complete agreement with the conclusion drawn above.

In an effort to establish the existence of an embedding of a Möbius band in three-dimensional Euclidean point space, Chicone & Kalton [25] considered a class of developable surfaces with parametrization

    x̄(r) = γ(r₁) + r₂ω(r₁),  (7.13)

where ω is a unit vector-valued function. The parametrization (7.13) encompasses (7.12), and therefore (7.1) and (7.11), as special cases. It is easy to show that the parametrization (7.13) is isometric if and only if ω is constant and everywhere orthogonal to t = γ′. This is again in agreement with the central conclusion of the present work.

Because the calculations performed in this section rely on representing vector and tensor fields in terms of their components relative to a fixed rectangular Cartesian basis, it is perhaps natural to wonder whether the results remain true if a curvilinear basis is used instead. Granted that the rectangular region D is also an admissible configuration, which seems to be an entirely reasonable assumption, Chen & Fried [26] demonstrate that the results of this section hold independent of basis.
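The isometry conditions quoted for the general parametrization (7.12) can be checked with the same frame-component calculation used above for (7.1); the sketch below (our addition) collects, power by power in r₂, the conditions imposed by C = I.

    import sympy as sp

    s, r2 = sp.symbols("s r2", real=True)
    kappa = sp.Function("kappa", positive=True)(s)
    tau, alpha, beta = [sp.Function(f, real=True)(s) for f in ("tau", "alpha", "beta")]

    # Frenet components of the partial derivatives of (7.12):
    d1 = sp.Matrix([1 + r2*sp.diff(beta, s),        # t-component
                    r2*(beta*kappa - alpha*tau),    # p-component
                    r2*sp.diff(alpha, s)])          # b-component
    d2 = sp.Matrix([beta, 0, alpha])

    C = sp.Matrix([[d1.dot(d1), d1.dot(d2)],
                   [d2.dot(d1), d2.dot(d2)]])

    # Isometry requires C = I for every r2, so each coefficient of each power
    # of r2 must vanish; the printed conditions force beta = 0, alpha = 1
    # (up to sign) and tau = 0, i.e. a constant binormal b.
    for expr in (C[0, 0] - 1, C[0, 1], C[1, 1] - 1):
        for k in range(3):
            c = sp.simplify(sp.expand(expr).coeff(r2, k))
            if c != 0:
                print(c)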
8. A relevant example

(a) General helical ribbons

We now provide an explicit example of a non-isometric mapping of the form (7.1) that takes the rectangular region D = {r ∈ E²: 0 ≤ r₁ ≤ l, |r₂| ≤ b} to a developable surface S. We further show that the surface S can be mapped isometrically only onto a planar region D̃ that is not rectangular. The latter mapping is directly relevant to the curved portions of Sadowsky's [1,2] construction of a Möbius band. For simplicity, we choose and fix a positively oriented orthonormal basis {e₁, e₂, e₃} for V³ and represent all vectors and tensors in terms of their components relative to that basis. We emphasize that e₁ and e₂ need not be the base vectors previously used for V². Our example is based on a circular helix H with axis e₃, radius ρ, pitch angle θ and arclength l > 0. Invoking the well-known identities κ = cos²θ/ρ and τ = sinθ cosθ/ρ for the curvature and torsion of such a helix, we find that the ratio (7.4) is constant:

    η = τ/κ = tanθ.  (8.2)

Taking γ to be an arclength parametrization of H and specializing (7.1) accordingly yields the mapping

    x̂(r) = γ(r₁) + r₂(b(r₁) + tanθ t(r₁)),  r ∈ D.  (8.5)

To ensure that the image of x̂ is free of overlap, we also impose the restriction

    πρ sinθ > b.  (8.6)

Since a mapping x̂ of the form (7.1) is isometric if and only if τ (and, thus, η) vanishes, we conclude from (8.2) that the particular mapping x̂ defined in (8.5) is not isometric. Indeed, by (7.10) and (8.2), the Cauchy-Green tensor C corresponding to (8.5) has the representation

    C = e₁ ⊗ e₁ + tanθ (e₁ ⊗ e₂ + e₂ ⊗ e₁) + sec²θ e₂ ⊗ e₂  (8.7)

and thus does not coincide with the two-dimensional identity tensor I for any non-zero pitch angle θ.

The mapping x̂ defined in (8.5) takes the rectangular region D to a surface S which coincides with a portion of the surface of a cylinder and is clearly developable. It therefore follows that S can be mapped isometrically to a planar region D̃. Although the developable surface S cannot be obtained by an isometric mapping from the rectangle D, we show below that it can be obtained by an isometric mapping x̃ from a planar region D̃ which is not rectangular. The planar region D̃ and the isometric mapping x̃ can be conveniently found on making reference to the polar decomposition (5.1), which can be interpreted as a composition involving the gradients of two mappings associated with x̂ defined in (8.5). The first element U in the composition is the gradient of a mapping that non-isometrically deforms D to another planar region, which is identified as D̃. The second element R in the composition is the gradient of a mapping that isometrically deforms the planar region D̃ to the surface S. The inverse of the second element R of the composition, defined on the range of R, is also isometric and maps S to D̃. The specific form of U corresponding to the mapping x̂ defined in (8.5) can be obtained by computing the square root of the Cauchy-Green tensor C in (8.7) and has the representation

    U = (2e₁ ⊗ e₁ + tanθ (e₁ ⊗ e₂ + e₂ ⊗ e₁) + (1 + sec²θ) e₂ ⊗ e₂)/√(3 + sec²θ).  (8.8)

This stretch tensor takes the rectangular region D to a parallelogram which can be rotated so that its midline coincides with that of D. The corresponding rotation tensor is found to be

    Q = (2e₁ ⊗ e₁ + tanθ (e₁ ⊗ e₂ − e₂ ⊗ e₁) + 2e₂ ⊗ e₂)/√(3 + sec²θ).  (8.9)

It then follows that the tensor

    QU = e₁ ⊗ e₁ + tanθ e₁ ⊗ e₂ + e₂ ⊗ e₂  (8.10)

takes D to a parallelogram, which we identify as D̃. If the rectangular region D is given by (7.2), the region D̃ is the parallelogram defined by

    D̃ = {r ∈ E²: 0 ≤ r₁ − (tanθ)r₂ ≤ l, |r₂| ≤ b}.  (8.11)

The (r₁, r₂)-coordinates of the vertices of D̃ are (−b tanθ, −b), (l − b tanθ, −b), (l + b tanθ, b) and (b tanθ, b). The isometric flattening of the surface S arising from (8.5) is therefore the parallelogram D̃ instead of the domain D of the mapping x̂ defined in (8.5). Consistent with the contents of the paragraph immediately prior to the paragraph containing (7.11), the images of the lines of constant r₁ in the isometric flattening of S are straight lines of length 2√(1 + tan²θ) b > 2b. Moreover, the tensor (QU)⁻¹ takes D̃ back to D. The isometric mapping x̃ that takes D̃ to S is therefore given by

    x̃(r) = x̂((QU)⁻¹r),  r ∈ D̃.  (8.12)

This example demonstrates unambiguously that a mapping x̂ of the form (7.1) can parametrize a developable surface that is not an isometric image of the rectangular region D and, thus, provides a concrete illustration of our general result concerning all such mappings.

Figure 1. A mapping from a planar region to a developable surface need not be isometric and, thus, is generally inconsistent with the constraint of material unstretchability: a mapping x̂ of the general form (7.1), with the specific form (8.15), takes a rectangular region D = {r ∈ E²: 0 ≤ r₁ ≤ l, |r₂| ≤ b} to a helical ribbon S that lies on a cylinder of radius 1/√2 and axis directed along e₃. The midline of S, indicated in red, is a circular helix with axis e₃, radius ρ = 1/√2, pitch angle θ = π/4 and length π. Its preimages in D and D̃, also indicated in red, are also of length π. Whereas S is not isometric to D, it is isometric to the parallelogram D̃ = {r: 0 ≤ r₁ − r₂ ≤ π, |r₂| ≤ 1} with acute interior angle π/4, horizontal side length π and inclined side length 2√2. The mapping x̃ defined in (8.20) is, on the contrary, isometric and accordingly bends D̃ into S without stretching. The mapping ξ̂ that takes D onto D̃ describes a homogeneous simple shear. (To avoid clutter, the basis vectors e₁, e₂ and e₃ are omitted.)
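The closed forms (8.8)-(8.10) can be confirmed numerically from (8.7) alone. The sketch below (our addition) computes the square root of C by eigendecomposition for the particular pitch angle θ = π/4 used in §8(b).

    import numpy as np

    theta = np.pi / 4
    t, sec2 = np.tan(theta), 1/np.cos(theta)**2

    C = np.array([[1, t], [t, sec2]])                     # (8.7)
    w, V = np.linalg.eigh(C)
    U = V @ np.diag(np.sqrt(w)) @ V.T                     # U = sqrt(C), cf. (5.3)
    den = np.sqrt(3 + sec2)
    U_closed = np.array([[2, t], [t, 1 + sec2]]) / den    # (8.8)
    Q = np.array([[2, t], [-t, 2]]) / den                 # (8.9)
    print(np.allclose(U, U_closed))                       # True
    print(np.allclose(Q.T @ Q, np.eye(2)))                # Q is a rotation
    print(np.round(Q @ U, 12))                            # [[1, 1], [0, 1]]: (8.10)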
(b) Specific helical ribbon

For illustrative purposes, we take

    θ = π/4,  ρ = √2 b/2  and  l = πb.  (8.13)

Moreover, since πρ sinθ = πb/2 > b, these choices comply with the restriction (8.6) needed to ensure that the parametrization is free of overlap. For simplicity and without loss of generality, we take the half-width b to be of unit length, in which case it follows that

    x̂(r) = o + (cos r₁ e₁ + sin r₁ e₂ + (r₁ + 2r₂)e₃)/√2,  r ∈ D.  (8.15)

The mapping x̂ defined in (8.15) maps the rectangular region D to a surface S which is a portion of the surface of a cylinder of radius 1/√2, as depicted in figure 1. In particular, relative to the Cartesian coordinate system with origin o and basis {e₁, e₂, e₃}, it respectively maps the corners (0, −1), (π, −1), (π, 1) and (0, 1) of D to the points with coordinates

    (1/√2, 0, −√2),  (−1/√2, 0, (π − 2)/√2),  (−1/√2, 0, (π + 2)/√2)  and  (1/√2, 0, √2).  (8.16)

The transformation tensor QU defined in (8.10) specializes to

    QU = e₁ ⊗ e₁ + e₁ ⊗ e₂ + e₂ ⊗ e₂.  (8.17)

We define a mapping ξ̂: D → E² by

    ξ̂(r) = QUr = (r₁ + r₂)e₁ + r₂e₂.  (8.18)

This mapping is a homogeneous simple shear that takes the rectangle D onto the parallelogram

    D̃ = {r ∈ E²: 0 ≤ r₁ − r₂ ≤ π, |r₂| ≤ 1}.  (8.19)

Crucially, whereas the rectangle D cannot be mapped isometrically to S, the parallelogram D̃ can be mapped isometrically to S, as depicted in figure 1. The (r₁, r₂)-coordinates of the vertices of D̃ are (−1, −1), (π − 1, −1), (π + 1, 1) and (1, 1). For the particular example (8.15), material fibres oriented along e₂ are elongated by a factor of √2. Moreover, the isometric mapping x̃ defined in (8.12) specializes to

    x̃(r) = o + (cos(r₁ − r₂)e₁ + sin(r₁ − r₂)e₂ + (r₁ + r₂)e₃)/√2,  r ∈ D̃.  (8.20)

It is readily verified that the isometric mapping (8.20) takes the parallelogram D̃ to S. In particular, relative to the Cartesian coordinate system with origin o and basis {e₁, e₂, e₃}, x̃ maps the vertices (−1, −1), (π − 1, −1), (π + 1, 1) and (1, 1) of D̃ respectively to the points with coordinates given in (8.16). From figure 1, it is evident that (8.20) simply wraps, without stretching, the parallelogram D̃ over a cylinder of radius 1/√2.

The procedure of constructing the mappings ξ̂ and x̃ and a planar region D̃ described above can be carried out for any mapping x̂ of the form (7.1). Alternatively, the decomposition of a non-isometric mapping x̂: D → E³ into a non-isometric mapping ξ̂: D → E² and an isometric mapping x̃: D̃ → E³ can be achieved by a change of independent variables, which effectively generates x̃. We briefly describe this approach in §9.
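The contrast between (8.15) and (8.20) can be checked directly with the finite-difference routine introduced in §5. The following sketch (our addition; the origin o is taken as the zero vector) shows that x̂ stretches the rectangle D while x̃ maps the parallelogram D̃ isometrically.

    import numpy as np

    s2 = np.sqrt(2.0)

    def xhat(r):    # (8.15), with the origin o taken as the zero vector
        return np.array([np.cos(r[0]), np.sin(r[0]), r[0] + 2*r[1]]) / s2

    def xtilde(r):  # (8.20)
        return np.array([np.cos(r[0] - r[1]), np.sin(r[0] - r[1]),
                         r[0] + r[1]]) / s2

    def cauchy_green(f, r, h=1e-6):
        F = np.column_stack([(f(r + h*e) - f(r - h*e)) / (2*h)
                             for e in np.eye(2)])
        return F.T @ F

    r = np.array([1.0, 0.5])
    print(np.round(cauchy_green(xhat, r), 6))    # [[1, 1], [1, 2]]: agrees with (8.7)
    print(np.round(cauchy_green(xtilde, r), 6))  # identity: isometric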
9. Inducing isometry by a change of variables

For a mapping x̂ belonging to the class (7.1), we consider a change of the independent variable r defined by a mapping ξ̂: D → E² of the particular form

    ξ̂(r) = (r₁ + r₂η(r₁))e₁ + r₂e₂.  (9.1)

Additionally, denoting the image of D under ξ̂ by

    D̃ = ξ̂(D),  (9.2)

we define a mapping x̃: D̃ → E³ implicitly by

    x̃(ξ̂(r)) = x̂(r),  r ∈ D.  (9.3)

Then, using the chain rule to differentiate (9.3) with respect to r and invoking (7.10), we find that the gradient J of x̃ on D̃ must obey

    JᵀJ = I,  (9.4)

from which it follows that x̃ defined by (9.3) is an isometric mapping of the planar region D̃ to the surface S determined by x̂. It is, however, essential to recognize that the shape of the domain D̃ of x̃ is unknown unless the ratio η = τ/κ involving the curvature κ and torsion τ of the midline C of S is itself known. Moreover, that shape generally differs from D unless τ vanishes identically. Figure 1 illustrates the simple case where η is constant. Any strategy that seeks to determine a mapping x̂ belonging to the class (7.1), say by minimizing the energy (2.2) of Sadowsky [1,2] or its generalization (2.10) due to Wunderlich [7,8], will therefore generally yield a developable surface S that is isometric to a planar region D̃ different from the domain D of x̂, leaving unsolved the problem of finding the shape of an unstretchable sheet identified with a planar region D.
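The property (9.4) follows from (7.10) by a short symbolic calculation, sketched below (our addition): by the chain rule, F = JG with G the gradient of (9.1), so JᵀJ = G⁻ᵀCG⁻¹.

    import sympy as sp

    s, r2 = sp.symbols("s r2", real=True)
    eta = sp.Function("eta", real=True)(s)
    ep = sp.diff(eta, s)

    C = sp.Matrix([[(1 + r2*ep)**2, eta*(1 + r2*ep)],
                   [eta*(1 + r2*ep), 1 + eta**2]])     # (7.10)
    G = sp.Matrix([[1 + r2*ep, eta],
                   [0, 1]])                            # gradient of (9.1)
    JTJ = sp.simplify(G.inv().T * C * G.inv())         # J^T J by the chain rule
    print(JTJ)                                         # identity matrix: (9.4)

The inversion of G is legitimate because, as stipulated earlier, 1 + r₂η′ > 0 throughout D.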
10. Alternative strategies

We have shown that the class of mappings of the form (7.1), which has been used extensively in the literature to model bands and ribbons, is not suitable when the bands and ribbons are made of unstretchable material sheets. We now offer three possible strategies for avoiding the drawbacks of working with such mappings. One of these strategies involves relinquishing the constraint of unstretchability. The other two mimic approaches that are familiar from treatments of internally constrained three-dimensional bodies.

(a) Removing the constraint of unstretchability

Since mappings of the form (7.1) are not generally isometric and are hence inadequate for the purpose of modelling pure bendings of unstretchable flat material sheets, one possible remedy would involve dropping the isometry requirement in favour of considering stretchable flat material sheets. Among other things, this would require a modification of the elastic energy function to include the change of elastic energy induced by in-plane stretching. The Sadowsky and Wunderlich functionals (2.2) and (2.10) incorporate bending only. As a consequence, these functionals are insensitive to the energy that is required to deform, for example, the rectangular strip D in figure 1 to the parallelogram D̃. This is physically unreasonable. In a theory that incorporates the effect of in-plane stretching, the energy density generally depends on the stretch tensor U (or, if the flat material sheet is assumed to be isotropic, the principal stretches of U), in addition to the curvature of the surface. Associated models are more complicated, both kinematically and in regard to constitutive relations. There is, however, another more fundamental and seemingly inescapable problem with this strategy. The class of mappings of the form (7.1) has been used previously because of the belief that it describes the deformations of unstretchable flat material sheets. As we have demonstrated that this belief is unfounded, a justification would be needed to support continued use of such mappings in a context where stretching energy is properly incorporated.

(b) Using strictly isometric mappings

Another possible strategy would be to replace (7.1) with the correct and complete class of isometric parametrizations. This strategy is consistent with the spirit of the literature concerning Möbius bands made from unstretchable flat material sheets, as the constraint (5.9) serves as a good approximation for a large class of two-dimensional materials, including those often used to construct model Möbius bands. It has the obvious advantage of leading to a description of great simplicity, in which the energy function depends on the mean curvature only. The task of characterizing the class of three-times continuously differentiable isometric mappings was recently addressed by the present authors and Fosdick [27]. That class neither contains nor is contained in the class (7.1), albeit there exists an intersection consisting of precisely those mappings in (7.1) with zero torsion τ, namely the degenerate case where the midline is planar and the conditions needed to describe a Möbius band cannot be met. Moreover, in contrast to the position of a point on a surface S determined by a mapping of the form (7.1), the position of a point on a surface mapped isometrically from a planar region R depends on the coordinates of the planar region through certain intermediate coordinates, which are generated by the characteristic curves of an ordinary differential equation associated with (6.1), which constitutes a system of first-order differential equations for the mapping x̂.

(c) Using a theory with properly imposed constraints

The third strategy, which is perhaps more familiar to workers in mechanics and which we are currently pursuing independent of the work reported here, would be to develop a theory for internally constrained flat material sheets. In such a theory, unstretchability is treated as a constraint on the class of admissible mappings used to parametrize surfaces. The theory can be developed on general grounds for a class of constraints that includes the constraint (5.9) of unstretchability as a special case. Having derived the general theory, an energy functional that incorporates bending only, and is therefore compatible with the constraint of unstretchability, can be used. The partial differential equations of equilibrium and the complete set of edge conditions can be derived by computing the first variation of that energy functional subject to the constraint (5.9). The resulting boundary-value problem, which includes reactive forces due to the constraint, needs to be solved in conjunction with the constraint. By contrast, the approach described in the previous subsection amounts to satisfying the constraint a priori and substituting the result into the objective functional.

Data accessibility. The research reported here is purely analytical and thus does not have any associated experimental data.

Authors' contributions. The authors contributed equally to the research and to writing the manuscript. They both gave final approval for submission.

Competing interests. Neither of the authors has competing interests.

Funding. The work of Eliot Fried was supported by the Okinawa Institute of Science and Technology Graduate University with subsidy funding from the Cabinet Office, Government of Japan.
E-Commerce Shopping Intentions in the Industrial 4.0 Era: An Analysis of the Impact of Millennial Attitudes

The rise of the millennial generation has defined the fourth industrial revolution. This generation is known as innovative, creative and freedom-loving. The contemporary generation, often known as the millennial generation or the generation born at the turn of the millennium, is currently exhibiting highly noticeable shifts in consumer lifestyles. At the same time, digital technology is beginning to permeate many facets of life. This study uses quantitative data. The sample comprised 300 millennial-generation respondents at a private university in Surabaya. The results show that trust, price and service quality have a significant effect on the online buying attitude of the millennial generation, that subjective norms affect the online buying interest of the millennial generation, and that attitudes have a significant effect on online buying interest.

INTRODUCTION

In the current era of globalization, business competition is becoming increasingly fierce (Anwar & Adidarma, 2016). Businesses compete with one another in order to survive in the face of that competition. Nowadays, information technology is not difficult to acquire, and it affects every aspect of people's lives; it is constantly evolving, changing and becoming more sophisticated (Mujiyana & Elissa, 2013). This sophistication brings many conveniences to people's lives. That convenience is also felt in the economic sphere, particularly in trade. Commerce and information technology now go hand in hand, and the word "e-commerce" was coined as a result of their convergence. Because people in the Industrial Revolution 4.0 age compete not only with one another but also with machines supported by digital technology systems, business competition today is extremely fierce. In this era, humans are required to prepare themselves with reliable competence as creators of systems and experts in operating digital technology systems.

Millennials are a generation that contributes much to, and drives, the Industrial 4.0 revolution. Millennials are the ones who advance technology and make the era change digitally (Hartatin & Simanjuntak, 2016). The opportunity presented by the 4.0 industrial revolution is enormous if Indonesia is able to adapt and transform appropriately. The millennial generation has an important role in seizing this opportunity, because its members are the group most able to adapt amid industrial digitalization. Because technological advancement will always go hand in hand with human development and scientific progress, the industrial revolution 4.0 demands that humans participate in technical breakthroughs (Lupiyoadi, 2013). Human thought, behaviour and interpersonal relationships have all been significantly altered by the Fourth Industrial Revolution, and this period will have an impact on a variety of human activities and social relationships (Akroush & Al-Debei, 2015). A company's objective is to increase sales volume through an internet marketing approach by attracting new consumers and retaining existing ones.
By introducing products with fresher concepts and steadily raising product quality, it is feasible to draw in new clients or keep hold of current ones. Data from an internet user survey conducted in Indonesia in 2015 indicated that there were 88.1 million internet users in the country, according to the Center for Communication Studies (PUSAKOM) UI and the Association of Indonesian Internet Service Users (APJII). Measured against the country's 252.4 million total inhabitants, the percentage of internet users in Indonesia is 34.9%. Compared with the 2013 figure of 28.6%, this represents sharp growth. With a combined 52 million users, the Java and Bali regions have the highest numbers of internet users in Indonesia; with 16.4 million users, West Java is the province with the largest online population. Indonesia has had relatively rapid growth in online retail (Haryono, 2014). There were 36% more active internet users in 2013 than there were in 2010 (Laudon & Traver, 2014). This rate is nearly twice as high as growth in Malaysia and Thailand, twice as high as in the Philippines, and even 3.5 times Singapore's rate of internet growth.

There are many different sorts of e-commerce growing around the world, but only a few of them are growing quickly in Indonesia. The marketplace is the kind of e-commerce that is growing in Indonesia. The benefit of using a marketplace over setting up a personal website or online store is that vendors can offer their goods through electronic media. Tokopedia.com, bukalapak.com, blibli.com, Zalora, Lazada, OLX, Sale Stock and Elevenia are only a few of the Indonesian marketplaces. Sellers only need to provide photos of merchandise and upload images including prices and other descriptions of their merchandise. Network security is a concern because a computer connected to an internet network faces many more security threats than a computer that is not connected to any network (Ersada, 2021). Convenience is the next significant aspect that can have an impact on online purchases. The internet has made every facet of life more convenient, including the ability to buy and sell things online; anyone who wishes to shop online can do so with ease from anywhere at any time, as long as their device is linked to the internet (Efraim Turban & King Jae Kyu Lee, 2015).

A marketplace usually offers the following purchasing steps: select, buy, pay and receive goods. In reality, however, the steps are not as easy as advertised. Because the marketplace is a third party that mediates the transaction process, customers who want to buy products must go through a longer series of steps than customers buying directly from a seller: when a client chooses to buy something through a marketplace, buyers and sellers must communicate via that third party rather than directly, unlike in other online stores, where buyers and sellers can communicate directly without a lengthy process through a third party [3]. The high number of smartphone users in Indonesia makes it easier for people to connect and make online shopping transactions anywhere and at any time. This is what makes the online shopping business in Indonesia increasingly widespread and has prompted the emergence of several online shopping applications that provide various conveniences.
The degree of convenience is determined by how readily the user can use the system, how free of issues it is, and how simple it is for a novice to operate. Some of the conveniences offered by online shopping applications are time efficiency and the ability to buy desired items without meeting face to face, as well as features that are simple to use and understand. Besides being easy to use, several online shopping applications offer discounts and other facilities such as free shipping and easy payments: there are options to pay on the spot, by bank transfer, via ATM, by credit card, or at minimarkets that have partnered with the application.

Today's students often discuss online shopping as a topic of conversation, and students also feel spoiled by the convenience of online shopping and by the many products offered that are sometimes not available in offline stores. The current spending habits of students are shaped by lifestyles that frequently follow trends and indulge excessively, and this represents the biggest opportunity for producers to make money. Noticing this opportunity, sellers try to reach millennial-generation buyers by making various offers such as superior quality and low prices, and by offering other conveniences such as attractive features and delivery services. Today's millennial consumers are critical and intelligent in choosing which products they need and how those products benefit them. Businesses that use the marketing concept to sell their products therefore need to be aware of how customers behave and what factors affect their purchasing decisions. Along with the development of technology and the internet in Indonesia, producers are trying to take advantage of opportunities to gain profits. Students, who belong to the millennial age group, use a variety of methods to express themselves, from picking a method of study to selecting friends, how to dress, how to choose entertainment, and even how to shop. One of the channels is social media: apart from being a means of socializing, it lets consumers comfortably and easily obtain the product information they need or want (Hidvégi & Kelemen-Erdős, 2016). Additional study on customer buying interest was undertaken in the online marketplace Bukalapak.com; that research found that consumer buying interest is influenced by usefulness, shopping pleasure, shopping experience and consumer trust.

METHODS

A quantitative research method was selected for the study. The population studied is the modern community of Sukabumi that engages in online shopping, while the city of Surabaya is the site of the research; the subjects are young adults of the millennial generation who shop online in the marketplace. The sample size was determined using the Slovin formula, and 300 respondents made up the sample used in this investigation. The measurement method employs a structural equation model (SEM) with purchase intention and online shopping decisions as endogenous variables and service quality and trust as exogenous variables. WarpPLS 6.0 was used as the measurement tool. WarpPLS 6.0 is a program for structural equation models that simultaneously tests the existence of linear relationships between latent constructs, in either a reflective or a formative manner (Haryanto & Priyo, 2020).
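As a concrete illustration of the sampling and mediation criteria used in this methodology, the following Python sketch (our illustration; the population figure and margin of error are hypothetical, since the paper does not report them) implements the Slovin formula and the VAF classification described below.

    def slovin(N, e):
        """Slovin formula: minimum sample size for population N and margin of error e."""
        return N / (1 + N * e**2)

    def mediation(vaf_percent):
        """Classify an indirect effect from its variance accounted for (VAF)."""
        if vaf_percent > 80:
            return "full mediation"
        if vaf_percent >= 20:
            return "partial mediation"
        return "no mediation"

    # Hypothetical population of 1200 eligible students at a 5% margin of error:
    print(round(slovin(1200, 0.05)))   # 300, matching the reported sample size
    print(mediation(45.0))             # partial mediation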
For the measurement model, the reliability criterion is a significant weight parameter with a p-value below 0.05. Collinearity is assessed using tolerance > 0.2 and VIF < 5. Indirect effects are examined using the premise that VAF > 80% indicates full mediation, 20% ≤ VAF ≤ 80% indicates partial mediation, and VAF < 20% indicates no mediation. Goodness-of-fit statistics measure the degree to which the theoretical model fits the empirical data. Online surveys were used as the primary means of data collection, and a Likert scale was used as the measurement scale for the variables. Two types of data sources were employed: primary data, gathered directly through questionnaires, and secondary data, obtained in the form of documents.

RESULTS

Before the analysis of the study's findings is discussed, a descriptive analysis of the respondents' characteristics and a descriptive analysis of their replies to the research variables are presented; both analyses serve to summarize the respondents' responses to the variables under consideration. The statistical analysis followed a two-stage structural equation modelling (SEM) methodology comprising measurement model analysis and structural equation model analysis. AMOS 22 software was used to streamline the calculations and to assess the assumptions prior to analysis, including descriptive analysis, validity testing, reliability testing, normality and outliers, so as to ensure the accuracy of the results.

This study's analysis of consumer behaviour indicates that the use of e-commerce services influences millennials: as the usefulness of current technology increases, so will students' consumption habits. This is consistent with the current digital economy period being well liked by millennials, who see online services as practical and time-saving. According to this report, students who are millennials are categorized as being busy with a variety of activities. Students must be independent and not rely on anyone when attending lectures, because numerous activities take up a great deal of time in the world of lectures and can keep students quite busy: a demanding class schedule, numerous assignments, time-consuming organizational tasks, and so on. Owing to these extremely time-consuming activities, students do not have enough time to shop for fashion products that can enhance their appearance. This study establishes that the information media variable has a negative and significant impact on students' use of e-commerce services for consumption. This indicates that, despite the fact that Indonesia's information media actively promote e-commerce, students' consumption habits have not changed. Given that customer demand is a consequence of purchasing power, price is an essential component of consumer purchasing behaviour and one of the considerations that consumers weigh [24]. This is important because people value evidence and quality over mere information. Students must be proficient in technology and open-minded towards the many types of existing knowledge in the world of lectures. Information now spreads rapidly across all media, so students are constantly aware of the information that is the subject of widespread discussion. Knowledge of e-commerce's presence is today dispersed across a variety of media, including television, the internet, social media, seminars and word of mouth. The general public can learn more about e-commerce through these numerous channels.
Nevertheless, this information only gives a general overview of each e-commerce platform, not information about the goods sold there. Indeed, people seek more specific information before making a purchase, in order to learn more about a product's specifications. Moreover, using e-commerce services has an impact on students' consumption habits, so the trust that consumers have in online retailers will increase along with the level of students' consumption behaviour. Notwithstanding their lack of readiness for online purchases, research on generation Y customers in Malaysia reveals that trust is the most important factor when they choose their online shopping sites (Akroush & Al-Debei, 2015). It was also revealed that trust is a key determinant of customer attitudes toward online shopping. The confidence of the millennial generation therefore matters greatly now that the age of the digital economy has begun to take hold, and millennials' trust in online transactions may increase as a result of their positive experiences with e-commerce services.

Customer attitudes are significantly impacted by the quality of the services offered by online retailers, which gives those retailers a competitive edge. Among these factors, physical form (tangibility) is the most significant, because Instagram already has a distinct location and a customer service contact, which can have a greater impact on millennial-generation consumer attitudes towards the social networking site Instagram. Instagram is a social media platform that has grown in popularity as a platform for online commerce. When a service is provided in person, its quality is usually seen right away, but when it is delivered online, customers cannot see it for themselves, making it a weak factor in millennials' opinions about making purchases. Based on prior experience, consumers' feelings of security and comfort when using e-commerce services may develop. Students feel at ease if they receive good service when purchasing items from e-commerce platforms, and this comfort gradually fosters trust. Apart from arising from a sense of comfort, students' trust in e-commerce services can also arise from the experience of finding that goods ordered through e-commerce, once received, match their expectations. Students, as consumers, will then have confidence that the goods offered by e-commerce are completely true to the original items.

This demonstrates that the pace of the economy is still strong thanks to the increase in people's spending power. On the other hand, society does engage in some consumerism. Today, e-commerce plays a significant part in helping people meet their basic necessities, including their interests and other wants; e-commerce is used for everything, and the government is already utilizing it to lessen social interaction and physical contact. Hence, it is not surprising that e-commerce usage is increasing. Every generation will benefit from the advancement of technology, but only the generation that is able to adapt will be able to master it, including the millennial generation. The millennial generation was born in an age in which everything was technological and will logically adapt quickly, so that technology helps them carry out their functions. Over time, however, a misperception of the use of technology has arisen, and an understanding of the associated problems has emerged in the 4.0 era.
The science of Human Behavior and the Social Environment treats assessment as a tool for understanding the nature and condition of client problems, one aspect of which is the biopsychosocial dimension.

CONCLUSION

It is clear from the description of the analysis and the findings above that e-commerce and the usage of technology are related. The high level of digital transactions shows that people exhibit consumptive behavior, so e-commerce has a potential market in Indonesia. In addition to the increasing level of internet usage, the development of the e-commerce business has also been driven by the high public interest in the practicality of such activities. During the pandemic, the services and goods provided by website-based business sites served a very large number of clients and had a positive impact on the country's economy. External factors such as the environment and lifestyle also play a role in purchasing decisions. Yet it is important to pay closer attention to whether purchases are actually necessary or are made simply to keep up with fashion trends; in this way, we can restrain compulsive and hedonistic behavior. It is preferable to exercise creativity and self-control when making purchases in the transition from era 4.0 to era 5.0. There will be many changes and technical advancements in the future, which will of course be more sophisticated and make human life easier, and we must be shrewd in determining what to do and what not to do in this situation. The millennial generation will be interested in making online purchases based on the level of trust and the quality of service offered. If the millennial generation already has an interest in online purchases, it is very likely that they will make a purchasing decision, because the influence of interest on purchasing decisions is quite large. Providers of goods and sellers in online stores need to take this into account: they must increase the trust and the quality of the services they provide.

The emergence of information technology and automatically managed manufacturing processes is a result of the rapid development of computer technology, which transforms a domain of expertise into a technology-based application. The industrial revolution 4.0 has brought about the birth of digital technology, which has an impact on human life everywhere. Throughout the fourth industrial revolution, activities have been conducted through automated systems, and as internet technology advanced, it became not only a means of establishing global connections but also a foundation for the processes used in commercial transactions. Technological developments in Indonesia have affected people's lifestyles, especially the millennial generation, which shows a high level of enthusiasm for technological developments. The millennial generation has a hedonic behavior pattern in which purchases are made on the basis of pleasure, sensory involvement, and necessity [27]. With the very rapid development of technology, millennials are entering the 4.0 era, in which everything is digital and everyone, from parents to children, has been trained in using technology such as mobile phones and gadgets. Shopping applications, or mobile shopping (MS), are among the technical advancements that are currently popular with many individuals.
Mobile shopping takes place whenever someone makes a purchase using a smartphone or another device connected to the internet. Mobile shopping is very varied, because there are many shopping platforms, each application with different features and systems. The many services available to consumers make it easier to buy goods and to find the products they want and need without having to go to every store. Another convenience of mobile shopping is that consumers can interact directly with shop owners without meeting them, because the application displays a contact number for the seller; this direct line of contact helps consumers trust the application.
4,624.2
2023-06-25T00:00:00.000
[ "Business", "Computer Science" ]
A Lipschitz metric for the Hunter-Saxton equation

We analyze stability of conservative solutions of the Cauchy problem on the line for the (integrated) Hunter-Saxton (HS) equation. Generically, the solutions of the HS equation develop singularities with steep gradients while preserving continuity of the solution itself. In order to obtain uniqueness, one is required to augment the equation itself by a measure that represents the associated energy, and the breakdown of the solution is associated with a complicated interplay where the measure becomes singular. The main result in this paper is the construction of a Lipschitz metric that compares two solutions of the HS equation with the respective initial data. The Lipschitz metric is based on the use of the Wasserstein metric.

The integrated HS equation reads
(1.1) u_t + u u_x = (1/4)(∫_{-∞}^x u_x²(t,y) dy − ∫_x^∞ u_x²(t,y) dy).
The equation has been extensively studied, starting with [13,14]. The initial value problem is not well-posed without further constraints: Consider the trivial case u_0 = 0, which clearly has u(t,x) = 0 as one solution. However, as can be easily verified,
(1.2) u(t,x) = −(αt/4) I_{x ≤ −αt²/8} + (2x/t) I_{|x| < αt²/8} + (αt/4) I_{x ≥ αt²/8}
is also a solution for any α ≥ 0. Here I_A is the indicator (characteristic) function of the set A. Furthermore, it turns out that the solution u of the HS equation may develop singularities in finite time in the following sense: Unless the initial data is monotone increasing, we find
(1.3) inf(u_x) → −∞ as t ↑ t* = 2/sup(−u_0').
Past wave breaking there are at least two different classes of solutions, denoted conservative (energy is conserved) and dissipative (energy is removed locally) solutions, respectively, and this dichotomy is the source of the interesting behavior of solutions of the equation. We will in this paper consider the so-called conservative case where an associated energy is preserved. Zhang and Zheng [19,20,21] gave the first proof of global solutions of the HS equation on the half-line, using Young measures and mollifications, for compactly supported initial data. Their proof covered both the conservative case and the dissipative case. Subsequently, Bressan and Constantin [1], using a clever rewrite of the equation in terms of new variables, showed global existence of conservative solutions without the assumption of compactly supported initial data. The novel variables turned the partial differential equation into a system of linear ordinary differential equations taking values in a Banach space, in which the singularities were removed. A similar, but considerably more complicated, transformation can be used to study the very closely related Camassa-Holm equation; see [2,11]. The convergence of a numerical method to compute the solution of the HS equation can be found in [10]. We note in passing that the original form of the HS equation is (u_t + u u_x)_x = (1/2) u_x², and, like most other researchers working on the HS equation, we prefer to work with an integrated version. However, in addition to (1.1), one may study, for instance, u_t + u u_x = (1/2) ∫_{-∞}^x u_x²(t,y) dy, and while the properties are mostly the same, the explicit solutions differ. Our aim here is to determine a Lipschitz metric d that compares two solutions u_1(t), u_2(t) at time t with the corresponding initial data, i.e., d(u_1(t), u_2(t)) ≤ C(t) d(u_1(0), u_2(0)), where C(t) denotes some increasing function of time. The existence of such a metric is clearly intrinsically connected with the uniqueness question, and as we could see from the example where (1.2) as well as the trivial solution both satisfy the equation, this is not a trivial matter. Unfortunately, none of the standard norms in H^s or L^p will work.
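As a quick check that (1.2) solves (1.1) (a sketch, using the forms of (1.1) and (1.2) as reconstructed above), one can verify the equation directly on the middle region and confirm conservation of the energy:

\begin{aligned}
&\text{On } |x| < \tfrac{\alpha t^2}{8}: \quad u = \tfrac{2x}{t}, \qquad u_t + u u_x = -\tfrac{2x}{t^2} + \tfrac{2x}{t}\cdot\tfrac{2}{t} = \tfrac{2x}{t^2},\\
&\tfrac14\Big(\int_{-\infty}^{x} u_y^2\,dy - \int_{x}^{\infty} u_y^2\,dy\Big)
= \tfrac14\cdot\tfrac{4}{t^2}\Big[\big(x + \tfrac{\alpha t^2}{8}\big) - \big(\tfrac{\alpha t^2}{8} - x\big)\Big] = \tfrac{2x}{t^2},\\
&\int_{\mathbb R} u_x^2\,dx = \tfrac{4}{t^2}\cdot\tfrac{\alpha t^2}{4} = \alpha \quad\text{for all } t > 0.
\end{aligned}

The two outer regions are checked the same way: there u_x = 0 and u_t = ±α/4, which matches (1/4)(±α) on the right-hand side.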
A Lipschitz metric was derived in [4], and here we offer an alternative metric that also provides a simpler and more efficient way to solve the initial value problem. Let us now be more precise about the notion of solution. We consider the Cauchy problem for the integrated and augmented HS equation, which, in the conservative case, is given by
(1.4a) u_t + u u_x = (1/4)(∫_{-∞}^x dµ(t) − ∫_x^∞ dµ(t)),
(1.4b) µ_t + (u µ)_x = 0.
In order to study conservative solutions, the HS equation (1.4a) is augmented by the second equation (1.4b), which keeps track of the energy. A short computation reveals that if the solution u is smooth and µ = u_x² dx, then equation (1.4b) is clearly satisfied. In particular, it shows that the energy µ(t,R) = µ(0,R) is constant in time. However, the challenge is to treat the case without this regularity, and the proper way to do that is to let µ be a nonnegative and finite Radon measure. When there is a blow-up in the spatial derivative of the solution (cf. (1.3)), energy is transferred from the absolutely continuous part of the measure to the singular part, and, after the blow-up, the energy is transferred back to the absolutely continuous part of the measure. Thus, we will consider the solution space consisting of all pairs (u, µ) with µ ∈ M^+(R), where M^+(R) denotes the set of all nonnegative and finite Radon measures on R. We would like to identify a natural Lipschitz metric which measures the distance between pairs (u_i, µ_i), i = 1, 2, of solutions. The Lipschitz metric constructed in [4] (and extended to the two-component HS equation in [16,17]) is based on the reformulation of the HS equation in Lagrangian coordinates, which at the same time linearizes the equation. However, there is an intrinsic non-uniqueness in Lagrangian coordinates, as there are several distinct ways to parametrize the particle trajectories for one and the same solution in the original, or Eulerian, coordinates. This has to be accounted for when one measures the distance between solutions in Lagrangian coordinates, as one has to identify different elements belonging to one and the same equivalence class. We denote this as relabeling. In addition, for this construction one not only needs to know the solution in Eulerian coordinates, but also in Lagrangian coordinates for all t. The present approach is based on the fact that a natural metric for measuring distances between Radon measures (with the same total mass) is given through the Wasserstein (or Monge-Kantorovich) distance d_W, which in one dimension is defined with the help of pseudo inverses; see [18]. This tool has been used extensively in the field of kinetic equations [15,9], conservation laws [3,6], and nonlinear diffusion equations [8,7,5]. To be more precise, given two positive and finite Radon measures µ_1 and µ_2, where we for simplicity assume that µ_1(R) = µ_2(R) = C, let F_i(x) = µ_i((−∞, x)) and define their pseudo inverses χ_i : [0, C] → R by χ_i(η) = sup{x : F_i(x) < η}. Then, we define the distance d_W(µ_1, µ_2) in terms of the difference of the pseudo inverses. As far as the distance between u_1 and u_2 is concerned, we are only interested in measuring the "distance in the L^∞ norm". Thus we introduce the distance d by measuring u_1 ∘ χ_1 − u_2 ∘ χ_2 accordingly. For this to work, it is necessary that this metric behaves nicely under the time evolution. Thus, as a first step, we are interested in determining the time evolution of both χ(t,η), the pseudo inverse of µ(t), and u(t, χ(t,η)). Let (u(t), µ(t)) be a weak conservative solution to the HS equation with total energy µ(t,R) = C. To begin with, we assume that F(t,x) is strictly increasing and smooth, which greatly simplifies the analysis.
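A small numerical sketch of the pseudo-inverse construction (for a purely atomic measure the pseudo inverse is a step function whose values are the atom locations; the two example measures here are hypothetical):

import numpy as np

def pseudo_inverse(xs, weights, etas):
    """chi(eta) = sup{x : F(x) < eta} for mu = sum_k weights[k]*delta_{xs[k]},
    with xs sorted increasingly and F(x) = mu((-inf, x))."""
    F = np.concatenate([[0.0], np.cumsum(weights)])  # F jumps at each atom
    chi = np.empty_like(etas)
    for i, eta in enumerate(etas):
        k = np.searchsorted(F, eta, side="left") - 1  # last atom with F < eta
        chi[i] = xs[min(max(k, 0), len(xs) - 1)]
    return chi

alpha = 1.0
etas = np.linspace(1e-6, alpha, 1000)
chi1 = pseudo_inverse(np.array([0.0]), np.array([alpha]), etas)  # alpha*delta_0
chi2 = pseudo_inverse(np.array([0.5]), np.array([alpha]), etas)  # alpha*delta_{1/2}
# In one dimension the Wasserstein distance is the L1 distance of the pseudo inverses:
print(np.trapz(np.abs(chi1 - chi2), etas))  # ~0.5 = alpha * |0 - 1/2|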
Recall that χ(t, ·) : [0, C] → R is given by
χ(t,η) = sup{x : F(t,x) < η}, where F(t,x) = µ(t, (−∞, x)).
According to the assumptions on F(t,x), we have that F(t, χ(t,η)) = η for all η ∈ [0, C] and χ(t, F(t,x)) = x for all x ∈ R. Direct formal calculations yield that
(1.6) χ_t(t,η) = −(F_t/F_x)(t, χ(t,η)).
Recalling (1.4b) and the definition of F(t,x), we have
(1.7) F_t(t,x) = −u(t,x) F_x(t,x).
Thus, combining (1.6) and (1.7), we obtain χ_t(t,η) = u(t, χ(t,η)), where we again have used that χ(t, F(t,x)) = x for all x ∈ R. As far as the time evolution of U(t,η) = u(t, χ(t,η)) is concerned, we have
U_t(t,η) = (u_t + u u_x)(t, χ(t,η)) = (1/4)(η − (C − η)) = η/2 − C/4.
Thus we get the very simple system of ordinary differential equations
(1.8) χ_t(t,η) = U(t,η), U_t(t,η) = η/2 − C/4.
The global solution of the initial value problem is simply given by
(1.9) χ(t,η) = χ_0(η) + t U_0(η) + (t²/2)(η/2 − C/4), U(t,η) = U_0(η) + t(η/2 − C/4).
The above derivation is only of formal character, and it is only valid if F(t,x) is strictly increasing and smooth. However, it turns out that the simple result (1.8) also persists in the general case, but the proof is considerably more difficult, and it is the main result of this paper.

The Lipschitz metric for the Hunter-Saxton equation

Let us study the calculations (1.5)-(1.9) on two explicit examples.

Example 2.1. (i) Let u_0 be given in terms of the error function erf(x) = (2/√π) ∫_0^x e^{−t²} dt. We find F_0 and its pseudo inverse explicitly, as well as C = F_0(∞) = √π. This yields the initial data (χ_0, U_0). Considering the system of ordinary differential equations (1.8) with initial data (χ, U)|_{t=0} = (χ_0, U_0), we find the explicit solution; see Figure 1. Observe that it is not easy to transform this solution explicitly back to the original variable u. (ii) Let u_0 be unbounded; although u_0 is not bounded, the same transformations apply, and we again find the explicit solution; see Figure 2. Again it is not easy to transform this solution explicitly back to the original variable u.

Let us next consider an example where the initial measure is a pure point measure.

Example 2.2. This simple singular example better shows the interplay between measures µ and their pseudo inverses χ. Consider the example u_0 = 0 and µ_0 = αδ_0, where δ_0 is the Dirac delta function at the origin and α ≥ 0. The corresponding pseudo inverse χ_0 : [0, α] → R is then given by χ_0(η) = 0 for η ∈ (0, α]. In general one observes that jumps in F_0(x) are mapped to intervals where χ_0(η) is constant, and vice versa. This means in particular that intervals where F_0(x) is constant shrink to single points. Moreover, if F_0(x) is constant on some interval, then u_0(x) is also constant on the same interval. Next we compute the time evolution of both χ(t,η) and U(t,η) = u(t, χ(t,η)). Following the approach in [4], we obtain that the corresponding solution in Eulerian coordinates reads, for t positive,
(2.1) u(t,x) as in (1.2), with µ(t) = u_x²(t,x) dx = (4/t²) I_{|x| < αt²/8} dx.
Calculating the pseudo inverse χ(t,η) and U(t,η) = u(t, χ(t,η)) for each t then yields
(2.2) χ(t,η) = (t²/2)(η/2 − α/4), U(t,η) = t(η/2 − α/4),
and, in particular, that χ_t = U and U_t = η/2 − α/4. Thus we still obtain the same ordinary differential equation (1.8) as in the smooth case! In addition, note that χ_t(0,η) = 0 for all η ∈ (0, α], and hence the important information is encoded in U_t(t,η). We can of course also solve χ_t = U and U_t = η/2 − α/4 directly with initial data χ_0 = U_0 = 0, which again yields (2.2). To return to the Eulerian variables u and µ, we have in the smooth region that u(t, χ(t,η)) = U(t,η), and we need to extend U and χ to all of R by continuity. Returning to the Eulerian variables, we recover (2.1). We can also depict the full solution in the (x,t) plane in the new variables; see Figure 3. The next example shows the difficulties that one has to face in the general case, where the solution encounters a breakdown in the sense of steep gradients. The above examples already hint that the interplay between Eulerian and Lagrangian coordinates is going to play a major role in our further considerations.
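A minimal numerical check of the closed-form solution, assuming the reconstructed system (1.8)/(1.9); the parameter values mirror Example 2.2 (u_0 = 0, µ_0 = αδ_0):

import numpy as np

def evolve(chi0, U0, eta, C, t):
    # Exact solution (1.9) of chi_t = U, U_t = eta/2 - C/4.
    a = eta / 2.0 - C / 4.0
    return chi0 + t * U0 + 0.5 * t**2 * a, U0 + t * a

alpha, t = 1.0, 2.0
eta = np.linspace(1e-9, alpha, 5)
chi, U = evolve(0.0, 0.0, eta, C=alpha, t=t)
# Agreement with the closed form (2.2) and, in the smooth region, u = 2x/t:
assert np.allclose(chi, t**2 * (2 * eta - alpha) / 8)
assert np.allclose(U, t * (2 * eta - alpha) / 4)
assert np.allclose(U, 2 * chi / t)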
We assume a smooth solution of (1.4) for which µ = u_x² dx, so that
(2.3a) u_t + u u_x = (1/4)(∫_{-∞}^x u_x² dy − ∫_x^∞ u_x² dy),
(2.3b) (u_x²)_t + (u u_x²)_x = 0.
Next, we rewrite the equation in Lagrangian coordinates. Introduce the characteristics y_t(t,ξ) = u(t, y(t,ξ)) and the Lagrangian velocity U(t,ξ) = u(t, y(t,ξ)). Furthermore, we define the Lagrangian cumulative energy by H(t,ξ) = ∫_{-∞}^{y(t,ξ)} u_x²(t,x) dx. From (2.3a) and (2.3b), we get that U_t = H/2 − C/4 and H_t = 0, where C denotes the total energy. In this formal computation, we require that u and u_x are smooth and decay rapidly at infinity. Hence, the HS equation formally is equivalent to the following system of ordinary differential equations:
(2.4) y_t = U, U_t = H/2 − C/4, H_t = 0.
Global existence of solutions to (2.4) follows from the linear nature of the system. There is no exchange of energy across the characteristics, and the system (2.4) can be solved explicitly. This is in contrast to the Camassa-Holm equation, where energy is exchanged across characteristics. We have
y(t,ξ) = y_0(ξ) + t U_0(ξ) + (t²/2)(H_0(ξ)/2 − C/4), U(t,ξ) = U_0(ξ) + t(H_0(ξ)/2 − C/4), H(t,ξ) = H_0(ξ).
We next focus on the general case without assuming regularity of the solution. It turns out that, in addition to the variable u, we will need a measure µ that in smooth regions coincides with the energy density u_x² dx. At wave breaking, the energy at the point where the wave breaking takes place is transformed into a point measure. It is this dynamics, encoded in the measure µ, that allows us to treat general initial data. An important complication stems from the fact that the original solution in two variables (u, µ) is transformed into Lagrangian coordinates with three variables (y, U, H). This is a well-known consequence of the fact that one can parametrize a particle path in several different ways, corresponding to the same motion. This poses technical complications when we want to measure the distance between two distinct solutions in Lagrangian coordinates that correspond to the same Eulerian solution, and we denote this as relabeling of the solution. We will employ the notation and the results from [4] and [16], including the Banach spaces defined there. We are given some initial data (u_0, µ_0) ∈ D, where the set D is defined as follows.

Definition 2.4. The set D consists of all pairs (u, µ) such that (i) u ∈ E²; (ii) µ is a nonnegative and finite Radon measure such that µ_ac = u_x² dx, where µ_ac denotes the absolutely continuous part of µ with respect to the Lebesgue measure.

The Lagrangian variables are given by (ζ, U, H) (with ζ = y − Id), taking values in a set F defined as in [4]. From the Lagrangian variables we can return to the Eulerian variables using a mapping M. The formalism up to this point has been stationary, transforming back and forth between Eulerian and Lagrangian variables. Next we take into consideration the time evolution of the solution of the HS equation. The evolution of the HS equation in Lagrangian variables is determined by a system of ordinary differential equations (cf. (2.4)). Next, we address the question of relabeling. We need to identify Lagrangian solutions that correspond to one and the same solution in Eulerian coordinates. Let G be the subgroup of the group of homeomorphisms on R such that f − Id and f^{-1} − Id both belong to W^{1,∞}(R) and f_ξ − 1 belongs to L²(R). By construction the HS equation is invariant under relabeling, which is expressed through the equivalence classes [X] = {X ∘ f : f ∈ G}. The key subspace of F is denoted F_0 and is defined by F_0 = {X = (y, U, H) ∈ F : y + H = Id}. The map into the critical space F_0 is taken care of by a projection Π (cf. [4, Def. 2.9]) with the property that Π(F) = F_0. We note that the map X → [X] from F_0 to F/G is a bijection. Then we have (cf. [4, Prop. 2.12]) that the flow is compatible with relabeling, and hence we can define the semigroup of solutions. We can now provide the solution of the HS equation.
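A sketch of how the explicit Lagrangian solution can be used numerically; this assumes the reconstructed system (2.4), the initial datum is a hypothetical smooth example, and wave breaking shows up as a loss of monotonicity of y(t, ·):

import numpy as np

def lagrangian_solution(xi, u0, t):
    """Explicit solution of y_t = U, U_t = H/2 - C/4, H_t = 0, with
    y(0) = xi, U(0) = u0(xi), H(0) the cumulative energy of u0_x^2."""
    U0 = u0(xi)
    ux0 = np.gradient(U0, xi)
    dH = 0.5 * (ux0[1:]**2 + ux0[:-1]**2) * np.diff(xi)   # trapezoid rule
    H0 = np.concatenate([[0.0], np.cumsum(dH)])
    a = 0.5 * H0 - 0.25 * H0[-1]                          # H0[-1] = total energy C
    return xi + t * U0 + 0.5 * t**2 * a, U0 + t * a       # (y, U); u(t, y) = U

xi = np.linspace(-10.0, 10.0, 2001)
y, U = lagrangian_solution(xi, lambda x: -np.tanh(x), t=1.0)
print("y monotone:", bool(np.all(np.diff(y) > 0)))  # breaking expected at t* = 2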
Consider initial data (u_0, µ_0) ∈ D, and define X̄_0 = (ȳ_0, Ū_0, H̄_0) = L(u_0, µ_0) ∈ F_0, given by ȳ_0(ξ) = sup{x : x + µ_0((−∞, x)) < ξ}, H̄_0 = Id − ȳ_0, and Ū_0(ξ) = u_0(ȳ_0(ξ)). Next we want to determine the solution (u(t), µ(t)) ∈ D (we suppress the dependence in the notation on the spatial variable x when convenient) for arbitrary time t. Define X̄(t) as the corresponding Lagrangian solution and X(t) = Π(X̄(t)). The advantage of X̄(t) is that it obeys the differential equation (2.9), while X(t) keeps the relation y + H = Id for all times. From (2.11) we have the relation between the two descriptions. We know that X̄(t,ξ) = (ȳ(t,ξ), Ū(t,ξ), H̄(t,ξ)) ∈ F is the solution of the Lagrangian system, and the solution (u(t), µ(t)) = M(X̄(t)) in Eulerian variables can be read off. However, for X(t,ξ) = (y(t,ξ), U(t,ξ), H(t,ξ)) ∈ F_0, which satisfies the normalization y + H = Id, we see, using (2.8), how the time derivatives transform, where in the second equality we use that X(t) ∈ F_0. Note that we still have (u(t), µ(t)) = M(X(t)) = M(X̄(t)). This is the only place in this construction where we use the quantity X(t). We can now introduce the new Lipschitz metric between pairs of solutions. A drawback of the above construction is the fact that we are only able to compare solutions (u_1, µ_1) and (u_2, µ_2) with the same energy, viz. µ_1(R) = µ_2(R) = C. The rest of this section is therefore devoted to overcoming this limitation. Similar considerations yield that the second integral in (2.23) remains finite as time evolves.

Remark 2.12. Observe that the distance introduced in Theorem 2.11 gives at most a quadratic growth in time, while the distance in [4] has at most an exponential growth in time. We make a comparison with the more complicated Camassa-Holm equation in the next remark.

Remark 2.13. Consider an interval [ξ_1, ξ_2] such that U_0(ξ) = U_0(ξ_1) and H(ξ) = H(ξ_1) for all ξ ∈ [ξ_1, ξ_2]. This property will remain true for all later times. In particular, this means that these intervals do not show up in our metric, and the function χ(t,η) always has a constant jump at the corresponding point η. This is in stark contrast to the Camassa-Holm equation, where jumps in χ(t,η) may be created and then subsequently disappear immediately again. Thus the construction for the Camassa-Holm equation is much more involved than the HS construction. This is illustrated in the next examples.
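A sketch of why the metric behaves well in time, directly from the explicit solution (1.9) reconstructed above: for two solutions with the same total energy C, the inhomogeneous term η/2 − C/4 cancels in the difference,

\begin{aligned}
\chi_1(t,\eta)-\chi_2(t,\eta) &= (\chi_{0,1}-\chi_{0,2})(\eta) + t\,(U_{0,1}-U_{0,2})(\eta),\\
U_1(t,\eta)-U_2(t,\eta) &= (U_{0,1}-U_{0,2})(\eta),
\end{aligned}

so any norm of these differences grows at most like (1 + t) times its initial value. If the total energies differ, C_1 ≠ C_2, the additional terms t(C_2 − C_1)/4 and (t²/2)(C_2 − C_1)/4 appear in the differences, which is consistent with the at most quadratic growth in time noted in Remark 2.12.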
4,138.4
2016-12-09T00:00:00.000
[ "Mathematics" ]
Phase diagram and critical points of Ce alloys (invited)

The pressure-temperature phase diagram of Ce_{0.9-x}La_xTh_{0.10} alloys is presented for 0.1 < x < 0.17. Two critical points are found in this range of compositions. The data are shown to be consistent with an analysis based on the known Fermi-liquid behavior of Ce in compounds and alloys. The possible appearance of the β phase at intermediate pressures is noted.

The pressure-temperature (P-T) phase diagram of Ce is unique among the elements in that it possesses a phase boundary (the γ-α boundary) ending in a critical point (Fig. 1). 1 The large volume collapse (~15%) across this boundary is generally agreed to be connected with the 4f electrons of Ce; however, the nature of this connection is a strongly disputed subject. The one point on which there is agreement is that in the ground state the 4f-electronic system of cerium can be described as a Fermi liquid, 2 as clearly evidenced by the enhanced Pauli paramagnetism and linear specific heat coefficient. This situation also holds for many nonmagnetic Ce intermetallics. A high level of universality exists among these compounds: properties such as magnetic susceptibility, effective moment, etc., scale with a characteristic Fermi-liquid temperature (T_FL), even with T_FL varying over a large range (1-1000 K). In addition, the 4f-spin degrees of freedom are quenched in the Pauli paramagnetic state. It is on the question of the microscopic character of this Fermi liquid that no real agreement exists. The experiments to be described address the question of what sort of phase diagram might be observed for Ce if negative pressures were accessible. Alloying experiments show that in Ce_{0.9}Th_{0.1} the β phase is entirely suppressed. 3 Because of its smaller atomic volume, Th acts roughly as a positive pressure on the Ce lattice. Addition of La to Ce_{0.9}Th_{0.1} depresses the γ-α transition 4 as shown in the inset to Fig. 2. Beyond a critical lanthanum concentration (x = 0.09) in Ce_{0.9-x}La_xTh_{0.10}, the γ-α transition is continuous at ambient pressure. The atomic volume of La is larger than that of γ-Ce, and La acts, in this alloy, in some sense as a negative pressure would, in addition to being a diluent of the Fermi-liquid interactions. The point is that one might expect a second critical point in the negative-pressure region on the basis of these La alloying results. It is possible to track the γ-α transition using electrical resistivity measurements. Figure 2 shows data at P = 0 for samples in both the first-order and continuous regimes. Other studies have shown that the change in electrical resistivity in the critical region is proportional to the volume change. 3 A complicating feature in the analysis of the data is the presence of a small 2-3 K residual hysteresis, even for samples that exhibit continuous transitions.
This can be seen in the x = 0.1 data of Fig. 2, and it also contributes to the rounding of the warming transition for x = 0. We believe this is due to compositional variations in the alloy. The situation is exacerbated in high-pressure measurements by nonhydrostatic stresses induced in the solid pressure medium during the enormous volume change at the transition. We show our data for Ce_{0.80}La_{0.10}Th_{0.10} in Fig. 3. (Experimental details will appear elsewhere.) We can clearly see the hysteresis broaden and then contract as a function of pressure. In addition, the shape of the hysteresis curve changes dramatically. Combining these data with our results for x = 0.11, 0.14, and 0.17, we construct the phase diagrams of Fig. 4. While it is difficult to place the critical points exactly because of the problems mentioned above, we believe there is a clear case for the existence of two critical points in the range 0.10 < x < 0.14. We see from the composite phase diagrams in Fig. 4(c) that as the lanthanum concentration increases, the two critical points move together, until for x = 0.17 there is no evidence for a first-order transition in any part of the range studied. Therefore, at some critical concentration the two critical points coalesce to form a "critical inflection" point. We estimate this critical concentration to be x = 0.16. It is interesting to investigate the requirements of a theory needed to explain the phase diagram. 5-7 Consider an alloy Ce_{1-x}La_x, ignoring the complications of a third component, which do not substantially alter the argument. We write the free-energy functional as the sum of a "normal" part and a Fermi-liquid part; the anomalous terms which drive the phase transition are contained in F_FL. The transition is viewed as occurring between the α state with T_FL ≈ 1000 K and the γ state with T_FL ≈ 100 K, as suggested by inelastic neutron scattering experiments. 8,9 The temperature dependence of F_FL is determined by the entropy of the Fermi liquid. This is known to scale with T_FL: S_FL is a universal function of the scaled temperature t = T/T_FL. The free energy F_FL, therefore, has the form of a condensation energy plus a scaled entropy contribution. The condensation energy E_FL is expected to be of order k_B T_FL(V), which is known from experiment to be a strongly nonlinear function of the cell volume. 5 It is this nonlinearity which drives the first-order collapse. The appropriate equation of state is obtained by minimizing the free energy: the isotherm is the sum of the "normal" P-V isotherm and the term involving -∂(k_B T_FL)/∂V. [Fig. 5 caption: Curve 1 is the sum of these two terms. Curve 2 represents qualitatively the change in shape of the isotherm at low temperatures. Curve 3 represents a critical isotherm obtained from curve 1 by either raising temperature or by alloying at T = 0. Curve 4 represents the result of first alloying (so as to obtain curve 3 at T = 0) and then raising the temperature.] Here f'_FL is the derivative of f_FL with respect to the scale variable t. This equation of state is examined for negative values of the bulk modulus B = -V(∂P/∂V). For this, we need S(T/T_FL) and T_FL(V). These can be estimated from, respectively, the effective moment of Ce in compounds 9-11 and inelastic neutron scattering for Ce alloys, as discussed further elsewhere. 7 We also make the crude approximation that E_FL = -k_B T_FL. The resulting behavior is shown qualitatively in the progression of curves 1→3, with the high-temperature critical point corresponding to isotherm 3. For x ≠ 0, we have the alloy equation of state (4), where p_Ce(V) is the P-V equation of state for pure Ce [Eq. (3)]. The effect of alloying is then a constant pressure shift [the last term in Eq.
(4)], making part of the negative-pressure region accessible, and a dilution of the Fermi-liquid terms. There will be a critical concentration such that the T = 0 isotherm will resemble curve 3 in Fig. 5(c), but which, as T is increased, will develop into curve 4. This gives a lower critical point at positive pressure. The essential point is that this treatment depends only on experimentally established Fermi-liquid behavior. At the same time, we note that this behavior was originally predicted by the calculations of Allen and Martin 5 based on a Kondo model. There is a further interesting point connected with the 7.3 kbar warming curve in Fig. 3. We believe another phase started growing in at ≈190 K, the most likely candidate being the β-dhcp phase. We saw much more pronounced evidence for this in other cool-downs, including the occurrence of this phase, stable all the way down to 4 K, in one run (see Fig. 6). In the picture introduced here, in which alloying with La is in some ways similar to a negative pressure, the occurrence of the β phase might indeed be expected, because the effects of the Th are reduced at some pressure intermediate between the critical points. This work at Los Alamos was performed under the auspices of the U.S. DOE.
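To make the isotherm construction described above concrete, here is a purely qualitative Python sketch; every functional form and constant in it is an illustrative assumption (a stiff linear background isotherm and a sigmoidal T_FL(V) crossover), not a fit from the paper:

import numpy as np

V = np.linspace(0.8, 1.2, 400)                     # reduced cell volume V/V0
P_normal = 50.0 * (1.0 - V)                        # assumed "normal" isotherm (a.u.)
T_FL = 1000.0 / (1.0 + np.exp((V - 1.0) / 0.02))   # assumed gamma->alpha crossover (K)
a = 0.05                                           # assumed coupling constant (a.u.)
P = P_normal - a * np.gradient(T_FL, V)            # total isotherm, cf. the text

# A first-order collapse requires a region of negative bulk modulus B = -V dP/dV:
B = -V * np.gradient(P, V)
print("region with B < 0 exists:", bool(np.any(B < 0)))

With these assumed forms, the strongly volume-dependent T_FL produces a non-monotonic isotherm and hence a negative-B (mechanically unstable) region, which is the qualitative mechanism for the first-order collapse described in the text.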
1,981
1984-03-15T00:00:00.000
[ "Physics", "Materials Science" ]
Anilinium-3-carboxylate 3-carboxyanilinium nitrate

The title compound, C7H8NO2+·NO3−·C7H7NO2, exists in the form of a protonated dimer of two anilinium-3-carboxylate molecules related by an inversion center, and a nitrate anion located on a twofold rotation axis. The bridging H atom occupies, with equal probability, the two sites associated with the carboxyl O atoms. In addition to the strong O—H⋯O hydrogen bond, in the crystal the various units are linked via N—H⋯O and C—H⋯O hydrogen bonds, forming a three-dimensional structure.

Benzoic acid derivatives are of interest, for example, as anticancer agents (Congiu et al., 2005). Benzoic acid, its derivatives and their complexes are also used as food preservatives and as antiseptic agents applied in various industrial branches: pharmaceutics, textiles and cosmetics (Swislocka et al., 2005). In view of this interest, the crystal structures of various amino derivatives of benzoic acid (Hansen et al., 2007; Lai et al., 1967; Lu et al., 2001; Smith et al., 1995) and of their ammonium salts (Arora et al., 1973; Bahadur et al., 2007; Zaidi et al., 2008) have been reported in the literature. The crystal structures of these compounds are characterized by strong hydrogen bonding. The ammonium salts of 2-aminobenzoic acid are monomers (Bahadur et al., 2007; Zaidi et al., 2008), whereas the chloride salt of the anilinium-3-carboxylate ion (Arora et al., 1973) exists in the form of hydrogen-bonded dimers formed through the carboxylic acid groups of inversion-related molecules. In the present study, we attempted to prepare a cerium(III) complex of 3-aminobenzoic acid, but the resulting product was a simple nitrate salt of the acid. Herein, we present the crystal structure of this salt.

In the title compound, two anilinium-3-carboxylate molecules related by an inversion center are bound to a proton to form a protonated dimer through strong O—H···O hydrogen bonds (Fig. 1 and Table 1). A nitrate anion located on a twofold rotation axis is present as counter-ion. The bridging H atom (H1O) occupies, with equal probability, the two sites associated with the carboxyl atoms O1 and O1a [symmetry code: (a) = -x, -y + 2, -z]. The ammonium groups are involved in strong hydrogen bonds to the carbonyl as well as to the nitrate O atoms (Table 1). In the crystal (Fig. 2), the various units are linked via N—H···O and C—H···O hydrogen bonds, forming a three-dimensional structure.

Experimental

The title compound was prepared by adding one equivalent of 3-aminobenzoic acid (0.07 g) in 15 ml methanol to a solution of cerium nitrate (0.22 g, 0.5 mmol) in 15 ml methanol. The brown solution was stirred for one hour, after which it was filtered and the filtrate was kept for crystallization at room temperature. The solution was covered with aluminium foil. After 3 days, large orange-brown crystals were obtained (m.p. = 492 (1) K). A plate-shaped fragment cut from a large crystal was used for data collection.

Refinement

The OH H atom was located in a difference Fourier map and refined freely with a fixed occupancy of 0.5. The NH3 H atoms were located from a difference Fourier map and refined freely. The C—H atoms were placed in calculated positions and treated as riding atoms: C—H = 0.93 Å with U_iso(H) = 1.2U_eq(C).
709.8
2012-12-08T00:00:00.000
[ "Chemistry", "Materials Science" ]
Realistic Type IIB Supersymmetric Minkowski Flux Vacua

We show that there exist supersymmetric Minkowski vacua on a Type IIB toroidal orientifold with general flux compactifications where the RR tadpole cancellation conditions can be relaxed elegantly. We then present a realistic Pati-Salam like model. At the string scale, the gauge symmetry can be broken down to the Standard Model (SM) gauge symmetry, gauge coupling unification can be achieved naturally, and all the extra chiral exotic particles can be decoupled, so that we have the supersymmetric SMs with/without SM singlet(s) below the string scale. The observed SM fermion masses and mixings can also be obtained. In addition, the unified gauge coupling, the dilaton, the complex structure moduli, the real parts of the Kähler moduli and the sum of the imaginary parts of the Kähler moduli can be determined as functions of the four-dimensional dilaton and the fluxes, and can be estimated as well.

Introduction - One of the great challenges and essential problems in string phenomenology is the construction of realistic string vacua, which can give us the low-energy supersymmetric Standard Models (SMs) without exotic particles and can stabilize the moduli fields. With renormalization group equation running, we can connect such constructions to low-energy realistic particle physics, which will be tested at the upcoming Large Hadron Collider (LHC). During the last few years, the intersecting D-brane models on Type II orientifolds [1], where the chiral fermions arise from the intersections of D-branes in the internal space [2], with a T-dual description in terms of magnetized D-branes [3], have been particularly interesting [4]. On Type IIA orientifolds with intersecting D6-branes, many non-supersymmetric three-family Standard-like models and Grand Unified Theories (GUTs) were constructed in the beginning [5]. However, there generically existed uncancelled Neveu-Schwarz-Neveu-Schwarz (NSNS) tadpoles and the gauge hierarchy problem. To solve these problems, semi-realistic supersymmetric Standard-like and GUT models have been constructed in Type IIA theory on the T^6/(Z_2 × Z_2) orientifold [6,7] and other backgrounds [8]. Interestingly, only the Pati-Salam like models can give all the Yukawa couplings. Without a flux background, Pati-Salam like models have been constructed systematically in Type IIA theory on the T^6/(Z_2 × Z_2) orientifold [7]. Although we may explain the SM fermion masses and mixings in one such model [9], the moduli fields have not been stabilized, and it is very difficult to decouple the chiral exotic particles. To stabilize the moduli via supergravity fluxes, flux models on Type II orientifolds have also been constructed [10,11]. In particular, some flux models [11] can explain the SM fermion masses and mixings. However, those models are in AdS vacua and have quite a few chiral exotic particles that are difficult to decouple. In this paper, we consider the Type IIB toroidal orientifold with Ramond-Ramond (RR), NSNS, nongeometric and S-dual flux compactifications [12]. We find that the RR tadpole cancellation conditions can be relaxed elegantly in the supersymmetric Minkowski vacua, and then we may construct realistic Pati-Salam like models [13]. In this paper, we present a concrete simple model which is very interesting from the phenomenological point of view and might describe Nature. We emphasize that we do not fix the four-dimensional dilaton via the flux potential.
The point is that the values of the dilaton and Kähler moduli fixed by flux compactifications are not consistent with those required by the interesting D-brane models, and they predict the wrong gauge couplings at the string scale for the other models [13]. This is a blessing in disguise from a cosmological point of view [14].

Type IIB Flux Compactifications - We consider Type IIB string theory compactified on a T^6 orientifold, where T^6 is a six-torus factorized as T^6 = T^2 × T^2 × T^2, whose complex coordinates are z_i, i = 1, 2, 3, for the i-th two-torus, respectively. The orientifold projection is implemented by gauging the symmetry ΩR, where Ω is the world-sheet parity and R acts as z_i → -z_i. Thus, the model contains 64 O3-planes. In order to cancel the negative RR charges from these O3-planes, we introduce magnetized D(3+2n)-branes which fill the four-dimensional Minkowski space-time and wrap 2n-cycles on the compact manifold. Concretely, for one stack of N_a D-branes wrapped m_a^i times on the i-th two-torus T_i^2, we turn on n_a^i units of magnetic flux F_a^i for the center-of-mass U(1)_a gauge factor on T_i^2, where m_a^i can be a half-integer for a tilted two-torus. Then the D9-, D7-, D5- and D3-branes contain 0, 1, 2 and 3 vanishing m_a^i's, respectively. Introducing for the i-th two-torus the even homology classes [0_i] and [T_i^2] for the point and the two-torus, respectively, the vectors of RR charges of the a-th stack of D-branes and of its ΩR image are [Π_a] = ∏_{i=1}^3 (n_a^i [0_i] + m_a^i [T_i^2]) and [Π_a'] = ∏_{i=1}^3 (n_a^i [0_i] − m_a^i [T_i^2]), respectively. The "intersection numbers" in Type IIA language, which determine the chiral massless spectrum, are I_ab = ∏_{i=1}^3 (n_a^i m_b^i − n_b^i m_a^i). Moreover, for a stack of N D(2n+3)-branes whose homology class on T^6 is (not) invariant under ΩR, we obtain a USp(2N) (U(N)) gauge symmetry with three anti-symmetric (adjoint) chiral superfields due to the orbifold projection. The physical spectrum is presented in Table I.

The flux models on Type IIB orientifolds with four-dimensional N = 1 supersymmetry are primarily constrained by the RR tadpole cancellation conditions that will be given later, the four-dimensional N = 1 supersymmetric D-brane configurations, and the K-theory anomaly-free conditions. For D-branes with world-volume magnetic field F_a^i = n_a^i/(m_a^i χ_i), where χ_i is the area of T_i^2 in string units, the condition for a four-dimensional N = 1 supersymmetric D-brane configuration is expressed in terms of the angles of the branes, where θ(n_a^i) = 1 for n_a^i < 0 and θ(n_a^i) = 0 for n_a^i ≥ 0. The K-theory anomaly-free conditions are
∑_a N_a m_a^1 m_a^2 m_a^3 = ∑_a N_a m_a^1 n_a^2 n_a^3 = ∑_a N_a n_a^1 m_a^2 n_a^3 = ∑_a N_a n_a^1 n_a^2 m_a^3 = 0 mod 2.
And the holomorphic gauge kinetic function for a generic stack of D(2n+3)-branes is given by [13,15]
f_a = (1/κ_a)(n_a^1 n_a^2 n_a^3 s − ∑_{i≠j≠k≠i} n_a^i m_a^j m_a^k t_i),
where κ_a is equal to 1 and 2 for U(n) and USp(2n), respectively.

We turn on the NSNS flux h_0, the RR fluxes e_i, the nongeometric fluxes b_ii and b̃_ii, and the S-dual fluxes f_i, g_ij and g_ii [12]. To avoid subtleties, these fluxes should be even integers due to Dirac quantization. For simplicity, we assume a symmetric ansatz in which the fluxes are identified on the three two-tori, with i ≠ j. The constraint on the fluxes from the Bianchi identities then follows. The RR tadpole cancellation conditions are
∑_a N_a n_a^1 n_a^2 n_a^3 = 16, ∑_a N_a n_a^i m_a^j m_a^k = −(1/2) eβ,
where i ≠ j ≠ k ≠ i, and where N_{NS7_i} and N_{I7_i} denote the NS7-brane charge and the other 7-brane charge, respectively, which also contribute [12]. Thus, if eβ < 0, the RR tadpole cancellation conditions are relaxed elegantly, because −eβ/2 only needs to be an even integer.
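As a rough consistency-checking aid, here is a Python sketch under stated assumptions: it implements only the D-brane parts of the tadpole and K-theory sums quoted above, it omits the NS7/I7-brane contributions, and the example stacks are hypothetical toy numbers, not those of Table II.

def check_model(stacks, e_beta):
    """stacks = [(N_a, (n1, n2, n3), (m1, m2, m3)), ...]."""
    def S(f):
        return sum(N * f(n, m) for N, n, m in stacks)

    tadpole_ok = (
        S(lambda n, m: n[0] * n[1] * n[2]) == 16
        and all(
            S(lambda n, m, i=i, j=j, k=k: n[i] * m[j] * m[k]) == -e_beta // 2
            for i, j, k in ((0, 1, 2), (1, 2, 0), (2, 0, 1))
        )
    )
    ktheory_ok = all(S(f) % 2 == 0 for f in (
        lambda n, m: m[0] * m[1] * m[2],
        lambda n, m: m[0] * n[1] * n[2],
        lambda n, m: n[0] * m[1] * n[2],
        lambda n, m: n[0] * n[1] * m[2],
    ))
    return tadpole_ok, ktheory_ok

# Hypothetical example: two stacks of 8 branes wrapping only the base tori.
print(check_model([(8, (1, 1, 1), (0, 0, 0)), (8, (1, 1, 1), (0, 0, 0))], e_beta=0))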
Moreover, we have 7 moduli fields in the supergravity theory basis: the dilaton s, three Kähler moduli t_i, and three complex structure moduli u_i. With the above fluxes, we can consistently restrict to an isotropic choice of moduli, and the superpotential then becomes a polynomial in the moduli determined by the fluxes. The Kähler potential for these moduli is
K = −ln(s + s̄) − ∑_{i=1}^3 ln(t_i + t̄_i) − ∑_{i=1}^3 ln(u_i + ū_i). (13)
In addition, the supergravity scalar potential is
V = e^K (K^{ij} D_i W D̄_j W̄ − 3|W|²),
where K^{ij} is the inverse of the metric K_{ij} ≡ ∂_i ∂_j K, D_i W = ∂_i W + (∂_i K) W, and ∂_i = ∂_{φ_i}, where φ_i can be s, t_i, and u_i. Thus, for the supersymmetric Minkowski vacua, we require W = 0 and D_i W = 0 for all moduli, and the superpotential simplifies accordingly. Therefore, to satisfy W = ∂_u W = 0, we obtain 3ef = βh_0. Because Re s > 0, Re t_i > 0 and Re u_i > 0, we obtain additional sign constraints on the fluxes.

Model - Choosing eβ = −12, we present the D-brane configurations and intersection numbers in Table II, and the resulting spectrum in Table III. The anomalies from the three global U(1)s of U(4)_C, U(2)_L and U(2)_R are cancelled by the Green-Schwarz mechanism, and the gauge fields of these U(1)s obtain masses via the linear B∧F couplings. So the effective gauge symmetry is SU(4)_C × SU(2)_L × SU(2)_R. In order to break the gauge symmetry, on the first two-torus we split the a stack of D-branes into the a_1 and a_2 stacks with 3 and 1 D-branes, respectively, and split the c stack of D-branes into the c_1 and c_2 stacks with 1 D-brane each. Then the gauge symmetry is further broken down to SU(3)_C × SU(2)_L × U(1)_{I3R} × U(1)_{B-L}. We can break the U(1)_{I3R} × U(1)_{B-L} gauge symmetry down to the U(1)_Y gauge symmetry by giving vacuum expectation values (VEVs) to the vector-like particles with quantum numbers (1, 1, 1/2, −1) and (1, 1, −1/2, 1). Similar to the discussion in Ref. [9], we can explain the SM fermion masses and mixings via the Higgs fields H_u^i, H_u', H_d^i and H_d', because all the SM fermions and Higgs fields arise from the intersections on the first two-torus. To decouple the chiral exotic particles, we assume that the T_R^i and S_R^i obtain VEVs at about the string scale, with their VEVs satisfying the D-flatness condition of U(1)_R. The chiral exotic particles can then obtain masses via the corresponding superpotential terms, where M_St is the string scale, and we neglect the O(1) coefficients in this paper. In addition, the vector-like particles S_L^i and S̄_L^i in the anti-symmetric representation of SU(2)_L can obtain VEVs close to the string scale while preserving the D-flatness condition of U(1)_L. Thus, we can decouple all the Higgs bidoublets close to the string scale, except one pair of linear combinations of the Higgs doublets for electroweak symmetry breaking at low energy, by fine-tuning the corresponding superpotential couplings. In short, below the string scale we have the supersymmetric SMs, which may have zero, one or a few SM singlets from S_L^i, S̄_L^i, and/or S_R^i. The lower bound on the lightest CP-even Higgs boson mass in the minimal supersymmetric SM can then be relaxed if we have the SM singlet(s) at low energy [16].

Next, we consider gauge coupling unification and moduli stabilization. The real parts of the dilaton and of the Kähler moduli in our model are [13]
Re s = √6 e^{−φ_4}/(4π), Re t_1 = √6 e^{−φ_4}/(2π),
where φ_4 is the four-dimensional dilaton. From Eq. (7) we obtain that the SM gauge couplings are unified at the string scale. Using the unified gauge coupling g² ≃ 0.513 of the supersymmetric SMs, we can then estimate the moduli numerically.
2,716
2007-11-18T00:00:00.000
[ "Physics" ]
Embedded Parallelism Enabling Ultralow-Power Zigbee Voice Communications

Short-range wireless technologies are known to transmit voice, audio, image, and video messages in real time. Energy consumption and transmission reach are critical in such networks, especially for portable and power-autonomous devices. The purpose of the Voice over Zigbee technology is to provide a competitive offering that excels in these performance aspects. Due to the CSMA-CA mechanism implemented in the 802.15.4 layer, a well-designed strategy must be adopted in Zigbee to create a robust, reliable, and full-duplex conversation. In past efforts, we proved that the radio channel of Zigbee has enough bandwidth to support a full-duplex conversation with narrow-band voice codecs. Our embedded implementation of the Speex voice codec targeted the development of a low-cost, ultralow-power, long-range wireless headset using Zigbee technology to transmit voice in full-duplex mode for use with leading PC VoIP programs. Furthermore, we presented a real-environment performance evaluation and power consumption tests involving the developed headset prototype. Talk time is comparable to that of Bluetooth, even with the codec processing and the analogue audio interface included in the power budget, while the deep-sleep lifetime more than doubles the Bluetooth performance. This was one of the very few successful efforts to port a voice codec to an ultralow-power DSP for use with power-sensitive Zigbee applications; it is highly cited in the literature and additionally proves that using an open-source codec can deliver similar voice quality while reducing the total system cost. The current paper elaborates on the embedded parallelism of the Speex implementation and the exploitation of the DSP architectural parallelism, which critically enabled the Voice over Zigbee application on the ultralow-power DSP platform. Another significant contribution of this work is towards understanding and resolving the challenges faced when trying to achieve good-quality transmission of media over constrained devices and networks. The way to new ultralow-power voice-related Zigbee and constrained-network applications is open.

Introduction

Zigbee is a wireless standards-based technology that addresses the needs of sensory network applications, enabling broad-based deployment of complex wireless networks with low-cost and low-power solutions that run for years on inexpensive primary batteries. The raw data rate of this technology at 2.4 GHz, 250 kbps, is low compared with other wireless technologies, but sufficient to transmit voice using narrow-band voice codecs. Prior to our work, few approaches could be found in the literature that examine the possibility of 802.15.4/Zigbee networks supporting voice transmission, either through simulation or based on already deployed networks (e.g., [1-4]). The general conclusion drawn from previous work is that voice can be carried over Zigbee; however, there exist several restrictions in terms of available bandwidth and network topology.
The Eurostars Z-Phone project goal was to develop a low-cost, ultralow-power, long-range, ergonomic wireless headset using Zigbee technology to transmit voice in full-duplex mode for use with leading PC VoIP programs (e.g., Skype and Messenger), in addition to a USB-Zigbee bridge module and a PC driver [5]. The initial objectives were to achieve 2x communication autonomy and 4x distance compared with Bluetooth technology. DECT and WiFi consume much more than Bluetooth and cannot be easily integrated into small devices. This increase in autonomy also has an impact on the lifetime of the rechargeable battery and consequently on the lifetime of the headset, since batteries in wireless headsets are not replaceable. Due to the limited available bandwidth, voice must be compressed before being transmitted. The selection of the most suitable voice codec and its implementation is challenging, considering the requirements of Z-Phone. The paper summarizes the results of our previous work regarding the design, implementation, and validation of the Z-Phone architecture and the headset system, and presents the details of the embedded parallelism which made the application feasible in the constrained environment. Important future work items are presented in the conclusions section.

VoZ Technology

Various wireless communication methods are known to transmit voice, audio, and/or video messages in real time. Examples of these methods are the Bluetooth technologies based on the IEEE 802.15.1 protocol, WiFi based on the IEEE 802.11 a/b/g protocol, and DECT. The purpose of the VoZ (Voice over Zigbee) technology is to surpass the competing technologies regarding energy consumption, a critical issue for portable and/or power-autonomous devices, and transmission reach. Due to the CSMA-CA mechanism implemented in the 802.15.4 layer [6], several strategies must be considered in Zigbee to create a robust, reliable, and "full-duplex" conversation. Such strategies include setting up a partially asynchronous communication network and not using confirmation messages in the communication transaction. Robust and good-quality conversations have been implemented over Zigbee, testing several voice codecs such as G.729A (8 kbps) or G.723.1 (5.3/6.3 kbps). As shown below, the radio channel of Zigbee has enough bandwidth to support a full-duplex conversation with narrow-band voice codecs.

A deeper study of the Zigbee link necessitates the determination of the Zigbee stack time. Stack time is the time that the Zigbee stack firmware running on the microprocessor takes from capturing a Zigbee packet in the air to delivering it to the application layer. A set of transmissions was carried out between Zigbee nodes, with one node sending a message and another resending it instantaneously, to calculate the time required for the Zigbee stack to process a packet (Figure 1). The measurement resolution was 1 ms.
The difference between actual time and time stamp (ticks) can be calculated from these echo measurements; to simplify the calculation, suitable approximations are made, which yield the final expression for the stack time. A Zigbee sniffer was used to find the real header length transmitted. In our trials, using the stack from EmberZNet PRO 3.11 with security features disabled on the EM250 chip [7], the header length turned out to be 30 bytes (4 PHY, 10 MAC, 8 NWK, and 8 APL) [8]. The PHY header length was variable, between 3 and 5 bytes. Table 1 shows the test results obtained as a function of the number of bytes of information, or payload. The tick value has been obtained as the mean of one thousand measurements.

Table 1 depicts the communication test results, including the minimum time between voice frames required to create a full-duplex conversation as a function of the payload. To extract the stack time, the Zigbee 2.4 GHz data rate of 250 kbps is taken into account in the calculation of the propagation time; this time depends on the total length of the packet to be sent. In all cases, the measurements follow a normal distribution, as shown in Table 2. Figure 2 shows the similarity between the two shapes (the measured tick distribution and the normal distribution) for 80 bytes.

According to Table 2 and the normal distribution probability formula, we can send and receive an 80-byte packet payload (plus headers) every 22 ms with a probability of error near zero, as shown in the following calculation. That is to say, we could create a conversation using a voice codec of up to 29 kbps (80 × 8 bits / 0.022 s ≈ 29 kbps) in this case.

Progress beyond the State-of-the-Art. This section presents a comparison of our work in the Z-Phone project with previous relevant work. The research most similar to the Z-Phone project is the one named "Voice Communications over Zigbee Networks," performed by AT&T Labs Research [1]. In Zigbee voice transmission, the main issue to solve is avoiding collisions between packets. If two packets are transmitted at the same time, the CSMA/CA used in Zigbee adds a random delay to both packets before resending them.
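A small Python sketch of the budget arithmetic above; the bit-rate bound follows the text's 80-byte/22-ms example, while stack_time_ms is only a hedged guess at the dropped echo-test expression (half the measured round trip minus the on-air time of one packet):

def on_air_ms(total_bytes, data_rate_bps=250_000):
    """On-air (propagation) time of one packet at the Zigbee 2.4 GHz rate."""
    return total_bytes * 8 / data_rate_bps * 1000

def stack_time_ms(roundtrip_ticks_ms, total_bytes):
    # Hedged reconstruction: the echo test spans two transmissions, so take
    # half the round trip and subtract one packet's on-air time.
    return roundtrip_ticks_ms / 2 - on_air_ms(total_bytes)

def max_codec_bitrate_bps(payload_bytes, frame_interval_s):
    """Upper bound on the codec bit rate when one packet of payload_bytes is
    exchanged every frame_interval_s seconds in each direction."""
    return payload_bytes * 8 / frame_interval_s

print(on_air_ms(80 + 30))                # ~3.5 ms for payload plus 30-byte headers
print(max_codec_bitrate_bps(80, 0.022))  # ~29090 bit/s, i.e. ~29 kbps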
This way, the packets are sent too late for real-time voice communication. The difference between our solution and the one proposed by AT&T is the method used to avoid these collisions. We use a ping-pong method that only allows a node to transmit a packet when the other nodes are not sending any information (similar to a synchronization method used in 1-channel serial data buses), while the AT&T system uses the solution proposed in the VoIP protocol, which includes a TDMA mechanism for access to the channel. In the end, the work presented in [1] achieves similar features, with voice communication limited to 1-2 hops and an obstacle-free transmission distance of 8 meters. In our own work, we have used the G.729a codec to test and implement our ping-pong method, as it is the most standard codec for voice compression in embedded systems; the AT&T system used the same codec in their work. In the Z-Phone project, we moved further and proved that using an open-source codec can deliver similar voice quality, reducing the total cost of the system. Moreover, we have presented engineering details of efficient codec implementation and validation in a restricted, ultralow-power embedded environment [7,9]. The works presented in [3,4] take a different approach to voice transmission. Both mix voice transmission with various data transmissions, which significantly increases packet collisions, and they apply, less effectively, a solution similar to our ping-pong method. In these cases, they use time-slotted communication in the MAC layer to avoid the collisions, but a strict synchronization method (beaconed frames or AM radio synchronization packets) needs to be integrated for the proposed mechanism to work correctly. The codecs used in these works are similar to G.729a or Speex, with bit rates of 4.4 to 8 kbps in one case and 8 to 16 kbps in the other, respectively. Furthermore, the work presented in [2] is only a simulation to test whether a Zigbee network could have enough throughput to manage a 16 kbps voice codec, and the authors do not provide any information about how to solve the synchronization and collision problems.

In conclusion, the focus of our work is on the challenges faced when trying to achieve high-quality voice transmission over constrained devices and networks, which is also what sets this work apart from the previous publications referenced. The aim is to present the above procedure in detail, using Z-Phone's requirements as a case study for this problem. Furthermore, our porting of the Speex open-source voice codec to an ultralow-power DSP for use with power-sensitive Zigbee and constrained-network applications is one of the very few successful efforts in this area. Chang et al. [14] designed and implemented the Speex voice codec at 8, 11, and 15 kbps bit rates on an embedded system with XBee WSN nodes using multihop Zigbee communication, presenting a performance analysis in terms of distance-signal strength dependence, network delay, and PESQ-MOS voice quality. Meiqin et al. [15] achieved wireless real-time voice communication based on the TI Z-Stack protocol and AMBE-1000 and CC2530 modules. The proposed wireless voice communication system consists of five nodes, of which one is a coordinator, one is an end device, and the other three are routers. The system can be used in underground coal-mine equipment, earthquake disaster relief, firefighting rescue, etc.
The longest end-to-end communication distance is 15 m. When using a power amplifier (increasing the energy consumption) and routers, the distance can be longer. The maximum hop count is two, because each router adds noise to the signal. Yang et al. [16] provide a voice communication system which adopts CML's CMX618 for voice data collection and forward error correction coding, and TI's CC2530 for Zigbee network formation and voice data transmission. The authors assess the system in Point-to-Point (P2P) and End-to-End (E2E) configurations, using a network of one voice base station and eight router nodes. The results, stressing the voice quality in mesh Zigbee with multiple hops (for mine monitoring), show that the proposed system ensures voice quality in a complex Zigbee network with higher integration and lower power consumption. Fu et al. [17] present the detailed development of a proprietary low-power WSN platform with star topology and 3 voice communication modes, P2P, Peer-Central-Peer, and voice conference, using the CVSD 15.625 kbps voice codec. Zigbee communications are based on the 8-bit RF SoC CC2430/CC2530. Extensive analysis demonstrates that the proposed system is a low-power, low-speed, and high-performance WSN platform. It consists of up to 16 audio sensor nodes, 64 typical parameter-monitoring nodes, and 1 central node for network establishment and management. The audio channel capacity is 3 real-time two-way voice communications, or an audio conference including all audio nodes at the same time, and the voice delay is less than 40 ms. The communication distance between audio nodes is longer than 70 meters indoors and 120 meters outdoors. The authors suggest possible applications in emergency voice communication, audio/sound sensor networks, and health monitoring systems.

Chang et al. [18] develop an end-to-end rescue-communication voice gateway to provide stable voice transmission over Bluetooth and Zigbee networks for mountain climbers. To implement the device, they adopt Speex with submode 4 and an 11 kbps data rate to provide the best tradeoff between speech quality and computational complexity, based on our work. The performance analysis, in terms of end-to-end throughput, packet loss rate, jitter, and delay, shows that the proposed implementation can efficiently support voice transmission over wireless sensor networks. A mine wireless voice communication system based on Zigbee is presented in [19], adopting the CC2530 as the RF sending/receiving unit of the voice communication node and an AMBE voice codec (selected as an amateur radio speech codec) to realize two-way wireless voice communication over the Zigbee protocol. Zigbee is also examined in the framework of space mission operations [20]. WPANs are used in space to convey information over relatively short distances among the participant receivers. Unlike WLANs, connections effected via WPANs involve little or no infrastructure. This allows small, power-efficient, inexpensive solutions to be implemented for a wide range of devices that can be duty-cycled aggressively to a low-power sleep state.
Report [20] acknowledges that Zigbee has not been as widely adopted as expected, due in large part to the difficulty of the 802.15.4 MAC in enabling reliable transport in the face of difficult networking environments. In this framework, the authors of the present work were contacted by engineers of the Institute of Space Techniques and Technologies of the Republic of Kazakhstan, who were working in the field of transmitting voice over Zigbee technology, to discuss important details of our VoZ speech codec implementation described in [9]. The European EAR-IT project addresses real-life experimentation with intelligent acoustics for supporting high-societal-value applications in a large-scale smart environment. For instance, a city emergency center can request on-demand acoustic data samples for surveillance purposes and the management of emergencies. Pham et al. [21] and Pham and Cousin [22] present experiments on streaming encoded audio on the SmartSantander large-scale test-bed comprising more than 2000 IoT nodes (ATmega1281 microcontroller-based WaspMote nodes and TelosB CM5000 and CM3000 motes). An audio board was built around a 16-bit Microchip dsPIC33EP512 microcontroller offering enough processing power to encode the audio data in real time and produce an optimized 8 kbps encoded Speex audio stream (the Speex encoding library is provided by Microchip). The audio boards used the TelosB nodes as host boards, because the WaspMotes are not capable of multihop transmission. Multihop transmission used both WaspMote and TelosB motes as relay nodes. These works further highlight the main sources of delays and show how multihop streaming of acoustic data can be achieved by carefully taking these performance limitations into account with appropriate audio aggregation techniques.

Koucheryavy et al. [23] acknowledge the positive experience of transferring voice data over the Zigbee protocol and examine research issues of Public Flying Ubiquitous Sensor Networks (FUSN-P) and Flying Ad Hoc Networks (FANETs) involving terrestrial segments and aerial segments composed of Unmanned Aerial Vehicles (UAVs) and drones. They present a model network for a full-scale experiment and solutions for the Internet of Things. Voice and video transmission from the terrestrial segment to the UAV-P using different protocols (Zigbee, 6LoWPAN, and RPL) is a critical investigation task, because it may often be the only chance to pass the necessary information to the area of the terrestrial sensor fields. Other important tasks involve the development of clustering algorithms for the terrestrial and flying segments, route optimization for data collection, and the consideration of UAV-Ps as a queueing system and FUSN-P as a queueing network, respectively. Kirichek et al. [24] examine the efficiency of using Zigbee in Flying Ubiquitous Sensor Networks for transferring voice and image data. The authors conclude that Zigbee networks, praised for their low cost, high autonomy, simple creation, and survivability, are useful for transmitting multimedia information only where the requirements on voice quality and image transmission rate are low.
Z-Phone Architecture

The software/firmware architecture of Z-Phone used with a PC is illustrated in Figure 3. It consists of a wireless headset and a USB dongle providing Zigbee connectivity to the PC. In addition, the Z-Phone driver installed on the PC provides a software interface to any hosted VoIP application. The Z-Phone driver and the headset's DSP each contain one instance of a voice codec. Voice packets arriving from the Internet are handled by the VoIP application (e.g., Skype). The decoded voice data are buffered for the Z-Phone driver to re-encode using the voice encoder. The encoded voice data are passed through the USB interface to the dongle, where they are handled by the Zigbee stack and transmitted over the air to the Z-Phone headset. After passing through the Zigbee stack at that end, the data are transferred to the headset's DSP engine, where they are decoded and reproduced as voice through the speaker. The opposite path is followed for the upstream voice data from the headset's microphone to the Z-Phone driver, and then from the VoIP application to the Internet.

Several factors affect the useful bandwidth available for carrying the encoded voice frames. Most importantly, the fact that voice is transmitted in full-duplex mode cuts the available bandwidth in half, since the two nodes do not transmit data concurrently. In addition, it was decided not to include many voice frames in one packet, in order to reduce latency and also reduce the impact of lost packets on voice quality. The tradeoff for this decision is the considerable overhead induced by the headers. Since small packets are transmitted regularly, stack processing and transmission times also affect the useful bandwidth. In Section 2, we showed that, in the worst-case scenario (maximum transmission times), the respective data rates should not exceed 29 Kbps. Several measurements under these conditions show that data rates up to 38 Kbps are feasible.

Codec Selection. In Z-Phone, we defined a set of requirements to guide the selection of the most appropriate codec. A fixed-point implementation had to be considered, since the DSP platform of the headset was a low-end, low-power DSP without FPU support. Bandwidth, processing, and memory limitations of the end system favored the selection of a low bit-rate codec. However, voice quality is an important factor for Z-Phone; hence, the tradeoff involved in choosing a low bit-rate codec had to be carefully evaluated. Furthermore, voice quality is affected by the algorithmic delay of a speech codec (the time required to gather the samples that form one speech frame), which is a significant part of the transmission delay between the sender and the receiver in a VoIP call (see [9] for a comprehensive analysis of algorithmic delay). Another aspect of speech codecs that affects voice quality under network conditions with increased packet loss rates is their packet loss concealment mechanism. Most of the voice codecs evaluated had a fixed mode of operation in terms of bit rate, and consequently of voice quality.
The only codecs that offer flexibility in this matter, which would allow fine tuning when integrated into the Z-Phone system, are AMR, SILK, and Speex, with the first being royalty-based. A royalty-based codec, although often used for proof-of-concept work (e.g., [1]), clearly hinders the market prospects of the product envisioned by Z-Phone by increasing its cost. In addition, the Z-Phone system does not require the use of specific codecs for issues such as interoperability. For that reason, the standard G.729 choice was abandoned. SILK and iLBC presented many attractive features, but at the time this work was performed they were not available as open source. Therefore, the Speex codec was selected for this project, since it can offer voice quality superior to the vast majority of equivalent codecs when operating at 15 or 18 kbps [9].

Codec Implementation. The available Speex source code ports to popular DSP platforms, such as ARMv4, ARMv5E, the Blackfin architecture, the TI C5xx family, Freescale's DSP56852, and the Microchip dsPIC family, are not suitable for Z-Phone, which targeted ultralow-power current consumption for both the DSP and the audio interface support circuitry. The DSP selected for Z-Phone was ON Semiconductor's BelaSigna 300 [25]. A very important advantage of BelaSigna 300 over other similar DSPs (e.g., Microchip's dsPIC family) is the integrated analogue audio interface, which minimizes the need for external components; this can prove to be a huge benefit in housing and its associated costs. It offers high-performance audio processing capabilities and flexible programming while satisfying form-factor (the size of a rice grain) and low-power constraints. The main CFX DSP core is user-programmable, with a 24-bit fixed-point, dual-MAC, dual-Harvard architecture. It is able to perform two MACs, two memory operations, and two pointer updates in one clock cycle. BelaSigna 300 further includes a configurable accelerator that is controlled by the main DSP core.

Codec implementation involved porting the subset of Speex encoder and decoder functionality that was necessary for our application. The question raised was whether a very small IC operating at a low clock frequency, which is expected to present performance and memory constraints, such as BelaSigna 300, would be able to handle the processing power and memory requirements of Speex. In [26], a port of Speex operating at 8 kbps on an ARM-based microcontroller running at 72 MHz, and the resulting resource requirements, are presented; however, the target platform was not directly comparable to BelaSigna 300. On the other hand, the results of porting Speex to the dsPIC [27], using only the narrowband encoder and decoder and only one mode of operation (8 kbps), indicated that BelaSigna could support at least part of Speex's functionality. We have presented detailed information regarding the codec implementation in [7, 9], and we do not reiterate it here. We decided to implement submodes 3 and 4 (8 and 11 kbps), as these seem to present the best tradeoff between speech quality and performance requirements. They also share the same functions for LSP quantization and adaptive codebook search, meaning that supporting both submodes does not require significant additional effort.
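As a back-of-the-envelope check on these two submodes, the per-frame payload follows directly from the bit rate and Speex's 20 ms narrowband frame (the frame duration is given in the performance section below); the byte-alignment note is our own assumption about a byte-oriented packet payload:

```python
FRAME_MS = 20  # Speex narrowband frame duration

def frame_bytes(bitrate_bps, frame_ms=FRAME_MS):
    """Payload bytes produced per speech frame at a given codec bit rate."""
    return bitrate_bps * frame_ms / 1000 / 8

for submode, rate in (("submode 3", 8000), ("submode 4", 11000)):
    print(f"{submode}: {rate // 1000} kbps -> "
          f"{frame_bytes(rate):.1f} bytes per 20 ms frame")
# submode 3: 8 kbps -> 20.0 bytes per 20 ms frame
# submode 4: 11 kbps -> 27.5 bytes per 20 ms frame (28 once padded to whole bytes)
```

With one frame per packet in each direction, the combined raw payload of a full-duplex 11 kbps call is 22 kbps before header overhead, which sits within the worst-case 29 Kbps budget derived in the architecture section.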
During the codec implementation, it was found that it was not always possible to achieve bit-exact results (exactly the same output as a reference implementation for a specific input). The two factors that affected bit-exactness were the differences in arithmetic and in rounding between BelaSigna 300 and the Speex source code. The memory requirements of the partial Speex port to BelaSigna 300 are presented in Table 3.

Speech Quality. Speex delivers good quality both in 8 kbps and in 11 kbps mode [28]. In order to evaluate the outcome of the voice codec implementation, a series of speech quality tests was performed using the ITU-T Perceptual Evaluation of Speech Quality (PESQ) test methodology for automated assessment of the speech quality experienced by a user of a telephony system [29]. PESQ takes as input the original signal, which in this case was a WAV file containing 51 seconds of speech from different male speakers, and the degraded signal, which was a WAV file resulting from encoding and decoding the original one.

Figure 4 depicts the PESQ-MOS obtained for the two Speex submodes that are of interest for Z-Phone, for different complexities. Each submode was tested in three scenarios, including encoding and decoding on the PC using the original Speex library, as well as BelaSigna transmit encoding and receive decoding. It must be noted that this test is not used as an evaluation of the speech quality that Speex is able to deliver, but rather as an indication, during the development process, of the correctness of the porting. It also gives an indication of the impact of the non-bit-exact implementation. As can be seen in the graph, the Speex port to BelaSigna 300 closely follows the original Speex implementation in terms of speech quality, with no significant deviations observed. It can also be seen that the value of the "complexity" configuration parameter does not have a significant impact on voice quality. Considering the tradeoff between performance requirements and speech quality, a value of 1 to 2 for the "complexity" parameter of Speex seems to be sufficient.

Performance. Speex uses a 20 ms speech frame. Therefore, it must be ensured that one instance of the encoder, one instance of the decoder, and the remaining tasks, such as data transfer and mixing, execute in less than 20 ms. The decoder execution time does not change significantly for different submodes and values of complexity in the encoding process, and it was measured to be around 1.5 ms. The remaining tasks require roughly 0.5 ms, mostly due to data transfers. Figure 5 depicts the execution time of all the DSP's tasks for the two submodes and for different values of the complexity parameter. The execution time exceeds the 20 ms limit for complexity values higher than 2 for 11 kbps and higher than 4 for 8 kbps. However, since the speech quality is not significantly improved at higher complexity values, the performance of the implementation is acceptable for this application.

Transmission Tests. Three different distances between the Z-Phone headset and the USB dongle were tested to check the transmission capability of the system. To test the link, 256 packets were sent to the USB dongle, with a 60 ms delay after each packet. The transmission power was 3 dBm.
The Link Quality Index and the received signal strength were recorded for every received packet (Figures 6 and 7). 255 of the 256 packets arrived at distances of 1.5 m and 5 m (99.61%). One more packet was lost when the distance between the devices increased to 8 m (successful reception 99.22%). As expected, the results show that as the distance between the headset and the dongle increased, the quality of the link decreased. However, no large difference was detected between the link quality results measured at the 5 m and 8 m distances.

Power Consumption. This section summarizes the power consumption performance of the Z-Phone system and a comparison with a Bluetooth reference system based on the Infineon PMB 8753 BlueMoon UniCellular single-chip Bluetooth v2.0 + EDR solution [30]. The major energy-consuming elements of the Z-Phone system are the BelaSigna 300 audio DSP and the EM250 single-chip Zigbee solution with an integrated Zigbee transceiver. The system further comprises (i) a small power management unit targeted at low-power consumer handheld end equipment, which contains the battery charger and a step-down DC-DC converter, and (ii) a programmable low-power clock generator (10 μA max in power-down mode), which provides the external clock required by the BelaSigna 300 WLCSP package at a 40 MHz working frequency for optimized codec performance on the restricted platform. The Bluetooth chipset is clocked through its internal crystal oscillator.

Table 4 summarizes the power consumption figures. The typical Z-Phone talk time is approximately 7.51 hours. In deep-sleep mode, Z-Phone can demonstrate an impressive lifetime reaching up to 2362 h (98.42 days) before battery charging is necessary. As expected, Z-Phone takes full advantage of the exceptionally ultralow-power consumption of the Zigbee transceiver in deep-sleep mode (1 μA). The Bluetooth reference system talk time reaches 7.88 h over an eSCO link carrying EV3 voice packets and 8.23 h over a SCO link; however, at almost the same power budget, Z-Phone integrates the DSP voice encoding/decoding and the analogue microphone/audio interface. In our low-power optimized Bluetooth reference system design, the standby time can reach up to 175.91 h (a mode not available in Z-Phone), while the deep-sleep lifetime (similar to the "almost ready" function available in some Bluetooth headset products) can reach 1144 h (48 days). A methodology for measuring average current consumption, as a useful tool to select a chipset/module and help the engineer understand a system/product early in the development phase, is presented in [31]. We further measured the efficiency of our Bluetooth reference system for carrying voice over packet-oriented ACL links, which has been shown to achieve better TCP connection performance at a slight increase in voice delay in a piconet topology sharing bandwidth between voice and data links [32]. Although Bluetooth headsets do not use ACL links for voice communication, this comparison with Z-Phone over a more comparable network sublayer demonstrates the power efficiency of our Zigbee implementation, as opposed to Bluetooth, in terms of bandwidth use. Zigbee demonstrates more than twice the deep-sleep lifetime of Bluetooth between successive battery charges, consequently extending battery life. Excluding the almost-zero current drawn by the Zigbee transceiver, as well as the power management unit and the analogue input and output circuitry, which can be considered common between the two systems, it turns out that Z-Phone's BelaSigna 300 DSP and low-power clock generator together require approximately half the ultralow-power mode current of the Bluetooth chipset. Current consumption values were taken with the LEDs and external EEPROMs disconnected, a 150 Ohm speaker impedance, no RF retransmissions, RF TX power set to 0 dBm (class 3 devices), and the microphone bias set to the minimum current level. More details regarding the Z-Phone DSP and clock generator current consumption can be found in [30].

Z-Phone Headset Unit. The headset unit plays an important role in the Z-Phone system: it not only serves as a voice transcoder combined with a Zigbee transceiver, but also provides a basic optical and interaction interface for the users, as well as battery charging options via a standard micro-USB socket.
The audio DSP is wired to the EM250 Zigbee transceiver via an I2C communication bus and eases the interfacing of the microphone and the speaker by providing a built-in analogue front-end and a Class D output stage. Mixing the different call tones/statuses with voice is also the task of the DSP. The unit is powered by a standard 3.7 V, 150 mAh Li-Ion coin battery. Charging it and supplying the different voltage levels for the DSP and the Zigbee chip is the role of a multi-output DC/DC converter and battery charger unit. The battery can easily be charged by plugging the unit into any USB host device. The user can read the battery status from the optical indicators located on the headset. To eliminate glitches and artefacts in the voice, and to handle the demand for two different voltage levels and two different clock frequencies, the design also incorporates a serially configurable low-power 2-channel PLL clock generator that offers low jitter and individually settable power supply and frequency for both clock outputs.

A standard user interface was created, so that the user can easily learn how to use the buttons of the headset: power on/off, hook on/hang up, pairing, and volume up/down. The same applies to the optical indicator: dedicated LEDs indicate the different battery and call statuses. The housing of the headset is made by injection moulding using a non-conducting thermoplastic material, ABS, which is impact-resistant and mechanically robust.

Z-Phone Driver. The USB dongle supports the USB Audio and USB HID standards, so it is compatible with the major operating systems. The memory-resident application running under the Windows environment provides a bridge between selected applications (e.g., Skype and X-Lite) and the headset by handling call events, such as picking up an incoming call or terminating a call. It also contains features such as call logging and an address book including photos. Without this application, the headset can be used in a similar way to ordinary headphones.

BelaSigna 300 System Architecture. The BelaSigna 300 system is an asymmetric dual-core, mixed-signal system-on-chip designed specifically for audio processing. It is centered around two processing cores: the CFX DSP and the HEAR configurable accelerator. The CFX DSP core is used to configure the system and coordinate the flow of signal data progressing through the system. The CFX DSP can also be used for custom signal processing applications that cannot be handled by the HEAR core. The HEAR core is a microcode-configurable signal processing engine that works with the CFX DSP core to execute a variety of signal processing functions. The CFX and HEAR cores have a few local and shared memories available to them. The two cores process data brought into system memory by the Input/Output Controller (IOC). The IOC can handle inputs from either an analogue input stage or a PCM interface input, and can handle outputs to a digital direct-drive output stage or a PCM interface output. The FIFO controller simplifies access to data by the CFX DSP, the HEAR, and the IOC by providing configurable hardware-based FIFO buffers. Figure 8 illustrates the BelaSigna 300 system architecture.

Use of System-Level Parallelism on the Speex Implementation.
Although BelaSigna 300 includes a powerful coprocessor (HEAR), we decided to refrain from extensive use of it due to the following two factors: (i) the differences between its rounding mechanism and the rounding needed by Speex, together with the fact that Speex uses operations between operands of different accuracy, would possibly require adaptation of the algorithm (e.g., recalculating filter coefficients with different accuracies); (ii) its dedicated memory for input and output data would require frequent data transfers, which would have a negative impact on performance and development time, since this approach could prove to be error-prone. Considering the development time requirements of the project, we decided to use the HEAR coprocessor only for a few basic vector operations.

All processing is performed within 20 ms, which is the frame duration used by Speex. At any given time, 4 frames are being processed: one frame being captured by the input stage, one frame being encoded, one frame being decoded, and one decoded frame being reproduced by the output stage. Encoded frames are exchanged between the DSP engine and the Zigbee interface module through the I2C controller. At the system level, the following tasks are performed in parallel within a 20 ms time slot: (1) analogue-to-digital conversion and storing of the samples in the input FIFO buffer by the IOC input side; (2) reading a frame from the input FIFO, frame encoding, frame decoding, and possible tone generation by the DSP, followed by mixing the two outputs of the previous step and storing the final output in the output FIFO buffer, using the HEAR coprocessor; (3) digital-to-analogue conversion of the previously decoded frame stored in the output FIFO by the IOC output side; (4) encoded frame exchange with the Zigbee interface module through the I2C interface.

Instruction-Level Parallelism. The Speex encoder and decoder are fully implemented on the CFX DSP. Extensive instruction-level parallelism is possible, in a very compact format. The DSP can execute up to four computation operations in parallel with two data transfers [33]. This is achieved by the different execution units, which utilize the multiple bus and memory architecture and the dual data path (X and Y), as can be seen in Figure 9. The execution units include the DCU, which performs arithmetic operations, and the DMU and AGUs, which handle data movement and address generation. At the instruction level, CFX has three types of instructions.

Long Instructions. These instructions perform only one operation in one clock cycle.

Arithmetic-Move Instructions. These instructions contain two parts: an arithmetic part that is executed by the DCU and a data movement part that is executed by the DMU and AGUs. Syntactically, the two parts are separated by "|||", indicating that they are executed in parallel, so an abstract arithmetic-move instruction is written as <arithmetic part> ||| <move part>. The arithmetic and move parts can be further separated into dual arithmetic and dual move instructions operating on the X and Y data paths, respectively, which means that an arithmetic-move instruction can execute up to four operations in parallel.

Move-Move Instructions. These instructions combine two move operations into one instruction word, providing instruction memory space optimization. The two operations are not performed in parallel but serially. Their syntax is <first move part> >>> <second move part>.

The Z-Phone BelaSigna 300 Speex implementation contains 4975 CFX assembly instructions. The distribution among the different types is shown in Table 5.
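Given such a distribution, the effective gain from instruction-level parallelism can be estimated with a small model. The following Python sketch uses a hypothetical instruction mix (the actual Table 5 figures are not reproduced here; only the 4975-instruction total is taken from the text) to contrast the cycles consumed against a serial execution of one operation per cycle:

```python
def ilp_speedup(instr_mix):
    """Estimate ILP speedup: operations executed serially (one per cycle)
    divided by the cycles actually consumed.
    instr_mix: list of (count, avg_ops_per_instruction, cycles_per_instruction)."""
    total_ops = sum(n * ops for n, ops, _ in instr_mix)
    total_cycles = sum(n * cyc for n, _, cyc in instr_mix)
    return total_ops / total_cycles

# hypothetical split of the 4975 instructions (NOT the Table 5 figures)
mix = [
    (3000, 1, 1),  # long instructions: 1 operation, 1 cycle
    (1500, 2, 1),  # arithmetic-move: ~2 operations on average, 1 cycle
    (475, 3, 2),   # move-move: ~3 operations on average, 2 cycles
]
print(f"estimated speedup over serial execution: {ilp_speedup(mix):.2f}x")
# -> roughly 1.36x, i.e., cycle count cut by about a quarter to a third,
#    consistent in spirit with the one-third average reduction reported below
```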
Arithmetic-move instructions can perform from 1 up to 6 operations per cycle. In practice, it is very challenging, if not impossible, to use this feature to its full potential. Our Speex implementation used arithmetic-move instructions performing from 1 up to 5 operations per cycle. Obviously, the performance gain is more significant for instructions (and their operations) inside a loop. Table 6 presents the distribution of the arithmetic-move instructions used in the Speex implementation, based on their loop level and their operations-per-cycle count.

As mentioned before, the move-move instructions consist of two move instructions that execute in 2 clock cycles. However, each of these move instructions may perform one additional pointer register arithmetic operation, thereby improving their efficiency. The distribution of move-move instructions based on their loop level and the number of operations per 2 clock cycles is shown in Table 7.

Owing solely to the implemented instruction-level parallelism, the codec execution time (measured in clock cycles) is reduced by one-third on average across all test cases, using several different test speech samples and profiling data, compared with executing the total number of individual Speex BelaSigna 300 operations serially in an equal number of clock cycles. This benefit comes on top of the speed-up already achieved through system-level parallelism. The performance benefit from instruction-level parallelism could increase considerably if our Speex BelaSigna CFX assembly code were carefully re-engineered to better exploit the CFX parallelism potential. The adoption of parallel programming principles for task decomposition, involving the identification of tasks that can be done concurrently (linear, iterative, and recursive task decomposition), of data structures (input/output/intermediate) or parts of them that can be managed at the same time, and of the dependencies that impose ordering constraints (synchronization) and data sharing, can help maximize the speed-up of code execution by maximizing concurrency and minimizing parallelization overheads. Illustrating the importance of instruction-level parallelism: had it not been implemented in the codec port, the execution time plots of Figure 5 would have risen along the y-axis above the threshold line (marginally so for complexity value 1 in the 8 kbps mode), jeopardizing the feasibility of the application. Beyond the gains through instruction-level parallelism, the codec execution time can be further reduced through effective use of the HEAR coprocessor/accelerator.

Conclusions and Future Work

The Zigbee standard is designed to enable the deployment of low-cost, low-power wireless sensor and control networks based on the IEEE 802.15.4 physical radio standard. Despite the low data rates of Zigbee, its use for the transmission of voice is feasible. In this paper, we show that robust, good-quality conversations can be implemented over Zigbee. In fact, the radio channel of Zigbee has enough bandwidth to support a full-duplex conversation with narrow-band voice codecs.
We have discussed various aspects of the process of selecting a voice codec for high-quality voice transmission over Zigbee. The initial "obvious" choice for a good-quality codec at around 8 kbps was a G.729 flavour. When we realized the cost implications of such a choice for a commercial product, we looked to the open-source community and discovered a wide selection of high-quality solutions. The results of the porting effort are presented, showing that it is feasible to use a narrow-band codec in a real-environment embedded application with strict low-power requirements and bandwidth limitations, such as a wireless headset based on Zigbee. We managed to achieve truly ultralow-power operation while including all the necessary analogue interfaces, which also proved to be a huge benefit. Finally, we briefly presented the Z-Phone headset unit and the results of the transmission range and power consumption tests involving the developed embedded system. This is one of the very few expert engineering efforts known in the literature to achieve a successful porting of a (royalty-free) speech codec onto an ultralow-power DSP, enabling voice communications in restricted ultralow-power devices and networks.

Future work will focus on significant project extensions concerning response time and robustness against interference in the use of the system as a headset for voice communications, as well as the migration of the developed technology to other applications, such as Voice Directed Warehousing (VDW) and the promising and challenging domain of Wireless Sensor Network (WSN) applications. Regarding the first point, response time, we intend to demonstrate that voice communication over Zigbee can be held using a repeater node. This would provide a coverage range significantly larger than that of Bluetooth headsets and double the current achievement. Regarding the second point, robustness against interference, we intend to take into account that many users will be using the system, possibly simultaneously, and that the environment (e.g., an office) will be changing. The objective of this task will be to develop a method for effective use of the available channels and the ability to react to interference that might appear during a voice conversation, while ensuring that the number of simultaneous conversations that can be held in the same space (office) is high enough for a realistic environment. The number of available Zigbee channels, 11, should be sufficient to support enough users. Regarding the third point, integrating the Z-Phone system into VDW makes it necessary to consider a different use scenario in which, although the core headset-for-VoIP system remains valid, some additional characteristics must be taken into account, such as audio quality, dynamic routing procedures, conversation profile, channel sharing to increase the number of users, and tolerable delay. In the broader WSN/VoSN (Voice over Sensor Network) scenario, voice streaming will need to address the constraints in terms of communication bandwidth, computational power, and energy budget that severely affect the actual streaming capabilities of low-power wireless sensor devices. Several metrics of multihop communication, such as throughput, jitter, latency, and packet loss, will have to be measured and analyzed.
Last but not least, a variable bit-rate codec implementation will be tested, using the highly scalable novel open codec Opus (bit rates from 6 Kbps to 510 Kbps, sampling rates from 8 KHz to 48 KHz, and frame sizes between 2.5 ms and 20 ms). Opus combines technology from Speex and Skype's SILK and is the first open-source codec to have become an IETF standard, unlike other open-source codecs such as Speex, which targets speech, and Vorbis, which is aimed at compressing music and audio in general. Towards this end, the objective will be to perform a deep study of the different possible configurations of Opus, to implement it in a DSP system, and to effectively improve the current MOS score for the platform developed in the Z-Phone project.

Data Availability

Reproducing the findings presented in the paper by a third party would require the on-hand availability of the Zigbee system code, besides the target platform, including the implementation of the VoZ transmission mechanism as well as the Speex assembly code implementation on the BelaSigna 300 DSP. The private organisations involved in the Z-Phone project, namely Ateknea Solutions and inAccess Networks, do not allow disclosure of the system code or its deposit in a publicly available data repository.

Figure 2: Graphical representation of tick measurement distribution for an 80-byte payload.
Table 1: Communication test results as a function of payload.
Table 2: Distribution of 10000 tick results as a function of payload.
Table 4: Power consumption: Z-Phone vs Bluetooth.
Table 6: Distribution of Speex arithmetic-move instructions based on loop level and number of operations per cycle (1-5).
Table 7: Distribution of Speex move-move instructions based on loop level and number of operations per 2 clock cycles.
9,951.2
2019-02-05T00:00:00.000
[ "Computer Science" ]
Characterizing user behavior in online social networks: Analysis of the regular use of Facebook

Received May 2, 2020; Revised Nov 21, 2020; Accepted Jan 14, 2021

The analysis of user behaviour in online social networks (OSNs) is one of the important research interests related to human-computer interaction. OSNs give a large space for sharing news around the world without limits and allow users to benefit from the properties of this interactive and dynamic system. The study of user behaviour on a popular social platform characterized by the use of new technologies requires understanding and analysing collective behaviour on Facebook. This paper aims to analyse usage patterns in OSNs using the visible interactions of Facebook, by studying the timing of activity and the evolution of human behaviour through a process of detecting visible and non-volatile interactions. In the first step, we perform a data collection process based on the breadth-first search algorithm (BFS) and a semi-supervised crawler agent. In the second step, we build an interaction quantification process to measure users' activities and analyse the related time series. The study of the frequency of periodic use has shown that the monitored communities follow a weekly rhythm that decreases over time towards a frequency of daily use, which reflects a stabilization of activities and a case of usage dependency.

INTRODUCTION

Social networks are more than a simple trend; they are a new way of life anchored among internet users. OSNs are seen as a new provider of news and a public space for sharing personal emotions, and they occupy a large part of life and society in a general way, in addition to their main function, which is human communication. OSNs offer the opportunity to engage in a variety of employment, business, social, and political activities. By studying data collected from Facebook and Twitter, previous works [1-4] focused on the sociological aspect of the digital and virtual world and the community's familiarity with the new concepts of this system. The first paper is an analysis of connectivity based on graph theory and lays out several stages of database design and the protocols used. The second work presents an analysis of the evolution of interactions and activities of the Moroccan community on Facebook and the detection of some social behaviors related to the use of the OSN. The principal use of the Internet in Morocco is accessing OSNs and related technologies, followed by general computer use, which has improved the overall rate of increase in Internet use and access metrics [5]. The problem of representing OSN data is due to the complexity of this type of network, characterized by high node degrees and short distances between nodes, which makes them small-world networks [6]. Measuring the distance distribution for individual entities in Facebook's graphs made it possible to visualize the temporal evolution by applying probabilistic algorithms, and to consider the small network structures extracted from Facebook as parts of one large global network [7]. Mining some features of this kind of network has shown that degree distributions with large aggregation coefficients cannot be fitted using the power law; this remains the case as the size of the network increases, and the average connectivity varies with the network type [8]. Within the multitude of disciplinary areas of OSN studies, there are continuous changes in the patterns of social representation graphs. Although OSNs are platforms for sharing information, users are still worried about security risks.
Access to these platforms raises many issues regarding the level of users' privacy protection [9], which influences behavior on OSNs. In [10], weighted social graphs are employed to detect suspicious actions of certain communities and to prevent cyber-attacks before they occur. The reliability of information is an important parameter in user interactions. Using a sample of ten thousand interactions from Twitter and Facebook, a skill-based tagging process has shown that classification with the Naive Bayes, J48, and support vector machine algorithms gave good results for classifying users according to reliability, thus increasing the accuracy of existing behavior-based classification techniques [11]. Identifying the degrees of influence among members of a virtual community through social network platforms requires extracting the related knowledge and measuring the potential for social influence between virtual social network users, and hence implementing a quantitative measurement mechanism to model direct and indirect social influence on OSNs.

The purpose of this paper is to assess virtual community behaviour based on human activities captured from a virtual social network and to observe temporal trends in interactions. Measuring and visualizing visible interactions by means of an associated graphical structure is a major step towards this goal, given that streams between individuals on social networks are characterized by great flexibility in terms of the actions available on virtual platforms. The user account is considered the center of the network; it can create and update shared or node-oriented social links. The main goal is to study the temporal evolution of social actions in order to analyze interaction and the human pattern of use through OSNs.

RESEARCH METHOD

With billions of daily active users around the world, Facebook is the most popular social network the web has seen [12]. Thanks to the multitude of functionalities it offers, Facebook allows its users to share photos, videos, and messages between friends, and also to follow the news of others. Investigating user behaviour in an environment characterized by a large volume of data [13] makes it possible to form an impression of both the nature of the activities and the usage periods, and provides better visibility into the user community. Characterizing users' behavior in online social networks requires setting up an appropriate knowledge extraction process in the big data environment. To this end, the first step towards our goal is data extraction, followed by the detection and tracking of visible and measurable activities to analyze the pace of usage on Facebook.

Data collection from OSNs. An important step in the activity analysis is the creation of the dataset, which serves as a knowledge extraction database for the Facebook network. This dataset should provide both sample data and components that are representative of broad network characteristics. Web crawling consists of traversing the available resources on the web, specifically OSN data, which fluctuates at high speed and generates a large amount of data. Detection, filtering, and storage are all basic operations that need to be performed in order to store the relevant information. The structures, presented in graphical form, are browsed through several sampling strategies.
The random walk is one of the graph browsing techniques [14, 15]; it is generally used in large networks with high node degrees [16], with human intervention for the choice of samples and the validation of results [17]. The use of multiple dependent random walks [18] or random walks with jumps [19-21] requires a high intervention effort, which influences the process flow, whereas speed is an important criterion for data collection. Forest fire (FF), snowball sampling (SBS), BFS, and depth-first search (DFS) are flexible web browsing approaches based on the principle of traversal without substitution, where each node is visited only once. The BFS algorithm is one of the most widely used traversal techniques in different network topologies; it allows OSNs to be traversed thanks to its heuristic sub-graph extraction function and its robustness in handling high-degree nodes [22-24]. In order to store the Facebook OSN data, the first step is to provide a list of the graph heads as input. Once confirmed, the process is launched through a semi-supervised agent that initiates an iterative process. For each selected node, sub-processes run through the associated sub-graph and all neighboring nodes located at a distance from the selected element, using the BFS method (a minimal sketch of this traversal is given at the end of this subsection). Figure 1 illustrates the semi-supervised data collection process, whereby public information is stored in descending order from the most recent to the oldest, up to the creation date of the selected element. Managing the node list consists in the agent program executing a script that checks for the node's existence before moving to the next instruction. Connecting to the servers requires an access token for security reasons. In addition to renewing the access token, the agent program also handles the responses and the associated filtering operations to store the data in the dataset. The results reported in Table 1 indicate that the extracted part of the network is only a sample of the iceberg that is the full Facebook network; our crawling process allowed us to store around 40 million interactions of various types, including their main properties (content and date), starting from 892 graph heads (pages and groups).

OSNs activities analysis. Facebook offers various options for clients to interact with others; users need access to their account if they want to take advantage of this system. A friendship network comprises all users with whom one makes a relationship link. This relationship requires a bilateral agreement between the two users, and friendship on Facebook is generally represented by a network of links. According to common interests, users organize themselves into groups for the exchange and discussion of topics. Subscribing to and following celebrities and public figures is done through pages that share content with fans. This set of mechanisms and features offered to users helps produce a large number of interactions, which require an activity analysis in order to extract features associated with the use of this OSN. The study of user behaviour is one of the greatest challenges of human behaviour research; difficulties related to predicting the next action are among the many problems of this kind of investigation. Hence, based on stored interactions, we can analyze changes and usage over time in order to model the variations and patterns that occur, thereby describing users' behavior on Facebook.
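As referenced above, the BFS-based collection step can be sketched minimally as follows. This Python sketch is illustrative: the function names and the fetch_neighbours and store callbacks are placeholders standing in for the agent's API calls and dataset writes, not the actual crawler's interface:

```python
from collections import deque

def bfs_crawl(seed_nodes, fetch_neighbours, store, max_depth=2):
    """Breadth-first traversal without substitution: each node is visited
    exactly once, its public data are stored, and its neighbours are queued."""
    visited = set(seed_nodes)
    frontier = deque((node, 0) for node in seed_nodes)
    while frontier:
        node, depth = frontier.popleft()
        store(node)                      # persist interactions (content, date)
        if depth >= max_depth:
            continue
        for neighbour in fetch_neighbours(node):  # API call handled by the agent
            if neighbour not in visited:          # visit each node once only
                visited.add(neighbour)
                frontier.append((neighbour, depth + 1))

# usage with a toy adjacency map standing in for the crawler's API
graph = {"page1": ["userA", "userB"], "userA": ["userC"], "userB": [], "userC": []}
collected = []
bfs_crawl(["page1"], graph.get, collected.append)
print(collected)  # ['page1', 'userA', 'userB', 'userC']
```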
We consider five users, one group, and two pages, as shown in Figure 2. Let us denote a user by U_i; each user U_i can establish one or many links with other users U_j, where U_j ∈ {U_a, U_b, U_c, U_d, U_e} \ {U_i}. An isolated user is a user with no links. One or many users U_i can follow a page P_k, creating follower sets such as A and B, and we can write A = P_1{U_a, U_b, U_c} and B = P_2{U_c, U_d, U_e}. Let us denote by G a group of users, expressed by C = G{U_b, U_c, U_e}. The nature of the evolution of social activities related to visible interactions on Facebook provides an opportunity to characterise the usage of communities and the involvement of individuals in the virtual world.

In order to characterise the graphical structure of the interactions time series, we start by eliminating accidental transitions so as to decompose the series and extract its different components. To detect the trend, it is necessary to build an appropriate model-fitting process; the presence of a linear trend, depending on the structure of the time series, requires making the series stationary by differencing, with the data modified while taking into account the problem of over-differencing [25]. The Box-Cox family of transformations is used in (4) to minimize deviations and to subdivide the data into segments:

x_t^{(λ)} = (x_t^λ − 1)/λ for λ ≠ 0, and x_t^{(λ)} = log(x_t) for λ = 0.   (4)

For smoothing the time series, for an integer d with d ≤ n, the d points X_i, i ∈ {1, ..., d}, closest to X are selected; each receives a proximity score based on its distance from X. The time series is a sequence of interaction counts on Facebook represented by a cloud of points; the trend is defined by a polynomial term, and the appearance of parasitic movements requires a normalisation process in order to improve the results [26]. We use the moving average method [27] to decompose our time series, owing to its simplicity of implementation and its efficiency in detecting the shape of the signal rhythm by removing the seasonal component and reducing the noise [28], as well as its ability to preserve the trend without modification [29]. At this level, the use of the moving average makes it possible to mask certain components without influencing the value of the trend, and it reduces the noise. Observations are extracted each day from Facebook; the majority of these data are publications and comments. Considering the time series of observations of visible interactions on Facebook x_t, x_{t+1}, ..., x_n, where t is the time of observation, the p-order moving average is computed using (5), where m represents the width of the window, with m = 2p + 1:

T_t = (1/m) Σ_{j=−p}^{p} x_{t+j}.   (5)

Moving-average smoothing becomes more efficient when the data have an increasing shape; moreover, to perform the decomposition, the seasonality of the data is required as a parameter [30]. Considering the decomposition of the data in the series, the adopted decomposition process is based on the additive model, supported by the existence of the trend and by the research works [31, 32], expressed in (6) and (7), with (6) taking the form

x_t = T_t + S_t + R_t,   (6)

where T_t is the trend, S_t the seasonal component, and R_t the remainder (noise). Modeling usage by studying regular behaviour is an essential step towards characterising user behaviour in OSNs; the development of a study model based on the SNR signal quality approach will allow the detection of variations in usage frequencies. The seasonal component S_t describes a regular pattern of period P that repeats periodically in time with an almost stable shape, typically assuming that the series is strictly periodic according to (8):

S_{t+P} = S_t.   (8)

Signal analysis techniques are employed to model and examine time-varying signals [33-35].
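The decomposition just described can be sketched in a few lines of Python. This is a minimal illustration of the additive model (6) with a centred moving-average trend (5) and phase-averaged seasonality (8); it does not reproduce the paper's exact pipeline (the Box-Cox step and the order-365 trend window are omitted), and the synthetic input series is our own:

```python
import numpy as np

def decompose_additive(x, period):
    """Additive decomposition x_t = T_t + S_t + R_t, with the trend T_t
    estimated by a centred moving average of width m = 2p + 1."""
    p = period // 2
    m = 2 * p + 1
    trend = np.convolve(x, np.ones(m) / m, mode="same")    # T_t, cf. (5)
    detrended = x - trend
    # seasonal component: average the detrended values at each phase, cf. (8)
    phase_means = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.resize(phase_means, len(x))
    remainder = detrended - seasonal                        # R_t, cf. (6)
    return trend, seasonal, remainder

# usage on a synthetic daily interaction count with a weekly rhythm
t = np.arange(3 * 365)
x = 0.05 * t + 10 * np.sin(2 * np.pi * t / 7) \
    + np.random.default_rng(0).normal(0, 2, t.size)
T, S, R = decompose_additive(x, period=7)
```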
The finite succession of data from the social network Facebook is a signal displaying user-generated data. The SNR ratio quantifies the influence of the seasonal component on the signal as a whole, allowing the analysis of the global behaviour of the community and introducing the notion that the power of a signal helps to eliminate the accompanying noise [36]. Signal power analysis of the seasonal component for a decomposition frequency P enables the identification of regular movements of use, as well as the identification of the impact of the interaction rhythm on observations marked by the continuity of the signal [37, 38]. The signal energy of the OSN interactions, expressed in (9) as E(x) = Σ_t x_t², combined with the SNR, gives a new time series (10), SNR_P = E(S_t)/E(R_t).

RESULTS AND DISCUSSION

The interactive mode in OSNs changes from one community to another; monitoring the evolution of social activities on the Facebook OSN using time series has made it possible to build a model of how communities behave globally. Modelling the behaviour of OSN users requires the development of a knowledge extraction process specific to the nature of the OSN to be studied. For Facebook's complex network of relationships, examining the signal fluctuations related to global community behaviour shows that a dependency relationship exists between the seasons of the year and the rates of evolution of interactions, according to a distributed information flow model [39]. Figure 3 reflects an exponential evolution of activities on Facebook, preceded by a stable trend, expressing the significant growth in the use of this network. The annual periodicity is 365.25 days [40], and the trend estimated by the moving average in (5), using an order of 365 days and a window of 731.5 days, shows an exponential evolution of the speed of interactions, an important dynamism in this complex network. The time series covers two important periods: (A) between 2009 and 2013, and (B) between 2014 and 2016. Period (A) is characterised by a reduced growth rate reflecting a phase of discovery and training that allowed users to become familiar with the Facebook social network. The explosion of activity in period (B) reflects users' engagement and the effectiveness of the Facebook OSN in harnessing information and communication technologies (ICT) [1]. To understand and extract the related knowledge, periodic signal detection allows the analysis of usage variations and the related periodic patterns. Note that the reproduction of a signal pattern in time, following a relatively constant shape, represents a rhythm [41]. Seasonal signal energy measures the reactive aspect of users; energy also makes it possible to better identify out-of-period movements [42]. Applying the SNR method to signal energy allows us to measure periodic signal quality: calculating the ratio of the seasonal component energy S_t to the noise energy R_t allows us to distinguish fluctuations due to the rhythm of Facebook use from ordinary, rhythm-free use. The removal of the trend generated a new time series for regular use. Based on the input parameter (the order of seasonality), the SNR method applied to the signal energies allows relevant periods to be detected thanks to a correct configuration of the input data; this makes it possible to validate the regularity of the behaviour at the level of a complex network.
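Continuing the sketch above, the energy measure (9) and the seasonal-to-noise ratio (10) can be scanned over candidate periods; the largest ratio flags the dominant rhythm. The decompose_additive helper and the series x come from the previous sketch:

```python
import numpy as np

def energy(sig):
    """Signal energy, cf. (9): the sum of squared sample values."""
    return float(np.sum(np.square(sig)))

def seasonal_snr(x, period):
    """Ratio of seasonal-component energy to noise energy, cf. (10),
    for a candidate decomposition period."""
    _, seasonal, remainder = decompose_additive(x, period)
    return energy(seasonal) / energy(remainder)

for period in (7, 14, 21, 30):
    print(f"period {period:>2} days: seasonal/noise energy = "
          f"{seasonal_snr(x, period):.2f}")
# the weekly period (and its multiples) stands out for a series
# carrying a weekly rhythm
```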
Table 2 shows the annual results as a function of the input frequency. The values calculated by the SNR method are near 5%, and it can be seen that the strength of the signal energy associated with the seasonal component corresponds to the input frequencies of one week, two weeks, and three weeks, which shows that the data are characterized by a weekly periodicity. This shows that unexpected fluctuations and the noise produced have an impact on the regular, information-carrying component and require a separate study. The studied OSN environment, as a complex network, is characterized by the difficulty of foreseeing interactions; the entities are part of a dynamic interactive system. The overall behaviour derived from the study and analysis follows a collective trend, while unexpected behaviours appear in the form of disturbances representing the notion of noise. Figure 4 shows significant peaks indicating that the week is the decomposition frequency for the seasonality component that characterizes this series, indicating the existence of a pattern of use. The calculation of the energy associated with the data signals is a means of characterizing the intensity of the information. The repetition of shapes over successive time intervals then shows a rhythm resulting from collective use. The different ratios calculated on a weekly frequency (Figure 5) are characterised by homogeneity and a parallel rhythm, which indicates stability in usage. The year 2014 underwent a decrease in intensity towards the other frequencies, multiples of 1, which expresses a major transition towards a daily interaction rate on the Facebook OSN. The appearance of regularities indicates that the interactions of the studied community on Facebook are characterized by a specific periodicity that repeats over the years; hence, the intensity calculated through the SNR characterizes the seasonal component of the time series for a specific frequency, while the noise component and its energy make it possible to represent non-regular interactions in time.

CONCLUSION

Studying people's lives in OSNs through created activities, shared information, and direct or indirect interactions is a research area considered to lie at the intersection of several disciplines. Facebook user behaviours are considered a goldmine of data, from which useful knowledge can be extracted by researchers. The implementation of a knowledge extraction process based on time series and signal processing methods has made it possible to extract substantial knowledge related to the study of user behaviour on a social and popular platform such as Facebook. Using visible interactions and a non-volatile interaction detection process, the measurement of the intensity of the frequency of use showed a decrease in the weekly rhythm and an increase in the daily rhythm over the years, reaching a stability of use that reflects a continuity of use and generates a dependency relationship between Facebook and its users. In general, we have found that user interaction behaviour follows an increasing direction, keeping the same rhythm throughout the week, and this shows the integration of OSNs into daily life. We hope that these results can contribute to the development of social media dependency models.
4,533.2
2021-08-01T00:00:00.000
[ "Computer Science" ]
Ultrasound-Guided Off-Plane Lumbar Seated Erector Spinae Plane Blocks: Are There Advantages?

The erector spinae plane block (ESPB) has been widely used as a treatment strategy for a variety of acute and chronic painful conditions. ESPBs are typically performed under ultrasound guidance [USG] in an in-plane, long-axis approach, targeting the tip of the lumbar transverse process while the patient lies prone. The Off-Plane Seated Injection Technique ESPB [OPSIT-E] represents a useful alternative in situations where a standard prone spine injection would be technically challenged by circumstances that may include morbid obesity, orthopnoea, recent upper limb surgery, chest pain from a recent pacemaker implant, and subjects in whom in-plane approaches may be complicated by skin lesions. A seated, forward-flexed, off-plane injection position may also flatten the lumbar lordosis and shift adipose tissue more anteriorly, lessening the skin-to-target distance and facilitating bony landmark identification in high-BMI and hyperlordotic subjects. The relatively larger long-axis curved arc radius of the curvilinear probe [GE C1-5], in comparison to its transverse arc, also appears to offer an improved central field of skin-transducer contact, earlier needle visualization, and improved visualization of acute-angle trajectories to deep structures, which may be due to less crepuscular beam dispersion in comparison to a transverse probe orientation. Even with a linear probe, the orthogonal technique facilitates a more perpendicular vector, lessening the needle transit-to-target distance, which may in turn decrease procedure time and improve patient comfort. The OPSITE may also be easier to teach, learn, and master, as other studies have generally reported a higher rate of off-plane injection success among novice vascular interventionists.

Introduction

In 2018, the first application of a lumbar ESPB for the postoperative analgesia of hip arthroplasty was published. [Tulgar & Senkurk 2018] Kose et al noted that the lumbar transverse process injection point is deeper and more lateral than in the thoracic spine, and more challenging to sonographically visualize and inject. [Kose et al 2018] This is due to the relatively increased depth of the structure, but may also be partly due to the convex anterior lumbar curve, which angulates the structures. A forward-flexed seated injection posture, which flattens the lumbar lordosis, may therefore facilitate sonographic imaging.
The lumbar ESPB targets the potential space between the paraspinal muscle fascial envelope (spinalis, longissimus thoracis, and iliocostalis bundle) and the deeper lumbar transverse processes. Both thoracic and lumbar ESPBs are typically performed with an in-plane probe orientation, with dynamic monitoring of the fluid spread expanding between the erector spinae fascia and the deeper transverse processes. Unique intercostal perforating channels then facilitate anaesthetic diffusion into the deep paraspinal space, yielding circumferential thoracic-abdominal wall sensory nerve blocks of the dorsal and ventral rami of the thoracic and abdominal spinal nerves. [Magalhaes et al 2012] The transverse process acts as a bony anatomical barrier, which prevents inadvertent needle entry into deeper structures. An ESPB preserves bladder function and motor neuron function, enabling early mobilization. Since motor function is unaltered, immediate postoperative neurological evaluation of spinal cord function is possible. A USG caudal epidural is particularly safe in a transverse-plane, needle-perpendicular approach to the epidural space [Inklebarger et al 2012], and is also used for back pain relief. However, caudal epidurals are concluded by some authors to be more invasive than ESPB, and the former can only be performed at specialized institutions (Japan). For these reasons, ESPB may be a relatively less invasive injection option. In all cases, procedure accuracy was confirmed by sonographic bony landmarks and real-time needle trajectory and needle-tip monitoring. Informed consent was obtained prior to each procedure. All procedures were performed by one expert practitioner (JI), with many years of spine interventional practice and US diagnostic and image-guided injection experience. The criteria for accurate needle placement were direct needle-tip tracking, fascial plane tenting, and/or volumetric fluid expansion.
Discussion

Studies have demonstrated differences in the relevant anatomy of thoracic and lumbar blocks. [Fusco et al 2017] The anatomy of the thoracic and lumbar nerves also differs. Thoracic spinal nerves continue as the dorsal ramus and ventral ramus (intercostal nerves) after leaving the intervertebral foramen, while in the lumbosacral region, the ventral rami merge to form the lumbar and sacral plexuses. Another difference is that in the thoracic region the dorsal ramus divides into lateral and medial branches, while the lumbosacral dorsal rami separate into medial, intermediate, and lateral branches. Additionally, the lumbar dorsal rami of the lumbosacral nerves merge among themselves to form the cluneal nerves, which are responsible for the sensory innervation of the waist and buttocks. The sensory anatomy of the lower abdomen and lower extremities is therefore more complex than that of the thoracoabdominal region. Consequently, the craniocaudal spread of an ESPB is more limited in the lumbar region when compared to the thoracic region. In the transverse approach, some papers recommended that the needle trajectory be nearly parallel to that of the US transducer, which yields poorer needle visualisation in comparison to a long-axis longitudinal approach. However, in a near-vertical trajectory only a short segment of the needle is echogenically displayed, with the shaft and tip distinguished by rocking the transducer back and forth or by withdrawing the needle slightly and re-aligning it into a vertical plane. [Chapman GA et al 1996] Though the parallel needle orientation recommended above may be suitable for linear-probe targeting of more superficial structures, off-plane curvilinear probe [CVP] needle-tip tracking to the TP appears more suitable for OPSITE. A GE 4 CD CV probe with a transverse diameter of 18 mm was used; as the adult human index finger breadth is on average 20 mm, selecting a needle entry point one index finger breadth above or below the transducer [18 mm CVP width + 20 mm index finger width], while entering at a 60-degree angle and aiming at the centre of the TP, facilitates needle-tip tracking along the hypotenuse [slope], thereby optimizing visualization of the needle tip at the critical average lumbar TP depth of 70 mm below the skin surface. However, it has been noted that even inexperienced ultrasound interventionists obtain vascular access much faster using a short-axis (transverse) approach than using a long-axis (longitudinal) approach. A cadaver lumbar ESPB study with dye noted cephalocaudal spread from L3 to L5 in all specimens, with extension to L2 in four specimens. Medial-lateral spread was documented from the multifidus muscle to the lateral edge of the thoracolumbar fascia. There was extensive dye in and around the erector spinae musculature, and spread to the dorsal rami in all specimens. There was no dye spread anteriorly into the dorsal root ganglion, ventral rami, or paravertebral space. The conclusion was that lumbar ESP injection has limited craniocaudal spread compared with injection in the thoracic region; it has consistent spread to the dorsal rami, but no anterior spread to the ventral rami or paravertebral space. [Ma J et al 2021]

Methods

Patients were placed in a seated position. Subjects were forward flexed [FF] with elbows resting on thighs. Seated forward flexion helps to flatten the lumbar lordosis, lessening the skin-to-transverse-process target distance, while aligning the lumbar TPs and other relevant sonoanatomy more perpendicular to the US beams for image optimization. [Images below] Image 1 & 2.
Lumbar OPSITE positioning for a high BMI patient. The FF posture appears to flatten the lumbar curve and improve anatomical landmark-probe alignment. The forward-flexed seated position also appears to lessen the TP depth and angulation. Procedures were performed using a LOGIQ GE C1-5 [footprint 69.3 x 17.2 mm] transducer with LOGIQ E or GE E9 machines. [image x] The focus depth is adjusted to +/-70 mm, the depth of the lumbar transverse process to be targeted. [image x] US-guided OPSITEs were performed with the transducer positioned in a transverse plane directly centered over the tip of the target lumbar transverse process. Images 3 & 4: GE C1-5 curvilinear 1.4-5.7 MHz probe, seated position, image of the R L5 TP. Note that the SP, lamina and TP are all visualized, with the R TP tip depth at 60 mm in this example. Image 5: Seated lumbar spine forward-flexed [FF] injection position, which may augment probe-to-deep-bony-structure US beam alignment while also decreasing the skin-to-TP needle travel distance. Image 6: Long-axis view. White dots: L4-5 spinous processes (SPs). Star marks the sacrum. STEP 1: The US probe was placed in the centerline. The L4-L5 SPs and sacrum are identified in long axis to determine the level. STEP 2, Image 7: The probe is then moved in-plane from the midline laterally to locate the corresponding level TP tip. This image also shows a standard prone-position in-plane ESPB, the beveled edge down with the needle tip positioned just above the L3 TP tip. [Brown arrow] STEP 3, Images 8 & 9: Off-plane ESPB orientation midline bony landmark image: The probe is then rotated orthogonally to generate a transverse [TV] plane image centered over the spinous process and visualizing the facet joint and transverse process. Note: At the L4 and L5 levels the posterior superior iliac spine and crest are also visualized [image], serving as further confirmatory landmarks. This is a TV-plane view at the L4 level. Blue arrowheads: tip of the TV process. White arrow: off-plane injection trajectory. STEP 4: Once all relevant midline anatomical landmarks listed in Step 3 have been identified, the probe is then moved from the midline laterally to position the centre of the probe over the tip of the target transverse process. The skin is then marked and cleaned. [image] Images 10 & 11: Skin marking for a L L5 TV ESPB in a patient with nummular psoriasis. The OPSITE in this case had an advantage over the in-plane technique, as the needle could be introduced precisely in a small area of skin clear of lesions, with the point of needle skin entry introduced 30 mm below the center of the probe. US image: the line is the needle trajectory; the arrow points to the needle tip [beveled edge lateral] at 50 mm depth. STEP 5: While maintaining probe position over the tip of the transverse process [TP], the needle is introduced +/-1 cm [finger breadth] either above or below the probe, aiming for the center of the probe. Once the needle tip is identified, the needle is methodically advanced to the tip of the TV process and the injection is performed. If the needle shaft and tip are not well visualized, tracking may be facilitated by monitoring for tenting of the tissues as the needle passes through the fascial planes or by injecting small volumes of fluid to locate and brighten the needle tip. The needle may also be withdrawn slightly and steered in an arc trajectory around the PSIS/iliac crest [if required],
when injecting at the L4 and L5 levels. As the mean landmark depth of the L1-L5 transverse processes averages 70 mm in males and females [Kawchuk GN et al 2011; image: 6.58 cm], with an approximately 66.8-70 degree needle angulation trajectory along the hypotenuse in the off-plane CVP injection technique, the lumbar transverse process may be readily reached with a 90 mm spinal needle while offering a surplus 10 mm of shaft steerage leeway, which would not be available in an in-plane procedure. As the curvilinear probe has a lessened radius with increased arc convexity [image xx], the divergence of the peripheral beam angles is greater. Off-plane injection takes advantage of central skin contact, affording earlier needle visualisation. As the needle trajectory is more aligned and parallel to the central vertical beams, successful needle-tip tracking at greater depth may be achieved. Images 12 & 13: Skin to L4 TP tip distance of 6.58 cm in a high BMI patient. Focus again adjusted to TP depth. Image 2: C1-5 GE transverse arc. The GE C1-5 curved-array transducer has a frequency of 1.4-5.7 MHz and a footprint of 17.2 x 69.3 mm. The needle tip is seen as a highly echogenic (white) spot. [Blavis M et al 2003] Anechoic injection fluid may also brighten the needle tip in off-plane injection. [blue arrowhead image] Image 14: Off-plane bright needle-tip echo reflection at depth [blue arrow]. [Caspers JM et al 1997; Hendrick WR et al 1995] Determining OPSITE Needle Entry Position The GE C1-5 curved-array transducer has a footprint of 17.2 x 69.3 mm; one-half the transverse footprint distance is 8.6 mm. As the average adult index finger width is 20 mm [Johnson PW et al 2007], the average distance between the centre of the probe and one finger breadth beyond its edge is about 30 mm. As the depth of the average adult lumbar TP is 70 mm, the optimal needle-tip entry point from the edge of the probe can be calculated. [image] Images 15 & 16: The relatively gentler arc of an orthogonal-axis orientation offers a good skin surface contact area and a needle entry point closer to the lateral border of the probe. This in turn facilitates a more acute-angle injection trajectory with earlier needle visualization. Lessened US beam divergence may also facilitate needle-tip tracking at depth. Starting 1.5-2 cm away from the superior-inferior edge of the curvilinear probe and angling towards the centre of the probe while advancing the needle through soft tissues optimizes needle-tip tracking and trajectory at depth. If the needle track and tip are not clearly visualized, withdrawing the needle slightly and/or arcing the needle to curve to the target is recommended. This may also help to steer the needle around the PSIS. Tracking may further be guided by observing tissue tenting as the needle tip progresses through tissue layers, and also by injecting small amounts of fluid. Once the needle tip position and shaft trajectory are confirmed, the needle is then steered to the tip of the transverse process.
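The entry-point geometry described above can be made explicit with a little trigonometry. In the sketch below, the 8.6 mm half-footprint, 20 mm finger breadth, 70 mm TP depth and 90 mm needle length are taken from the text; treating the needle path as the straight hypotenuse of a right triangle is a simplifying assumption that ignores needle arcing and tissue deflection.

```python
# Needle-path geometry for the off-plane (OPSITE) approach described above.
# Values from the text; the straight-line hypotenuse is a simplifying assumption.
import math

HALF_FOOTPRINT_MM = 17.2 / 2      # half the transverse footprint of the C1-5 probe
FINGER_BREADTH_MM = 20.0          # average adult index finger breadth
TP_DEPTH_MM = 70.0                # average lumbar transverse-process depth
NEEDLE_MM = 90.0                  # spinal needle length

offset = HALF_FOOTPRINT_MM + FINGER_BREADTH_MM          # ~28.6 mm from probe centre
path = math.hypot(offset, TP_DEPTH_MM)                  # straight-line needle path
angle = math.degrees(math.atan2(TP_DEPTH_MM, offset))   # trajectory angle from the skin

print(f"entry offset ~ {offset:.1f} mm, path ~ {path:.1f} mm, "
      f"angle ~ {angle:.1f} deg, leeway ~ {NEEDLE_MM - path:.1f} mm")
# -> entry offset ~ 28.6 mm, path ~ 75.6 mm, angle ~ 67.8 deg, leeway ~ 14.4 mm
```

The computed angle and leeway are consistent with the approximately 66.8-70 degree trajectory and roughly 10 mm of surplus shaft quoted in the text.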
Discussion Due to the newness of the procedure it is controversial, and RCTs are urgently needed. [Qui Y et al 2020] However, recent meta-analysis has demonstrated that ESPBs are effective in decreasing postoperative pain intensity and postoperative opioid consumption in spine surgery. [Ma J et al 2021] Case reports have also concluded that lumbar USG-ESPB is a technically simple and safe procedure for failed back surgery pain management. [Takahashi H et al 2018; Harbell MW et al 2020] However, some case reports have begun to demonstrate the role of ESP blocks in the management of DLBP. [Schwartz et al 2019] Conclusions OPSITE may, in certain circumstances, offer some advantages as an alternative to the standard prone in-plane ESPB technique. The seated forward-flexed injection position may flatten lumbar lordosis or hyperlordosis for improved target visualisation and decrease the needle transit distance. The more perpendicular needle position in OPSITE may also facilitate injections when skin lesions are present [images 10-11]. The seated FF position also helps to decrease accentuated lumbar lateral-flexion Cobb angles, which might otherwise challenge typical lumbar ESPB in scoliotic patients. [Harbell 2020] Local anaesthetic is recommended at each lumbar TP level. [Tulgar S et al 2020] Lumbar ESP injections are customarily performed at the L4 level, and injectate spread for this technique has been documented in multiple studies. [Harbell 2020; Tulgar et al 2018; Chung et al 2018; De Lara Gonzales et al 2019] Chung et al administered ESPB using a 20 mL mixture for pain management in lower-extremity complex regional pain syndrome. Balaban et al performed ESPB with a 30 mL mixture for postoperative analgesia in total knee arthroplasty. Fluoroscopic imaging demonstrated spread to the L2-S1 levels in both lumbar ESPB cases. [Chung et al; Balaban et al 2018] In one study, a higher-volume single injection (40 mL) was used to demonstrate the spread of LA between L1 and S4. [Celik et al 2019] De Lara González et al reported their findings in 6 cadavers after bilateral lumbar ESPB (total: 12 blocks) was performed using a 20 mL LA mixture. In all applications, the spread of the LA mixture was observed between L2 and L4 in the craniocaudal plane. In nine applications the spread included L5 caudally, and in one application L1 cranially. The first question regarding lumbar ESPB is whether LA spreads anterior to the transverse process. In nine injections this anterior spread was observed, with spread to the medial border of the psoas muscle in seven and spread to the L3 and L4 spinal nerves in two injections. [De Lara Gonzalez et al 2018] Harbell et al performed nine lumbar ESPBs on five cadavers using 20 mL at the L4 transverse process level and reported staining of the multifidus and longissimus muscles following six injections. In only one injection was the spread observed posterior to the lumborum muscle. No spread anterior to the transverse process was reported.
Reports from cadaveric anatomic studies are essential for understanding the mechanism of action of plane blocks. However, due to their nature, cadaveric studies have a significant limitation. Even when fresh cadavers are used, tissue tension decreases due to the loss of vitality. Therefore, the spread of injectate in cadavers most probably does not accurately represent the spread that would occur under normal conditions. [Tulgar et al 2020] Other literature has demonstrated methods of both in-plane and out-of-plane USG-ESPB techniques. [Schwartz et al 2019] Customary lumbar ESP blocks with fluid allow for continued visualization of fascial plane separation and fluid expansion under ultrasound guidance. Small amounts of air injected under ultrasound guidance have been used to monitor needle tip position for other types of injections. [Hunter et al 2000] If in doubt, a small amount of normal saline (2-5 ml) may also be injected first to confirm needle tip placement and interfascial plane expansion. ESPB contraindications include infection at the site of injection in the paraspinal region and patient refusal. However, ESPBs carry a very low risk of complications, as the sonoanatomy is easily recognizable and there are no structures in close proximity at risk of needle injury. [Ahiskagliogu 2018; Beek 2019] The inter-evaluator intraclass correlation coefficient averaged 0.98 for all depth measurements. Trainees and consultants fail to use ultrasound to its full potential during interventional procedures. A lack of understanding of how to position the needle tip is a major obstacle. Needle visualisation is essential when inserting needles into tissues that may be in close proximity to structures such as vessels and nerves. [Maury E et al 2001]
4,134.2
2024-06-04T00:00:00.000
[ "Medicine", "Engineering" ]
Electrochemical Deposition of Pure-Nickel Microstructures with Controllable Size Pure nickel microstructures have been widely used in MEMS and have great application potential as a sacrificial mandrel for fabricating terahertz micro-cavity components. The performance of MEMS and terahertz micro-cavity components can be significantly improved through the use of high-quality pure nickel microstructures. Up to now, microfabrication techniques, such as laser micromachining, wire electrical-discharge machining, and cold-spray additive manufacturing, have been used to machine various types of such microstructures. However, huge challenges are involved in using these micromachining techniques to fabricate pure-nickel microstructures with controllable size and good dimensional accuracy, surface roughness, and edge radius. In this paper, taking the example of a pure-nickel rectangular mandrel that corresponds to the size of the end face of a 1.7-THz rectangular waveguide cavity, the machining processes for the electrochemical deposition of pure-nickel microstructures with controllable size, high dimensional accuracy, and good surface roughness and edge radius are discussed systematically. This proposed method can be used to manufacture various types of high-quality pure-nickel microstructures. Introduction Pure nickel is used widely in the preparation of various metal microstructures because of its high ductility, strength, and fatigue and corrosion resistances and superior magnetoelasticity [1,2]. Various types of pure-nickel microstructures have been used successfully in micromachines and microsystems, such as the microscopic coil springs of semiconductor devices [3], the microgear reducer of a microscopic transmission system [4], the microrotor of a microgyroscope [5], and the microcantilevers of hydrogen sensors [6]. Recently, the integral fabrication of a high-working-frequency terahertz rectangular waveguide cavity was reported, and this novel process depends on the manufacture of a pure-nickel sacrificial rectangular mandrel and its selective chemical dissolution [7,8]. The transmission of terahertz signals can be improved significantly through the fabrication of such a cavity, and so a pure-nickel sacrificial rectangular mandrel with controllable size and good surface roughness and fillet radius has great application potential in the manufacturing of terahertz microcavity components. Various technologies are currently available for machining pure-nickel microstructures. Song et al. studied wire electrical-discharge machining experimentally and manufactured complex pure-nickel parts at microscale and mesoscale using the optimal combination of machining parameters [9]. Hendijanifard et al. studied the Marangoni flow and phase explosion during the laser micromachining of pure nickel and machined microholes in pure-nickel films [10]. However, the influences of heat-affected zones, residual stresses, and melting layers mean that those two methods inevitably have some drawbacks. Cormier et al. fabricated pure-nickel pyramidal fin arrays using cold-spray additive manufacturing, but that approach fell short of achieving high dimensional accuracy and good surface accuracy [11]. Bi et al.
obtained a pure-nickel rectangular mandrel with controllable size, high dimensional accuracy, and good surface roughness and fillet radius by wire electrochemical micromachining, but the low machining efficiency of that method is not conducive to the mass production of pure-nickel rectangular mandrels [8]. Therefore, it is necessary to explore other types of micromachining technology for pure-nickel microstructures. Electrochemical deposition (ECD) is a typical additive micromachining technology in which the product is formed layer by layer, and ECD technology based on an aqueous solution generally has the advantages of a wide range of applicable materials, low operating temperature, coordinated control of microstructure, morphology, and properties, and flexible application form, among others [12,13]. Theoretically, when the metal atoms or grains formed by the reduction reaction are stacked in a controlled manner as designed, metal-based structures and parts of any shape can be fabricated by ECD [14]. With increasing application requirements in the fields of microelectromechanical systems and terahertz devices, ECD has gradually been recognized as a mature micromachining technology to meet these high-precision requirements [15,16]. In this paper, an ECD method is proposed for fabricating pure-nickel microstructures with controllable size, high dimensional accuracy, and good surface roughness and edge radius. Taking the example of fabricating a pure-nickel rectangular mandrel that corresponds to the size of the end face of a 1.7-THz rectangular waveguide cavity, the manufacturing methods are described in detail together with the corresponding experimental investigations.

Materials In this study, a dry-film photoresist (GPM220; DuPont, Wilmington, DE, USA) (hereinafter referred to simply as the photoresist) with a thickness of 150 µm was selected to prepare the mask with rectangular grooves, and a plate made of 304 stainless steel with a diameter of 100 mm and a thickness of 1 mm was selected as the substrate. A pure-nickel plate was used as the anode. The composition of the electrochemical deposition solution of pure nickel is given in Table 1.

Methods The process for manufacturing the rectangular mandrel is divided into two steps: (i) preparing the mask with rectangular grooves and (ii) the ECD of the rectangular mandrel. The end-face width of the rectangular mandrel is determined by the width of the mask, the length of the rectangular mandrel is determined by the length of the mask, and the end-face thickness of the rectangular mandrel is determined by the time of ECD. The mask is manufactured by lithography, including substrate treatment and photoresist coating, baking before exposure, exposure, baking after exposure, and development. The length and width of the mask are controlled through the photomask, and the end-face height of the mask is guaranteed by the thickness of the photoresist; the thickness selected for use is greater than the end-face thickness of the rectangular mandrel prepared by ECD. The key processes of exposure and development during the preparation of the mask are shown in Figure 1. The thickness of the mask is D1, its length is L1, and its width is W1. The dimensional accuracy and side surface roughness of the mask are guaranteed mainly by the exposure and development parameters, and its bottom surface roughness is guaranteed by the surface roughness of the substrate.
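The statement that the end-face thickness is set by the deposition time can be made concrete with Faraday's law of electrolysis. The sketch below is a rough first-order estimate, not part of the reported procedure: the 1.0 A/dm² current density and 8 h deposition time are taken from the conclusions of this paper, while the cathodic current efficiency is an assumed value.

```python
# Estimate electrodeposited nickel thickness from Faraday's law.
# Illustrative sketch; the current efficiency is an assumed value.

F = 96485.0        # Faraday constant, C/mol
M_NI = 58.69       # molar mass of nickel, g/mol
N_E = 2            # electrons transferred per Ni(II) ion
RHO_NI = 8.908     # density of nickel, g/cm^3

def ni_thickness_um(current_density_a_dm2, hours, efficiency=0.95):
    """Deposit thickness (um) for a given current density (A/dm^2) and time (h)."""
    j = current_density_a_dm2 / 100.0              # A/dm^2 -> A/cm^2
    charge = j * hours * 3600.0                    # C/cm^2
    mass = efficiency * charge * M_NI / (N_E * F)  # deposited mass, g/cm^2
    return mass / RHO_NI * 1e4                     # cm -> um

# With 1.0 A/dm^2 for 8 h (values reported in this paper) and an assumed ~85%
# current efficiency, the estimate lands near the 83 um end-face thickness.
print(f"{ni_thickness_um(1.0, 8.0, efficiency=0.85):.1f} um")
```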
The ECD of the rectangular mandrel is shown in Figure 2. The end-face thickness of the finally prepared rectangular mandrel is D2, its length is L2, and its end-face width is W2. The dimensional accuracy, side roughness, and edge radius of the rectangular mandrel are guaranteed mainly by the accuracy of the mask and the ECD parameters, and the bottom surface roughness of the rectangular mandrel is determined by the surface roughness of the substrate. The length L2 and end-face width W2 of the finally prepared rectangular mandrel are consistent with the length L1 and width W1 of the mask, and the thickness D2 of the end face of the rectangular mandrel is less than the thickness D1 of the mask.

Substrate Treatment The surface quality of the lower surface of the rectangular mandrel is guaranteed by the surface quality of the substrate. Therefore, if surface treatment of the substrate is not carried out, then various defects on the substrate surface will be copied on the lower surface of the rectangular mandrel. Moreover, impurities, such as oil stains and particles on the substrate surface, will make the photoresist unable to fit closely with the substrate, resulting in the falling off of the mask in the subsequent ECD process and the deposition of metal on the lower surface of the rectangular mandrel beyond the groove edge area after ECD. Therefore, surface treatment of the substrate is very important. First, the substrate surface is polished precisely to remove the macroscopic and microscopic defects on its surface. Then, the substrate is put into acetone solution for ultrasonic cleaning to remove the surface oil stains and adsorbed impurities, and then the substrate is immersed in 0.1-mol/L dilute hydrochloric acid solution for 30 s. Finally, the substrate is cleaned ultrasonically with deionized water and then dried for use. To verify the effect of substrate treatment, the substrate surface roughness must be measured, during which two different positions on three substrates are selected randomly.
The specific measurement results are given in Table 2, and scanning electron microscopy (SEM) and atomic force microscopy (AFM) observations of the substrate surface morphology are shown in Figure 3.

Photoresist Coating and Baking before Exposure The aim in photoresist coating is to apply it tightly and evenly on the substrate surface, so it is necessary to perform the process via a laminator. To achieve the film-coating effects of a flat, uniform, and bubble-free surface, it is necessary to apply the photoresist with a certain pressure and temperature, and in the present study the laminating pressure was 1.5 kg/cm². A dry-film photoresist is thin and brittle and so is easily damaged when a certain pressure is applied directly by the laminator, as shown in Figure 4a. Therefore, it is necessary to set the laminating roller to a certain temperature first so that it preheats the photoresist when close to it. After the photoresist becomes soft, one tears off the polyethylene protective film on one side and quickly sticks it on the substrate. Controlling a certain temperature during coating softens the photoresist by heating, thereby making it fit the substrate more easily. Therefore, the photoresist preheating temperature, i.e., the temperature of the laminating roller, has a great impact on the coating quality. Via previous experimental exploration, 60 °C was selected for the preheating and coating of the photoresist, and the workpiece after coating is shown in Figure 4b. After the photoresist coating is completed, baking before exposure must be done. The purpose of this is to increase the fluidity of the photosensitive adhesive in the photoresist so that the photoresist and the substrate fit more closely and the photoresist surface is flatter and more uniform, with the overall aim being to obtain better exposure results. Before the baking before exposure, the polyethylene protective film on the other side of the photoresist surface must be torn off. To obtain better baking, it is necessary to (i) set the baking temperature and time so that the temperature rises slowly and (ii) allow the temperature to drop slowly for cooling after baking.
Via previous experimental exploration, the chosen temperature and time settings for baking before exposure were as given in Table 3. Currently, the main photoresist-exposure methods are proximity, contact, and projection, and contact exposure was used in the present study. During exposure, the photoresist surface is in direct contact with the photomask, and the designed pattern on the photomask is copied onto the photoresist in the ratio of 1:1. Because the light-source radiation intensity of the exposure machine used in the present study was stable, the exposure accuracy was determined mainly by the exposure time. If the exposure time is too short, then the photoresist layer cannot be fully sensitized, and the crosslinking reaction is insufficient. During subsequent development, the photoresist at the edge of the exposure area will also dissolve in the developer, resulting in an uneven or large final mask relative to its design size. If the exposure time is too long, then the photoresist at the mask will produce a cross-linking reaction due to light scattering or diffraction, resulting in excessive exposure. During subsequent development, the photoresist near the edge of the mask cannot dissolve fully in the developer, resulting in the actual mask being small or there being more photoresist residues at the groove edges. Therefore, choosing a reasonable exposure time is the key to high-precision copying of the mask design pattern. In the present study, the end-face size of the 1.7-THz waveguide cavity was 83 µm × 165 µm, so the design width of the mask was 165 µm. Figure 5a shows how the average size of the mask relative to the design size varied with the exposure time. With increasing exposure time, the actual size of the mask decreases gradually. When the exposure time is 70 s, the consistency of the obtained mask is good, and its average size is close to the design size.
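As a minimal illustration of this selection criterion, the snippet below picks the exposure time whose mean measured mask width lies closest to the 165 µm design width; the width values are hypothetical stand-ins for the data behind Figure 5a, not measurements from the paper.

```python
# Choose the exposure time whose mean mask width is closest to the design width.
# The measurements below are hypothetical placeholders, not data from the paper.

DESIGN_WIDTH_UM = 165.0

mean_width_by_exposure_s = {  # exposure time (s) -> mean measured mask width (um)
    50: 172.1,
    60: 168.0,
    70: 165.3,
    80: 161.7,
}

best_time = min(mean_width_by_exposure_s,
                key=lambda t: abs(mean_width_by_exposure_s[t] - DESIGN_WIDTH_UM))
print(best_time)  # -> 70, matching the exposure time adopted in the study
```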
Figure 5b shows the experimental results for an exposure time of 70 s, which was selected as the best exposure time in the present study. After exposure, baking is required, the purpose being to accelerate the cross-linking reaction of the photoresist in the exposure area and make the exposure area more stable. However, excessive post-baking will increase the stress in the adhesive film and even crack and deform it, while baking at too low a temperature and for insufficient time will lead to insufficient cross-linking reaction of the photoresist and poorer quality of the mask after subsequent development. The chosen temperature and time settings for baking after exposure were as given in Table 4.

Development Development is a key step in the preparation of the mask, the purpose being to dissolve the unexposed areas in the photoresist and retain the exposed ones to obtain the rectangular grooves of the template. In this study, during development, the exposed workpiece was put into the developer for static development, and the developer was shaken and stirred every 30 s. Because the unexposed photoresist reacts with isopropyl alcohol solution and produces white precipitation, it can be judged whether the development is complete by putting the developed workpiece into isopropyl alcohol solution. With insufficient development time, the photoresist in the unexposed areas cannot dissolve fully, leading to poor edge uniformity of the mask due to residual photoresist. With excessive development time, although the photoresist in the unexposed areas dissolves fully, the developer penetrates into the exposed areas at the mask edges, causing them to swell and lowering the uniformity of the groove edge size of the mask and causing the edges to lose their luster. Therefore, reasonable control of the development time is also needed for mask accuracy.
After exposure for 70 s, different development times were selected, and Figure 6a shows how the average size of the mask relative to the design size after development varied with the development time. It is found that a development time of 720 s gives clean photoresist development and good mask size and accuracy, the mask being close to its design size. Figure 6b shows the experimental results for a development time of 720 s, which was chosen as the development time in the present study.

Results of Preparing Masks with Rectangular Grooves Using the optimized combination of process parameters, masks with a design width of 165 µm were prepared, and Figure 7 shows an example of their morphology. Six different workpieces were randomly selected for measurement, and three masks in each workpiece were randomly selected for size measurement.
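A minimal sketch of how the 6 × 3 width measurements might be aggregated is shown below; the individual values are hypothetical, with only the design width and the reported 164.5 µm average taken from the text.

```python
# Aggregate mask-width measurements (3 masks from each of 6 workpieces).
# Individual values are hypothetical; only the design width is from the paper.
from statistics import mean, stdev

DESIGN_WIDTH_UM = 165.0

widths_um = [  # one row per workpiece, three masks each
    [164.2, 164.8, 165.1],
    [163.9, 164.6, 164.3],
    [164.7, 165.0, 164.1],
    [164.4, 164.9, 164.6],
    [163.8, 164.5, 165.2],
    [164.0, 164.7, 164.2],
]

flat = [w for row in widths_um for w in row]
print(f"mean = {mean(flat):.1f} um, sd = {stdev(flat):.2f} um, "
      f"offset from design = {mean(flat) - DESIGN_WIDTH_UM:+.1f} um")
# The hypothetical values above are chosen to reproduce the reported 164.5 um mean.
```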
Figure 8 shows how the measurements varied, and accordingly the average mask width was 164.5 µm.

Electrochemical Deposition of Rectangular Mandrel Figure 9 shows the experimental platform used for the ECD of the rectangular mandrel in the mask. A DC power supply with a resolution of 0.01 mA was used, and an air-bearing spindle was used to rotate the cathode. Agitation and temperature control of the solution were realized with a temperature-controlled magnetic stirrer. During the experiment, the stirrer controlled the magnetic rotor to maintain constant rotation to stir the ECD solution and improve its renewal flow around the cathode. The experimental conditions are given in Table 5. According to the end-face size of the rectangular mandrel corresponding to the 1.7-THz rectangular waveguide cavity, the ECD time in the present study was set to 8 h. Figure 10a shows the workpiece after ECD, and Figure 10b shows the obtained pure-nickel rectangular mandrels. The method for measuring the rectangular-mandrel width was to randomly select a rectangular mandrel and randomly select six positions along its length for measurement. The method for measuring the rectangular-mandrel fillet radius and surface roughness was the same as that of the width.
Figure 10c-e show observations of the rectangular-mandrel edge, bottom, and side morphologies, respectively, and Table 6 gives the results of measuring the technical indices of the width, edge radius, bottom surface roughness, and side surface roughness of the pure-nickel rectangular mandrel.

Conclusions An ECD machining process for pure-nickel microstructures with controllable size, high dimensional accuracy, and good surface roughness and edge radius was investigated and discussed systematically, and the following conclusions were obtained. In the process of preparing the mask with rectangular grooves, an exposure time of 70 s and a development time of 720 s were considered to be optimal choices in the present study. During the electrochemical deposition, the temperature of the solution was kept at 45 °C to give the best reaction rate with a current density of 1.0 A/dm². The pH of the electrochemical deposition solution was controlled to be in the range of 3.5-4.5. The width of the final prepared rectangular mandrel is consistent with that of the mask. The measurement results show that the bottom surface roughness is less than 0.1 µm, the side roughness is less than 0.2 µm, and the edge radius is less than 9.2 µm. Because the specific size is controllable and the dimensional accuracy, surface roughness, and edge radius are good, the proposed method can be used to manufacture various types of high-quality pure-nickel microstructures.
7,378
2022-04-29T00:00:00.000
[ "Materials Science" ]
Terahertz correlation spectroscopy infers particle velocity and rheological properties Correlation spectroscopy is an analytical technique that can identify the residence time of reflective or fluorescent particles in a measurement spot, allowing particle velocity or diffusion to be inferred. We show that the technique can be applied to data measured with a time-domain terahertz sensor. The speed of reflectors such as silica ballotini or bubbles can thus be measured in fluid samples. Time-domain terahertz sensors can therefore be used, for the first time, to measure rheological properties of optically opaque fluids that contain entrained reflectors, such as polyethylene beads. The transparency of many optically opaque solids and fluids to terahertz radiation allows terahertz time-domain sensors to noninvasively measure chemical spectra and image three-dimensional structures inside previously inaccessible systems [1]. Changes over time of the internal structure of specimens, such as the ingress of a hydration front into a polymer sample, have been observed using such methods [2]. We show in this Letter that the method of correlation spectroscopy [3], developed for confocal fluorescence microscopy, can be applied to data from a time-domain terahertz sensor to measure particle velocity as a spot measurement, and also to measure a one-dimensional velocity profile of a fluid.

In terahertz pulsed imaging, a subpicosecond pulse of radiation is focused into a beam that is mechanically scanned over a sample in lateral x- and y-directions, and time-of-flight measurements of the reflected radiation are used to obtain z-positions of reflective features. Although the narrowest focus of such a beam is diffraction-limited to about 200 μm in diameter, and the axial position is determined from the time-of-flight data with a resolution limited by the pulse duration, this spot-scanning system can support many of the same measurements made by an optical confocal microscope, except on a coarser scale, and often in optically opaque media such as polymers and ceramics.

An important confocal imaging technique that we believe has not previously been adapted to terahertz instruments is fluorescence correlation spectroscopy (FCS). In FCS, the fluorescence intensity from a static observation volume within a fluid specimen is recorded for a period of typically milliseconds or seconds. When the fluid contains dilute fluorescent species that move, by diffusion or advection (flow), the varying number and positions of fluorophores in the observation volume cause the emitted light intensity to fluctuate. The durations of the "bumps" in the fluorescence signal depend on the speeds of the fluorophores and the size of the observation volume. The correlation time of the fluorescence signal can be evaluated using established mathematics, and this can be related to the speed of a fluid containing entrained fluorophores. FCS can also be used to identify the numerical density of fluorophores, their diffusion coefficient, and fluid viscosity [3]. It is also established that reflective particles may be used instead of fluorescent probe molecules [4], and it is this approach that we adapt for terahertz imaging.
In correlation spectroscopy, it is typically assumed that the response function of the measurement instrument, usually a confocal microscope, is described by a prolate ellipsoidal Gaussian observation volume [5]. The signal I(x, y, z) that is detected from a point source at position (x, y, z) therefore has the following form:

I(x, y, z) \propto \exp\!\left( -\frac{2(x^2 + y^2)}{w^2} - \frac{2 z^2}{(\xi w)^2} \right),  (1)

where w defines the lateral spot width, and ξw its axial width. A calibration measurement is needed to establish that Eq. (1) describes the detection of particles by the terahertz sensor, and to determine w. Provided that this is established, the autocorrelation of the terahertz reflection signal from a fluid containing randomly distributed reflectors should be identical to the behavior that has been established for FCS. If the fluctuation of a signal is defined as δI = I − Ī, where Ī is the mean intensity, the normalized autocorrelation G is given as follows:

G(\tau) = \frac{\langle \delta I(t)\, \delta I(t + \tau) \rangle}{\langle I \rangle^2}.  (2)

In the case of particles moving at uniform velocity with negligible diffusion, the expected autocorrelation curve was established as Eq. (3), where τ_flow = w/V for uniform particle flow of speed V in the x-direction, and N is the average number of particles within the observation volume [3]:

G(\tau) = \frac{1}{N} \exp\!\left[ -\left( \frac{\tau}{\tau_{\mathrm{flow}}} \right)^2 \right].  (3)

This enables the particle velocity to be inferred from experimentally measured terahertz reflection signals, by fitting the value of τ_flow to the normalized autocorrelation of the detected signal.

Experimental measurements of terahertz reflection were made with a commercial time-domain terahertz sensor (TPI imaga 2000, Teraview Ltd., Cambridge, UK) with a focal length of 7 mm, and an established optical setup [6] in which we simply replaced the solid sample with a fluid sample. The instrument emits subpicosecond pulses generated from a Ti:sapphire laser [7], and the reflected terahertz radiation is captured by the same lens system used for illumination. The axial resolution is about 50 μm. First, calibration images were captured of silica ballotini (Q.A. Equipment Ltd., UK; density 2640 kg/m³, diameter 425 μm) that were fixed to a mirror (Thorlabs, flatness 100 nm) with double-sided tape and immersed in paraffin oil (Sigma Aldrich UK; density 880 kg/m³, viscosity 101 ± 5 mPa s measured on a Brookfield viscometer at 20 °C). The sample was rastered through the beam to capture a reflection intensity image [6], which confirmed that the assumption of a Gaussian point spread function with w = 350 μm was approximately valid for this setup when the sample was in focus. Interestingly, the backreflected images had a similar value of w for all the sizes of ballotini in this study, perhaps because the signal was dominated by reflection from a single point of perpendicular surface on the glass spheres. Therefore the terahertz reflection signal from the ballotini can be used to infer particle velocity via Eq. (3) without much difficulty.

To perform terahertz correlation velocimetry as a spot measurement, silica ballotini of diameters 212, 425, 600, and 850 μm were separately dispersed into 1 ml volumes of the paraffin oil so that the particles comprised 1% vol. of the fluid. Drop-time observations were used to establish the actual sedimentation velocity of individual ballotini. Small volumes of dispersed ballotini were added to the top of 30 ml of static paraffin oil in a polyethylene drop-tube. The terahertz reflection intensity of ballotini falling through the static focal spot of the sensor was measured side-on through the polyethylene, as shown in Fig. 1.
The refractive index contrast of silica (n = 1.98) and paraffin oil (n = 1.5) at 1 THz [8] provided ample reflected signal for analysis. Time-resolved signals are shown in Fig. 2. The autocorrelation of these measured signals can be obtained by a standard MATLAB function, and the average residence time of the reflective particles within the observation volume (and hence their velocities) can be inferred by fitting Eq. (3) to this processed data. The velocities that are found by correlation spectroscopy are consistent with Stokes drag and drop-time observations. Sample data and MATLAB code for data analysis are given in Dataset 1 (Ref. [9]).
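The original analysis was done in MATLAB (Dataset 1); the following Python sketch illustrates the same fitting idea on a synthetic transit trace. The spot width w = 350 μm and the 30 Hz sampling rate are taken from the text, while the simulated particle speed is an arbitrary demonstration value.

```python
# Infer particle speed by fitting Eq. (3) to the normalized autocorrelation of a
# terahertz reflection trace. Synthetic data; w and the 30 Hz rate are from the text.
import numpy as np
from scipy.optimize import curve_fit

W_MM = 0.35          # lateral spot width w (350 um)
FS_HZ = 30.0         # sampling rate of the sensor
V_TRUE = 2.0         # simulated particle speed, mm/s (arbitrary demo value)

rng = np.random.default_rng(0)
t = np.arange(0, 600.0, 1.0 / FS_HZ)
signal = 0.02 * rng.standard_normal(t.size)          # background noise
for t0 in rng.uniform(0, 600.0, 60):                 # sparse particle transits
    signal += np.exp(-2.0 * (V_TRUE * (t - t0)) ** 2 / W_MM ** 2)

# Normalized autocorrelation G(tau) = <dI(t) dI(t+tau)> / <I>^2, as in Eq. (2)
d_i = signal - signal.mean()
lags = np.arange(1, int(2.0 * FS_HZ))                # lags up to ~2 s
g = np.array([np.mean(d_i[:-k] * d_i[k:]) for k in lags]) / signal.mean() ** 2
tau = lags / FS_HZ

# Fit Eq. (3): G(tau) = (1/N) exp(-(tau/tau_flow)^2), then V = w / tau_flow
def model(tau, n_inv, tau_f):
    return n_inv * np.exp(-(tau / tau_f) ** 2)

(n_inv, tau_f), _ = curve_fit(model, tau, g, p0=(g[0], 0.2))
print(f"inferred V = {W_MM / tau_f:.2f} mm/s (true {V_TRUE} mm/s)")
# Sampling-rate bound quoted in the text: V_max ~ w * fs / 2 = 5.25 mm/s here.
```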
The upper limit on accurate velocity measurement by terahertz correlation spectroscopy arises from the Nyquist criterion. Our sensor sampled the reflection from the system at 30 Hz, and so given a measurement spot diameter of 350 μm, it follows that ballotini moving faster than about 5 mm/s may yield unreliable velocities. In practice the Gaussian observation volume extends weakly beyond 350 μm, and speeds up to 8 mm/s were measured with less than 10% error. The dynamic range of velocity measurement could be extended by a faster sampling rate, a wider instrument response function, or cross-correlative measurement of terahertz reflection signals at multiple localizations along the direction of fluid flow. A lower limit of accurate velocity measurement comes about if very few particles are observed (say, in a region of very slow flow), in which case variations of the background noise level may lead to incorrect velocity estimates.

The terahertz sensor can also be used to observe one-dimensional fluid velocity profiles. A wide-gap Taylor-Couette concentric cylinder viscometer was set up, with a polyethylene outer cylinder that permitted the fluid gap to be observed using the TPI imaga 2000 instrument. A brass bob of radius 4.0 mm was fabricated in-house, and the fluid gap was 3.5 mm. The bob was driven at known rotation rates up to 24 rpm using a motor controlled via an Arduino Uno with a motor shield (Arduino Ltd.). Polyethylene spheres of 200 μm diameter (1 g, Stamylan UH, DSM Engineering Plastics) were coated with a colloidal silver paint (0.5 ml, N36BA paint, Maplin) and dispersed into paraffin oil. The suspension was loaded into the viscometer. The coated particles were almost neutrally buoyant, with sedimentation velocities less than 20 mm/h. The silver coating ensured that a strong terahertz reflection was observable, and was necessary because the weak refractive index contrast between bare polyethylene spheres (n = 1.52) and paraffin oil (n ~ 1.5) resulted in negligible signal otherwise. The time- and depth-resolved terahertz reflections of the coated spheres suspended in the viscometer are shown in Fig. 3, together with inferred velocity profiles. In this analysis, the terahertz reflection signals were first binned into 70 μm slices before analysis as before, and some velocity estimates in regions of

Gas bubbles can also be studied using terahertz correlation spectroscopy. As well as determining the velocity of a bubble based on its residence time in the observation volume, it is possible to simultaneously measure the bubble diameter from the separation in the double-reflection of terahertz radiation from opposite bubble surfaces. To show this experimentally, a polyethylene tube containing 48 ml of paraffin oil and 2 ml of air was shaken by hand to disperse small bubbles. The rising bubbles were observed using the same geometry established for imaging ballotini in Fig. 1. In Fig. 4, time- and depth-resolved terahertz reflections are shown for a region of fluid between 3 and 4.2 mm from the tube wall, so that wall effects are not very significant. These data were recorded after large bubbles had already risen to the top of the fluid, and the separation between paired terahertz echoes, d, indicates a bubble diameter of 0.1 mm. To analyze the velocity of individual bubbles, the total time-resolved reflection intensity of a bubble was manually cropped from the data and its duration fitted to Eq. (1) using y = z = 0 and x = V(t − t0). Autocorrelation as per Eq. (2) was not necessary because strong, sparse signals were obtained. Here, the bubble diameters were observed to be much smaller than the observation volume determined for ballotini, and the diffraction-limited value of w (350 μm) was used to interpret the bubble velocity. The estimated bubble velocity of 0.2 mm/s is plausible for spherical bubbles of this size rising in creeping flow, although faster than the predicted Stokes velocity for a single bubble [10].

Terahertz imaging offers several unique advantages for the measurement of solid structures, and some of these features may be valuable in rheometry. Most essentially, the ability to noninvasively measure flow in optically opaque fluids is potentially very valuable. Although terahertz rheometry can only offer significant imaging depth in nonpolar fluids such as paraffin oils that are relatively transparent to terahertz radiation, it is well placed to address questions such as: What is the yield stress in a waxy crude oil or in model waxes formed by substances such as tripalmitin, where the thickness of an optically opaque wax can make it difficult to interpret the results of rheometry [11]? The use of metal-coated tracer particles, which was shown to be valuable for terahertz correlation spectroscopy in Fig. 3, could also enable noninvasive observation of solid particle speed within optically opaque fluidized beds, which is an important and challenging issue [12]. In the case of air bubbles in paraffin, the negligible absorption of both fluids meant that bubble size measurement based on double-reflection spacing could be combined with velocimetry. In the case of silica glass ballotini, however, the absorption coefficient of the solid was too high for this combined measurement to be done.

In order to apply correlation spectroscopy to terahertz reflection data using a simple analysis, it was important to ascertain that a simple Gaussian point spread function described the signal reflected by particles in a fluid. Calibration measurements showed this assumption was valid for the small spherical reflectors (silica ballotini, and silvered polyethylene) in paraffin oil used in this study, and we assumed it was valid for smaller air bubbles. Larger or aspherical particles might complicate the point spread function, and the refractive index of the fluid may affect the beam diameter. Therefore it is recommended to use calibration measurements to establish what point spread function width, w, can be used to study any particular combination of reflective particles and fluids.
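Returning to the single-bubble analysis described above, the sketch below illustrates that transit fit under the same assumptions: the transit intensity is modelled with Eq. (1) at y = z = 0 and x = V(t − t0), with w fixed at the diffraction-limited 350 μm; the trace is synthetic, standing in for a manually cropped bubble echo.

```python
# Fit a single cropped bubble transit to Eq. (1) with y = z = 0, x = V (t - t0),
# i.e. I(t) = I0 exp(-2 V^2 (t - t0)^2 / w^2). Synthetic trace; w from the text.
import numpy as np
from scipy.optimize import curve_fit

W_MM = 0.35                                   # diffraction-limited spot width
t = np.arange(0, 6.0, 1.0 / 30.0)             # 30 Hz trace, seconds

def transit(t, i0, v, t0):
    return i0 * np.exp(-2.0 * (v * (t - t0)) ** 2 / W_MM ** 2)

rng = np.random.default_rng(1)
trace = transit(t, 1.0, 0.2, 3.0) + 0.02 * rng.standard_normal(t.size)

(i0, v, t0), _ = curve_fit(transit, t, trace,
                           p0=(trace.max(), 0.1, t[trace.argmax()]))
print(f"bubble rise velocity ~ {abs(v):.2f} mm/s")   # ~0.2 mm/s, as reported
```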
In summary, the technique of correlation spectroscopy allows time-domain terahertz sensors to be used for particle velocimetry within some fluids. The measurement is consistent with other velocity measurements and with creeping-flow equations. Because many optically opaque fluids can be studied by terahertz imaging, this technique may allow flow profiles to be determined that were previously inaccessible to measurement. Furthermore, because an entire one-dimensional flow profile can be measured at once, it seems possible to characterize non-Newtonian flow via a terahertz correlation velocity profile obtained in a Taylor-Couette viscometer at a single rotation speed. Terahertz correlation spectroscopy could be used for flow metering inside hydrocarbon oils, or for studying the rheology of gel formation in waxy crude oils or in lubricants clouded by carbon black or magnetic particles. In confocal microscopy, the technique of FCS has long been established as a method for measuring dynamic properties such as diffusion and particle density, as well as velocity, so this terahertz version of the technique should be adaptable to quantify such phenomena as well, and we hope that many fluid dynamics questions can be studied by this type of terahertz rheology. Commercial time-domain terahertz cameras could readily be equipped with software methods to perform correlation spectroscopy measurements. Data and software are provided in Dataset 1 (Ref. [9]).

Fig. 1. Principle of terahertz correlation spectroscopy. Reflective silica ballotini (yellow) sink in a terahertz-transparent paraffin oil. Successive time-domain scans of terahertz reflection intensity contain time-varying signals from which the average residence time of particles in the imaging volume can be evaluated using correlation spectroscopy; the mean particle velocity can therefore be inferred. The waterfall plot shows a typical reflection measurement of 425 μm diameter ballotini.

Fig. 2. (a) Time-resolved terahertz reflection intensity (inset) due to silica ballotini falling through a static observation volume in paraffin oil. The autocorrelations of these data are approximately Gaussian, and by fitting Eq. (3) to these values the average particle residence time can be inferred. (b) The velocities corresponding to the fitted residence times are consistent with the calculated terminal Stokes velocities of spherical particles, and with drop-time measurements of velocity.

Fig. 3. (a) The terahertz sensor setup for measuring the speed of entrained particles in the Taylor-Couette viscometer. (b) Time- and depth-resolved terahertz reflections show that reflective particles move faster with increasing proximity to the rotating bob. (c) The radially resolved velocity profiles obtained by analyzing the reflection data in (b) with correlation spectroscopy (circles) are broadly consistent with the velocity profiles that would be set up in ideal Newtonian fluids at the applied rotation rates (solid lines of matching color correspond to rotation rates of 10, 13.5, 15, 19, and 24 rpm of the 4 mm radius bob).

Fig. 4. Characteristic double echoes due to terahertz reflection from both sides of air bubbles in paraffin oil. The separation (d) indicates the bubble diameter (here, about 100 μm). Fitting the residence times of individual echoes δt indicates the bubble rise velocity.
Phorbol esters induce PLVAP expression via VEGF and additional secreted molecules in MEK1‐dependent and p38, JNK and PI3K/Akt‐independent manner

Abstract Endothelial diaphragms are subcellular structures critical for mammalian survival with poorly understood biogenesis. Plasmalemma vesicle associated protein (PLVAP) is the only known diaphragm component and is necessary for diaphragm formation. Very little is known about PLVAP regulation. Phorbol esters (PMA) are known to induce de novo PLVAP expression and diaphragm formation. We show that this induction relies on the de novo production of soluble factors that act in an autocrine manner to induce PLVAP transcription and protein expression. We identified vascular endothelial growth factor‐A (VEGF‐A) signalling through VEGFR2 as a necessary but not sufficient downstream event, as VEGF‐A inhibition with antibodies and siRNA or pharmacological inhibition of VEGFR2 only partially inhibits PLVAP upregulation. In terms of downstream pathways, inhibition of MEK1/Erk1/2 MAP kinase blocked PLVAP upregulation, whereas inhibition of p38 and JNK MAP kinases or PI3K and Akt had no effect on PMA‐induced PLVAP expression. In conclusion, we show that VEGF‐A acts synergistically with other secreted proteins to up‐regulate PLVAP in a MEK1/Erk1/2‐dependent manner, bringing us one step further towards understanding the genesis of the essential structures that are endothelial diaphragms.

Loss of diaphragms in capillaries results in protein-losing enteropathy, hypoproteinemia and hypertriglyceridemia, causing a kwashiorkor-like wasting syndrome and death.3,6 Interestingly, PLVAP reconstitution in the endothelial compartment in mice restores diaphragms exclusively in EC in vascular beds where the diaphragms are native, demonstrating that additional factors are required for diaphragm formation.3 Despite their importance, very little is known about PLVAP and diaphragm regulation. Phorbol esters such as phorbol myristate acetate (PMA) are known to induce robust de novo formation of fenestrae and transendothelial channels with their associated diaphragms in primary EC in culture.18 PMA also induces diaphragms of caveolae and PLVAP expression in a MEK1-dependent and PKC-independent manner.16 Of note, PMA is also a known secretagogue in human EC.19

Vascular endothelial growth factor-A (VEGF-A) was also shown to be essential for the formation and maintenance of fenestrae with diaphragms. VEGF-A (and not FGF-2 or VEGF-C) induces new vessels with fenestrae with diaphragms [20][21][22][23][24][25] in a Rac1-dependent manner.22 Deletion of VEGF-A in kidney podocytes, pancreas epithelial cells or hepatocytes [26][27][28] or systemically delivered VEGFR2 inhibitors 29 result in loss of fenestrae in mice. While overwhelmingly clear in the case of fenestrae, the effect of VEGF-A/VEGFR2 signalling on PLVAP expression appears to be context dependent. While a VEGFR2 receptor-selective engineered form of VEGF-A upregulates PLVAP expression in single-donor human umbilical vein EC (HUVEC),30 primary EC cultured in the presence of VEGF-A express PLVAP poorly or not at all,16 and VEGF-A has no effect 17 or even decreases 31 PLVAP expression in immortalized mouse EC lines that constitutively express PLVAP. Moreover, VEGFR2 signalling inhibition in vivo does not modify PLVAP expression in the lung.32
Finally, the downregulation of PLVAP in specialized vascular beds forming the blood-brain barrier [33][34][35][36] or in developing arteries, glomeruli and cell culture [37][38][39][40] appears to be controlled by the Wnt and Notch signalling pathways, respectively.

In order to arrive at a cell culture system where fenestrae with diaphragms and PLVAP could be induced at high frequency by physiological cues in primary EC, we sought to dissect the molecular mechanism of PLVAP upregulation by PMA. We show that PMA upregulation of PLVAP mRNA and protein depends on de novo protein synthesis and secretion of a group of proteins that act synergistically in an autocrine fashion. Among these secreted proteins, we identified VEGF-A signalling through VEGFR2 as important but not sufficient for PLVAP expression. In addition, we show that PLVAP upregulation by the PMA-induced secreted factors is MEK1-dependent and JNK-, p38-, PI3K- and Akt-independent. At the indicated time points, supernatant and cells were harvested and further processed for protein or RNA analysis.

Protein synthesis inhibition with cycloheximide

For chronic PMA and for conditioned medium (CM) treatments, EC were seeded in duplicate on gelatin-coated plates, grown to near confluence, serum starved for 1.5 hours in EBM2, pretreated for 30 minutes with 10 μg/mL CHX in EBM-BSA, and stimulated for the duration of the experiment with 50 nmol/L PMA + 10 μg/mL CHX or with 4-6 hours CM + 10 μg/mL CHX. For the pulsed PMA treatment, the difference was that EC were stimulated with PMA/CHX for only 30 minutes, followed by a chase in EBM-FBS containing 10 μg/mL CHX. At indicated time points, cells were rinsed twice in DPBS and lysed for RNA or protein analysis.

Heparin depletion of conditioned medium

Conditioned medium peaks (4-6 and 6-8 hours) were collected from donor cells cultured in six-well plates. For each peak the respective CM was pooled in a 15 mL tube and split into two halves. One half was left untreated (control); the other half was added to 1 mL settled gel of heparin-agarose previously equilibrated (3×, 5 minutes, RT) in EBM-BSA. The mixture was further incubated (1 hour, RT) with gentle end-over-end rotation before the beads were pelleted by centrifugation (600 g, 10 minutes, RT). Two mL per well of control CM or heparin-depleted CM was then transferred to serum-starved acceptor cells.

Heat inactivation of CM

Conditioned medium "peaks" were collected after PMA treatment and split into two equal volumes: one half was heat inactivated (45 minutes, 60°C, followed by 2 minutes on ice) and the other left untreated (control) before transfer to serum-starved acceptor cells.

Pertussis toxin treatment

Acceptor cells were serum starved and then treated for 24 hours with CM in the presence or absence of 0.1 μg/mL pertussis toxin (PT) (Sigma, cat# P7208).

Statistics

Data were analysed using Student's t test. P < 0.05 was taken as the level of significance.

Upregulation of PLVAP mRNA by PMA requires protein translation

In a first step, we asked whether PMA-induced PLVAP mRNA transcription depended on de novo protein synthesis. To answer this, we treated primary human HDMVECn with 50 nmol/L PMA (a concentration demonstrated to up-regulate PLVAP and induce the formation of endothelial diaphragms and fenestrae 16) in the presence or absence of CHX, a protein synthesis inhibitor.44 As shown previously,16 cells were exposed to PMA for the entire duration of the experiment. PLVAP mRNA significantly increased in a time-dependent manner starting at ~2 hours after PMA treatment onset (Figure 1A).
However, there was no increase of PLVAP mRNA or protein (Figure 1B) when cells were treated with PMA in the presence of CHX for up to 8 hours of treatment, demonstrating that PLVAP upregulation by PMA requires de novo protein synthesis.

To gain insight into the chemical nature of the PLVAP-inducing soluble factor(s), we depleted the CM peaks (4-6 and 6-8 hours CM) of heparin-binding proteins. As shown in Figure 4B (left), the depletion led to a marked decrease in the ability of the CM to induce PLVAP protein in naive acceptor cells (Figure 4B, right). Interestingly, the residual activity could not be eliminated even after passage over two sequential heparin columns (data not shown).

Endothelial cells produce chemokines, secreted factors that can bind heparin.45 We therefore tested the ability of the 4-6 and 6-8 hours CM peaks to up-regulate PLVAP in the presence of PT, a broad inhibitor of chemokine receptor (Gi protein) signalling (Figure 4C, left). Treatment of acceptor EC with PT had no effect on PLVAP protein upregulation by the 4-6 and 6-8 hours CM peaks (Figure 4C, right), ruling out a role for chemokine signalling in this system.

PMA up-regulates PLVAP in part via VEGF/VEGFR2 signalling

Phorbol myristate acetate up-regulates VEGF-A, a known heparin-binding growth factor,48 and its receptors VEGFR1 and VEGFR2 in HUVEC and HDMVEC,49 making them good candidates for mediating PLVAP upregulation by PMA in EC. VEGF-A was found to contribute to, but not fully account for, PLVAP upregulation (Figure 5A-E). However, treatment of EC with up to 40 ng/ml VEGF in addition to PMA does not further increase PLVAP protein levels (Figure 5F), suggesting that VEGF-A acts downstream of PMA. To determine which VEGFR is required for PMA/CM-mediated PLVAP upregulation, pharmacologic inhibitors with different selectivities for VEGFR1, 2 and 3 (Table 1) were employed.

However, while our data do not support a role for p38 signalling in PMA- or CM-induced PLVAP upregulation, these results are puzzling given the role of VEGF/VEGFR2 signalling in this process. Others have suggested that VEGF-A regulates PLVAP expression in a PI3 kinase-dependent manner.30 HDMVEC express all four isoforms of p110 (α/PIK3CA, β/PIK3CB, γ/PIK3CG and δ/PIK3CD) and the respective p85/p55/p150 regulatory subunits (PIK3R1-4). Using newer and more selective PI3K pharmacological inhibitors that are in clinical trials, such as pictilisib (highly selective for PI3Kα/δ, with 11-25-fold lower selectivity for PI3Kβ/γ) and idelalisib (selective for PI3Kδ), we show that PI3K inhibition does not block PLVAP upregulation by PMA or CM. Accordingly, the inhibition of Akt1-3, a major downstream target of PI3K, does not impact PLVAP upregulation. Of note, wortmannin, a pan-PI3K inhibitor, is partially effective at the higher concentration of 10 μmol/L, suggesting either off-target effects or a role for p110β, which is not covered as well by the other inhibitors used. However, the latter is less likely, as the dose of pictilisib we used is several orders of magnitude larger than the IC50 for p110β.

In summary, we find that PLVAP upregulation by PMA requires de novo synthesis of multiple secreted proteins that act in an autocrine manner. One of the soluble factors involved is VEGF-A acting through VEGFR2. The signalling is dependent on MEK1/ERK1/2 and independent of p38, JNK, PI3K and Akt1-3. Further transcriptomic and proteomic studies should identify the factors (or combinations thereof) contributing to PLVAP regulation.
An Integrated Use of Advanced T² Statistics and Neural Network and Genetic Algorithm in Monitoring Process Disturbance

Integrated use of statistical process control (SPC) and engineering process control (EPC) performs better than using SPC or EPC alone. However, the integrated scheme gives rise to the problems of the "Window of Opportunity" (WO) and autocorrelation. In this paper, an advanced T² statistics model and a neural network scheme are combined to solve these problems: the T² statistics technique is used to solve the problem of autocorrelation, while the neural network technique is adopted to solve the problem of the "Window of Opportunity" and to identify the disturbance causes. To address the shortcomings of the neural network technique, namely slow convergence and a tendency to become trapped in local optima, a genetic algorithm is proposed in this paper to train the network. Results of the simulation experiments show that this method can detect process disturbances quickly and accurately, as well as identify the disturbance type.

Introduction

In an intensely competitive market environment, product quality plays an important role in gaining and maintaining competitiveness. Both Statistical Process Control (SPC) and Engineering Process Control (EPC) are effective techniques for maintaining and improving product quality. EPC adjusts process variables to compensate for short-term output deviations caused by uncontrollable factors. For long-term process improvement, SPC is an effective technique used to detect out-of-control conditions and remove the controllable factors. Many scholars have therefore proposed the integrated use of SPC/EPC. However, it is very difficult to monitor the EPC process using common SPC methods because of the problems of the "Window of Opportunity" and autocorrelation [1]. Traditionally, the monitored variable in SPC techniques was only the process output; information about the process inputs was usually ignored. For EPC processes, once an output deviation is compensated by the feedback-controlled action, there is only a short window in which to detect the process disturbance. SPC charts may even fail to detect an out-of-control condition altogether when the output deviation is small, because EPC's feedback mechanism can compensate for such small disturbances quickly and completely. Moreover, the optimality of SPC techniques rests on an assumption of time independence, whereas the outputs of a feedback-controlled process at different times are mutually autocorrelated.
To overcome these shortcomings, a number of papers have developed joint-monitoring methods for feedback-controlled processes. These methods may be categorized into two groups. The first integrates various types of conventional SPC charts to monitor the process [2][3]; for example, Huang C.H. proposed using Shewhart and CUSUM control charts simultaneously to monitor the manufacturing process. This method can detect out-of-control conditions and can also recognize the disturbance type. However, the inherent problems of conventional SPC charts caused by the effects of feedback control actions remain unsolved. The second is the strategy of jointly monitoring the controlled outputs and manipulated inputs using multivariate SPC, such as the multivariate CUSUM chart, multivariate EWMA chart, T² statistics and multivariate profile charts [4,5]. Although these methods effectively solve the autocorrelation problem, the WO problem is not settled completely, because these methods cannot quickly detect small process disturbances within the scope of the WO. Furthermore, these methods do not readily identify the disturbance type, which is a crucial step in confirming and removing the controllable factors.

In this research, we put forward a new scheme integrating the T² statistics technique, artificial neural networks and a genetic algorithm: the T² statistics technique is used to solve the problems of autocorrelation and missing information; the neural network technique and genetic algorithm are adopted to solve the problem of the "Window of Opportunity" and to identify the disturbance causes.

Feedback-Controlled Process

For better understanding, we consider the following process under the feedback mechanism shown in Figure 1, where θ and Φ are constants. a_t represents white noise following a standard normal distribution with mean 0 and σ² = 1. Also, let B be the usual backward shift operator, i.e., Ba_t = a_{t-1}. m_t represents the random form of the process disturbance, such as a step change or a process drift. y_t is the measured output value. Without loss of generality, the target value is assumed to be zero; then y_t represents the output deviation from the target value. u_t is the feedback control action decided by the feedback mechanism. In industrial practice, several feedback controllers are used, such as PI, I, PID and EWMA controllers, of which PID controllers are the most extensively adopted; the PID feedback control rule can be expressed as in [6]. In light of equation (4), outputs at different times are autocorrelated, inputs at different times are autocorrelated, and, moreover, output and input are correlated with each other. Traditional SPC control charts, such as the Shewhart, EWMA and CUSUM charts, are therefore invalid for monitoring the above process.
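To make the closed-loop behaviour concrete, the sketch below simulates an ARMA(1,1)-type disturbance under a discrete PID controller, in the spirit of Figure 1. The unit process gain, the specific parameter values and the step size are illustrative assumptions rather than the paper's exact model; the point is to show how feedback quickly compensates a step disturbance, creating the short "Window of Opportunity".

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
phi, theta = 0.8, 0.5            # disturbance model constants (illustrative)
kP, kI, kD = 0.5, 0.5, 0.1       # PID gains (illustrative)

a = rng.standard_normal(T)       # white noise a_t ~ N(0, 1)
N = np.zeros(T)                  # ARMA(1,1): N_t = phi*N_{t-1} + a_t - theta*a_{t-1}
m = np.where(np.arange(T) >= 50, 2.0, 0.0)  # step disturbance at t = 50
y = np.zeros(T)                  # output deviation from target (target = 0)
u = np.zeros(T)                  # feedback control action
e_sum = 0.0

for t in range(1, T):
    N[t] = phi * N[t - 1] + a[t] - theta * a[t - 1]
    y[t] = N[t] + m[t] + u[t - 1]          # unit process gain assumed
    e = -y[t]                              # error relative to target 0
    e_sum += e
    u[t] = kP * e + kI * e_sum + kD * (e + y[t - 1])   # discrete PID

print("mean |y| just after the step (t=50..54):", np.abs(y[50:55]).mean().round(2))
print("mean |y| once compensated   (t=70.. ) :", np.abs(y[70:]).mean().round(2))
```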
Design of the Standard T² Statistics Technique

The standard T² statistics method is used to deal with multiple-input processes. In this paper, the devised approach is similar to the standard T² statistics, but the data vectors are made up of the process input and output at different times. It can measure the overall distance of an observation from the reference values, including the process output, the input, and the covariance of output and input; hence it compares well with the most commonly used schemes. According to equation (4), complete monitoring information should include the control action at times t and t-1 and the process output at times t, t-1 and t-2. However, for monitoring the closed-loop process, these five sets are collinear; in other words, any one set is equal to a linear combination of the other sets. So we can only select two, three or four sets from the above five sets to build the monitoring scheme. We design the monitoring model of the T² statistics as T²_t = Z_t' Σ⁻¹ Z_t, where Σ is the covariance matrix of Z_t. In light of the above analysis, the number N of options for Z_t equals N = C(5,2) + C(5,3) + C(5,4) = 25.

There is no commonly accepted approach for determining the best Z_t selection in the T² statistics model. Selection of the model parameters is based on the problem to be solved; the design of the model is therefore as much art as science. Hotelling, Montgomery and Alt discussed the possibilities and advantages of the T² statistics method for monitoring the EPC process [7][8][9]. They designed the simplest and most basic form of Z_t, i.e., Z_t = [y_t, u_t]. On the basis of these studies, Fugee Tsung elaborated on the problem and proposed that one could define Z_t = [y_t, u_t, u_{t-1}, u_{t-2}]' or Z_t = [y_t, u_t, y_{t-1}, y_{t-2}]' [1]. However, in these methods Σ is not estimated from the historical data directly, but obtained from a very complex function of the parameters Φ, θ, k_P, k_D, k_I; the T² control chart is therefore not readily available for these methods.

According to equations (2) and (3), since outputs and inputs are correlated, all inputs can be expressed as combinations of the process outputs at different times. In other words, all information concerning the process inputs and outputs can be monitored as long as we track the outputs at different times. We propose to define Z_t = [y_t, y_{t-1}, y_{t-2}, ..., y_{t-s}]'. Selecting the value of s is a difficult and challenging task, and there is at present no universally recognized method for determining it. In this research, simulation experiments are implemented to determine the value of s. For each choice, the experiments simulate the feedback-controlled process with step changes of step=5, 2 and 0.8. The values of the parameters Φ, θ, K_P, K_I and K_D are set to 0.8, 0.5, 0.5, 0.5 and -0. In light of these figures, for the same step value, the larger s is, the larger the value of T² and the quicker the disturbance is detected. However, the larger s is, the more false alarms occur, as seen in Figures 2 to 6. For the process with step=5, when s equals 4 there are two out-of-control points; however, when s equals 5 there are four out-of-control points, of which two are false alarms. In the same way, for the processes with step=2 and step=0.8, when s increases from s=2 to s=4 the number of false alarms grows from 0 to 2.
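A sketch of the proposed chart with s = 3: form lagged output vectors Z_t = [y_t, y_{t-1}, y_{t-2}, y_{t-3}]', estimate Σ from an in-control reference run, and signal when T²_t = Z_t'Σ⁻¹Z_t exceeds the χ²_{α,p} limit. The reference and test series below are placeholder white noise plus a step; in practice y_t would come from the feedback-controlled process.

```python
import numpy as np
from scipy.stats import chi2

def t2_chart(y, y_ref, s=3, alpha=0.0027):
    """T^2 chart on lagged output vectors Z_t = [y_t, ..., y_{t-s}]'."""
    def lagged(v):
        return np.column_stack([v[s - k : len(v) - k] for k in range(s + 1)])
    Z_ref = lagged(y_ref)                       # in-control reference vectors
    mu = Z_ref.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(Z_ref, rowvar=False))
    Z = lagged(y) - mu
    t2 = np.einsum("ij,jk,ik->i", Z, Sinv, Z)   # T^2_t = Z' Sigma^{-1} Z
    ucl = chi2.ppf(1.0 - alpha, df=s + 1)       # UCL = chi^2_{alpha, p}, p = s + 1
    return t2, ucl

rng = np.random.default_rng(2)
y_ref = rng.standard_normal(500)                # placeholder in-control run
y_new = np.concatenate([rng.standard_normal(50), 2.0 + rng.standard_normal(50)])
t2, ucl = t2_chart(y_new, y_ref)
print("out-of-control points at t =", np.flatnonzero(t2 > ucl) + 3)  # offset: first Z_t is at t = s
```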
Across the different step changes, the smaller the step value, the larger the value of s needed to detect the disturbance. For example, only s=1 is needed to monitor the process with step=5, but s=3 is needed to detect the processes with step=2 and step=0.8. In light of Figures 2-14 and the above analysis, Z_t can be expressed as Z_t = [y_t, y_{t-1}, y_{t-2}, y_{t-3}]'. According to equation (7), y_t is a linear combination of normal variables, so Z_t follows a multivariate normal distribution and T²_t has a chi-squared distribution with p degrees of freedom. The control limit is therefore UCL = χ²_{α,p}. D_t, the value of the T² statistic at time t, contains the information of the output, the input, and their mutual correlations, so the advanced T² statistics can effectively solve the autocorrelation problem and reduce the "Window of Opportunity" problem. However, it remains difficult to interpret the results and search for the root cause of the process disturbance once the system signals out-of-control, as in Figure 15. Figure 15 shows the process with a drift disturbance of slope=1, but there is no essential distinction between Figure 15 and Figures 2-14 by which to identify the disturbance type, such as a significant upward or downward trend. The advanced T² statistics technique therefore cannot be used alone.

Artificial Neural Networks

Artificial neural networks are modelled on the neural activity of the human brain and have developed rapidly since the 1980s [10]. The main characteristics of neural networks are the holistic use of the network, large-scale parallel distributed processing, the ability to learn associations, and a high degree of fault tolerance and robustness. However, neural networks easily fall into local optima, converge slowly, and can exhibit oscillation effects. The genetic algorithm [11] has strong macro-search capabilities and a greater probability of finding the globally optimal solution, so it can overcome these shortcomings of neural networks if it is used to perform the pre-search. In this paper, a novel algorithm combining the neural network algorithm and the genetic algorithm is proposed. The framework of the neural network is shown in Figure 16. The network is composed of an input layer, a hidden layer and an output layer. The input layer has three neurons, expressed as D_t, D_{t-1} and D_{t-2}, representing the value of the T² statistic at times t, t-1 and t-2.

The input layer of the network is a key decision that has a great impact on the effectiveness of the network, and there is no commonly accepted method for selecting it. In this paper, an all-possible-regressions analysis [12,13] is used to define the input layer according to the R²_P, AIC and C_P criteria. It is assumed that the input layer is a possible combination from D_t, D_{t-1}, D_{t-2}, D_t - D_{t-1}, D_{t-1} - D_{t-2}, D_t - D_{t-2}. The purpose of this method is to select a good combination so that a detailed examination can be made of the regression models, leading to the selection of the final input vectors to be utilized [12]. The result is shown in Table 1. In light of the R²_P, AIC and C_P criteria, we select the (D_t, D_{t-1}, D_{t-2}) combination because it has the largest R²_P and the smallest AIC and C_P values in Table 1.
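The three-layer network can be sketched directly: inputs (D_t, D_{t-1}, D_{t-2}), a sigmoid hidden layer, and an output read as a disturbance indicator. The hidden-layer size and the single output unit are illustrative assumptions, since the paper does not fully specify the layer dimensions; a genetic-algorithm training sketch follows the next section.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(D, params):
    """Three-layer network: inputs (D_t, D_t-1, D_t-2) -> hidden -> output."""
    W1, b1, W2, b2 = params
    h = sigmoid(D @ W1 + b1)       # hidden layer
    return sigmoid(h @ W2 + b2)    # output in (0, 1): disturbance indicator

def init_params(n_in=3, n_hidden=6, n_out=1, seed=0):
    rng = np.random.default_rng(seed)
    return [rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0, 0.5, (n_hidden, n_out)), np.zeros(n_out)]

# Example: batches of T^2 values at times t, t-1, t-2 (placeholder numbers).
D_batch = np.array([[1.2, 0.9, 1.1],     # quiet process
                    [9.5, 7.8, 3.1]])    # rising T^2 after a disturbance
out = mlp_forward(D_batch, init_params())
print(out.ravel())   # untrained outputs; training is done by the GA sketched below
```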
Neural Network Training Based on Genetic Algorithm

1) Determination of the fitness function. The purpose of using the genetic algorithm to optimize the network weights and thresholds is to obtain the optimal combination of weight and threshold values, and the output error measures the quality of a combination. Hence the fitness function of an individual chromosome should be a function of the output error of the BP network. With the ideal output value denoted D_j and the actual output value denoted A_j, the fitness function f(E) can be written as f(E) = 1/E, with E = (1/2) Σ_j (D_j - A_j)². (1)

2) Genetic manipulation. Assume the population size is M and the fitness of individual i is F_i. The probability of individual i being selected is P_i = F_i / Σ_{j=1}^{M} F_j. An arithmetic crossover operator, specially suited to floating-point crossover, is adopted, and a uniform mutation operator is introduced.

Simulation Experiments

It is assumed that the population size M, crossover probability, mutation probability, training error and generation gap are 100, 0.8, 0.05, 0.005 and 0.7 respectively. To verify the performance of the above method, we performed extensive simulation experiments modelled on actual production. The experiments are divided into three stages.

First, 500 "in-control" sample sets (m_t = 0) and 500 "out-of-control" sample sets (m_t ≠ 0), each of which involves 200 data points generated from an ARMA(1,1) noise model, are selected to train the neural network. The out-of-control sample sets represent processes upset either by a step change with step 0.5/1/2/3/5 at data point 50, eliminated quickly and completely at data point 150, or by a process drift with slope 0.25/0.5/1/2/3 at data point 50, removed quickly and completely at data point 100.

Second, once the output error is within the permitted range, the objective of training the neural network by the genetic algorithm has been achieved, and the neural network can be used to monitor the process disturbance. 200 out-of-control sample sets, generated with step=0.5, 1, 1.5, 2, 3, 5 at time t and slope=0.5, 1, 1.5, 2, 3 at time t, are used to verify the performance of the above method. The result is shown in Table 2.

Finally, for comparison, the Shewhart chart of the Minitab software is applied to the above sample sets with step changes. The result is shown in Table 3.

Result Analysis of the Simulation Experiments

As seen from Figures 17 and 18, the standalone neural network needs 1200 steps to converge to the error target value, whereas the neural network based on the genetic algorithm needs only 550 steps. The genetic algorithm thus reduces the training time significantly; its training speed is faster. Furthermore, if the standalone neural network is used, the error target value cannot be attained when the step is small, such as 1 and 0.8.
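A compact sketch of the training scheme: roulette selection with fitness f = 1/E, arithmetic crossover on floating-point weight vectors, and uniform mutation. The population size (100), crossover probability (0.8) and mutation probability (0.05) follow the text; the toy regression target, the elitism step and all other details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy target: evolve a weight vector so a linear map fits ideal outputs D_j.
X = rng.normal(size=(20, 3))          # placeholder (D_t, D_t-1, D_t-2) inputs
w_true = np.array([0.7, -1.2, 0.4])
D_ideal = X @ w_true                  # "ideal outputs" D_j

def error(w):                         # E = 1/2 * sum_j (D_j - A_j)^2
    return 0.5 * np.sum((D_ideal - X @ w) ** 2)

def fitness(w):                       # f(E) = 1/E, cf. Eq. (1); offset avoids 1/0
    return 1.0 / (error(w) + 1e-12)

M, pc, pm, gens = 100, 0.8, 0.05, 200  # population size, crossover/mutation prob.
pop = rng.normal(size=(M, 3))
for _ in range(gens):
    F = np.array([fitness(w) for w in pop])
    p = F / F.sum()                    # roulette selection: P_i = F_i / sum_j F_j
    new = pop[rng.choice(M, size=M, p=p)]
    for i in range(0, M - 1, 2):       # arithmetic crossover of parent pairs
        if rng.random() < pc:
            lam = rng.random()
            a, b = new[i].copy(), new[i + 1].copy()
            new[i], new[i + 1] = lam * a + (1 - lam) * b, lam * b + (1 - lam) * a
    mask = rng.random(new.shape) < pm  # uniform mutation within [-2, 2]
    new[mask] = rng.uniform(-2, 2, size=mask.sum())
    new[0] = pop[np.argmax(F)]         # elitism (an extra stabilizing assumption)
    pop = new

best = pop[np.argmax([fitness(w) for w in pop])]
print("evolved weights:", best.round(2), "  true:", w_true)
```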
In the actual manufacturing industry, parameters often change with the environment, so we chose five combinations of Φ, θ, k_P, k_D and k_I in order to verify the method over a reasonable range of the parameter space. In terms of Table 1, the values of the parameters Φ and θ have a serious impact on the resolution capability of the integrated method. Combining a large positive Φ with a small positive θ works very well; by contrast, combinations of a positive Φ and a negative θ worsen the ability to identify the process disturbance accurately. There is no obvious correlation between changes in the controller parameters k_P, k_D, k_I and the monitoring ability. Compared with the drift disturbance, the step change is easier to monitor.

According to Tables 3 and 4, the advantage of the integrated method is significant. The neural network requires only one sample to recognize the disturbance and identify the disturbance type. The Shewhart chart, by contrast, requires on average 3 to 7 samples to recognize the process disturbance with step 5; when step=2 and 3, an average of 70 to 100 samples is required to detect the disturbance, and the disturbances with step=1 and 0.5 cannot be detected at all.

Figure 14. T² chart detecting the disturbance with parameters step=0.8 and s=4.

Figure 16. A three-layer neural network.

Figure 17. Training error curve of the neural networks.
Small Schwarzschild de Sitter black holes, the future boundary and islands

We continue the study of 4-dimensional Schwarzschild de Sitter black holes in the regime where the black hole mass is small compared with the de Sitter scale, following arXiv:2207.10724 [hep-th]. The de Sitter temperature is very low compared with that of the black hole. We consider the future boundary as the location where the black hole Hawking radiation is collected. Using 2-dimensional tools, we find unbounded growth of the entanglement entropy of radiation as the radiation region approaches the entire future boundary. Self-consistently including appropriate late time islands emerging just inside the black hole horizon leads to a reasonable Page curve. We also discuss other potential island solutions which show inconsistencies.

Introduction

The black hole information paradox [1] has seen fascinating progress over the last few years. In this context it is perhaps best regarded not as a question about the detailed understanding of black hole microstates, but as the tension between the apparent unbounded growth of the entanglement entropy of Hawking radiation [2] outside the black hole and the quantum mechanical expectation that the entanglement entropy must become small at late times to recover the purity of the original matter state (see e.g. [3], [4], which review various aspects of the information paradox). This falling Page curve [5], [6], reflecting the original purity, can be recovered when nontrivial, spatially disconnected, island saddles for quantum extremal surfaces are included [7], [8], [9], [10], [11].

Quantum extremal surfaces are extrema of the generalized gravitational entropy [12,13] obtained from the classical area of the entangling RT/HRT surface [14]-[17] after incorporating the bulk entanglement entropy of matter. Effective 2-dimensional models allow explicit calculation, where 2-dim CFT techniques enable detailed analysis of the bulk entanglement entropy. The island, arising as a nontrivial solution to extremization (near the black hole horizon, and only at late times), reflects new replica wormhole saddles [10,11] and serves to purify the early Hawking radiation, thereby lowering the entanglement entropy. There is a large body of literature on various aspects of these issues, reviewed in e.g. [18,19,20]: see e.g. [21]-[122] for a partial list of investigations of black holes in various theories, and also cosmological contexts. It is important to note that several of these investigations are simply applications of the island proposal, which appears to be self-consistent, even if it cannot be rigorously derived in those contexts (see e.g. [19] for an overall critical perspective, as well as [64,65] and [66]).
This paper continues the study in [92] of "small" Schwarzschild de Sitter black holes, with the black hole mass m small compared with the de Sitter scale l, but large enough that a quasi-static approximation to the geometry is valid. The de Sitter temperature is very low compared with that of the black hole, so the ambient de Sitter space is approximated as a frozen classical background. For calculational purposes, we consider an effective 2-dim dilaton gravity model obtained by dimensional reduction, with the bulk matter representing the black hole Hawking radiation modelled as a 2-dim CFT propagating on this 2-dim space: this is reasonable under the assumption that the s-wave Hawking modes are dominant. We imagine that the black hole has formed from initial matter in a pure state, which is a reasonable approximation since the de Sitter temperature is very low (more generally, the bulk matter CFT is in a thermal state at the de Sitter temperature). In [92], we focussed on one black hole coordinate patch in the Penrose diagram (roughly a line of alternating Schwarzschild and de Sitter patches, see Figure 1) and considered observers in the static diamond patches far outside the black hole but within the cosmological horizons. While the entanglement entropy of the radiation region exhibits unbounded growth, reflecting the information paradox for the black hole (which has finite entropy), including appropriate island contributions recovers finiteness of entanglement, and thereby expectations on the Page curve. The island emerges at late times a little outside the black hole horizon semiclassically.

The Hawking radiation from the black hole is expected to cross the cosmological horizon and eventually reach the future boundary, where it is collected (Figure 1). In this paper, we consider the point of view of these future boundary (meta)observers and look for semiclassical island resolutions of the black hole information paradox with regard to a radiation region at the future boundary. The future boundary is in a sense better defined (compared to the static diamond) as a place where gravity is manifestly weak, the space expanding indefinitely. The radiation region, taken as an interval with length labelled by X (along with spheres) on the future boundary, can be parametrized via Kruskal coordinates T, X, defined by analytic continuation from the static diamond coordinates. We find that the entanglement entropy of Hawking radiation exhibits unbounded growth in the spatial length X along the future boundary, inconsistent with the finiteness of black hole entropy, and reflecting the information paradox. Using the island rule in the extremization of the generalized entropy shows islands emerging, for large values of X, a little inside the black hole horizon semiclassically: including the island contributions recovers expectations on the Page curve. This future boundary radiation region is entangled with island regions around the horizons of black hole regions on both the left and right cosmological horizons (Figure 1): this is expected since the future boundary receives Hawking modes from both left and right black holes. Our analysis has some parallels with the island studies in [89] for dS_2 arising under reductions from Nariai limits of higher dimensional Schwarzschild de Sitter. One might expect timelike separated quantum extremal surfaces for the future boundary, resulting in complex-valued entropies as are known in pure de Sitter (see [68] for dS_2, and [86] for reductions of higher dimensional Poincare dS; see also [123],
[124], [125], [126], [127] for classical RT/HRT surfaces anchored at the future boundary). However, Schwarzschild de Sitter has a "sufficiently wide" Penrose diagram, so spacelike separated islands do exist here in accordance with physical expectations for the black hole Page curve (thus we discard the timelike separated ones here).

In sec. 2, we review the Schwarzschild de Sitter geometry and discuss parametrizations in various coordinate patches in sec. 2.1. Sec. 3 discusses the entanglement entropy without islands (details in App. B), while sec. 4 discusses the island calculation (details in App. C). App. A is a brief review of the analysis in [92] for the radiation entropy with islands from the point of view of the static diamond observers. App. D.1-D.2 discuss inconsistencies in other potential island solutions, while App. E discusses timelike separated quantum extremal surface solutions for future boundary observers. Sec. 5 contains a Discussion of various aspects of our study.

Small Schwarzschild de Sitter black holes → 2-dim

The Schwarzschild de Sitter black hole spacetime in 3+1 dimensions has the metric ds² = -f(r) dt² + dr²/f(r) + r² dΩ₂², with f(r) = 1 - 2m/r - r²/l². This is a Schwarzschild black hole in de Sitter space [128], with an "outer" cosmological (de Sitter) horizon and an "inner" Schwarzschild horizon. The general d+1-dimensional SdS spacetime is of similar form but with f(r) = 1 - (2m/l)(l/r)^{d-2} - r²/l², and has qualitative parallels. We are focussing on the 4-dim case SdS₄ here: the function f(r) is a cubic, and the zeroes of f(r), i.e. solutions of f(r) = 0, give the horizon locations. We parametrize this as f(r) = -(1/(l²r)) (r - r_S)(r - r_D)(r + r_S + r_D) (2). We will take the roots r_S and r_D to label the Schwarzschild black hole and de Sitter (cosmological) horizons respectively. (The third zero, -(r_D + r_S), does not correspond to a physical horizon.) The roots r_S, r_D are constrained as described below. The case m = 0, i.e. r_S = 0, r_D = l, is pure de Sitter space, while the flat space Schwarzschild black hole is recovered as l → ∞, with r_S → 2m and r_D → ∞.

The surface gravities at the two horizons are generically distinct: Euclidean continuations removing a conical singularity can be defined at each horizon separately, but not simultaneously at both [129] (see also [130,131]). The only (degenerate) exception is the extremal, or Nariai, limit [132], where the two periodicities of Euclidean time match: the spacetime develops a nearly dS₂ throat in this extremal limit [129]. More on the nearly dS₂ limit and the wavefunction of the universe appears in [133]. Related discussions with some relevance to this paper also appear in [134]. In more detail, it can be seen that the above horizon structure is valid for m/l < 1/(3√3), beyond which there are no horizons [128]. The limit m/l = 1/(3√3), with the cosmological and Schwarzschild horizon values coinciding, has r_S = r_D = r_0 = l/√3 from (2): this extremal, or Nariai, limit has a near-horizon dS₂ × S² throat. Overall, the range of physically interesting r_S, r_D satisfies 0 < r_S < r_0 < r_D for generic values. The cosmological horizon is "outside" the Schwarzschild one since r_S < r_D. The black hole interior has r < r_S, with r → 0 the singularity. The region r_D < r ≤ ∞ describes the future and past de Sitter universes, with r → ∞ the future boundary I⁺ (or past, I⁻). The maximally extended Penrose diagram in Figure 1 shows an infinitely repeating pattern of Schwarzschild coordinate patches or "unit cells" containing Schwarzschild black hole horizons cloaking interior regions: these patches are bounded by cosmological horizons on the left and right, with future/past universes beyond the cosmological horizons.
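For concreteness, here is a short worked expansion consistent with the cubic factorization above; the coefficient relations are standard for SdS₄, and the m/l expansions follow by direct substitution (this is an illustrative reconstruction, not a quotation of the paper's displayed equations).

```latex
% Matching coefficients in
%   f(r) = 1 - 2m/r - r^2/l^2 = -\frac{1}{l^2 r}(r-r_S)(r-r_D)(r+r_S+r_D)
% gives the standard relations
\[
  2m\,l^2 = r_S\, r_D\,(r_S + r_D), \qquad r_S^2 + r_S r_D + r_D^2 = l^2 .
\]
% For m/l << 1, solving f(r) = 0 perturbatively yields
\[
  r_S \simeq 2m\Bigl(1 + \frac{4m^2}{l^2} + \ldots\Bigr), \qquad
  r_D \simeq l - m - \frac{3m^2}{2l} + \ldots ,
\]
% and the degenerate (Nariai) case r_S = r_D = l/\sqrt{3} recovers the bound
\[
  \frac{m}{l} = \frac{1}{3\sqrt{3}} .
\]
```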
As in [92], we are considering the limit of a "small" black hole in de Sitter, with m/l ≪ 1 (3). The horizon locations can then be found perturbatively: to leading order, r_S ≃ 2m and r_D ≃ l - m. This is a small black hole in a large accelerating universe, so the ambient cosmology is effectively a frozen classical background while the black hole Hawking evaporates. The black hole temperature is much larger than the Gibbons-Hawking de Sitter temperature: from [130,131] (see also [135]), the surface gravities κ_{BH,dS} are fixed by the β_{S,D} in (7), and the temperatures T = κ/2π in the limit (3) become T_BH ≃ 1/(8πm) and T_dS ≃ 1/(2πl), so T_BH ≫ T_dS. The limit of asymptotically flat space is r_D ∼ l → ∞, r_D/l → 1, r_S → 2m and T_dS → 0, so the ambient de Sitter temperature vanishes. Our discussion in this paper pertains to these small Schwarzschild de Sitter black holes.

Coordinate parametrizations in various coordinate patches

We will describe coordinate parametrizations for the various coordinate choices in the Schwarzschild de Sitter spacetime, involving Kruskal variables around the black hole horizon and around the cosmological horizon.

In [92], we considered the radiation region to be in the static diamond bounded by the black hole and cosmological horizons in the Schwarzschild de Sitter background: this static patch is parametrized by certain Kruskal coordinates [136] in the vicinity of the black hole horizon. For our present purposes, we would like to analytically extend the Kruskal coordinates (U_D, V_D) defined in the static patch in the vicinity of the cosmological horizon, and (U_S, V_S) near the black hole horizon, to a new set of Kruskal coordinates (U′_D, V′_D) lying within the future universe, near the future boundary, and (U′_S, V′_S) in the interior of the black hole (inside the horizon), respectively. We will first define the coordinates in the static patch and then analytically extend them beyond both horizons.

We first recast the Schwarzschild de Sitter metric (1) in the static patch in terms of Kruskal coordinates which are regular at the cosmological horizon (but not in the vicinity of the black hole horizon). Towards this, we define the tortoise coordinate following [136]: taking f(r) > 0 in the region r_S < r < r_D gives dr_*/dr = 1/f(r), with the parameters β_D, β_S, β_M simplifying the partial fraction decomposition of 1/f(r) and satisfying the relations in (7). The SdS₄ metric (1) is recast as ds² = f(r)(-dt² + dr_*²) + r² dΩ₂².
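The partial-fraction structure behind the tortoise coordinate can be sketched explicitly. This is one consistent reconstruction under the stated conventions; the lengthscales β_{S,D,M} of (7) are taken proportional to the inverse surface gravities.

```latex
% With f(r) = -\frac{1}{l^2 r}(r-r_S)(r-r_D)(r+r_S+r_D) and surface
% gravities \kappa_i = |f'(r_i)|/2, partial fractions give
\[
  r_* = \int \frac{dr}{f(r)}
      = \frac{1}{2\kappa_S}\,\ln\Bigl|\frac{r}{r_S}-1\Bigr|
      - \frac{1}{2\kappa_D}\,\ln\Bigl|1-\frac{r}{r_D}\Bigr|
      + \frac{1}{2\kappa_M}\,\ln\Bigl(1+\frac{r}{r_S+r_D}\Bigr)
      + \text{const},
\]
% so that r_* \to -\infty at the black hole horizon and r_* \to +\infty at
% the cosmological horizon; the constants \beta_{S,D,M} appearing in (7)
% are proportional to 1/\kappa_{S,D,M}.
```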
the future universe) keeping invariant, as usual, the metric expressed in terms of the new cosmological Kruskal coordinates beyond the cosmological horizon.Let us consider the analytic continuation in (t b , r b * ) coordinates as Thus the new cosmological Kruskal coordinates at both ends b ′ + and b ′ − of the future boundary radiation region are defined as (U ′ D + , V ′ D + ) and (U ′ D − , V ′ D − ) respectively and the Schwarzschild de Sitter metric in terms of (X b ′ , T b ′ ) coordinates becomes b ′ + : The Schwarzschild de Sitter metric now becomes For our purposes, it is a reasonable approximation to look at the s-wave sector of the black hole and consider the bulk matter as a 2-dim CFT: this enables the use of 2-dim CFT tools to study the entanglement entropy of bulk matter.So, we will consider the same dimensional reduction of the 4-dim Schwarzschild de Sitter spacetime to a 2-dim background, as in [92] (see the general reviews [137,138,139], and [140] for related discussions, as well as [141] for certain families of 2-dim cosmologies). Recalling from [92], the reduction ansatz ds 2 (4) = g (2) µν to absorb the dilaton kinetic term gives the 2-dim dilaton gravity theory ).The lengthscale λ −1 makes the dilaton ϕ dimensionless, which then maps to the 4-dim transverse area of 2-spheres 4πϕ equivalent to the 4-dim one.Our discussion is entirely gravitational so it is reasonable to take the Planck length as the natural UV scale with λ −1 ∼ ϵ U V ∼ l P .So finally, the dilaton is ϕ = r 2 λ 2 and the 2-dim metric is where and W b ′ is the conformal factor given in (12). Next, we define a new set of Kruskal coordinates for the location of the island boundary (the location of the quantum extremal surface): this turns out be in the black hole interior for the future boundary radiation region, so we require coordinate parametrizations within the black hole horizon.Towards this, we will again first recast the Schwarzschild de Sitter metric (1) in the static patch in terms of Kruskal coordinates regular at the black hole horizon.So we define the tortoise coordinate r * in terms of the parameters β D , β S and β M in the same way as in ( 5), ( 6) and (7) in the static patch, in the vicinity of the black hole horizon, with the SdS 4 metric recast as ds 2 = f (r)(−dt 2 + dr * 2 ) + r 2 dΩ 2 2 .We label the spacetime coordinates in the left and right regions in the vicinity of the black hole horizon as Here also β takes care of the relative minus signs in the Kruskal coordinates in the left and right regions through e iβα S /2 = e iπ = −1.In the static patch around the black hole horizon, the Kruskal coordinates U S , V S and the Schwarzschild de Sitter metric become ) . 
The value of α_S here ensures regularity at the black hole horizon. (Noting β_M + β_S = β_D, we see that W has dimensions of inverse length.) With this parametrization of the left and right time coordinates, we use the expressions in (15), with β doing the left-right book-keeping. Towards parametrizing the island boundary inside the black hole horizon, we analytically continue the spacetime coordinates defined in the static patch near the black hole horizon, keeping invariant the metric in terms of the black hole interior Kruskal coordinates. We consider the analytic continuation in the (t_a, r_a*) coordinates. The new Kruskal coordinates at the two island boundaries a′_+ and a′_- are thus defined as (U′_{S+}, V′_{S+}) and (U′_{S-}, V′_{S-}) respectively, and the Schwarzschild de Sitter metric is expressed in terms of the (X_{a′}, T_{a′}) coordinates (17). Here also, after the same dimensional reduction as in [92], the 2-dim metric beyond the black hole horizon takes the form (19), where W_{a′} is the conformal factor given in (18).

Entanglement entropy: no island

In this section, we evaluate the entanglement entropy of the radiation at late times in the absence of any island. Here we have the radiation region R within the interval between b′_+ and b′_-, as shown in Figure 1. We have chosen the bulk matter to be within this interval on some fixed-T slice near the future boundary. The entropy of the Hawking radiation is then the bulk matter entanglement entropy of this region, which we calculate technically using 2-dimensional techniques, approximating the bulk matter by a 2-dim CFT propagating in the 2-dim background. In the 2-dim CFT, the matter entanglement entropy for a single interval A = [x, y] is obtained from the replica formulation [142,143] as S = (c/6) log d²[x, y], after also incorporating in d[x, y] the conformal transformation to a curved space [8], stemming from the W′-factor in the 2-dim metric (13). We thus obtain the entropy of the bulk matter CFT of the radiation region and evaluate it near the future boundary in the Schwarzschild de Sitter geometry (13), suppressing 1/ϵ²_UV inside the logarithm (ϵ_UV the UV cutoff). Using the Kruskal coordinates (11), the bulk matter entanglement entropy then takes its explicit form; along with ϵ²_UV and the discussion around (13), it can be seen that the logarithm argument is dimensionless (noting from (7) that β_D has dimensions of length). The details of the calculation are given in Appendix B.

The late time approximation is implemented by considering large X_{b′}, which means we are considering the entire constant-T slice: the resulting entropy (25) then grows linearly in X_{b′}. This linear growth of the bulk matter entropy with length X_{b′} means that the entropy of the radiation will eventually be infinitely larger than the Bekenstein-Hawking entropy of the black hole for large X_{b′}. This inconsistency is the reflection of the black hole information paradox from the future boundary point of view. See [89] for similar observations in dS₂.

To gain some intuition for this linear growth with "length" at the future boundary, relative to linear growth in ordinary time, it is useful to compare the present situation with [92], where we studied the evolution of the entanglement entropy of radiation collected by observers labelled by b_± in the left/right static diamond patches (see Fig. 1).
Via the coordinate relations (10), late times for those static patch observers map to large lengths X_{b′} here; in other words, the points b′_± approach the ends of the future boundary. This is consistent with the picture of Hawking radiation from the black hole eventually crossing the cosmological horizons and reaching the future boundary, so that late times for static patch observers map to large lengths for future boundary (meta)observers. Our future boundary (meta)observer perspective here is reminiscent of the "census-taker" who looks back into the past and collects data [144]: it would be fascinating to make this precise and develop it further.

Late time entanglement entropy with island

The Hawking radiation from the black hole will eventually cross the cosmological horizon and reach the future boundary (see Figure 1), where we imagine it is collected by appropriate (meta)observers. In this section, we evaluate the entanglement entropy of the bulk matter near the future boundary after including appropriate islands. The island proposal [9] for the fine-grained entropy of the Hawking radiation is S(R) = min ext_I [ Area(∂I)/(4G_N) + S_matter(R ∪ I) ], where R is the region far from the black hole where the radiation is collected by distant observers and I is a spatially disconnected island around the horizon that is entangled with R. The intuition here is that after about half the black hole has evaporated, the outgoing Hawking radiation (roughly I) begins to purify the early radiation (roughly R). This purification of the early radiation by the late Hawking radiation reflects the entanglement between the two parts, stemming from the picture of Hawking radiation as the production of entangled particle pairs near the horizon (taken as vacuum). Thus R ∪ I purifies over time, its entanglement decreasing. The decreasing area of the slowly evaporating (approximately quasistatic) black hole then leads to S(R) decreasing in time, recovering the falling Page curve expected from unitarity of the original approximately pure state.

In the current case, the future boundary receives Hawking radiation from both the left and right black hole horizons, so we expect islands on both left and right. Each island almost entirely covers the corresponding black hole interior: the island boundaries are at a′_+ and a′_- (Figure 1). The islands in question turn out to emerge just inside the black hole horizon. Including an island I at late times, i.e. for large X_{b′} and X_{a′}, the effective radiation region becomes Σ_rad ∪ I. Now we make the assumption that the global vacuum state is approximately pure: this is not strictly true, since the bulk matter CFT is expected to be at the finite dS temperature in the ambient de Sitter space. However, in the limit of a small mass black hole in a very large dS space with correspondingly very low dS temperature, one can take the bulk matter to be at nearly zero temperature and correspondingly in a global pure state. With this assumption, one instead computes the entanglement entropy of the complementary region (Σ_rad ∪ I)^c, which comprises two intervals; this turns out to be self-consistent.

The entanglement entropy for multiple disjoint intervals is more complicated, arising from the multi-point correlation functions of twist operators: it depends not just on the central charge but on detailed CFT information. In the limit where the intervals are well-separated, expanding the twist operator products yields the factorized sum of single-interval entropies (29) [142,143,145,146]. For two intervals [x₁, y₁] ∪ [x₂, y₂], this is the limit where the cross-ratio x is small, i.e. x ≪ 1.
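To spell out the well-separated limit being invoked, the following is a standard form of the statement; the cross-ratio convention is one common choice, and the paper's precise distances are the Kruskal ones of (21).

```latex
% For two intervals [x_1, y_1] \cup [x_2, y_2] with small cross-ratio
\[
  \eta = \frac{d^2[x_1, y_1]\; d^2[x_2, y_2]}{d^2[x_1, x_2]\; d^2[y_1, y_2]}
       \ll 1 ,
\]
% the twist-operator OPE gives the factorized, sum-of-single-interval form
\[
  S\bigl([x_1,y_1]\cup[x_2,y_2]\bigr)
     \simeq \frac{c}{6}\,\ln d^2[x_1,y_1] + \frac{c}{6}\,\ln d^2[x_2,y_2] ,
\]
% with the UV cutoff suppressed, corresponding to vanishing mutual
% information between the two intervals in this limit.
```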
The cross-ratio is constructed using the Kruskal distances in (21). In 2-dim CFTs with a holographic dual, this is the situation where the two intervals A, B are well-separated and their mutual information exhibits a disentangling transition [147], with vanishing mutual information. (When the radiation region is large, approaching the entire future boundary, the two intervals are well-separated, as seen from Figure 1, so the cross-ratio above is indeed small, justifying the use of (29) for our purposes here.) In this limit of approaching the entire future boundary, we have large b′_±, amounting to the assumption (27) here. Note that there is no holography here: we are simply applying the island rule in the 2-dim background obtained from the reduction of the SdS₄ geometry and looking for self-consistent island configurations, assuming an approximate global pure state in the very low de Sitter temperature limit.

It is also worth noting that while the complementary 2-interval region is unambiguously defined, the 3-interval region is more ambiguous. For instance, in Figure 1, one might imagine defining a global Cauchy slice as the spacelike slice passing through the radiation region and island endpoints, where the left/right endpoints are the approximate midpoints of the left/right islands (and this "unit cell" repeats indefinitely along the Penrose diagram). The 2-interval subregion (Σ_rad ∪ I)^c then has a 3-interval complement on this Cauchy slice. There appears to be nothing sacrosanct in choosing these midpoints to define the slice, whereas the 2-interval complement is well-defined via the radiation region and island endpoints. It would be interesting to understand this more elaborately.

In light of the above, the entanglement entropy for the complementary 2-interval region follows from (29), giving (30). In detail, using the Kruskal coordinates (11), (17), the total generalized entropy becomes (31), where we have added the area term and C(a′) is defined in (32). (Note from (7) that C(a′) is dimensionless.) See Appendix C for the details of this calculation.

Extremizing (31) with respect to the location a′ of the island boundary gives (34); here, since r_D is large, the terms scaling as O(1/r_D) can be dropped. Next, extremizing (31) with respect to X_{a′}, i.e. ∂S_total/∂X_{a′} = 0, gives (35). We consider all possible conditions between X_{a′} and X_{b′} in the extremization equations to look for consistent solutions for the location of the island boundary, i.e. the values of a′ and X_{a′}. We first consider X_{a′} ∼ X_{b′} (36). Putting this condition back in (34) gives (37). For large X_{a′} and X_{b′}, the third term in (37) is small compared to the second, so we can ignore it, and (37) reduces to (38).

Now we recall that we are in the semiclassical regime (39), where G_N c is small, so that the classical area term in the generalized entropy is dominant but the bulk matter makes nontrivial subleading contributions (which are not so large as to cause significant backreaction on the classical geometry). We are looking for an island with boundary a′ ∼ r_S near the black hole horizon: this corroborates the fact that, since the entire right-hand side of (38) is O(G_N c), in the classical limit G_N c = 0 we obtain a′√(r_S - a′) ≃ O(G_N c) ∼ 0, giving a′ = r_S: the quantum extremal surface localizes on the black hole horizon.
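As a consistency check on the perturbative solution, one can extremize a toy generalized entropy numerically: an area term plus a schematic near-horizon bulk logarithm. This is not the paper's exact Eq. (31); the functional form and all parameter values below are illustrative assumptions. The root indeed sits a distance O(G_N c) inside the horizon.

```python
import numpy as np
from scipy.optimize import brentq

# Toy generalized entropy for an island boundary at radius a < r_S:
#   S_gen(a) = 2*pi*a^2 / G_N  +  (c/6) * ln(r_S - a)
# (area term for the two island endpoints plus a schematic near-horizon
# bulk term; NOT the paper's exact Eq. (31)). Units with r_S = 1.
G_N, c, r_S = 1e-6, 1.0, 1.0

def dS_da(a):
    # d/da [ 2*pi*a^2/G_N + (c/6)*ln(r_S - a) ]
    return 4.0 * np.pi * a / G_N - (c / 6.0) / (r_S - a)

a_star = brentq(dS_da, 0.5 * r_S, r_S * (1.0 - 1e-14))
print(f"island boundary a'   = {a_star:.12f}")
print(f"r_S - a'             = {r_S - a_star:.3e}")
print(f"analytic O(G_N c):     {c * G_N / (24 * np.pi * r_S):.3e}")
```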
Thus we can solve the above extremization equation in perturbation theory, setting a′ ∼ r_S at leading order and finding the first-order correction in G_N c ≪ 1: schematically, r_S - a′ is O(G_N c). We thus finally obtain (41) (with C(r_S) from (32), setting a′ = r_S). Solving now for X_{a′} from (35), we obtain (42): this is a large X_{a′} value, with e^{2X_{a′}} scaling approximately as O(1/(G_N c)).

Considering X_{a′} ∼ -X_{b′} does not yield consistent island solutions: see App. D.2. Further, considering potential island solutions just outside the horizon turns out to be inconsistent: see App. D.1. Thus (41), (42), with X_{a′} ∼ X_{b′}, encode the correct island solution for the future boundary radiation region. The condition X_{a′} = X_{b′} is consistent with the expectation that the island location lies on the same Cauchy slice as the radiation region location (along the same lines as the condition t_a = t_b within the static diamond in [92]). This amounts to the requirement of spacelike separation in considering the island and radiation as an effectively single entity which purifies, so the fact that we recover this is not surprising. The fact that islands outside the horizon are inconsistent is due to causality: the entanglement wedge cannot lie within the causal wedge (we explain this further in the Discussion, sec. 5).

With the value of a′ in (41) and X_{a′} in (42), the total on-shell entanglement entropy in (31) becomes (43), which is independent of the lengths X_{a′} and X_{b′}, stemming from the presence of the island. The leading first (area) term is twice the Bekenstein-Hawking entropy of the black hole, while the subleading second and third terms, arising from the bulk entropy of the radiation region purified by the island, are constants not growing with length. This recovers the expectations on the Page curve for the entropy of the bulk matter, or Hawking radiation, considered near the future boundary. The bulk matter at the future boundary radiation region is entangled with the island-like regions located just inside the black hole interiors, in these semiclassical approximations at very low ambient de Sitter temperature.
Comparing the entanglement entropy without the island (25) and that with the island (43) provides the critical length X_Page at which the island transition occurs: we obtain (44). The entropy with the island, along with the associated purification, is lower and dominates over the no-island configuration beyond this critical length X_Page. Note that here the critical Page length X_Page is a dimensionless quantity, using (10). This then corresponds to a Page time t_P ∼ β_D X_Page, which using (7) and the approximations (3) gives the large value t_P ∼ l S_BH (note that this uses the cosmological Kruskal coordinates, distinct from the black hole Kruskal coordinates in [92]). It is however important to note that this Page length is much smaller than another potentially relevant quantity X^dS_P ∼ S_dS, controlled by the entropy S_dS of the cosmological horizon. In the small black hole limit (3) we are considering, S_BH ≪ S_dS, and we do not see any effects above stemming from the ambient de Sitter space, which is just a frozen background. So our critical Page length (44), controlled by the black hole entropy alone, is consistent with the separation of scales in the limit (3). Away from this limit, the black hole horizon shrinks while the cosmological horizon absorbs and grows, resulting in a nontrivial nonequilibrium system. It would of course be interesting to understand de Sitter horizon physics, but this appears substantially more challenging within our framework.

Finally, it is worth noting that there are also timelike separated quantum extremal surface solutions following from the extremization of the generalized entropy with respect to the future boundary observer: we discuss these solutions in App. E. The timelike separation implies that the on-shell generalized entropy becomes complex valued. While complex entropies are known from investigations in pure de Sitter space (which does not have a sufficiently wide Penrose diagram) and suggest new objects [123], [124], [125], [126], [127], it is consistent to ignore them in the Schwarzschild de Sitter context, where spacelike separated quantum extremal surfaces do exist, in accord with physical Page curve expectations for the black hole information paradox.
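The island transition can be illustrated with a toy Page-curve crossing (the growth rate and saturation value below are placeholders, not the coefficients of (25) and (43)): the no-island entropy grows with the radiation-region length, while the island entropy saturates near twice the Bekenstein-Hawking entropy; X_Page is where the two curves cross.

```python
import numpy as np

c_matter = 1.0            # central charge of the 2-dim CFT (placeholder)
S_BH = 50.0               # Bekenstein-Hawking entropy (placeholder)

def S_no_island(X):
    # schematic linear growth of the bulk entropy with length X, as in (25)
    return (c_matter / 3.0) * X

S_island = 2.0 * S_BH     # saturated value, length-independent as in (43)

X = np.linspace(0.0, 1000.0, 100001)
X_Page = X[np.argmax(S_no_island(X) > S_island)]
print(X_Page)             # ~ 6 * S_BH / c_matter = 300 for these toy numbers
```

Beyond the crossing, the minimum of the two branches is the island branch, which is what produces the saturating Page curve described in the text.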
Discussion

We have studied small 4-dim Schwarzschild de Sitter black holes in the limit of very low de Sitter temperature, building further on previous work [92] for observers within the static diamond far from the black hole horizon. In the present work, we have been considering the black hole Hawking radiation in a radiation region interval at the future boundary (see Figure 1). The black hole mass is sufficiently large that quasistatic approximations to the evaporating black hole in semiclassical gravity are valid. We approximate the black hole radiation as a 2-dim CFT at nearly zero temperature propagating in a 2-dim dilaton gravity background (13) obtained by dimensional reduction of the 4-dim spacetime. Including appropriate island contributions, we find that the generalized entropy satisfies expectations from the Page curve for the evolution of bulk matter near the future boundary. Our analysis has parallels with [89], which studied island resolutions for dS_2 JT gravity with regard to the future boundary. Our setup here is somewhat more complicated, since the assumption of an approximate global pure state is only reasonable, if at all, at very low de Sitter temperature. The fact that these approximate calculations vindicate the island paradigm perhaps suggests the existence of better, more fundamental ways to formulate the information paradox in such nontrivial gravitational backgrounds, and of deeper insights into replica wormholes in these sorts of quasistatic gravitational backgrounds.

The Schwarzschild de Sitter (SdS) black hole is unstable and thus somewhat different from the AdS black hole. In our small black hole limit (3), the ambient de Sitter space effectively remains a frozen background reservoir. In a quasistatic approximation, the black hole evaporates away slowly, and our analysis using the eternal SdS black hole shows the radiation entanglement entropy including the island becoming saturated at some finite value (43), approximately 2 S_BH (so the Page curve saturates rather than falls). As the black hole evaporates, its entropy decreases, so the saturation value of the radiation entropy decreases, leading to the black hole Page curve falling slowly, in accord with the approximately pure state that the black hole formed from. Strictly speaking, the ambient de Sitter space temperature (albeit much lower than that of the black hole) implies that the pure state consideration is just an approximation. It would be interesting to study the SdS black hole modelling the bulk matter CFT in a thermal state at finite de Sitter temperature.
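For reference, the island prescription applied throughout is the standard quantum extremal surface rule, written here in its general form (in the present setting R is the future boundary radiation region and the area term descends from the 2-dim dilaton):

```latex
S(R) \;=\; \min_{I}\,\operatorname{ext}_{I}
\left[\,\frac{\mathrm{Area}(\partial I)}{4G_{N}} \;+\; S_{\mathrm{matter}}\!\left(R\cup I\right)\right]
```

Extremizing the bracketed generalized entropy over candidate islands I, and taking the minimum over extrema, is exactly the extremization carried out in (33)-(35) above.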
We recall that in [92], the radiation region was within the static diamond (with endpoints b_+ or b_- in the left or right static diamond, in Figure 1). The late time island location was then found to be within the static diamond, just outside the black hole horizon in that case. As we have seen, the island location we have found currently is just inside the black hole horizon, which at first sight might seem contradictory. However, this is in fact consistent in the current case. First, the future boundary interval (b'_+, b'_-) in the present case receives Hawking radiation from both the left and right black hole patches, propagating past the left and right cosmological horizons bounding the left and right static diamonds. So this setup is physically distinct from the previous case of a single static diamond. Secondly, in obtaining the island locations we have been considering the limit of large X_{a'}, X_{b'} in the extremization equations. In this limit the future boundary interval (b'_+, b'_-) approaches the entire future boundary, i.e. the points b'_± approach the endpoints of I^+. Note that the left and right static diamonds are now within the causal wedge of this interval. It would then be causally inconsistent for the entanglement wedge to be within the causal wedge of the radiation interval. The entanglement wedge of the radiation region is the domain of dependence, or bulk causal diamond, of the spacelike surface between the boundary of the radiation region and the island boundary (the location of the quantum extremal surface). As it stands, the island boundary is just inside the black hole horizon, so it lies outside the causal wedge, nicely avoiding inconsistency.

The black hole interior island solution is ultimately supported by the calculational fact that other possibilities lead to inconsistencies: for instance, blindly looking for island solutions outside the horizon in the present case exhibits inconsistency in the extremization equations. We carried out this exercise by performing the calculation of sec. 4 using static diamond Kruskal coordinates for a potential island lying just outside the black hole horizon in the static diamond (a' ≳ r_S in this case, somewhat akin to the parametrizations in [92] reviewed in App. A). The analog of (31) in this case leads to extremization equations similar to (33) and (35); however, there are subtle differences which ensure that the analogs of (36) and (37) together do not give consistent island solutions (see App. D.1). Further, as we also noted after (42), potential island solutions with X_{a'} = −X_{b'} (instead of X_{a'} = X_{b'}) also lead to inconsistencies (App. D.2).

Thus overall, our semiclassical island solution in (41) and (42) should be regarded as nontrivial. Perhaps the self-consistency of these calculations (in particular using the complementary 2-interval bulk matter entropy) also vindicates the assumption of approximate purity of the initial matter that made the black hole in this very low temperature de Sitter ambience. It would be interesting to explore this in more detail, as discussed around (29).
The separation of scales in the small black hole limit (3) ensures that black holes can be regarded as localized subsystems analyzable by distinct classes of observers (or metaobservers). Abstracting away from our technical analysis, Schwarzschild de Sitter black holes then vindicate some general lessons for the black hole information paradox here as well. Islands appear to emerge self-consistently, evading paradoxes with (i) unitarity, as encapsulated by the Page curve (late static patch times and large future boundary lengths); (ii) causality (the island boundary does not lie within the causal wedge); and (iii) overcounting (the purifying island is spacelike separated from the radiation, lying on the same Cauchy slice). In this light, de Sitter space itself and cosmological horizons appear exotic: extremal surfaces anchored at the future boundary involve timelike separations (e.g. [68], [86] for quantum extremal surfaces, and [123], [124], [125], [126], [127] for classical RT/HRT surfaces). So de Sitter space, and perhaps cosmology more generally, require new insights.

Our discussions of Schwarzschild de Sitter are entirely within the bulk framework of semiclassical gravity, with no holography per se (except in the broad sense of gravity being intrinsically holographic). The future boundary is well-defined as a place where gravity is manifestly weak; however, we have simply applied the island formulation in these relatively complicated higher dimensional models under various assumptions and approximations, without rigorous justification. So this appears to stretch the regimes of validity of the original island proposals, although it corroborates the general expectations laid out in [18]. It would be nice to understand in more fundamental ways the deeper underpinnings of semiclassical gravity that encode these self-consistent island formulations of the black hole information paradox. In this regard, it might be interesting to understand the interplay between the generalized entropy, its extremization, and gravity actions (see e.g. [148], [149], [150], [89]) in the context of the general 2-dim dilaton gravity theories (13) we consider here, arising from reduction of SdS_4.
A Review: static patch radiation entropy with islands

In this section, we briefly review the calculation in [92] of the island resolution of the black hole information paradox in Schwarzschild de Sitter black holes, in the limit of small black hole mass and very low de Sitter temperature. This has close parallels with islands in flat space Schwarzschild black holes [26]. Considering the radiation region in the static patch, far from the black hole horizon but within the cosmological horizon, the entanglement entropy of the bulk matter can be shown to increase unboundedly. Including an island region I ≡ [a_-, a_+] straddling the black hole horizon, we consider the entanglement entropy of the interval R_- ∪ I ∪ R_+. Strictly speaking, the bulk matter should be approximated as a CFT at finite temperature corresponding to the de Sitter temperature; however, in the limit of very low de Sitter temperature and a small mass black hole, we can approximate the bulk theory to be in an approximately pure state. Then calculating the complementary interval entropy and appending the area term (from the island boundary area) gives the total entanglement entropy (A.45) [92], where C(a) is defined in (A.46). Extremizing S_total in (A.45) with respect to the location of the island boundary a gives (A.47). Next, extremizing S_total from (A.45) with respect to t_a gives (A.48). First consider t_a = t_b, so t_a − t_b = 0 and t_a + t_b = 2t_a for large t_a, t_b. Then (A.48) becomes (A.49). Next, putting this condition (A.49) back in (A.47) gives (A.50). For large t_a and t_b, the third term in (A.50) is small relative to the second term, so we can ignore it, and (A.50) simplifies accordingly. Solving this in perturbation theory for the first order correction then gives (A.52), (A.53) below, which at late times t_a = t_b recover the result in [92]. (Considering t_a = −t_b, i.e. t_a + t_b = 0 and t_a − t_b = 2t_a for large t_a and t_b, the above analysis can be seen to give physically inconsistent solutions.) The island is a little outside the horizon. The late time generalized entropy including the island contribution is finite, approximately twice the black hole entropy up to small corrections from the bulk matter.

B Details: entropy in the no-island case

This section contains some details on the calculations of the entanglement entropy in the absence of the island in sec. 3. Using (11) and calculating each part of S_matter in (23) separately, then plugging all these into (23), we finally obtain (24).

C Details: late-time entropy with island

Here we give details on sec. 4. We are looking to calculate (30), i.e. (C.59), with W_{a'} as in (18). Putting the relevant expressions together in (C.59) gives (C.62); similarly we obtain (C.63). Putting (C.62) and (C.63) together gives (C.64). We next calculate the other relevant contributions using (11) and (17): putting (C.65) and (C.66) together, and similarly (C.68) and (C.69), using the approximation (27) and (32), we obtain (C.73). The total bulk matter entanglement entropy is thus (C.64) plus (C.73), along with the area term. Thus at large values of X_{a'} and X_{b'}, after adding the area term, the total entanglement entropy S_total becomes (31).

D Inconsistencies in other island solutions

In this section, we briefly describe inconsistencies in other potential island solutions.

D.1 Island outside the black hole horizon

We discuss a potential island with boundary just outside the black hole horizon, i.e. in the static diamond, similar to the results in [92] reviewed in sec.
A. Here we use the static diamond Kruskal coordinates (15), redefined as X_a = α_S t_a and T_a = α_S r*_a, for the location of the island boundary. The calculation now gives the total generalized entropy as (D.75). Extremizing (D.75) with respect to the island boundary a via ∂S_total/∂a = 0 gives (D.76). Next, extremizing (D.75) with respect to X_a via ∂S_total/∂X_a = 0 gives (D.77). Putting the resulting condition (D.78) back in (D.76) for large X_a (with a − r_S small) gives an equation in which, since tanh X_a coth 2X_a < 1 always, all terms are positive: thus there is no solution with a > r_S. These extremization equations (D.76) and (D.77) together therefore do not give reasonable island solutions. Similarly, if we consider X_a = −X_{b'}, (D.77) becomes a relation that likewise admits no consistent solution.

D.2 Island inside the black hole: another possibility

Recall sec. 4 and the extremization equations (34), (35). Instead of X_{a'} = X_{b'} considered there, let us consider X_{a'} = −X_{b'}: then X_{a'} + X_{b'} = 0 and X_{a'} − X_{b'} = 2X_{a'}. Then, for large X_{a'} and X_{b'}, (35) gives an expression with a minus sign on the right-hand side, which leads to trouble when put back in (34), giving no semiclassical near-horizon island solution with a' ≲ r_S.

E Future boundary, timelike separated QES

In this section, we exhibit other quantum extremal surface solutions which are timelike separated from the radiation region near the future boundary. We will use several technical details from [134]. The Schwarzschild de Sitter metric (1), after the redefinitions τ = l/r, ω = t/l, takes a form adapted to the future boundary. The resulting expression for S_gen should be regarded as a smooth function of U, V, with respect to which we extremize to find quantum extremal surfaces. However, the nature of the QES here can be gleaned by simply noting that the only place the spatial future boundary coordinate ω enters is through the spacetime interval ∆² = (ω − ω_0)² − (y − y_0)² inside the logarithm. Thus we expect ∂S_gen/∂ω = 0 to give ω = ω_0, so that ∆² < 0, i.e. a QES timelike separated from the observer's location. The future boundary observer has r = ∞ and τ_0 = 0, so y_0 = 0. Further considering τ = 1 − ϵ, where ϵ ≪ 1, and noting 1 ≪ c ≪ 1/G, we simplify (E.94), ignoring appropriate terms, and obtain a condition involving 2c log[(1 − a_2(1 − ϵ))^{β_2} (1 + (a_1 + a_2)(1 − ϵ))^{β_3} …]. Now, considering a_1 and a_2 perturbatively as a_1 ≃ 1 − m/l, a_2 ≃ 2m/l, with 0 < a_2 < a_1 < 1, we can obtain the parameters β_1, β_2, β_3 using (E.84) perturbatively as well. This finally gives a consistency condition on the central charge (number of degrees of freedom) of the 2-dim CFT matter for the timelike extremal surface to exist (pure dS corresponds to m = 0).

Figure 1: The Penrose diagram of a Schwarzschild de Sitter black hole, with the radiation region near the future boundary I^+. Depicted are the radiation regions (blue lines) in the static patches, which are analytically continued to the radiation region R ≡ [b'_+, b'_-] at I^+ and the late time island I ≡ [a'_-, a'_+] on both sides.

In our present context, at late times, i.e. for large |t_b|, the future boundary radiation region (b'_+, b'_-) is defined by spacetime coordinates (X_{b'}, T_{b'}) obtained by analytic continuation (10) of the spacetime coordinates (t_b, r*_b) defined in the static diamond patches. Geometrically, using Fig.
1, we see that points in the left and right static diamonds can be mapped to points near the future boundary I^+ by drawing light rays from b_+ to b'_+ and from b_- to b'_-. In the late-time limit with |t_b| large, the points b_± in the left/right static diamonds (the left/right ends of the blue radiation regions) move towards the top ends of the green lines (observer worldlines just inside the cosmological horizon). The corresponding light rays map these to points near the left/right ends of the future boundary, giving large lengths |X_{b'}|, consistent with the analytic continuation.

The disentangling condition referred to above is that the disconnected surface S_dis = S[A] + S[B] has lower area than the connected surface S_conn = S[A ∪ B]. Assuming an approximate global pure state, we consider the complementary region as the 2-interval region complementary to Σ_rad ∪ I.

Equation (A.52) follows on setting a ∼ r_S in C(a), etc. Solving for t_a from (A.48) yields (A.53), schematically of the form cosh(2α_S t_a) ∼ (6π/(G_N c)) r_S (b − r_S) C(r_S)⁻² tanh(α_S t_a) / (tanh(2α_S t_a) − tanh(α_S t_b)).
Development of an automated speech recognition interface for personal emergency response systems

Background

Demands on long-term-care facilities are predicted to increase at an unprecedented rate as the baby boomer generation reaches retirement age. Aging-in-place (i.e. aging at home) is the desire of most seniors and is also a good option to reduce the burden on an over-stretched long-term-care system. Personal Emergency Response Systems (PERSs) help enable older adults to age-in-place by providing them with immediate access to emergency assistance. Traditionally they operate with push-button activators that connect the occupant via speaker-phone to a live emergency call-centre operator. If occupants do not wear the push button or cannot access the button, then the system is useless in the event of a fall or emergency. Additionally, a false alarm or a failure to check in at a regular interval will trigger a connection to a live operator, which can be unwanted and intrusive to the occupant. This paper describes the development and testing of an automated, hands-free, dialogue-based PERS prototype.

Methods

The prototype system was built using a ceiling-mounted microphone array, an open-source automatic speech recognition engine, and a 'yes' and 'no' response dialog modelled after an existing call-centre protocol. Testing compared a single microphone versus a microphone array with nine adults in both noisy and quiet conditions. Dialogue testing was completed with four adults.

Results and discussion

The microphone array demonstrated improvement over the single microphone. In all cases, dialog testing resulted in the system reaching the correct decision about the kind of assistance the user was requesting. Further testing is required with elderly voices and under different noise conditions to ensure the appropriateness of the technology. Future developments include integration of the system with an emergency detection method, as well as communication enhancement using features such as barge-in capability.

Conclusion

The use of an automated dialog-based PERS has the potential to provide users with more autonomy in decisions regarding their own health and more privacy in their own home.

Background

Falls are one of the leading causes of hospitalization and institutionalization among older adults 75 years of age and older [1,2]. Studies estimate that one in every three older adults over the age of 65 will experience a fall over the course of a year [3,4]. In addition to an overall decline in health, aging is also often accompanied by significant social changes. Many older adults live alone and become isolated from family and friends. Social isolation combined with physical decline can become a significant barrier to aging independently in the community, a concept known as aging-in-place [5]. Aging-in-place allows seniors to maintain control over their environments and activities, resulting in feelings of autonomy, well-being, and dignity. In addition to promoting feelings of independence, aging-in-place has also been shown to be more cost-effective than institutional care [6]. However, while aging-in-place is often ideal for both the individual and the public, elders are faced with pressure to move into nursing facilities to mitigate the increased risk of falls and other health emergencies that may occur in the home when they are alone.
Personal emergency response systems (PERSs) have been shown to increase feelings of security, enable more seniors to age-in-place, and reduce overall healthcare costs [7][8][9]. The predominant form of PERS in use today consists of a call button, worn by the subscriber on a neck chain or wrist strap, and a two-way intercom connected to a phone line. If help is needed, the subscriber presses the button and a call is placed immediately to a live operator via the intercom. The operator has a dialog with the subscriber, determining the problem and coordinating the necessary response, such as calling a neighbour, relative, or emergency response team.

Drawbacks to this approach include the possibility of a high rate of false alarms to the emergency call centre and the subsequent inundation of worried and unsolicited calls to the subscriber. In a study of older women who owned a PERS, many expressed apprehension about unexpected voices and visits from strangers, resenting the need to figure out "why a stranger is talking in my house" and "finding that they show up to check on me" [10]. False alarms typically occur as a result of an accidental button press or a failure, on the part of the user, to respond to regularly scheduled check-ins. According to one call-centre manager, false alarms may account for as many as 85% of call-centre calls [11]. False alarms where first responders are sent to the home may further burden limited emergency resources and delay emergency responders from attending to true emergencies. Apart from the worry it may cause family and friends, false alarms may also result in financial losses because of reduced work hours for a friend or relative attending to a false alarm, or resulting from emergency responders having to break down a door or window to get into a home.

Additionally, subscribers to PERSs are not always pleased with the systems' usability and aesthetics. Many older adults feel stigmatized by having to wear the push-button activator, and current systems place a substantial burden on the subscriber, as he/she must remember to wear the button at all times and must be able to press it when an emergency occurs (i.e., the subscriber must be conscious and physically capable) [9]. Finally, some older adults are hesitant to press the button when an emergency does occur, because they either downplay the severity of the situation or are wary of being transferred to a long-term care facility [8,9].

To circumvent these deficiencies, several research groups are exploring the possibility of incorporating PERSs into an intelligent home health monitoring system that can respond to emergency events without requiring the occupant to change his/her lifestyle. Some researchers have devised networks of switches, sensors, and personal monitoring devices to identify emergency situations and supply caregivers and medical professionals with the information they need to care for the individual being monitored [12,13]. Through these types of PERSs, the user does not need to wear a physical activator or push anything for an emergency situation to be detected.
One novel technique employs computer vision technology (e.g., image capture via video camera) and artificial intelligence (AI) algorithms to track an image of a room and determine if the occupant has fallen [14]. Alternatively, Sixsmith and Johnson [12] used arrays of infrared sensors to produce thermal images of an occupant. The research presented in this paper assumes that a tracking system similar to these will be used to trigger an alarm to the PERS. Regardless of the detection method, once a PERS alarm has been triggered, there is a need to coordinate the response effort with the user. Involving the user allows him/her to maintain control over decisions regarding his/her own health and enables the PERS to provide the appropriate type of response. However, just as with a commercially available push-button triggered PERS, most of the automated PERSs under development immediately connect the user with a call centre when an alarm is triggered [15].

The research described in this paper presents the initial phase of a larger research study investigating the feasibility of using automated dialog and artificial intelligence techniques to improve the usability and efficiency of PERSs for older adults during an emergency situation. In particular, this first phase focuses on demonstrating the possibility of using automatic speech recognition (ASR) with a microphone array and speech recognition software to enable communication and dialog as a means of interfacing with a PERS.

The new generation of ASR technology has achieved significant improvements in accuracy and commercial viability, as demonstrated by its presence in many fields, such as Interactive Voice Response (IVR) telephone systems, medical and business dictation, home and office speech-to-text computer software, and others. ASR may be able to provide a simple, intuitive, and unobtrusive method of interacting directly with the PERS, giving the user more control by enabling him/her to choose the appropriate response to the detected alarm, such as dismissing a false alarm, connecting directly with a family member, or connecting with a call centre operator. The following is a description of the prototyping and preliminary testing of an ASR PERS interface, as well as a discussion of other areas within PERS where ASR could provide enhanced information about the state of the subscriber. Although the research described herein does not specifically test with older adult subjects, the results of the research are critical in setting the foundation for future prototype development and testing that will involve older adult subjects.

Development of a dialog-based PERS prototype

As shown in Figure 1, the development of the prototype occurred in two parallel stages of research. The left branch in Figure 1 (Stage 1) represents the analysis and definition of the dialog, including the selection of the software used to run the ASR dialog; the right branch (Stage 2) represents the selection and validation of the hardware for the prototype. The two branches were combined for the building and testing of the prototype (Stage 3).
Stage 1 - Definition of dialog and dialog implementation

To promote ease of use and compliance, a goal of this research was to design the automated dialog to be intuitive, effective, and friendly. Since current PERSs have included extensive research on how to interact politely, clearly and efficiently with a subscriber, the dialog for the prototype was based on the existing protocol for the Lifeline Systems Canada call centre. For example, Lifeline operators are instructed to initialise contact with a subscriber with a friendly introduction followed by the open-ended question "How may I help you?". The dialog then flows freely until the operator and the subscriber determine together who, if anyone, should be summoned to help.

The need for a dialog is based on the inherent uncertainty about the state of the occupant and about what triggered the alarm. Therefore, the goal of the dialog between the occupant and the PERS is to determine if the alarm is genuine, and if so, the appropriate action to take. To arrive at this goal (i.e., deciding what action to take), the system navigates through a series of verbal interactions resulting in a dialog with the occupant. The different actions available to the prototype are listed in Table 1. Actions are selected through a dialog exchange between the user and the system. The dialog structure for the prototype is depicted in Figure 2. Human factors experiments conducted on computer voice-based systems have demonstrated the highest user satisfaction when automated dialog is modelled after live operators [16]. Thus, the prompts have been developed to emulate the familiar and friendly tone of PERS operators, for example by the use of personal pronouns ("would you like me to call someone else to help you?") and by pre-recording the names of the occupant and responders.

At each dialog node in Figure 2, the corresponding prompt was played over a speaker, and the speech engine was then activated to obtain the occupant's answer through a microphone. For these tests, closed-ended "yes"/"no" questions were selected to create a simple binary tree dialog structure. Transition from one state to the next depended solely on the best match of the user's response to an expression in the grammar (i.e. either 'yes' or 'no'). Each prompt was pre-recorded and saved as a separate audio file by the researcher.

When defining the algorithms used to run the user/system dialog, the goal was to create an architecture that would be flexible and adaptable, so that it could be easily modified as the project evolved. The modularity offered by modern programming practices and speech application programming interfaces (APIs) allows for flexible and scalable design, and requires minimal rewriting to integrate or remove components at any level. The Java Speech API (JSAPI) is a set of abstract classes and interfaces that allow a programmer to interact with the underlying speech engine without having to know the implementation details of the engine itself. Moreover, the JSAPI allows the underlying ASR engine to be easily interchanged with any JSAPI-compatible engine [17].

The prototype was tested using the Sphinx 4 speech engine, an ASR written in Java that employs a Hidden Markov Model (HMM) approach for word recognition [18]. The recognition rates for several tests using Sphinx 4 have demonstrated a low word error rate under a variety of testing conditions. Furthermore, this speech engine is open source, thus making it easy to use and develop when this application is expanded in the future.
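As a minimal illustration of the binary yes/no dialog tree described above, here is a Python sketch (for exposition only, not the authors' Java/JSAPI implementation; the node names, prompt wording and action names are invented placeholders, and the confirmation-before-action pattern mirrors the protocol described in the text):

```python
# A minimal sketch of the closed-ended yes/no dialog tree described above.
# Question nodes carry a prompt and yes/no transitions; empty dicts mark
# terminal actions (cf. Table 1).

DIALOG = {
    "need_ambulance": {"prompt": "Would you like me to call an ambulance?",
                       "yes": "confirm_ambulance", "no": "need_responder"},
    "confirm_ambulance": {"prompt": "I will call an ambulance now. Is that right?",
                          "yes": "call_ambulance", "no": "need_responder"},
    "need_responder": {"prompt": "Would you like me to call someone else to help you?",
                       "yes": "confirm_responder", "no": "confirm_false_alarm"},
    "confirm_responder": {"prompt": "I will call your responder now. Is that right?",
                          "yes": "call_responder_1", "no": "confirm_false_alarm"},
    "confirm_false_alarm": {"prompt": "Shall I cancel this alarm?",
                            "yes": "dismiss_alarm", "no": "need_ambulance"},
    "call_ambulance": {}, "call_responder_1": {}, "dismiss_alarm": {},
    "call_operator": {},
}

def run_dialog(play, recognise, node="need_ambulance"):
    """Walk the tree: play each prompt, match the reply to 'yes'/'no'.
    Anything else falls back to the live operator, the safe default."""
    while DIALOG[node]:                      # question nodes are non-empty
        play(DIALOG[node]["prompt"])
        answer = recognise()                 # returns 'yes', 'no' or None
        if answer not in ("yes", "no"):
            return "call_operator"
        node = DIALOG[node][answer]
    return node                              # name of the action to take

# e.g. a scripted user who only wants their daughter called:
replies = iter(["no", "yes", "yes"])
print(run_dialog(print, lambda: next(replies)))  # -> call_responder_1
```

Because the transitions live in a data structure rather than in code, swapping in a different XML-specified dialog (as the prototype does) only requires rebuilding the dictionary, not recompiling the program.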
An XML parser was created using Jakarta Commons Digester [19] to load a file containing the dialog and action states (specified in XML format) at runtime. The XML files for the PERS application were built by modifying the VoiceXML standard [20], which is generally used for voice-enabled web browsing and IVR applications. By implementing the dialogs in separate XML files, the program code does not need to be recompiled in order to change the dialog. This is beneficial for testing different dialogs easily and allows for seamless customization of the system: a dialog for a user in a nursing home (who might want to be prompted for the nursing desk first) could be different from a dialog for a user in the community (who would be asked if they needed an ambulance first). Likewise, the grammar files (in JSGF format) and the prompt files (in .wav file format) were also separated from the code itself to allow for easy modifications. The modular composition of the prototype enables grammars and prompts that take into account the accent or language preference of the user to be deployed on a per-user basis. Indeed, the system can be easily executed with any dialog specified in the XML format.

Stage 2 - Selection and validation of hardware

For a speech-based communication system, it is vital that the quality of the user's vocal response is sufficient to be correctly interpreted by the ASR. As such, the choice of microphone is very important. Wearing a wireless microphone is not an ideal solution because, just as with push-buttons, the user must remember and choose to wear the microphone in order to interact with the PERS. Additionally, the user must remember to regularly change the batteries on the wireless device. Ideally, an automated PERS should communicate with the user from a distance in a natural fashion, without requiring the user to carry any devices or learn new skills to enable interaction. For this study, the researchers decided that the best location for the microphone would be in the centre of the ceiling of the monitored room, as this was out of the way, central to the room, would provide the best sound coverage, and could not be easily obstructed.

The close-talking microphones typically used for commercial voice recognition applications (e.g., headphones or computer desk microphones) were not appropriate for use in this PERS application, since these types of microphones would not be able to capture the occupant's voice with enough strength or clarity. Additionally, single ceiling-mounted microphones can suffer from reverberations, echoes in the room, and a variety of background noises (e.g., TVs, radios, dishwashers, etc.) [21,22]. Microphone arrays attempt to overcome such difficulties and have been designed for two purposes: 1) sound localization; and 2) speech enhancement through separation, extracting a target sound from ambient noise.
The microphone array used in the prototype was custom designed and constructed by researchers at the Department of Systems and Computer Engineering at Carleton University in Ottawa, Canada. The array consisted of eight electret unidirectional microphones suspended in an X-shaped configuration. The microphone signal-to-noise ratio was greater than 55 dB, the sensitivity was -44 dB (+/- 2 dB), and the frequency response ranged from 100-1000 Hz. A low-noise, low-distortion instrumentation amplifier was also built into the array system. The microphone array was mounted on the ceiling in the centre of a 16 × 20 ft (4.9 × 6.1 m) room. Four microphones were spaced 10 cm apart along each axis of the array, which was calculated by the researchers from Carleton to be the optimal distance for the dimensions of the testing area.

The microphone array described above was designed to specialise in speech enhancement through localisation, by implementing delay-and-sum beamforming to enhance audio signals coming from the user and destructively lower the impact of sounds coming from elsewhere [23]. In delay-and-sum beamforming, a different delay is calculated for each microphone to account for the time the reference signal needs to travel from a given location to the array. Delay-and-sum beamforming was accomplished by passing the location (presumably known by the PERS) to a Motorola 68k processor mounted on the array, which used this information to apply the appropriate delay to each microphone. For the prototype, the location of the user was input manually, although it is anticipated that this will be done automatically in a fully functioning PERS, as it will be continually tracking the location of the user. This information about the location of the occupant could be used to direct the array to "listen" to the exact spot where the occupant is sitting or lying, making it easier to hear the occupant in both PERS-occupant and human call centre operator-occupant dialogs.

Test 1 - Performance of a single microphone versus a microphone array with beamforming

The first experiment was designed to test the array in two modes: 1) using a single microphone from the array; and 2) using the array with the beamforming algorithm tuned into a zone of interest. The AN4 speech database developed by Carnegie Mellon University was selected to test the system. This database has been used in several batch tests throughout the development and evolution of the Sphinx speech engines [24]. The AN4 database has voices from 21 female and 53 male speakers and consists of spoken words and letters. For these tests, only the spoken words were used, for a total of 1846 utterances (with 79 unique words). Figure 3 illustrates the pattern of attenuation expected from the microphone array for sounds in the mid-range of human speech (1850 Hz) coming from zone 9.
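A sketch of the delay-and-sum scheme described above (illustrative NumPy, not the array's 68k firmware; the geometry, sample rate and speed of sound are assumed values): each channel is advanced by its extra propagation delay from the known source location before summing, so speech from that location adds coherently while sounds from elsewhere add incoherently.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, assumed room-temperature value

def delay_and_sum(signals, mic_positions, source_position, fs):
    """Steer the array towards source_position.
    signals: (n_mics, n_samples) array sampled at fs Hz;
    mic_positions: (n_mics, 3) and source_position: (3,) in metres."""
    mics = np.asarray(mic_positions, dtype=float)
    src = np.asarray(source_position, dtype=float)
    dists = np.linalg.norm(mics - src, axis=1)
    # extra travel time of each microphone relative to the closest one
    delays = (dists - dists.min()) / SPEED_OF_SOUND
    shifts = np.round(delays * fs).astype(int)       # in samples
    n = signals.shape[1]
    out = np.zeros(n)
    for sig, s in zip(signals, shifts):
        out[: n - s] += sig[s:]                      # advance by s samples
    return out / len(signals)

# e.g. an X-shaped 8-microphone ceiling array, 10 cm spacing, 16 kHz audio
xs = np.array([-0.2, -0.1, 0.1, 0.2])
array_xy = np.concatenate([np.stack([xs, xs], 1), np.stack([xs, -xs], 1)])
mics = np.column_stack([array_xy, np.zeros(8)])      # ceiling plane z = 0
occupant = np.array([1.5, -2.0, -2.4])               # tracked location (m)
# beamformed = delay_and_sum(recordings, mics, occupant, fs=16000)
```

In a full PERS the occupant's coordinates would come from the tracking system, so the steering location could be updated continuously as the occupant moves.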
AN4 was played over a single computer speaker located on the laboratory floor in zone 9 for each test. Neither the speaker's location nor its volume changed during the tests. For the single microphone test, only one microphone on the array was turned on. In the case of the beamforming tests, all the microphones were used and the researcher manually entered the location of the AN4 speaker. To create ambient noise interference, a pre-recorded audio track of a bubbling kettle (with a signal-to-noise ratio (SNR) of approximately 6.7 dB) was played over a separate speaker. The kettle noise was played from zone 17, the spot that caused the most destructive interference with the AN4 speaker. The Sphinx 4 ASR was used to analyse both sets of tests. The output from the ASR was compared with the known data to determine recognition rates.

Test 2 - Testing "yes"/"no" word recognition rate

While the beamforming tests were conducted with a large vocabulary, it was hypothesised that ASR recognition would improve significantly with a simple two-word vocabulary consisting of "yes" and "no". A convenience sample of nine subjects, 4 male and 5 female, was used for this experiment. The subjects ranged from 20 to 30 years of age. Each subject was asked to sit in the same spot as the AN4 speaker used in the previous tests (depicted as zone 9 in Figure 3). Each subject was asked to speak at their normal volume and say the words 'yes' and 'no' twice under each of three conditions, for a total of twelve utterances per subject (108 words in total). The three conditions were: 1) bubbling kettle interference played in the same location as the previous tests (zone 17 in Figure 3, the area of most attenuation of the human voice); 2) bubbling kettle interference played directly under the array (zone 13 in Figure 3, intermediate attenuation); and 3) no noise interference.

Stage 3 - Prototyping the PERS interface

The dialog system developed in Stage 1 and the microphone array selected and tested in Stage 2 were combined into the architecture depicted in Figure 4. The response planning module executes the dialog and actions outlined in Figure 2. Pre-recorded actions selected by the system were played over the speaker. In this system only audio files were played; however, in a working system a call would also be placed to the appropriate party.

Test 3 - Efficacy of the prototype dialog

This test examined the overall efficacy of the prototype automated PERS dialog interface. A convenience sample of four subjects (3 male and 1 female, healthy and between the ages of 20 and 30) each conducted a set of three scenarios with the system, for a total of 12 dialogs. Before each dialog, the subject was asked to envision a scenario read to them by the researcher and then asked to interact with the prototype to get the recommended assistance. The three scenarios were: 1) they were injured and needed an ambulance; 2) they had fallen, but only wanted their daughter to come; and 3) a false alarm. The Response Planning Module employed the dialog structure outlined in Figure 2, and the ASR matched the subjects' responses to either yes or no.

Results

Table 2 presents the recognition results for a single microphone versus beamforming using the AN4 database (Test 1).
As seen in Table 2, tests showed about a 20% improvement in accuracy when beamforming was used, demonstrating that a microphone array using basic delay-and-sum beamforming provides improved recognition results over a single microphone in the presence of moderate-volume interference noise. After obtaining these results, further tests were performed at an SNR of approximately 0 dB and resulted in no recognition by either the single microphone or the array.

The results of the yes/no recognition test (Test 2) are summarised in Table 3. There were no errors in the no-noise condition, six errors when the noise was directly under the microphone, and four errors in the zone used in the previous tests. As the accuracy of this test was significantly higher than the AN4 test, it was decided that the prototype dialog questions would follow a closed-ended "yes"/"no" format.

When the prototype dialogue was tested through the use of scenarios (Test 3), all 12 tests concluded with the system selecting the desired action, despite a word error rate of 21% (11 errors in 52 words spoken). This was because the system confirmed the user's selection before taking an action (see Figure 2). The errors consisted of three substitutions (yes for no or vice versa) and eight deletions (missed words). Most of the deletions were missed by the ASR because users were speaking their response while the message was still being played by the system.

Discussion

The results from tests with the prototype are encouraging. During the array testing, simple delay-and-sum beamforming resulted in a considerable improvement (20%) in the word recognition rate of the array over a single microphone. This improvement might be greater with more complex microphone array algorithms [25,26] and prefilters [22]. Additionally, further experimentation with the Sphinx 4 configuration parameters may result in increased ASR performance [27].

The "yes"/"no" tests have twofold results. Firstly, unsurprisingly, the location of noise interference has an impact on the ASR's ability to correctly identify words. This suggests that system performance will be affected by the location and presence of unwanted noise. Secondly, the reduction of the user's response to either "yes" or "no" greatly improves ASR recognition. In this case, overall recognition rates for the beamformer increased from about 50% to 90%. This increase is very likely the result of the significant simplification of possible matches the ASR had to choose from. However, it must be taken into consideration that the AN4 tests were conducted by playing the database over (high quality) speakers, while the 'yes'/'no' tests involved live humans.
The full prototype test conducted in Stage 3 (Test 3) resulted in several important insights. First, although all of the errors made in Test 3 were corrected by the confirmation nature of the dialog, there is still the possibility that two errors could occur in sequence, resulting in the PERS making the wrong decision (approximately 4.5%, since two consecutive errors occur with probability (11/52)² ≈ 0.045 given the measured word error rate of 21%). This is an unacceptably high error rate, as the occupant must always be able to get help when it is needed. As such, there needs to be a method (or methods) that the occupant can use to activate or re-activate the system whenever s/he wishes. One option is to enable a unique "activation phrase" that the user selects during system set-up. When the user utters this activation phrase, a dialog is initiated, regardless of whether or not an emergency has been detected. To further improve system accuracy, information from a vision system tracking the occupant could be used to reduce uncertainty about a situation. For example, if the user is lying still on the floor, this information could increase the weighting across possible answers that lead to emergency actions as opposed to false alarms. This type of intelligent, multi-sensor fusion can be achieved through a variety of planning and decision making methods, such as partially observable Markov decision processes (POMDPs) [28]. Regardless, it is vital that, in the case of doubt about a user's response (or lack thereof), the system should connect the user to a live operator, thus ensuring that the user's safety is maximised.

Secondly, the test subjects in Test 3 quickly became accustomed to how the system worked and would often start responding while the system was still "speaking". As the microphone was not activated until after the system finished playing a prompt (so as to avoid the system interpreting its own prompt as a user response), these responses were missed and had to be repeated, causing some confusion and frustration. This highlights the necessity for the user to be able to "barge in" while a prompt is in progress. This is especially important in a system designed for emergency situations, where the user may be familiar enough with the system to anticipate the last few words in a system dialog and may be too panicked or in pain to wait. Most telephone voice systems today have taken this property of dialog into account and allow users to speak before the system has completed its side of the dialog (i.e. barge-in); however, the separation of the phone earpiece and receiver makes this approach easier to implement over the telephone than it would be for the type of PERS described here. Nevertheless, it is an important feature that will be investigated in future designs.

The literature has conflicting opinions on the comfort of seniors with recorded voices [10,29]. There is also a lack of evidence on whether an automated system would be appropriate for emergency situations where users may be under duress. Further research is needed to determine whether a recorded voice would quell or create confusion and/or discomfort, and also whether occupants can attend to a series of directed questions while in a crisis. Additionally, tests with older adults would provide feedback in terms of usability and acceptability. As older adults represent the majority of targeted users of this technology, these questions must be well investigated and answered with the intended user population.
Finally, it must be stressed that although this paper presents promising preliminary research towards a new alternative to current PERS techniques, more research is necessary to improve interactions with the user and to make the system more robust. While false positives (i.e., false alarms) can be annoying and costly, false negatives (i.e., missed events) must never occur, as this could place the life of the occupant in jeopardy. Testing involving different software, hardware, and environment choices, using larger, more comprehensive groups of test subjects, is needed. Only after such extensive testing with subjects in real-world settings will dialog interface technology be ready for the mass market.

Although the dialog program architecture for this prototype is fairly simple and deterministic, it was created with a modular architecture into which other algorithms could be easily applied. For instance, by using appropriate abstract classes and implementations, methods such as decision-theoretic planning, for example a Markov decision process (MDP) [30] or POMDP [11] based approach, could be applied in the future to converge on dialogs that are most effective for each particular user.

In general, this prototype demonstrates the improved ability of a microphone array to remove noise from the environment compared to a single microphone. This enhances ASR accuracy and also allows for easier communication between a call centre representative and the occupant. Importantly, the successful recognition of most false alarms could significantly reduce false alarm call volumes in current PERS call centres, allowing operators to focus on real emergencies.

Limitations

Hearing loss is extremely common among seniors [31], and the loud volume settings on TVs and radios could lead to zero or even negative SNR. Therefore, before it can be implemented in a home environment, improvements in ASR performance will be needed to ensure the PERS interface is robust at smaller SNRs, as well as with non-uniform noise that contains human speech (e.g. TV, radio).

These tests were limited in the type of voice samples used. The system was tested with users under calm, casual circumstances. It will be important to conduct tests on voices in emergency situations, either live or using recorded conversations from call centres, in order to ensure speech recognition performance is upheld when a person may be shaken by a fall or other crisis in the home. Secondly, these experiments were limited to a younger adult sample. It is important that tests be run with older adults on a system that has been trained using a database of older adult voices. The authors are currently working to build such a database. Limited work in comparing the success rate of ASR for various age groups indicates that differences may exist [32,33]. Finally, tests should also be conducted with people of different backgrounds who have strong accents, to assess the effect on accuracy and determine the extent of customization that would be needed [34].

Conclusion

Implementing ASR in the domain of PERS is a complex process of investigating and testing many tools and algorithms. The modularity of the code and of the components used in this study will facilitate the optimisation of the ASR and microphone array parameters, the addition of more complex dialog states, and the potential addition of statistical modelling methodologies, such as techniques involving planning and decision making.
Although the prototype did not perform perfectly, accuracy was significantly improved by limiting the vocabulary to 'yes' and 'no'. By including a confirmation for each action that the system was about to take, the prototype was able to overcome errors and successfully determine the proper action for all test cases. As such, the prototype designed and tested in this study demonstrates promising potential as a solution to several problems with existing systems. Notably, it provides a simple and intuitive method for the user to interact with PERS technology and get the type of assistance he/she needs. Having an automated, dialog-based system provides the occupant with more privacy and more control over decisions regarding one's own health. Additionally, the microphone array system proposed in this research requires only one device to be installed per room in the home or apartment. If coupled with automatic event detection, such as a computer vision-based system, this would be much simpler to install and maintain than other proposed automated PERSs, which generally use a multitude of sensors or RFID tags throughout the home. These advantages would likely translate into a significant reduction in non-compliance, as a greater burden would be transferred from the user to the technology.

The next phase of research is currently underway and is focused on improving the robustness of the automated, dialog-based and intelligent PERS specifically for older adults. An older adult speech corpus containing emergency-type speech in Canadian English is being developed for this purpose. Once completed, this older adult speech corpus will be used to train the ASR component of the prototype PERS. We hypothesize that an ASR system trained with older adult speech in context will be more effective than an ASR system trained with non-older-adult speech out of context. In addition, older adult voices will be recorded in mock emergency situations and will be used to test the prototype PERS system. The decision making and dialogue capability of the automated PERS will also be further refined and tested, possibly with a slightly larger vocabulary (e.g., help, ambulance), a probabilistic decision-making model, and/or a more complex language model. To enhance system flexibility, the ability to barge in at any time is also being explored. Once the system is operational, quantitative and qualitative system and usability testing with older adult subjects will be conducted.

Figure 1: Prototype development process. The left branch (Stage 1) represents the analysis and definition of the dialog that occurs between users and a live call centre in a current, commercially available PERS, to develop how the prototype should respond to a detected fall; this includes the selection of the software used to run the ASR dialog. The right branch (Stage 2) represents the selection and evaluation of the hardware used for the prototype. Stage 3 combines the two branches for building and testing the prototype.

Figure 2: Flow diagram of system dialog.

Figure 3: Attenuation pattern for frequencies of 1850 Hz originating in zone 9.

Table 1: Actions available to the PERS prototype.
Operator: Connect to a live operator. This option can be accessed by the user; it is also the default action the system takes if it does not detect a response from the user or cannot determine which response the user wishes to initialise.
Responder 1: Must give consent to respond to emergency calls; responders can include neighbours, friends, and family.
Responder 2: See description for Responder 1.
Touch-mode capacitive pressure sensor with graphene-polymer heterostructure membrane

We describe the fabrication and characterisation of a touch-mode capacitive pressure sensor (TMCPS) with a robust design that comprises a graphene-polymer heterostructure film, laminated onto the silicon dioxide surface of a silicon wafer, incorporating an SU-8 spacer grid structure. The spacer grid structure allows the flexible graphene-polymer film to be partially suspended above the substrate, such that a pressure on the membrane results in a reproducible deflection, even after exposing the membrane to pressures over 10 times the operating range. Sensors show reproducible pressure transduction in water submersion at varying depths under static and dynamic loading. The measured capacitance change in response to pressure is in good agreement with an analytical model of clamped plates in touch mode. The device shows a pressure sensitivity of 27.1 ± 0.5 fF Pa⁻¹ over a pressure range of 0.5 kPa to 8.5 kPa. In addition, we demonstrate the operation of this device as a force-touch sensor in air.

Introduction

Capacitive pressure sensors are used for a broad range of applications due to their high pressure sensitivity, low temperature dependence and low power consumption [1,2]. A capacitive pressure sensor typically comprises a thin conductive membrane which is suspended above a fixed counter-electrode in a parallel plate geometry [3]. However, the output of such conventional capacitive pressure sensors is nonlinear with respect to the pressure applied perpendicular to the conductive membrane, and the sensitivity of the near-linear regime of the sensor's performance is low compared to the parasitic capacitance of the sensor. A higher sensitivity is achieved by increasing the suspended membrane area, reducing the dielectric gap or using a membrane material with a lower bulk elastic modulus. Since the conductive membrane is freely suspended in close proximity to the counter electrode, the fabrication of large area membranes with a small air gap often results in membrane collapse driven by capillary forces, and in stiction issues due to electrostatics during fabrication or device testing [4,5]. This not only results in reliability issues in high-sensitivity devices, but also calls for complex multiple-sensor architectures for applications where a high sensitivity as well as a large range is required. Touch-mode capacitive pressure sensors (TMCPS) have shown much promise in overcoming these challenges [6,7]. A TMCPS is designed to operate in the pressure range where the diaphragm is allowed to contact the substrate, with a thin insulating layer between the conductive portion of the membrane and the conductive substrate. The large operating range of these devices makes them attractive for application in harsh environments, such as hydraulic pressure sensing and tyre-pressure monitoring systems [6,8]. TMCPSs were initially demonstrated in 1990 by Ding et al [9], in which doped single-crystal silicon membranes were utilised to realise high-pressure sensors. Since then, TMCPS have been developed using a range of membrane and substrate materials in order to adapt the sensing technology for applications in harsh environments. Beyond single-crystal silicon, other membrane materials such as polysilicon [10], polymer/ceramic multi-layers [11], low temperature co-fired ceramics [12], and silicon carbide [13,14] have been demonstrated.
One limitation of TMCPS is that the optimum pressure range for linear operation is not near the equilibrium pressure point, as a significant force is required to cause the membrane to touch the substrate. Therefore TMCPSs are limited to applications that have a constant pressure bias or a constant electrical bias between the two capacitor plates. Graphene-polymer heterostructure membranes [15], multi-layered composite films comprising a laminate of a CVD-graphene layer and one or more polymer layers, are regarded as a promising material for micro- and nano-electromechanical systems (MEMS and NEMS) due to their high elasticity, tuneable elastic modulus and high tensile strength [16,17]. Freely suspended graphene-polymer membranes are formed by a transfer method that uses solely the van der Waals adhesion between the graphene layer and the underlying substrate to clamp the membrane in place. By varying the polymer's thickness and elastic modulus, the bending rigidity, and thus the adhesion of the graphene-polymer membrane to underlying substrates, can be precisely tailored. In this paper we demonstrate a graphene-polymer TMCPS that is permanently in touch mode, extending the linear regime of the sensor into the low pressure limit. Moreover, the low elastic modulus of the graphene-polymer and the high yield strength of the graphene-polymer interface give high pressure sensitivity and a large pressure range for the graphene-polymer TMCPS device. For the purpose of this study we specifically select a multi-layer polymer structure for its compatibility with underwater applications. In the underwater environment, sensors are in constant submersion in a liquid environment and are therefore required to be absolutely leak-proof in order to prevent electrical shorts and charge leakage. Moreover, underwater environments often contain corrosive chemicals, demanding that all external-facing members of the underwater pressure sensor be resistant to corrosion. For this reason, the graphene-polymer pressure sensor presented in this paper comprises one polymer layer that has anti-corrosion properties and a second that provides a moisture barrier to protect the conducting graphene layer.

Structure and operation of the TMCPS device

A 3D schematic of the component layers of a TMCPS device is shown in figure 1(a), where a thin conductive membrane lies on top of a square-shaped cavity in an insulating layer that sits on a conducting substrate. When assembled, the conductive membrane and the conducting substrate together form a capacitive structure, with the insulating layer acting as the dielectric separating the two capacitor electrodes. In order to describe the behaviour of the TMCPS device, we first identify two distinct operating modes, defined by the membrane being freely suspended across the cavity (normal mode) or touching the base of the cavity (touch mode), as shown in figure 1(b). In the latter mode, two different membrane morphologies are distinguished by their conformation to the cavity and are thus defined by their capacitive response: the regime in which the membrane initially laminates onto the base of the cavity is known as the linear regime, and the regime where the membrane is almost fully conformal to the cavity is known as the saturation regime. Figure 1(c) shows a capacitance curve demonstrating these different operating modes of the device.
At low pressures the device is in normal mode, where the membrane is freely suspended across the entire cavity, as in a conventional capacitive pressure sensor. In this operational mode the sensor experiences a sharp increase in sensitivity with pressure as the electromechanical coupling between the parallel plates increases with decreasing air gap. When the membrane touches the bottom of the cavity a non-linear relation between capacitance/sensitivity and pressure occurs and the sensor transitions into touch-mode. At pressures above the transition point the device is in touch-mode and the membrane is in direct contact with the insulating layer. In this regime an increase in pressure results in a steady lamination of the membrane onto the insulating layer, resulting in a linear capacitance and relatively constant device sensitivity. This regime is typically the desired pressure range of operation. At pressures beyond the linear regime the membrane lamination saturates and the device sensitivity drops to zero as the membrane becomes fully compressed into the cavity. Analytical model of TMCPS By using the linear elastic approximation for the clamped membrane it is possible to obtain an expression for the capacitance of the device. Whilst our experimental device comprises a square cavity geometry defined by cavity length 2a and spacer height g as shown in figure 2, for simplicity of the analysis we assume a cell with circular geometry of radius $a' = 1.05a$ to account for a slight increase in deflection sensitivity of a square membrane with side length 2a in comparison to a circular membrane with radius a. We consider the membrane to behave as a uniformly loaded plate with a deflection profile given by
$$w(r) = \frac{p a'^4}{64 D}\left[1 - \left(\frac{r}{a'}\right)^2\right]^2, \qquad (1)$$
where p is the applied pressure, r is the radial distance from the center of the cavity and D is the bending rigidity of the membrane [18]. Since the graphene-polymer heterostructure membrane is composed of multiple materials, we apply a composite analysis to determine an effective bending rigidity of the membrane, $D_{\mathrm{eff}}$. The effective bending rigidity of a composite with N layers is given by
$$D_{\mathrm{eff}} = \frac{1}{3}\sum_{n=1}^{N} Q_n\left[(z_n - t_c)^3 - (z_{n-1} - t_c)^3\right], \qquad (2)$$
where $Q_n = E_n/(1-\nu_n^2)$, $z_{n-1}$ and $z_n$ are the positions of the lower and upper faces of the nth layer, and the subscript n refers to the material properties of the nth layer in the compound plate [19]. For example, a compound material comprising two materials as shown in figure 2 has an effective bending rigidity given by equation (2) with N = 2, where $t_c = \sum_n Q_n (z_n^2 - z_{n-1}^2)\,/\,2\sum_n Q_n (z_n - z_{n-1})$ represents the length between the base of the compound plate and the effective mid-plane, as shown in figure 2. In this paper, as an example of an industrial use scenario, we construct and demonstrate the operation of this pressure sensor device submerged in a water environment. Thus, we consider a three-layer stack where Material 1 is an electrically conductive layer with thickness $t_{ec}$ that forms the top plate of the parallel capacitor, Material 2 is a moisture barrier layer with thickness $t_{ba}$ preventing moisture from shorting the capacitive device, and Material 3 is an anti-corrosion layer with thickness $t_{ac}$, protecting the sensor from the saline environment. In order to model the performance of the capacitive pressure sensor we construct an electromechanical model for the capacitance as a function of applied pressure. The capacitance of an entire device with an array of N × M pressure sensing cells is given by summing over all of the rows i and columns j of the array,
$$C = C_{\mathrm{par}} + \sum_{i=1}^{N}\sum_{j=1}^{M} C_{i,j},$$
where $C_{\mathrm{par}}$ is a lumped sum of all of the parasitic capacitances in the pressure sensor chip and $C_{i,j}$ is the capacitance of the sensing cell in row i and column j.
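As an illustration of the composite analysis above, the following sketch computes an effective bending rigidity for a multilayer plate and evaluates the clamped-plate deflection profile of equation (1). It uses the standard laminate-theory expressions for $Q_n$, the neutral plane $t_c$ and the layer-wise sum (cf. equation (2) and ref. [19]); the layer properties in the example are illustrative placeholders, not the paper's measured values.

```python
import numpy as np

def effective_bending_rigidity(E, nu, t):
    """Effective bending rigidity of an N-layer compound plate.

    Standard laminate-theory sketch (cf. equation (2) and ref. [19]):
    Q_n = E_n / (1 - nu_n^2); the neutral plane t_c is the Q-weighted
    centroid of the stack, and D_eff sums the layer contributions about t_c.
    """
    E, nu, t = (np.asarray(x, dtype=float) for x in (E, nu, t))
    Q = E / (1.0 - nu**2)
    z = np.concatenate(([0.0], np.cumsum(t)))   # layer interfaces from the base
    t_c = np.sum(Q * (z[1:]**2 - z[:-1]**2)) / (2.0 * np.sum(Q * t))
    return np.sum(Q * ((z[1:] - t_c)**3 - (z[:-1] - t_c)**3)) / 3.0

def deflection(r, p, a, D):
    """Clamped circular plate under uniform pressure p, equation (1)."""
    return (p * a**4 / (64.0 * D)) * (1.0 - (r / a)**2)**2

# Illustrative three-layer stack (placeholder values, not the paper's):
# graphene-like conductive layer, parylene-C barrier, polyurethane top coat.
E  = [1e12, 2.8e9, 25e6]        # Pa
nu = [0.16, 0.40, 0.49]
t  = [0.34e-9, 2.5e-6, 10e-6]   # m
D_eff = effective_bending_rigidity(E, nu, t)
print(f"D_eff = {D_eff:.3e} N m")
```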
The capacitance of each cell in the array is given by a sum of the capacitance of the supported area of the membrane $C_{\mathrm{sup}}$, the area that is in touch-mode $C_t$ and the suspended area $C_{\mathrm{sus}}$. For the touch-mode and supported areas of the sensing cell we assume a parallel-plate capacitor model. For the touch-mode area we assume a circular area with capacitance
$$C_t = \frac{\varepsilon_0 \varepsilon_{\mathrm{ox}} \pi r_2^2}{t_{\mathrm{ox}}},$$
where $\varepsilon_{\mathrm{ox}}$ is the dielectric constant of the oxide layer, $r_2$ is the radius of the touch-mode area of the sensor and $t_{\mathrm{ox}}$ is the thickness of the oxide. Similarly, for the supported area the capacitance is given by the series combination of the spacer capacitance and the oxide capacitance, where $C_{\mathrm{ox}}$ is the capacitance of the oxide layer below the support structure, $\varepsilon_{\mathrm{sup}}$ is the dielectric constant of the support structure, w is the width of the support structure, and $E_{\mathrm{sup}}$ is the elastic modulus of the support structure. The suspended area of the membrane does not lie parallel to the silicon substrate and is therefore treated by exploiting the axial symmetry of the boundary-condition approximation. By approximating the curved geometry of the suspended area with a simple gradient defined by the angle θ, the electric field flux lines are approximated as directional arcs as shown in figure 1(b). The electric field intensity is therefore given by $E = V/d$, where V is the applied voltage between the two capacitor plates during measurement and $d = g - w(r)$ is the vertical distance between a point on the membrane and the silicon substrate [20]. Thus the capacitance of the suspended area can be approximated, where the capacitance of the air gap of the suspended section is given by integrating over the suspended annulus,
$$C_{\mathrm{air}} = \int \frac{2\pi \varepsilon_0\, r}{g - w(r)}\,\mathrm{d}r,$$
where w(r) is the deflection profile of the membrane as given in equation (1). When evaluated, the final result follows from carrying out this integration between the touch-mode radius and the cavity edge. This electromechanical model, along with the material properties (supplementary table 1) (stacks.iop.org/TDM/5/015025/mmedia) and device geometries (table 1), was used to calculate the deflection profiles and graphene-polymer TMCPS performance. A more detailed discussion including the assumptions made during the calculation is given in supplementary discussion 1. Sensors are fabricated from CVD graphene grown on copper foils according to Ciuk et al [22], following a two-step transfer process; the CVD graphene is first coated with 2.5 µm of parylene-C and then transferred from a square piece of copper foil of 15 mm × 15 mm size onto a flat silicon dioxide surface of a silicon substrate (SiO2/Si) using a poly(methyl methacrylate) (PMMA) transfer polymer and a wet transfer process described in supplementary discussion 2 and elsewhere [21] (figure 3, step 1). A 10 µm thick film of polyurethane (PU) is then coated onto the silicon substrate comprising the graphene/parylene-C membrane and is subsequently lifted off the SiO2/Si surface using an aqueous potassium hydroxide (KOH) etch (figure 3, step 2). We do not directly transfer the graphene from the copper foil onto the cavity-bearing substrate because the parylene-C coating on the graphene is not homogeneous due to undulations in the underlying copper foil. On a separate substrate, a SU-8 negative photoresist spacer grid is patterned onto the surface of a 20 mm × 20 mm piece of Si/SiO2 wafer using UV lithography and a mask aligner. On the surface of this chip, surrounding the region of the SU-8 spacer, metal electrodes are formed using a silver epoxy (figure 3, step 3).
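A numerical sketch of the per-cell capacitance model described above is given below. It combines a parallel-plate term for the touched disc with a numerical integration of the suspended annulus using the local gap $d(r) = g - w(r)$; the touch radius is obtained by solving $w(r_t) = g$ from equation (1). The series oxide term under the suspended region is folded in as an equivalent gap, and the supported-area term is omitted for brevity; both are simplifying assumptions rather than the paper's exact treatment, and all geometry values in the example are placeholders.

```python
import numpy as np
from scipy.integrate import quad

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def touch_radius(a, g, p, D):
    """Radius where the clamped-plate profile of equation (1) reaches depth g.
    Valid once p exceeds the touch-point pressure 64*D*g/a**4."""
    s = np.sqrt(64.0 * D * g / (p * a**4))
    return a * np.sqrt(max(0.0, 1.0 - s))

def cell_capacitance(a, g, t_ox, eps_ox, p, D):
    """Touch-mode disc + suspended annulus capacitance of one sensing cell."""
    r_t = touch_radius(a, g, p, D)
    # Touched disc: parallel-plate capacitor across the oxide only.
    C_t = EPS0 * eps_ox * np.pi * r_t**2 / t_ox

    def w(r):  # deflection profile, equation (1)
        return (p * a**4 / (64.0 * D)) * (1.0 - (r / a)**2)**2

    def integrand(r):
        # Air gap in series with the oxide, expressed as an equivalent gap.
        return 2.0 * np.pi * EPS0 * r / ((g - w(r)) + t_ox / eps_ox)

    C_sus, _ = quad(integrand, r_t, a)
    return C_t + C_sus

# Placeholder geometry: 50 um half-width, 1 um cavity, 300 nm oxide.
print(cell_capacitance(a=50e-6, g=1e-6, t_ox=300e-9, eps_ox=3.9,
                       p=5e3, D=1e-10))
```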
In the final step the graphene-polymer membrane is aligned and stamped onto the cavity-bearing substrate using a tape-supported transfer process described elsewhere (figure 3, step 4) [21]. By baking this final structure at 90 °C for 30 min, the graphene-polymer membrane is allowed to fully conform to the substrate and any residual moisture inside the cavity is evaporated off. This stage also allows the membrane to soften and deflect into permanent touch-mode without the use of any external pressure. Each sensor array now comprises a monolayer graphene membrane with a parylene-C moisture barrier layer and a PU corrosion-resistant layer of thicknesses $t_{ba}$ and $t_{ac}$ on top. The graphene-polymer membrane is suspended over an array of square cavities, each of length 2a and height g. A SiO2 dielectric of thickness $t_{ox}$ exists at the base of each cavity, beneath which lies the highly doped Si substrate, which acts as the counter electrode of the capacitive sensor. An array of such cavities covers an area of 14 mm × 14 mm. Since the cavities are now sealed by the graphene-polymer membrane, an equilibrium pressure $p_0$ exists within each cavity. When the external pressure is changed to a value $p > p_0$ (positive pressure), the suspended portion of the membrane deflects into the cavity and the area of the membrane that is in contact with the SiO2 surface is increased. The deflection of the membrane and the increase in membrane-oxide contact area result in a change in capacitance, which is measured and correlated to the change in pressure. A detailed description of the entire fabrication protocol is given in supplementary discussion 2. The device dimensions in relation to the schematic of the device are given in table 1. In addition, the material properties of the device's constituent materials are given in supplementary table 1. Physical characterisation of sensor arrays Characterising the morphology of the graphene-polymer membranes in the TMCPS device is crucial for accurately modelling the pressure sensor performance. In order to characterise the membrane integrity over large areas we employed a series of optical and mechanical techniques. Sensors were first imaged by optical microscopy to check for defects in the graphene-polymer membrane. Figures 4(a) and (b) show an optical micrograph in reflective mode at 5× magnification of a section of pressure sensing cells before and after the graphene-polymer membrane has been transferred, respectively. We can identify that the transferred membrane adheres to the center of each cavity by the optical contrast change in the membrane: the blue-coloured region is suspended and the grey-coloured area is touching the insulating layer. Samples with full coverage and a homogeneous film transfer were subsequently analysed by a Dektak profilometer and Raman spectroscopy. Figure 4(c) shows the cross-sectional profile of the bare substrate and graphene-polymer membrane along the white dotted lines in the optical micrographs of figures 4(a) and (b), respectively. The cross-section shows that the array of membranes is suspended homogeneously over the spacer structure. Figure 4(d) shows the cross-section of a single sensing cell of the TMCPS device as indicated by the white dotted line in figure 4(b). The cross-section shows a smooth membrane surface and a homogeneous height of the spacer-supported area of the membrane. The slight bow in the contact region of the cell is attributed to compressive strain on the top surface of the membrane.
In order to confirm that the graphene layer of the membrane maintained its integrity throughout the fabrication process we conducted Raman spectroscopy measurements at multiple reference points across the sample. A detailed discussion of this measurement is given in supplementary discussion 3. Experimental setup Fully characterised TMCPS devices with full coverage, 100% yield and minimal defects, with dimensions as described in table 1, were electrically contacted to a BNC cable using silver epoxy and mounted onto a custom-built sensor housing as shown in figures 5(a)-(c). The sensor housing was then submerged into a water tank of 1 m depth and mounted onto a vertically aligned steel bar that can move along the z-axis using a stepper motor controlled through a LabVIEW program, as shown in figure 5(d). This system allows us to move the sensor housing back and forth in the vertical direction with an accuracy of 0.5 mm. The depth of the sensor was taken as the vertical distance between the water/air interface of the tank and the mid-point of the sensing area of the TMCPS sensor, as shown in figure 5(c). The BNC cable used for the capacitance measurement was positioned such that motion of the sensor does not affect the parasitic capacitance of the measurement cables. This was achieved by submerging an additional 1 m of cable in the tank such that the section of the cable at the water/air interface is motionless during movement of the sensor. In order to calibrate the drift of the capacitance, samples were measured for 1 h at a depth of 90 cm. During this period we observed a drift of 0.15% of the total capacitance. A detailed description of the pressure sensor calibration is given in supplementary discussion 4. The depth of the sensor was then varied between 50 mm and 850 mm at various speeds and time intervals in order to characterise the sensor response. The effective hydrostatic pressures at these depths are 0.5 kPa and 8.5 kPa, respectively. As the pressure difference between the inside and the outside of the cavity is increased by submerging the sensor deeper into the water tank, the graphene-polymer membranes are pressed into the cavities with a force proportional to the hydrostatic pressure and hence the depth of the sensor inside the water tank. Simultaneously, the capacitance between the graphene layer and the doped silicon substrate is measured using a high-precision LCR meter with a resolution of 5 pF in typical operating conditions. Capacitance measurements were taken at 1 kHz with a bias of 1 V, giving a noise-limited capacitance accuracy of 0.1%. Pressure cycling measurements Devices were compared to identical devices fabricated in parallel, but without an SU-8 spacer structure (without cavities) on the surface of the substrate. This allowed us to confirm that it is truly the deflection of the graphene-polymer membranes that is causing the change in capacitance, as shown in figure 6. The device with the spacer structure shows a strong correlation between the pressure and capacitance in comparison to the reference device without the spacer structure, clearly demonstrating that the device's sensitivity to pressure is due to the presence of the SU-8 spacer structure. We are able to extract the sensitivity of devices containing cavities from the slope of the curve in figure 6 as 27.1 ± 0.5 fF Pa−1. Next we cycled devices containing cavities between a depth of 5 cm and 65 cm with a cycling period of 20 s and a 6 s pause at each depth, giving a pressure variation of 0.5 kPa to 6.5 kPa.
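To make the depth-to-pressure conversion used throughout this section explicit, the short sketch below maps immersion depth to hydrostatic pressure and extracts a sensitivity as the slope of a linear capacitance-pressure fit, mirroring how the 27.1 fF Pa−1 figure is obtained from the curve in figure 6. The capacitance data here are synthetic stand-ins, not measured values; note that the quoted 0.5-8.5 kPa range corresponds to taking ρg ≈ 10 kPa per metre of water.

```python
import numpy as np

RHO_WATER = 1.0e3   # kg/m^3
G = 9.81            # m/s^2

def depth_to_pressure(depth_m):
    """Hydrostatic pressure (Pa) at a given depth below the water surface."""
    return RHO_WATER * G * np.asarray(depth_m, dtype=float)

print(depth_to_pressure([0.05, 0.85]))   # ~[490, 8340] Pa for 50-850 mm

# Sensitivity as the slope of a linear fit of capacitance vs pressure.
rng = np.random.default_rng(0)
p = depth_to_pressure(np.linspace(0.05, 0.85, 20))
c = 27.1 * p + 1.0e3 + rng.normal(scale=50.0, size=p.size)   # synthetic, fF
slope, offset = np.polyfit(p, c, 1)
print(f"sensitivity = {slope:.1f} fF/Pa")
```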
This was followed by another cycle between 5 cm and 85 cm (0.5-8.5 kPa) with a period of 10 s and without an intermediate pause at each depth. Figure 7(a) shows the measured capacitance as a function of time for these two cycling experiments. The depth of the sensor and the effective hydrostatic pressure on the sensor are also plotted in blue for reference. The device's response to the continuous change in pressure is reproducible to a precision of 4% of the first cycle's pressure range and to 6.5% of the second cycle's pressure range. This decrease in reproducibility is attributed to the greater turbulence in the water tank created by the sensor housing moving through the water at increasing speed. Moreover, the first cycling period, including the short pauses between varying depths, shows unsettled capacitance signals during each pause. These artefacts are also attributed to the effect of moving water inside the test tank as the sensor housing changes momentum. It was found that these fluctuations are more prominent when the sensor comes close to the top surface of the water tank (5 cm depth). This suggests that the surface movement of the water tank also plays a role in the pressure signal measured. Finally, we measured the capacitance of the sensor during a single cycle of 11 s between a depth of 50 cm and 65 cm in order to probe the sensor's response time. Figure 7(b) shows that the measured capacitance and reference pressure signals correlate well and minimal drift is observed over the complete cycle. Although a slight delay is observed in the response to the cycle, we note that the time resolution of the LCR meter is 0.9 s and is therefore insufficient to give a reliable measurement of the time delay. In order to confirm that the observed capacitance change is solely due to the deflection of the graphene-polymer membrane and that the device capacitance is independent of the aqueous environment, we also measured the samples in an oil bath (supplementary discussion 5). Despite the robust design of the TMCPS device, our experimental setup was restricted to a pressure range of 0.5-8.5 kPa; however, our previous measurements on graphene-polymer membranes comprising a graphene/parylene-C bilayer showed that membrane deflections are reproducible up to a strain of 1.4%, without any slippage at the graphene-polymer interface. Assuming that the graphene/parylene-C interface of the membrane used in this study has comparable structural integrity to the previously tested graphene/parylene-C interface, a considerably larger maximum operating pressure is expected from equation (10) and the geometry of the sensor given in table 1. In order to compare the measured sensitivity value with our electromechanical model, we carried out a detailed characterisation of the TMCPS geometry. First we conducted profilometry measurements at 12 randomly distributed 4000 µm traces (as shown by the black curve in figure 4(c)), giving a data set of 64 unit-cell profiles. These measurements were then used to extract a distribution, and hence an average value, for the device geometries including the θ value, as defined by the angle in the black dotted triangle in figure 2(b). Using a value of θ = 3.8° ± 0.4° and the sensor geometry parameters from table 1, the calculated touch-mode sensitivity is 36 ± 9 fF Pa−1 according to equation (10). A breakdown of this calculation and the associated assumptions made is given in supplementary discussion 1.
With the graphene-polymer pressure sensor in its early stage of development, it remains a challenge to accomplish a large-area array of suspended membranes with uniform performance characteristics. Such non-uniformities arise from a combination of variations in the membrane thickness and inhomogeneities in the residual stress of the membrane. The membrane thickness, when measured across the entire array, shows a variation of ±3.5%. Considering these variations in the membrane thickness, we expect deviations in the TMCPS's sensitivity of 0.4% based on our electromechanical model. Further, optical micrographs of the graphene-polymer membranes in touch-mode highlight the variations in membrane stress, as the area in contact with the SiO2 (green region in the optical micrograph in figure 4(b)) often deviates from the rounded square shape expected for a square-shaped cavity. Although we could not quantify these variations in stress, the variations in the membrane profiles were considered when modelling the TMCPS devices. A further discussion on the effect of stress on the TMCPS performance is given in supplementary discussion 1. Increasing the number of cavities within the device therefore provides a strategy towards improving the reproducibility of the device; however, this approach is limited by the cost of increasing the device footprint by adding more cavities. A more direct approach to solving the issue of reproducibility is to improve the control over the method of laminating the graphene-polymer film onto the cavity-bearing substrate, thus regulating the mechanical properties of the individual suspended membranes. Moreover, by further reducing the thickness of the polymer layer, a reduction in gas leakage can be obtained, as previously shown in the aforementioned study on graphene/parylene-C membranes. We attribute the difference between our calculation and measurements to a combination of charge leakage through the SiO2 layer and along the surface of the sensor chip, as well as to an overestimation by the analytical model. In order to reduce the noise level in future iterations we propose the use of a pinhole-free dielectric or the deposition of a barrier layer [23]. In the second case, an overestimation by the touch-mode model is anticipated because the assumed circular symmetry of the model is not entirely accurate in describing the deflection of the square-shaped geometry of the unit sensing cell in the fabricated device. Despite the scaling of the unit cell radius, $a' = 1.05a$, to account for an increase in deflection sensitivity of a square membrane with side length 2a in comparison to a circular membrane with radius a, the square membranes in the experiment incur additional stresses in touch mode that are not accounted for by our current analysis. Discussion The TMCPS design is aimed at withstanding high loadings, resisting sudden shocks and giving a linear pressure response with a high pressure sensitivity. This makes TMCPSs especially suitable for applications in harsh environments. Harsh environments can typically be subdivided into four categories: high temperature, corrosive, high loading, and biocompatible. The crucial component in providing a pressure sensor's compatibility with any one of these harsh environments is its membrane, as this member is directly exposed to the environment and determines the intrinsic performance of the sensor. TMCPSs for application in harsh environments typically employ either silicon carbide-based (SiC) or polymer-based membranes.
Whilst both of these materials are resistant to corrosive chemicals, their compatibility with high temperatures differs considerably. Whilst SiC-based membranes are relatively stable up to 600 °C, polymer-based membranes are typically only stable up to 150-200 °C [11,24]. Moreover, the difference in the mechanical properties of these two sensor types gives rise to great differences in their pressure sensing performance. Examples of typical SiC and polymer-based TMCPSs are shown in table 2. The SiC-based TMCPS, with its high elastic modulus (350 GPa), operates in a broad pressure range (205 kPa) and requires a large bias pressure (120 kPa) in order to operate in the linear range of the touch-mode [25]. On the other hand, the polymer-based device, including a polyimide-metal membrane, is significantly softer (8.9 GPa), can operate in a lower pressure range (35 kPa) and is almost permanently in touch-mode, making such devices extremely sensitive [26][27][28]. The combination of a soft membrane material and a unique curved-cavity design lends the TMCPS sensor its excellent sensitivity; however, the response is non-linear over a small pressure range. On further comparison of the performance characteristics of the SiC and polymer-based TMCPS to that of the graphene-polymer TMCPS, the present device positions itself in the property space which currently includes polymer-based sensors, but also extends into the performance gap between polymer and SiC-based sensors in terms of sensitivity and linear pressure range. The present device enables a large pressure range (125 kPa) as the membrane is in permanent touch-mode and the graphene-polymer membrane has excellent elasticity with a high critical strain. The stiffness of the membranes used lies in the lower range of polymer-based sensors (0.5 GPa) and the membrane transfer technique enables the fabrication of densely packed suspended membranes on the wafer scale. Whilst the stiffness of the graphene-polymer membrane was designed to operate in a specific pressure range, the elastic properties of the membrane can be tuned by modifying either the polymer thicknesses or the number of graphene layers. Furthermore, the graphene-polymer membrane structure aims to overcome several reliability issues faced by current SiC and polymer-based TMCPS technologies. First, the deposition of SiC requires extremely high temperatures that limit the use of on-chip integrated circuitry or necessitate a high-temperature/pressure wafer bonding process in order to deposit the SiC layer. In addition, the use of sacrificial layers in the processing of SiC puts several limitations on device architecture and material design; materials must be resistant to aggressive etchants and have sufficient stability to overcome capillary forces [33,34]. By transferring the active mechanical component directly onto a pre-patterned micro-cavity in air, we avoid trapping liquids that initiate membrane collapse and completely seal the micro-cavity. Another limitation of existing TMCPS technologies is the non-linear pressure sensitivity and significant hysteresis. This is most common in polymer-based TMCPSs, as the polymer membrane strongly adheres to the bottom of the cavity during touch-mode. This requires complex calibration protocols when operating the sensors, and a voltage bias is typically applied in order to reduce the hysteresis [29].
In comparison, the graphene-polymer TMCPS shows a linear response over the entire range tested in the present study. Moreover, the graphene/silicon oxide interface at the base of the cavity has a very weak adhesion strength [35]. This enables the TMCPS device to operate without any significant hysteresis. The present study concerns pressure sensing in underwater environments, whereby the sensor is designed to withstand corrosive environments and high shock loadings whilst operating in a low pressure range (<10 kPa). Therefore, it is appropriate to also compare the sensor performance to other pressure sensing technologies that are designed to operate in the same application space. Examples of other underwater pressure sensing devices are shown in table 3. In order to compare the sensitivities of the devices, the sensitivity of the graphene-polymer TMCPS is converted into µV Pa−1 by considering a typical AC bridge circuit with a capacitance-to-voltage conversion sensitivity of 0.54 µV/fF, giving a pressure sensitivity of 14.8 µV Pa−1 [36]. This places the graphene-polymer TMCPS in the same performance space as existing piezoresistive pressure sensors. Although we recognise that the graphene-polymer TMCPS does not out-perform current underwater sensing technologies in terms of repeatability in its current form, the touch-mode structure provides increased durability and an extended operating range which existing technology cannot achieve. Moreover, capacitive pressure sensing has significantly lower power consumption and typically shows an improved thermal stability compared to piezoresistive pressure sensors. Therefore we believe that the graphene-polymer TMCPS has significant advantages over existing underwater pressure sensing technologies. Whilst we recognise that the reproducibility, and hence the accuracy, of the graphene-polymer TMCPS presented here is not comparable to commercial pressure sensors, the aim of the present study is to demonstrate the significant engineering benefits, such as ease of fabrication and the ability to develop low-powered, low-cost arrays with a robust sensor design. Moreover, we identify the inaccuracies in the pressure cycling data as engineering challenges which will be addressed by refining the sensor fabrication technique and design. For example, by pre-patterning the CVD graphene before the polymer deposition step in the fabrication of the device, the adhesion of the graphene base of the membrane to the cavity bottom can be optimised. Further, by integrating the sensors into a protective canal bore, pressure fluctuations due to water turbulence and surface chop can be minimised [40]. Considering the performance of the graphene-polymer TMCPS device, we envisage both static and dynamic pressure sensing applications in liquid environments. Whilst the initial testing demonstrates the device as a robust level or depth sensor [41], the device can equally be applied to industrial processes where the pressure of corrosive liquids is monitored [7]. Moreover, the fast response time of the TMCPS device also allows for mid-frequency applications such as MEMS flow sensors or the implantation of an artificial lateral line, in which time resolutions of ~0.1 s and pressure resolutions of ~10 Pa are required [40]. For high-frequency applications that require a time response on the order of 1 µs and a sub-1 Pa pressure resolution, such as underwater acoustics, the current sensor architecture will require further optimisation.
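The conversion quoted above is a one-line calculation; the snippet below reproduces the 14.8 µV Pa−1 figure from the measured capacitive sensitivity and the bridge gain of ref. [36].

```python
# Capacitance-to-voltage conversion for a typical AC bridge readout [36].
cap_sensitivity = 27.4        # fF/Pa (within the 27.1 +/- 0.5 fF/Pa estimate)
bridge_gain = 0.54            # uV per fF, capacitance-to-voltage conversion
print(f"{cap_sensitivity * bridge_gain:.1f} uV/Pa")   # -> 14.8 uV/Pa
```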
We also tested the TMCPS device as a force sensor by measuring the capacitance whilst pressing down on the sensing area. A schematic diagram and the pressure response of the force sensor are shown in supplementary discussion 6, and the force sensor in operation is also shown in the supplementary video. Whilst we did not perform calibrated force measurements on this device, an estimated pressure of 5 kPa, approximated by a finger applying a uniform force of 1 N across the entire sensor array (194 mm²), showed a capacitance change of 145 pF, giving a sensitivity of 28.4 fF Pa−1. Moreover, this estimated sensitivity is in good agreement with our previous measurements in water. This force-sensor test demonstrates that the graphene-polymer TMCPS could also be used in tactile applications such as touch interfaces or force sensors in robotics. Further, we note that the silicon substrate can easily be replaced with a flexible substrate, as all other materials in the sensor are entirely flexible. This enables the TMCPS device to be mounted onto curved surfaces and allows flexing of the device when its supporting member is exposed to shocks or vibrations. Beyond this initial demonstration of a capacitive pressure sensor, we aim to develop more sophisticated MEMS and NEMS devices using graphene-polymer membranes. The excellent elasticity and high-temperature compatibility of graphene make graphene-polymer membranes attractive for pressure sensing in harsh environments, where shock and elevated temperatures would cause significant damage to traditional silicon MEMS. Whilst we note that the polymer layer is likely to be the limiting material in harsh-environment applications, the large variety of polymers available as ultra-thin coatings also allows us to fabricate a range of ultra-thin membranes with properties tuned to specific applications, where the polymer layer not only acts as a mechanical reinforcement but also gives additional functionality to the membrane [42,43]. In view of the latter, we also envisage the use of graphene-polymer membranes in polymer MEMS devices ranging from micron-scale pumps and valves in microfluidics [44] and lab-on-chip devices [45] to pressure sensors and actuators in biomedical applications [46]. Conclusion We have demonstrated the fabrication and characterisation protocol of a touch-mode capacitive sensor with a graphene-polymer membrane as the active sensing material. Using a custom membrane transfer method we were able to fabricate and package devices for underwater depth testing experiments. The device design consists of an array of 28 × 28 SU-8 polymer cavities patterned on a Si/SiO2 substrate, with a thin graphene-polymer membrane partially suspended on top of the SU-8 polymer structure. The fabrication results in a permanent touch-mode morphology of the graphene-polymer membrane that enables high-sensitivity pressure transduction and is robust to high overpressures and harsh sensing environments. The pressure sensors are tested in a water tank by changing the depth of the sensor relative to the water/air interface of the tank, causing a change in hydrostatic pressure on the sensor. We measure a pressure sensitivity of 27.4 fF Pa−1 over a hydrostatic pressure range of 0.5 kPa to 8.5 kPa. Finally, we discuss the current challenges in state-of-the-art harsh-environment pressure sensing technologies and how the graphene-polymer TMCPS can enable highly sensitive devices with a large operating range and excellent reliability.
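As a quick consistency check of the force-touch estimate reported above, the arithmetic can be reproduced directly; the numbers are those quoted in the text.

```python
# Back-of-the-envelope check of the force-touch sensitivity estimate.
force = 1.0                      # N, applied by a finger
area = 194e-6                    # m^2, the 194 mm^2 sensor array
pressure = force / area          # ~5.2 kPa, rounded to 5 kPa in the text
delta_c = 145e3                  # fF (145 pF capacitance change)
print(pressure)                  # ~5155 Pa
print(delta_c / pressure)        # ~28 fF/Pa, consistent with the quoted 28.4
```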
Solar control of CO2+ ultraviolet doublet emission on Mars The CO2+ ultraviolet doublet (UVD) emission near 289 nm is an important feature of dayside airglow emission from planetary upper atmospheres. In this study, we analyzed the brightness profiles of CO2+ UVD emission on Mars by using the extensive observations made by the Imaging Ultraviolet Spectrograph on board the recent Mars Atmosphere and Volatile Evolution spacecraft. Strong solar cycle and solar zenith angle variations in peak emission intensity and altitude were revealed by the data: (1) Both the peak intensity and altitude increase with increasing solar activity, and (2) the peak intensity decreases, whereas the peak altitude increases, with increasing solar zenith angle. These observations can be favorably interpreted by the solar-driven scenario combined with the fact that photoionization and photoelectron impact ionization are the two most important processes responsible for the production of excited-state CO2+ and consequently the intensity of CO2+ UVD emission. Despite this, we propose that an extra driver, presumably related to the complicated variation in the background atmosphere, such as the occurrence of global dust storms, is required to fully interpret the observations. In general, our analysis suggests that the CO2+ UVD emission is a useful diagnostic of the variability of the dayside Martian atmosphere under the influences of both internal and external drivers. Introduction Airglow emission provides important information on the energy deposition in and chemistry of a planetary upper atmosphere (Slanger et al., 2008, and references therein). For Mars, a typical dayglow spectrum in the ultraviolet (UV) region includes the Cameron band, the fourth positive band, the ultraviolet doublet (UVD), the Fox-Duffendack-Barker band, and several distinctive emission lines from atomic O, C, and H (Barth et al., 1971; Stewart, 1972; Stewart et al., 1972). More recently, additional faint emission features, such as the Vegard-Kaplan band, the γ band, and the first negative band, have been identified (Leblanc et al., 2006; Jain et al., 2015; Stevens et al., 2015, 2019). Existing analyses suggest that these dayglow emission features are mostly produced by either photon or photoelectron impact excitation (Fox and Dalgarno, 1979; Fox, 1992). Among the important emission features of a Martian dayglow spectrum, the UVD (B²Σu+ → X²Πg) emission near 289 nm has captured extensive research interest over the past several decades. Since 1969, the UVD emission on Mars has been measured remotely by the UV spectrometer or spectrograph on board Mariner 6, 7, and 9 (Barth et al., 1969, 1971; Stewart, 1972; Stewart et al., 1972), the Spectroscopy for Investigation of Characteristics of the Atmosphere of Mars (SPICAM) on board the Mars Express (MEx; Leblanc et al., 2006), and the Imaging Ultraviolet Spectrograph (IUVS) on board the Mars Atmosphere and Volatile Evolution (MAVEN; Jain et al., 2015). This feature is mainly caused by the photoionization and photoelectron impact ionization of CO2, which produces excited-state CO2+ (Fox and Dalgarno, 1979; Cox et al., 2010; Gérard et al., 2019). Fluorescent scattering of solar photons also contributes but is usually negligible at altitudes below 180 km (Fox and Dalgarno, 1979). Previous analyses of UVD emission have mainly focused on the observed brightness profiles and their variations.
The average peak brightness measured by the MEx SPICAM (Leblanc et al., 2006) is ~30 kR (kilorayleigh) for the observed solar zenith angle (SZA) range. With the aid of the more recent MAVEN IUVS measurements, Jain et al. (2015) reported a peak brightness of 76 kR over a similar SZA range. The difference in peak intensity between the two works is mainly attributable to the solar cycle variation. The former is appropriate for a moderate solar activity condition with a solar radio index at 10.7 cm, denoted as F10.7, of 105 in solar flux units (SFU; 1 SFU = 10−22 W·m−2·Hz−1), and the latter is appropriate for a higher solar activity condition, with F10.7 of ~160 SFU. Regarding the peak altitude of UVD emission, the MEx SPICAM observations indicate that it is typically at 120-130 km and increases with increasing SZA (Leblanc et al., 2006). However, the SZA trend in peak altitude has not been confirmed by the MAVEN IUVS observations. The MAVEN spacecraft has been orbiting Mars since September 2014, operating with a better trajectory design around the planet in terms of UV observations as compared with the MEx. It has an elliptical trajectory in a 4.5-hr orbit with an inclination of 75°. The apoapsis altitude is ~6,200 km and the periapsis altitude is 150-160 km during the nominal mission phase but could be as low as 120-130 km during isolated "deep-dip" campaigns. The MAVEN IUVS instrument has collected thousands of UVD emission profiles in different Mars seasons and under different solar activity conditions, providing a more extensive data set complementary to the earlier Mariner and MEx observations. Furthermore, the Extreme Ultraviolet Monitor (EUVM) on board MAVEN is able to make simultaneous measurements of short-wave solar irradiance received directly at Mars, allowing a better understanding of the solar control of UVD emission. The present study is aimed at a systematic investigation of UVD emission on Mars by using the MAVEN IUVS measurements. We describe in Section 2 the data set used. In Section 3, we present the observed solar cycle and SZA variations in both peak emission altitude and intensity. This is followed by possible interpretations in Section 4 and concluding remarks in Section 5. Observations The IUVS is one of the eight scientific instruments on board the MAVEN spacecraft, measuring the far UV airglow on Mars between 110 and 190 nm at ~0.6 nm resolution and the middle UV airglow between 180 and 340 nm at ~1.2 nm resolution (McClintock et al., 2015). The IUVS is mounted on an articulated payload platform, which directs the orientation of the instrument slit as it captures the spectra of the Martian atmosphere. The IUVS has three operational modes, limb scan, coronal scan, and disk scan, whose implementation depends on the orbital phase. Here we consider limb observations that cover the tangent point altitude range of 80-225 km. Limb scans are taken near the periapsis, with the slit orientation parallel to the surface. A scan mirror sweeps the slit up and down, allowing the IUVS to perform periapsis observations at different altitudes with ~5 km resolution. A maximum number of 12 limb scans is taken during a single orbit. The observed raw data counts are corrected for detector dark currents and then converted to physical brightness by using the sensitivity derived from UV bright stellar observations made during the MAVEN cruise phase. The middle UV and far UV systematic uncertainties estimated from stellar calibrations are 30% and 25%, respectively.
The IUVS instrument team provides three levels of data product on the National Aeronautics and Space Administration Planetary Data System: level 1A (raw data), level 1B (calibrated data), and level 1C (processed data). The processed data provide the brightness profiles for a total number of 28 individual emission lines, which are obtained through multiple linear regression fits of individual spectral components combined with the laboratory spectral data and the reflected solar spectrum background. In this study, we used the level 1C data files, which are tagged "periapse" with version tag V13_R01. We include in this study 2,189 MAVEN orbits that cover the time period from March 2015, reflecting southern summer conditions in Martian year (MY) 32, to October 2018, also reflecting southern summer conditions but in MY 34. The IUVS limb scan observations are available during each of these orbits, providing a sufficiently large sample to analyze the variations in UVD emission on Mars. For each orbit, we computed the respective median brightness profile with the criterion that at least 10 individual measurements have been recorded over the altitude range of 100-170 km and restricted to SZA below 75°. Several representative UVD brightness profiles are presented in Figure 1, obtained during MAVEN orbit number 2,983, appropriate for SZA ≈ 53°. The profile closest to the median situation is indicated by the blue line. Individual brightness profiles, whenever with tangent points penetrating down to sufficiently low altitudes, show peak emission at 120-130 km. Analogous to Gkouvelis et al. (2018), the peak altitude and intensity were estimated from the second-order polynomial fitting to individual measurements made within ~20 km centered at the apparent emission peak, as indicated by the red line in Figure 1. A similar procedure was applied to the entire data set. The typical uncertainty in peak altitude is ~1 km and the typical uncertainty in peak intensity is less than 5%, both small enough to allow their variations in the Martian upper atmosphere to be retrieved. Figure 2 shows the distribution of the MAVEN IUVS observations used in this study, in terms of the SZA and integrated solar extreme ultraviolet (EUV) and X-ray flux. The SZA refers to the peak of the median brightness profile obtained for each orbit. For the integrated solar flux as a proxy of solar activity, we used the level 3 solar spectral model constructed from the Flare Irradiance Spectral Model-Mars (version 11) and calibrated it with the MAVEN EUVM band irradiance data (Eparvier et al., 2015; Thiemann et al., 2017). The integration was made over the wavelength range of 0.5-69 nm, where 69 nm corresponds to the minimum photon energy required to produce excited-state CO2+. Variations in Ultraviolet Doublet Emission For the MAVEN IUVS data set analyzed in this study, the mean peak intensity of UVD emission is 38 kR with a scattering of 34%, where the scattering is defined as the standard deviation in peak intensity divided by the median peak intensity. The maximum peak intensity is 95 kR in MY 32 and the minimum intensity is 12 kR in MY 34. The mean peak altitude is at 120 km with a scattering of ~7%.
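A minimal sketch of the peak-extraction step described above is given below: a second-order polynomial is fitted to the brightness samples within ~10 km of the apparent maximum and the vertex of the parabola gives the peak altitude and intensity. The window width and the synthetic Chapman-like profile are illustrative assumptions, not the actual IUVS data.

```python
import numpy as np

def fit_emission_peak(alt_km, brightness_kR, half_window_km=10.0):
    """Peak altitude/intensity from a limb profile via a parabolic fit,
    following the procedure described in the text (after Gkouvelis et al., 2018)."""
    alt_km = np.asarray(alt_km, dtype=float)
    brightness_kR = np.asarray(brightness_kR, dtype=float)
    i_max = np.argmax(brightness_kR)
    mask = np.abs(alt_km - alt_km[i_max]) <= half_window_km
    a, b, c = np.polyfit(alt_km[mask], brightness_kR[mask], 2)
    z_peak = -b / (2.0 * a)             # vertex abscissa: peak altitude
    i_peak = c - b**2 / (4.0 * a)       # vertex ordinate: peak intensity
    return z_peak, i_peak

# Synthetic Chapman-like profile peaking near 125 km, 40 kR.
z = np.arange(100.0, 171.0, 5.0)
x = (z - 125.0) / 12.0
y = 40.0 * np.exp(0.5 * (1.0 - x - np.exp(-x)))
print(fit_emission_peak(z, y))          # ~ (125 km, 40 kR)
```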
The highest and lowest peak altitudes are 146 km in MY 33 and 107 km in MY 34, respectively. These values suggest the presence of strong variations in the peak parameters of UVD emission, which we detail below. The SZA variations in the median peak intensity of UVD emission are presented in Figure 3, distinguishing between different solar activity conditions characterized by different ranges of the integrated solar flux over 0.5-69 nm. The vertical and horizontal bars indicate the standard deviations of peak intensity and SZA within each bin. Each bin contains at least 10 individual brightness profiles to ensure that the derived SZA variations are statistically robust. The figure demonstrates a clear trend of decreasing peak intensity with increasing SZA at all solar activity conditions. The observed dependence of the peak intensity on SZA could be reasonably described by a cosine function in the form of
$$I_{\rm peak} = I_0 \cos(\mathrm{SZA}), \qquad (1)$$
where $I_0$ is the subsolar peak intensity. The best-fit cosine functions are indicated by the dashed lines in Figure 3, predicting a subsolar peak intensity of 56 kR, 71 kR, and 102 kR, respectively, for the integrated solar flux below 0.9 mW·m−2, between 0.9 and 1.3 mW·m−2, and above 1.3 mW·m−2. A comparison of the subsolar intensities quoted above demonstrates the presence of strong solar cycle variation, which is further displayed in Figure 4 by using the integrated solar flux over 0.5-69 nm as a proxy of solar activity. Different SZA ranges are shown separately: red for 0°-30°, blue for 30°-60°, and green for 60°-75°, respectively. The vertical and horizontal bars represent the standard deviations of peak intensity and solar flux within each bin. Each bin contains at least 18 individual brightness profiles to ensure that the derived solar cycle variations are statistically robust. The figure reveals a remarkable increase in peak intensity with increasing solar activity, which is persistent at all SZAs. Here we use a power law relation to describe the solar cycle variation in peak intensity, hereafter denoted as $I_{\rm peak}$, which is written as
$$I_{\rm peak} = I_{\rm ref}\left(\frac{F}{F_{\rm ref}}\right)^{\alpha}, \qquad (2)$$
where F is the integrated solar flux over 0.5-69 nm, $I_{\rm ref}$ is the peak intensity at a chosen reference solar flux $F_{\rm ref}$, and α is the power index to be constrained by the data. Such a power law relation is chosen to properly reflect the expected limiting behavior of zero peak intensity when the solar radiation is switched off. The best-fit relations are given by the dashed lines in the figure, demonstrating a comparable solar cycle variation with a common power index of ~0.77 independent of the SZA. The best-fit peak intensity at the reference solar flux is 69 kR at SZA 0°-30°, 48 kR at 30°-60°, and 30 kR at 60°-75°, respectively, suggesting a trend that is compatible with the SZA variation displayed in Figure 3. Variations in the Ultraviolet Doublet Peak Altitude This section is devoted to the solar control of the peak altitude of UVD emission. We show in Figure 5 the median peak altitude of UVD emission as a function of the SZA, obtained for different ranges of the integrated solar flux over 0.5-69 nm. The horizontal and vertical bars in the figure show the standard deviations of peak altitude and SZA in each bin. The figure reveals a nearly constant peak altitude at small SZA, accompanied by an appreciable increase with increasing SZA toward the terminator.
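The two fits described above, equations (1) and (2), can be reproduced with a standard least-squares routine, as sketched below. The binned values are illustrative placeholders, not the published medians, and the reference flux is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder binned medians standing in for the IUVS data.
sza_deg = np.array([10.0, 25.0, 40.0, 55.0, 70.0])
I_peak = np.array([54.0, 50.0, 43.0, 32.0, 19.0])           # kR (illustrative)

def cos_law(sza_deg, I0):                                   # equation (1)
    return I0 * np.cos(np.radians(sza_deg))

(I0,), _ = curve_fit(cos_law, sza_deg, I_peak, p0=[50.0])
print(f"subsolar peak intensity I0 = {I0:.1f} kR")

flux = np.array([0.7, 0.9, 1.1, 1.3, 1.5])                  # mW/m^2 (illustrative)
I_at_flux = np.array([45.0, 55.0, 63.0, 71.0, 79.0])        # kR

def power_law(F, I_ref, alpha, F_ref=1.0):                  # equation (2)
    return I_ref * (F / F_ref)**alpha

(I_ref, alpha), _ = curve_fit(power_law, flux, I_at_flux, p0=[60.0, 0.8])
print(f"I_ref = {I_ref:.1f} kR, power index = {alpha:.2f}")
```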
Analogous to the SZA variation in peak electron altitude in the dayside Martian ionosphere (e.g., Morgan et al., 2008; Fox and Weber, 2012; Yao MJ et al., 2019), the SZA variation in the peak emission altitude can be approximately described by a logarithmic secant relation in the form of
$$z_{\rm peak} = z_{\rm sub} + H \ln\left[\sec(\mathrm{SZA})\right], \qquad (3)$$
where $z_{\rm sub}$ denotes the subsolar peak altitude in kilometers and H is a length scale to be constrained by the data. The best-fit subsolar peak altitudes are 115 km, 120 km, and 122 km, respectively, for the solar flux range of below 0.9 mW·m−2, between 0.9 and 1.3 mW·m−2, and above 1.3 mW·m−2. The length scale, H, is estimated to be 6.5 km, 7.0 km, and 5.5 km, respectively, under these solar cycle conditions, suggesting that the SZA variation in the peak altitude tends to be less pronounced at high solar activity conditions. We further show the observed solar cycle variation in peak altitude in Figure 6, where the integrated solar flux over 0.5-190 nm is used as a proxy of solar activity. The IUVS observations made at different SZA ranges are indicated separately in the figure, whereas the vertical and horizontal bars represent the standard deviations of peak altitude and solar flux within each bin. Figure 6 reveals a clear trend of increasing peak altitude with increasing solar flux at all SZAs, which can be empirically modeled with a linear relation in the form of
$$z_{\rm peak} = z_{\rm ref} + k\,(F - F_{\rm ref}), \qquad (4)$$
where $z_{\rm ref}$ is the peak altitude in kilometers at a chosen reference solar flux $F_{\rm ref}$ and k is the linear slope to be constrained by the data. The best-fit reference peak altitude, $z_{\rm ref}$, is 101 km, 103 km, and 112 km, respectively, for SZA ranges of 0°-30°, 30°-60°, and 60°-75°, suggesting a trend that is consistent with the SZA variation depicted in Figure 5. We caution that here we use the solar flux integrated up to 190 nm (instead of 69 nm as before) as a proxy of solar activity, which is closely related to the physical interpretation of the observed variation (see Section 4). Without showing the details, we shall mention that when using the flux integrated over 0.5-69 nm instead of 0.5-190 nm in Figure 6, the correlation between the solar flux and peak altitude is greatly reduced and no unambiguous solar cycle variation could be concluded. Finally, we emphasize that in obtaining the best-fit linear relations, those measurements made at the highest available solar fluxes are excluded. This is because under such a condition, the MAVEN IUVS was coincidentally observing the Martian upper atmosphere during global dust storms (Fu MH et al., 2020), which would elevate the entire brightness profile, including the emission peak, to higher altitudes (see also Section 4). Interpretation of the Ultraviolet Doublet Emission In Section 3, we reported the solar cycle and SZA variations in both peak intensity and altitude of UVD emission with the aid of the extensive limb scan observations made by the MAVEN IUVS. The solar cycle and SZA variations in peak intensity are clearly suggested by the data, showing that the peak intensity increases steadily with increasing solar EUV and X-ray flux and also increases systematically with decreasing SZA. In addition, the same variations in peak altitude are revealed by the data, characterized by an increase in peak altitude with both increasing solar activity and increasing SZA. Different empirical functional forms are adopted to describe the observed variations, as given by Equations (1)-(4). The observations presented here are favorably compared with those presented in the early IUVS investigation of Jain et al.
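The peak-altitude relations, equations (3) and (4), can be fitted in the same way; the sketch below uses the same least-squares routine, with the binned values again being illustrative placeholders rather than the published medians.

```python
import numpy as np
from scipy.optimize import curve_fit

sza_deg = np.array([10.0, 30.0, 50.0, 65.0, 75.0])
z_peak = np.array([120.0, 121.0, 123.0, 126.0, 130.0])       # km (illustrative)

def log_secant(sza_deg, z_sub, H):                           # equation (3)
    return z_sub + H * np.log(1.0 / np.cos(np.radians(sza_deg)))

(z_sub, H), _ = curve_fit(log_secant, sza_deg, z_peak, p0=[120.0, 6.0])
print(f"z_sub = {z_sub:.1f} km, H = {H:.1f} km")

flux190 = np.array([4.0, 5.0, 6.0, 7.0])                     # mW/m^2 (illustrative)
z_at_flux = np.array([112.0, 116.0, 120.0, 124.0])           # km

def linear(F, z_ref, k, F_ref=4.0):                          # equation (4)
    return z_ref + k * (F - F_ref)

(z_ref, k), _ = curve_fit(linear, flux190, z_at_flux, p0=[112.0, 4.0])
print(f"z_ref = {z_ref:.1f} km, slope = {k:.1f} km per mW/m^2")
```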
(2015), who reported a similar correlation between the peak intensity of UVD emission and the EUVM irradiance measured in the 17-22 nm band (see their Fig. 5), characterized by a linear correlation coefficient of 0.77. The SZA variation in peak altitude was not obtained by Jain et al. (2015), possibly due to the much smaller IUVS sample used in their study. Using the MEx SPICAM observations, Leblanc et al. (2006), however, were able to obtain a similar SZA variation in peak altitude as reported here. The observations presented so far could be interpreted by the solar-driven scenario in that the variation in peak intensity is a natural result of enhanced photoionization and photoelectron impact ionization under high solar activity conditions, as these are the two processes mainly responsible for the production of excited-state CO2+ in the dayside Martian upper atmosphere (e.g., Fox and Dalgarno, 1979). The same scenario also accounts for the observed decrease in peak intensity toward the terminator following a simultaneous decrease in solar irradiance. The above line of reasoning is analogous to the interpretation of the well-known solar cycle and SZA variations in peak electron density in the dayside Martian ionosphere using the idealized Chapman theory (e.g., Fox and Yeager, 2009). The SZA variation in peak emission altitude could be interpreted by the fact that the emission is peaked where the optical depth at relevant wavelengths reaches unity in the Martian upper atmosphere, also analogous to the interpretation of the established SZA variation in peak electron altitude according to the idealized Chapman theory (e.g., Fox and Weber, 2012). Finally, the solar cycle variation in peak altitude is driven by the expansion of the atmosphere with increasing solar activity, similar to the observed solar cycle variation in exobase altitude at Mars (Fu MH et al., 2020). Even though the solar-driven scenario successfully interprets the general characteristics of the solar cycle and SZA variations presented so far, we caution that such a simple scenario is not able to account for all the details, as indicated by the considerable scattering of individual IUVS observations, even at a fixed SZA and a fixed solar flux (not shown here). Clearly, an extra driver is required, presumably related to the variation in the background atmosphere, which is far more complicated than the simple effect of atmospheric expansion considered above. Variations driven by thermospheric global circulation (e.g., Bougher et al., 2006, 2009; González-Galindo et al., 2018), wave activity (e.g., England et al., 2017; Siddle et al., 2019), and global dust storms (e.g., Strausberg et al., 2005; Kass et al., 2016; Wu ZP et al., 2020) should all leave noticeable signatures in the observed variations in UVD emission. In particular, we speculate that the abnormal elevation of the peak emission altitude at the maximum available solar flux, as displayed in Figure 6, is more likely driven by global dust storms rather than thermospheric expansion, but such a speculation needs to be verified by detailed model calculations, which are beyond the scope of the present study. Existing studies have indeed demonstrated clearly that the brightness profiles of UVD emission contain important information on the structure and dynamics of the Martian upper atmosphere. For instance, Cox et al.
(2010) predicted an increase in the emission peak altitude from northern summer to northern winter, which they interpreted as a consequence of the thermospheric density variation driven by global circulation. On the basis of the observed scale heights of the topside brightness profiles, Leblanc et al. (2006) derived an average temperature of 191 K at 130-170 km in the Martian thermosphere, and Bougher et al. (2017) further reported the seasonal and solar cycle variations in thermospheric temperature. Lo et al. (2015) estimated the density structure from the UVD observations, revealing clear tidal structures between 100 and 190 km. Finally, the Martian thermospheric response to solar flares was recently reported based on the short-term variation in the same emission feature. Combining the IUVS observations and those made by other instruments on board MAVEN, such as the Neutral Gas and Ion Mass Spectrometer, would better elucidate the role of atmospheric variability in controlling UVD emission, which we leave for follow-up studies. Conclusions Airglow emission provides important information on the structural variability of a planetary upper atmosphere under both external and internal drivers (Slanger et al., 2008, and references therein). This study is dedicated to a systematic investigation of the solar cycle and SZA variations in UVD emission at Mars, a distinctive feature in a typical Martian dayglow spectrum that has been extensively studied over the past several decades (e.g., Barth et al., 1969, 1971; Stewart, 1972; Stewart et al., 1972; Leblanc et al., 2006; Cox et al., 2010; Jain et al., 2015; Gérard et al., 2019). Existing studies have established that this emission feature is mainly produced via photoionization and photoelectron impact ionization of atmospheric CO2 (e.g., Fox and Dalgarno, 1979). For the purpose of this study, we analyzed a large number of UVD brightness profiles collected by the MAVEN IUVS instrument when operated in the limb scan mode (McClintock et al., 2015), from which the respective peak emission intensities and altitudes were derived. The available data set suggests the presence of strong solar cycle and SZA variations in both parameters, which manifest as (1) a significant increase in both peak intensity and altitude with increasing solar activity, and (2) a decrease in peak intensity but an increase in peak altitude with increasing SZA. Our analysis generally confirms previous results obtained either from the MEx SPICAM observations (e.g., Leblanc et al., 2006) or from a smaller set of the MAVEN IUVS observations available at an earlier epoch (e.g., Jain et al., 2015). The observed solar cycle and SZA variations can be favorably interpreted by the solar-driven scenario in that (1) enhanced solar activity leads to enhanced production of excited-state CO2+ via photoionization and photoelectron impact ionization, as well as an expansion of the Martian upper atmosphere via solar heating; and (2) regions at a larger SZA receive a lower solar irradiance along a more slanted line of sight, which not only reduces the production of excited-state CO2+ but also elevates the location of unit optical depth. However, we note from the available IUVS observations that the solar-driven scenario is not able to account for all the details, and an extra source of variability is required, presumably related to the complicated variations in the background atmosphere driven by global circulation, wave activity, and global dust storms, among others.
A Network Architecture and Routing Protocol for the MEDIcal WARNing System: The MEDIcal WARNing (MEDIWARN) system continuously and automatically monitors the vital parameters of pre-intensive care hospitalized patients and, thanks to an intelligent processing system, provides the medical teams with a better understanding of their patients' clinical condition, thus enabling a prompt reaction to any change. Since the hospital units generally lack a wired infrastructure, a wireless network is required to collect sensor data in a server for processing purposes. This work presents the MEDIWARN communication system, addressing both the network architecture and a simple, lightweight and configurable routing protocol that fits the system requirements, such as the ability to offer path redundancy and mobility support without significantly increasing the network workload and latency. The novel protocol, called the MultiPath Routing Protocol for MEDIWARN (MP-RPM), was therefore designed as a solution to support low-latency reliable transmissions on a dynamic network while limiting the network overhead due to the control messages. The paper describes the MEDIWARN communication system and addresses the experimental performance evaluation of an implementation in a real use-case scenario. Moreover, the work discusses a simulative assessment of the MEDIWARN communication system performance obtained using different routing protocols. In particular, the timeliness and reliability results obtained by the MP-RPM routing protocol are compared with those obtained by two widely adopted routing protocols, i.e., the Ad-hoc On-demand Distance Vector (AODV) and the Destination-Sequenced Distance-Vector Routing (DSDV). Introduction The study and analysis of vital parameters of hospitalized patients are extremely important in clinical medicine [1], and multiple research projects aim to use novel technologies to support clinical practice [2]. As a result, several methodologies and solutions that exploit such technologies have been recently proposed [3,4] for the diagnosis of illnesses and pathologies in patients. In particular, following the evolution of the patient condition in the pre-intensive care unit is essential to ensure an early and prompt reaction to critical patients who could experience a progressive clinical deterioration. For this reason, the continuous monitoring of the patients' vital signs (e.g., body temperature, blood pressure, respiratory rate, oxygen saturation, etc.) is needed [2,5,6]. Typically, such data can be obtained using sensors and other medical instrumentation [7]. In this context, the MEDIcal WARNing (MEDIWARN) system, developed in the MEDIWARN European Project (https://mediwarn.net, accessed on 29 June 2021) and discussed in ref. [8], is a novel solution able to predict a possible medical alert and warn about the deterioration of the patients' condition. MEDIWARN realizes a system for the continuous and automated monitoring of the vital parameters of patients through the use of a peripheral sensory system that feeds into an intelligent warning system. The collected data are sent to a central station and analyzed in real time, through an analysis that includes mathematical processing of the acquired signals. The main contributions of this work are the following: • The implementation of the MEDIWARN communication system on COTS devices is described, which shows the feasibility of the proposed network architecture and offers an experimental performance evaluation in a use case scenario.
• The simulative assessment of the MEDIWARN communication system performance in the same scenario used for evaluating the implemented use case is presented, to assess to what extent the simulative results and the experimental ones correspond. • A comparative assessment of the timeliness and reliability results obtained by the MEDIWARN communication system using three different routing protocols, i.e., the MP-RPM presented here, the AODV [11] (an on-demand protocol) and the DSDV [12] (a proactive protocol), is presented to highlight the impact of the routing protocol on the MEDIWARN communication system performance. The paper is organized as follows. Section 2 deals with related works. Section 3 presents the MEDIWARN communication system. Section 4 discusses the relevant research challenges and the solutions, while Section 5 focuses on the MP-RPM routing protocol. Section 6 addresses an implementation of the MEDIWARN system on COTS devices and discusses the results obtained through an experimental evaluation. Section 7 presents a simulative evaluation of the MEDIWARN communication system. Finally, Section 8 gives conclusions and directions for future work. Related Work Nowadays, healthcare experts and researchers promote the need for automated health monitoring devices to enhance the patients' safety and to reduce the stress of the medical staff, enabling them to monitor their patients or interact with them from anywhere at any time [13][14][15]. Consequently, several recent works have addressed medical monitoring systems. Moreover, to better diagnose, forecast and characterize individuals' health, several models exploit machine learning in healthcare, as discussed in refs. [16][17][18]. In ref. [19], the authors presented an IoT-based health monitoring approach in which medical sensor data are collected and sent to an analysis module through a LoRaWAN network infrastructure. LoRaWAN represents a valid and easy-to-deploy technology for monitoring patients in field hospitals during emergency situations, as explained in ref. [20]. However, although LoRa is an appealing technology to provide low-power, long-range wireless connections [21][22][23], LoRa-based approaches, e.g., the one in ref. [24], cannot cope with monitoring applications that require high sample rates and a significant amount of exchanged data. This is because LoRa is intended for low data rate transmissions and is subject to duty-cycle restrictions, and therefore it cannot support the transmission of waveforms that entail a considerable amount of data sent per second. WiFi offers a higher bandwidth than LoRa, and therefore it is a more suitable technology for the addressed purpose. The work in ref. [25] proposed a full Internet-based architecture that uses the oneM2M and openEHR standards, thus allowing interoperability between different devices and software, from the WiFi-enabled wearable physiological sensors to the monitoring system. The work in ref. [26] presented a monitoring system made up of Raspberry Pi single-board computers that collect sensor data and send them through the Internet using Ethernet or WiFi. Other approaches for healthcare applications exploit the Software-Defined Networking (SDN) paradigm that, as discussed in ref. [27], allows the complexity of heterogeneous networks to be handled in a simple way and provides priority-based mechanisms for the monitoring services [28,29]. For example, the architecture proposed in [30] exploits different functional and security applications and services provided by SDN. Ref.
[31] proposed a novel framework for the analysis of the human physiological detection system based on a biosensor, but it does not focus on communication aspects. One common feature of all the approaches discussed above, as well as other ones (see, e.g., [32][33][34]), is that they are Internet-based. Conversely, one of the design requirements of the MEDIWARN system was to transmit sensitive data through a private network that is locally managed within the hospital, thus avoiding transmitting them to external servers. Moreover, to the best of the authors' knowledge, the above-mentioned state-of-the-art monitoring systems do not foresee a predictive approach able to anticipate the upcoming deterioration of a patient's condition and trigger alarms well before the patient starts to get worse. Conversely, MEDIWARN is intended for automatically monitoring the vital parameters of hospitalized patients in order to both take action in a proactive way whenever needed and provide a complete history of each patient's vital parameters during their hospitalization for data analysis. For this reason, in the following, the design and implementation of a communication system that is able to cope with all the MEDIWARN requirements is addressed. Network Architecture The MEDIWARN system, as shown in Figure 1, includes patients, doctors, sensors, monitors, some wireless nodes (including mobile handheld devices), the MEDIWARN Virtual Biosensor and a monitoring station. The sensors acquire the physiological parameters of the patient. Two kinds of vital parameters are considered, i.e., both waveforms and single values. The sensory data are collected and shown on a monitor close to the patient. The monitor is equipped with a wireless transceiver and transmits the vital parameters to the MEDIWARN Virtual Biosensor for data processing. The wireless network consists of a number of wireless nodes, organized in a mesh topology, that act as relay nodes in the data exchange between the monitors and the MEDIWARN Virtual Biosensor. The MEDIWARN Virtual Biosensor (Figure 2) is the heart of the proposed model and consists of a dedicated server that stores and processes the vital parameters of the patients. Such a server runs four software components: • A database that stores all the vital parameters sampled from the patients and the processing results. • A data acquisition component that is in charge of the communication with all the monitors, which collects the vital parameters sampled by the monitors and stores them in the database. • The fuzzy algorithm, which communicates directly with the database, reads the vital parameters, processes the data and stores the results in the database. • The web portal, i.e., a web server that maintains the GUI for the workstation and the mobile devices held by the medical staff. The traffic of mobile devices is web traffic with no real-time guarantees. The medical team can remotely monitor the patients through the mobile devices, which show the patients' current clinical status. Moreover, the medical staff is promptly informed of any deterioration in a patient's condition through an alarm that is sent by the MEDIWARN Virtual Biosensor to the mobile devices. This way, the medical staff members are alerted by an acoustic signal generated by their mobile devices. The monitoring station shows a detailed view of each patient's condition, while the mobile handheld devices (e.g., tablets) of the medical staff provide a summarized view of the patients' status.
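As a rough illustration of how these four components could interact, consider the following minimal Python sketch; all class and method names are hypothetical, since the paper does not publish the Virtual Biosensor code, and the fuzzy logic is replaced by a placeholder.

    class Database:
        # Stores sampled vital parameters and the fuzzy-processing results.
        def __init__(self):
            self.samples = []   # (patient_id, params) tuples
            self.results = []   # (patient_id, score) tuples

        def store_sample(self, patient_id, params):
            self.samples.append((patient_id, params))

        def store_result(self, patient_id, score):
            self.results.append((patient_id, score))

    class DataAcquisition:
        # Collects the vital parameters sent by the monitors and stores them.
        def __init__(self, db):
            self.db = db

        def on_monitor_message(self, patient_id, params):
            self.db.store_sample(patient_id, params)

    class FuzzyAlgorithm:
        # Reads the vital parameters from the database, processes them and
        # stores a warning score back; the real fuzzy logic is not shown here.
        def __init__(self, db):
            self.db = db

        def process(self):
            for patient_id, params in self.db.samples:
                score = sum(params.values()) / len(params)  # placeholder only
                self.db.store_result(patient_id, score)

In such a design, the web portal would simply read the same database to serve the monitoring station and the tablets.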
The MEDIWARN communication system involves a wireless network architecture such as the one in Figure 1 for each hospital ward. If multiple hospital wards use the MEDIWARN system, the wireless networks should exchange data through wired technologies (e.g., Ethernet). Research Challenges and Solutions This section discusses the research challenges that the MEDIWARN communication system poses and the design choices made to address the system requirements. Requirements The wireless network involved in the MEDIWARN system must satisfy several application requirements, which pose the following design challenges. High data rate. The monitors may need to transmit a significant amount of data within short intervals (e.g., waveforms). For this reason, a wireless technology that supports high data rates is required. Local communications. The patients' data have to be transferred over a private network that is locally managed within the hospital. This design choice was driven by the explicit request of the medical staff involved in the MEDIWARN project to avoid transmitting sensitive data to remote servers. This choice is beneficial to privacy and it also makes the data availability independent of any network provider. Reliability and fault-tolerance. The MEDIWARN communication system needs to provide high reliability and availability, as the monitoring must work and the patients' vital parameters must be delivered to the MEDIWARN Virtual Biosensor even in the case of faults. Consequently, negative conditions, such as interference or faults in one or multiple nodes, shall not make the communication network fail. For these reasons, suitable mechanisms, such as retransmissions and node/path redundancy, are needed. In addition, the host running the MEDIWARN Virtual Biosensor must be fault-tolerant, thus hardware and software redundancy have to be introduced. Mobility. The MEDIWARN system involves handheld devices, i.e., mobile nodes. As a consequence, mechanisms to support node mobility are required. Multi-hop communication. The target application may demand large-scale communication network deployments, and therefore the monitors can be multiple hops away from the MEDIWARN Virtual Biosensor. This requires the adoption of a number of relay nodes, i.e., intermediate nodes that receive the messages intended for other nodes and forward them up to their final destination. The number and placement of such relay nodes must be accurately evaluated according to some metrics, such as the coverage range of each node and the number of senders in a specific area. Moreover, a trade-off between the number of relay nodes and the network reliability (node/path redundancy) has to be found. In fact, while a higher number of relay nodes provides multiple paths for each message transmission from a source to the intended destination, and therefore alternative ways to reach the destination even in the case of faults (e.g., a node failure), it entails higher costs in terms of network devices and energy consumption. Routing. In the MEDIWARN network architecture, routing, i.e., the process of selecting one or multiple paths for message forwarding, is a critical issue. In fact, the routing protocol should not introduce a significant overhead, in terms of network workload, so as not to impair the communication network performance in terms of end-to-end delay. This turned out to be a research question, as explained in the following subsection. The solution proposed here is presented in Section 5. Real-time.
The MEDIWARN system continuously collects and processes data in order to promptly alert the medical staff. Consequently, bounded delivery times must be guaranteed for the messages carrying such data. In particular, the Virtual Biosensor considers the monitor transmission period as a soft deadline on the message delivery time, which can be occasionally exceeded without compromising proper system operation. However, any data delivered to the receiver with a delay longer than the transmission period are not useful for real-time monitoring, and therefore they are not forwarded to the Virtual Biosensor. This aspect, combined with the requirements on routing previously mentioned, also poses a research challenge. In the following, the design choices corresponding to the above-mentioned requirements are described in detail. Design Solutions Communication technology. The IEEE 802.11 (WiFi) technology offers high bandwidth and a high quality of service [35,36]. Compared to other high-bandwidth wireless technologies, such as cellular networks, WiFi is locally managed and its availability does not depend on any network provider. For these reasons, WiFi is the technology chosen for the MEDIWARN system. In addition, the use of IEEE 802.11-based networks makes it possible to keep sensitive data within the hospital intranet (instead of sending them through the Internet), thus meeting one of the previously discussed MEDIWARN system requirements. The IEEE 802.11 standard supports both the infrastructure operating mode and the ad-hoc one, but the requirements of the MEDIWARN system lead to the use of WiFi in ad-hoc mode, as it offers higher flexibility and higher fault tolerance. Such a choice meets both the high data rate and local communication requirements. In fact, the ad-hoc mode allows each end node to communicate directly with the other end nodes within the same coverage area and supports link redundancy. This way, mesh topologies are supported and the same message can be transmitted over multiple paths. However, WiFi in ad-hoc mode increases the network management complexity. Redundancy mechanisms. In order to guarantee message delivery even in the case of faults (e.g., intermediate node failures) or corrupted transmissions due to interference or noise, both spatial redundancy and retransmission mechanisms were adopted in the MEDIWARN system. This way, the reliability and fault-tolerance requirements are met. Spatial redundancy, allowing for transmissions on multiple paths, increases the probability that the messages will be received correctly. The retransmission mechanism is based on end-to-end acknowledgements (acks), and therefore a missing ack after a message transmission indicates that something went wrong and causes the retransmission of the unconfirmed message (a minimal sketch of this mechanism is given below). Mesh topology with mobility support. According to the hospital organization, the MEDIWARN system may require a large-scale deployment that includes mobile nodes. As a result, the wireless network needs to manage a dynamic topology [37] and multi-hop communications. For this reason, a mesh topology with a proper number of relay nodes is needed to guarantee total coverage of the hospital units included in the MEDIWARN system. The number of relay nodes needed in a specific application scenario depends on multiple parameters, among them the area to be covered, the devices used and their transmission power, the presence of obstacles, interference and signal attenuation, and the reliability level required by the application.
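The following minimal Python sketch illustrates the end-to-end acknowledgement mechanism described above under "Redundancy mechanisms"; the class, the timeout value and the send_fn callback are hypothetical assumptions of ours, not taken from the MEDIWARN code base.

    import time

    class ReliableSender:
        # End-to-end ack with timeout-driven retransmission (illustrative sketch).
        def __init__(self, send_fn, ack_timeout=0.5):
            self.send_fn = send_fn          # function that actually transmits a frame
            self.ack_timeout = ack_timeout  # seconds to wait before retransmitting
            self.pending = {}               # seq_n -> (message, last_tx_time)
            self.seq_n = 0

        def send(self, message):
            self.seq_n += 1
            self.pending[self.seq_n] = (message, time.monotonic())
            self.send_fn(self.seq_n, message)
            return self.seq_n

        def on_ack(self, seq_n):
            # the receiver confirmed this message end-to-end: stop tracking it
            self.pending.pop(seq_n, None)

        def check_timeouts(self):
            # retransmit every unconfirmed message whose ack is overdue
            now = time.monotonic()
            for seq_n, (message, last_tx) in list(self.pending.items()):
                if now - last_tx > self.ack_timeout:
                    self.pending[seq_n] = (message, now)
                    self.send_fn(seq_n, message)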
The mesh strategy also provides the nodes with mobility support, as the handheld devices can freely move within the area without losing the local wireless connection. The wireless nodes, therefore, dynamically establish all the possible connections between each other to create a mesh topology and support multiple transmission paths for messages from the source to the intended destination. Routing protocol. The MEDIWARN system includes mobile nodes, i.e., the handheld devices of the medical staff on the move within the hospital unit and the patients' monitors, which can be relocated according to the hospital unit needs. Consequently, a dynamic routing protocol is considered the best option for MEDIWARN. This way, message forwarding is based on the current network conditions and the routing protocol is able to route around faults, such as node failures or loss of connection between nodes. Several routing protocols were standardized for ad-hoc wireless networks [38]. Among them, the Ad-hoc On-demand Distance Vector (AODV) and the Dynamic Source Routing (DSR) are reactive protocols that, on demand, select a path from the source to the destination. As discussed in ref. [38], although these approaches entail a low overhead in terms of the workload introduced by the routing message exchange, they increase the network latency. Since MEDIWARN requires low-latency communications, the above-mentioned protocols are not suitable solutions, whereas proactive or hybrid routing protocols could be considered. However, both the Destination-Sequenced Distance-Vector Routing (DSDV) and the Temporally Ordered Routing Algorithm (TORA), as discussed in ref. [38], would significantly increase the network workload in the MEDIWARN system. Another option could be the hybrid routing protocol for wireless mesh routing that is defined in the IEEE 802.11s amendment [39,40]. However, IEEE 802.11s requires specific hardware/software features that may not be supported by a number of COTS devices. Due to these limitations of the state-of-the-art solutions previously mentioned, a custom routing protocol is proposed here for the MEDIWARN system. The protocol, called the MultiPath Routing Protocol for MEDIWARN (MP-RPM), is described in Section 5 and aims to support reliable transmissions on a dynamic network, while limiting the network overhead due to the control messages. Being totally hardware-independent, the MP-RPM can be used on any COTS device without any hardware/software modifications (unlike the IEEE 802.11s standard). Soft real-time communications support. Soft real-time messages have deadlines that are taken into account when routing and forwarding messages, but without strict guarantees, so an occasional deadline miss may happen and is tolerated. However, the messages carrying, for example, a patient's physiological parameters, if not delivered on time, will not be used for real-time monitoring and will not be displayed on the screen that shows the patient's current conditions. To minimize the number of deadline misses, here, we propose an IEEE 802.11-based mesh network that includes, on the relay nodes, suitable mechanisms to improve the soft real-time performance of the MEDIWARN system. Each relay node implements a priority queue of outbound messages. When a node receives a message to forward, the message is inserted in the queue. Message priority is assigned according to a configurable criterion, for example, the hop count, i.e., the number of hops that the message has traversed so far.
This way, the messages that traverse more links to reach the destination are favored, to compensate for the longer path. In addition, the custom routing protocol (i.e., the MP-RPM) proposed here allows choosing the best next hop for each message to be forwarded based on a route selection algorithm. In particular, every time a message needs to be forwarded by a relay node, the latter searches its routing table for all the possible routes to the destination. The routes are sorted from best to worst according to a configurable metric, e.g., the hop distance, and the relay node selects the first path to forward the message. This way, each message follows the best path between the sender and the receiver based on a specific metric. Note that, as discussed in Section 5.2, the MP-RPM implements multipath routing, and therefore each message can be sent over multiple paths to provide redundancy and increase the protocol reliability. Consequently, each relay node selects the first n_path paths through the route selection algorithm. Summarizing, the MEDIWARN system needs a simple, lightweight and configurable routing protocol able to both take the message priority into account, to reduce the delay of soft real-time messages, and provide path redundancy and mobility support without significantly increasing the network workload and latency. The solution proposed here is described in detail in the following section. The MultiPath Routing Protocol for MEDIWARN (MP-RPM) Here, we consider a network made up of three different node types, i.e., end nodes, relay nodes and the sink. The end nodes are unaware of the network routing logic, so they broadcast the data acquired by the sensors without any path (or route) selection. The relay nodes are the core of the MP-RPM, as they take care of message forwarding in the network, from the source to the destination. The sink, i.e., the MEDIWARN Virtual Biosensor in Figure 1, is the network collector, i.e., the destination node of all the sensor data sent by the end nodes. The MP-RPM works over the medium access control (MAC) layer and uses standard IEEE 802.11 frames and the EUI-48 address format. The relay nodes are set to receive all frames in promiscuous mode (regardless of their destination address). Finally, the end nodes and their applications are totally unaware of the network topology and the routing protocol used, i.e., they simply send messages with the sink address as the destination. The relay nodes are in charge of forwarding these messages to the sink. This way, as the end nodes do not take part in the routing decision, any node can be supported, regardless of the high-level application that runs on it. The MP-RPM can be split into two phases, i.e., initialization (init) and data exchange, as described below. Init Phase The network nodes are initially unaware of the network topology, so they do not know the possible paths to forward the messages to their destination. Each node, with the exception of the end nodes, needs to build a routing table (rTable) during the init phase to keep track of the network topology. At the end of the init phase, each node will be able to choose, according to a configurable metric, the best path to forward the messages to their destination. Each entry of the routing table of a node A contains the following data: the destination node (B) address, the number of hops (n) between A and B and the address of the node to which the message must be forwarded in order to reach B through n hops (i.e., the next hop).
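As a concrete illustration of this routing table and of the route selection step described above, consider the following minimal Python sketch; all names are ours and purely illustrative, since the paper does not publish the MP-RPM source code.

    from dataclasses import dataclass

    @dataclass
    class RouteEntry:
        dst: str        # destination node address (EUI-48 string)
        hops: int       # number of hops to reach dst via next_hop
        next_hop: str   # neighbour to which the message is forwarded

    class RoutingTable:
        def __init__(self):
            self.entries = []

        def search_path(self, dst):
            # all known routes towards dst
            return [e for e in self.entries if e.dst == dst]

        def best_paths(self, dst, n_path, metric=lambda e: e.hops):
            # sort the routes from best to worst according to a configurable
            # metric (hop distance here) and keep the first n_path of them
            return sorted(self.search_path(dst), key=metric)[:n_path]

For instance, best_paths(sink_addr, 2) would return the two routes over which a relay forwards a message in the two-path configuration evaluated in Section 6.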
Since the network topology can change, the init phase needs to run periodically, with a configurable period, here called upd_period. Its value depends on several aspects, such as the number of mobile nodes, their mobility pattern and the presence of moving obstacles. A lower value of upd_period offers a higher ability to react to network changes. However, as upd_period impacts the overhead introduced by the routing protocol, a trade-off between the overhead introduced by the MP-RPM and its reaction rate to network changes must be determined. During this phase, the sink and the relay nodes run Algorithm 1. Both the sink and the relay nodes transmit an init message containing the address of the source node (i.e., the originating node of the init message), a number-of-hops field (hopCnt), which is initially set to 0 by the source node and then increased every time the init message is forwarded, and a sequence number (seqN). During the init phase, upon reception of an init message, both the sink and the relay nodes update their routing table as follows. • If the routing table does not contain an entry that specifies a route to the init message source node, then a new entry is added. • If the routing table contains an entry that specifies a route to the init message source node, but the next hop is a different node than the one that just forwarded the init message from the same source, then a new entry is added. • If the routing table includes an entry with a route to the init message source node and the same next hop, but with a higher number of hops than the one contained in the received init message (i.e., the bestPath() function returns false), then the routing table entry is updated with the new value of the number of hops. The relay nodes, i.e., the nodes for which the function isRelayNode() returns true, forward the init messages received from the other nodes (i.e., both the relay nodes and the sink) according to a controlled flooding mechanism that avoids retransmitting the init messages that have already been forwarded. The end nodes discard any init message, as they are totally unaware of the network topology and the routing protocol. Data Exchange Phase At the end of the init phase, each node has built its own routing table and the data exchange phase begins. In this phase, application messages, such as the ones containing the vital signs of a monitored patient, are sent by the end nodes to the sink. Each application message is encapsulated into an Ethernet frame that contains both the source and the destination addresses. The Ethernet frame is then converted into an IEEE 802.11 frame. The relay nodes encapsulate the message in a specific frame format that contains both a sequence number, which in combination with the source address uniquely identifies the frame in the network, and the message type, i.e., routing message or application message. The application frame is then de-encapsulated at the destination, i.e., the sink node. During the data exchange phase, each end node periodically (with a configurable period) acquires the vital parameters of a patient and broadcasts them. The end nodes do not implement any routing mechanism. In fact, running Algorithm 2 is up to the relay nodes, which are in charge of forwarding the messages to the destination.
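Continuing the sketch above, the following hypothetical RelayNode class implements the three routing-table update rules of the init phase and the duplicate-suppressing multipath forwarding of Algorithm 2, as described in the text; again, all names are illustrative assumptions.

    class RelayNode:
        def __init__(self, rtable, n_path=2):
            self.rtable = rtable
            self.n_path = n_path
            self.seen = set()   # (source address, seqN) pairs already forwarded

        def on_init(self, src, hop_cnt, from_addr):
            # init phase: apply the three routing-table update rules
            for e in self.rtable.search_path(src):
                if e.next_hop == from_addr:
                    # same source and same next hop: keep the smaller hop count
                    if hop_cnt + 1 < e.hops:
                        e.hops = hop_cnt + 1
                    return
            # no entry for this source yet, or a different next hop: add an entry
            self.rtable.entries.append(RouteEntry(src, hop_cnt + 1, from_addr))

        def on_message(self, msg, send):
            # data exchange phase: discard messages that were already forwarded
            key = (msg.src_addr, msg.seq_n)
            if key in self.seen:
                return
            self.seen.add(key)
            # forward the message over the best n_path routes to the destination
            for route in self.rtable.best_paths(msg.dst_addr, self.n_path):
                send(route.next_hop, msg)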
The core route-selection steps of Algorithm 2, as run by a relay node for each received application message m, read:

    fw = check_forwarded(m)
    if (not fw) then
        pathSet = rTable.search_path(m.dstAddr)
        sort(pathSet, criterion)
        for (i = 0; i < nPath; i++) do
            ...

Upon receiving an application message, a relay node checks whether such a message has already been forwarded (in that case, the check_forwarded() function returns true). If so, the message is discarded; otherwise, a route selection algorithm is run. In particular, the relay node searches its routing table for all the possible routes to reach the destination. The routes found are sorted in ascending order according to the number of hops required to reach the destination. Next, the relay node selects the first n_path paths through which to forward the message. Note that the MP-RPM implements multipath routing, i.e., each message is sent over multiple paths, to provide redundancy and increase the protocol reliability. Finally, the message is forwarded to the nxtHop nodes of the selected routing table entries. Once an application message has reached its destination (i.e., the sink), the latter checks whether such a message has already been received. If not, the sink passes it to the application. Note that the acknowledgement is transmitted at the application layer from the sink to the source and it is forwarded to the source node using the same mechanism previously described in Algorithm 2. When the relay nodes receive the application messages sent by the end nodes nearby, they learn which end nodes can be reached through a single-hop communication. This information is inserted in the routing table, by adding proper entries, and is then propagated to all the relay nodes through a suitable message. The medical staff handheld devices are supported by the MP-RPM algorithm, as such devices periodically send (with a configurable period) to the sink a request for updates on the conditions of one specific patient or of all the patients. Consequently, the sink sends the latest data collected from the end node(s) relevant to all the patients or to the specific one. The relay nodes forward these messages to the destination, i.e., the tablet that issued the request. Implementation and Performance Evaluation Here, a proof-of-concept implementation of the MEDIWARN communication system on COTS devices is presented. The aim is to show the feasibility of the proposed network architecture and to assess its performance. As shown in Figure 3, each end node consists of a number of sensors connected to a Philips IntelliVue MP5 monitor that shows the detected vital parameters and sends them through a Raspberry Pi. The MP-RPM is implemented on the Raspberry Pis, which act as relay nodes. The implemented version of the MP-RPM allows setting a static value of upd_period. However, an algorithm to dynamically change upd_period at runtime can be very helpful in a dynamic context and will be addressed in future work. The sink, i.e., the MEDIWARN Virtual Biosensor in the architecture shown in Figure 1, consists of a Raspberry Pi that collects sensor data and a server that processes the data. Since the paper deals with the MEDIWARN network architecture, here, we assess the proposed communication system without addressing the logic implemented by the Virtual Biosensor. Evaluated Scenario The considered scenario, as shown in Figure 4, consists of three hospital rooms and a corridor. Each room hosts two patients, each one with their own monitor.
The patients (each one with a monitor and a Raspberry Pi) represent the end nodes (N1-N6), i.e., the senders, while the MEDIWARN Virtual Biosensor, which is located at the end of the corridor in a dedicated room, is the sink (S), i.e., the receiver. Next to the latter room, there is the monitoring room that hosts the monitoring station. Five relay nodes (R1-R5) are placed in the corridor. The monitors acquire the vital signals of the patients using multiple sensors, as shown in Figure 3, and send them to the MEDIWARN Virtual Biosensor through a Raspberry Pi. The location of the relay nodes is fixed, as shown in Figure 4, to ensure that multiple paths always exist. This way, the messages can be forwarded to the destination even in the case of relay node failures, for fault-tolerance purposes. The relay nodes forward the received messages through the best two paths according to the MP-RPM routing algorithm. The aim of this assessment is to evaluate the timing and reliability of the communication system when the messages are transmitted through two different paths. No retransmission mechanisms are implemented at the application layer. Each end node generates one data flow, whose transmission period is set to 1 s. Each message is 60 bytes long. The sampling and sending times of the end nodes are not synchronized with each other. Each relay node implements a priority transmission queue that contains the outbound messages. The priority of messages can be assigned according to a configurable policy.
In the considered scenario, to keep it simple, we adopted First-In First-Out (FIFO). The duration of the experimental assessment was set to 1800 s, i.e., 30 min. Analysis of the Experimental Results This subsection presents the results of the experimental assessment of the MEDIWARN communication system. The performance metrics used are the Round Trip Time (RTT) and the Packet Loss Ratio (PLR). The RTT is the time difference between the message sending time at the sender and the reception time of the relevant ack at the same node, measured at the application layer, calculated according to Equation (1): RTT = t_ack - t_msg, (1) where t_msg is the sending time of a message and t_ack is the reception time of the corresponding ack. The PLR, measured at the application layer, is expressed as a percentage of the total number of transmitted messages, according to Equation (2): PLR = (n_lostMsg / n_txMsg) × 100 = ((n_txMsg - n_rxMsg) / n_txMsg) × 100, (2) where n_txMsg, n_lostMsg and n_rxMsg are the numbers of transmitted, lost and correctly received messages relevant to an end node. Note that a message is considered lost if either the message or the ack is lost. Table 1 shows the maximum and the average RTT measured for each end node and the corresponding confidence interval calculated at 95%. The RTT gives an estimation of the network timings. As expected, the highest average RTT values are obtained by the end nodes that are furthest from the receiver, i.e., N5 and N6, as their path to the destination and back traverses the highest number of hops. The average RTT measured at the nodes N3 and N4 is slightly lower than the one at nodes N1 and N2, although the latter are closer to the sink. In fact, despite being closer to the sink in terms of hop distance than N3 and N4, the higher channel load in the coverage area of the N1 and N2 nodes negatively impacts the channel backoff time, and therefore the timing performance of the IEEE 802.11 CSMA/CA MAC layer. As a result, on average the RTT increases with the number of hops between the sender and the receiver. However, the channel load also affects the RTT, and therefore slight deviations from the monotonically increasing trend relating the RTT to the number of hops can be observed. In Figure 5, which shows the number of messages transmitted and received by each relay node during the experimental assessment, it can be seen that the highest channel load is found near the relay nodes R1 and R2, while the load significantly decreases with the distance from the sink. Moreover, the results in Table 1 show that, in the assessed scenario, the maximum RTT is in the order of hundreds of milliseconds, and therefore it stays below the transmission period. As a result, all the messages delivered to their destination will be used for real-time monitoring and displayed on the screen that shows the current patients' conditions. Table 2 shows the PLR measured for each end node. The PLR values obtained are always below 4%. As expected, the hop distance of the nodes from the sink affects the PLR, i.e., the higher the distance, the higher the PLR. However, the low values of the maximum RTT allow retransmitting the messages with a low probability of missing their deadlines. The second factor that affects the PLR is the channel load. For instance, the higher PLR value obtained by node N2, compared with that of node N1, is due to the fact that N2 is closer to N3 and R2, and thus experiences a higher channel load than N1. The PLR of node N3, compared to that of node N4, mainly depends on its position, as node N3 is closer to node R2 than node N4, and therefore N3 suffers from a higher channel load than node N4. Finally, the nodes N5 and N6 obtained the highest PLR because, in their case, the component that has the greatest effect on the PLR is the hop distance from the sink. Simulations The proposed network architecture was simulated using the OMNeT++ framework. The simulation model is based on the INET libraries, while the application scenario and the network layer models were implemented from scratch based on the MEDIWARN architecture. The scenario addressed here is the one presented for the experimental assessment described in Section 6.1 and shown in Figure 4. To quantify the impact on the delays experienced by the messages transmitted by the monitors when mobile nodes are active, simulations were also performed with two additional mobile nodes on the move between the routers. Such mobile nodes transmit/receive web traffic to/from the MEDIWARN Virtual Biosensor at exponentially distributed random intervals with a mean of 10 s. The assessed metrics are the maximum and average RTT and the PLR, calculated as in Equations (1) and (2), respectively. The simulation parameters are shown in Table 3. Each simulation was repeated eight times, varying the seed of the random number generators. The configured channel model was the log-normal shadowing model. The channel model parameters were taken from the work in [41], where the channel was modeled based on experimental measurements carried out in a hospital. The RTT simulation results are shown in Table 4. The average RTT was obtained from 8000 samples and the values are shown with the confidence interval calculated at 95%. In the case with no mobile nodes, the maximum RTT values are always lower than 300 ms. Only a few samples showed an RTT higher than 100 ms, and only for the three nodes at the highest hop distance from the sink (i.e., N4-N6). Note that such values are of the same order of magnitude as the experimental results. Conversely, the average RTT increases with the hop distance from the sink. In this case, the simulative results differ from the experimental results by a few milliseconds (no more than 10 ms). Such a difference is due to the packet processing time, which is not considered in the simulations. In the case with mobile nodes, the maximum RTT values are very similar to the case without mobile nodes for the nodes that are closer to the sink, i.e., N1-N4. Conversely, the maximum RTT increases to 385 ms and 471 ms for N5 and N6, respectively, which is still a limited increase. The average RTT values obtained in the case with mobile nodes are very close to the ones obtained in the case without mobile nodes, thus demonstrating that in the MEDIWARN system scenario mobile nodes do not significantly affect the network performance. As far as the PLR is concerned, in all the simulations all the transmitted packets were successfully delivered to the destination, even though a harsh channel was configured. Thanks to the redundant paths, the obtained PLR was zero for all the end nodes. Moreover, to highlight the impact of the routing protocol on the MEDIWARN network performance, we address a comparative assessment of the average RTT and PLR obtained by the MEDIWARN communication system using three different routing protocols, i.e., the MP-RPM presented here, the AODV [11] (an on-demand protocol) and the DSDV [12] (a proactive protocol). The last two protocols are implemented in the INET library. The simulation parameters and the scenario are the same as those used in the first simulation. The results are shown in Figure 6 and Table 5, respectively.
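Both metrics are straightforward to compute from the per-node counters; the following Python fragment simply mirrors Equations (1) and (2) above (the variable names are ours).

    def rtt(t_msg, t_ack):
        # Equation (1): round trip time measured at the application layer
        return t_ack - t_msg

    def plr(n_tx_msg, n_rx_msg):
        # Equation (2): a message is lost if either the message or its ack is lost
        n_lost_msg = n_tx_msg - n_rx_msg
        return 100.0 * n_lost_msg / n_tx_msg

With the 1 s transmission period and the 1800 s run of the experiment, each end node sends 1800 messages, so, for example, plr(1800, 1740) gives about 3.3%, of the same order as the measured values below 4%.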
The results in Figure 6 show that the MP-RPM obtained RTT values comparable to those of the DSDV, although the transmissions in our approach are doubled (i.e., repeated over two different paths). In fact, the PLR results (presented in Table 5) show that the PLR values obtained with the DSDV and the AODV are significantly higher than those obtained with the MP-RPM protocol. The AODV protocol obtained significantly higher RTT values (by tens of milliseconds), because with AODV the routing path has to be discovered on demand before starting the data transmission, thus increasing the data packet delay. The PLR results also show that, for this kind of application, proactive protocols perform better than reactive ones. Finally, thanks to the redundant routing paths, the MP-RPM outperforms the other two protocols in terms of PLR. Conclusions This paper discusses the MEDIWARN communication system and a simple, lightweight and configurable routing protocol specifically designed for the needs of the MEDIWARN system. The MEDIWARN communication system provides mobility support and high reliability, through path redundancy and message replication, thanks to the use of relay nodes and the dynamic routing policies of the MP-RPM. Simulation and experimental results in the assessed scenario demonstrate that the maximum RTT values are always in the order of hundreds of milliseconds, thus fulfilling the requirements of the MEDIWARN system, and that, thanks to the multipath support, the PLR values measured during the experimental assessment were always below 4%. Such values are due to the noisy environment in which the experimental assessment was performed. In fact, when simulating the same scenario using the log-normal shadowing channel model, with parameters obtained from experimental measurements in hospitals, zero packet loss was obtained. One of the limitations of the proposed system is that some relay nodes may eventually become overloaded when they have to relay a high number of transmissions. As the communication system and the routing protocol were designed to be easily extended with new functionalities, to solve this issue, future work will deal with enhancing the MEDIWARN communication system by introducing load balancing techniques [42] in the MP-RPM. In addition, an extensive performance evaluation of the MEDIWARN communication system will be carried out in scenarios with a higher number of rooms and more patients per room, so that a larger deployment of the MEDIWARN communication system will be assessed. Finally, a comparative performance evaluation of several message priority assignment policies will be performed, with the aim of further reducing the delays of the messages transmitted by the nodes that are a large number of hops away from the MEDIWARN Virtual Biosensor. Data Availability Statement: The data underlying this article will be shared on reasonable request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
9,951.2
2021-06-30T00:00:00.000
[ "Medicine", "Computer Science", "Engineering" ]
Measurement of Γ_ee(J/ψ) with the KEDR detector : The product of the electronic width of the J/ψ meson and the branching fractions of its decays to hadrons and to electrons has been measured using the KEDR detector at the VEPP-4M e+e- collider. The obtained values are Γ_ee(J/ψ) = 5.550 ± 0.056 ± 0.089 keV; Γ_ee(J/ψ) · B_hadrons(J/ψ) = 4.884 ± 0.048 ± 0.078 keV; Γ_ee(J/ψ) · B_ee(J/ψ) = 0.3331 ± 0.0066 ± 0.0040 keV. The uncertainties shown are statistical and systematic, respectively. Using the result presented and the world-average value of the electronic branching fraction, one obtains the total width of the J/ψ meson: Γ = 92.94 ± 1.83 keV. These results are consistent with the previous experiments. Introduction The J/ψ resonance, a bound state of cc̄ quarks, was discovered more than forty years ago but its investigation is still topical. Fundamental properties of this meson, including the branching fractions of leptonic and hadronic decays, are important for understanding the quarkonium decay dynamics. The leptonic width of the J/ψ meson is used in calculations of the c-quark mass [1,2] and of the hadronic contribution to the muon g - 2 [3]. It is also used for various calculations of radiative corrections due to the vacuum polarization and the initial-state radiation. The current precision of Γ_ee in the potential models and in the lattice QCD (LQCD) calculations [4,5] is compatible with that of the world-average value [6], and an increase in the experimental precision for this value can be crucial for the further development of the LQCD calculation techniques. Measurements of the J/ψ widths have a long history. They were studied at MarkI [7] and ADONE [8], and later at BES [9], BaBar [10], CLEO [11], KEDR [12,13] and BESIII [14]. Usually Γ_ee is measured in J/ψ decays to hadrons, e+e- or µ+µ- final states, and the obtained value is the product of Γ_ee and the corresponding branching fraction. (Figure 1 shows the VEPP-4M e+e- collider with the KEDR detector.) At present the best accuracy in the determination of Γ_ee has been obtained by the BESIII collaboration [14], based on the Γ_ee · B_µµ(J/ψ) measurement in the initial-state radiation process e+e- → J/ψγ → µ+µ-γ and the B_µµ(J/ψ) branching fraction [6]. The best accuracy of the Γ_ee · B_hadrons value has been reached by combining the result on Γ_ee [6] with B_hadrons from BES [9]. This work continues a series of experiments on measuring the properties of charmonium resonances performed by the KEDR collaboration [12,13,[15][16][17]. In 2010 the partial widths Γ_ee · B_ee(J/ψ) and Γ_ee · B_µµ(J/ψ) were measured with a high accuracy of 2.4% and 2.5%, respectively [12]. In this article we present new results on Γ_ee and Γ_ee · B_hadrons obtained by measuring the cross sections of e+e- → hadrons and e+e- → e+e- as a function of the centre-of-mass (c.m.) energy in the vicinity of the J/ψ resonance with the KEDR detector at the VEPP-4M e+e- collider. The VEPP-4M e+e- collider was designed to operate in the centre-of-mass energy range from 2 to 11 GeV. The peak luminosity of the collider in the J/ψ region is about 2 × 10^30 cm^-2 s^-1 in the 2 × 2 bunch operating mode at a beam current of 1.5 mA. One of the main features of VEPP-4M is the possibility of precise energy determination. Beam energy calibration by the resonant depolarization method (RDM) [20,21] has a relative accuracy of about 10^-6.
The accuracy of the energy interpolation to the data-taking periods is about 10-30 keV [15,17]. The infrared-light Compton backscattering method is used to carry out continuous energy monitoring, with an accuracy better than about 100 keV. The KEDR detector covers a solid angle of 95% of 4π and includes the following main components: the vertex detector (VD), the drift chamber (DC), the scintillation time-of-flight counters, the aerogel threshold Cherenkov counters, the electromagnetic calorimeter, with a barrel part made of liquid krypton (LKr) and CsI crystals in the end-caps, and the muon system inside the magnet yoke. A superconducting solenoid provides a longitudinal magnetic field of up to 0.7 T. The detector also contains a scattered-electron tagging system for studies of two-photon processes. The on-line luminosity measurement is provided by two independent single-bremsstrahlung monitors. Experiment and data sample Our analysis is based on the same data set, with an integrated luminosity of 230 nb^-1, as that used in the KEDR analysis of the leptonic channels [12]. During the scan the data were collected at 11 energy points, as shown in figure 2, which allows a fit of the resonance shape and a determination of the nonresonant background contributions to be performed. The full data sample corresponds to about 250 thousand produced J/ψ mesons. The beam energy was measured by the resonant depolarization method [20]. Twenty-six calibrations were carried out during the scan, before and after data taking at each energy point. Between the calibrations the beam energy was interpolated with an accuracy better than 15 keV. e+e- cross section in the vicinity of a narrow resonance The cross section for the annihilation process e+e- → hadrons in the vicinity of a narrow resonance can be presented in the form of ref. [16], where W is the c.m. energy, M is the mass of the resonance, Γ is its total width, α is the fine structure constant and R is the ratio σ(e+e- → hadrons)/σ(e+e- → µ+µ-) outside of the resonance region. The truncated vacuum-polarization operator Π_0 does not include the contribution of the resonance itself. The radiative correction δ_sf can be obtained from the structure-function approach of ref. [22]; it depends on the electron mass m_e through a function f defined in eq. (4.4). The parameter λ in eq. (4.1) characterizes the strength of the interference effect in the inclusive hadronic cross section. According to ref. [16], the expression for λ can be written as a sum over all exclusive hadronic modes. Here and below, cos φ_m and sin φ_m are the cosine and sine of the relative phase of the strong and electromagnetic amplitudes for the mode m, averaged over the phase space of the products, b_m = R_m/R is the branching fraction of the corresponding continuum process, B_ee is the probability of the decay to an e+e- pair and B_hadrons is the total decay probability to hadrons. Due to the resonance-continuum interference, the effective hadronic width Γ̃_h can differ from the true hadronic partial width Γ_hadrons = Σ_m Γ_m. In this analysis it was assumed that the relative phases of the strong and electromagnetic amplitudes in different decay modes are not correlated. Consequences and experimental verification of this assumption are discussed in detail in refs. [16,17]. The differential e+e- cross section is calculated from eq. (4.7), where s = W^2 and t = -W^2 (1 - cos θ)/2 are the c.m. energy squared and the momentum transfer squared, and θ is the electron scattering angle. The first term in eq.
(4.7) represents the QED cross section obtained with the Monte Carlo technique [23,24]. The second term is responsible for the resonance contribution and the third one for the interference. The accuracy of formula (4.7), about 0.1%, is sufficient for this work and is confirmed by the more precise expressions given in [25]. A detailed description of the extraction of the Γ_ee(J/ψ) · B_hadrons(J/ψ), Γ_ee(J/ψ) · B_ee(J/ψ) and Γ_ee(J/ψ) values is given in section 5.5. MC simulation We used MC samples of J/ψ inclusive decays and of the continuum multihadron events to obtain the detector efficiency. The samples were generated with a tuned version of the BES generator [26] based on JETSET 7.4 [27]. The procedure of the parameter tuning is discussed in detail in section 6.2. The generated events were reweighted to ensure that the branching fractions of the most probable decay modes correspond to the results of the PDG fit [6]. MC samples of Bhabha events required for the luminosity determination were simulated using the BHWIDE [23] and MCGPJ [24] generators. The generated MC events were then processed with the detector simulation package based on GEANT, version 3.21 [28], and reconstructed under the same conditions as the experimental data. During the data taking in 2005 there was an additional online condition: the number of hits in the VD should not exceed 60, which corresponded to 10 charged tracks. Due to substantial crosstalk in the VD electronics, there was some loss of signal events. The effect of the crosstalk was carefully simulated. To take into account the signal and background coincidences, a trigger from arbitrary beam crossings was implemented. The events recorded with this "random trigger" were superimposed with simulated events. Trigger requirements The trigger consists of two hardware levels: the primary trigger (PT) and the secondary trigger (ST) [29]. The primary trigger required signals from two or more non-adjacent scintillation counters or an energy deposition in the endcap calorimeter of at least 100 MeV. A veto from the CsI calorimeter crystals closest to the beam line was used to suppress the machine background. The conditions of the secondary trigger were rather complicated, and were satisfied by events with two tracks in the vertex detector and the drift chamber or with a single track which deposited more than 70 MeV in the barrel calorimeter. During the offline analysis all events (both recorded in the experiment and simulated) were required to pass through the software event filter. It used the digitized response from the detector subsystems and applied tighter conditions on its input in order to decrease the effects of the calorimeter energy threshold and of possible hardware-trigger instability. Luminosity determination For the absolute luminosity determination, e+e- events in the barrel LKr calorimeter [19] were used, taking into account the contribution of J/ψ decays into e+e- (see eq. (4.7)). The final-state radiation (FSR) effects are considered using the PHOTOS package [30]. The J/ψ → e+e- cross section, shown in figure 2b, is obtained by subtracting the contribution of Bhabha events from the total e+e- → e+e- cross section.
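For orientation, in the limit of zero beam-energy spread and neglecting interference and radiative corrections, the resonant part of such cross sections reduces to the textbook Breit-Wigner form; the following simplified expressions are ours and are not the paper's eqs. (4.1) and (4.7):

\[
\sigma_{\mathrm{BW}}(W) = \frac{3\pi}{M^{2}}\,
\frac{\Gamma_{ee}\,\Gamma_{f}}{(W-M)^{2}+\Gamma^{2}/4},
\qquad
\int \sigma_{\mathrm{BW}}(W)\,\mathrm{d}W = \frac{6\pi^{2}}{M^{2}}\,\Gamma_{ee}\,B_{f},
\]

where Γ_f = Γ B_f is the partial width of the observed final state. The energy-integrated resonant cross section is directly proportional to the product Γ_ee · B_f, which is why such products are the natural observables of a resonance scan like the one performed here.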
The e+e- event selection includes the following criteria in addition to the trigger requirements: • two clusters within the polar angle range 40° < θ < 140°, each with an energy E_1,2 larger than 700 MeV; • the energy deposition outside of those two clusters smaller than 10% of the total energy E_cal deposited in the calorimeter; • acollinearities of the polar (∆θ) and azimuthal (∆ϕ) angles smaller than 15°; • an event sphericity S_ch, calculated with charged particles, smaller than 0.05; • two or three tracks in the drift chamber coming from the interaction point: the impact parameter with respect to the beam axis ρ < 0.5 cm, the coordinate of the point of closest approach |z_0| < 13 cm and the transverse momentum P_t > 100 MeV. The cosmic background was additionally suppressed with the muon system, by veto signals from opposite (or adjacent-to-opposite) octants or by more than three layers fired in one octant. Alternatively, cosmic events were suppressed with the time-of-flight condition. Figure 3 shows a comparison between the e+e- → e+e- data and the MC simulation. The distribution in the electron scattering angle for the selected e+e- events is shown in figure 4. The angular distributions of events from Bhabha scattering and from J/ψ decay are different, which allows us to separate those contributions at each data point. Selection of hadronic events In our analysis the following selection requirements are applied: • a total energy deposition in the calorimeter 700 MeV < E_cal < 2500 MeV; • more than 15% of the total energy deposited in the barrel LKr calorimeter, E_LKr/E_cal > 0.15; • at least one track with ρ < 0.5 cm, |z_0| < 13 cm and P_t > 100 MeV; • at least three particles in the detector, including tracks in the drift chamber and calorimeter clusters not associated with any track; • a requirement on the ratio of the Fox-Wolfram moments [31]. The requirements on the energy deposition separate hadronic events from backgrounds: the upper requirement reduces the fraction of e+e- events and the lower one suppresses µ+µ- and machine backgrounds. The requirement on the ratio of the Fox-Wolfram moments H_2/H_0 is significant in reducing the background from quasi-collinear e+e- events with additional particles from radiation and interaction with the detector material. Cosmic events were additionally suppressed as in the selection of e+e- events. Figure 5 shows a comparison between the most important event characteristics obtained in the experiment and in the simulation. Fitting of the data We performed a combined fit of the data on hadron and e+e- production in the energy range of the J/ψ resonance. Experimental runs were grouped into points according to the run energy. The collision energy at each point was determined by interpolating the beam energy measurements and assuming the e+e- beam energy symmetry W = 2E_beam. The sample of e+e- events was subdivided into 10 equal angular intervals in the range from 40° to 140°. The numbers of hadronic events N_i and leptonic events n_ij observed at each energy point and in each angular interval were fitted simultaneously as a function of the collision energy and the electron scattering angle using a minimizing function in which N_i^exp/theor and n_ij^exp/theor are the experimentally measured and theoretically calculated numbers of hadronic and Bhabha events, respectively. The theoretically calculated event numbers were obtained as follows.
(Figure 5: properties of hadronic events produced in the vicinity of the J/ψ resonance: the number of tracks from the interaction point N_IP, the total number of particles N_part, the energy deposited in the calorimeter E_cal, the inclusive P_t and θ track distributions and the ratio of the Fox-Wolfram moments H_2/H_0. The points represent the experimental data, the histograms correspond to the simulation of J/ψ decays; all distributions are normalized to unity.) The observed cross sections σ_hadr(W_i) and σ_ee(W_i, θ_j) are determined from eq. (4.1) and eq. (4.7), respectively, where the cross section of the annihilation process near the J/ψ resonance is convolved with a Gaussian distribution with the energy spread σ_W. The pre-exponential factor g differs from unity due to some accelerator-related effects. Its impact on the results of the measurements is considered in section 5.4. The continuum cross section is almost constant in the vicinity of a narrow resonance and can be parametrised accordingly. In eqs. (5.3) and (5.4), ε_hadr and ε_ee(θ) are the detection efficiencies, and their dependence on the beam energy can be neglected. The luminosity L_i at the i-th energy point was determined as L_i = R_L · L(E_i), where L(E_i) is the integrated luminosity measured by the bremsstrahlung luminosity monitor at the i-th energy point and R_L is an absolute luminosity calibration factor. The statistical uncertainties of the parameters Γ_ee(J/ψ), Γ_ee(J/ψ) · B_ee(J/ψ) and Γ_ee(J/ψ) · B_hadrons(J/ψ) are strongly correlated. To determine these uncertainties accurately, the fit was performed with two sets of free parameters. In the first set the parameters Γ_ee(J/ψ) and Γ_ee(J/ψ) · B_ee(J/ψ) were floating. In the second set the parameters Γ_ee(J/ψ) · B_hadrons(J/ψ) and Γ_ee(J/ψ) · B_ee(J/ψ) were floating. Both sets contained auxiliary free parameters: the absolute luminosity calibration factor R_L, the resonance mass m(J/ψ), the beam energy spread σ_W and the continuum contribution σ_0. To relate the values of Γ_ee(J/ψ), Γ_ee(J/ψ) · B_ee(J/ψ) and Γ_ee(J/ψ) · B_hadrons(J/ψ), the ratio Γ_ee/Γ_µµ(J/ψ) = 1.0022 ± 0.0065 was fixed from the KEDR result [13]; the variation of the ratio within its uncertainty introduces a negligible systematic uncertainty in the measured values. The results obtained from the fits are listed in table 1. The J/ψ mass value is in good agreement with that published earlier by the KEDR collaboration [17]. Study of systematic uncertainties The main contributions to the systematic uncertainties of the Γ_ee(J/ψ), Γ_ee(J/ψ) · B_hadrons(J/ψ) and Γ_ee(J/ψ) · B_ee(J/ψ) values discussed in detail in this section were merged into five categories: absolute luminosity measurement, hadron decay simulation, detector effects, accelerator effects and theoretical uncertainties. (Table 2: systematic uncertainties of the luminosity determination in %.) Luminosity uncertainties The major sources of the absolute luminosity determination uncertainties are presented in table 2. The LKr calorimeter was aligned with respect to the drift chamber using DC-reconstructed tracks from cosmic events. The position of the interaction point and the beam-line direction in the coordinate system of the detector were found using the primary-vertex distribution of hadronic events. The luminosity uncertainty due to the inaccuracy of the alignment was evaluated by applying a one-sigma shift during the reconstruction. The obtained uncertainty is less than 0.2%.
Study of systematic uncertainties

The main contributions to the systematic uncertainties of the Γ_ee(J/ψ), Γ_ee(J/ψ)·B_hadrons(J/ψ) and Γ_ee(J/ψ)·B_ee(J/ψ) values, discussed in detail in this section, were merged into five categories: absolute luminosity measurement, hadron decay simulation, detector effects, accelerator effects and theoretical uncertainties.

Table 2. Systematic uncertainties of the luminosity determination in %.

Luminosity uncertainties

The major sources of uncertainty in the absolute luminosity determination are presented in table 2. The LKr calorimeter was aligned to the drift chamber using DC-reconstructed tracks from cosmic events. The position of the interaction point and the beam-line direction in the coordinate system of the detector were found using the primary-vertex distribution of hadronic events. The luminosity uncertainty due to inaccuracy of the alignment was evaluated by applying a one-sigma shift during the reconstruction; the obtained uncertainty is less than 0.2%. The uncertainty due to imperfect simulation of the calorimeter response was estimated by varying the sensitivity to energy-loss fluctuations between the LKr calorimeter electrodes and appears to be less than 0.3%. The detection efficiency function for electrons, ε_ee(θ), was calculated with the J/ψ → e+e− simulation, with the θ angle measured either in the drift chamber or in the LKr calorimeter; the difference between the results does not exceed 0.3%. The MC statistical uncertainty corresponds to 0.15%. To estimate the uncertainty of the e+e− → e+e− scattering cross section calculated from eq. (4.7), two event generators, BHWIDE and MCGPJ, were used; the resulting difference in the Γ_ee(J/ψ) value was 0.37%. The luminosity spread was estimated as the difference of the results from two independent luminosity monitors and was about 0.4%. This effect was studied with a toy MC; the corresponding Γ_ee(J/ψ) and Γ_ee(J/ψ)·B_hadrons(J/ψ) uncertainties were about 0.04%, and the Γ_ee(J/ψ)·B_ee(J/ψ) uncertainty was about 0.06%. In addition, systematic effects related to luminosity were evaluated by varying the selection requirements. The requirement on the polar angle θ was varied over a broad range, and the corresponding change in the number of selected Bhabha events reached 50%. All variations are summarized in table 3. These effects can originate from the sources already considered and from statistical fluctuations; despite this, we included them in the total uncertainty to obtain a conservative error estimate.

Uncertainty due to imperfect simulation of J/ψ decays

The next important source of uncertainty in the Γ_ee(J/ψ) value is the imperfect simulation of J/ψ decays. To tune the simulation procedure and obtain a reliable estimate of the systematic uncertainty, we follow the method used in ref. [16]. Let us discuss the idea of the method in brief. Assume that we have a perfect simulation procedure capable of reproducing all event characteristics and the correlations between them, but with a set of internal parameters to be tuned. By varying one of the parameters, one traces the change of the mean value of some observable, for example the mean multiplicity ⟨N_IP⟩, and of the detection efficiency ε. The simulated value of the observable coincides with the measured one at the optimal setting of the parameter. For small variations the detection efficiency depends linearly on the mean multiplicity; therefore the accuracy of the efficiency determination is δε = (∂ε/∂⟨N_IP⟩)·δ⟨N_IP⟩, where δ⟨N_IP⟩ is the uncertainty of the experimental value of the multiplicity. In the case of several simulation parameters to vary, one obtains a set of ε(⟨N_IP⟩) trajectories crossing at the point which corresponds to the experimental observable. In practice the simulation procedure is not perfect, so instead of one intersection point we have the situation depicted in figure 6. The uncertainty of the detection efficiency grows due to the difference in trajectory slopes obtained with variations of the simulation parameters. The estimate of the uncertainty interval corresponds to the vertical size of the shaded box in figure 6, while the horizontal size is determined by the track multiplicity uncertainty in the experiment. To obtain the results presented in figure 6, we iterated as follows: vary one of the JETSET parameters, then modify some complementary parameter to achieve good agreement in the observed charged multiplicity.
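A minimal numerical sketch of this tuning procedure follows; the (⟨N_IP⟩, ε) scan points and the measured multiplicity below are invented for illustration, with real inputs coming from the JETSET parameter scans.

```python
import numpy as np

# Hypothetical (mean multiplicity, efficiency) points from three parameter scans.
scans = {
    "PARJ(21) scan": ([3.8, 4.0, 4.2], [0.748, 0.742, 0.736]),
    "PARJ(41) scan": ([3.9, 4.0, 4.1], [0.744, 0.742, 0.740]),
    "no showers":    ([3.7, 4.0, 4.3], [0.747, 0.742, 0.737]),
}
n_exp, dn_exp = 4.00, 0.05   # measured <N_IP> and its uncertainty (invented)

eff_at_exp, slope_err = [], []
for name, (n, eff) in scans.items():
    slope, intercept = np.polyfit(n, eff, 1)       # linear eps(<N_IP>) trajectory
    eff_at_exp.append(slope * n_exp + intercept)   # efficiency at measured <N_IP>
    slope_err.append(abs(slope) * dn_exp)          # d_eps = |d eps/d<N_IP>| * d<N_IP>

# Vertical size of the uncertainty box: trajectory spread plus propagated d<N_IP>.
eps = float(np.mean(eff_at_exp))
d_eps = 0.5 * (max(eff_at_exp) - min(eff_at_exp)) + max(slope_err)
print(f"eps = {eps:.3f} +/- {d_eps:.3f}")
```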
The values of the mean multiplicity and the detection efficiencies obtained for various settings of the parameters are summarized in table 4. The main JETSET parameters to vary are PARJ(21), PARJ(33), PARJ(37), PARJ(41) and PARJ(42), corresponding to σ_PT, W_stop, δW_stop and the two parameters a and b of the Lund fragmentation function, respectively. The parameter σ_PT sets the width of the Gaussian transverse-momentum distributions of primary particles appearing during fragmentation, while W_stop is the energy of the jet system below which a final hadron pair is produced; this energy is smeared with a relative width δW_stop. Besides variations of the fragmentation-function parameters, we tried fragmentation with parton showers switched off. The charged multiplicity was selected for tuning as the most sensitive event characteristic. In addition to it, the simulated distributions of charged-track sphericity, Fox-Wolfram moments, energy deposited in the calorimeter, and inclusive event characteristics such as momentum and azimuthal and polar angles were checked for agreement with experimental data. Histogram shapes were compared using a Kolmogorov test, and simulated samples that gave Kolmogorov test values lower than 0.6 were rejected. The multihadron efficiency was averaged over the efficiencies corresponding to the experimentally measured charged multiplicity ⟨N_IP⟩ in figure 6 and equals 74.2 ± 0.4%.
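The histogram-shape comparison can be sketched with a two-sample Kolmogorov-Smirnov test; the samples below are synthetic stand-ins for a measured and a simulated distribution, and treating SciPy's p-value as the "Kolmogorov test value" compared against the 0.6 threshold is an assumption about the statistic used.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
data_sample = rng.normal(4.0, 1.2, 5000)   # stand-in for a measured distribution
mc_sample   = rng.normal(4.1, 1.2, 5000)   # stand-in for a simulated sample

stat, pvalue = ks_2samp(data_sample, mc_sample)
accepted = pvalue >= 0.6   # reject simulated samples with test values below 0.6
print(f"KS statistic = {stat:.4f}, p-value = {pvalue:.3f}, accepted = {accepted}")
```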
For the calculation of the mean multiplicity some track selection criteria are required. Their choice leads to an additional uncertainty on the detection efficiency, which is smaller than 0.3%. The track reconstruction efficiency is not exactly the same for the experimental data and the simulation. The difference was studied using Bhabha events and cosmic tracks, and an appropriate correction was introduced into the detector simulation with an uncertainty smaller than 0.1%. For reweighting we used significant and well-measured J/ψ decay branching fractions. To check the systematic uncertainty, the remaining branching fractions were added to the list and the corresponding MC event weights were recalculated. This leads to an uncertainty of less than 0.1% on the measured Γ_ee(J/ψ) and Γ_ee(J/ψ)·B_hadrons(J/ψ) values. All systematic uncertainties due to imperfect simulation are summarized in table 5.

Table 6. Sources of detector-related systematic uncertainties in %.

Detector-related uncertainties

The major sources of the detector-related systematic uncertainties in the Γ_ee(J/ψ) width are listed in table 6. To estimate the uncertainty related to the cosmic background, the condition on the muon-system veto was replaced with a condition on the average ToF time, with the number of hits in the muon system not larger than two. The difference was found to be about 0.3% for Γ_ee(J/ψ) and Γ_ee(J/ψ)·B_hadrons(J/ψ) and 0.1% for Γ_ee(J/ψ)·B_ee(J/ψ). In addition, we used two models of nuclear interaction during simulation, the GHEISHA [32] and FLUKA [33] packages as implemented in GEANT 3.21; the variation of the resulting Γ_ee value was about 0.2%. Two methods were used to achieve data and MC agreement in the momentum and angular resolutions: we scale either the assumed systematic uncertainties of the x(t) determination or the spatial resolution of the drift chamber. That gives a 0.2% systematic uncertainty. The trigger inefficiency includes three contributions. The inefficiency of the time-of-flight counters used in the primary trigger was determined with specially selected e+e− → e+e− and cosmic events and equals 0.3%. A systematic uncertainty due to crosstalk in the VD electronics was evaluated as the difference of results obtained with two sets of VD simulation parameters derived from cosmic and Bhabha events; it was about 0.2%. The last effect is a veto from the CsI crystals near the beam line, which was estimated by varying the corresponding trigger thresholds and equals 0.3%. The effects of possible sources of detector-related uncertainties were also evaluated by varying the event selection requirements. The minimum and maximum total energies deposited in the calorimeter were varied to 500 and 2700 MeV, respectively. The requirement on the Fox-Wolfram moments was removed from the selection. The requirement on the number of tracks from the interaction point was tightened to N_IP > 1, and the track selections on ρ, z_0 and P_t were varied; the obtained difference did not exceed 0.2%. The results are presented in table 7, giving in total about 0.5%.

Accelerator uncertainties

The influence of the machine background was estimated using a data set collected with separated beams. The number of hadronic events selected from this data set was rescaled to the full data sample proportionally to the integrals of the beam currents. The contribution of background events to the observed cross section is about 6-12 nb. The number of selected hadron events was corrected for the number of estimated background events and the data were refitted. The relative uncertainty does not exceed 0.2%. The non-Gaussian effects in the total collision energy distribution contribute about 0.2% to the Γ_ee(J/ψ) uncertainty. To estimate this contribution, we added a pre-exponential factor while convolving the cross section with a Gaussian function in eq. (5.5) (details are discussed in [17]). To check the uncertainty related to the beam energy determination, the values of energy assigned to the data points were corrected within their errors using the known shape of the resonance cross section. For that, eleven free parameters E_i^fit were introduced into the fit function (5.1), together with a compensating term constraining each E_i^fit to the measured beam energy within its uncertainty.

Table 8. Accelerator-related uncertainty contributions in %.

Other uncertainties

The interference parameter λ in the fit was fixed at the value of 0.39, assuming that the sum in (4.5) vanishes. To verify the related uncertainty, we left λ floating, which resulted in a shift of 0.2% in the Γ_ee(J/ψ) and Γ_ee(J/ψ)·B_hadrons(J/ψ) values and of about 0.1% in the Γ_ee(J/ψ)·B_ee(J/ψ) value. As a cross-check, the deviation of the effective hadronic width entering the fit from the sum of partial hadronic widths Γ_hadrons due to interference effects was estimated in the Bayesian approach under the assumption that all phases in eq. (4.5) have equal probability, as discussed in [16]. The effect depends on the fitted value and its uncertainty, λ = 0.36 ± 0.14, and does not exceed 0.3%. The accuracy of the analytic expression (4.1) is about 0.1%. In addition, the 0.1% accuracy of the radiative-correction calculation [22] should be taken into account. The inaccuracy of the simulation of FSR effects with PHOTOS is negligible in our analysis. The sum in quadrature of all contributions listed in this subsection is about 0.4%. In the results quoted, the first and second uncertainties are statistical and systematic, respectively.
The major sources of the systematic uncertainties for the Γ_ee(J/ψ) and Γ_ee(J/ψ)·B_hadrons(J/ψ) values are summarized in table 9; the total systematic uncertainty equals 1.6%. For the Γ_ee(J/ψ)·B_ee(J/ψ) product, the total systematic uncertainty equals 1.2%. Our result for the Γ_ee(J/ψ)·B_hadrons(J/ψ) value is consistent with, and four times more precise than, the previous direct measurement in the hadronic channel [9]. The obtained Γ_ee(J/ψ)·B_ee(J/ψ) value is in good agreement with our previous measurement [12] and supersedes it.

Figure 7. Comparison of Γ_ee(J/ψ) and Γ(J/ψ) measured in the most precise experiments (KEDR (2010) [12], CLEO (2006) [11], BaBar (2004) [10]) and Γ_ee(J/ψ) predictions from lattice QCD calculations; the asterisk marks a direct measurement. The Γ(J/ψ) value from the BESIII experiment was calculated from [14] using the world-average lepton branching fraction [6]. The gray band corresponds to the world-average value with allowance for its uncertainty.

The electronic and total widths obtained in our analysis agree well with the world averages Γ_ee = 5.55 ± 0.14 ± 0.02 keV and Γ = 92.9 ± 2.8 keV [6]. Figure 7 presents a comparison of our Γ_ee(J/ψ) and Γ(J/ψ) results with those obtained in previous experiments.
Do Male and Female Cowbirds See Their World Differently? Implications for Sex Differences in the Sensory System of an Avian Brood Parasite

Background

Male and female avian brood parasites are subject to different selection pressures: males compete for mates but do not provide parental care or territories, and only females locate hosts to lay eggs. This sex difference may affect brain architecture in some avian brood parasites, but relatively little is known about their sensory systems and the behaviors used to obtain sensory information. Our goal was to study the visual resolution and visual information gathering behavior (i.e., scanning) of brown-headed cowbirds.

Methodology/Principal Findings

We measured the density of single cone photoreceptors, associated with chromatic vision, and double cone photoreceptors, associated with motion detection and achromatic vision. We also measured head movement rates, as indicators of visual information gathering behavior, when the birds were exposed to an object. We found that females had significantly lower densities of single and double cones than males around the fovea and in the periphery of the retina. Additionally, females had significantly higher head movement rates than males.

Conclusions

Overall, we suggest that female cowbirds have lower chromatic and achromatic visual resolution than males (without sex differences in visual contrast perception). Females might compensate for the lower visual resolution by gazing alternately with both foveae in quicker succession than males, increasing their head movement rates. However, other physiological factors may have influenced the behavioral differences observed. Our results bring up relevant questions about the sensory basis of sex differences in behavior. One possibility is that female and male cowbirds differentially allocate costly sensory resources, as a recent study found that females actually have greater auditory resolution than males.

Introduction

When animals communicate, the sender emits a signal that is then detected and processed by the receiver, which ultimately responds behaviorally. Much attention has been devoted to the sender's and receiver's behavioral interactions [1] and, to some degree, the neural architecture behind those interactions [2], [3]. However, we know relatively less about how the configuration of the receiver's sensory system constrains the ability to detect and process signals [4] and how individuals allocate attention to different sensory components of a signal [5]. Differences in the sensory systems of males and females have been reported in some vertebrate species [6], [7], [8]. However, little is known as to how these sex differences can influence behaviors associated with gathering sensory information. For instance, female Sceloporus graciosus lizards can detect the fast motion stimuli of male courtship signals more quickly than males [9], and they also spend more time orienting towards courtship displays with complex motion patterns [10]. Understanding sex differences in both the sensory system and information gathering behaviors is key to testing the mechanisms behind some signal evolution hypotheses (e.g., the perceptual variability hypothesis [11]) as well as establishing the differential investment of males and females in different sensory modalities [12]. Our goal was to test for sex differences in visual resolution (i.e., cone photoreceptor density) and visual information gathering behaviors (i.e., head movements) in brown-headed cowbirds (Molothrus ater).
Cowbirds are brood parasites, making them good models to study sex differences because (a) selection pressures vary between the sexes (i.e., the males' role is limited to mate attraction and copulation without providing parental care or territories, whereas only females search for hosts to lay their eggs [13]), (b) males and females differ in their auditory systems (i.e., females have better auditory resolution [14]), and (c) males and females differ in their vigilance behavior while foraging in groups [15]. First, we studied the density of cone photoreceptors associated with chromatic vision (i.e., single cones [16], [17]) and achromatic vision/motion detection (double cones [18], [19], [20], [21]) in two parts of the retina: the center and the periphery. In brown-headed cowbirds, the fovea is approximately at the center of the retina and projects laterally [22], and parts of the retinal periphery (i.e., the temporal region) project towards the binocular visual field [23]. Areas with higher cone photoreceptor density are expected to have higher visual resolution, and thus higher visual performance [24]. Second, we conducted a behavioral experiment exposing female and male cowbirds to an object with high or low chromatic saliency and measured how they gathered visual information with head movements. Visually guided animals actively modify the position of their visual apparatus (i.e., the eye, hence the retina) to enhance the quality and quantity of the sensory information they can gather [25], [26], [27]. The avian fovea generally projects laterally in species with central foveae because of the lateral position of the orbits in the skull. Consequently, when birds fixate on an object, they mostly move their heads (due to their comparatively limited eye movements) sideways around the object of interest to get images of it with the fovea of each eye [28], [29]. Head movement rates have been proposed as an indirect proxy for different visual tasks (e.g., visual search, visual fixation) in birds [15], [30]. We considered two opposing predictions regarding sex differences in visual resolution in cowbirds. First, we expected that females would have higher visual resolution because they are the ones involved in nest searching behavior [13]. Second, we expected that females would have lower visual resolution due to their higher auditory resolution [14], following the compensatory plasticity hypothesis, by which different sensory modalities may receive different energy allocations [31], [32] given that processing sensory information is costly [33]. Regarding gathering visual information, we hypothesized that individuals with lower visual resolution may need to actively compensate for the lower quality of the information obtained by their retinas (e.g., [34]). We proposed two alternatives for this compensatory mechanism based on how birds may explore objects visually [29]. First, if low visual resolution requires an increase in the time a given retina is exposed to the object to gather the necessary amount of information, then the sex with the lower visual resolution would have lower head movement rates than the sex with the higher resolution. Second, if low visual resolution requires an increase in the number of exposures of a given retina to the object to obtain the necessary amount of information, then the sex with the lower visual resolution would have higher head movement rates than the sex with the higher resolution.
Ethics Statement

All animal procedures were approved by the IACUCs at California State University Long Beach (220) and Purdue University (protocol no. 09-018).

General Procedures

We studied sex differences in the density of cone photoreceptors between August and December 2008, and in scanning behavior between September and December 2010. Cowbirds were captured using mist-nets and Australian traps under State of California (California Department of Fish and Game), State of Indiana (Indiana Department of Natural Resources) and Federal (Fish and Wildlife Service) permits. We housed brown-headed cowbirds in indoor enclosures (0.61 m × 0.76 m × 0.60 m) under a 14:10 hour light:dark cycle, and provided them with food ad libitum except during the periods of food deprivation preceding the behavioral experiment (see below). Water was always available. For the photoreceptor density component, cowbirds were euthanized with CO2 following guidelines established by the IACUC.

Density of Cone Photoreceptors

We used 20 adult brown-headed cowbirds (10 females, 10 males) captured from populations in Southern California. Individuals were euthanized within 24-48 hours of capture to minimize the effects that artificial lighting may have on the absorbance of oil droplets [35]. We first recorded individual body mass. We then chose one eye (right or left) at random, removed it, and measured its axial length with a digital caliper. We removed the retina following the methods described in detail in [36]. In brief, the eye was hemisected and the eyecup placed in a phosphate-buffered saline (PBS) solution (Sigma Life Science, P4417-100TAB). The retina was extracted using fine paint brushes (2/0 round, Princeton Art and Brush Co. 4359R) to detach it from the retinal pigmented epithelium. The orientation of the retina was recorded during this procedure by using the pecten, a pigmented and vascular structure in the avian retina, as a reference point [37]. We then made radial cuts in the retina to flatten it. In cases in which the retina was torn, we used the other eye's retina only if <30 min had elapsed since the death of the individual. The retina was mounted and coverslipped on a microscope slide with the photoreceptor layer up and with a drop of PBS. Our goal was to compare photoreceptor densities between the foveal and non-foveal (i.e., retinal periphery) areas of the retina. We obtained samples from the center of the retina, as this is where the fovea is localized in this species [22]. We decided to take samples from the retinal periphery in four different regions to avoid any bias: dorsal, ventral, temporal, and nasal. Therefore, we sampled from five 2.32 mm² sampling regions. We pooled the data from the dorsal, ventral, temporal and nasal regions into a retinal periphery area. The slide was placed in an Olympus BX51 microscope fitted with epifluorescent light (Olympus U-RFL-T) and a long-pass filter for viewing wavelengths longer than 420 nm. Samples were examined at 40× magnification. We took pictures of each sampling area with a Moticam 2300 3.0 Mpixel camera using Images Plus software version 2.0 ML. Each of the five sampling regions (2.32 mm²) was divided into a grid of 8 × 8 frames (each frame was 0.036 mm²), yielding a total of 64 frames per sampling region. We then took pictures of 32 frames, starting from the upper left corner, moving horizontally (and vertically at the end of each row of 8 frames) and skipping every other frame.
In each frame, we took two pictures: one under the bright field and one under the epifluorescent field. We distinguished the different types of photoreceptors using oil droplets, which are organelles in the avian retina that enhance color discrimination [38]. In birds, each cone photoreceptor is associated with a specific type of oil droplet. Single cone photoreceptors with ultraviolet-sensitive (UVS) visual pigments have transparent (T-type) oil droplets that do not absorb light in the visible range [39]. Single cone photoreceptors with a short-wavelength-sensitive (SWS) visual pigment have colorless (C-type) oil droplets with cut-off wavelengths from 392 to 449 nm [39]. Single cone photoreceptors with a medium-wavelength-sensitive (MWS) visual pigment have yellow (Y-type) oil droplets with cut-off wavelengths from 490 to 516 nm [39]. Finally, single cone photoreceptors with a long-wavelength-sensitive (LWS) visual pigment have red (R-type) oil droplets with cut-off wavelengths from 514 to 586 nm [39]. Double cone photoreceptors have a LWS visual pigment, with the principal member having a P-type oil droplet (cut-off wavelength varying from 407 to 419 nm [39]). We did not identify rod photoreceptors, as this requires other methodological procedures [40]. We estimated the density of single and double cone photoreceptors by counting the number of oil droplets/mm² at the center and periphery of the retina. We followed Hart's [41] criteria to distinguish the different oil droplets (Appendix S1). Three observers counted the retinas after extensive training led to differences of <5% among them. We did not correct for tissue shrinkage because we used fresh retinas. Some of the retina pictures obtained did not have any oil droplets in the whole picture or in specific parts of it. This could be an indication of loss of photoreceptors due to the use of a brush during the preparation (i.e., removal of the pigmented epithelium). Because of the potential bias this could generate in the calculation of the overall densities, we removed those pictures from the analysis.
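As a rough illustration of this density estimate, here is a minimal sketch; the per-frame droplet counts are invented placeholders, while the frame area comes from the sampling grid described above.

```python
import numpy as np

FRAME_AREA_MM2 = 0.036  # area of one grid frame, from the sampling scheme above

# Hypothetical oil-droplet counts per analyzed frame for one sampling region;
# frames without droplets were excluded, as described in the text.
counts_per_frame = {
    "T (UVS single)": [13, 15, 11, 14],
    "C (SWS single)": [20, 18, 22, 19],
    "Y (MWS single)": [35, 33, 36, 34],
    "R (LWS single)": [30, 28, 31, 29],
    "P (double)":     [95, 101, 99, 97],
}

densities = {k: np.mean(v) / FRAME_AREA_MM2 for k, v in counts_per_frame.items()}
for k, d in densities.items():
    print(f"{k:15s}: {d:8.1f} cells/mm^2")
single = sum(d for k, d in densities.items() if "single" in k)
print(f"{'single cones':15s}: {single:8.1f} cells/mm^2")
```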
Head Movement Behavior

We used 20 adult brown-headed cowbirds (10 females, 10 males) captured from populations in Tippecanoe County, Indiana. We conducted the experiment indoors with fluorescent bulbs, which had a flicker rate of 20 kHz to minimize the potential confounding effects of artificial lighting [42]. A 0.5 × 0.5 × 0.4 m enclosure was placed on top of a 1-m high table (2 m under the lighting fixture). The enclosure was made of mesh wire with two sides made of Plexiglas to facilitate the recording of the focal bird's behavior. The enclosure sat on a wooden bottom covered with light brown paper lining, which was replaced after each trial. At the center of the cage, we positioned a cube (4.5 cm side length) that was either colored (painted with a coloring pattern similar to a Rubik's cube) or black. The top of the cube had a small hole where we placed about 1 g of millet seeds to attract the visual attention of the animal during the first few minutes of the trial. Food was not available in any other area of the cage. The experimental arena was surrounded by black cloth to minimize visual distractions while the animal was in the enclosure. We used three camcorders on tripods to record the focal bird from the sides, and one camcorder positioned on top of the enclosure to obtain a top view of the animal. Birds were food-deprived the night before (an average of 12.01 ± 0.14 h) to enhance their motivation during the trials. The experiment was conducted at average light levels of 670.60 ± 15.09 lux and temperatures of 22.10 ± 0.10 °C. None of these three factors significantly influenced head movement rates towards the cube (light levels, F_{1,17} = 0.06, P = 0.811; temperature, F_{1,17} = 2.21, P = 0.155; food deprivation, F_{1,17} = 0.79, P = 0.386), so they were excluded from the final models. A trial consisted of a single cowbird exposed to a single cube (either colored or black) for 10 min. After the trial, animals were returned to the housing enclosures and food and water were provided ad libitum. Each individual was exposed to both conditions (colored and black cubes) in a random order. We measured head movement rates of male and female cowbirds with JWatcher [43]. We analyzed the first 90 s of the videos, as this was the time during which cowbirds appeared most motivated to face the cube. We divided head movement behavior into bouts in which cowbirds' eyes were towards the cube or away from the cube, based on information on the configuration of the species' visual field [23] and our top-view camera. Head movement behavior towards the cube was defined as when the cowbird bill was within 120° to either the right or left of the cube, whereas head movements away from the cube comprised any other bill orientation. We did not distinguish between head movements of different amplitude because the degree of simultaneous calibration of the different cameras was not sufficient to obtain accurate measurements between trials. We also measured the proportion of time animals spent with their heads towards the cube (as defined above) as a potential confounding factor. We only used head movement rate towards the cube in our statistical analyses (see below).

Perceptual Modeling

One of the factors that could affect the perception of the cubes by females and males is how salient the cubes are against the visual background. Visual contrast models take into consideration (among other things) the reflectance of the background and the object, the spectral properties of the light, the sensitivity of the visual system (absorbance of visual pigments and oil droplets) and the relative density of photoreceptors [44]. Consequently, we estimated the chromatic contrast of the colored and black cubes from the perspective of each individual male and female by calculating the relative densities of photoreceptors at the center and periphery of the retina (in relation to the UVS cone densities) based on the raw photoreceptor densities measured for this study (see Density of Cone Photoreceptors above). We obtained information on the absorbance of the visual pigments and oil droplets of brown-headed cowbirds (but without distinguishing between sexes), which have a UVS visual system, using microspectrophotometry (Appendix S2). We also measured the reflectance of the cubes, the irradiance of the fluorescent bulbs at the enclosure level, and the reflectance of the lining at the bottom of the enclosure (Appendix S3). This information allowed us to parameterize the chromatic contrast model for this particular species (details of the calculations in Appendix S3). The only parameter that varied between the sexes in our model was the relative density of photoreceptors. We did not calculate achromatic contrast because the model we used did not include double cone photoreceptor density as a parameter.
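The study's own contrast calculation is detailed in its Appendix S3, which is not reproduced here; a widely used choice for this kind of computation is the receptor-noise-limited model for a tetrachromat, sketched below. The quantum catches, Weber fraction, relative densities and the noise convention are illustrative assumptions, not the study's measured values.

```python
import numpy as np
from itertools import combinations

def jnd_tetrachromat(qa, qb, rel_density, weber=0.1):
    """Receptor-noise-limited chromatic distance (in JNDs) between stimuli
    A and B, given quantum catches in the UVS, SWS, MWS and LWS cones."""
    df = np.log(np.asarray(qa, float) / np.asarray(qb, float))  # receptor signals
    e = weber / np.sqrt(np.asarray(rel_density, float))  # noise falls with density
    num = 0.0
    for i, j in combinations(range(4), 2):
        k, l = [m for m in range(4) if m not in (i, j)]
        num += (e[i] * e[j]) ** 2 * (df[k] - df[l]) ** 2
    den = sum(np.prod(np.delete(e, m)) ** 2 for m in range(4))
    return np.sqrt(num / den)

# Placeholder quantum catches (cube vs. background) and relative cone densities.
print(jnd_tetrachromat(qa=[0.8, 1.2, 1.5, 1.1],
                       qb=[0.6, 1.0, 1.6, 1.3],
                       rel_density=[1.0, 1.8, 2.6, 3.2]))
```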
Statistical Analysis

We first assessed sex differences in body mass and eye size with general linear models. We ran general linear mixed models to test for sex differences in the density of single cones (UVS, SWS, MWS, and LWS pooled together) and double cones. Besides sex, we also included retinal sector (center, periphery) and the interaction between sex and retinal sector as independent factors. Although our study was not designed to test for differences in cone density between the right and left eyes [45], we included eye and the interaction between eye and sex to control for this potential confounding factor. When an interaction effect was significant, we ran pairwise comparisons with t-tests. We used general linear mixed models to assess sex differences in head movement rate towards the cube, including cube type, the interaction between cube type and sex, and the proportion of time with the head towards the cube as independent factors. We analyzed differences in chromatic contrast (measured in units of just noticeable differences, or JNDs; Appendix S3) between sexes, cube types, and retinal sectors with general linear mixed models. We also included in this model the interactions between sex and cube type, and between sex and retinal sector. In all general linear mixed models, we included focal ID as a repeated-measures factor. We checked for normality of the residuals and homogeneity of variances in all models. Body mass and eye size analyses were conducted in Statistica 10; all other statistical analyses, in SAS 9.2.

Body Mass and Eye Size

Previous studies have shown that brown-headed cowbird males are larger than females [15]. We confirmed this trend with the individuals used in the retinal analysis (females = 31.63 ± 0.85 g, males = 39.05 ± 1.06 g; F_{1,18} = 29.92, P < 0.001) as well as those in the behavioral experiment (females = 36.36 ± 1.49 g, males = 41.06 ± 1.58 g; F_{1,18} = 4.68, P = 0.044). However, we did not find significant differences in eye size measured as eye axial length (females = 6.58 mm, males = 6.66 mm; F_{1,18} = 0.55, P = 0.467).

Density of Photoreceptors

The density of single cones varied significantly between sexes and retinal sectors (Table 1). Overall, males (5,160.75 ± 84.37 cells/mm²) had a significantly higher density of single cones than females (4,804.25 ± 88.66 cells/mm²). Additionally, the density of single cones was higher at the center (6,609.46 ± 106.97 cells/mm²) than at the periphery (3,355.54 ± 55.18 cells/mm²) of the retina. However, the difference between the sexes varied with retinal sector, yielding a significant interaction effect (Table 1; Fig. 1a). Males had significantly higher single cone densities than females at the center of the retina (t_18 = 3.12, P = 0.006; Fig. 1a), but this difference was not significant at the retinal periphery (t_18 = 0.22, P = 0.830; Fig. 1a). We did not find significant effects of eye or the interaction between eye and sex on the density of single cones (Table 1). The density of double cones also varied between sexes and retinal sectors (Table 1), with males (4,538.33 ± 80.33 cells/mm²) having significantly higher density than females (3,902.98 ± 84.41 cells/mm²), and the retinal center (5,648.31 ± 101.85 cells/mm²) having significantly higher density than the retinal periphery (2,793.00 ± 52.54 cells/mm²). Yet, the differences between the sexes varied significantly with retinal sector, giving rise to an interaction effect (Table 1; Fig. 1b). The difference in double cone density between males and females was more pronounced at the center (t_18 = 4.75, P < 0.001; Fig.
1b) than at the periphery (t_18 = 2.60, P = 0.018; Fig. 1b) of the retina. We did not find significant effects of eye or the interaction between eye and sex on the density of double cones (Table 1). We also investigated whether the sex differences in the relative density of cone photoreceptors (in relation to the UVS cones) could affect the ability of cowbirds to perceive the cubes they were exposed to in the behavioral experiment. We estimated the chromatic contrast of the two cubes for males and females at the center and periphery of the retina. Considering both retinal sectors, we did not find any significant difference between males (42.53 ± 1.78 JNDs) and females (44.92 ± 1.78 JNDs) in chromatic contrast (Table 1). Additionally, the interactions between sex and cube type, and between sex and retinal sector, were not significant (Table 1).

Head Movement Behavior

Head movement behavior differed between the sexes. Females showed significantly higher head movement rates towards the cube than males (F_{1,18} = 5.16, P = 0.036; Fig. 2), accounting for the proportion of time with the head oriented towards the cube (F_{1,17} = 7.66, P = 0.013). Neither the color of the cube (F_{1,17} = 1.96, P = 0.179) nor the interaction between sex and cube color yielded significant effects (F_{1,17} = 1.53, P = 0.232).

Discussion

We found that brown-headed cowbird females had lower densities of the cone photoreceptors associated with chromatic and achromatic/motion vision, and higher head movement rates when gathering visual information, than males. Although the cone photoreceptor data came from populations in Southern California, whereas the head movement behavior data came from populations in Indiana, similar sex differences in head movement rates were reported in a previous study [15]. Specifically, the time between consecutive head movements (i.e., the inverse of head movement rate) was shorter in cowbird females than in males in an outdoor experiment varying the density of conspecifics [15]. The cowbirds used in Fernández-Juricic et al. [15] were also obtained from the same Southern California populations as the ones used for the cone photoreceptor data in the present study. Thus, we believe that our results may be representative of sex differences in visual resolution and visual information gathering behavior in this species. However, we caution that the potential link between physiology and behavior proposed below should be considered preliminary until future studies measuring both parameters in the same individuals are conducted. Nevertheless, our study adds new evidence on brown-headed cowbird sex differences beyond those found in other components of its nervous system [46], [47], [14], as well as in other cowbird species (e.g., [48], [49]). Visual resolution is affected by two main factors: eye size and the packing of photoreceptors and retinal ganglion cells [50]. We cannot rule out sex differences in ganglion cell density, but we did not find significant differences in eye size between females and males. At the photoreceptor level, we found that the overall distribution of cones matched the previously described distribution of retinal ganglion cells [22], with higher densities of both single and double cones in the central retinal region around the fovea. Thus, the fovea can be considered the center of both chromatic and achromatic/motion vision in brown-headed cowbirds.
Cowbird females had 12.5% lower density of both cone types in the foveal area, and 10.3% lower density of double cones in the retinal periphery, compared to males. Future studies should determine whether these photoreceptor differences also translate into differences at the retinal ganglion cell layer, which is responsible for the transfer of information from the retina into the visual centers of the brain [51]. Following Pettigrew et al. [52] and Williams & Coletta [53], and pooling the densities of both cone types, these sex differences would translate into a 7.6% reduction in the foveal visual acuity of females compared to that of males. Females are also expected to have lower performance in achromatic vision and motion tasks than males. Previous studies on the human visual system found sex differences in the relative ratios of LWS and MWS photoreceptors [54], which could affect the thresholds of color discrimination [55]. Additionally, males and females of some cichlid species differ in cone opsin gene expression and in the frequency of cone pigments, which could influence the ability to discriminate between potential mates [56]. The difference in cone density between the center and the periphery of the retina appears to be an important factor driving the degree of eye and head movements used to sample different parts of the visual space with the high visual resolution of the fovea [57], [30]. Cowbird female head movement rates were 15.6% higher than those of males when exposed to an object in a visually controlled environment. These behavioral differences may be related to the way females perceive predation risk relative to males; however, the study was conducted indoors, where the expectation of a predator attack may be considered lower. Another possibility is that these differences at the behavioral level may be associated with our findings at the photoreceptor level. If so, female cowbirds may compensate for their lower visual resolution (i.e., lower density of single and double cones) with a different visual exploratory strategy than males. By exposing the right and left foveae to an object alternately in quicker succession, females may boost the quality of the information obtained. This behavioral result supports the view that visual fixation in birds encompasses gazing alternately with both foveae in quick succession [29], as opposed to species with frontally placed eyes, which generally lock their gaze on objects for relatively longer periods of time [58]. Similar sex differences were found in red-backed salamanders (Plethodon cinereus), where females nose-tap more frequently, probably due to their smaller vomeronasal organs compared to those of males [59], [60]. This suggests that the sex with the sensory modality of lower resolution requires more frequent sensory information sampling. Another alternative is that female cowbirds may have sampled more frequently because of differences in visual contrast between the object and the background rather than visual resolution per se. Yet our estimates of chromatic contrast, based on the relative densities of single cone photoreceptors (rather than the raw cone densities described above) and using, for the first time, a model parameterized for the brown-headed cowbird visual system, did not yield any significant sex effects. Our chromatic contrast model did not include sex-specific estimates of the sensitivity of oil droplets due to sample size constraints, which could have influenced the chromatic contrast estimates.
We studied only one visual dimension (i.e., the density of single and double cones) that could be associated with the sex differences in visual exploratory behavior; however, there are other dimensions that deserve further study. For instance, the sexes may also differ in the arrangement of cone photoreceptors [61], temporal visual resolution [62], ganglion cell density [63], and the wiring between the photoreceptor and ganglion cell layers, which can influence motion detection [51]. The reason why female cowbirds have lower photoreceptor densities than males cannot be answered with our data. Many factors may be involved, such as differential adult predation, the use of different cues during mate choice, host searching behavior, etc. The sex differences in the visual system and visual information gathering behavior of cowbirds can also be considered in the context of a previous study that found that female cowbirds have better auditory ability to discriminate frequency and pure tones than males [14]. We speculate that females may reduce the costs of sensory processing by investing more sensory resources in the auditory system than in the visual system, probably to enhance the likelihood of finding suitable hosts during nest searching [64], in which males are not involved. Females obtain information on male quality mostly from the vocal content of their audio-visual displays [65], [66], although the presence of the visual component (i.e., the wing-spread) also stimulates them sexually [67], [68]. Females would not be challenged to visually resolve this wing-spread component because males generally display at very close distances (<0.5 m) [69]. Interestingly, female egg laying in some related cowbird species occurs before sunrise [70], when cone photoreceptors may not necessarily be activated due to low light levels. Males, on the other hand, also give male-directed audio-visual displays whose visual components (e.g., the depth of the bowing motion) are more intense than those in the female-directed version [71]. Males use these displays to establish dominance hierarchies by assessing rivals' display rate and intensity [72]. One possibility is that cowbird males may benefit from having enhanced chromatic, achromatic, and motion vision abilities to quickly assess the fighting ability of competitors and avoid risky physical interactions [68]. Additionally, the higher visual capabilities of males may allow them to more easily resolve the fast and subtle wing strokes that females give at certain points during the male displays as a precursor of the copulatory posture [73]. This interpretation assumes that investment in sensory processing is costly [33], as found in other species that invest differentially across sensory modalities (e.g., [74]). Our results bring up interesting questions about the sensory basis of differences in the behavior of males and females. If there are sex differences in brown-headed cowbird sensory physiology, as found in this and a previous study [14], it is likely that the sensory space of males and females varies, which would affect the quality and quantity of information available for higher-order processing in the brain and, ultimately, behavior. Future work on species in which the sexes are under different selection pressures should explicitly test the degree to which different behaviors can compensate for lower sensory resolution and whether trade-offs between sensory modalities are the result of optimizing the gathering of information of different fitness value.
Supporting Information

Appendix S1. Criteria used to determine the different types of oil droplets.
Evaluation of ERP Systems Quality Model Using Analytic Hierarchy Process (AHP) Technique

The Enterprise Resource Planning (ERP) system is a complex and comprehensive software product that integrates various enterprise functions and resources. ERP systems encapsulate concerns that are not shared by other types of information systems, such as data synchronization and standardization, system complexity and system modularity. Many studies have been conducted to propose software quality models with their quality characteristics. However, there is currently no dedicated software quality model that can describe and involve the new features of ERP systems. Thus, this study proposes an ERP system quality model (ERPSQM). The Analytic Hierarchy Process (AHP) has been employed to evaluate the quality characteristics of the ERPSQM. Furthermore, the proposed model can be used to compare ERP systems and thereby help companies implement better systems.

Introduction

The Enterprise Resource Planning (ERP) system is a complex and comprehensive software product developed to better integrate firms' functions and resources [1]. Today, most organizations use ERP systems because of cost reductions, improved responsiveness to customer needs, replacement of legacy systems, and faster data transactions [1]-[3]. However, many studies have shown a rather high failure rate in the implementation of ERP systems [3]-[5]. Many features must be considered for high-quality systems, with a focus on ERP system characteristics, to assure that the systems are successfully developed and implemented.

With respect to software system quality, much work has been conducted to propose software quality models and metrics. Among these models are McCall's software quality model, Boehm's software product quality model, Dromey's quality model, the FURPS quality model and ISO/IEC 9126. The metrics are quantitative indicators of system characteristics, and the quality models explain the relationships between such metrics [6]. Additionally, other studies have been conducted to provide guidelines for evaluating the quality of different types of software systems. However, there is a lack of studies proposing ERP system quality models and their characteristics [5].

ERP systems have a different type of abstraction. In addition to their complexity and modularity, a basic concept in the ERP system is the standardization and synchronization of information [3]. Thus, most of the software quality characteristics and sub-characteristics of ISO/IEC 9126 are applicable to the ERP system quality model with appropriate modification. Because of the new abstraction type in ERP systems, some new software quality characteristics that describe the new features of ERP systems should be involved. The novelty of this work is therefore to derive the ERP Software Quality Model (ERPSQM) from ISO/IEC 9126, in which compatibility, modularity, complexity, and reusability are involved as sub-characteristics under the characteristics functionality, usability and maintainability. Consequently, the Analytic Hierarchy Process (AHP) technique has been applied to evaluate the quality and rank the characteristics of the ERP system quality model. AHP has been extensively applied in multi-criteria decision making and to many practical decision-making problems [7].
Literature Review

In order to propose an appropriate software quality model for ERP systems, this section highlights the most popular software quality models in the literature, their contributions and their disadvantages. These models are McCall's software quality model, Boehm's software product quality model, Dromey's quality model, the FURPS quality model and ISO/IEC 9126.

McCall's Quality Model

McCall's model is one of the most commonly used software quality models (Panovski, 2008). This model provides a framework to assess software quality through three levels. The highest level consists of eleven quality factors that represent the external view of the software (the customers' view), while the middle level provides twenty-three quality criteria for the quality factors. Such criteria represent the internal view of the software (the developers' view). Finally, at the lowest level, a set of metrics is provided to measure the quality criteria [8]. The contribution of the McCall model is assessing the relationships between external quality factors and product quality criteria [9]. However, the disadvantages of this model are that the functionality of a software product is not present and that not all metrics are objective; many of them are subjective [10].

Boehm's Quality Model

In order to evaluate the quality of software products, Boehm proposed a quality model based on McCall's model. The proposed model presents a hierarchical structure similar to McCall's model [11]. Boehm's model provides several advantages, namely taking the utility of a system into account and extending the McCall model by adding characteristics to explain the maintainability factor of software products [9]. However, it does not present an approach to assess its quality characteristics [12].

FURPS Quality Model

The FURPS model was introduced by Robert Grady in 1992 [13]. The name of this model comes from its five quality characteristics: Functionality, Usability, Reliability, Performance and Supportability. These quality characteristics are decomposed into two categories: functional and nonfunctional requirements [13]. The functional requirements are defined by inputs and expected outputs (functionality), while the nonfunctional requirements comprise reliability, performance, usability and supportability. One disadvantage of this model is that software portability is not considered [14].

Dromey's Quality Model

Dromey's model extended ISO 9126:1991 by adding two high-level quality characteristics to introduce a framework for evaluating the quality of software products. This model therefore comprises eight high-level characteristics, organized into three quality models: a requirement quality model, a design quality model and an implementation quality model [15]. According to Behkamal et al. [10], the main idea behind Dromey's model is to formulate a quality model that is broad enough for different systems and to assess the relationships between the characteristics and sub-characteristics of software product quality. One disadvantage of Dromey's model is that the reliability and maintainability characteristics cannot be judged before a product is actually implemented [9].
ISO 9126 Model

ISO 9126 is an international standard for software quality evaluation. It was originally presented in 1991 and extended in 2004. The ISO 9126 quality model presents three aspects of software quality, addressing internal quality, external quality and quality in use [16]. The model thus evaluates software quality in terms of external and internal software quality and their connection to quality attributes. In this respect, the model presents such quality attributes as a hierarchical structure of characteristics and sub-characteristics. The highest level comprises six characteristics that are further divided into twenty-one sub-characteristics at the lowest level. The main advantage of this model is that it can be applied to the quality of any software product [9].

Because ISO/IEC 9126 provides quality characteristics and sub-characteristics that are general and common for evaluating the quality of every type of software product, recently proposed quality models have been derived from ISO/IEC 9126. For example, Kumar et al. [6] proposed a quality model based on ISO/IEC 9126 to evaluate the quality of aspect-oriented software. Adnan et al. [17] also proposed a model to evaluate the quality of COTS systems. Additionally, Bertoa et al. [18] adapted ISO/IEC 9126 to establish a quality model for component-based systems. Therefore, ISO/IEC 9126 is adapted in this work to propose the ERP software quality model.

ERPSQM

In order to define a software quality model that comprises all the features of ERP systems, the new features of ERP systems should first be recognized. The ERP system is complex and comprehensive software used to integrate organization functions and resources. Moreover, the main concentration of ERP systems is providing real-time and accurate information. Other features of an ERP system that should be considered are system modularity and module reusability. This indicates that the ERP system quality model can be extended from any standard quality model that is applicable to the nature of ERP systems. Additional quality characteristics and sub-characteristics that cover the new features of ERP systems need to be involved, including system complexity, information synchronization and standardization, system modularity and module reusability. Existing quality characteristics and sub-characteristics also need to be redefined in the context of ERP systems.

ISO/IEC 9126 has been extended to propose the ERPSQM. Compatibility and modularity have been added as sub-characteristics under functionality, complexity as a sub-characteristic under usability, and reusability under maintainability. Definitions and justifications of the new and existing ISO/IEC 9126 characteristics and sub-characteristics are given in the next section.
Compatibility

Compatibility was defined as "the degree to which an innovation is perceived as being consistent with the existing values, needs and past experience of potential adopters" [19]. In this work, an ERP system is a suite of software modules, and each module has its own functions. To perform a particular function, a module often needs to exchange data with other modules or with stand-alone applications that are commonly used alongside an ERP system. In other words, compatibility refers to the capability of ERP system modules to exchange data with each other and with other applications [3]. Therefore, compatibility is added as a sub-characteristic under the functionality characteristic.

Complexity

As previously mentioned, the ERP system is an integrated suite of software modules that supports firms' functions and resources. The complexity of such a system is due to the interaction between its software modules [6] [17]. The complexity characteristic thus reflects not only the effort needed to develop and maintain a system, but also the effort an end user needs to move from one module to another. A less complex system is easier to develop and use [3]. That is why complexity is proposed as a sub-characteristic under the usability characteristic.

Modularity

One of the main obstacles to implementing ERP systems is the development cost, especially in small firms [20]. However, an ERP system is a set of software modules, each of which automates a certain function. Each software module can be installed and implemented on its own, so companies can implement only the modules that are required for their functions and compatible with their resources, thereby reducing development costs [3]. For this reason, modularity has been added to the ERPSQM as a sub-characteristic under the functionality characteristic.

Reusability

Reusability is defined as the "use of software originally developed for one project to a new software project currently being developed" [17]. The reuse of software is expected to shorten the development period, to save development resources, and to provide tested and validated (high-quality) modules. Moreover, to reduce implementation costs, companies can share some ERP system modules [17]. Sometimes organizations need to customize the modules according to their functions and processes (through a third-party provider or in-house). Such customization includes adding new functions and adjusting existing functions [3]. Thus, reusability is one of the most important ERP system features, and for this reason it has been involved in the ERPSQM as a sub-characteristic under the maintainability characteristic.

Definitions of Existing Characteristics in the ERP Systems Context

The quality characteristics functionality, reliability, usability, efficiency, maintainability and portability have commonly been proposed in most quality models. However, scholars have different opinions when choosing the sub-characteristics of these characteristics. This research concentrates on product quality rather than on quality in use. Therefore, this section defines the various characteristics and their sub-characteristics in terms of ERP systems.
Functionality has been defined by ISO [21] as the capability of the software to provide functions which meet the stated and implied needs of users under specified conditions of usage. In order to evaluate this characteristic, it has been divided into four sub-characteristics, namely accuracy, suitability, interoperability and security [6]. Adapting functionality to ERP systems means that the system software should provide its functions, namely the financial process, human resource management, the supply chain process, the manufacturing process and/or the customer service process, as per the requirements when used under specific conditions. Therefore, as previously mentioned, two additional sub-characteristics have been proposed under this quality characteristic: modularity and compatibility.

Reliability is the capability of the software to maintain its level of performance under stated conditions for a stated period of time. Reliability has three sub-characteristics, consisting of maturity, fault tolerance and recoverability [9]. In terms of ERP systems, reliability refers to the capability of the system to maintain its service provision under specific conditions for a specific period of time; in other words, it concerns the probability that the ERP system fails within a given period of time.

Usability is the capability of the software to be understood, learned, used and found attractive by users when used under specified conditions. Usability has a set of sub-characteristics, including understandability, learnability and operability [22]. This characteristic is employed in this study to suggest that ERP systems should be understood, learned, used and executed under specific conditions. Thus, complexity has been proposed as an additional sub-characteristic under this quality characteristic.

Efficiency refers to the capability of a system to provide performance relative to the amount of resources used, under stated conditions. To be measurable, it has also been divided into three sub-characteristics, namely time behaviour, resource utilization and efficiency compliance [21]. Adapting this characteristic to ERP systems suggests that the systems should be concerned with the software and hardware resources used when providing the ERP system's functions.

Maintainability is the capability of the software to be modified. Maintainability consists of the sub-characteristics analyzability, changeability, stability and testability [14] [16]. In this research, any feature or part of the ERP system should be modifiable. Identifying a feature or part to be modified, modifying it, diagnosing the causes of failures, and validating the modified ERP system should not require much effort. Thus, reusability has been proposed as a sub-characteristic under this quality characteristic.

Finally, the portability of software refers to the capability of the software to be transferred from one environment to another [21]. Therefore, the ERP system should be usable with different operating systems, at different organizations or departments, and with a variety of hardware. Similar to the previous quality characteristics, portability has a set of sub-characteristics, namely adaptability, installability, coexistence and replaceability [9].
Table 1 presents the ERPSQM, its quality characteristics, and its sub-characteristics, as well as how these quality characteristics and sub-characteristics influence the quality of ERP systems in organizations.

Evaluation of ERPSQM Using the Analytic Hierarchy Process (AHP)

This study not only proposes a quality model for ERP systems, but also applies the Analytic Hierarchy Process (AHP) to rank and evaluate the quality characteristics and sub-characteristics of the model. The AHP can be applied to measure quality as a single parameter. It is a multi-criteria decision making method that was proposed by Thomas L. Saaty in 1980 [7]. In order to handle ambiguity in multi-criteria decision making problems, the AHP uses pair-wise comparison matrices [6]. Human judgment is not always consistent; therefore, the AHP allows some small inconsistency in the matrix. For a comparison matrix A, one finds a vector ω satisfying Equation (1):

A · ω = λmax · ω, with λmax ≥ n, (1)

where ω is the eigenvector, λmax is the principal eigenvalue, and n represents the number of elements being compared. The difference between λmax and n is an indication of judgment consistency.

In order to verify the consistency of a comparison matrix, Saaty [7] proposed the Consistency Index (CI) and Consistency Ratio (CR) of Equations (2) and (3):

CI = (λmax − n)/(n − 1), (2)

CR = CI/RI, (3)

where RI is the average consistency index of randomly generated matrices of the same order; CR should satisfy the condition CR ≤ 0.1.

Allocating the Weights of ERPSQM Characteristics and Sub-Characteristics

In this study, a survey of twenty experts was conducted in order to assign pair-wise relative weights to the ERPSQM characteristics and sub-characteristics. Of these experts, eight are professionals working in the software industry with expertise in ERP systems development; the remaining twelve are academics who either have good knowledge of ERP systems or are doing research in this area. Only fifteen of the experts filled in the survey forms and successfully sent them back. The survey form involves seven tables for filling in the pair-wise relative weight values of the ERPSQM characteristics and sub-characteristics. The first table is for the pair-wise relative weight values of the model characteristics, namely functionality, reliability, usability, efficiency, maintainability and portability. The remaining tables are for the pair-wise relative weight values of the sub-characteristics of the ERPSQM's characteristics. The means of the collected pair-wise weight values of characteristics and sub-characteristics are entered into square matrices, to which the AHP is applied to calculate the eigenvectors (ω) and eigenvalues (λ). For instance, A = [aij] in Equation (4) represents the square matrix of the six main ERPSQM characteristics.

Calculating the Eigenvector and Eigenvalue

The next step is to determine the eigenvectors. There are many ways to calculate the priority vector, i.e., the eigenvector; here the geometric-mean method is used: the entries of each row of the matrix are multiplied together and the 6th root of the product is taken. The 6th roots are then summed, and that sum is used to normalize the eigenvector so that its values sum to one: the 6th root for each row is divided by the sum of the 6th roots.
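As an illustration, the following minimal Python sketch (using only numpy) implements the geometric-mean approximation of the priority vector together with the λmax, CI and CR checks described above. The 6×6 comparison matrix shown is a made-up example, not the paper's survey data; the random index RI = 1.24 for n = 6 is the standard value from Saaty's table.

```python
import numpy as np

# Saaty's random index (RI) values for matrix orders 1..10.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_priorities(A):
    """Geometric-mean priority vector plus lambda_max, CI and CR."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # n-th root of the product of each row, then normalize to sum to 1.
    geo_means = A.prod(axis=1) ** (1.0 / n)
    w = geo_means / geo_means.sum()
    # lambda_max estimated from A.w = lambda_max.w (Equation (1)).
    lam_max = np.mean((A @ w) / w)
    ci = (lam_max - n) / (n - 1)   # Equation (2)
    cr = ci / RI[n]                # Equation (3)
    return w, lam_max, ci, cr

# Hypothetical 6x6 pair-wise comparison matrix for the six main
# characteristics (NOT the paper's survey data).
A = np.array([
    [1,   2,   3,   4,   5,   7],
    [1/2, 1,   2,   3,   4,   6],
    [1/3, 1/2, 1,   2,   3,   5],
    [1/4, 1/3, 1/2, 1,   2,   4],
    [1/5, 1/4, 1/3, 1/2, 1,   2],
    [1/7, 1/6, 1/5, 1/4, 1/2, 1],
])

w, lam_max, ci, cr = ahp_priorities(A)
print("weights:", np.round(w, 3))
print(f"lambda_max={lam_max:.3f}, CI={ci:.4f}, CR={cr:.4f} (ok if <= 0.1)")
```

As a worked check against the paper's numbers: with the reported mean λmax of 6.388 for n = 6, Equation (2) gives CI = (6.388 − 6)/5 ≈ 0.078 and, with RI = 1.24, Equation (3) gives CR ≈ 0.063, which satisfies the CR ≤ 0.1 acceptance condition.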
Table 2 presents all the calculations. It can be seen that the eigenvector of relative importance, in other words the weights of the main characteristics, is as follows: functionality (0.384), reliability (0.242), usability (0.163), efficiency (0.113), maintainability (0.063), portability (0.036). Therefore, from the developers' and academics' perspectives, the functionality of ERP systems is the most important characteristic, followed by reliability, usability, efficiency and maintainability, with portability the least important. These results provide more than a ranking: the relative weights form a ratio scale that can be divided among the quality characteristics. The next step is to check the consistency of the participants' answers by obtaining the consistency index (CI) and consistency ratio (CR) from Equations (2) and (3); to do that, the eigenvalue is required, which can be calculated from Equation (1). All the values obtained were merged with the values in Table 2. These results reveal that the λ values satisfy the condition λ > n, with a mean λ of 6.388 > 6, and applying Equations (2) and (3) yields a CR within the acceptable range.

Similarly, the AHP has been applied to calculate the eigenvectors and eigenvalues of the sub-characteristics of the quality characteristics from functionality through portability. The results are as follows: the eigenvector values (0.382, 0.252, 0.081, 0.119, 0.090, 0.048 and 0.029) are for the sub-characteristics of functionality; (0.687, 0.226 and 0.087) for the sub-characteristics of reliability; (0.608, 0.217, 0.086 and 0.089) for the sub-characteristics of usability; (0.505, 0.196, 0.140, 0.063 and 0.096) for the maintainability sub-characteristics; (0.536, 0.241, 0.144 and 0.078) for the efficiency sub-characteristics; and (0.705, 0.223 and 0.073) for the sub-characteristics of portability. Regarding the consistency test, all the estimates are acceptable, since all the CR values of the sub-characteristics are less than 0.1.

Therefore, this empirical study provides companies and organizations with the quality characteristics, and their relative importance, that should be taken into account in developing and implementing ERP systems, so that successful implementation of the ERP systems can be better assured.

Conclusions

The aim of this study is to develop a new ERP Systems Quality Model. This model is an extension of the ISO/IEC 9126 international software quality standard, which has been agreed upon by a majority of the international community. The proposed model enhances the hierarchy of this standard by adding some new sub-characteristics, namely compatibility, modularity, complexity, and reusability, which have been added under the characteristics of functionality, usability and maintainability. These new sub-characteristics are included on the basis of the features that distinguish ERP systems from other types of information systems. The existing characteristics and sub-characteristics of ISO/IEC 9126 that are part of the ERPSQM have also been defined in the context of ERP systems.
In order to evaluate the proposed model and rank its characteristics and sub-characteristics, the Analytic Hierarchy Process (AHP) has been applied. Pair-wise relative weights of characteristics and sub-characteristics were obtained through a survey of twenty experts, and the means of the collected data were taken as the pair-wise relative weights. The AHP was then applied to these pair-wise relative weights to obtain the corresponding relative weights of the proposed model's characteristics and sub-characteristics, such that the total quality weights sum to one.
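A short follow-on sketch shows the standard AHP aggregation step implied by this procedure: a sub-characteristic's global weight is its local weight multiplied by its parent characteristic's weight, so that all global weights again sum to one. The characteristic weights are the ones reported above; because the text does not state which eigenvector value belongs to which sub-characteristic, the snippet keeps the local weights as anonymous ordered lists.

```python
# Characteristic weights as reported in the text.
char_weights = {
    "functionality": 0.384, "reliability": 0.242, "usability": 0.163,
    "efficiency": 0.113, "maintainability": 0.063, "portability": 0.036,
}

# Local (within-characteristic) sub-characteristic weights as reported;
# the name-to-value mapping is not spelled out in the text, so the
# values are kept as ordered lists.
local_weights = {
    "functionality": [0.382, 0.252, 0.081, 0.119, 0.090, 0.048, 0.029],
    "reliability": [0.687, 0.226, 0.087],
    "usability": [0.608, 0.217, 0.086, 0.089],
    "maintainability": [0.505, 0.196, 0.140, 0.063, 0.096],
    "efficiency": [0.536, 0.241, 0.144, 0.078],
    "portability": [0.705, 0.223, 0.073],
}

# Global weight = characteristic weight x local sub-characteristic weight.
for name, locals_ in local_weights.items():
    globals_ = [round(char_weights[name] * w, 4) for w in locals_]
    print(name, globals_, "sum:", round(sum(globals_), 4))
```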
Heavy Metals Distribution and Their Correlation with Clay Size Fraction in Stream Sediments of the Lesser Zab River at Northeastern Iraq

The distribution of heavy metals (Cr, Co, Ni, Cu, Zn, Rb, Sr, Ba, Pb, V and Ga) and their correlation with the clay fraction were investigated. Fifteen samples of stream sediments were collected from the Lesser Zab River (LZR), one of the three major tributaries of the Tigris River in north-eastern Iraq. Grain size distributions and textural composition indicate that these sediments are mainly characterized as clayey silt and silty sand. This indicates that the fluctuation in the relative variation of the grain size distribution in the studied sediments is due to local contrasts in the hydrological conditions, such as stream speed and energy of transportation, and to the geological, geomorphological and climatic characteristics that influenced sediment properties. On the other hand, the clay mineral assemblages consist of palygorskite, kaolinite, illite, chlorite and smectite, which in turn reveals that these sediments were derived from rocks of similar mineralogical and chemical composition, in agreement with other published works. The clay mineral assemblages demonstrate that no major phase transformations were observed except for the formation of palygorskite from smectite, since this mineral pair exhibits a good negative correlation (−0.598) within the LZR sediments. To determine the interrelation between the heavy metals and the clay fractions in the studied samples, correlation coefficients and factor analysis were performed. The heavy metals show significant positive correlations with one another and with Al2O3, Fe2O3 and MnO. In addition, factor analysis extracted two major factors: the first factor, carrying the highest percentage of the variation (60%), loads on the major oxides (Fe2O3, Al2O3 and MnO in weight %), the heavy metals and the clay fraction, while the second factor, with 14% of the variance, includes Cr and the silt fraction, which indicates the affinity of the heavy metals for being adsorbed onto solid phases such as clay particles. These observations suggest that a common mechanism regulates the heavy metal abundances, and that their concentrations are significantly controlled by the fine clay fraction, the clay mineral abundance and ferromanganese oxides-hydroxides.

Introduction

Fluvial sediments are sourced from exposed rocks, among them crystalline rocks, which are weathered by streams under surface conditions. Other fluvial sediments originate from soils, which either pass on the mineralogical composition of the source rock or alter it, and new mineral phases may form [1].
Sediments comprise different grain sizes (from coarse sand to colloidal grains). Studying recent sediments thus offers insight into the physicochemical and environmental regimes of shallow marine and fluvial settings, which are greatly influenced by earth processes and human activities [2] [3]; sediments are therefore commonly used as pollution monitors in different aquatic regimes. Colloidal grains, however, are strongly influenced by various parameters and thus rarely provide accurate analytical results, whereas fine-grained sediments are less affected and are therefore frequently used as chemical and biochemical pollution indicators [4].

Moreover, fluvial sediments represent the main settling agent for heavy metals in water systems; these heavy metals are not washed out of the system but can instead be cycled within the water and sediment storage by various biological and chemical processes [5] [6] [7]. Hence, the main sources of heavy metal enrichment in water systems, whatever the metals' origins, are rock and soil erosion delivered with the fine-grained sediments [8] [5] [9] [10].

Additionally, the finest sediments (i.e., the clay-size fraction) are the main size fraction used by researchers to determine heavy metal concentrations, which are related in part to their mobility [11] [12]. Furthermore, heavy metals introduced into aquatic regimes by various human activities are adsorbed onto negatively charged surfaces such as clay minerals, organic materials and other oxides, in insoluble organic and inorganic bound forms [5] [12]. This study aims to identify the clay mineral assemblages and their relationship with the heavy metal distribution and origins within the Lesser Zab River (LZR) sediments.

Geological Setting

The regional geology of northern Iraq consists of the Zagros Mountain Range, with a NW-SE structural trend, in the north-eastern part, and the Taurus Mountain Range, with an E-W structural trend, in the north and northwestern parts. The structural framework of Iraq was divided by [13] into the Thrust Zone, Folded Zone and Unfolded Zone (Figure 1).
The LZR and its tributaries traverse the Zagros Suture and the Unstable Shelf tectonic zones of northeastern Iraq. The Zagros Suture Zone, which is shared between Iran and Iraq, consists dominantly of igneous and metamorphic rocks belonging to the Shalair, Penjween-Walash and Qulqula-Khwaqurk Zones [14]. The Unstable Shelf is dominated by sedimentary rock formations that are parts of three tectonic zones: the Imbricated, High Folded and Foothill zones. It is important to know the general lithological characteristics of the rock units and formations that form the bedrock of the LZR and its tributaries, because these bedrocks are the source of the water and recent sediments. The LZR traverses many tectonic and rock units belonging to the Unstable Shelf, which was divided by [14] into four divisions: the Zagros suture zones (including the Penjween-Walash zone), the Imbricated zone (Balambo-Tanjero zone), the High Folded zone and the Foothill zone (Hamrin-Makhul subzone). Tectonically, the study area extends from the highly folded zone of the Foreland Basin into the foreland and related basins, as well as the platform region of the Arabian Plate (Figure 2). According to [15], the folded zone contains three tectonic zones, from west to east: the Mesopotamian zone (Quaternary molasse and buried structures), the Foothill zone (Neogene molasse and long anticlinal structures separated by broad synclines), and the High Folded zone (Paleogene molasse and harmonically folded structures). These longitudinal tectonic zones are segmented into blocks bounded by ENE-WSW (shifting to NE-SW) transverse faults with both vertical and horizontal displacement. The transverse blocks have been active at least since the Late Cretaceous and greatly influenced the sedimentary facies of the Cretaceous and Tertiary sequences [16]. Structurally, the studied area (Lesser Zab River) lies in the Foothill and High Folded zones of the platform foreland of Iraq, as shown in Figure 2 [17].

The LZR is situated between 36°00′23.67″ and 35°41′14.9″ North and between 45°14′29.75″ and 44°03′35.4″ East. The elevation ranges between 230 and 631 m above sea level. The LZR travels 400 km until its junction with the Tigris River 35 km southwest of Sharqat city, and its catchment covers an area of 22,250 km² (Figure 3) [18]. In the study area the river passes through many villages, towns and agricultural lands where possible man-made pollution sources could affect its water quality, in addition to natural pollution sources such as spring waters and the erosion and weathering of outcrops [10].

Material and Methods

Fifteen recent sediment samples were collected from the Lesser Zab main stream (Figure 3), located in north-eastern Iraq, during April 2009. About 2 - 4 kg of sediment was collected manually from the main stream, in contact with running water, using a metal bucket of dimensions 26 × 8 × 3 cm. Sampling was performed at a distance from the river banks to avoid possible contamination from the bank material. Details of the analysis techniques are given in [19] [20].

The samples were sun-dried and then ground into fine powder in an agate mortar. The samples were sieved to pass through 200 µm and then pressed into thick pellets of 32 mm diameter using wax as binder. The USGS standards GEOL, GBW-7109 and GBW-7309 sediments were pressed into pellets in an equivalent manner as the samples and used for quality assurance [19] [20].
Multi-element concentrations were determined using polarized energy dispersive XRF. The PEDXRF analysis was carried out at the Earth Sciences Research and Application Center of Ankara University, Turkey, using a Spectro XLAB 2000 PEDXRF spectrometer, following [21].

Grain size analysis was carried out to separate sand from silt and clay by wet sieving (0.063 mm, 230 mesh sieve). The silt and clay fractions passing the 230 mesh sieve were separated using the sedimentation tube method according to [22] [23]. The statistical parameters of the grain size data were calculated using the equations proposed by [24]. The mineralogical characteristics of the samples were determined by X-ray diffraction analysis, using a PANalytical X'Pert PRO MPD diffractometer with Ni-filtered Cu Kα radiation, for diagnosis and assessment of the mineral components as well as identification of the clay minerals in the isolated clay-size fraction (<2 μm). Both randomly oriented powder samples and oriented slides were prepared following the procedures described by [23] [25]. They were scanned over the range from 5° to 40° 2θ at a scanning speed of 2° 2θ/min. The oriented slides were analyzed in several states: untreated; treated with ethylene glycol at 60°C for 2 h to distinguish the expandable mineral phases; and heated at 550°C for 2 h for chlorite detection. All basal reflection peaks of the minerals were identified according to ASTM cards [26]. The semi-quantitative determination of the relative amounts of the major clay minerals was calculated using the PANalytical X'Pert HighScore software, based on specific reflections and intensity factors.

Sediment Grain Size and Mineralogical Analysis

The results of the grain size analysis and the textural composition of the studied sediment samples are given in Table 1 and Figure 2, which provide the percentages of sand, silt and clay. The sand percentage varies widely, from 80% down to 1%, with an average of 22.7%, while the silt portion was high at all sites (12% - 87%). The quantity of clay was relatively lower than that of silt, ranging between 5% and 48% (25.5% on average), and its percentage varies inversely with the silt portion. According to the classification of [27], most of the studied recent LZR sediments can be classified as clayey silt and silty sand, which make up 66.67% and 20.0% of the studied samples, respectively (Figure 2). This indicates that the fluctuation in the relative variation of the grain size distribution within the studied sediments is due to local contrasts in the hydrological conditions, such as stream speed and energy of transportation, and to the geological, geomorphological and climatic settings that influenced sediment properties [28].

Moreover, the XRD patterns of the oriented and non-oriented slides obtained under different measurement conditions (Figures 3 and 4, Table 2) reveal the presence of both clay and non-clay mineral contents; the non-clay minerals are represented by quartz, which appears at the 3.34 Å basal reflection. The clay minerals recorded in the recent LZR sediments coincide with previous work on older rock units, as suggested by [18] [29]. This suggests that these sediments are inherited from source rocks exposed in the catchment drainage basin of the LZR. Thus, no major transformation was observed in these sediments except for the new formation of palygorskite from smectite layers, since this mineral pair exhibits a good negative correlation (−0.598) in the LZR sediments.
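Returning to the texture classification step above, the small function below assigns a sand/silt/clay percentage triple to a simple textural class. The cutoff rules are a simplified stand-in inspired by Shepard-style ternary classification, not the exact scheme of [27], so the thresholds are assumptions for demonstration only.

```python
# Adjective forms for the secondary component of a mixed texture class.
ADJ = {"sand": "sandy", "silt": "silty", "clay": "clayey"}

def texture_class(sand, silt, clay):
    """Crudely classify a sediment sample from its sand/silt/clay
    percentages. Thresholds are illustrative, not the scheme of [27]."""
    total = sand + silt + clay
    if abs(total - 100.0) > 1.0:
        raise ValueError(f"percentages sum to {total}, expected ~100")
    # Dominant component first, then the qualifying secondary component.
    parts = sorted([("sand", sand), ("silt", silt), ("clay", clay)],
                   key=lambda p: p[1], reverse=True)
    (dom, dom_pct), (sec, sec_pct), _ = parts
    if dom_pct >= 75:
        return dom                        # e.g. "silt"
    if sec_pct >= 20:
        return f"{ADJ[sec]} {dom}"        # e.g. "clayey silt"
    return dom

# Example values within the ranges reported for the LZR sediments.
print(texture_class(10, 60, 30))   # -> clayey silt
print(texture_class(70, 25, 5))    # -> silty sand
```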
Heavy Metal Concentrations in Clay Fractions

The concentrations of the heavy metals (Co, Cr, Cu, Ni, Ba, Pb, Rb, Zn, V and Ga) and major elements (Al2O3, Fe2O3 and MnO) in the clay fraction (<2 μm) of the sediments, and their partitioning in the Lesser Zab stream, are provided in Table 3 and Figure 5. Statistical parameters of the data (arithmetic mean, maximum and minimum values, standard deviation and coefficient of variation) were calculated to observe the general variability in the LZR sediment chemistry (Table 3). The concentrations of the trace elements generally vary by 5% to 27% across the LZR. The average contents of Al2O3 and Fe2O3 are about 9.98% and 6.27%, respectively (the MnO average is given in Table 3).

In addition to the major elements described above, ten trace elements (Co, Cr, Cu, Ni, Ba, Pb, Rb, Zn, V and Ga) were analyzed in the clay fraction samples collected from the LZR sediments (Table 4). The relationship between the major oxides and the heavy metals is listed in Table 4 and shown in Figures 6-8. The distribution of Co, Ni, Ba, Pb, Rb, Zn and Ga, and to a lesser extent Cr, V and Cu, shows significant positive correlations among themselves and with the major oxides (Al2O3, Fe2O3 and MnO) in the clay fraction, suggesting that a common mechanism regulates their abundance; these element concentrations are thus controlled mainly by Al-rich phases such as the clay minerals and by Fe-Mg and Fe-Mn oxides [18]. Moreover, Co, Ni and Cu are sourced from clay minerals, while Zn could be sourced from ferromagnesian heavy minerals such as amphiboles and pyroxenes [30], as is common for the LZR and older sediments [18] [29]. [30] indicates that Co has moderate mobility, controlled mainly by adsorption and co-precipitation with Mn-Fe oxides. Ba and Rb are usually associated with feldspar and biotite [31] and with clay minerals [18]. In addition, lead (Pb) is generally adsorbed on iron oxide minerals, while Rb associates with feldspar and mica [30]. Copper has intermediate mobility, controlled by adsorption onto Fe and Mn oxides and organic matter; it is closely associated with geogenic (lithogenic) materials and occurs in ultrabasic ophiolitic rocks [32]. The increased heavy metal concentrations in the clay fraction of the LZR sediments can be related to adsorption onto fine-grained sediments. Thus, heavy metal partitioning is essentially influenced by the clay content, which in turn contributes significantly to the accumulation of heavy metals in the LZR sediments.

Principal Component Analysis (PCA)-Factor Analysis

PCA is used to determine the interactions between the measured independent properties. Principal component analysis has been widely applied in the interpretation of geochemical and hydrogeochemical data [33] [34] [35], and it is one of the multivariate statistical tools used to assess metal behavior in sediments and water [36]. PCA is also an approach to finding the most important factors describing the natural influences, retaining factors with eigenvalues ≥ 1.0 [37]. According to [33], factor loadings are classified as "excellent", "very good", "good" and "fair" for absolute loading values of >0.71, 0.71-0.63, 0.63-0.55 and 0.55-0.45, respectively. Here, loadings classified as excellent or very good were considered significant, loadings of <0.63 were considered insignificant, and factors containing only fair loadings (<0.55) are not interpreted. Factor analysis allows the elements with similar distributions to be grouped.

In this study, three factors were obtained for the LZR sediments, together accounting for 81.915% of the total variation in the system (the first component accounts for 60.119% of the variation, the second for 13.616% and the third for only 8.180%) (Table 5, Figure 9). Thus, three components reflect the relations between the measured variables.
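To make the workflow concrete, the sketch below runs a PCA on a small synthetic geochemical matrix (samples × variables), standardizes the variables first, keeps components with eigenvalues ≥ 1.0, and classifies the loadings with the thresholds of [33] quoted above. It uses numpy and scikit-learn and random data, so the numbers it prints are illustrative, not the paper's Table 5.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def loading_class(value):
    """Classify an absolute factor loading per the thresholds of [33]."""
    v = abs(value)
    if v > 0.71:
        return "excellent"
    if v > 0.63:
        return "very good"
    if v > 0.55:
        return "good"
    if v > 0.45:
        return "fair"
    return "insignificant"

# Synthetic stand-in for the LZR data: 15 samples x 6 variables.
rng = np.random.default_rng(0)
variables = ["Al2O3", "Fe2O3", "MnO", "Cu", "Cr", "clay%"]
X = rng.normal(size=(15, len(variables)))

# Standardize, then extract components with eigenvalue >= 1.0.
Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)
keep = pca.explained_variance_ >= 1.0

# Loadings = eigenvectors scaled by sqrt(eigenvalues).
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
for j in np.where(keep)[0]:
    pct = 100 * pca.explained_variance_ratio_[j]
    print(f"Factor {j + 1}: {pct:.1f}% of variance")
    for name, ld in zip(variables, loadings[:, j]):
        print(f"  {name:>6}: {ld:+.2f} ({loading_class(ld)})")
```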
The first factor explains 60.119% of the variation in the system and is very important because it loads on Cu, Co, Zn, Rb, Ba, Pb and Ga and the clay fraction. It can be considered the fine clay particle factor, showing the influence of the fine grains on the enrichment of the heavy metal contents. Also, the presence of Fe2O3, MnO and Al2O3 in this factor indicates the contribution of the Fe-Mn oxide-hydroxide phase and the clay minerals to the enrichment of the heavy metal contents within the LZR sediments; [18] illustrated that these element concentrations are controlled mainly by clay mineral abundances and Fe-Mg and Fe-Mn oxides. The second factor carries 13.616% of the total variance and loads on Cr and Ni and the silt fraction; it can be considered an independent fine mineral particle factor. Ni and Cr occur in the solid products of weathering [38], and Ni can substitute for Fe in Fe-rich minerals such as goethite and hematite [38]. Moreover, the most abundant independent mineral is usually chromite, which is resistant to weathering, together with heavy minerals such as tourmaline, rutile, hornblende and magnetite [39] [40] [41], which can occur in siliciclastic rocks in various grain sizes from sand to clay. Thus, the richness of Cr and Ni in the silt fraction of the LZR sediments is represented by this second factor.

Conclusions

Grain size distributions and textural composition indicate that these sediments are mainly characterized as clayey silt and silty sand in texture. This indicates that the fluctuation in the relative variation of the grain size distribution of the studied sediments is due to local contrasts in the hydrological conditions, such as stream speed and energy of transportation, and to the geological, geomorphological and climatic settings that influenced the sediment properties.

The clay mineral assemblages in the LZR sediments consist mainly of palygorskite, kaolinite, illite, chlorite and smectite, in agreement with previous works, which reveals that these sediments were derived from rocks of similar mineralogical and chemical composition and that heavy metal partitioning is linked to the amount of the fine grain-size fraction.

The distribution of the heavy metals shows significant positive correlations among themselves and with the major oxides (Al2O3, Fe2O3 and MnO) in the clay fraction, demonstrating that a common mechanism regulates their abundance and suggesting that these element concentrations are controlled mainly by Al-rich phases such as the clay minerals and by Fe-Mg and Fe-Mn oxides.

Figure 2. Bar chart illustrating the relationship between sediment type and the percentage of each type.
Figure 3. X-ray diffraction pattern showing the main detected minerals of a studied bulk sample.
Figure 4. X-ray diffraction patterns of the clay minerals of a studied clay-fraction sample: (a) untreated sample; (b) glycol-treated sample; (c) heated to 350°C; (d) heated to 550°C.
Figure 6. The relationship between Al2O3 and the other elements in the studied samples.
Figure 7. The relationship between Fe2O3 and the other elements in the studied samples.
Figure 8. The relationship between MnO and the other elements in the studied samples.
Figure 9. Principal component factor analysis loading two-dimensional plots for the sixteen variables.
Table 1. Grain size analysis and the common texture of the LZR sediments following Carver (1971).
Table 2. Estimation of the clay mineral constituents of the LZR sediment samples.
Table 3. Element concentrations of the LZR sediments (Al2O3, MnO and Fe2O3 in wt%; Cr, Co, Ni, Cu, Zn, Rb, Sr, Ba, Pb, V and Ga in ppm).
Table 4. Correlation coefficients for the selected variables: oxides, trace elements and sediment size fractions.
Interaction between Ribosome Assembly Factors Krr1 and Faf1 Is Essential for Formation of the Small Ribosomal Subunit in Yeast

Sanduo Zheng (郑三多), Pengfei Lan (兰鹏飞), Ximing Liu (刘希明), and Keqiong Ye (叶克穷)

From the Department of Biochemistry and Molecular Biology, College of Life Sciences, Beijing Normal University, Beijing 100875, the Graduate School of Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, the National Institute of Biological Sciences at Beijing, Beijing 102206, and the Laboratory of RNA Biology, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China

The ribosome of the yeast Saccharomyces cerevisiae is assembled from four rRNAs and 79 ribosomal proteins (r-proteins) through a complex and highly dynamic process (1-5). This process begins in the nucleolus, where rDNA repeats are transcribed by RNA polymerase I into a long 35 S precursor rRNA (pre-rRNA). The 35 S pre-rRNA undergoes extensive modification and nucleolytic processing to produce the mature 18 S, 5.8 S, and 25 S rRNAs. The 5 S rRNA is transcribed separately by RNA polymerase III. Numerous small nucleolar RNAs (snoRNAs) function as guides to direct site-specific modification of rRNA. A few snoRNAs, such as the conserved U3, U14, and snR30/U17, are involved in rRNA processing. Genetic and biochemical studies have identified ~200 protein assembly factors in yeast required for ribosome synthesis. These factors include enzymes, such as RNA helicases, AAA+ ATPases, GTPases, kinases, nucleases, and RNA modification enzymes, many proteins with known protein- or RNA-binding domains, and additional proteins containing no recognizable domain. Ribosome assembly factors transiently associate with pre-rRNA at specific stages, forming distinct pre-ribosomal particles, but the molecular function of most assembly factors is not understood.

The earliest pre-ribosome, often termed the 90 S pre-ribosome or the small subunit processome, is assembled cotranscriptionally on the nascent pre-rRNA transcript and can be visualized as a terminal ball by electron microscopy of rDNA spreads (6, 7). In addition to the 35 S pre-rRNA, the 90 S particle consists of nearly 50 assembly factors, the U3 snoRNA, and a subset of small subunit r-proteins (1, 8, 9). The U3 snoRNA forms multiple base-pairing interactions with the 5′-external transcribed spacer and 18 S regions of pre-rRNA, and it is essential for the formation of 90 S pre-ribosomes (1). Within the 90 S pre-ribosome, the pre-rRNA is cleaved at sites A0, A1, and A2, leading to excision of the 5′-external transcribed spacer and separation of the 20 S and 27SA2 pre-rRNAs; these pre-rRNAs are destined for the small 40 S and large 60 S ribosomal subunits, respectively. The 20 S pre-rRNA, packed in pre-40 S particles, is exported to the cytoplasm and matures into 18 S rRNA after cleavage at site D (10). Compared with 90 S particles, pre-40 S particles have a much simpler composition, with most of the 40 S r-proteins and a handful of assembly factors (10, 11). Understanding the structure and assembly of the enormous 90 S particle represents a major challenge for the field. One step toward this goal is to elucidate how each protein interacts with other protein and RNA components using methods with the highest attainable resolution.
In this regard, a few proteins of the 90 S particle were shown to form independent subcomplexes, including UTPA (also known as tUTP), UTPB, UTPC, the U3 snoRNP, the Mpp10-Imp3-Imp4 complex, and the Bms1-Rcl1 complex (8, 12-17). Analysis of assembly interdependence has revealed a hierarchical assembly order for several subcomplexes (18, 19). The interaction networks among the seven proteins of UTPA and among the six proteins of UTPB have been mapped by yeast two-hybrid assays (20, 21). Most recently, protein cross-linking and mass spectrometry analysis has mapped the spatial closeness of UTPB components at single-residue resolution (22). The RNA-protein UV cross-linking approach has been used to locate the precise RNA-binding sites of 40 S assembly factors (23-26). However, high-resolution structural information is very limited for the 90 S particle (as well as other pre-ribosomal particles) and is available only for the H/ACA and C/D types of snoRNP (27) and a few individual components (28-33).

In this study, we focus on two interacting proteins, Krr1 and Faf1, present in the 90 S particle. Both are essential nucleolar proteins in yeast that are required for 18 S rRNA processing at sites A0, A1, and A2 and for the formation of small ribosomal subunits (34-37). They copurified with other components of the 90 S particle in tandem affinity purification (TAP) experiments (8, 38). Krr1 contains a K homology (KH) domain, which is one of the most abundant nucleic acid-binding motifs recognizing single-stranded RNA or DNA (39). Besides its putative RNA-binding activity, Krr1 also binds proteins. It binds Faf1 in a yeast two-hybrid assay (37) and interacts genetically and physically with Kri1, another factor essential for 40 S formation (34). Krr1 is universally conserved in eukaryotes, and its orthologs in Drosophila, Schizosaccharomyces pombe, and humans have been shown to be involved in ribosome biogenesis (40-42). Faf1 contains no recognizable domain and has homologs found so far only in Ascomycetes.

The assembly of the 40 S ribosome requires another KH domain protein, Dim2/Pno1 (43-45). Dim2 is associated with both the early 90 S particle and the late pre-40 S particle and localizes to the nucleolus and the cytoplasm (8, 10). Both Krr1 and Dim2 contain a single conserved KH domain. In archaea, a Dim2-like protein that has tandem KH domains is implicated in ribosome biogenesis and translation initiation. The crystal structure of Pyrococcus horikoshii Dim2 (PhDim2) has been determined both in the free state and in complex with a 3′-end fragment of 16 S rRNA and the translation initiation factor eIF2α (46, 47).

We have mapped the interacting regions of Krr1 and Faf1 and determined a cocrystal structure of the Krr1 core domain bound to a short fragment of Faf1. The complex structure reveals the presence of tandemly packed KH domains in Krr1 and a novel protein-binding mode of the KH domain. We demonstrate that the Krr1-Faf1 interaction is essential for 18 S rRNA processing and yeast growth. However, disruption of the Krr1-Faf1 interaction did not prevent their incorporation into 90 S particles, suggesting that the interaction is required for maintaining a functional conformation of 90 S particles.

EXPERIMENTAL PROCEDURES

DNA Cloning and Protein Purification—DNA cloning was mainly performed with the nonligation-based methods In-Fusion (TaKaRa) and Transfer-PCR (48).
The KRR1 gene was amplified by PCR from yeast genomic DNA, cloned into a modified pET28a vector (Novagen), and expressed with an N-terminal His6-Smt3 tag. The FAF1 gene was cloned into multiple cloning site 1 of a modified pETDuet-1 vector and expressed with an N-terminal His6-GST tag followed by a PreScission cleavage site. Point and deletion mutations were generated by the QuikChange method and confirmed by DNA sequencing.

His6-Smt3-tagged Krr1 was expressed in Escherichia coli Rosetta 2(DE3) (Novagen). Bacteria were cultured in LB medium at 37°C to an A600 of 0.8, and protein expression was then induced with 0.5 mM isopropyl 1-thio-β-D-galactopyranoside overnight at 18°C. Cells were harvested, resuspended in buffer A (50 mM Tris-HCl, pH 8.0, and 500 mM NaCl), and lysed by sonication. After centrifugation, Krr1 was purified with a HisTrap column (GE Healthcare) and eluted with 500 mM imidazole in buffer A. The pooled fractions of Krr1 were incubated with Ulp1 for 1 h on ice to cleave the His6-Smt3 tag. The sample was loaded directly onto a heparin column (GE Healthcare) and eluted at ~700 mM NaCl with a linear salt gradient in 50 mM Tris-HCl, pH 8.0. Krr1(32-222) was labeled with selenomethionine (SeMet) in M9 medium by blocking methionine biosynthesis. The SeMet-labeled protein was purified by the same procedure as the unlabeled protein, except that the lysis buffer was supplemented with 1 mM DTT.

His6-GST-tagged Faf1 was expressed in BL21-Gold (DE3) cells. Protein expression, cell lysis, and HisTrap chromatography were performed as for Krr1. The His6-GST tag of Faf1 was cleaved with PreScission overnight at 4°C. The sample was diluted 3-fold with 25 mM HEPES-KOH, pH 7.6, and loaded onto a Q column (GE Healthcare). The flow-through containing Faf1 was collected and concentrated with 3-kDa cutoff ultrafiltration devices (Amicon). For assembly and purification of the Krr1-Faf1 complex, individually purified Krr1 and Faf1 proteins were mixed in a 1:2 molar ratio and incubated on ice for 1 h. The binary complex was separated from excess free Faf1 on a Superdex S200 10/300 column equilibrated in 10 mM Tris-HCl, pH 8.0, and 300 mM NaCl.

Crystallization, Data Collection, and Structure Determination—The native and SeMet-labeled complex of Krr1(32-222) and a Faf1 fragment containing residues 145-169 and 199-220 was crystallized at 20°C using the hanging-drop vapor diffusion method by mixing 1 μl of protein (20 mg ml−1 in 10 mM Tris-HCl, pH 8.0, and 300 mM NaCl) and 1 μl of reservoir solution containing 0.2 M tri-ammonium citrate and 20% (w/v) PEG 3350. Rod-shaped crystals appeared after 1 day for both the native and SeMet-labeled proteins. The crystals were cryoprotected in 20% glycerol made up in the reservoir solution and flash-frozen in liquid nitrogen. Diffraction data were collected at the Shanghai Synchrotron Radiation Facility beamline BL17U and processed with HKL2000 (49). The crystal belongs to space group P21 and contains two copies of the Krr1-Faf1 heterodimer per asymmetric unit. The phases were calculated with SHARP using the single-wavelength anomalous dispersion method, based on a selenium derivative dataset collected at a wavelength of 0.9793 Å, and were solvent-modified (50). The model was built in Coot and refined in Refmac and Phenix (51-53). The model-derived phases were iteratively combined with the experimental single-wavelength anomalous dispersion phases in SHARP to improve the electron density map.
The final model includes two Krr1 molecules with residues 38-211, two Faf1 molecules with residues 144-163 (Pro-144 is from the vector), and 14 water molecules. RAMPAGE analysis showed that 96.9% of the residues are in favorable regions, 2.9% in allowed regions, and 0.3% in outlier regions (54).

GST Pulldown Assay—His6-GST-tagged Faf1 and its variants were expressed and purified with Ni2+ beads. Krr1(32-222) was purified as described above, and full-length Krr1 was briefly purified with Ni2+ beads followed by Ulp1 cleavage. For interaction analysis, individually purified GST-Faf1 and Krr1 proteins were mixed at ~10 μM concentrations in a volume of 200 μl. The mixtures were incubated with 15 μl of glutathione-Sepharose beads and gently rotated for 30 min at 4°C. After the beads were washed three times with 1 ml of buffer A (50 mM Tris-HCl, pH 8.0, and 500 mM NaCl), the bound protein was eluted with 20 μl of 10 mM glutathione in buffer A. The input and eluate samples of the glutathione-Sepharose beads were resolved by SDS-PAGE and stained with Coomassie Blue.

The FAF1 gene, including 295 bp of sequence upstream of the start codon and 652 bp of sequence downstream of the stop codon, was amplified by PCR from yeast genomic DNA and cloned into plasmid pRS416 with a URA3 marker to yield pRS416-FAF1. The FAF1 ORF was also cloned into the LEU2 p415GPD-HA and HIS3 p413GPD-TAP plasmids to express N-terminally HA-tagged and C-terminally TAP-tagged Faf1 protein under the control of the glyceraldehyde-3-phosphate dehydrogenase (GPD) promoter, respectively. Point and deletion mutations were generated by the QuikChange method with appropriate primers and confirmed by DNA sequencing. For construction of the FAF1 shuffle strain, BY4741 was transformed with pRS416-FAF1, and the genomic FAF1 gene was replaced with a kanMX6 cassette from plasmid pFA6a-kanMX6 by homologous recombination (55). Positive clones were selected on Ura-deficient SC medium supplemented with the antibiotic G418. Clones were confirmed by PCR with appropriate primers to make sure that recombination had occurred in the genome but not in the pRS416-FAF1 plasmid.

Growth Assay—The faf1Δ strain containing pRS416-FAF1 was transformed with a p415GPD plasmid expressing wild-type or mutant HA-Faf1. The transformants were grown in 1 ml of Ura- and Leu-deficient SC medium overnight at 30°C. The culture was adjusted to an A600 of 0.2 and 5-fold serially diluted in 96-well plates. Ten microliters of cells were spotted onto SC plates with or without 5-fluoroorotic acid and grown for 3 days at 20, 30, and 37°C.

Sucrose Gradient Sedimentation, Western Blot, and Northern Blot—The KM804/KRR1-HA strain was transformed with an empty p413GPD plasmid or with plasmids expressing wild-type or mutant Faf1-TAP. The culture was first grown in galactose-containing YPGA medium and then shifted to glucose-containing medium to deplete Myc-Faf1.

Yeast Two-hybrid Assay—Two-hybrid assays were performed with the Matchmaker system (Clontech). Krr1 was cloned into the pGBKT7 plasmid encoding the Gal4 DNA-binding domain, and Faf1 or Kri1 was cloned into the pGADT7 plasmid encoding the Gal4 DNA activation domain. The two plasmids were cotransformed into the yeast AH109 strain, and Leu+ Trp+ transformants were selected. The transformants were grown in 1 ml of Leu- and Trp-deficient SC medium overnight at 30°C. 5-Fold serial dilutions of cells (10 μl) were spotted on SC plates lacking Leu, Trp, and His and containing 0, 1, or 5 mM 3-amino-1,2,4-triazole. The plates were incubated at 30°C for 3 days.
RESULTS

Structure Determination of the Krr1-Faf1 Complex—Krr1 and Faf1 were previously reported to interact with each other in a two-hybrid assay (37). We confirmed their physical interaction with a GST pulldown assay using recombinant proteins. To map the interacting region of each protein and find a compact complex suitable for crystallization, we constructed several fragments of each protein and tested their interactions (Figs. 1A and 4B; data not shown). These analyses show that a fragment of Faf1 spanning residues 145-220 is sufficient to bind Krr1. Moreover, Faf1(200-250) failed to bind Krr1, indicating that Faf1 residues 145-199 are the key interaction region. We initially obtained crystals for a complex of Krr1(32-222) and Faf1(145-220), but the crystal quality was poor. After deleting the nonconserved residues 170-198 from Faf1(145-220), the crystals showed improved quality and diffracted to 2.8 Å resolution. The structure was subsequently determined by single-wavelength anomalous dispersion phasing using a selenium derivative crystal. The current model was refined to an Rwork/Rfree of 0.232/0.278 with good geometry (Table 2 and Fig. 1, B and C).

The asymmetric unit contains two copies of the Krr1-Faf1 complex. The two complexes show nearly identical structures and dimer interfaces and are superimposable with a root mean square deviation of 0.369 Å over 176 Cα pairs. Residues 38-211 of Krr1 and residues 145-163 of Faf1 were resolved in the crystal, whereas the other terminal residues included in the crystallization fragments were invisible, likely due to structural disorder. The structure of the complex shows that the core region of Krr1 is composed of tandem KH domains, rather than one as previously thought, and that the 19-residue polypeptide of Faf1 forms an α-helix that binds to the C-terminal end of the Krr1 KH2 domain.

Krr1 Contains Tandem KH Domains and Resembles Archaeal Dim2-like Proteins—Although sequence analysis suggests that Krr1 contains only one conserved KH domain (residues 121-211, referred to as KH2), our structure shows that the region N-terminal to KH2 (residues 38-120, KH1) also adopts a KH-like fold. KH domains fall into two types of topologically distinct folds (57). The two KH domains of Krr1 belong to the type I fold, except for having an extra α-helix at the C terminus. Each domain is composed of three β-strands and four α-helices arranged as β1-α1-α2-β2-β3-α3-α4 (in the case of KH1) (Fig. 2A). The three β-strands form an antiparallel β-sheet, which packs against the four α-helices on one side. The classic KH domain is characterized by an invariant Gly-Xaa-Xaa-Gly (GXXG) sequence motif that is critical for nucleic acid interaction. The KH2 domain contains the GXXG motif, which forms a short loop connecting the first two α-helices of the domain (Fig. 3A). The KH1 domain lacks the GXXG motif, and its first two α-helices are simply bent at the junction. The two KH domains are aligned roughly in parallel such that the α3 and α4 helices of KH1 pack against the β-sheet of KH2. The extensive and mainly hydrophobic inter-domain interface literally fuses the two KH domains into an integral structure. The arrangement of tandem KH domains in Krr1 is distinct from that found in other tandem KH domains, for example Nova KH1 and KH2 (58), but it bears a strong resemblance to that of archaeal Dim2-like proteins (PDB code 1TUA) (46, 47).
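For readers who want to reproduce a comparison like the 0.369 Å Cα superposition of the two copies in the asymmetric unit, here is a minimal sketch using Biopython's SVDSuperimposer on two Cα coordinate arrays. The PDB file name and chain identifiers are placeholders, since the deposited entry is not named in this excerpt.

```python
import numpy as np
from Bio.PDB import PDBParser
from Bio.SVDSuperimposer import SVDSuperimposer

def ca_coords(chain):
    """Collect C-alpha coordinates keyed by residue number."""
    return {res.id[1]: res["CA"].coord for res in chain
            if res.has_id("CA")}

# Placeholder file/chain names; the actual PDB entry is not given here.
structure = PDBParser(QUIET=True).get_structure("cplx", "complex.pdb")
model = structure[0]
ca_a = ca_coords(model["A"])   # first Krr1 copy (assumed chain A)
ca_c = ca_coords(model["C"])   # second Krr1 copy (assumed chain C)

# Use only residues present in both copies, in the same order.
shared = sorted(set(ca_a) & set(ca_c))
x = np.array([ca_a[i] for i in shared], dtype=float)
y = np.array([ca_c[i] for i in shared], dtype=float)

sup = SVDSuperimposer()
sup.set(x, y)   # superimpose y onto x by least squares
sup.run()
print(f"{len(shared)} C-alpha pairs, RMSD = {sup.get_rms():.3f} A")
```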
The structures of Krr1 (residues 38-211) and PhDim2 (residues 33-208) can be aligned with a root-mean-square deviation of 1.28 Å over 101 Cα pairs (Fig. 3A).

KH domains generally recognize a core segment of four unpaired nucleotides in a conserved mode (59). The RNA is bound on top of an aliphatic platform formed by helix α1 and the edge of strand β2, and the GXXG loop contacts the sugar-phosphate backbone of the RNA along one side. In the previously determined RNA complex structure of PhDim2, an 11-nucleotide single-stranded RNA, derived from the 3′-end of E. coli 16 S rRNA, binds across both KH domains of PhDim2 in an extended conformation (46). In the case of Krr1, the conserved KH2 domain should retain the classic RNA-binding mode. In contrast, the divergent KH1, which lacks the GXXG motif, may not bind RNA or may bind it in an unconventional manner. In the aligned structures of Krr1 and PhDim2, the 5′ three nucleotides of the rRNA bound to PhDim2 are incompatible with the Faf1 helix associated with Krr1 (Fig. 3A). However, the first two nucleotides do not interact with PhDim2, and the nucleotide equivalent to the third one often adopts a different conformation in the RNA complex structures of other KH domains (Fig. 3B) (59). The bound Faf1 helix might affect the conformation of the RNA at the 5′ side of the core region.

The sequence of Krr1 is extremely conserved, with 49% identity and 60% similarity between the yeast and human proteins (Fig. 2A). A large portion of the Krr1 surface is covered by highly conserved residues (Fig. 3B), including the putative RNA-binding surface on KH2, the Faf1-binding channel, and the surface opposite the classic or degenerate RNA-binding sites of KH1 and KH2. This suggests that the Krr1 surface is involved in interactions with multiple partners.

Structural Consequences of Krr1 Mutations—Previous studies identified two temperature-sensitive mutants, krr1-17 and krr1-18 (8, 34). Our structure provides insight into the effects of these mutations in Krr1 (Fig. 3C). krr1-17 contains four mutations, K20E, K66N, C162R, and D261A, and its protein product is unstable at 37°C. Among these mutated residues, substitution of the buried Cys-162 with a charged arginine may destabilize the structure. Mutation of the structurally exposed and nonconserved residue Lys-66 may be neutral. The other two residues (Lys-20 and Asp-261) are not present in the structure of the Krr1 core domain and cannot be evaluated for mutational effect. krr1-18 contains three mutations, F45L, L95S, and R207G, which are distributed over both KH domains. Mutation of the buried Phe-45 to the hydrophobic leucine may be tolerated structurally. The side chains of Arg-207 and Leu-95 are both exposed to the solvent, and their mutations should have a minor effect on the structure. Nevertheless, these two residues are highly conserved (Figs. 2A and 3B) and may be important for function. As the krr1-18 protein is defective in binding Kri1 (34), the mutated residues may mediate the interaction with Kri1. Leu-95 is particularly interesting because it is situated in a conserved surface patch away from the RNA-binding and Faf1-binding sites. We therefore tested whether Leu-95 is involved in binding Kri1 with a two-hybrid assay (Fig. 3D). Krr1 interacted strongly with Kri1 in the two-hybrid assay, consistent with their physical interaction previously detected by coimmunoprecipitation (34). Replacement of Leu-95 with Asp completely blocked the interaction with Kri1, whereas substitution of Asp for Phe-41, which is located near Leu-95 in the structure, had no effect.
This result pinpoints Leu-95 as a key site for Kri1 association. A temperature-sensitive mutant of S. pombe Krr1, mis3-224, contains a single Gly-to-Glu mutation (41). The equivalent residue of yeast Krr1, Gly-175, is an invariant residue located at a turn between β6 and α7, and its mutation to Glu may destabilize the structure.

FIGURE 3. Putative RNA- and Kri1-binding sites on Krr1. A, structural superposition of Krr1 and the PhDim2-RNA complex (PDB code 3AEV). PhDim2 is shown as silver ribbons and its bound RNA as red sticks. The two RNA-binding GXXG motifs of PhDim2 are marked with arrows; Krr1 KH1 lacks a GXXG motif. B, conserved surface of Krr1 shown in two opposite orientations. Residues that are at least 98% and 80% conserved in 209 Krr1 sequences are colored orange and yellow, respectively. The GXXG motif and the putative Kri1-binding residue Leu-95 are indicated. The four RNA nucleotides shown as ribbon were modeled according to an RNA complex structure of the Nova-2 KH3 domain (PDB code 1EC6). The two structures in each row have the same orientation. C, location of residues mutated in krr1-17 (K20E, K66N, C162R, and D261A), krr1-18 (F45L, L95S, and R207G), and mis3-224 (G175E). Phe-41 is a control mutation site in the two-hybrid assay shown in D. D, two-hybrid assays showing that Leu-95 of Krr1 mediates the interaction with Kri1. Krr1 or its mutants were fused to the Gal4 DNA-binding domain (BD) as bait; Kri1 was fused to the Gal4 activation domain (AD) as prey. 5-Fold serial dilutions of yeast AH109 cells cotransformed with bait and prey plasmids were spotted on SC medium lacking Leu and Trp as growth controls, and on SC medium lacking Leu, Trp, and His and containing the indicated concentrations of 3-amino-1,2,4-triazole (3-AT), a competitive inhibitor of the HIS3 gene product, to assay the prey-bait interaction.

Interaction between Krr1 and Faf1—The α-helix of Faf1 joins the α5, α6, and α8 helices of Krr1 KH2, forming an intermolecular helical bundle (Fig. 4A). The interface residues are highly conserved (Figs. 2, A and B, and 3B), underscoring the importance of the intermolecular association. The highly conserved hydrophobic residues Leu-149 and Leu-153, located on the exposed face of the Faf1 helix, are close to the RNA-binding surface on Krr1 KH2 and might be involved in binding RNA or other protein factors (Fig. 4A).

Functional analysis of the Krr1-Faf1 interaction in yeast necessitates the identification of mutants that efficiently disrupt the Krr1-Faf1 interaction while having minimal effects on the structure and other functions of the proteins. Mutagenesis of Krr1 is expected to be complicated, because the binding surface of Krr1 is extensive and flat, and many hydrophobic residues at the interface are also involved in maintaining the structure of Krr1. By contrast, Faf1 employs a short modular helix to bind Krr1, and mutation of Faf1 is more straightforward. We individually mutated three residues of Faf1, Asp-152, Leu-155, and Leu-159, and assessed the effects on Krr1 binding with a GST pulldown assay (Fig. 4B). When the protein fragments used in crystallization were assayed, the single mutations D152A, L155D, and L159D all abolished Krr1 binding. When the full-length Krr1 and Faf1 proteins were assayed, the L155D and L159D mutants of Faf1 failed to bind Krr1, but the D152A mutant was still able to pull down Krr1. The discrepancy between the truncated and full-length proteins suggests that some residues outside the core interacting regions may be involved in the intermolecular interaction.
We also assayed the mutational effects on the Krr1 interaction with two-hybrid assays (Fig. 4C). The results show that Krr1 interacts strongly with Faf1, as reported previously (37). The single mutations D152A and L155D and their combination reduced, but did not abolish, the interaction with Krr1, as shown by slowed yeast growth with increasing selection stringency. Apparently, the two-hybrid assay carried out in yeast is less sensitive to these mutations than the in vitro pulldown assay. Nevertheless, deletion of the entire Krr1-binding helix or the triple mutation D152A/L155D/L159D (named DLL) completely abolished the two-hybrid interaction with Krr1. These data also validate that the Krr1-Faf1-binding interface observed in the crystal structure is required for their association.

Krr1-Faf1 Interaction Is Essential for Cell Growth—We next asked whether the Krr1-Faf1 interaction is functionally important, using site-directed mutagenesis of Faf1 in yeast. To this end, the genomic FAF1 gene was deleted and complemented with a URA3 pRS416 plasmid expressing wild-type FAF1. This FAF1 shuffle strain was transformed with a LEU2 p415GPD plasmid expressing HA-tagged Faf1 or its mutants under the control of the constitutive GPD promoter. Growth on 5-fluoroorotic acid, which counter-selects the URA3 FAF1 plasmid, allowed the function of the mutant Faf1 proteins to be assessed (Fig. 5A). Deletion of FAF1 resulted in cell lethality, as expected for an essential gene. The D152A, L155D, and D152A/L155D mutants showed no obvious growth defect at 20, 30, or 37°C. The triple mutant D152A/L155D/L159D and the deletion of the Krr1-binding helix could not support cell growth at any of the three temperatures. The observed phenotypes were not due to protein instability, as all the mutant proteins were expressed at similar levels in yeast (Fig. 5B). The mutational effects on yeast growth correlate well with the results of the two-hybrid interaction assays, suggesting that the two-hybrid assay is a better indicator of the in vivo interaction than the pulldown result. The weakened Krr1-Faf1 interaction caused by some mutations appears to be tolerated in vivo; it is possible that the in vivo interaction between Krr1 and Faf1 is stabilized by additional bridging factors, which are lacking in the in vitro pulldown assay. These data indicate that the Krr1-Faf1 interaction is essential for yeast growth.

Krr1-Faf1 Interaction Is Required for Early 18 S rRNA Processing—Most likely, the cell lethality associated with disruption of the Krr1-Faf1 interaction was caused by defects in 40 S ribosome synthesis. To investigate the role of the Krr1-Faf1 interaction in pre-rRNA processing, we used the Faf1 conditional expression strain KM804, in which the genomic FAF1 gene was deleted and complemented with a plasmid expressing Myc-tagged Faf1 protein under the control of the galactose-induced GAL promoter (36). The KM804 strain stopped expressing the Myc-Faf1 protein 6 h after shifting from galactose medium to glucose medium (Fig. 6A) and failed to grow on glucose plates (Fig. 6B), as reported previously (36). The growth of KM804 under nonpermissive conditions can be rescued with C-terminally TAP-tagged Faf1 expressed under the constitutive GPD promoter from a plasmid, but not with the DLL and Δ145-166 mutants, which are abolished in the Krr1 interaction, consistent with the complementation results obtained with the faf1 deletion strain (Fig. 5A).
In addition, we tagged the genomic KRR1 gene with an HA epitope at the C terminus for protein detection; this tagging did not affect yeast growth (Fig. 6B). The steady-state levels of rRNA processing intermediates and mature rRNAs were detected by Northern blotting using 5′-32P-labeled DNA probes (Fig. 6, C-E). Depletion of Faf1 resulted in accumulation of the 35 S and 23 S pre-rRNAs and reduction of the 20 S pre-rRNA (Fig. 6D, lanes 5-8). The 23 S pre-rRNA is generated by cleavage of the 35 S pre-rRNA at site A3 in the absence of prior cleavage at sites A0, A1, and A2. The level of mature 18 S rRNA was also significantly reduced after 24 h of growth in glucose medium. In contrast, the level of 27 S pre-rRNA, which consists of multiple species leading to the 5.8 S and 25 S rRNAs, and the level of mature 25 S rRNA were not affected by Faf1 depletion. These data indicate that the early cleavages of the 35 S pre-rRNA at sites A0, A1, and A2 were specifically inhibited in the absence of Faf1, consistent with previous observations (36, 37). Expression of wild-type Faf1-TAP from a p413GPD plasmid largely restored the normal processing of 18 S rRNA in cells depleted of Faf1 (Fig. 6D, lanes 1-4). However, expression of the Krr1 binding-defective mutants of Faf1-TAP failed to rescue pre-rRNA processing (Fig. 6D, lanes 9-16). These results indicate that the Krr1-Faf1 interaction is required for early 18 S processing.

Krr1-Faf1 Interaction Is Not Required for Their Incorporation into Pre-ribosomes—The existence of the interaction between Krr1 and Faf1 raises the possibility that one protein may be recruited to pre-ribosomes by the other. To test this idea, the KM804/KRR1-HA strain expressing wild-type Faf1-TAP or its DLL mutant was grown in glucose to deplete the wild-type Myc-Faf1 (Fig. 7A). The cell extracts were analyzed by sucrose density gradient centrifugation, and the sedimentation behavior of Krr1-HA and Faf1-TAP was detected by Western blotting. In the presence of wild-type Faf1-TAP, Krr1 migrated broadly, with a main peak at ~80 S (Fig. 7B), consistent with a previous observation (8). Faf1 displayed a distribution pattern similar to that of Krr1, suggesting that they coexist in pre-ribosomal particles. We note that the formation of the 40 S ribosome seemed not to be fully restored in this strain, as the level of free 60 S subunits was high (Fig. 7B); the function of Faf1 may be affected by its C-terminal TAP tag and its high expression level from the GPD promoter. Upon depletion of Faf1, the distribution of Krr1 in the 80 S region was not affected (Fig. 7C), indicating that Krr1 is independent of Faf1 for pre-ribosomal association. When only the DLL mutant of Faf1 was expressed, it generally cosedimented with Krr1, yet seemed to accumulate more in the 60 S region (Fig. 7D). The Faf1 mutant may not always assemble together with Krr1 in pre-ribosomal particles or may associate nonspecifically with 60 S subunits. The ribosomal sedimentation profiles displayed a decreased level of free 40 S ribosomes and an increased level of free 60 S ribosomes upon depletion of Faf1 or disruption of its interaction with Krr1. This is consistent with the defects in 18 S processing revealed above by the RNA analysis.

DISCUSSION

Biogenesis of the eukaryotic 40 S ribosome requires two functionally distinct KH proteins, Krr1 and Dim2. Krr1 functions in early 90 S pre-ribosomes, and Dim2 is present in both early 90 S and late pre-40 S pre-ribosomes.
Our structure reveals that Krr1 contains a divergent KH domain in addition to the previously recognized conserved KH domain. The two KH domains of Krr1 pack into a single structural unit, which is highly similar to the structure of archaeal Dim2-like proteins. Given the significant sequence similarity with Krr1 over both the divergent and conserved KH domains, Dim2 should also adopt a packed tandem KH domain structure. Dim2 had been predicted to contain a degenerate KH domain based on sequence and secondary structure analysis (60). Such an arrangement of tandem KH domains is so far unique to proteins involved in ribosome biogenesis. The presence of packed tandem KH domains in Krr1, and in Dim2 by extrapolation, reinforces the notion that Krr1, Dim2, and archaeal Dim2-like proteins share a common ancestor (43). The ancestral gene likely contained two classic KH domains, similar to archaeal Dim2-like genes. After the divergence of archaea and eukaryotes, the eukaryotic gene likely lost the RNA-binding motif in KH1 and was then duplicated and diversified into the extant Krr1 and Dim2, which play different roles in ribosome biogenesis. The conservation of the RNA-binding motif in KH2 suggests that Krr1 directly binds rRNA in 90 S pre-ribosomes, although the binding target awaits further characterization.

Besides its role as a putative RNA-binding protein, Krr1 also serves as a protein-binding platform that interacts with a number of proteins, including the early acting 40 S synthesis factors Faf1, Kri1, and Utp14 (34,37). In addition, Krr1 appears to form a protein module with the late-acting 40 S synthesis factors Enp1, Ltv1, Rio2, Tsr1, Dim1, and Hrr25 (61). Using structural and mutagenic approaches, we have located the binding site of Faf1 on Krr1 KH2. Our two-hybrid data also suggest that the KH1 domain of Krr1 is involved in binding Kri1, but their interaction mode remains to be revealed by further structural analysis. It should be noted that, in addition to the tandem KH domains, Krr1 contains conserved N- and C-terminal regions (Fig. 2A), which account for 45% of the total residues and could contribute to protein and RNA interactions.

The KH domain has been best studied as an RNA-binding module (39). Our findings illustrate the versatility of the KH domain in protein binding. The classic KH2 domain of Krr1 uses different faces for RNA and protein binding, whereas the divergent KH1 domain of Krr1 appears to be specialized in protein recruitment. The protein-binding mode of KH2 with Faf1 is unprecedented among KH domains and distinct from that between PhDim2 and eIF2α (46). Protein-binding function has been associated with other KH proteins involved in ribosome biogenesis. The divergent KH1 domain of Dim2 is implicated in Nob1 association (60), and the classic KH1 of PhDim2 was observed to interact with the translation initiation factor eIF2α (46). These examples show that the KH domain can bind protein regardless of the presence of the RNA-binding motif.

Faf1 has been shown to interact with multiple proteins besides Krr1. Faf1 demonstrates a two-hybrid interaction with Pxr1/Gno1 (37), which is also involved in ribosome biogenesis (62). Furthermore, Faf1, the 90 S component Utp11, the r-protein Rps16, and the pre-60 S factors Ebp2 and Rrp14 mutually interact with each other in two-hybrid assays (36,63). The Faf1 sequence consists of six conserved segments of 20-40 residues in length, separated by rather variable sequences (Fig. 1A).
We show that the third conserved segment of Faf1, located in the middle of the protein, is responsible for binding Krr1. It is tempting to speculate that Faf1 may be a nonglobular scaffolding protein that uses separate short regions to interact with different proteins. In contrast to the universal presence of Krr1 in eukaryotes, Faf1 exists only in fungi. Paradoxically, the Faf1-binding site is highly conserved from yeast to human Krr1, and the interaction between Krr1 and Faf1 is essential in yeast. Non-fungal eukaryotes likely have a functional equivalent of Faf1 that interacts with Krr1. However, we failed to identify any meaningful protein using the short Krr1-binding motif of Faf1 as a query in BLAST.

The function of a protein in ribosome assembly is often inferred from the phenotype caused by depletion of the target protein. If a protein is involved in multiple interactions, its depletion would have complicated consequences for the assembly of pre-ribosomes (18,19). Elucidation of the precise binding mode between Krr1 and Faf1 allowed us to study the functional role of a single protein-protein interaction while minimizing the side effects associated with protein depletion. A subset of 90 S proteins has been shown to form subcomplexes, which are generally thought to assemble into pre-ribosomes as single entities (18,19). However, our data show that the Krr1-Faf1 complex is an exception to this scenario. Although Krr1 and Faf1 interact with each other, they are still able to assemble into pre-ribosomes in the absence of the intermolecular interaction. Interactions with other proteins or pre-rRNA appear to be sufficient for their recruitment to pre-ribosomes, and the interaction between Krr1 and Faf1 may be secondary to their independent incorporation. Disruption of the Krr1-Faf1 interaction blocked early 18 S rRNA processing and yeast growth, suggesting that the Krr1-Faf1 interaction is required for the 90 S pre-ribosome to adopt a functional conformation.
Direct Observation of Structure and Dynamics of Photogenerated Charge Carriers in Poly(3-hexylthiophene) Films by Femtosecond Time-Resolved Near-IR Inverse Raman Spectroscopy

The initial charge separation process of conjugated polymers is one of the key factors for understanding their conductivity. The structure of photogenerated transients in conjugated polymers can be observed by resonance Raman spectroscopy in the near-IR region because they exhibit characteristic low-energy transitions. Here, we investigate the structure and dynamics of photogenerated transients in a regioregular poly(3-hexylthiophene) (P3HT):[6,6]-phenyl-C61-butyric acid methyl ester (PCBM) blend film, as well as in a pristine P3HT film, using femtosecond time-resolved resonance inverse Raman spectroscopy in the near-IR region. The transient inverse Raman spectrum of the pristine P3HT film at 50 ps suggests coexistence of neutral and charged excitations, whereas that of the P3HT:PCBM blend film at 50 ps suggests formation of positive polarons with a structure different from those in an FeCl3-doped P3HT film. Time-resolved near-IR inverse Raman spectra of the blend film clearly show the absence of charge separation between P3HT and PCBM within the instrument response time of our spectrometer, while they indicate two independent pathways of polaron formation with time constants of 0.3 and 10 ps.

Introduction

The mechanism of conductivity in conjugated polymers has been an important problem in fundamental physical chemistry and materials science since conductive polyacetylene was first synthesized [1]. Photoinduced conductivity of conjugated polymers [2,3] has drawn much attention for the understanding of photophysics in π-conjugated molecular systems as well as for the development of new materials functioning with solar energy. The structure of charge carriers, as well as their dynamics, is an important problem when it comes to understanding how charge carriers migrate in conjugated polymers. The structure of charge carriers in poly(3-alkylthiophene) (P3AT) has been extensively studied using chemically or electrochemically doped films by IR and Raman spectroscopy. IR absorption of positive polarons in P3AT films is significantly enhanced [26-28] by electron-molecular vibration coupling [29,30]. Time-resolved IR vibrational spectroscopy has been performed for the direct observation of the structure of photogenerated charge carriers [27,31,32]. Although IR bands of charge carriers are clearly observed, their assignment has not been established yet, because IR spectroscopy has no selectivity in observing charge carriers with different structures, such as positive polarons and bipolarons. The structure of charge carriers can be selectively observed when resonance Raman spectroscopy is used. Raman spectra of charge carriers have been reported for chemically or electrochemically doped P3AT films [33-38]. Time-resolved Raman spectroscopy has recently been reported for solutions of P3AT in isolated and aggregate forms [39,40] with the technique of femtosecond stimulated Raman spectroscopy (FSRS) [41,42]. FSRS is capable of investigating the ultrafast carrier formation dynamics in film samples as well as in solutions, as demonstrated by Hayes, Silva, and coworkers for a blend film of another conjugated polymer, PCDTBT, and a derivative of C70 [43].
It is difficult, however, to apply time-resolved Raman spectroscopy to films of conjugated polymers, because Raman spectroscopy usually requires a large photon flux and/or a long period of irradiation with the pump and probe pulses. Strong or long irradiation of film samples often causes irreversible photodamage. In this study, we employ femtosecond time-resolved inverse Raman spectroscopy in the near-IR region in order to observe the structure of photogenerated charge carriers in a direct manner. In the inverse Raman measurement, the sample is simultaneously irradiated with Raman pump and probe pulses with frequencies ω1 and ω2, respectively. Here, ω2 is set in the anti-Stokes scattering frequency region with respect to ω1. A decrease of the probe intensity is observed when the frequency difference, ω2 − ω1, matches the frequency of a Raman-active vibration. Femtosecond time-resolved inverse Raman spectroscopy can be performed with the same technique as FSRS, although the two differ in the underlying optical process [44,45]. Significant suppression of photodamage can be expected for near-IR inverse Raman spectroscopy, because it can be performed with a low Raman pump photon frequency and a much shorter exposure time. We present femtosecond time-resolved near-IR inverse Raman spectra of the P3AT with hexyl side chains, P3HT, in a pristine film and in a film blended with [6,6]-phenyl-C61-butyric acid methyl ester (PCBM). The structure and dynamics of charge carriers in P3HT are discussed on the basis of the recorded spectra and their time evolution.

Steady-State Near-IR Inverse Raman Spectra of Pristine and FeCl3-Doped P3HT Films

Steady-state inverse Raman spectra of pristine and FeCl3-doped P3HT films were recorded with the Raman pump and probe wavelengths at 1190 nm and 900-1150 nm, respectively, to obtain reference spectra of P3HT in the ground state and in positively charged states. The results are shown in Figure 1. The pristine P3HT film shows a strong inverse Raman band at 1446 cm⁻¹ and a weak band at 1379 cm⁻¹ (Figure 1a). They are assigned to the Cα=Cβ symmetric stretch and the Cβ-Cβ stretch vibrations of the thiophene ring [33,36]. Weak bands are observed at around 1200 and 720 cm⁻¹ with intensities near the detection limit. The whole spectral pattern agrees well with the spontaneous Raman spectrum of a pristine P3HT film recorded with an excitation wavelength of 830 nm [36]. Inverse Raman bands of an FeCl3-doped P3HT film are observed with dispersive line shapes (Figure 1b), caused by the Raman pump and probe pulses being in resonance with the near-IR transitions of the sample (Figure S1). The exact position of a dispersive inverse Raman band is not obtainable unless the electronic resonance condition is fully determined [46]. We therefore determined the exact positions of the bands of the FeCl3-doped P3HT film by observing its stimulated Raman scattering in the 1300-1550 nm region (Figure 1c). Positive peaks are clearly observed at 1413, 1377, and 721 cm⁻¹, where the resonance inverse Raman bands are observed as well. The 1413 and 1377 cm⁻¹ bands are assigned to the ring CαCβ and CβCβ stretch vibrations of the positive polarons, in which the conjugated structure is significantly altered from that of neutral P3HT [33,36]. The 721 cm⁻¹ band is assigned to a ring deformation vibration around the C-S-C bond [33].
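As an aside, the Raman-shift window accessible with the pump and probe wavelengths quoted above follows directly from the difference of their wavenumbers. The short sketch below (a minimal Python illustration, using only the wavelengths stated in the text; the function name is ours) shows that the 900-1150 nm probe window comfortably covers the 720-1450 cm⁻¹ bands discussed here.

```python
def nm_to_wavenumber(wavelength_nm):
    """Convert a vacuum wavelength in nm to wavenumbers (cm^-1)."""
    return 1.0e7 / wavelength_nm

pump_cm = nm_to_wavenumber(1190.0)     # Raman pump wavelength from the text
for probe_nm in (1150.0, 900.0):       # probe window limits from the text
    shift_cm = nm_to_wavenumber(probe_nm) - pump_cm
    print(f"probe {probe_nm:.0f} nm -> Raman shift {shift_cm:.0f} cm^-1")
# -> ~292 cm^-1 and ~2708 cm^-1, spanning the bands observed above.
```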
The stimulated Raman spectrum of the FeCl3-doped P3HT film agrees well in band positions with the spontaneous Raman spectra observed with excitation wavelengths of 780 nm [34] and 830 nm [36], although the width of the ring CαCβ stimulated Raman band appears smaller than that of the spontaneous band. The FeCl3-doped P3HT film does not show a Raman band characteristic of positive bipolarons in either the inverse or the stimulated Raman spectrum. The negligible bipolaron generation is consistent with previous studies performed by some of the present authors [36,37].

Femtosecond time-resolved near-IR inverse Raman spectra were recorded for pristine P3HT and P3HT:PCBM blend films with the actinic and Raman pump wavelengths at 480 and 1190 nm, respectively, to observe the structure of photogenerated transients as well as their dynamics. It has been established that the two principal transients of P3HT, singlet excitons and positive polarons, exhibit broad absorption bands in the near-IR region (Figure S2). Therefore, resonance enhancement of the inverse Raman bands of these transients can be expected under our experimental conditions. The obtained time-resolved inverse Raman spectra are shown in Figure 2. The spectra were recorded in a random order with respect to the time delay. Their baselines were fitted with 6th-order polynomial functions and subtracted (Figure S3). The spectra, after the baseline subtraction, contain the inverse Raman bands of P3HT in the ground state as well as those of photogenerated transients. The ground-state inverse Raman bands are not subtracted in Figure 2, because they appear as large ground-state depletion signals in the difference spectra and would conceal the transient inverse Raman bands. For the pristine P3HT film, the intensity of the ground-state inverse Raman bands slightly decreases from −0.36 to 0 ps due to the photoexcitation and almost fully recovers within a few picoseconds (Figure 2a). Inverse Raman bands of photogenerated transients, such as singlet excitons and positive polarons, are not observed in the time-resolved spectra; the intensity of their transient inverse Raman bands is below the detection limit of the spectrometer at an actinic pulse energy density of 60 µJ cm⁻². Significant time dependence is observed in the femtosecond time-resolved near-IR inverse Raman spectra of the P3HT:PCBM blend film (Figure 2b). The intensity of the inverse Raman bands of ground-state P3HT significantly decreases upon photoexcitation at 0 ps and then apparently recovers in part within 0.5 ps. The inverse Raman bands around the CαCβ stretch region become dispersive after 2 ps. Because the shapes and peak positions of the dispersive bands agree reasonably well with those observed for the FeCl3-doped P3HT film (Figure 1b), they can be assigned to photogenerated positive polarons. The relative intensity of the positive peak at around 1418 cm⁻¹ to that at around 1374 cm⁻¹ decreases as the time delay increases from 2 to 50 ps, indicating a slow change of the transients in this time region.

Actinic Pump Energy Density Dependence of Femtosecond Time-Resolved Near-IR Inverse Raman Spectra

Transient near-IR inverse Raman spectra at 0.2 and 50 ps were recorded while increasing the energy density of the actinic pump pulse from 0 to 4.4 × 10² µJ cm⁻², to observe the structure and dynamics of the photogenerated transients in detail. The results are shown in Figure 3.
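The polynomial baseline treatment described above can be sketched compactly. In the snippet below, the arrays, the choice of band-free fitting windows, and the function name are illustrative assumptions for a minimal reconstruction, not the authors' actual implementation; only the 6th polynomial order is taken from the text.

```python
import numpy as np

def subtract_baseline(wavenumbers, spectrum, band_free_windows, order=6):
    """Fit a polynomial baseline to band-free regions and subtract it."""
    x = wavenumbers - wavenumbers.mean()   # centre x for numerical stability
    mask = np.zeros_like(wavenumbers, dtype=bool)
    for lo, hi in band_free_windows:
        mask |= (wavenumbers >= lo) & (wavenumbers <= hi)
    coeffs = np.polyfit(x[mask], spectrum[mask], order)
    return spectrum - np.polyval(coeffs, x)

# Example with synthetic data: a sloping baseline plus one Raman loss band.
wn = np.linspace(700.0, 1800.0, 1101)
synthetic = 1e-4 * (wn - 700.0) - 0.02 * np.exp(-((wn - 1446.0) / 10.0) ** 2)
corrected = subtract_baseline(wn, synthetic, [(700, 1350), (1550, 1800)])
```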
Here, the baselines of the spectra are subtracted, while the inverse Raman bands of P3HT in the ground state are not. For the pristine P3HT film, positive inverse Raman bands are observed at 1447 and 1379 cm⁻¹ when the sample is not photoexcited at 480 nm (Figure 3a). The two bands turn negative as the energy density of the actinic pump pulse increases. The negative bands can be assigned to singlet excitons in resonance with their near-IR transition, because they appear on the subpicosecond time scale and their shape is quite different from that of the positive polarons (Figure 1b). The band intensity is proportional to the energy density up to around 3.5 × 10² µJ cm⁻², although the correlation fluctuates because of the uneven thickness of the samples. The linear intensity change shows that the number of photogenerated singlet excitons is proportional to the number of actinic photons up to an energy density of around 3.5 × 10² µJ cm⁻². At 50 ps, a positive inverse Raman band is observed at the same position as the ground-state band, while the negative bands significantly decrease in intensity (Figure 3b). The positive band is assigned to the ground state partially recovered as a result of the prompt decay of singlet excitons. The detailed spectral shape of the negative bands is unclear because they overlap significantly with the ground-state bands. The P3HT:PCBM blend film shows an energy density dependence almost identical to that of the pristine P3HT film in its transient near-IR inverse Raman spectrum at 0.2 ps (Figure 3c). At 50 ps, the intensity of the dispersive CC stretch bands of positive polarons increases with energy density (Figure 3d). The energy density dependence clearly shows that the center of the band is apparently down-shifted as the ground-state P3HT bands are more strongly depleted at high energy density. The positions of the minimum and maximum of the CαCβ stretch band are located at 1476 and 1417 cm⁻¹, respectively, while they are estimated to be 1434 and 1402 cm⁻¹ for the FeCl3-doped P3HT film (Figure 1b).

Structure of Photogenerated Singlet Excitons and Positive Polarons

Femtosecond time-resolved near-IR inverse Raman spectra of pristine P3HT and P3HT:PCBM blend films show substantial spectral changes as the actinic pump energy density increases from 0 to 4.4 × 10² µJ cm⁻². Here, we discuss the structure of the transients photogenerated in these films on the basis of the inverse Raman spectra observed at various actinic pump energy densities. In the transient inverse Raman spectra of the pristine P3HT film at 0.2 ps, the peak position of the ring CαCβ stretch band, which is assigned to singlet excitons, appears to be down-shifted by 21 cm⁻¹ as the energy density increases from 80 to 3.5 × 10² µJ cm⁻² (Figure 3a). The intensity ratio of the ring CαCβ stretch band to the ring CβCβ stretch band increases by around a factor of 1.6 over the same range of energy densities. A similar downshift and change in the intensity ratio have been reported for resonance Raman spectra of poly(3-decylthiophene) with increasing electrode potential [33]. These changes were explained by the increasing contribution of the quinoid structure upon electrochemical oxidation, with the aid of a normal coordinate analysis in which the effects of charges were not considered.
From the similarity of the spectral changes, we suggest that the photogenerated singlet excitons have a substantial contribution from the quinoid form in their resonance structure under strong irradiation with the actinic pump pulse. A similar trend is observed for the P3HT:PCBM blend film, with a smaller downshift than in the pristine P3HT film (Figure 3c). The smaller downshift can be interpreted as a smaller quinoid character of the singlet excitons in the blend film than in the pristine film. The quinoid character is perhaps suppressed when singlet excitons interact with PCBM molecules. The transient inverse Raman spectrum of the pristine P3HT film at 50 ps should contain information on the structure of transients other than singlet excitons, because the shapes of the inverse Raman bands significantly differ from those at 0.2 ps. The inverse Raman bands at 50 ps, however, are not directly analyzable because of a serious overlap with those of the ground state. We retrieved the transient inverse Raman bands by subtracting the 0 µJ cm⁻² spectrum, scaled by a factor of 0.325, from the 4.4 × 10² µJ cm⁻² one. The result is shown in Figure 4. The difference spectrum shows a pattern similar to the spectrum of the singlet excitons (Figure 3a) in the 1800-1350 and 1250-750 cm⁻¹ regions. The spectrum is not simply assigned to singlet excitons, however, because clear differences are observed in the 1350-1250 and 750-700 cm⁻¹ regions. In the difference spectrum, the inverse Raman intensity is positive in the 1350-1250 cm⁻¹ region, and the ring C-S-C deformation band in the 750-700 cm⁻¹ region has a dispersive shape. These features are not observed in the spectrum of the singlet excitons (Figure 3a) but are observed in that of the positive polarons (Figure 1b). Thus, the difference spectrum suggests the coexistence of neutral and charged excitations. The coexistence of the two transients is consistent with the results of time-resolved studies in the visible to microwave region [5,6,9,10,47,48]. The transient inverse Raman spectrum of the P3HT:PCBM blend film at 50 ps shows a strong inverse Raman band with a dispersive line shape at around 1400 cm⁻¹, which can safely be assigned to the CαCβ stretch band of positive polarons owing to the similarity of the band shape to the inverse Raman bands of the FeCl3-doped P3HT film. The structure of the positive polarons may nevertheless differ between the two samples, because the positions of the CαCβ stretch bands disagree. If we assume that the shape of the CαCβ stretch band does not change significantly with the preparation method, we can estimate the center of the CαCβ stretch band of the positive polarons photogenerated in the P3HT:PCBM blend film from the following relation:

(νmin − νc)/(νmin − νmax) = (νmin,ref − νc,ref)/(νmin,ref − νmax,ref)

Here, νc, νmin, and νmax are the positions of the band center, the intensity minimum, and the intensity maximum of the CαCβ stretch band, respectively, and the subscript "ref" denotes the FeCl3-doped reference film. In the case of the FeCl3-doped P3HT film, the positions of the minimum and maximum of the CαCβ stretch band are located at 1434 and 1402 cm⁻¹, respectively, while the band center is estimated to be at 1413 cm⁻¹ from the stimulated Raman spectrum of the same sample (Figure 1c). For the P3HT:PCBM blend film, the minimum and maximum are located at 1476 and 1417 cm⁻¹, respectively. The band center is, therefore, determined to be at 1437 cm⁻¹ for the positive polarons photogenerated in the P3HT:PCBM blend film.
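The arithmetic behind this estimate is simple enough to verify directly. The sketch below reproduces the 1437 cm⁻¹ value from the band positions quoted above; note that the relation as written here is our reconstruction of the one used in the text, fixing the fractional position of the band center between the dispersive extrema.

```python
# Reference (FeCl3-doped film): minimum 1434, maximum 1402, center 1413 cm^-1.
ref_min, ref_max, ref_center = 1434.0, 1402.0, 1413.0
frac = (ref_min - ref_center) / (ref_min - ref_max)   # ~0.656

# P3HT:PCBM blend film at 50 ps: minimum 1476, maximum 1417 cm^-1.
blend_min, blend_max = 1476.0, 1417.0
blend_center = blend_min - frac * (blend_min - blend_max)
print(f"estimated band center: {blend_center:.0f} cm^-1")   # -> 1437 cm^-1
```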
The center of the CαCβ stretch band of the positive polarons in the P3HT:PCBM blend film is thus up-shifted by 24 cm⁻¹ from that in the FeCl3-doped P3HT film. In in situ Raman measurements of a P3HT film in FeCl3 vapor, the position of the CαCβ stretch band is down-shifted as the doping time increases, which is most probably related to the effective delocalization length of the charges or to the interaction between adjacent positive polarons [36]. The concentration of the photogenerated positive polarons should be much smaller than that of the polarons generated by FeCl3 doping. The 24 cm⁻¹ upshift can therefore be explained by the weaker interaction between positive polarons in the photoexcited P3HT:PCBM blend film than in the FeCl3-doped P3HT film.

Time Constants of Polaron Formation

When the P3HT:PCBM blend film is photoexcited with a large energy density, the inverse Raman bands of singlet excitons and positive polarons become dominant in the transient inverse Raman spectra at 0.2 and 50 ps, respectively, while the ground-state bands become negligible (Figure 3c,d). If we assume that the inverse Raman spectra at 0.2 and 50 ps recorded with an actinic pump energy density of 4.4 × 10² µJ cm⁻² can be assigned to singlet excitons and positive polarons, respectively, we can analyze the carrier formation dynamics from the time-resolved near-IR inverse Raman spectra of the P3HT:PCBM blend film. We fitted the time-resolved near-IR inverse Raman spectrum at each time delay with a linear combination of the three inverse Raman spectra representing the ground state, singlet excitons, and positive polarons. When multiple transients coexist in a sample, its inverse Raman intensity at a time delay τ, I(τ), is represented by the following expression:

I(τ) = Σi Ii(τ) χi,IRS

Here, Ii(τ) and χi,IRS are the amplitude at time delay τ and the inverse Raman spectrum of the i-th species, respectively (the fitting results are shown in Figure S4). The amplitudes of the three species were obtained and plotted against the time delay to obtain their kinetics. The results are shown in Figure 5. The Raman loss of the ground state decreases immediately after the sample is photoexcited with an energy density of 60 µJ cm⁻². The rise of the singlet excitons exactly matches the depletion of the ground state, indicating that the singlet excitons are formed at the moment of photoexcitation. The singlet exciton bands fully decay within 1 ps, while the ground-state bands do not recover. The absence of the recovery strongly suggests that the singlet excitons are almost quantitatively converted to other transients in the P3HT:PCBM film. The conversion efficiency is much higher than in the pristine P3HT film, in which a fraction of the singlet excitons decays to the ground state within 50 ps (Figure 3b). The decay time constant of the singlet excitons is estimated to be 0.08 ps by least-squares fitting with an exponential function. The obtained time constant may be underestimated, however, because the singlet excitons provide much weaker inverse Raman bands than the ground state and the positive polarons (Figure 2b). The signal of the positive polarons rises substantially more slowly than the instrument response time of the spectrometer, 190 fs. The rise time constants are estimated to be 0.3 and 10 ps by least-squares fitting with two exponential functions. Neither of these time constants matches the decay time constant of the singlet excitons, indicating that part of the singlet excitons decays by singlet annihilation [16] and/or by formation of biexcitons.
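A minimal numerical sketch of this two-step analysis is given below, assuming (as described above) a linear spectral decomposition followed by a double-exponential rise fit. The instrument-response convolution is neglected, and all arrays, names, and values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def decompose(measured, basis):
    """Species amplitudes I_i from I(tau) = sum_i I_i * chi_i, by least squares.
    `basis` stacks the reference spectra (ground state, singlet exciton,
    positive polaron) as columns; `measured` is one transient spectrum."""
    amplitudes, *_ = np.linalg.lstsq(basis, measured, rcond=None)
    return amplitudes

def double_rise(t, a1, tau1, a2, tau2):
    """Two saturating exponentials for the polaron amplitude (IRF neglected)."""
    return a1 * (1.0 - np.exp(-t / tau1)) + a2 * (1.0 - np.exp(-t / tau2))

# Synthetic polaron kinetics mimicking the reported constants (0.3 and 10 ps):
t_ps = np.linspace(0.05, 50.0, 400)
amp = double_rise(t_ps, 0.7, 0.3, 0.3, 10.0)
popt, _ = curve_fit(double_rise, t_ps, amp, p0=[0.5, 0.5, 0.5, 5.0])
a1, tau1, a2, tau2 = popt
fast = a1 if tau1 < tau2 else a2
print(f"fast-rise fraction ~ {fast / (a1 + a2):.0%}")   # ~70%, as in the text
```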
The time constant of the initial charge separation in P3HT:PCBM blend films has been estimated by femtosecond time-resolved absorption [15-19] and fluorescence [49] spectroscopy. The reported time constants range from 500 fs down to <100 fs, i.e., within the instrument response time. Although the time constant obtained in this study, 0.3 ps, lies within the reported range, it does not agree with the result that the charge separation completes within the instrument response time [15-19]. It is widely accepted that the charge separation dynamics are sensitive to the photoexcitation conditions. The actinic pump density in the present study is 60 µJ cm⁻², which is regarded as an extremely large value in recent studies. The high actinic pump density may slow down the charge separation, although Ohkita and coworkers reported that the charge generation rate constant was independent of the actinic pump energy density up to 120 µJ cm⁻² [17]. The two time constants suggest that positive polarons are generated from two kinds of transients in different manners. The following model has been suggested from time-resolved absorption studies [16-19]: (i) when singlet excitons are formed near the interface of the P3HT and PCBM domains, they can be promptly converted to positive polarons; (ii) when singlet excitons are generated in the bulk of the P3HT domain, they migrate to the interface of the P3HT and PCBM domains through diffusion and undergo charge separation with PCBM. The charge separation after the migration of the excitations gives rise to the slow increase of the inverse Raman signal. If the inverse Raman spectrum at 50 ps indicates that positive polarons are distributed predominantly around the interface of the P3HT and PCBM domains, the amplitude ratio of the two rise components indicates that around 70% of the positive polarons are generated directly from singlet excitons at the interface. If P3HT is photoexcited at the interface and in the bulk with equal probability, 70% of the P3HT volume can be regarded as forming the interface with PCBM. The large area of the effective interface estimated in this study is consistent with the high power conversion efficiency of P3HT:PCBM blend films. At present, time-resolved inverse Raman spectroscopy is less sensitive than time-resolved absorption and fluorescence spectroscopy for investigating the charge separation dynamics in conjugated polymer films. Time-resolved inverse Raman spectroscopy has an advantage over time-resolved electronic spectroscopy, however, in that the inverse Raman signals of positive polarons can easily be distinguished from those of singlet excitons. The positive polarons show inverse Raman bands with shapes entirely different from those of the singlet excitons, as shown in Figure 3, owing to their resonance conditions. The band intensities of the positive polarons are almost comparable to those of the singlet excitons. These features enable us to retrieve the dynamics of positive polarons in a reliable manner.

Materials and Methods

Regioregular P3HT was purchased from Sigma-Aldrich (St. Louis, MO, USA). PCBM was purchased from Frontier Carbon. A P3HT solution was prepared in chlorobenzene with a P3HT concentration of 20 mg mL⁻¹. A blend solution of P3HT and PCBM was prepared in chlorobenzene containing P3HT (20 mg mL⁻¹) and PCBM (20 mg mL⁻¹).
A pristine P3HT film and a blend film were prepared from these solutions by spin-coating on quartz substrates (20 × 20 mm, 1 mm thick), with an absorbance of around 1 at 480 nm. A P3HT film for FeCl3 doping was prepared from a chlorobenzene solution (24 mg mL⁻¹) by spin-coating on a glass substrate (20 × 20 mm, 1 mm thick), with an absorbance of around 2.0. The film was soaked in a 0.01 mol dm⁻³ FeCl3/acetonitrile solution for 40 s and then rinsed with pure acetonitrile for 50 s. The formation of positive polarons was confirmed by steady-state UV-visible spectroscopy (Figure S1) [36,50].

A lab-built femtosecond time-resolved near-IR multiplex stimulated/inverse Raman spectrometer [51-53] was used for recording absorption and inverse Raman spectra of the transients photogenerated in the pristine and blend films of P3HT. Details of the spectrometer have been described elsewhere [51,52]. Briefly, the amplified output of a Ti:sapphire laser system (Vitesse/Legend Elite-HE, Coherent, Santa Clara, CA, USA; wavelength: 800 nm, pulse duration: 80 fs, repetition rate: 1 kHz) was divided into three parts for preparing the actinic pump (480 nm, 0.1-1.0 µJ), Raman pump (1190 nm, 1 µJ, ca. 3 cm⁻¹), and probe (900-1550 nm) pulses. The transients were created with the actinic pump pulse, and their inverse Raman scattering was then induced with the Raman pump and probe pulses after a time delay from the actinic pump pulse. Inverse Raman signals propagating along the Raman probe beam were detected by an InGaAs array detector (Symphony IGA, Horiba, Kyoto, Japan) equipped with a 32 cm spectrograph (iHR320, Horiba France, Longjumeau, France; spectral resolution: 5 cm⁻¹). The Raman pump pulse was blocked in the time-resolved absorption measurements. The polarizations of the three pulses were set parallel to each other. The full width at half maximum of the instrument response function was estimated to be 190 fs for the time-resolved absorption and inverse Raman measurements of the film samples. All the spectra were recorded at 25 ± 1 °C under aerated conditions, without covering the film samples with another substrate, because a window in front of the sample film produced strong artifact signals. To avoid accumulation of photodamage at the focal point, the samples were translated every 40 s during the time-resolved near-IR inverse Raman measurements. The stationary inverse Raman spectrum recorded with the pump irradiation was unchanged from that recorded without it (Figure S5). The effects of oxygen and strong photoirradiation on the recorded time-resolved near-IR inverse Raman spectra were therefore negligible.

Conclusions

In this study, we have performed femtosecond time-resolved near-IR inverse Raman spectroscopy to investigate the structure and dynamics of photogenerated transients in pristine P3HT and P3HT:PCBM blend films. Time-resolved near-IR inverse Raman spectra of the pristine P3HT film indicate a decrease of the transients within 50 ps and the coexistence of neutral and charged excitations at 50 ps or later. The P3HT:PCBM blend film shows signals of positive polarons in addition to the three transients observed in the pristine P3HT film. The structure of the photogenerated positive polarons is similar to that of the positive polarons in an FeCl3-doped P3HT film, but not identical to it, because the photogenerated positive polarons can interact only weakly with each other owing to their significantly lower concentration.
The time-resolved near-IR inverse Raman spectra reveal the initial charge separation dynamics in the P3HT:PCBM blend film. Positive polarons are generated with a time constant of 0.3 ps when the initially created singlet excitons are located near PCBM. They further increase with a time constant of 10 ps through slower charge separation with PCBM after the migration of excitations in the bulk of P3HT. The time-resolved inverse Raman spectra of the P3HT:PCBM blend film provide information on the volume of P3HT forming the interface with PCBM, which is estimated to be around 70% for the blend film with a P3HT:PCBM mass ratio of 1:1. Femtosecond time-resolved inverse Raman spectroscopy is thus an effective tool for estimating the efficiency of carrier generation through direct observation of polaron formation at the interface.

Supplementary Materials: The following are available online. Figure S1: Steady-state absorption spectra; Figure S2: Femtosecond time-resolved near-IR absorption spectra; Figure S3: Fitting analysis of baselines; Figure S4: Least-squares fitting analysis of time-resolved inverse Raman spectra; Figure S5: Effects of actinic pump irradiation on film samples.
Ytterbium-doped fibre femtosecond laser offers robust operation with deep and precise microsurgery of C. elegans neurons

Laser microsurgery is a powerful tool for neurobiology, used to ablate cells and sever neurites in-vivo. We compare a relatively new laser source to two well-established designs. Rare-earth-doped mode-locked fibre lasers that produce high power pulses recently gained popularity for industrial uses. Such systems are manufactured to the high standards of robustness and low maintenance typical of solid-state lasers. We demonstrate that an Ytterbium-doped fibre femtosecond laser is comparable in precision to a Ti:Sapphire femtosecond laser (1-2 micrometres), but with added operational reliability. Due to the lower pulse energy required to ablate, it is more precise than a solid-state nanosecond laser. Due to the reduced scattering of near-infrared light, it can lesion deeper (more than 100 micrometres) in tissue. These advantages are not specific to the model system ablated for our demonstration, namely neurites in the nematode C. elegans, but are applicable to other systems and transparent tissue where precise micron-resolution dissection is required.

A focused pulse of laser light can induce accurately localized sub-cellular damage to approach diverse questions in biology. In the field of neuroscience, laser ablation can kill cells (i.e., neurons and glia) by aiming at the nucleus, or sever neurites (i.e., axotomy and dendrotomy), allowing the study of neuronal function and regeneration in-vivo. Laser microsurgery platforms often use pulsed lasers of nanosecond or femtosecond pulse width. The pulses' high-strength electric field ionizes molecules to create a bubble of plasma that vaporizes water and tissue to generate damage 1,2. The absorption process at the focus is non-linear and depends strongly on the pulse intensity, which scales with the pulse energy divided by the pulse duration 3. Hence, the ten-thousand-fold shorter femtosecond laser pulse requires lower energy for plasma formation and ablation. Pulse energies used for femtosecond pulse ablation are typically tens of nanojoules (nJ), compared to tens of millijoules (mJ) for nanosecond pulse ablations. Excess deposited energy diffuses away from the focal point. Accordingly, longer pulses generate more severe damage beyond the region of laser energy deposition compared to shorter pulses 4. Laser microsurgery was pioneered and has been an important tool to study the development and neurobiology of Caenorhabditis elegans since 1980, when Sulston and White used it to study cell-cell interaction at post-embryonic stages 5, and when Chalfie and colleagues ablated neurons to test their necessity for touch sensitivity 6. Microsurgery of neurites (termed axotomy) was also pioneered in C. elegans 7, where a laser-pumped titanium-sapphire laser was used to cut commissure neurites of motoneurons. These Ti:Sapphire lasers are typically configured to produce near-infrared (NIR) pulses with energies up to 50 nJ, a centre wavelength of approximately 800 nm, pulse durations of 100-200 femtoseconds, and repetition rates of 80 MHz. Ablation at 80 MHz does not allow complete dissipation of the pulse energy 8, and in many cases an external electro-optic pulse-picking device is added to reduce the repetition rate and ablate at 1 to 10 kHz. The longer intervals between pulses at these lower repetition rates allow the deposited energy to dissipate completely 8-10, improving the surgical resolution.
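To make the energy-intensity argument above concrete, the sketch below compares the peak powers implied by representative pulse parameters (peak power ~ pulse energy / pulse duration). The specific values chosen are illustrative, in line with the typical ranges quoted above, not measurements from this study.

```python
# Representative pulse parameters (assumed, within the ranges quoted above):
fs_energy, fs_duration = 50e-9, 100e-15   # femtosecond pulse: 50 nJ, 100 fs
ns_energy, ns_duration = 10e-3, 3e-9      # nanosecond pulse: 10 mJ, 3 ns

print(f"fs peak power: {fs_energy / fs_duration:.1e} W")   # ~5e5 W
print(f"ns peak power: {ns_energy / ns_duration:.1e} W")   # ~3e6 W
# Despite carrying ~200,000x less energy, the femtosecond pulse reaches a
# peak power within an order of magnitude of the nanosecond pulse, which is
# why far less energy is needed to cross the ablation intensity threshold.
```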
Nanosecond lasers were adapted for axotomy, offering lower cost and a more robust design 6,11, but at a higher energy per pulse. They are typically diode-pumped or nitrogen-laser-pumped dye lasers configured to produce green (532 nm), blue (450 and 488 nm), or ultraviolet (337 nm) light. These lasers have pulse energies up to several mJ, pulse durations of several nanoseconds, and repetition rates up to 10 kHz. Axotomy using nanosecond and femtosecond pulses at high or low repetition rates varies in the extent of damage to surrounding tissues and in the size of the gap induced in a severed axon, but axon regeneration seems to occur at comparable extents and rates after axotomy 12. At a low repetition rate (1 kHz), the size of the tissue damage and the neurite regeneration rates appear to be mostly affected by the pulse energy 13. Laser energies near the damage threshold improve the surgical resolution. For any energy level there is a minimal number of pulses that will initiate ablation, but adding pulses does not substantially increase the damaged region 3,13. The advantages of nanosecond lasers are their relatively low cost, compact size, low maintenance, and lower safety requirements. Femtosecond lasers dissect with higher resolution and less damage to surrounding tissues. The choice of laser system is usually motivated by the application and involves a compromise between precision, cost, and operational reliability. Neurites spaced further apart require less surgical resolution, while bundled neurites often require submicron resolution 10. Here we compare a new type of laser, an Ytterbium-doped fibre femtosecond-pulse laser, with the two laser types most commonly used for laser axotomy in C. elegans: a femtosecond Ti:Sapphire laser and a nanosecond diode-pumped passively Q-switched solid-state laser.

Results

Ytterbium-doped fibre femtosecond laser. Rare-earth-doped mode-locked fibre lasers produce high-power pulsed radiation by amplifying a seed source through a thin coiled fibre. Specifically, Yb2O3-doped fibre lasers emit pulses that are one hundred to hundreds of femtoseconds wide, at wavelengths of 1000-1100 nm and average powers of milliwatts to tens of watts 14. The basic design is similar to that of the Ti:Sapphire lasers very commonly used for multiphoton imaging and axotomy 15,16. One advantage of the Ti:Sapphire laser over Yb-fibre is that its wavelength is tunable, but this is not crucial for ablation applications such as ours. The advantages of Yb-fibre are higher attainable power, lower maintenance, a smaller footprint, and air cooling. The specific system used here (BlueCut, Menlo Systems GmbH, Germany) includes an internal pulse picker, which simplifies set-up and lowers the overall cost compared to Ti:Sapphire systems that require an external pulse picker. Our Yb-fibre system can ablate at user-defined repetition rates from single shot to 50 MHz and pulse energies of nJ to μJ. As further described in the Methods, in the Yb-fibre ablation setup the laser beam is sent through a beam expander into a microscope objective that focuses the pulses into the sample. Brightfield and fluorescent light is captured by an sCMOS camera for visualization and targeting (Fig. 1a).

Lesion size. We evaluated lesion size by focusing the laser beams through a coverslip onto a layer of black ink (Fig. 1b). We adjusted each laser to the lowest power setting that induced axotomy. The Yb-fibre laser produced a slightly smaller lesion at the focal plane
(Fig. 1e; 1.34 ± 0.25 μm; n = 15; p = 0.04) than the Ti:Sapphire (Fig. 1h; 1.66 ± 0.36 μm; n = 16); neither generated significant damage 2 μm above or below the focal plane (Fig. 1b,d,f,g,i). The nanosecond-pulse laser produced a much larger lesion than either femtosecond laser (p < 0.0001), not only at the focal plane (12.5 ± 0.5 μm) but also 2 μm above or below it (13.0 ± 0.4 μm; 14.0 ± 0.4 μm). Even though smaller lesion sizes were reported for the nanosecond laser, that study utilized a very low energy setting (10 pulses at 100 Hz) meant only for alignment and not for axotomy 11. To calculate the theoretical minimum beam diameter as well as the threshold energy, we used the extrapolation method described by Liu et al. 17. As shown in Fig. 1c, we plotted the squared diameter of the damage area against the log of the pulse energy and fitted the data to a line. We calculated the theoretical minimum beam size as the square root of the slope of the line, and the threshold energy as the x-axis intercept of the same line (Fig. 1c). The calculated minimum diameter was 5.9 ± 2.1 µm for Ti:Sapphire and 5.6 ± 2.2 µm for Yb-fibre; the threshold energy was 1.57 ± 0.54 nJ and 3.4 ± 0.8 nJ, respectively. We estimated the peak intensities by modelling the pulse intensity, I(x, y, t), as Gaussians in the transverse directions (x, y) and in time (t) 18. The Gaussians are defined by their full width at half maximum (FWHM) in each direction. The FWHM in the transverse directions is the minimum diameter, d, calculated from the fit in Fig. 1c. The FWHM in the temporal direction is the pulse duration, τ. Integrating I(x, y, t) over all x, y, and t, we find that the pulse energy E = I0 d^2 τ (π/ln 16)^(3/2), where I0 is the peak intensity. Using the threshold energy calculated from Fig. 1c and assuming pulses without optical stretching (100 fs for Ti:Sapphire and 400 fs for the Yb-doped laser), we obtain I0 = 3.7 × 10¹⁰ W/cm² for Ti:Sapphire and 2.2 × 10¹⁰ W/cm² for the Yb-doped fibre laser. Because some linear absorption of laser light by the black ink may occur, the threshold energy is likely underestimated and the minimum diameter likely overestimated, so that I0 would be higher for cells.

Qualitative accuracy test by dendritic bundle ablation in C. elegans. A Ti:Sapphire laser system can selectively ablate one sensory dendrite in a tight bundle of twelve dendrites 10. These dendrites are sensory organs of the twelve amphid neurons located in the nose of C. elegans. We replicated this treatment using both the nanosecond pulse laser and the Yb-fibre laser. The Yb-fibre femtosecond laser, similar to the Ti:Sapphire, is capable of ablating only one neurite in the bundle with no damage to surrounding tissue. However, the nanosecond-pulse laser produces a larger injury that affects surrounding tissue and therefore damages more than one dendrite (Fig. 2). The damage observed for the nanosecond-pulse laser in tissue is more localized than what we described in Fig. 1b. This could be because the higher energy causes thermal damage to the ink while not vaporizing the tissue. Additionally, the three-dimensional geometry and the ability of the energy to penetrate through the tissue also need to be considered.
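The extrapolation and the peak-intensity estimate above are straightforward to sketch numerically. In the snippet below, the fitting function is a minimal illustration of the Liu method (measurement arrays are hypothetical), and the intensity check uses the Ti:Sapphire threshold values quoted above.

```python
import numpy as np

def liu_fit(energies_nJ, diameters_um):
    """Fit D^2 vs ln(E); slope -> d_min^2, x-intercept -> threshold energy."""
    slope, intercept = np.polyfit(np.log(energies_nJ),
                                  np.asarray(diameters_um) ** 2, 1)
    return np.sqrt(slope), np.exp(-intercept / slope)

# Peak-intensity check with the Ti:Sapphire values quoted above,
# using E = I0 * d^2 * tau * (pi / ln 16)^(3/2):
E, d, tau = 1.57e-9, 5.9e-4, 100e-15        # threshold J, cm (5.9 um), s
I0 = E / (d**2 * tau * (np.pi / np.log(16.0)) ** 1.5)
print(f"I0 = {I0:.1e} W/cm^2")               # ~3.7e10 W/cm^2, as in the text
```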
Figure 2. A single C. elegans dendrite can be injured without damage to neighbouring dendrites. A single dendrite located in a sensory bundle in the nose of the animal was successfully ablated (yellow chevron) using the Ti:Sapphire and the Yb-fibre lasers without collateral damage to adjacent dendrites. The nanosecond-pulse laser injured more than one dendrite. Scale bar = 2 μm.

Quantitative regeneration assessment in C. elegans motoneurons. Finally, we assessed how the different laser setups might impact neuronal regeneration in C. elegans (Fig. 3). We aimed the three laser systems at commissures of D-type motoneurons, which extend from the ventral to the dorsal side of the animal (Fig. 3a). The size of the axotomy gap (Fig. 3b) between severed ends produced by the nanosecond laser after initial retraction (3.8 ± 1.4 μm) was significantly larger than the gap produced by the Yb-fibre laser (2.4 ± 0.8 μm, p = 0.02) or the Ti:Sapphire laser (2.03 ± 0.49 µm, p < 0.0001). However, a similar proportion of neurites regenerated after injury with all three lasers (Fig. 3c).

Figure 3. (a) C. elegans motoneuron axotomy. Commissures of D motoneurons in immobilized animals were axotomized; animals were recovered and the axon was found again the next day to assess regeneration. Images were taken before, 1 second after, and 24 hours after injury. Sites of axotomy (yellow chevron) and the regenerating branch (green chevron) are indicated. (b) The mean gap between injured tips was larger after injury with the nanosecond laser. (c) The percentage of neurons that regenerated at 24 h was the same for the three types of lasers. Images in (a) for before and at 24 h are maximum projections of Z stacks, while the images at 1 s are each a single frame, leaving some parts of the neuron out of the imaging plane. *p < 0.05, ***p < 0.001. Scale bar = 5 μm.

Discussion

Yb-doped fibre lasers possess several advantages over the more commonly used nanosecond pulse lasers and capabilities similar to Ti:Sapphire lasers. Femtosecond fibre laser systems are designed to the high standards of industrial cutting and welding. Compared to a Ti:Sapphire system, an Ytterbium-doped fibre laser requires much less frequent alignment and optimization and less maintenance overall. It is our experience that a Ti:Sapphire system requires daily alignment to maintain its operation, whereas a Yb-doped system turned off for weeks could cut axons as soon as it was turned back on. Comparable turnkey laser systems, such as the Spectra-Physics Spirit, are commercially available. However, we found that the BlueCut Yb-fibre laser was the most suitable at the time we initiated this study. The most notable reasons include the integrated pulse picking, the simple incorporation of this unit into an existing microscope, air rather than liquid cooling, and the relatively lower cost (including the external pulse picker required for other systems). The Yb-fibre laser used here offers integrated power adjustment and pulse picking, controlled by dedicated software that can pick single pulses. In our experience, we never used the full laser power, and a unit with about 20% of the available power would have been preferable for safety and cost reasons. The ability to produce injury depends on pulse energy and duration. The injury-producing plasma is induced by multiphoton absorption, which requires a minimum intensity threshold that is reached only at the focal plane.
In the case of neuronal injury with a 1.4 NA objective and a pulse duration of 100 fs, this threshold occurs at an intensity of 6.5 × 10¹² W/cm² or an energy of 4 nJ per pulse 1. In this study, the Yb-fibre and Ti:Sapphire lasers produced comparable pulse widths (<400 fs), while the nanosecond-pulse laser produced four-orders-of-magnitude longer pulses (<1.3 ns). Hence, the femtosecond lasers induce axotomy at lower energy levels and therefore with less collateral damage, as longer pulses require more energy to reach the threshold intensity. Yet, regeneration rates were comparable among all three systems. Further, the Yb-fibre laser system enables the lesion of neuronal processes located on the side of an adult C. elegans farther from the objective (n > 20 animals). The mounting and diameter of an adult animal require optical penetration of 80-100 μm, compared to the 30-40 microns achievable with nanosecond laser systems 11. The most probable reason is that near-infrared photons penetrate deeper and are less scattered by biological tissue. Reduced scattering also lowers the required energy levels. Here we have described a new laser axotomy system that provides the same high-precision capabilities as the widely used Ti:Sapphire lasers, but with less maintenance and higher robustness. This capability is not limited to neurons in C. elegans and might be useful to address experimental questions in transparent tissue that require an accurate and localized micrometre-sized lesion up to 100 μm deep with no collateral damage.

Methods

Laser platforms. The Yb-fibre system, which generates ~400 fs pulses in the infrared (1030 nm) (BlueCut, Menlo Systems GmbH, Germany), includes an integrated pulse-picking unit composed of two acousto-optic modulators (AOMs). A fixed AOM reduces the repetition rate from 50 MHz to 1 MHz; a second, variable AOM reduces it further, to as low as 1 kHz. We use the BlueCut Control software provided by Menlo Systems with the laser. The software allows user control of the repetition rate (typically, we use 1 kHz) and external gating of the variable AOM by a transistor-transistor logic (TTL) signal. We use a function generator (model 33210A, Keysight Technologies) to provide the TTL pulse at various lengths (typically 100 ms for 100 pulses at 1 kHz). Reliable picking of single laser pulses can be accomplished through synchronization of the two AOMs in the pulse-picker unit. Alternatively, we pick a single laser pulse with a 1-ms TTL signal, accepting occasional failures that are easy to recognize visually. Users control the laser power by choosing a level of 1 to 99 arbitrary units that are scaled by the software. Typically, we use 10-20 units for ablations. We measured 3-65 nJ/pulse at the image plane (PM100D Power and Energy Meter Console with S170C Microscope Slide Power Sensor, Thorlabs GmbH) when changing the arbitrary-units scale from 4 to 35 (of 99). The beam is directed through a beam expander (10X Achromatic Galilean Beam Expander, AR coated: 650-1050 nm, Thorlabs, USA) and a dichroic mirror (750 nm long-pass, Thorlabs, USA). The diode-pumped, passively Q-switched solid-state system (1Q532-3, Crylas Laser Systems, USA) 11 was integrated via a flip-mounted mirror into the same optical path and microscope objective as the Yb-fibre laser. The only components that had to be replaced were the beam expander (10X Achromatic Galilean Beam Expander, AR coated: 400-650 nm, Thorlabs, USA) and a 1:1 beam splitter (Thorlabs, USA), to accommodate the shorter wavelength (532 nm).
The Ti:Sapphire femtosecond laser (Mantis Pulse Switch Laser, Coherent, Inc.) generates 100-fs pulses in the near infrared (800 nm). We operated the laser at 10 kHz, used 0.25 s exposures for ablations, and propagated the beam through a home-built 10x Galilean expander.

Ablation parameters. We ablated samples with 13-15 nJ (Yb-fibre), 10 nJ (Ti:Sapphire), and 28 μJ (nanosecond) pulse energies, at 1-10 kHz repetition rates. We used 100 pulses in all cases except for ink ablations with the nanosecond laser, for which we used 5 pulses. We focused the pulses with an Olympus UAPO 40×, 1.35 NA oil-immersion objective for ink, bundle, and motoneuron ablations, and an Olympus 100×, 1.4 NA oil-immersion objective for motoneuron ablations. Where possible, the laser power and pulse number were set to the minimum setting at which damage could be observed.

Measurement of lesion area and estimation of minimal power and beam size. The lesion area at the focal plane was determined by focusing the laser through a coverslip (22 × 40 mm rectangular, #1.5 (0.17 mm) thickness) onto a layer of black ink (Sharpie permanent marker; the ink side faced up, away from the objective lens). Bright-field images were acquired (acquisition software: MicroManager v.2.0; camera: Flash4.0, Hamamatsu, Japan) after lesion at the focal plane, as well as 2 μm above and below. The largest diameter of the damage area was quantified from 5 images using ImageJ (FIJI distribution v.1.52). To estimate the minimum ablation power and damage size, we followed the procedure described by Liu 17. We set the power of both the Ti:Sapphire and Yb-fibre lasers to the minimum energy at which a damage spot could be observed on a slide covered in black ink; the repetition rate was set at 10 kHz in both cases. At least 10 damage spots were made on the black slide for each power setting before increasing the power. The power was increased in the smallest possible increments, and new damage spots were made in the ink. The power for each setting was measured with an optical power and energy meter (Thorlabs). We plotted the square of the damage diameter against the log of the pulse energy.

Bundle ablation. Bundle ablation was performed in adult animals that express green fluorescent protein (GFP) either pan-neuronally (strain NW1229) or in the amphid neurons (strain NG3416), obtained from the C. elegans Genetics Center and Gian Garriga, respectively. In all cases, an area of the bundle that included multiple neurites was brought into focus and the laser was aimed at one neurite in the bundle. Images were taken before and after the lesion with MicroManager 19 or Nikon Elements.

Laser axotomy and regeneration measurements. For laser microsurgery and time-lapse microscopy, C. elegans hermaphrodites at the fourth larval stage (L4) were mounted by placing them in a drop of cold, liquid 36% Pluronic F-127 (Sigma-Aldrich) with 1 mM levamisole (Sigma-Aldrich) solution and pressing them between two coverslips 20. The slides were brought to room temperature to solidify the Pluronic F-127 gel and immobilize the animals. Laser axotomy was performed using both the Yb-fibre and the nanosecond pulse laser systems installed on the same microscope (ASI RAMM open frame with epifluorescence and bright-field Olympus optics), for adequate comparison. In both cases, the beam was focused to a diffraction-limited spot that was first located on the live image by lesioning a surface of black ink on a coverslip, as described above.
The targeted neuron was visually inspected immediately following laser exposure (100-500 ms) to confirm successful axotomy. In some cases, multiple laser exposures were necessary to generate a break in the nerve fibre. Axotomy of D-type motoneurons was performed by severing the anterior ventral-dorsal commissures 40-50 μm away from the ventral nerve cord. Neuronal regeneration was assessed 24 hours after axotomy on the same microscope and imaging system.

Quantification and analysis. Z-stacks were acquired before and immediately after injury, as well as 24 h post injury. Maximum-intensity projections were constructed, and post-injury images (1 min post injury) were analysed to quantify the damage area. Images taken 24 h post injury were analysed to quantify outgrowth. Outgrowth was counted when a new branch extended from the injury site. Image measurements and analysis were carried out with ImageJ software v.1.52, and statistical analysis was done with GraphPad Prism v.8.0.50.

Statistics and interpretation of results. Most of the D motoneuron regeneration and ectopic outgrowth data are binary: we scored whether or not neurons regrew or produced outgrowth. We calculated p-values for these data by Fisher's exact test. For the size of the injury, we calculated p-values by the unpaired, unequal-variance, two-tailed t-test. For the ink damage test, we conducted a one-way analysis of variance (ANOVA) and performed post-hoc comparisons using the Tukey test. Data are represented as average ± standard deviation (SD). * and *** indicate values that differ at the p < 0.05 and p < 0.001 levels, respectively. The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
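The statistical workflow named above maps onto standard library calls. The sketch below is a minimal illustration of those tests; all counts and measurements are placeholders, not data from this study.

```python
from scipy import stats

# Binary regeneration outcomes (regrew / did not) -> Fisher's exact test.
table = [[18, 4],   # laser A: regenerated / not regenerated (placeholder)
         [16, 6]]   # laser B (placeholder)
odds_ratio, p_fisher = stats.fisher_exact(table)

# Injury gap sizes -> unpaired, unequal-variance (Welch), two-tailed t-test.
gaps_ns = [3.8, 4.1, 2.9, 5.0]   # nanosecond laser, um (placeholder)
gaps_yb = [2.4, 2.1, 2.8, 2.3]   # Yb-fibre laser, um (placeholder)
t_stat, p_welch = stats.ttest_ind(gaps_ns, gaps_yb, equal_var=False)

# Ink damage diameters across the three lasers -> one-way ANOVA
# (Tukey post-hoc comparisons would follow, e.g. via statsmodels).
f_stat, p_anova = stats.f_oneway([12.5, 12.1, 13.0],
                                 [1.3, 1.4, 1.3],
                                 [1.7, 1.6, 1.7])
```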
Stabilizing the Li1.3Al0.3Ti1.7(PO4)3|Li Interface for High Efficiency and Long Lifespan Quasi-Solid-State Lithium Metal Batteries

Abstract

To tackle the poor chemical/electrochemical stability of Li1+xAlxTi2−x(PO4)3 (LATP) against Li and the poor electrode|electrolyte interfacial contact, a thin poly[2,3-bis(2,2,6,6-tetramethylpiperidine-N-oxycarbonyl)norbornene] (PTNB) protection layer is applied together with a small amount of ionic liquid electrolyte (ILE). This enables study of the impact of ILEs with modulated composition, such as 0.3 lithium bis(fluorosulfonyl)imide (LiFSI)-0.7 N-butyl-N-methylpyrrolidinium bis(fluorosulfonyl)imide (Pyr14FSI) and 0.3 LiFSI-0.35 Pyr14FSI-0.35 N-butyl-N-methylpyrrolidinium bis(trifluoromethanesulfonyl)imide (Pyr14TFSI), on the interfacial stability of PTNB@Li||PTNB@Li and PTNB@Li||NCM811 cells. The addition of Pyr14TFSI leads to better thermal and electrochemical stability. Furthermore, Pyr14TFSI facilitates the formation of a more stable Li|hybrid electrolyte interface, as verified by the absence of lithium "pitting corrosion islands" and fibrous dendrites, leading to a substantially extended lithium stripping-plating cycling lifetime (>900 h). Even after 500 cycles (0.5C), PTNB@Li|H-MILE|NCM811 cells achieve an impressive capacity retention of 89.1% and an average Coulombic efficiency of 98.6%. These findings reveal a feasible strategy to enhance the interfacial stability between Li and LATP by selectively mixing different ionic liquids.

Introduction

The development of next-generation high-performance battery systems hinges on the advancement of science and technology that enables the use of lithium metal as the anode. [1] Despite its appealing features of high theoretical specific capacity (3860 mAh g⁻¹) and low electrochemical potential (−3.04 V vs. the standard hydrogen electrode), [2] the severe safety issues and rather poor reversibility, arising from inhomogeneous lithium deposition as well as consumption of conventional carbonate-based organic liquid electrolytes (LEs) at the electrode|electrolyte interface, have impeded the development and commercialization of Li metal batteries. [3] One of the most promising approaches to overcome these challenges is to utilize nonflammable solid-state electrolytes featuring intrinsically high safety. Among the options, the NASICON-type Li1+xAlxTi2−x(PO4)3 (LATP) solid-state electrolyte, which offers a wide electrochemical stability window, moderately high ionic conductivity, and good resistance against air, appears to be a promising candidate. [4] Additionally, compared to other promising candidates, such as the garnet Li7La3Zr2O12 (LLZO) and sulfide-based Li10GeP2S12 (LGPS), LATP is cheaper, rendering it more competitive for mass production and commercial application. [5] However, LATP suffers from chemical/electrochemical instability against lithium metal on account of the formation of a mixed ionic-electronic conducting interphase. [6] In view of this issue, numerous efforts have been made to avoid direct contact between LATP and Li by introducing an intermediate protection layer made of, for example, germanium, [7] boron nitride, [8] ZnO, [9] nanocomposites (LiF, MgF2, and B2O3), [10] Li3PO4, [11] or cross-linked poly(ethylene glycol) methyl ether acrylate (CPMEA). [12] Another remaining challenge of LATP lies in the poor interfacial contact with the electrodes, which results in high overall charge-transfer impedance.
The employment of solid-liquid hybrid electrolytes can substantially reduce the electrode|electrolyte contact impedance owing to the improved interfacial wetting. [13] However, decomposition of carbonate-based LEs leads to extensive battery performance decay due to the formation of a resistive solid-liquid electrolyte interphase, especially in the presence of trace water. [6a] Ionic liquid electrolytes (ILEs), on the other hand, offer high compatibility with lithium metal. [14] Furthermore, ILEs exhibit the advantages of negligible vapor pressure and very low flammability, high chemical and thermal stability and, in some cases, hydrophobicity. [15] According to Pervez et al., [16] the application of thin 0.2LiTFSI-0.8Pyr14FSI ILE interlayers on the surface of lithium lanthanum zirconate (LLZO) resulted in substantially lowered interfacial impedances (and thus overpotentials) during lithium stripping-plating tests of symmetric Li||Li cells and charge-discharge tests of Li||LiFePO4 cells. Although the introduction of the ILE interlayer may raise the overall battery cost, it certainly enables improved performance while even avoiding the need to apply external compression to the cells. Furthermore, reduced prices are expected once large-scale production of ILEs is realized for commercial batteries. [17] Strategies such as extending equipment life and economies of scale can further mitigate the cost issue. [18] Recently, [19] we revealed that a poly[2,3-bis(2,2,6,6-tetramethylpiperidine-N-oxycarbonyl)norbornene] (PTNB) interlayer impregnated with the 0.4LiFSI-0.6Pyr14FSI ILE strongly enhances the cyclability of the Li metal electrode against the LATP-based hybrid electrolyte. However, the impact of using multi-anion ILEs, which have been found to have beneficial synergistic effects, remains to be studied. [20] It is known that Pyr14TFSI exhibits higher stability towards oxidation than Pyr14FSI, further allowing for an increased tolerance towards the lithium metal electrode by inhibiting electrolyte degradation during cycling. [21] A recent work by Wu et al. revealed a synergistic interplay of FSI and TFSI dual anions which enables highly favorable interfacial passivation layers on the surface of both Li metal and Ni-rich NCM electrodes, contributing to an outstanding capacity retention of 88 % over 1000 cycles. [22] Thus, we hereby compare the performance of hybrid electrolytes employing different ILEs, in particular, the single-anion 0.3LiFSI-0.7Pyr14FSI (ILE) and the mixed-anion 0.3LiFSI-0.35Pyr14FSI-0.35Pyr14TFSI (MILE). The resulting hybrid electrolytes are denoted as H-ILE and H-MILE, accordingly. As expected, H-MILE is found to provide a larger electrochemical stability window (i.e., higher oxidation and lower reduction potentials), benefiting from the presence of TFSI anions. [23] Moreover, the decomposition of the two anions contributes to the formation of a thinner, but LiF-rich, interphase. This regulates homogeneous Li+ deposition and stripping and thus mitigates the irreversible interfacial reactions between LATP and Li, extending the lithium stripping-plating cycling lifetime. Due to the slightly reduced ionic conductivity and higher viscosity, the rate capability is unfortunately sacrificed, as revealed in cells comprising PTNB-coated lithium (PTNB@Li), LiNi0.8Co0.1Mn0.1O2 (NCM811) and the H-MILE hybrid electrolyte (PTNB@Li|H-MILE|NCM811). The two formulations are summarized in the sketch below.
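For concreteness, the two salt mixtures can be written as mole fractions. The following minimal Python sketch (not from the paper; the variable names are illustrative) makes the single-anion vs. mixed-anion distinction explicit:

# Mole-fraction formulations of the two ionic liquid electrolytes compared here.
ILE = {"LiFSI": 0.30, "Pyr14FSI": 0.70}                       # single-anion: FSI only
MILE = {"LiFSI": 0.30, "Pyr14FSI": 0.35, "Pyr14TFSI": 0.35}   # mixed-anion: FSI + TFSI

def anion_fractions(mix):
    # Sum the mole fractions of FSI-donating and TFSI-donating salts.
    fsi = mix.get("LiFSI", 0.0) + mix.get("Pyr14FSI", 0.0)
    tfsi = mix.get("Pyr14TFSI", 0.0)
    return fsi, tfsi

for name, mix in (("ILE", ILE), ("MILE", MILE)):
    assert abs(sum(mix.values()) - 1.0) < 1e-9   # fractions must sum to one
    print(name, "FSI : TFSI =", anion_fractions(mix))   # ILE -> (1.0, 0.0); MILE -> (0.65, 0.35)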
However, the average Coulombic efficiency (CE) upon cycling at 0.5C is greatly ameliorated, achieving 99.3 % after 200 cycles and 98.6 % after 500 cycles. The remarkable cycling stability is associated with the stable interfacial impedance upon cycling. Taken together, the results reported herein demonstrate that enhancing the interfacial stability between Li and LATP by mixing different ILs is a feasible strategy. Such a method is expected to be applicable to other solid-state electrolytes suffering from poor stability against lithium metal as well. Results and Discussion The electrochemical stability windows of H-ILE and H-MILE were evaluated by linear sweep voltammetry (Figure 1c). Setting the threshold for the current density flow at 15 μA cm⁻², the anodic stability of H-MILE is revealed to be as high as 5.459 V, which is 0.1 V higher than that of H-ILE. This is in good agreement with previous reports showing that the TFSI anion is more stable toward oxidation than the FSI anion. [20c,23,24] The reduction peak observed at ~1.2 V in the cathodic scan of H-ILE is assigned to the formation of the solid electrolyte interphase (SEI) as well as to the contribution of impurities present in LiFSI. [19,20c,23] In the H-MILE hybrid electrolyte, this peak shows a mild shift towards higher potential whereas its intensity is reduced, presumably owing to the lower amount of FSI anions. Additionally, a reduction peak appears at ~0.7 V, which must be associated with Pyr14TFSI. Both H-MILE and H-ILE enable lithium plating with a rather low overvoltage. In fact, a current density flow of −15 μA cm⁻² is attained at −0.013 V and −0.001 V, respectively. The better stability towards reduction is also in good agreement with the literature. [22,25] The better oxidation and reduction stability of H-MILE implies that combining Pyr14FSI, Pyr14TFSI and LiFSI into a ternary ionic liquid electrolyte may lead to better electrode|electrolyte interfacial compatibility and a wider operating voltage range. The thermal stability of H-ILE and H-MILE (Figure 2) was determined by thermogravimetric analysis (TGA). Defining the starting decomposition temperature (Tstart) as the temperature at which 1 wt % of the sample weight is lost, Tstart is revealed to be 175°C and 189°C for H-ILE and H-MILE, respectively. Above 335°C, an additional slope is evidenced for H-MILE, attributed to the decomposition of the TFSI anion. Besides the higher Tstart, the decomposition of H-MILE occurs at higher temperatures, highlighting its improved thermal stability. The higher weight loss of H-MILE compared to H-ILE (ca. 1.5 wt %) is attributed to the higher density of MILE (1.442 g cm⁻³ vs. 1.409 g cm⁻³), resulting in its larger weight fraction in the hybrid electrolyte (given that the porosity of the LATP/PVDF-TrFE film is the same for both hybrid electrolytes). Lithium stripping-plating tests (Figure 3) were performed in symmetric PTNB@Li|H-MILE|PTNB@Li or PTNB@Li|H-ILE|PTNB@Li cells to evaluate the interfacial stability of the hybrid electrolytes with lithium metal. The overpotential of the PTNB@Li|H-ILE|PTNB@Li cell (Figure 3a) decreases upon cycling from 45 mV (2nd cycle) to 20 mV (125th cycle). However, the cell failed due to an internal short circuit right after, achieving a cycling lifetime of 250 h. The large initial overpotential is induced by the formation of a rather resistive interphase resulting from the native passivation film on the Li foil and the SEI spontaneously formed through the interaction between Li and the ILE.
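Both operational definitions used above, Tstart as the temperature at which 1 wt % of the sample weight is lost and the anodic limit as the potential at which the current density reaches 15 μA cm⁻², are simple threshold crossings. A minimal Python sketch illustrates how such values are read off; the data arrays are placeholders, not the measured TGA/LSV traces:

import numpy as np

# Hypothetical TGA trace: temperature (°C) vs. residual weight (%).
temp = np.array([100, 150, 175, 200, 250, 300, 335, 400])
weight = np.array([100.0, 99.6, 99.0, 98.1, 95.0, 88.0, 80.0, 60.0])
# Tstart = first temperature at which the residual weight drops to 99 % or below.
t_start = temp[np.argmax(weight <= 99.0)]
print("Tstart =", t_start, "°C")

# Hypothetical LSV scan: potential (V vs. Li/Li+) vs. current density (μA cm⁻²).
potential = np.array([4.8, 5.0, 5.2, 5.4, 5.459, 5.6])
current = np.array([1.0, 2.5, 5.0, 12.0, 15.0, 40.0])
# Anodic limit = first potential at which the 15 μA cm⁻² threshold is reached.
anodic_limit = potential[np.argmax(current >= 15.0)]
print("Anodic limit =", anodic_limit, "V")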
[26] Afterwards, the partial breakdown of the native as well as the newly formed SEI leads to a sharp overpotential reduction during the initial few cycles. [26] The continuous decrease of the overpotential upon consecutive cycling is primarily attributed to the increased surface area resulting from the formation of fibrous dendrites and corrosion pits (see below). The PTNB@Li|H-MILE|PTNB@Li cell, on the other hand, exhibits a substantially higher overpotential (133 mV at the 2nd cycle). After 290 h, the overpotential gradually reduces to 64 mV. The overall higher overpotential in the PTNB@Li|H-MILE|PTNB@Li cell can be explained by the lower ionic conductivity of H-MILE, 0.76 mS cm⁻¹, with respect to H-ILE, 1.48 mS cm⁻¹ (see the Supporting Information; Figure S1). However, no apparent overpotential growth is observed afterwards, achieving a highly stable cycling performance up to 924 h before soft short circuits were observed (Figure 3b). This indicates that the substitution of H-ILE with H-MILE enables a substantially extended lithium stripping-plating lifetime. The significantly prolonged cycling lifetime implies that the detrimental side reactions at the interface between PTNB@Li and the H-MILE hybrid electrolyte are effectively suppressed. To exclude the impact of the PTNB coating layer, we also performed the same lithium stripping-plating test on Li|H-MILE|Li and Li|H-ILE|Li cells (Figure S2). Overall, the overpotential evolution shows a very similar trend to that seen in Figure 3a. Nonetheless, generally smaller overpotentials are observed in the absence of the PTNB coating layer. This is consistent with our previous finding that the introduction of PTNB allows for an extended cycling lifetime, but meanwhile an increased overpotential. [19] The cycling lifetime is estimated to be 396 h and 108 h for the Li|H-MILE|Li and Li|H-ILE|Li cells, respectively. Given the above results, the superiority of H-MILE over H-ILE is consolidated in both scenarios, with and without the PTNB coating layer. To understand the reason for the enhanced cycling stability of the cells using H-MILE, electrochemical impedance spectroscopy (EIS) measurements were performed on the PTNB@Li|H-ILE|PTNB@Li and PTNB@Li|H-MILE|PTNB@Li cells upon cycling. Figure 3c and Figure 3d show selected Nyquist plots of the PTNB@Li|H-ILE|PTNB@Li cell (one every 10 cycles) and the PTNB@Li|H-MILE|PTNB@Li cell (one every 30 cycles), respectively. The overall impedance of the former cell is smaller than that of the latter, in accordance with the smaller overpotential observed in Figure 3a. The depressed semicircle from high to medium frequency is assigned to the impedance of the PTNB@Li|H-ILE or PTNB@Li|H-MILE interface. This feature is seen to gradually shrink for the PTNB@Li|H-ILE|PTNB@Li cell, probably due to the roughening of the electrode surface upon lithium plating and stripping, initially resulting in an increase of the electrochemically active area, but later in the occurrence of soft short circuits. By contrast, the interfacial impedance of PTNB@Li|H-MILE demonstrates a steady drop until 240 h and remains nearly constant afterwards. The evolution of the Z' values (collected at 69.3 Hz) upon cycling is shown in Figure 3e. In both cases, the evolution of the interfacial impedance is perfectly in line with the overpotential evolution trend (Figure 3a,b).
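The conductivity values quoted here come from the bulk resistance measured by EIS via the standard relation σ = t/(R·A), with t the electrolyte thickness, R the bulk resistance (high-frequency intercept of the Nyquist plot) and A the electrode area. A short sketch, with placeholder cell dimensions rather than the actual cell parameters:

def ionic_conductivity_mS_per_cm(thickness_cm, resistance_ohm, area_cm2):
    # sigma = t / (R * A), converted from S/cm to mS/cm.
    return thickness_cm / (resistance_ohm * area_cm2) * 1e3

# Example with assumed values: a 100 μm thick film of 1 cm² area and R = 6.8 Ω
# gives ~1.5 mS/cm, i.e. on the order of the H-ILE value reported above.
print(ionic_conductivity_mS_per_cm(0.01, 6.8, 1.0))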
The EIS results reveal a more stable interface formed between PTNB@Li and the H-MILE hybrid electrolyte, which inhibits the degradation of the Li electrode interface and prevents the formation and growth of dendrites. The ex situ surface morphology of the cycled lithium electrodes was subsequently observed by scanning electron microscopy (SEM) (Figure 4 and Figure S3). The lithium metal electrodes recovered from the cycled PTNB@Li|H-ILE|PTNB@Li and PTNB@Li|H-MILE|PTNB@Li cells (see Figure 3) are hereby denoted as C_Li 1 and C_Li 2, respectively. Notably, at low magnification (Figure S3), residual LATP particles are observed on the surface of both cycled lithium electrodes. This could be due to the partial breakdown of the native passivation layer, the PTNB protection interlayer, and the SEI formed upon cycling as a result of the huge volume changes of lithium metal. The surface of C_Li 1 shows many "pitting corrosion islands" (red dashed circles in Figure 4a and Figure S3a). Taking a closer look at these "islands" (Figure 4b, c), aggregated LATP particles are found to penetrate into the lithium foil, while rather large and fibrous lithium dendrites are observed on the walls of the pits. This underlines that strong parasitic reactions occurred between LATP and lithium metal. The inferior result obtained here compared to our previous study, which used the 0.4LiFSI-0.6Pyr14FSI ILE-based hybrid electrolyte, [19] implies that a high LiFSI concentration might have a positive impact on strengthening the interfacial stability of LATP and Li. Consequently, the formation of dendrites and pits enlarges the surface area, gradually reducing the overpotential (Figure 3a) and eventually leading to the internal short circuit. In sharp contrast, the majority of the surface of the C_Li 2 sample (Figure 4d, e) remains rather clean and smooth. Although aggregated LATP particles are also observed, they are located in the surface region of the lithium metal instead of penetrating into the bulk. Taking a closer look at a representative region where aggregated LATP particles are located (Figure 4f), the lithium surface appears rough and uneven. Nevertheless, no fibrous lithium dendrites are observed beneath these agglomerates of LATP particles. The C_Li 2 sample was recovered from the cell that underwent a much longer lithium stripping-plating cycling (i.e., ca. 924 h vs. ca. 250 h for C_Li 1). Even under such circumstances, C_Li 2 exhibits a substantially better-preserved surface morphology than C_Li 1. Therefore, the incorporation of MILE favors the establishment of a stabilized interface between Li metal and the hybrid electrolyte, preventing the formation and growth of lithium dendrites. In the next step, X-ray photoelectron spectroscopy (XPS) was conducted to analyze the interphase formed on the surface of PTNB@Li in H-ILE and H-MILE (Figure 5: C1s and F1s regions; Figure S4: N1s and S2p regions). The interphase formed on lithium in both electrolytes is similar in terms of chemical composition, also at different depths. Specifically, the interphase species are hydrocarbons (−C−C−/−C−H−) and CN-containing species from IL decomposition, as demonstrated by the C1s spectra, [27] as well as FSI (or FSI and TFSI) and their decomposition products such as reduced −SO2N(−)−SO2−, Li3N, −SOx−, Li2S, and LiF, as indicated by the F1s, N1s and S2p XP spectra.
[19,28] Moreover, the surfaces of both PTNB@Li samples contain carbon-oxygen species, which correspond to the PTNB polymer and suggest that the formed SEI is inhomogeneous and/or rather thin. In addition, the depth profiling study suggests that both interphases are stable over a depth range of 12 nm. It is worth mentioning that the interphases formed on both samples are thicker than 12 nm, because the decomposition products are still observed after 15 min of Ar+ sputtering. Nevertheless, the better electrochemical performance of H-MILE might be related to the formed fluorine-rich interphases, for example, LiF and reduced species of the FSI/TFSI anions such as −SO2N(−)−SO2−. To verify that the higher concentration of fluorine species is not due to the longer cycling time of the PTNB@Li|H-MILE|PTNB@Li cell, we performed XPS measurements on the Li electrodes extracted from two PTNB@Li|H-ILE|PTNB@Li and PTNB@Li|H-MILE|PTNB@Li cells which were cycled for the same time (150 h). PTNB@Li cycled with H-MILE showed a higher LiF concentration than that cycled with H-ILE (Figure S5). It is known that LiF is formed at the initial stages of electrolyte decomposition and subsequently bonds with the Li surface. [29] Therefore, LiF is usually observed at greater depth, close to the Li metal surface. [19,29b] On the other hand, the C1s photoelectron spectra show that the Pyr14+ cation decomposed more intensively in H-ILE, resulting in a higher concentration of hydrocarbons and CN-containing species. Therefore, a possible explanation for the highest LiF concentration is that the interphase of PTNB@Li tested in the H-MILE system is thinner. LiF is known to be a good electrical insulator, effectively blocking interfacial side reactions while enabling a homogeneous Li+ flux at the interface. [30] Therefore, the thinner, but LiF-rich, interphase is beneficial for suppressing lithium dendrite growth and preventing pitting corrosion, enabling a better PTNB@Li|hybrid electrolyte interface and interphase. [30b] The above characterizations and analyses clearly demonstrate that H-MILE exhibits higher compatibility with lithium metal. Thus, the electrochemical performance of full cells comprising PTNB@Li as the negative electrode, NCM811 as the positive electrode and H-MILE (or H-ILE for the sake of comparison) as the electrolyte was investigated. Figures 6a and 6b display selected (dis)charge voltage profiles of the PTNB@Li|H-ILE|NCM811 and PTNB@Li|H-MILE|NCM811 cells at various C-rates. Initially, the PTNB@Li|H-ILE|NCM811 cell delivers a slightly higher capacity than the PTNB@Li|H-MILE|NCM811 cell (ca. 181 vs. 177 mAh g⁻¹). Similar capacities are delivered at low current densities (0.1C and 0.2C) by both cells (i.e., 177 vs. 178 mAh g⁻¹ at 0.1C and 169 vs. 168 mAh g⁻¹ at 0.2C). However, a further increase of the C-rate from 0.3C to 2C leads to more obvious differences in the delivered capacities. Specifically, the PTNB@Li|H-ILE|NCM811 cell delivers capacities of 155 (0.5C), 146 (0.75C), 137 (1C) and 104 (2C) mAh g⁻¹, higher than those obtained by the PTNB@Li|H-MILE|NCM811 cell (147 (0.5C), 131 (0.75C), 116 (1C), and 66 (2C) mAh g⁻¹). In particular, the sudden capacity drop from 1C to 2C and the subsequent gradual capacity recovery of the PTNB@Li|H-MILE|NCM811 cell (Figure 6c) reflect a high polarization in the cell. This is primarily associated with the lower ionic conductivity of H-MILE with respect to the H-ILE counterpart (0.76 vs.
1.48 mS cm⁻¹; Figure S1), since fast ion transport is vital for the performance at high (dis)charge rates. The evaluation of the long-term cycling stability at 0.5C is shown in Figure 6d. In line with the rate capability results, the PTNB@Li|H-MILE|NCM811 and PTNB@Li|H-ILE|NCM811 cells deliver initial capacities of 147 and 151 mAh g⁻¹ after the formation cycles. The discharge capacity gradually increases upon cycling and stabilizes afterwards in both cells, achieving capacities of 149 and 155 mAh g⁻¹ after 200 cycles. Notably, the PTNB@Li|H-MILE|NCM811 cell shows an apparent superiority over the PTNB@Li|H-ILE|NCM811 cell in terms of CE (i.e., 99.3 % vs. 96.9 % after 80 cycles). The high CE of the PTNB@Li|H-MILE|NCM811 cell is comparable to (or even superior to) that of other quasi-solid-state electrolytes. [31] The lower CE of the PTNB@Li|H-ILE|NCM811 cell is related to the formation of small lithium dendrites generating soft short circuits, which lead to extra charge capacity whereas the discharge capacity remains rather stable. To verify this, ex situ SEM images of PTNB@Li recovered from PTNB@Li|H-ILE|NCM811 and PTNB@Li|H-MILE|NCM811 cells cycled for 200 cycles were taken. The former shows severe lithium corrosion pits (Figure S6a, b). Additionally, long fibrous Li dendrites are seen (Figure S6c, d), which explains the low CE (Figure 6d). On the contrary, the cycled PTNB@Li recovered from the PTNB@Li|H-MILE|NCM811 cell displays a rather smooth surface (Figure S7a, b). A few small Li dendrites are also observed (red arrows in Figure S7c, d). However, Li dendrite formation is effectively mitigated in the PTNB@Li|H-MILE|NCM811 cell, in good consistency with its higher CE. This again verifies that the employment of H-MILE substantially improves the cycling compatibility with PTNB@Li. Such a distinct behavior is highly consistent with the results of the lithium stripping-plating test (Figure 3a) and the ex situ electrode morphology analysis (Figure 4). Considering the sharp drop of the CE, the PTNB@Li|H-ILE|NCM811 cell was terminated after 200 cycles. On the other hand, the PTNB@Li|H-MILE|NCM811 cell was kept cycling for an additional 300 cycles. Although soft short circuits seldom occurred, the overall CE over 500 cycles is rather high, achieving an average value of 98.9 %. Even after 500 cycles, the PTNB@Li|H-MILE|NCM811 cell still delivered a capacity of 131 mAh g⁻¹, retaining 89.1 % of the initial capacity. Despite the inferior performance at high current densities, the PTNB@Li|H-MILE|NCM811 cell enables much better long-term cyclability with regard to both capacity and CE, which is critical in practical applications with a limited lithium inventory. For further investigation of the stable cycling performance achieved by the PTNB@Li|H-MILE|NCM811 cell, EIS measurements were conducted on the cell discharged to 3.0 V (Figure 7a) and charged to 4.3 V (Figure 7b). Figure 7 shows selected Nyquist plots (one every 20 cycles) over 200 cycles. At the discharged state (3.0 V), the depressed semicircle has two major contributions. [19] One corresponds to the contact impedance between the Al current collector and the NCM811 electrode, which is revealed to be independent of the state of charge, generally remaining constant upon de-/lithiation unless noticeable electrode volume changes occur. The second contribution corresponds to the interfacial impedance of PTNB@Li|H-MILE.
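The retention figure follows directly from the quoted capacities, as a quick check shows:

initial_capacity = 147.0   # mAh/g for PTNB@Li|H-MILE|NCM811 after the formation cycles
final_capacity = 131.0     # mAh/g after 500 cycles at 0.5C
retention = final_capacity / initial_capacity * 100
print(f"Capacity retention: {retention:.1f} %")   # -> 89.1 %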
Thereby, the nearly constant diameter of the depressed semicircle in Figure 7a suggests a remarkably stable interfacial impedance of PTNB@Li|H-MILE. At the charged state (4.3 V), the interfacial impedance of PTNB@Li|H-MILE slightly increases after 80 cycles, which could be due to the thickening of the interphase on the Li surface. Additionally, the newly emerging depressed semicircle in the medium-to-low frequency range is ascribed to the NCM811|H-MILE interfacial impedance. In the initial 60 cycles, the NCM811|H-MILE interfacial impedance gradually decreases, primarily due to the slow electrode activation resulting from the viscous MILE. Afterwards, it stays constant for over 200 cycles, reflecting the high stability of the NCM811|H-MILE interface. Conclusion The impact of incorporating different ILEs into hybrid electrolytes on the oxidation/reduction stability, the thermal stability and, most importantly, the interfacial stability against lithium metal has been investigated. By replacing the binary 0.3LiFSI-0.7Pyr14FSI with the ternary 0.3LiFSI-0.35Pyr14FSI-0.35Pyr14TFSI, the detrimental interfacial side reactions are effectively reduced, as evidenced by the elimination of the lithium "pitting corrosion islands" and the fibrous dendritic lithium, as well as the nearly constant interfacial resistance upon cycling. Thus, the stabilized interface contributes to the significantly improved cycling performance in both symmetric cells and full cells. Impressively, the lithium stripping-plating lifetime is prolonged by a factor of 3.7. The evaluation of PTNB@Li|H-MILE|NCM811 full cells demonstrates stable cycling for 500 cycles at 0.5C, retaining 89.1 % of the initial capacity and achieving an average CE of 98.9 %. Therefore, the strategy of selectively mixing different ionic liquids may provide new insights for the establishment of a stable Li|solid electrolyte interface for high-performance quasi-solid-state lithium metal batteries. Experimental Section The preparation of all electrodes and electrolytes, the assembly of the cells and the handling of ex situ samples were conducted in a dry room with a dew point always below −60°C at 20°C. Preparation of hybrid electrolyte The preparation of the dry LATP/poly(vinylidene fluoride-trifluoroethylene) (PVDF-TrFE) and LATP/PVDF-TrFE/ILE films follows the method already described in previous studies. [19,32] A phase inversion process was used to prepare the dry LATP/PVDF-TrFE film. In brief (Figure S8), the homogeneously mixed slurry containing LATP, PVDF-TrFE/NMP and acetone was cast onto a glass plate. After the acetone had evaporated, the glass plate together with the coated sheet was immersed in a water bath. Subsequently, a vacuum drying and a hot calendering step were applied to obtain the dry LATP/PVDF-TrFE film. Afterwards, 100 μL of ILE (0.3LiFSI-0.7Pyr14FSI) or MILE (0.3LiFSI-0.35Pyr14FSI-0.35Pyr14TFSI) was added to the above-mentioned LATP/PVDF-TrFE film (1.5 cm × 2 cm) with the aid of a vacuum. The excess ILE was squeezed out to avoid any free-flowing liquid electrolyte. The as-prepared hybrid electrolytes are labeled H-ILE or H-MILE, accordingly. Preparation of electrodes The PTNB@Li and NCM811 electrodes were prepared according to the previous study. [19] In brief, a piece of lithium strip was dipped into a PTNB solution in 1,2-dimethoxyethane (DME). After 4 min, the DME solvent was immediately removed by applying a vacuum step. To realize large-scale PTNB-coated Li metal, a spray coating method could easily be adopted.
The NCM811 slurries were prepared by intimately mixing NCM811, Super C65 and PVDF in N-methyl-2-pyrrolidone (NMP) with a weight ratio of 92:4:4. A doctor blade technique was adopted to cast the slurries onto aluminum foils. Next, the as-prepared wet electrode sheets were pre-dried in an oven (60°C) to remove the NMP and then fully dried under vacuum (100°C, 12 h). Prior to cell assembly, the porosity of the NCM811 electrodes was filled with 5 μL of ILE (or MILE) with the aid of a vacuum step (15 min under a pressure of 10⁻³ mbar at 20°C). The electrodes were wiped carefully afterwards to eliminate any free-flowing liquid. The average NCM811 mass loading was ~2.5 ± 0.1 mg cm⁻². All specific capacity values are calculated based on the mass of NCM811. Cell assembly The ESW of both H-ILE and H-MILE was determined in two-electrode Swagelok cells using a polymer-coated lithium metal electrode (PTNB@Li) and an ion-blocking stainless steel (SS) electrode (i.e., PTNB@Li|H-ILE|SS and PTNB@Li|H-MILE|SS). The ionic conductivity of H-ILE and H-MILE was measured in two-electrode pouch cells by sandwiching a layer of H-ILE or H-MILE between two copper foil electrodes. Lithium stripping-plating tests were conducted in two-electrode pouch cells composed of two PTNB@Li electrodes separated by a layer of H-ILE or H-MILE as the electrolyte. Accordingly, the cells are named PTNB@Li|H-ILE|PTNB@Li or PTNB@Li|H-MILE|PTNB@Li. Full cells, for example, PTNB@Li|H-ILE|NCM811 (or PTNB@Li|H-MILE|NCM811), were assembled in two-electrode pouch cells comprising NCM811 as the positive electrode, PTNB@Li as the negative electrode and H-ILE (or H-MILE) as the electrolyte. Materials and electrochemical characterization The densities of ILE and MILE were measured with a density meter (Anton Paar DMA 5000M). The thermal stability of the H-ILE and H-MILE hybrid electrolytes was examined by TGA (Discovery TGA, TA Instruments). The samples were heated up to 600°C at a heating rate of 3°C min⁻¹ in an artificial air atmosphere. The gas flow rate ratio of N2 and O2 was fixed at 60:40. All electrochemical performance tests were performed using a battery tester (Maccor series 4000). The current density for the lithium stripping-plating tests was fixed at 0.1 mA cm⁻² and the duration of each cycle was 2 h. For the long-term cycling tests, cells were always activated by one cycle at 0.05C and three cycles at 0.1C before being subjected to 200 cycles (PTNB@Li|H-ILE|NCM811) or 500 cycles (PTNB@Li|H-MILE|NCM811) at 0.5C. The rate capability was evaluated by cycling fresh cells at different (dis)charge rates ranging from 0.05C to 2C. The current density at the 1C rate corresponds to 200 mA g⁻¹. The voltage range for the full cell tests was 3.0-4.3 V. The EIS spectra (frequency range: 1 MHz-10 mHz; AC amplitude: 10 mV) upon lithium stripping-plating in symmetric cells and galvanostatic cycling in full cells were recorded using a multi-channel potentiostat (VMP, BioLogic Science Instruments). The cell operation temperature was always fixed at 20°C, controlled by a climatic chamber (Binder GmbH). To prepare the ex situ samples, for example, cycled PTNB@Li recovered from symmetric cells and from PTNB@Li|H-ILE|NCM811 (or PTNB@Li|H-MILE|NCM811) cells, the ILE or MILE was first removed by rinsing with dimethyl carbonate (DMC), followed by a vacuum drying step at 20°C to remove the residual DMC.
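For orientation, the test conditions given above can be converted into areal quantities (the input values are taken from the text; only the arithmetic is added here):

# Stripping-plating: 0.1 mA cm⁻² with 2 h per full cycle, i.e. 1 h per half-cycle.
areal_capacity_per_half_cycle = 0.1 * 1.0   # mAh cm⁻² plated/stripped per half-cycle
# Full cells: 1C = 200 mA g⁻¹ and ~2.5 mg cm⁻² NCM811 loading, so 0.5C corresponds to:
areal_current_at_0p5C = 200 * 0.5 * 2.5e-3   # mA cm⁻² -> 0.25
print(areal_capacity_per_half_cycle, areal_current_at_0p5C)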
For the ex situ SEM/XPS measurements, the samples were transferred using a home-designed air-tight transfer box to avoid any contact with moist air. The morphology of the cycled lithium foils recovered from the symmetric and full cells was investigated by SEM (ZEISS EVO MA 10 microscope). The interphase on the lithium metal was also examined by XPS analysis using a monochromatic Al Kα (hν = 1,487 eV) X-ray source and a Phoibos 150 XPS spectrometer (SPECS) equipped with a micro-channel plate and delay-line detector (DLD, Surface Concept). The scans were acquired in fixed analyzer transmission mode with an X-ray source power of 200 W (15 kV), a pass energy of 30 eV and energy steps of 0.1 eV. The depth profiling was performed with a 5 keV Ar+ focused ion gun with an ion filter, at a sputtering rate of 0.8 nm min⁻¹. The CasaXPS software was used for the spectra fitting, using a nonlinear Shirley-type background and a profile function with 70 % Gaussian and 30 % Lorentzian character.
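A quick arithmetic check relates these sputtering conditions to the ~12 nm depth-profiling range mentioned in the discussion of the XPS results:

sputter_rate_nm_per_min = 0.8
sputter_time_min = 15
print(sputter_rate_nm_per_min * sputter_time_min, "nm")   # 12 nm probed after 15 min of Ar+ sputtering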
7,200.2
2022-03-16T00:00:00.000
[ "Chemistry", "Engineering", "Materials Science" ]
THE PROBLEM OF ETHNOCENTRIC BIAS IN SPEECH ACT STUDIES: IMPLICATIONS FOR LANGUAGE TEACHING Most language teaching specialists today hold the view that the aim of second language teaching should be to facilitate learners' acquisition of so-called "communicative competence". Leaving aside the many questions concerning the meaning and use of this term that are being hotly debated in the literature, I will use the term "communicative competence" to refer to the system(s) of knowledge that underlie the ability to use a language both accurately, that is, in a grammatically correct way, and appropriately in different social and situational contexts.1 It is with the latter aspect of communicative competence in particular, viz. the knowledge underlying the ability to use a language appropriately in context, that this paper will be concerned. Let us call this aspect of communicative competence "pragmatic competence".2 Introduction Most language teaching specialists today hold the view that the aim of second language teaching should be to facilitate learners' acquisition of so-called "communicative competence". Leaving aside the many questions concerning the meaning and use of this term that are being hotly debated in the literature, I will use the term "communicative competence" to refer to the system(s) of knowledge that underlie the ability to use a language both accurately, that is, in a grammatically correct way, and appropriately in different social and situational contexts.1 It is with the latter aspect of communicative competence in particular, viz. the knowledge underlying the ability to use a language appropriately in context, that this paper will be concerned. Let us call this aspect of communicative competence "pragmatic competence".2 Teachers, teacher trainers, curriculum designers and materials writers faced with the task of producing not only grammatically competent, but also pragmatically competent second language speakers, need answers to questions such as the following: (1) What does it mean to be pragmatically competent in a language? (2) What aspect(s) of pragmatic competence can be assumed to be universal and can therefore be expected to carry over from the learner's mother tongue? (3) How can the development of pragmatic competence in a second language be facilitated? Providing answers to questions such as (1) and (2) in particular is a concern of linguistics, with linguistics being taken in a broad sense to include disciplines such as pragmatics, sociolinguistics and psycholinguistics. The third question, being a question about teaching practice, is perhaps not first and foremost a linguistic question. However, given that the answer to this question is, at least to a certain extent, dependent on the answers given to the first two questions, it is also, partly, a linguistic question. The aim of this paper is to highlight the contribution that a field of linguistic research known as cross-cultural pragmatics has made and could potentially make to answering questions such as (1)-(3) above. The focus will be on question (2), the question of what aspects of pragmatic competence, if any, can be taken to be universal or non-language-specific. Some rather strong claims have been made in the literature regarding the putative universality of particular aspects of pragmatic competence.
A number of these claims will be presented in section 3 below. However, the fact that these initial universality claims were based almost exclusively on evidence from English and languages closely related to English has given rise to the criticism that they reflect an anglocentric bias. As will be shown in section 3, this criticism is supported by the findings of a growing number of studies that compare the ways in which particular speech acts are performed in different languages and cultures. The results of these studies, and the insights they offer into the way in which cultural differences are encoded in speech act performance, have important implications for first and second language teaching in linguistically and culturally diverse societies. A brief look at some of these implications in section 4 should give an indication of the direction in which answers to question (3) must eventually be sought. Section 2 will deal, very briefly, with question (1) above, i.e. the question of what linguists are talking about when they use the term "pragmatic competence".3 A recent answer to this question is the one given in (Bachman 1990). Sociolinguistic competence, according to Bachman (1990:90), is "knowledge of the sociolinguistic conventions for performing language functions appropriately in a given context". Returning to our example, then, we may say that the knowledge determining the choice of one greeting rather than another, depending on who is being greeted by whom and in what circumstances, is part of sociolinguistic competence. Question (2) can now be made more specific, given Bachman's view of pragmatic competence as comprising two kinds of competence, viz. illocutionary competence and sociolinguistic competence. Firstly, with regard to illocutionary competence, the more specific questions in (4) arise: (4) i. Do all languages allow the same speech acts, or at least the same types of speech acts, to be performed? For example, do all languages have representative speech acts, such as asserting, claiming, saying, reporting, etc.; directives, such as ordering, requesting, suggesting, etc.; commissives, such as promising and threatening; and a number of other types that have been proposed in the literature? ii. Are the pragmatic strategies available for realizing a given speech act the same across languages? iii. Do all languages make available the same linguistic options for encoding the various pragmatic strategies by which a given speech act may be realized? Secondly, as far as sociolinguistic competence is concerned, the general question (2) above gives rise to more specific questions such as those in (5): (5) i. Is the relationship between contextual factors and the choice of specific speech act strategies the same across languages and cultures? For example, do speakers across languages and cultures choose more polite strategies when addressing requests to older people, people of higher status, strangers, etc.? ii. Is the relationship between social norms and the choice of particular speech act strategies the same across languages and cultures? For example, are speakers across languages and cultures motivated by a desire to be polite in choosing indirect rather than direct strategies to realize directive speech acts such as requests? In the next section we will consider some of the claims and counterclaims that have been made in the literature in response to questions such as those in (4) and (5) concerning the possible universality of aspects of pragmatic competence. 3 The question of universality Illocutionary competence As was pointed out in section 2, a first set of questions that bear on the issue of universality in the domain of pragmatic competence are questions about aspects of illocutionary competence, viz. knowledge of what speech acts can be performed and of the pragmatic and linguistic means available for performing them.
The first question, formulated as (4i) above, is whether all languages allow the same speech acts, or at least the same types of speech acts, to be performed. According to Schmidt and Richards (1980:138), most researchers assume that the same basic types of speech acts (representative, directive, commissive, expressive, etc.) occur in all languages and cultures. In an often quoted paper, Fraser, Rintell and Walters (1980:78-79) go even further, claiming that every language makes available to the user the same basic individual speech acts, such as requesting, apologizing, declaring, and promising. They do make provision for the existence, outside the "basic set of speech acts", of acts such as baptizing, excommunicating, doubling at bridge, etc., which they take to be culture-specific and often highly ritualized. In a recently published monograph, Anna Wierzbicka takes issue with Fraser et al.'s claim. She (1991:150ff) points out that "English words such as question, command or blessing identify concepts which are language-specific. They embody an English folk taxonomy, which, like all folk taxonomies, is culture-specific". She (1991:152ff) analyses, by way of example, a speech act concept from another language whose components include "an assumption that the speaker has authority over the addressee, the intention of protecting the addressee from evil, and good feelings towards the addressee". The concepts of authority, responsibility and care do not form part of the concept encoded by the English word warning, according to Wierzbicka (1991:153). A second example comes from an Australian Aboriginal language, that of the Yolngu people, whose speech act categories, according to Wierzbicka (1991:158), likewise fail to match the English folk taxonomy. Examples such as these, according to Wierzbicka (1991:151), provide clear evidence that speech acts are not necessarily language- and culture-independent natural conceptual kinds to which different languages merely attach different labels. As to the question whether the assumption holds that all languages at least have speech acts belonging to all the proposed basic types, viz. representatives, directives, commissives, etc., the answer still has to be the one given by Schmidt and Richards (1980:138): "in fact there has been no ethnographic research carried out to confirm or disprove the assumption". Let us turn to the second question, (4ii) above, which is whether the pragmatic strategies available for realizing a given speech act are the same across languages. Wierzbicka notes, for instance, that some of the conventionally indirect request forms of English "seem particularly odd and amusing - from a Polish point of view". It would seem, then, that even if it could be maintained that, in very general terms, the same kinds of strategies for realizing a request are available in all languages, it is still the case that the specific realization of these strategies differs from language to language. So, too, does the subset of conventionally indirect strategies which are considered to be the standard or preferred ones for performing requests indirectly in a particular language. According to Wierzbicka (1991:26), the claim that all languages share exactly the same strategies for realizing speech acts indirectly is just one more example of the mistaken assumption that Anglo-Saxon conventions hold for human behaviour in general. Having said that, we have in fact also partly answered the third of our questions concerning the universality of illocutionary competence (cf. (4iii) above), i.e. the question of whether all languages make available the same linguistic options for encoding the various pragmatic strategies by which a given speech act may be realized.
As we have just seen, the answer to this question must be negative, at least as far as conventionally indirect strategies are concerned. There is abundant evidence in the literature that, even when closely related languages share an indirect pragmatic strategy, it may be the case that they encode this strategy differently. A comparison of the different linguistic forms by which the indirect request strategies of questioning the addressee's ability or willingness to perform the desired act are encoded in English and Hebrew, according to Blum-Kulka (1982:34-35), will serve to illustrate this point. (7) [A table, only partially recoverable from the source, comparing English and Hebrew forms under the headings a. Ability questions and b. Willingness questions; item (7a iii) is Will you be able to do X?] In (7) above a dash ("-") in a given box means that an utterance with the linguistic form concerned cannot be used to realize a request in that particular language. It should not be taken to mean that such an utterance is not a possible utterance in the language. Will you be able to do X? (cf. (7a iii)), for example, is a possible utterance of English, but it will not be interpreted by speakers of English as a request to do X. Rather, it will be interpreted as a genuine question concerning the addressee's ability to do X, illustrating that an utterance with a particular conventional illocutionary force in one language may lose this force when translated into another language. No linguist, to my knowledge, has defended the claim that if one language uses a particular syntactic structure to encode a given pragmatic strategy, then all other languages must use the same structure to encode that strategy. Sociolinguistic competence We turn now to the second set of questions that were raised in section 2: questions concerning the universality of aspects of sociolinguistic competence, or knowledge of the relationship between what and how, on the one hand, and when and to whom, on the other. The first question to be considered (cf. (5i) above) is whether the relationship between contextual factors and the choice and linguistic realization of speech act strategies is the same across languages and cultures. To make this question more concrete, consider the options available to a speaker of English who wants to make a request in a given situation. The speaker first of all has to make a choice from among nine different pragmatic strategies, ranging on a scale of directness from direct and explicit, through conventionally indirect, to highly indirect, as shown in (6). Having chosen a strategy, the speaker then has to decide on the precise linguistic form by which the strategy is to be encoded. For example, having chosen to realize the request by means of an ability question, the speaker has to decide whether the question should be phrased by means of can you or could you, whether to address the hearer as sir or old chap, whether or not to use slang, etc. The question, then, is whether and to what extent the relationship between pragmatic choices such as those outlined and aspects of the context within which a speech act is performed can be assumed to be constant across languages and cultures. This question is perhaps the easiest one to answer: no speech act theorist that I know of has been prepared to deny that languages and cultures differ significantly with respect to both what speech acts ought to be, ought not to be, or may be performed in what contexts, and how a given speech act is to be performed in a given context.
Factors such as the sex, age, status and authority of the speaker and addressee, their familiarity with each other, whether the speech act is performed publicly or privately, orally or in writing, the topic, and the actual setting all influence the ways in which speech acts are realized. But the precise way in which each of these factors influences the realization of a given speech act differs from society to society, and from one culture to the next. For example, in a comparison of the requests of speakers of British English and those of Spanish speakers, Rintell (1981:15) found that Spanish speakers, but not English speakers, were significantly more deferential when making requests of addressees of the opposite sex than when making requests of addressees of the same sex. A study by Beebe (1985) of refusals in Japanese and American English, respectively, has shown that the status of the addressee has a much stronger influence on the form of refusals in Japanese than in American English.8 Examples of studies showing that different pragmatic choices reflect the assignment of different weights to the same social and situational variables in different languages and cultures can be proliferated. However, I can do no more here than refer the interested reader to the extensive overview provided in (Wolfson 1989: ch. 4, 7). The fact that the conventions determining the choice of strategies and forms for the realization of particular speech acts in particular situations are undoubtedly language- and culture-specific has not deterred linguists from hypothesizing that, underlying these surface differences, there may be universal norms or motivating principles to which particular pragmatic choices are systematically related across languages and cultures. This, of course, brings us to question (5ii) above: Is the relationship between social norms, or principles, and the choice of particular speech act strategies the same across languages and cultures? Let us consider one particular norm, or principle, that has been hypothesized to be universal and therefore capable of explaining aspects of the speech act performance of speakers cross-linguistically and cross-culturally, viz. the principle of politeness. The content of the notion 'politeness' is not as clear as it would seem at first blush. However, as the content of the notion is highly theory-dependent, a full clarification would take us far beyond the scope of this paper.9 I will therefore concentrate on one particular account of politeness. In this account, Brown and Levinson (1987) define politeness as the manifestation of respect for and consideration of another's face. "Face" is defined both positively, as the desire of the individual to be liked and approved of, and negatively, as the individual's desire not to be imposed upon. Some speech acts, such as directives, are considered to be intrinsically imposing and therefore threatening to the face of the addressee. The seriousness of a face-threatening act is determined by the interplay of three independent and culture-sensitive variables: (i) the social distance between the speaker and hearer, i.e. their degree of familiarity and solidarity, (ii) the relative power of the speaker with respect to the hearer, i.e. the degree to which the speaker can impose his or her will on the hearer, and (iii) the ranking of the size of the imposition, i.e. the degree of the hearer's conventionally recognized obligation to provide the goods or services, or to perform the actions concerned, the right of the speaker to impose, and the degree to which the hearer welcomes the imposition.
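Brown and Levinson (1987) combine these three variables into a single additive estimate of the weightiness Wx of a face-threatening act x:

Wx = D(S,H) + P(H,S) + Rx

where D(S,H) is the social distance between speaker S and hearer H, P(H,S) is the relative power of the hearer over the speaker (note the direction of P in their formulation), and Rx is the culture-specific ranking of the imposition carried by x. The greater Wx, the more indirect or face-redressive the strategy the speaker is predicted to choose.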
The choice of certain strategies rather than others to perform potentially imposing, and therefore face-threatening, speech acts is seen, then, as an attempt by the speaker to reduce the threat to the hearer's face, or to "soften" the imposition on the hearer. For example, an English speaker saying I would appreciate it if you would shut the door rather than Shut the door! implicates not only a request, but also the desire to be polite. The question, now, is whether there is a systematic relationship between the choice of specific speech act strategies and the desire to be polite, and whether this relationship holds universally. Searle (1975:64) maintained that "ordinary conversational requirements of politeness normally make it awkward to issue flat imperative sentences (e.g. Leave the room) or explicit performatives (e.g. I order you to leave the room), and we seek therefore to find indirect means to our illocutionary ends (e.g. I wonder if you would mind leaving the room). In directives, politeness is the chief motivation for indirectness." The implication of Searle's claim is that there is a systematic and universally stable relationship between a speaker's desire to be polite (and a hearer's recognition of this desire), on the one hand, and the degree of (in)directness of the strategy chosen to realize the speech act, on the other. Wierzbicka, however, has argued that the emphasis on politeness, defined as respect for another's face, reflects the high value placed on the autonomy of the individual in Anglo-Saxon culture. In the Polish culture, by contrast, attributes such as warmth, sincerity and affection are more highly valued than personal autonomy. Therefore, the choice of speech act strategies by speakers of Polish can never be adequately explained with reference to a norm such as politeness. Rather, a different norm must be used: one which reflects Polish cultural values rather than Anglo-Saxon ones. Conclusion I have tried to identify very briefly some of the claims that have been made regarding the universality of aspects of pragmatic competence.10 I have also tried to show that claims such as these may not be correct. In doing so, I focused on one particular line of argumentation against these claims and on the kind of evidence on which that argumentation is based. It was not my aim to be complete or balanced in my overview; this would have been impossible, given the limited scope of this paper. The views discussed, if correct, have implications for language teaching in a multilingual and multicultural society such as ours. Some of these implications are considered in section 4 directly below. 4 Implications for language teaching In section 3 we saw that many aspects of pragmatic competence which were initially hypothesized to be universal have since been argued to be language-specific or culture-specific. The question that now arises is why the issue of universality, a linguistic issue, should be of interest to language teachers. To answer this question, let us consider what consequences it would have for second language learners if teachers wrongly assumed aspects of pragmatic competence to be universal when they were in fact language- or culture-specific.
A first possible consequence of wrongly assuming aspects of pragmatic competence to be universal when they are in fact language- or culture-specific is that the task of the second language learner may be seriously underestimated. It may be assumed, for instance, that a second language learner of English already knows what it means to request, to insist, to hint, to suggest, etc., whereas this may not be the case. Rather, it may be that these speech acts are not conceptualized in the same way in the learner's language or culture as they are in the target language. To take another example: it may be assumed that the learner already knows the basic strategies for realizing speech acts and that he or she merely needs to learn how these strategies are linguistically encoded in the target language. In fact, however, there may be considerable differences between the ways in which speech acts are realized in the learner's mother tongue and the ways in which they can be realized in the target language. As for the sociolinguistic aspects of pragmatic knowledge, learners may wrongly be believed to operate with the same assumptions concerning how to realize what speech acts when and to whom as target language speakers. In fact, however, there may be significant differences between the learner's mother tongue and the target language, reflecting differences in the social and cultural norms of the two groups of speakers.12 As Wolfson puts it, "if there is anything universal about rules of speaking, it is the tendency of members of one speech community to judge the speech behavior of others by their own standards. It is exactly this lack of knowledge about sociolinguistic diversity which lies at the root of most intercultural misunderstanding." In a linguistically and culturally diverse society all speakers need to be made aware of the diversity of social and cultural value systems and of the ways in which they are expressed through language. In Thomas's (1983:110) words, "Helping students to understand the way pragmatic principles operate in other cultures, encouraging them to look for the different pragmatic or discoursal norms which may underlie national and ethnic stereotyping, is to go some way towards eliminating simplistic and ungenerous interpretations of people whose linguistic behaviour is superficially different from their own." This is the task of those involved in language teaching. The linguist's task is to undertake the research that is necessary to ensure that those involved in language teaching are as well-informed about pragmatic aspects of language as they are about grammar. 1. An excellent overview of different interpretations and uses of the term "communicative competence" is given in (Taylor 1988). 2. Throughout this paper a terminological distinction will be made between "competence" and "performance". The term "competence" will be used to refer to knowledge of (various aspects of) language, i.e. to what speakers know about language. The term "performance", by contrast, will be used to refer to what speakers do when they use language, i.e. to the observable result of the application of knowledge of language. It is of course legitimate to ask whether grammatical competence and pragmatic competence are cognitive capacities of the same sort. This issue will not be addressed here. The interested reader is referred to (Chametsky 1992) for a discussion of different views on this issue. 3. I realize that, in practice, there may be differences between the varieties of English used by South Africans of different linguistic and cultural backgrounds and, hence, that the use of a term such as "South African English" to denote any one variety may be objectionable.
However, not wanting to become entangled in this particular controversy here, and for ease of exposition, I will continue to use the term "South African English" (SAE) to refer to the variety of English which is currently assumed, rightly or wrongly, to be the standard and hence the target for English instruction in South African schools. 2 Pragmatic competence Although different answers have been given to the question of what kinds of knowledge constitute pragmatic competence, some more detailed than others, there is broad consensus among linguists that pragmatic competence includes knowledge of what speech acts can be performed in the language, what linguistic means and forms are available for encoding a given speech act, and what the social and situational conditions are for its appropriate performance. The knowledge determining the choice of the expression Good evening, sir! rather than one of the available alternatives, depending on who is being greeted by whom and in what circumstances, is part of sociolinguistic competence. For example, the speaker has to know that in SAE the expression Good evening, sir! may be used to greet someone whom one encounters late in the afternoon, in the evening, or even late at night, but that it cannot be used to greet someone whom one encounters for the second time in the course of the same evening. This is knowledge concerning the relationship between linguistic expressions and situational factors such as the time and circumstances of the encounter. The speaker also has to have knowledge of how social factors such as the relationship between him- or herself and the addressee, their respective ages, rights, obligations, etc. influence the choice of an utterance. Thus the utterance Good evening, sir! would normally be judged inappropriate if used by an adult native speaker of SAE to greet a lover, a friend or a child, whereas Hi! would be judged quite appropriate. As was mentioned earlier, an important question from the point of view of second language acquisition is whether and to what extent various aspects of pragmatic competence are universal. With conventionally indirect strategies, the hearer's inferential process is short-circuited and the hearer is saved the trouble of having to work out the intended meaning as he or she would have to do in the case of nonconventionally indirect strategies. But, at the same time, the speaker has indicated a desire to be polite by being indirect. One report on the results of a study conducted within the framework of the CCSARP project does caution that the nature of the relationship may differ across cultures. Both Thomas (1983) and Wierzbicka (1991) have questioned the validity of claims such as those that we have been examining. They argue that claims such as those made by Searle reflect culture-specific rather than universal norms. Wierzbicka (1991: ch. 2 and 3) argues on the basis of empirical evidence from languages such as Polish that direct strategies (such as the use of imperatives or speech-act-indicating verbs) are more polite in some languages and cultures
than in others. Thomas (1983:106ff) likewise argues that norms other than politeness may be the chief motivation for the choice of particular speech act strategies in other languages and cultures: norms such as cordiality, truthfulness or sincerity. The discussion throughout this paper has concentrated on the general theory of speech acts proposed by Austin and Searle, to the exclusion of valuable work done within other theoretical frameworks; the choice was motivated by the fact that the Austin-Searle theory has generated such an immense body of research and is still considered a point of departure for work on speech acts even by those who have adopted a different framework.11 The general tenor of the line of criticism that I have been focusing on is that claims about the putatively universal nature of aspects of pragmatic competence may reflect an ethnocentric bias. In presenting this line of criticism I am in no way intimating that I agree with it, or that the arguments offered are valid arguments, or that it is the only possible line of criticism. I have focused on this line of criticism because it is important that language teachers and others involved in language teaching take note of views such as those expressed by Thomas, Wierzbicka and others. Among the conventionally indirect request forms discussed in this connection are ability and willingness questions such as Could you ...?, Would you be so good as to ...?, and Would you be so kind/gracious as to ...?, as well as pseudo-questions which ostensibly enquire about the addressee's desire and which are in fact to be interpreted as requests (Would you like to ...?, Do you want ...?). One influential cross-cultural project investigated the realization of two speech acts, viz. requests and apologies.5 Its authors assume that a request can be realized by producing an utterance of one of the following types: (6) i. an utterance in which the grammatical mood of the verb (viz. the imperative mood) signals the illocutionary force, e.g. Leave me alone!; ii. an utterance in which the illocutionary force is explicitly named, e.g. I am asking you to clean up this mess; iii. an utterance in which the naming of the illocutionary force is modified by hedging expressions, e.g. I want to ask you to give your presentation a week earlier; iv. an utterance in which the hearer's obligation to carry out the act is stated, e.g. You'll have to move your car; v. an utterance in which the speaker's desire for the act to be carried out is stated, e.g. I really wish you'd stop bothering me; vi. an utterance in which it is suggested that the hearer carry out the act, e.g. How about cleaning up this mess?; vii. an utterance containing reference to the preparatory conditions (such as the hearer's ability or willingness to do the act) for the successful performance of a request, e.g. Can you clear up the kitchen for me? Would you mind moving your car? Fraser, Rintell and Walters (1980:78-79) have made the strongest claim, hypothesizing that all languages make available the same
set of strategies for performing a given speech act. Moreover, the conditions that have to be satisfied for an utterance to count as a request are claimed to be essentially the same across all languages. An example of such a condition is the one that stipulates that an imperative utterance can count as a valid request only if the hearer is in fact able to perform the desired act (so that, e.g., Come here!, but not Drop dead!, would count as a valid request in English). "... one could perform requests, or acts closely related to requests, by ostensibly 'asking' about the addressee's ability to do something, or about his goodness (or kindness): ..."
6,850.2
2012-12-01T00:00:00.000
[ "Linguistics" ]
Comprehensive analysis to identify PUS7 as a prognostic biomarker from pan-cancer analysis to osteosarcoma validation Aim: Pseudouridylation has demonstrated the potential to control the development of numerous malignancies. PUS7 (Pseudouridine Synthase 7) is one of the pseudouridine synthases, but the literature on this enzyme is limited to several cancer types. Currently, no systematic pan-cancer analysis of the role of PUS7 in cancer diagnosis and prognosis has been performed. Methods: Employing public databases, including The Cancer Genome Atlas (TCGA), the Genotype-Tissue Expression Project (GTEx), the Human Protein Atlas (HPA), UALCAN and the Tumor Immune Single-cell Hub (TISCH), this work investigated the role of PUS7 in carcinogenesis across cancers. Differential expression analysis, prognostic survival analysis and functional characterization were performed systematically. Furthermore, the potential of PUS7 as an osteosarcoma biomarker for diagnosis and prognosis was assessed in this study. Results: The findings indicated that PUS7 was overexpressed in the majority of malignancies. High PUS7 expression contributed to a poor prognosis in 11 cancer types, including Adrenocortical Cancer (ACC), Bladder Cancer (BLCA), Liver Cancer (LIHC), Kidney Papillary Cell Carcinoma (KIRP), Mesothelioma (MESO), Lower Grade Glioma (LGG), Kidney Chromophobe (KICH), Sarcoma (SARC), osteosarcoma (OS), Pancreatic Cancer (PAAD), and Thyroid Cancer (THCA). In addition, elevated PUS7 expression was linked to advanced TNM stage across multiple malignancies, including ACC, BLCA, KIRP, LIHC and PAAD. The functional enrichment analysis revealed that PUS7 participates in E2F targets, the G2M checkpoint, ribosome biogenesis, and the rRNA metabolic process. Moreover, PUS7 is also a reliable biomarker and a potential therapeutic target for osteosarcoma. Conclusions: In summary, PUS7 is a putative pan-cancer biomarker that reliably forecasts cancer patients' prognosis. In addition, this enzyme regulates the cell cycle, ribosome biogenesis, and rRNA metabolism. Most importantly, PUS7 possibly regulates osteosarcoma initiation and progression.
extensive and aggressive tumor compared to malignancies of epithelial origin. Individuals with metastatic or recurring osteosarcoma have an overall survival rate of approximately 25% [3,4]. Surgery and adjuvant chemotherapy are the mainstays of osteosarcoma treatment and, given the extensive malignancy of this disease, have remained essentially unchanged over the past 30 years. Therefore, researchers continue their search for novel treatments for this disease. Several targeted drugs for osteosarcoma have proceeded to clinical trials with tremendous therapeutic effects [5]. Thus, the future of osteosarcoma treatment may lie in the application of small-molecule therapeutic drugs in conjunction with surgery and chemotherapy. The advancement of high-throughput sequencing has boosted the discovery of new drug targets. Tumors are highly heterogeneous but retain a certain level of homogeneity; hence, some oncogenes are found in multiple cancer types. For instance, TP53 is a key tumor suppressor gene mutated in more than half of human cancers [6]. Pan-cancer analysis has enabled researchers to identify biomarkers involved in various malignancies from sequencing data [7]; such biomarkers are promising therapeutic targets. Pseudouridine (Ψ), an isomer of uridine, is the most abundant and widespread epigenetic RNA modification in organisms [8]. Despite that, the biological role of pseudouridine in cancer is not fully understood. Pseudouridine synthases (PUSs) catalyze pseudouridine formation and are classified into six families: TruA, TruB, TruD, RsuA, RluA, and PUS10 [9]. Emerging studies have identified that PUSs are associated with tumorigenesis and cancer progression. For instance, by directly triggering the transcription of HIF-1, elevated PUS7 expression in CRC (colorectal cancer) tissues could control angiogenesis and metastasis [10]. PUS7 belongs to the TruD class. The expression of PUS7 and its catalytic activity are necessary for glioblastoma stem cell (GSC) tumorigenesis, and pharmacological inhibitors of PUS7 prevent the growth of tumors and extend the lifespan of tumor-bearing mice [11]. Additionally, PUS7 promotes CRC cell growth by effectively stabilizing SIRT1 to stimulate the Wnt/β-catenin pathway [12]. Du et al. also discovered that PUS7 overexpression accelerates colon cancer cell proliferation and invasion via the PI3K/AKT/mTOR signaling pathway [13]. PUS7 has been proven to be a valid biomarker for lung cancer diagnosis in recent research [14]. Nonetheless, systematic research on PUS7 function in various malignancies remains lacking.

This study identified the aberrant expression of PUS7 in tumor and normal tissues and confirmed its predictive value for cancer patient prognosis. In addition, PUS7 regulates cell division and the cell cycle, as well as ribosome biosynthesis and rRNA metabolism. Finally, we identified the oncogenic role of PUS7 in osteosarcoma. In conclusion, PUS7 is a novel and effective biomarker, and thus an attractive molecular target for cancer treatment.
Data collection and expression analysis of PUS7

The mRNA expression matrix across 33 cancer types was downloaded from The Cancer Genome Atlas (TCGA) database (https://portal.gdc.cancer.gov/). The bulk mRNA sequencing data of osteosarcoma (OS) in TCGA-TARGET and GSE21257 were obtained [15]. We also downloaded the relevant clinical information, including OS, PFI, DFI, DSS, and clinical features. Then, using the TCGA and Genotype-Tissue Expression Project (GTEx) datasets, the differential expression of PUS7 between normal and malignant samples across 33 cancer types was examined. Finally, the UALCAN web platform (https://ualcan.path.uab.edu/index.html) was utilized to ascertain the protein level of PUS7.

Immunohistochemistry

The Human Protein Atlas (HPA) (https://www.proteinatlas.org/) provided the immunohistochemistry images of PUS7 protein expression in 15 different cancer types and corresponding normal tissues. Meanwhile, 10 pairs of paraffin-embedded osteosarcoma and adjacent samples were obtained from the Tianjin Hospital of Tianjin University; none of the patients received preoperative chemotherapy. All patients approved the use of the surgical material for academic research and publications, and all methods were approved by the Institutional Review Committee and the Medical Ethics Committee of the Tianjin Hospital of Tianjin University. The slides were incubated with anti-PUS7 (1:1000; ab289857, Abcam, rabbit), following the manufacturer's protocol. Two pathologists independently investigated and quantified the slide images. The IHC intensity score is 0 (negative), 1 (weak brown), 2 (medium brown), or 3 (strong brown). The staining extent was categorized into five levels: 0 (≤10%), 1 (11%-25%), 2 (26%-50%), 3 (51%-75%), or 4 (>75%). The staining value was established by multiplying the intensity score by the extent score; for example, medium-brown staining (intensity 2) covering 26%-50% of cells (extent 2) yields a staining value of 4.

Relationship between PUS7 expression, prognosis, and clinical features

Four survival indicators (OS, DSS, DFI, and PFI) were utilized to examine the connection between PUS7 expression and cancer patients' prognosis. The survival analysis was carried out with the survival R package. Meanwhile, the optimal cutoff point was obtained through the "surv_cutpoint" function in the survminer R package. Using the optimal cutoff point for the PUS7 expression level, the patients in each cancer type were divided into two groups, and Kaplan-Meier survival curves were then fitted. Additionally, a univariate Cox regression analysis was conducted to ascertain the predictive significance of PUS7 expression. Finally, the association between PUS7 expression and clinical data was explored in this study.

Functional enrichment analysis of PUS7

PUS7 was identified as an unfavorable prognostic gene in 11 cancer types in our work. To investigate the oncogenic role of PUS7 in malignant tumors, we extracted RNA sequences for the cancer types in which PUS7 exhibited a significant adverse effect: Bladder Cancer (BLCA), Kidney Papillary Cell Carcinoma (KIRP), Lower Grade Glioma (LGG), Liver Cancer (LIHC), Sarcoma (SARC), and Thyroid Cancer (THCA). Each of these cohorts also included more than 200 individuals, which might increase the accuracy of the functional analysis results. Cases were classified into PUS7-high and -low subsets based on the median PUS7 value in each cancer type. Enriched gene sets were identified using gene set enrichment analysis (GSEA).
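As a concrete illustration of the survival workflow just described, the following R sketch walks through its three steps: finding the optimal cutoff, fitting Kaplan-Meier curves, and running the univariate Cox regression. The data frame dat and its column names (time, status, PUS7) are hypothetical placeholders, not objects from the original analysis.

    # Minimal sketch of the survival workflow (R, survival + survminer).
    # `dat`: one row per patient, with follow-up time, event indicator
    # and PUS7 expression; all names here are illustrative.
    library(survival)
    library(survminer)

    # Optimal cutoff for PUS7 expression, then dichotomize the cohort
    cut <- surv_cutpoint(dat, time = "time", event = "status",
                         variables = "PUS7")
    grp <- surv_categorize(cut)          # PUS7 becomes "high"/"low"

    # Kaplan-Meier curves for the PUS7-high and PUS7-low groups
    fit <- survfit(Surv(time, status) ~ PUS7, data = grp)
    ggsurvplot(fit, data = grp, pval = TRUE)

    # Univariate Cox regression on the continuous expression value
    summary(coxph(Surv(time, status) ~ PUS7, data = dat))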
PUS7-related regulatory gene enrichment analysis

Similarly, the RNA-seq matrix was extracted from BLCA, KIRP, LGG, LIHC, SARC, and THCA patient samples to identify PUS7-related regulatory genes. The patients were split into PUS7-high and PUS7-low subsets on the basis of the median PUS7 value in each cancer type. Subsequently, a differential expression analysis between the PUS7-high and PUS7-low groups was performed to detect the differentially expressed genes (DEGs) (p < 0.05, log FC > 1). The association between PUS7 and these DEGs was then determined using Spearman's correlation analysis for each cancer type (p < 0.05, Cor > 0.4). PUS7-related regulatory genes were defined as the genes common to all cancer types among these closely correlated genes. Then, using the clusterProfiler R package [16], gene enrichment analyses were performed on the PUS7-related regulatory genes. Finally, the effector function of these regulatory genes was ascertained using the Metascape online platform (https://metascape.org/gp/index.html).

Single-cell analysis

This study estimated the PUS7 expression level in cell types across numerous cancers via the Tumor Immune Single-cell Hub (TISCH) database, an online platform designed for multiple single-cell analyses (http://tisch.comp-genomics.org/home/). Further analysis of PUS7 at single-cell resolution was performed using the osteosarcoma GSE152048 dataset downloaded from the GEO database [17]. First, a quality control step was performed to exclude unsuitable cells (200-7,000 detected RNA features; mitochondrial gene expression < 5%). Data were normalized using the "LogNormalize" function with a scale factor of 10,000. Meanwhile, the influence of UMI counts and mitochondrial content (%) was eliminated using Seurat's ScaleData function. Subsequently, the batch effect was removed using the harmony R package. The top 30 principal components and the top 2,000 variable genes were selected for cell clustering and uniform manifold approximation and projection (UMAP) visualization [18]. Finally, canonical marker genes identified in previous studies were employed to annotate the cell types.

Statistical analysis

Comparisons between two groups were performed using the Wilcoxon test, while one-way analysis of variance (ANOVA) was utilized for three or more groups. All statistical calculations were carried out with GraphPad and RStudio.
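The single-cell preprocessing described above can be outlined with Seurat and harmony as follows. This is a minimal sketch under the stated parameters (200-7,000 detected features, < 5% mitochondrial reads, scale factor 10,000, top 2,000 variable genes, 30 principal components); the object name, the batch variable "patient", and the reading of "RNA counts 200-7,000" as detected features are assumptions for illustration, not details taken from the original scripts.

    # Sketch of the scRNA-seq preprocessing (R, Seurat + harmony).
    library(Seurat)
    library(harmony)

    obj <- CreateSeuratObject(counts = counts, min.cells = 3)
    obj[["percent.mt"]] <- PercentageFeatureSet(obj, pattern = "^MT-")

    # Quality control: 200-7,000 detected genes, < 5% mitochondrial reads
    obj <- subset(obj, subset = nFeature_RNA > 200 &
                                nFeature_RNA < 7000 & percent.mt < 5)

    obj <- NormalizeData(obj, normalization.method = "LogNormalize",
                         scale.factor = 10000)
    obj <- FindVariableFeatures(obj, nfeatures = 2000)   # top 2,000 genes
    # Regress out UMI counts and mitochondrial content, as described
    obj <- ScaleData(obj, vars.to.regress = c("nCount_RNA", "percent.mt"))
    obj <- RunPCA(obj, npcs = 30)

    # Remove the batch effect across samples, then cluster and embed
    obj <- RunHarmony(obj, group.by.vars = "patient")
    obj <- FindNeighbors(obj, reduction = "harmony", dims = 1:30)
    obj <- FindClusters(obj)
    obj <- RunUMAP(obj, reduction = "harmony", dims = 1:30)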
PUS7 expression across cancers

PUS7 expression in normal and tumor tissues was assessed using the TCGA database. In this study, PUS7 was significantly upregulated in most tumor tissues, including BLCA, Breast Cancer (BRCA), Cervical Cancer (CESC), Bile Duct Cancer (CHOL), Colon Cancer (COAD), Esophageal Cancer (ESCA), Glioblastoma (GBM), Head and Neck Cancer (HNSC), Kidney Clear Cell Carcinoma (KIRC), KIRP, LIHC, Lung Adenocarcinoma (LUAD), Lung Squamous Cell Carcinoma (LUSC), Prostate Cancer (PRAD), Rectal Cancer (READ), SARC, Stomach Cancer (STAD), and Endometrioid Cancer (UCEC). Conversely, Thyroid Cancer (THCA) and Kidney Chromophobe (KICH) tumor tissues showed a considerable decrease in PUS7 expression (Figure 1A). Additionally, the relative expression of PUS7 in different cancer tissues was examined. It was discovered that PUS7 expression was highest in Testicular Cancer (TGCT), LUSC, and READ tissues and lowest in KICH tissues (Figure 1B). Because the TCGA database has limited normal samples, samples from the TCGA and GTEx databases were merged to examine PUS7 expression in cancer and paraneoplastic tissues. It was found that 25 out of 33 cancer types had significantly higher levels of PUS7 in tumor tissues, in contrast to KICH, THCA, and Acute Myeloid Leukemia (LAML) (Figure 1C). In summary, PUS7 expression was elevated in most cancers, suggesting an oncogenic role in cancer. The PUS7 protein level in tumor and normal samples was further assessed using the UALCAN online platform, although the available proteomic data were limited. This study identified a considerable increase in PUS7 protein levels in ovarian cancer, colon cancer, ccRCC, UCEC, LUAD, HNSC, Pancreatic Cancer (PAAD), LGG and LIHC tumor tissues, which was in line with the RNA-seq analysis (Figure 1D). The HPA database was also utilized to obtain immunohistochemical images. The protein level of PUS7 varied considerably between 15 tumor tissues (https://www.proteinatlas.org/ENSG00000091127-PUS7/pathology) and corresponding normal tissues (https://www.proteinatlas.org/ENSG00000091127-PUS7/tissue, version: 23.0) (see Figure 2A).
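Per the statistical methods above, each of these tumor-versus-normal comparisons reduces to a two-group Wilcoxon test on expression values (or a one-way ANOVA for three or more groups). A minimal R sketch follows; the data frame merged_tcga_gtex and its columns expr, group and stage are invented for illustration.

    # Tumor-vs-normal comparison of PUS7 for one cancer type (base R).
    # `expr` holds log-scale expression, `group` marks "tumor"/"normal".
    wilcox.test(expr ~ group, data = merged_tcga_gtex)

    # For three or more groups (e.g. tumor stages), one-way ANOVA
    summary(aov(expr ~ stage, data = merged_tcga_gtex))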
Prognostic value of PUS7 for cancer patients

The potential of PUS7 as a prognostic biomarker was explored, as PUS7 was overexpressed in most malignancies. Univariate Cox regression and Kaplan-Meier survival analyses for each cancer type were implemented to examine the association between PUS7 expression and cancer patients' prognosis, concentrating on OS, DFI, DSS, and PFI. There is a strong correlation between poor outcomes and upregulated PUS7 expression in 11 different cancer types, including Adrenocortical Cancer (ACC), BLCA, LIHC, KIRP, Mesothelioma (MESO), LGG, KICH, SARC, OS, PAAD, and THCA, which indicated that PUS7 is probably a proto-oncogene (Figure 3A-3M). High PUS7 expression was connected with shorter DFI among tumor patients with PAAD, LIHC, SARC, and UCEC (Figure 4A). Similarly, patients exhibiting higher PUS7 expression demonstrated poor DSS and PFI across numerous malignancies (see Figure 4B, 4C). Notably, PUS7 was identified as a significant risk factor for SARC in the OS, DSS, and DFI analyses, suggesting a critical role of PUS7 in sarcoma progression.

Analysis of PUS7 in single cells

Previous carcinoma research focused on tumor cells without recognizing the significance of non-cancerous cells. Scientists have recently begun to consider cancer an evolutionary ecosystem in which the tumor microenvironment (TME) and tumor cells interact constantly and dynamically [19]. Various subpopulations of the same cell type may vary in distribution, number, and metabolic activity owing to the great heterogeneity of the TME. Similarly, an oncogene can affect both tumor cells and other cells; thus, its role in tumor development is complex. Single-cell sequencing is a powerful tool to analyze oncogene expression at single-cell resolution. Employing scRNA sequencing data of BLCA, KICH, LIHC, SARC, and PAAD, we discovered that PUS7 was expressed in malignant cells as well as in endothelial cells, fibroblasts, and immune cells, including macrophages and T cells (Figure 5A). Focusing on the distribution of PUS7 in osteosarcoma, fibroblasts and cancer cells had the highest levels of PUS7 expression (Figure 5B). Subsequently, another osteosarcoma single-cell dataset, GSE150248, was utilized to validate the PUS7 expression in each cell type (Figure 5C). The findings showed that PUS7 was highly expressed in myeloid cells, cancer cells, and fibroblasts (Figure 5D, 5E). As PUS7 is highly expressed in fibroblasts and immune cells, this observation suggests a complex function of PUS7 in the TME.
Analysis of PUS7-related regulatory pathways

Cancer patients were divided into PUS7-high and -low subsets based on the median PUS7 value in each of six cancer types to explore the oncogenic role of this protein. The gene sets enriched in both groups were identified via gene set enrichment analysis. The top-10 pathways ranked by normalized enrichment score (NES) were visualized in this study. Interestingly, the GSEA outcomes showed surprising consistency among the six cancer types, demonstrating the reliability of our findings. Generally, upregulated genes in the PUS7-high subset demonstrated enrichment of the G2M checkpoint, mitotic spindle, PI3K/AKT/mTOR signaling, and mTORC1 signaling, which are related to cell cycle and proliferation (Figure 6). Moreover, the PUS7-high group exhibited significant enrichment in the DNA repair pathway. The PUS7-related regulatory genes were also investigated in our work. First, patients were classified into PUS7-high and -low subsets based on the median PUS7 values in BLCA, KIRP, LGG, LIHC, SARC, and THCA. Subsequently, differentially expressed genes (DEGs) analysis and Spearman's correlation analysis were performed in each cancer type, yielding 76 PUS7-related regulatory genes (Figure 7A). The GO analysis identified that these regulatory genes were significantly enriched in the ncRNA metabolic process, ribosome biogenesis, the rRNA metabolic process, RNA localization, and ribonucleoprotein complex biogenesis. In addition, KEGG analysis showed that these genes were concentrated in nucleocytoplasmic transport and RNA degradation (Figure 7B). Finally, the effector function of these regulatory genes was validated using the Metascape online platform (Figure 7C). The results revealed that the PUS7-related regulatory genes are involved in RNA metabolism, RNA localization, the amide biosynthetic process, nucleus organization, mRNA modification, DNA replication, and osteoblast differentiation.

PUS7 as a promising biomarker in osteosarcoma

Our previous analysis confirmed that PUS7 expression was markedly elevated in sarcoma tumor tissues. Moreover, PUS7 significantly impacted OS, DSS and DFI in sarcoma patients and was linked to a poor prognosis when highly expressed. Notably, upregulated PUS7 expression was significantly associated with poor outcomes in osteosarcoma. PUS7 may therefore control the growth of osteosarcoma. The effector function of PUS7 was also explored using the data from another osteosarcoma study (GSE21257). The Kaplan-Meier survival curve indicated that upregulated PUS7 expression was significantly linked to poor outcomes in GSE21257 (Figure 8A). PUS7 was also significantly upregulated in patients with osteosarcoma metastases, suggesting a modulatory role of this protein in osteosarcoma progression (Figure 8B). The RNA-seq data for osteosarcoma tissue and corresponding paired normal bone tissue were obtained from GSE99671 [20]. PUS7 expression was discovered to be significantly increased in osteosarcoma tissues (Figure 8C). Immunohistochemical results identified that PUS7 was significantly overexpressed in osteosarcoma tissues compared to the corresponding non-cancerous normal ones (Figure 8D and Table 1). Finally, the TARGET-OS cohort cases were categorized into PUS7-high and -low subsets based on the median PUS7 value. The gene sets enriched in the PUS7-high subset were then determined using GSEA, and the output indicated that the G2M checkpoint, mitotic spindle, and mTORC1 signaling were significantly enriched (Figure 8E).
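A minimal base-R sketch of the correlation screen used here (described in the methods above) is shown below; the expression matrix mat and the DEG vector degs are illustrative placeholders, and the thresholds follow the stated criteria (p < 0.05, Cor > 0.4).

    # Screen DEGs for close correlation with PUS7 in one cancer type.
    # `mat`: genes x samples expression matrix; `degs`: DEG symbols.
    related <- Filter(function(g) {
      ct <- cor.test(mat["PUS7", ], mat[g, ], method = "spearman")
      ct$p.value < 0.05 && ct$estimate > 0.4
    }, degs)
    # Repeating this per cancer type and intersecting the results gives
    # the PUS7-related regulatory genes fed to clusterProfiler/Metascape.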
DISCUSSION

Post-transcriptional gene expression is controlled by a critical mechanism known as RNA modification, which regulates multiple cellular processes, including translation initiation, transcript stabilization, pre-mRNA splicing, and nuclear export [21,22]. Furthermore, RNA modification links transcription and translation, which are essential for the development of various diseases and determine the fate of cancer cells [23]. The most intensively studied RNA modification, and an attractive therapeutic target, is m6A, which significantly affects the carcinogenesis and metabolic reorganization of cancer cells [24,25]. Pseudouridine (ψ) is a C5-glycoside isomer of uridine, which incorporates the C5 atom of the nucleobase into the glycosidic bond [26]. ψ modifies almost all RNAs, including mRNA, tRNA, and rRNA. In mRNA, ψ incorporation can mediate the conversion of nonsense to sense codons and promote base pairing in ribosomal decoding centers, leading to protein diversity [27]. Furthermore, ψ-containing mRNAs in stressed cells exhibited higher stability, suggesting that increased pseudouridylation can enhance transcript stability [28]. Recent studies reported that pseudouridylation controls the development of numerous malignancies. For instance, DKC1 binds and stabilizes the mRNAs of selected ribosomal proteins through its pseudouridine synthase activity, thus promoting colorectal cancer progression in vitro and in vivo [29]. Furthermore, DKC1 is a trustworthy biomarker for breast and prostate cancers [30,31]. Therefore, ψ could be a pharmacological target and serve as a biomarker for human cancer. Despite that, PUS7 cancer research is limited to several malignancies, such as glioma, ovarian cancer, and colon cancer. To date, no studies have reported on the commonalities of PUS7 across multiple cancers. Our study confirmed that PUS7 was significantly upregulated in tumor tissues compared to normal ones and accurately predicted the prognosis of cancer patients. In addition, PUS7 was involved in E2F targets, the G2M checkpoint, ribosome biogenesis, and the rRNA metabolic process. Most importantly, PUS7 is a promising biomarker for osteosarcoma that possibly regulates osteosarcoma initiation and progression.
The PUS7 mRNA level in 33 different cancer types was first assessed employing TCGA and GTEx data. The results indicated that the PUS7 mRNA level was significantly upregulated in most cancers except KICH, THCA, and LAML, which exhibited low PUS7 levels. Moreover, the majority of cancer tissues possessed significantly greater PUS7 protein levels than the corresponding paracancerous tissues, based on the HPA database. Previous studies have revealed that PUS7 is upregulated in glioma, ovarian, and colon cancers, which is in line with our results [10,11,32]. In conclusion, PUS7 was upregulated in most malignancies and may be considered a diagnostic biomarker. Thus, further in-depth investigations should be performed, as PUS7 was highly expressed in various tumor tissues. The ability of PUS7 to predict cancer patient prognosis was also assessed in this study. It was discovered that PUS7 is a risk factor for 11 cancer types, including ACC, BLCA, LIHC, KIRP, MESO, LGG, KICH, SARC, OS, PAAD, and THCA. High PUS7 expression contributed to the poor prognosis of cancer patients. Our findings suggest that PUS7 is possibly a proto-oncogene. Previous experiments have demonstrated that PUS7 is an unfavorable gene in colon cancer and GBM [11,13]. Moreover, elevated PUS7 expression was typically linked to advanced TNM stage across various malignancies, which aligned with the survival analysis outcomes. Therefore, PUS7 is a promising cancer biomarker.

Previous colorectal cancer studies reported on the regulatory function of PUS7 in the PI3K/AKT/mTOR and Wnt/β-catenin signaling pathways [12,13], which promote tumor cell growth and migration. Furthermore, it has been reported that PUS7 can regulate the metastatic ability of colon cancer cells through the HSP90/PUS7/LASP1 axis [10]. In this study, PUS7-regulated pathways were determined via GSEA. Genes upregulated in the PUS7-high subset displayed enrichment of E2F targets, the G2M checkpoint, PI3K/AKT/mTOR signaling and mTORC1 signaling, which are connected to cell cycle and proliferation. For instance, PI3K-Akt-mTOR is a crucial kinase cascade that controls and activates essential cellular processes, including proliferation, transcription, translation, survival, and growth [33]. In pathological circumstances like cancer, the PI3K-Akt-mTOR signaling pathway is essential for cell survival and proliferation and regulates the autophagy and apoptosis processes [34]. The GSEA results from this study displayed remarkable consistency among the six cancer types, indicating the reliability of the study output. A total of 78 PUS7-related regulatory genes were explored in this study. The functional analysis revealed that these regulatory genes exhibit significant enrichment in the ncRNA metabolic process, the rRNA metabolic process, RNA localization, and ribonucleoprotein complex biogenesis. Furthermore, these genes regulated RNA metabolism and localization, amide biosynthesis, nucleus organization, mRNA modification, DNA replication, and osteoblast differentiation. In conclusion, PUS7 plays a role in ribosome biogenesis, which is vital for cell proliferation, differentiation, apoptosis, development, and transformation [35]. Additionally, Prakash et al. discovered that the metastasis of cancer cells could be promoted by synthesizing neo-ribosomes [36]. Cui et al.
discovered that PUS7 regulates GSC development and carcinogenesis by modifying TYK2 translation via PUS7-dependent tRNA pseudouridylation [11]. Our study emphasizes that PUS7 regulates the rRNA metabolic process and ribosome biogenesis. Ribosomes, which are made up of rRNA and proteins, are the crucial hubs for protein synthesis. Tumor growth requires elevated ribosome biogenesis, and targeting ribosomes is an important strategy for cancer therapy [37]. PUS7 probably controls rRNA metabolism to promote cancer growth. Previous studies have indicated that deletion of PUS7 in Candida albicans results in defective rRNA processing and reduced cell surface hydrophobicity [38]. The precise mechanisms by which PUS7 controls rRNA metabolism in cancer have not been studied; increased attention to this field is necessary.

The TME is the cellular setting in which cancer cells exist, comprising non-cancerous cells, their components, and the molecules they produce and secrete. The TME determines the clinical outcome of malignancies, drug resistance, and immune evasion [19]. Cancer therapy can now be achieved by manipulating different cell types in the TME, and some of these methods are now applied in the clinic [39]. Thus, it is essential to explore the effector function of oncogenes in the TME to develop effective cancer therapeutic approaches. One of the best tools for studying the TME is scRNA-seq technology. This study found that PUS7 was expressed in malignant cells, as well as in endothelial cells, fibroblasts, and immune cells, including macrophages and T cells. Likewise, we focused on PUS7 expression in osteosarcoma cell types and discovered that myeloid cells, fibroblasts, and cancer cells had the highest PUS7 expression. In summary, further study is required due to the complex role of PUS7 in the TME.

Sarcoma is a rare malignant tumor that originates from mesenchymal tissue. It can be classified into soft tissue sarcoma (STS) and bone sarcoma (BS). Most sarcomas have a high rate of recurrence or metastasis following local surgery and are unresponsive to radiation or chemotherapy. Several pre-clinical studies on immunotherapy for sarcoma patients have yielded positive responses [40]. Bioinformatics analysis in sarcomas is less common than in other tumors, and sarcoma biomarker exploration has also been unsatisfactory. PUS7 expression was significantly upregulated in sarcoma tumor tissues. Moreover, PUS7 significantly impacted OS, DSS, and DFI in sarcoma patients. Thus, PUS7 potentially controls sarcoma progression. Osteosarcoma is a type of sarcoma, and high PUS7 expression was connected with shorter OS in osteosarcoma patients. Hence, we further analyzed the role of PUS7 in osteosarcoma. PUS7 expression was significantly upregulated in osteosarcoma tissues and was linked to worse prognoses in osteosarcoma patients. Additionally, PUS7 was significantly upregulated in patients with osteosarcoma metastases, indicating a regulatory role of PUS7 in tumor progression. Therefore, PUS7 is a reliable biomarker and a potential therapeutic target for osteosarcoma. Our current study has several limitations. Despite the extensive sequencing data used in this study (33 cancer types and approximately 10,000 patients), there were few osteosarcoma sequencing datasets. Thus, future studies should include more clinical cohorts to improve the accuracy of the findings.
In summary, PUS7 is a putative pan-cancer biomarker that reliably forecasts the prognosis of cancer patients, including those with ACC, BLCA, LIHC, KIRP, MESO, LGG, KICH, SARC, OS, PAAD, and THCA. In addition, the bioinformatics output indicated a regulatory role of PUS7 in cell division and the cell cycle, ribosome biogenesis, and the rRNA metabolic process. Most importantly, PUS7 may control osteosarcoma initiation and progression.

Figure 1. Differential expression of PUS7 in pan-cancer. (A) Comparison of PUS7 expression between tumor and normal samples in the TCGA database. (B) Expression of PUS7 in tumor tissue for 33 cancer types. (C) Comparison of PUS7 expression between tumor and normal samples in the TCGA and GTEx databases. (D) Comparison of PUS7 protein level between tumor and normal samples.

Figure 2. (A) Immunohistochemical images of the normal (left) and tumor (right) groups showing PUS7 protein expression.

Figure 4. PUS7 expression positively correlated with tumor progression. (A-C) Forest plots exhibiting the association between PUS7 expression and DFI (A), DSS (B), and PFI (C) in cancers. (D) Association between PUS7 expression and tumor stage, metastasis, grade, and size.

Figure 5. Single-cell analysis of PUS7 in cancers. (A) Summary of PUS7 expression in 23 cell types across 31 single-cell datasets. (B) UMAP plot of all single cells of osteosarcoma patients, showing all cell types. (C) UMAP plot of all cell clusters in the GSE150248 dataset. (D, E) UMAP plot and violin plots showing the expression of PUS7 in each cell type.

Figure 6. Gene set enrichment analysis (GSEA) based on the 50 cancer hallmarks. (A-F) GSEA between the PUS7-high and -low groups in BLCA, KIRP, LGG, LIHC, SARC, and THCA. Each panel on the left and right represents the enriched pathways of the PUS7-high and -low groups, respectively.

Figure 7. Analysis of PUS7-related regulatory pathways. (A) The intersecting genes which significantly correlated with the DEGs were obtained using Spearman's correlation analysis (p < 0.05, Cor > 0.4). (B) GO (top) and KEGG (bottom) analysis for the PUS7-related regulatory genes. (C) PUS7-related regulatory genes were mainly enriched in the mitotic cell cycle and DNA metabolic processes. The interaction network was constructed using the Metascape online platform.

Figure 8. PUS7 as a promising biomarker in osteosarcoma. (A) Kaplan-Meier analysis of the association between PUS7 expression and OS in GSE21257. (B) Comparison of PUS7 expression between non-metastasis and metastasis samples in osteosarcoma. (C) Comparison of PUS7 expression between tumor and normal samples in osteosarcoma. (D) The expression value of PUS7 in osteosarcoma tissue and adjacent normal specimens determined by IHC analysis. (E) GSEA between the PUS7-high and -low groups in the TARGET-OS cohort.
5,845
2024-05-30T00:00:00.000
[ "Medicine", "Biology" ]
Dclk1 facilitates intestinal tumor growth via enhancing pluripotency and epithelial-mesenchymal transition Doublecortin-like kinase 1 (Dclk1) is overexpressed in many cancers including colorectal cancer (CRC), and it specifically marks intestinal tumor stem cells. However, the role of Dclk1 in intestinal tumorigenesis under Apc mutant conditions is still poorly understood. We demonstrate that Dclk1 expression and Dclk1+ cells are significantly increased in the intestinal epithelium of elderly ApcMin/+ mice compared to young ApcMin/+ mice and wild-type mice. Intestinal epithelial cells of ApcMin/+ mice demonstrate increased pluripotency, self-renewing ability, and EMT. Furthermore, miRNAs are dysregulated: the expression of onco-miRNAs is significantly increased while tumor suppressor miRNAs are decreased. In support of these findings, knockdown of Dclk1 in elderly ApcMin/+ mice attenuates intestinal adenomas and adenocarcinoma by decreasing pluripotency, EMT and onco-miRNAs, indicating that Dclk1 overexpression facilitates intestinal tumorigenesis. Knocking down Dclk1 weakens Dclk1-dependent intestinal processes for tumorigenesis. This study demonstrates that Dclk1 is critically involved in facilitating intestinal tumorigenesis by enhancing pluripotency and EMT factors in Apc mutant intestinal tumors, and it also provides a potential therapeutic target for the treatment of colorectal cancer.

INTRODUCTION

More than 80% of colorectal cancers (CRCs) are associated with APC mutations. APC is a tumor suppressor gene that is mutated in patients with familial adenomatous polyposis (FAP) and in the majority of sporadic colorectal cancers [1,2]. Apc mutation dysregulates the Wnt signaling pathway and triggers the expansion and transformation of the stem cell compartment, resulting in the development of adenomatous polyps [3]. Because of stem cell self-renewal capability, irreversible or unrepaired alterations in the genomes of these cells can be preserved in their amplified progeny [4,5]. Therefore, Apc mutations in intestinal stem cells may transform these cells and initiate expansion leading to cancer development. Like humans with germline mutations in APC, ApcMin/+ mice have a heterozygous mutation in the Apc gene, predisposing the mice to intestinal and colon tumor development. These mice start developing intestinal polyps by ~4 weeks of age, with progression to dysplasia at 18-21 weeks of age; adenocarcinoma is also evident at ~26-34 weeks of age [6-9]. Younger ApcMin/+ mice (8-12 weeks of age) are good models to study the pathogenesis of FAP, while elderly ApcMin/+ mice (26-34 weeks of age) develop intestinal low- and high-grade dysplasia and adenocarcinoma and are relevant models for studying tumor progression, as well as for developing good therapeutic strategies [7,8]. Elderly ApcMin/+ mice are a particularly clinically relevant disease model because a large percentage of patients diagnosed with advanced colon cancer have unresectable or widespread disease [10]. Doublecortin-like kinase 1 (Dclk1) is a member of the protein kinase superfamily and the doublecortin family. Dclk1 is overexpressed in many cancers, including colon, pancreas, liver and esophagus [11-14]. Recent studies show that Dclk1 specifically marks tumor stem cells (TSCs) that self-renew and generate tumor progeny in ApcMin/+ mice [15]. It has also been shown that the development and progression of pancreatic cancer depend upon Dclk1+ TSCs [16].
Previous work from us and others supports that DCLK1 expression in cancer is critical for cancer growth, EMT, and metastasis [11,12,16-19]. Studies indicate that gain of stem cell-like properties is an essential feature of the epithelial-mesenchymal transition (EMT), a process that plays a key role in cancer progression and metastasis [20]. The functional interdependence between EMT-associated transcription factors and enhanced self-renewal ability highlights the common mechanism involved in tumorigenesis. However, the potential roles of Dclk1 in Apc mutant conditions in facilitating intestinal tumorigenesis remain poorly understood.

Increased expression of Dclk1 and Dclk1+ cells in the intestine of ApcMin/+ mice is associated with adenoma and adenocarcinoma

To better dissect the role of Dclk1 in intestinal tumorigenesis, we analyzed the intestinal crypt architecture and the expression of Dclk1, pluripotency, and EMT-associated factors in 12-week-old and 30-week-old ApcMin/+ mice. Moreover, to determine the significance of Dclk1+ cells in intestinal tumorigenesis, we assessed whether Dclk1+ cells were expanded in ApcMin/+ mice at 12 and 30 weeks of age. H&E staining shows that the intestinal epithelium of 12-week-old ApcMin/+ mice has hyperplastic crypts and polyps but no sign of dysplasia and/or adenocarcinoma, in contrast to 30-week-old ApcMin/+ mice, which had intramucosal adenocarcinoma with low- and high-grade dysplasia. The crypt architecture is distorted, with no identifiable crypt structures in the places where we identified adenocarcinoma and high-grade dysplasia (Figure 1B and Supplementary Figure 1A, B, C, D). As expected, the intestinal crypt architecture of wild-type (WT) mice appeared normal (Figure 1A). IHC staining revealed 5-10% Dclk1+ cells in the intestine of 12-week-old ApcMin/+ mice (Supplementary Figure 2), whereas large populations of Dclk1+ cells (25-30%) were found in the intestines of the 30-week-old ApcMin/+ mice (Figure 1D and Supplementary Figure 2). These observations suggest that Dclk1+ cells started expanding before 12 weeks of age, whereas the greater populations of Dclk1+ cells at 30 weeks of age may represent clonal Dclk1+ neoplastic cells that have expanded during the process of tumorigenesis. In confirmation of our previous studies [11], immunohistochemical (IHC) staining of Dclk1 in the WT intestines revealed approximately 1-3% Dclk1+ cells (Figure 1C). To verify that Dclk1 upregulation occurs in tumors with Apc mutation and activated Wnt signaling, we performed IHC for β-catenin and found that β-catenin was localized in the intestinal regions identified as high-grade dysplasia and adenocarcinoma, mostly with strong nuclear staining (Supplementary Figure 3A). We also found that the protein expression of β-catenin and its downstream molecule TCF4 was increased in the IECs of ApcMin/+ mice compared to WT controls (Supplementary Figure 3B). These findings suggest that both β-catenin and Dclk1 in tumor lesions of ApcMin/+ mice showed a progressive increase during tumorigenesis. Furthermore, we found that Dclk1 expression was ~10-fold higher, and the pluripotency and EMT-associated factors massively increased, in the IECs of 30-week-old ApcMin/+ mice. These observations suggest that ApcMin/+ mice at 12 weeks of age provide a good platform to understand early tumorigenesis, whereas ApcMin/+ mice at 30 weeks of age provide a compelling platform to understand the molecular events associated with advanced intestinal tumorigenesis.
In this study, elderly ApcMin/+ mice at 30 weeks of age were used along with age- and sex-matched WT littermates to assess the molecular events associated with advanced intestinal tumorigenesis, and to determine the efficiency of targeted therapy in advanced disease.

Dclk1 upregulation in intestinal epithelial cells is associated with increased pluripotency and EMT

To determine the enrichment of pluripotency associated with increased Dclk1 expression during intestinal tumorigenesis, we analyzed the expression of pluripotency factors and found a massive increase in the mRNA and protein levels of Myc, Nanog and Sox2 (Figure 2A and 2B) in the IECs of 30-week-old ApcMin/+ mice compared to age- and sex-matched WT control mice, confirming greater self-renewal ability of IECs during tumorigenesis. We also performed IHC for Nanog on intestinal tissue sections of 30-week-old ApcMin/+ mice and found that Nanog staining was increased in the intestinal regions identified as high-grade dysplasia and adenocarcinoma (Supplementary Figure 3D). To examine the onset of EMT during tumorigenesis, we examined IEC monolayer-forming ability. Only the IECs isolated from ApcMin/+ mice formed monolayers and revealed significant transdifferentiation into cells with mesenchymal characteristics (Figure 2E). These cells stained positive for Vimentin while E-cadherin was decreased and lost from the cell surfaces; moderately transdifferentiated or transdifferentiating cells stained positive for both Vimentin and E-cadherin (Figure 2F). Moreover, most of the cells were also positive for Dclk1 (Figure 2F), suggesting that EMT may be driven in stem-like cells or by stem cells themselves. To better dissect the molecular events associated with the onset of EMT, we evaluated the expression of EMT-associated transcription factors and found that Slug, Snail and Vimentin were all higher and E-cadherin was lower in the IECs of ApcMin/+ mice compared to WT (Figure 2C and 2D). Furthermore, the staining of Snail in the intestine of ApcMin/+ mice was greater in the intestinal regions identified as high-grade dysplasia and adenocarcinoma (Supplementary Figure 3C). Therefore our data suggest that intestinal cellular transdifferentiation is increased with Dclk1 upregulation during tumorigenesis.

... (Figure 3A and 3B). To answer whether enterospheres formed from the Dclk1+ cells of ApcMin/+ mice were enriched with pluripotency and EMT factors to support self-renewal ability, we collected enterospheres and analyzed them for EMT and pluripotency factors. We found significantly higher levels of Dclk1 in the enterospheres of ApcMin/+ mice compared to WT (Figure 3C and 3D). More excitingly, the expression levels of the EMT-associated factors Snail, Slug, and Vimentin, and the pluripotency factors Myc and Nanog, were significantly increased in the enterospheres of ApcMin/+ mice compared to WT (Figure 3C and 3D). These data suggest that this cellular transformation may endow Dclk1+ cells with greater self-renewal ability and initiate their tumor stem cell function.

Dysregulation of miRNAs mediates cellular transdifferentiation towards EMT and neoplasia

MicroRNAs (miRNAs) are potentially important for stem cell pluripotency and differentiation, and for complex cellular expression networks in development and disorders [21,22].
Using Mouse miRNA Arrays (Signosis), miRNAs were identified that were differentially expressed between the IECs of ApcMin/+ and WT control mice (Figure 4A). Hierarchical clustering of the miRNA data revealed significant upregulation of tumor promoter miRNAs (miR-17, miR-21, miR-31, miR-98 and miR-182) and significant downregulation of tumor suppressor miRNAs (Let-7a, miR-143, miR-144, miR-145, miR-30a and miR-200a) in the IECs of ApcMin/+ mice (Figure 4A). Those most significantly altered, as listed above, were quantitatively assessed using miRNA-specific RT-PCR analyses. Quantitative analysis confirmed their expression signatures: the listed tumor promoter miRNAs were significantly increased and the tumor suppressors were decreased in the IECs of ApcMin/+ mice compared to WT (Figure 4B, 4C).

Dclk1 is critically involved in facilitating intestinal tumorigenesis in ApcMin/+ mice

To determine whether Dclk1 is critical for intestinal tumorigenesis, we inhibited Dclk1 gene expression using siDclk1-NPs, along with si-Scramble nanoparticles (siScr-NPs) as the control, in WT and ApcMin/+ mice. Histological studies revealed significantly fewer polyps and decreased dysplasia in the intestine of ApcMin/+ mice treated with siDclk1-NPs compared to siScr-NPs, whereas WT mice did not have any abnormality or change in crypt architecture (Figure 5A and 5B). These data show that Dclk1 inhibition reduces intestinal tumor formation and growth in ApcMin/+ mice. IHC staining of Dclk1 (Figure 5C) showed a massive decrease in the number of Dclk1+ cells in the small intestine of ApcMin/+ mice treated with siDclk1-NPs compared to siScr-NPs. Significantly lower expression levels of Dclk1 and the pluripotency factors Myc, Sox2, and Nanog were detected in the isolated IECs of siDclk1-NP-treated ApcMin/+ mice (Figure 5D and 5E). Interestingly, the self-renewal ability of Dclk1+ cells and their populations were lowered by siDclk1-NP treatment, as evidenced by fewer and smaller enterospheres formed from Dclk1+ cells of ApcMin/+ mice and a decreased number of Dclk1+ cells by IHC analysis (Figure 5F, 5G and 5C). IECs from ApcMin/+ mice treated with siScr-NPs formed monolayers (Figure 6A), demonstrating active EMT processes, while the IECs of ApcMin/+ mice treated with siDclk1-NPs failed to form monolayers. In addition, Slug, Snail, and Vimentin levels were lower and E-cadherin levels were marginally higher in the IECs of ApcMin/+ mice treated with siDclk1-NPs (Figure 6B and 6C). Quantitative analysis of tumor suppressor and tumor promoter miRNAs revealed that most were at normal or near-normal levels after Dclk1 knockdown (Figure 7A and 7B). Tumor suppressor miRNA levels increased and tumor promoter miRNA levels decreased after siDclk1-NP treatment in ApcMin/+ mice. These data suggest that Dclk1 is critically involved in facilitating intestinal tumorigenesis by enhancing pluripotency, EMT-associated factors, self-renewal ability, and onco-miRNAs.

DISCUSSION

ApcMin/+ mice are excellent models to evaluate human FAP and sporadic CRC [1,2,8]. We used ApcMin/+ mice at 30 weeks of age, which exhibit high-grade dysplasia and intramucosal adenocarcinoma (Supplementary Figure 1), to improve our understanding of the molecular events associated with advanced intestinal tumorigenesis.
Multiple intestinal tumor onset and progression in the elderly ApcMin/+ mice allowed us to investigate how Dclk1 supports intestinal tumorigenesis and to identify novel strategies for cancer prevention and potential treatment. We detected overexpression of Dclk1 in the small intestinal epithelial cells of ApcMin/+ mice, suggesting that Dclk1 marks intestinal TSCs [17], which expand during intestinal tumorigenesis. These findings agree with our previous studies using human cancer samples of colon, liver, pancreas and esophagus, in which Dclk1+ cells were expanded [11-14,23]. In this study, we provide evidence that loss of Apc significantly increases the number of Dclk1+ cells in the small intestine, particularly in the dysplastic and adenocarcinoma regions of advanced polyps, supporting the previous hypotheses that (i) stem-like cells or stem cells are more abundant in cancerous conditions and (ii) loss of function of Apc increases the expansion of the TSC compartment [3,24]. Pluripotency is a central, well-defined feature of stem cells, and EMT plays a key role in the increase of stem-like cells during tumorigenesis [20,25]. Greater pluripotency, EMT capacity and higher Dclk1 expression in the IECs of the ApcMin/+ mice in the present study point to a common mechanism of functional interdependence between Dclk1 and pluripotency and EMT factors that may increase the stem cell compartment during intestinal tumorigenesis, similar to that observed in breast cancer [26]. The expression of Vimentin, loss of E-cadherin, and increased expression of the EMT-associated transcription factors Snail and Slug together lead to changes in cellular morphology towards mesenchymal features, further supporting a role for Dclk1 in the EMT process during intestinal tumorigenesis. Furthermore, more and larger enterospheres formed from the intestinal Dclk1+ cells of ApcMin/+ mice. Molecular characterization of these enterospheres demonstrated enhanced pluripotency and EMT signaling pathways with greater self-renewal ability, which supports the process of cellular transformation into tumor cells and/or TSCs. Thus, we hypothesize that the increase in Dclk1 associated with the loss of Apc escalates cellular transformation and the stem cell compartment to facilitate expansion of dysplasia and adenocarcinoma in tumor-initiated intestinal epithelium. Dysregulated miRNA signaling that controls EMT, pluripotency and acquired self-renewal capacity for cellular transformation is required for tumorigenesis [21,27]. The most interesting observation is the dysregulation of miRNAs identified as tumor promoters and suppressors in the IECs of ApcMin/+ mice. However, after treatment with siDclk1-NPs, the dysregulated miRNAs were decreased and/or maintained at physiological levels in the ApcMin/+ mice, which led us to hypothesize that Dclk1 may regulate miRNA biogenesis or signaling to enhance pluripotency, EMT, and TSC function to facilitate intestinal tumorigenesis. However, further molecular studies are warranted to demonstrate the link between these requisite molecular alterations, which are necessary for the onset of EMT and increased pluripotency, to support intestinal tumorigenesis and TSC functions [21,22,28]. To demonstrate the functional significance of Dclk1 in intestinal tumorigenesis, we conducted Dclk1 knockdown experiments with siDclk1-NPs [29] and found decreased dysplasia/adenocarcinoma and fewer polyps in ApcMin/+ mice. However, the crypt architecture in WT littermates was unaffected.
These findings suggest that Dclk1 knockdown reduces tumor formation and progression in ApcMin/+ mice without affecting normal epithelial homeostasis. Our data show that ablation of Dclk1 expression results in the regression of polyps and dysplasia without injury to the normal intestine, suggesting that Dclk1 may be a potential therapeutic target in intestinal cancer. Recently, the Chiba group found that specific ablation of Dclk1+ TSCs results in the regression of polyps without injury to the normal intestine [17]. Our data also support this observation. We found that the dysregulated miRNA signature had largely reverted to WT physiological levels in the IECs of ApcMin/+ mice after Dclk1 knockdown. Therefore, our data, combined with previous findings that Dclk1 knockdown induces tumor growth arrest [12,29], suggest that Dclk1 is critical for intestinal neoplasia. Indeed, expression levels of pluripotency factors and self-renewal ability were decreased in the IECs of ApcMin/+ mice following Dclk1 knockdown. Interestingly, knocking down Dclk1 in ApcMin/+ mice resulted in lower expression of factors associated with EMT and in fewer and smaller enterospheres. These data suggest that Dclk1 knockdown diminishes the dysregulation in miRNAs, EMT, and pluripotency signaling responsible for cellular transformation, enhanced self-renewal ability, and the stem cell compartment required for the advancement of intestinal neoplasia.

In conclusion, we have demonstrated that Dclk1 is critically involved in facilitating intestinal tumorigenesis during loss of function of Apc. We also demonstrated that Dclk1 supports intestinal tumor growth via enhancing EMT and pluripotency factors. We hypothesize that Dclk1 enhances the EMT and pluripotency factors by regulating the biogenesis of miRNAs. However, additional molecular studies are warranted to demonstrate the link between Dclk1 and miRNAs. The Apc and Dclk1 axis is critical for an increased stem cell compartment with enhanced self-renewal ability for the advancement of intestinal tumorigenesis. Targeting Dclk1 with siDclk1-NPs reduces the dysregulation in miRNAs, EMT, and pluripotency associated with cancer risk, suggesting that targeting Dclk1 in patients even with advanced cancer may be a therapeutic option for intestinal and/or other solid tumors.

Animals

All animal experiments were performed with approval and authorization from the Institutional Review Board and the Institutional Animal Care and Use Committee at the University of Oklahoma Health Science Center (Oklahoma City, Oklahoma). ApcMin/+ mice with a C57BL/6J background were obtained from The Jackson Laboratory and maintained by breeding ApcMin/+ males with C57BL/6J females. Mice were genotyped to identify carriers of the Min allele of Apc with a PCR assay. Same-sex-matched littermates of C57BL/6J ApcMin/+ and Apc+/+ mice at 12 and 30 weeks of age were used in the present study. It has been shown that the average life span of ApcMin/+ mice on the C57BL/6J background is ~20 weeks, whereas the mice in our facility have healthier survival rates; this was also observed in several previous studies [6-9]. Elderly ApcMin/+ mice (i.e., >30 weeks of age) were carefully monitored and sacrificed before becoming moribund.
Intestinal epithelial cell (IEC) isolation and monolayer formation

Small intestines were attached to a paddle, immersed in Ca2+-free standard Krebs-buffered saline (in mmol/liter: 107 NaCl, 4.5 KCl, 0.2 NaH2PO4, 1.8 Na2HPO4, 10 glucose, and 10 EDTA) at 37°C for 15-20 min, and gassed with 5% CO2, 95% O2. Individual crypt units were then separated by intermittent (30 sec) vibration into ice-cold phosphate-buffered saline and collected by centrifugation [30-33]. The pellets were washed with phosphate-buffered saline, resuspended in RPMI GlutaMAX medium with 0.5 U/ml dispase at 37°C, and shaken gently for 5 min. The cells were pelleted, resuspended in RPMI GlutaMAX medium supplemented with 5% fetal calf serum plus penicillin and streptomycin, and incubated at 37°C in 5% CO2. Monolayer formation was followed for 0-20 days and the medium was replaced every 72 hours [32]. Only the IECs isolated from ApcMin/+ mice formed monolayers and exhibited mesenchymal characteristics. These changes were not driven by contamination from mesenchymal cells during the isolation process, since the purified small intestinal epithelial cells were negative for α-smooth muscle actin expression in WT and ApcMin/+ mice (data not shown).

FACS

Freshly isolated IECs were washed and resuspended in RPMI GlutaMAX medium. To avoid endothelial and stromal contamination, isolated cells were incubated with anti-CD45, anti-CD31, and anti-EpCAM in addition to anti-Dclk1 antibodies conjugated with respective fluorochromes for 30 min. The cells were washed and sorted using an Influx-V cell sorter (Cytopeia). CD45-CD31-EpCAM+Dclk1+ cells were then collected and subjected to enterosphere assays.

Enterosphere formation assay

Isolated IECs were plated at a density of 1000 cells/well in 48-well plates in RPMI medium containing 0.3% soft agar and 2% fetal calf serum. The cell suspensions were plated above a layer of solidified 1% soft agar in plain RPMI medium. The plates were then incubated at 37°C under 5% CO2. The cells were monitored for spheroid formation in RPMI GlutaMAX medium plus 1% fetal calf serum with 1X insulin/transferrin/sodium selenite (ITS) and 10,000 units/ml IFN-gamma at weekly intervals for 5-8 weeks [32].

MicroRNA array and quantitative analysis

Total miRNA was isolated from small intestinal epithelial cells using the miRNeasy mini kit (Qiagen, CA, USA), following the manufacturer's protocol. MicroRNA profiling was performed on the Signosis Cancer MicroRNA Array platform, which contained capture probes for all miRNAs annotated in miRBase (version 15.0; http://www.mirbase.org/). The data generated were then weighted, log2-transformed, and analyzed. Heat maps were prepared using the Genesis software package, version 1.7.6. For quantitative analysis, total miRNA isolated from intestinal epithelial cells was subjected to reverse transcription with Superscript II RNase H- reverse transcriptase and random hexanucleotide primers (Invitrogen, Carlsbad, CA). Complementary DNA (cDNA) was subsequently used to perform real-time PCR with SYBR chemistry (Molecular Probes, Eugene, OR) using specific primers for selected miRNAs (Supplemental Table 1). The crossing threshold (Ct) value assessed by real-time PCR was noted for the transcripts and normalized to U6 pri-miRNA. The changes in pri-miRNAs were expressed as fold changes relative to the control value ± SD.
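The last step above reports crossing-threshold (Ct) values normalized to U6 and fold changes relative to control, which matches the standard 2^-ΔΔCt calculation; the short R sketch below assumes that method, and the example Ct values are invented purely for illustration.

    # Fold change by the standard 2^-DeltaDeltaCt method (assumed here;
    # the paper reports Ct normalized to U6, relative to control).
    fold_change <- function(ct_target, ct_u6, ct_target_ctrl, ct_u6_ctrl) {
      d_ct      <- ct_target      - ct_u6         # normalize to U6
      d_ct_ctrl <- ct_target_ctrl - ct_u6_ctrl
      2^(-(d_ct - d_ct_ctrl))                     # fold change vs. control
    }

    # Hypothetical example: miR-21 Ct 24 (U6 Ct 20) in ApcMin/+ IECs vs.
    # Ct 27 (U6 Ct 20) in WT gives 2^(-(4 - 7)) = an 8-fold increase.
    fold_change(24, 20, 27, 20)   # 8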
RNA isolation and real-time RT-PCR analysis

Total RNA isolated from small intestinal epithelial cells was subjected to reverse transcription, and the complementary DNA (cDNA) was subsequently used to perform real-time PCR with SYBR chemistry (Molecular Probes, Eugene, OR) using gene-specific primers (Supplemental Table 2) for specific transcripts. The crossing threshold value assessed by real-time PCR was noted for the transcripts and normalized to an internal control.

Immunoblot analysis

Standard immunoblot protocols were used. Twenty-five micrograms of total protein was size-separated in an 8%-12% SDS polyacrylamide gel and transferred electrophoretically onto a PVDF membrane with a wet-blot transfer apparatus (Bio-Rad, Hercules, CA). The membrane was then blocked with 5% milk and incubated overnight with a primary antibody (at the dilutions recommended by the manufacturers) and then with horseradish peroxidase-conjugated secondary antibody (dilution 1:5000). The proteins were detected using ECL Western blotting detection reagents (Amersham-Pharmacia, Piscataway, NJ). Actin (42 kD) was used as a loading control.

Synthesis and characterization of Dclk1 siRNA NPs and treatment

Poly(lactide-co-glycolide) acid nanoparticles (PLGA NPs) were synthesized using a double emulsion solvent evaporation technique as described previously [29]. The amount of encapsulated siRNA was quantified using a spectrophotometer (DU-800, Beckman Coulter, Brea, CA). The size, polydispersity index, and zeta-potential measurements of the synthesized siRNA NPs were determined using diffraction light scattering (DLS) with a ZetaPALS instrument (Brookhaven Instruments, Holtsville, NY). Sex- and age-matched littermates of C57BL/6J ApcMin/+ and Apc+/+ mice at 30 weeks of age were injected i.p. with 0.25 nmol of siRNA preparation every third day for a total of 6 doses.

Immunohistochemistry/immunofluorescence

Standard immunohistochemistry and immunofluorescence protocols were used with specific antibodies.

Statistical analysis

For statistical analyses, Student's t-test and analysis of variance (ANOVA) were performed using GraphPad Prism; P values < 0.05 were considered statistically significant. All experiments were performed independently a minimum of three times, and some a maximum of five times. Each experiment contained 3 animals per group.

EDITORIAL NOTE

This paper has been accepted based in part on peer review conducted by another journal, the authors' response and revisions, as well as expedited peer review in Oncotarget.
5,483.8
2014-09-02T00:00:00.000
[ "Biology", "Medicine" ]
White Matter Abnormalities and Animal Models Examining a Putative Role of Altered White Matter in Schizophrenia

Schizophrenia is a severe mental disorder affecting about 1% of the population worldwide. Although the dopamine (DA) hypothesis still holds a dominant position in schizophrenia research, new advances have been emerging in recent years which suggest the implication of white matter abnormalities in schizophrenia. In this paper, we will briefly review some recent human studies showing white matter abnormalities in schizophrenic brains and altered oligodendrocyte- (OL-) and myelin-related genes in patients with schizophrenia, and will consider abnormal behaviors reported in patients with white matter diseases. Following these, we will selectively introduce some animal models examining a putative role of white matter abnormalities in schizophrenia. The emphasis will be put on the cuprizone (CPZ) model. CPZ-fed mice show demyelination and OL loss, display schizophrenia-related behaviors, and have higher DA levels in the prefrontal cortex. These features suggest that the CPZ model is a novel animal model of schizophrenia.

Introduction

Schizophrenia is a devastating mental disorder affecting about 1% of the population worldwide [1]. The onset of schizophrenia ranges from mid to late adolescence through early adulthood; the majority of cases occur between the ages of 16 and 30 years [2]. Clinically, this disorder is characterized by positive symptoms (psychosis, hallucinations, and paranoia), negative symptoms (flat affect, poor attention, lack of motivation, and deficits in social function), and cognitive deficits. The positive symptoms of schizophrenia have been treatable since chlorpromazine was introduced into clinical practice in the early 1950s. Since then, a number of antipsychotic drugs have been developed, which are grouped into typical and atypical antipsychotics. All typical antipsychotics have high affinities for dopamine (DA) D2 receptors, which correlate with the therapeutic doses of these drugs [3][4][5][6]. These observations, plus the psychotogenic effects of DA-enhancing drugs [7,8], provided solid evidence for the DA hypothesis of schizophrenia, namely that hyperactivity of DA transmission is responsible for the positive symptoms observed in this disorder [9]. In contrast to positive symptoms, negative symptoms tend to remain stable over time in patients with established illness [10] and have been found to persist despite treatment [11,12]. This phenomenon raised a critical challenge to the DA hypothesis. In addition, the hypothesis cannot account for why the symptoms of schizophrenia commonly first present in late adolescence and early adulthood. During adolescence and early adulthood, white matter volume expands while grey matter volume loss occurs [13]. This long-lasting development of the white matter is associated with the development of cognitive functions [14]. Given these observations, and that schizophrenia presents in adolescence or early adulthood, it is reasonable to infer that disruption to white matter development and/or damage to some white matter structures during this period is responsible for the development of psychotic symptoms. In line with this view, there are increasing numbers of human studies, especially those following the inspiring review by Davis et al. [15], showing white matter abnormalities and altered oligodendrocyte- (OL-) and myelin-related genes in schizophrenic patients.
On the other hand, abnormal behaviors are reported in patients with white matter diseases. In this paper, we will briefly review some of these human studies. Following that, we will selectively introduce some animal models that examine a putative role of white matter abnormalities in schizophrenia. The emphasis will be put on the cuprizone-fed mouse, a novel animal model of schizophrenia.

Imaging Evidence. Lateral ventricular enlargement is the best-replicated anatomic abnormality detected in the brains of patients with schizophrenia, both in earlier computed tomography (CT) studies and in many magnetic resonance imaging (MRI) investigations [16]. The boundaries of the cerebral ventricles are largely made up of white matter structures. Therefore, ventricular enlargement may be due in part to volumetric reduction of adjacent white matter tracts. However, conventional MRI findings for cerebral white matter volume in schizophrenia in earlier studies have been mixed. Some found no differences in white matter volume between schizophrenic patients and normal subjects [17][18][19][20][21], while some reported white matter reductions in schizophrenia [22][23][24]. Foong's group [25] was the first to use magnetization transfer imaging (MTI), a technique sensitive to myelin and axonal abnormalities, to investigate the white matter in patients with schizophrenia. They found that the magnetization transfer ratios (MTRs) were significantly reduced in the right and left temporal regions in schizophrenic patients compared with controls. The same group also used diffusion tensor imaging (DTI), another newer MRI technique capable of examining water diffusion in different tissues and the organization of white matter tracts, to investigate the neuropathology of the corpus callosum in patients with schizophrenia [26]. The mean diffusivity (MD) was increased and the fractional anisotropy (FA) reduced in the splenium of the corpus callosum in the schizophrenic group compared with controls. These results confirmed the findings of an earlier study, which reported a reduction in FA in the corpus callosum in a small group of schizophrenic patients [27]. A number of recent DTI studies in patients with schizophrenia [28][29][30][31][32][33] reported FA reduction in various brain regions/structures, including the frontal white matter, the deep frontal perigenual region, the medial occipital lobe, the inferior parietal gyri, the middle temporal gyri, the parahippocampal gyri, the corpus callosum, the internal capsule, the cingulum bundle, the fornix, the superior occipitofrontal fasciculus, the frontal longitudinal fasciculus, the right inferior occipitofrontal fasciculus, the right medial temporal lobe adjacent to the right parahippocampal gyrus, the left arcuate fasciculus, the left superior temporal gyrus, and the left uncinate fasciculus. Moreover, significant FA reductions were reported in all white matter regions bilaterally in a recent study of schizophrenic patients [34]. Lower FA values were also seen in never-medicated, first-episode schizophrenia [35][36][37][38].
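For readers unfamiliar with these two DTI metrics: MD is the mean of the three eigenvalues of the diffusion tensor at a voxel, and FA measures how strongly diffusion deviates from isotropy, so a loss of fiber coherence lowers FA without necessarily changing MD. The sketch below computes both from per-voxel eigenvalues using the standard definitions; the eigenvalues shown are hypothetical.

```python
import numpy as np

def md_fa(eigenvalues):
    """Mean diffusivity and fractional anisotropy from the three
    eigenvalues of a diffusion tensor at one voxel."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# Hypothetical eigenvalues (units of 10^-3 mm^2/s)
print(md_fa([1.7, 0.3, 0.3]))  # coherent fiber tract: FA ~ 0.80
print(md_fa([1.0, 0.7, 0.6]))  # reduced anisotropy: FA ~ 0.27, similar MD
```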
More significantly, white matter abnormalities in specific brain regions were associated with different dimensions of schizophrenia symptoms. For example, widespread decrements in prefrontal white matter in schizophrenic patients were related to higher levels of negative symptoms, as measured by the Scale for the Assessment of Negative Symptoms (SANS) [39]; inferior frontal white matter FA correlated inversely with the SANS global ratings of negative symptoms [40]; and a significant reduction of white matter in the parietal cortex of the right hemisphere was found in a subgroup of patients with pronounced negative symptoms [41]. In a more recent study, there were significant positive correlations between (larger) volumes in anterior callosal, cingulate and temporal deep white matter regions and positive symptoms, such as hallucinations, delusions and bizarre behavior. Negative symptoms were negatively related to (smaller) volumes in occipital and paralimbic superficial white matter and posterior callosal fiber systems [42].

Postmortem Evidence. The decreased MTRs found in schizophrenic patients suggest decreases in myelin or axonal membrane integrity. Similarly, the FA decrease reflects reduced coherence of white matter. These suggestions are in line with the results of postmortem studies, exemplified by reduced myelin basic protein (MBP) immunoreactivity [43] and myelin pallor and myelin loss [44] in the brains of chronic schizophrenic patients. Moreover, ultrastructural evidence of reduced myelin sheath compactness, including lamellar bodies, has been shown in postmortem electron microscopy studies [45][46][47]. In addition, delayed myelination in the prefrontal cortex (PFC) was reported in patients with schizophrenia, suggesting the developmental nature of this change [48]. OLs are the myelin-sheath-producing cells; therefore, myelin sheath changes reflect OL alterations. Indeed, OLs were first implicated in schizophrenia in 1938, when swollen OLs were observed in schizophrenic brains postmortem [49]. In recent years, the group of Uranova and Orlovskaya carried out a series of postmortem studies and reported solid evidence of disturbed structure and function of OLs [45,46,[50][51][52][53], found reductions in the density and size of OLs in prefrontal and striatal regions [52,54,55], and showed a deficit of OLs in the PFC and adjacent white matter [56] in schizophrenia. Loss and altered spatial distribution of OLs were also found in the superior frontal gyrus in schizophrenia [57].

Altered OL- and Myelin-Related Genes in Schizophrenia

In a groundbreaking study, Hakak et al. [58] found that the expression of a series of genes related to OLs and myelin was significantly decreased in dorsolateral prefrontal cortex (DLPFC) samples from schizophrenic patients. Another microarray study measured the expression of approximately 12,000 genes in the middle temporal gyrus and found significant decreases in the expression of some myelin-related genes in subjects with schizophrenia [59]. In a more recent study [60], variations in myelin- and OL-related gene expression were found in multiple brain regions of postmortem schizophrenic brains. The downregulated myelin- and OL-related genes include neuregulin 1 (NRG1), CNP (2′,3′-cyclic nucleotide 3′-phosphodiesterase), CLDN11 (claudin 11, an OL-specific protein), OLIG2 (OL lineage transcription factor 2), MAG (myelin-associated glycoprotein), MAL (myelin and lymphocyte protein), QKI (quaking homolog, KH domain RNA binding (mouse)), TM4SF11 (transmembrane 4 superfamily member 11), and GELS (gelsolin). In the following, we will selectively review evidence for changes in some of the above-mentioned genes.

3.1. NRG1.
This is a family of proteins containing an epidermal growth factor-like domain that specifically activate receptor tyrosine kinases of the erbB family: erbB2, erbB3, and erbB4 [61]. NRG1-mediated erbB signaling has important roles in neural and glial development, as well as in the regulation of neurotransmitter receptors thought to be involved in the pathophysiology of schizophrenia. In the study by Hakak et al. [58], a significant reduction in the level of erbB3 expression was found in the PFC of schizophrenic patients. This decrease was confirmed by quantitative and differential-display RT-PCR analysis [62]. In a genome-wide scan, Stefansson et al. [63], by means of haplotype analysis, identified NRG1 as a candidate gene for schizophrenia. This association of NRG1 with schizophrenia was confirmed by the same group in a Scottish population [64] and by an independent study in a large sample of unrelated Welsh patients [65]. A strong association between NRG1 and schizophrenia was also found in a Chinese population [66,67], but not in a Japanese population [68]. In a more recent study [69], schizophrenic patients with the T allele of a single-nucleotide polymorphism (SNP8NRG221533) showed significantly decreased anterior cingulum FA compared with patients homozygous for the C allele and with healthy controls who were T carriers, suggesting that NRG1 variation may play a role in the white matter abnormality of this brain region in patients with schizophrenia.

CNP. This gene maps to 17q21.2, a region of the genome in which genome-wide evidence for linkage to schizophrenia was observed in a single pedigree [65]. CNP (the protein) can be detected early in development, in the precursor cells of OLs. In adulthood, CNP shows a high turnover compared with other myelin-associated proteins [70]. Postmortem studies of anterior frontal cortex demonstrated less immunoreactivity of CNP in schizophrenia [71]. This result confirmed the downregulation of the CNP gene in the schizophrenic brain [58]. Compatible with the underexpression of CNP mRNA in schizophrenia, the low-expressing A allele was significantly associated with schizophrenia in a case-control sample, and all affected individuals in the linked pedigree were homozygous for the low-expression allele [72]. In a recent human study, reduced CNP protein in the hippocampus and anterior cingulate cortex of patients with schizophrenia was also reported [73].

MAG. This gene encodes myelin-associated glycoprotein, a component of the myelin sheath produced by OLs [74,75]. Microarray studies reported decreased MAG mRNA expression in the DLPFC and the middle temporal gyrus of postmortem schizophrenic brains [58,62]. In studies using quantitative PCR analysis, decreased MAG mRNA was found in the anterior cingulate cortex and the hippocampus, in addition to the DLPFC [59,76]. These findings were confirmed in a recent study that reported a decrease in the expression of MAG in white matter in schizophrenia using a probe that detected mRNAs for both the large and small MAG splice variants. However, expression of MAG did not differ between patients with schizophrenia and controls in the grey or white matter in another study [77]. Discrepancy was also seen in genetic association analyses; some, but not all, of the analyses linked the MAG gene to schizophrenia [78][79][80].

OLIG2. This gene maps to 21q22.11 and encodes a transcription factor central to OL development. A strong association of OLIG2 with schizophrenia has been reported. There are reports of a deletion in this region in patients with schizophrenia [81] and of a low risk of schizophrenia in people with trisomy 21 [82].
In the postmortem schizophrenic brain, OLIG2 mRNA was reduced [60,62,83]. OLIG2 expression in the cerebral cortex correlated significantly with CNP and ERBB4, suggesting interaction effects on disease risk between OLIG2 and CNP [84].

QKI. In a genome scan of a single large family from northern Sweden with a high frequency of schizophrenia and schizophrenia-spectrum disorders, Lindholm et al. [85] detected a maximum LOD (logarithm of odds) score of 6.6 on chromosome 6q25. This region contains only one gene described in the literature and the human databases, quaking homolog, KH domain RNA binding (mouse) (QKI) [86], pointing to the potential involvement of QKI in schizophrenia. In support of this suggestion, the expression of QKI mRNA was decreased in seven cortical regions and the hippocampus in schizophrenic subjects [87], and the relative mRNA expression levels of two QKI splice variants were clearly downregulated in schizophrenic patients [88]. Moreover, the mRNA levels of the tightly coexpressed myelin-related genes PLP1, MAG, MBP, TF, SOX10, and CDKN1B were decreased in schizophrenic patients compared with control individuals. Most of these differences (68-96%) can be explained by variation in the relative mRNA levels of QKI-7kb, the same QKI splice variant shown to be downregulated in patients with schizophrenia. Therefore, the authors suggested that the decreased activity of some myelin-related genes in schizophrenia may be caused by disturbed QKI splicing [89].

The Other Myelin-Related Genes. In addition to the aforementioned genes, there have also been reports of association with schizophrenia for the myelin oligodendrocyte glycoprotein gene (MOG) [90], the proteolipid protein 1 gene (PLP1) [91], and the transferrin gene (TF) [92]. Of these, PLP1 warrants emphasis here. The proteins (PLP1 and its splicing variant DM20) encoded by this gene are synthesized by OLs as the two major integral proteins of the myelin membranes of the CNS [93]. Point mutations of human PLP have been recognized as the molecular basis of one form of leukodystrophy, the X-chromosome-linked Pelizaeus-Merzbacher disease (PMD), and a novel mutation in the PLP gene has been reported to lead to PMD [94]. Lower levels of PLP1 mRNA have been reported in schizophrenia [59,62,95]. There is also evidence for a genetic association of PLP1 with schizophrenia [91]. However, in a Japanese population, no association was found between PLP1 and schizophrenia [96].

Abnormal Behaviors in Patients with White Matter Diseases

The third line of evidence for the involvement of white matter abnormalities in schizophrenia came from studies reporting abnormal behaviors in patients suffering from white matter diseases.

Agenesis of the Corpus Callosum (ACC). The corpus callosum is the largest white matter tract in the brain. Two developmental malformations of the corpus callosum associated with psychosis are partial or complete ACC and callosal lipoma [97]. When psychiatric disturbance presents in ACC sufferers, it is psychotic in nature in at least half of the patients [98]. Psychosis is also seen in Andermann's and Apert's syndromes at a higher rate compared with healthy controls; both syndromes are accompanied by ACC [99,100]. On the other hand, undiagnosed ACC has been detected in schizophrenic patients at a significantly higher rate [101].

Metachromatic Leukodystrophy (MLD). This is a devastating demyelinating disease caused by a deficiency of the enzyme sulfatide sulfatase, also known as arylsulfatase A (ASA).
Patients with MLD have abnormalities predominantly in the frontotemporal white matter. Up to 50% of patients with adolescent or early-adult onset present with psychotic symptoms such as auditory hallucinations, thought disorder, affective disturbance, and catatonia [102]. In many cases of MLD, the behavioral abnormalities are the first symptoms, and some of these cases have been diagnosed as schizophrenia. Very seldom do neurological symptoms, especially ataxia, occur without cognitive or psychiatric disturbances [103]. On the other hand, a large number of adult patients with varying psychiatric manifestations have low levels of ASA activity, suggesting that such patients may be asymptomatic carriers of the sulfatidase defect (heterozygotes for MLD) [104].

The Adult-Onset Form of Niemann-Pick Type C (NPC) Disease. This is a lipid storage disorder. In the early stage of NPC, only white matter is affected [105,106]. Patients show white matter disruption in the corpus callosum [107] and periventricular white matter [108]. Up to 40% of cases, a rate comparable with MLD, present initially with psychosis [108][109][110][111][112].

Multiple Sclerosis (MS). This is a demyelinating disease of the CNS. The onset of most cases occurs between 20 and 40 years of age [113], reminiscent of the onset of schizophrenia, which occurs mainly between the ages of 16 and 30 years [2]. In addition to the cardinal pathological features of focal areas of demyelination and immune-mediated inflammation, patients with MS show a number of different behavioral syndromes, which may be broadly divided into two categories: those pertaining to mood, affect, and behavior and those impairing cognitive functions [114]. Recent epidemiological studies estimated that the prevalence of psychosis in MS patients is two to three times that in the general population [115]. More interestingly, the prevalence is highest (about 4.2%) in the 15- to 24-year age group of MS patients, which again reminds us of the early onset of schizophrenia. In patients with clinically definite MS, cognitive abnormalities can be detected in 40-60% of cases [116]. Memory and executive functions are often impaired to an extent that cannot be explained as a result of general intellectual decline [117]. Moreover, impairment in sustained attention, processing speed, and verbal memory in MS patients correlated negatively with MS lesion volume in frontal and parietal regions at baseline and at 1-year and 4-year follow-up, suggesting a contribution of frontoparietal subcortical network disruption to these cognitive impairments in MS [118].

Oligodendrocyte-Related Genetic Animal Models of Schizophrenia

Although a number of biologically related genes have been reported to be downregulated in schizophrenia, as reviewed above, only a few genetic animal models have been reported that show white matter development disruption and schizophrenia-related behaviors and can thus be used as potential animal models of schizophrenia.

Plp1 Transgenic Mice Show Schizophrenia-Related Behaviors. The first animal study that showed both white matter development disruption and abnormal behaviors was done by Boison and Stoffel [119]. They produced transgenic mice carrying a targeted alteration of the plp gene containing a deletion within exon III, mimicking DM20, and a neo cassette in reverse orientation within intron III.
The ultrastructure of the multilayer myelin sheath of all axons in the CNS of hemizygous male or homozygous female PLP/DM20-deficient mice is highly disordered. This disrupted assembly of the myelin sheath was accompanied by a profound reduction of the conduction velocity of CNS axons, impairments in neuromotor coordination, and reduced spontaneous locomotor activity. In a more recent study, Tanaka et al. [120] analyzed a transgenic mouse line harboring extra copies of the plp1 gene (plp1 tg/− mice) at 2 months of age. Although the plp1 tg/− mice showed an unaffected myelin structure, the conduction velocity in all axonal tracts tested in the CNS was greatly reduced. Moreover, the plp1 tg/− mice showed altered anxiety-like behaviors, reduced prepulse inhibition (PPI), spatial learning deficits, and a working memory deficit. These are schizophrenia-related behaviors, suggesting that the plp1 tg/− mice may be used as a potential animal model to examine the role of the altered plp1 gene in schizophrenia.

Functional Consequences of Perturbing NRG1/erbB4 Signaling. In functional studies, mutant mice heterozygous for either NRG1 or its receptor erbB4 showed a behavioral phenotype that overlaps with mouse models for schizophrenia. Furthermore, NRG1 hypomorphs had fewer functional NMDA receptors than wild-type mice. More interestingly, the behavioral phenotypes of the NRG1 hypomorphs were partially reversible with clozapine, an atypical antipsychotic drug used to treat schizophrenia [63]. Since then, a number of the behavioral phenotypes of mutant mice with heterozygous deletion of the transmembrane domain of NRG1 have been replicated in independent laboratories [121], including hyperactivity in a novel environment [122,123], mild disruption of PPI [124], and social interaction deficits [123,125]. However, both spatial learning and memory, assessed in the Barnes maze, and spatial working memory, measured by non-delay Y-maze alternation, remained intact [123]. Similarly, there was no effect of NRG1 genotype on performance in either test of emotionality/anxiety [125]. To test whether erbB signaling contributes to psychiatric disorders by regulating the structure or function of OLs, Roy et al. [126] analyzed transgenic mice in which erbB signaling was blocked in OLs in vivo. Loss of erbB signaling led to changes in OL number and morphology, reduced myelin thickness, and slower conduction velocity in CNS axons. Furthermore, these transgenic mice exhibited increased levels of DA receptors and transporters and behavioral alterations including reduced locomotion and social dysfunction. More interestingly, BACE1 (β-site APP-cleaving enzyme 1) knockout mice, in which NRG1 processing is altered, exhibited deficits in PPI, novelty-induced hyperactivity, hypersensitivity to a glutamatergic psychostimulant (MK-801), cognitive impairments, and deficits in social recognition. Some of these manifestations were responsive to clozapine. Although the total amount of erbB4 did not change in BACE1 knockout mice, the binding of erbB4 to postsynaptic density protein 95 was significantly reduced in the brains of these mice [127]. Together, the above studies suggest that altered NRG1/erbB4 signaling plays an important role in the pathogenesis of schizophrenia.

Nogo-A-Deficient Mice. In addition to the above two animal models, a mouse model of constitutive genetic Nogo-A deficiency deserves to be emphasized here.
In a comprehensive series of behavioral tests with specific relevance to schizophrenia psychopathology, the Nogo-A-deficient mice showed deficient sensorimotor gating, disrupted latent inhibition, perseverative behavior, and increased sensitivity to the locomotor-stimulating effects of amphetamine. Moreover, these behavioral changes were accompanied by altered monoaminergic transmitter levels in specific striatal and limbic structures, as well as changes in D2 receptor expression in the same brain regions [128]. Therefore, the authors concluded that Nogo-A may bear neuropsychiatric relevance and that alterations in its expression may be one etiological factor in schizophrenia and related disorders.

Cuprizone-Fed Mouse: A Novel Animal Model of Schizophrenia

A Murine Model of Demyelination/Remyelination. Cuprizone (CPZ: biscyclohexanone oxalyldihydrazone) is a copper chelator used as a reagent for copper analysis. In early studies [129,130], higher doses (0.3, 0.5, and 0.75%) of CPZ were administered to animals via the diet. These treatments were extremely toxic to mice, manifesting as severe growth reduction, posterior paresis, and high mortality early in the feeding period. Convulsions and seizures were also seen at later stages (6-7 weeks after the start of CPZ feeding). Pathological alterations included severe status spongiosus, astrogliosis, demyelination, and hydrocephalus. Under electron microscopy, there were many large vacuoles within the myelin sheaths and swollen glial cells. The vacuoles, which resulted from giant mitochondria, were also seen in the hepatocytes [129]. Later studies [131,132] administered a lower dose (0.2%) of CPZ to mice. These animals show no evident toxic effects or neurological symptoms; consistent demyelination and mature OL loss are the main pathological alterations. When the animals are allowed to recover on a normal diet, remyelination begins within a week and progresses until all axons are myelinated. Because of these features, CPZ models have been extensively used to address issues important to understanding the pathophysiology of demyelination and the mechanisms involved in remyelination.

Behavioral Deficits in the CPZ-Fed Mouse. Given that demyelination and mature OL loss are the main pathological alterations in the brains of mice exposed to the lower dose (0.2%) of CPZ, examining possible behavioral deficits in CPZ-fed mice should provide informative data relating specific behaviors to regional white matter abnormality. In the first report, by Liebetanz and Merkler [133], central motor deficits were observed in mice fed the CPZ-containing diet by using a novel murine motor test, the motor skill sequence, designed to detect latent deficits in motor performance. In the first step, mice were habituated to training wheels composed of regularly spaced crossbars until maximal wheel-running performance was achieved. Then, the animals were exposed to wheels with irregularly spaced crossbars demanding high-level motor coordination. Demyelinated mice showed reduced running performance on the training wheels compared with controls. This deficit was even more pronounced when these mice were subsequently exposed to the complex wheels. Interestingly, remyelinated animals after CPZ withdrawal showed normal performance on the training wheels but abnormal performance on the complex wheels. The poor motor coordination of the CPZ-fed mice was also detected in rota-rod analysis [134].
In addition, in the 3rd and 4th weeks after the start of 0.2% CPZ treatment, the mice exhibited an increase in CNS activity, that is, an increase in climbing during the functional observation battery (FOB) tests, and an inhibited anxiogenic response in the novelty challenge (open-field) test. The FOB protocol consisted of 18 endpoints evaluating CNS activity and excitability, neuromuscular and autonomic effects, and sensorimotor reactivity [135]. These results related white matter abnormality to emotional behavior and are reminiscent of the previous finding that transection of the rat corpus callosum induces increased rearing and activity in the centre of the open field [136,137].

A Novel Animal Model of Schizophrenia. In 2008, we examined, for the first time, the effects of quetiapine, an atypical antipsychotic drug, on OLs [138]. We started with the in vitro effects of quetiapine on OL development. Quetiapine was shown to increase the proliferation of neural progenitor cells (NPCs) in the presence of growth factors, direct the differentiation of NPCs into the OL lineage through extracellular signal-related kinases, upregulate the expression of MBP, and stimulate the myelination of axons by OLs in rat embryonic neocortical aggregate cultures. In the last experiment of this study, chronic administration of quetiapine prevented the CPZ-induced myelin breakdown and spatial working memory impairment in C57BL/6 mice. This protective effect of quetiapine on the CPZ-induced white matter abnormality was further substantiated in a subsequent animal study: the drug dramatically decreased the numbers of activated microglia and astrocytes that teemed in demyelinated sites, in addition to ameliorating the myelin breakdown and MBP decrease in the brain [139]. Inspired by the above studies, we further characterized the behavioral and neurobiological changes in CPZ-fed mice [140]. Mice exposed to CPZ for 2 and 3 weeks displayed more climbing behavior and PPI deficits. In addition, they showed lower activities of monoamine oxidase (MAO) and DA beta-hydroxylase (DBH) in the hippocampus and PFC and had higher DA but lower norepinephrine (NE) levels in the PFC. Mice exposed to CPZ for 4 to 6 weeks, when demyelination, myelin breakdown, and OL loss were evident, showed less social interaction compared with controls. At all time points, the CPZ-exposed mice spent more time in the open arms of an elevated plus-maze and exhibited spatial working memory impairment. The decreased social interaction and spatial working memory impairment were also reported in an independent study by other investigators [141]. These abnormal behaviors are reminiscent of some schizophrenia symptoms seen in human patients, thus suggesting that the CPZ-fed mouse may be used as a novel animal model of schizophrenia to explore the roles of white matter abnormalities in the pathophysiology and treatment of this mental disorder. More significantly, the CPZ-induced behavioral changes showed different responses to typical and atypical antipsychotics [142]. All tested antipsychotics (haloperidol, clozapine, and quetiapine), when coadministered with CPZ to mice, effectively blocked the PPI deficits (Figure 1); clozapine and quetiapine, but not haloperidol, protected CPZ-fed mice from spatial working memory impairment (Figure 2); and clozapine and quetiapine, but not haloperidol, ameliorated the social interaction decrease (Figure 3).
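PPI, the measure behind Figure 1, is conventionally reported as the percent reduction of the startle response when the pulse is preceded by a prepulse; a deficit means a smaller percentage. The snippet below shows that standard calculation; the startle amplitudes are hypothetical illustrations, not data from [142].

```python
def percent_ppi(startle_pulse_alone, startle_prepulse_pulse):
    """Prepulse inhibition as percent reduction of the startle amplitude."""
    return 100.0 * (startle_pulse_alone - startle_prepulse_pulse) / startle_pulse_alone

# Hypothetical startle amplitudes (arbitrary units)
print(percent_ppi(850, 300))  # control-like mouse: ~65% PPI
print(percent_ppi(820, 600))  # CPZ-fed-like mouse: ~27% PPI (a PPI deficit)
```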
These different effects of typical and atypical antipsychotics on abnormal behaviors seem to be related to their effects on the CPZ-induced white matter abnormalities, as clozapine and quetiapine, but not haloperidol, ameliorated the myelin breakdown and MBP decrease in the PFC, hippocampus, and caudate putamen (Figure 4). These results provide experimental evidence for the protective effects of antipsychotics on white matter abnormalities and the concurrent behavioral changes in CPZ-fed mice. In a more recent study, we observed the time courses of behavioral abnormalities and remyelination in mice after CPZ withdrawal and examined the effects of antipsychotics on the recovery processes [143]. The CPZ-induced abnormal performance on the elevated plus-maze recovered to the normal range within two weeks after CPZ withdrawal. In contrast, alterations in social interaction showed no recovery within the three-week post-withdrawal recovery period, and the social interaction deficit did not respond to any of the antipsychotics (clozapine, haloperidol, olanzapine, and quetiapine) tested in this study. Altered performance in the Y-maze showed some recovery in the vehicle group; clozapine, olanzapine, and quetiapine, but not haloperidol, significantly promoted this recovery process. None of the drugs affected the recovery of damaged white matter within the three-week recovery period. These negative results may be due to inappropriate doses of the tested drugs in this study and/or may reflect the intractable nature of these abnormalities. In the latter case, a reasonable suggestion would be that damage to OLs/myelin in the early phase could leave permanent damage to neural connectivity and/or its functions. To test this hypothesis, future studies should investigate the remyelination and functional recovery processes over longer recovery periods by means of various experimental approaches, including electron microscopy and electrophysiological techniques. While most animal studies applied CPZ to C57BL/6 mice, efforts were also made to develop a rat model of demyelination in the CNS. After exposure to CPZ for two or four weeks, rats showed a decrease in the mRNA transcripts and protein levels of OL-specific genes in the PFC [144]. Levels of myelin-related genes did not change in the striatum and hippocampus, two brain areas that should have been completely myelinated before the age of CPZ exposure. In addition, glial fibrillary acidic protein was upregulated in the PFC, indicating an activation of astrocytes. More interestingly, rats treated for two weeks with CPZ showed increased difficulty shifting attention from one perceptual dimension to another in the extra-dimensional shift phase of the attention set-shifting task, a modified version of the Wisconsin Card Sorting Test that depends on the PFC [145]. Importantly, CPZ-treated rats did not exhibit locomotor problems and had normal weight gain. Thus, the CPZ rat model can be used to study the developmental vulnerability of white matter, as well as the pathogenesis and behavioral consequences of dysmyelination [146].

Comparing the CPZ Model with Genetic Models of Schizophrenia. To summarize the aforementioned animal models of schizophrenia, Table 1 compares the CPZ model with the two genetic animal models. Notably, studies have shown that both hypermorphic [147,148] and hypomorphic [149][150][151] expression of the NRG1 gene may produce several common behavioral phenotypes in animals, which warrants further studies to address the mechanisms underlying these behavioral abnormalities.
The CPZ-fed mouse provides an alternative animal model showing white matter abnormalities in the brain and schizophrenia-related behaviors. More interestingly, these CPZ-induced changes responded differently to haloperidol and to atypical antipsychotic drugs.

Concluding Remarks

There is a great body of literature reporting white matter abnormalities in patients with schizophrenia, including imaging and postmortem evidence, suggesting a putative role of altered white matter in schizophrenia. Given its role as the primary infrastructure for long-distance communication in the brain, the evidence of altered white matter is consistent with the disconnectivity theory of schizophrenia, which emphasizes the role of abnormal interactions between brain regions [152]. OL and myelin dysfunction have now been linked to neurocircuitry abnormalities in schizophrenia [153]. The white matter hypothesis can account for why the symptoms of schizophrenia commonly first present in late adolescence and early adulthood, as white matter development is still ongoing during these periods [13]. It is also consistent with the neurodevelopmental theory of schizophrenia [154]. According to this theory, the interaction of genetic vulnerability and early environmental exposures can induce a developmental trajectory that culminates in the onset of the disorder [154,155]. Indeed, molecular genetic analyses revealed altered OL- and myelin-related genes in schizophrenia. Another line of evidence supporting the white matter hypothesis in schizophrenia came from patients with white matter diseases. As illustrated by ACC, MLD, NPC, and MS, white matter lesions are commonly accompanied by certain schizophrenia symptoms. These human studies warrant future experimental studies to examine a putative role of altered white matter in schizophrenia. The use of transgenic and mutant animal models offers a unique opportunity to analyze OLs and relevant changes in schizophrenia [156]. Examples included in this paper are the plp1 transgenic mice, mutant mice heterozygous for either NRG1 or its receptor erbB4, and the Nogo-A-deficient mice. These transgenic and mutant mice show both white matter development disruption and schizophrenia-related behaviors and thus may be used as potential animal models of schizophrenia. More informative data are expected to come from further studies with these animal models. Although CPZ is a neurotoxic compound, the white matter alterations seen in CPZ-fed mice do not conflict with the neurodevelopmental theory of schizophrenia. Indeed, genetic and developmental factors, such as animal species, age, and the developmental status of a white matter structure, have significant impacts on the white matter alterations in CPZ-fed animals [141,143,157]. Moreover, the abnormal behaviors seen in CPZ-fed mice and rats [140,141,143] are reminiscent of schizophrenia symptoms, including positive and negative symptoms as well as cognitive impairment. In addition, CPZ-fed mice show higher DA levels and lower NE levels in the PFC. These changes in DA and NE may account for the abnormal climbing behavior and PPI deficits that occurred before the appearance of demyelination and myelin breakdown [140]. High levels of DA in the PFC may also contribute to demyelination and myelin breakdown in this brain region.
This notion is in accordance with the finding that chronic administration of amphetamine (1.0 mg/kg) to mice caused microstructural changes in the white matter of the frontal cortex and induced higher locomotion and spatial working memory impairment [158]. The abnormal white matter, in turn, may affect DA neurotransmission in the brain and thus cause behavioral changes. In line with this view, a new hypothesis of schizophrenia has been proposed, which theorizes that abnormal myelination of late-developing frontal white matter is a single underlying cause of the three distinctive features of this disorder, namely, its excessive DA neurotransmission, its frequent periadolescent onset, and its bizarre, pathognomonic symptoms [159]. Extensive studies are necessary to further address the relationship between excessive DA neurotransmission and abnormal myelination in schizophrenia.
7,854
2011-08-11T00:00:00.000
[ "Medicine", "Biology", "Psychology" ]
Non-Equilibrium ϕ4 theory for networks: towards memory formations with quantum brain dynamics

We investigate the time evolution of quantum fields in neutral scalar ϕ4 theory for open systems with a central region and multiple reservoirs (networks) as a toy model of a quantum field theory of the brain. First we investigate the Klein–Gordon (KG) equations and the Kadanoff–Baym (KB) equations in open systems in d + 1 dimensions. Next, we introduce the kinetic entropy current and provide a proof of the H-theorem for networks. Finally, we solve the KG and the KB equations numerically in spatially homogeneous systems in 1 + 1 dimensions. We find that decoherence, entropy saturation and chemical equilibration all occur during the time evolution in the networks. We also show how coherent field transfer takes place in the networks.

Introduction

What is memory, and where is it located in the human brain? We know that memory is rich and diverse, can be short-term or long-term but in both cases is imperfectly stable, and has a diffuse, nonlocal character in the brain [1][2][3]. Conventional neuroscience does not explain these properties in a satisfactory manner, nor does it provide a clear mechanistic explanation of how and where memory is stored at a molecular level [4,5]. Quantum Field Theory (QFT) of the brain, also called Quantum Brain Dynamics (QBD), represents an attempt to provide a microscopic physical mechanism of memory formation and storage in the brain [6,7]. Its origin can be traced to the seminal work of Ricciardi and Umezawa in 1967 [8], which is based on the concept of spontaneous symmetry breaking (SSB) [9][10][11], equivalent to macroscopic order of the system, triggered by external physical stimuli. In the 1970s, Quantum Brain Dynamics was further developed by Stuart et al [12,13]. According to this theory, the brain is a mixed physical system composed of classical neurons and quantum degrees of freedom named corticons and exchange bosons. Around the same time, Fröhlich proposed a theory of the electric dipole moments of molecules in biological systems, which are nonlinearly coupled and interact with the heat bath and phonon modes [14][15][16][17][18][19]. The electric dipole moments of biomolecules become dynamically ordered and, as a result, a coherent wave of dipole oscillations propagates through the biological system in the form of a Fröhlich condensate. Similarly, in 1976, Davydov and Kislukha proposed a model in which a solitary wave can propagate along protein chains carrying energy quanta without dissipative losses; this form of biological energy transfer was called the Davydov soliton [20]. Both the Fröhlich and Davydov theories describe nonlinear biological coherence phenomena; they correspond, respectively, to static and dynamic properties of the nonlinear Schrödinger equation stemming from the same quantum Hamiltonian [21]. In the 1980s, Del Giudice et al studied several properties of water dipole fields, representing them using a QFT formalism adapted for biological systems [22][23][24][25]. In the 1990s, Jibu and Yasue offered a concrete representation of the quantum degrees of freedom (the corticons and exchange bosons) in the dynamics of the brain, namely water electric dipole fields and photon fields [6,[26][27][28][29]. In this representation, the Quantum Field Theory of the brain is a Quantum Electrodynamics (QED) description of the electric dipoles of water molecules and their interactions.
More specifically, this formulation adopted a superradiant phase for the coherent state of water dipoles and photons [30][31][32][33][34]. According to this theory, memory emerges as a coherent quantum state; whether such a state survives can be examined by multi-energy-mode analysis in non-equilibrium QFT. If a coherent quantum state can be shown to be robust against decoherence in a multi-energy-mode analysis, we can then conclude that memory formation processes in open systems could be adequately described using this formalism. Hence, if the information transfer between systems can be numerically simulated in QBD, this might offer a solid path toward a solution of the binding problem. The aim of this work is to introduce a toy model of QBD and describe its non-equilibrium multi-energy-mode dynamics using the Klein-Gordon equations and the Kadanoff-Baym equations [43][44][45] for open systems with a central region and multiple reservoirs (networks) in the neutral scalar ϕ4 theory. We adopt quantum tunneling processes to describe the transfer of coherent fields and incoherent particles among systems engaged in information transfer. Our work is an extension to open systems of the neutral scalar theory of an isolated system [46][47][48]. In this paper, we demonstrate that it is possible to extend the two-reservoir representation to the case of an open system with N_res reservoirs. We describe decoherence, entropy production and chemical equilibration via numerical simulations in 1+1 dimensions. We also show how information is transferred within networks. The present paper is organized as follows. In section 2 we derive the time evolution equations for quantum fields in open systems with a central region and multiple reservoirs. In section 3 we introduce a kinetic entropy current and show how it enters the H-theorem. In section 4 we present the numerical simulations performed. In section 5 we discuss our results. In section 6 we provide the conclusions of this work.

Time evolution equations

In this section, we introduce the Klein-Gordon (KG) equations for coherent fields and the Kadanoff-Baym (KB) equations [43][44][45] for quantum fluctuations in open systems. In figure 1, we show the central region and the N_res reservoirs. (This is the generalization to multiple reservoirs of the two-reservoir setup described previously [49][50][51][52][53] with tunneling effects [54][55][56][57].) First, the Lagrangian density consists of a free part and a quartic self-interaction for the central region C and for each reservoir α, together with bilinear tunneling terms v_α φ_C φ_α coupling C to each reservoir; here m is the mass, λ_C and the λ_α are the coupling constants of the self-interaction, and the v_α are the tunneling coupling constants between the central region C and the reservoir α [49][50][51][52][53]. We adopt the closed time path (Keldysh) formalism [58,59] shown in figure 2, with the contour denoted by 𝒞, to describe non-equilibrium phenomena, and use the 2-Particle-Irreducible (2PI) effective action technique [60][61][62], writing Γ[φ̄, G] = S[φ̄] + (i/2) Tr ln G⁻¹ + (i/2) Tr(G₀⁻¹ G) + Γ₂[φ̄, G], where expectation values are defined by ⟨·⟩ ≡ Tr(ρ ·), with ρ the density matrix, d is the spatial dimension, and the Green functions G(x, y) are written in matrix notation; the Γ₂ term represents all the 2PI diagrams. Next we derive the KG equations and the KB equations by differentiating the 2PI effective action with respect to φ̄ and G, respectively; both sets of equations involve the self-energy Σ obtained from Γ₂. We neglect off-diagonal elements of the self-energy Σ, since they represent higher-order terms in the tunneling coupling constants.
Time evolution equations for each element of equation (15) are derived in a similar way to [53], where we use the z-component of the Pauli matrix σ_z = diag(1, −1) and the solutions g_αα of the corresponding evolution equations for the reservoirs. Products of Green functions and self-energies denote space-time convolutions, as in equations (23)-(32). We need not derive F_αβ and ρ_αβ with α ≠ β, since they do not appear in the energy formula in appendix A. We can then derive the KG equations for φ̄_C and φ̄_α. In the Next-to-Leading-Order (NLO) approximation of the coupling expansion in λ_C, the self-energies Σ_F,CC and Σ_ρ,CC are written in terms of the statistical function F_CC and the spectral function ρ_CC (equations (37), (38) and (39)); Σ_F,αα and Σ_ρ,αα are obtained by replacing λ_C, F_CC and ρ_CC with λ_α, F_αα and ρ_αα in equations (37), (38) and (39).

Kinetic entropy current and the H-theorem

In this section, we introduce the kinetic entropy current in the first order of the gradient expansion [63][64][65], and give a proof of the H-theorem in the Next-to-Leading-Order approximation of the coupling expansion and the Leading-Order approximation of the tunneling coupling expansion. The variables of the Green functions and the self-energy are (X, p), with the center-of-mass coordinate X ≡ (x + y)/2 and the momentum p obtained by Fourier transformation with respect to the relative space-time coordinate x − y. We set t₀ → −∞ in figure 2. In a similar way to [53], by use of the (C, C) component of the Kadanoff-Baym equations (15), we arrive at the kinetic entropy current s^μ; the total entropy and its divergence ∂_μ s^μ then follow.

Numerical simulations

In this section, we show the results of our numerical simulations of the KG and the KB equations of section 2 in 1+1 dimensions. We assume spatially homogeneous systems, which means that the condition of spatial homogeneity is satisfied for each system. The initial condition of the Green functions is the same as that in [53]. The initial temperature T_ini is set to the mass m. The (L, R) reservoirs are extended to α = 1, 2, …, N_res. We prepare the coupling constants λ_C = λ_α = 4.0, and set the tunneling coupling constants v_α = 0.2/N_res, with N_res = 2, 3, 4, 5, 10 and 100. We set the memory time m t_mem = 35.25 and remove the information from the earlier past. We work with a discretized spatial momentum p_x.

Decoherence, entropy production and chemical equilibration

The initial condition for the background coherent fields is chosen such that only φ̄_C is nonzero. At early times 0 < mX⁰ < 10, the φ̄_α are amplified; at later times 10 < mX⁰, the φ̄_α are damped with subsequent oscillations and converge asymptotically to zero. The frequencies of φ̄_C depend only weakly on N_res at early times mX⁰ < 10, but an N_res dependence of the frequencies appears shortly after |φ̄_α| reaches its maximum value. Note that the coherent fields disappear in the time evolution for any N_res. In figure 5, we show the time evolution of the total energy density (divided by N_res + 1) given in appendix A. The energy error is within 0.3%. At early times 0 < mX⁰ < 10, the values of the total energy density oscillate around their initial values for any N_res, due to the sudden switch-on of the self-energy at the initial time. At 35.25 < mX⁰, they tend to increase monotonically, for any N_res, due to the removal of the information from the earlier past. The increase appears a little larger for larger N_res. The values of the total energy become smaller for larger N_res, since the initial energy density of the coherent fields is divided by N_res + 1.
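To make the stated initial data concrete, the snippet below evaluates the Bose-Einstein occupation n(p) = 1/(e^{Ω_p/T_ini} − 1), with Ω_p = √(p² + m²) and T_ini = m as above, on a discretized p_x grid. The grid extent and spacing are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

m = 1.0                              # all quantities in units of the mass m
T_ini = m                            # initial temperature, as stated above
p = np.linspace(-3.0, 3.0, 121)      # assumed p_x grid for illustration
omega = np.sqrt(p**2 + m**2)         # free mode energy in 1+1 dimensions

n_ini = 1.0 / (np.exp(omega / T_ini) - 1.0)   # Bose-Einstein occupation
print(n_ini.max())                   # occupation of the zero-momentum mode, ~0.58
```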
The time evolution of the energy density of the coherent fields, defined in appendix A, is shown in figure 6. The initial energy density of the coherent fields at mX⁰ = 0 has the same value for any N_res with this initial condition. The energy of the coherent fields is damped in the time evolution and converges to zero; we find that decoherence occurs in open systems. The difference in the energy density of the coherent fields for different N_res becomes gradually larger at larger mX⁰ in the time evolution. The larger N_res is, the larger the damping of the energy density becomes. At around mX⁰ ∼ 2, 7, 10, 13 and 17, the energy density of the coherent fields appears to increase a little for N_res = 10 and 100. For N_res = 2, 3, 4 and 5, the decrease of the energy density becomes smaller at mX⁰ ∼ 4, 7 and 10. In figure 7, we show the incoherent particle number distribution functions at mX⁰ = 0 and mX⁰ = 250 for N_res = 2, and at mX⁰ = 250 for N_res = 5, in the central region C. The incoherent particle number distribution function n(X⁰, p) of the mode energy Ω(X⁰, p) is defined, as in [53], through the equal-time statistical functions and their time derivatives evaluated at x⁰ = y⁰ = X⁰. Here, the statistical functions are labelled by CC or αα with α = 1, 2, …, N_res. The difference in n(X⁰, p) at smaller mode energies Ω(X⁰, p) < 2.0 between N_res = 5 and N_res = 2 is evident. The n(X⁰, p) in C appears to approach the Bose-Einstein distribution with temperature T and chemical potential μ. The temperature T is near m at around mX⁰ = 250. A typical difference appears in the values of the chemical potentials: μ/m = 0.157 for N_res = 5 and μ/m = 0.234 for N_res = 2. Even at mX⁰ = 250, the values of the chemical potential remain nonzero. The larger N_res is, the smaller the chemical potential becomes, for N_res = 2, 3, 4 and 5. In figure 8, we show the time evolution of the number density N(X⁰)/V = ∫ dp_x/(2π) n(X⁰, p), with the volume V. The number density in C increases at early times 0 < mX⁰ < 10 due to field-particle conversion. The increase becomes larger for larger N_res. At mX⁰ ∼ 2, 4 and 7, the N(X⁰)/V in C decreases a little temporarily. At later times 10 < mX⁰, the number density in C starts to decrease and converges to constant values for N_res = 2 and 5. The number density in the reservoir α tends to increase in the time evolution and converges to the same constant values as those in C. The constant values become smaller for larger N_res, for N_res = 2 and 5. We find that the time scales of the convergence to the constant values become larger for larger N_res. For N_res = 10 and 100, the difference in N/V between C and the reservoir α remains even at mX⁰ = 250, but N/V in C tends to approach that in the reservoir α. In figure 9, we show the time evolution of the entropy density in the system C and the reservoir α. We adopt the entropy density in the quasi-particle approximation, s = ∫ dp_x/(2π) [(1 + n) ln(1 + n) − n ln n] (equation (52)). We find the same tendency as for the number density. The entropy density in the system C increases at early times 0 < mX⁰ < 10 for N_res = 2 due to field-particle conversion, but decreases at later times 10 < mX⁰ due to incoherent particles being transferred from C to the reservoirs. The decrease of the entropy density in C and the increase of the entropy density in the reservoir α become slower for larger N_res, since the effects of incoherent particle transfer become smaller.
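The number and entropy densities just used are plain momentum integrals over n(X⁰, p). As a check of the quoted equilibrium behavior, the sketch below evaluates both for a Bose-Einstein distribution with a nonzero chemical potential; μ/m = 0.2 and the quadrature grid are illustrative assumptions.

```python
import numpy as np

m, T, mu = 1.0, 1.0, 0.2                 # units of m; mu/m = 0.2 for illustration
p = np.linspace(-20.0, 20.0, 4001)       # assumed quadrature grid
omega = np.sqrt(p**2 + m**2)
n = 1.0 / (np.exp((omega - mu) / T) - 1.0)    # Bose-Einstein with chemical potential

# N/V = int dp/(2 pi) n   and   s = int dp/(2 pi) [(1+n) ln(1+n) - n ln n]
number_density = np.trapz(n, p) / (2.0 * np.pi)
entropy_density = np.trapz((1 + n) * np.log(1 + n) - n * np.log(n), p) / (2.0 * np.pi)
print(number_density, entropy_density)
```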
In figure 10, the time evolution of the total entropy density is depicted. We find that the total entropy density tends to increase monotonically over the time evolution. At early times mX⁰ < 20, with the field-particle conversion, the increase is rapid compared with that at 20 < mX⁰. The nonmonotonic behavior is due to higher-order corrections in the gradient expansion. At around mX⁰ = 2, 4 and 7, the total entropy density tends to decrease a little. The nonmonotonic behavior disappears at later times, for mX⁰ > 40. The increase at later times is larger for N_res = 100 than for N_res = 2. The entropy density for N_res = 100 is smaller than that for N_res = 2, since the produced incoherent particles are spread out over C and the N_res reservoirs. We find that the values of the total entropy density divided by N_res + 1 for N_res = 2 and 5 at mX⁰ = 250 in figure 10 are equal to the entropy densities in C and α for N_res = 2 and 5 at mX⁰ = 250 in figure 9.

Information transfer in networks

In this section, we show how quantum coherence is transferred within networks of interacting domains. The initial condition of the coherent fields is chosen such that only the field φ̄₁ in the first reservoir is nonzero. In figure 11, the time evolution of the coherent field φ̄₁ is depicted. The φ̄₁ is damped with subsequent oscillations and converges to zero in the time evolution. When N_res is larger, the damping of φ̄₁ becomes slightly smaller due to smaller coherent field transfer from the first reservoir to the system C. The frequency is less dependent on N_res. The time evolution in figure 11 is similar to that in figure 3. In figure 12, we show the time evolution of the coherent field φ̄_C. At early times 0 < mX⁰ < 10, the amplitude increases, but it decreases at later times 10 < mX⁰. The values of φ̄_C at the local minimum at around mX⁰ = 10.0 are −0.105, −0.072, −0.055, −0.044, −0.022 and −0.002 for N_res = 2, 3, 4, 5, 10 and 100, respectively. We find that the phase difference between φ̄₁ in figure 11 and φ̄_C in figure 12 is π/2. The frequency is less dependent on N_res. In figure 13, we show the time evolution of the coherent fields φ̄_β (β ≠ 1). At early times 0 < mX⁰ < 17.5, the amplitude of φ̄_β grows, reaching its extremal value at around mX⁰ = 17.5 (see section 5).

Discussion

In this paper, we have derived the time evolution equations in open systems with a central region and multiple reservoirs (forming networks), namely the Klein-Gordon (KG) equations for coherent fields and the Kadanoff-Baym (KB) equations for incoherent particles. (We have generalized our results in [53] to systems with multiple reservoirs.) We have investigated the relativistic ϕ4 theory in order to describe dynamics similar to that of photons, since the time evolution equation of coherent photons is the relativistic KG equation. We have introduced the kinetic entropy current and shown the H-theorem for open systems with a central region and multiple reservoirs. We have found that decoherence, entropy saturation and chemical equilibration occur in numerical simulations of the KG and KB equations in 1+1 dimensions. The decoherence occurs due to parametric resonance instability and field-induced processes in the self-energy, as elaborated on in section 2. Coherence is not robust in the ϕ4 theory with m² > 0. The entropy production is consistent with the proof of the H-theorem given in section 3. The chemical equilibration occurs due to the particle-number-changing processes in the KB equations and the tunneling processes between systems. In section 4.1, we prepared a nonzero value of φ̄_C as an initial condition.
In the time evolution in figure 3, the coherent field φ̄_C is damped with subsequent oscillations. The damping of the coherent field φ̄_C is only slightly dependent on N_res. We have shown the time evolution of φ̄_α in figure 4. We find that the maximum amplitude is proportional to the tunneling coupling constant 0.2/N_res. The frequency is less dependent on the tunneling coupling constant. In a similar way to [53], these results are explained by the forced oscillation in the Klein-Gordon equation for φ̄_α. We find that the energy of the coherent fields is damped over the time evolution (decoherence) in figure 6, although there are back reactions from particles to fields. Coherence is lost and is not robust in the neutral scalar ϕ4 theory. The damping of the energy of the coherent fields depends on the number of reservoirs N_res. This is explained by incoherent particles being transferred between systems. As shown in figure 8, the incoherent particle number density in the central region C increases at early times 0 < mX⁰ < 10 due to field-particle conversion (decoherence). The field-particle conversion occurs mainly in C. The maximum value of the number density in C becomes larger as the number of reservoirs N_res increases, since the tunneling of incoherent particles (with the tunneling coupling constant v_α = 0.2/N_res) becomes smaller for larger N_res. The larger the number density is, the larger the effects of the field-induced processes become. As has been shown by several authors [46][47][48], larger damping of coherent fields occurs at higher temperatures, which means that the number density in the system is larger. Since the number density in C becomes larger for larger N_res, the damping of the energy of the coherent fields becomes larger. In figure 8, we also see that the number density in C and in the reservoirs converges to a constant value over the course of the time evolution for N_res = 2 and 5. This result means that chemical equilibration occurs. In this figure, the number density near the equilibrium state is smaller for larger N_res, since the produced incoherent particles spread within the network and the number density in each system becomes smaller for larger N_res. The time scales of convergence become larger for larger N_res, because the tunneling of incoherent particles (with the tunneling coupling constant v_α = 0.2/N_res) becomes smaller for larger N_res. For N_res = 10 and 100, the chemical equilibrium state has not been reached even at mX⁰ = 250. In figure 9, we show the entropy density for each system. In this figure, we find results similar to those for the number density in figure 8. The entropy density in C increases at early times 0 < mX⁰ < 10 due to field-particle conversion (decoherence), but decreases at later times 10 < mX⁰ due to incoherent particles being transferred from C to the reservoirs. The entropy density in the reservoirs increases over the course of the time evolution. The entropy density in C and that in the reservoirs converge to constant values for N_res = 2 and 5. The total entropy density in figure 10 tends to increase monotonically for N_res = 2, 5, 10 and 100. It seems that the monotonically increasing behavior is common to any number of reservoirs N_res. This result is consistent with the H-theorem stated in section 3. The nonmonotonic behavior is due to higher-order terms in the gradient expansion. In the course of the time evolution, the error in the total energy density is within 0.3%.
The error takes its maximum values around the initial time due to the sudden switch-on of the self-energy. The error in the total energy density at later times 35.25 < mX^0 < 250 with memory time mt_mem = 35.25 does not exceed the error around the initial time. (For mX^0 > 35.25, we have removed the information from the earlier past.) This is because we take a sufficiently large memory time mt_mem = 35.25 in the time evolution of the KG and KB equations. The statistical function F(X, z; p) is approximately expressed in terms of the particle number distribution n(p), the damping factor γ_p and the frequency Ω_p. We can estimate the order as m/γ_p ∼ 10 at mx^0 = 35 in C for N_res = 2 and 100 in the numerical simulations in section 4.1. Hence, we can remove the information from the earlier past, since m/γ_p ∼ 10 is sufficiently smaller than mt_mem = 35.25. The quantity γ_p is related to the spectral width, or Σ_ρ. The larger the number density is, the larger the spectral width and γ_p become. The reason why the error in the total energy density divided by N_res + 1 for N_res = 100 is a little larger than that for N_res = 2 at later times 35.25 < mX^0 < 250 might be the difference in the number density in the reservoir α. Since m/γ_p becomes larger (smaller damping) due to the smaller number density in the reservoir α for N_res = 100 than for N_res = 2, the information of the Green functions in the reservoir α in the earlier past, which is removed at mx^0 > mt_mem, might be relatively more significant for N_res = 100 than for N_res = 2. In the coherent field transfer within networks in section 4.2, the damping of the coherent field φ_1 is smaller for larger N_res, since the tunneling coupling constant v_α is proportional to 1/N_res. The time evolution of φ_1 in section 4.2 is similar to that of φ_C in section 4.1 because the tunneling coupling is small. The difference between these two gradually appears in the time evolution, since φ_C is connected to N_res reservoirs, but φ_1 is connected only to C. The coherent field in one reservoir is transferred to C. The maximum amplitude of φ_C (the minimum value at mX^0 = 10) seems to be proportional to the tunneling coupling constant v_α = 0.2/N_res. The phase difference between φ_1 and φ_C is π/2. These results are explained by the analysis of the forced oscillation in the KG equation in C, in a similar way to [53]. We also find that the minimum value of φ_β (β ≠ 1) at mX^0 = 17.5 in the other reservoirs seems to be proportional to 1/N_res², which is explained by forced oscillations in the KG equation in the reservoir β. The frequency of φ_β is less dependent on the number of reservoirs N_res. This result corresponds to information transfer, in which the change of one system by external stimuli is transferred to the hub system C and then spreads to the other systems. Since the phases of φ_β are the same, this situation might represent a form of synchronization induced by external stimuli.
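The forced-oscillation argument used here can be sketched with an effective single-mode equation (our schematic form with an effective damping rate γ; the actual KG equations contain memory integrals):

\ddot{\varphi}_C + \gamma\,\dot{\varphi}_C + \Omega^2 \varphi_C \;\simeq\; v_\alpha\, \varphi_1(X^0), \qquad v_\alpha = \frac{0.2}{N_{\mathrm{res}}} .

The driven amplitude is linear in the coupling, which matches the amplitude of φ_C being proportional to v_α; a reservoir field φ_β (β ≠ 1), driven only through C, picks up two powers of the coupling, consistent with the 1/N_res² scaling noted above.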
We set the positive mass squared m² > 0 in this paper, although the negative mass squared m² < 0 is commonly adopted in discussing the Higgs mechanism. Even for m² > 0, we can show the Higgs mechanism with a nonzero expectation value of charged bosons φ by preparing a nonzero charge density of fermions, as in QED with charged fermions [66]. It is also possible to show the nonzero expectation value of charged bosons φ by preparing a nonzero charge density of incoherent charged bosons [67]. This is because of the form of the potential energy F[φ̄], in which the combination of the gauge field eA^0 and the time derivative of the phase is equivalent to the chemical potential. The Higgs mechanism for m² > 0 is due to a nonzero chemical potential. Then, the expectation value of φ becomes diverse due to the diverse charge density or chemical potential. This situation might be similar to the case in which the diquark condensate ⟨qq⟩ takes different values due to a different chemical potential in the SU(2) lattice gauge theory [68,69]. We might be able to prepare diverse coherent states by preparing different charge densities even in QBD. We set the same mass m in C and the reservoirs α to discuss the information transfer between systems with the quantum tunneling phenomena suggested in [29]. In section 3, we find that the momentum and the frequency of incoherent particles do not change in tunneling between C and α to first order in the gradient expansion, since no convolution in the momentum and the frequency appears in the product g(X, p)G(X, p) in equation (48). In the quasiparticle approximation, in which the spectral width is narrow enough at small number density, the mass in C must be the same as that in α to describe the tunneling phenomena. Hence, we prepared the same mass in C and α. (We also set the same coupling constant λ_C = λ_α to avoid different thermal effects between systems.) In QBD, the excitation of a finite number of evanescent photons represents the effect of memory retrieval. By using quantum tunneling, we might be able to describe associative memory in which incoherent particles are transferred between systems. Then, the difference between the masses of evanescent photons might classify each memory subdivided across areas in the brain; namely, through external stimuli we might maintain only the memory in networks with coherent states having similar values of the evanescent photon mass. We can set the Planck constant to zero in the classical approximation. Decoherence occurs even in that case, in which the incoherent particle number distribution function in the equilibrium state becomes T/Ω with the temperature T instead of the Bose-Einstein distribution, since the self-energy in open systems is the same as that in the isolated system. By comparing the quantum case with the classical one, we find that the difference between the frequencies of the coherent fields gradually appears in the time evolution due to the difference between the corresponding local self-energies, given by the momentum integration of the statistical function proportional to the Bose-Einstein distribution or to T/Ω(X, p) near the equilibrium states. We have adopted the universal usage of the same spatial variable x from the introduction of the Lagrangian. By assuming spatial homogeneity, we have constructed a model which represents various areas connected in networks having a hub. Due to spatial homogeneity, the spatial difference between the boundaries and other regions disappears. However, since we start with QFT, it is possible to describe a phase transition occurring in this model. We can describe macroscopic properties of matter with time-dependent order parameters based on QFT. It is also possible to describe quantum tunneling phenomena such as the Josephson effect. These are the key advantages of this model. In this paper, we have studied equilibration processes in quantum networks possessing a central region and multiple reservoirs.
The expectation value of the scalar coherent fields corresponds to the square root of the number of aligned dipoles and to the mass of the evanescent photons in QED with water electric dipoles (QBD). When the mass of the evanescent photons remains in the equilibrium state in open systems (coherence is maintained), we can describe memory formation processes using QBD and trace the information transfer between systems in QBD within networks. In the future, we plan to describe equilibration processes in QBD in networks. This work in the φ⁴ theory applied to quantum networks can be extended to the case of QBD. Conclusion We can trace the time evolution of the quantum fields in the φ⁴ theory for open systems with a central region connected to multiple reservoirs using the Klein-Gordon and Kadanoff-Baym equations. Decoherence, entropy saturation and chemical equilibration occur over the course of the time evolution. The time scales of decoherence are smaller for a larger number of connected reservoirs N_res, but the time scales of chemical equilibration are larger for larger N_res. The information transfer has been shown to occur between the interacting systems. In particular, the phase and frequency are less dependent on the number of interacting reservoirs N_res. The above results are derived by extending the results for systems with two reservoirs to systems with multiple reservoirs. Appendix B. Numerical method We show how to calculate the time evolution of the Kadanoff-Baym equations in open systems in section 2, in particular the time integration, using the Fortran programming language. We discretize the time variables as x^0 = n a_t and y^0 = m a_t with integers n and m and time stepsize a_t. Before the time evolution is implemented, the Green functions and self-energy at n, m = 0, 1 are given by their initial conditions. First, we write the do-loop of n starting with 1 for the time steps of the evolution of x^0. Inside the do-loop, we calculate the coherent fields at step n + 1 with the Klein-Gordon equations. Next, we calculate the time evolution of the Kadanoff-Baym equations. We write the first do-loop of m (0 ≤ m ≤ n+1) and momentum p, inside which we calculate the Green functions in C in equation (23) if m < n+1, with the fixing ρ_CC(n+1, n+1) = 0, and calculate g_F,αα in equation (25) and g_ρ,αα in equation (26), with the fixing g_ρ,αα(n+1, n+1) = 0, in a similar way to the above equations or [70]. The do-loop of p is inside the do-loop of m. In the case m = 0, the sum over l on the right-hand side of the above equation represents the sum of the stored F(l, m), i.e., the discretized memory integral. Then we end the first do-loop of m and p. Next, inside a second do-loop of m and p, we calculate F_αC(n+1, m) in equation (27), ρ_αC(n+1, m) in equation (28), F_Cα(n+1, m) in equation (29) and ρ_Cα(n+1, m) in equation (30), with the fixing ρ_Cα(n+1, n+1) = 0. Then we end the second do-loop of m and p. Next, we write the third do-loop of m (0 ≤ m ≤ n+1) and momentum p, inside which we calculate F_αα in equation (31) and ρ_αα in equation (32) with g_F,αα, g_ρ,αα, F_Cα and ρ_Cα, in a similar way to the above calculations. Then we end the third do-loop of m and p. Finally, we calculate the energy density, the entropy density and the number density at x^0, and calculate the local self-energies Σ_loc,C(n+1) and Σ_loc,α(n+1), and the nonlocal self-energies Σ_F,CC(n+1, m), …
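As an illustration of this loop structure, a minimal sketch in Python rather than Fortran (array sizes, names and the placeholder update functions are ours; the actual right-hand sides come from equations (23)-(32)):

import numpy as np

# Illustrative grid sizes and stepsize (placeholders; the paper's values differ).
N_STEPS, N_P = 100, 32
a_t = 0.05

phi_C  = np.zeros(N_STEPS + 2)                      # coherent field in C
F_CC   = np.zeros((N_STEPS + 2, N_STEPS + 2, N_P))  # statistical function in C
rho_CC = np.zeros((N_STEPS + 2, N_STEPS + 2, N_P))  # spectral function in C

def kg_step(n):
    # Placeholder for the discretized Klein-Gordon update of the coherent fields.
    return 2.0 * phi_C[n] - phi_C[n - 1]

def memory_rhs(n, m, p):
    # Placeholder for the memory integral (sum over stored past times l of
    # self-energies and Green functions) on the right-hand side of equation (23).
    return 0.0

for n in range(1, N_STEPS + 1):
    phi_C[n + 1] = kg_step(n)              # coherent fields first (KG equations)
    for m in range(0, n + 2):              # first do-loop of m (0 <= m <= n+1)
        for p in range(N_P):               # do-loop of p nested inside m
            if m < n + 1:
                F_CC[n + 1, m, p] = F_CC[n, m, p] + a_t * memory_rhs(n, m, p)
            else:
                rho_CC[n + 1, n + 1, p] = 0.0   # the fixing rho_CC(n+1, n+1) = 0
    # The second and third do-loops (equations (27)-(32)) and the updates of the
    # densities and self-energies would follow the same pattern.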
7,728
2019-05-30T00:00:00.000
[ "Physics" ]
A Contextual Reinforcement Learning Approach for Electricity Consumption Forecasting in Buildings The energy management of buildings plays a vital role in the energy sector. With that in mind, and targeting an accurate forecast of electricity consumption, the present paper aims to provide a decision on the best prediction algorithm for each context. It may also increase energy usage related to renewables. In this way, the identification of different contexts is an advantage that may improve prediction accuracy. This paper proposes an innovative approach where a decision tree is used to identify different contexts in energy patterns. One week of five-minute data sampling is used to test the proposed methodology. Each context is evaluated with a decision criterion based on reinforcement learning to find the most suitable forecasting algorithm. Two forecasting models are approached in this paper, based on K-Nearest Neighbors and Artificial Neural Networks, to illustrate the application of the proposed methodology. The reinforcement learning criterion consists of using the multi-armed bandit algorithm. The obtained results validate the adequacy of the proposed methodology in two case studies: a building and an industry. I. INTRODUCTION An important aspect of improving energy management, namely in the presence of demand response programs, is the forecasting of electricity-consuming activities [1]. In fact, the present paper's authors have previously published several works in the literature concerning electricity consumption forecasting [2]. K-Nearest Neighbors (KNN) and Artificial Neural Networks (ANN) have been proved to be adequate techniques for an office building application. However, in some specific periods, here stated as contexts, one of the algorithms is better than the other. Moreover, reinforcement learning has been largely applied to power and energy systems problems [3], providing learning of decisions in complex modeling environments. The authors of the present paper have also used reinforcement learning in building environments, although not for consumption forecasting, in [4]. Electricity consumption forecasting is important to guarantee improved energy management in smart buildings [5]. Therefore, the literature includes several buildings with accessible data for which different machine learning techniques have been researched to achieve more accurate predictions, as in [6]. Buildings equipped with smart grid technology take advantage of data generated from several sources, including smart meters, phasor measurement units, and various sensors [7]. Using such data, forecasting algorithms are essential for prediction activities. Artificial Neural Networks have the advantage of extracting and modeling unseen relationships and features. This ability gives neural networks more robust choices if used the right way [8]. The K-Nearest Neighbors algorithm is an alternative recommended for time series classification. However, the algorithm's performance requires a minimum quantity of labeled data [9]. The decrease of energy costs may be more effective with the assistance of modeling strategies that combine different forecasting algorithms, including Artificial Neural Networks and Random Forest [10]. In fact, the uncertainties of load demand in energy management present obstacles to achieving accurate forecasts. Reinforcement learning is recommended to overcome complex nonlinear issues with a decision-making ability that optimizes the current solution to be more effective [11,12].
Reinforcement learning has a strong learning ability and high adaptability, together with control and decision-making abilities. These are essential to ensure optimal outcomes in different scenarios, including robotics and distributed control [13]. Reinforcement learning is used for different applications according to the problem diversity, including performance improvement. It is also stated that a few applications use reinforcement learning to improve prediction accuracy with different deep learning techniques, which is the case in this paper. Additionally, the learning method is also discussed, with Q-learning being a researched option [14]. Given the results of the above-mentioned literature, the methodology proposed in the present paper aims, in the first step, to identify different contexts using decision trees. Then, reinforcement learning is applied in each context to identify the most accurate forecasting model. It innovates by overcoming the approach of selecting a single forecasting model for all the operational situations of a single consumer or building. For illustration purposes, models based on the ANN and KNN forecasting algorithms have been used. The motivation consists in improving the forecasts obtained in recent research published by the authors of this paper [2]. Therefore, the authors reuse several forecasting aspects from [2], including the forecast horizon and forecast strategies. Innovative topics featuring the formation of new contexts with decision tree training and the reinforcement learning evaluation of the most effective algorithm in different contexts are expected to improve these forecasts. Moreover, the decision tree and reinforcement learning innovations are inspired by recent research published by the authors of this paper, respectively in [15] and [16]. After this introduction, Section 2 explains the proposed contextual approach, Section 3 presents the details of the case studies, and Section 4 presents the obtained results. Finally, Section 5 presents the conclusions. II. PROPOSED CONTEXTUAL APPROACH This section explains the different phases of the proposed contextual approach. These include obtaining energy consumption forecasts, decision rule-based learning, definition of contexts, the learning process, and the selection of the best forecasting algorithm for the target context. The main goal is to evaluate the best forecasting model for each of the different contexts. After obtaining energy consumption forecasts with different algorithms, a decision tree with rule-based learning defines different contexts. Later, a learning process evaluates the best algorithm for the different contexts. The first step consists of obtaining energy consumption forecasts for five-minute periods according to two algorithms: Artificial Neural Networks and K-Nearest Neighbors. Afterwards, rule-based decision learning trains a decision tree with the forecasting data of both algorithms and additional factors from the current and previous periods. These factors comprise time features, including the weekday and the current period, and quantitative data obtained from the previous period, including the consumption and data from two sensor devices. These last two factors, monitored by sensor devices, consist of CO2 and a light variable with the value one or zero, corresponding respectively to light in the building or no activity at all. These two parameters were selected following the validation made in [2].
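This context-identification step can be sketched as follows (a minimal illustration using scikit-learn; the file name, column names and target encoding are hypothetical, since the paper does not publish its implementation):

import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per five-minute period; feature columns follow Table I's structure.
data = pd.read_csv("consumption_week.csv")          # hypothetical data file
X = data[["weekday", "period", "prev_consumption", "prev_light", "prev_co2"]]
y = data["best_algorithm"]                          # "KNN" or "ANN": whichever erred less

tree = DecisionTreeClassifier(max_depth=6)          # depth 6 gave the best accuracy here
tree.fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # the rule splits define the contexts

The exported rule splits correspond to the contexts that the learning process evaluates afterwards.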
The learning process evaluates the most suitable forecasting algorithm in the different contexts. A set of agents performs this evaluation in an interactive environment through trial and error, using feedback from their actions, observations, and rewards. The observations correspond to the contexts defined previously in the rule-based decision learning. The agent's action is triggered every five minutes, and it corresponds to the selection of a forecasting algorithm, either K-Nearest Neighbors or Artificial Neural Networks. The reward is calculated five minutes after the agent's algorithm selection, representing how good the forecasting algorithm selection was for the current context. On the one hand, rewards of 0 correspond to scenarios where the selected algorithm is the one with the higher forecasting error. On the other hand, rewards of 1 correspond to scenarios where the selected forecasting algorithm has the lower forecasting error. Each obtained reward updates an average of rewards, measuring the reward performance over all five-minute periods. In other words, the average of rewards measures the algorithm selection performance with lower forecasting error expectations. In each context evaluation, the learning methods and the exploration and exploitation rates are updated. The learning methods may correspond to greedy or upper confidence bound. The exploration rate focuses on probing the unexplored territory of each forecasting algorithm selection, while the exploitation rate focuses on exploiting the acquired knowledge of a particular forecasting algorithm selection. After evaluating the best forecasting algorithm for all five-minute periods, the multi-agent system is prepared to select the best forecasting algorithm for the target context. Then, according to the upper confidence bound and greedy learning methods, the action is calculated every five minutes according to (1) and (2), where:
• N_t(a) - number of times the action has been selected before time t
• Q(t) - current estimation
• c - degree of exploration
• a - maximizing action
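A minimal sketch of the per-context selection and reward update, using the standard epsilon-greedy and upper-confidence-bound forms that equations (1) and (2) appear to denote (variable and function names are ours, not the paper's):

import math, random

algorithms = ["KNN", "ANN"]
Q = {a: 0.0 for a in algorithms}   # current estimation: running average reward per algorithm
N = {a: 0 for a in algorithms}     # times each action has been selected

def select(t, method="ucb", c=2.0, epsilon=0.1):
    # t is the index of the current five-minute period.
    if method == "ucb":
        # UCB-style rule: estimate plus an exploration bonus for rarely tried arms.
        return max(algorithms,
                   key=lambda a: Q[a] + c * math.sqrt(math.log(t + 1) / (N[a] + 1e-9)))
    # Greedy rule with exploration probability epsilon.
    if random.random() < epsilon:
        return random.choice(algorithms)
    return max(algorithms, key=lambda a: Q[a])

def update(a, reward):
    # reward = 1 if the chosen algorithm had the lower forecasting error, else 0.
    N[a] += 1
    Q[a] += (reward - Q[a]) / N[a]  # incremental average of rewards

Each context would keep its own Q and N tables, so the preferred algorithm can differ between contexts.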
III. CASE STUDIES In order to illustrate the use of the proposed methodology, the implemented decision tree methodology studies a sample of data obtained from electric devices measuring different units and magnitudes. It has been implemented, in this paper, for two case studies: a building case study and an industrial case study. The building case study is contextualized for a whole week, from 18 to 24 November 2019, in five-minute periods. Only one week with five-minute contexts from 18 to 24 November 2019 is considered, to match the data size studied in recent publications by the authors of this paper [15]. Table I presents the decision tree input structure with the weekday, the allocated period, the consumption, the light, and the CO2. This table also shows the decision tree output structure with the forecasting algorithm application. Moreover, the input variables with nonlinear behaviors are studied according to their profile during 18 to 24 November 2019 in Fig. 2. Therefore, temporal variables are excluded from the analysis in Fig. 2, keeping however the consumption, light and CO2 profiles. The light and CO2 sensors were added to the decision tree structure due to previous research published by the authors of this paper concluding that these two factors have the most influence on the consumption [17]. The case study researches the different factors according to a weekly profile and five-minute contexts. Five similar patterns are identified, representing the activity data from each day of the week, more concretely from Monday to Friday. This is followed by two similar patterns representing the low activity of the weekend. The consumption shows usual variations from 500 to 1500 W, as seen in the patterns from Monday to Thursday. The consumption variation on Friday is shown to be higher, reaching ranges above 2000 W. During the weekend, the consumption behavior is described by variations near 600 W. The light variable describes variations between 0 and 1, representing respectively the absence or presence of light as measured by the light intensity devices. The CO2 devices present variations between 0 and 20%. The two sensors present null values during the whole weekend. The reinforcement learning methodology studies the evaluation of the most suitable forecasting algorithm in five-minute periods from 18 to 24 November 2019. These five-minute decisions correspond to the forecasting algorithm selection, K-Nearest Neighbors or Artificial Neural Networks. One week with five-minute contexts is considered, to compare with other publications by the authors of this paper [16]. Regarding the industrial case study, which has been included for validation purposes, detailed information is not provided due to space limitations. Further details can be obtained in [18]. IV. RESULTS This section presents the results regarding the use of the proposed methodology. These are obtained with the greedy learning method and according to four selected contexts (SC1, SC2, SC3, SC4). A. BUILDING The decision tree approach has been applied to the data in section III, testing different tree depths. Three data samples show different day features, classified as morning, afternoon, and night, labeled respectively a), b), and c), as seen in Fig. 3. These three samples correspond to previously known research published by the authors of this paper [16] and are detailed in this case study to support known forecasts in unique and different parts of the day. These forecasts are later used during the reinforcement learning evaluation of the most effective algorithm in the different contexts. The K-Nearest Neighbors and Artificial Neural Networks models present very accurate predictions, very near the real consumption for almost all five-minute periods. The morning scenario presents consumption variations between 500 and 1500 W. The afternoon scenario presents variations between 500 and 1500 W and between 500 and 2500 W. Finally, the night scenario presents many variations between 500 and 600 W and sequences of five minutes reaching 1000 W. The accuracy of the decision tree resulting from the depth parameterization is presented in Table II.
Table II shows very accurate results for the different depth parameterization values. It is noted that depth parameterizations within the range between 2 and 4 are not large enough to result in accuracies greater than 66.96%. However, it is possible to obtain higher accuracies by increasing the decision tree depth to values higher than 4. As seen in Table II, increasing the depth parameterization value to 5 and 6 yields more accurate results, respectively 67.86% and 71.43%. Therefore, while no real improvements are seen for depths between 2 and 4, depth parameterization changes to 5 and 6 show accuracy improvements of 0.90% and 4.47%, respectively. The reason for these improvements is a higher complexity in the elaboration of the decision rules. Therefore, the higher the decision tree depth, the higher the complexity of the rules, possibly resulting in more accurate results. The accuracy results obtained with the decision tree are in line with similar research provided by the authors of this paper [15]. A simple rule elaboration illustrates the decision tree for a depth assigned the value two, as presented in Fig. 4. This scenario is a simple example to summarize the simpler logic present in the decision tree rules. As identified previously in Table II, the scenario with the decision tree depth assigned to 6 leads to more accurate results. Therefore, the rule splits of this scenario are analyzed in List 1. The decision tree presented in Fig. 4 shows very simple rules for a depth assigned to 2. Two contexts are identified in the decision tree in Fig. 4, with a) weekday from Monday to Friday and consumption ranges below or equal to 568.833 W, or b) weekday from Monday to Friday and consumption ranges higher than 568.833 W. List 1 presents very complex rules for a decision tree depth assigned to 6, corresponding to a total of 46 contexts. These contexts present many differences, including the day corresponding to a weekday from Monday to Friday or a weekend, and specified ranges for the consumption (cons), the CO2 (CO2), and the allocated period (min). From these 46 contexts, several can be identified within the restrictions defined in a) and b). Moreover, the selected contexts are identified within the restrictions defined in a) and b), separating small from large occurrences, labeled respectively SC1, SC2, SC3, and SC4. The learning phase studies the average rewards and the history of actions for five-minute periods and all exploration and exploitation rates from 0.1 to 0.9 with the greedy learning method. This is presented respectively in Fig. 5 and Fig. 6 for the four contexts SC1, SC2, SC3, and SC4, labeled respectively a), b), c), and d). The average reward alternates every five minutes between 0 and 1, representing algorithm selections with higher and lower forecasting errors. All presented scenarios start with an average reward of 1 in the first five minutes, followed by at least one alternate decision that causes the average reward to converge to an interval between 0.2 and 0.8. Scenario a) has average rewards converging to between 0.7 and 0.8 for low exploration rates. However, they tend to decrease to patterns between 0.4 and 0.7 as the exploration rate increases. Scenario b) has average rewards converging to 0.6 for lower exploration rates and 0.5 for higher ones. Scenario c) has average rewards converging to 0.8 for low exploration rates. However, they tend to decrease to patterns between 0.3 and 0.8 as the exploration rate increases.
Scenario d) has average rewards converging to 0.5. As noted in scenarios b) and d), the increase of the exploration rate makes the different exploitation rates converge toward a more similar pattern. Thus, the exploitation rates assigned the values 0.1, 0.4, and 0.9 tend to converge to higher average rewards in some scenarios and for the different exploration rates. The historical actions associated with context SC1, for an exploitation rate of 0.9, are illustrated in Fig. 6. The history of actions is illustrated for context SC2 for the three exploitation rates identified previously as frequent cases resulting in higher average rewards. These rates are 0.9, 0.1, and 0.4, labeled respectively a), b), and c) in Fig. 7. The historical actions for context SC1 illustrated in Fig. 6 show long sequences of five minutes deciding to use KNN repeatedly. After nearly 75 five-minute sequences, the history of actions finds it essential to alternate between KNN and ANN, this being more frequent between the 190th and 230th and between the 260th and 297th five-minute sequences. The historical actions for context SC2 show two possible behaviors for long sequences of five minutes: either using KNN repeatedly, as seen between the 408th and 445th five-minute sequences, or alternating very frequently between KNN and ANN, as seen between the 445th and 482nd five-minute sequences. The history of actions of context SC1 presented in Fig. 6, and of SC2 presented in Fig. 7, labeled a), b) and c), suggests a long-term learning approach more capable of alternating between KNN and ANN according to the five-minute context, rather than repeatedly selecting KNN. Lower exploitation rates tend to repeatedly evaluate more five-minute sequences as KNN, as evidenced in Fig. 7 when comparing scenario b) (low exploitation rate) with scenarios a) and c) (higher exploitation rates). This is understandable, as low exploitation rates take more five-minute sequences to acquire knowledge about KNN. Therefore, scenario a) has the advantage of acquiring more knowledge about a particular forecasting algorithm in fewer five-minute periods. The historical actions associated with context SC3, for an exploitation rate of 0.9, are illustrated in Fig. 8. The historical actions are illustrated for context SC4 for the three exploitation rates identified as frequent cases resulting in higher average rewards. These rates are 0.9, 0.1, and 0.4, labeled respectively a), b), and c) in Fig. 9. The historical actions for context SC3 illustrated in Fig. 8 show long sequences of five minutes deciding to use KNN repeatedly. After nearly 75 five-minute sequences, the history of actions finds it essential to alternate between KNN and ANN. This behavior is presented between intervals of five-minute sequences, including between 90 and 110, 120 and 150, 152 and 190, 192 and 294, and finally 197 and 334. The history of actions for context SC4 shows two usual behaviors for long sequences of five minutes: either using KNN repeatedly, as seen between the 260th and 297th five-minute sequences, or alternating very frequently between KNN and ANN, as seen between the 297th and 334th five-minute sequences.
Although these two behaviors are usual, the scenario represented in b), with a low exploitation rate of 0.1, shows that the history of actions is also capable of repeatedly evaluating small sequences of five-minute periods as ANN, as seen between the 112th and 149th five-minute sequences. This is understandable, as low exploitation rates need more time to acquire knowledge of ANN in five-minute contexts before having knowledge of both forecasting algorithms and reaching more pragmatic decisions. The history of actions of context SC3 presented in Fig. 8, and of SC4 presented in Fig. 9, labeled a), b) and c), suggests a long-term learning approach more capable of alternating between KNN and ANN according to the five-minute context, rather than repeatedly selecting KNN or ANN. Lower exploitation rates tend to repeatedly evaluate more five-minute sequences as KNN or ANN, as evidenced in Fig. 9 when comparing scenario b) (low exploitation rate) with scenarios a) and c) (higher exploitation rates). This is understandable, as low exploitation rates take more five-minute sequences to acquire knowledge about KNN or ANN. Therefore, scenario a) has the advantage of acquiring more knowledge about a particular forecasting algorithm in fewer five-minute periods. It is possible to examine the learning phase results for the whole week from 18 to 24 November 2019 with no context distinction. This examination presents the average rewards for five-minute periods and all exploration and exploitation rates from 0.1 to 0.9, as illustrated in Fig. 10. The results obtained in Fig. 10 present overall average rewards near 0.6, highlighting reasonably good average rewards. It is possible to obtain higher average rewards with context distinction, for context SC3 near 0.8, as illustrated in Fig. 5, scenario c). B. INDUSTRY An identical simulation, contextualized in industrial energy consumption, compares the decision tree accuracies and the average rewards with the building simulation previously studied. The accuracy of the decision tree is obtained for different tree depths according to the industrial use case, as visualized in Table III. The decision tree accuracies visualized in Table III show accuracies between 60.42% and 61.11% using decision tree depths between two and five. The decision tree loses accuracy when the depth increases from five to six, decreasing the accuracy from 61.11% to 56.25%. This is logical, as the use of time features and industrial energy consumption alone has its limitations when elaborating decision rules. Table III also shows a decision tree accuracy decrease from 61.11% to 60.42% when changing the depth from three to four. However, a decision tree depth increase from four to five improves the accuracy from 60.42% to 61.11%. The average rewards evaluation of the most effective forecasting algorithm application in different five-minute contexts is also studied for the industrial context. This analysis considers all exploration and exploitation rates from 0.1 to 0.9 in the learning phase parameterization with the greedy method, as illustrated in Fig. 11. The average rewards in the industrial context show an initial average reward of one for all exploration rates, due to the selection of the most effective forecasting algorithm in the first five minutes.
This is followed by at least one forecasting algorithm selection with lower accuracy, leading the average reward to decrease from 1 to a value between 0.4 and 0.6. The average reward converges to 0.6 for exploration rates between 0.1 and 0.2, and to 0.5 for exploration rates between 0.3 and 0.9, until the last five-minute period evaluation. The history of actions studies the forecasting algorithm application in different five-minute periods. The K-Nearest Neighbors and Artificial Neural Networks applications are alternated in different five-minute contexts for the industrial application with an exploitation rate of 0.4, as illustrated in Fig. 12. Some examples are observed, including between the 1st and 37th five-minute sequences and between the 91st and 145th five-minute sequences. V. CONCLUSIONS This paper identifies suitable contexts through decision tree rules and analyzes the best forecasting model in different periods. The results obtained for the different decision tree depth values suggest that the decision tree is suitable to identify contexts. It is also noted that increasing the depth value enough makes the decision rules complex enough to result in more accurate outcomes. The results obtained in the learning phase for the greedy method show average rewards converging to reasonably good values. It is noted that increasing the exploration rate may decrease the final average reward in some contexts. The historical actions present two frequent patterns in long sequences of five minutes: selecting KNN or ANN repeatedly, or alternating between KNN and ANN. It is also noted that it is advantageous to use large exploitation rates to acquire more knowledge of a particular forecasting algorithm selection in fewer five-minute periods. Moreover, this motivates alternating between KNN and ANN in different five-minute contexts faster than with low exploitation rates. An accurate analysis of the learning phase results for the whole period reveals that the use of contexts is advantageous for obtaining higher average rewards. The industrial use case also reaches good decision tree accuracies; however, these are limited to a maximum of 61.11%, while the building application contextualized in this paper reaches accuracies with a maximum of 71.43%. It is inferred that the less precise decision tree accuracy in the industrial context is due to the lack of sensor data in the decision rules. Moreover, this problem may also explain why the increase of the decision tree depth at some point decreases the accuracy. It is inferred that the rules built in the decision tree training are able to reach stronger logic when including sensor data. The average-of-rewards analysis in the industrial use case has also obtained reasonably good forecasting algorithm applications in different contexts. The history of actions in the industrial use case has shown two similar behaviors, leading to either alternating between K-Nearest Neighbors and Artificial Neural Networks applications or repeatedly evaluating with K-Nearest Neighbors.
5,912.2
2022-01-01T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
The Green Revolution in the World's Religions: Indonesian Examples in International Comparison Similar to progressive political movements, the programs of many religious and spiritual groups today are converging around a shared commitment to address the impending global ecological crisis. The paper explores this convergence by looking at the impact of environmentalist thought on religious discourses in modern Indonesia, the author's primary research area, and comparing the findings to similar trends elsewhere. The research shows that the environmental movement is causing a transformation in how people understand the character and practical relevance of religion and spirituality today, in Indonesia and beyond. For some eco-spiritual groups, a heightened environmental awareness has become the central tenet of their monistic religious cosmology. The more significant phenomenon, however, is a socially much broader shift toward more science-friendly and contemporary religious cosmologies within the mainstream of major world religions. Islam and Christianity now officially accept that other forms of life have a right to exist and that humanity has a custodial obligation to protect nature. This new outlook rectifies the previous tendency within dualist religions to view nature as vastly inferior and servile to human interests. It simultaneously is a rejection of materialist-scientific cosmologies widely prevalent in late modern consumer societies, which deny any notion of the sacred. This trend in the world's religions toward a re-evaluation of the cosmological status of humanity in relation to nature and the sacred, I argue, will enhance the prospects of the global environmental movement's campaign for environmental sustainability. Introduction The current epoch in the history of life on Earth has come to be known as the Anthropocene, the age when humans became a truly world-transforming species. The world-transforming powers we have acquired through the development of technology make it imperative for us to develop also our capacity for self-reflection. We can easily fall victim to habitual activity patterns that are destructive of life unless we raise awareness, and already, there is overwhelming evidence that our activity patterns are triggering a mass extinction event and leaving us on the brink of a multidimensional and global ecological crisis [1]. This crisis highlights the interdependent or "ecological" character of our existence and, hence, the vital need for a radical transformation in how we understand our place as human beings within the natural world. A more aware, caring and responsible attitude toward nature would now seem mandatory to safeguard our own survival in the near to medium term, and more so the welfare of future generations. What role will religions play in this process of transformation?
Human self-understanding is shaped by cultural assumptions. The most fundamental assumptions human beings hold are enshrined in cosmologies, which can be religious, or secular, or a combination of both. Cosmologies are herein defined as explicit or implicit models for understanding our place as human beings within the world and, hence, for defining our sense of the purpose in life and our core values. If we wish to transform our self-understanding toward greater ecological awareness, we thus must begin with a critical examination of the cosmological frameworks of our contemporary societies, before rushing to change derivative value and status systems. This is not an easy process, because major cosmological corrections shake up our most fundamental and cherished ideas about the world, as well as upsetting derivative discourses and patterns of socio-political privilege. Cultural change resistance thus arises with some regularity in times of crisis, when the cosmological foundations of the prevailing culture typically become subject to critical scrutiny [2,3]. Even though local responses to the environmental crisis do vary, depending on the unique cosmological starting position of each society, change resistance is a common obstacle. There are two main causes for change resistance (cosmological and socio-political) that together explain why, despite dire warnings by natural scientists about the effects of climate change and other environmental threats, the response to this challenge has been slow and hesitant. Climate scientists have recognized this and have begun to call upon the social sciences for assistance, so as to better understand and address this systemic change resistance [4]. While there have been many efforts to show that socio-political resistance flows from a desire to protect entrenched privilege, in the fossil fuel industry for example [5], there has been less public attention directed at the role of cosmological change resistance, on which this article is focused. Fear of innovation in cosmological principles can make it very difficult to even begin to think in the novel ways mandated by the advent of the global ecological crisis. Given that people dwell in interpretive worlds, it can seem to them that 'the world' shall end if they allow any cosmological adjustment to take place. Note that such fear-based resistance is not just relevant for religious cosmologies, but also applies to prevailing secular cosmologies, such as consumer society hedonism. The specific focus in this article, however, will be on the adjustments now taking place in religious cosmologies, using the rise of eco-Islam in Indonesia as the primary example. The traditional cosmological models of Islam and Christianity are both based on a principle of transcendentalist dualism, envisaging a divine creator and a human soul as non-material (spiritual) subjects existing separately from material nature and being of superior value. This transcendental dualism has been identified in popular literature as a deep-seated obstacle to enhanced ecological awareness and responsibility within these traditions [6]. While such cosmological obstacles may have delayed the now desperately needed process of human self-reflection and transformation within these religions, the findings detailed below do show that significant cosmological change is now occurring.
A "green revolution" has begun to unfold, transforming the cosmological assumptions of religions and spiritualities worldwide [7].This "greening" process has been studied by social scientists for some time now, for example by Mary Evelyn Tucker and her husband John Grim, who organised a now legendary series of ten conferences on "World Religions and Ecology" at the Centre for the Study of World Religions at Harvard (1995)(1996)(1997)(1998) and later established the Forum on Religion and Ecology at Yale University [8] with the aim of studying this historic encounter between religion and ecology.In this paper, I provide an update on this process of change, with a special focus on Indonesia.While it may long have been a "quiet revolution" ( [9], title), restricted to the progressive margins of the religious spectrum, a cosmological reorientation is now gathering pace and transforming religions not only in the West, but also in Asia and elsewhere.Similar to contemporary political movements [10], progressive religious and spiritual groups everywhere appear to be converging around a shared commitment to meet the challenges arising from the global ecological crisis.The public today all but expects religious teachers and organizations to integrate an ethical commitment to sustainability into their theology and practice and to work collaboratively with other faith traditions.Some progressive eco-spiritual groups have fully embraced the new environmental consciousness and made (deep) ecology the central principle of their religious cosmology, whereby nature is regarded as the embodiment of the sacred whole and humanity as holding at best a position of primes inter pares compared to other species [11,12].These new eco-religions argue for a re-enchantment of nature and the material world, for example through new interpretations of animism, and we have seen the effects thereof also within social science [13][14][15].The notion that the natural world is alive and sentient answers the need for a new cosmology that holds nature to be sacrosanct and, hence, inviolable.This progressive fringe of the contemporary spectrum of religions, though it is relatively small if we only count active followers of eco-spiritual groups, is the vanguard of a wider movement, has a large number of sympathizers among nominal followers of mainstream religions, and thus exerts some pressure on the leaders of the latter. The broader and socially more significant phenomenon, however, is a moderate shift toward a more eco-friendly religious cosmology in the major world religions, including Islam and Christianity.This shift reflects external pressures, but it is also the result of a genuine, ecology-inspired self-critique.Mainstream religions often combine this with a critique of the wholesale dismissal of the sacred that is reflected in the attitudes of late modern, secular consumer societies, which they like to portray as the root cause of callousness toward the environment.In short, this reform is not just a mea culpa but also an active assertion of the renewed relevance and truth of religion in a context of ecological crisis. 
The advent of a cosmological "green shift" described in this article provides some reason for hope that a fundamental reorientation is at last taking place, a revolt against rampant, nature-denigrating transcendentalism, while at the same time rejecting the cynicism and underlying despair of late modern consumer society, whose ethos of nihilist materialism has dismissed the very notion of Spirit or Soul and encouraged a view of life as devoid of intrinsic value and purpose. The emerging consensus within the mainstream Abrahamic religions, I will argue, is based on the adoption of a much more positive attitude toward nature. Nature is now upheld as a part of divine revelation and a living subject deserving of human custodial care, while rampant anthropocentrism and destructiveness toward nature are described as sinful and suicidal. Nature is still seen as "God's creation" and distinct from humanity in that it lacks an immortal soul (anima), but like humans, it is now attributed with intrinsic value and, in the case of other life forms, with sentience and an inherent dignity. The pathway to this new position has been different in Indonesian Islam compared to Christianity in the West, with considerably less resistance to the reception of ecological thought. I will begin by exploring an ecological shift in contemporary Islam as it can be observed in Indonesia, where I have conducted continuous ethnographic research on culture and religion over the last 25 years. I will then briefly compare Indonesian trends with some similar developments observable in other Muslim countries, among Christian denominations in the West and in the global interfaith movement. The results of this comparison suggest that the environmental crisis is the driving force behind a fundamental shift in how people worldwide understand their religions and spiritualities today. This does not mean that religions are simply passive recipients of an external influence. For many faith communities, their active involvement in the project of facilitating a new human self-awareness, through a green shift in religious and spiritual thinking, offers an opportunity to demonstrate the ongoing relevance of the concept of sacredness to contemporary society. Ecology and Contemporary Religion in Indonesia The re-evaluation of the cosmological status of nature within religions and spiritualities, I argue, is an important support and perhaps even an indispensable prerequisite for the success of the global environmental movement. There is a need to track such developments, and much has indeed been written about the prospects for such a transformation in the Christian majority societies of Europe and North America. The global ecological crisis and an associated demand for "greener" religions, however, is also very much felt in developing countries, wherein other world religions and local traditions may predominate, providing a rather different cosmological starting position. A pertinent example is Indonesia, where the author has been conducting long-term research on religious change, specifically on the islands of Bali, Java and Borneo. The following account seeks to illustrate how Indonesia's green shift is taking shape and to analyse comparatively to what extent its pathway differs from our experience in the West.
The overwhelming majority of Indonesians profess to be of the Islamic faith, and the "greening" of Islam thus will be the most important consideration here (there is some irony in this phrasing because Islam is traditionally associated with the colour green). In some regions of Indonesia, however, Hindus, Protestants or Catholics form the majority of the local population. More importantly, their formal confession to Islam or one of the other major world religions, which is mandatory for all citizens under Indonesian law, does not prevent individuals and ethnic groups throughout Indonesia from identifying strongly, and sometimes more strongly, with their diverse local, indigenous religious traditions. The content of these local religions is officially classified within state discourses as merely "cultural" (budaya), "customary" (adat) or a matter of private "belief" (kepercayaan). While this reflects the prevalent state policy on religion, which favours monotheism and the big traditions, countless ethnographic reports have shown that indigenous religion remains very important in most parts of Indonesia and is indeed experiencing a revival [16][17][18]. The history of smaller, local religious traditions is complex. In the central part of Indonesia, these indigenous traditions merged with Indian religions over a period of more than a millennium, from the 5th century onward. In the outer islands, indigenous local traditions remained largely untouched by foreign influence until the arrival of Islam and Christianity in the archipelago. These traditions were typical of the religions of Austronesian-speaking populations throughout Southeast Asia and the Pacific. The main cosmological features of Austronesian religions include a form of animism (i.e., the belief that humans and other elements of nature all have a soul, not just humans) and ancestor veneration, whereby ancestors are associated with the sacred source of life. Similar to Hinduism, the indigenous religious traditions of Austronesian-speaking societies are internally very diverse. They are not monotheistic, and their cosmologies do not postulate a sharp dualism between a spiritual creator and material creation. They thus view nature and place-specific ancestors as the sacred source of all life and imbued with profound spiritual significance. Local traditions generally involve veneration of specific ancestors and local deities, some more abstract deities that are personified forces of nature or (on the central islands) personal deities that have been adopted from the Hindu pantheon. Until today, it is Indonesia's indigenous religious traditions that define the spiritual geography of the countryside, which is dotted with countless sacred (keramat) sites that are linked together through mythical narratives of origin [19,20].
What this means, for our present purpose, is that any cosmological shift toward a "greening" of Islam or Christianity in Indonesia can also be understood as a return, in part, to age-old indigenous or Hindu-Buddhist religious attitudes toward a sacred natural world. Such nature-friendly cosmologies were diminished but never wholly abandoned following the rise of Islam as the paramount state religion in Java, some five hundred years ago, or with the later spread of Catholicism and Protestantism in other parts of Indonesia in the wake of Portuguese and Dutch colonialism. Such a "revivalist interpretation" of Indonesian eco-spirituality will not be entirely unfamiliar to Western readers. The same interpretation can also be applied, to a lesser extent, to Europe, where a nature-embracing "neo-paganism" based on a revival of indigenous European religions is receiving renewed interest and presents itself very much as a form of eco-spirituality [10]. In Indonesia, moreover, the cosmological influence of indigenous religions is much greater than it is in Europe due to the greater continuity of these traditions in Indonesia. By the same token, there is also more political tension between indigenous and introduced religions in Indonesia. This situation rather complicates the interpretation of contemporary eco-religious trends in Indonesia. On the one hand, many supporters of indigenous religions (with or without Hindu influence) have specifically told me that they see their own traditions as fundamentally more eco-friendly than Islam and Christianity, which they say lack respect for the sacredness of nature. These arguments were raised to show that indigenous religion has continued value and relevance and is not backward (as their opponents would have it), but "more progressive" than the Abrahamic religions. Such claims are not often publicised, however, and hence, they have not generated enough political heat to invite a counter reaction from Islam. Indeed, I have been unable to find any evidence of Indonesian Muslim scholars or clerics voicing any fear that eco-spirituality could serve as a cover for the reassertion of indigenous or Hindu religious beliefs. Drawing on the works of international Muslim scholars, like Seyyed Hossein Nasr, Mawil Izzie Dien, Ziauddin Sardar, S. Parvez Manzoor, Fazlun Khalid and others [21][22][23][24][25][26], Indonesian Muslim scholars, rather, are very confident and proud to conclude that a certain variant of eco-spirituality can legitimately and proudly be claimed as an integral part of Islam with a strong scriptural pedigree.
In interpreting the rise of ecological thought in Indonesian Islam, another important question is: what is the source of innovation and, hence, the causal direction of this social process? Public debates led by Muslim theologians and clerics certainly have an impact in Indonesia, as elsewhere, but popular trends also have their own dynamic and can exert pressure on clerics and scholars. Within Islam, this bottom-up movement of ideas is particularly important, because Muslim clerics do not form a single, unified organisation with a supreme leader, certainly not in Indonesia, and hence, no person or organisation has supreme authority in the interpretation of scriptures with regard to contemporary issues. While some proclamations (fatwa) of some clerics do exert significant influence, others do not, depending significantly on the persuasiveness of their argument and not just on their social position [27]. The Muslim public is thus able to be selective in what it receives from religious experts and is by no means a passive recipient of either neo-conservative or progressive religious ideas. Given the fact that popular Muslim clerics regularly pick up on contemporary issues in their sermons, it is probably fair to assume that ecological thought has become one of the trendiest topics in these sermons as a consequence of a shift in public awareness. Rising popular ecological awareness is the driving force. What surprised me most is that my research did not uncover much evidence of resistance to ecological thought from clerics or ordinary Muslims. Generally, the prospect of an impending ecological crisis is well accepted, and action to avert this crisis is now being depicted not only as necessary but also as a religious duty. One explanation for this open reception is that environmentalists have not problematized Islam in Indonesia in the same way that mainstream Christianity has been challenged in the West. The kind of "deep ecology" that is familiar to many people in the West, and was first advocated by Arne Naess [28], is also not yet well known in Indonesia. Deep ecology has issued a strong call for fundamental cosmological change and has directly criticised the objectification of "soul-less" nature in Christian cosmologies [6]. Official acceptance of such a deeper cosmological shift is hardly possible within Islam or Christianity, and I certainly have seen no evidence for it in Indonesia. Such a deeper ecological shift may nonetheless be attractive to many individual Indonesian Muslims, especially those who are heavily influenced by resonant indigenous traditions. Most Indonesian Muslims tend to accept ecology as an uncontroversial and fairly self-evident scientific idea, indicating a condition of human interdependence within nature. The urgent need for political action is also widely seen as self-evident, given that most Indonesians have some knowledge of the devastation of tropical forest environments on Sumatra and Borneo at the hands of the mining and palm oil industries [29] and of the extreme environmental pollution issues now plaguing the capital Jakarta and other urban areas.
What controversy there is around ecological questions arises from industry resistance to calls for better environmental protection and sustainable resource management and from the regular failure of corrupt state officials to implement existing policy and legislation on nature protection. Religious and environmentalist groups tend to be on the same side of these conflicts and often collaborate. Islamic leaders (ulama) in Kalimantan, for example, were criticised by extractive industries when they issued a fatwa declaring the environmental destruction of the island's forest as haram (forbidden by Islam) [30], while environmentalists applauded and defended them.

For many of the young Indonesian Muslims I have interviewed, to promote or actively engage with environmental groups is a very safe way of projecting a self-image of being a progressive, contemporary and open-minded person. This explains, for example, why a recent article and blog, wherein leaders of WALHI (Wahana Lingkungan Hidup Indonesia, Indonesia's equivalent of Friends of the Earth) loudly called upon Muslim individuals and organisations to help fight environmental destruction "as a matter of religious duty" ([31], p. 1), did not receive a single negative comment, notwithstanding the fact that Indonesia has a sizeable contingent of religious conservatives. Conversely, while some ecological writers do criticise conservatism in Indonesian Islam, suggesting that conservatism deprives Islam of the opportunity to contribute to a solution to the ecological crisis and similar issues, this criticism is directed at a lack of interest in activism and not at Islamic cosmology. Syafur, for example, argues that: "There has to be a serious and continuous effort to understand [the] fundamental and functional meaning of formal rituals of Islam, in such a way that Islam supports not only Theo-centric but also socio-economic concerns. Once Islam is shackled by its routines and finds no alternative interpretation of its rituals, there hardly is hope of important contributions made by Islamic scholars to cope with ecological and global crises" ([32], p. 44).

Islamic scholars generally agree with this point of view and thus are eager to demonstrate that ecology is intrinsic to Islam, e.g., [33]. Some have gone so far as to describe the Prophet Muhammad as an environmentalist avant la lettre [34]. While a number of passages in the Qur'an, similar to the Bible, imply that mankind is the pinnacle of creation (ashraf al-makhlouqat) and is given dominion over animals and nature, many other passages do support this claim to ecological credentials. These passages evoke the idea of human custodianship (khalifah) and responsibility for maintaining a balance (mizan) between the utilisation and the protection of nature (protection, for example, in zones designated as harim). Such nature-friendly passages in the scriptures are frequently cited by Indonesian eco-Muslims today. For example, the popular Indonesian blog, "Magazine on Islam and Environment" (Makalah Islam dan Lingkungan), has posted an extensive collection of scriptural quotes on ecology [35]. One favourite scriptural passage, "even when doomsday comes, if someone has a palm shoot in his hand, he should plant it" ([36], p. 1), has become so popular that it now serves as an inspirational quote frequently seen on t-shirts [37].
There are also concerted efforts under way to incorporate ecology systematically into Islamic education. A number of Islamic ecological boarding schools (eco-pesantren) have been established in West Java for this purpose, such as Pesantren Al-Ittifaq in Ciwidey [38] and Pesantren Darul Ulum Lido near Bogor [39]. A recent popular post on Vimeo under the heading "Green Islam in Indonesia," meanwhile, provides a collection of thirty-eight documentary videos on the topic of eco-Islamic education, including numerous interviews with Muslim teachers, and also lists numerous eco-education projects in Islamic schools as inspirational examples [40]. This broadly-based and accelerating trend toward a greening of Islam in Indonesia is not an isolated phenomenon. Recent international events illustrate the wider significance of ecological issues in contemporary Islam, notably the global summit on 'Islam and the Environment' in Dubai in 2013. The organisers of this historic summit state that: "The environment lies at the core of the Islamic faith, and the underlying principle that forms the foundation of the Prophet Mohammed's […] holistic environmental vision is the belief in the interdependency between all natural elements, and the premise that if humans abuse or exhaust one element, the natural world as a whole will suffer" [41].

In summary, Indonesian Islam, and perhaps Islam more generally, is showing a remarkable ability and eagerness to accommodate and indeed assimilate ecological thought. In part this may be explainable in theological terms, insofar as Islam has long viewed nature as a form of revelation in its own right and holds humans responsible for its protection. There are also some major cosmological limitations in Islam, however, in that the creator is seen as a transcendental entity and separate from the material world, as he is in Christian cosmology.

The reception of ecological thought by Muslims in countries like Indonesia may also have benefitted from the progress already achieved by ecological campaigners in modifying the thinking of faith traditions prevalent in the developed world, particularly Christianity. This process has set a precedent and provided an incentive for Islam to move more quickly toward accepting the findings of modern ecological science and their spiritual implications. Be that as it may, the thousands of environmental actions organised by Muslim organisations in Indonesia today certainly are a testimony to the enthusiastic reception of ecological thinking in this country's largest faith community.

Evidence of a Wider, Global Trend toward the Greening of Religions

The brief Indonesian case study presented above now needs to be considered further within the context of an international comparison. For the purpose of this paper, some brief remarks on recent trends and events elsewhere may suffice to show how the Indonesian case fits into a larger picture and also to highlight in what ways its pathway differs.

In Europe and the United States, Christian groups have been working actively toward an integration of their faith with ecological principles at least from the 1970s onward (see, for example, [42]). This project is now no longer confined to highly progressive and relatively marginal eco-enthusiast groups, but is being mainstreamed in a comprehensive fashion. This mainstreaming commenced sooner and has advanced further than in Indonesia, but the gap is small and closing rapidly now.
One area in which a gap is still evident in Indonesia is the relative lack of interfaith cooperation in relation to a shared environmentalist agenda in this country. Such a trend toward interfaith convergence of religious progressives around a shared ecological project is clearly observable in Christian-majority countries. In the U.S., for example, eco-religion is now the subject of a national interfaith alliance, the National Religious Partnership for the Environment, which includes "the U.S. Conference of Catholic Bishops along with its activist arm, the Catholic Climate Covenant, the National Council of Churches USA and its affiliate Creation Justice Ministries, the Jewish Council on Public Affairs and its affiliate the Coalition on the Environment and Jewish Life, and the Evangelical Environmental Network" ([43], p. 1). Similar trends toward interfaith convergence can be observed in many other societies and in the internationalist arena. Another interesting example at a national level is the Interfaith Centre for Sustainable Development, in Jaffa, Israel [44]. At the international level, one of the best examples of the global success and convergence of the movement for a "greener" religion is provided by the largest inter-faith gathering on the planet, the World Parliament of Religions, which is organised by the Council for a Parliament of the World's Religions. I was able to attend and study the eco-spirituality-related content of presentations given at the last parliament, which was held in my hometown, Melbourne, in 2009 [45]. I discovered that, if the program content of this parliament is any indication, the impending global environmental crisis is now the most talked about issue among religious traditions worldwide and is producing strong calls for a rethinking of religious cosmological assumptions, as well as our daily practices [46]. In Indonesia the interfaith movement does not have the same degree of public and state support, and hence such mutual encouragement toward environmental action between different faith traditions is still uncommon.

How far ecological thought has transformed Christian cosmologies is an open question and impossible to address comprehensively in a brief article such as this. Nevertheless, some pertinent examples will serve to illustrate what the current state of this transformation process is and whether or not Indonesian Islam has had a less arduous time and followed a more direct pathway to reach a similar degree of accommodation with ecology.

Perhaps the most globally significant recent event indicative of the mainstreaming of ecological principles within Christianity was the publication of an encyclical letter by Pope Francis on the issue of climate change and other environmental challenges in the (European) summer of 2015 [47]. This important statement had drawn much acclaim and some criticism in advance [48], reflecting some of the enduring tensions in the Catholic community around these issues. The encyclical endorses a more progressive official theology of nature within Catholicism, viewing it as a priceless part of God's creation, alongside humans. As was to be expected, the encyclical stops short of recognizing humans outright as creatures of nature, but it does make some overtures to evolution and does attribute sentience and dignity to other (soul-less) life forms. The letter repeatedly employs the rather egalitarian metaphor "our Sister Earth" ([47], p.
1), which is taken from St Francis of Assisi. Pope Francis also makes it very clear that Catholics have a responsibility toward the environment and that theological mistakes were made in the past: "Faith convictions can offer Christians, and some other believers as well, ample motivation to care for nature [...] Christians in their turn realize that their responsibility within creation, and their duty towards nature and the Creator, are an essential part of their faith" ([47], p. 19). "If a mistaken understanding of our own principles has at times led us to justify mistreating nature, to exercise tyranny over creation, to engage in war, injustice and acts of violence, we believers should acknowledge that by so doing we were not faithful to the treasures of wisdom which we have been called to protect and preserve. Cultural limitations in different eras often affected the perception of these ethical and spiritual treasures, yet by constantly returning to their sources, religions will be better equipped to respond to today's needs" ([47], p. 58). "This allows us to respond to the charge that Judeo-Christian thinking, on the basis of the Genesis account which grants man "dominion" over the earth (cf. Gen 1:28), has encouraged the unbridled exploitation of nature by painting him as domineering and destructive by nature. This is not a correct interpretation of the Bible as understood by the Church. Although it is true that we Christians have at times incorrectly interpreted the Scriptures, nowadays we must forcefully reject the notion that our being created in God's image and given dominion over the earth justifies absolute domination over other creatures" ([47], p. 20).

The encyclical has been received well by the scientific community for "engaging remarkably deeply with science" ([49], p. 1). This gives rise to the hope that, while the path towards a full acceptance of ecological thought has been more difficult and slow for Christianity compared to Islam, this may turn out to have been a temporary phenomenon. Looking forward, it seems both religions will fully embrace much of the truth of the ecological perspective on life and will be somewhat transformed thereby. In voicing this hope, I would like to stress that the encyclical's significance must be assessed against the background of the protracted struggle that has preceded it.
For Christian theology generally, the encompassment of ecological thought has not been an easy road, and there is still a wide spectrum of opinions when it comes to the interpretation of the cosmological implications thereof. Even though it retains some vestiges of a traditional spirit-matter dualism, the following quote from Brother Charles Cummings shows that some of the most progressive theologians have come a very long way towards a positive revaluing of nature: "The spreading ecological crisis demands that we take responsibility for the house we live in, which is this planet where we live side by side with all our neighbours, all other living and non-living creatures. From the matrix of this material cosmos human beings emerged, according to God's plan, many millions of years ago. The second account of creation in Genesis describes in its own way how humanity was formed from the reddish clay of the earth. In some sense the earth is our common mother. The commandment to honour our father and mother can be extended to include our mother earth in all her materiality. Today this maternal earth is nurturing and sustaining each of us in life; some day the same earth will receive back our lifeless bodies and incorporate them once again into the flux of elements and particles that make up the cosmos, until the final resurrection" ([50], p. 3).

More conservative theologians still reject this kind of deep ecology thinking. For example, Reverend Robert Sirico, president of the Acton Institute, claims that such reinterpretations of the canon are heretical: "In secular times such as ours, perhaps, it is not surprising that strange theories that harken back to the Gnostics and the heresies of the early Christian centuries would come into political currency, even through massive popular movements such as an ill-conceived environmentalism that teaches ideas contrary to orthodoxy. But we make a profound error in attempting to graft those ideas onto orthodox faith, and especially to attempt to do so out of a misplaced desire for strategic advantage in the philosophical battles of our time" ([51], p. 1). Such contrary voices are becoming more marginal now, but they do remind us that the cosmological shift involved in the greening of Christianity remains a difficult one, though it may be easy enough to gloss over if one wishes to do so.

Conclusions

The mainstreaming of eco-religious thinking in Indonesia is likely to catch up with and perhaps overtake similar developments in the faith communities of many Western societies. The evidence shows that Islam in Indonesia, surprisingly perhaps, does not appear to be as stressed by this "green shift" as Christianity has been and continues to be to some extent. The uptake of environmentalism by Muslim organisations in Indonesia can only be described as enthusiastic.
The strong influence of indigenous traditions of ancestor religion and animism, as well as that of Indic religions, may have played a role in this, because these traditional views do not require any cosmological revision to accommodate the idea that nature is sacred and is to be treated with reverence. This may well be a hidden factor in Indonesia, but it is difficult to measure short of conducting an in-depth comparative study and analysis of the relative difficulty or ease of reception of ecology in a wider range of Muslim countries. Two explanatory factors that are more easily identified are, first, that the history of the encounter between ecologists and Muslims in this country has not been marked by any significant acrimony and, second, that the precedent provided by the uptake of ecology by other faiths in advanced industrial societies has provided Indonesian activists with a head start.

The one missing ingredient in Indonesia is a national interfaith alliance for the environment. Some local dialogues have been held to explore this possibility, particularly in areas where Islam is not a majority religion. For example, a recent post on the blog site BaleBengong, entitled "Religion Has a Role in Saving the Environment," illustrates that an interfaith dialogue on the environment is now emerging in Indonesia [52].

More generally, my research suggests that a fundamental shift toward more eco-spiritual cosmologies is indeed taking place around the globe. This shift may eventually culminate in a global interfaith alliance for strong action on the most pressing issue of our times.
8,017.8
2015-10-16T00:00:00.000
[ "Philosophy" ]
Primordial gravitational waves amplification from causal fluids

We consider the evolution of the gravitational wave spectrum for super-Hubble modes in interaction with a relativistic fluid, which is regarded as an effective description of fluctuations in a light scalar minimally coupled field, during the earliest epoch of the radiation dominated era after the end of inflation. We obtain the initial conditions for gravitons and fluid from quantum fluctuations at the end of inflation, and assume instantaneous reheating. We model the fluid by using relativistic causal hydrodynamics. There are two dimensionful parameters, the relaxation time $\tau$ and temperature. In particular we study the interaction between gravitational waves and the non-trivial tensor (spin-2) part of the fluid energy-momentum tensor. Our main result is that the new dimensionful parameter $\tau$ introduces a new relevant scale which distinguishes two kinds of super-Hubble modes. For modes with $H^{-1}<\lambda<\tau$ the fluid-graviton interaction increases the amplitude of the primordial gravitational wave spectrum at the electroweak transition by a factor of about $1.3$ with respect to the usual scale invariant spectrum.

We consider the evolution of the gravitational wave spectrum for super-Hubble modes in interaction with a relativistic fluid, which is regarded as an effective description of fluctuations in a light scalar minimally coupled field, during the earliest epoch of the radiation dominated era after the end of inflation. We obtain the initial conditions for gravitons and fluid from quantum fluctuations at the end of inflation, and assume instantaneous reheating. We model the fluid by using relativistic causal hydrodynamics. There are two dimensionful parameters, the relaxation time $\tau$ and temperature. In particular we study the interaction between gravitational waves and the non-trivial tensor (spin-2) part of the fluid energy-momentum tensor. Our main result is that the new dimensionful parameter $\tau$ introduces a new relevant scale which distinguishes two kinds of super-Hubble modes. For modes with $H^{-1} < \lambda < \tau$ the fluid-graviton interaction increases the amplitude of the primordial gravitational wave spectrum at the electroweak transition by a factor of about 1.3 with respect to the usual scale invariant spectrum.

I. INTRODUCTION

In this paper we shall consider the evolution of the primordial gravitational wave background during the early radiation dominated era [1] [2] [3], from reheating after inflation up to the cosmological electroweak transition. We will use second order hydrodynamics [4] [5] as an effective theory for the matter fields, and obtain a linear theory for gravitons consistently coupled to the spin-2 component of the matter energy-momentum tensor. Our motivation in using hydrodynamics as an effective theory comes from the highly successful description of the early evolution of the fireball created in relativistic heavy ion collisions (RHICs) by these methods, even in early stages where it is unlikely that local thermal equilibrium has been established [6] [7]. As a matter of fact, our problem bears a significant similarity to RHICs [8]. Our main assumption is that among the fundamental fields there is at least one that is not conformally coupled; for simplicity we shall take this to be a light (effectively massless), minimally coupled scalar field with small coupling constants. These fields are commonly related to "axion-like particles" (ALPs) [9] [10].
Inflationary expansion brings this field to its de Sitter invariant vacuum state. However, this state is highly squeezed and its quantum fluctuations are much higher than those of the local vacuum state of adiabatic observers. Upon horizon exit, and particularly after reheating, these fluctuations lose quantum coherence and may be treated as classical particles [11] [12] [13] [14] [15] [16], thus resembling the quark-gluon plasma generated in RHICs. These particles compose our "fluid". As we have learned from RHICs, the proper treatment of real relativistic fluids on timescales not much larger than the fluid relaxation time requires the use of "second order" theories rather than the better known Eckart or Landau-Lifshitz formulations [17] [18] [19]; one of the main points of this paper is that this is the relevant framework for our discussion. In second order theories, the viscous part of the energy-momentum tensor, or some other equivalent variable, is considered as an independent degree of freedom following a Cattaneo-Maxwell type dynamical equation [20]. This equation, together with the Einstein equations and the relevant conservation laws, completes the fully consistent dynamics we are looking for. During reheating and afterwards, we must distinguish between the physics of modes inside or outside the horizon. Reheating is dominated by the most out-of-equilibrium phenomenon in the history of our Universe, the sudden conversion of the energy-momentum of the inflaton field into radiation energy-momentum [21] [22] [23] [24] [25]. We do not assume our scalar field is decoupled from the rest of matter, and so it partakes of this essentially nonlinear phenomenon. However, the nonlinearities are restricted by causality and therefore they are strong only within the horizon. Outside the horizon the evolution of the graviton-effective fluid system may be described accurately enough by linearized equations. At the most basic level, a gravitational wave presents itself through an anisotropy in the rest frame of the fluid. Ideal hydrodynamics is restricted by the Pascal principle, namely, the state of the ideal fluid is defined solely by the chemical potentials associated with conserved charges (which moreover vanish for a conformal theory) and by the inverse temperature four-vector, and so it is locally isotropic on surfaces perpendicular to this vector. Moreover, for a true equilibrium state, the inverse temperature four-vector must be a (conformal) Killing vector [4], and it may happen that for a given spacetime there are no such vectors. However, in that case hydrodynamics is not built on true equilibria, but only on approximated local equilibria. Any spacetime will allow for the construction of coordinate systems, such as Riemann normal coordinates [14] [26], which look locally isotropic. Therefore, in the usual approach to hydrodynamics, temperature will be isotropic in the rest frame. The shear tensor, on the other hand, may be anisotropic, but because it is built from derivatives of a vector, it cannot have the symmetry of a spin-2 field. To account for the kind of anisotropy associated with a gravitational wave it is necessary to go beyond the usual framework by considering higher orders or else including from scratch a new spin-2 degree of freedom, as we shall do in the following. For further discussion we refer to [27]. Unlike ideal and first order hydrodynamics, there is no universally accepted approach to second order hydrodynamics.
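For orientation, a minimal sketch of a Cattaneo-Maxwell type relaxation law of the kind invoked here can be written, in a generic schematic form (this is the standard structure of such closures, not the paper's specific equation, which it later refers to as Eq. (2)):

$$\tau\,\dot{\Pi}^{\mu\nu} + \Pi^{\mu\nu} = -2\eta\,\sigma^{\mu\nu},$$

where the overdot denotes the comoving time derivative, $\sigma^{\mu\nu}$ is the shear tensor and $\eta$ the shear viscosity. For $\tau \to 0$ this reduces to the Navier-Stokes constitutive relation $\Pi^{\mu\nu} = -2\eta\,\sigma^{\mu\nu}$, while a finite $\tau$ promotes $\Pi^{\mu\nu}$ to an independent degree of freedom with causal (finite-speed) propagation; different second order formulations dress this relaxation law with additional terms.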
However, in the linearized regime we are interested in, most formalisms converge. For simplicity, we shall adopt a divergence-type theory scheme [28] [37] where the conformally invariant fluid is described by a dimensionful parameter $T$ (which becomes the temperature in equilibrium), the fluid four-velocity $u^\mu$ (which obeys $u_\mu u^\mu = -1$; we adopt MTW conventions) and a dimensionless, symmetric, traceless and transverse tensor $\zeta^{\mu\nu}$ ($\zeta^\mu{}_\mu = \zeta^{\mu\nu} u_\nu = 0$). We scale this tensor so that in the linearized theory $\zeta^{\mu\nu} = \Pi^{\mu\nu}/\rho$, where $\Pi^{\mu\nu}$ is the viscous energy-momentum tensor and $\rho$ the energy density. For simplicity we shall not consider an explicit coupling of the fluid to other matter fields; the self- and gauge interactions of the fluid will appear through the constitutive relations for the fluid, that is, its relaxation time $\tau$ (to be discussed in Section VI) and its temperature. Under this approximation the equations of the model are the Einstein equations, energy-momentum conservation, and a Cattaneo-Maxwell equation for $\zeta^{\mu\nu}$ to be provided below. In summary, we assume that at the end of reheating super-Hubble modes are in a state determined by their state at the end of inflation (namely, that reheating is so fast that no significant processing occurs during reheating itself), and then thermalize to the state determined by the dominant cosmic radiation background [38]; this thermalization is well described by linearized hydrodynamics. Moreover, at the relevant temperature scales the fluid may be regarded as composed of massless particles, whereby hydrodynamics becomes conformally invariant [39]. The tensor field $\zeta^{\mu\nu}$ may be decomposed into scalar (spin 0), vector (spin 1) and tensor (spin 2) parts which are decoupled from each other at linear order. Our interest lies in the spin-2 part, which couples directly to the graviton field; for simplicity we shall disregard the scalar and vector sectors and focus on the spin-2 sector alone. The spin-1 part is relevant in scenarios including gauge fields, since it is related to magnetic field generation [40] [41] [42] [43] [44]. It is well known that the spin-2 part of the matter energy-momentum tensor may seed a primordial gravitational field [53]. In the literature there are several estimates of the gravitational background created by different fields, such as the inflaton [54] [55], the Higgs field [56] [57] [58], primordial density fluctuations [59], scalars and non-abelian charged scalars [60], and Fermi fields [61]. In principle, the effect on the gravitational wave background may be observed through its impact on the CMB [62]. The present work is closest to [63] [64], which consider the gravitational field created out of a spectator field. However, three differences stand out: we put the emphasis on achieving a self-consistent dynamics, including the back reaction of the gravitons on the spectator field; we incorporate the thermalization to the dominant radiation background into this picture; and we read the initial conditions for field and gravitons directly off quantum fluctuations of super-Hubble modes just before inflation ends, rather than the Starobinsky-Yokoyama equation [65]. Let us elaborate on this last point. Under the assumption of instantaneous reheating we may obtain the initial conditions for these equations from the analysis of quantum fluctuations just before reheating. For the graviton field this is conventional; for completeness the main necessary results will be summarized below.
For the effective fluid we shall treat $\zeta^{\mu\nu}$ as a stochastic Gaussian field whose self-correlation is derived from the energy-momentum self-correlation of a quantum minimally coupled scalar field during inflation. Of course this is a divergent quantity, but the divergence is associated with short wavelength modes within the horizon; we shall assume a local observer will subtract the correlations corresponding to the instantaneous vacuum state (as defined by adiabatic modes), and associate the remainder with the effective fluid [14] [66] [67]. The new dimensionful quantity $\tau$ (Eq. (2)) splits the range of super-horizon modes $k \le H$, where $H$ is Hubble's constant during inflation, in two. For modes where $k \le \tau^{-1}$ as well, the fluid relaxation is efficient and there is no substantial effect of the fluid on the gravitons; the energy associated with the spin-2 field is just dissipated into heat. However, when $\tau^{-1} \le k \le H$ there is some amplification of the primordial gravitational spectrum due to the decay of the spin-2 part of the fluid into gravitons. This means that this mechanism may be the source of a local feature (a step) in the graviton spectrum around $k \sim \tau^{-1} \ll H$. We quantify the height of this step by solving the linearized equations from reheating up to the time of the electroweak transition, after which the primordial gravitational wave spectrum is subject to further processing [1]. We shall show that given appropriate values of the coupling constant (similar to some axion-like particle models) this step may fall in an observationally relevant range. This is the main result of this paper.

The paper is organized as follows. In Section II we introduce the framework of divergence-type theories from which we extract the causal hydrodynamic equations for the fluid; in particular we derive, to linearized order, the expression for the energy-momentum tensor and the dynamic equation for the non-equilibrium tensor. In order to deduce the system of fluid-graviton coupled equations we gather the closure and linearized Einstein equations in Section III. Section IV provides the initial conditions for the gravitons and the non-equilibrium variable from quantum fluctuations during inflation. Section V is the main part of this paper; here we analyze the solutions of the previous system. We compute the evolution of the primordial gravitational wave spectrum for super-Hubble modes up to the electroweak transition and show that some amplification occurs for modes with $H^{-1} < \lambda < \tau$. Then we study the values of the relaxation time $\tau$ in Section VI from quantum field theory for a scalar field with gauge coupling constant $g$. Finally we conclude with some brief final remarks summarizing the most important results. We add two appendices. Appendix A discusses the conformal invariance of the fluid equations in the limit of massless particles, and Appendix B clarifies some technical tools to calculate the Fourier transform of the noise kernel for scalar fields.

II. FLUID DYNAMICS FROM DIVERGENCE-TYPE THEORY

We assume inflation brings every non-conformally coupled matter field into its de Sitter invariant vacuum state, except the inflaton, which is slowly rolling down its potential. We also assume an instantaneous reheating, so the universe goes from inflation to radiation domination in essentially no time [38]. When inflation ends, quantum fluctuations of non-conformally coupled fields become much higher than those of the local vacuum state of adiabatic observers.
After inflation, these fluctuations enter the nonlinear regime and decohere. It therefore becomes adequate to treat them as an effective fluid. In other words, the end of inflation sets the initial conditions for the later evolution of every field in a radiation dominated universe. The proper theoretical framework for the discussion of the further evolution is given by causal relativistic hydrodynamics. We shall follow a dissipative-type theory scheme as derived from kinetic theory for massless scalar particles obeying Bose-Einstein statistics [68]. To linearized order we may consider any other relevant approach, such as viscous anisotropic hydrodynamics [74] or theories based on the so-called 'Entropy Production Variational Principle' [75], with equivalent results. This approach consists in formulating an ansatz for the one-particle distribution function (1pdf), parametrized by the hydrodynamic variables. Later on, the hydrodynamic currents, such as the particle number current and the energy-momentum tensor, are derived as moments of the parametrized 1pdf, and the corresponding equations as moments of the Boltzmann equation. We assume a perturbed Friedmann-Robertson-Walker Universe with metric $g_{\mu\nu} = a^2(\eta)\,\bar{g}_{\mu\nu}$, with $a(\eta)$ the scale factor depending only on conformal time $\eta$, and $\bar{g}_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$, where $\eta_{\mu\nu}$ is the Minkowski metric (with signature $(-,+,+,+)$) and $h_{\mu\nu}$ represents the primordial gravitational waves. Upon reheating the inflaton decays into radiation, which is left in a state of thermal equilibrium, namely its four-velocity $U^\mu_{\rm rad} = a^{-1} U^\mu$ follows the conformal Killing field of the Friedmann-Robertson-Walker background ($U^\mu = (1,0,0,0)$), and its temperature $T_{\rm rad} = a^{-1} T$ decays as the inverse radius of the Universe. The spectator field, which is not decoupled from the radiation, thermalizes into this state, a process which may be described by linear relaxation equations. Moreover, as $p_\mu p^\mu = -m^2$ with $m^2 \ll T^2_{\rm rad}$, this theory is effectively conformally invariant. This implies that the energy-momentum and non-equilibrium tensors (Eq. (5)) are traceless. Further, the Boltzmann equation for massless particles is also conformally invariant, and since the procedure of taking moments does not spoil this symmetry, every conservation equation is conformally invariant as well. See Appendix A for details. Through conformal invariance we are able to eliminate the scale factor $a$ from all equations. As we are interested in the equilibration process of this scalar fluid to the dominant radiation, we analyze linear perturbations around a state thermalized to the dominant radiation equilibrium state. In consequence we consider a linear deviation from a Bose-Einstein equilibrium. To introduce fluctuations we define the complete 1pdf as in Eq. (1), where $u^\mu$, $T$ and $\zeta^{\mu\nu}$ are the velocity, temperature and dimensionless non-equilibrium variable of the fluid, respectively. The constant in front of $\zeta^{\mu\nu}$ is chosen so that later on we shall obtain $\zeta^{\mu\nu} = \Pi^{\mu\nu}/\rho$ to linear order, where $\Pi^{\mu\nu}$ is the viscous part of the energy-momentum tensor and $\rho$ the energy density. It has the value $\kappa = \pi^4/(2 \cdot 5!\,\zeta(5))$. In the relaxation law, Eq. (2), $\tau$ is the relaxation time of the fluid. This is an external parameter of the theory, which must be derived from consideration of the interactions of the fluid particles among themselves and with the radiation. We shall discuss this parameter in Section VI. The idea is to decompose all fields into a (homogeneous) average and a fluctuation, and obtain linearized equations for the fluctuations.
From the cosmological principle we assume the background quantities have the FRW symmetry; in particular $\zeta^{\mu\nu}$ vanishes in the background. Since our purpose is to analyze interactions between the fluid and the gravitons, we consider only tensor perturbations. The linearized 1pdf then follows. We choose a gauge where $h_{\mu\nu} U^\nu = 0$; due to the tensor character of the perturbations also $h^\mu{}_\mu = 0$. Since $\zeta^{\mu\nu}$ is transverse to the four-velocity, to linear order we find $U_\mu \zeta^{\mu\nu} = \zeta^\mu{}_\mu = 0$.

Hydrodynamic equations

To deduce the hydrodynamic equations we define the comoving energy-momentum tensor and non-equilibrium tensor as usual [28], together with the non-equilibrium current. We also need the second moment of the collision integral. In Eqs. (4)-(6) the invariant relativistic measure appears. The equations are the conservation equation for the energy-momentum tensor, $T^{\mu\nu}{}_{;\mu} = 0$ (Eq. (8)), and the closure equation for the non-equilibrium current (Eq. (9)), where $S^\mu{}_\nu = \delta^\mu{}_\nu + U^\mu U_\nu$. The relevant integrals were computed in [79]; here we summarize the final expressions. In order to derive the linearized equations in the following section, we consider a purely spin-2 (transverse-traceless, TT) perturbation of the energy-momentum tensor (10) in mixed components and the closure equation (9) to first order. These expressions are Eqs. (13) and (14) respectively, with $b = 20\,\zeta^2(5)/\left(\pi^4\,\zeta(6)\right)$. If we had used a Maxwell-Jüttner equilibrium distribution, we would have derived the same equation but with $b = 2/9$. Note the ratio of both, $b_{MJ}/b_{BE} \simeq 1.024$. In order to relate $\tau$ with the usual transport coefficients we compute the energy-momentum tensor up to first order in $\tau$. For this purpose we may discard the interaction with gravitons, taking $h_{\mu\nu} = 0$. However, it is necessary to introduce perturbations in the temperature $\delta T$ and velocity $v^\mu$, in addition to the tensor one $\zeta^{\mu\nu}$. Then the energy-momentum tensor reads as in Eq. (15). Including the velocity perturbation, Eq. (14) becomes Eq. (16), which implies, to first order in $\tau$, $\zeta^{\alpha\beta} = -\tau\, b\, \sigma^{\alpha\beta}$. In consequence, by simple comparison with the usual viscous energy-momentum tensor, we identify the well-known kinematic viscosity coefficient $\nu = b\,\tau$.

III. FLUID-GRAVITONS COUPLED EQUATIONS

From now on we normalize $H\eta \to \eta$, $Hr \to r$, where $H$ is the Hubble constant at the moment of reheating; we also define $\eta = 0$ there and $a(0) = 1$. From the linearized Einstein equation in mixed components we get Eq. (17), with $M_{pl}$ the reduced Planck mass. We apply tensor projectors to Eq. (17) in the spatial indexes; for $T^{(1)i}{}_{j\,\rm TT}$ we use Eq. (13). Since $h_{ij}$ and $\zeta_{ij}$ are tensor degrees of freedom we write a Fourier decomposition for both (Eq. (19)), where the physical wave number is $k_{\rm phys} = Hk$, $\lambda = +, \times$ indicates the polarization, and the polarization tensors $\epsilon^\lambda_{ij}(k)$ satisfy $\epsilon^\lambda_{ij}(k)\,\delta^{ij} = k^i\,\epsilon^\lambda_{ij}(k) = 0$ and $\epsilon^\lambda_{ij}(k)\,\epsilon^{ij}_{\lambda'}(k) = \delta^{\lambda\lambda'}$. Gathering the expressions above we derive similar equations for either polarization. Dropping the $\lambda$ index in $h_k$ and $\zeta_k$, together with Eq. (14), we get the system of equations to linear order for $h_k$ and $\zeta_k$ (Eq. (21)), where $K_0 = \pi^2 T^4/\left(15\,H^2 M_{pl}^2\right)$ and $\tau_0 = H\tau$. In the radiation dominated era $a(\eta) = 1 + \eta$ and $H(\eta) = (1+\eta)^{-2}$. We change variables $\eta \to z(\eta) = k(1+\eta)$ and $h_k(z) = \chi_k(z)/z$. To solve our problem we need the solution of (21) with the appropriate initial conditions for $h_k$ and $\zeta_k$, to be discussed in the next section. The magnitude of the parameter $K_0$ measures the interaction strength between the tensor degrees of freedom $\zeta_k$ and $h_k$.
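As a quick numerical cross-check of the coefficients just quoted (a minimal sketch; the variable names are ours, and the expression for $K_0$ anticipates the Friedmann relation quoted at the start of the next paragraph):

    from math import pi

    # Bose-Einstein value quoted in the text: b = 20 zeta(5)^2 / (pi^4 zeta(6))
    zeta5 = 1.0369277551            # Riemann zeta(5), numerical value
    zeta6 = pi**6 / 945.0           # zeta(6) in closed form
    b_BE = 20.0 * zeta5**2 / (pi**4 * zeta6)
    b_MJ = 2.0 / 9.0                # Maxwell-Juttner value quoted in the text
    print(b_BE, b_MJ / b_BE)        # ~0.217 and ~1.024, as stated

    # K0 = pi^2 T^4 / (15 H^2 M_pl^2) with H^2 = (g* pi^2/30) T^4 / (3 M_pl^2):
    # T and M_pl cancel, leaving K0 = 6/g*.
    g_star = 100.0                  # relativistic degrees of freedom, g* ~ 10^2
    K0 = 6.0 / g_star
    print(K0)                       # ~0.06, a weak graviton-fluid coupling

The cancellation of $T$ and $M_{pl}$ in $K_0$ follows directly from combining the two formulas quoted in the text; the numerical value for $g_* \simeq 10^2$ is our arithmetic, not a figure stated by the authors.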
Using instantaneous and effective reheating, $H^2 \simeq \left(g_*\,\pi^2/30\right) T^4/\left(3 M_{pl}^2\right)$, where $g_*$ is the number of relativistic degrees of freedom at temperature $T$. Since $O(10^2\,{\rm GeV}) \lesssim T \le M_{pl}$, then $g_* \simeq 10^2$.

IV. INITIAL CONDITIONS

The purpose of this section is to compute the initial conditions for $h_k$ and $\zeta_k$ at the beginning of the radiation dominated era. To do this we regard them as classical stochastic Gaussian variables with zero mean, whose self-correlation matches the Hadamard propagator of the corresponding quantum operators in the Bunch-Davies vacuum at the end of inflation.

Gravitons h

Gravitons are tensor metric perturbations. As we have seen before, there are two polarizations, $h_+$ and $h_\times$. As is well known [80], the amplitude of both can be treated as massless real scalar fields. As usual, to quantize them we use decomposition (19) and apply canonical quantization to the auxiliary field $\chi$ defined by $h_k(\eta) = \chi_k(\eta)/a(\eta)$. This field $\chi$ must be dimensionless, as is $h$. As before we obtain the same equation for both polarizations of $\chi_k$.

During inflation

During inflation $\eta \le 0$ and $a(\eta) = 1/(1-\eta)$. We adopt the Bunch-Davies positive frequency solution [81] of Eq. (24). Under the scheme of instantaneous reheating, our initial conditions for the evolution of the Fourier components $h_k$ during the radiation dominated Universe ($\eta \ge 0$) are Eqs. (26) and (27), where $\hat e_k = \hat a_k - \hat a^\dagger_{-k}$ and $\hat b_k = \hat a_k + \hat a^\dagger_{-k}$. Next we assume the Landau prescription $\langle AB \rangle_S = \frac{1}{2}\,\langle 0|\{A, B\}|0 \rangle$ to convert quantum expectation values into stochastic ensemble averages [82] [83]. In consequence we obtain the stochastic initial correlations; for instance, the initial correlation for modes outside the horizon ($k \ll 1$) at $\eta = 0$ develops a scale-invariant spectrum.

Non-equilibrium tensor ζ

This case is more complicated because there is no immediate relation between the stochastic non-equilibrium variable $\zeta$ and some canonical quantum field during inflation. Instead, we write the tensor part of the energy-momentum tensor self-correlation for a minimally coupled scalar field during inflation, namely the so-called noise kernel $N^\mu{}_\nu{}^\rho{}_\sigma$. Then we match it at $\eta = 0$ to the stochastic self-correlation function of $\zeta$ calculated during the radiation dominated era. The noise kernel is defined as the symmetrized two-point correlation of the energy-momentum tensor fluctuations. Since we will take the tensor part of the noise kernel, the only possible contribution comes from the kinetic term of the energy-momentum tensor [84]. $N^\mu{}_\nu{}^\rho{}_\sigma$ was computed in [85]. For the massless ($m/H \ll 1$) and large-scale ($r \gg 1$) limit at the end of inflation ($\eta = 0$), which is our case of interest, [85] obtains the result quoted in Eq. (33) for the kinetic term contribution $N_{ijkl}(r, \eta = 0)$. We disregard a term which becomes constant at large separations, since it does not contribute to the tensor part. In Fourier space we define the projector $\Lambda^a{}_i{}^b{}_j$ onto the tensor (divergenceless and traceless) part, Eq. (34). Recalling that $r = |x - x'|$, when Fourier transforming we get two different momenta for each spatial point $x$ and $x'$. Due to homogeneity and isotropy, the tensor part of the Fourier-transformed noise kernel $N_{abcd}$ takes the form of Eq. (36), with $c = 6911/(12\,\pi^2)$ (see Appendix B). This result provides us the quantum fluctuations from inflation. In order to match it with our fluid non-equilibrium correlation we must subtract the local vacuum fluctuations. It is possible to show that the pathological behaviour of (33) at short distance is caused entirely by the mentioned local vacuum fluctuations. In fact, if we calculate the noise kernel using the local fourth-order adiabatic vacua at time $\eta = 0$ we obtain the same terms as in (33).
However, computations also show that these vacuum fluctuations are only valid for small scales ($k > 1$). In consequence, after the subtraction of the local vacuum, the quantum noise kernel for large scales ($k \ll 1$) is given by Eq. (36). On the other hand, we analyze the stochastic fluctuations of the fluid energy-momentum tensor in momentum space. We know that the comoving energy-momentum tensor satisfies $\bar T^\mu{}_\nu(\eta = 0) = a^{-2}(\eta = 0)\, T^\mu{}_\nu = T^\mu{}_\nu$. From (13) and using decomposition (19), we arrive at the projected expression. Setting $\mathbf{k} = k\hat z$ and $\zeta^\lambda_k(\eta = 0) = \zeta^\lambda_k$, we take the most general form consistent with these constraints. The projected correlation at time zero then follows; terms like $\langle T^i{}_j \rangle \langle T^k{}_l \rangle$ are zero to first order. Just like in the quantum case, a $\delta$-function appears due to homogeneity.

As we see, both polarizations follow identical equations decoupled from each other. Henceforth we shall drop the polarization label. Since we concentrate only on super-horizon modes, our analysis is valid until modes re-enter the horizon at $z = 1$. Further, we consider our model to be valid up to the electroweak transition, where new effects must be considered due to the change in the number of relativistic degrees of freedom. In consequence we will analyze solutions in the limit $k \to 0$ and $\eta$ bounded by the condition $z = k(1+\eta) < 1$ or by the electroweak time, whichever happens first. We only keep the dominant terms in the power series expansion for $k \ll 1$, valid for super-horizon modes until the electroweak transition. We interpret $K_0$ (Eq. (20)) as an interaction parameter between gravitons and tensor fluid modes. If $K_0 = 0$, gravitons decouple from the fluid. We determine their evolution by solving the first equation of (21) with the initial conditions (26)-(30); the dominant terms in the limit $k \ll 1$ are given in Eq. (44). In the general case with $K_0 \neq 0$ it is enough to consider the two limiting cases of (21), namely $k\tau_0 \ll 1$ and $k\tau_0 \gg 1$. Hereafter we assume $1/\tau \ll H$; we shall discuss in Section VI whether this is a realistic hypothesis. We solve the system (21) with initial conditions (26)-(30). When $k\tau_0 \ll 1$ ($k \ll 1/\tau \ll H$ in unnormalized units) the fluid modes decay before they can interact meaningfully with gravitons. For these modes with very large wavelengths we recover to leading order the usual scale-invariant spectrum, namely the first term in Eq. (44). The most interesting case is $k\tau_0 \gg 1$. It means $1/\tau \ll k \ll H$ and enables us to neglect the term $\zeta_k/(k\tau_0)$ in equations (21). The system then takes a simpler form, and $C_k$ will be set by matching the quantum noise kernel spectrum to the correlation $\langle \zeta_k(\eta)\,\zeta^*_k(\eta) \rangle$ at initial time $\eta = 0$. We assume a null cross-correlation $\langle \zeta_k\, h^*_k \rangle = 0$, because both variables have different physical origins. Using $\langle \zeta_k\, h^*_k \rangle = 0$ explicitly, and considering the initial conditions (26)-(30) and (41)-(42), we derive the matched amplitudes. The equation for $\chi_k(z)$ then follows. Let $\chi_k = \sqrt{z}\,\psi_k$, so that $h_k = \psi_k/\sqrt{z}$; the solution is a combination of Bessel functions, where $\nu^2 = 1/4 - b\,K_0$, and $J_\nu(z)$ ($j_\nu(z)$) and $Y_\nu(z)$ ($y_\nu(z)$) are (spherical) Bessel functions of the first and second kind, respectively. From this we obtain the expression for $h_k(z)$ and, in the limit $k \ll 1$, the equal-time self-correlation for the gravitons. Let us make an ascending series expansion in $K_0$ around zero, recalling $\nu = \sqrt{1/4 - b\,K_0}$, and insert the initial correlations. In that case we obtain, to leading order in $k$ and $K_0$, the spectrum of Eq. (59). Our description of the spectrum evolution holds up to a certain time $\eta_{k,\rm max}$, depending on $k$, at which either the modes re-enter the horizon or the electroweak transition takes place.
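A minimal numerical sketch of the mode function just described (our naming; the integration constants c1 and c2 stand in for those fixed by the initial conditions (26)-(30), which we leave symbolic):

    import numpy as np
    from scipy.special import jv, yv

    def h_k(z, K0, b=0.217, c1=1.0, c2=0.0):
        """Graviton amplitude h_k(z) = psi_k(z)/sqrt(z), with psi_k a
        combination of Bessel functions of order nu, nu = sqrt(1/4 - b*K0).
        c1 and c2 are placeholders for the constants set by the initial data."""
        nu = np.sqrt(0.25 - b * K0)          # real for the small K0 of interest
        psi = c1 * jv(nu, z) + c2 * yv(nu, z)
        return psi / np.sqrt(z)

    # Decoupled check: for K0 = 0, nu = 1/2 and J_{1/2}(z) = sqrt(2/(pi z)) sin z,
    # so h_k = sqrt(2/pi) sin(z)/z, the familiar radiation-era behaviour.
    z = np.linspace(1e-3, 1.0, 200)          # super-Hubble range z = k(1+eta) < 1
    print(np.allclose(h_k(z, 0.0), np.sqrt(2/np.pi) * np.sin(z) / z))

The $K_0 = 0$ limit is a useful sanity check of any such implementation, since the solution must reduce to the standard decoupled radiation-era mode function.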
To estimate the electroweak time $\eta_{EW}$ we use the ratio of the scale factor between the end of inflation and the electroweak transition, which is $a_{EW}/a_{EOI} = T_{EOI}/T_{EW}$. The typical energy of the electroweak transition is $T_{EW} \simeq 10^2$ GeV and $T_{EOI} = T_\gamma = T = 10^n$ GeV. Therefore $a_{EW} = 1 + \eta_{EW} = 10^{n-2}$ and $\eta_{EW} \simeq 10^{n-2}$. On the other hand, we may find the conformal time of horizon re-entry, $\eta_{k,\rm re-entry}$, which depends explicitly on $k$, from the relation $\lambda_{\rm phys}(\eta) = \lambda_c\, a(\eta)$. It results in $\eta_{k,\rm re-entry} \simeq 1/k$. In Fig. 1 we show a scheme to study the evolution of physical wavelengths while the Universe expands and the horizon (Hubble radius) changes. Physical wavelengths evolve proportionally to the scale factor $a$. Modes re-enter the horizon when $\lambda_{\rm phys}(\eta) = H^{-1}(\eta)$, i.e. when $k/\left(a(\eta)\,H(\eta)\right) \simeq 1$, so the smaller the wavenumber, the later its entry. In particular, at $\eta = \eta_{EW}$ one mode with comoving wavenumber $k = k_{EW}$ re-enters the horizon. Therefore the evolution of modes with $k < k_{EW}$ is bounded by $\eta_{EW}$. Conversely, the time bound for modes with $k > k_{EW}$ is $\eta_{k,\rm re-entry}$. To finish, it is relevant to know what happens with $k = 1/\tau$. We consider fields whose relaxation time $\tau$ produces perturbations of cosmological interest, namely perturbations whose wavelength today is at least as long as 1 kpc. In comparison, the mode $k = k_{EW}$ has a wavelength today $\lambda_{EW,0} \simeq 1$ pc, so we get $\lambda_{\tau,0} \gg \lambda_{EW,0}$, as is shown in Fig. 1. Therefore $1/\tau_0 \ll k_{EW}$. Summarizing, we derive the corresponding time bounds for each range of $k$. Finally, using these bounds in Eq. (59) within the range (in comoving unnormalized units) $1/\tau < k < k_{EW}$, we obtain at $\eta = \eta_{EW}$ the gravitational wave spectrum for each polarization, Eq. (61).

VI. ESTIMATES OF τ

The main goal of this section is to estimate the relaxation time $\tau$ of the field we have considered throughout the paper. Recall first that we get a feature (step) in the spectrum at comoving wavenumber $k_\tau = 1/\tau$ and comoving wavelength $\lambda_\tau = 2\pi/k_\tau \sim \tau$. We have set $a(\eta) = 1 + \eta$ and $\eta = 0$ at the end of inflation. For instantaneous reheating, this coincides with the onset of the radiation dominated epoch, where $a_\gamma = a(\eta = 0) = 1$. The evolution of physical perturbation wavelengths from the end of inflation until today may be calculated by rescaling with $a_0/a_\gamma$, where $a_0$ is the scale factor today (the subscript 0 means today). To compute the ratio $a_0/a_\gamma$ we consider a nearly adiabatic expansion of the Universe in which $a(\eta) \propto 1/T_{\rm rad}$.

[Figure 1. Scheme of the evolution of physical wavelengths: for modes with $\lambda > \lambda_\tau \sim \tau$ the usual invariant spectrum is recovered, while for modes with $H^{-1} < \lambda < \lambda_\tau$ the fluid-graviton interaction transfers energy from the fluid to the gravitons and increases the amplitude of the spectrum; the shaded zone represents the modes which are amplified with respect to the usual invariant spectrum by a factor of about 1.3 at the electroweak time, according to Eq. (61).]

In consequence $a_0/a_\gamma = T_\gamma/T_0$, where $T_\gamma = T_{\rm rad}(\eta = 0) = 10^n$ GeV is the reheating temperature. Therefore the wavelength of the step today follows by stretching $\lambda_\tau$ with this factor. Recall that physical wavelengths of cosmological interest are in the range $\lambda_0 \gtrsim 1$ kpc. In particular we would like to concentrate on $\lambda_0 \simeq 1$ Mpc, which implies $\lambda_{\tau,0} \gtrsim 1$ Mpc. Let us consider a scalar field with a gauge coupling constant $g$.
References [14] and [86] show that it is possible to compute the relaxation time $\tau$ in the Boltzmann equation from quantum field theory. Basically, it is given in terms of ${\rm Im}[\Sigma]$, where $\Sigma$ is the self-energy of the field we are considering and ${\rm Im}[x]$ denotes the imaginary part of $x$. We could expand ${\rm Im}[\Sigma]$ in Feynman diagrams and prove that the first non-null contribution appears at the two-loop order. We conclude on dimensional grounds that $\tau^{-1}$ scales as $\alpha_g^2\, T$ up to numerical factors, where $\alpha_g^2 = g^4$ represents the fine structure constant of this theory. If we take the reheating temperature $T_\gamma \sim 10^{16}-10^{15}$ GeV and values of $g \sim 10^{-6}$ we find that $\lambda_{\tau,0} \sim 10$ Mpc, which lies in the range of cosmological interest. The characteristic multipole $l$ for this scale reads $l \sim \pi R_{LSS}/x \sim 10^3$, where $R_{LSS} \simeq 14$ Gpc is the distance to the last scattering surface (LSS) and $x \simeq 10$ Mpc represents the perturbation wavelength. In addition, for the range of reheating temperatures $T_\gamma \sim 10^{16}-10^{15}$ GeV we consider, we estimate a tensor-to-scalar ratio of about $r \sim 10^{-1}-10^{-5}$, respectively [1]. The values of $\tau$ we are regarding here are consistent with the values of the analogous $\Gamma^{-1}_{a\to\gamma\gamma}$ (axion lifetime) in known ALP models in the literature [87] [88] [89] [90]. We assume that the relaxation time $\tau$ and the thermalization time are of the same order and that hydrodynamics is already valid at earlier times. The validity of applying hydrodynamics in this regime has been discussed by [91] [92] [93] [94] [95], who argue that the hydrodynamic framework is valid at time scales shorter than those corresponding to isotropization and thermalization, driven by a novel dynamical attractor whose details vary according to the theory under consideration. Such attractor solutions show that hydrodynamics displays a new degree of universality far from equilibrium, regardless of the details of the initial state of the system. In fact, the approach to the dynamical attractor effectively wipes out information about the specific initial condition used for the evolution, before the true equilibrium state and, consequently, thermalization, is reached. This process is described as hydrodynamization to distinguish it from ordinary thermalization, and it has been shown by those authors that it develops on shorter time scales than thermalization. In the context of kinetic theory and standard statistical mechanics, thermalization is understood as the development of an isotropic thermal one-particle distribution function. In some particular cases, it is possible to show that even with relative anisotropies of about 50% the hydrodynamic description matches the full solution [96] [97].

VII. FINAL REMARKS

When studying the early Universe, particularly just after inflation, it is important to include the full interactions between all fields in our description. This may be a daunting challenge. For this reason, we propose to treat the fields and their interactions with effective relativistic hydrodynamic theories. Nonetheless, we discard ideal fluids in order to incorporate dissipative effects, as we have learned from relativistic heavy ion collisions. Further, we go beyond covariant Navier-Stokes theory to avoid known causality and stability issues. Thus our main hypothesis lies in using causal hydrodynamics to obtain an adequate description of the phenomena we are interested in, especially during the very early Universe, when almost all the matter fields could be described as a hot plasma. Incorporating these causal theories to model the fields as effective fluids during the very early Universe may bring forth new effects [79].
Throughout the paper we have analyzed a simplified case of interaction between a spectator minimally coupled scalar field and the tensor metric perturbations after inflation. Unlike in ideal or Navier-Stokes hydrodynamics, this interaction may be present in any causal theory because the tensor part of the dissipative energy-momentum tensor is regarded as a new variable with non-trivial dynamics. The covariant Navier-Stokes equations have no proper tensor degree of freedom, in spite of the fact that the energy-momentum tensor of a quantum scalar field has such a part [49] [83]. Causal theories allow us to keep this component of the energy-momentum tensor and thus follow its interaction with the gravitational field. In consequence, causal hydrodynamics enables the description of effects that are lost in covariant Navier-Stokes theory. Its importance can be estimated by considering the constitutive parameters. To be concrete, we analyze the evolution of the gravitational wave spectrum. Usually $H^{-1}$ is the only relevant scale that distinguishes the evolution of perturbations between super-Hubble ($\lambda > H^{-1}$) and sub-Hubble ($\lambda < H^{-1}$) modes, where $\lambda$ represents the physical wavelength. We always concentrate on the former, but here it is important to note that the presence of the new dimensionful parameter $\tau$, which provides the characteristic relaxation time of the fluid dynamics (Eq. (2)), introduces another scale which splits the evolution of super-Hubble modes in two, as shown in Fig. 1. Considering the values of the parameters of previous sections, we get that for modes with $\lambda > \lambda_\tau \sim \tau$ we recover the usual invariant spectrum. However, for modes with $H^{-1} < \lambda < \lambda_\tau$ the fluid-graviton interaction produces an energy transfer from the fluid to gravitons and increases the amplitude of the spectrum. We are able to extend our description until the electroweak transition. Thus, the shaded zone in Fig. 1 represents the modes which are amplified with respect to the usual invariant spectrum by a factor of about 1.3 at the electroweak time, according to Eq. (61). Fields at extreme conditions, like highly energetic collisions or very large temperatures in the early Universe, evidence the need for new schemes of description which incorporate interactions and non-ideal processes such as dissipation and thermalization. Causal relativistic hydrodynamic theories are promising candidates to include characteristic effects of these regimes in a consistent framework.

Appendix A: Conformal invariance of the fluid equations

Here $(\mu_i)$ means that the $\mu_i$ index is excluded. Next we need to show an intermediate identity, which ends up proving (A13). We now show that our ansatz for the distribution function and the collision integral is consistent with conformal invariance. Indeed, we take the one-particle distribution function given in Eq. (1). Since $p^\mu$ is invariant, we require transformation laws which imply the invariance of $\beta_\mu$ and $\zeta_{\mu\nu}/T^2$. Index disposition matters. From $T = \bar T/a$ we arrive at $\beta_\mu = a^2\,\bar\beta_\mu$, $u^\mu = a^{-1}\,\bar u^\mu$ and $\zeta_{\mu\nu} = a^2\,\bar\zeta_{\mu\nu}$. In addition, as $\tau$ is a dimensionful parameter, we assume that $\tau = a\,\bar\tau$, so it also has the required transformation law.

Appendix B: Tensor part of the noise kernel

In this appendix we clarify the calculation of the tensor part of the noise kernel in Fourier space.
From Eq. (33) we write

$$N^i{}_j{}^k{}_l(x, x') = r_i r_j r_k r_l\, F_1(r) + \left(\delta_{il}\, r_j r_k + \delta_{jk}\, r_i r_l\right) F_2(r) + \delta_{il}\,\delta_{jk}\, F_3(r) + (k \leftrightarrow l). \quad {\rm (B2)}$$

Thus, applying the tensor projectors (34) to (33) in Fourier space, we obtain the tensor part in terms of the Fourier transforms $\tilde F_i(k)$. To compute these Fourier transforms we use the relation

$$\int r^{-2n}\, e^{-i\,\mathbf{k}\cdot\mathbf{r}}\, d^3 r = \pi^{3/2}\, \frac{\Gamma(3/2 - n)}{\Gamma(n)}\, \left(\frac{k}{2}\right)^{2n-3}.$$
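The $(k/2)^{2n-3}$ factor above is the standard power-law Fourier transform in three dimensions; a one-line numerical check at $n = 1$, where the transform of $r^{-2}$ is known in closed form to be $2\pi^2/k$ (a sketch, our naming):

    from math import pi, gamma

    def ft_power_law(n, k):
        """pi^(3/2) * Gamma(3/2 - n) / Gamma(n) * (k/2)^(2n - 3),
        the 3D Fourier transform of r^(-2n) quoted above."""
        return pi**1.5 * gamma(1.5 - n) / gamma(n) * (k / 2.0)**(2*n - 3)

    k = 0.7
    print(ft_power_law(1, k), 2 * pi**2 / k)   # both ~28.2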
9,298.8
2017-09-06T00:00:00.000
[ "Physics" ]
Non-specific phospholipase C4 mediates response to aluminum toxicity in Arabidopsis thaliana

Aluminum ions (Al) have been recognized as a major toxic factor for crop production in acidic soils. The first indication of the Al toxicity in plants is the cessation of root growth, but the mechanism of root growth inhibition is largely unknown. Here we examined the impact of Al on the expression, activity, and function of the non-specific phospholipase C4 (NPC4), a plasma membrane-bound isoform of NPC, a member of the plant phospholipase family, in Arabidopsis thaliana. We observed a lower expression of NPC4 using a β-glucuronidase assay and a decreased formation of labeled diacylglycerol, a product of NPC activity, using fluorescently labeled phosphatidylcholine as a phospholipase substrate in Arabidopsis WT seedlings treated with AlCl3 for 2 h. The effect on in situ NPC activity persisted for longer Al treatment periods (8, 14 h). Interestingly, in seedlings overexpressing NPC4, the Al-mediated NPC-inhibiting effect was alleviated at 14 h. However, in vitro activity and localization of NPC4 were not affected by Al, thus excluding direct inhibition by Al ions or possible translocation of NPC4 as the mechanisms involved in the NPC-inhibiting effect. Furthermore, the growth of tobacco pollen tubes rapidly arrested by Al was partially rescued by the overexpression of AtNPC4, while Arabidopsis npc4 knockout lines were found to be more sensitive to Al stress during long-term exposure to Al under low-phosphate conditions. Our observations suggest that NPC4 plays a role in both early and long-term responses to Al stress.

Abbreviations: BODIPY, 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene; BY-2, bright yellow 2; DAG, diacylglycerol; GUS, β-glucuronidase; HP-TLC, high-performance thin-layer chromatography; MS, Murashige-Skoog; NPC, non-specific phospholipase C; PA, phosphatidic acid; PIP2, phosphatidylinositol 4,5-bisphosphate; PI-PLC, phosphatidylinositol-specific phospholipase C; PLD, phospholipase D; PM, plasma membrane.

INTRODUCTION

Aluminum (Al) toxicity represents a major growth-limiting factor in regions with acid soils. The low pH of the soil enables the release of toxic Al ions from their insoluble forms fixed in soil minerals. Prolonged exposure to Al ions leads to changes in root morphology, e.g., root thickening, bursting, changes in the cell wall architecture, and even cell death. However, the first indication of the Al toxicity in plants is rapid cessation of root growth. The root tip has been found to be the most Al-responsive part of roots (Panda et al., 2009). Although the molecular mechanisms of the prompt Al-mediated root growth inhibition are largely unclear, research on the targets of Al action in plants has demonstrated that Al enters and binds to the apoplast (Wissemeier and Horst, 1995) and changes the properties of the PM. A number of physiologically important processes connected with the PM are affected by Al. Well-documented early consequences of Al toxicity are lipid peroxidation (Boscolo et al., 2003), the disruption of ion fluxes (Matsumoto, 2000), the disruption of calcium homeostasis (Rengel and Zhang, 2003), the inhibition of nitric oxide synthase (Tian et al., 2007), effects on the cytoskeleton (Sivaguru et al., 1999, 2003; Schwarzerová et al., 2002), and the depolarization of the PM (Sivaguru et al., 2003; Illéš et al., 2006).
It has been found that rapid Al-mediated inhibition of root growth is related to the loss of PM fluidity and the inhibition of endocytosis (Illéš et al., 2006; Krtková et al., 2012) and that it is controlled through local auxin biosynthesis and signaling (Shen et al., 2008; Yang et al., 2014). The rapid response of root growth suggests that signaling pathways are part of the mechanism participating in Al toxicity.

The Arabidopsis NPC gene family consists of six members, denoted NPC1-NPC6, exhibiting differences in their localization and in their biochemical properties [for review see Pokotylo et al. (2013)]. Briefly, the experimentally non-characterized NPC1, NPC2, and NPC6 were supposed to contain a putative N-terminal signal peptide with predicted localization in endomembranes and specific organelles (Pokotylo et al., 2013). NPC3 was described to lack the ability to hydrolyze PC (Reddy et al., 2010), NPC4 to be a PM-bound protein (Nakamura et al., 2005), and NPC5 to be a cytosol-localized enzyme expressed only in floral organs under normal conditions (Gaude et al., 2008; Pokotylo et al., 2013). NPC4 and NPC5 were able to hydrolyze PC; however, NPC5 possessed a 40-fold lower hydrolytic activity than NPC4 (Gaude et al., 2008).

We previously demonstrated that Al ions inhibit the formation of DAG generated by NPC in the tobacco BY-2 cell line and in pollen tubes, inhibit the growth of tobacco pollen tubes, and that this growth, arrested by Al, can be rescued by an externally added DAG. This raises the following question: which NPC isoform is Al-targeted and what is the role of DAG in aluminum toxicity? Here we report our findings that Al ions inhibit the expression of NPC4 and decrease its enzymatic activity. However, the latter effect is caused neither by direct NPC4 inhibition by Al ions nor by NPC4 translocation. Moreover, the overexpression of AtNPC4 rapidly alleviated the Al-mediated retardation of tobacco pollen tubes, while Arabidopsis npc4 knockout lines were found to be more sensitive to Al stress during long-term exposure to Al at low phosphate (P) conditions.

PLANT MATERIAL

Arabidopsis thaliana Columbia (Col-0) seeds were obtained from Lehle Seeds and used as wild-type (WT) controls. The T-DNA insertion line npc4 (SALK_046713) used in our experiments was characterized earlier (Wimalasekera et al., 2010). Arabidopsis plants were grown on agar plates containing 2.2 g l−1 1/2 MS basal salts and 1% (w/v) agar (pH 5.8). Seeds were surface sterilized with 30% (v/v) bleach solution for 10 min and rinsed five times with sterile water. To synchronize seed germination, the agar plates were kept for 3 days in the dark at 4 °C. The plants were grown in the vertical position in a growth chamber at 22 °C under long-day conditions (16/8 h light/dark cycle). Tobacco (Nicotiana tabacum cv. Samsun) pollen grains germinated on simple sucrose medium (Pleskot et al., 2012) containing 10% (w/v) sucrose and 0.01% (w/v) boric acid solidified by 0.5% (w/v) agar were used for biolistic transformation.

ASSAYING NON-SPECIFIC PHOSPHOLIPASE C ACTIVITY IN SITU AND IN VITRO

The NPC activity in Arabidopsis seedlings was measured according to Kocourková et al. (2011). Seven-day-old Arabidopsis seedlings (five seedlings for each sample) were transferred from liquid MS solution to 1/8 MS medium containing 10 μM AlCl3, pH 4, and labeled with 0.66 μg ml−1 of fluorescent PC (BODIPY-PC, D-3771, Invitrogen, USA). Seedlings were incubated on an orbital shaker at 23 °C for 2 h. NPC activity in vitro was measured according to Pejchar et al.
(2013) using β-BODIPY-PC (D-3792, Invitrogen, USA). The identification of the spot corresponding to BODIPY-DAG was based on a comparison with the BODIPY-DAG standard prepared as described earlier.

HISTOCHEMICAL β-GLUCURONIDASE STAINING

The construction of promoter:GUS plants was described previously (Wimalasekera et al., 2010). Seeds of pNPC4:GUS were grown on agar plates under the same conditions as described in Section "Plant Material." Ten-day-old seedlings were transferred to a 24-well plate containing 1 ml of 1/8 MS solution with or without 100 μM AlCl3, pH 4, for 24 h. The histochemical GUS assay (Jefferson et al., 1987) was carried out according to Kocourková et al. (2011).

MOLECULAR CLONING, TRANSFORMATIONS, ISOLATION

His:AtNPC4. The AtNPC4 coding sequence was amplified from Arabidopsis Col-0 cDNA with the specific forward primer 5′-CGCGAATTCATGATCGAGACGACCAAA-3′ and the reverse primer 5′-GCCCTCGAGTCAATCATGGCGAATAAAG-3′ by PCR using Phusion DNA polymerase (Finnzymes), digested with XhoI and EcoRI enzymes, and cloned into the pET30a(+) vector (Novagen). The expression vector was transformed into the Escherichia coli strain BL21 and cells were grown overnight at 37 °C. After subculturing into fresh medium, the cells were grown at 16 °C to an OD600 of approximately 0.4, then induced overnight with 0.1 mM isopropyl thio-β-D-galactoside. The cells were harvested by centrifugation (5000 × g, 10 min), resuspended in an assay buffer (50 mM Tris-HCl, pH 7.3; 50 mM NaCl; 5% glycerol; Nakamura et al., 2005), and sonicated after a 10 min treatment with lysozyme (1 mg ml−1). The lysed cell suspension was centrifuged (10000 × g, 10 min) and the supernatant was used for an enzyme activity assay. The western blot analysis was performed under reducing and denaturing conditions with SDS electrophoresis. The 6x His tag was detected with Anti-His HRP Conjugate (Qiagen).

35S::AtNPC4/35S::GFP:AtNPC4. AtNPC4 cloned into the pENTR 223.1 entry vector (Gateway clone G12733, Arabidopsis Biological Resource Center) was recombined by LR reaction into the Gateway binary vector pGWB2 (35S::AtNPC4) or pGWB6 (35S::GFP:AtNPC4) under the control of the CaMV 35S promoter (Nakagawa et al., 2007). Constructs were transferred into Agrobacterium tumefaciens strain GV2260 and used to transform Arabidopsis Col-0 WT plants by the floral dip method (Clough and Bent, 1998). Transformants were selected on agar plates containing 50 μg ml−1 kanamycin and 50 μg ml−1 hygromycin B. Expression levels of NPC4 in 10-day-old T3 seedlings of homozygous lines were measured using quantitative RT-PCR. Lines with the highest expression levels of NPC4 were used in experiments.

Lat52::AtNPC4:YFP. The AtNPC4 coding sequence flanked by NgoMIV and ApaI sites was generated by PCR with Phusion DNA polymerase (Finnzymes) using the specific forward primer 5′-ATAGCCGGCATGATCGAGACGACCAA-3′ and the reverse primer 5′-TATGGGCCCATCATGGCGAATAAAGCA-3′. The amplified product was introduced into the multiple cloning site of the pollen expression vector pHD32. The pHD32 vector (Lat52::MCS::GA5::YFP::NOS; Klahre et al., 2006) was kindly provided by Prof. Benedikt Kost (University of Erlangen-Nuremberg, Erlangen, Germany). This construct allowed the pollen-specific expression and visualization of AtNPC4 protein fusions controlled by the Lat52 promoter (Twell et al., 1991).
The expression vector was transferred into tobacco pollen grains germinating on solid culture medium by particle bombardment using a helium-driven particle delivery system (PDS-1000/He; Bio-Rad, Hercules, CA, USA) as previously described (Kost et al., 1998). Particles were coated with 1 μg DNA.

EVALUATION OF AL EFFECT

To analyze root length, five-day-old seedlings grown on agar (for details see Plant Material) were transferred onto agar plates containing 1/8 MS, pH 4, supplemented with 200 μM AlCl3. After a 9-day incubation, the plates were scanned (Canon CanoScan 8800F) and the root growth was measured using the JMicroVision 1.2.7 software. To measure pollen tube length, tobacco pollen was transiently transformed with AtNPC4:YFP by particle bombardment. After 6 h of germination in the dark, pollen tubes were incubated in liquid simple sucrose medium (pH 5) with or without 50 μM AlCl3 for an additional 2 h. The mean growth rate of pollen tubes expressing AtNPC4:YFP and of the vector-only control was evaluated using the fluorescence microscope Olympus BX-51. To determine the survival rate, 7-day-old seedlings grown on agar (for details see Plant Material) were transferred to 6-well plates with liquid 1/8 Hoagland's solution (Kocourková et al., 2011), pH 4, with 100 μM AlCl3 for 22 days. Pictures of the plates were taken by a Nikon SMZ 1500 zoom stereoscopic microscope coupled to a Nikon DS-5M digital camera. The survival rate was calculated as the number of viable true leaves.

RESULTS

Aluminum ions were described to inhibit the formation of DAG generated by NPC in the tobacco cell line BY-2 and in tobacco pollen tubes. In order to find the NPC isoform that is responsible for the decrease of DAG formation during Al stress, the described biochemical properties and localizations of all NPC isoforms were taken into account (see Introduction for details). Altogether, NPC4 was the first candidate to be investigated.

EXPRESSION OF NPC4 IN ROOT TIPS IS DECREASED DURING AL STRESS

Considering all available data about the NPC gene family and given that the root and PM are the main targets of Al toxicity, we chose NPC4 to study its role in Al stress. Although NPC4 is not the most abundant NPC gene expressed in roots (Peters et al., 2010; Wimalasekera et al., 2010; Pokotylo et al., 2013), it was described as the isoform with the strongest response to abiotic stress in plants (Kocourková et al., 2011). In our previous studies, we have shown that the expression of NPC4, investigated using pNPC4:GUS plants, was largely localized in the root tip (Wimalasekera et al., 2010; Kocourková et al., 2011). In this study, we performed a histochemical analysis of Arabidopsis pNPC4:GUS seedlings treated with AlCl3 to observe changes in the expression pattern of NPC4 during Al stress. GUS staining in both control and Al-treated seedlings was found in the apical meristem and partly in the elongation zone of the main and lateral roots, but the intensity of the GUS staining signal was lower in Al-treated seedlings (Figure 1). These observations suggest that NPC4 expression is decreased during Al stress in Arabidopsis seedlings.

AL-INDUCED INHIBITION OF DIACYLGLYCEROL FORMATION IS ALLEVIATED IN NPC4-OVEREXPRESSING SEEDLINGS

The involvement of NPC4 in Al stress was also examined at the level of its activity. Because we used a different plant model organism than in our previous work, we first tested the reaction of Arabidopsis seedlings to Al stress.
To study changes in the DAG pattern under Al stress, we used the fluorescent derivative of PC (BODIPY-PC) as a phospholipase substrate. When seven-day-old seedlings were treated with different concentrations of AlCl3 in the presence of BODIPY-PC for 2 h, a concentration-dependent inhibiting effect of Al on BODIPY-DAG formation was observed (Pejchar, unpublished), revealing 10 μM AlCl3 as a working concentration for in situ activity measurement (Figure 2). Consequently, we tested the hypothesis that NPC4 is also the targeted isoform at the activity level during Al stress and that it is responsible for the inhibition of DAG formation. Therefore, a stable Arabidopsis line overexpressing NPC4 under the control of the 35S promoter (NPC4-OE) was prepared, and its BODIPY-DAG formation after Al treatment was monitored and compared to that of Al-treated WT seedlings.

FIGURE 2 | BODIPY-diacylglycerol (DAG) production in Arabidopsis seedlings treated with Al for different times. Seven-day-old WT and NPC4-overexpressing seedlings were treated with 10 μM AlCl3 for different time intervals (0, 6, and 12 h) and then incubated with BODIPY-phosphatidylcholine (PC) for 2 h. Lipids were extracted at the time intervals indicated, separated by high-performance thin-layer chromatography and quantified. Each value is related to the control non-treated cells (100%). The plotted values are the means + SEM from three independent experiments with parallel samples. NPC, non-specific phospholipase C.

First, our HP-TLC analysis of the labeled products showed that the trend of the Al-induced BODIPY-DAG inhibition (∼35% of control, non-treated seedlings) in WT seedlings was similar also for prolonged treatments with 10 μM AlCl3 (Figure 2), with a slightly diminished effect of Al after 14 h of treatment (∼47% of control). NPC4-OE seedlings were slightly less sensitive to Al compared to WT when treated for 2 and 8 h. Intriguingly, the overexpression of NPC4 resulted in a more pronounced difference (∼74% compared to ∼47% of control) after 14 h of Al treatment. This suggests that NPC4 is an Al-sensitive NPC isoform at the activity level during Al stress as well.

THE EFFECT OF AL ON NPC4 IS NEITHER DUE TO DIRECT INHIBITION OF NPC4 ENZYME NOR DUE TO NPC4 TRANSLOCATION

To test possible mechanisms that influenced NPC4 activity in Al stress, a heterologously expressed NPC4 protein was prepared (Figure 3A) and incubated with Al in vitro to detect possible direct inhibition of NPC4 by Al. However, β-BODIPY-DAG formation in Al-treated samples was not affected compared to non-treated samples (Figure 3B), indicating that NPC4 is not directly inhibited by Al. Given that the PM is a well-documented cellular target of Al and that NPC4 was described as a PM-localized protein (Nakamura et al., 2005), we studied the possible translocation of NPC4 from the PM during Al treatment, which could cause a decrease in DAG formation. Protein translocation under stress conditions was previously described in plants for another phospholipase type, PLD (Wang et al., 2000; Bargmann et al., 2006). To check this mechanism, stable Arabidopsis transformants harboring the fusion protein GFP:NPC4 were prepared. Seven-day-old seedlings were transferred to a 1% (w/v) sucrose (pH 4.3) solution containing 50 μM AlCl3 and the localization of GFP:NPC4 in roots was investigated with a laser scanning confocal microscope. In control, non-treated seedlings, the PM localization of GFP:NPC4 was detected, confirming previously published results (Nakamura et al., 2005).
The localization of GFP:NPC4 remained unchanged in Al-treated seedlings (Figure 4, upper panels). The same results were obtained for transiently transformed tobacco pollen tubes expressing AtNPC4:YFP under the control of the pollen-specific Lat52 promoter (Figure 4, lower panels), another plant model organism used in this study. These results provide evidence that NPC4 translocation is not a mechanism that induces the DAG decrease during Al stress.

FIGURE 4 | Localization of NPC4 is not changed by Al treatment. Influence of AlCl3 on localization of AtNPC4 was observed in 7-day-old stable Arabidopsis transformants (GFP:AtNPC4, upper panels) and transiently transformed tobacco pollen tubes (AtNPC4:YFP, lower panels) by confocal laser scanning microscopy. Bars, 10 μm. NPC, non-specific phospholipase C.

OVEREXPRESSION OF AtNPC4 PARTIALLY RESTORED GROWTH OF TOBACCO POLLEN TUBES UNDER AL STRESS

Next, a root growth phenotype under Al stress was investigated in Arabidopsis WT, the npc4 knockout line, and NPC4-OE. Five-day-old seedlings grown on agar MS medium were transferred onto agar 1/8 MS medium containing 200 μM AlCl3. The growth of the main root of all lines tested was retarded in the presence of Al. However, the root growth ratio between Al-treated and non-treated seedlings was not different among the examined lines (Figure 5).

FIGURE 5 | Root growth assay of WT, npc4 and NPC4-overexpressing Arabidopsis seedlings treated with Al. Five-day-old seedlings were transferred on agar plates containing 1/8 MS supplemented with AlCl3. After 9 day incubation, the root growth was measured and compared to non-treated controls. The plotted values represent the means (left panel) and ratios of Al-treated/non-treated control seedlings (right panel) + SD from two independent experiments. At least 16 seedlings were measured for each variant. NPC, non-specific phospholipase C.

This could be explained by a possible compensation of NPC4 function in the stable transformants by another member of the lipid signaling enzyme network. To bypass this, we employed transient transformation in the heterologous system of tobacco pollen. Moreover, in our previously published study, DAG was shown to restore the growth inhibition caused by Al treatment in tobacco pollen tubes. To test the possible role of NPC4 in this DAG function, tobacco pollen tubes were transiently transformed with AtNPC4:YFP under the control of the pollen-specific Lat52 promoter and the length of pollen tubes overexpressing AtNPC4:YFP was determined in the presence of Al. In the control pollen tubes overexpressing YFP alone, cytoplasmic YFP localization was found (data not shown) and the mean growth rate was 2.36 ± 0.12 μm min−1 (Figure 6). Al treatment inhibited the growth of control pollen tubes to approximately 15% (0.34 ± 0.03 μm min−1) of that of non-treated cells. Control pollen tubes overexpressing AtNPC4:YFP showed a slightly decreased mean growth rate (2.17 ± 0.09 μm min−1) compared to YFP only. However, in the presence of Al, the mean growth rate of pollen tubes overexpressing AtNPC4:YFP (1.01 ± 0.05 μm min−1) was higher compared to the Al-treated vector-only control (Figure 6). Taken together, these results clearly demonstrate that the role of DAG as a growth activator in Al stress is mediated by NPC4 activity.
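For clarity, the relative growth rates quoted above can be recomputed from the reported means; the short Python sketch below does so. The numbers are the ones given in the text, and the helper function is ours:

```python
# Mean pollen-tube growth rates (um/min) as reported in the text.
rates = {
    ("YFP control", "untreated"): 2.36,
    ("YFP control", "Al-treated"): 0.34,
    ("AtNPC4:YFP", "untreated"): 2.17,
    ("AtNPC4:YFP", "Al-treated"): 1.01,
}

def percent_of_untreated(line):
    """Growth of Al-treated tubes as a percentage of the untreated control."""
    return 100.0 * rates[(line, "Al-treated")] / rates[(line, "untreated")]

for line in ("YFP control", "AtNPC4:YFP"):
    print(f"{line}: {percent_of_untreated(line):.1f}% of untreated rate")
# YFP control: ~14.4% (the "approximately 15%" in the text);
# AtNPC4:YFP: ~46.5%, i.e., partial rescue by AtNPC4 overexpression.
```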
Arabidopsis npc4 SEEDLINGS ARE MORE SENSITIVE TO AL STRESS IN PHOSPHATE DEFICIENCY

Based on the results concerning the role of NPC4 activity in the longer time period of Al treatment (Figure 2), a long-term survival experiment was also performed. Seven-day-old seedlings grown on agar were transferred to liquid 1/8 Hoagland's solution with AlCl3 for 22 days. However, all tested lines (WT, npc4, and NPC4-OE) showed no difference in their survival rate (data not shown). Since NPC4 plays a role in P starvation (Nakamura et al., 2005) and aluminum stress and P deficiency co-exist in acid soils (Ruíz-Herrera and López-Bucio, 2013), the same experiment was repeated under P deficiency conditions. Differences in both root abundance (Figure 7A) and survival rate (Figures 7A,B) were found between WT and npc4 seedlings after Al treatment. Seedlings of npc4 were able to form only a weaker root system and significantly (t-test, p < 0.0001) fewer true leaves compared to WT. This suggests that npc4 seedlings are more sensitive to Al stress at low P conditions. In contrast to this finding, we also saw that NPC4-OE seedlings formed a more abundant root system compared to WT (Figure 7A). However, only a minimal increase in survival rate was found for NPC4-OE (Figures 7A,B). Collectively, these data strongly support the involvement of NPC4 in the response to long-term Al exposure.

FIGURE 6 | Effect of Al on the growth rate of tobacco pollen tubes transiently expressing AtNPC4:YFP. Tobacco pollen was sown on solid germination medium and transiently transformed with AtNPC4:YFP by particle bombardment. After 6 h of germination, pollen tubes were incubated with 50 μM AlCl3 for additional 2 h. Mean growth rate of pollen tubes expressing AtNPC4:YFP or vector-only control was evaluated using a fluorescence microscope. Data shown are from two independent experiments and represent means + SEM. At least 21 pollen tubes were measured for each variant. NPC, non-specific phospholipase C.

DISCUSSION

Several physiologically important cellular processes are affected by Al, the major growth-limiting factor in regions with acid soils. However, the time sequence and the exact mechanism of the processes involved in Al stress are still under investigation. Phospholipases, namely PI-PLC and PLD, have been shown to be affected within minutes as well as over longer time periods after Al treatment (Martínez-Estévez et al., 2003; Ramos-Díaz et al., 2007; Pejchar et al., 2008; Zhao et al., 2011). We previously described that the formation of DAG generated by NPC is rapidly inhibited by Al in the tobacco cell line BY-2 and in tobacco pollen tubes and that Al inhibits the growth of tobacco pollen tubes. These results, together with the fact that the Al-mediated growth arrest can be rescued by an externally added DAG, raised the question of which NPC isoform is Al-targeted in aluminum toxicity. Here we showed that the NPC isoform NPC4 is involved in the response of A. thaliana to Al exposure. We selected Arabidopsis NPC4 as the primary candidate gene based on the expression and localization criteria (see above). The expression analysis using the pNPC4:GUS assay showed that the localization and the intensity of NPC4 expression in the non-treated main root were the same as published previously (Wimalasekera et al., 2010). However, we found stronger GUS staining in non-treated lateral roots (Figure 1). This difference could be explained by the different experimental conditions used as a control for Al treatments (pH 4) or by a small variation in the age of the seedlings used for the GUS assay.
More importantly, Al treatment caused a reduction in GUS staining in both main and lateral roots, suggesting that NPC4 expression is decreased during Al treatment. In addition, pNPC4:GUS expression was confined mainly to the root tip (Figure 1), a plant tissue that was found to be the most Al-responsive part of roots (Panda et al., 2009).

Although we confirmed the generally accepted symptom of Al toxicity, root growth inhibition, and we observed diminished NPC4 expression after Al treatment, we were not able to determine differences between Al-treated WT, npc4, and NPC4-OE Arabidopsis seedlings (Figure 5). We have two hypotheses regarding the lack of differences. First, it is possible that no differences were found because NPC4 is involved in other aspects of the response to Al stress than in the studied root growth phenotype. Alternatively, the lack of differences is caused by the positive or negative compensatory effect of another NPC isoform (or other lipid signaling genes) that may be up- or down-regulated in the npc4 knockout or stable NPC4-OE line, respectively. Similar observations were indeed described in studies dealing with other lipid signaling genes, such as PLD (Bargmann et al., 2009; Johansson et al., 2014) or phospholipase A (Rietz et al., 2010).

All six NPC sequences are highly conserved, with four invariable motifs; however, the C-termini form the most divergent part of the NPC sequences, with distinct lengths and sequence conservation among NPC subfamilies. This may be the part of the molecule responsible for the functional differences of the various NPC isoforms, through facilitating interactions with other proteins or defining protein localization (Pokotylo et al., 2013). Interestingly, while NPC3-5 are found in triplicate in the Arabidopsis genome and members of this subfamily can also be found in other monocot and dicot species, the subfamily seems to be missing in gymnosperms and is also absent in Solanaceae (Potocký, unpublished). Taking advantage of this, we employed a strategy of studying the effect of Al in tobacco pollen tubes heterologously overexpressing AtNPC4:YFP under the control of the strong pollen-specific Lat52 promoter. The growth of tobacco pollen tubes was rapidly arrested by Al, supporting our previous results, and was partially rescued by the overexpression of AtNPC4:YFP (Figure 6). Together with the rescue of Al-mediated pollen growth inhibition by exogenously added DAG, this strongly suggests that NPC4-generated DAG plays a role in the response to Al-mediated toxicity.

Because a different plant model organism than in our previous work was used, we next tested the reaction of Arabidopsis seedlings to Al stress in view of NPC activity. We utilized the same fluorescent derivative of PC as a phospholipase substrate and found a similar Al-mediated NPC-inhibiting effect (Figure 2) as for the tobacco cell line BY-2 and pollen tubes, suggesting that this phenomenon could be conserved across the plant kingdom. Interestingly, the effects on pNPC4:GUS expression and NPC activity in Al-treated plants were in the opposite direction to those described for another abiotic stress that targets the root, salt treatment (Kocourková et al., 2011). To test the hypothesis that NPC4 is responsible for the inhibition of DAG formation during Al stress, we prepared the stable Arabidopsis lines overexpressing NPC4 and compared the ratio of NPC activity in Al-treated/non-treated seedlings to that in WT seedlings.
The differences between WT and NPC4-OE seedlings became more pronounced with time, with the most evident change obtained for the seedlings treated with Al for 14 h (Figure 2), indicating that NPC4 activity is altered by Al gradually. Two possible mechanisms that could be responsible for the decrease of NPC activity were examined. The inhibition of phospholipase activity in vitro in cellular fractions is well documented for different toxic metals (Pokotylo et al., 2014) and for Al as well (Martínez-Estévez et al., 2003; Pejchar et al., 2008). Here, the direct inhibition of the heterologously expressed NPC4 enzyme by Al was tested, with no alteration in activity detected even for high AlCl3 concentrations (Figure 3). The second possible mechanism, enzyme translocation, was previously described in plants under stress conditions for another phospholipase type, PLD (Wang et al., 2000; Bargmann et al., 2006). However, Al treatment had no effect on NPC4 localization in either Arabidopsis seedlings or tobacco pollen tubes (Figure 4). Moreover, AtNPC4:YFP was found on the PM in the subapical region of growing pollen tubes (Figure 4) and thus partially co-localized with Cys1:YFP, which was used as a DAG marker in tobacco pollen tubes (Potocký et al., 2014).

DAG is an important signaling phospholipid in animals, but its signaling role in plant cells is still under debate. Meijer and Munnik (2003) reported that DAG, as a product of PIP2 hydrolysis, is rapidly phosphorylated by DAG kinase to PA, which plays an active role in plant signaling processes. However, a number of studies imply that DAG is likely to act as a signaling molecule in some plant systems, including Arabidopsis seedlings and tobacco pollen tubes [reviewed in Dong et al. (2012)]. DAG is also known to be important in the structure and dynamics of biological membranes, where it can influence membrane curvature and induce unstable, asymmetric regions in membrane bilayers important for membrane fusion processes (Carrasco and Mérida, 2007; Haucke and Di Paolo, 2007), events that occur in many physiological processes, such as exocytosis, endocytosis, membrane biogenesis, and cell division. Moreover, Al-mediated inhibition of root growth was found to be connected with the inhibition of endocytosis (Illéš et al., 2006; Krtková et al., 2012) and controlled through local auxin biosynthesis and signaling (Shen et al., 2008; Yang et al., 2014). Notably, the expression of NPC4 was increased and npc4 seedlings exhibited a shorter primary root and a lower density of lateral roots after auxin treatment (Wimalasekera et al., 2010). Thus, it is worthwhile to note that the inhibition of DAG formation during Al stress might affect the mentioned processes and rapidly inhibit growth. On the other hand, our long-term experiment revealed that npc4 seedlings were more sensitive to Al stress while NPC4-OE seedlings formed a more abundant root system (Figure 7). This suggests that NPC4/DAG functions differently in the long term, more likely participating in lipid turnover and membrane remodeling, respectively. In summary, our results suggest that the previously described involvement of NPC in the response to Al stress is mediated by NPC4 in A. thaliana.

ACKNOWLEDGMENTS

This work was supported by the Czech Science Foundation (GACR) grant no. P501/12/P950 to P.P. The authors thank Daniela Kocourková and Kateřina Raková for their excellent technical assistance.
Downscaling Land Surface Temperature in Complex Regions by Using Multiple Scale Factors with Adaptive Thresholds

Many downscaling algorithms have been proposed to address the issue of coarse-resolution land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled by a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window. CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using the LST derived from four Landsat 8 thermal imageries of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with a coefficient of determination and a root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, our approach shows similar accuracy and availability in all seasons. The best (worst) availability occurred in the region of vegetation (water). Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and the application of our model at middle and low spatial resolutions.

Introduction

As an important parameter for characterizing the balance of surface energy, land surface temperature (LST) serves a key function in biophysical-chemical processes [1] and has been widely used in common applications, such as soil moisture estimation [2,3], forest fire detection [4], and urban heat environment monitoring [5-7]. Thermal infrared remote sensing (TIRS) can detect surface temperature and describe the spatial differences and diversity in LST [8] dynamically and macroscopically. A large amount of thermal remote-sensing data, including those from NOAA/AVHRR, Landsat TM/ETM+, MODIS, ASTER, and Landsat 8 TIRS, have been used to retrieve LST. Landsat satellites are frequently used in LST retrieval because of their high spatial resolution and the wide availability of the data to the public; however, the LST retrieved from Landsat data is usually a mixed-pixel temperature. Moreover, urban surfaces are characterized by high heterogeneity [5]. In this case, the LST retrieved from satellite-borne sensors has an insufficient spatial resolution for some urban applications. Downscaling may be applied to enhance the spatial resolution of thermal images with relatively low resolution [9]. The approach proposed here targets such mixed areas and adaptively selects the scale factors within a moving window based on the Pearson's correlation coefficients between LST and the scale factors. The rest of this paper is organized as follows: Section 2 discusses the proposed method. Section 3 presents the study area and data. Section 4 evaluates the downscaling results. Section 5 discusses the findings. Section 6 concludes the paper.

Downscaling Methods

LST can be retrieved using thermal infrared images with coarse spatial resolutions. Regression models between ancillary environmental predictors and LST have been widely established to enhance LST resolution.
If the relationships between LST and the predictors do not change with the variation in the spatial resolution, a detailed LST with a high resolution can be estimated by the predictors using such relationships. The four scale factors and the LST image with coarse resolution are regressed into the model given by:

LST_F = a·SAVI_C + b·NMDI_C + c·MNDWI_C + d·NDBI_C, (1)

where SAVI_C, NMDI_C, MNDWI_C, NDBI_C, and LST_F are the SAVI, NMDI, MNDWI, NDBI, and the fitted coarse-resolution LST, respectively. The subscript "C" indicates the variable at the coarse resolution, and the subscript "F" refers to the variable fitted by the others. The coefficients a, b, c, and d change with the moving window. Owing to the sparse ground observations and the limited spatial representativeness of ground measurements, the LST retrieved using TIRS images was used as the reference LST, considering the reliable accuracy of the retrieval, which has an average bias of less than 1 °C [46]. The residual temperature (e) became the difference between the retrieved LST (LST_R) and the LST_F of Equation (1). This difference was due to the spatial variability in LST/SAVI and LST/NDBI:

e = LST_R − LST_F. (2)

Therefore, from the coarse-resolution LST, the simulated LST with coarse resolution (LST_C) could be estimated as

LST_C = a·SAVI_C + b·NMDI_C + c·MNDWI_C + d·NDBI_C + e. (3)

Owing to the scale invariance, the relationships between LST and the scale factors at coarse resolutions were applied to the four scale factors with high resolutions. Subsequently, a simulated LST with a high resolution (LST_H) is obtained, which is given by:

LST_H = a·SAVI_H + b·NMDI_H + c·MNDWI_H + d·NDBI_H + e, (4)

where SAVI_H, NDBI_H, MNDWI_H, and NMDI_H are the SAVI, NDBI, MNDWI, and NMDI at high resolutions, respectively. For convenience, LST_H (LST_C) is regarded as the downscaled (simulated) LST, whereas LST_R is regarded as the retrieved LST.

The given relationships were fitted by all scale factors because of mixed pixels. Nevertheless, the heterogeneity within a pixel decreases as the spatial resolution becomes finer. Therefore, in the images with high resolutions, not all the scale factors were used in this study for the regression in every pixel. A multi-scale-factor downscaling approach based on adaptive threshold (MSFAT) was developed to solve this problem. Compared with other traditional approaches, the developed approach did not involve all scale factors in fitting the regression model. Furthermore, the actual cover types were considered in the selection of scale factors in our approach. CCs were used to compute automatically the importance scores of the four scale factors, and CC thresholds were estimated to determine which scale factors would be involved in fitting the regression model. The estimation process of the CC threshold is as follows (Figure 1). First, the CC between each scale factor and LST was calculated within every moving window until the entire image was scanned. Second, the CCs of every scale factor were sorted into several levels (the number of windows) in ascending order. Third, at a given level, the scale factors were selected within every moving window according to the CC level, and the multiple regression model was fitted using the selected scale factors (Equation (4)). Fourth, the simulated LST with a high resolution at every level was evaluated using some evaluation measures. Fifth, the CC threshold at the optimal level for every scale factor was determined using the evaluation measures.
Finally, assuming that the relationships between LST and the scale factors did not change with the scale, the pixels whose CCs were higher than the threshold values were downscaled with the corresponding multiple linear regression model (Equation (4)), whereas the LSTs of the other pixels were downscaled using the most relevant scale factors in the linear simulation. Therefore, in our downscaling method, not all land cover types were involved in the regression equation fitting; only the scale factors of the main land cover types within a moving window were involved in the regression fitting. The correlation threshold of each scale factor was estimated to determine which scale factors should be involved in the downscaling model within a moving window.
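To make the window-wise selection concrete, the following Python/NumPy sketch implements one possible reading of the MSFAT step for a single 5 × 5 coarse window: factors are screened by their CC against the window's LST, an ordinary least-squares fit supplies the coefficients a-d, and the residual e is added back after applying the fit to the high-resolution factors (Equations (2)-(4)). The array layout, the fallback rule, and the residual upsampling by replication are our assumptions, not a prescription from the paper.

```python
import numpy as np

def msfat_window(lst_c, factors_c, factors_h, cc_thresholds, scale=4):
    """One 5x5 coarse window of the MSFAT scheme (illustrative sketch).

    lst_c         : (5, 5) coarse-resolution LST within the moving window
    factors_c     : dict name -> (5, 5) coarse-resolution scale factor
    factors_h     : dict name -> (20, 20) high-resolution scale factor
    cc_thresholds : dict name -> CC threshold for that factor
    """
    y = lst_c.ravel()
    # Keep only the factors whose |CC| with LST exceeds their threshold.
    selected = [name for name, f in factors_c.items()
                if abs(np.corrcoef(f.ravel(), y)[0, 1]) >= cc_thresholds[name]]
    if not selected:
        # Fall back to the single most correlated factor (linear simulation).
        selected = [max(factors_c,
                        key=lambda n: abs(np.corrcoef(factors_c[n].ravel(), y)[0, 1]))]
    X = np.column_stack([factors_c[n].ravel() for n in selected])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)        # coefficients a, b, c, d
    resid = (y - X @ coef).reshape(lst_c.shape)         # residual e, Eq. (2)
    # Apply the coarse-scale relationship to the high-resolution factors
    # and add the residual, upsampled by simple replication (Eq. (4)).
    Xh = np.stack([factors_h[n] for n in selected])
    lst_h = np.tensordot(coef, Xh, axes=1) + np.kron(resid, np.ones((scale, scale)))
    return lst_h
```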
Determination of the Moving Window Size

In the calculations of the regression models and CCs, the moving window (operation window) used in downscaling determines the accuracy of the estimated subpixel temperature. The moving window is also directly related to the complexity of the downscaling operations. Criteria for selecting an appropriate MWS according to the types and characteristics of the land cover in the study area should therefore be determined. Meanwhile, the semivariance function is an important tool for understanding the spatial structure of local areas [26,27,35,47-49]. In our study, the semivariance function uses variable ranges to represent the spatial variations in the four scale factors over the entire region. A total of eight values of the variable range, R, in the horizontal and vertical directions for the four scale factors are calculated from the semivariance [50]:

γ(h) = (1 / (2m)) Σ_{i=1..m} [Z(x_i) − Z(x_i + h)]²,

where R is the variable range of a scale factor; i = 1, ..., m, where m is the number of all the pixels used for comparison; x_i is the location of a pixel; Z(x_i) is the value of the pixel; and h, which varies from 0 to 300, is the step length of the curve fit. Subsequently, R can be determined as the h at which the semivariance fitting curve stabilizes, and R can be regarded as the MWS.

Evaluation Measures

Two measures, namely, the coefficient of determination (R²) and the root-mean-square error (RMSE) [19,35], were used to evaluate the downscaling effect of the MSFAT algorithm and to compare the proposed algorithm with three other downscaling methods. In the equation below, R² is the coefficient of determination between the original and downscaled images. A high R² indicates a satisfactory downscaling. This coefficient is given by:

R² = 1 − Σ(LST_S − LST_R)² / Σ(LST_R − mean(LST_R))²,

where LST_S is the simulated LST (Equations (3) and (4)), LST_R is the retrieved LST with the same number of pixels as LST_S, and mean(LST_R) is the average of LST_R over the entire image. Meanwhile, the RMSE was used to test the errors between the original LST image and the downscaled image. The calculation formula for the RMSE is given by

RMSE = sqrt( (1 / (M·N)) Σ_{i=1..M} Σ_{j=1..N} (LST_S(i,j) − LST_R(i,j))² ),

where M and N represent the number of rows and columns of the image, respectively. When the threshold values of the scale factors are set too high, no scale factors can be fitted, and some pixels cannot be included in the simulation of LST. Therefore, the number of pixels in the solution of Equation (3) was used as the index of algorithm availability.
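Under the definitions above, the two evaluation measures and the availability index can be computed in a few lines; this is a plain NumPy sketch with our own array-naming conventions, assuming unfitted pixels are marked as NaN:

```python
import numpy as np

def evaluate(lst_sim, lst_ret):
    """R^2, RMSE, and availability of a simulated LST against the retrieved LST."""
    valid = ~np.isnan(lst_sim)                  # pixels the model could fit
    s, r = lst_sim[valid], lst_ret[valid]
    ss_res = np.sum((s - r) ** 2)
    ss_tot = np.sum((r - r.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                  # coefficient of determination
    rmse = np.sqrt(np.mean((s - r) ** 2))       # root-mean-square error
    availability = valid.sum() / valid.size     # fraction of fitted pixels
    return r2, rmse, availability
```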
Study Area and Data Description

Nanjing (31°14′–32°37′ N, 118°22′–119°14′ E) is the capital of Jiangsu Province and lies in one of the largest economic zones in China, the Yangtze River Delta [51]. Nanjing covers seven districts and a total area of 6587 km². It has a humid subtropical climate, which is influenced by the East Asian monsoon. The annual average rainfall and air temperature in Nanjing are 979 mm and 15.9 °C, respectively. July and August are the hottest and most humid months, in which the average maximum air temperature is 32 °C [52]. Accordingly, LST peaks in the hot summer. The experimental area is a subset of Nanjing City (Figure 2) based on the land use and land cover map of 2010. The area, which is characterized by heterogeneous urban landscape patterns, has four main land cover types, namely, water, vegetation, bare soil (Yangtze River beach and idle lands), and impervious surfaces.

The Landsat 8 Operational Land Imager (OLI) and TIRS images of Nanjing City were acquired on 11 August 2013 and then used in this study. The Landsat 8 datasets, which were provided by the United States Geological Survey, included OLI and TIRS images with 30 and 100 m spatial resolutions, respectively [53]. At the retrieval moment, the air temperature, humidity, pressure, visibility, wind direction, and wind speed were 38.0 °C, 44%, 1008 hPa, 12 km, north, and 2.0 m/s, respectively. In addition to the summer image, clear-sky images were also acquired on 28 March 2016, 14 October 2013, and 20 December 2014 to reveal the availability of our approach in the other seasons (spring, autumn, and winter). The four main land cover types were identified using the high-resolution images (Figure 3); these were classified by maximum likelihood classification [54]. The accuracy of the classification method was evaluated by comparison with the field survey data. With a kappa coefficient of 0.915, the maximum likelihood classification thus achieved a high classification accuracy. The most dominant land cover type is impervious surface, followed by vegetation and water. No obvious law governs the spatial distribution of the four land cover types.

Data Processing

The data processing in this study was divided into three parts, namely, data preprocessing, downscaling processing, and simulation validation. Data preprocessing aims to unify the image resolution. Downscaling processing executes the MSFAT algorithm, and simulation validation evaluates MSFAT. In the data preprocessing, the OLI images were initially adjusted with the Fast Line-of-sight Atmospheric Analysis of Hypercubes (FLAASH) atmospheric correction algorithm. Owing to the unavailability of a validation reference for the LST simulation, the OLI and TIRS images were upscaled to ensure that the LST simulation using Equation (4) could be validated by the LST retrieved from the TIRS images at the original resolution (Figure 4). For convenience, the TIRS images with 100 m resolution were resampled into 90 m images by the nearest neighbor method, whereas the OLI images with 30 m resolution were resampled into 90 m images by aggregation [55]. The 90 m OLI and TIRS images were also resampled by aggregation into 360 m images.
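As an illustration of this preprocessing, the sketch below performs block-mean aggregation (30 m → 90 m → 360 m) and nearest-neighbour resampling (100 m → 90 m) with plain NumPy; the use of NumPy rather than a GIS package, and the function names, are our choices:

```python
import numpy as np

def aggregate(image, block):
    """Upscale by averaging non-overlapping block x block windows (30 m -> 90 m, etc.)."""
    h, w = image.shape
    h, w = h - h % block, w - w % block          # trim to a whole number of blocks
    return image[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def nearest(image, src_res, dst_res):
    """Nearest-neighbour resampling (e.g., 100 m TIRS grid -> 90 m grid)."""
    h, w = image.shape
    rows = np.clip((np.arange(int(h * src_res / dst_res)) * dst_res / src_res).astype(int), 0, h - 1)
    cols = np.clip((np.arange(int(w * src_res / dst_res)) * dst_res / src_res).astype(int), 0, w - 1)
    return image[np.ix_(rows, cols)]

# Example usage (array names are hypothetical):
# oli_90 = aggregate(oli_30, 3); oli_360 = aggregate(oli_90, 4)
# tirs_90 = nearest(tirs_100, src_res=100, dst_res=90)
```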
The 90 m OLI and TIRS images were the high-resolution images, whereas the 360 m OLI and TIRS images were the coarse-resolution images. The coarse-resolution images were used to construct the relationship model in Equation (3), whereas the 90 m OLI images were used to simulate the 90 m LST in Equation (4). The 90 m retrieved LST was used to validate the 90 m simulated LST. Subsequently, the four scale factors with 90 m (360 m) resolution were estimated from the 90 m (360 m) OLI images, whereas the LST with 90 m (360 m) resolution was retrieved from band 10 of Landsat 8 TIRS by using the generalized single-channel method in consideration of the effect of stray light [56]. It is worth noting that the final resolution of the downscaled result is 90 m.

In the downscaling process, the moving window was first adopted to explore the relationships between LST and the scale factors, and its size was estimated using the semivariance curve. Next, the relevant scale factors were selected, and the 360 m retrieved LST was downscaled to the 90 m simulated LST following the approach in Section 2.1.

In the simulation validation, the simulated downscaled 90 m LST image was evaluated and compared with the retrieved 90 m LST. The downscaling accuracy and the spatial distribution of the simulation error were subsequently determined.
Spatial Distribution of LST and Scale Factors

The four scale factors, namely, SAVI, NMDI, MNDWI, and NDBI, were extracted from the OLI image (Figure 5). A comparison of Figures 2 and 4 shows that the spatial distributions of the four scale factors and the four types of land cover (vegetation, soil, water, and impervious surface) were consistent. Thus, the four scale factors can accurately characterize the four types of land cover. The Yangzi River, flowing through the northwestern part of the study area, exhibited an MNDWI higher than 0.8, thus indicating a water area. A similar MNDWI was found in the southwest area, corresponding to the Xuanwu Lake. In the southern part of the region, an area with a SAVI of more than 1.0 was located in the Zijing Mountain with its dense trees. Several building zones with an NDBI of more than 0 were sporadically distributed in the northern part. Furthermore, mixed land covers occupied the other pixels of the study area.

The distribution of LST (90 m retrieved values) is presented in Figure 6a. The average temperature in the study area was 37.2 °C. The lowest temperature (approximately 30 °C) was detected in the Yangzi River and the Xuanwu Lake, which had high MNDWIs. Relatively low temperatures (32–34 °C) were also recorded in the Zijing Mountain, which had a high SAVI, whereas the highest temperature (higher than 40 °C) was sporadically located in the northern industrial zones with high NDBI.
The LST distribution was evidently related to the scale factors. The temperature distribution matched those of the scale factors. The lowest temperature corresponded to the high MNDWIs in some western water regions (the Yangzi River and the Xuanwu Lake), whereas the highest temperature corresponded to the high NDBI in the northern building region. A low temperature was related to the high SAVI in the southern forest region (the Zijing Mountain).

Figure 6b shows the 360 m retrieved LST, whose distribution is similar to that of the 90 m retrieved LST. However, detailed LST information cannot be provided at the 360 m resolution, particularly for urban areas. Thus, the downscaling approach should be applied because of the absence of high-resolution LST.

Analysis of the MWS

The R value of each scale factor in the horizontal and vertical directions was estimated using the exponential model of semivariance. As shown in Figure 7, the R values of the four scale factors were similar in the vertical and horizontal directions. When the step length was approximately 5, horizontally or vertically, the fitting curve of the semivariance function stabilized, as indicated by the red line. The R values of every scale factor in the horizontal and vertical directions were closely related to the spatial distribution characteristics. Therefore, the average of the eight R values in the vertical and horizontal directions, which was 5, was used to determine the MWS, which was 5 × 5 pixels in this study.
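A sketch of how such a range can be estimated numerically is given below: it computes a directional experimental semivariogram and returns the first lag at which the curve flattens. The paper fits an exponential semivariance model instead; the flatness tolerance used here is our own simplification.

```python
import numpy as np

def semivariance(z, h, axis=0):
    """Experimental semivariogram of image z at integer lag h along one axis."""
    a = np.moveaxis(z, axis, 0)
    d = a[h:] - a[:-h]                           # Z(x_i + h) - Z(x_i)
    return 0.5 * np.nanmean(d ** 2)

def estimate_range(z, max_lag=300, tol=0.01, axis=0):
    """Smallest lag at which gamma(h) stops increasing appreciably (the range R)."""
    gamma = np.array([semivariance(z, h, axis) for h in range(1, max_lag + 1)])
    sill = gamma.max()
    for h, g in enumerate(gamma[:-1], start=1):
        if abs(gamma[h] - g) < tol * sill:       # curve has flattened at lag h
            return h
    return max_lag
```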
CC Threshold Value

The CC threshold values were estimated to select the major scale factors that fit the regression model within a moving window. The CCs were sorted into 1545 levels for every scale factor, from the smallest to the largest in the study area (Figure 8). The number of levels depended on the number of moving windows in the entire 360 m image. Only the scale factors with CCs higher than the corresponding CC thresholds were considered in fitting the downscaled regression model. Clearly, the higher the CC threshold is, the fewer the pixels that can be fitted. Consequently, the pixels with low CCs for every scale factor cannot be simulated. The 90 m LST was estimated using the major scale factors, which were selected on the basis of the CC thresholds. The downscaling result is shown in Figure 9.

The values of R² and RMSE (90 m retrieved LST versus the simulated LST) as well as the number of fitted pixels were used to determine the CC thresholds. Missing information (black pixels) existed because the CC thresholds exceeded the CCs of the scale factors in these pixels. When the CC threshold increased, R² also increased, with decreasing RMSE and a decreasing number of available pixels. A low CC threshold resulted in a low accuracy of the simulated LST and a large number of available pixels, whereas a high CC threshold resulted in a high accuracy of the simulated LST and a small number of available pixels. The evaluation measures, R², RMSE, and the number of available pixels, varied stably at level 440, where the second derivative of the curves for the measures was approximately 0, without dramatic variation. At this level, the average R² was relatively large (i.e., 0.87), the RMSEs had a low average value (i.e., 1.15 °C), and more than 95% of the pixels were involved in the fitting. This means that the accuracy was relatively unsatisfactory below this level, whereas the number of available pixels decreased sharply above it. The fitting information of LST was lost in most of the areas with high CC thresholds (Figure 8c,d).
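The level search described here can be sketched as follows, assuming the per-level R², RMSE, and available-pixel counts have already been computed; the normalization and the tolerance eps are our assumptions for "second derivative approximately 0":

```python
import numpy as np

def pick_stable_level(r2, rmse, n_pixels, eps=1e-3):
    """Return the first CC level at which all three evaluation curves vary stably.

    r2, rmse, n_pixels : 1-D arrays indexed by CC threshold level
    """
    curves = [np.asarray(c, dtype=float) for c in (r2, rmse, n_pixels)]
    # Normalize so one tolerance applies to all curves, then use the
    # discrete second difference as a flatness measure.
    second = [np.abs(np.diff(c / np.abs(c).max(), n=2)) for c in curves]
    stable = np.all([s < eps for s in second], axis=0)
    levels = np.nonzero(stable)[0]
    # Index i of the second difference corresponds to level i + 1 of the arrays.
    return int(levels[0]) + 1 if levels.size else None
```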
For example, we selected four typical CC threshold levels (0, 440, 810, and 1545) to present the discrepancies in the evaluation measures at different CC levels, because they correspond to the beginning, the stable point, the point of dramatic variation, and the final level, respectively. As shown in Table 1, when the CC threshold level increased from 0 to 440, 810, and 1545, R² increased from 0.84 to 0.87, 0.88, and 0.92, respectively, whereas the RMSE and the number of available pixels decreased correspondingly. Furthermore, the CCs at level 440 implied that the downscaled result of the MSFAT algorithm was the best. The corresponding optimal thresholds of SAVI, NMDI, MNDWI, and NDBI were 0.623, 0.773, 0.311, and 0.775, respectively. At these thresholds, most of the pixels could be involved in the downscaling process. In this case, the missing pixels were mainly located in the Zijing Mountain area, and their low CCs are probably related to the influence of uneven topography. The LST in the few non-involved pixels had to be downscaled using most of the relevant scale factors because of the weak correlations of LST with the scale factors. Therefore, within every moving window, the relevant scale factors were selected, and the multiple linear regression model (Equation (3)) was established according to the respective thresholds of SAVI (0.623), NMDI (0.773), MNDWI (0.311), and NDBI (0.775); a minimal sketch of this per-window fit is given below.
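The sketch below illustrates the per-window fit (ours, not the authors' code). It assumes Equation (3) is an ordinary multiple linear regression of coarse LST on the selected scale factors, applied afterwards to the fine-resolution factors of the same window; all array names are hypothetical.

```python
import numpy as np

def msfat_window(lst_coarse, factors_coarse, factors_fine, thresholds):
    """Adaptive per-window downscaling: keep only factors whose |CC| with
    LST exceeds the factor's threshold, fit LST = a0 + sum(a_k * f_k) on
    the coarse grid, then apply the model to the fine-resolution factors."""
    y = lst_coarse.ravel()
    names = [n for n, f in factors_coarse.items()
             if abs(np.corrcoef(y, f.ravel())[0, 1]) > thresholds[n]]
    if not names:
        return None  # no factor passes; pixel handled separately
    X = np.column_stack([np.ones_like(y)]
                        + [factors_coarse[n].ravel() for n in names])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    shape = factors_fine[names[0]].shape
    Xf = np.column_stack([np.ones(shape[0] * shape[1])]
                         + [factors_fine[n].ravel() for n in names])
    return (Xf @ coef).reshape(shape)
```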
Downscaling Performance

After fitting the most relevant scale factors in each pixel, the final simulated result was obtained (Figure 10a). Compared with Figure 9b, Figure 10a supplies the previously missing information, recovering the LST in the mountain area. The average simulated temperature in the study area was 37.0 °C. A comparison of Figure 10a with Figure 5b shows that our downscaling method obviously improved the spatial resolution of the original LST image, especially in the northern region, in which high LSTs are indicated in red and yellow, corresponding to the industrial zones and the mixed areas, respectively. Our simulated LST image could also identify the bridge above the Yangzi River and the four islands in the Xuanwu Lake. The distribution of LSTs in Figure 10a is similar to those in the simulated 90 m image (Figure 9b) and the retrieved 90 m image (Figure 6a), with the lowest, relatively low, and highest temperatures detected in the water, vegetation, and building areas, respectively. Therefore, our 90 m downscaled LST showed spatial reliability and provided more detailed information than the 90 m fitting LST in Figure 9b and the 360 m retrieved LST in Figure 6b.

Validation of the Downscaling Results

Compared with the 90 m retrieved LST, the 90 m simulated LST had a relatively satisfactory accuracy for the entire image, with pixel-average R² and RMSE of 0.87 and 1.13 °C, respectively (Figure 11a). The pixels with LST errors of −1.0 °C to 1.0 °C, −2.0 °C to −1.0 °C, 1.0 °C to 2.0 °C, lower than −2.0 °C, and higher than 2.0 °C accounted for 73%, 6%, 14%, 2%, and 5% of all the pixels, respectively (Table 2, which lists the error probability of the downscaled LST for our approach in all seasons). In most of the pixels, the discrepancies between the retrieved and simulated LSTs were less than 1 °C and within the scope of the retrieval accuracy [46]. Thus, reliable downscaling results were obtained in most parts of the area.

As shown in Figure 12, a systematic underestimation occurred in the region with a low temperature of approximately 30 °C, specifically in the locality near the shores of the Yangzi River. This phenomenon may have been induced by the improper recognition of this region, such that the land–water mixed shores might have been directly mistaken for water. In addition, there were a small number of pixels with LST overestimation in the industrial zone in the northern part of the city, where dense tall factory buildings are located. This outcome might have resulted from the inaccurate estimation of land surface emissivity in the area; accordingly, the reliability of the retrieved LST was reduced [46]. To reveal the overall accuracy of the downscaling result, we also analyzed its accuracy for the different surface types. The RMSEs of the results in the regions of water, vegetation, impervious surface, and bare soil were 1.18 °C, 0.83 °C, 1.08 °C, and 1.10 °C, respectively. Our results thus demonstrate higher accuracy in the vegetation region than in the water areas. These pixel-wise measures can be computed as sketched below.
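A minimal sketch of the validation measures follows (ours, not the authors' code; note that the paper's pixel-average R² is window-based, whereas this sketch uses a single image-wide R² for brevity).

```python
import numpy as np

def validation_stats(lst_retrieved, lst_simulated):
    """RMSE (°C), image-wide R^2, and error-band percentages."""
    err = lst_simulated - lst_retrieved
    rmse = float(np.sqrt(np.mean(err ** 2)))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((lst_retrieved - lst_retrieved.mean()) ** 2)
    r2 = float(1.0 - ss_res / ss_tot)
    bands = {
        "-1..1":  np.mean((err >= -1.0) & (err <= 1.0)),
        "-2..-1": np.mean((err >= -2.0) & (err < -1.0)),
        "1..2":   np.mean((err > 1.0) & (err <= 2.0)),
        "<-2":    np.mean(err < -2.0),
        ">2":     np.mean(err > 2.0),
    }
    return rmse, r2, {k: round(100.0 * float(v), 1) for k, v in bands.items()}
```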
In addition, at the Ruijin site, the ground observation of LST was 35.02 °C. The retrieved LST and the downscaling result for the pixel located at the site were 34.05 °C and 34.49 °C, respectively. Compared with the ground observation, the error of the downscaling result was 0.53 °C, which was within the accuracy of LST retrieval. Therefore, the direct validation also confirms the applicability of our approach. In summary, the 90 m downscaled LST proved to be reliable (with a bias of less than 1 °C) in approximately three-quarters of the area, except for the shores of the Yangzi River and the industrial zone in the northeast region. For the entire area, the pixel-average R² and RMSE reached 0.87 and 1.13 °C, respectively.

Comparison of Approaches

As shown in Figure 13, all downscaling methods obviously improved the spatial resolution of the original LST image (Figure 6b). Some detailed information within the same land cover was found in the downscaled images (Figure 7a,c,e) that cannot be found in the original image (Figure 5b). The downscaled LST images maintain the thermal and spatial distribution characteristics of the original LST image. Relative to the 90 m retrieved LST and excluding the water area, the downscaling results of the DisTrad and TsHARP approaches had R² (RMSE) values of 0.86 (1.01 °C) and 0.82 (1.14 °C), respectively, whereas the value was 0.85 (1.04 °C) for our approach. Hence, the accuracy of our approach proved to be better than that of the TsHARP approach and similar to that of the DisTrad approach. In detail, most errors of the three methods ranged from −1 °C to 1 °C. Most of the errors were less than 1 °C for the MSFAT algorithm, whereas there were more errors greater than 1 °C for the TsHARP method (Table 3). The DisTrad method had the fewest errors of more than 3 °C but fewer errors in the range of −1 °C to 1 °C compared with MSFAT. The accuracy of our approach ranged between those of the DisTrad and TsHARP approaches in the regions of vegetation and impervious surface. The accuracies of these three approaches are similar in the region of bare soil.
It is worth noting that our approach can downscale the LST in the water area, whereas the two other approaches cannot.

Availability of MSFAT in Different Seasons

In addition to the summer case discussed above, the downscaling results of the MSFAT algorithm in the other three seasons are shown in Figure 10. Compared with the 90 m retrieved LST, the 90 m simulated LST had a relatively satisfactory accuracy for the entire image, with pixel-average R² (RMSE) of 0.86 (1.31 °C), 0.83 (1.28 °C), and 0.63 (0.91 °C) in the spring, autumn, and winter, respectively (Figure 11); the corresponding values in the summer were 0.87 and 1.13 °C. Obviously, the MSFAT algorithm had better downscaling capability in the summer than in the three other seasons, whereas winter seemed to be the least compatible with our proposed approach. This phenomenon is probably related to the LST itself: a higher LST is always accompanied by better downscaling capability, and when the LST is too low, ice and snow on the surface generally affect the scale factors and reduce the accuracy of our approach to some degree. In detail, the accuracy of our approach was best in the region of vegetation in all seasons, with RMSEs of 0.73 °C-0.94 °C (Figure 12). In the region of water, the R² (our result versus the retrieved LST) in the winter (0.78) was obviously higher than those in the other seasons (0.22-0.32), which may be related to the greater number of pure pixels in the regions of dried-up water shores; the opposite situation for RMSE appeared in the regions of impervious surface and bare soil. Generally, the MSFAT algorithm can be applied in all seasons, especially in the summer, although more careful application is required in the winter. Meanwhile, the best (worst) applicability occurred in the region of vegetation (water) in all seasons.

Discussion

LST information derived from TIR images is essential in ecology, meteorology, and hydrology research [57-59]. Many applications in urban ecology require high-resolution TIR remote sensing data, and downscaling facilitates the acquisition of such data [60,61]. The primary objective of this study is to develop an adaptive selection approach for the relevant scale factors to downscale LST maps in heterogeneous regions. In view of this objective, the usefulness of MSFAT should be evaluated, and the results of the Landsat 8 downscaling experiments indicated the effectiveness of the method. In this study, the fine-scale predicted LSTs obtained by the MSFAT algorithm are compared with the LST retrieved from the original TIRS images; the evaluation results showed relatively satisfactory RMSE and R² statistics. Unlike in other linear regression approaches to LST downscaling, only the scale factors with CCs higher than the thresholds are involved, and the appropriate scale factors are selected adaptively in our regression.
In other approaches, all the scale factors are considered without setting thresholds; as a result, all scale factors within each moving window are included in the regression model [62,63]. A poor downscaling effect is normally attained in pixels containing a single or only a few land-cover types if all scale factors are involved in the regression. In still other approaches, only a single scale factor is considered; as a result, mixed areas with numerous types of land cover cannot attain a satisfactory downscaling effect [16-18]. Therefore, the MSFAT algorithm has the advantages of multiple scale factors, adaptive selection of factors, and satisfactory accuracy.

However, our algorithm also has some limitations in LST downscaling in urban areas, especially those containing dense tall building blocks and river shores. The limitation in regions of dense building blocks is partly related to the insensitivity of the NDBI in these areas [45]; in reality, the unsatisfactory performance could be ascribed to the underestimation (overestimation) of the land surface emissivity (temperature) estimation. Meanwhile, the limitation in the region of river shores (near the northwest water body) can mainly be ascribed to the various mixed pixels existing in the inundated region. The dense tall building blocks and river shores occupy only a few areas in ordinary urban regions. Thus, despite certain limitations, MSFAT remains an effective approach to rapidly and accurately downscaling LST in urban mixed areas because it uses adaptive thresholds and multiple scale factors. In addition, MSFAT is less reliable in regions with uneven topography: in regions with wavy terrain, other scale factors that can represent terrain characteristics may have to be integrated into the MSFAT algorithm, and the inability to fit the Zijing Mountain implies the limitation of MSFAT in mountainous areas. Furthermore, edge effects in the downscaled LST image are apparent in the overlapping parts of the moving windows, because different regression models exist in the various moving windows. The edge effects may be reduced if the step size of the moving window is increased. Therefore, reliable LST downscaling in building and shore areas as well as in uneven regions will be our future goal.

Meteorological conditions differ significantly between seasons, accompanied by drastic variation in the state of the underlying surface: vegetation grows luxuriantly in the summer, water freezes in the winter, and the soil water content fluctuates between the rainy and dry seasons. These variations possibly influence the seasonal differences in the applicability of MSFAT. Thus, the relationships between underlying surface variations (meteorological conditions) and the applicability of MSFAT will be researched in the future. LST downscaling at middle and high spatial resolutions has been realized using our algorithm. However, limited by the low temporal resolution of the downscaled LST images and the influence of clouds, the images are unsuitable for dynamic surface heat island analysis [47,63]. Thus, LST downscaling at middle and low spatial resolutions as well as high temporal resolution is necessary for estimating daily LST variation continuously [19,64]. The combination of LST downscaling at various resolutions contributes to the accurate monitoring of regional thermal environments [65].
Conclusions

This paper presents a strategy for downscaling LST in an area with various land-cover types by using four scale factors, which are adaptively selected according to the CCs between LST and the scale factors within every moving window. The comparison results, based on two statistical measures and visual analyses, show that MSFAT achieves a satisfactory downscaling performance, whether it is used for vegetation areas, impervious surface areas, water bodies, or mixed areas. The R² and RMSE values between the 90 m downscaled result and the 90 m retrieved image are 0.87 and 1.13 °C, respectively. Except for the overestimation in the industrial zone in the northern region and the underestimation along the Yangzi River, the differences between the retrieved LST and the simulated LST are less than 1 °C in approximately three-quarters of the study area. Spatially, compared with the 360 m retrieved LST, the 90 m downscaled result presents detailed LST information; furthermore, a similar distribution appears in both the 90 m downscaled and the 90 m retrieved LSTs. Compared with other algorithms that have been proven to provide high downscaling accuracy in our experiments, MSFAT has the advantages of similar accuracy, the ability to downscale in water areas, multiple scale factors, adaptive selection of factors, and a relatively credible downscaling performance. MSFAT is also applicable in all seasons, especially in the summer; the best applicability occurred in the region of vegetation, while the worst appeared in the region of water. Thus, MSFAT has considerable potential for generating useful LST information from thermal images of mixed areas at an improved spatial resolution. Furthermore, MSFAT can select the scale factors adaptively according to the land-cover types by using CCs. Thus, MSFAT can be applied to more LST data, such as MODIS/LST and ASTER/LST, in other seasons, and in other regions with flat terrain. In our future research, we intend to develop a method for performing reliable LST downscaling in building and shore areas as well as in uneven regions. Through such a method, we aim to reduce the edge effects and apply MSFAT at middle or low spatial resolutions.
Mathematical Extrapolating of Highly Efficient Fin Systems

Different high-performance fins are mathematically analyzed in this work. Initially, three types are considered: (i) exponential, (ii) parabolic, and (iii) triangular fins. Analytical solutions are obtained. Accordingly, the effective thermal efficiency and the effective volumetric heat dissipation rate are calculated. The analytical results were validated against numerical solutions. It is found that the triangular fin has the maximum effective thermal length. In addition, the exponential pin fin is found to have the largest effective thermal efficiency. However, the effective efficiency of the straight one is the maximum when its effective thermal length based on profile area is greater than 1.4. Furthermore, the exponential straight fin is found to have effective volumetric heat dissipation that can be 440% and 580% above the parabolic and triangular straight fins, respectively. In contrast, the exponential pin fin is found to possess effective volumetric heat dissipation that can be 120% and 132% above the parabolic and triangular pin fins, respectively. Finally, new high-performance fins are mathematically generated that can have effective volumetric heat dissipation 24% and 12% above those of exponential pin and straight fins, respectively.

Introduction

Fins are widely used in industry, especially in the heat exchanger and refrigeration industries [1-5]. They are extended surfaces used to enhance heat transfer between solids and the adjoining fluids [6]. Heat transfer inside fins has been extensively studied in the literature, and many mathematical analyses related to conduction and convection heat transfer in fins have been published. Harper and Brown [7] are considered the forerunners of the mathematical analysis of heat transfer inside fins. They found that a one-dimensional analysis was sufficient for heat transfer inside fins. In addition, they recommended that tip heat loss can be accounted for by using a corrected fin length equal to half of the fin thickness added to its length. They also pointed out that the differential surface area of an element is equal to the differential fin length element divided by the cosine of the taper angle.

Later on, Schmidt [8] mathematically analyzed longitudinal and radial fins of uniform thickness and longitudinal fins of trapezoidal profile. Many works followed before that of Gardner [9], who derived general mathematical solutions for the temperature excess profile and the fin efficiency of fins satisfying the Murray [10] assumptions and whose thickness varies as some power of the distance from the fin tip. Gardner's work [9] is considered important because he re-emphasized the concept of fin efficiency, a concept that has since been used by thousands of works. In addition, he was one of the first to demonstrate the use of applied mathematics, including modified Bessel functions, in conduction and convection heat transfer. Later on, many works used applied mathematics in analyzing heat transfer inside fins subject to a variable convection heat transfer coefficient [11-13]. A sufficient and interesting literature on mathematical analysis in fin heat transfer is found in the works of Kraus et al. [14] and Aziz and McFadden [15].
The fin thermal efficiency, η_f, is defined according to Gardner [9] as the fin heat transfer rate divided by the fin heat transfer rate that would be obtained if the entire fin surface were kept at the uniform base temperature T_b. According to this definition, the fin efficiency depends on two independent factors: (i) the fin thickness or radius distribution, and (ii) the fin thermal length. Nowadays, improving the performance of thermal systems has become a primary goal. This goal is achievable by avoiding fin thermal lengths greater than the effective value. Because the fin effective thermal length is directly related to the fin profile [16], the fin efficiency can be improved and made dependent only on the fin thickness or radius distribution. It should be noted that the fin effective thermal length is the one that produces a fin heat transfer rate 1.0 percent below its maximum value. In this work, the fin efficiency based on the effective thermal length is named the effective thermal efficiency. To the best knowledge of the author, almost negligible attention has been paid to analyzing high-performance fins based on their effective thermal efficiency. In addition, reducing the number of variables influencing the fin efficiency facilitates the extrapolation of new generations of high-performance fins beyond those analyzed in the literature.

In this work, high-performance fins with effective thermal lengths are mathematically analyzed. Three types are initially considered: (i) exponential, (ii) parabolic, and (iii) triangular fins. Analytical forms for the excess temperature are obtained. As such, the fin effective thermal efficiency and the effective volumetric heat dissipation are calculated both analytically and numerically. Comparisons between the performances of the fins are made. Finally, ultrahigh-performance fin geometries are extrapolated from the derived solutions.

Problem Formulation

In this work, the Murray [10] assumptions are adopted. In addition, the square of the fin profile gradient is neglected.

Straight Fins

Consider a rectangular fin having a thickness H(x) that is much smaller than its length L, as shown in Figure 1. H(x) is considered to vary along the fin centerline (x-axis) according to relationships (2.1)-(2.3), where b is a real positive number named the exponential index and H_b is the fin thickness at the base (x = 0). Equations (2.1)-(2.3) correspond to exponential, triangular, and parabolic straight fins, respectively. The application of the energy equation [16] to a fin differential element results in differential equation (2.4), where T, T∞, k, and h are the fin temperature, the free-stream temperature, the fin thermal conductivity, and the convection heat transfer coefficient between the fin and the fluid stream, respectively. The boundary conditions (2.5) are the adiabatic tip conditions, where L∞ is the length that produces a fin heat dissipation rate equal to 99 percent of the maximum heat dissipation rate.
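The bodies of equations (2.1)-(2.5) did not survive extraction. Under the stated assumptions (Murray's assumptions, negligible square of the profile gradient, adiabatic tip), the classical one-dimensional forms they most plausibly take are, with excess temperature θ = T − T∞ (our hedged reconstruction, not a verbatim restoration):

```latex
% Plausible straight-fin thickness profiles (cf. (2.1)-(2.3)):
H(x) = H_b\,e^{-bx} \ \text{(exponential)}, \quad
H(x) = H_b\!\left(1-\tfrac{x}{L}\right) \ \text{(triangular)}, \quad
H(x) = H_b\!\left(1-\tfrac{x}{L}\right)^{2} \ \text{(parabolic)};

% One-dimensional energy balance on a differential element (cf. (2.4)):
\frac{\mathrm{d}}{\mathrm{d}x}\!\left[H(x)\,\frac{\mathrm{d}\theta}{\mathrm{d}x}\right]
  = \frac{2h}{k}\,\theta ;

% Base and adiabatic-tip boundary conditions (cf. (2.5)):
\theta(0) = T_b - T_\infty, \qquad
\left.\frac{\mathrm{d}\theta}{\mathrm{d}x}\right|_{x = L_\infty} = 0 .
```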
Exponential Straight Fins

By solving (2.4) using (2.1), the temperature distribution (2.6) is obtained in terms of X = m/b and m = √(2h/(kH_b)). The fin heat transfer rate per unit width for the exponential straight fin is calculated from (2.7). The maximum heat transfer rate through exponential straight fins, obtained by letting L∞ approach infinity, is given by (2.8). The effective thermal length mL∞ is obtained by solving the equation q_f = 0.99 q_f∞; as such, mL∞ must satisfy relationship (2.9). The fin thermal efficiency based on the fin effective thermal length is denoted by η∞ and is given by (2.10). The fin dimensionless heat dissipation per unit effective thermal volume, β∞, is defined as the ratio of the heat transfer rate through the fin of length L∞ to the maximum heat transfer rate from a rectangular fin having the same base thickness and volume; it is given by (2.11). It should be mentioned that the previous solutions could not be located in the literature.

Triangular Straight Fins

Equations (2.6)-(2.11) change to (2.12)-(2.14) and related expressions for the case of the triangular straight fin, where m = √(2h/(kH_b)). It should be mentioned that (2.12)-(2.14) exist in different forms in the work of Kraus et al. [14].

Parabolic Straight Fins

For the case of the parabolic straight fin, equations (2.6)-(2.11) change to the forms containing the constants s1 and s2 defined by (2.24), with m = √(2h/(kH_b)). It should be mentioned that (2.20) exists in a different form in the work of Kraus et al. [14].

Pin Fins

Consider a pin fin having a radius r(x) that is much smaller than its length L, as shown in Figure 2. r(x) is taken to vary along the fin centerline (x-axis) according to relationships (2.25)-(2.27), where b is the exponential index and r_b is the fin radius at the base (x = 0). Equations (2.25)-(2.27) correspond to exponential, triangular, and parabolic pin fins, respectively. The application of the energy equation [16] to a fin differential element results in differential equation (2.28), with the boundary conditions given by (2.5).

Exponential Pin Fins

By solving (2.28) using (2.25), the temperature distribution (2.29) is obtained in terms of X = m/b and m = √(2h/(kr_b)). The fin heat transfer rate is given by (2.30). The maximum heat transfer, obtained when L∞ approaches infinity, is given by (2.31). The effective thermal length mL∞ is obtained when q_f = 0.99 q_f∞; as such, it can be found by solving (2.32). The fin effective thermal efficiency for the exponential pin fin is given by (2.33). The fin dimensionless heat transfer per unit effective volume, β∞, is defined here as the ratio of the heat transfer rate of the fin with length L∞ to the maximum heat transfer rate from a rectangular pin fin having the same base radius and volume; it is given by (2.34). It should be mentioned that (2.29)-(2.34) could not be located in the literature, at least in the same form as shown here.

Triangular Pin Fins

Equations (2.29)-(2.34) change to (2.35)-(2.37) and related expressions for the case of the triangular pin fin, where m = √(2h/(kr_b)). It should be mentioned that (2.37) matches a solution shown in [16].

Parabolic Pin Fins

For the case of the parabolic pin fin, equations (2.29)-(2.34) change to the forms containing the constants p1 and p2 defined by (2.47), with m = √(2h/(kr_b)). It should be mentioned that (2.43) matches a solution shown in [16].
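The 99% criterion defining mL∞ can be made concrete with the textbook constant-thickness straight fin, for which q_f ∝ tanh(mL); the same root-finding applies to the transcendental relationships (2.9) and (2.32). A minimal sketch (ours, with the uniform fin as a stand-in profile):

```python
import numpy as np
from scipy.optimize import brentq

# For a constant-thickness straight fin with an adiabatic tip,
# q_f(L) / q_f(L -> infinity) = tanh(mL); the effective thermal
# length solves tanh(mL_inf) = 0.99.
mL_inf = brentq(lambda mL: np.tanh(mL) - 0.99, 1e-6, 10.0)
print(f"mL_inf = {mL_inf:.3f}")  # ~2.647: beyond this, added length buys <1%
```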
High-Order Polynomial Method

The variation of the high-performance fin profile with its β∞-indicator can be approximated by relationship (2.48), where the coefficients b(x), c(x), and d(x) should satisfy conditions (2.49)-(2.51). The quantities β∞,t, β∞,p, and β∞,e are the corresponding β∞-values of triangular, parabolic, and exponential fins, respectively. Solving (2.49)-(2.51) for the functions b(x), c(x), and d(x) and rearranging (2.48) yields (2.52).

Exponential Method

The variation of the high-performance fin profile with its β∞-indicator can alternatively be approximated by relationship (2.53), where the coefficients i(x), j(x), and l(x) should satisfy conditions (2.54)-(2.56). The quantities β∞,t, β∞,p, and β∞,e are again the corresponding β∞-values of triangular, parabolic, and exponential fins, respectively. Solving (2.54)-(2.56) for the functions i(x), j(x), and l(x) gives the required expressions.

Numerical Methodology

Equations (2.4) and (2.28) were discretized using three-point central differencing according to equations (3.1), where i is the location of the discretized point in the x-direction, and the barred quantities x̄, H̄, r̄, and θ are the dimensionless forms of x, H, r, and T, respectively. The resulting tridiagonal systems of algebraic equations represented by (3.1) were solved using the well-established Thomas algorithm (Blottner [17]). The integrals appearing in (2.11), (2.33), and (2.34) were computed numerically using Simpson's rule [18]. Table 1 shows comparisons between the numerical and the analytical results for the effective efficiency. Excellent agreement is noticed in this table, which lends confidence to the obtained analytical solutions. A sketch of this solution procedure is given below, after the correlations.

Useful Correlations

The results generated by solving (2.9), (2.15), (2.32), and (2.38) are given in the form of correlations, which were developed using well-known software. The correlations have the functional form

Π = g1 Φ^g2 + g3 Φ^g4 + g5 Φ^g6 + g7 Φ^g8 + g9 Φ^g10 + g11 Φ^g12. (3.2)

The correlation constants g1-g12 for the different studied cases are listed in Table 2. Maximum errors associated with these correlations are less than 1% for all used ranges of mL and when X > 0.015.
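The numerical methodology above can be sketched as follows (our illustration, not the paper's code): a three-point central-difference discretization solved by the Thomas algorithm, with Simpson's rule for the efficiency integral; the constant-thickness fin is used as a stand-in profile so the result can be checked against tanh(mL)/mL.

```python
import numpy as np
from scipy.integrate import simpson

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main-, c: super-diagonal)."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Central differences for theta'' = (mL)^2 * theta on xi in [0, 1]
# (uniform-thickness fin), theta(0) = 1, adiabatic tip theta'(1) = 0.
n, mL = 101, 2.0
dx = 1.0 / (n - 1)
a = np.ones(n); c = np.ones(n)
b = np.full(n, -2.0 - (mL * dx) ** 2)
d = np.zeros(n)
b[0], c[0], d[0] = 1.0, 0.0, 1.0     # theta(0) = 1
a[-1] = 2.0                          # ghost node enforces theta'(1) = 0
theta = thomas(a, b, c, d)

# Fin efficiency from Simpson's rule; exact value is tanh(mL)/mL.
eta = simpson(theta, dx=dx)
print(f"eta = {eta:.4f} (exact {np.tanh(mL)/mL:.4f})")
```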
Discussion of the Results

Figures 3 and 4 illustrate the variation of the effective efficiency and the efficiency ratio η∞/η_f with the fin thermal length mL. These figures show that the thermal efficiencies of parabolic, triangular, and exponential fins can be significantly increased, especially at large thermal lengths. This is achievable by eliminating the fin portions beyond the effective thermal length mL∞. Triangular pin fins are found to possess larger η∞/η_f values than parabolic pin fins. However, parabolic straight fins always have larger η∞/η_f values than triangular straight fins. It should be noted that fin volumes near the tip are maximum for triangular fins. As a consequence, triangular fins always have larger effective thermal lengths than exponential and parabolic fins. This fact can be noticed from Figures 3 and 4 along with (2.16), (2.22), (2.33), (2.39), and (2.45).

Figure 5 shows the relation between the effective thermal efficiency η∞ and the effective thermal length L∞^(3/2)(h/(kA_p∞))^(1/2) based on the effective profile area A_p∞ for the different types of straight fins; closed-form expressions of this length exist for the triangular, parabolic, and exponential straight fins. Figure 5 shows that the triangular straight fin has the maximum η∞-value when L∞^(3/2)(h/(kA_p∞))^(1/2) < 0.95, whereas the exponential straight fin possesses the maximum η∞-value when this length is greater than 1.4. Exponential pin fins have the maximum effective thermal efficiency for the same effective thermal length mL∞, as shown in Figure 6. The variation of η∞ with the fin geometry is almost insignificant for the analyzed pin fins. Figures 5 and 6 demonstrate that the minimum effective thermal efficiency for all fins is 0.377. Exponential fins have the largest effective heat dissipation per unit volume, β∞, as evident from Figures 7 and 8, while triangular fins have the smallest β∞-values. All of the analyzed fins are noticed to have an asymptotic β∞-value of 0.375 as their thermal lengths approach infinity. Figures 9 and 10 show a number of fin geometries having high thermal performance. The pin fin geometry with β∞ = 1.83 has volumetric heat dissipation 17% above that of the exponential pin fin; the exact β∞-value of that plot is 1.795, which deviates from the estimated value, β∞ = 1.83, by 1.95%. It is noticed from Figures 9 and 10 that errors associated with (2.53) are smaller than those associated with (2.52). The effective thermal efficiency of the fin described by (2.52) with β∞ = 2.3 is larger than that of the exponential fin at smaller effective thermal lengths, while its effective volumetric heat dissipation is larger than that of the exponential fin when mL∞ > 0.5, as shown in Figures 5 and 7. Figure 8 shows that the pin fin with the profile given by (2.53) with β∞ = 1.83 possesses maximum volumetric heat dissipation that is 24% above that of the exponential pin fin.

Conclusions

Heat transfer through high-performance fins was mathematically analyzed under conditions that lead to useful thermal lengths. Three fin types were considered: parabolic, triangular, and exponential straight or pin fins. Analytical solutions were obtained, and the effective thermal length was obtained for each case. Accordingly, the effective thermal efficiency and the effective heat dissipation per unit volume were calculated. The analytical results were compared against numerical solutions, and excellent agreement was found. The following remarks were concluded:
(i) Triangular fins always have larger effective thermal lengths than parabolic fins.
(ii) Exponential pin fins possess the largest effective thermal efficiencies.
(iii) The exponential straight fin possesses the maximum effective thermal efficiency when its effective thermal length based on profile area is greater than 1.4.
(iv) The triangular straight fin has the maximum effective thermal efficiency when its effective thermal length based on profile area is smaller than 0.95.
(v) Exponential straight fins were found to possess effective volumetric heat dissipation that can be 440% and 580% above parabolic and triangular straight fins, respectively.
(vi) Exponential pin fins were found to possess effective volumetric heat dissipation that can be 120% and 132% above parabolic and triangular pin fins, respectively.
(vii) The derived analytical solutions were used to generate new high-performance fins that possess volumetric heat dissipation 24% and 12% above those of exponential pin and straight fins, respectively.
Nomenclature

A_p∞: effective fin profile area
b: exponential function index
H: fin thickness
H_b: fin thickness at its base
h: convection heat transfer coefficient
I_n(x): modified Bessel function of the first kind of order n
K_n(x): modified Bessel function of the second kind of order n
k: fin thermal conductivity
L: fin length
L∞: effective fin length
m: fin thermal index
q_f: fin heat transfer rate per unit width
q_f∞: maximum fin heat transfer rate per unit width
r: fin radius
r_b: fin radius at its base
T: fin temperature
T_b: fin base temperature
T∞: free-stream temperature of the adjoining fluid
V_f: fin volume
X: dimensionless exponential fin parameter
x: coordinate axis along the fin centerline

Figure 1: Schematic diagram for the straight fin and the system coordinate.
Figure 2: Schematic diagram for the pin fin and the system coordinate.
Figure 3: Effects of the fin dimensionless length X and thermal length mL on the effective thermal length mL∞ for the exponential straight fin, and the maximum available efficiency ratio η∞/η_f for triangular and parabolic straight fins, m = √(2h/(kH_b)).
Figure caption (number lost in extraction): Effect of the fin minimum effective thermal length mL∞ on β∞ for exponential, parabolic, and triangular pin fins, m = √(2h/(kr_b)).
Table 1: Comparisons between the numerical and the analytical results.
Table 2: Correlation constants (see (3.2)) for the analyzed straight fins.
Syngas Production: Diverse H2/CO Range by Regulating Carbonate Electrolyte Composition from CO2/H2O via Co-Electrolysis in Eutectic Molten Salts

We present a novel sustainable method for the direct production of syngas (H2 + CO) from CO2/H2O co-electrolysis using a hermetic device, to address the continuously increasing level of environmental carbon dioxide (CO2). All experiments were conducted using a two-electrode system with a coiled Fe cathode and a coiled Ni anode in eutectic mixtures of binary and ternary carbonates with hydroxide at a 0.1 : 1 hydroxide/carbonate ratio. With an applied voltage of 1.6-2.6 V and an operating temperature of 500-600 °C, the H2/CO product ratio was easily tuned from 0.53 to 8.08 through renewable cycling of CO2 and H2O. The Li0.85Na0.61K0.54CO3-0.1LiOH composite had the highest current efficiency among those tested, with an optimum value approaching ~93%. This study provides a promising technique for the electrochemical conversion of CO2/H2O to a controllable syngas feedstock that can be used in a broad range of industrial applications.

Introduction

The ever-increasing combustion of non-renewable fossil fuels due to industrial development is releasing a large amount of carbon dioxide (CO2) into the atmosphere, which has led to a serious greenhouse effect. As CO2 is the main component of greenhouse gases, effectively controlling CO2 generation and emission are urgent issues [1-3]. Using photochemical and electrochemical methods for the chemical reduction of CO2 to reverse oxidative degradation is a huge challenge [4-10]. Among current technologies, the chemical conversion and utilization of CO2 is the most promising because it is both an economic and an environmentally friendly option. Since the last century, there has been gratifying progress in research on CO2 chemical conversion, especially with respect to the electrode materials, electrolytes, and operating conditions required for electrochemical reduction in molten salts [11-18].

Syngas, a mixture of carbon monoxide (CO) and hydrogen (H2), has been cited as an essential precursor to a wide range of high value-added industrial products, such as olefins, fuels, and additives. Conventional syngas production methods include natural gas conversion, heavy oil conversion, and fluidized-bed gasification technology [19-21]. However, the high temperatures required (over 800 °C) inevitably consume heat and promote reactor corrosion [22]. In comparison, the molten salt electrolysis technique reported herein provides a low-temperature, stable, and safe route to syngas production. In this method, the source of hydrogen (H2) in the syngas is LiOH, while the carbon monoxide (CO) is sourced from the carbonates. The co-electrolysis of CO2/H2O in eutectic molten salts provides a feasible way to produce syngas, which can be used in the Fischer-Tropsch (F-T) process to convert electrical energy to chemical energy [23]. Syngas with a H2/CO ratio of 1.7-3.1 was obtained by controlling the H2O/CO2 feed ratio, as reported by Lee [24]. Recently, Sastre et al. developed an electrochemical method for converting CO2 and H2O into syngas using a nanostructured Ag/g-C3N4 catalyst, with H2/CO ratios ranging from 100 : 1 to 2 : 1 [25].
We also demonstrated that, by rational design of the molten salt mixture, a desirably low temperature (such as 600 °C) led to the highly efficient one-pot generation of syngas via CO2/H2O co-electrolysis, with a current efficiency of ~92% and a H2/CO ratio of 1.96-7.97 in a Li1.07Na0.75Ca0.045CO3/0.15LiOH electrolyte [22]. This result demonstrates that CaCO3 addition affects the composition of the syngas. Using these methods, the CO2/H2O-derived generation of syngas has been achieved. Although syngas has successfully been produced in previous studies using molten salt media, H2 is the favored product, often resulting in a H2/CO molar ratio greater than 1. However, there are specific reactions in which a H2/CO molar ratio of less than 1 is needed, such as alcohol synthesis with a H2/CO ratio of 0.5-2 over a K/Cu/Co/Zn/Al catalyst [26]. To broaden the utilization range, increasing the selectivity for CO in the syngas would be a significant development. Previously, Chery et al. studied the nature of the electrolytes Li2CO3-Na2CO3 (52 : 48 mol%), Li2CO3-K2CO3 (62 : 38 mol%), Na2CO3-K2CO3 (56 : 44 mol%), and the ternary mixture Li2CO3-Na2CO3-K2CO3 (43.5 : 31.5 : 25 mol%) by thoroughly analyzing reoxidation and reduction [27]; therein, the CO2 reduction mechanism at a gold electrode in molten carbonates was investigated using cyclic voltammetry. The present work is a systematic exploration of the changes in the H2/CO ratio in various binary or ternary carbonates (Li1.51K0.49CO3, Li1.07Na0.93CO3, Li1.43Na0.36K0.21CO3, and Li0.85Na0.61K0.54CO3) mixed with LiOH that favor syngas formation but inhibit metal deposition [28], with the goal of broadening the H2/CO ratio of the syngas. During electrolysis, alkali oxides, which are produced from the decomposition of the monovalent alkali carbonates and LiOH, can combine with CO2 and H2O to renew the electrolyte. This regeneration of the carbonate electrolyte affords an advantageous circulation system giving syngas as the final product of CO2/H2O reduction via co-electrolysis in molten salts. Furthermore, the electricity needed for this electrolysis is measured to assess whether the electrolysis proceeds with a relatively high current efficiency. In this study, CO2/H2O is synergistically converted into valuable chemicals by electrolysis in molten salts, providing an alternative route to mitigate excessive global CO2 emissions and to convert conventional electricity into chemical energy.

Experimental methods

The electrolysis cell consists of an alumina crucible (Al2O3 > 99.9%, Ø40 mm, 85 mm in height) filled with binary or ternary mixed carbonates (Li1.51K0.49CO3, Li1.07Na0.93CO3, Li0.85Na0.61K0.54CO3, and Li1.43Na0.36K0.21CO3) and LiOH for the CO2/H2O co-electrolysis experiments, with a total mixed molten salt mass of 80 g. The thermal energy for electrolysis was provided by a specially customized ceramic heating sleeve. Because of the inevitable corrosion caused by the electrolytes and high-temperature oxidation, an affordable and corrosion-resistant electrode material with long-term stability was sought. The metallic materials Ni (Ø1.6 mm, 39.7 cm in length, 20 cm², Hebei Steady Metal Products Co., LTD, China) and polished Fe (Ø1.6 mm, 39.7 cm in length, 20 cm², Hebei Steady Metal Products Co., LTD, China), both in the form of spiral wires, were used as the anode and cathode, respectively.
When the mixed salts reached the pre-set temperature, the two-electrode system was placed into the electrolyte and completely sealed with a sealant and a sealing bolt. All electrolysis was performed in the voltage range 1.6-2.6 V. A DC power supply (BK PRECISION 1715A) was used for the electrolytic production of carbon-based fuels in the electrolyte. The mean gas collection rate was near 120-140 mL min⁻¹, controlled by a volumetric flowmeter. The gaseous products were expelled into a sampling bag through a topside gas-guide tube under argon, which also protected the electroactivity of the electrode. The molar ratio of hydroxide to carbonate in the LiOH and LiNa (LiK, LiNaK) eutectic electrolytes is defined as n_H : n_C. With n_H : n_C = 0.1 : 1, the experiments were performed at temperatures of 500-600 °C, with voltages of 1.6-2.6 V applied at each temperature. Table 1 shows the experimental electrolysis conditions in detail.

Product characterization

Afterwards, the syngas obtained from electrolysis was characterized by gas chromatography (GC, Agilent 7890B) equipped with a thermal conductivity detector (TCD) and a hydrogen flame ionization detector (FID) to determine the content of each component. After obtaining the concentration of each substance from the chromatogram, only the cathode fuel gases (methane, hydrocarbons, hydrogen, and carbon monoxide) were included in the calculations. Fourier transform infrared spectroscopy (FTIR, Tensor27) was used to characterize the molecular structure of the products. The current-voltage relationship of the different cathode materials (0.5 cm² surface area) was measured using a Ni wire anode (20 cm² surface area). Additionally, the current efficiency was calculated from the charge (in Faradays) passed during electrolysis compared to the charge required to form each measured mole of the gaseous products, using the following equation [28]:

η_i = 100% × n_i Far (m_i / MW_i) / Q,

where η_i is the current efficiency contributed by the i-th product (%), m_i is the mass of product i (g), MW_i is the molecular weight of product i (g mol⁻¹), n_i is the number of electrons transferred per molecule of product i, Far is the Faraday constant, and Q is the total charge passed during electrolysis.
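As an illustration of this accounting, a minimal sketch (ours, not the authors' code) converts measured product amounts and the passed charge into per-product current efficiencies; the electron counts follow the usual 2e⁻ (H2, CO) and 8e⁻ (CH4) cathodic reductions, and the example numbers are hypothetical.

```python
FARADAY = 96485.0  # C per mole of electrons

# Electrons transferred per molecule of each cathodic product:
# 2 for H2 and CO, 8 for the full reduction of carbonate to CH4.
N_ELECTRONS = {"H2": 2, "CO": 2, "CH4": 8}

def current_efficiency(moles_product, charge_passed_C):
    """Percent of the passed charge accounted for by each gaseous product."""
    eta = {gas: 100.0 * N_ELECTRONS[gas] * FARADAY * n_mol / charge_passed_C
           for gas, n_mol in moles_product.items()}
    eta["total"] = sum(eta.values())
    return eta

# Example: 2 A for 1 h = 7200 C with hypothetical GC-quantified amounts (mol).
print(current_efficiency({"H2": 0.020, "CO": 0.010, "CH4": 0.001}, 7200.0))
```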
Results and discussion

Theoretical analysis of hydroxide selection

The reduction of CO2/H2O via co-electrolysis of a molten mixture of carbonates and hydroxide can be driven by applying an external force field to the electrolysis unit. As shown in Scheme 1 (a schematic diagram of the experimental process), using a hermetic device, syngas and hydrocarbons are generated from the reactions of OH⁻ and CO3²⁻ on the cathode surface, and oxygen is formed by the oxidation of O²⁻ at the anode. The intermediate product (metal oxide) can absorb the incoming carbon dioxide and water, generating carbonates and hydroxides that regenerate the electrolyte, which completes the construction of a circulation system. The target products, CO and H2, can be obtained by the co-electrolysis of ionized OH⁻ and CO3²⁻ via reaction (2). The generated O²⁻ can be consumed in two ways: (i) reaction with CO2 or H2O, regenerating CO3²⁻ or OH⁻ according to reactions (3) or (4), and (ii) the oxidation of O²⁻ to produce oxygen via electron loss (reaction (5)).

The OH⁻ consumed in the electrolysis comes from the hydroxide, so it is necessary to control the source of OH⁻ to keep the produced syngas at the desired H2/CO ratio. Compared with divalent molten salts, monovalent salts have higher conductivity and lower energy consumption, and they provide better electrical conductivity for the reduction of carbon dioxide at high temperature [29,30]. Basic data were obtained from the NIST Chemistry WebBook [31]. Fig. 1a shows the metal deposition potentials for three hydroxides. Unlike lithium hydroxide, pure sodium or potassium hydroxide tended to allow reduction of the alkali cation to the alkali metal because of their relatively low metal deposition potentials. When KOH serves as the hydrogen source, K metal requires a lower potential than H2, meaning that K metal deposition could become a side reaction [32]. Fig. 1b shows the calculated thermodynamic electrolysis potentials of the various hydroxides as a function of temperature for syngas formation. The electrolysis potential was calculated from the thermochemical enthalpies and entropies of the individual species. The formulae can be written in the NIST Shomate form [33]:

H° − H°298.15 = A t + B t²/2 + C t³/3 + D t⁴/4 − E/t + F − H,
S° = A ln(t) + B t + C t²/2 + D t³/3 − E/(2t²) + G,
ΔH = Σ_B n_B H°_B, ΔS = Σ_B n_B S°_B, ΔG = ΔH − T ΔS, E = −ΔG/(n Far),

where n_B is the stoichiometric number of component B of the reaction, H is the standard enthalpy (kJ mol⁻¹), S is the standard entropy (J mol⁻¹ K⁻¹), G is the standard Gibbs free energy (kJ mol⁻¹), Far is the Faraday constant (96 485 C mol⁻¹), n is the number of transferred electrons, t = T/1000 with T the temperature in K, and A-G are thermodynamic parameters [33]. For ease of discussion, the absolute value of the electrolysis potential is used to represent the calculated value. As shown in Fig. 1b, the theoretical electrolysis voltage for syngas generation decreased for all the MOH (M = Li, Na, K) systems with increasing electrolysis temperature. At 700 K, the energy required for syngas production corresponded to a voltage of 1.57 V in the LiOH electrolyte, lower than those in the NaOH electrolyte (1.70 V) and the KOH electrolyte (1.80 V). In comparison to the NaOH and KOH systems, the LiOH system required a lower potential, and Li deposition was relatively limited, whereas the KOH system showed the contrary behavior at the same electrolysis temperature; the LiOH electrolyte was therefore chosen as the optimal system.
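In principle, the curves in Fig. 1b can be reproduced as in the following minimal sketch, assuming Shomate coefficients taken from the NIST Chemistry WebBook; the numerical values below are placeholders, not the paper's data.

```python
import numpy as np

FARADAY = 96485.0  # C per mole of electrons

def shomate_H(t, A, B, C, D, E, F, H):
    """H(T) - H(298.15 K) in kJ/mol, with t = T/1000 (NIST Shomate form)."""
    return A*t + B*t**2/2 + C*t**3/3 + D*t**4/4 - E/t + F - H

def shomate_S(t, A, B, C, D, E, G):
    """S(T) in J/(mol K), with t = T/1000 (NIST Shomate form)."""
    return A*np.log(t) + B*t + C*t**2/2 + D*t**3/3 - E/(2*t**2) + G

def electrolysis_potential(dH_kJ, dS_J, T, n):
    """|E| = |dG| / (n F), with dG = dH - T*dS (dH in kJ/mol, dS in J/(mol K))."""
    dG = dH_kJ * 1e3 - T * dS_J
    return abs(dG) / (n * FARADAY)

# Placeholder Shomate coefficients for one species (illustrative only).
c = dict(A=30.0, B=10.0, C=0.0, D=0.0, E=0.1, F=-250.0, G=220.0, H=-240.0)
t = 700.0 / 1000.0
h_species = shomate_H(t, c["A"], c["B"], c["C"], c["D"], c["E"], c["F"], c["H"])
s_species = shomate_S(t, c["A"], c["B"], c["C"], c["D"], c["E"], c["G"])

# Reaction totals would sum the n_B-weighted species values; placeholders here.
print(f"E = {electrolysis_potential(600.0, 150.0, 700.0, 4):.2f} V at 700 K")
```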
Determination of optimal temperature and operating voltage

The eutectic points of pure lithium, sodium, and potassium carbonates are 723, 851, and 891 °C, respectively. The relatively high melting temperatures of the individual carbonates increase both the reaction energy consumption and the heat loss. Lower melting points are achieved with eutectic mixtures of alkali carbonates, such as Li1.51K0.49CO3, Li1.07Na0.93CO3, Li0.85Na0.61K0.54CO3, and Li1.43Na0.36K0.21CO3 at 490, 499, 375, and 390 °C, respectively [34-36]. To achieve the controlled synthesis of syngas at relatively low temperatures, binary carbonate mixtures of Li-Na or Li-K or ternary mixtures of Li-Na-K were chosen as electrolytes to reduce the reaction temperature in this study. The highest eutectic point among the electrolytes studied was 499 °C (Li1.07Na0.93CO3), and the lowest was 375 °C (Li0.85Na0.61K0.54CO3). As interpreted in our previous study, a high electrolysis temperature leads to an increased hydrogen yield due to enhanced reactivity [28]. Owing to slower ionic migration and poor conductivity at low electrolysis temperatures, and to operational difficulty and corrosion-resistance demands at high operating temperatures, an appropriate temperature range is necessary for stable, continuous, and efficient electrolysis.

Furthermore, according to previous theoretical and electrochemical reports, the selective electroreduction of CO2 to CO likely occurs in Li-Na and Li-K molten salts at ≤650 °C, as indicated by cyclic voltammetry [37,38]. Temperatures over 600 °C favor methane formation, achieving a methane yield of 64.9% when electrochemically reducing H2O/CO2 in a eutectic carbonate mixture [28]. In our previous study [33], when the temperature was about 550 °C, the methane content was less than 25%. After a systematic study of the experimental data, the range of electrolysis temperatures studied herein was chosen as 500-600 °C.

The electrolysis voltages required to form the reduction products in the various reactions were calculated using the Gibbs energy at temperatures ranging from 400 K to 900 K in molten lithium carbonate. As shown in Fig. 2, obvious downward trends were observed for cathodic product generation. Beyond that, the theoretical calculations also demonstrated that the electrolysis voltage is another critical factor controlling reaction selectivity. Over the investigated electrolysis temperatures of 773 K to 873 K, the electrolysis potentials of the carbonate-hydroxide mixtures decrease further, so the potentials for pure lithium carbonate decomposition were below 1.6 V. Nevertheless, 1.6 V was taken as the starting voltage in this study because of the concentration overpotential, electrochemical overpotential, and resistance overpotential [39,40]: the actual decomposition voltage is greater than the theoretical value, E(decomposition) = E(theoretical decomposition) + E(overpotential) + IR. According to our previous studies [32,33], metal deposition occurs at higher voltages, and when the voltage exceeds 2.5 V the CO content shows a significant downward trend. Therefore, an electrolysis voltage range of 1.6-2.6 V was determined.

Optimization of electrodes for molten carbonate/hydroxide conversion

In LixNayKzCO3-LiOH electrolytes, O2 is produced from the oxidation of O²⁻ and CO3²⁻ at the anode, according to reactions (5) and (10). To prevent electrolyte corrosion and high-temperature oxidation, a cost-effective and corrosion-resistant electrode material with long-term stability is necessary. In our previous study, a Ni electrode showed a lower anodic overpotential in polarization analysis [28]; therefore, Ni wire was used as the anode in this electrochemical study. To select cathode materials with excellent chemical stability, Ni, Fe, and Ni-Cr were tested as candidate cathodes. The electrochemical performance of these materials was evaluated using polarization tests, as shown in Fig. 3. As the current density increased, the overpotential also increased, and the working voltage of the electrolytic cell exceeded the rest potential. Minimizing this overpotential is essential for achieving maximum energy efficiency during electrolysis. The lowest cathodic overpotential and its low cost made Fe the preferred cathode material; thus, an Fe cathode was a viable choice for long-term electrolysis in Li2CO3-Na2CO3-K2CO3/LiOH. We believe that the low overpotential observed for the Fe cathode presents an analogous electrocatalytic opportunity for syngas production via the simultaneous splitting of hydroxide and carbonate on a single surface. Therefore, iron and nickel were selected as the cathode and anode materials, respectively.
Analysis and characterization of electrochemical products

Gaseous products prepared from the Li0.85Na0.61K0.54CO3-0.1LiOH electrolyte at a temperature of 550 °C, using 20 cm² of Fe wire as the cathode and 20 cm² of Ni wire as the anode, were monitored by gas chromatography (GC) and IR spectroscopy. The gas chromatography detection consisted of a hydrogen flame ionization detector (FID) and a thermal conductivity detector (TCD). As shown in Fig. 4a (FID), small amounts of by-products, such as propane and n-butane, were generated during electrolysis. The TCD signals also indicated a small amount of CH4; these alkanes are collectively represented as CxHy. The presence of CO, H2, and the anodic product was also shown in the TCD signals. As shown in Fig. 4b, peaks in the range 3000-3100 cm⁻¹ were associated with unsaturated C-H stretching vibrations [41,42], while peaks from 2800 cm⁻¹ to 3000 cm⁻¹ clearly indicated saturated C-H stretching vibrations, including -CH3 and -CH2-. Trace amounts of CO2 were present in the gaseous products. CO2 has four modes of vibration, two of which are infrared-active; its presence was confirmed by the stretching vibration at 2349 cm⁻¹ and the flexural vibration at 667 cm⁻¹ [43]. The stretching vibration of C≡O was present in the range 2200-2250 cm⁻¹, which proved the presence of CO. The peaks at 1300-1400 cm⁻¹ corresponded to C-H bending vibrations, and peaks from 1300-1700 cm⁻¹ were associated with C-C stretching vibrations. The peaks at around 750 cm⁻¹ corresponded to C=C out-of-plane flexural vibrations in cis-olefins. These IR observations demonstrated that the products of CO2/H2O co-electrolysis contained a large amount of CO and hydrocarbons in this work.

Effect of mixed molten carbonate compositions on syngas product selectivity

As shown in Fig. 5, at all investigated electrolysis temperatures with the voltage ranging from 1.6 V to 1.8 V, CO was the main product. However, a further increase in the applied voltage (1.8-2.6 V) seemed to favor H2 generation, indicating that 1.8 V gave the optimum CO fraction. To illustrate the dependence of the syngas composition on temperature, the CO, H2, and CxHy fractions at 1.8 V were calculated at five temperatures, as shown in Fig. 6, which plots the gaseous product contents in the Li1.07Na0.93CO3-0.1LiOH system at the various electrolysis temperatures. When the applied temperature was increased from 500 to 550 °C, the CO content increased gradually at the same electrolysis voltage; for instance, the yield of CO rose from ~43.2% at 500 °C and 1.8 V to ~55.7% at 550 °C and 1.8 V. In contrast, under the same electrolysis conditions, the H2 content gradually decreased with increasing electrolysis temperature (500-550 °C), showing that increasing temperature led to an increase in current density and favored CO generation. As the electrolysis temperature was further increased (550-600 °C), the H2 content increased gradually while the CO content gradually decreased at the same electrolysis voltage. This could be ascribed to the reduction potentials required for H2 and CO decreasing at elevated temperature; however, the rate of decrease for H2 production was faster than that for CO, meaning that 550 °C was the optimum electrolysis temperature for this system. This result shows the dependence of the CO fraction on the applied temperature, with higher temperatures found not to favor targeted CO production. The change in the CxHy by-product content was within 10%.
The above-mentioned experimental results showed that 550 °C was the optimum electrolysis temperature for the Li-Na system. Specifically, an H2/CO molar ratio of 0.62-9.60 was obtained by adjusting the electrolysis voltage and operating temperature of the Li-Na system; compared with previous research, 23-25 syngas could also be generated with an H2/CO ratio of less than 1. Selection of the optimum electrolytic voltage for the Li1.51K0.49CO3-0.1LiOH system is shown in Fig. S1.† At 1.6 V, the CO content increased with increasing electrolysis temperature (500-550 °C), as shown in Fig. 7, indicating that the solubility of CO2 increased at elevated temperature and that the kinetics were enhanced. 44,45 The CO content decreased with a further increase in temperature (550-600 °C), which might be attributed to CO oxidation occurring at 575 °C in the Li-K system. 22 This showed that the best electrolysis temperature for the Li-K system was also 550 °C. As shown in Fig. S1,† the H2/CO molar ratios were well controlled in the ranges 0.76-5.04, 0.67-8.08, 0.59-4.33, 0.66-4.04, and 0.81-4.07. In particular, the overall adjustable range of the H2/CO molar ratio in the Li1.51K0.49CO3-0.1LiOH system was 0.59-8.08. Selection of the optimum electrolytic voltage for the Li1.43Na0.36K0.21CO3-0.1LiOH system at the investigated electrolysis temperatures is shown in Fig. S2,† and a detailed description can be found in the ESI.† Fig. 8 plots the gas concentrations of the Li1.43Na0.36K0.21CO3-0.1LiOH system at various temperatures. The CO concentration obtained by electrolysis at 1.8 V in Li1.43Na0.36K0.21CO3-0.1LiOH increased to ~61.6% as the electrolysis temperature was raised from 500 °C to 550 °C, indicating that the increase in temperature favored the formation of CO. Concurrently, at the applied voltage of 1.8 V, the hydrogen concentration dropped to ~32.8%, and the other products stayed at around ~5%. The CO content then gradually declined at temperatures of 550-600 °C, while the hydrogen content increased with increasing temperature (550-600 °C). This phenomenon was caused by high-temperature activation of the Li1.43Na0.36K0.21CO3 electrolyte, which favored hydrogen formation. 32 The CO content of the system reached a maximum of ~61.7% while the H2 content was 32.8% at 1.8 V and 550 °C. The adjustable range of the H2/CO molar ratio was 0.53-7.76 in the Li2CO3-Na2CO3-K2CO3 system with a mass ratio of 61 : 22 : 17. Selection of the optimum electrolytic voltage for the Li0.85Na0.61K0.54CO3-0.1LiOH system is likewise interpreted in Fig. S3.† At an electrolysis voltage of 2.2 V, the compositions of the gaseous electrolysis products are shown in Fig. 9. The CO content gradually increased, the H2 content gradually decreased, and the CO selectivity increased slightly when the temperature was raised from 500 to 550 °C. With a further increase in temperature, the CO content began to decrease, while the H2 content gradually increased.
Fig. 3 Polarization curves of various cathode materials during electrolysis.
Fig. 4 Results of gaseous product analysis from (a) gas chromatograph with FID and TCD, and (b) IR spectra under the same electrolytic conditions.
Fig. 6 Compositions of electrolysis gaseous products at temperatures of 500-600 °C in the Li1.07Na0.93CO3-0.1LiOH electrolyte system.
Fig. 7 Compositions of electrolysis gaseous products at temperatures of 500-600 °C in the Li1.51K0.49CO3-0.1LiOH electrolyte system.
Fig. 5 Compositions of electrolysis gaseous products in the operating voltage range 1.6-2.6 V at temperatures of 500 °C, 525 °C, 550 °C, 575 °C, and 600 °C in the Li1.07Na0.93CO3-0.1LiOH electrolyte system.
Although a lower electrolysis potential was required as the temperature rose, the higher temperature did not contribute to CO formation. These results confirmed that the CO content reached a maximum at 550 °C and 2.2 V, where the H2/CO molar ratio was 1.02; overall, the H2/CO molar ratio ranged from 1.02 to 7.42 in Li0.85Na0.61K0.54CO3-0.1LiOH when tuning the voltage and temperature. In summary, the four electrolyte systems investigated presented different H2/CO molar ratio ranges under different electrolytic conditions, but with the common feature that all maximum CO fractions were observed at 550 °C. This is a desirably low temperature (vs. 800 °C) 22 that could enable the highly efficient one-pot generation of syngas from CO2/H2O via co-electrolysis in molten salts. In detail, compared with Li0.85Na0.61K0.54CO3-0.1LiOH, the other three systems demonstrated an advantage in the generated H2/CO ratio, implying an enlarged application potential. Furthermore, Li1.51K0.49CO3-0.1LiOH and Li1.43Na0.36K0.21CO3-0.1LiOH provided maximum CO contents of more than 60%, superior to those of Li1.07Na0.93CO3-0.1LiOH and Li0.85Na0.61K0.54CO3-0.1LiOH. Therefore, it was concluded that a larger Li2CO3 fraction favors CO generation, and that Li2CO3-induced modification of the interface between the cathode and electrolyte might be responsible for the observed changes in CO content. By regulating the composition of the electrolytes, syngas with a wide range of H2/CO ratios was successfully synthesized, expanding the range of industrial applications of the product. Current efficiency is a significant metric of CO2/H2O transformation selectivity. Based on the volumes of the obtained gaseous products, the current efficiency was calculated, as shown in Fig. 10. An irregular change in the current efficiency was observed in the Li1.07Na0.93CO3-0.1LiOH and Li1.51K0.49CO3-0.1LiOH systems, and the current efficiency of each reduction product was lower than 60%. Presumably, this was due to the deposition of alkali metals at the cathodic surface. 46 CO3 2− ions can also be reduced indirectly via the prior reduction of alkali metal ions to the metal (reactions (11) and (12)). 47 In another study on a Ni electrode under conditions similar to those stated earlier, the cathodic limit corresponded to the reduction of CO3 2− ions to carbon, while the anodic limit was assigned to the oxidation of Ni according to reaction (13), Ni2+ + 2e− → Ni. 48 Of the four electrolytes investigated, the Li0.85Na0.61K0.54CO3-0.1LiOH mixture exhibited the highest current efficiency, with an optimum approaching ~93%.
Fig. 8 Compositions of electrolysis gaseous products at temperatures of 500-600 °C in the Li1.43Na0.36K0.21CO3-0.1LiOH electrolyte system.
Fig. 9 Compositions of electrolysis gaseous products at temperatures of 500-600 °C in the Li0.85Na0.61K0.54CO3-0.1LiOH electrolyte system.
Fig. 10 Current efficiencies of total gas generation in various electrolytes measured in the two-electrode system: (a) Li1.07Na0.93CO3-0.1LiOH, (b) Li1.51K0.49CO3-0.1LiOH, (c) Li1.43Na0.36K0.21CO3-0.1LiOH, and (d) Li0.85Na0.61K0.54CO3-0.1LiOH electrolyte systems.
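The current-efficiency metric discussed above can be sketched as follows: convert the collected gas volumes to moles, weight them by electrons per molecule, and divide by the passed charge. All inputs below (volumes, molar volume, current, time) are assumed placeholder values, and the two-electron counts reflect the usual CO and H2 half-reactions rather than data reported in this paper.

```python
# Hedged sketch of a current-efficiency estimate from collected gas volumes.
F = 96485.0   # Faraday constant, C/mol e-
VM = 24.5     # approximate molar volume at room temperature, L/mol (assumed)

def current_efficiency(gas_volumes_l: dict, electrons: dict,
                       current_a: float, time_s: float) -> float:
    """Fraction of the passed charge accounted for by the collected gases."""
    charge_in_products = sum(
        (v / VM) * electrons[gas] * F for gas, v in gas_volumes_l.items()
    )
    return charge_in_products / (current_a * time_s)

volumes = {"CO": 0.20, "H2": 0.15}   # assumed collected volumes in litres
n_e = {"CO": 2, "H2": 2}             # electrons per molecule for each product
print(current_efficiency(volumes, n_e, 1.0, 3600.0))  # approx. 0.77
```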
However, it produced a relatively low fraction of CO (~25%). The current efficiencies for fuel gas production were all over 79%, signifying that regulating the applied cell voltage permits the generation of syngas with various compositions.
Conclusions
In summary, the one-pot generation of syngas with a wide range of H2/CO ratios (0.62-9.6, versus ratios above 1 in earlier work) was achieved through a rationally designed molten-salt electrolysis system. Electrolysis was carried out at 500-600 °C with an operating voltage of 1.6-2.6 V using a low-cost Fe cathode and a Ni anode. CO2/H2O was directly transformed into syngas via electrolysis in the Li0.85Na0.61K0.54CO3-0.1LiOH electrolyte, with a 93.2% current efficiency at a constant voltage of 1.8 V and a temperature of 500 °C. Moreover, the H2/CO molar ratios were 0.62-9.60, 0.59-8.08, 0.53-7.76, and 1.02-7.42 for the Li1.07Na0.93CO3-0.1LiOH, Li1.51K0.49CO3-0.1LiOH, Li1.43Na0.36K0.21CO3-0.1LiOH, and Li0.85Na0.61K0.54CO3-0.1LiOH electrolytes, respectively. Syngas with diverse H2/CO ratios was obtained by regulating the electrolyte composition, applied cell voltage, and electrolysis temperature. The methane-based hydrocarbon content varied within 10% across the four electrolytes. In this manner, this work provides a path towards further enhancement of the product selectivity for CO and H2, and demonstrates a new sustainable process for recycling H2O and CO2.
Conflicts of interest
There are no conflicts to declare.
6,861.4
2017-11-10T00:00:00.000
[ "Materials Science" ]
Crafting chirality in three dimensions via a novel fabrication technique for bound states in the continuum metasurfaces
An additional deposition step was added to a multi-step electron beam lithographic fabrication process to unlock the height dimension as an accessible parameter for resonators comprising the unit cells of quasi-bound states in the continuum metasurfaces, which is essential for the geometric design of intrinsically chiral structures. Circularly polarised light possesses chirality, i.e., tracing the light path reveals a structure with a mirror image that is not superimposable through rotation or translation operations 1,2. This distinctiveness of the structure and its mirror image allows for the arbitrary yet specific assignment of left- or right-handedness 1,2. Illuminating a chiral probe with circularly polarised light results in differential light-matter interactions depending on whether the light is left- or right-handed 1,2. Manipulating the geometric design of the chiral probe can further tailor these selective light-matter interactions 1,2.
One technology that can be designed to exhibit chiral optical properties is a metasurface 2. Metasurfaces are engineered arrangements of subwavelength resonators that provide tuneable systems for controlling the interaction of different polarisation states of light with matter 2. These resonators can be made from different materials: plasmonic 3, dielectric 4-6, or a combination of both 7. To address the high optical losses associated with plasmonic materials, research in metasurfaces has shifted towards all-dielectric material systems 3,5.
Within this realm of dielectric metasurfaces, the phenomena of bound states in the continuum (BICs) and quasi-bound states in the continuum (qBICs) have been demonstrated 7-9. BICs are discrete energy states trapped in a system surrounded by a continuum of energy states 7-9. In contrast, qBICs approximate BICs but allow the release of the trapped discrete energy 7-9. The intentional design of the resonators enables control over the release of energy in qBIC metasurfaces 7-9. Transforming a BIC system into a qBIC system necessitates breaking the symmetry of the resonator geometry 10-12, the resonator arrangement 13, or the incidence angle of light 10.
However, most qBIC metasurfaces realized by breaking the symmetry of the resonator geometry are constrained to two-dimensional manipulations (Fig. 1a), a consequence of the limitations of the fabrication techniques available for all-dielectric metasurfaces 5,10,14-16. All fabrication techniques must build resonators that are smaller than the operational wavelength 17. For visible wavelengths, the fabrication techniques can be categorized into lithographic, laser, or chemical methods 17,18. Electron beam lithography, used for the majority of reported all-dielectric metasurfaces 17, offers precision, reliability, and repeatability, but it is limited to two-dimensional elements 16-18. This drawback hinders the manipulation of the three-dimensional geometry of resonators, which is crucial for the design of maximally chiral probes 19,20. Consequently, this restricts applications in the study of chirality, including but not limited to the fields of analytical chemistry 10-12, pharmaceutics 6,10, and the extraterrestrial search for life 6,10,21.
In a recent publication by Kühner and Wendisch et al. in Light: Science & Applications, the research team presented an additional deposition step for a multi-step electron beam lithography fabrication process 5. This novel nanofabrication methodology provided control over the heights of individual resonators within the unit cells comprising all-dielectric metasurfaces 5. Employing a unit cell composed of two anti-parallel rods (Fig. 1b, Top), the study introduced height disparities between the rods to convert an achiral BIC metasurface into an achiral qBIC metasurface (Fig. 1b, Middle). By tilting the rods of varying heights toward each other, the achiral qBIC metasurface was transformed into a chiral qBIC metasurface (Fig. 1b, Bottom). Continued adjustments to the height difference and angular orientation of the two rods tuned the differential interactions of the chiral qBIC metasurface when illuminated by left- or right-handed circularly polarised light. The final parameters selected yielded a 70% difference in transmittance signals between the two polarisation states of light, underscoring the potential for achieving maximum optical chirality, wherein information from one handedness of light-matter interactions cannot be obtained from the opposite handedness, i.e., a 100% difference in signals 22.
This work introduced a new level of fabrication complexity, offering a previously unattainable degree of freedom for tailoring the optical response of chiral metasurfaces by unlocking the height dimension of resonators for geometric manipulation 5. Further efforts to extend this freedom to the Angstrom level could pave the way for maximum chirality in response to electromagnetic waves from arbitrary angles of incidence, because such small resolutions may permit the systematic study of the asymmetry of all reflection and transmission processes 5,6,19,22-24. Nonetheless, these results hold promise for chiral nanophotonic applications in biochemical sensing 25, enantiomeric separation 11,12, polarisation conversion 13, and chiral emission 26.
Fig. 1 BIC to qBIC by breaking the symmetry of resonator geometry. a Two-dimensional and b three-dimensional geometric manipulations of anti-parallel rods that can make up the unit cell of a qBIC metasurface. (Top) Symmetric, achiral rods. (Middle) Through resonator symmetry-breaking, the rods comprise an asymmetric unit cell. (Bottom) By tilting the rods towards one another, the unit cell becomes chiral.
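To make the 70% transmittance-difference figure of merit concrete, a minimal sketch of the chiral contrast calculation is given below; the Gaussian resonance line shape, wavelength band, and transmittance levels are synthetic stand-ins, not the authors' measured qBIC spectra.

```python
# Minimal sketch of a circular-dichroism contrast from transmittance spectra.
import numpy as np

def cd_contrast(t_lcp: np.ndarray, t_rcp: np.ndarray) -> np.ndarray:
    """Normalized contrast; values of +/-1 mark maximal chirality, i.e. one
    handedness is fully transmitted while the other is fully blocked."""
    return (t_lcp - t_rcp) / (t_lcp + t_rcp + 1e-12)

wavelengths = np.linspace(700, 900, 201)                       # nm, assumed band
t_lcp = 0.85 - 0.70 * np.exp(-((wavelengths - 800) / 8) ** 2)  # resonant dip
t_rcp = np.full_like(wavelengths, 0.85)                        # off-resonant

contrast = cd_contrast(t_lcp, t_rcp)
print(f"peak |T_RCP - T_LCP| = {np.max(np.abs(t_lcp - t_rcp)):.2f}")  # ~0.70
```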
1,169
2024-02-05T00:00:00.000
[ "Physics", "Engineering" ]
Hydrojet-based delivery of footprint-free iPSC-derived cardiomyocytes into porcine myocardium
The reprogramming of a patient's somatic cells into induced pluripotent stem cells (iPSCs) and their consecutive differentiation into cardiomyocytes enables new options for the treatment of infarcted myocardium. In this study, the applicability of a hydrojet-based method to deliver footprint-free iPSC-derived cardiomyocytes into the myocardium was analyzed. A new hydrojet system enabling a rapid and accurate change between high tissue penetration pressures and low cell injection pressures was developed. Iron oxide-coated microparticles were injected ex vivo into porcine hearts to establish the application parameters, and their distribution was analyzed using magnetic resonance imaging. The influence of different hydrojet pressure settings on the viability of cardiomyocytes was analyzed. Subsequently, cardiomyocytes were delivered into the porcine myocardium and analyzed by an in vivo imaging system. The delivery of microparticles or cardiomyocytes into porcine myocardium resulted in a widespread three-dimensional distribution. In vitro, 7 days post-injection, only cardiomyocytes applied with a hydrojet pressure setting of E20 (79.57 ± 1.44%) showed significantly reduced cell viability in comparison to cells applied with a 27G needle (98.35 ± 5.15%). Furthermore, significantly less undesired distribution of the cells via blood vessels was detected compared to 27G needle injection. This study demonstrated the applicability of the hydrojet-based method for the intramyocardial delivery of iPSC-derived cardiomyocytes. The efficient delivery of cardiomyocytes into infarcted myocardium could significantly improve regeneration.
To date, various cell-based approaches 6 have been clinically tested to promote cardiac regeneration through the integration, differentiation, and proliferation of implanted cells, such as the application of mesenchymal stromal cells (MSCs) isolated from bone marrow 7 or adipose tissue-derived stem cells 8, skeletal myoblasts 9, circulating progenitor cells 10, and cardiac stem cells 11. However, recent data suggest that the clinically observed benefit associated with the injection of bone marrow-derived cells is primarily due to the release of paracrine factors 12,13, and there is a certain risk of spontaneous osteogenic differentiation of such cells after in vivo application. For instance, intramyocardial calcification was reported after transplantation of unpurified bone marrow cells into the infarcted myocardium of rats 14. In another study, encapsulated calcified or ossified structures were also found after the injection of MSCs into infarcted mouse hearts 15.
The discovery of the reprogrammability of somatic cells into induced pluripotent stem cells (iPSCs) 16,17 opened up new possibilities for regenerative therapies in general. These cells have the potential to differentiate into a variety of cell types of the body, and the unlimited proliferation capacity of iPSCs could, in particular, allow the generation of large numbers of autologous cardiomyocytes for the repair of infarcted myocardium. Since the first generation of iPSCs from fibroblasts by using retroviral vectors 16,17, other integrative vectors such as lentiviral vectors 18, plasmids 19, or piggyBac transposon-based delivery systems 20 have also been used to reprogram somatic cells.
New strategies focus on non-integrative methods, such as the use of adenoviral 21, Sendai 22, and episomal vectors 23, mRNA 24, or proteins 25 to obtain clinically applicable, terminally differentiated cells from iPSCs. In a recent study, we showed that, by using self-replicating RNA, iPSCs could be generated from human urine-derived renal epithelial cells and that beating cardiomyocytes could be obtained from these iPSCs 26. This method allows the non-invasive and simple collection of a patient's somatic cells for reprogramming and the footprint-free generation of iPSCs for subsequent differentiation into cardiomyocytes, which can be used to repair infarcted myocardium.
Regarding the cell delivery strategy and functional outcome, an inconclusive view arises from the literature. For example, the incorporation of endothelial and smooth muscle cells into patches increased the resistance of cardiomyocytes to hypoxic injury as well as the engraftment of transplanted cardiomyocytes 27, and the implantation of such fibrin patches enhanced left ventricular function in a porcine myocardial infarction model 28. However, Gerbin and colleagues reported the formation of scar tissue that physically separated the epicardial patch from the host myocardium 29. In contrast, the injection of cardiomyocytes directly into the myocardium resulted in electrical integration of the implanted cells 29,30. However, covering a large area of myocardial infarction may require multiple injections, leading to procedural damage to the myocardium 31, and should therefore be avoided. Recently, Jäger et al. 32 described a hydrojet-based method for the delivery of MSCs into urethral tissue. They demonstrated a better yield of viable cells compared to needle injections, with a fast and precise injection of viable cells next to or into the sphincter muscle. Here, we aimed to evaluate this new hydrojet concept for the delivery of cardiomyocytes derived from footprint-free generated iPSCs into porcine myocardium. The application was compared to the needle-based application of iPSC-derived cardiomyocytes.
Material and methods
Ethics statement. Renal epithelial cells were isolated from the urine of healthy donors, who gave written informed consent to participate. The study was approved by the Ethics Committee of the Medical Faculty of the University of Tuebingen (911/2018BO2). All experiments were performed in accordance with relevant guidelines and regulations. Since no living animals were used in this study, ethical approval for animal testing was not required. Hearts of German Landrace pigs were purchased from a regional butcher's shop (Faerber, Balingen, Germany).
Cultivation of footprint-free iPSCs from urine-derived renal epithelial cells. Footprint-free iPSCs were generated as previously described in our recent study 26 by seeding 5 × 10^4 renal epithelial cells, obtained from 100-200 ml urine of healthy donors, per well of a 12-well plate coated with 0.1% gelatin. The reprogramming was performed by transfection with 0.5 µg self-replicating RNA (VEE-OKSiM-GFP RNA). The generated iPSC colonies were detached and seeded onto tissue culture flasks coated with 0.5 µg/cm^2 vitronectin (Thermo Fisher Scientific, Waltham, USA). The cells were cultivated in Essential 8 (E8) medium (Thermo Fisher Scientific) at 37 °C and 5% CO2 with daily medium changes and passaged every 4-6 days.
After reaching confluence, iPSCs were washed with Dulbecco's phosphate-buffered saline (DPBS, Thermo Fisher Scientific) and detached by 5 min incubation with DPBS containing 0.5 mM ethylenediaminetetraacetic acid (EDTA, Sigma-Aldrich, St. Louis, USA). After detachment, the EDTA solution was aspirated and the cells were rinsed with E8 medium. 2 × 10^5 cells were seeded per well of vitronectin-coated 6-well plates in E8 medium containing 10 µg/ml ROCK inhibitor Y-27632 (Enzo Life Sciences, Lausen, Switzerland).
Generation of iPSC-derived cardiomyocytes. To generate cardiomyocytes, 2 × 10^5 iPSCs were resuspended in E8 medium containing 10 µg/ml Y-27632 and seeded per well of vitronectin-coated six-well plates. For the differentiation, the PSC cardiomyocyte differentiation kit (Thermo Fisher Scientific) was used according to the manufacturer's instructions with small modifications. On day 0, the medium was changed to cardiomyocyte differentiation medium A. On day 2, the medium was changed to cardiomyocyte differentiation medium B. On day 4, cardiomyocyte maintenance medium (CMM) was added to the cells, which were further cultivated until day 10 to 12 with medium changes every other day. The differentiation of iPSCs into cardiomyocytes was determined by flow cytometry and immunofluorescence microscopy using PE-labeled mouse anti-human α-actinin and FITC-labeled anti-human cardiac troponin T antibodies (both from Miltenyi Biotec, Bergisch Gladbach, Germany). A more detailed characterization of the cardiomyocytes generated from footprint-free iPSCs was performed in our recent study 26.
Hydrojet system. A new hydrojet system 32 was used to apply cells and microparticles into the porcine heart muscle. It allowed the generation of pressures (E = effects) ranging from E5 to E80 while enabling a rapid and accurate change between tissue penetration pressures and cell injection pressures. First, using a "tissue penetration jet", 1 ml of 0.9% NaCl solution was applied at high pressures of E60 or E80 to penetrate the heart tissue. Afterwards, 100 µl of cell or microparticle suspension was delivered using an "injection jet" at low pressures (E5, E10, or E20) to distribute the cells or particles within the penetrated myocardium.
Application of magnetic polystyrene microparticles into porcine hearts. To simulate and predict the distribution of cardiomyocytes after injection into the myocardium using a 27G needle or the new hydrojet system, magnetic polystyrene microparticles (Sigma-Aldrich; 6.8 × 10^6 particles/ml) with a diameter similar to that of cardiomyocytes (10 ± 0.5 µm) were chosen and injected into porcine hearts. For this purpose, porcine hearts were rinsed with 0.9% NaCl solution, sealed in plastic bags, and warmed in a water bath to 37 °C to simulate physiological body temperature. Afterwards, using the hydrojet system, the microparticles were injected in the transversal plane 2 cm above the apex from two sites at a 90° angle, towards the left lateral and dorsal sides. Tissue penetration pressures of E60 or E80 were applied combined with an injection pressure of E10, expressed as E60/E10 and E80/E10, respectively. Based on a preliminary phantom study (data not shown), two different quantities of microparticles (85,000 or 42,500) were selected for the assessment of the optimal particle concentration in terms of magnetic resonance imaging (MRI) detection of the artifact signal. For each hydrojet injection, 100 µl of 0.9% NaCl containing 85,000 or 42,500 microparticles was used. As a reference, microparticles were also injected into heart muscles using a 27G needle.
Therefore, the needle was inserted 2 cm above the apex at a 90° angle, from two sites, 0.5 cm deep into the myocardium. The hearts were positioned in beakers filled with 0.9% NaCl, and MRI was performed.
MRI of porcine hearts. The microparticle-injected hearts were scanned on a 3 T MRI system (MAGNETOM Prisma fit, Siemens Healthineers, Erlangen, Germany). The body coil was used for homogeneous radio-frequency transmission and a 20-channel head coil was utilized for signal reception. The morphology and other structures of the hearts were assessed using a proton-density-weighted fast spin-echo sequence with TR/TE = 3000/11 ms, an echo train length of 10, an acquisition bandwidth of 240 Hz/pixel, a field of view of 128 × 128 mm^2, a slice thickness of 2 mm, 21 slices, a 384 × 384 matrix, two acquisitions, and a scan time of 3:50 min. To identify depositions of injected microparticles, series of images were acquired using a gradient-echo (GRE) sequence with multiple echo times (TEs). The GRE sequence was used as an MRI technique sensitive to the distribution of the Larmor frequency in the immediate vicinity of the magnetic microparticles. Higher amounts of magnetic microparticles result in a reduced effective transversal relaxation time and can be localized as negatively contrasted signal voids in the magnitude images. In addition, the magnetic microparticles produce field inhomogeneities, which can be seen as a characteristic dipole field pattern in the GRE phase images. An identifier for the magnetic microparticles, in contrast to tissue with low signal intensity, is the enlargement of the signal voids with increasing TE. The orientation, position, thickness, and number of slices in measurements with the GRE sequence were identical to those in the fast spin-echo (FSE) sequence. The imaging parameters were: TR = 42 ms; TEs = 2.65, 6.71, 10
Analysis of microparticle distribution in porcine hearts. To compare the distribution of microparticles applied into myocardial tissue using the hydrojet or a 27G needle, DICOM (Digital Imaging and Communications in Medicine) data received from MRI were analyzed using 3D Slicer software (version 4.10.2). All sections were retraced with the segmentation function, reconstructed into a three-dimensional (3D) shape considering the layer thickness (2 mm) and layer distance (0.2 mm), and displayed with a smoothing factor of 0.5. Subsequently, the dark particle spots were reconstructed in the same way and their volumes were determined using the segmentation statistics function.
Analysis of the viability of cardiomyocytes after application with the hydrojet system. Cardiomyocytes obtained 10-12 days after the differentiation of iPSCs were washed with 1 ml DPBS per well and detached using 1 ml TrypLE (Thermo Fisher Scientific) for 10 min. Afterwards, 1 ml trypsin neutralization solution (TNS, PromoCell, Heidelberg, Germany) was added per well of the six-well plate. Cells were centrifuged at 200 × g for 5 min and washed once with 5 ml DPBS. Afterwards, cardiomyocytes were resuspended in CMM at a final concentration of 1 × 10^7 cells/ml. To analyze the impact of the injection jet pressure on the viability of cardiomyocytes, 100 µl of cell suspension containing 1 × 10^6 cardiomyocytes was injected and collected in 15 ml tubes filled with 2 ml prewarmed CMM. Injection pressures of E5, E10, and E20 were investigated. The same procedure was also performed by manual injection of 100 µl of cell suspension with a 27G needle syringe.
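The volume readout described in the distribution-analysis step above reduces to counting segmented voxels and scaling by the voxel volume, which 3D Slicer's segment statistics performs internally. A minimal sketch follows, with an entirely synthetic mask and spacings loosely based on the stated acquisition geometry (2 mm slices, 128 mm field of view over a 384 matrix); none of it reproduces the study's actual data.

```python
# Rough sketch of a segmentation-volume computation from a labelled MRI stack.
import numpy as np

def segment_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation, given per-axis voxel spacing in mm."""
    voxel_volume = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_volume

# Synthetic example: 21 slices of 384 x 384 pixels, with a toy "artifact" region.
mask = np.zeros((21, 384, 384), dtype=bool)
mask[8:13, 150:250, 150:250] = True
print(segment_volume_mm3(mask, (2.0, 0.333, 0.333)))  # about 11,000 mm^3 (toy value)
```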
Cells were counted in a Neubauer chamber and viability was determined using trypan blue (Thermo Fisher Scientific). In an additional experiment, the same injection procedures were repeated and the cardiomyocytes were centrifuged at 200 × g for 5 min, resuspended, and seeded into vitronectin-coated 48-well plates in CMM. Cell viability was measured after 24 h and 7 days using the PrestoBlue assay. To this end, the medium was replaced by 1 ml PrestoBlue solution (Thermo Fisher Scientific), diluted 1:10 in CMM, and the cells were incubated for 90 min. The metabolized solution was then analyzed in triplicate using a fluorescence microplate reader (Mithras LB 940, Berthold Technologies, Bad Wildbad, Germany). Additionally, the cells were stained using calcein AM (Thermo Fisher Scientific) and analyzed using an Axiovert 135 microscope and AxioVision 4 software.
Injection of cardiomyocytes into porcine hearts. Porcine hearts were rinsed with 0.9% NaCl solution, sealed in plastic bags, and warmed to 37 °C in a water bath to simulate physiological body temperature. Before injection, the hearts were cut horizontally 5 cm above the apex and the upper part was removed. This allowed better positioning and observation of the hearts in an in vivo imaging system (IVIS Spectrum, PerkinElmer). Then, 1 × 10^6 XenoLight DiR fluorescent dye-labeled cardiomyocytes resuspended in 100 µl CMM, or 100 µl CMM without cells, were injected at a 90° angle 2 cm above the apex into the myocardium, using either the new hydrojet system (E60/E10, E80/E10) or a 27G needle.
Detection of injected cardiomyocytes in porcine hearts using IVIS. After the injection of cardiomyocytes, near-infrared imaging was performed using IVIS. Imaging was performed with the apex placed upwards. Subsequently, the apex was cut sagittally at the injection site into two parts and imaging was performed again. Fluorescence intensities and distribution areas were analyzed with Living Image® software (PerkinElmer). In order to measure the distribution of cells in the tissue (myocardium) only, the detected fluorescence of cells distributed via blood vessels was analyzed separately. The fluorescence emission was normalized to photons per second per square centimeter per steradian and expressed as average radiant efficiency [p/s/cm^2/sr]/[µW/cm^2].
Statistical analysis. Data are shown as mean ± standard deviation (SD) or standard error of the mean (SEM). The comparison of the means of normally distributed data was performed by paired t-test or one-way analysis of variance (ANOVA) for repeated measurements followed by Bonferroni's multiple comparison test. The means of non-normally distributed data were compared using the Kruskal-Wallis test followed by Dunn's multiple comparison test. Statistical analyses were performed two-tailed using GraphPad Prism 6.01 (GraphPad Software, La Jolla, CA, USA). Differences at p < 0.05 were considered significant.
Results
Analysis of differentiated cardiomyocytes from iPSCs. Using the PSC cardiomyocyte differentiation kit, iPSCs were differentiated within 10-12 days into beating cardiomyocytes. The obtained cells showed the typical elongated, rod-like shape of cardiomyocytes, and fluorescence microscopic analyses demonstrated the expression of cTNT as well as α-actinin (Fig. 1A). Flow cytometry analysis revealed that 89.17 ± 5.9% of the obtained cells were cTNT-positive (Fig. 1B) (p** = 0.0016).
Particle distribution in porcine hearts.
After the application of 85,000 or 42,500 magnetic polystyrene microparticles into porcine hearts using the new hydrojet system and the E80/E10 pressure setting, MRI was performed. The application of 85,000 microparticles led to a stronger artifact signal than 42,500 microparticles (Fig. 2A). Therefore, 85,000 microparticles were applied in the further experiments. To compare the distribution of microparticles after needle- and hydrojet-based application, microparticles were injected from two sites 2 cm above the apex, at a 90° angle to each other and to the sagittal plane. After the needle injection, two small, localized but strong artifacts were detected in the MRI images near the injection sites. In contrast, the application of microparticles using the hydrojet system with the E60/E10 setting (tissue penetration pressure E60, injection pressure E10) resulted in a wider distribution of the microparticles in the myocardium (Fig. 2B). However, the higher tissue penetration pressure of E80 led to channel formation, which was clearly visible after reconstruction of the 3D structure of the heart using 3D Slicer software (Fig. 2C). Compared to the 27G needle injection, both applications with the hydrojet system showed significantly larger distribution volumes [Fig. 2D; E60/E10: 2377 ± 270 mm^3 (p** = 0.0038); E80/E10: 1811 ± 386 mm^3 (p* = 0.0439); 27G needle: 975 ± 228 mm^3].
Discussion
Over the past decade, the ability to generate patient-specific iPSCs from somatic cells has led to significant advances in regenerative medicine and tissue engineering, which raise the hope of healing infarcted myocardium. In this study, we evaluated the deliverability of patient-specific cardiomyocytes derived from footprint-free iPSCs into the myocardium using a new hydrojet system and analyzed the distribution of the delivered cells in comparison to standard needle injection. The MRI and IVIS analyses demonstrated that the hydrojet system can be used to transfer cardiomyocytes into the myocardium with an improved distribution and significantly less injury to cardiac blood vessels compared to single needle injection. The in vitro analyses showed that the transfer of cardiomyocytes by hydrojet with appropriate settings does not impair the recovery rate. The new hydrojet system allowed the precise application of two different jet pressures: the first jet enabled penetration of the tissue (here the epicardium and, partly, the myocardium), while the second jet gently transferred the cells into the target region (here the myocardium). The transfer of cardiomyocytes at hydrojet injection pressures of E5 and E10 had no significant influence on the recovery rate when compared to injection with a 27G needle. In contrast, the transfer of cells at higher pressure, i.e. E20, led to a significant loss of the initial cell numbers. These results were not unexpected, as higher pressures correlate positively with velocity and, accordingly, with increased shear stress, which facilitates cell disruption 33. Similar results have recently been shown for the injection of MSCs into the urinary sphincter complex 32. Immediately after the injection of cardiomyocytes by 27G needle or hydrojet, no influence on cell viability was detected using trypan blue staining. However, 24 h after cultivation of these cells, a slightly decreased cell viability was detected in cardiomyocytes injected with the hydrojet compared to 27G needle injection.
After 7 days of cultivation, the viability of cells applied with the E20 pressure setting remained significantly lower than for needle injection. The viability of cells applied with the E5 and E10 pressure settings was not significantly different from, though lower than, that for needle application. Even though single-needle injection is a widespread cell delivery technique 34-37, needle injections generally bear multifactorial disadvantages that may influence the viability, placement, retention rate, or distribution of cells 31. In our study, a 27G needle was used as the reference, which represents a common needle size, since previous cardiomyocyte injection experiments report needle sizes varying from 23 to 29G 34-38. While small needle sizes can lead to increased damage to the cardiomyocytes due to the higher shear stress and pressures generated during the injection, needles with a larger diameter carry an increased risk of tissue and blood vessel injury and facilitate the reflux of cells along the penetration tract 31,39. Concerning the specific application considered here, needle injection can cause mechanical injury to healthy myocardial tissue and lead to inflammation of the myocardium, which in turn can increase the risk of cardiac arrhythmia 40. Moreover, needle injection leads to injury of cardiac blood vessels and thereby to an undesired spread of cardiomyocytes via the blood vessels to untargeted regions of the heart. In vivo, the aggregation of cardiomyocytes in the coronary arteries could result in blockage of vessels and induce ischemia. Different approaches have been applied to deliver cells into the myocardium, such as intravenous infusion 41,42, perfusion via the cardiac arteries 43-47, or multiple injections into the myocardium 37,48. In a recent study, Tabei et al. applied a newly developed injection device with six needles to deliver human iPSC-derived cardiomyocyte spheroids into the myocardium 38. Thereby, a retention rate of approximately 48% was achieved, compared to a retention rate of around 17% using a single 23G needle. Multiple injections not only affect the retention rate but also increase the size of the myocardial area over which the cells are distributed. For example, up to 15 injections were applied to deliver cardiomyocytes into the hearts of macaque monkeys 34,35. Several clinical studies have shown that about 16% to 21% of the total mass of the left ventricle is affected immediately after myocardial infarction 49-51. Thus, to efficiently regenerate the affected myocardium and restore functionality, a wide distribution of the injected cells is essential. In our study, the 27G needle injection resulted in a limited distribution of cells, as was also shown in the studies performed by Tabei et al. 38. In contrast, the sequential application of two differently pressured fluid jets by the new hydrojet system allowed an improved distribution of iron oxide-coated microparticles and cardiomyocytes compared to single-needle injection, without injury to adjacent blood vessels. Thus, the entire infarct area could be covered without major tissue injury by only 2 to 3 repeated injections with the hydrojet.
However, further in vivo studies with larger cohort sizes are necessary to establish the exact settings for hydrojet injection into the myocardium. In this study, a trend towards a somewhat more widespread distribution of cardiomyocytes was observed with the E60 setting for the penetration jet; however, due to the limited sample size, a clear difference between the E60 and E80 settings for tissue penetration and cell deposition was not observed. Both the hydrojet application and the needle injection were performed epicardially, which is the method applied most frequently for targeted and precise delivery of cells into the infarcted myocardium 52. This is typically performed under cardiac arrest by open-heart thoracotomy 53 or, without cardiac arrest, via lateral minithoracotomy 54. These invasive procedures are associated with a considerable risk of complications. Thus, less invasive catheter-based intramyocardial 55-59 or intracoronary delivery methods 43-47 have already been investigated. Injection studies by Grossmann et al. showed an equal or improved distribution when using endocardial applications compared to epicardial administration 60. Both endocardial and intracoronary administration are suitable for the new hydrojet system and could make hydrojet-based cell transplantation more precise, less invasive, and less traumatic for patients in the future.
Conclusion
The novel hydrojet-based cell transfer technology enabled the efficient ex vivo administration of cardiomyocytes into the porcine myocardium using sequential fluid application. Compared to standard needle injections, the hydrojet-based application resulted in significantly less displacement of cells via the coronary vessels. Thereby, potential risks due to occlusion of the vessels by aggregated cardiomyocytes and ischemia can be prevented.
Figure 4. Determination of cell viability using the PrestoBlue assay 24 h and 7 days after the seeding of injected cardiomyocytes into cell culture plates. The viability of cardiomyocytes without injection (control) was set to 100%, and the viability of cardiomyocytes injected by 27G needle or hydrojet is expressed relative to these cells. Results are shown as mean ± SEM [control, E5, E10, and E20 (n = 10), and 27G needle (n = 9)]. Statistical differences were determined using one-way ANOVA followed by Bonferroni's multiple comparison test (*p < 0.05, ***p < 0.001, ****p < 0.0001).
Figure 5. Distribution of cardiomyocytes in porcine hearts after application with the hydrojet system. 100 µl CMM without or with 1 × 10^6 XenoLight DiR fluorescent dye-labeled cardiomyocytes was applied into porcine hearts using a 27G needle or the hydrojet system with tissue penetration pressures of E80 or E60 and an injection pressure of E10 (E80/E10 or E60/E10). (A) IVIS images of the apex region of the hearts from outside and inside the myocardium. The intersection of the apex is schematically indicated as a white line. (B) Comparison of the radiant efficiency and near-infrared area after the application of cardiomyocytes into the myocardium. (C) Detection of the NIR-labeled area of blood vessels containing cardiomyocytes. Results are shown as mean ± SD (n = 3). Statistical differences were determined using one-way ANOVA followed by Bonferroni's multiple comparison test (*p < 0.05).
5,759.6
2020-10-08T00:00:00.000
[ "Biology", "Engineering" ]
The impact of digital financial inclusion on bank performance: An exploration of mechanisms of action and heterogeneity
The use of digital technology by banks and other financial institutions to facilitate financial inclusion is referred to as digital financial inclusion. This fusion of digital finance and traditional banking methods has the potential to impact banks' operational effectiveness. This study uses a panel effects model to examine the link between digital financial inclusion and bank performance in 30 Chinese provinces from 2012 to 2021. The research uses kernel density estimation to examine the spatial-temporal growth patterns of both variables. Risk-taking serves as the mediator variable in examining how digital financial inclusion affects bank performance. Finally, the paper analyses the regional heterogeneity of the impact. It presents the following conclusions: (1) In China, digital financial inclusion and bank performance have constantly increased, with noticeable regional variances in their development levels. This regional inequality has widened gradually since 2018, yet it has not resulted in polarization. (2) The significant positive correlation between digital inclusive finance and banking performance indicates that banking performance tends to increase with the enhancement of digital inclusive finance. (3) Digital financial inclusion impacts bank performance with risk-taking as a mediator: the spread of digital financial inclusion services enhances banks' willingness to take risks, enhancing overall efficiency. (4) Digital financial inclusion boosts bank performance in the Northwest, South, North, and East regions while slightly inhibiting it in the Central region. Based on these findings, the study offers suggestions for banks and government.
Introduction
Digital financial inclusion is a type of inclusive finance that has emerged from ongoing advancements and innovations in digital technology and science. Its primary objective is to offer high-quality financial products and services to individuals from all social classes and industries. This entails expanding upon previous modes of financial operation by overcoming the low efficiency in credit and business processing caused by geographical barriers and information disparities. By doing so, digital financial inclusion aims to facilitate the overall economic development of different regions and industries while enhancing individuals' living standards [1,2]. Han, Zeng [3] argue that the resource and cost consumption of financial inclusion in daily operations is a significant concern that warrants careful consideration, owing to the considerable societal demand for digital financial inclusion and the increasing range of areas in which financial products and services are expected to meet such demand.
The use of digital technology infrastructure in banks and other financial institutions has deepened in recent years; digital technologies, primarily big data, cloud computing, and AI, can be seen everywhere in people's daily lives and have gradually penetrated the financial sector. The degree to which such technologies are applied in the financial sector determines how conveniently financial products and services can be updated, affecting the customer's experience of the products and satisfaction with the services [4]. Li, Long [5] underscore that the current landscape of digital financial inclusion is the culmination of the historical trajectory of microfinance enterprises, ultimately transforming into a tangible expression of inclusive finance. Additionally, given their status as the earliest financial institutions in China, banks hold an indispensable and integral position within the present financial ecosystem. According to Wang, Zhang [6], the performance of banks is influenced by a myriad of factors, owing to the diverse financial resources within society and the perpetual challenge of maintaining stable levels of consumption and demand over time. However, the effective deployment of digital infrastructure and the successful implementation of financial inclusion strategies have consistently had a positive impact on bank performance. This improvement in bank performance underscores the efficacy of financial inclusion policies and reflects the extensive endeavours aimed at their promotion.
Implementing conventional inclusive financial services has, however, increased commercial banks' workforce, risk, and operating costs. If commercial banks continue to allocate more of their financial resources to clients with long-tail needs, the resources available for top-tier clients inevitably shrink, ultimately making it impossible to effectively meet the financial service requirements of any specific demographic. Consequently, the decline in the performance of commercial banks grows more severe. The establishment of digital inclusive finance is a direct result of integrating financial science and technology into the inclusive finance sector, made possible by the creation and growth of the Internet financial market. Inclusive financial services have broadened their availability and advantages for a substantial number of financial consumers, and the integration of digital technology into conventional inclusive financial services has significantly enhanced the efficiency of financing. Nevertheless, the emergence of digital financial inclusion has brought about both obstacles and hazards while presenting commercial banks with prospects for growth. Although the digital financial inclusion model has successfully stimulated the enhancement of bank performance and decreased the expenses related to financial services for commercial banks, it is crucial to acknowledge the emerging risks to which banks are currently vulnerable and the potential market rivalry.
Therefore, investigating the influence of digital finance on bank performance not only enhances the research in relevant domains but also holds significant practical implications for improving bank performance. Most prior research on digital financial inclusion primarily examines the impact on individuals' lives and entrepreneurship; there is much less research investigating the relationship between digital financial inclusion and bank performance. Furthermore, the existing studies tend to adopt a macro perspective, and there is a scarcity of literature that quantitatively explores the direct influence of digital financial inclusion on bank performance and analyses any variations in this relationship. This paper uses empirical analysis to examine the impact of digital financial inclusion on bank performance. The study employs the kernel density estimation model to investigate the spatial-temporal development trends and characteristics of digital financial inclusion and bank performance.
Additionally, an ordinary panel effect model is used to analyse in detail the impact of digital financial inclusion on bank performance, and a mediation effect model is utilized to analyse this impact with risk-taking serving as the mediator variable. The study also investigates the regional heterogeneity of the influence of digital financial inclusion on bank performance across different areas. The research presented in this paper enhances the existing research in relevant disciplines and establishes a theoretical foundation for understanding the process via which digital inclusive finance affects bank performance.
Literature review
Xiao [7] contends that implementing inclusive financial policies in China has successfully reduced income inequality and addressed the disparity between urban and rural growth. Chinese banks play a crucial role in the country's financial sector, aggressively promoting the development of inclusive financial services. By incorporating digital technology into their operations, banks can overcome the constraints of the traditional financial model, optimizing the effectiveness of financial inclusion initiatives [8]. Based on previous studies, digital financial inclusion may impact bank performance through the breadth of coverage, the depth of use, and the degree of digitization [9,10]. The breadth of coverage primarily pertains to the bank's utilization of big data and other digital technologies to extend coverage to areas and industries not typically addressed by conventional financial services, while also strengthening coverage of areas already served. Depth of use pertains to how much the audience embraces inclusive financial policies following their widespread adoption; this includes residents' uptake of digital financial products and services, the frequency of their usage, and other activities that can impact the bank's performance [11,12]. On the one hand, banks employ big data and artificial intelligence technology to reach rural and remote areas not served by traditional financial services because of time and space constraints. This greatly expands the audience and potential clientele of the banking industry, broadens the opportunities for profit, and raises the probability of the bank's success [13,14]. On the other hand, Wang, Yuan [15] argue that the financing problems of small and medium-sized businesses can be effectively resolved when digital financial inclusion policies are implemented effectively by banks and other financial institutions. Expanding digitalization can boost access to financial services in rural areas by extending the ability of bank branches to provide inclusive banking and by making business processes and audits more accessible [16,17]. From the preceding research, bank performance may be boosted considerably by increasing the depth of use of digital inclusion, the breadth of coverage, and the level of digital application in financial services. This allows us to deduce the study's first hypothesis:
H1: Digital financial inclusion positively affects bank performance.
According to Jing, Miao [18], bank performance may show phases of localized stagnation or regression, since banks are subject to various risks in their operations and services, some of which can be predicted and prevented while others cannot. Improving a bank's risk-taking capability entails increasing the range of risks that can be averted and dealt with, and decreasing the impact of those that cannot be forecast and prevented, in order to ensure that the bank's performance grows steadily [19,20]. With the increasing adoption of digital technology in the realm of financial inclusion, financial institutions such as banks have the opportunity to optimize their online service procedures, enhance their capacity to assume risks in their day-to-day activities, and reduce avoidable losses resulting from ineffective employee conduct [21,22]. Additionally, it is anticipated that financial institutions across various regions will continue to raise the standard of digital technology innovation and financial inclusion. This will generate intense market competition, leading to the full adoption of advanced technological methods across financial institutions and products [23-25]. According to Bhaskaran, Chang [26], expanding digital financial inclusion has numerous benefits: it enables individuals in underserved areas, such as rural regions, to access financial products tailored to their needs; it assists small and medium-sized businesses in overcoming growth barriers caused by limited capital; and it allows banks to diversify their sources of revenue. Considering China's significant rural population, digital financial inclusion can effectively address their needs while mitigating the risk banks encounter when engaging in credit business. Thanks to big data and other digital technologies, financial institutions can now accurately forecast customer credit and spending limits [27]. An improved ability to take on risk when faced with the threat of bad debts in credit operations is another benefit of a bank's diversified revenue streams [28,29]. The above investigation establishes the second hypothesis of this study:
H2: There is a mediating effect of digital financial inclusion on bank performance, with risk-taking as the mediating variable.
Since the use of digital technology in banking has increased the bank customer base and thus improved bank performance, and since the degree to which digital technology is developed and used has a significant impact on bank performance, banks and other financial institutions have begun to invest heavily in the Internet and in digital technology innovation [30-32]. However, Xu [33] argues that the current state of China's digital technology growth demonstrates that the regional development imbalance persists and causes apparent inequalities in the economic activities of the regions. Differences in the development of digital technology across regions are at the root of variations in the level of bank digitization, and these variations in turn affect the breadth of coverage and depth of use of financial inclusion; most obviously, in regions with a high level of digital technology, banks implementing financial inclusion policy will cover a more comprehensive array of people [34,35]. It has been found that the positive effect of digital financial inclusion on bank performance is more significant in China's southeast coastal region than in the northwest, which may be due to the southeast coastal region's higher level of economic development, strong capacity for innovation in digital technology, and greater degree of integration with the financial industry [36]. The growth of digital finance, Diniz, Birochi [37] argue, may also hurt bank performance. Local banks are smaller in scale and slower in capital operation, so their ability to prevent and control this impact is weaker and their performance may show more pronounced fluctuations, whereas larger banks, such as state-owned banks, have more channels for capital accumulation and can better withstand this negative impact and minimize fluctuations in performance. There is thus also regional variation in terms of bank size. The above analysis leads to the third hypothesis of this study:
H3: There is regional heterogeneity in the impact of digital financial inclusion on bank performance.
Research methodology
3.1.1 Kernel density estimation. Kernel density estimation is a nonparametric method used to estimate a distribution density function; it is more intuitive than the histogram and is a further abstraction of it [38]. Therefore, this paper uses the kernel density estimation method to plot the kernel density estimates of bank performance and digital financial inclusion, respectively, which are calculated as follows:
f(x) = (1/(nh)) Σ_{i=1}^{n} K((x − x_i)/h)  (1)
where x denotes the point to be estimated, x_i denotes a sample point selected in this study, n is the number of sample points, and K((x − x_i)/h) refers to the kernel function with parameter h. In general, when the parameter h of this kernel function is greater than 0, it is referred to as the bandwidth, and the value of the bandwidth h affects the degree of smoothing as well as the estimation accuracy of this kernel function.
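A minimal implementation of this estimator, with a Gaussian kernel and a synthetic sample standing in for a province-level indicator, might look as follows; it also shows how the bandwidth h trades smoothness against resolution.

```python
# Minimal Gaussian-kernel density estimator matching Eq (1); the sample is
# synthetic, not the paper's provincial data.
import numpy as np

def kde(x_grid: np.ndarray, samples: np.ndarray, h: float) -> np.ndarray:
    """f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h), with Gaussian K."""
    u = (x_grid[:, None] - samples[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return k.sum(axis=1) / (len(samples) * h)

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.5, scale=0.15, size=300)   # stand-in indicator
grid = np.linspace(0, 1, 101)
for h in (0.02, 0.05, 0.20):   # the bandwidth controls the degree of smoothing
    print(h, kde(grid, samples, h).max().round(2))
```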
3.1.2 Ordinary panel effect model. Given that understanding the impact of digital financial inclusion on bank performance is essential for this study, the paper begins by regressing the direct impact of digital financial inclusion on bank performance using an ordinary panel effect model:

BP_it = α_0 + α_1 DFI_it + α_2 control_it + μ_i + γ_t + ε_it    (2)

where DFI_it refers to the level of digital financial inclusion in province i in year t, BP_it refers to the level of bank performance in province i in year t, control_it refers to the control variables of this paper in province i in year t, γ_t denotes time fixed effects, μ_i denotes individual fixed effects, ε_it denotes the random disturbance term of the model, α_0 denotes the intercept, and α_1 is the regression coefficient of interest.

3.1.3 Mediation effect model. Based on the analysis of Hypothesis 2, banks and other financial institutions can enhance their risk-taking capacity through technological innovation and by expanding revenue channels, which helps mitigate default and bad debt in their credit business and ensures operational efficiency. Thus, following the research approach of Hong, Xuefei [39], this paper takes risk-taking as a mediating variable and builds a mediation effect model in order to verify Hypothesis 2 and to explore whether risk-taking mediates the mechanism through which digital financial inclusion affects bank performance:

BP_it = δ_0 + δ DFI_it + δ_1 control_it + μ_i + γ_t + ε_it    (3)
TR_it = λ_0 + λ DFI_it + λ_1 control_it + μ_i + γ_t + ε_it    (4)
BP_it = θ_0 + θ DFI_it + η TR_it + θ_1 control_it + μ_i + γ_t + ε_it    (5)

In the above equations, TR_it denotes the level of risk-taking in province i in year t. The regression coefficient δ in Eq (3) is the total effect of digital financial inclusion on bank performance. The regression coefficient λ in Eq (4) is the effect of digital financial inclusion on risk-taking. The regression coefficient θ in Eq (5) is the direct effect of digital financial inclusion on bank performance, whereas the product η×λ of the regression coefficient η in Eq (5) and the regression coefficient λ in Eq (4) is the indirect effect of digital financial inclusion on bank performance. The other symbols in the formulas are interpreted in the same way as in Eq (2).
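As an illustration of how Eqs (2)-(5) can be estimated, the sketch below uses statsmodels with province and year dummies for the two-way fixed effects. The data file and column names (BP, DFI, TR, ctrl, province, year) are hypothetical placeholders, not the paper's actual variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")  # hypothetical province-year panel
fe = "C(province) + C(year)"   # two-way fixed effects via dummies

# Eq (3): total effect of digital financial inclusion (delta)
m_total = smf.ols(f"BP ~ DFI + ctrl + {fe}", data=df).fit()

# Eq (4): effect of DFI on the mediator, risk-taking (lambda)
m_mediator = smf.ols(f"TR ~ DFI + ctrl + {fe}", data=df).fit()

# Eq (5): direct effect (theta), controlling for the mediator (eta)
m_direct = smf.ols(f"BP ~ DFI + TR + ctrl + {fe}", data=df).fit()

indirect = m_mediator.params["DFI"] * m_direct.params["TR"]  # lambda * eta
print(m_total.params["DFI"], m_direct.params["DFI"], indirect)
```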
3.2 Variable selection and data sources

3.2.1 Selection of variables. 1. Dependent variable. Banks are the financial institutions most frequently used in people's daily lives; the performance level of banks therefore reflects residents' living standards and economic conditions, and it also reflects the degree of implementation of digital financial inclusion policies. Increased digitization and financial product innovation generally benefit bank performance, and the degree to which a bank invests in digital financial inclusion can be seen in the degree to which its relevant financial goods and service operations are digitized. Therefore, this research examines the determinants of bank performance. Based on Rickinghall's [40] analysis of the available literature, the return on equity (ROE) metric was selected to represent the level of bank performance:

ROE_it = Npro_it / Sret_it,    ROA_it = Npro_it / Tass_it

In the above equations, ROE_it and ROA_it denote the return on net assets and the return on total assets of province i in year t, respectively. Npro_it denotes the net profit of province i in year t; net profit is the profit obtained by subtracting total costs and expenses from the total income of all commercial banks in the region during the year. Sret_it denotes the net assets of province i in year t, equal to the balance of assets minus liabilities. Tass_it denotes the total assets of province i in year t, the sum of the values of all assets of the firms in the same period.

2. Independent variable. With the deep integration of digital technology and inclusive finance, the development of digital inclusive finance has positively impacted the performance levels of banks. Conversely, the improvement in bank performance may also influence the development of digital inclusive finance, enhancing its breadth of coverage and depth of usage. These two factors are interrelated and interact with each other. Therefore, this paper takes digital inclusive finance as the core explanatory variable to explore its specific impact on bank performance. According to the relevant literature, Peking University has constructed 33 indicators based on the breadth of coverage, depth of usage, and digitalization level of digital inclusive finance, which accurately reflect China's level of development in digital inclusive finance; the specific construction is shown in Table 1. Hence, this method is used in this paper to measure China's development level in digital inclusive finance [41,42].

3. Mediating variable. The non-performing loan provision coverage ratio is an essential indicator for assessing a bank's risk-bearing capacity, and it directly impacts the bank's performance. As the non-performing loan ratio increases, banks need to set aside more provisions to cover losses, leading to a decrease in profitability. Conversely, a decrease in the non-performing loan ratio results in a corresponding reduction in provisions, which positively impacts profitability. Moreover, the level of the non-performing loan provision coverage ratio effectively reflects the risk level of bank loans and the financial soundness of the bank. Therefore, this study adopts the method referenced by Wang and Wang [43], using the loan loss provision coverage ratio to measure banks' risk-taking ability. This ratio primarily assesses the proportion of provisions set aside for managing loan default risks relative to the amount of non-performing loans. The provisions for managing loan default risks include general provisions, specific provisions, and special provisions. The classification of bank non-performing loans mainly includes substandard loans, doubtful loans, and loss loans.
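Gathering the dependent and mediating variable definitions above into one minimal sketch (all balances are invented for illustration):

```python
def roe(npro: float, sret: float) -> float:
    """Return on equity: net profit over net assets (assets minus liabilities)."""
    return npro / sret

def roa(npro: float, tass: float) -> float:
    """Return on total assets: net profit over total assets."""
    return npro / tass

def provision_coverage_ratio(general, specific, special,
                             substandard, doubtful, loss) -> float:
    """Loan loss provision coverage ratio: provisions set aside for loan
    default risks relative to the stock of non-performing loans."""
    provisions = general + specific + special
    npl = substandard + doubtful + loss
    return provisions / npl

# e.g. a province-year with net profit 120, net assets 900, total assets 9000
print(roe(120, 900), roa(120, 9000))                       # 0.133..., 0.013...
# provisions of 140 + 60 + 20 against NPLs of 70 + 40 + 10
print(provision_coverage_ratio(140, 60, 20, 70, 40, 10))   # ~1.83, i.e. 183%
```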
4. Control variables. The performance level of banks in actual operation may be affected by many factors, so this paper selects five indicators as control variables, covering various aspects of society and bank development: economic development level, industrial agglomeration level, urbanization level, bank net interest margin, and capital adequacy ratio [44][45][46]. Specifically, the level of economic development is quantified by the logarithm of GDP per capita. The industrial agglomeration level is calculated using locational entropy. The urbanization level is the proportion of the urban population in the total regional population. The bank net interest margin is the difference between interest income and interest expense as a percentage of interest-earning assets. The capital adequacy ratio is the proportion of capital to risk-weighted assets.

3.2.2 Data sources. For this study, we first list all the relevant variables in China's data from 2012 to 2021. Data pre-processing is conducted to eliminate provinces or observations that do not meet our requirements but could still influence our conclusions. Ultimately, a sample of 30 Chinese provinces is selected. The primary data for each variable were collected from multiple publications, such as the China Statistical Yearbook, the China Industrial Statistical Yearbook, the China Population and Employment Statistical Yearbook, the China Financial Yearbook, provincial and municipal statistical yearbooks covering the period from 2012 to 2021, and the International Bureau of Statistics database accessible on its official website.

Variable data analysis

Table 2 presents the descriptive statistics for each variable, obtained from statistical analysis and econometric testing conducted using Stata. The findings reveal notable disparities in bank performance among locations in China. Specifically, the average bank performance is 13.311, with a standard deviation of 5.876, and the gap between the highest and lowest values is nearly 80-fold. The average of the digital financial inclusion index is 250.540, with a standard deviation of 87.744; the highest recorded value is 458.97, while the lowest is 61.47. The range of values is narrower than the disparity in regional bank performance, but it still illustrates the imbalanced character of the nation's regional financial systems. The statistical results for each control variable, such as economic development level, degree of industrial agglomeration, urbanization level, bank net interest margin, and capital adequacy ratio, show variability across China's provinces, indicating significant variations. In general, the statistical results suggest that the data for each variable fall within the normal range and contain no outliers, making the data an appropriate foundation for empirical research.
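The paper reports these summaries from Stata; as a sketch, Table 2-style descriptive statistics can equally be reproduced in Python, assuming hypothetical column names for the panel file:

```python
import pandas as pd

df = pd.read_csv("panel.csv")  # hypothetical province-year panel
cols = ["BP", "DFI", "econ_dev", "agglom", "urban", "nim", "car"]

stats = df[cols].describe().T[["mean", "std", "min", "max"]]
# e.g. the paper's ~80-fold spread in bank performance (requires min > 0)
stats["max_min_ratio"] = stats["max"] / stats["min"]
print(stats.round(3))
```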
The study utilizes the kernel density estimation approach to generate kernel density plots of digital financial inclusion and bank performance, displayed in Figs 1 and 2, respectively. The position of the central point of the kernel density curve, the presence of tailing phenomena, and changes in the width of the peak can be used to infer the distribution and progression of digital financial inclusion and bank performance.

Fig 1 illustrates the changes in the kernel density curve of bank performance as time progresses. The observable rightward shift of the curve suggests a clear improvement in bank performance. Bank performance varies greatly among China's provinces, with certain provinces demonstrating significantly superior performance compared to other regions; this disparity is reflected in the tailing of the kernel density curve of bank performance. This finding provides additional evidence that a regional mismatch characterizes the evolution of China's bank performance. Between 2012 and 2018, the peak of the kernel density curve representing bank performance tended to change from a wide shape to a narrow one, indicating that the extent of variation in bank performance between China's provinces decreased dramatically throughout this period. Nevertheless, the peak transitioned from a more constricted state back to a more expansive state between 2018 and 2021. A possible reason is that, in the initial phases of digital advancement, banks had not yet adequately adopted digital technology, and the disparity in banking and credit services was primarily influenced by non-technical factors such as population concentration. As digital technology is adopted in the financial sector, the revenue gap widens between provinces with favourable geographical positions, advanced economic and scientific technologies, and strong innovation capabilities, and provinces with slower economic development.

Fig 2 illustrates the kernel density curve representing the progress of digital financial inclusion. The curve shows a consistent rightward trend over time, indicating that China's level of development in digital financial inclusion is also increasing. Tailing is also apparent in the kernel density curve of digital financial inclusion, indicating differences in the level of digital finance development among the provinces of China. Nevertheless, no discernible polarising pattern is evident in this fluctuation when examining the number of wave crests. Furthermore, the dispersion in China's digital financial inclusion development grew only slightly from 2012 to 2017; starting from 2017, however, the gap widened each year until 2021. These results align with the wave crest analysis, which revealed that the wave crests of digital financial inclusion remained relatively stable from 2012 to 2017, after which a noticeable yearly pattern indicates a growing disparity in levels of development. In 2017, several affluent provinces experienced a surge in digital technological innovation and increased the utilization of modern digital technology in the financial inclusion sector, while the pace of digital technological innovation in poorer regions lagged. This phenomenon can be attributed to the amalgamation of digital technology and financial inclusion, resulting in the emergence of digital financial inclusion enterprises.
Baseline regression analysis

This study utilizes Eqs (2)-(3) to create a regression model that examines the impact of the digital financial inclusion index on bank performance. The regression results are presented in Table 3. Model (1) displays the outcome of regressing bank performance on the digital financial inclusion index alone, while model (2) shows the outcome of the regression with all control variables included simultaneously. The table shows that the separate regression of bank performance on digital financial inclusion yields a regression coefficient of 10.797 for digital financial inclusion. This coefficient is statistically significant at the 1% level, indicating that an increase in the level of digital finance development can significantly enhance bank performance; specifically, for every unit of improvement in digital financial inclusion, bank performance will increase by 10.797%. When digital inclusive finance is included in a regression analysis along with the control variables, the results show that digital inclusive finance still has a significant positive impact on bank performance, at the 1% significance level, even when controlling for other factors. The regression results above confirm Hypothesis 1, which states that digital financial inclusion has a positive effect on bank performance. The advancement of digital financial inclusion broadens the range of services offered by banks and increases the demand for financial management and financing in remote areas and among small and medium-sized enterprises. Additionally, the increased level of digitization effectively lowers the operating costs and credit costs of the bank, ultimately enhancing its performance.
From the regression results of the control variables, the regression coefficients of the economic development level, the degree of industrial agglomeration, and the bank net interest margin are all negative, but only the economic development level and the bank net interest margin pass the significance test at the 1% level, indicating that these two variables have a significant negative impact on bank performance. The main reason is that as the level of economic development and industrial agglomeration increases, the financial market becomes more active, and the development of the market attracts more financial institutions to enter, intensifying competition among banks; banks then reduce lending interest rates or raise deposit interest rates in order to compete for market share, compressing their profit margins. The regression coefficients of the urbanization level and the capital adequacy ratio are positive, and both pass the significance test at the 1% level. On the one hand, with the advancement of urbanization, infrastructure construction and residents' consumption demand continue to grow, which in turn increases the demand for financial services, and these demands provide more business opportunities and sources of income for commercial banks. On the other hand, the capital adequacy ratio is an essential indicator of a bank's ability to withstand risks. A rise in the capital adequacy ratio means that banks have more capital to cover potential risk losses, thus improving their risk management capabilities. This helps to reduce the risk exposure faced by banks and safeguards their sound operations. Therefore, both the urbanization level and the capital adequacy ratio show a significant positive relationship with bank performance.
Mechanism analysis

Based on the analysis of Hypothesis 2, the advancement of digital finance may increase the risk tolerance of financial institutions such as banks, which in turn affects bank performance. Thus, this study examines the relationship between digital inclusive finance and bank performance with risk-taking as a mediating variable. To achieve this, a mediation effect model is constructed. First, a regression is conducted to assess the impact of digital inclusive finance on risk-taking, as outlined in Eq (4); the findings are displayed in model (1) of Table 4. Eq (5) is then estimated to examine the joint impact of digital financial inclusion and risk-taking on bank performance; the findings are presented in model (2) of Table 4. Model (1) indicates that the regression coefficient of digital finance development on risk-taking is 0.455, statistically significant at the 1% level. This suggests that an increase in the level of digital finance development can enhance the bank's risk-taking ability: with each unit of increase in digital finance development, risk-taking ability increases by 0.455%. This is primarily because the development of digital finance expands the utilization of financial products and enhances the bank's revenue sources, facilitating the expansion of the bank's size and the flow of surplus funds. Simultaneously, the advancement of digital inclusive finance is accompanied by a rise in the level of innovation and utilization of digital technology. This not only enhances the efficiency of banks' management and operations, reducing the operational risk they face, but also enables precise analysis of the creditworthiness of customers and the restriction of credit amounts based on their credit status, thereby enhancing the risk-bearing capacity of the credit business.

The regression analysis of model (2) reveals that the coefficient of the bank's risk-taking ability on bank performance is 4.375, significant at the 1% level. This indicates that an increase in the bank's risk-taking level leads to a noticeable improvement in the bank's performance earnings. A higher risk-taking ability enables the bank to engage in medium- and high-risk investments, increasing the likelihood of obtaining higher returns. In addition, improving the bank's ability to handle risks can serve as a constant driving force and assurance for advancing technological innovation and expanding the reach of inclusive finance, thereby elevating the bank's performance. In terms of the mediation effect, the direct influence of digital financial inclusion on bank performance is measured by the regression coefficient of 3.857 in model (2). By contrast, based on the coefficients of model (1), the indirect influence of the digital financial inclusion index on bank performance is the product of 0.455 and 4.375, so the magnitude of the indirect effect is 1.991. Combined with the findings in Table 3, the overall effect of digital financial inclusion on bank performance is 5.848. To summarize, the decomposition of the total effect into the direct and indirect effects of digital financial inclusion on bank performance supports Hypothesis 2.
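The decomposition reported above can be checked arithmetically; the sketch below simply re-derives the indirect and total effects from the quoted coefficients.

```python
lam = 0.455    # Eq (4): effect of DFI on risk-taking (Table 4, model 1)
eta = 4.375    # Eq (5): effect of risk-taking on performance (Table 4, model 2)
theta = 3.857  # Eq (5): direct effect of DFI (Table 4, model 2)

indirect = lam * eta      # 1.990625 -> reported as 1.991
total = theta + indirect  # 5.847625 -> reported as 5.848 (Table 3)
print(round(indirect, 3), round(total, 3))
```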
Additionally, there is evidence that risk-taking plays a mediating role in the impact of digital financial inclusion on bank performance.

Heterogeneity test

The interpretation of the regression data in Table 3 reveals a general upward trend in bank performance as digital financial inclusion infrastructure develops. Despite this, China's regions are marked by significant differences in scientific and technical advancement, resource endowment, and socioeconomic status. From a regional perspective, does the effect of digital financial inclusion on bank performance show different regional characteristics? To answer this, this paper utilizes the standard of China's geographical division to split the provinces into the eastern, central, western, southern, and northern regions, and examines digital financial inclusion and bank performance in each region separately. The results of the regression are displayed in Table 5. The table shows that the regression coefficients of the digital finance index on bank performance are consistently positive and statistically significant at the 1% significance level, suggesting that the level of digital finance development noticeably enhances bank performance in all regions of China. The western region stands out among all regions in terms of the impact of digital finance on bank performance, as indicated by its regression coefficient of 1.455, statistically significant at the 1% level, suggesting that the level of digital finance development in the region has a highly noticeable effect on enhancing bank performance. This phenomenon can be attributed to the central features of digital financial inclusion: inclusiveness and digitization. In contrast to the traditional financial model in rural and remote areas, which excludes the masses from accessing financial services, the digital financial inclusion model receives policy support from the government. This support mandates financial institutions to utilize digital technology to expand the reach of financial inclusion, with the primary objective of establishing digital financial services in remote regions, mainly rural areas. China, being a major agricultural nation, has a significant proportion of its population (40.42%) engaged in farming. The western region, in particular, lags economically and has a substantial number of farmers and agricultural workers. The widespread adoption of digital inclusive finance in this region therefore significantly influences the performance of banks, owing to the large population base, the significant number of individuals affected, and the backing of relevant national policies. Although the eastern region has a higher degree of economic and technological development, banks in this region may incur higher costs to enhance their research and development (R&D) capabilities and promote innovation in their operations and management; as a result, the improvement in bank performance is less pronounced than in the western region. Based on this analysis, it is evident that there are notable variations in the influence of digital financial inclusion on bank performance across regions. This finding supports Hypothesis 3.
Robustness check

A robustness test verifies the credibility of the research results by re-running the regressions with a changed measurement of the variables or a replaced model and analysing whether the regression results are consistent with the previous ones. This paper conducts the robustness test by changing the bank performance measurement and by replacing the double fixed effects model. Table 6 displays the regression outcomes following the substitution of ROE with ROA and the replacement of the double fixed-effects model with a GMM model. The regression coefficients of the digital finance development level on bank performance under the two robustness tests are 0.167 and 8.727, respectively. Both coefficients are statistically significant at the 1% level, indicating an apparent positive effect of improving the digital finance development level on bank performance. These results align with the benchmark regression results and provide evidence of the robustness of the findings.

The regression results in Table 7 demonstrate the impact of each control variable on the relationship between the digital financial inclusion index and bank performance. The regression coefficient of the digital financial inclusion index on bank performance is 5.848, statistically significant at the 1% level, indicating that digital financial inclusion continues to have a significant positive effect on bank performance. The regression analysis reveals that the coefficient of the economic development level on bank performance is -6.120, a significant negative effect at the 1% level. The other control variables also exhibit varying degrees of influence on bank performance. This demonstrates that the outcomes of the robustness assessment align with the benchmark regression, confirming the reliability of the conclusions.

Research findings

The integration of digital technology and financial inclusion has significantly accelerated the progress of digital financial inclusion, consequently augmenting the efficacy of financial services in rural and other geographically isolated regions. These combined efforts have successfully utilized digital financial inclusion capabilities to aid agricultural communities. Given the critical nature of the contemporary financial system, banks should take the lead in promoting and facilitating the dissemination of digital financial inclusion services, which would significantly contribute to expanding small and medium-sized enterprises (SMEs) and agriculture throughout the nation. Achieving greater digital financial inclusion will immediately and substantially affect bank performance. This study draws upon pertinent data from 30 provinces in China from 2012 to 2021. Following the selection of variables, the spatial distribution and development patterns of digital financial inclusion and bank performance in China during that period are analysed using the kernel density estimation method. Finally, conclusions are drawn regarding the impact and mechanism by which digital financial inclusion influences bank performance. To summarize, the following conclusions are deduced:
1. Digital financial inclusion and bank performance in China exhibit an upward trend annually. Nonetheless, variations in development rates can be observed across distinct regions. While these disparities become more pronounced after 2018, they do not exhibit polarization.

2. Enhancing digital financial inclusion can significantly enhance the level of bank performance. Control variables such as economic development level, industrial agglomeration, and bank net interest margin can hinder the development of bank performance. Conversely, the level of bank performance is considerably enhanced by the urbanization level and the bank capital adequacy ratio.

3. The efficiency of financial institutions is affected by the willingness to take risks, which mediates the impact of digital financial inclusion. Improving the efficiency of financial institutions can be accomplished by promoting the widespread use of digital financial inclusion programs.

4. The impact of digital financial inclusion on bank performance varies significantly across areas. The northwest region stands out in terms of the favourable influence of digital financial inclusion on bank performance, primarily due to the region's strong emphasis on national policies and its substantial population of farmers. The heightened risks linked to lending operations in the central area negatively impact bank performance there. The impact of digital financial inclusion on bank performance is also pronounced in the eastern, northern, and southern regions.

Research recommendations

1. Maintain a high standard for building digital inclusive financial infrastructure, an essential prerequisite for digital financial inclusion. The use of digital technology to integrate financial resources, the establishment of industrial digital transformation funds, and the promotion of the transformation of small and medium-sized enterprises into digital industries are all ways to better construct digital financial inclusion facilities from the perspectives of businesses and individuals. Meanwhile, a high-coverage communication network should be established in rural areas so that all people may benefit from financial services, and regional disparities in the growth of digital financial inclusion can be gradually reduced.

2. To improve bank efficiency, it is necessary to enhance innovation in digital technology, broaden the range of financial products, and develop connectivity to large communication networks in underserved locations, such as rural areas. The social credit system also requires enhancement: credit is a crucial cornerstone of the modern financial system, and its evolution has significant implications for banks and other financial institutions, helping to ensure the security of credit transactions and the operational efficiency of banks.

3. To increase customer acceptance and operational efficiency, banks must improve their financial technology innovation capacity by regularly updating financial products and services and by deepening the incorporation of digital technology into the institution's operations. If necessary, banks can also provide staff with appropriate training in digital technology to streamline business processes and win over more clients.
4. The government needs to pay attention to the variations in regional resources and technological development that affect the efficiency of digital financial inclusion and banking. Attention should focus on areas in China's central region where digital financial inclusion has a negative impact on bank performance: the root causes should be investigated, appropriate policy adjustments implemented to spur the development of digital inclusive financial infrastructure there, and the region's credit monitoring system enhanced and brought into conformity with the rest of the country. Additionally, real-time policy adjustment and regional parity in development necessitate dynamic monitoring of the effects of digital financial inclusion and bank performance across the country.

Research limitations and future perspectives

This work utilizes the kernel density estimation methodology to analyse the spatiotemporal evolution characteristics and patterns of digital financial inclusion and bank performance in 30 Chinese provinces from 2012 to 2021. Subsequently, it utilizes the conventional panel model and the mediation effect model to analyse the pathway through which digital financial inclusion influences bank performance and the several channels through which it exerts its effects; geographical differences also exist in how digital financial inclusion affects bank performance. This study employs rigorous data analysis and empirical research to examine the correlation between digital financial inclusion and the development of bank performance. However, some practical limitations need to be considered. First, the number of control variables that can be included in the regression model is limited. Many factors influence a bank's operational performance to varying degrees, but this paper primarily relies on the relevant literature to select control variables, which involves a subjective perspective. Therefore, future research should determine the control variables through field surveys and interviews with experts. Second, this paper focuses on risk-taking as a mediating element in investigating the influence of digital inclusive finance on bank performance. According to the literature, the effect of digital inclusive finance on bank performance is mediated by various other elements, including digital transformation and upgrading, economic agglomeration, industrial structure upgrading, and others. As a result, future studies will continue to investigate in depth the pathways of the influence of digital inclusive finance on bank performance, to address the limitations of this work.
9,608.2
2024-08-20T00:00:00.000
[ "Economics", "Business", "Computer Science" ]
G-quadruplex DNA structure is a positive regulator of MYC transcription

Significance

DNA G-quadruplexes (G4s) are four-stranded DNA structures enriched in regulatory regions of the human genome; however, their functional role in transcription remains an incompletely answered question. Using CRISPR genome editing, genomic approaches for chromatin profiling, and biophysical assays, we demonstrate that a G4 structure folds endogenously within the upstream promoter region of the critical MYC oncogene to positively regulate transcription. Key transcription factors and chromatin proteins bind to the MYC promoter via preferential interaction with a G4 DNA structure, rather than with the duplex primary sequence. Overall, this study demonstrates how G4 structures, rather than DNA sequence, alter the local chromatin landscape and nucleosome occupancy to positively promote transcription.

Supporting text

We investigated the minimum number of mutations required to abolish G4 formation at the endogenous genetic context of MYC in HEK293T cells. It was essential to consider Pu27 (27 bp, five G-runs) within the context of an extended 48 bp region (i.e. MYC WT, eight G-runs), as there are flanking G-runs that can contribute to G4 folding when central G-runs are mutated (1)(2)(3). Using circular dichroism (CD) spectroscopy, we investigated oligonucleotides (fig. S1A, table S8) consisting of single point mutations within the 27 bp or 48 bp context. Mutating a single G at a time in each of the eight G-runs resulted in CD spectra with maxima at ~260 nm and minima at ~240 nm, characteristic of G4 structure formation (Permutations 1-13) (4,5). We then added additional mutations to each G-run, starting from the central ones (Permutations 14-20), to establish the threshold of mutations that abrogate G4 structure formation. We found that mutations in each of the eight G-runs within the 48 bp context were needed to completely abolish G4 formation as judged by CD (fig. S1B, permutation 19). We designated permutation 19 as minimally mutated MYC (MUT MIN). Mutations to each of the five central G-runs within the Pu27 core (permutation 18) within the 48 bp context were designated MUT CORE and found to retain canonical G4 spectrum features, indicating residual G4-forming potential (fig. S1B). We further explored G4 folding under 10 mM K+ or Li+ conditions. G4 oligonucleotides generally show a higher molar ellipticity in K+ compared to Li+ (6). We observed no difference in K+ over Li+ preference between MYC MUT and MUT MIN, suggesting a lack of G4 folding. However, MYC WT and MUT CORE showed a preference for K+ conditions, indicative of G4 folding (fig. S1B).
To confirm that the flanking regions in the MUT CORE oligonucleotide were contributing towards G4 formation, we performed CD on a short 27 bp version of MUT CORE at 10 mM and 100 mM K+ and observed a profile characteristic of a non-G4 structure (fig. S1C). Overall, this shows the involvement of the three flanking G-runs in G4 folding in vitro. To measure structural transitions, we deployed UV thermal melting spectroscopy (7). As the 48 bp MYC G4 sequence construct had not been previously characterised, we performed experiments at near-physiological conditions (100 mM K+) and titrated the K+ concentration (10, 20, 50, 100 mM) to determine that 10 mM K+ was optimal to capture structural transitions for our constructs in thermal melting measurements (fig. S2A). We measured the UV spectra for the MYC WT, MYC MUT, MUT MIN and MUT CORE oligonucleotides at 20°C and 90°C at 10 mM and 100 mM K+. The wavelength at which the thermal difference gives the greatest fold-change between folded and unfolded states was calculated to be ~300 nm (fig. S2B, fig. S2C). Thermal melting measurements were thus taken at this wavelength. MYC MUT and MUT MIN did not display a melting transition (fig. S2D). MUT CORE showed a clear structural transition at ~45°C in 100 mM K+, whereas MYC WT displayed a transition consistent with greater structural stability (~60°C in 100 mM K+) (fig. S2D).

Fig. S1. Circular dichroism MYC G4 guanine-contribution mutational study. (A) Circular dichroism (CD) for oligonucleotides with different mutations to identify G contributions to G4 folding within the MYC G4 27 bp and 48 bp sequence context. A minimum of nine mutations is required for loss of the G4 spectral signature (Permutation 19). Six G mutations at the core Pu27 sequence retain some G4-forming potential for the 48 bp sequence (Permutation 18). (B) Cation-dependency study to interrogate G4 formation in vitro. MYC G4 shows a clear increase in molar ellipticity in the presence of K+ compared to Li+. MYC MUT and MUT MIN show no differences between K+ and Li+ conditions. MUT CORE shows a degree of K+ preference. (C) CD spectra of MYC G4 and MUT CORE short oligonucleotides (27 bp) in the presence of 10 mM and 100 mM K+. The G4 spectral signature is lost for MUT CORE at both K+ concentrations, whereas it was not lost in the 48 bp context. This result suggests that the flanking regions contribute to G4 formation. All measurements were taken in 20 mM lithium cacodylate buffer as previously described (7).

Fig. S2. UV biophysical characterization of the G4 structural perturbations. (A) UV thermal melting measurements for the short (27 bp) and long (48 bp) MYC G4 sequence contexts at different K+ concentrations. Measurements were taken at 300 and 305 nm. (B) UV spectra at 20°C and 90°C for MYC G4 (WT), MYC MUT, MUT MIN and MUT CORE at 10 mM and 100 mM K+. (C) Calculated thermal difference spectra for each oligonucleotide at 10 mM and 100 mM K+. MYC G4 shows a minimum at ~300 nm. (D) UV thermal melting measurements for the short and long oligonucleotides at 10 mM and 100 mM K+. The 48 bp MUT CORE shows melting at ~45°C and the 48 bp MYC G4 at ~69°C. No melting is observed with MYC MUT or MUT MIN. All measurements were taken in 20 mM lithium cacodylate buffer as previously described (7).
Fig. S4. Genotyping of the generated cell lines for the study. Sanger sequencing chromatograms for an amplicon spanning the edited region (black boxes) and flanking sequences in wild type and edited cell lines. The chromatograms confirm homozygosity of the targeted region in the MYC locus for both wild type and edited HEK293T cell lines. Guanine runs (GGG/GGGG) for the wild type MYC G4 are underlined in blue (top chromatogram). Point mutations in MYC MUT cells are underlined in red. Guanine runs (GGG/GGGGG) for the KRAS SWP are underlined in purple. The sequence for the MYC Flip cell line is highlighted in light blue (bottom track). Guanine (G), cytosine (C), thymine (T) and adenine (A) bases are shown in black, blue, red and green, respectively.

Fig. S5. Western blot analysis of MYC protein levels. Western blot showing a reduction in MYC protein level in MYC MUT cells compared to MYC WT and KRAS SWP (left). The drop in MYC protein intensity (~60% of WT, *: p ≤ 0.05, ns: not significant, n = 3, right) was estimated as the area under the curve relative to β-ACTIN, and P-values were calculated using the Wilcoxon rank-sum exact test.

Fig. S3. Assessment of the degree of similarity between DNA sequences. Distance matrices showing how dissimilar two sequences are when imposing numerical values of matches and penalties for gaps and mismatches based on the Needleman-Wunsch algorithm (match = +10, mismatch = -5, gap = -7). This shows that MYC WT and KRAS SWP are dissimilar (NW score = -16) while MYC WT and the MYC mutants are similar (NW score = 9).

Fig. S7. Ontology analysis for the G4-edited generated cell lines. Pathway enrichment analysis illustrating upregulated and downregulated pathways in the absence of the MYC promoter G4 (MYC MUT). Upregulated pathways include signaling pathways and pro-apoptotic programs. Downregulated pathways include MYC targets, mRNA splicing and translation. The score is calculated using gene set enrichment analysis (GSEA) as the normalized enrichment score (NES).

Fig. S12. Motif analysis for G4 binders. Motif analysis derived from the generated CUT&Tag data showing the binding sequences for SP1 and CNBP. The preferential binding motifs include G-rich sequences which can fold into G4s.

Fig. S14. Nucleosome positioning by G4 structures. (A) MNase-seq genome tracks showing nucleosome positioning at the G4-edited site across different MNase digestion time points (20 min, 40 min). (B) 2% agarose gel with the MNase-digested genomes of MYC WT and MYC MUT cells after 5, 10, 20 and 40 min of digestion at 37°C. The bottom bands correspond to the mono- and di-nucleosomal fragments.

Fig. S15. Heatmaps of the binding of histone modifiers and histone methylation distribution with respect to G4 sites. Heatmaps showing genome-wide binding of the histone methyltransferase MLL1 and the activating H3K4me1 and H3K4me3 marks at G4 sites. A repressive mark (H3K27me3) shows no G4 overlap. Tracks show normalized coverage values. Profiles are centered at G4s and cover +/- 2 kb.

Fig. S16. MLL4 protein affinity enrichment by G4-folded oligonucleotides and controls. Affinity enrichment and western blot analysis for MLL4 protein for double-strand (ds) and single-strand (ss) MYC G4, ss/ds MYC MUT, and ss/ds KRAS G4.
Fig. S19. Heatmaps of the RNAPII S5P and BG4 signals upon triptolide treatment. Heatmaps showing genome-wide binding of RNAPII and BG4 at transcription start sites (TSS). The tracks show normalized coverage values. The RNAPII S5P signal is lost over time upon treatment. Profiles are centered at G4s and cover +/- 2 kb.

Fig. S23. Peak overlap across biological samples for the MYC interactome. UpSet plots and pairwise diagrams illustrating the overlap across three biological samples for CUT&Tag of CNBP, SP1, MLL1, MLL4, H3K4me1 and H3K4me3. The most abundant subset is the one where the peak is present in three out of the three family replicates, indicating high technical reproducibility across samples.

Table S15. DESeq2 analysis on RNA-seq. Comparison of gene expression (fold changes) between MYC MUT and KRAS SWP compared to control, revealed by DESeq2 analysis of the RNA-seq dataset.
2,254
2024-02-05T00:00:00.000
[ "Biology" ]
Interrelation of levels of development of innovation potential and transport ecosystem of regions

The article analyzes the impact of innovative approaches to the formation of transport infrastructure on the level of development of the innovation potential of regions, specifies the stages of formation of regional innovation systems, and systematizes the approach to ranking Russian regional innovation indices (RRII) by thematic sub-indices for the pre-pandemic period 2019-2020. The author notes that there is no consensus among domestic scientists about the direction and significance of the effect of a functioning and developing transport system on the region's economy and ecosystem. On the one hand, innovations in the transport sphere influence the dynamics of income growth of the regional economy through the reduction of costs of the transport and logistics system, increases in the speed and volume of passenger and freight traffic, growth of the investment attractiveness of the region, diffusion of innovations, and growth of labor productivity. However, the accompanying growth of agglomerations leads to increased density of traffic flows, environmental problems, and aggravation of transport safety issues. In this connection, the problems of introducing innovations in transport, addressed within programs for raising the level of safety of the transport ecosystem of regions and forming a qualitatively new level of development of regional innovation potential, have a high degree of influence on regional socio-economic systems. Thus, innovation, including in the transport sector, can be considered an indispensable condition for the development of the regional ecosystem.

Introduction

The level of development of the national, including regional, economy depends, among other things, on the effectiveness of the operation of the transport infrastructure of the territory, taking into account the environmental factor. High congestion of roads and railroads, leading, among other things, to pollution of the terrain and disturbance of the ecosystem of the territory, has led the scientific community to search for new approaches to the formation of innovative solutions and technical developments aimed at improving transport efficiency for the regions. Note that practice does not show a direct link between the levels of development of transport infrastructure and of the economy in the regions. This is due to a number of mutually exclusive factors: 1. A highly developed transport infrastructure with high load factors and prompt solution of logistical problems significantly increases trade turnover, creates jobs and, as a result, forms the vectors of regional production development; 2. However, the accompanying growth of agglomerations leads to an increased density of traffic flows, environmental problems and aggravation of transport safety issues. In this connection, the problems of introducing innovations in transport, addressed within programs for raising the level of innovative development of regions and forming a qualitatively new level of development of regional innovation potential, have a high degree of influence on regional socio-economic systems. Thus, innovation, including in the transport sector, can be considered an indispensable condition for regional development.

Materials and methods

Schumpeter [1] writes that the development of an economic entity depends primarily on the implementation of qualitative changes, rather than quantitative ones.
In his opinion, in addition to the growth of production secured by monetary injections, it is also necessary to pursue qualitative growth, which is almost always supported by:
- the use of new raw materials;
- entering new product markets;
- the introduction of new technologies in the production process;
- the launch of a new product, etc.
The works of N.D. Kondratiev [2,3] develop the thesis that the transition from one economic cycle to another occurs due to scientific and technological progress and accumulated innovation experience, which is gradually transformed into ready-made solutions that change the way of everyday life. Currently, innovation is considered in the scientific community to be the main factor in the development of economic life. This is confirmed by Glazyev's theory of long-term economic development [4,5,6], according to which the main technological modes are distinguished, each of which is based on certain technologies. The transition to a new technological mode is accompanied by a sharp jump in production efficiency and, accordingly, by an increase in innovative activity. However, ensuring a high level of innovative development is both an important and a difficult task. According to Y. V. Yakovets [7], there are so-called anti-innovations and pseudo-innovations, which significantly slow down the process of economic development. In this connection, it seems necessary to implement improvements in the assessment of the effectiveness of innovative development. Considering algorithms for managing the development of innovations, we note that they consist of a strategy and an investment policy based on it. These stages form a chain of interconnected factors shaping successful investment activity, which can be seen in figure 1 [8]. The stages are associated not only with the process of creating and implementing innovations, but also with their development and improvement. Many of the listed activities, among them coordination, stimulation and control, are performed not only at a specific stage but throughout the whole process; they are inherent in the work throughout, but their greatest manifestation occurs at the indicated stage. If all stages are followed and the right policies are in place, a favorable innovative environment emerges. In this environment, unique ideas and technologies can actively develop, which, if properly managed and stimulated, allows innovations to be generated. The process of creating innovation is quite complex: it is not enough to have an idea; it is necessary to make calculations and study the problems associated with the innovation. Testing is difficult; without proper support from investors or from the social institutions involved in the innovative environment, such expensive operations as testing and production are simply impossible. The sale and consumption of innovations is also a very difficult process in countries with underdeveloped innovation policies or in Third World countries. A high level of development of national and regional economies, first, gives territories the opportunity to form their own investment capital, which can be invested in the formation and diffusion of innovations, including in the formation of the transport ecosystem. On the other hand, a developed economy of territories attracts external investment capital interested in the implementation of innovations.
Thus, the scientific community is currently discussing the expediency of launching innovative transport projects in developed macro-regions, such as the Hyperloop project by the creator of PayPal and Tesla Motors, Elon Musk (a high-speed vacuum train: a capsule moving along a highway inside a steel tube on air cushions), and a bus project by the Chinese company Shenzhen Huashi Future Parking Equipment, capable of moving parallel to, and above, urban traffic on monorails using the energy of solar panels installed on its roof. At the same time, when assessing the impact of innovative approaches to the formation of transport infrastructure on the level of development of the innovation potential of regions, one can find both similarities and differences among the most relevant current methodologies for assessing the level of innovation development of regions. Of interest is the methodology used to determine the leaders of innovative development and identify applicants for state subsidizing of innovative activities, based on a system of thematic sub-indices [9]:
- ISEC: socio-economic conditions of innovation activity;
- IEA: export activity;
- ITP: technological potential;
- IA: innovative activity;
- IAQ: innovation policy quality.
Ranks of Russian regional innovation indices (RRII) by thematic sub-indices for the pre-pandemic period 2019-2020 are shown in Table 2. Analysis of the data presented in Table 2 showed that the leaders are Moscow (1st place) and St. Petersburg (2nd place). Another method for assessing the effectiveness of innovative development is the one developed by the Association of Innovative Regions of Russia and the National Research University Higher School of Economics. This approach is based on multifactor models, which include a large number of heterogeneous indicators. It makes sense to use such methods when it is necessary to give a generalized assessment of the innovativeness of an economic agent. The results obtained, owing to the specificity of the input data, then cover a greater number of parameters and, accordingly, give a more correct idea of the general dynamics of the effectiveness of the region's innovative development.

Results

A comparative analysis of the level of innovative development of the regions showed that higher indicators were recorded in the regions with a high percentage of clustered enterprises [10]. Figure 2 shows the data on the quantitative indicators of "isolated" innovative enterprises and the indicators of innovative activity of clustered enterprises. The analysis of the data presented in Figure 2 showed that the indicators of innovation activity of clustered enterprises are significantly higher than those of "isolated" enterprises. The advantages of combining isolated enterprises into clusters are presented below [10]:
- the stability and predictability of cash flows increase;
- sales effectiveness increases;
- the cost of organizing the business is reduced due to lower transaction costs, established relationships between internal and external participants of the enterprise, a package approach to the design and signing of contracts, negotiations and information, and the search for importers;
- there is positive dynamics in the development of enterprises of one cluster in the field of innovation, because a large number of experienced and creative employees are concentrated in one cluster.
Thus, all the accumulated knowledge, experience and skills of these people are stored in a single cluster, which encourages participants to generate unique ideas and develop new innovative projects. All of this together significantly accelerates all the innovative processes in the cluster;
- the relations and connections between the participants within the cluster are actively improved, which positively influences the business processes: it helps to concentrate efforts more productively and to adjust to the unstable conditions of the external environment, becoming more flexible [11]. In this case, the final product must pass through all the business processes of a particular cluster, increasing its value at each stage; this method of production is called a "value chain";
- it becomes easier to identify technological trends in innovation in a timely manner, which allows planning and forecasting of future developments.
Note that clustered enterprises are located in one sector, while clusters, at the same time, can conduct activities outside this area. The goals of cluster strategies differ for states at different levels of economic development (see Fig. 3).

Discussions

Thus, increasing the efficiency of resource use in the process of innovative development can be considered an actual and prospective point of growth for many regions of the Russian Federation today. At the moment there is a large variety of methods that allow assessing the effectiveness of the innovative development of economic entities at various levels. The basis of the methodology proposed by the authors for assessing the level of innovation development of the regions is the principle of correlating resources and results by dividing all the calculated indicators into two groups of factors: resource and result coefficients, respectively. The group of resource coefficients of the methodology is represented by two indicators [13]:
1) the coefficient of localization of companies carrying out scientific research work in the region:

K_c = Dcreg / Dcrf    (1)

where Dcreg is the share of companies engaged in research and development in the region, out of the total number of companies in the region, and Dcrf is the share of companies engaged in scientific research out of the total number of companies in Russia;
2) the coefficient of localization of personnel carrying out activities within the framework of research work:

K_p = Dpreg / Dprf    (2)

where Dpreg is the share of personnel carrying out activities within the framework of research work, out of the total average number of personnel in the region, and Dprf is the share of personnel engaged in R&D activities in the total average number of personnel in Russia.
The group of result coefficients of the methodology is also represented by two indicators, including:
1) the localization coefficient of companies actively implementing innovations:

K_ia = Diareg / Diarf    (3)

where Diareg is the share of companies actively implementing innovations out of the total number of companies in the region, and Diarf is the share of companies actively implementing innovations out of the total number of companies in Russia.
The methodology assumes that after the four main coefficients have been calculated, the average localization coefficients by results and by resources must be correlated:

E = K_result / K_resource    (4)

where K_result is the average value of the coefficients of the group of result indicators, and K_resource is the average value of the coefficients of the group of resource indicators.
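A minimal sketch of formulas (1)-(4) follows; the input shares are invented purely for illustration, and the result group is represented by the single result coefficient spelled out in the text above.

```python
import numpy as np

def localization(share_region: float, share_nation: float) -> float:
    """Localization coefficient (Eqs 1-3): regional share over national share."""
    return share_region / share_nation

def integral_indicator(result_coeffs, resource_coeffs) -> float:
    """Eq (4): mean result coefficient over mean resource coefficient.
    Values above 1 indicate a region converting its innovation resources
    into results more effectively than the national average."""
    return float(np.mean(result_coeffs) / np.mean(resource_coeffs))

# Illustrative region: shares of R&D companies and R&D personnel (resources)
k_resource = [localization(0.04, 0.05), localization(0.03, 0.05)]  # 0.8, 0.6
# Share of innovation-active companies (result)
k_result = [localization(0.09, 0.06)]                              # 1.5
print(integral_indicator(k_result, k_resource))                    # ~2.14
```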
According to the results of the evaluation, the resource and performance coefficients are compared, yielding an integral indicator of the effectiveness of innovative development of the region.
Conclusion
The assessment of the innovative development of Russian regions, carried out by the authors within the framework of the proposed methodology (see formulas (1)-(4)), showed that despite the small amount of resources that the Lipetsk, Bryansk and Tula regions possess, for example, these regions manage to achieve significant results in terms of innovative development. From an economic point of view, such a situation is the most favorable. On the other hand, regions were identified that, despite possessing a large number of conditions conducive to development, could not achieve high results in their own innovative development. Among such regions are the Tambov, Tver and Kaluga regions. The best cumulative results, as the analysis showed, were achieved in the Ryazan, Kostroma, Belgorod and Bryansk regions. The Voronezh, Tambov and Moscow regions achieved a significant increase in efficiency from a small absolute base. The evaluation methodology presented above is distinctive in that it considers the specific ratio between the innovative resources used in a region and the results of its innovative activity. It makes it possible to draw conclusions about how efficiently a region uses the means it actually possesses. Moreover, the serious differences between the results obtained and those of other methods support the case that the developed model is genuinely needed. Thus, Moscow, which is the most effectively developing region of the Central Federal District in terms of the implementation of innovations, takes only 10th place in the overall ranking according to the results of the authors' study. This can be explained by the fact that the structure of innovative resources and results of activity in Moscow has been settled for some time and does not show any significant dynamics, because a great deal of money has already been invested in the development of its innovative potential [14]. At the same time, the Kostroma region showed the best results according to the authors of the study, which can be explained as follows: while maintaining the same level of available innovative resources, the region managed to create, on average, more innovations in 2020 than in 2016. The presented data show a rather low level of internal expenditure on research and development, the only exception being the city of St. Petersburg, a modern center of innovative development in Russia. It should also be noted that quite a large volume of the financial resources allocated for the development of innovation potential is formed in the regions where manufacturing industries are more developed. Practice shows that innovations in the transport sector affect the dynamics of income growth of the regional economy through the reduction of the costs of the transport and logistics system, an increase in the speed and volume of passenger and freight traffic, growth in the investment attractiveness of the region, the diffusion of innovation and growth in labor productivity. The development of green innovative technologies and their introduction into the processes of regional transport infrastructure functioning leads to the growth of recreational potential and the stabilization of the ecological balance of the territory [15].
On the other hand, the increase in GRP leads to an increase in investment in the development of transport infrastructure and in research aimed at finding new approaches to the operation and energy supply of regional transport systems. At the same time, railway transport does not lose its competitiveness among other modes of transport, owing to the significant volumes of passengers and freight it carries under varying weather conditions, its optimized cost structure, and the strong prospects for innovative change as scientific and technological progress unfolds in the digital economy.
3,763.2
2023-01-01T00:00:00.000
[ "Economics" ]
Could the observation of X(5568) be a result of the near threshold rescattering effects? We investigate the invariant mass distributions of $B_s\pi$ via different rescattering processes. The triangle singularity which appears in the rescattering amplitude may simulate the resonance-like bump around 5568 MeV. However, because the scattering $B_s^*\pi \rightarrow B_s\pi$ is supposed to be weak, if the pertinent background is much larger, it would be hard to ascribe the observation of X(5568) to rescattering effects.
Introduction
The study of exotic hadron spectroscopy is experiencing a renaissance in the last decade. More and more charmonium-like and bottomonium-like states (called XYZ) have been announced by experiments in various processes (see Refs. [1][2][3][4] for a review). Several charged structures with a hidden $b\bar{b}$ or $c\bar{c}$, such as the $Z_c^\pm(4430)$ [5,6], $Z_b^\pm(10610, 10650)$ [7], $Z_c^\pm(3900)$ [8,9], and $Z_c^\pm(4020)$ [10], were observed by experiments, which would be exotic state candidates. Very recently, the D0 collaboration observed a narrow structure X(5568) in the $B_s^0\pi^\pm$ invariant mass spectrum with 5.1σ significance [11]. The mass and width are measured to be $M_X = 5567.8 \pm 2.9^{+0.9}_{-1.9}$ MeV and $\Gamma_X = 21.9 \pm 6.4^{+5.0}_{-2.5}$ MeV, respectively. The quark components of the decaying final state $B_s^0\pi^\pm$ are $su\bar{b}\bar{d}$ (or $sd\bar{b}\bar{u}$), which requires that X(5568) be a structure with four different valence quarks. After the discovery of X(5568), several theoretical investigations were carried out in order to understand its underlying structure. In Refs. [12][13][14][15], the X(5568) was thought to be a scalar or axial-vector tetraquark state, and the corresponding mass was calculated by constructing the diquark-antidiquark interpolating current in the framework of QCD sum rules. The tetraquark masses calculated with QCD sum rules are found to be consistent with the mass of X(5568). The tetraquark spectroscopy was also calculated by using the effective Hamiltonian approach in Ref. [16]. The lowest-lying scalar tetraquark is found to be about 150 MeV higher than the X(5568) in Ref. [16], but the authors argued that it is still likely that the X(5568) can be identified as the scalar tetraquark when the systematic errors in the model are considered. In Ref. [17], the authors estimated the partial decay width X(5568) → $B_s^0\pi^+$ with the X(5568) being an S-wave BK molecular state. Some non-resonance explanations have been proposed to connect resonance-like peaks with kinematic singularities induced by rescattering effects [18][19][20][21][22][23][24][25][26][27][28][29][30]. It is shown that sometimes it is not necessary to introduce a genuine resonance to describe a resonance-like structure, because some kinematic singularities of the rescattering amplitudes will themselves show up as bumps in the invariant mass distributions.
Before claiming that X(5568) is a genuine particle, such as a tetraquark or molecular state, it is also necessary to confirm or exclude the possibilities of those non-resonance explanations. This work is organized as follows: in Sect. 2, the so-called triangle singularity (TS) mechanism is briefly introduced; in Sect. 3, we discuss several rescattering processes where the TS can be present; a brief summary and some discussion are given in Sect. 4.
TS mechanism
The possible manifestation of the TS was first noticed in the 1960s. It was found that the TSs of rescattering amplitudes can mimic resonance structures in the corresponding invariant mass distributions [31][32][33][34][35][36][37][38][39][40][41]. This offers a non-resonance explanation for resonance-like peaks observed in experiments. Unfortunately, most of the cases proposed in the 1960s lacked experimental support. The TS mechanism was rediscovered in recent years and used to interpret some exotic phenomena, such as the large isospin-violating processes, the production of some exotic states and so on [23][24][25][26][27][28][29][30][42][43][44][45][46][47].
Fig. 1: Triangle rescattering diagram under discussion. The internal mass corresponding to the internal momentum $q_i$ is $m_i$ ($i = 1, 2, 3$); the momentum symbols also represent the corresponding particles.
For the triangle diagram in Fig. 1, all three internal lines can be on shell simultaneously in some special kinematic configurations. This case corresponds to the leading Landau singularity of the triangle diagram, and this leading Landau singularity is usually called the TS. The physical picture concerning the TS mechanism can be understood as follows: the initial particles $k_a$ and $k_b$ first scatter into particles $q_2$ and $q_3$, then the particle $q_1$ emitted from $q_2$ catches up with $q_3$, and finally $q_1$ and $q_3$ rescatter into particles $p_b$ and $p_c$. This implies that the rescattering diagram can be interpreted as a classical process in space-time in the presence of the TS, and the TS will be located on the physical boundary of the rescattering amplitude [37]. The TS mechanism is very sensitive to the kinematic configurations of the rescattering diagrams. It is therefore necessary to determine in which kinematic region the TS can be present. In Fig. 1, we define the invariants $s_1 = (k_a + k_b)^2$, $s_2 = (p_b + p_c)^2$ and $s_3 = p_a^2$. The locations of the TS can be determined by solving the so-called Landau equations [48][49][50]. For the diagram in Fig. 1, if we fix the internal masses $m_i$ and the external invariants $s_2$ and $s_3$, we can obtain the solutions for the TS in $s_1$. Likewise, by fixing $m_i$, $s_1$, and $s_3$, we can obtain similar solutions for the TS in $s_2$. By means of the single dispersion representation of the 3-point function, we learn that within the physical boundary only the solution $s_1^-$ or $s_2^-$ will correspond to the TS of the rescattering amplitude, and $s_1^-$ and $s_2^-$ are usually defined as the anomalous thresholds [46,49,50]. For convenience, we further define the normal threshold $\sqrt{s_{1N}}$ ($\sqrt{s_{2N}}$) and the critical value $\sqrt{s_{1C}}$ ($\sqrt{s_{2C}}$) for $s_1$ ($s_2$) [46]. If we fix $s_3$ and the internal masses $m_{1,2,3}$, then as $s_1$ increases from $s_{1N}$ to $s_{1C}$, $s_2^-$ will move from $s_{2C}$ to $s_{2N}$. Likewise, as $s_2$ increases from $s_{2N}$ to $s_{2C}$, $s_1^-$ will move from $s_{1C}$ to $s_{1N}$. This is the kinematic region where the TS can be present. The discrepancies between the normal and anomalous thresholds can also be used to characterize the TS kinematic region.
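The displayed solutions of the Landau equations do not survive in this extraction, so the sketch below reproduces only two standard ingredients the argument relies on: the Källén function and the normal two-body thresholds, $s_{1N} = (m_2 + m_3)^2$ and $s_{2N} = (m_1 + m_3)^2$. The internal-line mass assignment ($m_1 = m_\pi$, $m_2 = m_\rho$, $m_3 = m_{B_s^*}$) and the rounded PDG-style masses are assumptions made for illustration; the anomalous-threshold solutions themselves are not reconstructed here.

```python
import math

def kallen(x: float, y: float, z: float) -> float:
    """Kallen (triangle) function lambda(x, y, z), used in threshold kinematics."""
    return x**2 + y**2 + z**2 - 2.0 * (x*y + y*z + z*x)

# Approximate masses in GeV; internal-line assignment assumed for illustration:
m1 = 0.1396   # pi
m2 = 0.7753   # rho
m3 = 5.4154   # Bs*

s1N = (m2 + m3) ** 2   # normal threshold in s1
s2N = (m1 + m3) ** 2   # normal threshold in s2: the Bs*-pi threshold

print(f"sqrt(s1N) = {math.sqrt(s1N):.3f} GeV")   # ~6.191 GeV
print(f"sqrt(s2N) = {math.sqrt(s2N):.3f} GeV")   # ~5.555 GeV, just below 5568 MeV

# Above the normal threshold the Kallen function is positive, i.e. the
# corresponding two-body configuration is kinematically open:
assert kallen(1.02 * s2N, m1**2, m3**2) > 0.0
```

With this assignment, the $B_s^*\pi$ normal threshold comes out at about 5.555 GeV, consistent with the value quoted in the text and just below the X(5568) mass.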
The maximum values of these discrepancies can be written in closed form. In Ref. [41], it was argued that for the single-channel rescattering process, when the corresponding resonance-production tree diagram is added coherently to the triangle rescattering diagram, the effect of the triangle diagram is nothing more than a multiplication of the singularity from the tree diagram by a phase factor. Therefore the singularities of the triangle diagram cannot produce obvious peaks in the Dalitz plot projections. This is the so-called Schmid theorem. But for the coupled-channel cases, the situation will be quite different from the single-channel case discussed in Ref. [41]. For the rescattering diagrams which will be studied in this paper, the intermediate and final states are different, and therefore the singularities induced by the rescattering processes are still expected to be visible in the Dalitz plot projections. The reader is referred to Refs. [35,51] for some comments on the Schmid theorem, and to Refs. [52,53] for some discussions of the coupled-channel case. We will focus on the coupled-channel cases in this work.
Triangle diagram
The mass of X(5568) is very close to the $B_s^*\pi^\pm$ threshold, which is about 5555 MeV [2]. One may wonder whether there are some connections between X(5568) and the coupled-channel scattering $B_s^*\pi \to B_s\pi$ near threshold.
Fig. 2: Rescattering diagrams for the production of $B_s\pi$; the symbol A represents the incident state.
Table 1: TS kinematic region corresponding to the rescattering diagrams in Fig. 2, in units of GeV.
In high-energy collisions, the production of $B_s\pi$ may receive contributions from the rescattering diagrams illustrated in Fig. 2. The intriguing characteristic of this kind of diagram is that TSs are expected to be present in the rescattering amplitudes, which may accordingly produce resonance-like bumps in the $B_s\pi$ distributions around the $B_s^*\pi$ threshold. The momentum and invariant conventions of Fig. 2 are the same as those of Fig. 1. According to Eqs. (3) and (4), the kinematic region where the TS can be present is displayed in Table 1. It can be seen that the kinematic region of the TS in $s_1$ is very large for both of the diagrams in Fig. 2: the maximum discrepancy in $\sqrt{s_1}$ is nearly 1 GeV for each of the diagrams. First, this is because the quantity $[(m_2 - m_1)^2 - s_3]$ in Eq. (4) is large. Physically, this quantity corresponds to the phase-space factor for $\rho \to \pi\pi$ ($K^* \to K\pi$), which is sizable. Second, the ratio $m_3/m_1$ is equal to $M_{B_s^*}/M_\pi$, which is also quite large. This means that the kinematic conditions for the presence of the TS can be fulfilled in a very broad energy region of the incident states. The kinematic requirement on the incident state is then largely relaxed, which is an advantage for observing the effects produced by the TS mechanism. On the other hand, the kinematic region of the TS in $s_2$ is relatively smaller: the maximum discrepancy in $\sqrt{s_2}$ is about 0.2 GeV for each of the diagrams, which implies that the TS peaks in the $B_s\pi$ distributions may not stay far away from the $B_s^*\pi$ threshold (the normal threshold $\sqrt{s_{2N}}$). We will naively construct some effective Lagrangians to estimate the behavior of the rescattering amplitudes. Taking into account the conservation of angular momentum and parity, the quantum numbers of the incident state A are set to be $J^P = 1^+$. Some of the Lagrangians involve $V$ and $P^{(\prime)}$, which represent the light vector and pseudoscalar mesons, respectively.
The process $B_s^*\pi \to B_s\pi$ can proceed via P-wave scattering, and the corresponding Lagrangian can be constructed accordingly. The P-wave scattering implies that the quantum numbers of the $B_s^*\pi$ and $B_s\pi$ systems would be $J^P = 1^-$. It should be mentioned that the processes A → $B_s^* V$ in Fig. 2 can also happen through weak interactions in high-energy collisions, and therefore parity does not have to be conserved at this vertex. When the kinematic conditions of the TS are fulfilled, it implies that the particle $q_2$ in Fig. 1 is unstable. We then introduce a Breit-Wigner-type propagator $[q_2^2 - m_2^2 + i m_2 \Gamma_2]^{-1}$ to account for the width effect when calculating the triangle loop integrals. The complex mass of the intermediate state will remove the TS from the physical boundary by a distance [36]. If the width $\Gamma_2$ is not very large, the TS will lie close to the physical boundary, and the scattering amplitude can still feel the influence of the singularity. The numerical results for the $B_s\pi$ invariant mass distributions corresponding to the rescattering processes in Fig. 2 are displayed in Fig. 3. We ignore the explicit couplings and just focus on the line shapes here. The distributions are calculated at several incident energy points. From Fig. 3, it can be seen that some bumps arise around the position of X(5568). Since the TS in $\sqrt{s_2}$ (the $B_s\pi$ invariant mass) can be present for a very broad range of incident energies $\sqrt{s_1}$, the bumps around 5568 MeV would be enhanced to some extent by the cumulative effects of the rescattering amplitudes at different incident energies. The bumps in Fig. 3a are broader than those in Fig. 3b. This is because the decay width of the ρ meson (∼149 MeV) is larger than that of the $K^*$ meson (∼50 MeV) [2]. The larger decay width will remove the corresponding TS in the complex plane further away from the physical boundary. Furthermore, the P-wave scattering $B_s^*\pi \to B_s\pi$ will also smooth the TS peaks to some extent.
Long-range interaction and box diagram
The scattering process $B_s^*\pi \to B_s\pi$ would be OZI suppressed. In the effective Lagrangian of Eq. (7), we assume a contact term, which may account for the short-range part of the corresponding interaction. If we take into account the t-channel contributions, because the momentum transfer in this process will be very small, some long-range interactions, such as the electromagnetic (EM) interaction, may become important.
Fig. 4: The t-channel contributions for $B_s^*\pi \to B_s\pi$: (a) photon exchange; (b) φ exchange.
To judge whether the EM interaction may play a role in $B_s^*\pi \to B_s\pi$, we compare the contributions of the t-channel processes illustrated in Fig. 4. In Fig. 4a, b, we use the photon- and φ-exchange diagrams to partly account for the EM and strong interactions, respectively. Effective Lagrangians are constructed for these vertices. We first compare the coupling constants of the two diagrams. On adopting the vector meson dominance model [54][55][56][57], the ratio $R_{\gamma/\phi} \equiv e g_{B_s^* B_s \gamma}/(g_{\phi\pi\pi}\, g_{B_s^* B_s \phi})$ would be equal to $\sqrt{4\pi\alpha_e}\, g_{\gamma\phi}/g_{\phi\pi\pi}$, where the couplings $g_{\gamma\phi}$ and $g_{\phi\pi\pi}$ are estimated to be 0.0226 and 0.0072 according to the decay widths of φ → e⁺e⁻ and φ → ππ, respectively. The ratio $R_{\gamma/\phi}$ is then obtained to be about 0.9. According to this naive estimation, we can see that the EM couplings may not be much smaller than the OZI-suppressed strong couplings.
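The coupling-ratio estimate above is simple arithmetic and easy to check; a minimal sketch, assuming only $\alpha_e \approx 1/137$ together with the two coupling values quoted in the text, reproduces $R_{\gamma/\phi} \approx 0.9$:

```python
import math

alpha_e = 1.0 / 137.036   # fine-structure constant
g_gamma_phi = 0.0226      # from Gamma(phi -> e+ e-), value quoted in the text
g_phi_pipi = 0.0072       # from Gamma(phi -> pi pi), value quoted in the text

r_gamma_phi = math.sqrt(4.0 * math.pi * alpha_e) * g_gamma_phi / g_phi_pipi
print(f"R_gamma/phi = {r_gamma_phi:.2f}")   # ~0.95, i.e. about 0.9 as stated
```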
Without introducing any form factors to account for the off-shell effects, we further integrate over the momentum transfer t and obtain the cross section ratios $\sigma_\gamma/\sigma_\phi$ for different scattering energies, which are displayed in Fig. 5. It is shown that the cross section corresponding to Fig. 4a is larger than that corresponding to Fig. 4b by about three orders of magnitude. This is mainly because the quantity $1/t$ is much larger than $1/(t - m_\phi^2)$. As a result, we can make the quantitative judgment that the contribution of the EM interaction may be comparable with that of the strong interaction. Taking into account the above arguments, the triangle diagram in Fig. 2 can accordingly be changed into the box diagram in Fig. 6, and the numerical results for the $B_s\pi$ invariant mass distributions are displayed in Fig. 7. Because the masses of $B_s$ and $B_s^*$ are different, it can be judged that there is no infrared divergence in these box diagrams [58]. Compared with the resonance-like bumps in Fig. 3, the bumps in Fig. 7 are much narrower and look more like resonance peaks. This implies that the interaction details of the scattering $B_s^*\pi \to B_s\pi$ may affect the TS mechanism to some extent. If the long-range EM interaction plays a dominant role in $B_s^*\pi \to B_s\pi$, one cannot expect TS peaks to arise in the $B_s\pi^0$ invariant mass distributions, because there is no $\gamma\pi^0\pi^0$ vertex. We can further conclude that if the observation of X(5568) is due to the rescattering effects and the long-range EM interaction dominates the scattering $B_s^*\pi \to B_s\pi$, there will be no charge-neutral partner of X(5568). It should be mentioned that we did not take into account the contribution of any possible background in the numerical calculations, but the background could be essential for the rescattering processes discussed in this paper. Because the reaction $B_s^*\pi \to B_s\pi$ could be weak, if the background is very large, it is possible that the peaks produced by the rescattering effects will be smoothed to some extent. However, a quantitative discussion of this uncertainty is beyond the scope of this paper and is left for future work.
Weak interaction process
As stated before, the process A → $B_s^* V$ can also happen via the weak interactions, such as $B_c^{(**)} \to B_s^* V$. Interestingly, according to the quark model calculation in Ref. [59], there are many charm-beauty mesons $B_c^{(**)}$ whose masses fall into the region 6.2−7.5 GeV. This energy region has a large overlap with the TS kinematic region displayed in Table 1. This is another support for the possibility that the TS mechanism plays a role in the observation of X(5568).
Summary
In this paper, we investigated the invariant mass distributions of $B_s\pi$ via different rescattering processes. For a very broad incident energy region, one can expect that TS bumps could be present around 5568 MeV in the $B_s\pi$ distributions, which may simulate the resonance-like structure X(5568). However, to conserve parity and angular momentum, the process $B_s^*\pi \to B_s\pi$ should be a P-wave scattering, whose amplitude is suppressed by the low momentum of the scattering particles. Furthermore, this process is also suppressed by the OZI rule. Therefore the scattering amplitude of $B_s^*\pi \to B_s\pi$ is supposed to be weak, which weakens the possibility of ascribing the observation of X(5568) to rescattering effects. Some similar discussions can be found in Refs. [60,61].
The rescattering processes induced by the EM interactions can make the TS bumps narrower, and the amplitude strength can even be comparable with that of the strong interaction. But if the EM interactions play a dominant role in the rescattering effects, it can also be expected that the pertinent background would be much larger. Therefore, although the rescattering amplitudes could be enhanced to some extent by the TS mechanism, it is still hard to describe the X(5568) with rescattering effects. In a preliminary result of the LHCb collaboration [62], the existence of X(5568) was not confirmed based on their pp collision data, which makes the production mechanism and the underlying structure of X(5568) more puzzling. Further extensive experiments may help us to clarify the ambiguities and check different mechanisms.
4,432.4
2016-08-01T00:00:00.000
[ "Physics" ]
Microbial Changes in the Periodontal Environment Due to Orthodontic Appliances: A Review
Orthodontic appliances significantly influence the microbiological dynamics within the oral cavity, transforming symbiotic relationships into dysbiotic states that can lead to periodontal diseases. This review synthesizes current findings on how orthodontic treatments, particularly fixed and removable appliances, foster niches for bacterial accumulation and complicate oral hygiene maintenance. Advanced culture-independent methods were employed to identify shifts toward anaerobic and pathogenic bacteria, with fixed appliances showing a more pronounced impact compared to clear aligners. The study underscores the importance of meticulous oral hygiene practices and routine dental monitoring to manage these microbial shifts effectively. By highlighting the relationship between appliance type, surface characteristics, treatment duration, and microbial changes, this review aims to enhance dental professionals' understanding of periodontal risks associated with orthodontic appliances and strategies to mitigate these risks. The findings are intended to guide clinicians in optimizing orthodontic care to prevent plaque-associated diseases, ensuring better periodontal health outcomes for patients undergoing orthodontic treatment.
Introduction And Background
The oral cavity harbors the second richest microbial community in the human body, comprising over 700 species of bacteria that colonize both the hard tissues and soft mucosa [1,2]. These bacteria form a harmonious community driven by the host's defense system, thriving in three distinct areas: group 1, subgingival and supragingival; group 2, saliva, tongue, and hard palate; and group 3, cheek and sublingual area [2]. The balance between the host and the oral microbiome is crucial for both oral and overall health. Disruptions in this balance increase the risk of developing various oral diseases [3]. The terms "oral microflora," "oral microbiota," or "oral microbiome" refer to the microorganisms present in the human oral cavity [4]. Joshua Lederberg coined the term "microbiome" to denote the ecological community of commensal, symbiotic, and pathogenic microorganisms that inhabit our body space, historically overlooked as determinants of health and disease [5]. As noted, shifts in this microbial community can lead to major dental and periodontal issues, such as caries (white spot lesions), gingivitis, and periodontitis [6].
Orthodontic intervention, particularly with fixed appliances, is a notable factor that disrupts the equilibrium between the host and the oral microbiome. This procedure, increasingly popular among adolescents and adults due to technological advancements, often extends over one or more years and may negatively impact oral health [7]. Davis et al. [8] described how dental and periodontal complications can arise from direct gingival irritation due to excessive force exerted by appliances on the periodontium or from poor oral hygiene maintenance, leading to plaque accumulation, dental caries, and periodontal diseases [8]. Patients undergoing orthodontic treatment should be informed of the risks associated with instability and changes in plaque biofilm composition and should be taught preventive or control methods to ensure the successful completion of their therapy [6].
This narrative review aims to comprehensively assess the impact of orthodontic therapy on periodontal microbiology and health. The study explores key factors associated with microbial shifts, such as bacteria and fungi, the type of appliance used, and the treatment duration. The findings emphasize the need for controlling methodologies to mitigate the effects of orthodontic therapy on the oral microbiome and maximize treatment outcomes. However, it is important to note that the reviewed studies did not explore the impact of orthodontic therapy on the general/systemic health of the patients, highlighting the need for further research into the presence and effects of other microbial entities within the biofilm.
Review
Unaltered Periodontal Microbiology
The human mouth hosts a diverse array of microbiomes, including bacteria, viruses, protozoa, fungi, and archaea. Researchers have identified thousands of bacterial species across various phyla, such as Actinobacteria, Bacteroidetes, Firmicutes, Proteobacteria, Spirochaetes, Synergistetes, and Tenericutes, and the uncultured divisions GN02, SR1, and TM7. It is noteworthy that approximately half of these bacteria remain uncultivated [9]. The oral microbiota, consisting of several thousand species, is an integral part of the oral cavity and plays a crucial role in protecting the host from extrinsic bacteria with disease potential, thereby impacting oral health [10,11].
Popova et al. [12] examined the relationships among bacteria in mature biofilms, discovering that these inter-bacterial relationships can influence the virulence of periodontal microflora either positively or negatively. Positive interactions, known as symbiosis, are categorized into mutualism, synergism, and commensalism. In mutualism, both species, such as Porphyromonas gingivalis with Treponema denticola and Tannerella forsythia with Fusobacterium nucleatum, benefit equally from their coexistence. Synergism occurs when the combined pathogenic potential of the interacting species exceeds the sum of their individual effects, as seen with Porphyromonas gingivalis and Fusobacterium nucleatum. Conversely, in commensalism, only one species benefits, such as the relationship between Porphyromonas gingivalis and Campylobacter rectus. Understanding these positive relationships within mature biofilms is essential, as they can either stimulate or inhibit the growth of specific bacterial species, thereby affecting the balance between the host and the microbiome [12].
Identifying Key Microbial Changes and Complications
Thanks to advancements in culture-independent, high-end technologies such as reverse transcriptase-polymerase chain reaction, it is now possible to identify the number of microbial species present in various samples, including saliva and supragingival and subgingival plaque [13]. The composition of dental plaque changes over time (from early to mature plaque) and varies depending on the location (supragingival or subgingival plaque groups). These variations are linked to different pathologies, such as caries and periodontitis [14]. As Wade et al.
[9] described, dental caries and periodontal disease are the two most common diseases caused by bacteria. These diseases occur when a microbial shift transforms a symbiotic relationship into dysbiosis. Bacteria from the orange and red complexes, which are not grouped in the subgingival bacterial classification, are suspected of causing periodontal diseases [11]. Supragingival plaque primarily consists of cariogenic bacteria, such as Lactobacilli and Actinomycetes species, which lead to enamel demineralization [15]. Additionally, the role of supragingival microflora is crucial in periodontal pathogenesis; gram-negative anaerobes can co-aggregate and lead to periodontal diseases [16].
Subgingival microbiota/plaque has been the primary focus of periodontal pathogenesis research [12]. A detailed study by Socransky et al. [17] analyzed the different bacterial species found in subgingival plaque. These bacteria belonged to specific complexes, and their strong and weak associations with periodontal disease potential were evident. The bacteria in the green and yellow complexes were early colonizers that provided a foundation for more harmful bacteria from the orange complexes. The orange complex bacteria produced toxins, leading to clinical attachment loss and deepening pockets, and forming a base for the red complex bacteria to thrive. Thus, it was established that the ultimate damage to the periodontium was caused by red complex bacteria, and subgingival plaque formed a prime niche for these bacteria [17]. Several studies have explored the impact of orthodontic therapy on oral microflora. The general mechanism remains the same: orthodontic appliances complicate oral hygiene maintenance, leading to more plaque buildup and microbiological shifts, which can result in oral complications.
In a recent study, Chen et al. [18] aimed to provide the first longitudinal, culture-free, deep sequence profiling of subgingival bacteria in patients undergoing fixed appliance therapy at an early stage. Their study found an increased incidence of local gingivitis and mild periodontitis in subjects undergoing orthodontic treatment compared to an untreated control group. This was linked to a greater diversity of subgingival microbiomes and the discovery of 12 novel bacterial species [18].
Multiple studies have observed a shift in the supragingival microbiome, characterized by increased anaerobic bacteria and periodontal pathogens and a reduction in commensal bacteria [19]. Under certain conditions, organisms that normally coexist peacefully can become detrimental. This is particularly true with orthodontic appliances, which might provide a breeding ground for opportunistic pathogens that require increased scrutiny [20]. One such commensal fungus, Candida albicans, is naturally present in the oral microflora of 53% of the general population and can become pathogenic due to an imbalance in the microbial colony following orthodontic treatment, combined with inadequate oral hygiene or a compromised host immune system, leading to Candida-associated oral infections [21]. A few studies have reported an increase in caries-causing bacteria, including Streptococcus mutans, Lactobacillus spp., and potentially pathogenic gram-negative bacteria in patients with orthodontic appliances [13,22,23]. Another study confirmed the predominance of gram-negative over gram-positive bacteria during plaque buildup. Additionally, mature plaque contains more anaerobic bacteria because early colonizers consume oxygen, reducing the oxygen potential of the oral biofilm environment, which facilitates the proliferation of anaerobic bacteria [12]. Reichardt et al. [24] studied the influence of orthodontic appliances shortly (one week) after the start of the treatment. The findings revealed a qualitative increase in Streptococcus mutans at the premolar and molar regions, as identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.
Pseudomonas species, which are not typically present in the healthy oral microbiome, were found in orthodontic patients, as concluded by Sun et al. [25]. Some periodontal pathogens, including cysts of Acanthamoeba spp., were also identified [26]. Zibar Belasic et al. [27] observed a higher number of cariogenic bacteria than perio-pathogens in samples from patients treated with fixed appliance therapy. Gopalakrishnan et al. [28] noted a significant increase in sulfate-reducing bacteria in orthodontic patients compared to those not wearing any appliance. The rise in sulfate-reducing bacteria was attributed to the presence of metallic entities in the oral cavity, with colonization evident in the samples as black precipitates, which are bacterial by-products [28]. Merghni et al. [29] concluded that Staphylococcus aureus, owing to their considerable adhesion to dental metal alloys (abiotic) and epithelial cells (biotic), were found in these samples, indicating biomaterial infections.
The study by Eckley et al. [30], consistent with the available literature, found an increase in Porphyromonas gingivalis, Tannerella forsythia, and Treponema denticola in orthodontic subjects. Additionally, dark-field microscopy revealed an increase in small and large spirochetes, non-motile rods, filaments, and fusiforms and a decrease in coccoid forms and motile rods. These changes were reversed after the completion of treatment [30]. Marincak Vrankova et al.
[7] suggested that the combined effect of periodontal bacteria and Candida species is more detrimental to periodontal health in the medium term after the insertion of orthodontic appliances than either of these microorganisms alone. There is also some evidence regarding the detection of certain viruses in deep periodontal pockets, indicating a superinfection associated with periodontal diseases, which needs further research [12].
Complications
Periodontal Complications: Gingivitis and Periodontitis
Arweiler et al. [10] confirmed that an imbalance in the oral cavity's ecosystem, due to either an excessive bacterial load or a compromised host immune system, can challenge both local and systemic health, leading to periodontal diseases and periodontitis. Gingivitis is a reversible, non-destructive inflammation of the gums characterized by accumulated plaque microflora at or near the gingival sulcus, which leads to local gingival irritation and signs of inflammation [31]. The reversible nature of this pathology suggests that it can be controlled at early stages. Most lesions are short-term; with close monitoring and proper plaque control methods, their progression can be hindered [32]. As the biofilm evolves and the number of pathogenic bacteria in the plaque continuously increases, so does the risk for periodontitis [33].
Periodontitis, a polymicrobial disease, results from an intensified inflammatory response to the benign microorganisms that have overgrown in subgingival spaces [9,12]. The transition in the subgingival microbiome marks the shift from healthy periodontium to gingivitis and eventually to periodontitis [34]. Periodontitis may follow a chronic course, causing irreversible clinical attachment loss [31], as evidenced by occult blood in saliva [35], and bone loss. It is important to note that even a small number of pathogenic bacteria are sufficient to damage periodontal tissues [12], and restoration of health can take time, even after resuming effective plaque control methods [36].
Corrosion of Appliances
Microbial-induced corrosion represents a complex intersection of corrosion science and microbiology, where bacteria, fungi, and algae play pivotal roles. Among these, sulfate-reducing bacteria are particularly notorious. These bacteria are prevalent when orthodontic stainless steel appliances are used within the oral cavity, where they have been shown to cause corrosion. The literature extensively confirms that sulfate-reducing bacteria not only contribute to the corrosion of these appliances but are also among the causative agents of periodontal diseases. With an increased presence of sulfate-reducing bacteria in the biofilm of orthodontic patients, the risk of microbial-induced corrosion escalates [37]. Gopalakrishnan et al. [28] specifically investigated the corrosion of orthodontic metals by sulfate-reducing bacteria in patients treated with fixed appliances. Their study concluded that a rise in sulfate-reducing bacteria levels within the oral microflora correlates with increased corrosion of metallic appliances [28].
Caries/White Spot Lesions
As demonstrated by the studies mentioned above, orthodontic treatment can influence the proliferation of cariogenic bacteria, leading to the development of white spot lesions and dental caries. This effect is largely due to changes in the saliva's buffering capacity, which results in a more acidic environment. While stimulated salivary flow initially increases, it subsequently decreases to lower levels [35]. The resultant acidic pH fosters the growth of cariogenic bacteria such as Streptococcus mutans and Lactobacillus. This increase is often compounded by inefficient cleaning during orthodontic therapy, rendering the teeth more susceptible to decay.
Literature Search Methodology
An electronic database search was conducted through PubMed, Google Scholar, and ScienceDirect to identify relevant studies. Initially, 98 articles were retrieved using the specified keywords. After reviewing the abstracts, articles that did not conform to the scope of this review or were written in languages other than English were excluded. To comprehensively demonstrate the microbial changes occurring in the periodontal environment due to orthodontic appliances, both recent and seminal studies, as well as earlier research findings, were included.
Microbiological Variables
Microbiological variables included the types of orthodontic appliances, surfaces, and treatment duration.
Fixed Appliance Therapy and Impact on Oral Microbiome
A study on Japanese patients that aimed to investigate microbial dynamics after fixed appliance therapy yielded significant findings. The research, conducted on 71 patients, involved collecting supragingival plaque and saliva samples at three stages: before the appliance was placed (T0), after six months of wear (T1), and after its removal (T2). A key finding was the significant increase in anaerobes and facultative anaerobes in both plaque and saliva, indicating dysbiosis in the periodontal environment. This dysbiosis was identified as a transitional stage from a healthy periodontium to periodontal disease [19].
Cantekin et al. [38] revealed that fixed appliance therapy negatively impacts oral health. The treatment heightened the risk of developing oral diseases due to plaque accumulation and microflora alteration. The effects were most critical in the first month after starting the treatment but diminished after debonding. Overall, the study concluded that fixed appliance therapy detrimentally affects patients' oral health [38,39]. Similarly, Eckley et al. [30] observed increased plaque index scores and probing depths in subjects undergoing fixed appliance therapy. On the other hand, a few studies found no significant difference in the periodontal status of adult patients who had undergone orthodontic treatment during adolescence [40]. No significant changes were observed in the periodontium post-therapy, and if any, they were not detrimental [40][41][42].
Yang et al. [43] conducted a study focusing specifically on the effects of fixed orthodontic appliances on the colonization of Candida albicans and Streptococcus mutans. These two microorganisms, typically synergistic, experienced altered colonization due to the long-term use of fixed appliances. This disturbance in the healthy balance between the fungus and bacterium was found to increase the risk of oral diseases [43].
Clear Aligner Therapy and Impact on Oral Microbiome
Clear aligner therapy is an excellent alternative to fixed appliance therapy for adolescents and, increasingly, for adults as well. The removable nature of these appliances facilitates better oral hygiene during orthodontic treatment, improving periodontal health and serving as a barrier against changes in microbial colonies. There remains a need to understand each phase of treatment (early, middle, late, and maintenance) to monitor changes in oral microflora and their progression over time [6]. Lucchese et al. [44] confirmed that removable appliances induce qualitative and quantitative changes in the bacterial population; however, these changes are transient and typically revert to optimal levels by the end of treatment.
Gong et al. [45] investigated gingival enlargement (GE) associated with orthodontic therapy, finding that the gingiva becomes swollen in reaction to microbial insults. The pathogenesis of GE induced by orthodontic treatment is multifactorial, with pathogens and inflammatory cytokines (interleukin-1 beta (IL-1β) and transforming growth factor beta 1 (TGF-β1)) identified as primary risk factors. Temporarily removing the appliance can benefit periodontal health by reducing pathogens and aiding GE recovery [45]. Existing literature has shown that clear aligner therapy has a lesser impact on the oral microbiome than fixed appliances do. Some studies have even noted a positive influence on gingival health, plaque index scores, and the occurrence of white spot lesions. The periodontal status did not deteriorate, and there was an improvement in pocket depths and gingival inflammation as treatment progressed. Nonetheless, it is crucial to note that proper plaque biofilm removal is essential, as the ridges, grooves, and abrasions on the aligners can harbor various microorganisms, leading to periodontal pathogenesis [6].
Lee et al. [46] studied the treatment of chronic periodontitis in three subjects who also exhibited maxillary anterior pathologic tooth migration using clear aligners. The study revealed improvements, with decreased probing depths, gingival recession, clinical attachment level, and tooth mobility during treatment [46]. Palone et al. [47] also noted better compliance and easier oral hygiene maintenance with clear aligners throughout treatment, resulting in reduced overall plaque buildup.
Hibino et al. [48] concluded that the insertion of removable appliances led to an increase in Candida albicans colonization. The study further highlighted the relationship between low salivary pH, removable appliances, and Candida species. It was observed that some non-Candida carriers transitioned to Candida carriers due to orthodontic treatment. However, the exact reason for this change remains unclear. The study emphasized that immunocompromised adolescents should be treated with caution to protect them against Candida infections [48].
Fixed Versus Removable Appliances
Clear aligners are becoming increasingly popular due to their cosmetic appeal, comfort, and ease of maintaining oral hygiene. However, they are not suitable for all cases. Complex orthodontic issues often require clinicians to opt for fixed appliance therapy. Both treatment options have their advantages and disadvantages. Lombardo et al.
[49] compared the composition of subgingival microflora during the first six months of treatment with clear aligners and fixed appliances. The initial hypothesis was that there would be no difference between the two groups. Contrary to this, the results indicated that subgingival microflora increased in the fixed appliances group after three and six months, while it remained stable in the clear aligners group. The study concluded that the type of appliance significantly influences the microbial composition of the subgingival microbiota [49].
A study by Perkowski et al. [26] found that during orthodontic treatment, the presence of Enterococcus faecalis, E. faecium, Staphylococcus aureus, Escherichia coli, and Candida albicans increased more significantly with fixed appliances than with removable appliances or no treatment at all. Jiang et al. [50] reported that clear aligners are preferable for enhancing periodontal health and reducing the risk of gingival inflammation, making them better suited for such patients. Rouzi et al. [6] noted that clear aligners produce controlled intermittent forces during treatment, which can be advantageous for rebuilding the periodontal membrane and improving periodontal health.
Wang et al. [51] found that the Invisalign system performs comparably to fixed appliance therapy in terms of treatment efficacy but highlighted the ease of oral hygiene maintenance with clear aligners, which might explain why clear aligners perform better in maintaining oral health.
Recent case reports have highlighted additional complications associated with fixed appliance therapy, particularly in relation to periodontal health. One such case involved a patient developing gingivitis on both upper and lower arches during treatment, underscoring the inflammatory response that can occur due to increased plaque accumulation and microbial shifts. Furthermore, mild recession of the upper incisors was observed, indicating that fixed appliances can contribute to periodontal tissue damage if not managed properly. Recent studies have also explored the use of mini implants (temporary anchorage devices (TADs)) in orthodontic treatments, highlighting their role in managing complex cases such as severe open bites. Mini implants are particularly effective for upper posterior intrusion, allowing for the precise movement of teeth without the need for external headgear. However, their use requires careful consideration of potential periodontal impacts. A case report documented mild recession of the upper incisors as a complication, emphasizing the need for vigilant periodontal monitoring and patient compliance to mitigate adverse effects. Additionally, this case revealed that patients undergoing fixed appliance therapy, particularly with mini implants, exhibited increased gingivitis and plaque accumulation, indicating the necessity for rigorous oral hygiene practices and regular periodontal assessments throughout the treatment duration [69].
Surfaces of the Appliances That Harbor Bacteria
Mini implants, while beneficial for anchorage and facilitating specific tooth movements, also present surfaces that can harbor bacteria, contributing to periodontal challenges. A case report indicated that the use of mini implants for upper posterior intrusion resulted in mild gingival recession, suggesting an increased risk of bacterial colonization around these devices. Furthermore, fixed appliances such as bands and brackets also contribute to plaque buildup and microbial shifts. The surfaces of orthodontic appliances can significantly influence periodontal health. For instance, a case report noted that the use of fixed appliances resulted in mild recession of the upper incisors, likely due to increased plaque retention and microbial colonization on appliance surfaces. This underscores the importance of selecting appropriate materials and designs that minimize plaque accumulation. Additionally, it highlights the necessity of vigilant oral hygiene practices to mitigate the adverse effects on periodontal tissues, particularly around bands and brackets, where microbial buildup is most pronounced, and the importance of regular professional cleanings to prevent periodontal complications associated with the use of mini implants and other orthodontic components. The report emphasized that the surfaces of elastomeric ligatures and bands, in particular, tend to accumulate more plaque, leading to increased inflammation and bleeding, further complicating periodontal health [69].
Türkkahraman et al. [52] conducted a split-mouth study on 21 subjects to compare elastomeric/O-ring and steel ligature techniques. A significant finding was that elastomeric ligatures are more prone to plaque accumulation in fixed appliance therapy than steel ligatures, providing crucial insight for orthodontic professionals. Furthermore, signs of inflammation, such as bleeding, were observed at the sites with elastomeric ligatures, highlighting the need for further research to fully understand the implications [52]. Another study by Palone et al. [47] also supported the use of steel ligatures over conventional (elastomeric) and self-ligating ligatures, emphasizing the necessity for additional research in this area.
Alves de Souza et al. [53] reported that organisms such as Tannerella forsythia and Prevotella nigrescens were found in significantly higher numbers at the elastomeric ligatures, while Porphyromonas gingivalis, Actinobacillus actinomycetemcomitans, and Prevotella intermedia showed no difference. Lombardo et al. [54] explored the differences between labial and lingual orthodontic appliances, observing noticeable differences in lingual appliances 4-8 weeks after bonding. After eight weeks, increased gingival inflammation and a rise in Streptococcus mutans counts were noted, although Lactobacillus counts, salivary buffer capacity, and salivary flow rate remained consistent between the two groups.
Chen et al. [18] studied microbial changes around orthodontic brackets and bands, finding that bands led to more plaque buildup and microbial shifts than brackets, with 12 novel bacterial species identified. Kim et al. [55] reported a higher prevalence of pathogenic microflora, bleeding on probing, and deeper pockets around orthodontic bands, underlining the potential risks associated with these appliances. Mini-screw implants (MSIs) are commonly used in orthodontics for anchorage [55]. Mishra et al.
[56] researched microbial colonization around MSIs and concluded that bacterial buildup occurs within 24 hours, a crucial finding for understanding the timeline of bacterial colonization. The peri-mini-implant crevicular fluid contained higher numbers of staphylococci, anaerobic cocci, and facultative enteric commensal bacteria compared to gingival crevicular fluid, necessitating careful monitoring. Moreover, failed mini-screw implants showed higher proportions of staphylococci, Enterobacter spp., and Parvimonas micra, factors linked to the stability and durability of the implant [56].
Contaldo et al. [13] noted that some orthodontic components, such as bonded molar brackets, ceramic brackets, and elastomeric ligatures, increase the risk of periodontal diseases and caries, stressing the importance of careful material selection. Santonocito et al. [57] emphasized that the current gingival status of the patient, optimum oral hygiene maintenance, and susceptibility to caries are critical factors when selecting an appliance for an orthodontic patient. They recommend avoiding ceramic brackets and elastomeric ligatures for patients with a thin gingival biotype, poor oral hygiene, or high caries risk, as these sites are more prone to microbial colonization and the progression of periodontal diseases [57].
Role of the Duration of Appliance Wear and Timing of Treatment
The timing and duration of orthodontic treatment play vital roles in microbial alteration. Treatment with fixed appliances tends to take longer than with removable appliances. However, the overall impact on oral health depends on various factors, including the complexity of the case and the patient's oral hygiene practices. In some instances, removable appliances used over a long period, especially in cases of poor oral hygiene, can result in worse periodontal outcomes compared to fixed appliances. Therefore, it is essential to emphasize the importance of maintaining good oral hygiene regardless of the type of appliance used. As demonstrated by Lucchese et al. [58], the duration the appliance is used in the mouth is a significant variable to consider. The treatment duration is shorter with removable appliances, and they can also be easily removed, allowing for better oral hygiene maintenance due to their less plaque-retentive surfaces compared to fixed appliances. They pointed out that all types of orthodontic treatment can pose some challenges to the periodontium; however, the impact of fixed appliance therapy, with its longer duration and the notable difficulty in maintaining oral hygiene, must be carefully considered [58]. Santonocito et al. [57] confirmed that minimizing the duration of use for fixed appliances is important to prevent excessive microbial buildup. Koopman et al.
[59] studied the variables impacting changes in the oral environment after orthodontic treatment. They found more pronounced changes related to patients' current gingival health, the type of orthodontic treatment, and the timing of the procedure [59]. Few studies have been conducted to explore the impact of the duration, the start and end of orthodontic treatment, and how each stage differs from the others. The microbiological changes observed before, during, and after treatment show how each phase functions and determine the measures necessary to minimize the consequences of microbial shifts.
Initial stage: The existing literature indicates that microbial changes become noticeable shortly (one or two weeks) after the initiation of treatment with either fixed or removable appliances. However, if treatment is discontinued, microbial levels tend to return to pre-treatment levels within 30 days of appliance removal [24,44,47]. This is attributed to the new surfaces, such as metal brackets, bands, and wires, which support biofilm buildup, leading to mild gingivitis in the early stages of treatment. However, some theories suggest that the progression and maturation of the oral microbiome continue until 90 days after treatment.
Mid-term stage: On the other hand, long-term observations for at least six months after the removal of orthodontic appliances indicated that the levels of Tannerella forsythia and Fusobacterium nucleatum initially increased but returned to normal levels after a few months [60,61]. However, Kim et al. [55] found that the levels of Tannerella forsythia remained elevated during the first six months without any decrease, suggesting a need for further research involving extended durations to draw accurate conclusions. Additionally, Koopman et al. [59] observed that populations of periodontal pathogens such as Selenomonas and Porphyromonas increased during orthodontic treatment. If oral hygiene is not adequately maintained, late-stage gingivitis (early periodontitis) becomes clinically visible as teeth begin to move, resulting in deeper pockets. During this stage, certain bacteria become more prevalent than others.
Late stage: Thornberg et al. [61] concluded that the number of periodontal pathogens increased during the first six months of orthodontic treatment but then decreased and returned to pre-treatment levels over the course of 12 months. In contrast, another study reported that bacteria such as Streptococcus, Rothia, and Haemophilus were abundant toward the end of or after the completion of orthodontic treatment [59]. For these reasons, it is crucial to adhere strictly to oral hygiene protocols. The oral environment should also reach equilibrium once the teeth have achieved their desired position and alignment. Any further disturbances to oral microbiology may lead to chronic periodontal issues.
Post-treatment: Once the appliance is removed, controlling plaque buildup becomes easier, and cleaning vulnerable surfaces is more convenient. While the periodontal changes are largely reversible once a healthy oral microbiome balance is reestablished, the extent of reversibility is greatly dependent on individual patient factors. Some researchers have found that the periodontal changes induced by orthodontic treatment were only partially reversible, even three months after the removal of the appliance [59,62]. Effective oral hygiene practices are crucial during orthodontic treatment to prevent periodontal complications. The inclusion of mini implants in orthodontic treatment protocols necessitates enhanced oral hygiene practices to prevent periodontal issues. Effective plaque control around mini implants is critical, as these devices can serve as niches for bacterial accumulation. A case report emphasized the importance of follow-up visits and long-term care, revealing that patients might require additional periodontal treatments, such as gingival grafts, to achieve stable outcomes post-treatment. This underscores the importance of patient compliance with oral hygiene routines and regular dental visits to monitor and manage the health of periodontal tissues surrounding mini implants, ensuring successful treatment outcomes and minimizing complications such as gingival recession. The potential need for gingival grafts post-treatment likewise highlights the importance of continuous periodontal monitoring and tailored oral hygiene protocols, including the use of specialized orthodontic brushes, interdental cleaning tools, and possibly antiseptic mouthwashes, to ensure plaque and bacterial accumulation are effectively controlled throughout and after the treatment process [69].

Maintaining good oral hygiene and plaque control regimens during orthodontic treatment is essential [63]. It is crucial for patients to understand the importance of maintaining good oral hygiene throughout their treatment and to make regular follow-up visits to the dentist to mitigate the risk of plaque buildup and the growth of periodontopathogens and cariogenic bacteria, as these alterations can occur shortly after the initiation of treatment [57]. Soni et al. [64] conducted a 90-day randomized, single-blind study to compare the effectiveness of different methods for controlling plaque in patients undergoing fixed appliance therapy. They found that orthodontic toothbrushes and water jets successfully controlled plaque and promoted oral hygiene [64]. Özdemir et al. [65] studied the impact of brushing with Cervitec gel once a day to reduce plaque accumulation around teeth and fixed appliances. While the results did not significantly affect subgingival dental plaque levels, an improvement was observed on day 14, as indicated by a decrease in the Quigley-Hein Plaque Index score. Additionally, after professional prophylaxis and oral hygiene instructions, a positive outcome and noteworthy improvement were observed clinically and pathologically [65].

Bauer Faria et al. [66] reported that Zingiber officinale mouthwash effectively controlled dental biofilm and reduced gingival inflammation, although they suggested that its flavor needs to be improved. Vidović et al.
[67] concluded that using an octenidine (OCT)-based antiseptic reduced the risk of gingivitis and the number of subgingival bacteria in the first three months of fixed orthodontic treatment. Zibar Belasic et al. [27] found that Streptococcus mutans was better controlled with fluorides than with chlorhexidine, whereas Streptococcus sobrinus and Aggregatibacter actinomycetemcomitans responded well to chlorhexidine. By introducing different methods to control the risk factors for periodontal diseases and consulting with the dentist, an overall successful orthodontic treatment outcome can be expected. Nonetheless, the gold standard strategy to prevent the risk of periodontal disease is mechanical removal by regular tooth brushing (using orthodontic brushes and flossing) [10].

Discussion
This narrative review extensively examined the literature and found a consensus regarding the negative impact of orthodontic therapy on the periodontal environment. Studies by Kado et al. [19] and Cantekin et al. [38] concluded that the use of fixed appliances led to a transition from a healthy periodontium to a diseased state, with some recovery observed after debonding. The period immediately after appliance placement was identified as crucial for maintaining oral hygiene to prevent adverse changes [19,38]. Conversely, Koopman et al. [59] and Pan et al. [62] noted only partial recovery of periodontal health months after appliance removal, emphasizing the importance of patient compliance with oral hygiene protocols. The literature generally supports clear aligners over fixed appliances due to their ease of removal, which facilitates better oral hygiene and reduces the risk of periodontal disease [6,44,47]. Significant findings by Yang et al. [43] and Hibino et al. [48] indicated an increase in Candida albicans during orthodontic treatment, highlighting the challenges posed by low pH and microbial dynamics, which may lead to higher risks of Candida-related diseases.

Contrary perspectives were highlighted by Sadowsky et al. [40], who found minimal impact of orthodontic appliances on long-term periodontal health. Their study compared individuals treated orthodontically with a control group and noted no significant differences in periodontal diseases, although some areas exhibited slightly higher risks of mild to moderate periodontal issues [40]. Gomes et al. [41] also reported no negative impacts from orthodontic treatment on periodontal pocketing depth and clinical attachment loss, suggesting that periodontal diseases can develop independently of orthodontic treatment factors. Bollen et al. [68] conducted a systematic review that reported inconsistent effects on gingivitis and clinical attachment loss, indicating that orthodontic appliances pose little risk to the periodontium. Chen et al. [18] highlighted the increased richness of the subgingival microbiome following orthodontic therapy, emphasizing the need to understand these microbial shifts.
Overall, while there is substantial evidence of the effects of orthodontic treatment on the periodontal environment, further research is necessary to fill the gaps in understanding how these treatments impact microbial changes, especially concerning the potential involvement of other microorganisms such as viruses and protozoa. Additionally, examining the effects in immunocompromised patients and assessing the systemic health impacts of microbial shifts are crucial areas for future investigation. The role of gram-negative bacteria in periodontal disease development, particularly following orthodontic treatment, warrants continued attention [12,13,22,23].

Conclusions
This review highlights certain findings. First, periodontopathogens, cariogenic bacteria, and corrosion-inducing bacteria increase during orthodontic treatment, which subsequently increases the risk of gingivitis, periodontitis, dental caries, and appliance corrosion. Second, fixed appliance therapy poses a greater risk of periodontal issues compared to clear aligners due to its longer duration and continuous wear. On that note, strict plaque control is crucial during the early and mid-treatment stages to ensure that the oral cavity returns to a healthy state in the late and post-treatment stages. Surfaces such as ceramic brackets, molar bands, mini-implant screws, and elastomeric ligatures are particularly prone to plaque accumulation. Effective plaque control and improved treatment outcomes were achieved using orthodontic toothbrushes, mouthwashes, and professional prophylaxis.

Table 1 presents a summary of microbial changes after orthodontic treatment:
- Initial stage: microbial changes become noticeable shortly (one or two weeks) after treatment initiation, and mild gingivitis may occur; microbial levels tend to return to pre-treatment levels within 30 days after appliance removal.
- Mid-term stage: Tannerella forsythia and Fusobacterium nucleatum initially increase but return to normal levels after a few months; populations of periodontal pathogens such as Selenomonas and Porphyromonas increase.
- Late stage: without adequate oral hygiene, late-stage gingivitis (early periodontitis) may become clinically visible.
8,586.2
2024-07-01T00:00:00.000
[ "Medicine", "Biology" ]
Structure to function analysis with antigenic characterization of a hypothetical protein, HPAG1_0576 from Helicobacter pylori HPAG1

Helicobacter pylori, a unique gastric pathogen causing chronic inflammation in the gastric mucosa with a possibility to develop gastric cancer, has one-third of its proteins still uncharacterized. In this study, a hypothetical protein (HP), namely HPAG1_0576 from H. pylori HPAG1, was chosen for detailed computational analysis of its structural, functional and epitopic properties. The primary, secondary and 3D structure/model of the selected HP was constructed. Then refinement and structure validation were done, which indicated a good quality of the newly constructed model. ProFunc and STRING suggested that HPAG1_0576 shares 98% identity with a carcinogenic factor, TNF-α inducing protein (Tip-α) of H. pylori. The IEDB immunoinformatics tool predicted VLMLQACTCPNTSQRNS from position 19-35 as the most potential B-cell linear epitope and SFLKSKQL from position 5-12 as the most potent conformational epitope. Alternatively, FALVRARGF and FLCGLGVLM were predicted as the most immunogenic CD8+ and CD4+ T-cell epitopes, respectively. At the same time, findings from the IFNepitope tool suggest that HPAG1_0576 has great potential to evoke an interferon-gamma (IFN-γ) mediated immune response. Although this experiment is a preliminary approach to in silico vaccine design from a HP, the findings of this study will provide significant insights for further investigations and will assist in identifying new drug targets/vaccine candidates.

makes their annotation even more difficult. This leaves bioinformatics with the opportunity to annotate protein functions by efficient, automated methods based on several algorithms and databases of experimentally determined proteins [5,6]. Helicobacter pylori, a gram-negative bacterium, has been classified as the definitive carcinogen of human gastric cancer, which is the fourth most prevalent cancer in the world. Infection with H. pylori induces chronic gastritis, peptic ulcer, mucosa-associated lymphoid tissue lymphoma and finally stomach cancer. Common virulence factors involved in these events are the genes for the cag pathogenicity island (cagA), vacuolating cytotoxin (vacA) and blood group antigen binding adhesins (babA & sabA). But the induction of proinflammatory cytokines such as IFN-γ, TNF-α, IL-6 and IL-8 during H. pylori infection indicates the existence of unique virulence factors that play a vital role in the progression of inflammation to carcinogenesis [7]. Such a protein, TNF-α inducing protein (Tip-α), has been identified as a new carcinogenic factor of H. pylori. It is a 19 kDa protein released as a homodimer from H. pylori, and dimer formation is a must for its carcinogenic activity [8]. This current study aimed to identify a novel virulence factor from the HPs of H. pylori HPAG1 and ultimately found a member of the Tip-α family (HPAG1_0576). This strain of H. pylori was targeted because, among the 1536 protein-coding genes, around 500 were annotated as hypothetical (as of July 2016) according to the information obtained from the NCBI and KEGG databases. Tip-α is found only among H. pylori gene products, with no obvious homolog in other species. To investigate the mechanism of a Tip-α-like protein, it was necessary to establish the structure-function relationship [8].
In this study, the 3D structure of HPAG1_0576 was predicted by homology modeling and later used for screening and designing new compounds, leading to the development of novel therapeutic strategies [9]. In addition, primary and secondary sequence/structure analyses, functional annotation, binding site prediction, and PPI network generation were also performed. The study further attempted to combine the best in silico approaches to identify potential epitopes that have high affinity for human MHC I and MHC II molecules, as well as to evaluate the IFN-γ inducing effect of HPAG1_0576, a critical step in the development of vaccines. The findings of this experiment will be very helpful for better understanding the disease mechanism and finding novel drug targets with effective vaccine candidates to combat H. pylori.

Homology modeling: An automatic modeling tool, Phyre2 (http://www.sbg.bio.ic.ac.uk/phyre2), was used to predict the 3D models of the target protein. It also predicts secondary structure, disorder and structural alignment for the submitted protein sequence [19]. Superimposition of the best protein model with its template was performed by the RaptorX server (http://raptorx.uchicago.edu/) [20]. The ProSA and QMEAN web servers were used to evaluate the energy profile and verify the structure in terms of Z score. To facilitate visualization, PyMOL was used to view both the energy minimized and superimposed structures [26].

Function prediction from 3D structure: An independent server, ProFunc (http://www.ebi.ac.uk/thornton-srv/databases/ProFunc/), was used to identify the probable functions of the target protein; it takes the 3D structure as input and utilizes a combination of sequence- and structure-based approaches such as InterProScan, BLAST vs PDB, superfamily search, SSM fold match, 3D template search for enzymes, reverse templates and DNA/ligand binding sites [27].

Determination of Protein-Protein Interaction (PPI): In this study, STRING 9.05 was used to search the interacting partners of the target protein. Predicted interactions were sorted by score: low confidence, <0.4; medium, 0.4 to 0.7; and high, >0.7 (http://string-db.org) [28].

Prediction of binding sites and druggable pockets: Shape and size parameters of protein pockets and cavities are important for active site analysis and structure-based ligand design. In this experiment, the computed atlas of surface topography of proteins (CastP) (http://sts.bioe.uic.edu/castp) was used to identify probable binding sites, pockets and cavities from the 3D structure of the target protein [29].

Determination of antigenicity and prediction of epitopes: The amino acid sequence of the target protein was subjected to the VaxiJen server (http://www.ddgpharmfac.net/vaxijen/VaxiJen/VaxiJen.html) [13], which determines its antigenic property at threshold 0.4. The NetCTL 1.2 server (http://www.cbs.dtu.dk/services/NetCTL/) was used to predict CD8+ T cell epitopes at a threshold of 0.75; it executes MHC class I binding prediction of epitopes for 12 supertypes.

Results:
Structure prediction:
Characterization of primary and secondary structure: The primary structure of the target protein was revealed by ProtParam, and the computed parameters proposed that the amino acid leucine was most prevalent in the protein sequence, which suggests a preference for alpha helices in its 3D structure (Table 1).
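Although the original analysis relied on the ProtParam web server rather than code, the composition summary it provides can be reproduced in R with the Biostrings package; the following minimal sketch uses a placeholder sequence assembled for illustration, not the actual HPAG1_0576 sequence (which would be retrieved from NCBI or UniProt).

library(Biostrings)

# Placeholder sequence only; the real HPAG1_0576 sequence is not shown here.
aa <- AAString("MLKFALVRARGFFLCGLGVLMSFLKSKQLLEAKLIKVAQNIVHLLGKTT")

# Amino acid composition, analogous to the ProtParam summary; the most
# frequent residue (leucine in the study) hints at alpha-helical propensity.
comp <- alphabetFrequency(aa)
sort(comp[comp > 0], decreasing = TRUE)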
The prediction outcomes for protein secondary structure generated by SOPMA found alpha helices (59.38%) to be most frequent, which also supports the ProtParam interpretation (Table 5) [18].

Homology modeling: After analyzing the results of homology modeling, it was found that Phyre2 generated 20 possible models for the target protein based on alignment with different templates. The best model was obtained with the highest scoring template (PDB id: 2wcr), which stands for the Tip-α protein that induces expression of TNF-α in B cells and promotes tumor activities, thus resulting in gastric cancer [7]. The model was predicted with 100% confidence, 14% disorder and 76% alignment coverage. Figure 1 displays the secondary and 3D structure alignment of the modeled protein with its template.

Refinement, quality assessment, energy minimization and visualization of the model: ModRefiner refined the selected model by detecting a high resolution protein structure with an RMSD of 0.237 and a TM-score of 0.9972. The backbone conformation, internal consistency and reliability of the protein were evaluated by PROCHECK, which created a Ramachandran plot (Table 3) with an acceptable amino acid distribution for this model (Figure 1). Verify3D and ERRAT analysis showed overall quality values of 0.64 and 96.35, respectively (Figure 2). The Z score values by ProSA and QMEAN are depicted in Figure 3.

Functional annotation: The metadata server ProFunc made a general assessment using gene ontology terms, defining the protein as DNA binding and involved in cellular processes. InterProScan found one motif match against the Pfam database, namely the TNF-α inducing protein of Helicobacter. BLAST against PDB and UniProt found 25 and 50 matching sequences, respectively. In addition, the ProFunc output identified 664 matching folds, two nests, one enzyme active site and twenty reverse templates from the structure of HPAG1_0576.

PPI network analysis: At medium confidence (0.400), PPI network analysis by STRING showed that HPAG1_0576 was highly similar to hps (TNF-α inducing protein from H. pylori HPAG1), with the highest bit score and an e-value of 400 and 1e-141, respectively. Figure 2 represents the PPI network of hps and demonstrates that the target protein interacts with 10 other proteins. The highest confidence score was 0.659, observed with 8-amino-7-oxononanoate synthase (HP_0598), which catalyzes the decarboxylative condensation of pimeloyl-CoA and L-alanine to produce 8-amino-7-oxononanoate (AON) and coenzyme A, and/or converts 2-amino-3-ketobutyrate to glycine and acetyl-CoA. Other interacting partners were: a peptidoglycan-associated lipoprotein precursor, a penicillin-binding protein 1A, undecaprenyl phosphate N-acetyl glucosaminyl transferase, a 50S ribosomal protein L7/L12, which seems to be the binding site for several of the factors involved in protein synthesis and appears to be essential for accurate translation, an elongation factor P, which is involved in peptide bond synthesis, and three other hypothetical proteins.

Active site analysis: CastP predicted 23 active sites of the modeled HPAG1_0576 which are associated with binding pockets within the protein. The best pocket, usually considered standard, was chosen on the basis of area, volume and conserved residues. The largest pocket (pocket 23) had an area of 196.2 Å² and a volume of 215.1 Å³.
The residues occurring in this pocket were TYR42, TRP43, LEU45, ASN47, ARG48, GLU50, TYR51, GLN54, VAL56 and LEU141 (Figure 3).

T-cell epitope prediction: VaxiJen predicted that HPAG1_0576 was a probable antigen. Therefore, NetCTL predicted 57 different CD8+ T cell epitopes of the protein across all MHC (A1-B62) supertypes, among which the 4 most potential epitopes with high combinatorial scores were selected. The MHC-I alleles interacting with each of the four epitopes at affinity IC50 < 200 are shown in Table 1, which also shows epitope conservancy and the combined scores of epitope-HLA interactions. The MHC class II binding prediction tool and HLApred retrieved five common epitopes that are strong binders to HLA-DRB1*01:01, HLA-DRB1*04:01, HLA-DRB1*07:01 and HLA-DRB1*11:01. Epitopes similar to human peptides were eliminated, and those having an IC50 value less than 50 were selected [36]. The epitopes FLCGLGVLM, FLQDVPYWM, FLKSKQLFL, FALVRARGF and IKVAQNIVH were identified as potential CD4+ T-cell epitopes which could elicit an immune response.

B-cell epitope prediction: Epitopes that satisfied the threshold values for all five IEDB scales with the highest antigenic propensities were considered to evoke a potent B cell response and were found to reside within residues 19 to 35 of the sequence (Figure 4). Figure 5 depicts the combined linear epitope with spanning peptides, the highest antigenicity scores and their corresponding threshold values. ElliPro predicted seven conformational epitopes as well as their residue specifications and scores, which are summarized in Table 2. Among them, SFLKSKQL is the most potential, with the highest score of 0.971. Figure 6 represents the 2D score chart and 3D images of the predicted epitopes shown as ball-and-stick models.

Prediction of IFN-γ induction and docking analysis: The findings of the IFNepitope program suggest that both the target protein and the predicted B cell linear epitope have a high probability of triggering the release of IFN-γ, with a positive score. The region between residues 64 and 83 (GKTTEEIEKIATKRATIRVA) of HPAG1_0576 showed the maximum SVM score of 1.52, while the predicted B cell linear epitope had a hybrid (motif+SVM) score of 3.0. The rigid and symmetric docking of the HPAG1_0576 protein with the IFN-γ receptor was done in PatchDock, and the first 10 docking candidates were submitted to FireDock, which refines and scores them according to an energy function. The best docking pose showed an energetically favorable interaction between HPAG1_0576 and the IFN-γ receptor alpha chain (Figure 7). The docking and post-docking refinement results, ranked on the global energy of the best solution, are shown in Table 3, where the global energy (GE) is the binding energy of a solution. Transformation refers to a 3D transformation with 3 rotational angles and 3 translational parameters applied to the ligand molecule. Here, score means the geometric shape complementarity score; area is the approximate interface area of the complex; Vdw is the van der Waals contribution; ACE is the contribution of the atomic contact energy to the global binding energy; and HB is the contribution of hydrogen bonds to the global binding energy.

Figure 6: B cell discontinuous epitopes of HPAG1_0576 predicted by ElliPro. (A) The X and Y axes represent the residue number and scores, respectively. Yellow regions in the plot represent potential B cell epitopes having a score above the threshold 0.5. (B) Jmol visualization of the predicted epitopes, where antibody chains are represented in white and epitopes in orange.
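To make the short-listing criteria concrete (IC50 below the stated cut-off, peptide length within the 8-22 aa range recommended later in the text, and full conservancy), the following R sketch applies them to a small table; the peptides are those reported in the study, but the IC50 and conservancy values in this table are made-up placeholders, not the servers' actual output.

# Illustrative epitope table; numeric columns are placeholder values.
epitopes <- data.frame(
  peptide     = c("FALVRARGF", "FLCGLGVLM", "IKVAQNIVH", "AAAAKKKK"),
  ic50        = c(35, 42, 48, 320),    # predicted binding affinity
  conservancy = c(100, 100, 100, 60),  # percent epitope conservancy
  stringsAsFactors = FALSE
)

# Stated short-listing rules: IC50 < 50, length within 8-22 aa, and 100%
# conservancy (emphasized because Tip-alpha is mutation-prone).
keep <- with(epitopes,
             ic50 < 50 &
             nchar(peptide) >= 8 & nchar(peptide) <= 22 &
             conservancy == 100)
epitopes[keep, ]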
Discussion: The present study identified a HP, HPAG1_0576, from H. pylori strain HPAG1, which showed strong homology with a member of the Tip-α superfamily. Since the crystal structure of this HP is unavailable, the study proposes a structural model constructed via homology modeling using the crystal structure of a TNF-α inducer protein (PDB id: 2wcr) as a template. Initially, the physicochemical characterization was done by ExPASy's ProtParam tool, and the prediction results are the deciding factors for the hydrophilicity, stability and function of the protein [37]. Findings from SOPMA revealed that the protein has a high percentage of helices in its structure, which can facilitate protein folding by providing more flexibility to the structure; thus, protein interactions might be increased [5]. Moreover, an abundance of coiled regions contributes to higher stability and conservation of the protein structure [37]. Phyre2 built the 3D structure of HPAG1_0576 with 100% confidence, which indicates that the core of the protein is modeled at high accuracy. For an extremely accurate model, the percent identity between sequence and template should be above 30-40%; for the model constructed in this study, the identity was found to be 98%. The quality of the structural alignment was confirmed by RaptorX (Figure 1B), which produced a template modeling (TM) score of 0.973 and an RMSD of 0.91, denoting that the structures are almost identical, because identical structures score 1 whereas highly similar models have a TM-score >0.7 [19]. The resolution required for protein applications such as ligand screening and understanding reaction mechanisms was obtained by refining the model using ModRefiner. The distribution of the residues in the Ramachandran plot supports the good stereochemical quality of the model (Figure 8B) [38]. The 3D-1D average score of 0.64 obtained from Verify3D indicates a good environmental profile of the model (Figure 9A) [37]. The overall quality factor of 96.35 obtained by ERRAT denotes the percentage of residues for which the calculated error value does not exceed the 95% rejection limit (Figure 9B) [23]. The Z score obtained from ProSA for the model was −6.5 (Figure 10A), which fits well within the range typical for proteins of similar size. The local model quality is shown in the energy plot (Figure 10B); minimum values in the plot account for the nativity and stability of the molecules [5,39]. The QMEAN4 score for the protein was 0.35 (Figure 10D), which lies within the estimated global model reliability range of 0 to 1 [38]. Hence, the protein of interest is in the dark region of the absolute model quality plot with a global score of 0.7, which also supports the quality of the model [39]. Individual Z values for parameters such as C-β interaction energy, all-atom interaction, solvation and torsion can also be observed in the plot (Figure 10C). The significant similarity of the modeled HPAG1_0576 with its template indicates its likely function as Tip-α. Since no single method is reliable in terms of correct prediction [37], the meta server ProFunc was also used, and the structure was found to contain 664 matching folds, among which four had certain matches with PDB codes 3gio, 2wcr, 3guq and 3vnc. One enzyme active site template identified among the possible matches is the E. coli heat-labile enterotoxin with bound galactose (PDB id: 1lta), with 37.5% sequence identity.
The 'reverse' template method breaks the target into many templates, which are then scanned against a set of representative structures in the PDB. Among the 370 auto-generated templates, certain matches were observed again with 2wcr, 3gio, and 3vnc, confirming the Phyre2 prediction of the protein as Tip-α [27,40]. Detailed study of protein-protein interaction networks will help to elucidate the signaling pathways of human diseases and their drug targets as well [41]. From the STRING analysis (Figure 3), the nearest interaction of HPAG1_0576 was observed with another HP of H. pylori, HP_0598, which is 8-amino-7-oxononanoate synthase. Other interacting partners are: a peptidoglycan-associated lipoprotein precursor (excC), a penicillin-binding protein 1A (PBP1), an undecaprenyl phosphate N-acetyl glucosaminyl transferase (HP_1581), a 50S ribosomal protein L7/L12 (rplL), an adhesin thiol peroxidase (tpx) having antioxidant activity, an elongation factor P (efp) involved in peptide bond synthesis, and three other hypothetical proteins of Helicobacter. Shape and size parameters of protein pockets and cavities are important for structure-based ligand design. The top pocket in the CastP output list is the largest and is considered standard (Figure 4A). Since the protein is found to stimulate the immune system by activating the NF-κB pathway, it is considered highly immunogenic, as confirmed by the VaxiJen server. To design an effective peptide antigen, the recommended length of peptide sequences is within 8-22 amino acids. In this study, the continuous B cell epitope VLMLQACTCPNTSQRNS (position 19-35) was 17 residues long, and the discontinuous epitope SFLKSKQL (5-12) was 8 residues long. The study also focused on searching for natural epitopes that would stimulate both CD8+ and CD4+ T cell responses, to mediate a more balanced response in the prevention of disease progression. Four potential CD8+ T cell epitopes (Table 1) have been identified, among which FALVRARGF is the most potential, with the highest pMHC immunogenicity score; this epitope was also predicted as a CD4+ T cell epitope with high immunogenicity. A high level of epitope conservancy is all the more important because Tip-α has a high tendency towards mutation; epitope conservancy was found to be 100% for both [14,42].

Conclusion: It is of interest to study the structure to function information for antigenic characterization of a hypothetical protein designated as HPAG1_0576 from Helicobacter pylori HPAG1. We report that the structural model of HPAG1_0576 shows it as a cytoplasmic protein with a Tip-α domain having a unique DNA binding function. We also discuss the linear and conformational antigenic regions in the protein for potential consideration as a vaccine candidate. Further experimental studies are required to validate the predicted epitopes. Future studies are in progress to experimentally validate the data found in this study and to use the structural and functional information of the given model to identify novel ligands for new drug discovery.
4,236.4
2019-07-31T00:00:00.000
[ "Medicine", "Biology" ]
Bacterial Biomarkers of the Oropharyngeal and Oral Cavity during SARS-CoV-2 Infection

(1) Background: Individuals with COVID-19 display different forms of disease severity, and the upper respiratory tract microbiome has been suggested to play a crucial role in the development of its symptoms. (2) Methods: The present study analyzed the microbial profiles of the oral cavity and oropharynx of 182 COVID-19 patients compared to 75 unaffected individuals. The samples were obtained from gargle screening samples. 16S rRNA amplicon sequencing was applied to analyze the samples. (3) Results: The present study shows that SARS-CoV-2 infection induced significant differences in bacterial community assemblages, with Prevotella and Veillonella as biomarkers for positive-tested people and Streptococcus and Actinomyces for negative-tested people. It also suggests a state of dysbiosis on the part of the infected individuals due to significant differences in the bacterial community in favor of a microbiome richer in opportunistic pathogens. (4) Conclusions: SARS-CoV-2 infection induces dysbiosis in the upper respiratory tract. The identification of these opportunistic pathogenic biomarkers could be a new screening and prevention tool for people with prior dysbiosis.

Introduction
In 2020, the World Health Organization declared that the coronavirus disease 2019 (COVID-19), caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), had reached pandemic levels [1]. As of 2023, it has infected more than 761 million individuals worldwide [2]. To date, the scientific community has been working on several fronts to better understand the virus and its health impacts, both short- and long-term, and to establish prevention and intervention strategies [3-5].

In healthy individuals, the airways harbor a complex community of microorganisms (or microbiome) that contributes to the development of the respiratory tract architecture, plays a role in the development and function of the immune system, and acts as an important component of the epithelial barrier to airway infection [6-8]. This has been shown in acute viral respiratory infections, where the airway microbiome is altered by the viral infection or shifts to support the host immune response [9,10], which the microbiome may promote via epithelial cells through stimulation of immunocompetence [10,11]. A study on influenza demonstrated that Corynebacterium pseudodiphtheriticum in the airways maintains a protective immune environment against viral infections [10]. Conversely, the presence of opportunistic pathogens such as Streptococcus, Haemophilus, and Neisseria can potentially contribute to the development of opportunistic respiratory diseases, thereby leading to more severe respiratory diseases [12,13].
A better understanding of the role of the microbiome in relation to the immune response may lead to the use of probiotics to prevent or treat diseases. In this regard, certain taxa in the microbiome could serve as indicators of disease susceptibility and help tailor treatments to restore a more resistant microbial signature at the initial stage of care [14]. Indeed, during viral infections, the oral administration of Lactobacillus species has had a beneficial effect on inducing protective immunity via increased antibody production and natural killer (NK) cell recruitment [15]. Several other examples of the impact of probiotic treatments for respiratory infections or diseases have been described in mouse models. Oral administration of Lactobacillus, Bifidobacterium, and Lactococcus strains improves the symptoms of influenza infection by modulating the gut microbiota [16], and the increase in Th2 cells (responsible for chronic disease) in the blood caused by antibiotics can be prevented by probiotic treatment [16-18]. Those studies showed the impact of probiotics on the gut microbiome but not on the airway one. However, research has demonstrated how the gut microbiome can modulate microbiome diversity in the lung through the gut-lung axis [19,20]. This process includes the oropharynx, which is central to the migration of environmental products and bacteria from one organ to another and illustrates how ingested probiotics can alter the oropharynx microbiome through gut microbiome modifications [21].

In the human airways, the oropharynx harbors the densest and most diverse microbiome [11]. The taxonomic richness of the respiratory tract is at its highest in the oropharynx, primarily because of its unique anatomical location, serving both the digestive and respiratory systems [11]. It is the main source of the pulmonary microbiome in adults, which makes it an interesting region to study, as it represents the health status of the airway microbiome [7,11,22].
Several studies have been conducted on the impact of microbiome diversity, particularly in the oropharyngeal microbiome, on COVID-19 risk and symptoms. Disruption of the oropharyngeal microbiome in patients with COVID-19 may promote the growth of respiratory pathogens, leading to metabolic imbalances and increasing disease severity [11,23]. Moreover, the composition of the oropharyngeal microbiome is closely linked to the requirement for respiratory support, underscoring the critical role of a healthy oropharyngeal microbiome in the prevention and treatment of COVID-19 [24]. This disruption appears to promote the growth of respiratory pathogens such as Staphylococcus and to favor the onset of bacteremia following respiratory colonization by Enterococcus, suggesting a microbial dysbiosis in patients with COVID-19 [23]. Further studies have demonstrated an inflammatory dysbiosis in patients with COVID-19, marked by the presence of the Prevotella and Veillonella genera. These findings have been linked to respiratory infections and an increase in the clinical severity of the disease [11,24]. Additionally, the presence of the Leptotrichia genus has been associated with the establishment of a pro-inflammatory environment characterized by the production of lipopolysaccharides (LPS) [24]. Furthermore, certain bacteria, such as Actinomyces, known for producing mycolic acid, exhibited an inverse correlation with the requirement for respiratory assistance [25]. Gaining a deeper comprehension of the taxa that serve as indicators of COVID-19 risk or severity holds significant importance for this disease, given its high variability in terms of disease presentation and severity of symptoms [11].

In this study, alterations in microbial diversity and the presence of biomarkers in both infected and uninfected individuals were observed, suggesting that viral infection induces changes in the bacterial homeostasis of the respiratory tract. Furthermore, the identification of biomarkers in each group allows us to further our understanding of these links and explore potential prevention strategies for future epidemics. This research is of considerable importance for understanding the interaction between the virus and the microbiome of the oral cavity and oropharynx (OCO), which will enable us to develop targeted measures for preventive public health interventions.

Study Population
Samples were collected in the Saguenay-Lac-St-Jean (SLSJ) region, in northeastern Quebec (Canada). The population genetics of SLSJ has a unique structure due to several isolated migratory flows [26]. This homogeneity makes it an interesting setting for studying genetics, culture, and environmental exposures [27]. This region was largely protected from the initial waves of infection that affected the rest of the province in 2020 due to its remoteness and isolation from large cities, but by the end of 2020, it had infection rates comparable to those of the rest of the province, mainly due to the SARS-CoV-2 alpha variant [28].
Samples from the Quebec Biobank of COVID-19
Samples were obtained from screening tests performed by public health officials in SLSJ. From 2020 to 2022, public health guidelines encouraged and/or required the public to get screened to prevent the spread of COVID-19. Individuals were screened following contact with an infected person or if they presented COVID-like symptoms [1]. Screening was performed by a gargle test following the Centre Intégré Universitaire de Santé et de Services Sociaux (CIUSSS) protocol and according to the standards [29]. This technique involves gargling the mouth and throat twice with 5 mL of commercial plain water and spitting into a cup, the contents of which are then transferred to a plastic tube. These tests were used for PCR screening by Quebec provincial public health to detect the presence of SARS-CoV-2. During these screening tests, each individual had the opportunity to participate in the provincial effort for the study of COVID-19, the Biobanque québécoise de la COVID-19 (BQC19), which aimed to collect samples and data for research projects [30]. Informed consent for inclusion in the BQC19 was obtained from the individuals being screened or their legal representatives. Ethical approval for the present project was obtained from the Research Ethics Board of the CIUSSS du SLSJ (IDs: 2022-388, 2021-026). An aliquot of the gargles was kept for each person tested at the Molecular Biology and Genetics Service, Clinical Department of Laboratory Medicine at the CIUSSS du SLSJ, for the purposes of this project. After COVID-19 screening, we had access to 256 samples to conduct the microbiome study. Among them, 182 samples were from individuals who screened positive for SARS-CoV-2, and 74 were from individuals who tested negative. Individuals positive for SARS-CoV-2 were either asymptomatic or presented a mild to moderate form of COVID-19. No individuals included in the study were hospitalized due to COVID-19. The SARS-CoV-2 variant that principally struck Quebec at the time of sample collection was the alpha variant [28].

Participants' sex and age are presented in Table 1, with age groups classified according to Health Canada's guidelines [31]. The cohort included 148 women and 108 men, and the average age was similar for men (36 ± 21 years) and women (38 ± 20 years). The positive test (PT) group included 182 individuals (103 women and 79 men) with a mean age of 40 ± 20 years. There were 74 individuals (45 women, 29 men) in the negative test (NT) group, with a mean age of 30 ± 19 years (Table 1). DNA was extracted from the gargle samples [32], with mean DNA concentrations of 6.14 ± 3.14 ng/µL (Supplementary Table S1). Samples were normalized in a 10 mM Tris solution to a concentration of 5 ng/µL for further processing.
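As a worked example of the normalization step, the standard dilution relation C1·V1 = C2·V2 gives the volume of extract and of 10 mM Tris needed to reach the 5 ng/µL target; in the R sketch below, the 20 µL final volume and the input concentrations are assumed example values, not figures from the study.

# C1 * V1 = C2 * V2: volumes needed to reach the 5 ng/uL target.
normalize_dna <- function(conc_in, target = 5, final_vol = 20) {
  vol_dna  <- target * final_vol / conc_in  # uL of DNA extract
  vol_tris <- final_vol - vol_dna           # uL of 10 mM Tris diluent
  # Extracts below the target concentration cannot be normalized by dilution.
  vol_tris[vol_tris < 0] <- NA
  data.frame(conc_in, vol_dna = round(vol_dna, 2), vol_tris = round(vol_tris, 2))
}

# Example extracts around the reported mean of 6.14 +/- 3.14 ng/uL.
normalize_dna(c(6.14, 9.28, 5.00, 3.10))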
Library Preparation and Sequencing
Sequencing libraries of the V3-V4 regions of the 16S rRNA gene were prepared by two-step PCR according to the Illumina protocol "16S Metagenomic Sequencing Library Preparation" [33]. Primers used for the first PCR were 341F (5′-TCG TCG GCA GCG TCA GAT GTG TAT AAG AGA CAG CCT ACG GGN GGC WGC AG-3′) and 805R (5′-GTC TCG TGG GCT CGG AGA TGT GTA TAA GAG ACA GGA CTA CHV GGG TAT CTA ATC C-3′). The second PCR was prepared with the Nextera XT Index Kit v2 (Illumina, San Diego, CA, USA) [34]. Library size was checked with a QIAxcel Advanced system (QIAGEN, Hilden, Germany) and confirmed at 550 bp. Libraries were quantified using a fluorometer (Fluoroskan, Thermo Fisher Scientific, Waltham, MA, USA) with the Quant-iT™ 1X dsDNA Assay Kit [35]. Libraries were sequenced on an Illumina MiniSeq with paired-end 150 bp cycles. Four sequencing runs were performed on four 96-well plates, generating a total of 47,233,715 reads with a mean of 184,507 ± 3709 per sample. Sequencing details and results are presented in Supplementary Table S2.

Sequence Processing
All analyses were performed in the R environment (version 4.2.0) for sequence processing and all subsequent steps [36]. To perform the DNA amplicon sequencing analysis, the DADA2 pipeline was used [37]. Sequence trimming and alignment were performed with the filterAndTrim() function of the dada2{} package (version 1.24.0). In the trimming step, retained strands were trimmed to a length of 115 bp (truncLen=c(115,115)), no sequences with more than two expected errors were retained (maxEE=c(2,2)), and reads were truncated at the first base with a quality score of two or less (truncQ=2). This step was performed without removing the primers. Then, errors generated during sequencing were removed (dada()), reverse and forward sequences were merged (mergePairs()), a table of the obtained sequences was created (makeSequenceTable()), sequencing chimeras were eliminated (removeBimeraDenovo()), and taxonomy was assigned to the non-chimeric sequences using a reference database (assignTaxonomy()). All these functions are from the dada2{} package (version 1.24.0) [38]. The DNA sequences were then stored using DNAStringSet() from the Biostrings{} package (version 2.64.1) [39]. Taxonomic assignment of amplicon sequence variants (ASVs) was performed with the SILVA SSU reference database (version r132_March2018). Finally, the phyloseq() function was used to create a phyloseq object to facilitate the analysis and visualization of the data [40]. For all these functions, the remaining parameters were left at their defaults. This produced a taxonomic table containing 20,206 ASVs, and samples contained an average of 69,949 ± 28,555 reads.

The taxonomic table was filtered using the DECONTAM{} package (version 1.16.0) to remove contaminating sequences; it identifies contaminants based on the frequency distribution of each ASV as a function of input DNA concentration [41]. Using the filter_taxa() function, bacterial taxa with an abundance greater than four counts in at least 10% of the samples were retained; the others were removed [42]. The decontaminated taxonomic table contained an average of 68,284 ± 28,015 reads per sample. These sequences allowed the identification of 9818 ASVs in a total of 256 samples. Multiple rarefaction curves were plotted to ensure that each sample reached taxa saturation (Supplementary Figure S1).
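The processing chain just described can be condensed into a runnable R sketch. The filtering parameters are those reported in the text; the file paths, the SILVA training file name, and the sample metadata (including the DNA concentration column needed by decontam) are assumptions for illustration only.

library(dada2)      # version 1.24.0 in the study
library(phyloseq)
library(decontam)

# Paired-end fastq files (hypothetical paths).
fnFs <- sort(list.files("fastq", pattern = "_R1.fastq.gz", full.names = TRUE))
fnRs <- sort(list.files("fastq", pattern = "_R2.fastq.gz", full.names = TRUE))
filtFs <- file.path("filtered", basename(fnFs))
filtRs <- file.path("filtered", basename(fnRs))

# Trimming/filtering with the parameters reported in the text.
filterAndTrim(fnFs, filtFs, fnRs, filtRs,
              truncLen = c(115, 115), maxEE = c(2, 2), truncQ = 2)

# Error learning, denoising, merging, and chimera removal.
errF <- learnErrors(filtFs)
errR <- learnErrors(filtRs)
dadaFs <- dada(filtFs, err = errF)
dadaRs <- dada(filtRs, err = errR)
merged <- mergePairs(dadaFs, filtFs, dadaRs, filtRs)
seqtab <- makeSequenceTable(merged)
seqtab <- removeBimeraDenovo(seqtab)

# Taxonomic assignment against SILVA (training file name is an assumption).
taxa <- assignTaxonomy(seqtab, "silva_nr_v132_train_set.fa.gz")

# Hypothetical sample metadata with per-sample DNA concentration, as
# required by decontam's frequency method (placeholder values here).
meta <- data.frame(row.names = rownames(seqtab),
                   dna_conc = runif(nrow(seqtab), 1, 12))

# Assemble a phyloseq object.
ps <- phyloseq(otu_table(seqtab, taxa_are_rows = FALSE),
               tax_table(taxa), sample_data(meta))

# Frequency-based contaminant removal, then the stated prevalence filter:
# keep taxa with more than 4 counts in at least 10% of samples.
contam <- isContaminant(ps, method = "frequency", conc = "dna_conc")
ps <- prune_taxa(!contam$contaminant, ps)
ps <- filter_taxa(ps, function(x) sum(x > 4) >= 0.10 * length(x), prune = TRUE)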
Diversity Analyses
The relative distribution of phyla was calculated and compared with a Mann-Whitney test (p-value < 0.05). Subsequently, linear regressions were performed on all phyla using the lm() function, without multiple-testing correction, to check whether the variables "age" and "sex", and the two combined, could influence the significant difference in distribution between the PT and NT groups.

Diversity analyses were also performed in R with the phyloseq{} package (version 1.40.0) [40] and the plot_diversity_stats() function of the microbiomeutilities{} package (version 1.0.16) [43]. Taxonomic richness was calculated with the observed and Shannon diversity indices and compared with a Mann-Whitney test (p-value < 0.05) between the PT and NT groups. A Breusch-Pagan test of homoscedasticity (p-value < 0.05) was performed in R using the bptest() function from the lmtest{} package (version 0.9.40).

Community Composition across Positive Test (PT) and Negative Test (NT) Groups
Community composition was compared through the Bray-Curtis dissimilarity index and visualized by non-metric multidimensional scaling (NMDS) using the plot_ordination() function of the phyloseq{} package (version 3.3.6). We tested for normality using the Shapiro-Wilk test with shapiro.test(). Then the distribution of the NMDS data was compared with the Mann-Whitney test using wilcox.test() (p-value < 0.05). Analysis of similarity was performed by permutation test with the adonis2() function (p-value < 0.05) of the vegan{} package (version 2.6.2) [44]. The dispersion of the proportions within each sample group was measured with the betadisper() function of the vegan{} package (p-value < 0.05; version 2.6.2) [44]. Finally, we carried out permutations with the permutest() function of the vegan{} package (p-value < 0.05; version 2.6.2), which allows us to see whether the distributions differ significantly between the groups.

Abundance of Specific Taxa in the Positive Test (PT) and Negative Test (NT) Groups
Linear discriminant analysis effect size (LEfSe) was performed using run_lefse() from the microbiomeMarker{} package (version 1.2.2) [45] to identify the microbial taxa that were features (biomarkers) of the PT and NT groups of participants. The significance threshold for the Kruskal-Wallis test was set at an alpha of 0.05, the significance threshold for the Wilcoxon test was set at an alpha of 0.05, and the LDA log score threshold was four. Taxa scoring higher than four were identified as features.

A heatmap was made to visualize the similarities and differences between the samples according to their taxonomic composition. The prevalence threshold was set to 0.1, meaning that only those bacteria detected in at least 10% of the samples were included. It was made using the plot_core() function of the microbiome{} package (version 1.18.0) [46]. A volcano plot was made to visualize differentially abundant taxa across the PT and NT groups, calculated using the ancombc() function of the ANCOMBC{} package (version 1.6.2) [47], with an alpha detection threshold of 0.01 and Bonferroni correction for multiple comparisons.
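A condensed sketch of these statistical comparisons is shown below, assuming the filtered phyloseq object ps from the previous sketch and an assumed sample_data column named group holding the PT/NT labels.

library(phyloseq)
library(vegan)

# PT/NT labels (assumed column name).
grp  <- factor(sample_data(ps)$group)
meta <- data.frame(group = grp)

# Alpha diversity: observed richness and Shannon index, Mann-Whitney test.
alpha <- estimate_richness(ps, measures = c("Observed", "Shannon"))
wilcox.test(alpha$Observed ~ grp)
wilcox.test(alpha$Shannon ~ grp)

# Beta diversity: Bray-Curtis dissimilarity and NMDS ordination.
bray <- phyloseq::distance(ps, method = "bray")
ord  <- ordinate(ps, method = "NMDS", distance = bray)
plot_ordination(ps, ord, color = "group")

# PERMANOVA (variance explained by test status) and dispersion tests.
adonis2(bray ~ group, data = meta)
disp <- betadisper(bray, grp)
permutest(disp)

The biomarker steps would follow analogously, for instance via microbiomeMarker::run_lefse(ps, group = "group", lda_cutoff = 4) and ANCOMBC::ancombc(ps, formula = "group", p_adj_method = "bonferroni", alpha = 0.01), again under the same assumptions about the object and column names.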
The relative distribution of phyla varied between the PT and NT groups. Following a Mann-Whitney test, a significant difference was observed in the abundance of Firmicutes (mean for PT = 62.68 and NT = 67.83, p-value = 0.004), Bacteroidota (mean for PT = 7.64 and NT = 5.11, p-value = 7 × 10⁻⁴), and Fusobacteriota (mean for PT = 2.04 and NT = 0.95, p-value = 7.87 × 10⁻⁸). Based on linear regression analyses, no significant difference was found in the distribution of phylum abundance between each subcategory within each group (age and sex; see Figure 2A,B).

No Significant Difference in Alpha Diversity between PT and NT Individuals
Alpha diversity indices were employed to compare the taxonomic richness between the PT and NT groups. The results showed no significant difference in taxonomic richness between the two groups (Figure 3A). The mean taxonomic richness based on the observed species index was 1257 ± 510 for the PT group and 1235 ± 400 for the NT group. Similarly, for the Shannon index, the mean was 5.94 ± 0.48 for the PT group and 5.94 ± 0.32 for the NT group. The Wilcoxon test yielded a p-value > 0.05 for both the observed index and the Shannon index, indicating no significant difference between the two groups for either index (Figure 3A,B).

Significant Difference in Beta Diversity in Bacterial Communities in Samples of the Positive Test (PT) and Negative Test (NT) Groups
The non-metric multidimensional scaling (NMDS) graph depicted for our two sample groups (Figure 3C) illustrates the dissimilarity among our samples. After analyzing the bacterial community structure of the OCO, the Bray-Curtis dissimilarity index was calculated. Subsequently, the beta diversity index was employed to compare the taxonomic richness between samples of the PT and NT groups. Given the Shapiro-Wilk test results (p-value = 2.749 × 10⁻¹⁶), the Mann-Whitney test was used to compare richness between the groups. The results showed a significant difference in taxon richness among our samples between the PT and NT groups (Mann-Whitney test; p-value = 0.043).
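The per-phylum comparison just reported could be sketched as follows, assuming a per-sample data frame phy (hypothetical name) with columns group (PT/NT), age, sex, and one relative-abundance column per phylum.

# Mann-Whitney test per phylum between PT and NT.
phyla <- c("Firmicutes", "Bacteroidota", "Fusobacteriota")
for (p in phyla) {
  res <- wilcox.test(phy[[p]] ~ phy$group)
  cat(p, "p-value:", signif(res$p.value, 3), "\n")
}

# Linear regressions checking whether age, sex, or the two combined could
# account for the group difference (no multiple-testing correction, as stated).
summary(lm(Firmicutes ~ group + age + sex, data = phy))
summary(lm(Firmicutes ~ group + age * sex, data = phy))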
The adonis2 test shows a variance difference in the taxonomic composition, indicating that the average dissimilarity of bacterial communities was not homogeneous between samples of the PT and NT groups. The screening test variable exhibited an R² of 2.7% (p-value < 0.001).

Furthermore, the betadisper analysis, computed from the Bray-Curtis index, showed distances of 0.412 and 0.367 between the sample centroids for the PT and NT groups, respectively. The dispersion within each group shows a significant difference (p-value = 0.002).

Prevotella and Veillonella Are Features of OCO in COVID-19 Patients
The LEfSe test highlights the bacterial genera most likely to significantly explain the differences between individuals in the PT and NT groups (Figure 4A). The diameter of each node is proportional to the abundance of the taxonomic rank in the group. Nodes in white are taxonomic ranks that are not significantly different but exhibit differential abundance. In this study, the LEfSe test revealed a distinct variation in the relative distribution of genera between our samples from the PT and NT groups. Following a Mann-Whitney test, statistically significant differences were observed for Prevotella (mean for PT = 5.54 and NT = 3.27, p-value = 1.78 × 10⁻⁵), Veillonella (mean for PT = 6.87 and NT = 4.58, p-value = 3.70 × 10⁻⁵), Streptococcus (mean for PT = 49.73 and NT = 58.96, p-value = 1.38 × 10⁻⁶), and Actinomyces (mean for PT = 5.08 and NT = 6.16, p-value = 8.72 × 10⁻³). These findings indicate that Prevotella and Veillonella are features of the PT group, while Streptococcus and Actinomyces are characteristic of the NT group.

Individuals with COVID-19 Exhibit Greater Diversity within Their Core OCO Microbiome
Of the 106 bacterial genera in the dataset, 38 were identified as part of the core microbiome, common to all samples from both the PT and NT groups. The relative taxonomic diversity between the two communities remained consistent, including Firmicutes, Actinobacteriota, Bacteroidota, Proteobacteria, and Fusobacteriota, which were found in all samples but with variations in prevalence (Figure S2). The most abundant and widespread genera are, in order, Streptococcus, Rothia, Veillonella, Actinomyces, Prevotella, Gemella, Fusobacterium, Granulicatella, and Porphyromonas, found across all samples (Figure 5). However, the prevalence of these genera differed between the two communities. Overall, samples from the PT group exhibited 74 prevalent taxa, while samples from the NT group showed 69 prevalent taxa, each at a relative abundance of 0.001%.
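The core-microbiome computation (prevalence threshold 0.1, detection 0.001% relative abundance) can be sketched with the microbiome package, again assuming the phyloseq object ps from the earlier sketches.

library(phyloseq)
library(microbiome)

# Genus-level agglomeration and conversion to relative abundances.
ps_gen <- tax_glom(ps, taxrank = "Genus")
ps_rel <- microbiome::transform(ps_gen, "compositional")

# Core taxa: present in at least 10% of samples at >= 0.001% relative
# abundance (the thresholds reported in the text; the study found 38 genera).
core <- core_members(ps_rel, detection = 0.001 / 100, prevalence = 0.10)
length(core)

# Prevalence/detection heatmap, as produced with plot_core() in the study.
plot_core(ps_rel, plot.type = "heatmap",
          prevalences = seq(0.1, 1, 0.1),
          detections = 10^seq(-5, -2, length.out = 8))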
COVID-19 Infection Status Is Associated with Differentially Abundant Taxa
In this dataset, 18 genera were identified as differentially abundant (after Bonferroni correction; p-value < 0.01; Figure 6). Eleven differentially abundant genera were overrepresented in the PT group, including Lentimicrobium, Lawsonella, Pseudomonas, and Centipeda (Figure 6).

Discussion
The upper respiratory tract is the main entry point for the SARS-CoV-2 virus. After entering the type II epithelial cells, the virus disrupts the homeostasis of the OCO microbiome through viral infection of host cells [48]. This disruption can potentially lead to inflammatory damage, an inadequate immune response, or reduced resilience to the development of COVID-19 [49]. To gain a better understanding of the biology of SARS-CoV-2 infection and its associated symptomatology, this study examined the bacterial communities of the upper respiratory tract and investigated their composition in individuals infected and uninfected with the SARS-CoV-2 alpha variant in the SLSJ region. Early studies conducted at the onset of the COVID-19 pandemic, investigating OCO bacterial communities, did not reveal a significant difference between SARS-CoV-2-infected and uninfected individuals [50]. However, recent studies, including the present one, have demonstrated distinct bacterial communities [23,24,48]. In this study, alterations in microbial diversity and the presence of biomarkers in infected and uninfected individuals were observed, suggesting that the virus induces changes in bacterial homeostasis in the respiratory tract, or that certain microbiome assemblages may promote SARS-CoV-2 infection. While the direction of causality is difficult to identify, this study shows that there are interactions between viral respiratory infections and the OCO microbiome.
A notable strength of this study lies in the fact that the population sampled comes from the SLSJ region, which is geographically remote from major cities, making it possible to control for variations in the microbiome due to environmental factors (e.g., lower diversity of circulating viruses) or significant cultural differences. Nevertheless, it is important to acknowledge certain limitations associated with the sample composition. Biases in sampling arose because most individuals who underwent COVID-19 testing were symptomatic, meaning a proportion of individuals may have tested negative for COVID-19 while potentially having other unidentified infections. Consequently, interpreting the results was made more challenging by the fact that symptomatic individuals with negative tests might have experienced alterations in microbial diversity due to another respiratory infection. Furthermore, the regular testing of people in the medical field resulted in an overrepresentation of adults and women in the sample. These limitations might also explain why this study did not identify differences regarding age and sex, as reported elsewhere [51,52]. It is known that all microbiotas develop from infancy to adulthood and are thus constituted differently [53]. The underrepresentation of children (0-14 years) in the studied sample might also have contributed to this discrepancy with the literature. Lastly, clinical data such as severity, symptomatology, or outcomes were not available for the studied samples. These data would have been useful for the stratification of the results regarding specific health parameters.

The analyses of phylum abundance revealed that both SARS-CoV-2-infected and uninfected individuals shared the same dominant phyla, with Firmicutes being the most prevalent. Firmicutes encompasses a wide range of taxa in the human commensal flora and has been previously identified as the dominant phylum in both healthy individuals and those infected with COVID-19 [7,54-56]. The other phyla identified (from most to least abundant: Actinobacteriota, Bacteroidota, Proteobacteria, Fusobacteriota, and Patescibacteria) are also part of the commensal and SARS-CoV-2-infected flora of the OCO [55-57]. Among these phyla, a significant decrease in the relative abundance of Firmicutes and a significant increase in Bacteroidota and Fusobacteriota were observed in infected individuals compared to uninfected ones. As these phyla are the most abundant in the OCO, the differences in their relative abundance in infected individuals suggest a state of dysbiosis, indicative of a disturbance in the commensal microbiome. This has been corroborated by other studies that have highlighted an infectious state and a decrease in Firmicutes (see review [58]). Additionally, among the genera belonging to the Firmicutes and Bacteroidota phyla, three were found to exhibit significant differences in their relative abundance and were identified as features of the SARS-CoV-2-infected OCO microbiome. No biomarkers were identified for the phylum Fusobacteriota. However, bacterial genera of this phylum have been shown to produce compounds such as hydrogen sulfide and methyl mercaptan, which, in high concentrations, induce inflammation [59]. Another study shows a negative correlation between the taxon Fusobacterium periodonticum and the severity of COVID-19 symptoms [60], indicating the potential importance of this phylum in the immune and inflammatory responses associated with COVID-19 infection.
Despite the decrease in the abundance of the Firmicutes phylum in infected individuals, the genera Veillonella and Prevotella showed a significant increase in infected individuals and were identified as features of SARS-CoV-2 infection. Both genera are known for their production of lipopolysaccharides (LPS) [24,25]. LPS, present on the outer membrane of Gram-negative bacteria, can have pro-inflammatory effects on the host immune system and induce systemic inflammation if such bacteria are predominant in the microbiome [61,62]. Prior studies have demonstrated correlations between Veillonella and Prevotella abundances and the severity of COVID-19 symptoms [24,58,63-65]. Additionally, individual taxa belonging to these genera, such as V. parvula, V. dispar, V. infantium, P. enoeca, and P. melaninogenica, were significantly enriched in the OCO microbiome of individuals infected with SARS-CoV-2 and experiencing prolonged symptoms or co-infection with influenza, which can lead to pneumonia [24,66-68]. As this study utilized 16S rRNA gene sequencing, which provides information down to the bacterial genus level, specific validation of these results at lower taxonomic ranks was not possible in this sample. However, the strength of this amplicon-based sequencing method is its ability to screen large populations in a cost-effective manner. The results from the present dataset of 256 individuals strengthen observations made on Prevotella and Veillonella, underscore their contribution to maintaining the commensal OCO microbiome, and suggest they may play a critical role in inflammation in the upper airways.

The third genus identified as a feature was Streptococcus (Firmicutes phylum), which served as a biomarker for uninfected individuals. In this study, a significantly higher proportion of Streptococcus was observed in uninfected individuals, contributing to the overall increase observed for this phylum. Streptococcus is the most abundant genus in the upper respiratory tract and plays a crucial role in maintaining the homeostasis of the oral microbiome [69]. Microbiomes with a greater abundance of Streptococcus and, more specifically, S. parasanguinis tended to be more stable and resistant to co-infections or secondary infections, significantly correlating with mild or moderate forms of COVID-19 [66,70]. The depletion of specific Streptococcus taxa could indicate a state of dysbiosis of the OCO microbiome, possibly resulting from the overgrowth of other bacteria or an intense immune response [68,71]. This could also potentially be due to direct competitive processes within the OCO microbiome, as observed in the anti-Streptococcus effect of host lipids cleaved by Corynebacterium [72]. Co-infection with another Streptococcus taxon, S. pneumoniae, has been found to be one of the most common occurrences following SARS-CoV-2 infection, affecting up to 79% of individuals admitted to the hospital and leading to pneumonia [66,73-76]. While the functional interactions between Streptococcus and SARS-CoV-2 infection could not be assessed from the present amplicon-based analysis, this genus may be an interesting target for developing COVID-19 sensitivity screenings and for better understanding the ecological interactions within the OCO microbiome during infection.
The analyses further identified other possible biomarkers from the aforementioned three phyla. The genus Actinomyces was found to be a feature of uninfected individuals. While certain Actinomyces taxa were associated with mild or moderate forms of COVID-19, a decrease in their abundance was observed in severely affected individuals, and they were also negatively correlated with inflammatory biomarkers such as C-reactive protein [70,77]. On the other hand, the Pasteurellales family and Fusobacteriales order were identified as biomarkers for COVID-19 infection, although no specific genus within these taxa showed significant differences. Nonetheless, genera belonging to these taxa have been found to be significantly increased in cases of co-infection with SARS-CoV-2 [73,78-80].

Consistent with previous findings, the results of this study reveal that taxonomic diversity varies in infected individuals compared to uninfected individuals [66,81]. While there was no statistical difference in taxonomic richness between the two groups, there were significant differences in community composition, indicating that 2.7% of the variance in taxonomic assemblages of the OCO microbiome was influenced by individuals' infection status. Given the complexity of the OCO environment and the multitude of variables that could not be included in the analysis (tobacco use, air quality, etc.), this value still represents a significant proportion of the community variance. Indeed, in comparison, beta diversity differences in the OCO microbiome for other infectious diseases typically fall below 1% [82,83].

The bacterial communities of infected individuals exhibited greater taxonomic diversity in terms of alpha diversity, and infected samples were more dispersed when visualized in NMDS space. The conventional notion suggests that a healthy microbiome should be characterized by diversity to enhance resilience against disturbances [84,85]. However, stochastic processes that shape bacterial communities can lead to higher diversity than that of a healthy microbiome [66,86]. This observation is further supported by the differential abundance of microbial genera known as opportunistic airway pathogens. Specifically, Pseudomonas and Corynebacterium were more abundant in infected individuals, consistent with findings from previous studies [87,88]. These genera are associated with secondary infections in the context of SARS-CoV-2 infection [87-89]. It is essential to note that no causality can be inferred from these observed associations, and they may be influenced by unmeasured confounding factors.

In conclusion, this study presents the composition and diversity of the OCO microbiome of individuals infected and uninfected with SARS-CoV-2 from the SLSJ region in Canada. The results showed that the OCO microbiome of infected individuals has a different taxonomic composition and diversity, suggesting dysbiosis or increased sensitivity of certain microbiomes to infection. Additionally, four genera have been identified as possible biomarkers: two for SARS-CoV-2 infection (Prevotella and Veillonella) and two for the absence of SARS-CoV-2 infection (Streptococcus and Actinomyces). This study could help to highlight and understand the relationship between the microbiome of the respiratory tract and health in the context of a new pandemic.
Conclusions

The results of the present study show that the OCO microbiome of infected individuals exhibits a distinct taxonomic composition and community assembly, suggesting dysbiosis. Microbial features such as those identified in this study can be used as biomarkers to implement preventive measures, such as specialized pre/probiotics (for Streptococcus), or to identify the most vulnerable individuals in order to apply appropriate treatments (Veillonella).

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/microorganisms11112703/s1. Figure S1. Multiple rarefaction curves. Rarefaction curves for samples of the positive test (PT) and negative test (NT) groups were generated with the rarecurve() function from the R vegan package (version 2.6.2), using steps of 50. Saturation is close for all samples, at around 60,000 reads. As a very small minority of samples had fewer than 60k reads, no rarefaction was applied, in order to retain the maximum amount of information per sample. Figure S2. Heatmap of the most abundant phyla. The heatmap allows the multidimensional data in samples of the negative test (NT) and positive test (PT) groups to be analyzed and visualized. The detection threshold of the relative abundance is on the abscissa and the phyla on the ordinate. Color shows the prevalence of each phylum in terms of relative abundance; color intensity shows the percentage in a sample, referring to the prevalence color key. Table S1. Data measured after extraction of gargle DNA, measured at Qubit™ using the Invitrogen™ Qubit™ 1X dsDNA Broad Range (BR) protocol (Thermo Fisher Scientific). Table S2. Sequencing results and number of remaining sequences at each step of data processing from the DADA2 pipeline. (a) denoisedF: number of reads remaining for forward (F) files after modeling and sequencing error correction (denoising). (b) denoisedR: number of reads remaining for reverse (R) files after modeling and sequencing error correction (denoising). (c) merged: number of reads remaining after merging the forward and reverse pairs to create a complete sequence. (d) nonchim: final number of reads after elimination of chimeras created during PCR DNA amplification.

Figure 1. Relative abundance (%) of dominant taxa in samples of the positive test (PT) and negative test (NT) groups. Phyla accounting for less than 5% of all samples were grouped together in the "<5% abundance" group.

Figure 2.
Barplot of relative abundance of phyla in samples of the positive test and negative test groups by sex or age category. This graph shows the relative abundance of phyla present in each sample. (A) shows the most abundant phyla among: NT children, NT teenagers, NT adults, NT elders, PT children, PT teenagers, PT adults, PT elders. (B) shows the most abundant phyla among: NT Females, NT Males, PT Females, PT Males. Phyla representing less than 5% of all samples were grouped into the "<5% abundance" group.

Figure 3. Taxonomic richness and bacterial community composition in samples of the positive test (PT) and negative test (NT) groups. (A) Observed and (B) Shannon diversity indices. Half-violin plots show the distribution of the calculated index across samples of the PT and NT groups. Boxes show quartiles that represent the largest distribution of both sample groups, with the line showing the median. p-values reported are from Mann-Whitney tests, with "ns" showing non-significant differences (p-value > 0.05). (C) Community composition (beta diversity) calculated using the Bray-Curtis dissimilarity index in non-metric multidimensional scaling (NMDS) space, for samples of the PT and NT groups.

Figure 4. Linear discriminant analysis (LDA) score and cladogram of LEfSe biomarkers. (A) Linear discriminant analysis (LDA) effect size (LEfSe) scores (minimal score at 4, p < 0.05) for taxa identified as biomarkers of either the PT or NT group. (B) Cladogram showing the phylogeny of identified features. Concentric circles represent different taxonomic levels, with the outermost being genus. The nodes (red or blue) represent significantly different features identified as biomarkers. The diameter of each node is proportional to the abundance of the taxonomic rank in the group. Nodes in white are taxonomic ranks that are not significantly different but exhibit differential abundance.

Figure 5.
Heatmap of the most abundant genera. Core microbiome genera, shown through their prevalence across samples of the (A) NT and (B) PT groups (intensity of color) along a step-wise increase in the detection threshold (relative abundance) (x-axis).

Figure 6. Differentially abundant genera between PT and NT. Volcano plot showing differentially abundant genera between the PT and NT groups using the ancombc() function. Genera are colored if they pass the -log10 p-value threshold (p-value = 0.05) and an absolute log fold change ≥ 0.05. Those that are more abundant in bacterial communities from the PT group are shown in red, while those more abundant in bacterial communities from the NT group are displayed in blue (p < 0.01; after Bonferroni correction).

Author Contributions: Conceptualization, C.L.; methodology, C.G.; validation, C.G.; formal analysis, W.B.; investigation, W.B.; resources, C.L., K.T. and G.J.; data curation, W.B.; writing-original draft preparation, W.B.; writing-review and editing, C.L., C.G., K.T. and G.J.; visualization, W.B.; supervision, C.L. and C.G.; project administration, C.L.; funding acquisition, C.L. All authors have read and agreed to the published version of the manuscript.

Funding: This work was made possible through open sharing of data and samples from the Biobanque québécoise de la COVID-19 (BQC19), funded by the Fonds de recherche du Québec-Santé, Génome Québec, the Public Health Agency of Canada and, as of March 2022, the ministère de la Santé et des Services sociaux (ID: 2021-HQ-000051). Funding was also obtained from the Canada Foundation for

Table 1. Age and sex of individuals.
9,375.8
2023-11-01T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Identification of Auditory Object-Specific Attention from Single-Trial Electroencephalogram Signals via Entropy Measures and Machine Learning

Existing research has revealed that auditory attention can be tracked from ongoing electroencephalography (EEG) signals. The aim of this novel study was to investigate the identification of people's attention to a specific auditory object from single-trial EEG signals via entropy measures and machine learning. Approximate entropy (ApEn), sample entropy (SampEn), composite multiscale entropy (CmpMSE) and fuzzy entropy (FuzzyEn) were used to extract the informative features of EEG signals under three kinds of auditory object-specific attention (Rest, Auditory Object1 Attention (AOA1) and Auditory Object2 Attention (AOA2)). Linear discriminant analysis (LDA) and support vector machine (SVM) were used to construct two auditory attention classifiers. The statistical results of the entropy measures indicated that there were significant differences in the values of ApEn, SampEn, CmpMSE and FuzzyEn between Rest, AOA1 and AOA2. For the SVM-based auditory attention classifier, the auditory object-specific attention of Rest, AOA1 and AOA2 could be identified from EEG signals using ApEn, SampEn, CmpMSE and FuzzyEn as features, and the identification rates were significantly different from chance level. The optimal identification was achieved by the SVM-based auditory attention classifier using CmpMSE with the scale factor τ = 10. This study demonstrated a novel solution for identifying auditory object-specific attention from single-trial EEG signals without the need to access the auditory stimulus.

Introduction

Existing research has revealed that auditory objects [1], as neural representational units encoded in the human auditory cortex [2], are involved in high-level cognitive processing in the cerebral cortex, such as top-down attentional modulation [3]. Top-down attention is a selection process that focuses cortical processing resources on the most relevant sensory information in order to enhance information processing. There are many reports that auditory attention can be detected from brain signals, such as invasive electrocorticography [4], non-invasive magnetoencephalography (MEG) [5,6] and electroencephalography (EEG) [7,8]. These findings provide hard evidence that people's attention to a specific auditory object, referred to here as auditory object-specific attention, can be identified from brain signals. As the reflection of electrical activity in the cerebral cortex, EEG signals contain a wealth of information that is closely related to advanced nervous activities in the human brain, such as learning, memory and attention [9]. Owing to the advantages of relatively low cost, ease of access and high temporal resolution, EEG signals are of much more practical value for the study of auditory object-specific attention [10]. In the existing literature, three main approaches have been used to identify auditory object-specific attention from EEG signals:

• The first approach is the use of system identification, which builds a linear forward map from specific acoustic features of the auditory stimuli to the EEG signals. This is a direct method to estimate EEG signals [11]. The auditory object-specific attention can then be inferred from the estimated EEG signals [5].

• The second approach is the use of stimulus reconstruction, which reconstructs specific acoustic features (temporal envelopes) of the auditory stimuli from the ongoing EEG signals [12].
Recently, this classical method has been extensively used to study the processing of speech perception in EEG signals. The auditory object-specific attention can be identified based on the reconstructed acoustic features [7].

• The third approach is to extract the informative features of EEG signals and/or auditory stimuli and then exploit machine learning algorithms to train a classifier for the detection of auditory attention [10,13]. The informative features can be the cross-correlation between EEG signals and an auditory stimulus' envelope, the power in EEG signal bands, measures of auditory event-related potentials [14] and so on. Machine learning algorithms such as linear discriminant analysis (LDA) [15], regularized discriminant analysis (RDA) [13], support vector machine (SVM) [16,17] and neural networks [18] have been reported in the published studies.

According to the available research, the first and second approaches must exploit both EEG signals and the acoustic features of the auditory stimulus to achieve the identification of auditory object-specific attention. Auditory attention identification from EEG signals thus usually requires that the acoustic features of the auditory stimuli be known in advance, for instance, the speech envelopes in the study of Horton [15]. This is because such identification relies on the facts that cortical oscillations phase-lock to the envelope of the auditory stimuli [19] and that the temporal envelope of the auditory stimuli can be reconstructed from individual neural representations [2]. However, machine learning techniques have recently made remarkable progress and can achieve unprecedented accuracy in classification tasks [20]. With the help of machine learning, the third approach has the potential to achieve the detection of auditory object-specific attention by exploiting enough useful information from EEG signals alone.

Using entropy measures to extract the informative features of EEG signals for brain-state monitoring and brain function assessment is becoming a hot research topic. EEG signals are commonly accepted to be non-stationary, nonlinear and multicomponent in nature. As typical nonlinear analysis methods, entropy measures of EEG signals may be much more appropriate for capturing the imperceptible changes in different physiological and cognitive states of the human brain. For example, Mu et al. studied the detection of driving fatigue using four entropy measures, i.e., spectrum entropy, approximate entropy (ApEn), sample entropy (SampEn) and fuzzy entropy (FuzzyEn), to extract features of EEG signals and reached an average recognition accuracy of 98.75% [21]. Hosseini et al. proposed the use of ApEn and wavelet entropy in EEG signals for emotion state analysis and found that ApEn and wavelet entropy were capable of discriminating emotional states [22]. Shourie et al. adopted ApEn to investigate the differences between the EEG signals of artists and non-artists during the visual perception and mental imagery of some paintings and at resting condition; the research found that ApEn was significantly higher for artists during visual perception and mental imagery when compared with non-artists [23,24]. Alaraj and Fukami exploited ApEn to quantitatively evaluate the wakefulness state, and the results showed that ApEn outperformed other conventional methods with respect to the classification of awake and drowsy subjects [25].
To date, applying entropy measures to the quantification of EEG signals has proved to be a powerful tool for identifying mental tasks and revealing cerebral states. In continuation of the aforementioned studies, we move one step ahead in this study and explore the feasibility of using entropy measures of EEG signals to identify auditory object-specific attention, without the need for the acoustic features of the auditory stimulus. Using four well-established entropy measures, i.e., ApEn, SampEn, composite multiscale entropy (CmpMSE) and FuzzyEn, to extract the informative features of EEG signals, we investigate the changes of these entropy measures in EEG signals under different auditory object-specific attention. Then, we use machine learning to train an auditory attention classifier for the identification of auditory object-specific attention. Based on this preliminary experimental research, we demonstrate a novel solution for identifying auditory object-specific attention from ongoing EEG signals by the use of an auditory attention classifier. The study of the identification of auditory object-specific attention not only has great research value for monitoring the cognitive and physiological states of the human brain, but also has great potential for the realization of assistive hearing technology with neural feedback.

Subjects

Thirteen subjects (aged 21 to 28 years, four females) participated in this study. All subjects were normal-hearing and right-handed college students and none had a history of neurological illness, as confirmed by questionnaires. The experimental procedures were approved by the ethics committee of Harbin Institute of Technology Shenzhen Graduate School and all experiments were performed in accordance with relevant guidelines and regulations. Informed consent forms were signed by the subjects before the experiments were performed.

Experimental Design

In this study, two audio signals were used as the auditory stimuli and the durations of both audio signals were 60 s. While the subjects focused their attention on listening to a specific auditory stimulus, the corresponding auditory object emerged in their auditory cortex. Audio signal A was the roaring sound of a tiger, corresponding to Auditory Object1; audio signal B was a segment of a stand-up comedy, corresponding to Auditory Object2. The audio signals were binaurally played with ER4 (Etymotic Research Inc., Elk Grove Village, IL, USA) in-ear earphones. The subjects were instructed to keep their attention on the auditory stimuli with their eyes closed while their scalp EEG signals were being recorded. In this study, 8-channel EEG signals were recorded using the ENOBIO 8 system (Neuroelectrics, Barcelona, Spain) with dry electrodes. The EEG signals were sampled at 500 Hz with a band-pass filter of 0.5-40 Hz from eight sites on the scalp. According to the international standard 10-20 system, the eight electrode sites were selected to be T7 and T8 in the temporal region, P7 and P8 in the posterior temporal region, P3 and P4 in the parietal region, Cz in the central region and Fz in the frontal region. To minimize the subjects' body movements during the experiments, the subjects were asked to sit in a comfortable chair. The experiments were conducted in a soundproof room. Each subject was required to undergo three EEG measurement protocols in a random order, giving a total of 39 EEG measurements in this study, each of 60 s in length.
The three 60-s EEG measurement protocols corresponded to three kinds of auditory object-specific attention. The first EEG measurement protocol, during which the subject was instructed to keep calm and the brain was in a resting state without any audio signal playing, corresponded to Rest. The second EEG measurement protocol, during which the subject was instructed to keep attention on the auditory stimulus with audio signal A playing, corresponded to Auditory Object1 Attention (AOA1). The third EEG measurement protocol, during which the subject was instructed to keep attention on the auditory stimulus with audio signal B playing, corresponded to Auditory Object2 Attention (AOA2). For each subject, the three EEG measurement protocols were performed in random order to reduce the chance of the EEG signals being contaminated by a fixed order of auditory tasks or by ear dominance.

Entropy Measures in EEG Signals

ApEn, proposed by Pincus [26], is considered a complexity measure of time series. ApEn measures predictability by evaluating the irregularity of a time series: the more similar patterns the time series contains, the less irregular it is and the more likely it is to be predictable. The computation of ApEn first involves phase-space reconstruction, in which the time series is embedded into phase spaces of dimension m and m + 1, respectively; the percentages of similar vectors in the phase spaces with acceptable matches are then calculated. SampEn, proposed by Richman et al. [27], is a modified version of ApEn with better consistency and less dependence on data length. CmpMSE was proposed by Wu [28]. For a predefined scale factor, a set of k-th coarse-grained time series based on the composite averaging method is reconstructed from the original time series; the sample entropies of all coarse-grained time series are calculated, and CmpMSE is then defined as the mean of all these sample entropies. FuzzyEn uses a fuzzy membership function to obtain a fuzzy measurement of the similarity of two vectors [29]. The family of exponential functions is usually used as the fuzzy function; it is continuous and convex, so that the similarity does not change abruptly [30].

Before the calculation of ApEn, SampEn, CmpMSE and FuzzyEn, the EEG signals were preprocessed by linear detrending, in which polynomial curve fits were used as the trend terms of the EEG signals and then subtracted. Then, a 9-level wavelet decomposition was performed using Daubechies (db4) wavelets as the wavelet function, which is suitable for detecting changes in EEG signals [31]. The two highest detail coefficients, D2 (62.5-125 Hz) and D1 (125-250 Hz) [31], and the approximation coefficients A9 (0-0.49 Hz) were considered noise. The denoised EEG signals were recovered from the detail coefficients D3 to D9, and the effective frequency band of the denoised EEG signals was considered to be 0.5-62.5 Hz. The recording time of the EEG signals was 60 s, corresponding to the durations of both audio stimuli. Because the sampling rate was 500 Hz, the data length used for the calculation of entropy measures in the EEG signals was N = 30,000. For the calculation of ApEn, SampEn and FuzzyEn, the embedding dimension was m = 2 and the tolerance was r = 0.15 × STD, where STD was the standard deviation of the EEG signals. For the calculation of CmpMSE, the scale factor was selected as τ = 30 and the parameters m and r were the same as for ApEn and SampEn. A minimal computational sketch of the denoising and entropy steps is given below.
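To make these steps concrete, the following Python sketch (our illustration, not the code used in the study) implements the wavelet denoising and the SampEn and CmpMSE computations with the parameter choices given above. It assumes NumPy and the PyWavelets package; the function names are ours, and the SampEn routine is written for clarity rather than speed (for N = 30,000 samples, a vectorized or compiled implementation would be preferable).

import numpy as np
import pywt

def denoise(x):
    # 9-level db4 decomposition; wavedec returns [A9, D9, D8, ..., D1].
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), 'db4', level=9)
    coeffs[0][:] = 0    # discard A9 (0-0.49 Hz)
    coeffs[-1][:] = 0   # discard D1 (125-250 Hz)
    coeffs[-2][:] = 0   # discard D2 (62.5-125 Hz)
    return pywt.waverec(coeffs, 'db4')  # keeps D3-D9, i.e., 0.5-62.5 Hz

def sample_entropy(x, m=2, r=None):
    # SampEn(m, r, N): negative logarithm of the conditional probability
    # that templates matching for m points also match for m + 1 points.
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)  # tolerance r = 0.15 * STD, as in the paper
    N = len(x)

    def match_count(dim):
        # N - m overlapping template vectors of length dim (dim = m or m + 1).
        templates = np.array([x[i:i + dim] for i in range(N - m)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates;
            # self-matches are excluded by construction.
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    B = match_count(m)      # pairs matching for m points
    A = match_count(m + 1)  # pairs matching for m + 1 points
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def composite_multiscale_entropy(x, m=2, r=None, tau=30):
    # CmpMSE at scale tau: the mean SampEn over the tau coarse-grained series
    # whose non-overlapping averaging windows start at offsets k = 0 .. tau-1.
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)  # r fixed from the original series (our assumption)
    values = []
    for k in range(tau):
        n = (len(x) - k) // tau
        coarse = x[k:k + n * tau].reshape(n, tau).mean(axis=1)
        values.append(sample_entropy(coarse, m=m, r=r))
    return float(np.mean(values))

Whether the tolerance r is recomputed for each coarse-grained series or fixed once from the original series is not stated in the paper; the sketch fixes it once, which is the more common convention in multiscale entropy analysis.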
The parameter assignments for the calculation of ApEn, SampEn, CmpMSE and FuzzyEn are shown in Table 1.

Statistical Analysis Methods

The statistical analysis methods included multiple-sample tests for equal variances, the Shapiro-Wilk W test for normal distribution, and parametric or non-parametric analysis of variance with multiple comparisons, which were used to evaluate ApEn, SampEn, CmpMSE and FuzzyEn in EEG signals under the auditory object-specific attention of Rest, AOA1 and AOA2. The multiple-sample tests for equal variances used the Bartlett test. If the values of the entropy measures for Rest, AOA1 and AOA2 had equal variances and obeyed normal distributions, parametric analysis of variance and multiple comparisons were performed to determine whether the values of the entropy measures were significantly different from each other for Rest, AOA1 and AOA2; otherwise, the Kruskal-Wallis test (an extension of the Wilcoxon rank sum test) and multiple comparisons were used. In addition, Bonferroni correction was used to counteract the problem of multiple comparisons. A minimal sketch of this decision procedure is given below.

Auditory Attention Classifier Based on Entropy Measures and Machine Learning

Machine learning, which learns from data to make data-driven predictions or decisions, is now widely used for the analysis of EEG signals [32]. Machine learning trains a model and develops related algorithms from input features, lending itself to prediction. In this study, three kinds of auditory object-specific attention were investigated, i.e., Rest, AOA1 and AOA2. To demonstrate the auditory attention classifier, we used ApEn, SampEn, CmpMSE and FuzzyEn to extract the information in the EEG signals as features and then exploited LDA and SVM to construct the auditory attention classifiers. As a classical machine learning method, LDA is a statistical classifier which achieves a linear decision boundary based on the within-class and between-class scatter matrices [33]. LDA performs the discrimination of different classes by maximizing the between-class scatter and minimizing the within-class scatter. LDA has been commonly used for EEG signal classification, as it allows fast and massive processing of data samples [34,35]. Like LDA, SVM is also one of the most widely used machine learning methods; it can be used not only for linear classification but also for non-linear classification via a specific kernel function. The thirteen subjects' EEG signals were recorded and each subject underwent three EEG measurement protocols; therefore, a total of 39 samples were available for the auditory attention classifier. To facilitate the training and testing of the auditory attention classifier, LDA and SVM were employed for supervised learning. After training, the LDA-based and SVM-based auditory attention classifiers were capable of identifying the auditory object-specific attention of Rest, AOA1 and AOA2. In order to assess the identification of auditory object-specific attention, we used the leave-one-out cross-validation (LOOCV) approach to examine the identification accuracy for Rest, AOA1 and AOA2.

Statistical Results of Entropy Measures in EEG Signals

According to the statistical results of the multiple-sample tests for equal variances, the values of ApEn, SampEn, CmpMSE and FuzzyEn all obeyed equal variances (all p > 0.05) with respect to the Rest, AOA1 and AOA2 conditions of auditory object-specific attention.
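As referenced above, a minimal sketch of the statistical decision procedure could look as follows (our illustration using SciPy; the arrays are hypothetical stand-ins for the per-subject entropy values of one channel, and the pairwise t tests and Mann-Whitney tests stand in for the multiple-comparison procedure, which the text does not specify).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-subject entropy values for one channel (13 subjects per condition).
rest, aoa1, aoa2 = (rng.normal(mu, 0.1, 13) for mu in (0.8, 1.0, 0.9))

# 1. Bartlett test for equal variances across the three attention conditions.
_, p_var = stats.bartlett(rest, aoa1, aoa2)
# 2. Shapiro-Wilk W test for normality within each condition.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (rest, aoa1, aoa2))
# 3. One-way ANOVA if the assumptions hold, otherwise Kruskal-Wallis.
if normal and p_var > 0.05:
    _, p = stats.f_oneway(rest, aoa1, aoa2)
else:
    _, p = stats.kruskal(rest, aoa1, aoa2)
# 4. Pairwise comparisons at the Bonferroni-corrected level 0.05 / 3
#    (three pairs of conditions; the paper reports this threshold as 0.015).
if p < 0.05:
    for name, a, b in [("Rest vs AOA1", rest, aoa1),
                       ("Rest vs AOA2", rest, aoa2),
                       ("AOA1 vs AOA2", aoa1, aoa2)]:
        pair_test = stats.ttest_ind if normal else stats.mannwhitneyu
        print(name, "significant:", pair_test(a, b).pvalue < 0.05 / 3)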
According to the statistical results of the Shapiro-Wilk W test, for ApEn and CmpMSE in EEG signals under the auditory object-specific attention of Rest, AOA1 and AOA2, the vast majority of cases obeyed normal distributions (p > 0.05), except for ApEn in EEG signals of the P8 channel (p = 0.012) under AOA2 and CmpMSE in EEG signals of the Fz channel (p = 0.031) under AOA1. For SampEn, the majority of cases obeyed normal distributions, except for SampEn in EEG signals of the Fz channel (p = 0.038) under Rest and of the Cz, Fz and P3 channels (p = 0.036, 0.013 and 0.037, respectively) under AOA2. For FuzzyEn, the vast majority of cases obeyed normal distributions, except for FuzzyEn in EEG signals of the T7, T8 and P8 channels (p = 0.013, 0.005 and 0.0003, respectively) under AOA2. Thus, according to the normality or non-normality of these entropy measures, parametric or non-parametric analysis of variance was carried out, respectively, to test the significance of the differences of these entropy measures among Rest, AOA1 and AOA2. For ApEn of the P8 channel, the values of ApEn in EEG signals did not obey a normal distribution, so the Kruskal-Wallis test was carried out; for ApEn of the other channels, the values of ApEn obeyed normal distributions, so one-way analysis of variance was carried out. As for ApEn, the same statistical methods were applied to SampEn, CmpMSE and FuzzyEn to test the significance of the differences of these entropy measures among Rest, AOA1 and AOA2. Figure 1 shows the mean values of these entropy measures under the auditory object-specific attention of Rest, AOA1 and AOA2.

To further investigate which pairs of means were significantly different for Rest, AOA1 and AOA2, the multiple comparisons test was carried out. Because there were three kinds of auditory object-specific attention, each entropy measure was testing three (3 × 2/2) independent hypotheses in the multiple comparisons test. Therefore, a p value of <0.015 (0.05/3) was considered statistically significant under the Bonferroni criterion. Table 2 shows the p values of the multiple comparisons between the auditory object-specific attention of Rest, AOA1 and AOA2. In Table 2, it is clearly observed that there are significant differences in ApEn of the T8, P8, Cz, Fz, P3 and P4 channels between AOA1 and AOA2 and in ApEn of the P3 and P4 channels between Rest and AOA1; in SampEn of the T8, P7, P8 and Fz channels between Rest and AOA1; in CmpMSE of the T8, P7, P8, Cz, Fz, P3 and P4 channels between Rest and AOA1; and in FuzzyEn of the T7, T8, P8, Cz, Fz, P3 and P4 channels between Rest and AOA1 and in FuzzyEn of the Cz, Fz, P3 and P4 channels between AOA1 and AOA2.
It must be noted that for SampEn of the Cz, P3 and P4 channels and FuzzyEn of the P7 channel, there was no need to perform the multiple comparisons, because the parametric or non-parametric analysis of variance found no significant differences among Rest, AOA1 and AOA2 for these cases. As shown in Figure 1, there are obvious differences in the mean values of these entropy measures under the auditory object-specific attention of Rest, AOA1 and AOA2. For example, the mean values of ApEn, SampEn and CmpMSE under AOA1 are obviously higher than those under Rest and AOA2. From a mathematical viewpoint, Figure 1 has a certain intrinsic correlation with Table 2. For instance, for ApEn of the P4 channel in Table 2, the p values between Rest and AOA1 and between AOA1 and AOA2 are less than 0.015, which indicates that the differences in the values of ApEn between Rest and AOA1 and between AOA1 and AOA2 are significant. At the same time, the corresponding differences in the mean values of ApEn of the P4 channel are observed in Figure 1. Therefore, the size of the p values in Table 2 can, to a certain extent, indicate the discriminating power of the entropy measures: the smaller the p values, the stronger the discriminating power of the entropy measures may be. As shown in Table 2, the p values of CmpMSE are, on the whole, less than those of ApEn and SampEn.

Individual-Level Analysis of Entropy Measures in EEG Signals

To demonstrate the individual-level identification of Rest, AOA1 and AOA2, we first carried out an individual-level analysis of ApEn, SampEn, CmpMSE and FuzzyEn in EEG signals under the auditory object-specific attention of Rest, AOA1 and AOA2. Figure 2 presents the values of ApEn, SampEn, CmpMSE and FuzzyEn in EEG signals of the P3 channel under Rest, AOA1 and AOA2 for four representative subjects. As shown in Figure 2, the values of ApEn, SampEn, CmpMSE and FuzzyEn show obvious differences with respect to Rest, AOA1 and AOA2. For subjects A, B, C and D, the values of ApEn, SampEn, CmpMSE and FuzzyEn under AOA1 are greater than those under Rest and AOA2. The values of SampEn and CmpMSE under AOA2 are greater than those under Rest, and yet the values of ApEn under AOA2 are lower than those under Rest. Through this individual-level analysis of the entropy measures under Rest, AOA1 and AOA2, it is clear that ApEn, SampEn, CmpMSE and FuzzyEn in EEG signals can be used as informative indicators to determine the auditory object-specific attention.
Identification of Auditory Object-Specific Attention by the Auditory Attention Classifier

In order to demonstrate the discriminating power with respect to Rest, AOA1 and AOA2, the identification of auditory object-specific attention was investigated with two auditory attention classifiers, one constructed with LDA and the other with SVM. The LDA-based auditory attention classifier was designed using a multiclass classification method with open-source code [36]. The SVM-based auditory attention classifier was designed using the LIBSVM toolbox [37]. To statistically evaluate whether the identification rates were significantly different from the chance level (33.3%), the chi-squared test was used, with the null hypothesis that the identification rates were at the chance level. (A minimal end-to-end sketch of this classification pipeline is given at the end of this section.) Table 3 shows the identification rates for Rest, AOA1 and AOA2 by the LDA-based and SVM-based auditory attention classifiers using ApEn, SampEn, CmpMSE and FuzzyEn in EEG signals of eight channels as features. For the LDA-based auditory attention classifier, the average identification rates of the auditory object-specific attention are about 48%. As shown in Table 3, on the whole, the identification rates for Rest, AOA1 and AOA2 when using the SVM-based auditory attention classifier are higher than those when using the LDA-based auditory attention classifier.
Thus, on the basis of the above results, it is clear that for the identification of auditory object-specific attention the SVM-based auditory attention classifier is more effective than the LDA-based auditory attention classifier. To investigate which channel of the EEG signals was the most sensitive for the identification of Rest, AOA1 and AOA2, we exploited the SVM-based auditory attention classifier to identify the auditory object-specific attention using ApEn and CmpMSE in EEG signals of each single channel as features. Figure 3 shows the identification rates for Rest, AOA1 and AOA2 by the SVM-based auditory attention classifier using ApEn and CmpMSE per channel as features. The average identification rate is calculated by averaging the identification rates for Rest, AOA1 and AOA2. For ApEn, the P8, P4 and Fz channels correspond to the top three average identification rates, which are 59.0% (p = 0.008), 56.4% (p = 0.014) and 56.4% (p = 0.005), respectively. For CmpMSE, the T8, P4 and Fz channels correspond to the top three average identification rates, which are 59.0% (p = 0.012), 56.4% (p < 0.001) and 48.7% (p = 0.001), respectively. It is clearly observed that for ApEn and CmpMSE the identification performance for Rest, AOA1 and AOA2 varies with the channel, which may be because different informative features are extracted from different channels by the entropy measures.

In order to investigate the influence of the scale factor of the entropy measures on the identification rate, we further studied the identification of Rest, AOA1 and AOA2 by the SVM-based auditory attention classifier using CmpMSE with different scale factors as features. Figure 4 shows the identification rates for Rest, AOA1 and AOA2 obtained with CmpMSE in EEG signals of eight channels with the scale factors τ = 1, 5, 10, 15, 20, 25, 30, 35 and 40, respectively. The corresponding average identification rates are 56.4% (p = 0.026), 56.4% (p = 0.041), 69.2% (p < 0.001), 46.2% (p = 0.076), 56.4% (p = 0.016), 56.4% (p = 0.026), 53.8% (p = 0.017), 56.4% (p = 0.026) and 48.7% (p = 0.146), respectively. Therefore, for CmpMSE the optimal identification of Rest, AOA1 and AOA2 is achieved with the scale factor τ = 10, and the corresponding identification rates for Rest, AOA1 and AOA2 are 69.2%, 76.9% and 61.5%, respectively.
In order to investigate the influence of the choice of parameters for the entropy measures on the identification rate, we carried out a qualitative comparison of the identification results obtained by the SVM-based auditory attention classifier using CmpMSE with different values of the tolerance. For the calculation of CmpMSE, the tolerance was selected as r = 0.10, 0.15, 0.20 and 0.25, respectively, with the other parameters fixed. The qualitative comparison of the identification results is shown in Table 4. The average identification rates are 59.0% (p = 0.020), 69.2% (p < 0.001), 71.8% (p < 0.001) and 53.8% (p = 0.068), corresponding to the tolerances r = 0.10, 0.15, 0.20 and 0.25, respectively.
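As referenced earlier, the following sketch illustrates the classification pipeline end to end. It is our illustration only: scikit-learn and SciPy stand in for the open-source LDA code [36] and the LIBSVM toolbox [37] actually used, the SVM kernel and feature scaling are our assumptions (the paper does not specify them), and the feature matrix is a random stand-in for the 39 × 8 array of per-channel entropy values, so the printed numbers are meaningless.

import numpy as np
from scipy.stats import chisquare
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(39, 8))   # stand-in: one entropy value per channel per measurement
y = np.repeat([0, 1, 2], 13)   # 0 = Rest, 1 = AOA1, 2 = AOA2 (13 subjects each)

# Kernel unspecified in the paper; scikit-learn's RBF default is used here,
# with standardization added as a common (assumed) preprocessing step.
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", make_pipeline(StandardScaler(), SVC()))]:
    # Leave-one-out cross-validation over the 39 measurements.
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    hits = int(round(acc * len(y)))
    # Chi-squared test of the hit/miss counts against the 33.3% chance level.
    _, p = chisquare([hits, len(y) - hits], f_exp=[len(y) / 3, 2 * len(y) / 3])
    print(f"{name}: LOOCV accuracy {acc:.1%}, p = {p:.3f} vs chance")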
Discussion

In this study, we explored entropy measures of EEG signals to extract informative features relating to auditory object-specific attention and then exploited LDA and SVM to construct auditory attention classifiers. Our proposed method for the identification of auditory object-specific attention is an innovative attempt. Even though the optimal identification rates for Rest, AOA1 and AOA2 are only 69.2%, 76.9% and 61.5%, respectively, obtained by the SVM-based auditory attention classifier using CmpMSE in EEG signals of eight channels with the scale factor τ = 10 as features, the identification accuracy is on par with existing studies, for instance, the experimental results reported by Bleichner et al. [38]. We have compared our identification results with the available studies and the comparison is presented in Table 5.

EEG signals are non-stationary, non-linear and often multicomponent dynamic signals, and it is challenging to accurately extract their informative features. In this study, based on four well-established entropy measures, i.e., ApEn, SampEn, CmpMSE and FuzzyEn, we demonstrate the use of entropy measures of EEG signals as informative features to reveal auditory attention states. It is clearly shown that the SVM-based auditory attention classifier using ApEn, SampEn and CmpMSE as features is capable of indicating significant differences in the informative features of EEG signals under different auditory object-specific attention. These experimental findings are also in line with existing research. Many studies have reported that the physiological and cognitive states of the human brain can be determined by the use of entropy measures of EEG signals. For example, discrete wavelet transform and entropy measures were used to identify focal EEG signals [39]; ApEn, SampEn and multiscale entropy were used to assess different visual attention levels [40]; and ApEn was used to evaluate the wakefulness state [25]. These studies suggest that entropy measures of EEG signals, as complexity parameters of physiological time series, can serve as useful indicators of the physiological states of the human brain, and there is no doubt that they have clinical significance. Therefore, entropy measures of EEG signals can also serve as informative features to identify auditory object-specific attention.

Compared with ApEn, SampEn and FuzzyEn, CmpMSE may be regarded as the most informative feature of EEG signals for identifying auditory object-specific attention, and the optimal identification was achieved by the SVM-based auditory attention classifier using CmpMSE with the scale factor τ = 10 and the tolerances r = 0.15 and 0.20. This might be because the different entropy measures extract different informative features of EEG signals. As is well known, ApEn and SampEn quantify the temporal structure and complexity of a time series strictly at a single time scale, usually selected to be 1, whereas CmpMSE quantifies the long-term structures in EEG signals at multiple time scales, extracting long-range informative features. The main advantage of our proposed method is that the identification of auditory object-specific attention from single-trial EEG signals is achieved without the need to access the auditory stimulus, in contrast with most available studies.
In existing research, auditory attention identification from EEG signals usually required that the acoustic features of the auditory stimuli be known in advance. The main disadvantage of our proposed method is that the entropy measures are usually computationally intensive, and comprehensive statistical analyses of the optimal parameters for the entropy computations and the optimization of the machine learning algorithms are necessary. However, the algorithmic complexity and computing time of CmpMSE are lower than those of ApEn, SampEn and FuzzyEn. Therefore, CmpMSE of EEG signals may be the most useful indicator to identify the auditory object-specific attention of Rest, AOA1 and AOA2.

There are some limitations in the current study. Firstly, the 60-s duration of EEG signals used to calculate the entropy measures, corresponding to a data length of N = 30,000, is rather long. It is hard to ensure that the EEG signal is stationary or even weakly stationary, which is especially required for permutation entropy analysis [41][42][43]. In fact, we also evaluated the identification results obtained by the LDA-based and SVM-based auditory attention classifiers using permutation entropy in EEG signals of eight channels as features, and the average identification rates were 23.1% (p = 0.429) and 30.8% (p = 0.764), respectively. If the EEG signals were split into relatively short epochs that could be deemed stationary, the entropy measures computed for each epoch, and the distributions of values for each EEG signal obtained [44][45][46][47], the experimental data would better demonstrate the identification of auditory object-specific attention via entropy measures and machine learning. Secondly, only 13 subjects participated in the study, and the number of samples is not large, which might lead to the statistical results of the entropy measures not showing significance. In fact, this experiment was difficult to conduct: to ensure the experimental effect for each subject, the EEG signals had to be recorded successfully in the first round, because a subject listening to the same auditory stimulus a second time might experience adverse effects on auditory attention. For this reason, the number of subjects in the available studies is usually small; for example, in the studies of Haghighi [13] and Choi [14], ten subjects' EEG signals were used. Thirdly, for the calculation of the entropy measures, it is well known that some parameters, such as the embedding dimension m and tolerance r, need to be fixed in advance. We did not assess the experimental performance for all combinations of the parameters: for one thing, doing so would make the analysis of the experimental results rather complex; for another, there are no solid methods for obtaining the optimal parameters of the entropy measures. Perhaps because of this, the identification rates for Rest, AOA1 and AOA2 were not very high in this study. Fourthly, in this study the EEG signals were recorded as 8-channel signals, so 8-dimensional entropy measures (ApEn, SampEn, CmpMSE and FuzzyEn) could be used to identify the auditory object-specific attention. In Table 3, the number of features for each of ApEn, SampEn, CmpMSE and FuzzyEn was 8.
In Figure 4 and Table 4 the number of CmpMSE features was likewise 8. We believe that using more EEG channels, and hence higher-dimensional entropy measures, may make the experimental results more reliable, because more channels allow more informative features of auditory object-specific attention to be extracted. It should be noted, however, that the identification rate is not always better with more EEG channels, and the current research trend is to identify auditory attention with fewer channels [13], mainly because doing so is of greater practical value for real applications.

For future work, the identification accuracy of auditory object-specific attention has considerable room for improvement. Firstly, we can adopt other advanced machine learning techniques, such as deep learning [49-51]; in recent years, deep learning has been widely applied to physiological signals (especially EEG), for example in seizure prediction [51]. Secondly, we can explore the potential of other non-linear feature analysis methods for EEG signals, such as higher order spectra [52], phase entropy [53], wavelet transform in conjunction with entropy [39], and empirical mode decomposition in conjunction with entropy [54]. In addition, we can combine several different types of EEG features with a feature ranking approach [53,55] to further investigate the identification of auditory object-specific attention.

The identification of auditory object-specific attention has clear research value and application potential for optimizing hearing aids and enhanced listening techniques, which are our main clinical targets. For example, an algorithm identifying auditory object-specific attention could work hand in hand with the acoustic scene analysis algorithms in hearing aids to form neuro-steered hearing prostheses. By identifying a person's attention to a specific auditory object from EEG signals, we can use EEG to guide the acoustic scene analysis algorithms, in effect extending the efferent neural pathways and simulating the top-down cognitive control of auditory attention.

Conclusions

In this paper, ApEn, SampEn, CmpMSE and FuzzyEn were used to extract informative features from EEG signals under three kinds of auditory object-specific attention (Rest, AOA1 and AOA2). Statistical analysis of the entropy measures indicated significant differences (p < 0.05) in the values of ApEn, SampEn, CmpMSE and FuzzyEn in EEG signals under the Rest, AOA1 and AOA2 conditions. LDA and SVM were used to construct auditory attention classifiers, and LOOCV was used to evaluate the identification rates for Rest, AOA1 and AOA2. Compared with the LDA-based classifier, the SVM-based classifier achieved better identification accuracy for Rest, AOA1 and AOA2.
According to the identification results for Rest, AOA1 and AOA2, the optimal identification was achieved by the SVM-based auditory attention classifier using CmpMSE with scale factor τ = 10, with identification rates of 69.2%, 76.9% and 61.5%, respectively. All results suggest that using entropy measures of EEG signals as informative features, in conjunction with machine learning techniques, can provide a novel solution for identifying auditory object-specific attention from single-trial EEG signals without access to the auditory stimulus.
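To make the evaluation protocol summarized above concrete, here is a minimal sketch of LDA- and SVM-based classification scored with leave-one-out cross-validation in Python (scikit-learn). The feature matrix and labels are synthetic placeholders, and the SVM hyperparameters (RBF kernel, C = 1.0) are assumptions not taken from the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Placeholder data: one row per trial, 8 CmpMSE features (one per channel);
# labels 0/1/2 stand for Rest, AOA1 and AOA2.
rng = np.random.default_rng(0)
X = rng.normal(size=(39, 8))      # e.g., 13 subjects x 3 conditions
y = np.repeat([0, 1, 2], 13)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
    acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
    print(f"{name} LOOCV accuracy: {acc:.3f}")
```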
Total scattering reveals the hidden stacking disorder in a 2D covalent organic framework

Interactions between extended π-systems are often invoked as the main driving force for stacking and crystallization of 2D organic polymers. In covalent organic frameworks (COFs), the stacking strongly influences properties such as the accessibility of functional sites, pore geometry, and surface states, but the exact nature of the interlayer interactions is mostly elusive. The stacking mode is often identified as eclipsed based on observed high symmetry diffraction patterns. However, as pointed out by various studies, the energetics of eclipsed stacking are not favorable and offset stacking is preferred. This work presents lower and higher apparent symmetry modifications of the imine-linked TTI-COF prepared through high- and low-temperature reactions. Through local structure investigation by pair distribution function analysis and simulations of stacking disorder, we observe random local layer offsets in the low temperature modification. We show that while stacking disorder can be easily overlooked due to the apparent crystallographic symmetry of these materials, total scattering methods can help clarify this information and suggest that defective local structures could be much more prevalent in COFs than previously thought. A detailed analysis of the local structure helps to improve the search for and design of highly porous tailor-made materials.

Introduction

Covalent organic frameworks (COFs) are crystalline, porous polymers assembled from building blocks in a reticulating reaction. Depending on the geometry of the linkers, they form either 2D sheets, where layers stack via dispersive forces, or 3D covalently connected frameworks. COFs possess well-defined micro- and mesoporous structures, where pore size, shape, topology, and the distribution of readily accessible active functional sites are defined with molecular precision. 1 Applications such as small molecule separation, capture and storage, (opto-)electronics, and catalysis are particularly promising due to the tunability of these properties. 2,3

The sheets that comprise 2D COFs can stack in various ways, as shown in Fig. 1A. The offset between neighboring layers in the a-b plane can be formally equal or unequal to zero, resulting in eclipsed stacking or slipped stacking. 4,5 In the latter case, alternating and unidirectional slip stacking are differentiated, where the offset occurs in alternating or the same direction, respectively. Staggered stacking represents a special case of AB-type slip stacking, where the offset is such that the vertex of one layer is above the pore of another, similar to graphite. 6 The symmetry of these stacking motifs decreases in order from eclipsed, staggered, alternating, to unidirectional. Other scenarios could involve different combinations of these motifs or fully random stacking, which is more difficult to characterize due to the lack of translational order. The geometry of specific linker molecules can generate ordered layer stacking by offering a templating effect during the growth of new layers, [7-11] as thermodynamics generally govern the arrangement of linker and small oligomer molecules. On the other hand, because the stacking energy is too high to be overcome at typical reaction temperatures, 4,12-14 layer aggregation, as opposed to linker and oligomer adsorption, is effectively irreversible, which results in stacking disorder in most COFs.
The in-plane disorder can also be caused by flexible linkers and influences stacking interactions, leading to further out-of-plane disorder. 15 It therefore follows that understanding the local structure of a given COF is vital, because properties such as pore geometry, accessibility of functional sites, interaction with guest molecules, (opto-)electronic properties, and surface states in the pore significantly depend on the layer stacking. 4,7,16,17 The prevalent notion is that most COFs must still have a local structure dominated by a layer offset, despite apparent high symmetry and eclipsed stacking. 5,8,18-24 Techniques that offer insight into the local order and stacking of COFs are thus instrumental in understanding and developing novel materials for particular applications in a directed manner. Here, we directly investigate the local symmetry of two related imine COFs by a combination of X-ray diffraction, stacking fault simulations, spectroscopy, electron microscopy, and physisorption analysis, and show how ordered and disordered slip stacking manifests. We evaluate short- and long-range order in terms of defect abundance, stacking, and morphology and show that, in fact, random slip stacking is easily misinterpreted as apparent eclipsed stacking.

Results and discussion

TTI-COF was synthesized by condensation of the corresponding tritopic aldehyde and amine under solvothermal conditions in a mesitylene/1,4-dioxane 1:1 mixture, catalyzed by aqueous 6 M acetic acid. We prepared two differently stacked forms: HT at high temperature, i.e., 120 °C, 25 and LT at room temperature. Fig. 1C shows the linkers and simplified reaction scheme, where the imine-linked layer is represented by the hexagonal unit cell.

Spectroscopy

We first have to confirm the chemical identity and formation of the COF. Fourier-transform infrared (FT-IR) spectra presented in Fig. 2A and B show a reduction of the characteristic amine (I) bands at 3473 cm⁻¹ and 3379 cm⁻¹, and aldehyde (II) bands at 1700 cm⁻¹, when compared to the starting materials (Fig. SI-1C and D†). Instead, a new band at 1624 cm⁻¹ (III), which is weak in this COF, 25 indicates the formation of imine linkages. Weak residual bands corresponding to amine and aldehyde groups, which are significantly stronger for LT, are explained by terminal functional groups and trapped linker molecules. Lastly, the bands at 1509 cm⁻¹ and 1365 cm⁻¹ in HT are characteristic for triazines, 26 but notably shifted by 6 cm⁻¹ to lower frequencies in LT, which may suggest different interlayer interactions in the two samples. The local structure of the COFs was also investigated by 13C and 15N solid-state nuclear magnetic resonance spectroscopy (ssNMR), shown in Fig. 2C and D. Signals of the corresponding carbon at 158 ppm (13C) and nitrogen at −58 ppm (15N) indicate the formation of the imine bond. On comparing the spectra of both samples, two main differences become apparent: similar to FT-IR spectroscopy, signals from residual amine and aldehyde groups are only present in LT, at −322 ppm (15N) and 192 ppm (13C), respectively. These signals suggest more residual surface groups. All NMR signals are also significantly broader for LT, which indicates a wider distribution of local chemical environments compared to HT.

Electron microscopy

The hexagonal pore structure and one-dimensional pore channels of the COF are observed in transmission electron microscopy (TEM), as shown in Fig. 2E and F.
Using fast Fourier transforms and intensity profiles of these micrographs, we determined the periodicity of these features (see Fig. SI-2†). The measured values correspond to the d-spacings of the 100 and 110 reflections, 22 Å and 13 Å, respectively, which also match results obtained from X-ray diffraction (see below). The micrographs also show that the average crystallite size in HT is significantly larger than in LT. Crystallites of over 100 nm can be observed in HT, while many crystallites with sizes of under 50 nm are prevalent in LT (see Fig. SI-4† for additional representative micrographs). Smaller domain sizes also account for the increased occurrence of free aldehydes and amines in IR and ssNMR spectroscopy for LT over HT, owing to the increased surface-area-to-volume ratio. Similar results can be inferred from scanning electron microscopy (SEM), shown in Fig. 2G and H. Both samples present themselves with a dendritic cauliflower-like morphology, but with much smaller aggregated particles in LT.

Sorption analysis

The porosity of the COFs was analyzed via argon physisorption, and the resulting isotherms are presented in Fig. 3A. After an initial monolayer-multilayer adsorption step and pore condensation, a saturation plateau dominates the isotherms above p/p0 = 0.10. 27 These features are characteristic for type IV(b) isotherms, which are common for mesoporous materials. 28 The BET areas were determined to be 1308 m² g⁻¹ for HT and only 338 m² g⁻¹ for LT. This trend is also reflected by the total pore volume as determined from the maximum amount of gas adsorbed at the saturation pressure p0. We measured total pore volumes of 0.825 cm³ g⁻¹ for HT and 0.292 cm³ g⁻¹ for LT. Such significant differences in BET areas and pore volumes indicate that most of the internal surface area in LT is not accessible to the adsorbate because of pore blockage. We also observed that HT exhibits only a small amount of hysteresis, which is much more pronounced in LT. Since the hysteresis extends to very small relative pressures, physical effects such as percolation, cavitation, or capillary condensation cannot be its sole cause. [28-31] Instead, it is probably caused by severely limited diffusion of the adsorbate through the porous material. The stiff geometry of the linker molecules and strong interlayer interactions ideally lead to uniform pores in COFs. Stacking faults can, however, generate constrictions at the pore entrances or within the channel, which hinder diffusion pathways and trap linker or oligomer molecules. Due to the hysteresis, the pore size distribution (PSD) was determined from the adsorption branch. [29-32] Quenched solid density functional theory (QSDFT) gives average pore widths of 2.2 nm for HT (Fig. 3B) and LT (Fig. 3D). This dimension matches the diameter obtained from the optimized structure of TTI-COF, illustrated in Fig. 3C. In LT, however, the PSD is wider, which indicates a more disordered pore structure.

Diffraction

We confirmed the crystallinity of both samples by X-ray powder diffraction (XRPD), see Fig. 4A. HT exhibits narrower Bragg peaks and additional peak splitting on the first four reflections. The peak broadening of LT is, however, particularly pronounced in the stacking reflections at 30° 2θ.
Anisotropic crystallite size broadening and microstrain due to local disorder can both result in peak broadening, but contributions from these effects cannot be distinguished easily for this class of materials because of the typically low quality of diffraction data. In earlier work, our group showed that the peak splitting results from symmetry reduction caused by a unidirectionally slip-stacked structure. Density functional theory calculations found an optimum stacking offset of around 1.6 Å and showed that an antiparallel linker orientation is preferred. 25 When two tritopic linkers are used, they can either stack in a parallel or antiparallel fashion, as shown in Fig. 1B. These cases lead to imine linkages oriented in the same or opposite directions, respectively. 4 We used the unidirectional slip-stacked, antiparallel structure model as a basis for the Rietveld refinement of HT. 25 Rietveld refinements were performed using TOPAS-Academic v6, taking into account the instrumental profile and crystallite size and microstrain broadening. 33 The resulting fit, shown in Fig. 4B, is of good quality and describes the experimentally observed pattern reasonably well. The unidirectional stacking of layers causes a reduction of the symmetry, which results in the observed peak splitting. In contrast, using an eclipsed structure model returns a poor Rietveld fit (see Fig. SI-6A†) because it cannot describe the additionally observed Bragg peaks. Consequently, Rietveld refinements showed that LT is best described by the eclipsed rather than the slip-stacked structure, as shown in Fig. 4C, albeit with a much smaller crystallite size, as observed by TEM. However, the refinement also indicates a severe amount of strain in LT compared to HT, which suggests that the local structure of this material is not well described by the eclipsed stacking motif. To gain further insight into the samples' atomic-scale details, we performed pair distribution function (PDF) analysis. [34-38] We collected total scattering data using synchrotron radiation, which was first converted into the reduced total scattering structure function F(Q) (Fig. 5A, cf. ESI Methods section†), with the elastic scattering momentum transfer Q = 4π sin(θ)/λ, using the PDFgetX3 algorithm within xPDFsuite. 36,39,40 A considerable reduction in the intensity of the peaks located at 1.8 Å⁻¹ and 3.6 Å⁻¹ is observed in LT compared to HT, while the peak at 3.0 Å⁻¹ is the same for both samples. The two peaks with reduced intensity contain strong contributions from the 002 and 004 reflections, respectively, and systematic broadening and intensity reduction here could be associated with both a reduction in crystallite size along the stacking direction and stacking disorder. The patterns are, however, nearly identical above 4.0 Å⁻¹. The high-Q scattering and the peak at 3.0 Å⁻¹ result mostly from in-plane components, indicating that the individual layers remain conformationally consistent between both samples, which could also be confirmed by simulations performed using the software XISF (see Fig. SI-7†). 41 The pair distribution function G(r) is obtained by Fourier transformation of F(Q).
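This F(Q) to G(r) step is, in essence, a truncated sine Fourier transform, G(r) = (2/π) ∫ F(Q) sin(Qr) dQ over 0 to Qmax. The short Python sketch below illustrates only that transform on a synthetic F(Q); none of the corrections and normalizations applied by PDFgetX3/xPDFsuite in the actual workflow are reproduced, and the toy signal parameters are placeholders.

```python
import numpy as np

def pdf_from_fq(q, fq, r):
    """G(r) = (2/pi) * integral_0^Qmax F(Q) sin(Q r) dQ, evaluated by
    trapezoidal quadrature on the measured Q grid (truncated at Qmax)."""
    integrand = fq[None, :] * np.sin(np.outer(r, q))    # shape (len(r), len(q))
    panels = 0.5 * (integrand[:, 1:] + integrand[:, :-1])
    return (2.0 / np.pi) * np.sum(panels * np.diff(q)[None, :], axis=1)

# Synthetic F(Q) whose oscillation period of 1.7 A^-1 corresponds to a
# real-space distance of 2*pi/1.7 ~ 3.7 A, i.e. a stacking-like signal.
q = np.linspace(0.1, 20.0, 4000)                        # A^-1
fq = np.sin(2 * np.pi * q / 1.7) * np.exp(-0.05 * q)    # toy signal only
r = np.linspace(0.5, 50.0, 500)                         # A
g = pdf_from_fq(q, fq, r)                               # peaks near 3.7 A etc.
```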
The resulting G(r) can be roughly divided into three length scales: (I) very sharp peaks at short distances under 6.0 Å, which correspond to specific atom-pair distances within the layers; (II) intermediate-frequency peaks, which are associated with the layer stacking (both Fig. 5B); and (III) broad, low-frequency peaks, which result from the COF pores (Fig. 5C). The frequencies of the latter two components match the 002 and 100 reflections, with d-spacings of 3.7 Å and 22 Å, respectively. The low-frequency component associated with the pore structure dominates both PDF signals over long distances above 200 Å (see Fig. SI-10A and B†). The intensity of these peaks is lower in LT than in HT, which suggests some combination of increased disorder in the layer offset, more distortion of the pore shape, trapped pore content, and decreased crystallite size. By truncating the reduced total structure function to Q values above 1.5 Å⁻¹, we were also able to isolate the stacking component of the PDFs for HT and LT (see Fig. SI-10C and D†). The coherence lengths of these signals are roughly 70 Å and 50 Å for HT and LT, respectively, showing a relatively lower degree of order in the stacking direction. Structure refinements to the PDF data using different models were performed in PDFgui, with experimental broadening and damping from finite Qmax and instrumental profile effects fixed. 37 Structural and thermal effects were accounted for in the lattice parameters, atomic displacement parameters (ADPs), and low-r peak sharpening by correlated-motion corrections (see ESI for more details†). The structure model with unidirectional slip stacking gave a good Rietveld fit for HT and likewise returned a good PDF fit over 1 Å to 20 Å, as shown in Fig. 5D. Sharp peaks corresponding to short interatomic distances within a single layer and broad peaks due to interlayer interactions can both be well described using ADPs with U11 = U22 within the layer and a separately refined U33 for the out-of-plane distances. 42 When the stacking orientations are not well described in the model, U33 tends toward higher values to broaden the interlayer atom-pair correlations. We also compared models with eclipsed stacking and both antiparallel and parallel imine orientations (see Fig. SI-8 and SI-9†). In all cases, in-plane ADPs were low, indicating a good description of an ordered layer structure, but the stacking was not well described by the eclipsed models. An antiparallel, rather than parallel, imine orientation showed better agreement with the experimental data, which corroborates the preference for antiparallel packing. 25 While the lack of peak splitting suggests an eclipsed structure for LT, the high strain parameters derived from Rietveld refinements and the similarity of the PDF signals of HT and LT over short and intermediate-range distances (cf. Fig. 5B) point toward a more slipped local layer relationship instead. Indeed, Fig. 5E shows that while the intralayer contributions can still be described reasonably well by an eclipsed structure model, the peak positions corresponding to the layer stacking over short and intermediate distances do not match the experimental data. We also observe high ADPs in the stacking direction. We can thus assume that the layers in LT are slipped relative to each other, as would be thermodynamically more favorable and as attested in HT. 20,43 Indeed, using a unidirectionally slip-stacked structure to fit the local structure in the PDF improves the result, as seen in Fig. 5F.
There is, however, still a mismatch between the observed and simulated peak positions above 10 Å, and this model conflicts with the high apparent symmetry of LT seen in XRD. To resolve these discrepancies and increase understanding of the overall stacking, we performed stacking fault simulations.

Stacking fault simulations

The absence of Bragg peak splitting and the apparent hexagonal symmetry in LT seem to suggest zero offset between the layers. However, analysis of the local structure shows slipped stacking between neighboring layers, which is energetically favored. 4,43-47 We also observe a generally high amount of disorder and strain in LT with complementary spectroscopy and diffraction methods. These seemingly conflicting findings can be resolved by random translational disorder from layer to layer, which would express itself in the same high-symmetry diffraction pattern as eclipsed stacking. The peak shapes observed in the diffraction pattern for LT further suggest interlayer disorder. 48 Different stacking scenarios were investigated using DIFFaX to check consistency with the experimental data (see ESI† for more information and the input file). 49 We then investigated this disorder in LT by Rietveld refinement, where we used a supercell approach, 33 averaging the calculated diffraction patterns of 300 supercells containing 200 layers each. Starting from the optimized layer structure of HT with an antiparallel orientation of the imines, we defined two different layer offsets where neighboring layers are slipped along the direction of a pore wall. When the projected distance between two triazine ring centers is 1.6 Å (Fig. 6A), one triazine nitrogen atom is directly above the center of the previous ring. When the distance is increased to 3.0 Å (Fig. 6B), the nitrogen atom overlaps with the previous layer's triazine carbon. Due to the symmetry of the building blocks, both stacking vectors can be rotated by 120° and 240° in the layer plane to create a total of six different stacking transitions, as illustrated by Fig. 6C. Instead of describing the disorder with microstrain parameters, we built a faulting scenario with these six vectors, where each transition probability relates to the stacking fault probability Pf (see Table SI-3†). A grid search optimization was performed by iterating the probability in small increments, resulting in Fig. 6D. 50,51 Even with only a little random stacking (Pf < 0.10), the quality of the Rietveld fits of LT increases vastly compared to the unfaulted model. We found the best agreement with the experimental diffraction pattern in the region 0.80 < Pf < 0.90, with a global minimum at Pf = 0.83, representing a complete loss of ordered stacking and almost equal probabilities for all slip-stacking transitions. Peak splitting is predicted based on the calculated peak positions. However, due to the random directionality of the slip stacking, only single broad peaks are observed for the hk0 reflections, which results in the observed apparent high symmetry. We also refined the experimental PDF data of LT with structural models suited to simulate a randomly stacked material. We built hexagonal supercells from between two and six antiparallel layers that could translate freely in the a and b directions during PDF refinements. With an increasing number of layers, the quality of the fits improved significantly (see Fig. SI-18†), which was mainly reflected by the lower out-of-plane ADP. The result of the refinement with six layers is presented in Fig. 5G.
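The faulting scenario just described lends itself to a simple simulation sketch. The Python snippet below generates random layer-offset sequences for supercells under a single fault probability Pf. The six slip vectors are geometric placeholders (projected offsets of about 1.6 and 3.0 Å rotated by 0°, 120° and 240°), and the rule that a layer either repeats the previous transition or picks a random one is one plausible reading of the transition-probability scheme, not the authors' exact matrix; converting each supercell into a simulated powder pattern (DIFFaX/TOPAS in the paper) is beyond this sketch.

```python
import numpy as np

# Six candidate interlayer slip vectors (in-plane Cartesian offsets, in A):
# two magnitudes, each rotated by 0, 120 and 240 degrees. Placeholder values.
SLIP_VECTORS = np.array([
    [1.6, 0.0], [-0.8,  1.386], [-0.8, -1.386],
    [3.0, 0.0], [-1.5,  2.598], [-1.5, -2.598],
])

def stacking_sequence(n_layers, p_fault, rng):
    """Cumulative in-plane offset of each layer in one model supercell.

    With probability 1 - p_fault a layer repeats the previous slip vector
    (ordered, unidirectional stacking); with probability p_fault it takes a
    randomly chosen one of the six vectors (a stacking fault)."""
    offsets = np.zeros((n_layers, 2))
    current = SLIP_VECTORS[0]
    for i in range(1, n_layers):
        if rng.random() < p_fault:
            current = SLIP_VECTORS[rng.integers(len(SLIP_VECTORS))]
        offsets[i] = offsets[i - 1] + current
    return offsets

# Coarse grid over the fault probability: 300 supercells of 200 layers per
# grid point, as in the refinement described above. In the real workflow
# each ensemble is turned into an averaged simulated pattern and scored
# against the measured diffraction data.
rng = np.random.default_rng(42)
ensembles = {round(p, 1): [stacking_sequence(200, p, rng) for _ in range(300)]
             for p in np.arange(0.0, 1.05, 0.1)}
```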
This six-layer refinement shows how well random stacking can describe the stacking component for r > 10 Å. We estimated the average stacking offset by refining the PDF in the range of neighboring layers, i.e., r < 6 Å. The resulting value of 1.63 Å fits very well with the energetically preferred lateral offset for COFs, which has been calculated as 1.7 Å. 4,25,43-47 This slip-stacking motif is not exclusive to 2D polymers but can also be found in aromatic molecular systems, both experimentally and theoretically. [52-56] The attractive interactions between stacked aromatic rings are commonly attributed to interactions between π electrons. Instead, electrostatic attraction between the edge and face of aromatic quadrupoles accounts for the offset stacking that is predominant in single-crystal structures of aromatic molecules. [57-59] It can be assumed that the high stacking energy in COFs results from similar interlayer interactions. These results indicate that offset stacking might be ubiquitous in COFs, even when eclipsed stacking is assumed.

Conclusions

We have confirmed and modelled layer stacking disorder in low- and high-temperature variants of an imine-based COF by Rietveld refinement and PDF analysis combined with stacking fault simulations. A high amount of terminal groups and disorder, as suggested by physisorption and electron microscopy, points to insufficient error correction, which prohibits the growth of large crystallites or an ordered layer structure. On the other hand, the reduced synthesis temperature allowed access to a different, kinetically trapped stacking motif in TTI-COF with higher apparent symmetry due to random average layer offsets of 1.6 Å. We therefore suggest that the synthesis temperature (and with it, crystallite size, amount of terminal groups, and layer connectivity) should be considered as a variable with which the stacking motif may be adjusted in 2D polymers. Thus, we showed that the assignment of an eclipsed structure can be an oversimplification of the true local environment, as is indicated by the unfavorable energetics associated with these arrangements. We propose that many COFs reported as eclipsed structures very likely also feature random offset stacking motifs. X-ray diffraction data obtained from COFs are typically of lower quality than those of related materials, such as molecular organic crystals or metal-organic frameworks, with much broader and also fewer Bragg peaks. The structure model obtained from such low-quality data is consequently less reliable, especially concerning the local order, which is instead often inferred from the linker geometry and structure modelling based on molecular mechanics or density functional theory calculations. 5 We suggest, then, that structures inferred solely from pattern indexing or Rietveld fitting to low-quality data should be strictly interpreted in the crystallographic sense as average structures. In the absence of detailed structural insights into the stacking geometry, utmost care should be exercised when deriving structure-property relationships. Instead, by using the techniques mentioned above, additional information about the local structure can be extracted to help determine a more detailed picture of the atomic-level structure and stacking motifs present in a given COF.
To conclude, structural interpretations and properties calculated based on a purely crystallographic, i.e., average, view of these structures can be unreliable, which can result in the misinterpretation of the inherent properties of COFs. This has been demonstrated for a wide range of materials, such as perovskite photovoltaics, [60-62] catalytic nanoparticles, [63,64] exotic electronic materials, [65-67] and, more recently, 2D polymer materials. [68,69] Other complementary methods for tackling this problem are under active development. [70-74] Thus, structural probes such as total scattering and PDF, as used here, could be valuable in obtaining a more distinct understanding of structuring pathways in 2D COFs and help to contextualize and optimize their functional behavior.

Conflicts of interest

There are no conflicts to declare.
Iron accumulation in macrophages promotes the formation of foam cells and development of atherosclerosis

Macrophages that accumulate in atherosclerotic plaques contribute to progression of the lesions to more advanced and complex plaques. Although iron deposition has been found in human atherosclerotic plaques, clinical and pre-clinical studies have produced conflicting results. Several epidemiological studies did not find a positive correlation between systemic iron status and the incidence of cardiovascular disease, suggesting that iron acts locally rather than systemically. To determine the direct in vivo effect of iron accumulation in macrophages on the progression of atherosclerosis, we generated Apoe−/− mice with a macrophage-specific ferroportin (Fpn1) deficiency (Apoe−/−Fpn1LysM/LysM). Fpn1 deficiency in macrophages dramatically accelerated the progression of atherosclerosis in mice. Pathophysiological evidence showed elevated levels of reactive oxygen species, aggravated systemic inflammation, and altered plaque-lipid composition. Moreover, Fpn1 deficiency in macrophages significantly inhibited the expression of the ABC transporters ABCA1 and ABCG1 by decreasing the expression of the transcription factor LXRα, which reduced cholesterol efflux and therefore promoted foam cell formation and enhanced plaque formation. Iron chelation relieved the symptoms moderately in vivo, but drastically ex vivo. Macrophage iron content in plaques is thus a critical factor in the progression of atherosclerosis, and the interaction of iron and lipid metabolism takes place in macrophage-rich atherosclerotic plaques. We also suggest that altering intracellular iron levels in macrophages by systemic iron chelation or dietary iron restriction may be a potential supplementary strategy to limit or even regress the progression of atherosclerosis.

Background

Atherosclerosis is the underlying cause of a majority of clinical cardiovascular events, including myocardial infarction, peripheral artery disease, stroke and coronary artery disease (CAD) [1]. Excessive fatty deposits and inflammatory cells accumulate during the formation and development of atherosclerotic lesions. As the major immune cells in atherosclerotic lesions, macrophages play a critical role in the development of atherosclerosis [2]. A central hallmark of atherosclerosis is foam cell formation, characterized by uncontrolled lipoprotein accumulation within macrophages [3]. Despite decades of research, the molecular mechanisms underlying the uptake and efflux of lipids during this process remain to be fully understood [4]. Iron is an essential element for many biological processes, such as DNA repair, cellular respiration, oxidation and reduction reactions, and oxygen transport. In 1981, Sullivan first proposed a link between atherosclerosis and iron deposition, the so-called "iron hypothesis", which holds that iron overload promotes cardiovascular diseases while iron deficiency protects against ischemic heart disease [5,6]. Interestingly, several epidemiological studies found that a high iron status was not associated with an increased incidence of CAD in humans; on the contrary, an elevated iron status was correlated with a reduced CAD risk [7,8].
Although the iron concentration is higher in human atherosclerotic plaques than in healthy arterial tissue [9], it is still unclear whether iron accumulation in atherosclerotic plaques is a cause or a consequence, whether iron deposition is deleterious, and whether the associated harmful effects are cell specific [10]. Considering the key roles of macrophages in the formation and progression of atherosclerotic plaques, combined with the fact that macrophages provide a large amount of iron to the circulation to meet systemic requirements by recycling iron from senescent red blood cells [11], selective iron deposition in macrophages has been proposed as a mechanism underlying accelerated atherosclerosis progression via catalytic generation of reactive oxygen species (ROS) and thus promotion of foam cell formation [12,13]. Several mouse models of iron overload (i.e., a high-iron diet or injection with iron-dextran [14,15], and hereditary hemochromatosis (HH) [16-18]) are characterized by systemic iron overload rather than macrophage-specific iron deposition and are therefore likely not suitable for integrating the current data to elucidate the impact of iron on atherosclerosis. Macrophage iron efflux is performed by ferroportin 1 (Fpn1), currently the only known mammalian iron exporter [19,20]. Systemic deletion of Fpn1 is embryonically lethal in mice, whereas heterozygotes with one targeted deletion of Fpn1 appear normal [20]. A mouse strain with cell-specific deletion of Fpn1 (Fpn1LysM/LysM) was generated and showed iron accumulation specifically in macrophages [21]. In this study, we generated a mouse model (Apoe−/−Fpn1LysM/LysM) by breeding Fpn1LysM/LysM mice with apolipoprotein E-deficient (Apoe−/−) mice, a classic mouse model of atherosclerosis, to investigate the role of macrophage iron in atherosclerosis. Here, we demonstrate that iron overload in macrophages in Apoe−/−Fpn1LysM/LysM mice promotes foam cell formation and drastically accelerates atherosclerosis progression.

Macrophage-specific Fpn1 deficiency drastically promotes atherosclerosis progression

To determine the role of macrophage iron accumulation in the development of atherosclerosis, we crossed Fpn1LysM/LysM mice [21] with Apoe−/− mice to generate Apoe−/−Fpn1LysM/LysM mice, in which Fpn1 was specifically deleted in macrophages on the genetic background of a global Apoe knockout. In male Apoe−/−Fpn1LysM/LysM and Apoe−/− mice, feeding of a high fat diet was initiated at 8 weeks of age and continued for another 16 weeks to induce atherosclerosis. Hematological assessment confirmed that macrophage-specific Fpn1 deficiency induced mild anemia (Additional file 1: Table S1 and [21]). No differences were observed in body weight or the levels of plasma lipids, including cholesterol and triglycerides, between the Apoe−/−Fpn1LysM/LysM and Apoe−/− mice (Additional file 1: Table S2). Greater iron accumulation in plaques was confirmed in Apoe−/−Fpn1LysM/LysM than in Apoe−/− mice (Additional file 1: Figure S1). Strikingly, the severity of atherosclerosis was significantly increased after Fpn1 depletion in mice fed high-fat chow (Fig. 1). We quantified the plaques in en face preparations of the aorta. As shown in Fig. 1a, b, the percentage of lesion area in the aorta was significantly higher in Apoe−/−Fpn1LysM/LysM mice than in Apoe−/− mice, as determined by Oil Red O staining. In accordance with the increase in overall lesion area, plaque size in the aortic root was also increased in Apoe−/−Fpn1LysM/LysM mice (Fig. 1c, d).
Moreover, the Oil Red O-stained area in aortic roots revealed a higher lipid content in Apoe−/−Fpn1LysM/LysM mice than in Apoe−/− mice (Fig. 1e, f). These results demonstrate that Fpn1 deletion-induced iron accumulation in plaque macrophages is associated with more severe atherosclerosis.

Macrophage-specific Fpn1 deficiency modulates the composition of atherosclerotic plaques

The plaque composition in the aortic root was further analyzed in detail. Apoe−/−Fpn1LysM/LysM mice exhibited significantly stronger CD68-staining intensity than their Apoe−/− littermates, indicating that there were greater numbers of macrophages within the atherosclerotic lesions in the aortic roots of Apoe−/−Fpn1LysM/LysM mice (Additional file 1: Figure S2a). Furthermore, the area of immunostaining for α-SMA and the corresponding staining intensity were also increased in Apoe−/−Fpn1LysM/LysM mice (Additional file 1: Figure S2b), suggesting greater proliferation of intimal vascular smooth muscle cells. However, plaque collagen content, as evidenced by Masson's trichrome staining, was reduced in Apoe−/−Fpn1LysM/LysM mice (Additional file 1: Figure S2c). In combination with the increased lesion area and plaque count, these compromised collagen levels render atherosclerotic plaques prone to rupture. Therefore, the results suggest that the atherosclerotic plaques in Apoe−/−Fpn1LysM/LysM mice are more advanced and less stable than those in Apoe−/− mice.

Macrophage-specific Fpn1 deficiency increases oxidative stress in the aorta

Iron loading may promote the formation of hydroxyl radicals via the Fenton reaction. Since oxidative stress plays a crucial role in the pathogenesis of atherosclerosis [22], we examined whether macrophage-specific Fpn1 deficiency-mediated iron retention increased ROS levels in the aortas of Apoe−/−Fpn1LysM/LysM mice by performing dihydroethidium (DHE) fluorescence staining and measuring malondialdehyde (MDA) content. The results revealed that DHE fluorescence intensity and MDA content were increased in Apoe−/−Fpn1LysM/LysM mouse aortas, indicating that macrophage-specific Fpn1 deficiency increased oxidative stress in the vascular walls (Fig. 2a-c). Oxidative damage to DNA was also assessed by immunostaining for 8-hydroxy-2′-deoxyguanosine (8-OHdG). The results showed a significant increase in the 8-OHdG-positive area in the aortic roots of Apoe−/−Fpn1LysM/LysM mice (Fig. 2d). As a result, cellular defenses against oxidative stress should be activated. We therefore measured the protein levels of catalase (CAT), heme oxygenase 1 (HO-1), and superoxide dismutase (SOD) to assess the cellular responses to oxidative stress. Western blotting showed that the expression levels of these scavengers (CAT, HO-1, and SOD2) were all increased in the aorta (Fig. 2e, f). Among these enzymes, HO-1 presented the most remarkable change, which was correlated with increased macrophage infiltration (Additional file 1: Figure S2a). Taken together, these results suggest that iron accumulation in macrophages mediated by Fpn1 deficiency increases oxidative stress in the aorta and promotes atherosclerosis progression.

Macrophage-specific Fpn1 deficiency increases arterial and systemic inflammation

Macrophages play important roles in inflammatory responses, and chronic inflammation is one of the pathogenic features of atherosclerosis.
Therefore, we investigated whether the loss of Fpn1 in macrophages can enhance the secretion of cytokines during atherosclerosis progression. IL-6, IL-1β, and TNF-α are proinflammatory cytokines released by macrophages and other cell types that can produce distant inflammatory effects. Western blotting showed that the expression of IL-1β and TNF-α in the aortas of Apoe−/−Fpn1LysM/LysM mice was significantly increased, while the protein level of IL-6 remained constant (Fig. 3a, b). The serum concentrations of hepcidin and IL-6, measured by ELISA, did not differ significantly between the mouse strains (Fig. 3c, d), while serum IL-1β and TNF-α levels were significantly increased in Apoe−/−Fpn1LysM/LysM mice (Fig. 3e, f). The adhesion of monocytes to the endothelium is promoted by monocyte chemoattractant protein-1 (MCP-1) and intercellular cell adhesion molecule-1 (ICAM-1). Consistent with the increased number of macrophages in plaques, the levels of MCP-1 and ICAM-1 were significantly increased in the serum of Apoe−/−Fpn1LysM/LysM mice (Fig. 3g, h). Collectively, these results suggest that Fpn1 deficiency in macrophages increases the production of proinflammatory cytokines and promotes aortic and systemic inflammation, which is the basis of monocyte recruitment to and infiltration into plaques.

Macrophage-specific Fpn1 deficiency accelerates foam cell formation

The formation of foam cells from macrophages is a crucial step in the development of atherosclerosis. To determine whether macrophage-specific Fpn1 deficiency-induced iron retention affected foam cell formation, we isolated primary peritoneal macrophages from mice and loaded the cells with oxLDL (50 μg/ml) for 48 h in the presence or absence of 100 μM FAC, an iron source, or 50 μM DFP, an iron chelator. DAB-enhanced Perls' Prussian blue staining showed extensive iron accumulation in Apoe−/−Fpn1LysM/LysM macrophages, and this staining was further enhanced by treatment with FAC, confirming that the staining was iron specific and that Fpn1 deficiency led to intracellular iron accumulation (Fig. 4a). After treatment with oxLDL, Oil Red O staining was performed. The results showed that more lipids accumulated in Apoe−/−Fpn1LysM/LysM macrophages than in Apoe−/− cells, and this accumulation was significantly enhanced by treatment with FAC and reduced by treatment with DFP, indicating that iron overload strengthened lipid deposition (Fig. 4b). Consistent with the Oil Red O staining results, the levels of total and esterified cholesterol were significantly increased in Apoe−/−Fpn1LysM/LysM macrophages compared with Apoe−/− macrophages, whereas iron chelation reduced cholesterol levels (Fig. 4c). Next, we measured the cytokines released by the macrophages. Culture medium from Apoe−/−Fpn1LysM/LysM macrophages exhibited higher levels of both TNF-α and IL-1β than medium from Apoe−/− macrophages, while iron chelation suppressed the release of these proinflammatory factors (Fig. 4d). These results indicate that Fpn1 deficiency-mediated iron accumulation dramatically increases the potential of macrophages to form foam cells.
Fpn1 deficiency-mediated iron accumulation in macrophages suppresses ABC transporters through downregulated LXRα expression

Both uncontrolled uptake of modified LDL and impaired cholesterol efflux lead to lipid accumulation. Therefore, we asked whether the observed iron overload could cause uncontrolled uptake of modified LDL or impaired cholesterol efflux in Fpn1-deficient macrophages. To this end, we examined the expression of ABCA1 and ABCG1, two important transporters mediating cholesterol efflux, and CD36 and LOX1, two receptors responsible for the uptake of oxLDL. Apoe−/−Fpn1LysM/LysM macrophages expressed significantly lower levels of ABCA1 and ABCG1 than Apoe−/− cells, while no difference in the expression of CD36 or LOX1 was found between Apoe−/− and Apoe−/−Fpn1LysM/LysM macrophages (Fig. 5a), suggesting that cholesterol efflux was compromised when macrophages were iron overloaded. Liver X receptors (LXRs) are transcriptional regulators of lipid homeostasis that play an important role in the development of atherosclerosis (see review [23]). We thus asked whether LXRα expression was downregulated, thereby reducing the expression of ABCA1 and ABCG1 in Fpn1-depleted macrophages. As expected, the protein level of LXRα was downregulated in Fpn1-depleted macrophages (Fig. 5a). To further examine whether the high level of intracellular iron mediated by Fpn1 depletion caused the change in LXRα expression, we treated Apoe−/−Fpn1LysM/LysM macrophages with DFP to reduce their iron content. Treatment with DFP significantly increased the expression of LXRα, as revealed by Western blot analysis (Fig. 5b). In addition, the protein and mRNA levels of ABCA1/ABCG1 were also increased by treatment with DFP (Fig. 5b, c). These data suggest that iron overload represses LXRα expression and subsequently reduces the expression of the cholesterol exporters ABCA1 and ABCG1, enhancing cellular lipid retention and promoting macrophage differentiation into foam cells. In Apoe−/−Fpn1LysM/LysM mice, iron overload increased oxidative stress, which could play a critical role in this process. Here, we postulate that ROS are the critical factors in foam cell formation, more precisely in the downregulation of lipid exporters; we therefore chose an antioxidant, α-LA, to modulate the differentiation of macrophages. We first used DHE fluorescence staining to evaluate ROS production. Apoe−/−Fpn1LysM/LysM macrophages presented a higher intensity of red fluorescence than Apoe−/− macrophages, and both DFP and α-LA treatment reduced ROS levels (Fig. 5d). Interestingly, Western blot and qPCR analyses showed that treatment with α-LA significantly increased the expression of ABCA1/ABCG1 at both the protein and mRNA levels and that their upstream transcription factor LXRα was also upregulated after treatment with α-LA (Fig. 5b, c), supporting the idea that iron-induced oxidative stress plays an important role in blocking lipid efflux via LXRα repression [24]. Since Fpn1 deficiency decreases the expression of ABCA1 and ABCG1 in macrophages, in vitro functional assays were performed to assess the capacity for cholesterol efflux mediated by ApoAI, a major structural protein of high-density lipoprotein involved in cellular cholesterol efflux. We incubated macrophages with oxLDL (50 μg/ml) for 48 h to induce cholesterol accumulation and then exposed the cells to ApoAI (100 μg/ml) for 24 h to induce cholesterol efflux in the presence or absence of DFP or α-LA.
Intracellular total cholesterol levels were determined by enzymatic assays. The results showed that ApoAI-stimulated cholesterol efflux in Apoe−/−Fpn1LysM/LysM macrophages was markedly weaker than that in Apoe−/−, Apoe−/−Fpn1LysM/LysM + DFP and Apoe−/−Fpn1LysM/LysM + α-LA macrophages (Fig. 5e).

Iron chelation therapy prevents severe atherosclerosis in Apoe−/−Fpn1LysM/LysM mice

Since treatment with the iron chelator DFP reduced intracellular iron levels and lipid deposits ex vivo, we hypothesized that DFP would reverse lipid accumulation and diminish plaque formation in vivo. Therefore, we administered DFP to 8-week-old Apoe−/−Fpn1LysM/LysM mice maintained on a high fat diet for 16 weeks. Hematological assessment showed that red blood cell counts, hemoglobin levels, hematocrit values and mean corpuscular hemoglobin values were reduced in the iron chelation (DFP) group compared with the vehicle group (Additional file 1: Table S3). The serum iron level and transferrin saturation were significantly lower in the DFP-treated mice (Additional file 1: Table S3). No differences were observed in body weight or the levels of plasma lipids, including cholesterol and triglycerides, between the DFP-treated and saline-treated mice (Additional file 1: Table S4). However, DFP administration significantly reduced the lesion area, as revealed by Oil Red O staining of en face preparations of the aorta (Fig. 6a, b). In agreement with these observations, plaque size in the aortic root decreased after DFP administration, as indicated by the measured reductions in lesion percentage and lesion size (Fig. 6c, d). Moreover, the Oil Red O-stained area in the aortic root also showed less lipid content in the DFP-treated mice than in the saline-treated mice (Fig. 6e, f). In addition, the expression of ABCA1/ABCG1 within plaques was increased after DFP administration (Fig. 6g). These data demonstrate that chronic systemic iron chelation has a therapeutic effect in the Apoe−/−Fpn1LysM/LysM mouse model of atherosclerosis.

Discussion

We report here that Fpn1 deficiency in macrophages dramatically accelerates the progression of atherosclerosis in mice, even though these mice show only mild anemia and no significant change in plasma hepcidin levels. These results provide direct evidence for the local contribution of macrophage iron to atherosclerosis development. Moreover, we report that iron accumulation mediated by Fpn1 deficiency in macrophages promotes lipid retention for foam cell formation via downregulated LXRα expression (model in Additional file 1: Figure S3). Although iron accumulates in human atherosclerotic lesions [25], epidemiological and experimental studies have produced controversial results regarding the role of iron in atherosclerosis development and progression. Considering the natural function of macrophages in iron recycling, macrophages need to export iron, at least partially, through the currently only known transporter Fpn1. In classical HH patients and animal models, Fpn1 function is enhanced by downregulated hepcidin [16,26], which may explain the systemic iron overload in HH patients. However, systemic iron overload is not a risk factor in atherosclerotic patients [7,8]. Subsequently, a role for macrophage iron was proposed. Given these findings and this proposal, we generated Apoe−/−Fpn1LysM/LysM mice to investigate the role of macrophage iron. These mice developed severe atherosclerosis resembling that observed clinically.
A very recent report demonstrated that hepcidin deficiency protects against atherosclerosis in a hyperlipidemic mouse (Hamp−/−/Ldlr−/−) model, in which hepcidin deficiency is associated with both an increased serum iron level and a decreased macrophage iron level [16]. The data from the Hamp−/−/Ldlr−/− model [16] and the Apoe−/−Fpn1LysM/LysM model (this work) reciprocally demonstrate that low levels of iron in macrophages, induced by upregulated Fpn1, protect against atherosclerosis, whereas high levels of iron in macrophages, induced by Fpn1 depletion, accelerate the progression of atherosclerosis. These results were further supported by our observation that Apoe−/−Fpn1LysM/LysM mice fed a normal diet developed typical atherosclerotic plaques at 6 months of age that were more severe than those in Apoe−/− mice (results not shown). Thus, we emphasize that the role of macrophage iron differs from that of systemic iron overload in HH patients, because macrophages in classical HH patients do not overload iron. Very interestingly, another mouse model (Apoe−/−Fpnwt/C326S), recently generated by Vinchi et al. [17], presented aggravated atherosclerosis. This Fpn mutation (C326S) leads to type IV HH by rendering Fpn1 resistant to hepcidin binding, internalization, and degradation [27]. In another mouse model, the ffe (H32R) mutation of Fpn1 was combined with the Apoe−/− background to generate mice with macrophage-specific iron accumulation [18]. Both mutations are dominant negative, and animals homozygous for either mutation die early in gestation. The former mutation produced aggravated atherosclerosis compared with control mice, as we observed in this study; however, Apoe−/−/ffe mice did not exhibit more severe atherosclerosis than Apoe−/− mice. Compared with Apoe−/−Fpn1LysM/LysM mice, Apoe−/−Fpn1wt/C326S and Apoe−/−ffe mice may, theoretically, have less iron accumulation in macrophages, because a portion of functional Fpn1 remains in the two heterozygous mutants despite the gain-of-function nature of the mutations. The effects of the C326S mutation might also be stronger than those of the H32R mutation, as supported by the existence of patients with the C326S mutation but none so far with H32R (H30 in humans), and by the remarkably diminished hepcidin-binding capacity of the C326S mutant [27]. Taken together, these observations indicate that macrophage iron content matters: when the deposited iron reaches a certain level, it acts as a detrimental factor that aggravates atherosclerosis. Clinically, the deposited iron might not result from a deficiency of functional Fpn1. We found significantly lower levels of the ferroxidases ceruloplasmin and hephaestin in plaques compared with normal vessel tissue, whereas the levels of ferritin and FPN1 were quite high [4]. The accumulated iron may result from an inability to release it due to insufficient ferroxidase activity. Thus, systemic iron chelation would be beneficial, as shown in our study and others (reviewed in [28]). Given the significant role of accumulated iron in generating ROS via the Fenton reaction, one important feature of Apoe−/−Fpn1LysM/LysM mice was oxidative stress, which was significantly higher than in Apoe−/− mice. It has been shown that ROS stress promotes foam cell formation. Here, we provide further evidence supporting the role of iron-dependent oxidative stress in suppressing cholesterol efflux through the downregulation of ABC transporter expression.
It is well accepted that LXRα and LXRβ complexes control cholesterol removal from macrophages by upregulating the expression of ABC transporters, including ABCA1 and ABCG1 [29]. We further demonstrate that the suppression of LXRα decreases the expression of ABCA1 and ABCG1, as occurs in Kupffer cells (reviewed in [30]). Notably, Bories et al. demonstrated that iron loading in IL-4-polarized M2 macrophages drove the activation of LXRα and enhanced the transcription of ABCA1/ABCG1 [31]. Although discussing each previously published result that contradicts our findings is beyond the scope of this manuscript, one major distinction that deserves attention is the different approaches that have been used to alter macrophage plasticity. In our study, the plasticity of Fpn1-deleted macrophages was directly shaped by retained iron, whereas the plasticity of IL-4-induced M2-like macrophages was directed by iron-induced ferroportin expression [28]. Therefore, the macrophage phenotypes are distinct, with more M1-like macrophages in our study and more M2-like macrophages in Bories' study. As reviewed in [24], M2-like macrophages undergo a protective response to erythrophagocytosis and to oxidized LDL. This is also supported by the finding that hemoglobin-stimulated macrophages display reduced intracellular iron content through upregulation of ferroportin expression, which in turn reduces iron-induced oxidative stress and further increases the expression of ABCA1/ABCG1 through the activation of LXRα [32].

Conclusions

In summary, the present study provides direct evidence that iron accumulation in macrophages accelerates the development of atherosclerosis. The interaction of iron and lipid metabolism takes place in macrophage-rich atherosclerotic plaques. We also suggest that altering intracellular iron levels in macrophages by systemic iron chelation or dietary iron restriction may be a potential supplementary strategy to limit or even regress the progression of atherosclerosis.

Materials and methods

Detailed information on the methods used for histological, biochemical, and enzymatic assays is available in the supplementary data.

Treatment with an iron chelator

Apoe−/−Fpn1LysM/LysM mice were randomly divided into 2 groups: the vehicle group (saline injection) and the iron chelation group (treatment with deferiprone, DFP, Sigma-Aldrich, St. Louis, MO). DFP at a dose of 80 mg/kg, or the same volume of saline, was administered to these 8-week-old mice (fed a high fat diet) by daily intraperitoneal injection for 16 weeks. No mice were excluded during the experiments.

Cell culture

Peritoneal macrophages were collected from peritoneal exudates 4 days after injecting 8-week-old mice with 0.3 ml of 4% BBL thioglycollate, Brewer modified (BD Biosciences, Shanghai, China), and then cultured in RPMI 1640 medium supplemented with 10% fetal bovine serum (FBS) for 8 h. The macrophages were then cultured in medium containing 50 μg/ml human oxidized low-density lipoprotein (oxLDL) for 48 h in the presence of 100 μM ferric ammonium citrate (FAC), 50 μM DFP or 200 nM alpha lipoic acid (α-LA; an antioxidant). Oil Red O staining was performed to evaluate foam cell formation. Cellular iron staining was performed using Perls' Prussian blue stain. The protein levels of ABCA1, ABCG1, CD36, LOX-1 and LXRα were determined by Western blot analysis.

Statistical analysis

All experiments were randomized and blinded. All data are presented as the mean ± SEM.
A two-tailed Student's t-test (for two groups) or one-way analysis of variance followed by a multiple-comparisons test with Bonferroni correction (for more than two groups) was performed using SPSS 17.0 (SPSS Inc., Chicago, IL). P < 0.05 indicated statistical significance.
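For illustration only, the same two procedures can be sketched in Python with SciPy standing in for SPSS; the group values below are made-up placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder lesion-area values (arbitrary units) for illustration only.
apoe     = rng.normal(10.0, 2.0, size=8)   # Apoe-/- group
apoe_fpn = rng.normal(14.0, 2.0, size=8)   # Apoe-/- Fpn1 LysM/LysM group
dfp      = rng.normal(11.0, 2.0, size=8)   # DFP-treated group

# Two groups: two-tailed Student's t-test.
t_stat, p_two = stats.ttest_ind(apoe, apoe_fpn)

# More than two groups: one-way ANOVA, then pairwise t-tests judged against
# a Bonferroni-corrected threshold (alpha divided by the comparison count).
f_stat, p_anova = stats.f_oneway(apoe, apoe_fpn, dfp)
alpha_bonf = 0.05 / 3
```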
New Chebyshev type inequalities via a general family of fractional integral The main goal of this article is first to introduce a new generalization of the fractional integral operators with a certain modified Mittag-Leffler kernel and then investigate the Chebyshev inequality via this general family of fractional integral operators. We improve our results and we investigate the Chebyshev inequality for more than two functions. We also derive some inequalities of this type for functions whose derivatives are bounded above and bounded below. In addition, we establish an estimate for the Chebyshev functional by using the new fractional integral operators. Finally, we find similar inequalities for some specialized fractional integrals keeping some of the earlier results in view.

Introduction

For the last few decades, the study of integral inequalities has been a significant field of fractional calculus and its applications, connecting with such other areas as differential equations, mathematical analysis, mathematical physics, convexity theory, and discrete fractional calculus [1][2][3][4][5][6][7][8][9][10][11][12][13]. One important type of integral inequality is the familiar Chebyshev inequality, which is related to synchronous functions. This has been intensively studied, with many book chapters and important research articles dedicated to Chebyshev type inequalities [14][15][16][17][18]. The Chebyshev inequality is given as follows (see [16]):

$$\frac{1}{b_2-b_1}\int_{b_1}^{b_2}\zeta_1(t)\,\zeta_2(t)\,\mathrm{d}t \;\geqq\; \left(\frac{1}{b_2-b_1}\int_{b_1}^{b_2}\zeta_1(t)\,\mathrm{d}t\right)\left(\frac{1}{b_2-b_1}\int_{b_1}^{b_2}\zeta_2(t)\,\mathrm{d}t\right), \tag{1.1}$$

where ζ₁ and ζ₂ are assumed to be integrable and synchronous functions on [b₁, b₂]. By definition, two functions are called synchronous on [b₁, b₂] if the following inequality holds true:

$$\big(\zeta_1(\tau)-\zeta_1(s)\big)\big(\zeta_2(\tau)-\zeta_2(s)\big) \;\geqq\; 0 \qquad (\tau, s \in [b_1, b_2]).$$

In particular, the Chebyshev inequality (1.1) is useful due to its connections with fractional calculus, and it arises naturally in the study of the existence of solutions to various integer-order or fractional-order differential equations, including some which are useful in such practical applications as numerical quadrature, transform theory, statistics and probability [19][20][21][22][23][24].

In the context of fractional calculus, the study of the derivative and integral operators of calculus is extended to non-integer orders [25][26][27], but most (if not all) of the potentially useful studies come about only along the real line. The standard left-side and right-side Riemann-Liouville (RL) fractional integrals of order µ > 0 are defined, respectively, by

$$\big(I^{\mu}_{b_1+}\varphi\big)(z) = \frac{1}{\Gamma(\mu)}\int_{b_1}^{z}(z-t)^{\mu-1}\,\varphi(t)\,\mathrm{d}t \qquad (z > b_1)$$

and

$$\big(I^{\mu}_{b_2-}\varphi\big)(z) = \frac{1}{\Gamma(\mu)}\int_{z}^{b_2}(t-z)^{\mu-1}\,\varphi(t)\,\mathrm{d}t \qquad (z < b_2).$$

Furthermore, the left-side and right-side Riemann-Liouville (RL) fractional derivatives are defined, respectively, by means of the following expressions for ℜ(µ) ≧ 0:

$$\big(D^{\mu}_{b_1+}\varphi\big)(z) = \Big(\frac{\mathrm{d}}{\mathrm{d}z}\Big)^{n}\big(I^{\,n-\mu}_{b_1+}\varphi\big)(z) \qquad\text{and}\qquad \big(D^{\mu}_{b_2-}\varphi\big)(z) = \Big(-\frac{\mathrm{d}}{\mathrm{d}z}\Big)^{n}\big(I^{\,n-\mu}_{b_2-}\varphi\big)(z),$$

in each of which n = ⌊ℜ(µ)⌋ + 1.
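As a quick numerical illustration of the classical inequality (1.1), the following sketch (plain Python with NumPy; the functions and the interval are arbitrary choices, assumed only for illustration) compares the two sides of (1.1) for a synchronous pair of increasing functions.

```python
import numpy as np

# Two increasing (hence synchronous) functions on [b1, b2] = [0, 1],
# chosen arbitrarily for illustration.
b1, b2 = 0.0, 1.0
t = np.linspace(b1, b2, 100_001)
zeta1 = t**2
zeta2 = np.exp(t)

mean = lambda f: np.trapz(f, t) / (b2 - b1)  # average value over [b1, b2]

lhs = mean(zeta1 * zeta2)        # left-hand side of (1.1)
rhs = mean(zeta1) * mean(zeta2)  # right-hand side of (1.1)
print(lhs, rhs, lhs >= rhs)      # expect lhs >= rhs for a synchronous pair
```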
There are many ways to define fractional derivatives and fractional integrals, often related to or inspired by the RL definitions (see, for example, [28][29][30]), with reference to some general classes into which such fractional derivative and fractional integral operators can be classified. In pure mathematics, we always consider the most general possible setting in which a specific behaviour or result can be obtained. However, in applied mathematics, it is important to consider particular types of fractional calculus which are suited to the modelling of a given real-world problem. Some of these definitions of fractional calculus have properties which differ from those of the standard RL definitions, and some of them can be used to model real-life data more effectively than the RL model [31][32][33][34][35][36][37]. As described in many recent articles which are cited herein, the fractional calculus definitions discussed in this article have been found to be useful, particularly in the modelling of real-world problems.

The familiar Mittag-Leffler function E_α(z) and its two-parameter version E_{α,β}(z) are defined, respectively, by

$$E_{\alpha}(z) = \sum_{n=0}^{\infty}\frac{z^{n}}{\Gamma(\alpha n + 1)} \qquad\text{and}\qquad E_{\alpha,\beta}(z) = \sum_{n=0}^{\infty}\frac{z^{n}}{\Gamma(\alpha n + \beta)}.$$

In many recent investigations, the interest in the families of Mittag-Leffler type functions has grown considerably, due mainly to their potential for applications in some reaction-diffusion and other applied problems, and their various generalizations appear in the solutions of fractional-order differential and integral equations (see, for example, [43]; see also [44] and [45]). The following family of multi-index Mittag-Leffler functions: $E_{\gamma,\kappa,\ell}\big[(\alpha_j,\beta_j)_{j=1}^{m};\, z\big]$ was considered and used as a kernel of some fractional-calculus operators by Srivastava et al. (see [46] and [47]; see also the references cited in each of these papers); its definition (1.5) involves the general Pochhammer symbol $(\lambda)_{\nu}$, defined (for λ, ν ∈ ℂ and in terms of the familiar Gamma function) by

$$(\lambda)_{\nu} := \frac{\Gamma(\lambda+\nu)}{\Gamma(\lambda)}, \tag{1.6}$$

it being assumed conventionally that $(0)_0 := 1$ and understood tacitly that the Γ-quotient in (1.6) exists. Some of the special cases of the multi-index Mittag-Leffler function $E_{\gamma,\kappa,\ell}\big[(\alpha_j,\beta_j)_{j=1}^{m};\, z\big]$ include (for example) the following generalizations of the Mittag-Leffler type functions:

(i) By using the relation between the Gamma function and the Pochhammer symbol in (1.6), in the case when m = 2, δ = ℓ = 1, κ = q, α₁ = α, β₁ = β, α₂ = p, and β₂ = δ, the definition (1.5) would correspond to [Γ(δ)]⁻¹ times the Mittag-Leffler type function $E^{\gamma,\delta,q}_{\alpha,\beta,p}(z)$, which was considered by Salim and Faraj [48].

(ii) A special case of the multi-index Mittag-Leffler function defined by (1.5) when m = 2 can be shown to correspond to the Mittag-Leffler function $E^{\gamma,\kappa}_{\alpha,\beta}(z)$, which was introduced by Srivastava and Tomovski [49] (see also [50]).

(iii) For m = 2 and κ = 1, the multi-index Mittag-Leffler function defined by (1.5) would readily correspond to the Mittag-Leffler type function $E^{\gamma}_{\alpha,\beta}(z)$, which was studied by Prabhakar [51].

We now turn to the familiar Fox-Wright hypergeometric function $_p\Psi_q(z)$ (with p numerator and q denominator parameters), which is given by the following series (see Fox [52] and Wright [53,54]; see also [1, p. 67, Eq 1.12 (68)] and [55, p. 21, Eq 1.2 (38)]):

$$\,_{p}\Psi_{q}(z) = \sum_{n=0}^{\infty}\frac{(\alpha_{1})_{A_{1}n}\cdots(\alpha_{p})_{A_{p}n}}{(\beta_{1})_{B_{1}n}\cdots(\beta_{q})_{B_{q}n}}\,\frac{z^{n}}{n!}, \tag{1.7}$$

in which we have made use of the general Pochhammer symbol $(\lambda)_{\nu}$ (λ, ν ∈ ℂ) defined by (1.6); the parameters α_j, β_k ∈ ℂ (j = 1, …, p; k = 1, …, q) and the coefficients A₁, …, A_p ∈ ℝ⁺ and B₁, …, B_q ∈ ℝ⁺ are so constrained that

$$1 + \sum_{k=1}^{q} B_{k} - \sum_{j=1}^{p} A_{j} \;\geqq\; 0,$$

with the equality holding only for appropriately constrained values of the argument z.
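To make the series definitions above concrete, here is a minimal Python sketch of the two-parameter Mittag-Leffler function $E_{\alpha,\beta}(z)$ by naive truncation of its series; it is adequate for small-to-moderate arguments but is not a production-quality evaluator.

```python
import numpy as np
from scipy.special import gammaln  # log-Gamma avoids overflow in Gamma(alpha*n + beta)

def mittag_leffler(z, alpha, beta=1.0, n_terms=200):
    """Truncated series for E_{alpha,beta}(z) = sum_n z^n / Gamma(alpha*n + beta)."""
    n = np.arange(n_terms)
    log_z = np.log(complex(z))  # complex log handles negative real arguments
    terms = np.exp(n * log_z - gammaln(alpha * n + beta))
    return terms.sum()

# Sanity checks against classical special cases:
print(mittag_leffler(1.0, 1.0).real)   # E_{1,1}(z) = exp(z): expect e ~ 2.71828
print(mittag_leffler(-4.0, 2.0).real)  # E_{2,1}(-x^2) = cos(x): expect cos(2) ~ -0.41615
```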
If we compare the definition (1.5) of the general multi-index Mittag-Leffler function $E_{\gamma,\kappa,\ell}\big[(\alpha_j,\beta_j)_{j=1}^{m};\, z\big]$ with the series (1.7), we are led to the relationship (1.9). In particular, for the above-mentioned Mittag-Leffler type functions $E^{\gamma,\delta,q}_{\alpha,\beta,p}(z)$, $E^{\gamma,\kappa}_{\alpha,\beta}(z)$ and $E^{\gamma}_{\alpha,\beta}(z)$, we have the corresponding relationships (1.10), (1.11) and (1.12) with the Fox-Wright hypergeometric function defined by (1.7). The relationships in (1.9), (1.10), (1.11) and (1.12) exhibit the fact that not only the general multi-index Mittag-Leffler function defined by (1.5), but indeed also all of the above-mentioned Mittag-Leffler type functions and many more, are contained, as special cases, in the extensively and widely investigated Fox-Wright hypergeometric function $_p\Psi_q(z)$ defined by (1.7). The interested reader will find it worthwhile to refer also to the aforecited work of Srivastava and Tomovski [49, p. 199] for similar remarks about the much more general nature of the Fox-Wright hypergeometric function $_p\Psi_q(z)$ than any of these Mittag-Leffler type functions.

It should be mentioned in passing that not only the Fox-Wright hypergeometric function $_p\Psi_q(z)$ defined by (1.7), but also much more general functions, such as (for example) the Meijer G-function and the Fox H-function, have already been used as kernels of various families of fractional-calculus operators (see, for details, [56][57][58]; see also the references cited therein). In fact, Srivastava et al. [57] not only used the Riemann-Liouville type fractional integrals with the Fox H-function and the Fox-Wright hypergeometric function $_p\Psi_q(z)$ as kernels, but also applied their results to the substantially more general $\overline{H}$-function (see, for example, [59,60]). Our present investigation is based essentially upon the operators of the fractional integrals of the Riemann-Liouville type (1.2), which are defined below.

Definition 1.1 (see [61]). For a given L¹-function ϕ on an interval [b₁, b₂], the general left-side and right-side fractional integral operators, applied to ϕ(z), are defined for λ, ρ > 0 and w ∈ ℝ by

$$\big(\mathcal{J}^{\sigma;w}_{\rho,\lambda;\,b_1+}\varphi\big)(z) = \int_{b_1}^{z}(z-t)^{\lambda-1}\,F^{\sigma}_{\rho,\lambda}\big[w(z-t)^{\rho}\big]\,\varphi(t)\,\mathrm{d}t \qquad (z > b_1)$$

and

$$\big(\mathcal{J}^{\sigma;w}_{\rho,\lambda;\,b_2-}\varphi\big)(z) = \int_{z}^{b_2}(t-z)^{\lambda-1}\,F^{\sigma}_{\rho,\lambda}\big[w(t-z)^{\rho}\big]\,\varphi(t)\,\mathrm{d}t \qquad (z < b_2),$$

where the function ϕ is so constrained that the integrals on the right-hand sides exist and $F^{\sigma}_{\rho,\lambda}$ is the modified Mittag-Leffler function given by (see [62])

$$F^{\sigma}_{\rho,\lambda}(z) = \sum_{n=0}^{\infty}\frac{\sigma(n)}{\Gamma(\rho n + \lambda)}\,z^{n}, \tag{1.15}$$

where ρ, λ > 0, |z| < R, and {σ(n)}_{n∈ℕ₀} is a bounded sequence in the real-number set ℝ. Upon suitably specializing the sequence {σ(n)}_{n∈ℕ₀} in the definition (1.15), we are led to a special case (1.16), which can be expressed, as in (1.17), in terms of the Fox-Wright hypergeometric function $_p\Psi_q(z)$ defined by (1.7).

A slightly modified version of the fractional integrals in Definition 1.1, which we find convenient to use in this paper, is given by Definition 1.2 below.

Definition 1.2 (The ν-modified fractional integral operators). For a given L¹-function ϕ on an interval [b₁, b₂], the general left-side and right-side fractional integral operators, applied to ϕ(z), are defined for λ, ρ, ν > 0 and w ∈ ℝ as the corresponding ν-modified forms of the integrals in Definition 1.1.

In view of the generality of the sequence {σ(n)}_{n∈ℕ₀}, the fractional integral operators given by Definition 1.1 and Definition 1.2 can be appropriately specialized to yield all those Riemann-Liouville type fractional integrals involving not only the Fox-Wright hypergeometric function $_p\Psi_q(z)$ kernel given by (1.17), but also all those multi-index Mittag-Leffler type kernels which are further special cases of the Fox-Wright hypergeometric function $_p\Psi_q(z)$ defined by (1.7). There exist many classes of integral inequalities related to the fractional integral operators given by Definition 1.1 (see, for example, [64][65][66][67][68]).
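To make Definition 1.1 concrete, the following sketch numerically evaluates the left-side operator with the kernel (1.15) truncated; the choice w = 0 (which collapses the kernel to σ(0)/Γ(λ) and the operator to the Riemann-Liouville integral) is assumed purely as a consistency check, and the midpoint quadrature is a deliberately simple scheme, not the authors' method.

```python
import math
import numpy as np
from scipy.special import gammaln

def F_kernel(x, rho, lam, sigma=lambda n: 1.0, n_terms=60):
    # Truncated series F^sigma_{rho,lam}(x) = sum_n sigma(n) x^n / Gamma(rho*n + lam)
    n = np.arange(n_terms)
    coeff = np.array([sigma(k) for k in n]) * np.exp(-gammaln(rho * n + lam))
    return np.polynomial.polynomial.polyval(x, coeff)

def left_operator(phi, z, b1, rho, lam, w, n_quad=4000):
    # Midpoint rule for (J phi)(z) = int_{b1}^{z} (z-t)^{lam-1} F[w (z-t)^rho] phi(t) dt;
    # midpoints avoid the integrable singularity at t = z when lam < 1.
    h = (z - b1) / n_quad
    t = b1 + h * (np.arange(n_quad) + 0.5)
    u = z - t
    return h * np.sum(u ** (lam - 1) * F_kernel(w * u ** rho, rho, lam) * phi(t))

# Consistency check: with w = 0 the kernel collapses to sigma(0)/Gamma(lam), so the
# operator reduces to the RL integral; for phi = 1 this equals z^lam / Gamma(lam + 1).
val = left_operator(lambda t: np.ones_like(t), z=1.0, b1=0.0, rho=1.0, lam=0.5, w=0.0)
print(val, 1.0 ** 0.5 / math.gamma(1.5))
```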
Our objective in this work is to present a study of Chebyshev's inequality in terms of the fractional integrals given by Definition 1.2. We also apply our results to deduce several further results by following the lines used in some of the earlier works.

Main results and their consequences

Throughout our study, we suppose that {σ(n)}_{n∈ℕ₀} is a sequence of non-negative real numbers. In the case when n = 2, the asserted inequality follows directly by making use of Theorem 2.1. We now assume that the inequality (2.2) holds true for some n ∈ ℕ. Then, since the n functions {ζᵢ}ⁿᵢ₌₁ are positive and increasing on [0, ∞), the product $\prod_{i=1}^{n}\zeta_i$ is also an increasing function. Hence, we can apply Theorem 2.1 with ζ₁ and ζ₂ replaced by $\prod_{i=1}^{n}\zeta_i$ and ζ_{n+1}, respectively. Thus, if we make use of our assumed inequality (2.2) in the last inequality, we arrive at the corresponding assertion for n + 1 functions, which was considered in [69, Theorem 3.3]. Moreover, just as we pointed out in Remark 2.3, with appropriate choices of, and under sufficient conditions on, the parameters and the arguments involved, we can express the result of Theorem 2.2 in terms of fractional integrals with the aforementioned Mittag-Leffler type kernels $E^{\gamma,\delta,q}_{\alpha,\beta,p}(z)$, $E^{\gamma,\kappa}_{\alpha,\beta}(z)$ and $E^{\gamma}_{\alpha,\beta}(z)$, given by (1.10), (1.11) and (1.12), respectively. The details involved are being skipped here.

Proof. By the same technique as that used for proving Theorem 2.3, together with the corresponding setting, we can obtain the desired result asserted by Corollary 2.1.

Proof. By the same technique as that used for proving Theorem 2.3, with the corresponding setting, we can obtain the desired result asserted by Corollary 2.2.

Proof. By the same technique as that used for proving Theorem 2.3, with the corresponding setting, we can derive the desired result asserted by Corollary 2.3.

Remark 2.7. Some particularly simple cases of Theorem 2.4 are given below.
• Just as we pointed out in Remark 1.4, with appropriate choices of, and under sufficient conditions on, the arguments and the parameters involved, we can express the result of Theorem 2.4 in terms of fractional integrals with kernels involving not only the Fox-Wright hypergeometric function $_p\Psi_q(z)$, given by (1.7), (1.16) and (1.17), but also the aforementioned Mittag-Leffler type kernels, such as $E_{\gamma,\kappa,\ell}\big[(\alpha_j,\beta_j)_{j=1}^{m};\, z\big]$, given by (1.5) and (1.9), as well as its further special cases $E^{\gamma,\delta,q}_{\alpha,\beta,p}(z)$, $E^{\gamma,\kappa}_{\alpha,\beta}(z)$ and $E^{\gamma}_{\alpha,\beta}(z)$, given by (1.10), (1.11) and (1.12), respectively. The details of these and various other deductions and derivations from Theorem 2.4 are being left as an exercise for the interested reader.

Conclusions

In the development of the present work, the Chebyshev inequality was established via a certain family of modified fractional integral operators in Theorem 2.1. Moreover, Chebyshev's inequality was proved for more than two functions in Theorem 2.2. Several inequalities of this type were established in Theorem 2.3, as well as in Corollaries 2.1, 2.2 and 2.3, for functions whose derivatives are bounded above or bounded below. Furthermore, an estimate for the Chebyshev functional was established in Theorem 2.4 by using the above-mentioned family of modified fractional integrals. Finally, from the main results, similar inequalities can be deduced for each of the aforementioned simpler Riemann-Liouville fractional integrals with other specialized Fox-Wright and Mittag-Leffler type kernels.
A 6DOF Virtual Environment Space Docking Operation with Human Supervision: In this work, we present a synchronous co-simulation of a 6DOF (six degree of freedom) ball and plate platform and its 3D computer model. The co-simulation in the virtual environment is intended to mimic the rendezvous between a cargo vehicle such as the Falcon 9 from SpaceX and the ISS (International Space Station). The visual feedback sensing of the position of the 6DOF platform is implemented using a Kinect RGB-D device. The human in the loop acts as a supervisory controller for initiating the docking mechanism. This paper delivers an adaptive fractional order control solution which is easily tunable, implementable and validated on a laboratory benchmark. The results indicate that fractional order control can tackle large variability in the system dynamics and deliver the specified performance at all times.

Introduction

There are many possible applications for the six degree of freedom (6DOF) ball and plate platform, ranging from flight simulators and gaming to the mimicking of space rendezvous for docking and berthing mechanisms. Space engineering applications have increased degrees of LPV (linear parameter varying) dynamics due to their size, relative position, remote location and high instrumentation complexity [1]. The mechanical design of such systems may not be optimal from a control point of view, but rather from a practical point of view (e.g., remote access, testing, validation protocols, etc.). Thus, the control task becomes more challenging within the limitations imposed by the hardware and context [2][3][4].

Space rendezvous missions are characterized by a high degree of uncertainty in the relative inertial load and relative mass distribution, depending on the angles of contact between the meeting parts. For example, a leg has to put less effort into accelerating the platform in a vertical translation when the platform is at maximal height than when the platform is at its lowest level. Since the experienced loads do not correspond to tangible masses, the experienced load is referred to as a reflected mass. Therefore, the load on top of an LEMA (linear electro-mechanical actuator) is a non-fixed parameter. Since the dynamic models corresponding to these variable reflected masses change significantly, the variation of the dynamics should be taken into consideration during the controller tuning process. A specific application is introduced in this paper, as it has the advantage of mimicking real-life conditions in a lab-scale environment. The challenge of model uncertainty has been addressed previously with a model-based control scheduling scheme and a robust control algorithm [5][6][7].

This paper presents a co-simulation between a real-world model and a virtual-world model of a Stewart platform simulator. The purpose is to combine real-world parts (a 6DOF platform) with a virtual model (an equivalent docking 6DOF platform) and mimic a space rendezvous. There is no one-controller-fits-all solution available due to the various orders of magnitude of difference between the dynamic values of the system operation. An adaptive control algorithm ensures that the system can abide by the aforementioned performance specifications in the presence of high LPV conditions.

The paper is organized as follows. The real-world and virtual environment setup, the afferent software and hardware facilities and the mimicking co-simulation of the virtual-reality part of the system are described in Section 2.
Next, an adaptive controller is proposed in Section 3, allowing human-in-the-loop interaction. The results and discussion are given in Section 4, and Section 5 presents the conclusions and summarizes the main outcome of this work.

Real and Virtual Setup Description

The Berthing Docking Mechanism (BDM) is a mechatronic system that is capable of performing successful rendezvous/contact operations via either docking or berthing. It is designed for the low-impact docking of a spacecraft by actively reducing the impact forces on the platform during progressive contact. The structural design of the BDM is a 6DOF parallel manipulator, based on the Stewart-Gough platform [8,9]. Lab-scale 6DOF Stewart systems controlled with robust integer-order controllers have already been presented in [10][11][12][13]. However, the main difference from the lab-scale application is that BDM structures have high LPV dynamic properties. Since the leg's dynamics depend on the experienced inertial load, each relative mass corresponds to a model with different dynamics. In other words, each model represents the dynamics of a particular leg in a particular pose when a certain force is exerted on the platform's upper ring. Since every executable trajectory of the platform can be seen as a succession of different states, it is clear that the LEMA dynamics change constantly during spacecraft docking. How they become active during the contact process is briefly described as follows. During docking, the mechanism consists of an active and a passive module, as illustrated in Figure 1, which can be viewed as two sub-systems with an interaction when in contact. The ring-shaped interfaces of the active module can be moved with 6DOF by means of six telescopic extensible legs. The length of the legs is controlled for each leg individually by a brushless DC motor, based on the force signals measured by the load cells. Considering the platform's 6DOF, the net power the legs have to deliver in order to move the spacecraft in a certain direction might differ, depending on the pose of the platform. When the actuator is oriented in the direction in which the platform has to be accelerated, the legs will experience low-load behavior, which is less demanding in terms of actuator effort. The variations in the load are referred to as the reflected mass. From an LPV perspective, this 6DOF platform has to be controlled with respect to varying-load system conditions.
A photograph of the platform is depicted in Figure 2a, while the coordinate systems of the virtual and real platform are illustrated in Figure 2b. In this figure, O_vw is the origin of the virtual world, O_vwp is the central coordinate of the virtual platform, and O_rw and O_rwp represent the coordinates of the real world and of the real platform, respectively. H_rp = H_vp = 50 cm represents the height of the platform, D = 10 cm is the distance between the real and the virtual platform, and the diameter of the platform plate is 70 cm. The real 6DOF platform from Figure 2a represents the first part of the co-simulation setup system and is considered to be the active module of the BDM setup. It uses the position control of each leg (Figure 3a) to achieve global positioning and determine the angle of the platform. Given that the position of the platform is known, the position of each connection C(x_c, y_c, z_c) is also known (as specified by the controller). The position of the servo A(x_a, y_a, z_a) and its orientation θ are determined by the motion platform construction. The connection of the servo lever and the vertical rod is indicated as B(x_b, y_b, z_b). Let r be the length of the servo arm (AB), while BC is the shaft that connects the servo arm with the platform at the point C and has the length l. The absolute distance between points A and C is denoted by l₁. The inverse kinematic model is described by Equation (1), which has four solutions, mapped in Figure 3b (a stand-alone numerical sketch of this solution is given at the end of this subsection).

There are many possibilities for mapping a 3D environment, ranging from very expensive and accurate technologies to low-cost devices which are available to consumers. The Microsoft Kinect is a perfect example of such a low-cost sensor, and it has had a huge impact on recent research in computer vision as well as in robotics. Here, a Kinect v2 sensor is used, as it was available in the laboratory. The Kinect v2 sensor holds two cameras (an RGB and an infrared (IR) camera), an IR emitter and a microphone bar, as shown in Figure 4a. The Kinect software maps the depth data to the coordinate system shown in Figure 4a, with its origin located at the center of the IR sensor. Note that this is a right-handed coordinate system, with the orientation of the y-axis depending on the tilt of the camera and one unit representing one meter. Kinect v2 uses optical time-of-flight (ToF) technology in order to retrieve the depth data with the infrared (IR) camera. The basic principle of ToF is to measure the time difference between an emitted light ray and its collection after reflection from an object (Figure 4b). The RGB camera within the Kinect sensor has a resolution of 1920 × 1080 pixels with a field of view of 70.6 × 60 degrees, while the IR camera resolution is 512 × 424 pixels and the field of view is 84.1 × 53.8 degrees. The operative measuring range of the Kinect sensor is between 0.5 and 4.5 m and the frame rate is 30 fps. The intrinsic parameters of the depth camera of the Kinect, the focal length and the principal point, can be acquired using the Kinect For Windows SDK, while the intrinsic parameters of the RGB camera can be acquired with the Camera Calibration Toolbox for MATLAB.
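Returning to the leg inverse kinematics discussed above: the paper's Equation (1) is not reproduced here, but the classical closed-form solution for a rotary-servo Stewart platform leg follows the same geometry. The sketch below implements that standard result with made-up coordinates, and should be read as an assumption about the form of (1) rather than as the authors' exact equations.

```python
import numpy as np

def servo_angle(A, C, theta, r, l):
    """Closed-form inverse kinematics for one rotary-servo leg.

    A: servo shaft position, C: platform anchor point, theta: orientation of
    the servo arm plane about the z-axis, r: arm length AB, l: rod length BC.
    Returns the servo angle alpha (rad); raises ValueError if C is unreachable.
    """
    dx, dy, dz = C - A
    l1_sq = dx**2 + dy**2 + dz**2                 # |AC|^2
    L = l1_sq - (l**2 - r**2)
    M = 2.0 * r * dz
    N = 2.0 * r * (np.cos(theta) * dx + np.sin(theta) * dy)
    s = L / np.hypot(M, N)
    if abs(s) > 1.0:
        raise ValueError("target point is outside the leg's workspace")
    return np.arcsin(s) - np.arctan2(N, M)

# Example with hypothetical geometry (meters):
A = np.array([0.30, 0.00, 0.00])
C = np.array([0.28, 0.05, 0.50])
print(np.degrees(servo_angle(A, C, theta=np.radians(30), r=0.04, l=0.52)))
```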
Next, the virtual model was constructed with Siemens NX 12.0, based on the existing 6DOF model of the platform [10][11][12][13]. Combining the NX 12.0 Motion environment and the inverse kinematics calculations programmed in VisualBasic.Net, a simulation of the virtual-world model was executed. Finally, a connection between the LabVIEW program and the NXOpen program was created using registry keys in the Windows Registry, as this allowed for the continuous monitoring of the keys without hindering access to the keys from other processes.

In total, 13 VIs were generated to provide the necessary library for the communication and real-time operation of the real-world platform and the virtual platform for mimicking a rendezvous. The LabVIEW platform provides the required features for operating the real-world 6DOF platform. NXOpen is a tool which is mainly used for computer-aided engineering (CAE) purposes, and it allows the user to operate several functions of NX from a centralized location. In this case, NXOpen is used to allow the control of the orientation of the platform in all its degrees of freedom, to set the duration of the simulation and to perform a static and dynamic analysis of the motion of the platform. Apart from taking the user input to control its position, it also allows the direct control of the servo angles, and it permits the user to import data from the motion of the real-world platform in order to adapt to its orientation. It also starts the LabVIEW application when the simulation method is chosen.

The code behind this graphical user interface (GUI) was written in VB.NET and consists of roughly five parts, with the result given in Figure 5:
• a part that is responsible for maintaining the GUI;
• an object class to do the necessary calculations;
• a method to export the values retrieved from the virtual sensors in the form of a .csv file;
• a component to configure the motion functions of the servomotors;
• a part that regulates the communication with LabVIEW through the use of registry keys.

As observed in Figure 5, the user interface is divided into three tabs. The overview of these three operating modes is illustrated in the flowchart in Figure 6, where the red path represents the manual override mode, the blue path represents the import motion mode and the green path represents the export motion mode. The purple dashed arrows represent the registry keys (SimStart, PathReady, LVMode, ImportPath) that are used to communicate with the LabVIEW application and ensure that the LabVIEW program does not skip steps the next time it executes. The overview of the working application (Figure 6) shows the main steps in the communication and data-processing parts necessary to allow the real-time co-simulation of the real-world and virtual-world platform setups as part of the docking mechanism.
Co-Simulation Procedure

The previous section gave a brief overview of both the LabVIEW and the VB.NET applications, while this section explains the cooperation between the two applications. Both applications have the ability to influence each other, but only at specific points during the execution of the code, by reading and writing registry keys and by creating intermediary files to transfer large amounts of data. The communication via the registry keys is simply used as a confirmation and allows one application to know which part of the code the other application is executing. It can be said that the communication between LabVIEW and the VB.NET application is a two-way communication process; it is, however, not a simultaneous or continuous type of communication. Larger amounts of data are transferred between the applications using both .csv files and .txt files, which results in another type of indirect communication. The motion sequence is recorded by six virtual sensors that are positioned at the center of the upper surface of the virtual platform. These sensors measure the displacement in three directions as well as the angular displacement around three axes. NX outputs the measurement data of the sensors as a binary file. These data are used as an input by LabVIEW to match the motion sequence of the real-world platform to that of the virtual platform. Because the data of the virtual sensors are expressed as a function of the virtual-world coordinate system, these data have to be transformed to the real-world-platform coordinate system using the relation illustrated in Figure 2b. The real-world-platform motion sequence is recorded by the Kinect sensor, which returns the platform orientation data. The data consist of three markers on the plate, which are detected by the visual sensor and are further used to calculate the platform position and attitude. The initial locations of each of these points are known, thus enabling us to calculate the transformation matrix between the initial location and the current location.

This co-simulation consists of two types of simulations:
1. The real-world simulation is in itself a real-time simulator: values that are changed on the front panel of the LabVIEW app have an immediate effect on the orientation of the real-world platform;
2. The virtual-world simulation, however, is a slower-than-real-time simulation, because this application solves the entirety of the sequence at once before starting the animation.

The program mode that most closely resembles a synchronous simulation is the auto mode. It records small motion sequences with a step duration of approximately one 60th of a second, sends them to the virtual-world simulator and animates the simulation.
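The registry-key handshake described above is easy to mimic in Python via the standard winreg module; the key path below is a hypothetical placeholder (the paper does not specify where SimStart, PathReady, LVMode and ImportPath live), so this is only a sketch of the signaling pattern, not the authors' implementation.

```python
import time
import winreg  # standard library, Windows only

KEY_PATH = r"Software\CoSimDemo"  # hypothetical location for the shared flags

def write_flag(name, value):
    # Create (or open) the key and store the flag as a string value.
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        winreg.SetValueEx(key, name, 0, winreg.REG_SZ, str(value))

def read_flag(name, default="0"):
    try:
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return default

# One application raises a flag; the partner polls it before proceeding
# (here the partner process is expected to set PathReady when done).
write_flag("SimStart", 1)
while read_flag("PathReady") != "1":
    time.sleep(1 / 60)  # roughly the step duration used by the auto mode
```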
Simplified Model

The variation in the reflected mass for a single LEMA has been generated from real-life simulators, and the data were analyzed thoroughly in collaboration with QINETIQ Space Kruibeke, Belgium (a confidentiality agreement is in place). To illustrate the high LPV degree in this berthing and docking mechanism system, we introduce a linear model approximation for the extreme cases. Since the leg positions must be controlled, the models capturing the leg dynamics can be approximated in the form indicated below, where K is the gain, z₀ is a real zero, z₁* denotes the complex conjugate of z₁, p₀ and p₁ are real poles, and p₂* denotes the complex conjugate of p₂. This form of the transfer function has LPV parameter dynamics, i.e., changes in the z₁ and p₁ values. In order to illustrate the variability of the system, two extreme cases have been considered. The assumed linear model for case 1 is given in (5), and the other extreme (case 2) of LPV dynamics is expressed by the system transfer function (6). In this form of the models, the variations in the pole/zero locations are clearly visible.

Adaptive Control

Originally, the controller used for this level was a single PI (proportional-integral) controller with feedforward action. An integer-order cascaded PI controller has been outperformed, in terms of robustness in maintaining the closed-loop specifications, by a fractional order PI controller validated on the real-life simulator for the European Space Agency at QINETIQ nv, Antwerp, Belgium [16]. A robust fractional order PI controller has been proposed and simulated in [6]. Here, we use the adaptive robust fractional order PI (FOPI) control design described in [5]. It has already been shown that fractional order control is a good candidate for the robust control of DC motor components [17][18][19].

The FOPI described hereafter has the form

$$C_{FOPI}(s) = K_p + \frac{K_i}{s^{\mu}}, \tag{7}$$

where K_p and K_i are the proportional and integral gains and µ ∈ (0, 1) is the fractional order of integration. Note that the traditional PI controller is obtained for the special case when µ = 1. The fractional order PI controller in (7) is thus a generalization of the integer-order PI controller. The advantages of the fractional order PI controller stem from the extra tuning parameter µ, which can be used to enhance the robustness of the controller. Since there are three tuning parameters in the fractional order PI controller, three performance specifications are used. These three performance criteria lead to the typical tuning rules for fractional order controllers, and they are based on the relationship between time-domain performance specifications and the corresponding frequency-domain specifications.

We enumerate here the most commonly used specifications:
• the gain crossover frequency ω_gc, which is related to the settling time of the closed-loop system (large values will yield smaller settling times at the cost of higher control efforts);
• the phase margin (PM), which is stability related and an indicator of the closed-loop overshoot percentage (usual values are within the 45–65° interval [20]);
• the iso-damping property, which is a condition ensuring robustness to gain changes such that the overshoot remains constant within a certain gain variation range.
To ensure a constant overshoot, a constant phase margin needs to be maintained around the desired gain crossover frequency, which ultimately implies that the phase of the open-loop system must be kept constant around the specified gain crossover frequency. In other words, the derivative of the phase with respect to frequency around the gain crossover frequency must be (almost) null. The three performance specifications mentioned above are mathematically expressed as

$$\big|H_{open-loop}(j\omega_{gc})\big| = 1, \tag{8}$$

$$\angle H_{open-loop}(j\omega_{gc}) = -\pi + PM, \tag{9}$$

$$\left.\frac{\mathrm{d}\big(\angle H_{open-loop}(j\omega)\big)}{\mathrm{d}\omega}\right|_{\omega=\omega_{gc}} = 0, \tag{10}$$

where H_open−loop(s) = C_FOPI(s)P(s) stands for the open-loop transfer function and P(s) is the system to be controlled. Equations (8)-(10) require the magnitude, phase and derivative of the phase for both the FOPI controller and P(s). The modulus and phase of the FOPI controller are given by

$$\big|C_{FOPI}(j\omega)\big| = \sqrt{\Big(K_p + K_i\,\omega^{-\mu}\cos\tfrac{\mu\pi}{2}\Big)^{2} + \Big(K_i\,\omega^{-\mu}\sin\tfrac{\mu\pi}{2}\Big)^{2}}, \qquad \angle C_{FOPI}(j\omega) = -\arctan\!\left(\frac{K_i\,\omega^{-\mu}\sin\tfrac{\mu\pi}{2}}{K_p + K_i\,\omega^{-\mu}\cos\tfrac{\mu\pi}{2}}\right). \tag{11}$$

To determine analytically the derivative of the phase of the FOPI controller, the phase equation in (11) is used, resulting in the expression (12). Equations (8)-(10) also use the magnitude, phase and derivative of the phase of P(s). The values for the modulus, phase and phase slope of the process should be known a priori at the gain crossover frequency ω_gc. These can be obtained analytically based on a model of P(s). Assuming a process model is given by Equation (5), the magnitude, phase and derivative of the phase of P(s) are computed as in (13).

To mimic a space rendezvous, the relative mass on each leg of the 6DOF platform can be varied at times during the simulation. In practice, the relative mass can be estimated continuously from the measured position and velocity of the platform. However, a continuous adaptation of the controller parameters is not necessary and significantly increases the computational burden of the deployed system operation. Instead, using the standard performance index, a tolerance interval of ±10% is introduced in the overshoot (OS% < 20%) and settling time (Ts < 0.1 s) values.

The relations in (13) are easily computed online from the previous samples' reflected mass estimation. However, an additional condition on the violation of the tolerance interval is implemented to prevent the sluggish and unnecessary adaptation of the controller parameters and, consequently, the discretization and deployment on the real-life emulator. The latter steps are performed solely when the tolerance-interval threshold is exceeded. Then, replacing (11) and (12) in (8)-(10) leads to the set of nonlinear equations (14). With respect to the digital implementation of this controller, two more parameters are practically relevant: the degree of approximation and the user-specified ω-range in which the desired behavior should be obtained; tuning rules are given in [21] (which includes the Matlab function code provided in the paper).
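The tuning conditions (8)-(10) are straightforward to check numerically once a process model is available. The sketch below (plain NumPy, with an assumed toy process model P(s) = 1/(s(0.01s + 1)) and placeholder controller parameters, since the actual LEMA models are confidential) evaluates the open-loop magnitude, phase and phase slope at a candidate crossover frequency.

```python
import numpy as np

def fopi(w, Kp, Ki, mu):
    # C(jw) = Kp + Ki/(jw)^mu, Equation (7) evaluated on the imaginary axis
    return Kp + Ki * (1j * w) ** (-mu)

def process(w):
    # Toy process, assumed for illustration only (NOT the paper's leg model)
    s = 1j * w
    return 1.0 / (s * (0.01 * s + 1.0))

def check_specs(Kp, Ki, mu, wgc, pm_deg, dw=1e-4):
    L = lambda w: fopi(w, Kp, Ki, mu) * process(w)
    mag = abs(L(wgc))                    # spec (8): want 1
    ph = np.angle(L(wgc))                # spec (9): want -pi + PM
    # Spec (10): finite-difference slope of the phase (beware wrapping near +/- pi)
    slope = (np.angle(L(wgc + dw)) - np.angle(L(wgc - dw))) / (2 * dw)
    print(f"|L(jw_gc)| = {mag:.3f}, phase = {np.degrees(ph):.1f} deg "
          f"(target {pm_deg - 180:.0f} deg), phase slope = {slope:.2e}")

# Placeholder parameters; a solver would iterate on (Kp, Ki, mu) until all three specs hold.
check_specs(Kp=1.0, Ki=5.0, mu=0.8, wgc=20.0, pm_deg=50)
```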
Results and Discussion

The controller parameter solution for the iso-damping property is achieved for a gain crossover frequency ω_gc of 150 rad/s and a phase margin (PM) of 50°. The assumed linear model is (5), and the resulting controller is

$$C_{FOPI}(s) = 0.065 + \frac{1.15\times 10^{4}}{s^{1.656}}. \tag{15}$$

The parameters of the FOPI controller in (15) were obtained by solving the set of nonlinear equations in (14). First of all, the K_i parameter was estimated as a function of the fractional order µ, using the phase margin and the iso-damping conditions, according to the second and third equations in (14). The two functions were then plotted as indicated in Figure 7a. The intersection of the two functions gives the two control parameters that ensure both a certain phase margin and the iso-damping property. According to Figure 7a, the intersection point yields K_i = 1.15 × 10⁴ and µ = 1.656. Once these two parameters are computed, the proportional gain is estimated from the magnitude condition, the first equation in (14), as K_p = 0.065. Figure 7b presents the Bode diagram for the controller and the two systems denoting the two extreme cases (P_1 and P_2) of LPV values (i.e., a variation of 250% in the reflected mass value in the 6DOF platform simulator). The discretization scheme proposed in [21] has been employed to obtain a discretized version of the pole-zero pairs. This scheme is particularly efficient at delivering stable, low-order approximations of the non-rational function of the FOPI in a proper filter form, and for our study case, it delivers a fifth-order rational transfer function (an illustrative construction of such an approximation is sketched after the figure captions below).

An example of the co-simulation interface and real-time tracking between the 6DOF platform and the virtual model is illustrated in Figure 8. In this figure, the real-world platform can be observed, including the active part of the BDM system (label 1), the front panel of the LabVIEW application (label 2) and the 3D model of the simulated platform (label 3). The adaptive controller maintains the same closed-loop performance specifications for the two variations in the reflected mass parameters, with the step responses for three platform legs (the others are similar) given in Figure 9. It can be observed that the controller maintains performance despite the great variability in the system parameters mimicking the reflected mass for one leg with or without contact at docking.

Conclusions

In this paper, we presented a co-simulation framework for a real-world and virtual-world 6DOF platform to mimic the dynamic variability in a docking mechanism for a space rendezvous. The problem of controlling a challenging LPV system using a single type of controller in all operational zones has been addressed. A new approach has been introduced by using fractional order controllers, and a rationale for a generalized approach for LPV systems has been proposed. The simulation results indicate that the proposed approach works satisfactorily and supports the claim that fractional order control is an appealing solution to complex control problems. Further analysis of the control performance in the presence of unknown varying time delays can be evaluated in terms of tele-operation delays for maintenance, failure emergency cases, etc.

Figure 1. Illustration of the active and passive modules for space IBDM (International Berthing and Docking Mechanism) rendezvous/contact. In the context described in our methodological approach, each module is a dynamic sub-system.
Figure 2. (a) Real-world six degree of freedom (6DOF) platform for co-simulation, including a frame and the Kinect IR camera; (b) relation between the coordinate systems of the real world and the virtual world.

Figure 3. (a) Local coordinate frame for a servo control unit; (b) mapping of possible solutions for (1).

Figure 4. (a) The coordinate system of the Kinect v2 camera and the location of the sensors [15]; (b) the time-of-flight (ToF) principle used in the Kinect sensor. The sent and received signals are compared to each other in order to determine the phase change and calculate the distance.

Figure 5. The NXOpen interface with all three tabs visible; here, the user can specify supervisory setpoint values to position the platform before docking.
• Export motion is used to configure the attitude and position of the virtual platform. The id values of the servomotors are written in the text box beside the imaginary angles while filling in the values for the platform orientation; this makes it so that the values can still be adjusted before starting the solution. The simulation time defines the duration of the simulation;
• Import motion: in this mode, a sequence of motion of the real-world platform is imported and transformed to angles of the servomotors using the inverse kinematics, and then every point is plotted to form a motion function. This motion function is then used as the driving function for the servo drivers;
• Manual override is used to configure the angles of the individual servomotors. This function is primarily provided for testing purposes and has no influence on the real-world simulator.

Figure 6. Flowchart of the operational procedures in the VisualBasic application (the red path represents the manual override mode, the blue path represents the import motion mode and the green path represents the export motion mode). LV is the abbreviation of LabVIEW.

Figure 7. (a) The intersection point of the phase margin and iso-damping criteria; (b) a Bode diagram of the open-loop system in the two extreme cases.

Figure 8. Example of the co-simulation interface and real-time tracking between the 6DOF platform and the virtual model.

Figure 9. Step responses of the adaptive controller for three legs for the first case, P_1(s) (left), and the second case, P_2(s) (right).
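The fifth-order rational approximation mentioned in the Results section is commonly obtained from Oustaloup's recursive band-limited approximation of s^µ; whether [21] uses exactly this construction is not stated here, so the sketch below is illustrative only, with an assumed frequency band and order.

```python
import numpy as np

def oustaloup_zpk(mu, wb=1.0, wh=1000.0, N=2):
    """Oustaloup recursive approximation of s^mu on the band [wb, wh] rad/s.

    Returns (zeros, poles, gain) of a (2N+1)-order rational filter; a classical
    construction, sketched here under assumed band limits.
    """
    k = np.arange(-N, N + 1)
    wz = wb * (wh / wb) ** ((k + N + 0.5 * (1 - mu)) / (2 * N + 1))  # corner freqs of zeros
    wp = wb * (wh / wb) ** ((k + N + 0.5 * (1 + mu)) / (2 * N + 1))  # corner freqs of poles
    return -wz, -wp, wh**mu

def freq_response(w, zeros, poles, gain):
    s = 1j * w
    num = np.prod([s - z for z in zeros], axis=0)
    den = np.prod([s - p for p in poles], axis=0)
    return gain * num / den

# Check the approximation of s^0.5 near the geometric band center:
z, p, kgain = oustaloup_zpk(0.5, wb=1.0, wh=1000.0, N=2)  # fifth order, like the paper's filter
w = 30.0
print(freq_response(np.array([w]), z, p, kgain), np.sqrt(1j * w))
```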
Structural Optimization of Trusses in Building Information Modeling (BIM) Projects Using Visual Programming, Evolutionary Algorithms, and Life Cycle Assessment (LCA) Tools: The optimal structural design is imperative in order to minimize material consumption and reduce the environmental impacts of construction. Given the complexity of the formulation of structural design problems, the process of optimization is commonly performed using artificial intelligence (AI) global optimization techniques, such as the genetic algorithm (GA). However, the integration of AI-based optimization, together with visual programming (VP), in building information modeling (BIM) projects warrants further investigation. This study proposes a workflow combining structural analysis, VP, BIM, and GA to optimize trusses. The methodology encompasses several steps, including the following: (i) generation of parametric trusses in the Dynamo VP environment; (ii) performing finite element modeling (FEM) using Robot Structural Analysis (RSA); (iii) retrieving and evaluating the FEM results interchangeably between Dynamo and RSA; (iv) finding the best solution using GA; and (v) importing the optimized model into Revit, enabling the user to perform simulations and engineering analysis, such as life cycle assessment (LCA) and quantity surveying. This methodology provides a new interoperable framework with minimal interference with existing supply-chain processes; it is flexible with respect to technology literacy and allows architectural, engineering and construction (AEC) professionals to employ VP, global optimization, and FEM in BIM-based projects by leveraging open-source software and tools, together with commonly used design software. The feasibility of the proposed workflow was tested on benchmark problems and compared with the open literature. The outcomes of this study offer insight into the opportunities and limitations of combining VP, GA, FEA, and BIM for structural optimization applications, particularly to enhance structural efficiency and sustainability in construction.
Despite the success of this study in developing a workable, user-friendly, and interoperable framework for the utilization of VP, GA, FEM, and BIM for structural optimization, the results obtained could be improved by (i) increasing the callback function speed between Dynamo and RSA through a specialized application programming interface (API); and (ii) fine-tuning the GA parameters or utilizing other advanced global optimization and supervised learning techniques for the optimization process.

Importance of Structural Optimization in the Construction Industry

The reliable optimization of complex engineering problems has gained considerable traction with the advent and enhancement of digital technologies and tools, enabling the efficient utilization of artificial intelligence (AI) optimization algorithms at a fraction of the computational cost. This has also allowed the idea of structural optimization to emerge, as obtaining the optimal performance from a structure with minimum weight is one of the main objectives in the architecture, engineering, construction, and operations (AECO) sector [1], particularly to reduce the environmental impact of construction materials [2]. In this study, structural optimization is defined as finding the structure with minimum weight while satisfying all code-specific constraints on displacement and strength. This structural design optimization problem is commonly divided into three sub-categories, namely, shape, size, and topology optimization. In shape optimization theory, the outer boundary of the structure or, in other words, the surface node coordinates of the structure, are the design decision variables [3,4]. An example of shape optimization was given in [5], where the authors improved the mechanical performance of free-form grid structures just by finding the optimal grid placement for the nodes. On the other hand, sizing optimization, which is also referred to as cross-sectional optimization, is another structural optimization branch that concentrates on finding the best cross-section for the structural elements [3] to fulfill the required design performance objectives [6]. As such, the design decision variables are the cross-section type, area, and shape. In fact, considerable improvements have been observed and reported on benchmark truss problems by considering constraints such as structural modal frequencies [7]. Topology optimization encompasses the decision variables of both shape and sizing optimization [3] within the structural optimization problem. However, the structures obtained by this method usually come with higher manufacturing costs due to flexibility in the layout, size, and shape of the structure, which often generates complex geometries. To this end, additional constraints may be added to limit the complexity of the final structure by utilizing a select set of modules for the layout of the structure [8].
Overall, as the relationship between the objective function and the decision variables contains multiple intermediary steps, such as finite element modeling (FEM), it cannot be represented in a closed form [2,9]. As such, AI-based metaheuristic algorithms are commonly utilized to find a near-optimal solution to the structural optimization problem [10]. Among metaheuristic algorithms, the genetic algorithm (GA), a deep-rooted method that mimics evolutionary theory, is one of those widely used among researchers to optimize structures [11][12][13][14][15]. Other metaheuristic algorithms have also been employed, such as ant colony optimization (ACO), which mimics the behavior of ants [16]; particle swarm optimization (PSO), a strong metaheuristic algorithm that mimics swarm behavior [17]; and Bonobo optimization, which mimics the behavior of primates [18]. It is reported that the Bonobo algorithm performs better than or competes well against other techniques according to the tests conducted on some truss examples. Further information about metaheuristic algorithms and their implementation can be found in the review article [19].

Available Tools for AI-Based Structural Optimization in BIM Projects

Although these studies demonstrate the implementation of metaheuristic algorithms in the structural optimization process, scholars tend to use their own programs for the optimization, structural analysis, and structure generation processes. This is because software platforms such as MATLAB (MathWorks, Natick, MA, USA) and Python (used version 3.9.12, open-source, independent) have many built-in libraries to support the implementation of the optimization algorithms. In the recent studies on structural optimization with metaheuristic algorithms reviewed for this section, the referenced manuscripts [20][21][22][23][24][25][26][27] either employed proprietary programs or did not specify the software used for optimization. On the other hand, using MATLAB for structural optimization purposes is common among researchers [2,7,18,28]; however, MATLAB is not open source and is generally not the preferred choice for FEM in the AECO industry for structural optimization. Python is generally preferred as it is open source [27,29]; however, due to the considerable numerical computational power offered by existing commercially available FEM software, the integration of optimization software together with commercial structural analysis software provides a more efficient solution.
In [30], SAP2000 (Computers and Structures, Inc., Walnut Creek, CA, USA) [31] was employed, instead of coding FEM from scratch, to conduct structural analysis during the optimization of large steel-frame structures, particularly to reduce the cost and weight of oil and gas modules. The shape and size optimization of large barrel-vault frames was carried out in [32] by employing SAP2000 via an open application programming interface (OAPI). An integration between MATLAB and SAP2000 using OAPI to optimize steel-truss bridges with the aim of minimizing weight was investigated in [33]. An integration between MATLAB and ANSYS (ANSYS, Inc., Canonsburg, PA, USA) was proposed in [34], where the efficiency of different GA operators to perform topology optimization on a benchmark truss structure problem was evaluated. Another pertinent example is the integration of visual programming (VP) software to support the parametric design of the structure. In reference [35], Rhino-Grasshopper (Robert McNeel & Associates, Seattle, WA, USA), a BIM-based VP tool, was utilized together with Karamba (Karamba3D GmbH, Vienna, Austria), a plug-in to Rhino for structural analysis, to perform optimization by employing GA operators. Along the same lines, manuscript [36] utilized Grasshopper to create geometry and developed a tool in C# to transfer the structure to SAP2000 to perform topology optimization. Grasshopper, together with the Peregrine (LimitState, Sheffield, UK) plug-in, was employed to perform layout and geometry optimization methods in sequence for optimum designs in [37]. Furthermore, a framework that utilizes Dynamo (Autodesk, San Rafael, CA, USA) for visual programming and OpenSees (UC Regents, Pacific Earthquake Engineering Research Center, CA, USA) for measuring structural performance by using the FEM of the created models was proposed in [38]. Another study [39] explores the efficacy of VP technology in structural design, particularly through Dynamo. The study highlights VP's effectiveness in creating complex geometries and its seamless integration with BIM systems like Revit (Autodesk, San Rafael, CA, USA) and Robot Structural Analysis (RSA) (Autodesk, San Rafael, CA, USA). The opportunities that come with VP, such as generative modeling and optimization, are also mentioned in that study. However, it contains no numerical examples or applications in that area, a gap which our current study aims to fill.
Although much research in the field has been carried out, a stand-alone and interoperable framework that utilizes VP, FEM software, and AI-based optimization in BIM projects still requires further investigation. As such, this study examined the application of combining interoperable Autodesk platforms, including the VP software Dynamo (used version 2.17.0.3472), the BIM tool Revit 2024 (used version 24.1.11.26), and the FEM software RSA 2024 (used version 37.0.0.10095), with a Python-based GA algorithm that was developed as a function directly within Dynamo. To the best of the authors' knowledge, no benchmark problem has been optimized using this proposed method and compared with other results found in the open literature. Moreover, due to the enormous amount of CO2 emissions from the AECO industry, along with increasing interest in sustainable development, reducing environmental impacts has recently become another key goal of governments. This interest has become a driving force for countries to adopt BIM, as recent studies show that BIM-based design, construction, and management have the potential to support sustainability in construction [40,41]. The hypothesis was that this approach, due to the inherent interoperability between the Autodesk software platforms, might reduce possible computational overheads and/or loss of information between different information modeling platforms and address the existing gaps. In other words, the design is parameterized once in Dynamo, is optimized using GA and RSA, and is directly transferred into Revit (with no additional tools) for further simulations and engineering analysis, such as life cycle assessment (LCA), quantity surveying, and cost estimation. Furthermore, the Autodesk software platforms are well established within the AECO industry, and the free educational access available to students and educators enables their possible widespread use for teaching, training, and industry transfer. Thus, this study focused on creating a workflow that integrates Revit-Dynamo with RSA to create a robust, efficient, and user-friendly environment in which to use structural optimization methods. The results were tested on available benchmark problems, such as the 2D 10-bar, 3D 36-bar, and 3D 120-bar dome trusses, and compared with the results reported in the open literature to demonstrate the applicability of the proposed framework. Moreover, the interaction with Revit has also been established for further analysis of the optimized structures, particularly for LCA and cost estimation. Integration with parametric modeling tools, FEM analysis software, and BIM for optimization purposes enables the seamless transfer of knowledge between different platforms and different team members through a common data environment (CDE).
Scope and Objective of This Study

This study focuses on creating a workflow for the AECO industry to create sustainable, affordable, and nature-friendly designs. By using optimization, lower-weight designs that are more functional can be constructed, which will be of vital importance for countries that have limited resources, resulting in reduced expenses and embodied CO2 [42]. It is necessary to mention that the objective of this study is not to develop the best genetic algorithm and the most efficient code or to improve the existing optimization algorithms. Although better algorithms and codes can be developed and readily implemented into the proposed script, these issues are beyond the scope of the present research. Numerical examples are presented only to show the adaptability of the proposed workflow.

Furthermore, the scope of this research is limited to the problem of size optimization for steel-truss structures, and the aim of the numerical examples is to select appropriate options from a list of real-world section criteria to minimize the weight of the truss while satisfying constraints (e.g., stress, displacement, and buckling). In this respect, the sizing optimization problem investigated here resembles the combinatorial Knapsack Problem [43]. As GA has been shown to effectively provide a fast and heuristic solution to the Knapsack Problem, it was used here to provide a solution to the sizing optimization problem.

Genetic Algorithm

English naturalist Charles Darwin transformed our understanding of life through his explorations and the concept of natural selection in his famous book, "On the Origin of Species". He introduced natural selection as a metric for the resilience of species/organisms in adapting to changes in their environment and evolving accordingly [44]. In evolutionary theory, traits are established by chromosomes, which consist of groups of units called genes. The genes can pass on these traits to their offspring through a process referred to as crossover [45]. The algorithms that use this logic and mimic similar evolutionary processes are called Evolutionary Algorithms (EA). GA, which is a type of EA, starts by introducing an initial population of genes, identifying their objective fitness values, selecting the best genes, performing crossover and mutation, and generating a new population for the next generation [12]. In trivial terms, the process aims to keep the genes with the best fitness values and create new combinations from these best genes to generate even better solutions in successive generations. Figure 1 demonstrates the basic tenets of a simple GA. In this study, the evaluation of numerical examples to validate the proposed workflow was performed using GA through the PyMoo (version 0.6.1, open-source, independent) library [46] due to its efficiency, simplicity, robustness, and optimization capabilities for combinatorial optimization problems [47,48]. The selected GA operators were as follows: (i) integer random sampling with duplicate elimination; (ii) tournament selection; (iii) simulated binary crossover (SBX) with a probability value set to 1; (iv) polynomial mutation (PM) with a probability value set to 1; and (v) fitness survival. The termination criterion was controlled by limiting the generation number. Further parameter tuning (outside of the default PyMoo library settings) is provided in Section 3.1. It should be mentioned here that the choice of GA was purely practical, due to its simplicity and effectiveness in solving popular combinatorial optimization
problems such as the Knapsack Problem. As such, GA was used only as a tool for validating the workflow. In fact, it is possible to achieve better results either by fine-tuning the existing GA parameter settings or by employing other algorithms, such as PSO, ACO, or the more recent Jaya optimization. However, these topics are subjects for future research following the establishment of the methodology proposed in this manuscript.

Formulation of the Truss Optimization Problem

As mentioned above, the goal of the proposed design workflow is to minimize the weight of the structure (W) under multiple constraints for truss-structure problems. Consequently, the weight of the truss structure is chosen as the objective function. The problem under consideration in this study can be defined mathematically as follows:

$$W = \sum_{i=1}^{N} \rho\, A_i\, L_i, \tag{1}$$

where ρ is the density of the material, A_i is the cross-sectional area of member i, and L_i is the length of member i. As can be seen in the above-mentioned formulation, the weight of the overall structure is taken as the objective. Because GA is ideally used for unconstrained optimization problems, it is necessary to convert the problem into an unconstrained one before starting the optimization process [49]. The transformation is performed by calculating the violations made on the constraints. These violations are penalized by a penalty term, which affects the objective function that directs the search for workable solutions [12]. Rajeev and Krishnamoorthy [12] proposed a formulation based on the violation of normalized constraints in the referenced paper, and it was found to work very well among scholars [20]. Therefore, in this study the same method has been used to calculate the objective function. In Equation (2), the basic formulation of the proposed methodology is presented as follows:

$$\phi(x) = W(x)\,\big(1 + K\,C\big), \tag{2}$$

where φ(x) is the penalized objective function and K is a coefficient that is determined by the type of problem. For this study, a value of 10 was chosen for K based on the findings in the referenced manuscript [12], as it has been found to work well for truss optimization problems. In situations where the constraints must be fully satisfied, the coefficient is set to a higher value, so that heavier penalty values are assigned to solutions that make only a slight infringement on the constraints. This approach ensures that the constraints are not violated while the search for solutions continues. In engineering designs, minor deviations from the constraints are often acceptable [12]. In such cases, when the main objective is to minimize the weight of the design, a smaller coefficient can be used. This allows for the exploration of solutions with lower weight without including individual solutions that have extreme levels of constraint violation [12]. This approach helps to strike a balance between not violating the constraints and achieving a lighter design. C is the violation coefficient, which is calculated based on the violation of the constraints. The constraints are formulated in normalized form, as demonstrated by the following formulas [12]:

$$g_{i}(x) = \frac{\sigma_{i}}{\sigma_{a}} - 1 \leq 0 \qquad\text{and}\qquad g_{j}(x) = \frac{\delta_{j}}{\delta_{a}} - 1 \leq 0,$$

where σ_i is the stress value in the element and σ_a is the allowable stress; similarly, δ_j is the displacement at the nodes and δ_a is the allowable displacement. The violation coefficient C is determined in the following way: if g_i(x) > 0, then c_i = g_i(x); if g_i(x) ≤ 0, then c_i = 0 [12]. The individual violations are then accumulated as

$$C = \sum_{i=1}^{m} c_{i},$$

where m is the number of constraints. After this constraint adjustment, the problem reduces to an unconstrained optimization problem with the new objective function φ(x) [12].
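As a minimal stand-alone sketch of Equations (1) and (2), assuming SI units and the density of steel (in the actual workflow the stresses and displacements come from RSA rather than being passed in directly), the penalty evaluation can be written as:

```python
import numpy as np

def penalized_weight(areas, lengths, stresses, displacements,
                     sigma_allow, delta_allow, rho=7850.0, K=10.0):
    """Penalized objective phi(x) = W(x) * (1 + K*C) of Equation (2).

    rho defaults to the density of steel in kg/m^3; the allowable limits are
    problem-specific inputs, and the analysis results would come from RSA
    in the actual workflow.
    """
    W = rho * np.sum(np.asarray(areas) * np.asarray(lengths))  # Equation (1)

    # Normalized constraints: a positive value means a violation.
    g_stress = np.abs(stresses) / sigma_allow - 1.0
    g_disp = np.abs(displacements) / delta_allow - 1.0
    g = np.concatenate([np.atleast_1d(g_stress), np.atleast_1d(g_disp)])

    C = np.sum(np.where(g > 0.0, g, 0.0))  # violation coefficient
    return W * (1.0 + K * C)
```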
Proposed Framework
The overall methodology consists of the following steps:
1. Create parametric trusses by using visual programming (Figure 2a);
2. Perform structural analysis in RSA and retrieve the results through Dynamo (Figure 2b);
3. Perform the first two steps in a loop along with GA operators to reach an optimum design (Figure 2c);
4. Import the optimized model into Revit to perform further enhancements such as LCA, cost analysis, etc. (Figure 2d).
The overview of the proposed workflow in this study is presented as a flowchart in simplified form in Figure 3.
Parametric Model Creation
There are two common techniques used to write computer programs, namely VP and textual programming. The technique in which graphical elements and visual syntax such as blocks, nodes, or diagrams are used to create program logic and flow is called visual programming. As it requires no significant syntactic knowledge, it can be quite user-friendly and suitable for beginners. Consequently, VP has been used in this study for modeling through Dynamo. Figure 2a illustrates the use of number-integer sliders, or code block nodes, to specify various parameters such as truss height, truss length, the number of vertical struts, section indexes, truss type, etc. These values are all expressed in millimeters (mm), aligning with the Project Unit setting in RSA, and can be designated as design variables. For this study, only sections are considered as design variables. To create a truss structure, the analytical nodes need to be defined. This can be achieved by either entering the code text "Point.ByCoordinates(0,0,0)" in a code block or utilizing the "Point.ByCoordinates" node. These points are then connected using the "PolyCurve.ByPoints" node, forming the elements of the truss structure (a minimal sketch of this step is given below). This section must be updated whenever different trusses need to be optimized.
Calculation and Retrieval of Results
Performing structural analysis is an essential step in a structural optimization workflow. Although not always reliable, many scholars use their own programs or third-party libraries for this step, as explained in the introduction section. In this study, an interaction has been established between Dynamo and RSA through the Structural Analysis for Dynamo (version 3.0.10) package, which enables the creation of geometry and the assignment of simulation criteria, such as supports, loads, and sections, based on the geometrical input in Dynamo. To create the analytical members, the lines created in the previous section need to be connected to the "AnalyticalBar.ByLines" node from the mentioned Dynamo package. Once the bars have been created, sections and materials must be assigned to the created bars. Thereafter, supports, bar-end releases, and loads must be defined, as illustrated in Figure 2b.
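As an illustration of the parametric model creation step described above, the following is a minimal sketch of a Dynamo (CPython3) Python node that generates truss chord geometry from slider inputs. It is a simplification under stated assumptions: the bay layout, the use of Line.ByStartPointEndPoint (the actual script connects points with PolyCurve.ByPoints), and the omission of diagonals are all illustrative choices, not the manuscript's exact script.

```python
# Dynamo (CPython3) node: parametric generation of a simple truss frame.
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import Point, Line

# IN[0..2] are wired from sliders/code blocks, in mm (matching the RSA project units).
length, height, n_bays = IN[0], IN[1], IN[2]
dx = length / n_bays

bottom = [Point.ByCoordinates(i * dx, 0, 0) for i in range(n_bays + 1)]
top    = [Point.ByCoordinates(i * dx, 0, height) for i in range(n_bays + 1)]

members  = [Line.ByStartPointEndPoint(bottom[i], bottom[i + 1]) for i in range(n_bays)]   # bottom chord
members += [Line.ByStartPointEndPoint(top[i], top[i + 1]) for i in range(n_bays)]         # top chord
members += [Line.ByStartPointEndPoint(bottom[i], top[i]) for i in range(n_bays + 1)]      # verticals

OUT = members  # fed to "AnalyticalBar.ByLines" downstream
```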
Dynamo drives the analysis process in RSA by using the "Analysis.CalculateWithSave" node. The information (analytical model, supports and releases, and load cases) that has been created in the previous section needs to be connected to this node. This node performs the analysis and saves the model at a specified location (Figure 2b). Subsequently, stresses, the weight of the structure, and displacements are needed for the calculation of the penalized objective function. To obtain the maximal stress for each bar, the "BarStress.GetMaxValuesList" node is utilized. For maximal values of displacement at the joints, the "BarDisplacement.GetMaxValuesList" node is utilized. The last parameter that needs to be obtained for the weight score is the "selfweight" value of the structure. This can be achieved by using the "NodeReactions.GetValues" node and extracting the "FZ" results for the load case labeled as "Selfweight". Evaluating the penalized objective function of each design option is crucial for ranking the fitness of designs in structural optimization. That is why the last part of the script evaluates the penalized objective functions by considering the constraints, as explained in detail in the background section. This process is implemented in the VP environment with the help of a Python script, as presented in Figure 4.
Structural Optimization
To perform size optimization, section indexes can be set as design parameters. The algorithm will then try to find the best values for those inputs. The feature that allows users to do this is parametric modeling, because as soon as an input is updated, a new model is automatically created and FEA can be performed for the new model. Optimization works by running the steps explained in the previous section in a loop. To do this, a custom node and the "LoopWhile" node have been used. By creating a custom node, all of the nodes from Figure 2a,b are consolidated into a single core node. This core node is central to all analyses. It takes the parameter to be optimized through its input port and, in turn, outputs the penalized objective function results. This process occurs repeatedly, ensuring continuous optimization. Next, an initial population is needed for the optimization. Thus, a Python script has been prepared to create a random initial population by defining lower-upper limits for the solution sets, the population size, and the chromosome length (a minimal sketch is given below). After creating the initial population, the outputs are connected to the "LoopWhile" node to process the steps as explained above. Figure 2c illustrates the nodes necessary for this operation. There are two custom nodes in this figure, namely "LoopCheck" and "LoopBodyVar1". The "LoopCheck" node allows the definition of the stopping criteria by taking the iteration number and comparing it to the input number. As soon as the iteration number exceeds the input number, a stop signal is sent to the "LoopWhile" node and the loop ends. Subsequently, every individual in the initial population list must be sorted based on their penalized objective function results.
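The random initial population script mentioned above could look like the following minimal sketch; the function name and the example bounds are illustrative, not the manuscript's actual code.

```python
import random

def initial_population(pop_size, chrom_len, lower, upper, seed=None):
    """Generate pop_size random chromosomes; each gene is a section index
    drawn from the inclusive range [lower, upper]."""
    rng = random.Random(seed)
    return [[rng.randint(lower, upper) for _ in range(chrom_len)]
            for _ in range(pop_size)]

# Example: 20 individuals for the 10-bar truss with 41 candidate sections (indexes 0-40).
population = initial_population(pop_size=20, chrom_len=10, lower=0, upper=40)
```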
The parents that will create the succeeding generation of the population via the GA operators are selected using the penalized objective function results obtained from the initial population and binary tournament selection. Individuals with smaller objective function values are selected as the parents, while the other individuals are not selected and their genes are eliminated. This process is repeated by the next-generation creator node group shown in Figure 2c until a goal is met or a certain number of generations have been produced. The "LoopBodyVar1" custom node contains two Python scripts and the same nodes from the initial population fitness calculator node group, as illustrated in Figure 5. The first Python script uses GA operators such as selection, crossover, and mutation to create new populations. The second Python script, on the other hand, acts as another selection operator of the GA and chooses the best combinations from the parents and children (a simplified sketch of these operators is given at the end of this section).
BIM Integration
In this section, some advantages of using the BIM environment in the optimization process are described and some examples are given. The integration of BIM and AI-based metaheuristic search algorithms for the optimization of designs in the AECO sectors will improve the project-creation process by generating multiple optimized design alternatives and providing detailed reports about the project. Consequently, architects and engineers can give more time to different aspects of a project and encourage innovation in their field by utilizing automation and streamlining repetitive tasks. For this, Revit's features can be utilized by exporting the optimized model to Revit by employing the Dynamo script and the "RevitCreator" custom node (Figure 2d). The structure following importation of the truss model into Revit is shown in Figure 6a. Once the model has been created in Revit, cost and LCA analyses can be performed along with arranging the connection details. Revit allows users to design connections and prepare more detailed models. This can be performed in two ways. One is choosing the connection points manually in Revit, under the "Structure" tab, in the "assembling steel connections" menu. The other is doing all of these steps parametrically with the help of Dynamo. In this study, to show the capabilities of BIM and Dynamo, this process is performed parametrically in Dynamo by employing the script presented in Figure 6b. Furthermore, Revit allows the user to utilize add-ins such as "Tally", which performs LCA and employs contribution assessments to illustrate the environmental impacts of products or components in construction, considering factors such as ozone depletion, acidification, and global warming potential. This gives AEC specialists who wish to build environment-friendly and sustainable designs an opportunity to study the project and obtain a detailed report about the materials. Additionally, it provides focused views of global warming potential and embodied energy through graphics and charts, aiding designers in comparing specific assemblies and components. Examples are provided in the upcoming Results section.
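As referenced above, the following is a simplified, self-contained sketch of the selection, crossover, and mutation step implemented in the two Python scripts of Figure 5. It is illustrative only: the manuscript's scripts use PyMoo's SBX and polynomial mutation operators, whereas this sketch substitutes one-point crossover and uniform random resetting for brevity.

```python
import random

def binary_tournament(population, fitness, rng):
    """Pick two random individuals and keep the one with the lower
    penalized objective value (minimization)."""
    a, b = rng.sample(range(len(population)), 2)
    return population[a] if fitness[a] < fitness[b] else population[b]

def one_point_crossover(p1, p2, rng):
    cut = rng.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(child, lower, upper, rate, rng):
    # Reset each gene to a random section index with probability `rate`.
    return [rng.randint(lower, upper) if rng.random() < rate else g for g in child]

def next_generation(population, fitness, lower, upper, rate=0.05, seed=None):
    rng = random.Random(seed)
    children = []
    while len(children) < len(population):
        p1 = binary_tournament(population, fitness, rng)
        p2 = binary_tournament(population, fitness, rng)
        c1, c2 = one_point_crossover(p1, p2, rng)
        children += [mutate(c1, lower, upper, rate, rng),
                     mutate(c2, lower, upper, rate, rng)]
    return children[:len(population)]
```

The second script's parent-plus-child survival step then simply keeps the best individuals from the combined pool, analogous to PyMoo's fitness survival.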
Results
This section presents the numerical outcomes for several test problems. A comparison is made between the solution obtained in this study and solutions derived from various studies found in the open literature. These problems were solved within the proposed framework using a computer with an Intel(R) Core i7-6700HQ CPU and 16.0 GB of installed RAM. Unlike studies in the literature, sections that exist in real-world applications have been considered as design variables to increase the adaptability of the proposed workflow to real-world examples. The average time spent on 100 analyses is approximately 2 h. It is assumed that this time would be reduced by a computer with better hardware properties. All scripts and corresponding Python code, along with animations and instructions, are conveniently provided in the dedicated GitHub repository for this study [50].
Experimental Setup
Three different benchmark problems have been examined for the validation of this study, namely, the 10-bar 2D truss problem (adopted from [12]); the 36-bar 3D truss problem (adopted from [51]); and the 120-bar 3D dome truss problem (adopted from [52]). The referenced studies offer the theoretical bases for the constraints, along with the geometric dimensions. According to the referenced manuscripts, the member stress limits and the displacement limits were as follows: (i) 172.25 MPa and 50.8 mm, respectively, for the 10-bar truss; (ii) 172.25 MPa and 50.8 mm (for node 4 in both the negative z and y directions), respectively, for the 36-bar truss; and (iii) 240 MPa and 10 mm (for every node in the structure in the negative z direction), respectively, for the 120-bar truss.
The 10-bar truss example is a standardized test case in the field of structural optimization for those who want to evaluate and validate the effectiveness of proposed optimization techniques, and it is frequently used as a benchmark problem by researchers [12,14,53,54]. The geometry, support requirements, material properties, and boundary conditions for this 2D hanging truss, which is shown in Figure 7a, have been adopted from the referenced study [12]. A discrete list that contains 41 cross-sectional areas was sourced from the American Institute of Steel Construction (AISC) and imported into RSA from the AISC database (AISC Edition 15.0 American hot rolled shapes). The HSRO (Hollow Structural Round Sections) family was used during the optimization. Detailed information about importing databases and using different section properties can be found on the referenced website [55]. Population size and generation number have both been determined as 20 through the execution of ten distinct run cycles for this example.
The 36-bar truss problem, which is considerably difficult and includes 21 design variables, has emerged as another benchmark problem used among scholars [14,51,56]. The configuration, dimensions, support requirements, material properties, and loading condition of this three-dimensional truss, shown in Figure 7b, are sourced from the cited article [51]. A discrete list that contains 25 cross-sectional areas, which were acquired from the AISC and imported into RSA from the AISC database (above), was prepared for this example. The RB (Round Bars) family was used during the optimization. Population size and generation number have both been determined as 40 through the execution of two distinct run cycles for this example.
The 120-bar dome truss problem was first solved by M.P. Saka in 1991 [52]. Thereafter, other researchers have employed this structure to test their algorithms [52,57-61]. The configuration, dimensions, support requirements, material properties, loading, and boundary conditions of this dome, which is illustrated in Figure 7c, are sourced from the referenced article [52]. The elasticity modulus is taken as 210,000 MPa and the material density as 7860 kg/m³. A total of 27 CHS (Circular Hollow Section) sections that comply with the provided limits were chosen from the UKST (British hot rolled section) database available in RSA [55]. Moreover, the buckling effect for elements under compression has also been taken into consideration. To calculate the critical load for buckling, a Python code was created with the formulations taken from the AISC [62] regulations (a sketch of this check is given at the end of the 120-bar dome truss results below). Population size and generation number have both been determined as 20 through the execution of ten distinct run cycles for this example.
10-Bar 2D Truss Problem
The results found for the 10-bar truss are shown and compared with those from studies taken from the literature in Table 1. This table reveals that the weight obtained after optimization is higher than the solutions obtained in previous studies. However, as the aim of this study is to demonstrate the usability of the proposed methodology, the solution obtained is acceptable: the study conducted by Camp et al. [53] required an average of approximately 10,000 truss analyses with 24 separate run cycles to converge to its solution, 62.5 times the number of analyses performed in this study. On the other hand, Jafari et al. [23] needed 113.25 times more analyses to obtain the result presented in Table 1. In addition to the table, the weight score change chart and the final version of the optimized structure with the sections found are presented in Figure 8.
36-Bar 3D Truss Problem
The comparison of the results, the number of analyses performed to obtain that truss in a single run cycle, and the separate runs performed during this study for the 36-bar truss problem are depicted in Table 2. The main drawback of the proposed framework was the increase in calculation times for structures with more elements. This downside limited the number of separate runs performed for this problem. The results also showed that the GA configuration used in this study was less efficient compared to that of the 10-bar truss, where the function evaluations were squared to achieve a similar 10% error relative to the best observed result in the literature. Figure 9 presents the weight score and the final version of the optimized structure.
120-Bar Dome Truss Problem
This example aims to find suitable sections that give minimum weight while satisfying the given constraints for 120 elements that are divided into 7 groups, from a list that contains 27 different cross-sectional areas. Results for the 120-bar dome truss are presented in Table 3 along with their comparison to literature results. It is worth noting that conducting additional runs and increasing the number of analyses for the same study is expected to yield better weights, as the study [61] needed 84,000 overall analyses and the study [58] required 100 separate run cycles to find the results depicted in Table 3. It is also important to mention that in the study [61] the displacement constraints are taken with a ±5 mm difference from the referenced study [52], resulting in a higher structure weight. Also, the referenced studies all used continuous design variables, whereas discrete sets were utilized in this manuscript. The reason for choosing this problem for the Results section is to demonstrate that the proposed algorithm is effective and performs well as the number of elements increases. The results indicate that the proposed methodology can be applied to complex structures in future studies. Nevertheless, it should be noted that conducting one analysis for a problem of this scale took about 50 s. Lastly, Figure 10 presents the optimized structure along with a weight-score change by generation chart.
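As a concrete illustration of the buckling check mentioned in the experimental setup of the 120-bar dome, the following is a minimal sketch of an AISC-style flexural buckling calculation (AISC 360, Section E3). It is a sketch under assumptions: the exact formulation, section properties, and safety factors adopted from [62] in the manuscript's Python code may differ, and the example values are hypothetical.

```python
import math

def aisc_critical_load(E, Fy, K, L, r, Ag):
    """Nominal compressive strength following AISC 360 flexural buckling (E3).
    E, Fy in MPa; K dimensionless; L, r in mm; Ag in mm^2."""
    slend = K * L / r                       # slenderness ratio KL/r
    Fe = math.pi ** 2 * E / slend ** 2      # elastic (Euler) buckling stress
    if slend <= 4.71 * math.sqrt(E / Fy):   # inelastic buckling regime
        Fcr = 0.658 ** (Fy / Fe) * Fy
    else:                                   # elastic buckling regime
        Fcr = 0.877 * Fe
    return Fcr * Ag                         # nominal capacity Pn = Fcr * Ag

# Hypothetical example: a CHS member with E = 210,000 MPa, Fy = 275 MPa, K = 1, L = 3000 mm.
# print(aisc_critical_load(210000.0, 275.0, 1.0, 3000.0, 50.0, 1500.0))
```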
LCA and Cost Analysis for the 120-Bar Truss Structure
This section provides the LCA and cost analysis results of the optimized 120-bar dome truss structure obtained using the explained methodology. As the first step, the best model after optimization was imported into Revit from Dynamo using the "Revit importer" custom node. Once the model is ready in Revit, as shown in Figure 11a, LCA analysis can be performed by using "Tally". After the analysis, a detailed report is generated, which includes information on factors such as global warming potential, acidification, and smog formation potential, part of which is demonstrated in Figure 11b. For the cost analysis, a bill-of-quantity table was created in Revit, as shown in Figure 11c. Fields in this table can be adjusted to specific needs. To give an example, unit prices were obtained from the referenced website [63] and the total cost of the structure was calculated automatically within the table.
Discussion
In this section, transparent comments are made about the aspects of this study that could have been improved or that were not fully addressed, along with a brief discussion of the results found in this study and potential future deployment.
Firstly, the proposed workflow could be enhanced by employing the RSA API instead of the Dynamo package to create the interaction between Dynamo and RSA, which would give broader control over the analysis part of the workflow. The Dynamo package restricts the optimization process and the ability to use the full performance of RSA, because no changes are possible inside this package. If there is a problem during the optimization that is caused by this package, it is challenging to detect and solve. That is why using the API to perform FEM analysis in RSA appears to be a better option for future studies. It is also assumed that utilizing the API would decrease the calculation time, helping the designer reach optimum solutions in less time, while enabling scholars to perform additional analyses to achieve more competitive results compared to the current literature.
It is important to mention that the main goal of this study was to develop a unified and interoperable framework for the combination of VP, FEM, and GA in BIM-based projects using tools and software that are commonly utilized in the AEC industry. It was hypothesized that this may support the wide implementation of AI-based structural optimization by making it easier and less dependent on programming knowledge (or technology literacy) while having minimal impact on the current supply-chain processes in the industry. In Tables 1-3, it can be observed that lower weights were obtained in other studies in the literature; however, Figures 8-10 demonstrate that the average population fitness and the best observed results were converging, suggesting at least a local convergence. From this point, it is possible to update the algorithms, perform additional runs, increase the population size and generation number, find the optimum settings for GA, and verify them by performing a comparison with benchmark functions or with different global optimization algorithms such as ACO, PSO, etc. Moreover, performing shape or topology optimization in future research under scenarios such as dynamic loadings or different constraints such as frequency constraints has the potential to broaden the application of the proposed methodology.
One way that the results and computation time may improve is through machine learning strategies, such as artificial neural networks (ANN) (for more information, readers can refer to the referenced manuscripts [64][65][66]). Exploring the integration of machine learning algorithms for predicting structural optimization in BIM projects could be worthwhile to further enhance the proposed framework. Although the current study focuses on creating a workflow that can employ various optimization algorithms and techniques, this future deployment could enhance the efficiency and effectiveness of the optimization process by using the power of machine learning techniques, such as reinforcement learning and deep learning, leading to more sustainable and cost-effective structural designs in BIM-based projects.
Furthermore, following the optimization of truss structures, optimization of special structures such as wide-span roofs and wide-span trusses could result in significant improvements over the existing methodology and validation processes. These structures are typically found in industrial buildings, airport terminals, and sports facilities where large open expanses with no intermediary supports are required. For the experimental results, case studies of sustainable railway station designs can be optimized, and the outcomes can be given in future research. These case studies can provide useful insights into how truss structures are employed to achieve sustainable and efficient designs in real-world applications.
Lastly, the definitions of the initial population size, mutation rate, and elite rate all significantly affect the results of the examples. Additionally, the number of design variables and the size of the list containing these variables have a substantial influence on finding optimal results. Although having a large list of design parameters allows for the evaluation of various options, it also comes at a significant time cost. The primary challenge of the proposed method is the time required for certain tasks, particularly when importing the model into RSA, assigning boundary conditions, and conducting the analysis. These processes consume a substantial amount of time compared to methods where all operators and programs are coded within a single piece of software using one programming language, in which case the generation number does not impose such a limitation on the study. In this study, however, as the generation number increases, the waiting time also rises due to issues such as RAM leaks. Improving the interaction between RSA and Dynamo might reduce the model creation time in RSA, which could help to address this problem. For these reasons, the number of separate runs and the number of analyses were kept low, and as soon as a solution close to the literature results was found, the optimization process was terminated. Despite the limitations explained so far, the solutions obtained by using the model proposed in this manuscript are meaningful from the viewpoint of engineering management and computational effort because, in this study, discrete and real-world cross-sections have been adopted as design variables, contrary to the benchmark problems. It should also be emphasized that conducting additional run cycles and increasing the number of analyses for the same study is expected to yield better weights, as all the benchmark studies performed a considerably greater number of analyses than was performed in this study.
Overall, the results obtained in this study are approximately 10% higher than those in the literature. This difference is considered acceptable given that the focus of this study is to establish a workflow for structural optimization using visual programming, parametric modeling, and other BIM tools, rather than coding a metaheuristic algorithm that works best among the ones created in the literature.
Final Remarks
Even though structural optimization with metaheuristic algorithms is a well-known subject, its application and utilization in real-world problems are not common because of unfriendly software interfaces and the vast amount of programming expertise required. This gap in the literature is approached by utilizing technological advancements like parametric design, VP, BIM, etc., through software like Dynamo, Revit, and RSA during the optimization process. The aim of this manuscript is to create a structural optimization methodology that integrates the tools of VP, parametric modeling, and BIM to obtain a robust optimization framework. This methodology was then validated by comparing the results of several 2D and 3D benchmark problems from the open literature. Concurrently, to show the advantages of using BIM during the optimization process, LCA and cost analysis have been added to the proposed methodology.
The results of this study highlight the promise of new technologies, allowing scholars to include programming in their studies without requiring a large amount of syntax expertise. Furthermore, the findings of this study provide important insights into the adaptation of VP and BIM tools to the optimization process. The computation time of the framework, however, was a controlling factor in the number of function evaluations used during the GA optimization. To this end, the present study, despite its success in developing a workable, user-friendly, and interoperable framework for the utilization of VP, GA, FEM, and BIM for structural optimization, achieved results that were on average 10% higher than those reported in the relevant literature. The authors conclude that this can be improved by the following measures: (i) increasing the callback speed between Dynamo and RSA (which was the main computation bottleneck) through a specialized API; and (ii) fine-tuning the GA parameters or utilizing other advanced global optimization and supervised learning techniques for the optimization.
Data Availability Statement: The raw data for the configurations used in this study are available through the following references: 10-bar truss [12], 36-bar truss [51], and 120-bar dome [52]. The generated computer code, along with animations and instructions, is provided in the GitHub repository dedicated to this project [50].
Conflicts of Interest: The authors declare no conflicts of interest.
Figure 1. Flowchart of GA with basic operators.
Figure 2. Schematic representation of the proposed methodology: (a) parametric modeling of truss; (b) integration of RSA structural analysis; (c) GA optimization modules; and (d) integration with Revit and Tally.
Figure 3. Flowchart of the proposed workflow.
Figure 4. Python script used for evaluating and ranking the results.
Figure 6. (a) Truss model after importation into Revit; (b) nodes for adding the connection details through Dynamo; (c) the structure after adding the connection details.
Figure 8. The result of the 10-bar truss problem; fitness score vs. generation number.
Figure 9. The result of the 36-bar truss problem; fitness score vs. generation number.
Figure 10. The result of the 120-bar truss problem; fitness score vs. generation number.
Figure 11. Results are as follows: (a) dome truss after being imported into Revit; (b) life-cycle stage (generated using Tally); and (c) Revit bill-of-quantity table.
Author Contributions: Conceptualization, R.M. and F.Y.; methodology, R.M., F.Y. and V.T.; software, F.Y.; validation, F.Y.; formal analysis, F.Y.; investigation, R.M. and F.Y.; resources, R.M.; data curation, F.Y.; writing-original draft preparation, F.Y.; writing-review and editing, R.M. and V.T.; visualization, F.Y.; supervision, R.M. and V.T.; project administration, R.M. and V.T.; funding acquisition, R.M. All authors have read and agreed to the published version of the manuscript.
Funding: This research was supported by the research funds of the Endowed Professorship in Digital Engineering and Construction (DEC) at the Institute of Technology and Management in Construction (TMB) of the Karlsruhe Institute of Technology (KIT). The author wishes to acknowledge the support provided by the KIT Publication Fund of the Karlsruhe Institute of Technology in supplying the APC.
9,804.2
2024-05-25T00:00:00.000
[ "Engineering", "Environmental Science", "Computer Science" ]
High-Density 1R/1W Dual-Port Spin-Transfer Torque MRAM
Spin-transfer torque magnetic random-access memory (STT-MRAM) has several desirable features, such as non-volatility, high integration density, and near-zero leakage power. However, it is challenging to adopt STT-MRAM in a wide range of memory applications owing to the long write latency and a tradeoff between read stability and write ability. To mitigate these issues, an STT-MRAM bit cell can be designed with two transistors to support multiple ports, as well as the independent optimization of read stability and write ability. The multi-port STT-MRAM, however, is achieved at the expense of a higher area requirement due to an additional transistor per cell. In this work, we propose an area-efficient design of 1R/1W dual-port STT-MRAM that shares a bitline between two adjacent bit cells. We identify that the bitline sharing may cause simultaneous access conflicts, which can be effectively alleviated by using the bit-interleaving architecture with a long interleaving distance and a sufficient number of word lines per memory bank. We report various metrics of the proposed design based on the bit cell design using a 45 nm process. Compared to a standard single-port STT-MRAM, the proposed design shows a 15% lower read power and a 19% higher read-disturb margin. Compared with prior work on the 1R/1W dual-port STT-MRAM, the proposed design improves the area by 25%.
Introduction
Spin-transfer torque magnetic random-access memory (STT-MRAM) has drawn great attention as a promising candidate for future on-chip memory because of its desirable features such as high integration density, near-zero leakage power, non-volatility, and compatibility with the CMOS fabrication process [1][2][3][4][5][6][7][8][9]. A standard STT-MRAM bit cell comprises a single access transistor and a magnetic tunnel junction (MTJ) that functions as a storage element. The MTJ consists of a pinned layer (PL) and a free layer (FL) sandwiching a tunneling oxide barrier, as shown in Figure 1a. The magnetization of the PL is pinned to one direction, whereas the FL's magnetization can be altered by passing an electrical current so that its direction is either parallel (P) or anti-parallel (AP) to that of the PL [1]. Since the MTJ resistance in the AP state is higher than that in the P state, a read operation can be performed by sensing the resistance of the MTJ. STT-MRAM is capable of >2x integration density in comparison with conventional static RAM (SRAM), which requires six transistors per cell. Moreover, STT-MRAM can lower the total power dissipation by eliminating the leakage power because the MTJ is non-volatile [10]. Despite the aforementioned advantages, two major issues need to be addressed in order to adopt STT-MRAM in a wide range of memory applications. First, STT-MRAM has a high write latency that may degrade system performance [11,12]. When a read request occurs during a write operation, it may be delayed until the long-latency write is completed [12]. Second, the read current path is identical to the write current path, as shown in Figure 1b, creating a tradeoff between read stability and write ability [11]. For instance, if the access transistor has a large width for a reliable write operation, it is likely that an inadvertent bit flip occurs during a read operation, as depicted in Figure 1c [13]. A possible solution is to design an STT-MRAM bit cell with multiple ports, such as one read and one write (1R/1W) dual-port STT-MRAM [11]. As shown in Figure 2, read and write operations can be simultaneously performed by using two different access transistors, and thus, the impact of a slow write operation can be effectively mitigated.
Because a read-access transistor and a write-access transistor are separated, the read stability and write ability can be independently optimized. However, due to the requirement of an additional transistor, the multi-port design degrades the achievable memory density. Thus, the 1R/1W dual-port STT-MRAM trades off the write latency and the memory cell area.
In this paper, we propose an area-efficient design for 1R/1W dual-port STT-MRAM to improve the integration density. The proposed dual-port STT-MRAM shares a bitline between two adjacent bit cells. We identify that the bitline sharing may cause erroneous operations due to the creation of sneak path currents or a conflicting requirement of biasing conditions. We categorize such erroneous operations into three cases and show that each case can be mitigated by using the bit-interleaving architecture with a long interleaving distance and a sufficient number of word lines per memory bank. Our simulation results show that the proposed design can reduce the memory cell area by 25% in comparison with the prior 1R/1W STT-MRAM design shown in [11], at the same specification of 10 ns switching time, 20% write margin, and >35% read margin. Moreover, the proposed design achieves a 15% lower read power and a 19% higher read-disturb margin compared to the standard single-port STT-MRAM.
Single-Port STT-MRAM
In a conventional STT-MRAM with a single port, when a write operation is being performed, read requests are delayed until the write operation is completed. This may result in performance degradation, especially for write-intensive applications [6]. Consider a 2 × 2 array of single-port STT-MRAM with a distance-2 bit-interleaving
architecture, as shown in Figure 3. Each row is composed of bit cells connected with the same word line (WL), while each column is composed of bit cells connected with the same set of bitline (BL) and source line (SL). Moreover, an odd-column cell and an even-column cell cannot be contained in the same word due to column selection by distance-2 bit-interleaving [14][15][16][17]. Accordingly, all four bit cells shown in Figure 3 belong to different words. Now, it is easy to observe that simultaneous write and read operations are not allowed. For instance, to write a value 1 to a bit cell in the first column of the first row, BL1 is set to the write voltage level (V_WRITE), SL1 is grounded, and WL1 is asserted high. If the bit cell in the second column of the second row is accessed for a read operation at the same time, BL2 is biased at V_READ, SL2 at GND, and WL2 is asserted high. Note that the unselected cell in the first column of the second row may accidentally flip its MTJ value because the activation of WL2 passes an electrical current from BL1 to SL1.
It should also be noted that the read operation in the second column of the second row may be erroneous because an unwanted current flows through the unselected cell in the second column of the first row.
1R/1W Dual-Port STT-MRAM
To avoid the aforementioned conflicts regarding simultaneous memory accesses, 1R/1W dual-port STT-MRAM was proposed in [11], in which an extra access transistor is required. For a write operation, the transistor M1 is activated by biasing the write bitline (WBL) and write word line (WWL), as shown in Figure 2. For a read operation, the other transistor M2 is activated by biasing the read bitline (RBL) and read word line (RWL), appropriately. The dedicated dual ports enable simultaneous write and read operations when the two memory accesses are in different rows. To see this, consider a 2 × 2 array of 1R/1W STT-MRAM with a distance-2 bit-interleaving architecture, as shown in Figure 4. To write a bit cell in the first column of the first row, WBL1 is set to V_WRITE, and WWL1 is asserted high. If the bit cell in the second column of the second row is accessed for a read operation at the same time, RBL2 is biased at V_READ, and RWL2 is asserted high. In such a case, RWL1 and WWL2 remain de-asserted, which can prevent a current flow in the unselected bit cells. However, we identify that simultaneous write and read operations are not supported when two memory accesses are attempted on different words in the same row [7,14].
See Figure 4b, where two adjacent bit cells (that belong to different words) in the first row are being accessed, one cell for a write and the other cell for a read. Then, both RWL and WWL on the same row must be asserted high, creating sneak path currents. We define such a case as a simultaneous access conflict.
Proposed Design
In order to improve the bit-cell area while supporting 1R/1W accesses, we propose a new design that shares a vertical BL between two adjacent bit cells. The shared BL, termed S_BL, is used as the RBL for an odd-column bit cell and as the WBL for an even-column bit cell, as shown in Figure 5a.
Detailed biasing conditions for writing and reading the proposed bit cell are presented in Figure 5b, where SL is fixed at GND, as in the case of the prior design of 1R/1W STT-MRAM [11]. Note that for writing a value of zero, the WBL is biased at the negative voltage (V_NEG), such that the current flows from the SL to the WBL via the MTJ. The WLs for unselected cells are also biased at V_NEG to keep the access transistors in the unselected cells turned off [10]. Due to the sharing of S_BL between two bit cells, as a trade-off, the proposed 1R/1W STT-MRAM may cause simultaneous access conflicts with higher probability. Simultaneous access conflicts due to the proposed design can be categorized into three cases:
(Case 1) Accesses to the same row. An example of Case 1 is illustrated in Figure 6a, where both WWL1 and RWL1 are asserted high to write the cell in the first column of the first row and simultaneously read the cell in the third column of the first row. This can cause an unintended path for current flow in unselected cells, for example, the cells in the fourth column of the first row. The simultaneous access conflicts occurring in the prior design in [11] correspond to those in Case 1.
(Case 2) Accesses to the same column. An example of Case 2 is illustrated in Figure 6b, where WWL1 is asserted high, and S_BL1 is at V_READ to write the cell in the first column of the first row and simultaneously read the cell in the first column of the second row. This can cause an unintended current flow in the unselected cell in the second column of the first row.
(Case 3) Accesses to an odd-column cell for a write and an even-column cell for a read that share S_BL. An example of Case 3 is illustrated in Figure 6c, where S_BL1 should be at V_WRITE to write an odd-column cell. At the same time, S_BL1 should be at V_READ for reading an even-column cell, which is a conflict.
Figure 7 shows the probability of simultaneous access conflicts under the assumption of uniformly random memory accesses. First, it is observed that the proposed design incurs simultaneous access conflicts with higher probability when compared to the conventional design in [11], which incurs simultaneous access conflicts only in Case 1. Specifically, if the distance-2 bit-interleaving is applied, occurrences of Case 2 or Case 3 are dominant over Case 1, leading to the large gap between the conventional design and the proposed design in terms of the probability of simultaneous access conflicts. However, increasing the interleaving distance can come to the rescue to reduce the probability of Case 2 or Case 3; it can be observed that the probability of the conflict caused by Case 2 or Case 3 can be lowered by nearly half as the interleaving distance increases by a factor of two. Second, doubling the number of WLs per bank reduces the probability of Case 1 by half, lowering the probability of simultaneous access conflicts for both the conventional design and the proposed design. Hence, simultaneous access conflicts of the proposed design can be effectively mitigated by using a long interleaving distance and a sufficient number of WLs per bank.
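The trends just described can be reproduced qualitatively with a small Monte Carlo experiment. The sketch below is a simplification we introduce for illustration (it is not the analysis used to produce Figure 7): one write and one read address are drawn uniformly at random, the bank geometry is reduced to row and column counts, interleaving distance is not modeled, and Case 3 is treated pessimistically as any write/read pair landing on the two adjacent cells that share one S_BL.

```python
import random

def conflict_probability(n_rows, n_cols, trials=100_000, seed=0):
    """Monte Carlo estimate of the simultaneous access conflict probability
    for the proposed shared-bitline design under uniformly random accesses."""
    rng = random.Random(seed)
    conflicts = 0
    for _ in range(trials):
        wr_row, wr_col = rng.randrange(n_rows), rng.randrange(n_cols)
        rd_row, rd_col = rng.randrange(n_rows), rng.randrange(n_cols)
        case1 = wr_row == rd_row                                    # same row
        case2 = wr_col == rd_col and wr_row != rd_row               # same column
        case3 = (wr_col // 2 == rd_col // 2) and wr_col != rd_col   # shared S_BL pair
        conflicts += case1 or case2 or case3
    return conflicts / trials

# Doubling the number of WLs (rows) per bank roughly halves the Case 1 term:
# print(conflict_probability(512, 64), conflict_probability(1024, 64))
```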
Layout Analysis
In this section, we present memory cell layouts for the standard single-port STT-MRAM, the conventional 1R/1W STT-MRAM, and the proposed 1R/1W STT-MRAM to analyze the bit-cell areas. The cell layout dimensions are evaluated based on λ-based design rules, where λ is half the minimum feature size [17,18]. See the detailed parameters, including the minimum metal spacing and minimum metal width, in Figure 8 [19]. In the case of the standard STT-MRAM, the WL runs horizontally across the memory array, and the BL and SL run vertically. If the access transistor width (W_FET) is smaller than 9λ, the horizontal dimension is limited by the metal spacing and the metal width, as seen in Figure 9a [13]: x_cell = 2(W_M + S_M) = 12λ, where W_M and S_M are the minimum metal width and spacing of 3λ each. Otherwise, the horizontal dimension is limited by the transistor width, as seen in Figure 9b: x_cell = W_FET + W_A2A = W_FET + 3λ, where W_A2A is the active-to-active spacing.
Figure 8. Parameters for the layout design rules.
Figure 9. Single-port STT-MRAM layout (a) when the access transistor width is smaller than 9λ, (b) when the access transistor width is greater than 9λ; conventional 1R/1W STT-MRAM layout (c) when the width of both transistors is smaller than 9λ, (d) when the width of either of M1 and M2 is greater than 9λ; proposed 1R/1W STT-MRAM layout (e) when the width of both transistors is smaller than 6λ, (f) when the width of either of M1 and M2 is greater than 6λ.
In the case of the 1R/1W STT-MRAM bit cell, SL is fixed at GND, whose metal line can be routed in the horizontal direction, as shown in Figure 9c,d. This can maintain the expression of the x-dimension by having the same number of vertical metal tracks in comparison with the standard STT-MRAM bit cell. However, the two-transistor requirement of 1R/1W STT-MRAM increases the y-dimension by 39%.
Figure 9e,f presents the layout of a pair of the proposed 1R/1W dual-port STT-MRAM bit cells. Since the RBL of the odd-column cell and the WBL of the even-column cell are combined into a single bitline (S_BL), the number of vertical metal lines is three, compared to four for the conventional design [11]. This relaxes the minimum horizontal dimension of the cell to 9λ when the width of both M1 (W_M1) and M2 (W_M2) is smaller than 6λ, as illustrated in Figure 9e. If W_M1 > 6λ or W_M2 > 6λ, the horizontal dimension, which is determined by the width of the transistor, as shown in Figure 9f, is as follows: max(W_M1, W_M2) + W_A2A = max(W_M1, W_M2) + 3λ.
Figure 10 shows the bit-cell areas with respect to a range of max(W_M1, W_M2) for the single-port STT-MRAM, the 1R/1W STT-MRAM, and the proposed 1R/1W design. The bit-cell area is either metal pitch limited (MPL) or transistor width limited (TWL), depending on whether the horizontal dimension is determined by the metal pitch or the transistor width [17]. If max(W_M1, W_M2) < 6λ, the proposed 1R/1W MRAM can improve the bit-cell area by 25% compared with the conventional 1R/1W STT-MRAM. On the other hand, when max(W_M1, W_M2) is >6λ, the bit-cell area savings diminishes because the proposed design is in the TWL region [13].
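The MPL/TWL boundary discussed above can be summarized in a few lines of code. This is a sketch of our reading of the layout rules (the 12λ and 9λ metal-pitch floors and the 3λ active-to-active spacing follow the expressions given in this section); the function name and design labels are ours, not from the paper.

```python
def x_dimension_lambda(design, w_max):
    """Horizontal bit-cell dimension in units of lambda, following the
    metal-pitch-limited (MPL) vs transistor-width-limited (TWL) rules above.
    design: 'single', 'dual' (conventional 1R/1W), or 'proposed';
    w_max:  largest access-transistor width in lambda, i.e. max(W_M1, W_M2)."""
    W_A2A = 3  # active-to-active spacing, per the layout rules above
    mpl_floor = {"single": 12, "dual": 12, "proposed": 9}[design]
    return max(mpl_floor, w_max + W_A2A)  # TWL once w_max exceeds the crossover

# The MPL/TWL crossover sits at 9 lambda for the single-port and conventional
# dual-port cells, and at 6 lambda for the proposed shared-bitline cell:
# print(x_dimension_lambda("proposed", 6), x_dimension_lambda("dual", 6))  # 9, 12
```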
Simulations and Results
To evaluate the proposed memory design in comparison with the conventional MRAMs, we utilized a simulation framework [20] that comprises three components: (1) the Landau-Lifshitz-Gilbert (LLG) equation solver for modeling the magnetization dynamics of a spintronic device [21][22][23]; (2) the non-equilibrium Green's function (NEGF) formalism in order to determine the resistivity of the MTJ [24]; and (3) the simulation program with integrated circuit emphasis (SPICE) simulator to model the memory bit-cell circuit. The LLG equation solver determines the critical current for a 10 ns switching time, based on the parameters in Table 1. The voltage-dependent resistance of the MTJ is obtained by using the NEGF formalism [24][25][26]. The resistance function of the spintronic device was coupled with a commercial 45 nm transistor; then, transient SPICE circuit simulations were performed to evaluate the three different memory bit cells [13]. Specifically, a write voltage and a write access transistor width of each bit cell are determined using the following steps.
(Step 1) Set the initial write voltage V_WRITE to 1.0 V.
(Step 2) Obtain the minimum transistor width W_M1 that achieves a write-current driving capability for a 10 ns switching time, with a 20% write margin.
(Step 3) If W_M1 translates to a metal-pitch-limited (MPL) bit-cell area, two sub-steps are subsequently performed:
(Step 3.1) Increase W_M1 to the maximum width in the MPL region.
(Step 3.2) Reduce V_WRITE to the voltage at which the 10 ns switching requirement is met, with a 20% write margin.
The simulation results are presented in Table 2. In the case of the conventional single-port and 1R/1W STT-MRAMs, the write access transistor is initially sized at 120 nm (=6λ) after following (Step 1) and (Step 2). However, the application of (Step 3) adjusts the transistor width to 180 nm (=9λ). This is required to improve the dynamic write power consumption by reducing V_WRITE to 0.8 V, without any negative impact on the bit-cell area. (See from Figure 10 that the bit-cell area remains the same as the transistor width changes from 6λ to 9λ.) In the case of the proposed design, (Step 3.1) and (Step 3.2) are not applied because the point of intersection between MPL and TWL is moved to 6λ by the proposed bitline sharing, as shown in Figure 10. This is the reason that the write access transistor width for the proposed bit cell is 120 nm.
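The sizing procedure of Steps 1-3 can be summarized as the following sketch. The helper min_width_for_switching is a hypothetical placeholder standing in for the LLG/SPICE evaluation loop, and the voltage step and lower bound are illustrative choices, not values from the paper.

```python
def size_write_path(min_width_for_switching, mpl_max_width,
                    v_init=1.0, v_step=0.05, v_min=0.5):
    """Sketch of the write-path sizing steps above. min_width_for_switching(v)
    is a placeholder returning the smallest transistor width (in nm) that meets
    the 10 ns / 20% write-margin target at write voltage v; mpl_max_width is the
    largest width that still yields a metal-pitch-limited cell (180 nm for the
    conventional cells, 120 nm for the proposed cell)."""
    v_write = v_init                                   # (Step 1) initial write voltage
    width = min_width_for_switching(v_write)           # (Step 2) minimum width at v_init
    if width < mpl_max_width:                          # (Step 3) still metal-pitch limited?
        width = mpl_max_width                          # (Step 3.1) grow to the MPL maximum
        # (Step 3.2) lower V_WRITE while the enlarged transistor still switches in 10 ns.
        while (v_write - v_step >= v_min and
               min_width_for_switching(v_write - v_step) <= width):
            v_write -= v_step
    return v_write, width
```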
Accordingly, the proposed design exhibits a 25% smaller area than the conventional 1R/1W STT-MRAM. Note that the proposed design maintains the inherent advantages of the conventional 1R/1W design [10,11]. Because of the dedicated transistor and bitline for the read and write operations, the proposed memory enables simultaneous read and write accesses. This effectively overcomes the impact of a slow write operation on overall system performance [10]. Furthermore, the proposed memory can separately optimize the read-access transistor without considering write operations. By using a small access transistor for the read operation, it was possible to achieve 15% lower read power consumption and to improve the read-disturbance margin (defined as (IC − IR)/IC) by ~19%.

Conclusions

We propose a high-density 1R/1W dual-port STT-MRAM design. Our proposed design combines the RBL of an odd-column cell and the WBL of an even-column cell in the same row, relaxing the minimum achievable area constrained by the metal pitch. The bitline sharing incurs more simultaneous-access conflicts than the conventional design, owing to the creation of sneak currents or conflicts in the biasing condition of S_BL. However, this can be effectively addressed by using a bit-interleaving architecture with a long interleaving distance. The simulation results reveal that our proposed design improves the memory bit-cell area by 25% compared with that of the conventional dual-port design. The proposed 1R/1W MRAM achieves 15% lower read power and a 19% higher read-disturbance margin than the single-port STT-MRAM.
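As a closing numerical note, the read-disturbance margin defined above, (IC − IR)/IC, is straightforward to evaluate; the currents below are placeholders of our own choosing, not values from Table 2.

```python
# Read-disturbance margin per the definition above: (I_C - I_R) / I_C.
# The currents are hypothetical placeholders, not reported values.
i_c = 60e-6   # critical MTJ switching current
i_r = 12e-6   # current through the MTJ during a read
margin = (i_c - i_r) / i_c
print(f"read-disturbance margin: {margin:.0%}")  # a smaller read current raises the margin
```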
7,677.6
2022-12-01T00:00:00.000
[ "Computer Science" ]
HSP60 silencing promotes Warburg-like phenotypes and switches the mitochondrial function from ATP production to biosynthesis in ccRCC cells

HSP60 is a major mitochondrial chaperone for maintaining mitochondrial proteostasis. Our previous studies showed that HSP60 was significantly downregulated in clear cell renal cell carcinoma (ccRCC), the most common type of kidney cancer characterized by the classic Warburg effect. Here, we analyzed datasets in The Cancer Genome Atlas and revealed that higher HSP60 expression correlated with better overall survival in ccRCC patients. We also stably knocked down or overexpressed HSP60 in ccRCC cells to investigate the effects of HSP60 expression on the transition between oxidative phosphorylation and glycolysis. We confirmed that HSP60 knockdown increased cell proliferation, whereas its overexpression decreased cell growth. Proteomics and metabolomics revealed that HSP60 knockdown promoted Warburg-like phenotypes with enhanced glycolysis and decreased mitochondrial activity. Consistent with this finding, isotope tracing showed that the metabolic flow from glycolysis to the TCA cycle was reduced. However, HSP60 silencing enhanced mitochondrial functions in glutamine-directed biosynthesis, with increased flow in two parts of the TCA cycle, Gln→αKG→OAA→Asp and Gln→αKG→ISO→acetyl-CoA, resulting in elevated de novo nucleotide synthesis and lipid synthesis. Proteomic analysis indicated that HSP60 silencing activated NRF2-mediated oxidative stress responses, while glutamate generated from glutamine increased glutathione synthesis for quenching the excessive reactive oxygen species (ROS) produced upon elevated cell growth. We further found that HSP60 silencing activated the MEK/ERK/c-Myc axis to promote glutamine addiction, and confirmed that ccRCC cells were susceptible to oxidative stress and glutaminase inhibition. Collectively, our data show that HSP60 knockdown drives metabolic reprogramming in ccRCC to promote tumor progression and enhances mitochondria-dependent biosynthesis.

Introduction

HSP60 is the major ATP-dependent chaperone in mitochondria and plays a crucial role in the maintenance of mitochondrial proteostasis. It has been amply documented that HSP60 is involved in many complex diseases, including neurodegenerative disorders, atherosclerosis, and heart disease, as well as multiple inflammatory diseases [1][2][3][4]. Studies have also shown that HSP60 exerts both pro-survival and pro-apoptotic functions in tumors. It enhances tumor cell growth, suppresses stress-induced apoptosis, and promotes tumorigenesis and metastasis [5][6][7]. The binding of HSP60 with cyclophilin D (CypD) regulates the mitochondrial permeability transition pore to inhibit cell apoptosis [7]. Consequently, high HSP60 expression is present in colorectal carcinogenesis, ovarian cancer, glioblastoma, and prostate cancers [3,5,[8][9][10]. Regarding its pro-apoptotic functions, HSP60 expression has been found to be downregulated in lung cancer and bladder cancer [2,11]. These results suggest that HSP60 plays tumor-type-dependent roles, and its functions in tumor progression need to be examined in the context of a specific cancer. Clear cell renal cell carcinoma (ccRCC) is the most common type of renal cancer [12][13][14], accounting for about 70%-80% of kidney cancer cases and being one of the most lethal human cancers when it metastasizes [12][13][14][15][16]. Tumorigenesis in ccRCC is mainly attributed to mutation of the von Hippel-Lindau (VHL) gene [12,15,16].
In this disease, mutations are also frequently present in chromatin modifiers and in genes encoding products involved in the mTOR and PI3K pathways. Significant changes in the metabolic pathways that regulate energetics and biosynthesis have been uncovered in such cases, indicating that ccRCC is a metabolic disease. A detailed study revealed that the low level of fructose-1,6-bisphosphatase (FBP1) in most ccRCC patients antagonized glycolytic flux [17]. Moreover, arginase 2 downregulation reduced urea cycle activity for the conservation of pyridoxal-5′-phosphate and the reduction of polyamine accumulation [18]. A recent isotope tracing study also demonstrated enhanced glycolysis and reduced TCA cycle activity in ccRCC patients in vivo, and indicated that ccRCC exhibits the classic Warburg effect, defined as a switch from oxidative phosphorylation to glycolysis [19]. Our earlier studies uncovered that HSP60 expression was unequivocally downregulated in ccRCC tissues compared with the level in pericancerous tissues and that HSP60 downregulation disrupted mitochondrial proteostasis and enhanced tumor progression in ccRCC cells [20]. The decreased mitochondrial respiratory capacity in ccRCC also rendered cells sensitive to glycolytic inhibition [21]. All of these results support the longstanding hypothesis that mitochondrial dysfunction is involved in tumorigenesis. However, it is unclear whether tumor progression in ccRCC depends on any metabolic processes occurring in mitochondria and how HSP60 downregulation promotes tumor progression. In the present study, we established stable ccRCC cell lines in which HSP60 expression was either knocked down or overexpressed. We carried out proteomics, metabolomics, and isotope tracing to delineate the effects of HSP60 knockdown on metabolic reprogramming in ccRCC. We revealed that HSP60 silencing induced glutamine addiction in ccRCC to support nucleotide synthesis and to quench ROS generated upon mitochondrial dysfunction, which facilitates cell proliferation in ccRCC.

[20]. Cells were grown in RPMI1640 medium (Wisent, Canada) supplemented with 10% fetal bovine serum (FBS) and 1% penicillin and streptomycin (Wisent). Cells were cultured in an incubator containing 5% CO2 at 37°C.

Colony formation assay

Cells were seeded in six-well plates at 200 cells/well. HSP60-KD and control cells were cultured at 37°C for 10 days, while HSP60-OE and control cells were cultured at 37°C for 15 days, followed by colony counting. The formation of colonies was checked using a bright-field microscope, and the cells were fixed with 1 ml of 100% methanol for 20 min at room temperature. The cells were then rinsed with water, and 300 μl of crystal violet staining solution was added into each well, with staining being allowed to occur for 5 min at room temperature. The cells were next imaged using Image Lab (Bio-Rad Laboratories, CA) and the colonies were counted using ImageJ software (ImageJ 1.51, NIH, USA).

Metabolomics

Cold methanol extraction was used for the collection of cellular metabolites [10,22]. Briefly, the cells were washed twice with ice-cold PBS and were extracted three times using pre-chilled 80% methanol (−80°C). Macromolecules and debris were removed by centrifugation, and the metabolites within the supernatant were concentrated by drying completely using a Speedvac (Labconco, USA) for mass spectrometry analysis. The chromatographic peak area was used to represent the relative abundance of a metabolite, and the protein content was used for normalization.
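To make that normalization step concrete, the following sketch divides each sample's peak areas by its protein content before computing a group fold change. It is our illustration only: the sample names, column names, and numbers are invented, not data from this study.

```python
# Minimal sketch of peak-area normalization by protein content.
# All values and column names are illustrative placeholders.
import pandas as pd

peaks = pd.DataFrame({
    "sample": ["KD1", "KD2", "CTRL1", "CTRL2"],
    "glutamate_area": [4.2e6, 4.6e6, 2.1e6, 2.3e6],  # chromatographic peak areas
    "protein_mg": [0.52, 0.58, 0.50, 0.55],          # per-sample protein content
})
peaks["glutamate_norm"] = peaks["glutamate_area"] / peaks["protein_mg"]

kd = peaks.loc[peaks["sample"].str.startswith("KD"), "glutamate_norm"].mean()
ctrl = peaks.loc[peaks["sample"].str.startswith("CTRL"), "glutamate_norm"].mean()
print(f"KD/CTRL fold change: {kd / ctrl:.2f}")
```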
Isotope tracing metabolomics

Isotope tracing metabolomics was performed following a method we reported recently [23]. We used glucose- and glutamine-free RPMI1640 medium (Gibco, Thermo Fisher Scientific, USA), which was supplemented with either 11 mM 13C6-glucose or 2 mM 13C5-glutamine (Cambridge Isotope Laboratories, USA). Cells were grown in the 13C-containing medium for a set time (12 h). In parallel with this, an unlabeled culture was prepared by adding equal concentrations of unlabeled glucose or glutamine to the medium to identify unlabeled metabolites. The metabolites were extracted by the cold methanol extraction method for mass spectrometry analysis.

Quantitative proteomic analysis by LC-MS/MS

Proteomic analysis was performed as described previously. A total of 100 μg of protein was extracted from cells with 8 M urea. Then, the proteins were digested with trypsin (Promega, Madison, USA) at 37°C overnight. Tryptic peptides were desalted and labeled with the tandem mass tag (TMT, Thermo Fisher Scientific, USA), in accordance with the manufacturer's protocol. Then, the mixed labeled peptides were subjected to LC-MS/MS analysis, and the MS/MS spectra from the mass spectrometer were searched against the UniProt human database using the SEQUEST search engine of Proteome Discoverer software (version 2.1).

Western blotting

Cells were lysed in RIPA lysis buffer (Beyotime Institute of Biotechnology, Beijing, China) supplemented with Protease Inhibitor Cocktail (MERCK, Darmstadt, Germany). Equal amounts of protein were separated on a 12% SDS-PAGE gel and then transferred onto a PVDF membrane. Western blot analysis was performed following a standard procedure. Anti-HSP60, anti-PDH, anti-MEK1, anti-ERK1/2, anti-phospho-ERK1/2, and anti-c-Myc antibodies were purchased from Cell Signaling Technology (CST, Danvers, MA), while the anti-GLS1 antibody was purchased from Abcam (Cambridge, MA, USA).

Glycolysis assay and mitochondrial respiration analysis

The extracellular acidification rate (ECAR) and oxygen consumption rate (OCR) were measured using the Seahorse XF Cell Glyco Stress Test and XF Cell Mito Stress Test, respectively, on an XF24 Flux Analyzer (Agilent, USA). The basal glycolysis, basal respiration, and ATP synthesis-linked respiration were analyzed and visualized using Wave software (version 2.3.0, Seahorse Bioscience, Agilent Technologies, Waldbronn, Germany) and normalized by the protein content.

Determination of mitochondrial mass

The mitochondrial mass of cells was determined using the fluorescent probe MitoTracker Green FM (CST, Danvers, MA).

Detection of cellular reactive oxygen species

Cellular reactive oxygen species (ROS) were detected using CellROX Deep Red Reagents (Invitrogen, Grand Island, NY), following the manufacturer's instructions. In brief, cell medium was supplemented with 5 μM CellROX Deep Red probe; then, cells were incubated at 37°C for 30 min. Next, the cells were washed twice with PBS and analyzed on a BD Calibur flow cytometer (Becton Dickinson, Franklin Lakes, NJ).

Detection of cellular triglycerides

Cellular triglycerides were measured using a triglyceride assay kit (Applygen Technologies, Beijing, China). Briefly, cells were seeded into 6-cm plates, rinsed with PBS, scraped off, and lysed with lysis buffer. The total protein concentration was estimated by the BCA method for normalization.

Statistical analysis

Statistical analysis was carried out using GraphPad Prism 6.0 software.
Student's t-test was used to determine the significance of differences, and p-values < 0.05 were considered significant.

Low expression of HSP60 enhances cell growth in ccRCC

Our previous studies demonstrated that HSP60 was downregulated in ccRCC tissues compared with that in pericancerous tissues [18]. This was further confirmed by analyzing the transcriptome datasets of kidney renal clear cell carcinoma tissue and normal kidney tissue in The Cancer Genome Atlas, in which the mean mRNA level of HSP60 for ccRCC tissue was slightly lower than that for normal tissue (Fig. 1A). To further analyze the effects of HSP60 expression on tumor progression, we determined the correlation between HSP60 levels and patient overall survival (OS) rates, showing that patients with higher HSP60 expression tended to have a better OS than those with lower HSP60 (Fig. 1B). This demonstrates that low HSP60 expression promotes tumor progression. To confirm the effects of HSP60 expression on the growth of ccRCC cells, we established stable cell lines in which HSP60 was knocked down (KD) or overexpressed (OE) in 786-O and 769-P cells. HSP60-KD cells were constructed by shRNA interference, while HSP60-OE cells were constructed by transfection with a lentivirus encoding HSP60 with a C-terminal flag tag. Cells were transduced with a scrambled shRNA with no human genome homolog or with a blank pLVX-IRES-ZsGreen1 vector as controls for HSP60-KD and HSP60-OE, respectively. HSP60 knockdown or overexpression in human ccRCC 786-O and 769-P cells was examined by western blotting (Fig. 1C). The CCK8 assay and colony formation assay were used to determine the effects of HSP60-KD or -OE on cell proliferation rates (Fig. 1D and E). Consistent with a previous observation [20], HSP60 silencing promoted the growth of 786-O and 769-P cells compared with that of the control cells. In contrast, HSP60 overexpression in 786-O cells inhibited cell proliferation, which demonstrated that low HSP60 expression promotes the growth of ccRCC cells in vitro and in patients.

HSP60 silencing promotes classic Warburg phenotypes in ccRCC cells

We carried out proteomics, metabolomics, and isotope tracing to characterize HSP60-KD cells. We identified differentially expressed proteins based on the following cut-off values: protein fold change > 1.3 or < 0.75 with p < 0.05, derived from population statistical analysis (Fig. S1D). Ingenuity Pathway Analysis (IPA) identified that a majority of proteins associated with OXPHOS and the TCA cycle were downregulated, whereas proteins involved in glycolysis such as HK, PFK, ENO2, and LDH (Fig. S1A, labeled in red) were upregulated, indicating that HSP60 silencing triggered a metabolic switch in 786-O cells (Fig. 2A). Using 13C6-glucose isotope tracing, we observed increased glycolytic intermediate flux (Fig. S1B). To further confirm the increase of glycolysis in 786-O-HSP60-KD cells, we performed Seahorse assays and showed that the glycolytic rates were increased in HSP60-KD cells (Fig. 2B). In contrast, HSP60 silencing lowered basal respiration and decreased ATP production in 786-O cells (Fig. 2C and D). These results clearly demonstrate that HSP60 knockdown aggravated the Warburg effect by triggering the switch from oxidative phosphorylation to glycolysis. Pyruvate dehydrogenase (PDH) is a critical enzyme linking glycolysis to the citric acid cycle by catalyzing the conversion of pyruvate to acetyl-CoA for citrate synthesis.
The inhibition of PDH prevents the conversion of pyruvate to acetyl-CoA and induces glycolytic metabolism [25]. Western blotting revealed that PDH was downregulated in HSP60-KD cells (Fig. 2E). This was confirmed by 13C6-glucose isotope tracing, showing that the isotope abundances of (M+2) acetyl-CoA and citrate derived from glucose were significantly reduced in HSP60-KD cells (Fig. 2F). Taken together, these findings demonstrate that HSP60 silencing simultaneously enhances glycolysis, downregulates PDH expression to block pyruvate from entering the TCA cycle for oxidative phosphorylation, and decreases oxidative respiration, leading to aggravated Warburg phenotypes.

HSP60 silencing switches mitochondrial functions to favor glutamine-directed metabolism

Analysis of the metabolomics data by MetaboAnalyst 4.0 revealed that HSP60 silencing activated pyrimidine metabolism and alanine, aspartate, and glutamate metabolism (Fig. S2A) [38], resulting in significant increases in UMP and TTP, while AMP and GMP were also increased in HSP60-KD cells compared with the levels in control cells (Fig. S2B). The levels of glutamate and aspartate required for de novo pyrimidine synthesis were higher in HSP60-KD cells than in control cells (Figs. S2B, S2C). The cellular aspartate level is a limiting factor in de novo nucleotide synthesis, which is crucial for tumor growth [26][27][28]. Aspartate can be generated from glucose oxidation, glutamine oxidation, or glutamine reductive carboxylation [24], among which glutamine oxidation is the major pathway for pyrimidine-based nucleic acid synthesis. During de novo pyrimidine synthesis, four carbons in aspartate are derived from glutamine via the TCA cycle, among which three carbons are converted into UMP for nucleic acid synthesis (Fig. 3A). Using 13C5-glutamine tracing, we detected increases in isotope-encoded α-KG M+5, succinic acid M+4, malic acid M+4, and aspartate M+4 in 786-O-HSP60-KD cells (Fig. 3B). Notably, the isotope-encoded UMP M+3 and UTP M+3 derived from aspartate M+4 were increased (Fig. 3B). These results indicate that HSP60 knockdown promoted glutamine-directed nucleotide synthesis. To examine whether the HSP60-silencing-mediated cell growth was glutamine-dependent, we cultured HSP60-KD and control cells in medium with or without glutamine, and found that the growth rate of HSP60-KD cells was strikingly reduced in glutamine-free medium compared with that of control cells (Fig. 3C), which demonstrated that fast-growing ccRCC cells are more glutamine-dependent. Glutaminase (GLS) catalyzes the conversion of glutamine to glutamate. Consistent with this, HSP60 silencing decreased glutamine levels in both the cells and the medium, whereas intracellular glutamate levels were significantly increased (Fig. S2C). GLS1 (KGA) and its shorter splice variant glutaminase C (GAC) are localized to the mitochondrion. Using western blotting, we found that HSP60 silencing did not alter KGA but upregulated GAC, indicating that GAC plays a key role in ccRCC progression (Fig. 3D). This is consistent with an earlier report describing that GAC is essential to mitochondrial glutamine metabolism in cancer cells [29][30][31]. We further treated cells with the GLS1 inhibitor BPTES and discovered that HSP60 silencing sensitized cells to GLS1 inhibition (Fig. 3E). In contrast, re-expression of HSP60 in 786-O-HSP60-KD cells or addition of exogenous glutamate and dimethyl 2-oxoglutarate (DM-αKG) rescued GLS1-inhibition-mediated cell death (Figs. S2D, S2E, S2F).
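For readers unfamiliar with how such M+n values are read, the sketch below computes fractional isotopologue enrichment from peak areas, as in the aspartate M+4 measurement above. It is our illustration: the areas are invented, and the natural-abundance correction that real workflows apply first is omitted for brevity.

```python
# Sketch: fractional enrichment of isotopologues (M+0 ... M+n) from peak areas.
# Invented numbers; real pipelines first correct for natural 13C abundance.

def isotopologue_fractions(areas):
    """areas[i] is the peak area of the M+i isotopologue; returns fractions."""
    total = sum(areas)
    return [a / total for a in areas]

# Hypothetical aspartate peak areas (M+0 .. M+4) after 13C5-glutamine labeling.
kd   = [3.0e6, 0.2e6, 0.3e6, 0.2e6, 2.5e6]  # HSP60-KD cells
ctrl = [5.0e6, 0.2e6, 0.3e6, 0.2e6, 1.1e6]  # control cells
print("KD   aspartate M+4 fraction: %.2f" % isotopologue_fractions(kd)[4])
print("CTRL aspartate M+4 fraction: %.2f" % isotopologue_fractions(ctrl)[4])
# A higher M+4 fraction in KD cells indicates more glutamine-derived aspartate.
```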
IPA analysis revealed that the ERK/MAPK signaling pathway was activated in HSP60-KD cells (Fig. 2A), which was verified by western blotting, showing that MEK1, p-ERK1/2, and the downstream target c-Myc were upregulated (Fig. 3F). Earlier studies demonstrated that the MEK/ERK/c-Myc pathway regulates glutamine metabolism in tumors [32][33][34][35][36]. When cells were treated with U0126, an inhibitor of ERK1/2 activation, the growth of HSP60-KD cells was significantly suppressed compared with that of control cells (Fig. S3F). The present study suggests that MEK/ERK/c-Myc is responsible for the HSP60-silencing-mediated glutamine addiction in ccRCC progression. Moreover, the metabolomics results showed that fatty acids were increased in HSP60-KD cells (Fig. S3A), which was consistent with the proteomic data indicating that two crucial enzymes for fatty acid synthesis, acetyl-CoA carboxylase (ACC) and fatty acid synthase (FASN), were upregulated (Fig. S3B). Consequently, we detected that triacylglycerol (TAG) was increased in HSP60-KD cells compared with the level in control cells (Fig. S3E). It was reported that glutamine reductive carboxylation (RC) generates citrate and lipogenic acetyl-CoA from glutamine to support lipid synthesis under hypoxic conditions (Fig. S3C) [37]. Using 13C5-glutamine tracing, we found that isotope-encoded citrate (M+5), aconitate, and isocitrate were all significantly increased in HSP60-KD cells (Fig. S3D), indicating that HSP60 silencing promoted glutamine reductive carboxylation. Taken together, these findings provide experimental evidence that HSP60 silencing rewires metabolic pathways in mitochondria to enhance glutamine-directed biosynthesis.

Enhanced glutamine metabolism in HSP60-KD cells facilitates GSH synthesis to quench increased ROS

It has been amply documented that mitochondrial dysfunction and rapid cell proliferation generate excessive ROS. Indeed, HSP60 silencing significantly increased ROS levels in both 786-O cells and 769-P cells (Fig. 4A). To determine whether the enhanced ROS production benefits cell growth, we treated 786-O cells with rotenone, N-acetylcysteine (NAc), and N-tert-butyl-α-phenylnitrone (PBN). Interestingly, we found that a low concentration of rotenone slightly increased cell growth, whereas a high concentration inhibited it (Figs. S4A and S4B). In contrast, the ROS scavengers NAc and PBN greatly enhanced cell growth, suggesting that the quenching of ROS promotes the growth of ccRCC cells (Figs. S4C-S4F). Metabolomics analysis showed that HSP60 silencing did not alter intracellular GSH levels, whereas it boosted the GSSG level (Fig. 4B), indicating that HSP60-KD cells were under oxidative stress. Proteomic analysis indicated that NRF2-mediated oxidative stress responses were activated in HSP60-KD cells (Fig. 2A). In this context, we uncovered that enzymes involved in GSH synthesis, namely, glutamate-cysteine ligase (GCL), glutathione synthetase (GS), and glutathione peroxidase 1 (GPX1), were all upregulated in 786-O-HSP60-KD cells (Fig. 4C). The scheme for the incorporation of isotope-encoded glutamate into GSH and GSSG is displayed in Fig. 4D. Using 13C5-glutamine tracing, we demonstrated that isotope-labeled glutamate (M+5), GSH, and γ-glutamylcysteine were elevated in HSP60-KD cells, as were the isotope-encoded GSSG (M+5) and (M+10) (Fig. 4E and F). These findings indicate that enhanced glutamine metabolism promotes GSH synthesis to support cell proliferation.
Conclusion

In summary, the present study demonstrated that HSP60 knockdown aggravated classic Warburg phenotypes in ccRCC cells, consistent with the poorer prognosis of ccRCC patients with low HSP60 expression. HSP60 silencing activated the MEK/ERK/c-Myc pathway to enhance glutamine-directed metabolism, which switched mitochondria from ATP production to biosynthesis to promote tumor progression. HSP60 knockdown also enhanced NRF2-mediated oxidative stress responses to increase GSH production for quenching the ROS generated in rapidly proliferating cells.
4,305
2019-05-14T00:00:00.000
[ "Biology", "Chemistry" ]
Guilt-Tripping: On the Relation between Ethical Decisions, Climate Change and the Built Environment

The curiosity of how the built environment, implicitly and explicitly, affects how citizens and users make choices in their everyday life related to climate change is on the rise. If there is a nicely designed bike lane, the choice to bike to work is much more easily taken than if the only option is a densely trafficked road. But which responsibility does the built environment have for citizens to be as climate neutral as possible and, in extension, who should it burden? Is it the individual user, the designer, the planner, the policymaker or global politics? Media is playing an important and complicated role here; it works both as a source of information and as a trigger, instigating feelings of guilt, fear, and shame in order to set change in motion. In this article, I will discuss everyday climate-related decision-making fuelled by shame and guilt, drawing on Judith Butler's writings on ethical obligations and narrating it with findings from a mapping study of daily transportation routes that I conducted in a middle-class suburb outside of Lund, in Sweden. There appears to be a dissonance between the relatively high knowledge about one's responsibility concerning climate change and the limited space to manoeuvre in everyday life. Even though shame and guilt may be driving forces to make decisions, the possibility to imagine and to change needs to be expanded.

Introduction

For reasons unfathomable to the most experienced prophets in Maycomb County, autumn turned to winter that year. We had two weeks of the coldest weather since 1885, Atticus said. Mr Avery said it was written in the Rosetta Stone that when children disobeyed their parents, smoked cigarettes and made war on each other, the seasons would change: Jem and I were burdened with the guilt of contributing to the aberrations of nature, thereby causing unhappiness to our neighbours and discomfort to ourselves. (Lee, 1960/2002)

This is a fictional quote that expresses popular belief and burdens children with guilt. By doing so, it neatly captures the topics that this article wishes to discuss: the level of responsibility that lands on the users of the built environment concerning climate matters and how shame can play a role in everyday decision-making. This article is driven by a curiosity concerning how the built environment, implicitly and explicitly, affects how citizens and users make choices in their everyday life related to climate change. If there is a nicely designed bike lane, the choice to bike to work is much more easily taken than if the only option is a densely trafficked road. But what is the role of the built environment in encouraging citizens to be as climate neutral as possible and, in extension, where should this responsibility be placed? Buildings, roads, walls, bridges and other built elements can connect, disconnect, produce and perform through their use and thereby become important actors in many everyday choices as they are activated in social and political settings (Yaneva, 2017, p. 72). Catastrophic images and reports communicated by the media might be overwhelming, leaving us with a desire to act and to change quickly. In Precarious Life, Judith Butler (2004, 2011) writes about the rage and grief invoked in individuals through images of war reported by the media; she wonders if we must be overwhelmed to act (Butler, 2011, p. 3).
In this article, I will discuss how shame and guilt may contribute to climate-related decision-making in everyday life. I will follow Butler's line of reasoning on ethical obligations and narrate it with findings from a mapping study of daily transportation routes that I conducted in a middle-class suburb outside of Lund, in Sweden. A tentative finding in the empirical study is that the respondents, in general, had a relatively high awareness of their responsibility concerning climate change and a rather narrow possibility of change in their everyday lives. This discrepancy must be addressed, and room must be made to increase the possibility of making climate-friendly adjustments in transportation routes.

Everyday Transportation

The mapping study took place in Stångby, which is an expanding village north of Lund, in the south of Sweden. Figure 1 shows a view over the newest part, with residential single-family housing. First, in May 2019, a flyer was distributed in residents' mailboxes with information about the study and a call for participants. Out of the 200 flyers, 10 people answered that they wanted to take part. A few weeks later, in June 2019, I went back to distribute the packs with maps and questionnaires; I handed out 40 packs in total, to the ones that had responded to the call and to people I had talked to during my observational visits to the location. I gave some of the respondents double packs, encouraging them to pass one of them on to a partner, a neighbour or a friend. The packs included maps at two scales: one focused on the area of Stångby and Lund, and one zooming out to include a larger region with Malmö in the south and Landskrona in the north. The instructions were to map out everyday routes in different colours depending on whether they were work-, consumption- or leisure-related and to make notes of what time and what type of transportation was used from Monday to Sunday, consecutively. There was also a questionnaire included in the pack, with questions about decisions concerning transportation, and a prepaid envelope to send the material back to me. I received 14 packs back with maps that were filled out between June and September. In this article, I will mainly use examples from the questionnaires and comments made in the margins of the maps. I have also followed letters to the editor concerning climate change and everyday life in the Swedish newspapers (mainly Sydsvenska Dagbladet and also Dagens Nyheter). I am using the empirical material from the mapping study qualitatively. The sample is rather small and could perhaps be considered biased since the packs were, to some extent, distributed between friends and family. In this article, the material is used mainly to contribute situatedness to a wider ethical discussion.

Ethical Obligations

It is important to bear in mind that climate change is an international and intergenerational problem that strikes differently over space and time, and even if it is a problem for everyone, it is far from just. The western world is responsible for a major part of CO2 emissions; however, at the moment, the effects are more acute in, for example, countries of the African continent (Williston, 2019, p. 71). Butler's discussion on ethical obligations in hard times is based on war, specifically violence sanctioned by the US government in the years after the attacks on the Twin Towers in New York on September 11th, 2001.
Butler's reasoning is based on the individual citizen's responsibility, drawing on the work of Emmanuel Levinas and Hannah Arendt. Still, it does not allow anyone to be singled out but, on the contrary, to always be bounded in relation to the other (Butler, 2004, 2011, 2016). Even if the geographical distance spans continents, we have ethical obligations to one another; we also have ethical obligations to the ones in our proximity, even those with whom we did not choose to live (Butler, 2011, p. 15). By writing this, she claims that what happens nearby also happens far away, and that involuntary cohabitation is a prerequisite for equality but also precarity. Butler describes how anger and grief can be dangerous if used as an excuse for governments or people in power to make hasty decisions, ideas that are also relevant in a discussion on ethical obligations concerning climate change. If grieving is to be feared, the fears can, in turn, become starting points for impulsive decision-making and quick fixes, leading to the elusive idea of restoring everything to a former order, or rather a fantasy that the world was once orderly (Butler, 2004, p. 29). I am mindful that the translation from war to climate change may not be immediate, but many of the preconditions are similar. Take, for example, the issues of distance and proximity: even if the effects of our pollution do not necessarily have a direct effect on our everyday life, we are informed by the media, by measurements and by scientists that there are effects on somebody else's everyday life. Carbon pollution does not respect national or political boundaries (Williston, 2019, p. 70), and that knowledge in itself should come with ethical obligations. In some countries, the effects of climate change are so severe that they make places unliveable and cause migration (Williston, 2019, p. 17). As in war, there is a shared precariousness that comes with the uncertainty of what effects we will witness in the near future and where they will strike hardest. Whereas war usually plays out between two or more nation-states and alliances, climate change defies national borders; it is global and will affect us all to different degrees. The inequality inherent in war (some lives matter and others do not) is also important to reflect upon in relation to climate change: whose lives are and will be protected, and whose are not considered grievable (Butler, 2004, p. 32). While Butler is focused on the perspective of the human, I will extend the focus to also incorporate non-human elements, in this case the built environment, taking some inspiration from an Actor-Network approach (Latour, 2005; Yaneva, 2009), as well as from Donna Haraway's 'sympoiesis' (Haraway, 2016, p. 58), which means being creative together. Haraway argues that there is an urgent need to reshape and move the boundary between the 'critters' of planet earth and to work collectively for everyone to be able to coexist. According to Yaneva, non-human actors play a vital role in everyday decision-making by mediating agency, connecting, disconnecting, performing and enacting the different realities that make up everyday life. The built environment and its thingy nature become a political actor when facilitating or hindering important decisions concerning climate change (Latour, 2017; Yaneva, 2017). For example, a bike lane mediates controversies that surround the built environment, its pedagogical possibilities, and the shift to climate-friendly lifestyles.
Following a reinforced surface unfolds the various quotidian life situations where it operates and the many controversies (Yaneva, 2012) it takes part in, materializing wider notions such as safety and time planning in the everyday. Albena Yaneva writes about the building as a microcosmos (Yaneva, 2012, p. 26), which is a way to describe architecture as networks that consist of different actors, both human and non-human, that change over time. By tracing these networks, we learn not only about what the built environment does but also how it teaches us to behave in a situation. Even if these are different perspectives, they share the view that our lives are dependent on boundedness, to other humans and also non-humans, that is non-communitarian, that somehow distorts the idea of proximity and distance, and that places focus on the boundary itself rather than on what it potentially separates and unites, a moral bond (Butler, 2004, p. 49; Haraway, 2016, p. 31; Yaneva, 2017, p. 29). These perspectives are important to the discussion of the responsibility of the individual and how she forms different assemblages that incorporate, for example, shame associated with climate-related decision-making, and how responsibility somewhat shifts between humans and non-humans. Primarily on social media such as Instagram and Facebook, environmental activists in Sweden have introduced the phenomenon of flygskam, which translates from Swedish as flight shame (Larsson, 2019; Mkono, 2019). It is an initiative aiming to make individuals feel ashamed of habits associated with a certain, more affluent lifestyle. In response to this, many Instagram users chose to show off their holiday travels by train instead. Even if there is a risk of misinformation on both sides of the climate change debate, a tension is embodied in the social media and media coverage of activist Greta Thunberg (Jung, Petkanic, Nan, & Hyun Kim, 2020), via whom a rising awareness among the general public has developed. The knowledge that drastic changes need to be made quickly to meet the requirements to lower CO2 emissions globally according to the Paris Agreement can cause 'eco-anxiety,' a high level of stress in the individual (Mkono, 2019). It emerges as a consequence of the clash between doomsday scenarios on the one hand and the unwillingness of some people to inform themselves on the other. Within the debate on flight shame, it has been suggested that the ones who really should reconsider their lifestyle concerning climate change did not seem to care or understand their part. People tend to be optimistic and unaware of what part their individual lives play in the big picture. This aligns with the so-called 'optimism bias,' which means that we are less likely to believe that something bad will happen to ourselves than to someone else (Sharot, 2011). In some situations, this is helpful, but in this case, it complicates the understanding of our responsibility towards climate change. At one end, there is the idea that every individual needs to act and change now and, at the other, there are the ones who feel that it does not matter at all what they do and that the responsibility lies elsewhere. The latter view was expressed by an agitated participant in my survey, who also made the very important point that everyone has different circumstances in their life that play important roles in everyday decision-making.
The responsibility to change rests on various shoulders: the individual, society, politicians, culture and so on; it is, however, an ethical dilemma that links to care. Peg Rawes, drawing on Spinoza's writings on the 'common,' an aesthetic of care and wellbeing that he sees in shared patterns of human relations, describes how achieving a sense of wellbeing is not just a job for the individual citizen but also a greater concern for the larger group (Rawes, 2013, p. 51). Rawes studies the works of conceptual artist Agnes Denes who, for example, planted Wheatfield: A Confrontation in Battery Park, New York, in the early 1980s as a critique and commentary on capitalist construction (Rawes, 2013, p. 41). Rawes sees a need, especially in urban environments, for mental as well as physical aspects of architecture to be addressed to achieve more general welfare in society (Rawes, 2013). To take care of one's own decision-making or one's dwelling can be motivated by how others take care of theirs. Maria Puig de la Bellacasa explains two important elements in care: the first is the aspect of an emotional connection to something, and the second is the work associated with taking care of something (Puig de la Bellacasa, 2017, p. 42). The aspects of care are relevant both to decision-making itself and to the potential affects that drive it. In the questionnaires that I handed out, half of the respondents state that they have made transportation decisions prompted by climate shame and that their resolution to that problem has been, firstly, to try as much as possible to use public transportation and, secondly, to try to be clever when they use the car: picking up kids, doing some shopping, and running errands so that they are efficient and minimise the frequency of car use. Why is it then that 9 out of 10 state that the car is the most used means of transportation in their everyday life and that only 5 out of 14 are happy about that decision? Figure 2 depicts a vehicle situation for a resident of the part of Stångby built in the 1960s.

The Bike Lane

To somewhat set the scene, let me describe a typical bike ride based on a route to work that one of the respondents recorded in the maps, a route that shared many similarities with those of the other bicycle commuters. It departs from the home, and the destination is the workplace. When the commute begins, it passes through a residential area where the traffic is shared; an occasional car will appear, pedestrians of different ages walk by, and its rhythm varies due to traffic lights, a tunnel, speed bumps, and so on. Upon arrival at the actual bike lane, the speed increases and consideration needs to be shown mainly to other cyclists; the bike lane runs parallel to the railroad, and a passing train might become a brief companion on the journey. Approaching the denser urban area, the bike lane narrows and the mix of modes of transportation presents itself anew. The bike lane continues for a bit, though narrower and with interruptions like pedestrian crossings and road crossings. The final stretch of the journey to work is on a street, through a park, crossing a major road with cars and public transport, and finally arriving at the bike stand next to the entrance to the workplace. The trajectory described is accounted for in Figure 3, where the respondents' Wednesday routes are marked out superimposed on one another. This example presents a problem-free day, where the choice to cycle is easily taken. Still, what are the circumstances that make the car such a common means of transportation?
In the questionnaires and the comments in the margins of the maps, different problems were given as reasons, such as the weather, temporal aspects, traffic problems, safety and issues related to the private economy. In the south of Sweden, where this study was conducted, the main obstacles posed by the weather are wind, cold and rain, but some years, snow and ice may also cause problems for cyclists. The intention to cycle was a choice that would decrease one's carbon footprint, but it was hindered by discomfort due to the weather. It is often argued that with the right attire, the weather is not an obstacle, but for many people it still is, and the weather has always been challenging occasionally. The weather is a contingent and complicating factor in everyday decision-making; the choice to bike was made from a place of care, for the planet and fellow inhabitants, both human and non-human. Where does the responsibility land here? One could say that it lands, at least to a certain extent, on designers and planners. There are microclimatic adjustments to be made in the built environment. However, many people will choose the car before the bicycle anyway, at least on a windy day with temperatures below 10 degrees. Another aspect that affects choices made concerning daily transportation is time; it has most frequently come up in the respondents' answers as rush hours, evening/night, weekends and as a shortage of time/perceived stress. It was a dominant factor in decision-making related to public transport versus the car, where issues of public transport timetables, cancellations and crowdedness surfaced. Temporal aspects concerning riding the bike mainly addressed questions of security: of feeling unsafe riding in the dark and in deserted places at night, or having to bike on shared roads with heavy traffic during rush hours. In this example, the responsibility is somewhat clearer; it is possible for policymakers, designers, planners and employers to work towards greater comfort by using means such as prioritisation, budgeting, working with lighting, scheduling, etc. These adjustments would benefit from looking at the built environment and its spaces in relation to typical temporal situations such as rush hours. Puig de la Bellacasa writes: "Personal lives are both affected by what a world values and considers relevant and transformable through collective action. Thinking of practices of everyday care as a necessary activity to the maintenance of every world makes them a collective affair" (2017, p. 160). Along the lines of Puig de la Bellacasa, one could argue that choosing the bike is not only a personal preference but that, if biking is fought for and made space for, it is also an act of environmental care.

The Bike Ride That Did Not Happen

There are some challenges to the daily transportation planning that is executed in the home. Traffic problems, especially for cars and public transport, contribute to decision-making. In this case, the bike ride might be an alternative to the car or public transport. A recurring theme from informants is that they decide to choose something other than their preferred means of transportation due to fear of, for example, running late to a meeting or missing out on pick-up from school. This theme couples with the temporal one and is an example of moving responsibility, originating in the individual decision, which is based on the fact that a bike lane exists and makes the ride possible.
To use the terminology of Actor-Network Theory, the built environment prescribes a certain mode of transportation via its material design and, in relation to the climate change debate, even a moral mode of transport (Yaneva, 2009, p. 277). But at the same time, through its lack of certain elements (punctuality in the case of public transport, safety in the case of bike riding), it negates its own prescription and destabilises as a network (Latour, 1997, p. 176). The bike lane is not used for cycling, even if it still exists as a material entity, because it is perceived as unsafe. The delegation of responsibility moves from the built world to actors such as policymakers, designers, planners and employers. The issue of safety was touched upon in relation to temporal aspects in some of the questionnaire answers, though it does not present itself as the decisive force. Even if there is an ambition to choose alternatives to the car, if there is no bike lane and the road is narrow and densely trafficked, riding a bike or walking is not even an option. Again, the responsibility lies with the municipality and with designers. However, for them to even know that the problem exists or that there is a wish for a bike lane to be built, civil organisations, activists, citizens and design-user dialogues play important roles. In some cases, one is limited by expenses: the cost of public transport, of switching to a low-emission car, of buying an electric bicycle. One could argue that the responsibility is on the individual or politicians, on the municipality, large companies, or on employers. What becomes apparent in this situation is how limiting the network around the individual can be, and how this is the point where questions of social justice concerning new climate-friendly lifestyles are difficult to dodge. Who benefits from what we build, and what resources does the individual need to make use of it? There is ethical potential in mapping the spatiotemporal assemblages that are shaped in relation to the cyclist because they show deficits in the designscapes of everyday transportation and social equality, what Rawes would call 'difference-relations' (Rawes, 2013, p. 52). Figure 4 shows a house from the old part of Stångby, dating back to the early 1900s, with bicycles parked outside the entrance. All these obstacles to decision-making produce different affects. Shame and guilt are important here: shame focuses more on the self and pushes us towards feeling bad, whereas guilt is the notion of not being good in relation to other people; it pushes us to act morally. In some of the above examples, the individual is left with a sense of disempowerment, both towards the self and in relation to other people. A dissonance emerges in the decision-making process as feelings of empowerment, good intentions and control are swayed by uncertainties in the context. In the book Frames of War: When Is Life Grievable?, Judith Butler (2016) connects guilt and the fears related to survivability. She wonders which decisions and actions we allow ourselves when our lives are threatened. She writes: "If guilt poses a question for the human subject, it is not first and foremost a question of whether one is leading the good life, but of whether life will be liveable at all" (Butler, 2016, p. 45). Guilt appears to incite a desire for self-help rather than a moral attitude towards the other. Even if feelings are activated towards fellow inhabitants, guilt appears deeply connected to the self (Butler, 2016).
War operates on shorter timespans than climate change. However, the moral dilemma is shared between the problem realms. If I, coming from a place of discomfort or fear, act hastily and, for example, decide to preserve my lifestyle, I might cause harm somewhere else, and over the years this harm will potentially come back to me. The built environment is an important actor in the process, as it might just as well underscore a decision as make it completely impossible to carry out. In most cases, it provides both possibilities and problems. Let us stay with the bike lane. For example, in the situation with the bad weather, the bike lane lies there, so the built environment complies with the decision to ride a bike. However, the icy wind makes the journey so unpleasant that the plan is discarded. It is possible for design elements and technological advancement to make the experience nicer, but a major issue of the built fabric might be distance; the length of the commute is decisive as to whether one will endure the weather or not. So, it is a question of how things are built, but foremost of how they are laid out in relation to one another and how creative the individual can be in terms of adjusting the day, for example, to work closer to the home or not. For example, the opportunity to be flexible and work from home on days with harsh weather can then form part of the bike-riding program, and the choice to cycle as a moral act for the future of the planet can be sustained. Albena Yaneva writes:

Design makes us gain access to the social, but it is a molecularised social, discovered in individual objects, users, designers and inventors. If many individual users like me do not repeat what design has implied, nothing remains of the social. (Yaneva, 2009, p. 282)

For the bike order to be socially upheld, the network relations between the material elements of the bike ride and its repeated use need to incorporate flexibility. The responsibility slides away from the individual, who might feel ashamed for not choosing the bike, even if most of us are rather powerless when faced with some of the circumstances. According to answers in the questionnaires and comments written on the maps, the time-related hindrances to using the bike lane are connected to the individual and are more likely to invoke guilt, considering how these choices are usually made in relation to oneself and based on, perhaps, fear of the dark, discomfort in crowds, the stress of having a busy life with many activities, and so on. Even though there is a bike lane, the bike remains unused in the stand. Numerous urban design strategies could enable a more climate-friendly decision regarding transportation, for example, working with lighting, openness, and a mix of activities, but this example also shows how intricate the relationship is between the built environment and the responsibility of, for example, policymakers and employers. Flexibility and autonomy are important to be able to make the best use of the bike lane in relation to these problems. Traffic problems were reported to cause a lot of stress in everyday life: the discomfort of being in a tight situation and the fear of not making it in time for meetings, for example. However, in the answers, this is mentioned mostly concerning cars and public transport and can therefore be the element that pushes towards taking the bicycle instead, depending on how far your day-to-day destination is. The response with less embedded dissonance that came up several times was safety.
If an option is better for the climate but means risking your life, it does not chafe very much: 'I could make a choice, but I am not able to execute it.' Cycling or walking is not an alternative if there is no safe way. The last example that was mentioned in the questionnaires is about the private economy, expressed in the responses as 'I would prefer to choose differently but I cannot afford it.' A structural and common problem, especially in everyday ethical consumption decisions (Hall, 2011), is that even if the built environment matches the preferred mode of transport, the individual is left feeling disempowered and perhaps ashamed. These examples show that shame can push us towards moral decisions and that the built environment plays an important role in the possibility to make changes in everyday transportation. To make changes in your life might require a time of mourning for the past while welcoming the present, a sense of loss that becomes necessary for transformation to take place (Butler, 2004, p. 21). An asphalted stretch of road gives information about the interconnections between material architectural elements and the small and large networks that our daily lives are made up of. Shame can push us towards trying something new, and guilt reminds us of the difficulties that occur on the path between the choice to change and the final execution of the new plan.

Ethical Responsibility

This brings us to a discussion on the responsibility of the individual, the role of a built element such as the bike lane and the possibility of an architecture of care. The individual has responsibility for everyday life decisions, but there needs to be a framework around her to enable change. In social media, the flight shaming movement has taken place, advocating new social norms in relation to one's personal carbon footprint (Gössling, Humpe, & Bausch, 2020). Examples of initiatives for lifestyle changes taken by individuals have appeared in the local newspaper, with examples of how one can adjust something small, like changing the speed at which you drive your car on the freeway, to reduce your overall CO2 emissions. There have also been different examples of downshifters (Juniu, 2017) who proclaim the need to place value on time rather than on commodities. Affects such as shame and guilt may be strong means to induce transition, but there have to be possibilities for them to work and not be destructive. One way towards a common commitment to align our everyday lives in a more climate-friendly manner is to recognise that no one can escape the precarity that comes with social life; it may be considered our shared non-foundation (Butler, 2011, p. 21). One can push people in different directions, but one also needs to provide possibilities for them to make changes. The relations between individuals, the built environment, and policy are intricate. This is illustrated in the different ways the bike lane mediates agency to the user, and also conveyed in Figure 5 with the driveway suggesting that its owner buys a car. Media reports on wildfires, hurricanes, melting ice and plastic agglomeration can be terrifying. Butler describes it, drawing on Susan Sontag, as a way to make faraway suffering close and what is proximate far away; the images of distant suffering impose an ethical interrogation on us as viewers that compels us to treat questions of proximity. Do I contribute to the occurrence of this suffering? She means that ethical obligations span across time and space (Butler, 2016, pp. 68-69).
Shifting our everyday means of transportation might be a more sustainable lifestyle for us, but it can also be an act of care for the rest of the world and all its inhabitants. By caring for someone or something, we work in relation to a larger collective, thereby adding an ethical dimension to our everyday lives (Puig de la Bellacasa, 2017, p. 160). Ethical dilemmas may arise out of the ordinary (La Cecla & Zanini, 2013), in this study via a sample of weeks of someone's daily transportation routes. Whether one acknowledges it or not, there is a moral bond between human beings, the ones that exist in close physical proximity and the ones that are far away. It is tempting to use a broad-brushed 'we' here, but let us try to resist it and remain focused for the concluding part of the text. Judith Butler argues that the precondition for the 'we' is to find out how we are interconnected as fellow humans (Butler, 2004, p. 49). For me to understand who you are and to get to know you, I must lose myself and rebuild myself in relation to the other; this process takes place repeatedly throughout life. The city is described similarly by Albena Yaneva (2017, p. 91) as multiple realities that are reproduced in different contexts over and over again. As citizens, we need to understand the connections that we take part in on different levels and at different spatial scales (Butler, 2011). It is important to understand this in decision-making processes. How I articulate myself as a subject becomes important in relation to the world around me (Butler, 2004, p. 44), and thereby my actions and their effects on the close and distant world. Shame can be a functional tool to raise awareness of my role in larger processes but can also underscore a sense of powerlessness if it is difficult to carry out a transformation. There appears to sometimes be a dissonance between the level of knowledge I have acquired about my role concerning climate change and the space in my everyday life to change.

Concluding Remarks

With Covid-19, the beginning of 2020 has interestingly shown how fast a large transition can be made once the policy is in place. Possibilities open up for employers and individuals to adjust their everyday routines and facilitate changes that can be climate-friendly, such as avoiding frequent long-distance travelling, having the flexibility to work from home and learning new ways to be social. Nevertheless, the planning, design and construction of roads, bike lanes, parking lots, bus lanes, stations, benches, and so on need to be synchronised with larger systems for citizens to be able to make climate-friendly transportation decisions in their everyday life. Although the sample in the empirical study was relatively small, some aspects turned out to be more important than others in relation to the specific suburb that I have studied. For the majority of participants who responded, transportation possibilities had been a parameter when deciding to move there. Access to public transportation and the relatively short distance to Lund were crucial points. Most of the respondents had both the means and strong ambitions in terms of reducing their ecological footprint through transportation, but still many felt that, for different reasons such as synchronisation of activities, costs or security, they were highly dependent on the car. Another somewhat banal, but still very important, result is that in the absence of a bike lane, most of the respondents did not ride a bike even if they would have liked to.
However, as this discussion has shown, the bike lane moves in and out of different socio-material assemblages over time. Even if cycling were the main mode of transport, it would not be the only means of transport in the respondents' everyday lives. Daily transportation has here presented itself not as a mere spatial problem but also a temporal one, and it appears that there is a need for a synchronised arsenal of accessible climate-friendly transport options. In total, there are improvements to be made for the residents of Stångby and other similar places. One aspect that seems interesting to investigate further concerning future ethical living spaces might be time-planning. Time-planning was introduced to address problems within the complex urban landscape such as crowding, gridlocks, accessibility and so on (Fernandes et al., 2015; Mareggi, 2002). My study suggests that these kinds of initiatives could perhaps be put to use more explicitly also when it comes to everyday life choices relating to sustainable development and a way of living that addresses the gap that arises between climate-friendly intentions on the one hand and everyday life hindrances on the other. Working with temporal aspects would facilitate flexibility. The view that time is inseparable from architecture (Till, 2009, p. 116) is a way to understand how transition is not necessarily about tearing something down and replacing it with something new, but rather about acknowledging how it, for example, is and can be used differently over time, shifting throughout the day, the week, a month, a year and so on. Thus, planning must be complemented with a focus on material design. In Sweden, safety issues have, for example, made it into planning but have also affected urban design on quite detailed levels, addressing both problems that spring from asymmetric power relations (Listerborn, 2002, 2015) and connecting these to practical directives including maintenance of shrubbery, light design, etc. Something similar might be necessary if we would like to address the ethical concerns regarding everyday decision-making brought up in this article. Different entities of the built environment, including paving materials, shelters from the wind, etc., are important actors if we want to stabilise the bike trip as a recurrent event. This cannot be left to planning alone but needs to be materialised and designed at different scale levels. An architecture of care (Rawes, 2013, p. 52) should be designed departing from individuals' and society's needs and, when in place, it holds a pedagogical potential to show possibilities concerning how to structure everyday life. The back-and-forth movement between use and planning is important for design not to give way solely to nudging (French, 2011). Urban design can play a role by being discussed contextually. The bike lane has shown how the relationship between the user and the built environment is not a one-way affair; rather, it gives and takes and materialises repeatedly. A possible climate-friendly path regarding ethical everyday interactions between the built environment and its users is inspired by the notion of 'care,' a creative togetherness, something that needs to be investigated further. The assemblages that form and vary over time, that one moves in and out of, show that there is a shared and moving responsibility between material elements and users.
8,880.2
2020-11-12T00:00:00.000
[ "Environmental Science", "Philosophy" ]
A SNP-based phylogenetic analysis of Corynebacterium diphtheriae in Malaysia Objective There is a lack of studies on Corynebacterium diphtheriae isolates in Malaysia. The alarming surge of cases in 2016 led us to evaluate the local clinical C. diphtheriae strains in Malaysia. We conducted single nucleotide polymorphism phylogenetic analysis on the core and pan-genome, as well as on the toxin and diphtheria toxin repressor (DtxR) genes, of Malaysian C. diphtheriae isolates from 1986 to 2016. Results The comparison between the core genome and the pan-genome showed variation in the distribution of C. diphtheriae. The local isolates were heterogeneous, and a close relationship between Malaysia's strains and those from Belarus, Africa and India was observed. A toxigenic C. diphtheriae clone was noted to have been circulating in the Malaysian population for nearly 30 years, and from our study the non-toxigenic and toxigenic C. diphtheriae strains can be differentiated significantly into two large clusters, A and B respectively. Analysis against the vaccine strain PW8 showed that the amino acid composition of the toxin and DtxR in Malaysia's local strains is well conserved and no functional defect was noted. Hence, a change in the efficacy of the currently used toxoid vaccine is unlikely. Introduction Corynebacterium diphtheriae is the causative agent of diphtheria, an acute, communicable disease among children, which can be fatal. The disease is transmitted through contact with respiratory droplets from infected individuals. During the pre-immunization era, diphtheria toxin (tox) was the major cause of mortality in infected individuals. The disease showed a tremendous reduction after the introduction of the toxoid vaccine (PW8) in the twentieth century, and fewer than 8000 cases were reported worldwide in 2016 [1]. The clinical presentation is generally characterized by the formation of an inflammatory pseudomembrane in the upper respiratory tract [2]. The interaction between the bacteria and its infecting phage plays an important role in bacterial toxin acquisition. DtxR, an iron-dependent toxin repressor produced by C. diphtheriae, regulates the expression of tox, introduced via corynebacteriophage, by repressing the transcription of tox under high-iron conditions and vice versa [3,4]. In Malaysia, the diphtheria toxoid vaccine is listed in the Malaysian immunization schedule and provided by the Ministry of Health Malaysia. However, not all parents bring their children for vaccination, as it is not mandatory. Unvaccinated individuals are at high risk of acquiring the disease from potential diphtheria carriers. Sporadic cases were spotted over the years and recently there was a sudden surge of diphtheria cases in 2016, with 31 cases compared to 4, 2 and 4 cases in 2013, 2014 and 2015 respectively [5]. Our study provides a general overview of Malaysia's C. diphtheriae by determining the relatedness among local C. diphtheriae isolated within 31 years (from 1986 to 2016) and comparing these strains with other strains worldwide using single nucleotide polymorphism (SNP) analysis. We also studied the genetic variability of tox and DtxR in these strains. Main text Materials and methods A total of eighty C.
diphtheriae isolates comprising 58 toxigenic and 22 non-toxigenic strains from Malaysia, India, Belarus, Africa, Brazil, the United Kingdom, Italy and the USA were analysed in this study, including 28 Malaysian isolates (27 toxigenic and 1 non-toxigenic) which we had submitted previously to GenBank under project PRJNA345527 [6]. All 27 toxigenic strains showed a positive Elek test [7]. The other genomes were selected randomly from the C. diphtheriae strains deposited in GenBank [8][9][10][11]. All the genome data used in this study and their accession numbers are specified in Table 1. The construction of the phylogenetic trees was done using kSNP version 3.0 [12] at k-mer = 19 and illustrated with FigTree version 1.4.3 [13]. Two individual phylogenetic trees were constructed based on the SNPs in the core genome and the pan-genome. For the pan-genome analysis, only the shared SNPs found in at least 90% of the genomes were considered. The change of the SNPs is inferred from the branch length. The phylogenetic trees were analyzed at bootstrap value > 0.9 and arranged in decreasing order. Multiple sequence alignments for the tox and DtxR genes were constructed and analysed with Clustal Omega [14]. Results and discussion In this study, we used a total of 80 genomes, including toxin- and non-toxin-bearing C. diphtheriae, to create an overview of C. diphtheriae strains in Malaysia. With the advances in next-generation sequencing, we applied whole-genome SNP analysis in our study by comparing the SNPs in the core genome and the pan-genome, which includes the full complement of bacterial genes: the core genome and the dispensable genome [15,16]. The relationship between specific geographical locations within Malaysia, which consists of Peninsular and East Malaysia, was not evaluated in the study. We assumed that there were frequent movements of probable carriers between these two areas, which might affect our analysis. Fewer SNPs were observed in the core genome (29,184 SNPs) than in the pan-genome (55,071 SNPs). Both the core (Fig. 1) and pan-genome (Fig. 2) SNP-based phylogenetic analyses divided the C. diphtheriae strains into two large clusters: I, II and A, B respectively. We observed an almost equal percentage of toxigenic and non-toxigenic strains in clusters I and II using the core genome phylogenetic analysis. However, in the pan-genome phylogenetic analysis, the majority of the toxigenic strains were in cluster B (75.9%) whilst the non-toxigenic strains resided in cluster A (63.6%). Further statistical analysis using Pearson's chi-square test showed a significant association between cluster membership and toxin status, with cluster A consisting mainly of non-toxigenic strains and cluster B of toxigenic strains, at p = 0.001. The majority of Malaysia's toxigenic isolates (85.2%) were clustered in B, except for C110, C319, C517 and RZ358. These four isolates, as well as the toxigenic strains TH510 and TH1526 from India; CD1791, CD2173, CD72, CD2225, CD5052 and CD4728 from Belarus; CD31A from Brazil; and NCTC13129 and NCTC5011 from the United Kingdom, were scattered among the non-toxin-bearing isolates. Among them, 3 of the 4 Malaysian toxigenic isolates (C110, C319, C517), but not RZ358, clustered with those from Belarus and the United Kingdom in cluster A. These observations show a unique and close relationship between these non-toxigenic and toxigenic strains. Therefore, there is a possibility that tox may not be the sole cause of pathogenicity, which may bear on the effectiveness of the toxoid vaccine.
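The chi-square test just described can be reproduced with a short sketch. This is not the authors' code; the cell counts below are inferred from the reported percentages (75.9% of the 58 toxigenic strains in cluster B; 63.6% of the 22 non-toxigenic strains in cluster A) and are therefore approximate.

```python
from scipy.stats import chi2_contingency

#                cluster A, cluster B
table = [[14, 44],   # toxigenic strains (n = 58)
         [14,  8]]   # non-toxigenic strains (n = 22)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A p-value near the reported 0.001 supports a significant association
# between cluster membership (A vs. B) and toxin status.
```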
The rising awareness of virulence factors other than the toxin has led to investigations of iron acquisition systems, resistance mechanisms and pathogenicity islands [17,20,21]. The overall distributions of the core genome and pan-genome SNP-based phylogenetic trees were different. The pan-genome SNP analysis is able to detect slight changes in genetically close organisms, especially those in the accessory genomes, and can therefore further discriminate strains with similar core genomes. This could be due to the regrouping of the strains as a result of the SNP changes in accessory genomes compared to the conserved core genome. A similar observation was made by Sangal et al., who showed discrepancies in the clustering and degree of variation using the same set of strains in core versus accessory genome and proteome analyses [17]. A marked difference was noted when a large cluster of toxigenic strains shifted to cluster B, and both BH8 and CD31A from Brazil to cluster A, in the pan-genome SNP phylogenetic tree. The pan-genome SNP analysis also brought Malaysia's strains RZ632 and RZ356 closer to Africa's strains. It is also interesting to see that a number of recent outbreak strains from Malaysia, India and Africa in 2016 grouped closely with each other within cluster B. The clustering of the strains by both SNP analyses was slightly different from the phylogenetic tree generated from core genome sequence alignment as reported by Hong et al. and Trost et al. [8,20]. The intra-clustering within a clade may not be altered when a genetically distinct species is introduced. However, in our study, the introduction of Belarus strains showed high relatedness with Malaysia's strains, leading to the recalculation of the genetic distances and restructuring of the clusters. PW8 (the toxoid vaccine strain) was used as the reference and indicator for the molecular analysis of the tox and DtxR genes. The tox genes of all the local strains were aligned and compared against PW8 using Clustal Omega. One or two point mutations were detected at the nucleotide level in tox, but the amino acid sequences were in perfect sequence identity with PW8, except for RZ319 and RZ597, which presented a non-synonymous amino acid change by the substitution of histidine to tyrosine at position 24 (H24Y), with no deleterious effect as predicted by PROVEAN [18,19]. This observation shows that Malaysia's strains produce a single antigenic type of toxin similar to the toxoid. Genetic variations in the composition of DtxR might influence tox gene expression and the virulence of C. diphtheriae [3,4]. The analysis of the local strains against PW8 showed that all except four C. diphtheriae strains (C110, C517, C319 and C113) had no non-synonymous amino acid changes in DtxR. Two non-synonymous SNPs, alanine to valine (A147V) and leucine to isoleucine (L214I) at positions 147 and 214 respectively, were located in C110, C517 and C319, all in cluster A. This observation is in concordance with a report by Nakao et al., who found that most amino acid substitutions occur in the carboxyl-terminal half of DtxR and that both substitutions, A147V and L214I, were observed in Russian and Ukrainian strains [3]. A different observation in our isolates was the amino acid substitution at position 150, changing threonine to asparagine (T150N), in C113. However, all of these substitutions were predicted to be neutral by PROVEAN [18,19]. Our analysis provides a general overview of Malaysia's C.
diphtheriae isolates and the difference in genetic relatedness caused by the accessory genomes at a glance. Pan-genome SNP analysis allows a more rapid and efficient observation of genetic relatedness using SNP variation, especially in outbreak studies, to discriminate variations in the core genome and accessory genome between genetically similar species [15,16]. Further insight into the variability in the accessory genome between closely related toxigenic and non-toxigenic local strains, for instance RZ358, will be required to understand acquired pathogenicity other than the toxin, such as the presence of functional genomic islands [17,20,21]. Our current analysis has significantly divided the toxigenic and non-toxigenic strains into two clusters, focusing mainly on local isolates. The observation might differ if more toxin-bearing clones with non-toxin-related pathogenicity are introduced in the future. In conclusion, over the years, sporadic diphtheria cases in Malaysia were shown to carry diverse strains. Based on the pan-genome SNP analysis, it is possible that the C. diphtheriae strains isolated in Malaysia could be of Belarusian, African or Indian origin, or vice versa, based on the shared SNPs. However, the majority of the strains isolated in the 2016 outbreak clustered with strains isolated as early as 1986, indicating the presence of a persistent local strain in the population for decades. The non-toxigenic and toxigenic strains can also be clustered into A and B with regard to toxin status. All the Malaysian clinical isolates produced a single antigenic type of diphtheria toxin, similar to PW8. Given the well-conserved amino acid composition of the toxin and DtxR of these local isolates compared to PW8, an alteration in the efficacy of the currently used toxoid vaccine would be unlikely. Limitations An investigation of the specific type of accessory genome would be useful to understand the connection between toxigenic and non-toxigenic Corynebacterium diphtheriae strains in Malaysia. Most of the local C. diphtheriae isolates are toxigenic strains and only one non-toxigenic strain was available for analysis.
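The position-by-position substitution scan used above for tox and DtxR (flagging changes such as H24Y or A147V) can be sketched as follows. The sequences shown are hypothetical placeholder fragments, not the actual PW8 or isolate sequences, and the deleteriousness prediction (PROVEAN) is a separate step not shown here.

```python
def substitutions(reference: str, query: str, offset: int = 1):
    """Return substitutions in two pre-aligned amino acid sequences
    as strings like 'H24Y' (1-based positions)."""
    assert len(reference) == len(query), "sequences must be pre-aligned"
    return [f"{r}{i + offset}{q}"
            for i, (r, q) in enumerate(zip(reference, query))
            if r != q and r != "-" and q != "-"]

pw8_fragment   = "GADDVVDSSKSFVMENFSSYH"   # hypothetical PW8 fragment
local_fragment = "GADDVVDSSKSFVMENFSSYY"   # hypothetical local isolate
print(substitutions(pw8_fragment, local_fragment))  # e.g. ['H21Y']
```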
2,682.4
2018-10-25T00:00:00.000
[ "Medicine", "Biology" ]
BAYESIAN ANALYSIS OF RIGHT CENSORED SURVIVAL TIME DATA We analyzed cancer data using a fully Bayesian inference approach based on the Markov Chain Monte Carlo (MCMC) simulation technique, which allows the estimation of very complex and realistic models. The results show that sex and age are significant risk factors for dying from some selected cancers. The risk of dying from these cancers is observed to increase progressively with the age of patients. It is also observed that, in order to allow for nonlinearity due to the metrical covariate age, the semiparametric P-splines model is better than the model that categorizes age into age groups. INTRODUCTION Analysis of survival or failure times has gained considerable attention, particularly in the field of medical applications, from which the conventional denotation 'survival analysis' arises [Hennerfeind (2006)]. Censoring is one phenomenon that makes survival analysis differ from other analyses. It is a situation of incompleteness in the observed survival data. The most common censoring in survival time data is right censoring, which occurs when the actual time a subject experiences the event of interest is not known. In this type of censoring, it is assumed that each individual in the study has a time to event \(T_i\) and a right censoring time \(C\), where the \(T_i\) are assumed to be independently and identically distributed with density function \(f(t)\) and survival function \(S(t)\). The exact survival time \(T_i\) of an individual will be known if and only if \(T_i\) is less than or equal to \(C\). If \(T_i\) is greater than \(C\), the individual is a survivor and the exact survival time is censored at \(C\). Thus the observed time is \(T = \min(T_i, C)\) and the data for such a design can be represented by pairs of random variables \((T, \delta)\), where \(\delta\) indicates whether the survival time \(T\) corresponds to an event (\(\delta = 1\)) or is right censored (\(\delta = 0\)). An aspect of the analysis of survival time data that has gained popularity, especially in medical research, is assessing the relationship between survival time and some biological, socio-economic and demographic characteristics that could possibly affect the survival status of patients. One popular regression model formulation that is often used in survival analysis is the Cox (1972) proportional hazards model. The model utilizes the hazard function \(\lambda(t)\), also known as the hazard rate or force of mortality, which is defined as the probability of experiencing the event of failure in the infinitesimally small interval \((t, t + \Delta t)\), given that such an event has not been experienced prior to \(t\). It is expressed as \[ \lambda(t) = \lim_{\Delta t \to 0} \frac{P(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}. \qquad (1.1) \] Likelihood for Right Censored Data The likelihood for censored data is derived by considering the observed survival times \(t_i\). If subject \(i\) fails at \(t_i\), its contribution to the likelihood is the density \[ f(t_i) = \lambda(t_i)\, S(t_i). \qquad (1.2) \] If the subject is still alive at \(t_i\), all we know under non-informative censoring is that the lifetime exceeds \(t_i\), and thus the contribution of such a censored observation to the likelihood is \[ S(t_i). \qquad (1.3) \] Let \(\delta_i\) be a failure indicator which takes value 1 if subject \(i\) fails at time \(t_i\) and value 0 if subject \(i\) is censored. Then we write the full likelihood as
\[ L = \prod_{i=1}^{n} f(t_i)^{\delta_i}\, S(t_i)^{1-\delta_i}. \qquad (1.4) \] COX PROPORTIONAL HAZARDS MODEL FORMULATION Suppose that the data collected on \(n\) subjects are denoted by \((t_i, \delta_i, Z_i)\), where \(t_i\) is the time to failure of the \(i\)-th subject, \(\delta_i\) is the censoring indicator such that for the \(i\)-th subject \(\delta_i = 1\) if the event of failure occurs to the subject at time \(t_i\) and \(\delta_i = 0\) if the time is right censored (i.e. we observe some value \(c\) with the knowledge that \(t_i > c\)), and \(Z_i\) is a \(p\)-dimensional vector of covariates. The Cox (1972) model assumes that the hazard function for the \(i\)-th subject with covariate value \(Z_i\) has the form \[ \lambda_i(t \mid Z_i) = \lambda_0(t) \exp(\gamma' Z_i), \qquad (2.1) \] where \(\lambda_0(t)\) is an arbitrary baseline hazard function and \(\gamma\) is a \(p\)-vector of unknown regression coefficients. Model (2.1) is semi-parametric because the dependence on the covariates is modelled explicitly but no specific probability distribution is assumed for the survival times. Thus \(\gamma\) is only estimable through the partial likelihood estimation procedure. Often, survival time data involve identified clusters of subjects according to some unobserved characteristics, such that subjects belonging to the same cluster are similar with respect to such characteristics, so that the survival times of such subjects are correlated, whereas the survival times of subjects belonging to different clusters are independent. One appropriate way of analyzing such data is to use a random effect (frailty) model, \[ \lambda_{ic}(t \mid Z_{ic}) = \lambda_0(t) \exp(\gamma' Z_{ic} + b_c), \qquad (2.3) \] where \(b_c\) is the random effect (frailty) shared by the subjects belonging to cluster \(c\). Model (2.3) assumes that the effects of covariates are linear on the log hazards and are thus modelled parametrically as fixed effects. Often, in practical situations, the effects of continuous covariates are not linear and thus cannot be adequately modelled as fixed effects. Thus, extending Hennerfeind et al. (2005), the parametric predictor in (2.3) is replaced with a more flexible semiparametric structured additive predictor that incorporates this complexity within the same framework. Thus the Cox-type hazard model (2.1) can be written as \[ \lambda_i(t) = \exp\big( g_0(t) + f_1(x_{i1}) + \dots + f_q(x_{iq}) + \gamma' w_i + b_c \big), \qquad (2.4) \] where \(f_j(x_j)\) is the nonlinear effect of a continuous covariate \(x_j\), \(\gamma\) is the vector of usual linear fixed effects, and \(b_c\) is the cluster-specific random effect (frailty). The frailties \(b_c\) are usually assumed to be independent realizations from a normal or log-gamma distribution with known mean and unknown variance. BAYESIAN INFERENCE Bayesian analysis requires the assignment of priors. Thus, for defining priors and developing the posterior analysis, the predictor (2.4) needs to be rewritten in generic matrix notation. We express the baseline effect \(g_0\), the nonlinear effects \(f_j\) and the frailty \(b\) as matrix products of appropriately defined design matrices, which leads to re-expressing (2.4) as \[ \eta = Z_0 \beta_0 + Z_1 \beta_1 + \dots + Z_q \beta_q + W\gamma + Zb. \qquad (3.1) \] We then assign priors as follows. For the fixed effect parameter \(\gamma\) we assume diffuse priors, i.e. \(p(\gamma) \propto \text{const}\). The general form of the priors for \(\beta_j\) can be cast into the form \[ p(\beta_j \mid \tau_j^2) \propto \exp\left( -\frac{1}{2\tau_j^2}\, \beta_j' K_j \beta_j \right), \] where \(K_j\) is a precision or penalty matrix of rank \(\operatorname{rk}(K_j)\), which shrinks parameters towards zero or penalizes too-abrupt jumps between neighbouring parameters. For the baseline and the non-linear effects of continuous covariates, we assign the Bayesian P-splines prior as in Lang and Brezger (2004), and the random effects are assumed to be i.i.d. Gaussian, i.e. \(b_c \sim N(0, \tau_b^2)\).
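Before turning to the application, a minimal numerical illustration of the censored-data likelihood (1.4) may help. The sketch below assumes a parametric exponential model, \(f(t) = \lambda e^{-\lambda t}\), \(S(t) = e^{-\lambda t}\), which is not the semiparametric model used in this paper but makes the maximizer available in closed form; the data are made up.

```python
import numpy as np

# Hypothetical right-censored data: observed times t_i and failure
# indicators delta_i (1 = death observed, 0 = right censored).
t     = np.array([5.0, 12.0, 3.5, 20.0, 7.0, 15.0])
delta = np.array([1,   0,    1,   0,    1,   1])

def log_likelihood(lam):
    # Exponential model: log L = sum_i [delta_i * log(lam) - lam * t_i],
    # i.e. the log of eq. (1.4) with f(t) = lam*exp(-lam t), S(t) = exp(-lam t).
    return np.sum(delta * np.log(lam) - lam * t)

lam_hat = delta.sum() / t.sum()   # closed-form MLE: events / total exposure
print(f"lambda_hat = {lam_hat:.4f}, logL = {log_likelihood(lam_hat):.3f}")
```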
APPLICATION: HOSPITAL ADMISSION OF CANCER PATIENTS We consider data on cancer patients who were admitted to the University of Ilorin Teaching Hospital (UILTH) from 1999 to 2005. The record of each patient contains information on the variables length of stay in the hospital (recorded in days), sex, age and outcome, which indicates whether the patient is dead or alive. We define survival time as the length of stay until the event of death occurs, while those whose records read "alive" were right-censored because such patients had not died as at the time of the study. Nine types of cancer were selected and the patients were grouped into nine cancer/tumour types/sites: carcinoma, leukaemia, lymphoma, melanoma, sarcoma, rectum, lung, liver and stomach. Prostate and breast cancers were not included because they are gender related and might introduce gender bias into the analysis. Fitting the variable cancer type as a fixed effect requires that we construct eight dummy variables, and this results in eight parameter estimates to be compared to an arbitrarily chosen reference category. A more efficient alternative is to fit the cancer type as a random effect (frailty). At the initial stage, we fitted sex and continuous age as fixed effects with diffuse priors; that is, we fitted the model with predictor \( \eta_i = g_0(t) + \gamma_1\,\mathrm{sex}_i + \gamma_2\,\mathrm{age}_i \). Table 1 shows the posterior estimates, standard errors and the 95% credible intervals. The effects of sex and age, when fitted as fixed effects, are seen to be significant as the credible intervals do not include zero. To gain more insight into the analysis with respect to gender differences, we fitted models for the combined data and then for males and females separately. Since the assumption of a linear effect of metrical covariates such as age on the predictor is too restrictive, as discussed in Section 2, we consider two widely used alternative ways to allow for non-linearity in the effects of metrical covariates. In the first alternative, we categorize the covariate age by constructing a set of dummy variables, with one category arbitrarily chosen as the reference, thereby producing dummies with parameters to be estimated for the categorized covariate. In the second alternative, which is more flexible and data-driven, we incorporate age additively in the predictor using a smooth regression function \(f(\mathrm{age})\) and model it nonparametrically using the P-splines prior as in Lang and Brezger (2004). In this paper, sex was coded 1 for male and 0 for female patients. The metrical age was coded into four categories: "less than 23 years" (reference group), "23-39 years", "40-55 years", and "greater than 55 years". Our research interest thus includes investigating the effect of categorized age on the risk of dying from cancer for the cancer patients combined and for males and females separately, and comparing the two approaches described above by considering some hierarchical models, starting from a very simple model and progressively increasing model complexity. Model comparisons are based on the Deviance Information Criterion (DIC) introduced by Spiegelhalter et al. (2002), which is a Bayesian analogue of the Akaike Information Criterion (AIC). The following models were fitted, noting that all models contain the baseline effect.
Model 1: metrical age fitted by P-splines, with random effect; Models 2-4: intermediate specifications combining P-spline or categorized age with and without the random effect; Model 5: categorical age with random effect. RESULTS Results for the analyses are presented in Table 2, showing the fixed effects of age of patients for the combined, male and female data, and in Table 3, showing the hierarchical models under categorized age and age fitted by P-splines. The results in Tables 2a, b and c are the posterior means, standard errors and quantiles of the fixed effects of categorized age for the combined, male and female patients. It is observed that the risk of dying from cancer increases with age, both for the combined data and for each sex separately. For example, in the combined data, patients in age group 23-39 years have a relative risk of exp(0.290) = 1.33 times that of patients in the reference category (less than 23 years). The results are in the same direction for males and females, though the risks are relatively much higher for males than for their female counterparts. For example, while the risk for male patients in age category 40-55 is 1.70 times that of the reference category, it is 1.52 for females. It is observed in Table 3 that all the fitted models perform best for the male patients alone and worst for the combined data, as revealed by the DIC values, which are lowest for the males and highest for the combined data. It is also observed that the P-splines models for age are better than the models with categorized age, as the DIC values are smaller for the P-splines models throughout for the combined, male and female data. We also observe that the data really do contain a random effect (frailty) and that models that take this into account are better than those that ignore it. CONCLUSION In the analysis of the data on hospital admissions for the cancer patients under study, results show significant differences among age groups with respect to the risk of dying from the selected cancers considered. The Deviance Information Criterion (DIC) results also reveal that, when we allow for non-linearity in the effect of the metrical covariate age, the nonparametric model using the P-splines prior as in Lang and Brezger (2004) is preferred over the model that categorizes age. Software package: All analyses in this paper were done using BayesX, a public domain software package for performing complex full and empirical Bayesian inference, available at http://www.stat.uni-muenchen.de/~lang/BayesX. Limitation of the study: The major caveat to be considered when interpreting the results concerns patients' age, which is self-reported. Self-reported age may not be the true age. Despite this limitation, the study's strength is significant.
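The DIC used for the model comparisons above is computed from MCMC output as \( \mathrm{DIC} = \bar{D} + p_D \) with \( p_D = \bar{D} - D(\bar{\theta}) \). A minimal sketch with made-up deviance samples (not output from the actual analysis):

```python
import numpy as np

# Hypothetical MCMC output: deviance D(theta) = -2 log L(theta) at each
# posterior sample, plus the deviance at the posterior mean of theta.
deviance_samples = np.array([412.3, 415.1, 410.8, 413.9, 411.6])
deviance_at_mean = 409.2   # D(theta_bar), hypothetical

d_bar = deviance_samples.mean()      # posterior mean deviance
p_d   = d_bar - deviance_at_mean     # effective number of parameters
dic   = d_bar + p_d                  # Spiegelhalter et al. (2002)
print(f"D_bar = {d_bar:.1f}, p_D = {p_d:.1f}, DIC = {dic:.1f}")
```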
2,699.2
2010-02-09T00:00:00.000
[ "Mathematics" ]
Electromagnetic Wave Absorbing Composites with a Square Patterned Conducting Polymer Layer for Wideband Characteristics The applications of electromagnetic (EM) wave absorbers are expanding for commercial and military purposes. For military applications in particular, EM-wave absorbers (EMWAs) can minimize the Radar Cross Section (RCS) of structures, which reduces the possibility of detection by radar. In this study, an EMWA composite structure containing a square periodic patterned layer is presented. It was found that control of the pattern geometry and surface resistance induced EMWA characteristics that can create multiresonance for wideband absorption in composite structures. Periodic Patterns for Radar Absorbing Structures (RAS). An electrically conductive medium is used as an EM-wave reflector and shielding structure. When the conductive surface is engraved, DC can always be conducted, but in the case of AC there is a specific frequency region where the EM wave cannot be transmitted or reflected. In the frequency range of interest, periodic patterns acting as EM-wave filters are considered frequency selective surfaces. There are various methods and equations to verify the characteristics of the pattern layer; however, computer simulation using FEM was assumed to be an effective tool to verify the accuracy of the equations. When the square array pattern is located in free space, the approximate equation for the resonance characteristics given in [1] involves a susceptance term of the form \( \ln \csc\!\left( \frac{\pi}{2}\,\frac{g}{d} \right) \), where \(d\) is the size of the unit cell, \(g\) the gap and \(\lambda\) the wavelength; total transmission occurs at the resonance wavelength, where the net susceptance vanishes. From the Babinet principle, the grid type and patch type have the same resonance point, with opposite filter characteristics. The equation assumes the medium of the frequency selective surface (FSS) is a metallic material, like a perfect electric conductor (PEC), in an infinitely thin film. When we design a periodic pattern for a radome, this equation is useful, but it assumes a free-space boundary. When a dielectric slab is added to the FSS layer, the real characteristics of the periodic pattern change. In general, the degree of change depends on the dielectric properties, and the resonance frequency moves to the low frequency range [2]. The high impedance surface is different from the lossy surface; the periodic pattern is usually made of metal. The pattern controls only the reactive part of the impedance, and the layer is assumed to be thicker than the skin depth. As a result, control of the pattern thickness cannot affect the EM characteristics of the filter. Advantage of Periodic-Pattern-Layered RAS (PPRAS). One of the basic models for RAS is the Salisbury absorber, which uses a specific resistive sheet with a low-dielectric spacer. The resonance peak can be controlled by the thickness of the spacer. As the thick spacer in the Salisbury absorber is its main demerit, many efforts have been made to reduce the thickness. The principle of impedance matching with λ/4 thickness is that the maximum electric field is located at that point, and the resistive screen dissipates the energy of the electric field. Since the resistive sheet should not reflect the entire incident EM wave, it should have the free-space impedance of 377 Ω/sq. and be located at a distance λ/4 from the PEC back-layer. This means that the Salisbury absorber has a minimum thickness and the resistivity of the screen should have a constant value.
Attempts to broaden the bandwidth of the Salisbury absorber involved multilayers of resistive screens, and efforts to minimize its thickness involved the study of highly dielectric materials [3]. However, such materials generally reduce the mechanical properties of the RAS and are costly to synthesize. Even if these kinds of materials do reduce the thickness, the bandwidth remains relatively narrow. Another way to reduce the thickness is the application of a periodic pattern layer. Based on the λ/4 resonance, periodic patterns engraved on the screen can reduce the total thickness of the RAS. In other words, they can move the resonance peak to a lower frequency range. The advantages of using such a pattern layer are reduction of thickness and peak tuning of the RAS. Additionally, loss control of the pattern can change the EM-wave-absorbing characteristics of the RAS. Because the pattern layer contains inductance (L) and capacitance (C) in a single layer, the filter characteristics of L and C are totally different. From equivalent circuit theory, the combination of L and C can make a simple RF filter. As the number of elements increases, the order increases and the system shows good filter characteristics. These characteristics may be demerits of an RF circuit but, fortunately, they are merits for the EM-wave absorber. For example, a fractal pattern array with unit cells of various scales can be regarded as a high-order system in filter design. The essence is that these high-order characteristics can be generated by a single-layer pattern, not by a multilayered RAS. Design of the Composite Substrate. The target frequency of the RAS designed in this research was X-band. Beyond the target frequency, the Ku-band was included for ultra-wideband RAS. A RAS which can cover both bands is useful for air-stealth technology. In the first step, the RAS for X-band was designed with a thin substrate to verify the thickness-reduction effect. The design of the substrate thickness was the main concern, because the thickness determines the target frequency range. For a Salisbury absorber in X-band, the thickness of an air spacer (\(\varepsilon_r = 1.0 - j0.0\)) is 7.5 mm, and of glass/epoxy (\(\varepsilon_r = 4.2 - j0.002\)) about 3.5 mm. Design of the Periodic Pattern Layer. Through previous research on various patterns, the characteristics of each type could be verified. In this research, the square unit cell was adopted because of the simplicity of design and analysis for various incidence angles. The square cell is divided into inductive (grid type) and capacitive (patch type) surfaces (Figure 1). Basically, the dielectric substrate with a PEC back plate has an inductive characteristic, so the capacitive pattern layer is effective for the RAS. According to the above consideration, a square patch was used as the unit cell, and its capacitance can be calculated by the following equation [4]: \[ C = \varepsilon_0 \varepsilon_{\mathrm{eff}}\, \frac{2p}{\pi}\, \ln \csc\!\left( \frac{\pi g}{2p} \right), \] where \(p\) is the array period and \(g\) the gap between patches. The size of the unit cell, the gap between patches, and the thickness and permittivity of the dielectric substrate mainly control the inductance and capacitance of the pattern layer, and the resonance frequency can be estimated from these variables (Figure 2). Resonance occurs when the imaginary part of the impedance becomes zero.
The resonance frequency is \( f_r = \frac{1}{2\pi\sqrt{LC}} \). The approximate resonance point can be calculated from this equation, and the more exact resonance frequency was deduced from computer simulation. Within reasonable boundary conditions, a parameter sweep and optimization were performed. The approximate equation assumes the thickness of the conductive square patch is enough to generate L and C, and the target resonance frequency was set to 10.3 GHz. Expansion of Bandwidth. The maximum electric conductivity of the synthesized conducting polymer (CP) was set, and the surface resistance was controlled by varying the coating thickness. In the case of the CP, the thickness for a Salisbury resistive screen was 2 μm, with a sheet resistance of 377 Ω/sq. When the coating layer exceeds this thickness, the surface resistance decreases and the conductivity increases. When the pattern layer is applied, the effective surface resistance should be considered. The unit cell with a 6 mm square pattern should have a 6 μm coating thickness to achieve the Salisbury resistive effect, as there was a 2 mm null-grid region. When the thickness exceeded 6 μm, L and C resonance was initiated, and the artificial magnetic conductor (AMC) characteristic appeared when the thickness was more than 12 μm. In other words, the 6-12 μm range is the transition thickness for the AMC absorber, and at 12 μm thickness the increased conductivity of the patterns could generate L and C on the surface of the periodic layer. As a result, there was a combination of Salisbury and AMC absorption in the transition range of the coating thickness. When we design the periodic pattern layer with coating-thickness control, the resonance peaks from these two EM-wave-absorber effects can be tuned effectively. That is the essence of a wideband RAS with peak control in the X- and Ku-bands. As the effective surface resistance was set to maintain 377 Ω/sq., the pattern shape was functionless when the thickness was less than specified, whether the pattern existed or not. Clearly then, for the evaluation of the critical thickness, the transition thickness is important to consider in the pattern design and the conductivity of the coating material. When the unit cell is thick enough to generate current from the incident EM-wave energy, the AMC characteristics are generated. If the pattern shape is effective in generating large amounts of L and C, the AMC resonance peak moves into the lower frequency range, which means that the AMC peak location can be controlled. When the peaks of the Salisbury screen and the AMC are properly arranged for a specific frequency range, the combined dual peak can create a wideband absorption region within the single-layered PPRAS. For the square pattern used in this research, as the size of the square increases, the capacitance also increases. As a result, the AMC peak moves to the lower frequency range, whereas the Salisbury peak remains in the original frequency region. When the size of the square decreases, the AMC peak moves to the higher frequency range, located near the Salisbury resonance peak.
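A back-of-the-envelope check of the design numbers above can be sketched as follows; it uses the thin-FSS patch-capacitance approximation written out earlier and the grounded-substrate inductance L = μ0·h. The geometric values are illustrative readings of the layout described in this paper, not exact design data.

```python
import numpy as np

eps0, mu0 = 8.854e-12, 4e-7 * np.pi
eps_r = 4.2        # glass/epoxy substrate (illustrative)
h     = 2.6e-3     # substrate thickness [m]
p     = 10e-3      # array period (patch + gap) [m], illustrative
g     = 4e-3       # gap between patches [m], illustrative

# Patch-array sheet capacitance and grounded-substrate sheet inductance.
eps_eff = eps0 * (eps_r + 1.0) / 2.0
C = eps_eff * (2.0 * p / np.pi) * np.log(1.0 / np.sin(np.pi * g / (2.0 * p)))
L = mu0 * h

f_res = 1.0 / (2.0 * np.pi * np.sqrt(L * C))
print(f"f_res = {f_res / 1e9:.1f} GHz")   # ~10 GHz, near the 10.3 GHz target

# Quarter-wave (Salisbury) spacer thickness at 10 GHz for comparison:
c0 = 3e8
for name, er in [("air", 1.0), ("glass/epoxy", 4.2)]:
    print(f"{name}: {c0 / (4 * 10e9 * np.sqrt(er)) * 1e3:.2f} mm")
    # gives ~7.5 mm (air) and ~3.7 mm (glass/epoxy), matching the text
```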
Figure 3 shows the location and movement of the AMC peak with variation of the square size. In the first graph, the square size is large and the capacitance is also relatively large. The AMC peak is located near 6 GHz and the Salisbury peak at 17 GHz. In the second graph, the square size is smaller. The AMC peak moved to the right, but the Salisbury peak location remained fixed. As a result, the AMC peak and the Salisbury peak are combined in a specific frequency range. Generally, this kind of dual-peak control was achieved by controlling the layer thicknesses of a multilayered RAS. The result of this multiple-peak control is expansion of the EM-wave absorption bandwidth in the single-layered PPRAS. The simulation model was designed for measurement of the S-parameters. The boundary condition was a TEM-mode plane wave. The back plate of the PPRAS was covered by PEC, and its transmission and reflection characteristics were simulated. The design was conducted with parameter sweep and optimization in CST-MWS. Based on the simulation results, we prepared an effective PPRAS model. The thickness of the substrate was 2.6 mm and the unit cell size was 6 mm with a 4 mm gap (Figure 5). Figure 4 shows the variation in the reflection loss with change in the thickness of the CP layer, while the other variables are fixed. Up to 6 μm, the AMC peak did not appear at the target frequency (left peak). When the thickness approached 12 μm, the single peak split into two peaks. When the thickness increased beyond 12 μm, two clear peaks were present. The coating thickness needed to establish this critical transition point is determined by the conductivity of the coating materials, and the CP paste synthesized in this study had the desired transition characteristics with a coating of 6-12 μm. If the thickness increased beyond 12 μm, the peaks separated completely. Results and Conclusion The reflection loss of the PPRAS plate was measured in X-band using a free-space measurement system. In both the experiment and the simulation, transmission was prevented by a metal back plate. The relation between the incident wave and the reflected wave verified the absorption characteristics. The measurement system is illustrated in Figure 6. Two spot-focusing horn antennas for X-band were located on a square aluminum plate (1.83 m × 1.83 m). The sample was located at the midpoint between the antennas. The specimen was a square plate (150 mm × 150 mm) and the antennas were connected to a network analyzer (HP 8510C). The designed PPRAS had a 9.5 GHz bandwidth under −10 dB, and its X-band properties were measured. The S-parameters are shown in Figure 7. The absorbers had about −10 dB absorption characteristics in the X-band, as designed. PPRAS 1 and 2 show traces similar to the simulation result, and the result follows the initial design target. From the graph, we can expect that the first peak is located in the lower frequency range and the second peak in the higher frequency range. As the S-parameters were measured only in X-band, we cannot confirm the exact locations of the peaks. PPRAS 1 and 2 were two similar specimens, fabricated with fabrication error in mind. The disagreement between the graphs of the two specimens was caused by measurement errors, coating resolution, pattern uniformity, and various fabrication errors.
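The −10 dB criterion used here can be applied to measured data as in the sketch below: with a metal back plate, transmission is negligible and the reflection loss is 20·log10|S11|. The frequency and S11 arrays are hypothetical, not the measured data of this study.

```python
import numpy as np

# Hypothetical measured data: frequency [GHz] and |S11| (linear magnitude).
freq = np.linspace(8.0, 12.4, 12)
s11  = np.array([0.40, 0.33, 0.28, 0.22, 0.18, 0.15,
                 0.16, 0.20, 0.25, 0.30, 0.34, 0.38])

rl_db = 20.0 * np.log10(s11)     # reflection loss in dB (metal-backed sample)
mask  = rl_db <= -10.0           # points meeting the -10 dB criterion
band  = freq[mask]
if band.size:
    print(f"-10 dB band: {band[0]:.1f}-{band[-1]:.1f} GHz")
```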
In this study, we designed a PPRAS using a composite material. The PPRAS had a dual resonance peak in a layer with a single pattern, and the peaks could be effectively designed and controlled to cover the X- and Ku-bands. As a result, we realized a thin, wideband RAS using the PPRAS. Figure 4: Simulation model: reflection loss change with different CP thickness. Figure 6: Free-space measurement system. Figure 7: Reflection loss of the PPRAS in X-band.
3,000.2
2014-04-06T00:00:00.000
[ "Materials Science" ]
Investigation of heat treatment process using Particle Image Velocimetry. Martensitic hardening is a technology widely used in mechanical engineering practice to increase the hardness and abrasion resistance of ferrous alloys. To achieve martensitic hardening, the workpiece must be cooled very quickly, which can be achieved using various refrigerants. The quality and properties of these coolants have a great influence on the success of the process, so their investigation is essential. Throughout the investigation method, which follows the ISO 9950 standard, various aqueous suspensions used as refrigerants are studied by placing a heated cylindrical specimen in a refrigerated container (which will be described later) and recording the temperature change over time. The flows generated in the tank during the heat treatment influence the nature of the heat transfer, and thus the hardening process itself. The task is to map these flow conditions to give a comprehensive description of the qualification process in question. With the advancement of measurement technologies, there are more and more possibilities to determine the velocity of flowing media, even at several points simultaneously. Using the available PIV (Particle Image Velocimetry) system, the two-dimensional vector field of the flow can be determined on a complete plane surface. In the present research, the applicability of the seeding particles required for the PIV measurement technique is also investigated as part of the mentioned project, establishing further research possibilities. The classification of eight different seeding particles is presented. As a final result, the seeding particles suitable for proper flow visualization in the current arrangement are listed. Introduction In industrial applications, it is important to know the effects of the factors that influence heat transfer during the application of a given refrigerant (heat treatment). In the case of water-based polymer solutions used as refrigerants, the heat removal characteristics vary within certain limits due to the simultaneous influence of temperature, concentration, and flow rate. Knowing the trend and extent of the change, heat removal can, to a certain extent, be optimized for the given process [1][2]. The quality and properties of the coolants can greatly influence the success of the whole process, so their investigation is essential. According to the ISO 9950 standard, different aqueous suspensions employed as refrigerants are studied by placing a heated cylindrical specimen in a refrigerated container designed for this purpose, then recording the temperature change over time. The flows generated in the tank during the heat treatment influence the nature of the heat transfer, and thus the hardening process itself. To study the flow processes, a planar or a spatial measurement can be performed. A conventional PIV measurement is a non-intrusive optical method that can provide an instantaneous flow velocity field in a single plane. Although non-intrusive, the technique requires seeding particles to operate. Particles should be neutrally buoyant, and usually micron sized. The exact size, type and density of the particles mixed with the working fluid can vary for every measurement. Seeding particles must meet two important requirements. First, they must follow the flow well. Second, they must effectively reflect the light emitted by the light source (usually a laser or high-power LED).
Since the recording cameras need a clear view of the particles, the medium must also be transparent. If the seeding particles cannot properly reflect light under the given conditions, the obtained images are not reliably processable and will produce spurious results. It is possible to increase the intensity of illumination and/or the sensitivity of the camera to improve image quality [3]. For the same flow medium, the flow-following properties of smaller particles are more favorable, but larger particles reflect light to a greater extent. Considering these two effects, the flow properties, and the properties of the light source, the optimal diameter of the seeding particles can be determined [4]. Measurement system Particle Image Velocimetry (PIV) is an optical measurement method with which planar images of the velocity field can be obtained. Seeding particles are fed into the main flow. Using a precisely timed light source, the area of interest is illuminated twice with a short time gap between the two pulses. The seeding particles reflect some of the light, which is then captured by a recording unit (one or more high-speed cameras with a matrix detector). Particle images are recorded simultaneously with the light pulses. The displacement of a given particle can be determined from subsequent images, from which its velocity can be derived. The time gap between two pulses can vary depending on the peak velocity and seeding properties of the measured flow. An appropriate displacement of particles is essential for a good correlation. In the present research, illumination is provided by a double-pulsed laser, and the recording is done by a CCD camera. After processing and post-processing of the completed images, valuable information about the given flow can be obtained [5]. The whole PIV system consists of several subunits. These include lasers and other optical devices for illumination, seeding particles and their feeding apparatus, a camera for capturing images, and post-processing hardware and software. Figure 1 shows the elements of a typical PIV measurement set-up [5]. When dealing with low-velocity liquid flows, it is very important that the settling rate of the particles be as low as possible; thus, tracers with a density similar to that of the flow medium should be used. An example is polystyrene seeding particles, which have a density of 1050-1090 kg/m³, but their reflectivity is rather poor. Silver-coated polystyrene particles could be used to overcome this disadvantage, but their price is a notable drawback. At relatively low cost, hollow glass spheres can be used with the highest efficiency within the size range of 1-100 μm. With air filling the inside of the glass spheres, their density can be controlled during production. The air/glass transitions also have a high refractive index; hence they provide good visibility. When investigating gaseous flows, some form of water aerosol or oil can be used as a seeding medium. Although the density of these particles is several times higher than that of the gas, the settling rate can be reduced by choosing a small particle diameter. Seeding particles on the order of 1 μm are usually used in this case [3]. In the case of liquid flows, a trivial solution is most often available, namely that solid particles can be mixed into the liquid to form a suspension.
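The settling argument above can be quantified with the Stokes settling velocity, v_s = (ρp − ρf)·g·d²/(18·μ), a standard first-pass check of how close a particle is to neutrally buoyant. The sketch below uses illustrative values, not measured data.

```python
# Stokes settling velocity for candidate seeding particles in water.
g, mu, rho_f = 9.81, 1.0e-3, 1000.0    # gravity, viscosity, water density

particles = {
    "polystyrene, 10 um":  (1070.0, 10e-6),   # (density [kg/m3], diameter [m])
    "hollow glass, 10 um": (1100.0, 10e-6),
    "PET glitter, 50 um":  (1380.0, 50e-6),
}
for name, (rho_p, d) in particles.items():
    v_s = (rho_p - rho_f) * g * d**2 / (18.0 * mu)
    print(f"{name}: v_s = {v_s * 1e6:.1f} um/s")
# Small v_s (a few um/s) means the particle effectively follows the flow;
# the 50 um PET particle settles orders of magnitude faster.
```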
In the case of gaseous media in particular, the challenge is to introduce the seeding particles into the area of investigation in sufficient quantity and in a sufficiently homogeneous distribution without disturbing the flow itself. Possible solutions for feeding the seeding particles into the main flow are atomization or condensation if the particles are liquid, or atomization of their dispersion if the particles are solid. Acquired images are divided into smaller parts, called interrogation zones. For each interrogation zone, one velocity vector is calculated. Particles that leave the interrogation zone are lost, reducing the number of particles that contribute to the correlation. Particles inside an interrogation zone must move homogeneously in the same direction, or else the statistically calculated vector fields will show spurious results. Usually, there should be about 10-15 particles in each interrogation zone, which may increase at higher flow rates [4]. Measurement setup The measurements were performed in a container designed for the qualification of heat treatment media according to the ISO 9950 standard. The wall of the container was made of plexiglass to make the PIV measurement technique possible. During the heat treatment process, a steel rod heated to 850 °C is placed in the opening fashioned for this purpose, and the temperature change of the rod over time is recorded. The flow within the container is enhanced by a propeller stirrer with adjustable speed [2]. The container and its associated accessories are shown in Figure 2. The aim of our investigation was to select the optimal seeding particles to use, then use them to examine the nature, velocity, and homogeneity of the flow in the relevant volumes. During our investigation, the specimen, its fixation, and the temperature measurement devices were removed. The flow regime created by the mixer was divided into two regions by a plexiglass sheet, as a result of which the flow transpired continuously in the formed channel in a well-defined manner. One of the main elements of the PIV measurement technique is the light emitted by the high-energy laser, which illuminates the investigated area in a plane section after proper optical imaging. The light of the Nd:YAG double-pulsed laser was reflected twice at angles of 90° on plane mirrors, then directed through the light-sheet optics to generate the necessary 2D planar light sheet. The test area was located on the symmetry plane "x" of the container, and its size covered the entire cross-section of the flow. Figure 3 shows a schematic of the applied measurement arrangement with the laser, light paths, heat treatment container, and the camera. Three operating parameters of the laser were controlled: the time difference Δt, the laser frequency, and the intensity. The value of Δt is the time elapsed between two laser pulses. Based on preliminary measurements, Δt was chosen as 50 ms. Evaluation accuracy is increased during post-processing by changing the size of the computational area. The recording frequency of the camera was 7.5 Hz. Hereinafter, five subsequent image pairs form a measurement series. The strengths of the two laser pulses can be controlled independently of one another within the range of 0-10 V. These parameters were optimized during the experiments pro re nata. The camera must be able to take images at a sufficiently high resolution and with a sufficiently short shutter speed.
The TSI Power View 630059 CCD camera we used perfectly meets these requirements; it captures images at 4 MP resolution with shutter speeds down to 200 ns [7]. Figure 4 shows an image of the liquid with seeding particles recorded by the camera. Ideally, the contrast between the seeding particles and the black background is high, and the particles are in focus, so the image's overall clarity is high. Most of the defects in our case were probably caused by the refraction of the plexiglass, which could be mitigated by using clear glass. For the CCD camera we used, it was possible to control the focal length and aperture diameter of the lens. Prior to recording, these two parameters always had to be set for the respective seeding particle, as the particles had different light-reflection and geometric properties (diameter). The focal distance was adjusted to change the sharpness of the images, while the aperture diameter was changed to control the depth of field. Experience showed that as the aperture diameter was increased, the intensity of the laser light also had to be increased to obtain high-quality images. In our experiments, eight types of seeding particles were compared using the described equipment. Selection was based on availability, adequate light reflectance, and presumed flow properties (size and density; the particles need to be able to move with the flow). The available particles and their characteristics are listed in Table 1; the entries recoverable here are: T6 Styrofoam powder (made with a coffee grinder from a standard Styrofoam board, with a characteristic particle size of ~500 μm); T7 Mica powder (commercially available glitter powder, ground mica shale and titanium oxide); T8 Glitter powder (commercially available, small-particle-size glitter powder). Results The applicability of the previously listed seeding particles in an aqueous medium was investigated using the described equipment (container with stirrer) and measurement technique (PIV). The speed of the propeller stirrer could be changed via a frequency controller. The experiments were performed at the lowest speed, with an equal volume of seeding particles placed in the water every time. The examination of the region shown in Figure 5 was the primary goal, and the vector field of this area was determined and evaluated during subsequent post-processing. To obtain an accurate relation between the location of the particles and the measurement domain, a calibration procedure is needed. During this process, a target with a pattern of equidistant dots is placed so that its plane is parallel with the laser light sheet. The parameters and exact location of the calibration target are well known; hence the recording and post-processing software (Insight 3G) can be calibrated. The images resulting from the eight measurement series were processed and evaluated for three interrogation-zone sizes, and for each of the three evaluations, the average velocities and the velocities at a fixed point were determined for each seeding particle. Control velocities were measured to validate the obtained results, using ultrasonic velocity measurements and by adding six relatively big pearls to the system and calculating their displacement using MATLAB. The diagrams show that both the average and fixed-point velocity values of particle T1 (Al2O3, 100-200 μm) are outside the control velocity range, so this seeding particle was declared unsuitable, and no further studies were performed with it.
Both the fixed-point and average velocities of T6 (Styrofoam powder) and T8 (glitter powder) fell within the control range; hence, under the given conditions and based on our investigations, the flow was followed best by these two seeding particles. In addition, the fixed-point velocity values of T2, T4 and T7, and the average velocity values of T3, proved to be adequate, so further studies were performed with these seeding particles. A statistical analysis of the suitable seeding particles followed. Five subsequent image pairs form a measurement series, and a vector field was created from each image pair. The variances of the vector components u and v are known. The variance values of the five vector fields were averaged and plotted for the seven different tracer particles (Fig. 7). We sought the seeding particles that produce the lowest variance values, as measurements performed with them should provide the most reliable results. The trends in Figure 7 show that the variance in velocity for component v is always greater than that for component u, which is expected since the flow is mostly vertical (v direction); this implies a greater degree of variation in the values. The seeding particles with the smallest variance values were T2, T3, T7 and T8; for these, the cross-correlation procedure using the Insight 3G software proved successful. T8 (glitter powder) passed well in both control phases (velocity and statistical), and particles T2, T3 and T7 also showed good results in the statistical test, so flow visualization was performed for these four seeding particles. Conclusion The applicability of different seeding particles in a container designed for the qualification of heat treatment media according to the ISO 9950 standard has been investigated. Eight, mostly commercially available, materials were investigated as seeding particles for PIV measurements. Seeding particles must meet two prerequisites: they should follow the flow well, so their size should be adequately small and their average density should not differ greatly from the density of the medium; and they must effectively reflect the light emitted by the light source to provide clear, relatively high-contrast images for the PIV processing software and avoid spurious results. The continuous flow of the mixture of medium and seeding particles was provided by a propeller stirrer placed inside the container. Post-processing of the images taken during the measurements was performed using the Insight 3G software. The size of the computational area was optimized by checking the velocity values after processing. A 64x64-pixel computational area proved to be the most efficient, so we performed further data processing accordingly. During the evaluation of the resulting vector fields, we compared the fixed-point and average velocities of the validation measurements (executed prior to the PIV investigations using MATLAB and ultrasonic velocity measurement) with the velocities obtained using the different seeding particles. Some seeding particles with false results were filtered out. Under the given conditions, T6 (Styrofoam powder) and T8 (glitter powder) performed best at following the flow. It is likely that their effectiveness is due to their density being similar to that of water. The density of styrene (the raw material of Styrofoam) is ρ = 1040 kg/m³. If a sufficiently fine powder is produced from it, most of the air bubbles are removed, so it can be used effectively as a tracer particle in aqueous media.
The base material of the glitter powder we obtained was polyethylene terephthalate (PET), whose density is ρ = 1380 kg/m³, higher than the density of water. The appropriate reflectance of the investigated particles was inferred from the variance of the velocity vector fields. The lowest variance values were produced by T2 (Al2O3) and T3 (glass powder). The T7 (mica powder) and T8 (glitter powder) particles also showed adequate variance values. Based on these results, it can be stated that with these four seeding particles, the cross-correlation procedure during data processing created a correct vector field with few errors. Overall, four of the eight seeding particles were found to be suitable for the current scenario during data processing. These are Al2O3 (T2), glass powder (T3), mica powder (T7) and glitter powder (T8). Using these four seeding particles, the velocity distribution of the area of investigation was obtained as shown in Figure 8. Based on the obtained images, it can be said that although the Al2O3 (T2) powder gave the smallest values of standard deviation and variance during the statistical evaluation, it created an insufficient vector field in the lower part of the investigated area. The images also show that the generated flow is nonhomogeneous: a high-velocity region occurs. This flow pattern will, of course, be altered when the specimen is inserted, but it is likely that the visible trend will remain, resulting in a higher-velocity flow along one side of the rod. This causes the specimen to cool asymmetrically, resulting in thermal stresses and other structural defects. In the future, velocity fields obtained during the cooling of the rod can be examined in a similar manner, using the experience of the present research to our advantage. Although mica powder and glitter powder have proven to be the most suitable under the current circumstances, introducing a specimen at 850 °C into the water during the heat treatment will massively alter the whole process. Seeding particles can melt, air bubbles can form around the specimen, and a whole set of new factors will have to be considered. Glass powder with a particle fraction of 0-50 μm can provide a viable alternative: it has shown good properties in our studies and is also resistant to high temperatures.
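The density-matching argument above can be quantified with the standard Stokes settling-velocity estimate. The sketch below is an illustration rather than part of the original study: the particle diameters and densities come from the text, while the water properties and the small-Reynolds-number assumption are ours.

```python
# Stokes settling velocity: v = (rho_p - rho_f) * g * d**2 / (18 * mu)
# Valid for small particle Reynolds numbers (fine tracers in water).
G = 9.81        # m/s^2, gravitational acceleration
RHO_W = 998.0   # kg/m^3, water at ~20 degC (assumed)
MU_W = 1.0e-3   # Pa.s, dynamic viscosity of water at ~20 degC (assumed)

def settling_velocity(d, rho_p, rho_f=RHO_W, mu=MU_W):
    """Terminal settling (or rising, if negative) velocity of a sphere."""
    return (rho_p - rho_f) * G * d**2 / (18.0 * mu)

# PET glitter (rho ~ 1380 kg/m^3) vs coarse Styrofoam powder:
print(settling_velocity(50e-6, 1380.0))   # ~5e-4 m/s for 50 um PET
print(settling_velocity(500e-6, 1040.0))  # ~5.7e-3 m/s for 500 um styrene
```

Both values are small compared with the stirred-flow velocities, which is consistent with the observation that near-neutrally-buoyant tracers follow the flow well.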
4,348.4
2022-01-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
PRMT5 Is Upregulated in HTLV-1-Mediated T-Cell Transformation and Selective Inhibition Alters Viral Gene Expression and Infected Cell Survival Human T-cell leukemia virus type-1 (HTLV-1) is a tumorigenic retrovirus responsible for development of adult T-cell leukemia/lymphoma (ATLL). This disease manifests after a long clinical latency period of up to 2–3 decades. Two viral gene products, Tax and HBZ, have transforming properties and play a role in the pathogenic process. Genetic and epigenetic cellular changes also occur in HTLV-1-infected cells, which contribute to transformation and disease development. However, the role of cellular factors in transformation is not completely understood. Herein, we examined the role of protein arginine methyltransferase 5 (PRMT5) in HTLV-1-mediated cellular transformation and viral gene expression. We found PRMT5 expression was upregulated during HTLV-1-mediated T-cell transformation, as well as in established lymphocytic leukemia/lymphoma cell lines and ATLL patient PBMCs. shRNA-mediated reduction in PRMT5 protein levels or its inhibition by a small-molecule inhibitor (PRMT5i) in HTLV-1-infected lymphocytes resulted in increased viral gene expression and decreased cellular proliferation. PRMT5i also had selective toxicity in HTLV-1-transformed T-cells. Finally, we demonstrated that PRMT5 and the HTLV-1 p30 protein had an additive inhibitory effect on HTLV-1 gene expression. Our study provides evidence for PRMT5 as a host cell factor important in HTLV-1-mediated T-cell transformation, and a potential target for ATLL treatment. Introduction Human T-cell leukemia virus type-1 (HTLV-1) is a tumorigenic retrovirus that infects an estimated 15-20 million people worldwide [1]. This blood-borne pathogen is the causative infectious agent of adult T-cell leukemia/lymphoma (ATLL), a disease of CD4+ T-cells [2][3][4]. HTLV-1 is also associated with inflammatory disorders such as HTLV-1-associated myelopathy/tropical spastic paraparesis (HAM/TSP). Currently, there have been no studies investigating the role of PRMT5 in T-cell malignancies, including HTLV-1-associated disease. Therefore, we sought to determine whether PRMT5 plays a role in HTLV-1 transformation/malignancy. Indeed, we found PRMT5 levels were upregulated during T-cell transformation and in established lymphocytic leukemia/lymphoma cell lines. Our data suggested that PRMT5 negatively regulated HTLV-1 viral gene expression, which indicated that PRMT5 could be an important cellular regulator of the viral transformation process. Furthermore, selective inhibition of PRMT5 by a novel small-molecule inhibitor (PRMT5i) in HTLV-1-positive cell lines reduced cell survival; therefore, PRMT5 may represent an important therapeutic target for ATLL. Plasmids and Cloning Plasmid DNA was purified on maxi-prep columns according to the manufacturer's protocol (Qiagen, Valencia, CA, USA). The flag-tagged PRMT5 expression vector was described previously [37]. p30 cDNA was cloned into the pcDNA3.1 expression vector (Invitrogen, Grand Island, NY, USA) to create pcDNA3.1-p30. The S-tagged Tax and HBZ expression vectors contained Tax or HBZ cDNA inserted into a pTriEx™-4 Neo vector (Novagen, Madison, WI, USA) for mammalian cell expression of S-tagged Tax and HBZ proteins. The plasmid containing the wild-type HTLV-1 infectious proviral clone, ACHneo, was described previously [43]. The LTR-1-luciferase reporter plasmid and the transfection efficiency control plasmid TK-renilla were described previously [41].
Quantitative RT-PCR Total RNA was isolated from 10⁶ cells per condition using the RNeasy Mini Kit (Qiagen) according to the manufacturer's instructions. Isolated RNA was quantitated and DNase-treated using recombinant DNase I (Roche). Reverse transcription was performed using the Omniscript RT Kit (Qiagen) according to the manufacturer's instructions. The instrumentation and general principles of the CFX96 Touch™ Real-Time PCR Detection System (Bio-Rad) are described in detail in the operator's manual. PCR amplification was carried out in 96-well plates with optical caps. The final reaction volume was 20 µL, consisting of 10 µL iQ™ SYBR Green Supermix (Bio-Rad), 300 nM of each specific primer, and 2 µL of cDNA template. For each run, sample cDNA and a no-template control were assayed in triplicate. The reaction conditions were 95 °C for 5 min, followed by 40 cycles of 94 °C for 30 s, 56 °C for 30 s, and 72 °C for 45 s. Primer pairs to specifically detect viral mRNA species (tax, hbz), prmt5, st7, and gapdh were described previously [28,44]. Data are presented in histogram form as means with standard deviations from triplicate experiments. Co-Culture Immortalization Assays Long-term immortalization assays were performed as detailed previously [45]. Briefly, 2 × 10⁶ freshly isolated human PBMCs were co-cultivated at a 2:1 ratio with lethally irradiated cells (729.B uninfected parental; 729.ACH HTLV-1-producing) in 24-well culture plates (media were supplemented with 10 U/mL rhIL-2). HTLV-1 gene expression was confirmed by the detection of p19 Gag protein in the culture supernatant and was measured weekly by p19 ELISA (Zeptometrix, Buffalo, NY, USA). Viable cells were counted weekly by Trypan blue dye exclusion. Packaging and Infection of Lentivirus Vectors Lentiviral vectors expressing five different PRMT5-directed shRNAs (target set RHS4533-EG10419) and the universal negative control, pLKO.1 (RHS4080), were purchased from Open Biosystems (Dharmacon, Lafayette, CO, USA) and propagated according to the manufacturer's instructions. HEK293T cells were transfected with lentiviral vector(s) expressing shRNAs, plus DNA vectors encoding HIV Gag/Pol and VSV-G, in 10-cm dishes with Lipofectamine 2000 according to the manufacturer's instructions. Media containing lentiviral particles were collected 72 h later and filtered through 0.45 µm pore-size filters. Lentiviral particles were then concentrated by ultracentrifugation in a Sorvall SW-41 swinging-bucket rotor. Lymphoid cell lines were infected with the concentrated lentivirus in 8 µg/mL polybrene by spinoculation at 2000 × g for 2 h at room temperature. HEK293T cells were infected with the concentrated lentivirus in 8 µg/mL polybrene. After 72 h, stable cell lines were selected by treatment with 1-2 µg/mL puromycin for 7 days. PRMT5i Treatment A selective PRMT5 inhibitor (PRMT5i) drug was recently described by Alinari et al. [31]. Lymphoid cells were seeded into a 12-well plate at 0.5 × 10⁶ cells/mL. The indicated concentrations of the PRMT5i were added to duplicate wells. Cells were incubated at 37 °C for 48 h. After incubation, cell viability and proliferation were measured. Cell viability was determined using Trypan blue dye exclusion. Cellular proliferation was measured in duplicate for each condition using the CellTiter 96 AQueous One Solution Cell Proliferation Assay (Promega). Cells were collected by slow centrifugation (5 min, 800 × g) for downstream qRT-PCR analysis as described above.
Transient Transfections, Reporter Assays, and p19 Gag ELISA HEK293T cells were transfected using Lipofectamine 2000 Transfection Reagent according to the manufacturer's instructions. Each transfection experiment was performed in triplicate and presented as means plus standard deviations. In general, HEK293T cells in a 6-well dish were transfected with approximately 1-2 µg total DNA consisting of 20 ng TK-renilla (transfection control), 100 ng LTR-1-luciferase, 1 µg ACHneo, 500 ng Flag-PRMT5, 500 ng pcDNA3.1-p30, 100 ng S-tag-Tax, or 500 ng S-tag-HBZ, where indicated. HEK293T cells were harvested 48 h post-transfection in Passive Lysis Buffer (Promega). Relative firefly and Renilla luciferase units were measured in a FilterMax F5 Multi-Mode Microplate Reader using the Dual-Luciferase Reporter Assay System (Promega) according to the manufacturer's instructions. Each condition was performed in duplicate. Extracts were also subjected to immunoblotting to verify equivalent protein levels. Cell supernatants (48 h) were used for p19 ELISA (Zeptometrix). Annexin V Staining Lymphoid cells were seeded into a 6-well plate at 1 × 10⁶ cells/mL. The indicated concentrations of the PRMT5i were added to the appropriate wells. Cells were incubated at 37 °C for 24 h. After incubation, cells were collected by slow centrifugation (5 min, 800 × g) for apoptosis analysis via flow cytometry. Collected cells were stained using the FITC Annexin V Apoptosis Detection Kit I (BD Biosciences; San Diego, CA, USA) according to the manufacturer's instructions. ChIP Assays pA-18G-BHK-21 cells are a Syrian hamster kidney cell line stably transfected with a plasmid vector containing the lacZ bacterial gene under the control of an HTLV-1-LTR promoter, as previously described [46]. pA-18G-BHK-21 cells were transfected in 10-cm dishes (1 µg ACHneo and 5 µg flag-PRMT5) using Lipofectamine 2000 according to the manufacturer's instructions. Cells were cross-linked in fresh 1% paraformaldehyde for 10 min at room temperature. The cross-linking reaction was quenched using 125 mM glycine. Following cell lysis and DNA fragmentation by sonication, DNA-protein complexes were immunoprecipitated with anti-PRMT5 (Santa Cruz) and control anti-IgG (Santa Cruz) antibodies. Immunoprecipitated DNA-protein complexes were washed using sequential low-salt, high-salt, lithium chloride, and Tris-EDTA (TE) buffers. DNA was purified using the Qiagen Gel Extraction Kit (Qiagen). The presence of specific DNA fragments in each precipitate was detected using PCR. Primers used for amplifying the HTLV-1 LTR were 5'-CCACAGGCGGGAGGCGGCAGAA-3' and 5'-TCATAAGCTCAGACCTCCGGGAAG-3', and for the LacZ-coding region 5'-AAAATGGTCTGCTGCTG-3' and 5'-TGGCTTCATCCACCACA-3'. Quantification of each ChIP experiment was performed using ImageJ software. PRMT5 Was Upregulated in T-Cell Leukemia/Lymphoma Cells Recently, PRMT5 over-expression was identified to be involved in the pathogenesis of hematologic (lymphoma) and solid tumors (melanoma, astrocytomas) [27][28][29][30][31][32]. To determine whether PRMT5 is important to HTLV-1 biology and pathogenesis, we first examined the levels of PRMT5 protein (Figure 1A) and RNA (Figure 1B) in a variety of HTLV-1-transformed, ATLL-derived, and HTLV-1-negative T-cell lines. Interestingly, although protein and RNA levels were upregulated, PRMT5 RNA levels did not directly correlate with PRMT5 protein levels, which suggested a post-transcriptional method of regulation.
PRMT5 protein (Figure 1C) and RNA (Figure 1D) were also upregulated in 3 of 4 and 4 of 4 PBMC samples from ATLL patients, respectively, relative to HTLV-1-negative naïve T-cells. Figure 1. (A) Lysates of the indicated T-cell lines and naïve T-cells were subjected to immunoblot analysis to compare the levels of endogenous PRMT5 expression. β-Actin expression was used as a loading control. The amount of PRMT5 in each cell line was measured relative to β-actin; the level of PRMT5 expression obtained with naïve T-cells was set at 1; (B) Quantitative RT-PCR for PRMT5 and GAPDH was performed on mRNA isolated from the cells in panel A. Total PRMT5 mRNA level was determined using the ΔΔCt method [47] and normalized to relative GAPDH levels. Data are presented in histogram form with means and standard deviations from triplicate experiments; (C) Lysates of ATLL PBMCs from four independent donors and naïve T-cells were subjected to immunoblot analysis to compare endogenous PRMT5 protein expression levels. β-Actin expression was used as a loading control. The amount of PRMT5 in each condition was measured relative to β-actin and depicted in histogram form; the level of expression obtained with naïve T-cells was set to 1. Each sample was measured in duplicate; (D) Quantitative RT-PCR for PRMT5 and GAPDH was performed on mRNA isolated from the cells in panel C. Total PRMT5 mRNA level was determined using the ΔΔCt method and normalized to relative GAPDH levels. Data are presented in histogram form with means and standard deviations from triplicate experiments. PRMT5 Levels Were Elevated during HTLV-1-Mediated Cellular Transformation We next determined whether PRMT5 becomes dysregulated and over-expressed during HTLV-1-driven T-cell transformation. Freshly isolated human PBMCs co-cultured with lethally irradiated HTLV-1 producer cells (729.ACHi) in the presence of 10 U/mL of human IL-2 showed progressive growth consistent with HTLV-1 immortalization (Figure 2A, left panel). As a control, PBMCs co-cultured with lethally irradiated 729.B cells (the HTLV-1-negative parental line) were unable to sustain progressive growth. We also detected continuous accumulation of p19 Gag in the culture supernatants of PBMCs co-cultured with 729.ACHi cells, which indicated viral replication and virion production; as expected, irradiated HTLV-1 producer cells alone failed to grow or produce p19 over time (Figure 2A, right panel). We examined PRMT5 protein (Figure 2B) and RNA (Figure 2C) levels throughout the 10-week in vitro transformation assay. Protein and RNA were isolated from two independent wells of cells at weekly time points. Our data revealed that PRMT5 protein and RNA were upregulated at each time point, to varying degrees, throughout the transformation assay.
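The ΔΔCt normalization cited in the figure legend above is a simple computation; the minimal sketch below illustrates it with hypothetical Ct triplicates and is not the authors' analysis script.

```python
import statistics

def ddct_fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression by the 2^-ΔΔCt (Livak) method.
    Each argument is a list of replicate Ct values."""
    d_ct_sample = statistics.mean(ct_target) - statistics.mean(ct_ref)
    d_ct_cal = statistics.mean(ct_target_cal) - statistics.mean(ct_ref_cal)
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)

# Hypothetical triplicates: PRMT5 vs GAPDH, sample vs naïve T-cell calibrator
fold = ddct_fold_change([24.1, 24.3, 24.0], [18.2, 18.1, 18.3],
                        [27.0, 26.8, 27.1], [18.0, 18.2, 18.1])
print(round(fold, 2))  # >1 indicates PRMT5 upregulation vs the calibrator
```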
Figure 2. (A) p19 Gag protein production at weekly intervals is presented. Means and standard deviations of data from each time point were determined from four random independent wells. Cells were also collected at weekly intervals and analyzed by immunoblotting for PRMT5 protein expression (B) and by qRT-PCR analysis for PRMT5 RNA levels (C). PRMT5 levels are shown relative to an internal control (β-actin, GAPDH) for each time point. Resting PBMC PRMT5 levels were set to 1. Means and standard deviations of data from each time point were determined from two random independent wells. Loss of Endogenous PRMT5 Increased HTLV-1 Gene Expression To determine whether PRMT5 over-expression is a marker for T-cell transformation and/or contributes to the process of HTLV-1 transformation, we utilized shRNA vectors to knock down PRMT5 expression in three different HTLV-1-transformed cell lines. As shown in Figure 3A, shRNA-mediated knockdown of endogenous PRMT5 expression in the early-passage HTLV-1 immortalized T-cell line, PBL-ACH, resulted in a significant increase of viral p19 Gag protein production (left and right panels) and a significant increase in the levels of Tax and HBZ gene expression (middle panel). Knockdown of PRMT5 expression likewise significantly increased Tax and HBZ transcript levels in another HTLV-1-transformed cell line, SLB-1 (Figure 3B). Finally, knockdown of PRMT5 protein expression in the Tax-negative ATL-derived T-cell line, TL-Om1, significantly increased HBZ transcript levels (Figure 3C).
Figure 3. (A) PBL-ACH cells were infected with a pool of five different lentiviral vectors directed against PRMT5 or control shRNAs and selected for 7 days using puromycin. Quantitative RT-PCR (middle panel) for Tax, HBZ, and GAPDH was performed on mRNA isolated from shControl and shPRMT5 cells. Transcript levels were determined using the ΔΔCt method and normalized to relative GAPDH levels. Levels of Tax and HBZ relative to GAPDH in shControl cells were set to 1. Data are presented in histogram form with means and standard deviations from triplicate experiments. HTLV-1 gene expression was quantified by the detection of the p19 Gag protein in the culture supernatant using ELISA (right panel); (B) SLB-1 cells (HTLV-1-transformed) were infected with a pool of five different lentiviral vectors directed against PRMT5 or control shRNAs. The cells were selected for 7 days using puromycin. Immunoblot analysis was performed to compare the levels of PRMT5 and β-actin protein (loading control) in each condition (left panel). Quantitative RT-PCR for Tax, HBZ, and GAPDH was performed on mRNA isolated from shControl and shPRMT5 cells (right panel). Transcript levels were determined using the ΔΔCt method and normalized to relative GAPDH levels. Data are presented in histogram form with means and standard deviations from triplicate experiments. Levels of Tax and HBZ relative to GAPDH in shControl cells were set to 1; (C) TL-Om1 cells (ATL-derived, Tax-negative) were infected with a pool of five different lentiviral vectors directed against PRMT5, or control shRNAs. The cells were selected for 7 days using puromycin. Immunoblot analysis was performed to compare the levels of PRMT5 and β-actin protein (loading control) in each condition (left panel). Quantitative RT-PCR for HBZ and GAPDH was performed on mRNA isolated from shControl and shPRMT5 cells (right panel). HBZ transcript level was determined using the ΔΔCt method and normalized to relative GAPDH levels. Data are presented in histogram form with means and standard deviations from triplicate experiments. Levels of HBZ relative to GAPDH in shControl cells were set to 1. Student's t test was performed to determine significant differences in viral transcript levels between shControl and shPRMT5 cells; p < 0.05 (*).
Recently, a first-in-class, small-molecule PRMT5 inhibitor (PRMT5i) was developed [31]. This novel inhibitor selectively blocks S2Me-H4R3 (symmetric di-methylation of H4R3) but is inactive against other type I and type II PRMT enzymes, which highlights its specificity for PRMT5. To evaluate whether PRMT5 over-expression is a marker for T-cell transformation or contributes to the process of HTLV-1 transformation, we treated six different HTLV-1-transformed cell lines with titrating amounts of PRMT5i ranging from 10 µM to 50 µM. As shown in Figure 4A-D, inhibition of PRMT5 resulted in a significant increase in Tax and HBZ viral gene expression in the HTLV-1-transformed T-cell lines PBL-ACH, ACH.2, SLB-1, and Hut-102. ST7 transcript levels were measured as a control to ensure successful PRMT5 inhibition because PRMT5 was reported to repress the tumor suppressor ST7 in MCL [28]. Treatment with PRMT5i also resulted in a significant increase in HBZ transcript levels in the Tax-negative, ATL-derived T-cell lines TL-Om1 and ATL-ED (Figure 4E,F). The HTLV-1-negative cell lines Jurkat and Hut-78 were also treated with titrating amounts of PRMT5i, and ST7 transcript levels were measured to ensure successful PRMT5 inhibition (Figure 4G). PRMT5 expression levels were examined and found to be relatively unchanged in all PRMT5i-treated cells tested (western blot lower panels; Figure 4A-G). ACH-2 cells (HIV-1 LAV) were also treated with titrating amounts of PRMT5i (Figure 4H) to examine whether PRMT5 might regulate viral gene expression of another human retrovirus. Treatment with PRMT5i did not significantly alter the expression of HIV-1 as measured by RT activity in the cell supernatant. Selective Inhibition of PRMT5 Decreased Cell Proliferation and Viability In a recent report, PRMT5 was linked to proliferation in B-cell lines and MCL because knockdown of PRMT5 expression reduced cell proliferation [28]. Treatment of HTLV-1-positive T-cell lines with titrating amounts of PRMT5i or shRNA-mediated knockdown of PRMT5 resulted in a significant decrease in cellular proliferation (Figure 5A,B) and cell viability (Figure 5C).
Surprisingly, the same dose of PRMT5i had minimal effects on the proliferation and viability of Jurkat and Hut-78 cells (HTLV-1-negative). The levels of cellular apoptosis (Figure 5D) and senescence (Figure 5E-H) were also measured in response to titrating amounts of PRMT5i treatment. The number of apoptotic cells was increased in all cell lines in response to PRMT5i treatment; however, the number of apoptotic cells in HTLV-1-transformed lines was higher. We found a slight increase in the level of cellular senescence (as measured by increased p21 and p27 levels and decreased cyclin B1 levels) in response to PRMT5i in the HTLV-1-positive cell lines examined. PRMT5 Negatively Regulated HTLV-1 Gene Expression A recent report identified PRMT5 as a binding partner of the HTLV-1 accessory protein p30, a known negative regulator of HTLV-1 gene expression [37]. To investigate the effect(s) of exogenous PRMT5 on p30, HEK293T cells were transfected with an LTR-1-luciferase reporter vector (LTR-1-luc), TK-renilla control, the ACHneo proviral clone, a flag-tagged PRMT5 expression vector, and a p30 expression vector as indicated (Figure 6A). Luciferase activity was measured after 48 h. The luciferase activity of the empty control reflected the amount of Tax and was therefore a measure of transcription from the provirus. In the presence of either exogenous p30 or PRMT5, the LTR-1-luciferase reporter was significantly repressed (left panel). However, in the presence of both p30 and PRMT5, there was an additive effect on LTR-1-luciferase repression. The amount of viral p19 Gag protein present in the culture supernatant of each condition was also examined using ELISA, which provided another method to quantify HTLV-1 gene expression (middle panel).
Similar to the luciferase results, p30 and PRMT5 individually repressed viral p19 Gag production, and the presence of both p30 and PRMT5 repressed viral gene transcription further. We next asked whether PRMT5 was required for p30 function by transducing HEK293T cells with shRNA vectors directed against PRMT5 or a scramble control (Figure 6B). Scramble and shPRMT5 HEK293T cell lines were selected for 7 days using puromycin to ensure sufficient knockdown of endogenous PRMT5. After selection, each cell line was transfected with LTR-1-luc, TK-renilla control, the ACHneo proviral clone, and a p30 expression vector as indicated. Luciferase activity was measured after 48 h. Knockdown of endogenous PRMT5 significantly enhanced viral transcription, as measured by LTR-1-luciferase activity (left panel) and p19 Gag ELISA (middle panel). In addition, reduced levels of PRMT5 did not significantly affect the ability of p30 to repress viral transcription. Figure 6. (A) The decreases in relative LTR-1-luc activity compared to control were significant (p < 0.05 (*)). HTLV-1 gene expression was quantified by the detection of the p19 Gag protein in the culture supernatant of each condition using ELISA (middle panel). The decreases in p19 Gag levels compared to control were significant (p < 0.05 (*)). Immunoblot analysis was performed to compare the levels of PRMT5 (Flag), p30, and β-actin (loading control) in each condition (right panel); (B) HEK293T cells were infected with a pool of five different lentiviral vectors directed against PRMT5 or control shRNAs. The cells were selected for 7 days using puromycin. HEK293T shControl and shPRMT5 cells were then transfected with 20 ng TK-renilla, 100 ng LTR-1-luciferase reporter, 1 µg ACHneo, and 500 ng p30 expression plasmid as indicated. Forty-eight hours post-transfection, cell lysates were collected and luciferase levels measured; relative luciferase activity for each condition is shown (left panel). The differences in relative LTR-1-luc activity were significant (p < 0.05 (*)). HTLV-1 gene expression was quantified by the detection of the p19 Gag protein in the culture supernatant of each condition using ELISA (middle panel). The differences in p19 Gag levels were significant (p < 0.05 (*)). Immunoblot analysis was performed to compare the levels of endogenous PRMT5, p30, and β-actin (loading control) in each condition (right panel).
We next asked what effect PRMT5 had on Tax and HBZ transcriptional activity. HEK293T cells were transfected with LTR-1-luc, TK-renilla control, the ACHneo proviral clone, a flag-PRMT5 expression vector, an S-tag-Tax expression vector, and an S-tag-HBZ expression vector as indicated (Figure 7A). Tax activated transcription while HBZ repressed Tax-mediated transcriptional activation (left and middle panels), as expected [17][18][19]. In the presence of exogenous PRMT5, Tax transcriptional activity and HBZ-mediated repression of Tax transcriptional activity were decreased. To determine whether PRMT5 was able to specifically repress Tax in the absence of other viral genes, HEK293T cells were transfected with LTR-1-luc, TK-renilla control, a flag-PRMT5 expression vector, and an S-tag-Tax expression vector (Figure 7B). Tax was able to activate LTR-1-luciferase activity, while PRMT5 had no effect on LTR-1 luciferase activity in the presence or absence of Tax. Our results suggested that PRMT5 requires the HTLV-1 proviral DNA to suppress Tax-induced LTR activation. To examine whether viral factors other than Tax are implicated in the suppression of viral transcription by PRMT5, we examined the effect of HBZ or p30 together with PRMT5 on LTR-1 luciferase activity. HEK293T cells were transfected with LTR-1-luc, TK-renilla control, a flag-PRMT5 expression vector, and an S-tag-HBZ or p30 expression vector (Figure 7C,D). Neither HBZ nor p30 activated or repressed LTR-1-luciferase activity in the absence of Tax, as expected [17][18][19][20][21][22][41][42]. Likewise, PRMT5 had no effect on LTR-1-luciferase activity in the presence or absence of HBZ or p30. To determine whether PRMT5 associated with the viral LTR promoters in vivo, we performed ChIP assays using pA-18G-BHK-21 cells. This Syrian hamster kidney cell line was stably transfected with a plasmid vector containing the HTLV-1-LTR promoter driving expression of lacZ [46]. pA-18G-BHK-21 cells were transfected with ACHneo proviral DNA and a flag-PRMT5 expression vector (Figure 7E). PRMT5 was associated with the viral LTR, but not the downstream LacZ-coding region (negative control). Discussion HTLV-1 is a tumorigenic retrovirus and the causative infectious agent of ATLL, an extremely aggressive and fatal disease of CD4+ T-cells [2][3][4]. In culture, HTLV-1 can effectively immortalize and eventually transform primary human T-cells.
However, in infected individuals, the incidence of disease is only 2%-6% [7] after an extensive clinical latency period. Evidence suggests that both genetic and epigenetic changes in the cellular environment that accumulate over time contribute to the development of ATLL [23]. While many aspects of HTLV-1 biology have been revealed, the detailed mechanisms of ATLL development remain poorly defined. Recently, the cellular protein PRMT5 has been shown to play a critical role in EBV-driven B-cell transformation as well as the pathogenesis of many types of hematologic and solid tumors [27][28][29][30][31][32]. We hypothesized that PRMT5 could be important in HTLV-1-mediated cellular transformation and in regulation of viral replication. Given the development of a novel small-molecule inhibitor (PRMT5i) [31], identification of PRMT5 as a factor during HTLV-1 transformation will provide valuable insights into improved strategies to treat patients with ATLL. To determine the importance of PRMT5 in HTLV-1-infected cells, we first examined the expression level of endogenous PRMT5 in a variety of HTLV-1-transformed, ATLL-derived, and HTLV-1-negative T-cell lines (Figure 1A). PRMT5 protein levels were upregulated in HTLV-1-positive cells, but also in all transformed T-cell lines, regardless of origin. This is not surprising given that PRMT5 over-expression has recently been identified in lung carcinoma, glioblastoma, B-cell lymphoma, mantle cell lymphoma, and melanoma, to name just a few [28,31,[48][49][50]. It appears that PRMT5 over-expression is a hallmark of most transformed cells, not specifically HTLV-1-transformed cells. Previous work by Pal et al. found decreased PRMT5 mRNA levels in mantle cell lymphoma cell lines despite abundant PRMT5 protein over-expression [28]. In this instance, the increase in PRMT5 protein was not due to an increase in mRNA levels, but instead was due to a decrease in the inhibitory miRNAs miR-92b and miR-96, which allowed for enhanced PRMT5 translation. Conversely, a recent report by Shilo et al. found both PRMT5 protein and mRNA were upregulated in lung tumors [48]. We examined the level of PRMT5 mRNA in a variety of HTLV-1-transformed, ATLL-derived, and HTLV-1-negative T-cell lines and found that the PRMT5 mRNA level was increased in every transformed cell line relative to naïve T-cells (Figure 1B). However, the increase in PRMT5 mRNA did not directly correlate with the level of PRMT5 protein expression, which suggested some degree of post-transcriptional regulation. Because these experiments were conducted in cell lines grown in vitro, we also examined the levels of PRMT5 protein and RNA in total PBMCs isolated from ATLL patients (Figure 1C,D). Both PRMT5 protein and RNA were upregulated in a majority of ATLL patient samples. The increased level of PRMT5 RNA and protein expression in patient PBMCs was not as prominent as that found in transformed cell lines, likely due to the use of total PBMCs, which contain a mixture of normal and leukemic cells. HTLV-1 infection of CD4+ T-cells does not always lead to transformation. A delicate balance must be achieved between viral gene expression and certain genetic and epigenetic events to result in transformation. Using a long-term immortalization co-culture assay, we found both PRMT5 protein and RNA were upregulated throughout the immortalization process (Figure 2).
It is important to note that the producer cells were lethally irradiated, and although there was some residual p19 Gag detected in the supernatant, the producer cells were dead by week 1. Since only a portion of the total co-culture assay was tested, the levels of both protein and RNA fluctuated from week to week; however, the overall trend showed that PRMT5 was upregulated. Regulation of viral gene expression early after infection is highly relevant for successful transformation; for example, too much Tax expression can cause a phenomenon known as Tax-induced senescence (TIS) [51]. Using shRNA vectors directed against PRMT5, we found that knockdown of PRMT5 enhanced HTLV-1 viral gene expression and decreased cellular proliferation in HTLV-1-infected cell lines (Figures 3 and 5B). Given the importance of PRMT5 in cellular proliferation, long-term stable cell lines were difficult to create. Thus, we transduced cells and selected with drug for less than two weeks, which provided the added benefit of less antigenic drift within the cell population over time. Similar results were obtained using a novel, small-molecule inhibitor of PRMT5 (Figure 4A-F). Because Tax expression is lost in a majority of ATLL-transformed cells and only HBZ is expressed in every cell, we included the Tax-negative ATLL-transformed cell lines, TL-Om1 and ATL-ED, in our studies. PRMT5 knockdown and inhibition enhanced HBZ expression in ATLL-transformed cell lines. Of interest, PRMT5i did not affect HIV-1 gene expression, which suggested that PRMT5 is not a global repressor of all retroviral gene expression (Figure 4H). Because Tax and HBZ are driven from separate viral promoters (the 5' LTR and the 3' LTR opposite strand, respectively), this finding would suggest that PRMT5 is a global repressor of HTLV-1 transcription. In support of this hypothesis, we found PRMT5 associated with the viral LTR using ChIP analysis (Figure 7E). Using reporter gene assays, we found PRMT5 inhibited HTLV-1 gene transcription, but not the Tax protein specifically (Figures 6 and 7A,B). We also found LTR promoter activation was unaffected by PRMT5 (with or without the viral accessory proteins HBZ or p30) in the absence of the proviral genome (Figure 7B-D). This result was not surprising since one of the functions of HBZ is to repress Tax-mediated transcriptional activation of the viral LTR, and the function of p30 is to retain unspliced tax/rex mRNA in the nucleus. Taken together, these results suggested that other viral proteins were required for the repressive effects of PRMT5, and/or that PRMT5 affected a cellular transcription factor responsible for activating viral transcription. Although not required for the repressive effects of p30 on viral gene expression, we did find that PRMT5 and p30 had additive repressive effects on viral transcription, which adds yet another level of regulation to HTLV-1 gene expression. The roles of additional PRMT5-interacting partners, such as MEP50, in PRMT5-mediated HTLV-1 gene regulation are also a possibility to explore in the future. MEP50 is a WD-40 repeat protein and a common PRMT5 cofactor, likely present in most PRMT5-containing complexes in vivo [52]. Phosphorylation of MEP50 by Cdk4 alters the activity and targeting of the PRMT5 protein in cells [53]. Importantly, we found PRMT5i treatment or shRNA-mediated knockdown of PRMT5 in HTLV-1-positive cell lines caused a decrease in cell proliferation compared to HTLV-1-negative cell lines (Figure 5A,B).
Furthermore, PRMT5i was selectively toxic to HTLV-1-positive cell lines (Figure 5C). These results suggested that HTLV-1-positive cells rely strongly on PRMT5 for cellular growth and survival. Treatment with PRMT5i induced cellular apoptosis to some degree in all cell lines (Figure 5D). Interestingly, HTLV-1-transformed cell lines underwent noticeably more apoptosis than either the HTLV-1-negative or the ATL-derived cell lines. Previous reports have found that aberrant expression of the Tax protein can lead to TIS in cells [51]. We did observe a slight increase in cellular senescence in response to PRMT5i in the HTLV-1-positive cell lines tested, including Tax-expressing HTLV-1-transformed cells and Tax-negative ATL-derived cells (Figure 5E-H). We would predict an increase in cellular senescence in the HTLV-1-transformed cell lines, as they are the only Tax-expressing lines. However, these cell lines also express HBZ, which has been reported to repress TIS. Another possibility is that the level of Tax expression induced in response to PRMT5i in our cell lines was not substantial enough to elicit a measurable increase in cellular senescence. In summary, our study highlighted the significance of PRMT5 in HTLV-1-mediated cellular transformation and its importance as a target for the newly developed PRMT5i, presenting a viable strategy for the treatment of ATLL.
8,915.2
2015-12-30T00:00:00.000
[ "Biology" ]
Converged Wireless Networking and Optimization for Next Generation Services The Next Generation Network (NGN) vision is tending towards the convergence of internet and mobile services, providing the impetus for new market opportunities in combining the appealing services of the internet with the roaming capability of mobile networks. However, this convergence does not go far enough, and with the emergence of new coexistence scenarios, there is a clear need to evolve the current architecture to provide cost-effective end-to-end communication. The LOOP project, a EUREKA-CELTIC driven initiative, is one piece in the jigsaw, helping European industry to sustain a leading role in telecommunications and the manufacturing of high-value products and machinery by delivering pioneering converged wireless networking solutions that can be successfully demonstrated. This paper provides an overview of the LOOP project and the key achievements that have been funneled into first prototypes for showcasing next generation services for operators and process manufacturers. Introduction The NGN vision is tending towards a diverse wireless networking world whose scenarios define that the user will be able to effectively attain any service, at any time, on any network that is optimized for the application at hand. An important architectural issue is that of defining a next-generation wireless system which acts as a "network-of-wireless-networks" accommodating a variety of radio technologies and mobile service requirements in a seamless, cost-effective manner. The convergence of internet and mobile services is currently being addressed by the IMS (IP Multimedia Subsystems) platform, driven mainly by the operators and service providers to address market opportunities in combining the appealing services of the internet with the roaming capability of mobile networks. But this convergence does not go far enough, and with the emergence of new coexistence scenarios, there is a clear need to evolve the current architecture to provide cost-effective end-to-end communications. This will raise significant research challenges, and, undeniably, system coexistence solutions to address WLAN (Wireless Local Area Network) and LTE (Long-Term Evolution RAN) interoperability (Figure 1), and their impact on the 3GPP SAE (System Architecture Evolution) and IMS architectures, require further innovation to align with future wireless trends and deliver new market opportunities for all players in the supply chain. Under the umbrella of converged services and networks, LOOP technology is targeting potential applications in the wireless market for process manufacturing. This market is expected to grow at a pace approaching 30% per year, faster than its wired counterpart. Nevertheless, adoption of wireless technology is still low and most managers are reluctant to introduce radio solutions, the key impediments being latency and performance issues. In LOOP, these challenges have been addressed for delivering virtual metrology services in the automotive industry as a case study. In this paper, we provide an overview of the main achievements emanating from the LOOP project (EUREKA-CELTIC call 4: an instrument that aims to strengthen Europe's competitiveness in telecommunications through short- and medium-term collaborative R&D projects) that have led to potential innovative products for operators and wireless process manufacturers.
This paper is organized as follows: Section 2 presents the LOOP case studies and the associated technical challenges; Section 3 provides an overview of the key technical achievements; Section 4 presents the product innovations born from LOOP; the conclusion is in Section 5. LOOP Scenarios and Technical Challenges In order to better understand the technical challenges faced by the convergence of wireless networks, we describe here the two major scenarios identified within the scope of the project, targeting the telecoms industry and process manufacturing. The first scenario targets potential new services and energy-efficient networks for the operators in order to anticipate the deployment of NGNs in an era where spectral resources are at a premium. The deployment of NGN aims at a global infrastructure where several systems can coexist to support transparent end-to-end communications in a cost-effective manner. An important issue for next generation wireless systems will be coexistence and optimization to provide a "network-of-wireless-networks" accommodating a variety of radio technologies and mobile services in a seamless and cost-effective manner. To address these issues, the main focus of LOOP was to explore innovative solutions targeting the following:
(i) Network discovery, session management and roaming, allowing the end-user to maintain session continuity whilst roaming between operators and heterogeneous wireless technologies.
(ii) Ad-hoc networking for relay-based cell coverage extension, to extend wireless and mobile coverage, providing enhanced QoS delivery and extended service delivery to remote and fringe users.
(iii) Dynamic spectrum allocation for heterogeneous networks, to investigate the opportunistic use of licensed spectrum by secondary systems for optimized utilization of scarce spectral resources.
(iv) Intra-system optimization, to maximize network utilization by exploring the application of a cross-layered protocol architecture.
The second scenario is directed towards the car manufacturing industry and focuses on the deployment of metrology services on the factory floor to allow quality-control production engineers to analyze and process large volumes of 3D multimedia information in real time, anywhere and anytime, as shown in Figure 2. A major problem for manufacturing companies is the maintenance of their cost-intensive, production-critical assets, such as machines, tools, and equipment. These assets constantly suffer from aging and wear, which often lead to functional loss and breakdown of machines and, ultimately, complete standstill of production with costly consequences. One promising approach is to anticipate and address the problems before they occur. The LOOP project provides a solution for ubiquitous monitoring and management of Coordinate Measurement Machines (CMM) based on closed-loop approaches, through wireless and nomadic sensors placed around the mechanical robot in a car production line. The LOOP application would inform the maintenance engineering team of potential problems in order to ensure their prevention and to manage their repair with a sustainable plan in mind. The targeted solution is based on the need for NGN solutions enabling physical and semantic interoperability of the required sensors, devices, services and systems.
LOOP builds on relevant ongoing progress in wireless routing protocols based on cross-layer design to ensure that a fast and proactive communication path is established on the factory floor back to a Maintenance Service Centre (MSC)/Central Decision Point (CDP) in order to mobilize the resources for repair or to stand by for further updates and information. Suitability-Based RAT Selection Algorithm. Networks of the future will explore cooperative platforms in a bid to provide cost-effective communications to the end user. To address this challenge, interworking architectures have been proposed by ETSI/BRAN [1] and 3GPP [2], such as the loose and tight coupling approaches for WiFi and UMTS/HSDPA (High-Speed Downlink Packet Access). Moreover, several solutions have been proposed within international research projects that worked on architectures and platforms for cooperation schemes between heterogeneous Radio Access Networks (RANs) [3][4][5][6], mainly focusing on interworking architectures between UMTS/HSDPA and WiFi. In [7], the requirements and algorithms for cooperation of several RANs are presented. Cooperation can also be achieved by means of a CRRM (Cooperative Radio Resource Management) entity that is able to direct traffic through different networks according to operator-specific requirements and based on cross-system information. More specifically, the CRRM is responsible for (i) gathering system- and user-specific information, (ii) processing this information according to operator-specific criteria, and (iii) triggering a new handover event according to the load-balancing criteria and position. It is assumed that either a common operator deploys both systems or the system operators share a service-level agreement (SLA). Reference [8] investigated a CRRM-type cooperation based on load-suitability for delay-constrained services. The notion of suitability is based on the most preferred access system to accommodate the service, but the suitability factor can change as load increases in order to maintain the quality of service across the networks. In LOOP, we extend this notion of suitability cooperation for RAT (Radio Access Technology) selection to optimize the choice between WiFi and HSDPA. The suitability cooperative algorithm for RAT selection admits a new user to the cell offering the highest suitability value, where cell_i,j represents the cell/AP i pertaining to RAT j; L(cell_i,j) is the normalized load in cell_i,j; LTh_j is the load threshold for RAT j; and S(L(cell_i,j)) is the suitability value for accepting a new user in cell_i,j. The algorithm was tested in a use-case scenario involving HSDPA partially overlapped by WiFi indoor hotspots, assuming high-priority NRTV (Near Real-Time Video) traffic at 64 kbps, characterised by the 3GPP model [9]. Figure 3 provides the simulation results for CRRM goodput (bits that are received correctly and within the QoS delay threshold) versus offered load. LOOP results show that the CRRM system throughput gain introduced by service suitability is significant and sensitive with regard to the service-suitability threshold. The optimal load threshold was determined to be LTh_0 = 0.6, where the potential observable gain is around 1.2 Mbps in contrast to the stand-alone HSDPA scenario; the use of smaller load thresholds is not advised, since it causes the WiFi system to overload faster, causing problems for the existing background traffic. Ad hoc Networking for Relay-Based Cell Coverage Extension.
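A minimal sketch of this suitability-driven selection is given below. The concrete form of the suitability function is an assumption for illustration (full preference below the RAT's load threshold, decaying linearly above it); the paper's exact expression is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    name: str          # e.g. "WiFi-AP1" or "HSDPA-1" (hypothetical labels)
    preference: float  # base suitability of this RAT for the service
    load: float        # normalized load L(cell_i,j) in [0, 1]
    load_th: float     # RAT-specific load threshold LTh_j (e.g. 0.6)

def suitability(cell: Cell) -> float:
    # Assumed shape: full preference below the threshold, then the
    # score decays with excess load so overloaded cells lose out.
    if cell.load < cell.load_th:
        return cell.preference
    return cell.preference * max(0.0, 1.0 - (cell.load - cell.load_th)
                                 / (1.0 - cell.load_th))

def select_rat(cells):
    """Admit the new user to the cell/AP with the highest suitability."""
    return max(cells, key=suitability)

cells = [Cell("WiFi-AP1", 1.0, 0.70, 0.6), Cell("HSDPA-1", 0.8, 0.30, 0.6)]
print(select_rat(cells).name)  # HSDPA-1: the WiFi AP is past its threshold
```

The example reproduces the qualitative behaviour reported above: the normally preferred WiFi hotspot loses new users once its load crosses the threshold, steering traffic to HSDPA before the WiFi system overloads.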
In the LOOP project, we have studied mechanisms to extend the cell coverage through cooperative mechanisms based on the use of relays. In particular, we have focused on Cooperative Automatic Retransmission Request (C-ARQ) schemes [10], which allow for the transmission of data even when the channel conditions are poor and errors are frequent, by enabling spontaneous relays to retransmit upon the occurrence of a transmission error from the source. In LOOP, we have designed a new MAC protocol to coordinate the retransmissions from these helpers or relays, called the Persistent Relay Carrier Sensing Multiple Access (PRCSMA) protocol; it represents an extension of the IEEE 802.11 Standard [11] to operate in C-ARQ schemes. A comprehensive description, theoretical analysis, and performance evaluation of the protocol can be found in [12]. When using PRCSMA, all the stations must listen to every ongoing transmission in order to be able to cooperate if required. Whenever a data packet is received with errors at the destination station, a cooperation phase can be initiated by broadcasting a Call for Cooperation (CFC) packet. Upon the reception of the CFC, all the stations willing and able to cooperate become active relays and get ready to forward the original packet. To do so, they use the MAC rules specified in the IEEE 802.11 Standard [11] with the two following modifications. (1) There is no expected ACK associated with each transmitted cooperation packet. (2) Those active relays which do not have an already set back-off counter (from a previous transmission attempt) set one up and initiate a random back-off period before attempting to transmit for the first time; those relays which already have a non-zero back-off counter value keep that value upon the initialization of a cooperation phase. A cooperation phase is completed either when the destination station is able to decode the original data packet by properly combining the different retransmissions from the relays, or when a certain maximum cooperation timeout has elapsed. In the former case, an ACK packet is transmitted by the destination station; in the latter case, a negative ACK (NACK) is transmitted by the destination station. In either case, all the relays pop the cooperative packet from their queues at the end of a cooperation phase. The performance of PRCSMA has been analytically modeled by applying Markov chain theory [12]. We have focused on the evaluation of the average packet transmission delay when cooperation is required, which is defined as the average amount of time elapsed from the moment a packet is transmitted for the first time until it can be decoded without errors at the destination upon the reception of an arbitrary number K of retransmissions from the relays. We plot, in the corresponding figures, the average packet transmission delay when the relays transmit at 54 Mbps to the destination while the source can do so at 1, 6, 24, or 54 Mbps. The data transmission rates are represented in the legend of the plots, indicating the transmission rate of the source and the transmission rate of the relays separated by a dash. The control transmission rate has been fixed in all cases to 6 Mbps. In addition, we consider in all cases that the C-ARQ is executed by means of the PRCSMA basic access, that is, without the RTS/CTS handshake. The traditional ARQ curve represents the case where the retransmissions are only requested from the original source (there are no relays).
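The qualitative trade-off behind these curves can be reproduced with a toy calculation. The sketch below uses illustrative assumptions only (a fixed packet size and flat per-retransmission overheads) and is not the Markov-chain model of [12]; it contrasts K retransmissions from a slow source against K relay retransmissions at 54 Mbps that each pay an extra coordination cost.

```python
PACKET_BITS = 12_000   # 1500-byte data packet (assumption)
OVERHEAD_S = 150e-6    # assumed per-retransmission contention/ACK cost

def traditional_arq_delay(k, source_rate):
    """K retransmissions requested from the source only."""
    return k * (PACKET_BITS / source_rate + OVERHEAD_S)

def c_arq_delay(k, relay_rate, coord_s=250e-6):
    """K retransmissions from relays at a high rate; coordinating the
    relay set costs extra time per retransmission (coord_s, assumed)."""
    return k * (PACKET_BITS / relay_rate + coord_s)

K = 5
print(traditional_arq_delay(K, 1e6))  # source at 1 Mbps: ~60.8 ms
print(c_arq_delay(K, 54e6))           # relays at 54 Mbps: ~2.4 ms
print(c_arq_delay(K, 1e6))            # relays no faster than the source:
                                      # the coordination overhead dominates
```

Even this crude model shows the two regimes discussed next: large gains when relays are much faster than the source, and a net loss when they are not.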
Finally, it is worth mentioning that we have included in the plots both the results obtained through computer simulation and those obtained by means of the derived theoretical model. The perfect match between the two shows the accuracy of the developed model. The ratio between the transmission rate of the source and that of the relays determines how efficient the C-ARQ mechanism is in comparison to the traditional non-cooperative ARQ approach, where the retransmissions are only requested from the source at the best available transmission rate between the source and the intended destination station and without contention between consecutive retransmissions. For example, in the case of using the transmission rate set 1-54 (source-relays transmission rate), when K = 5, the C-ARQ reduces the average packet transmission delay by a factor of 4 compared to the traditional ARQ scheme. On the other hand, in the limit where the relay stations transmit at the same rate as the source station, the average delay in the C-ARQ scheme is higher due to the cost of coordinating the set of relays. It is worth mentioning that, as can be expected, if K is very low, then the efficiency of the C-ARQ scheme becomes similar to that of a traditional non-cooperative ARQ scheme. This is due to the fact that, despite the faster relay retransmissions, the overhead associated with the protocol does not pay off the reduction of the actual data retransmission time. In the case of networks where the data transmission rate of each station is selected as a function of the channel state between source and destination stations, as in IEEE 802.11 WLANs, the behavior of PRCSMA shows that C-ARQ schemes would be especially beneficial for those stations located far away from a transmitting station in radio-electric terms, that is, at the cell boundaries. Note that these stations will be prone to transmit at very low transmission rates and could therefore benefit from faster and more reliable retransmissions performed by intermediate relay stations on the path from the source station. In addition, the whole network, that is, the rest of the stations, will benefit from this scheme in the sense that faster transmissions will occupy the channel for shorter periods of time. Cooperative Spectrum Sensing for Cognitive Radio-Enhanced Heterogeneous Networks. As wireless technologies continue to grow, more and more spectrum resources will be needed. However, within the current spectrum regulatory framework, all of the frequency bands are exclusively allocated to specific services, and no violation by unlicensed users is allowed. A recent survey of spectrum utilization made by the Federal Communications Commission (FCC) has indicated that the actual licensed spectrum is largely underutilized across vast temporal and geographic dimensions [13]. Spectrum utilization can be improved significantly by allowing a Secondary User (SU) to utilize a licensed band when the Primary User (PU) is absent. Cognitive Radio (CR), as an agile radio technology, has been proposed to promote the efficient use of the spectrum [14]. By sensing and adapting to the environment, a CR is able to fill in spectrum holes and serve its users without causing harmful interference to the licensed user. To do so, the CR must continuously sense the spectrum it is using in order to detect the reappearance of the PU. Once the PU is detected, the CR should withdraw from the spectrum so as to minimize the interference it may possibly cause.
However, a very important challenge in implementing spectrum sensing is the hidden terminal problem, which occurs when the CR is shadowed, in severe multipath fading or inside buildings with a high penetration loss, while a PU is operating in the vicinity. Cooperative communications are an emerging and powerful solution that can overcome the limitations of wireless systems [15]. The basic idea behind cooperative transmission rests on the observation that, in a wireless environment, the signal transmitted or broadcast by a source to a destination node is also received by other terminals. These latter nodes can process and retransmit the signals they receive. The destination then combines the signals coming from the source and the partners, thereby creating spatial diversity by taking advantage of the multiple receptions of the same data at the various terminals and transmission paths. By allowing multiple CRs to cooperate in spectrum sensing, the hidden terminal problem can be addressed [16]. Indeed, cooperative spectrum sensing in CR networks has an analogy to distributed decision-making in wireless sensor networks, where each sensor makes a local decision and those decision results are reported to a fusion centre to give a final decision according to some fusion rule. The main and fundamental difference between these two applications lies in the wireless environment. Compared to wireless sensor networks, CRs and the fusion centre (or common receiver) are distributed over a larger geographic area. This difference brings out a much more challenging problem for cooperative spectrum sensing, because the sensing channels (from the PU to the CRs) and the reporting channels (from the CRs to the fusion centre or common receiver) are normally subject to fading or heavy shadowing. In LOOP [17], we have analyzed, for the first time in the literature, the fundamental problem of cooperative spectrum sensing over wireless environments characterized by realistic propagation conditions, that is, heavily and spatially correlated shadowing environments. More specifically, we have proposed an advanced framework for the performance analysis and optimization of a general multilayer decentralized data fusion problem applied to cooperative spectrum sensing, which includes realistic sensing/reporting channels and correlated Log-Normal shadow fading in all wireless links of the cooperative network. The analyzed system setup is sketched in Figure 6. The analysis of the scenario in Figure 6 has revealed an important result: even though it is usually overlooked in typical cooperative spectrum sensing analyses, shadowing correlation on the reporting channel can yield performance degradations similar to those caused by shadowing correlation on the sensing channel. So, our performance study has revealed that further importance should be given to the role played by the reporting channel for a sound analysis and design of distributed detection problems with data fusion, especially when the system is expected to be deployed in the realistic propagation environments targeted for CR applications. An example of the obtained results is shown in Figure 7.
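As an illustration of the decentralized data-fusion setup just described, the sketch below estimates the fused detection probability under an OR rule when the reporting channel can corrupt local decisions. The local detection probability, the reporting-error probability and the fusion rule are assumptions for illustration; the paper's framework additionally models spatially correlated Log-Normal shadowing on both channel types.

```python
# An illustrative Monte-Carlo sketch of data fusion for cooperative spectrum
# sensing with imperfect reporting channels. All probabilities are assumed.

import random

def fused_detection(n_crs: int, p_d: float, p_rep_err: float,
                    trials: int = 100_000) -> float:
    """Probability that the fusion centre declares the PU present (OR rule)
    when the PU is actually transmitting."""
    hits = 0
    for _ in range(trials):
        reported = []
        for _ in range(n_crs):
            local = random.random() < p_d            # sensing channel outcome
            if random.random() < p_rep_err:          # reporting channel flips it
                local = not local
            reported.append(local)
        hits += any(reported)                        # OR fusion rule
    return hits / trials

if __name__ == "__main__":
    random.seed(0)
    # A lossy reporting channel erodes the cooperation gain:
    for p_err in (0.0, 0.1, 0.3):
        print(p_err, round(fused_detection(5, 0.6, p_err), 3))
```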
Cross-Layer Packet Scheduling for WiMAX. The IEEE 802.16 standard [18,19] provides specifications for the Medium Access Control (MAC) and Physical (PHY) layers of WiMAX (Worldwide Interoperability for Microwave Access). A critical part of the MAC layer specification is the scheduler, which resolves contention for bandwidth and determines the transmission order of users; it is imperative for a scheduler to satisfy the QoS requirements of the users while maximizing system utilization and ensuring fairness among the users. The basic approach for providing QoS guarantees in the WiMAX network [20,21] considers that the BS performs the scheduling for both the uplink and downlink directions; an algorithm at the BS then has to translate the QoS requirements of the SSs into the appropriate number of slots. The IEEE 802.16d/e standards [18,19] do not specify scheduling techniques for the MAC layer in WiMAX networks, and the existing NS-2-based simulation platforms [22] implement only QoS-aware scheduling based on service class prioritization. We propose a simple, efficient solution for the WiMAX scheduler that is capable of allocating slots based on the QoS service class, traffic priority, and the WiMAX network and transmission parameters. To test the proposed solution, the QoS model for the IEEE 802.16d/e MAC layer in the NS-2 simulator [23] developed by the WiMAX Forum [22,24,25] was taken as a reference. We propose the Enhanced Round Robin (eRR) scheduler (cf. Figure 8). It is based on the simple round robin solution, but introduces more elements into the decision-making process for packet allocation within each radio frame. The proposed scheduler algorithm has two objectives. (i) The first, already mentioned, maps the user traffic to the available radio resources according to the service class and radio channel quality. (ii) The second allows user differentiation/priority within each service class and thus enables the network operator to implement new business models, such as the concept of gold, silver and bronze users, while guaranteeing at the same time the subscribed QoS. In practice, the algorithm initially performs the same round robin procedure as explained in the previous models, that is, serving connections in the following order: UGS, rtPS, nrtPS, and BE. From the list of existing connections inside the same class, a priority is also established taking into account the RSSI (Received Signal Strength Indication) value of the given node, where the highest priority is given to users with the highest signal strength. This approach provides a trade-off between optimizing spectral efficiency and guaranteeing QoS. Simulations were realized using a point-to-multipoint topology with three services running on the same terminal, conveying differentiated traffic in the uplink direction, namely the configured traffic sources for UGS (Unsolicited Grant Service), rtPS (Real-Time Polling Service) and BE (Best Effort). In this scenario, we have defined the relevant PHY layer simulation parameters. The key simulation parameters are summarized in Table 1. Figure 9 shows the slight gain in throughput that can be achieved using the enhanced Round Robin solution, in contrast to the service class differentiation observed earlier. In this particular case, the traffic priority was assumed to be equal among the same classes; the priorities here are based on the service class and the RSSI of the respective terminal. Concerning delay, as shown in Figures 9 and 10, the proposed scheduler reduces the overall packet delay and either equals or slightly outperforms the existing Round Robin based on the WMF (WiMAX Forum) model.
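As a minimal sketch of the allocation logic described above: classes are served in the order UGS, rtPS, nrtPS, BE and, inside each class, connections are ordered by operator priority and RSSI (strongest first). The frame size, slot demands and record layout below are illustrative assumptions, not parameters from Table 1.

```python
# A minimal sketch of the enhanced Round Robin (eRR) slot allocation.

from dataclasses import dataclass

CLASS_ORDER = ["UGS", "rtPS", "nrtPS", "BE"]

@dataclass
class Connection:
    cid: int
    service_class: str
    rssi_dbm: float      # received signal strength for the node
    priority: int        # operator "gold/silver/bronze" rank (lower = higher)
    demand_slots: int

def err_schedule(conns: list[Connection], frame_slots: int) -> dict[int, int]:
    """Allocate uplink slots of one radio frame; returns {cid: slots}."""
    grants: dict[int, int] = {}
    free = frame_slots
    for cls in CLASS_ORDER:
        in_class = [c for c in conns if c.service_class == cls]
        # Higher operator priority and stronger signal served first.
        in_class.sort(key=lambda c: (c.priority, -c.rssi_dbm))
        for c in in_class:
            if free == 0:
                return grants
            g = min(c.demand_slots, free)
            grants[c.cid] = g
            free -= g
    return grants

if __name__ == "__main__":
    conns = [Connection(1, "rtPS", -60, 0, 40),
             Connection(2, "rtPS", -80, 1, 40),   # "rtPS1": lower priority
             Connection(3, "BE",   -65, 0, 40),
             Connection(4, "UGS",  -70, 0, 20)]
    print(err_schedule(conns, 100))   # UGS first, then rtPS by priority/RSSI
```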
Figures 11 and 12 illustrate a scenario consisting of terminals supporting the rtPS and BE classes, respectively, with different traffic priorities inside each service class; that is, rtPS1 has lower priority than the rtPS connection, and BE1 likewise with respect to BE. The results show the effect of the priorities on the scheduling decision, as both classes are distinguished in terms of throughput and delay (better values are observed for rtPS classes than BE ones) and traffic prioritization holds inside each particular class (improved performance for rtPS and BE in relation to rtPS1 and BE1, resp.). In summary, the eRR algorithm provides a new, innovative scheme to implement new business models based on the application of the cross-layer paradigm to RR scheduling. Numerical results show that the proposed scheme can deliver this user differentiation while preserving the subscribed QoS. Cross-Layer Optimized Routing Strategies. The benefits of cross-layer system design are mainly being applied in the area of mobile and wireless operators. In recent years, the area of communications in the manufacturing process has been gaining importance. Traditional Ethernet and PROFIBUS [26,27] factory systems are being enhanced to facilitate new means of automation through attractive wireless solutions: to provide added flexibility and self-configuration in processing machines and to reduce production costs. LOOP tackles machine automation by investigating communication challenges related to the remote management of Coordinate Measurement Machines (CMMs) for future manufacturing environments. Specifically, we aim to investigate routing strategies to provide fast and efficient data management on the factory floor, which is highly dynamic in nature. LOOP specifically addresses cross-layer enhancements to both flat and hierarchical routing strategies. In HOLSR (Hierarchical Optimized Link State Routing), there are two levels of hierarchy according to our network design, as shown in Figure 13, where the Level-1 hierarchy corresponds to connections among backbone network nodes, while the Level-2 hierarchy corresponds to connections among mesh routers in access networks. Regarding the second cross-layer enhancement, Cross-Layer Link Layer Notification, the basis is to utilize link break information gathered at the MAC layer to trigger OLSR [28] routing table recalculation; a schematic sketch is given at the end of this subsection. More specifically, the MAC layer detects the link break and sends an indication to the protocol layer. Upon receiving such an indication, which is treated as a topology or neighbor change, OLSR conducts routing table recalculation immediately. Finally, note that it is of great importance to understand which approach is more effective, namely cross-layer or hierarchical, in order to deploy the correct solution based on the dimension addressed. Simulation results (Figure 14) provide the performance of video communication over the network as the transmission of the virtual part was taking place. Such a multimedia stream would be directed to experts assisting the manufacturing decisions all over the plants, which are normally very large. The video connection has a bit rate of 128 kbps and a QCIF format. The results obtained suggest that in small deployment areas, intrasystem optimization based on cross-layer approaches is more effective than the hierarchical counterparts.
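The sketch below schematizes the Cross-Layer Link Layer Notification mechanism (class and method names are illustrative, not taken from an OLSR implementation): a MAC-level link-break indication immediately triggers routing-table recalculation instead of waiting for the next topology-control timeout.

```python
# A schematic sketch of cross-layer link-layer notification; illustrative
# names only, not code from an OLSR codebase.

class OLSRDaemon:
    def __init__(self):
        self.neighbors: set[str] = set()

    def on_link_break(self, neighbor: str) -> None:
        """Treat the MAC indication as a topology/neighbor change."""
        self.neighbors.discard(neighbor)
        self.recalculate_routes()

    def recalculate_routes(self) -> None:
        print(f"recomputing routes over neighbors: {sorted(self.neighbors)}")

class MACLayer:
    def __init__(self, routing: OLSRDaemon):
        self.routing = routing

    def detect_retry_failure(self, neighbor: str) -> None:
        """Retry limit exceeded at the MAC -> notify the routing layer."""
        self.routing.on_link_break(neighbor)

if __name__ == "__main__":
    olsr = OLSRDaemon()
    olsr.neighbors.update({"mesh-router-2", "mesh-router-7"})
    MACLayer(olsr).detect_retry_failure("mesh-router-7")
```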
LOOP Products Carrying out research at the European level is of primary importance when facing the global market, since gaining knowledge on novel technologies and system integration may provide the needed competitive advantage at the industrial level, that is, better European products or better European-based networks deployed around the world. In this context, LOOP has transferred engineering know-how to meet short-term market requirements, allowing industry to anticipate the commercial deployment of NGNs by delivering potential products that include the following. OptiMaX (Portugal Telecom Inovação). Radio access network planning and deployment is a complex process that can be divided into two key stages. In the first stage, the optimization goals (capacity, coverage and QoS) are defined, the network is dimensioned and the radio planning and optimisation loop is initialized. In this process, sophisticated planning and optimisation tools are used, which resort to complex cost functions to perform various trade-offs. The output from the iterative optimization stage results in BS parameters corresponding to the Radio Resource Management (RRM) algorithm under test. In the second stage, after network deployment, network performance and quality characteristics are monitored. In this stage, monitoring tools are used to collect geo-referenced radio measurements (e.g., SNR) in order to evaluate the difference between what was planned and what is in fact implemented. Based on the monitoring results and on the RNP (Radio Network Planning) simulations, the radio network parameters are tuned, which usually include both hard (e.g., antenna tilts) and soft parameters (RRM mechanisms). Despite the widespread deployment of WiMAX (IEEE 802.16d) networks, there is no radio monitoring tool on the market to support network operators in the optimization task; this provided the impetus for the OptiMax tool proposed in the scope of the LOOP project. OptiMax is a new tool that allows the network operator to perform network analysis and planning for the WiMAX (IEEE 802.16d) system. The monitoring phase not only comprises collecting and storing the radio signal quality for coverage measurements, but can also "sniff out" essential network information pertaining to the IEEE 802.16 protocol. Moreover, the monitoring capabilities of the tool can also estimate the maximum bit rate per location for a particular bandwidth. In order to obtain this network-related data, specific CLI (Command-Line Interface) requests are made to the SS (Subscriber Station). The replies are parsed into the XML format that results from the monitoring phase. Each monitoring session is tagged with the potential locations of each WiMAX Base Station so that it can be overlaid on a geographical map, where each position is represented by a coloured circle ranging from red (low RSSI, Received Signal Strength Indication) to green (higher RSSI). The hardware needed to execute each test is shown in Figure 15. It consists of: (1) a laptop, with the OptiMax application installed, for mobility testing; (2) a GPS system, which connects via Bluetooth; (3) a WiMAX omnidirectional antenna; (4) a UPS system and a PoE (Power over Ethernet) unit providing energy for the SS; (5) a WiMAX SS (Subscriber Station), connected to the laptop using an Ethernet cable. The equipment was chosen for its flexibility, low cost and ease of loading into an automobile for field testing.
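As a small illustration of the map-overlay convention just described, the snippet below maps an RSSI sample to a red-to-green colour; the dBm range used for normalization is an assumption, not a value from the OptiMax tool.

```python
# Map RSSI to a colour from red (low) to green (high) for map overlays.
# The [-90, -50] dBm normalization range is an illustrative assumption.

def rssi_to_rgb(rssi_dbm: float, lo: float = -90.0, hi: float = -50.0):
    """Linearly map RSSI in [lo, hi] dBm to an (R, G, B) triple."""
    t = min(1.0, max(0.0, (rssi_dbm - lo) / (hi - lo)))
    return (round(255 * (1 - t)), round(255 * t), 0)

if __name__ == "__main__":
    for sample in (-95, -75, -55):
        print(sample, rssi_to_rgb(sample))
```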
The OptiMax tool collects the geo-referenced radio measurements (e.g., SNR) in order to evaluate the difference between what was planned and what is in fact implemented. Based on the monitoring results and on RNP (Radio Network Planning) tool simulations, the radio network is optimized: antenna tilts and azimuth, transmitted power level, and so forth. WiMAX System Experimental Platform (Turkcell). Mobile WiMAX is an access technology that promises high data rates and wide coverage at low cost. Mobile WiMAX is based on IEEE 802.16-2009, which specifies the air interface, including the physical layer (PHY) and medium access control layer (MAC), for broadband wireless systems. To achieve high throughput and very good spectral efficiency, mobile WiMAX combines orthogonal frequency division multiple access (OFDMA) and multiple input multiple output (MIMO) with link adaptation and hybrid automatic repeat request (HARQ) algorithms. However, wireless communication technologies, and the techniques to exploit better spectral efficiency, improve day by day. Toward this end, we were interested in developing a WiMAX system-level simulator to act as an experimental platform for testing new algorithms/protocols that enhance system efficiency by augmenting cell capacity. The main challenges in the implementation of such a simulator were the selection of parameters and assumptions. To overcome this challenge, we developed a system-level simulator compliant with the IEEE 802.16m Evaluation Methodology [29]. The simulator test-bed was validated for different network configurations such as antenna numbers, frequency reuse patterns, user densities, and mobility of users. Management Tools for Wireless Process Manufacturing (TRIMEK). The wireless market in process manufacturing is expected to grow at a pace neighbouring 30% per year, faster than the wired market. Nevertheless, the adoption of wireless technology is still very low, and most managers in industry are reluctant to introduce radio solutions when wired alternatives exist. The primary impediments to wireless penetration are latency and performance issues, as well as the reliability and security of sensitive information. In the scope of LOOP, these challenges have been addressed by applying autonomous wireless networks with cross-layer routing strategies to provide remote management capabilities for the flexibility and self-configuration of measurement machines. Remote management is a key business opportunity for process manufacturers to provide technical assistance towards the offering of complete automation solutions, allowing industrial automation and control providers, as well as major system integrators, to manage or visualize complete and self-contained control components, including software-based functionality, command and control, configuration, diagnostics, and documentation. TRIMEK is a CMM manufacturer, as well as a service provider for this kind of machinery. These are complex systems composed of a large number of parts (mechanical, electrical, electronic, IT, etc.). Therefore, any service regarding them might require the knowledge of professionals from different fields. Unfortunately, it is not possible to forecast in advance the needs of both the machine and the service demanded by the client. For this reason, the help of new technology in this field has the potential to provide an internal tool to address unexpected problems on a short-time basis and with the accuracy required by this type of system.
The role of LOOP has been to provide more wireless flexibility and self-configuration by integrating ubiquitous monitoring and management tools on TRIMEK in-house 3D CMMs for the automotive industry. Therefore, based on the LOOP cross-layer routing strategy and traffic rules (Section 3.5), a wireless ad-hoc link was established on the factory floor, resulting in highly autonomous CMM machines with high flexibility. Moreover, it was established that the use of traffic rules improved the bandwidth assigned to the prioritised traffic while maintaining the quality of the video streams at the desired level. It has also been demonstrated that dynamic queue management, based on adaptive priority handling, is a key factor when trying to offer a specific quality for the provided services. Figure 16 shows the remote scan before and after LOOP technology. Conclusions Even though converged NGNs are still in their early stages, the impact of NGN is expected to be significant for the ICT market on two levels: firstly, NGN will provide the vehicle for enhancing access to communication services and to more innovative and personalised services and applications; secondly, NGN will be a basis for the UNS (Ubiquitous Network Society), where easy-to-use networks are connected at anytime, anywhere, with anything and for anyone. LOOP is one piece of the jigsaw; however, more investment is required to help this vision become a reality and to address new emerging challenges, including energy-efficient and secure communications.
7,519.4
2010-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Embedded Control System of DC Motor Using Microcontroller Arduino and PID Algorithm ABSTRACT This study proposes a DC motor hardware design with a low-cost embedded system device, namely the Arduino Uno [35][36][37]. The PID controller is implemented in the system as an angular velocity (omega) controller. PID control was chosen because of its advantageous characteristics: it is easy to understand, simple yet with good system response performance, and easy to implement in both software simulation and hardware implementation [38][39][40][41][42]. System Design The block diagram of the embedded system for controlling a DC motor's speed is shown in Figure 1, while the system wiring diagram is shown in Figure 2. The system consists of input devices, a processor, output devices, and interface devices. The input device is an encoder sensor that measures the angular speed of the DC motor. The output devices are the L298 motor driver and the JGA25-370 DC motor. The processing device is the Arduino Uno. The interface device is the serial monitor or serial plotter of the Arduino IDE. The encoder sensor sends pulse data to the Arduino Uno to calculate the angular velocity value in Revolutions Per Minute (RPM). The features used by the Arduino Uno to process the angular velocity are a timer and a counter. The angular velocity data are sent to the Serial Monitor and Serial Plotter via the USB serial link. Pulse Width Modulation (PWM) is used to adjust the input voltage to the DC motor so that the DC motor's speed can be varied. The motor driver converts the digital voltage (PWM) into an analog voltage with a higher voltage level (5 volts to 10 volts). Figure 1. System Block Diagram The configuration of the input and output PINs can be seen in Figure 2. The encoder sensor is connected to PINs 2 and 3. Both PINs have a counter feature to count pulses. The motor driver is connected to PINs 6, 7, and 8. PIN 6 adjusts the motor's angular speed with the Pulse Width Modulation (PWM) feature, while PINs 7 and 8 regulate the direction of rotation of the motor (clockwise or counterclockwise). The voltage source is obtained from a Power Supply Unit (PSU) with a voltage of 12 V. The encoder sensor requires a voltage of 3.3 V, which is obtained from the Arduino Uno minimum system. The L298 motor driver requires 5 V and 12 V voltage sources. In the motor driver, the 5 V voltage serves as the electronic circuit voltage source, while the 12 V voltage is the DC motor voltage source. The control system block diagram is shown in Figure 3. This system is categorized as a closed-loop control system. The setpoint block is the reference value that the system must follow. The PID controller is a Proportional-Integral-Derivative control. The system to be controlled is the DC motor. The system output is the angular velocity. The feedback uses an encoder sensor to calculate the angular velocity of the DC motor system. Angular Velocity Counter The angular velocity can be obtained by counting the pulses from the encoder sensor in one minute. The angular speed is calculated using an encoder mounted on the end of the DC motor. The number of rotations in one sample time can be written as n = p / N, where p is the number of pulses counted in the sample time t_s and N is the number of pulses in one turn; according to the motor datasheet, there are N = 600 pulses in one revolution. The sample time used in the program is t_s = 50 ms, which must be converted to one minute: the constant 1000 converts milliseconds to seconds, and 60 converts seconds to minutes. Thus, the RPM can be calculated as RPM = (p / N) × (1000 / t_s) × 60. (4)
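The computation above can be transcribed directly; the following sketch (in Python rather than Arduino C, for brevity) implements Equation (4) with the datasheet and program values N = 600 and t_s = 50 ms.

```python
# RPM from encoder pulses, per Equation (4): RPM = (p/N) * (1000/t_s) * 60.

PULSES_PER_REV = 600   # N, from the JGA25-370 encoder datasheet
SAMPLE_TIME_MS = 50    # t_s, the sample time used in the program

def rpm_from_pulses(pulse_count: int,
                    sample_time_ms: int = SAMPLE_TIME_MS,
                    pulses_per_rev: int = PULSES_PER_REV) -> float:
    rotations = pulse_count / pulses_per_rev          # n = p / N
    return rotations * (1000.0 / sample_time_ms) * 60.0

if __name__ == "__main__":
    print(rpm_from_pulses(50))   # 50 pulses in 50 ms -> 100.0 RPM
```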
Proportional Integral Derivative (PID) Control PID control consists of proportional, integral, and derivative controls [43][44][45]. The PID control equation in the time domain is u(t) = K_p [ e(t) + (1/T_i) ∫ e(τ) dτ + T_d de(t)/dt ], where u(t) is the control signal, K_p is the proportional control parameter, T_i is the integral time, T_d is the derivative time, and e(t) is the error, i.e., the difference between the reference value and the feedback value. The PID controller can be written in another form as [46][47] u(t) = K_p e(t) + K_i ∫ e(τ) dτ + K_d de(t)/dt, (6) where the gain constant K_i is the integral control parameter and K_d is the derivative control parameter. PID control has characteristics that influence the system response because of the different controller structures. Proportional control acts on the error between the reference value and the feedback value. Integral control acts on the sum of all errors. Derivative control acts on the difference between the current error and the previous error. Algorithm The software used to create the embedded control program is the Arduino IDE. The Arduino IDE software and the embedded control main program with PID control are shown in Figure 4(a), while the embedded control program flow chart is shown in Figure 4(b). The main program has two parts, namely the angular velocity calculation program according to Equation (4) and the PID control program based on Equation (6). The parameters that must be determined before running the system are the reference value (SP) and the PID parameters (KP, KI, KD). The control system will continue to run until the supply is turned off or a set amount of data has been collected. The PID control calculates the PWM value sent to the motor. The angular velocity value is sent to the computer to be displayed as a value or a graph.
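Below is a minimal simulation sketch of the controller in the gain form (6), closed around a crude first-order motor model. The motor constants are illustrative assumptions, while the gains and sample time are the best set reported in the results below (KP = 0.7, KI = 0.3, KD = 0.2, t_s = 50 ms).

```python
# Discrete PID in gain form (6) driving a crude first-order DC-motor model.
# Motor gain and time constant are illustrative assumptions.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measured: float) -> float:
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

def motor_step(rpm: float, pwm: float, dt: float) -> float:
    """First-order model: steady-state RPM proportional to PWM (assumed)."""
    gain, tau = 1.6, 0.2                      # illustrative motor constants
    return rpm + (gain * pwm - rpm) * dt / tau

if __name__ == "__main__":
    pid, rpm, dt = PID(0.7, 0.3, 0.2, 0.05), 0.0, 0.05
    for _ in range(40):                       # 2 s of simulated time
        pwm = max(0.0, min(255.0, pid.update(100.0, rpm)))  # 8-bit PWM clamp
        rpm = motor_step(rpm, pwm, dt)
    print(f"RPM after 2 s: {rpm:.1f}")        # approaches the 100 RPM setpoint
```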
RESULT AND DISCUSSION In this section, there are several tests, as follows. The first part concerns the open-loop system response [48]. The second part concerns the closed-loop system response with the PID controller. The third part concerns the response to variations in the reference value and the effect of different sample times. The last part compares the system without and with PID control. The hardware used in this study is shown in Figure 5. The price of the Arduino Uno component is IDR 70,000. The price of the L298 motor driver is IDR 25,000. The price of a 25GA370 DC motor with an encoder sensor is IDR 125,000. The PSU price is IDR 30,000. The connecting cables, holder and plastic box cost IDR 50,000, so the total component price is IDR 300,000. These components' price is lower than that of other devices such as the NI-DAQ [49] or a PLC [50]. System performance is measured using the system response. Some of the observed system response parameters are rise time, settling time, overshoot, and steady-state error. The expected system response has a small rise time, a short settling time, a small overshoot, and zero steady-state error. Open Loop Testing The results of the open-loop hardware test are shown in Table 1 and Figure 6. The Arduino Uno uses 8-bit PWM, with a data range between 0 and 255. The motor driver uses a 12-volt power supply. The maximum voltage to the DC motor is 10 volts, measured at the motor driver output. The minimum PWM is 50; if the PWM is below 50, the DC motor cannot rotate and just buzzes. There are two sample times used in this study, namely 50 ms and 100 ms. Based on Figure 6, the 50 ms sample time provides a more stable angular velocity than the 100 ms sample time. It can be seen clearly in the system response with PWM = 250 that a sample time of 100 ms produces a response with some oscillations (ripples) after reaching a steady state. The sample time is critical for system accuracy and response speed. Using a smaller sample time, for example 50 ms, the delay for the system to respond to errors is also smaller. Therefore, the system can prevent the output from having errors; it responds more quickly and produces a more stable output. DC Motor Step Reference Response with PID Control The proportional control system response is shown in Table 2 and Figure 7. The reference value is 100 RPM. In Table 2, increasing the proportional control (KP) reduces the steady-state error; in Figure 7, it can be seen that the steady-state error is indeed reduced. Proportional control increases the overshoot value and reduces the rise time. Thus, in the hardware implementation, it can be seen that proportional control reduces the rise time, increases the overshoot, and reduces the steady-state error. The response of the integral control system is shown in Table 3 and Figure 8. It can be seen that increasing the integral control (KI) can eliminate steady-state errors and results in a faster system response. Larger integral controls have faster rise times but greater overshoot and undershoot. Even a moderate change in the value of the integral control can make the system experience overshoot and undershoot. Thus, integral control increases overshoot, increases undershoot, reduces rise time, and eliminates steady-state errors. The response of the derivative controller is shown in Table 4 and Figure 9. It can be seen that increasing the derivative control (KD) can reduce overshoot but makes the system experience undershoot. The larger the derivative control, the greater the undershoot value. Derivative control can increase the rise time, but it can also reduce the rise time after the undershoot appears. Thus, derivative control reduces overshoot, reduces rise time, and increases undershoot. Figure 9. Closed-Loop System Response with Derivative Control Table 5 summarizes the characteristics of the PID controller based on the implemented hardware system response. Proportional control and integral control are suitable for reducing rise time but carry the risk of increasing overshoot. The best function of proportional control is to reduce rise time, and the best function of integral control is to eliminate steady-state errors. Derivative control is suitable for reducing overshoot but carries the risk of increasing undershoot. Accordingly, the derivative control should not be set too large. Testing Reference Value and Sampling Time The best PID parameters are shown in Table 6 and Figure 10. The setpoint value is 100 RPM. The best PID control parameter set (KP, KI, KD) is number 4. System responses numbers 2 and 3 have undershoot because of their large derivative gains; the derivative control must be increased carefully, since it can introduce undershoot. The overshoot response is given by number 1 because of its large proportional value. The next test is the variation of the reference value and of the sample time.
There are two sample times used in this study, 50 ms and 100 ms. The results are shown in Table 7 and Figure 11. The PID controller can control and stabilize the system at multiple setpoints and achieve the reference signal. The sample time affects the system response, which can nevertheless still follow the setpoints. The 50 ms sample time provides a faster response than the 100 ms one. Thus, a smaller sample time is good for a faster system response. However, it should not be too small, as it can eliminate the original characteristics of the angular velocity data. Comparison of Systems without and with PID Control This section describes the comparison of the systems without and with PID control, which is summarized in Table 8. The system without PID control is shown in Figure 12(a), and the system with PID control is shown in Figure 12(b). The most visible difference between the two systems is the performance in reaching the reference value. The system without PID control needs time to set the correct PWM value for the motor's angular speed to reach the reference value. It takes a lot of experimentation to reach the reference value, especially if the reference value has to be changed. Meanwhile, the system with PID control can easily reach the reference value, even when the reference value changes. Systems with PID control can automatically adjust to the reference value, while systems without PID control have to be adjusted manually. A system without PID control cannot withstand disturbances, which cause the angular velocity to deviate from the reference value. Meanwhile, systems with PID control are resistant to disturbances and maintain the DC motor's angular speed according to the reference value. Systems without PID control will exhibit a steady-state error if there is a change in the reference value or a disturbance, whereas PID-controlled systems do not have steady-state errors. Therefore, a system with PID control performs better than a system without PID control in achieving the reference value. CONCLUSIONS This study proposes controlling a DC motor system using Proportional Integral Derivative (PID) control on the embedded Arduino Uno system. The PID controller can control and stabilize DC motors in the embedded system using the Arduino Uno. The system can achieve different reference values with a settling time of under one second. Proportional control has the characteristic of reducing rise time but increasing overshoot. Integral control has the characteristics of eliminating steady-state errors and increasing overshoot. Derivative control has the characteristic of reducing overshoot but increasing undershoot. Shorter sample times provide a faster and more stable system response. However, the sample time should not be too short, otherwise it can mask the original characteristics of the output. The best PID controller for the 100 RPM reference is KP = 0.7, KI = 0.3, KD = 0.2 with a sample time of 50 ms. Compared with the system without PID control, the system with PID control has the advantage of easily reaching the reference value even when the reference value changes.
3,144.4
2021-03-22T00:00:00.000
[ "Engineering", "Computer Science" ]
The Quantum Nature of Color Perception: Uncertainty Relations for Chromatic Opposition In this paper, we provide an overview of the foundation and first results of a very recent quantum theory of color perception, together with novel results about uncertainty relations for chromatic opposition. The major inspiration for this model is the remarkable 1974 work by H.L. Resnikoff, who had the idea to give up the analysis of the space of perceived colors through metameric classes of spectra in favor of the study of its algebraic properties. This strategy made it possible to reveal the importance of hyperbolic geometry in colorimetry. Starting from these premises, we show how Resnikoff's construction can be extended to a geometrically rich quantum framework, where the concepts of achromatic color, hue and saturation can be rigorously defined. Moreover, the analysis of pure and mixed quantum chromatic states leads to a deep understanding of chromatic opposition and its role in the encoding of visual signals. We complete our paper by proving the existence of uncertainty relations for the degree of chromatic opposition, thus providing a theoretical confirmation of the quantum nature of color perception. Introduction The central core of this paper is the concept of the space of colors perceived by a trichromatic human being, or color space for short. The scientific literature about this topic is abundant and here we limit ourselves to quoting the classical reference [1], the more mathematically-oriented books [2,3] and the image processing and computer vision oriented references [4,5], among hundreds of books and papers written on this subject. What makes scientists so interested in color spaces is that, instead of being simple collections of elements representing either physical color stimuli or the sensations they provoke in humans, they are structured spaces, with intrinsic algebraic and geometrical properties and a metric able to quantify the distance (physical or perceptual) between their points. The work of H.L. Resnikoff, which we will briefly recall in Section 2, is related to the aforementioned geometrical and metric structure of the perceived color space. It is a foundational work, without direct algorithmic applications, and it requires a non-trivial acquaintance with theoretical physics and pure mathematics. This is probably the reason why his contribution, which we consider so important, has remained practically unnoticed until now. In this paper, we start by providing an outlook on the color perception theory inspired by Resnikoff's insights that we have developed through the papers [6][7][8][9][10][11]. These are quite technical and dense works; here we prefer to privilege clarity of exposition and, for this reason, we will omit the proofs of the results that we state; the interested reader may consult them in the contributions just quoted. After introducing, in Section 3, some fundamental mathematical results related to Jordan algebras and their use in quantum theories, we show, in Section 4, that Resnikoff's work leads to a quantum theory of color perception. As a novel contribution, we will provide an extensive motivation to explain why we believe that a quantum theory is better suited than a classical one to model color perception, and we will also provide strong theoretical support for our claim, i.e., the existence of uncertainty relations for color opponency.
In Section 4.1, we show how to identify pure and mixed quantum chromatic states; this result is used in Section 4.2 to study the hue and saturation of a color and in Section 4.3 to show that the peculiar achromatic plus chromatic opponent encoding of light signals performed by the human visual system can be intrinsically described by the quantum framework, without resorting to an 'a-posteriori' statistical analysis. In Section 4.4, we derive the uncertainty relations for chromatic opponency by adapting a technique first proposed by Schrödinger that extends the one used by Heisenberg, Weyl and Robertson to single out uncertainty bounds. Section 4.5 offers a brief panorama of the rich geometry of quantum chromatic states. We conclude our paper in Section 5 by discussing some ideas about future developments of the theory. The Dawn of Hyperbolicity in Color Perception: Yilmaz's and Resnikoff's Works Here we present, in two separate subsections, the ideas and results of H. Yilmaz and H.L. Resnikoff about the role of hyperbolic structures in the study of color perception. In spite of the fact that Resnikoff's contribution is, by far, more important than Yilmaz's for our purposes, we want to respect the chronological development of the two works and we start with Yilmaz's idea about the link between color perception and special relativity. Yilmaz's Relativity of Color Perception Yilmaz was a theoretical physicist specialized in relativity who, around 1960, started to apply his knowledge to the field of human perception; here we briefly recap his 1962 paper [12] about the relativity of color perception. A detailed version of the content of this section can be found in [9]. Yilmaz considered a visual scene where a trichromatic observer adapted to a broadband illuminant can identify the colors of a patch by comparison with a set of Munsell chips. He searched for the transformation that relates the color descriptions obtained when the observer is adapted to two different broadband illuminants. As a first approximation, he searched for a linear map, i.e., a matrix transformation between color coordinates, and he derived the explicit form of the entries of this matrix by using the results of three perceptual experiments. He obtained a three-dimensional Lorentzian matrix, with Lorentz factor given by Γ = 1/√(1 − (σ/Σ)²), in which the perceived saturation σ and the maximal perceivable saturation Σ are the analogues of the speed of a moving particle and the speed of light, respectively, in special relativity. Lorentz transformations are precisely the linear maps that preserve the Lorentzian scalar product, which is the hyperbolic counterpart of the Euclidean scalar product, see, e.g., [13]. Thus, to the best of our knowledge, Yilmaz underlined for the first time that hyperbolic structures may play a significant role in color perception. Yilmaz's ideas, surely brilliant and much ahead of their time, were developed only heuristically: the mathematical analysis is not fully convincing in multiple parts and the experimental results that allow him to build the Lorentz transformation are just claimed, without providing any experimental data or apparatus description, see [9,11] for further details. In spite of that, Yilmaz's hint at the importance of hyperbolic geometry for the study of color perception inspired at least one key scientist, the polymath H.L. Resnikoff, who acknowledged Yilmaz in his fundamental 1974 paper [14], which we recall in the next subsection.
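For compactness, Yilmaz's analogy can be restated side by side; the following LaTeX snippet is merely a notational restatement of what is written above, not additional material from [12].

```latex
% Yilmaz's analogy: the perceptual Lorentz factor vs the relativistic one.
\[
  \Gamma_{\mathrm{color}} = \frac{1}{\sqrt{1-(\sigma/\Sigma)^{2}}}
  \qquad \longleftrightarrow \qquad
  \Gamma_{\mathrm{SR}} = \frac{1}{\sqrt{1-(v/c)^{2}}},
\]
% with the perceived saturation \sigma playing the role of the particle
% speed v, and the maximal perceivable saturation \Sigma that of the
% speed of light c.
```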
Resnikoff's Homogeneous Color Space To salute Resnikoff's foresight, we consider that no sentence is more appropriate than that of Altiero Spinelli, one of the founding fathers of the European Union, who stated the following: 'the quality of an idea is revealed by its ability to rise again from its failures'. Resnikoff's work was written in the language of mathematical physics and fused differential geometry, harmonic analysis of Lie groups and the theory of Jordan algebras to study the geometry and metrics of the space of perceived colors. This was far too abstract and advanced for the typical mathematical knowledge of the average color scientist of that time, with the consequence that Resnikoff's paper failed to interest the scientific community until today. Resnikoff's cleverest idea was to abandon the classical description of perceived colors in terms of metameric classes of light spectra, see, e.g., [1], and to concentrate on an alternative description based, essentially, on the algebraic properties satisfied by perceived colors. As we will see, this turned out to be the key to unveiling a completely new way of representing colors. The starting point of Resnikoff's paper is Schrödinger's beautiful 1920 set of axioms for perceived colors: the great theoretical physicist, before dedicating himself to quantum mechanics, studied optics and color perception and came to the conclusion that the empirical discoveries of the founding fathers of color theory, no less than Newton [15,16], Helmholtz [17], Grassmann [18] and Maxwell [19], could be summarized in a set of axioms which, put together, say that the space of perceived colors, denoted with C from now on, is a regular convex cone of dimension 3 (for trichromatic observers, the only ones that we consider in this paper). The mathematical formalization of this concept is the following: let C be a subset of a finite-dimensional inner product vector space (V, ⟨, ⟩), then: • C is a cone if, for all c ∈ C and all λ ∈ R+, λc ∈ C, i.e., C is stable w.r.t. multiplication by positive constants. This is the mathematical translation of the fact that, up to the glare limit, if we can perceive a color, then we can also perceive a brighter version of it; • C is convex if, for every couple c_1, c_2 ∈ C and every α ∈ [0, 1], αc_1 + (1 − α)c_2 ∈ C, i.e., the line segment connecting two perceived colors is entirely composed of perceived colors; • C is regular if, denoting with C̄ its closure w.r.t. the topology induced by the inner product of V, the conditions c ∈ C̄ and −c ∈ C̄ imply that c = 0. The intuitive geometrical meaning of this condition is that C is a single cone with a vertex. It is important to stress that Schrödinger's axioms hold for the so-called aperture colors [20], i.e., colored lights seen in isolation against a neutral background or homogeneously colored papers seen through a reduction screen. Resnikoff dedicated a large part of his paper to motivating the introduction of a new axiom for C and to analyzing its strong consequences on the geometry of C. He postulated that C is a homogeneous space, i.e., that there exists a transitive group action on it, which means, in practice, that any couple of elements of C can be connected via an invertible transformation. Resnikoff considered this property to be naturally satisfied by the space of perceived colors C because no color can be considered 'special' w.r.t. another one.
Moreover, he gave an illuminating motivation for his interest in homogeneity by discussing the simplified case of achromatic visual stimuli. These provoke only a brightness sensation in humans; for this reason, their space can be modeled as R+ = (0, +∞) (if we do not consider glare), which is both a group w.r.t. multiplication and a homogeneous space of itself: in fact, for any x, y ∈ R+, we can write y = λx, with λ = y/x ∈ R+. Now, the key observation is that, up to a positive constant, the only non-trivial R+-invariant distance on R+ is given by d(x, y) = |log(y/x)|, and this expression coincides with the well-known Weber-Fechner law, the first psychophysical law ever determined, which establishes the logarithmic response of the human visual system w.r.t. variations of achromatic stimuli, see, e.g., [21]. The fact that the only distance compatible with the homogeneous structure of R+ coincides with a perceptual metric was a major source of inspiration for Resnikoff, who saw in the extension of homogeneity to the 3-dimensional regular convex cone C a possibility to uniquely determine perceptual metrics for the entire color space and not only for the achromatic axis. The transitive group action on C described by Resnikoff is that of the so-called 'background transformations', operationally implemented by the change of background depicted in Figure 1 and mathematically represented (the hypothesis of linearity for background transformations remains an open issue, see, e.g., [7,22]) by orientation-preserving linear transformations that preserve C, i.e., by elements of GL+(C) := {B ∈ GL(V) : B(C) = C, det(B) > 0}. The hypothesis of homogeneity for C led to formidable consequences. In fact, Resnikoff proved that there are only two types of 3-dimensional homogeneous regular convex cones, namely C_1 = R+ × R+ × R+ and C_2 = R+ × SL(2, R)/SO(2), (3) see also [7] for a simplified proof. C_1 = R+ × R+ × R+ is the classical flat colorimetric space used to harbor, e.g., the LMS, RGB, XYZ coordinates [1]. Instead, SL(2, R)/SO(2) is a space of constant negative curvature equal to −1 and it is an instance of a 2-dimensional hyperbolic model H. Other equivalent models are, e.g., the upper hyperboloid sheet (whose elements can be identified with 2 × 2 real symmetric positive-definite matrices with unit determinant), the upper half plane and the Poincaré and Klein disks [13]. Resnikoff saw in C_2 = R+ × H a novel way to represent perceived colors: he interpreted R+ as the brightness axis and H as the chromatic space of perceived colors, thus giving a mathematical formalization to the aforementioned Yilmaz's idea about the pertinence of hyperbolic structures in the study of color perception. As we said above, one of Resnikoff's motivations for studying a homogeneous color space was the will to uniquely determine perceptual metrics compatible with the homogeneous structure of C, i.e., invariant under the action of GL+(C). He actually succeeded in proving that, for both C = C_1 and C = C_2, there is only one Riemannian metric ds², up to positive multiplicative scalars, such that the induced Riemannian distance d : C × C → R+ satisfies d(B(c_1), B(c_2)) = d(c_1, c_2), ∀c_1, c_2 ∈ C, ∀B ∈ GL+(C). (4) Specifically, when C = C_1, this metric, denoted with ds²_1, is ds²_1 = α_1 (dx_1/x_1)² + α_2 (dx_2/x_2)² + α_3 (dx_3/x_3)², with α_i > 0, which coincides with the well-known Helmholtz-Stiles metric classically used in colorimetry [1]. Instead, when C = C_2, the only Riemannian metrics satisfying (4) are those positively proportional to ds²_2 = Tr((X⁻¹ dX)²), Tr being the matrix trace, which coincides with the Rao-Siegel metric widely used nowadays in the geometric science of information, see, e.g., [23][24][25].
Assumption (4), however, is not coherent with the so-called crispening effect represented in Figure 2, where the same couple of color stimuli is exhibited over three different backgrounds: it is clear that the perceptual difference between them is not background independent. As a consequence, if background transformations are identified with elements of GL+(C), then neither the Helmholtz-Stiles nor the Rao-Siegel metric can be accepted as a perceptually coherent color distance. In the second part of his 1974 paper, Resnikoff showed how to embed C_1 and C_2 in a single mathematical framework thanks to the theory of Jordan algebras. What he did not see was the link with a quantum theory of color provided by these objects. We will discuss this in Section 4, after recalling the basic results about Jordan algebras and their use in the algebraic formulation of quantum theories. Jordan Algebras and Their Use in Quantum Theories Jordan algebras were introduced by the German theoretical physicist P. Jordan in 1932, see [26], in the context of quantum mechanics. For the sake of brevity, in this section we will only recap the information about such objects that we need in the sequel; more information can be found in [27][28][29]. Basic Results and Classification of Three-Dimensional Formally Real Jordan Algebras and Their Positive Cones A Jordan algebra A is a real vector space equipped with a bilinear product (a, b) ↦ a • b that is required to be commutative and to satisfy the following Jordan identity: a • (b • a²) = (a • b) • a², which ensures that the power of any element a of A is defined without ambiguity and that A is at least power-associative; however, A, in general, is not an associative algebra. The most classical example of a non-associative Jordan algebra is given by M(n, R), the set of real n × n matrices with n ≥ 2, equipped with the following matrix Jordan product: a • b = (ab + ba)/2. (8) The Jordan algebras that we will consider in the sequel are formally real, which means that, for any finite set a_1, a_2, . . . , a_n ∈ A, a_1² + a_2² + · · · + a_n² = 0 implies a_1 = a_2 = · · · = a_n = 0, just as if the elements a_1, a_2, . . . , a_n were real, which motivates their name. In the sequel, we will make use of the convenient acronym FRJA to denote such Jordan algebras. It can be proven that any FRJA A is unital, i.e., there exists a unit 1 ∈ A such that 1 • a = a • 1 = a for all a ∈ A, and it can be endowed with the following partial ordering: for any couple of elements a, b ∈ A, b ≤ a if and only if a − b is a sum of squares. In particular, if a is the square of an element of A, then we call a a positive element and we write a ≥ 0. The set of positive elements of A is called its positive domain and it is denoted with C̄(A); its interior C(A) is called the positive cone of A. Every FRJA A can be equipped with an inner product defined by ⟨a, b⟩ := Tr(a • b), (9) where Tr denotes the trace. Thus, every FRJA is also a Hilbert space with respect to this inner product.
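As a quick numerical illustration (not a proof), the following snippet checks on random real symmetric matrices that the matrix Jordan product (8) is commutative, non-associative, and satisfies the Jordan identity.

```python
# Numerical check of the properties of the symmetrized matrix product (8).

import numpy as np

def jordan(a, b):
    return 0.5 * (a @ b + b @ a)

rng = np.random.default_rng(0)

def rand_sym(n=3):
    m = rng.standard_normal((n, n))
    return 0.5 * (m + m.T)

a, b, c = rand_sym(), rand_sym(), rand_sym()
a2 = jordan(a, a)

print(np.allclose(jordan(a, b), jordan(b, a)))                         # commutative: True
print(np.allclose(jordan(jordan(a, b), c), jordan(a, jordan(b, c))))   # associative: False
print(np.allclose(jordan(a, jordan(b, a2)), jordan(jordan(a, b), a2))) # Jordan identity: True
```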
We now pass to the classification of FRJAs of dimension 3; for more information about a generic dimension n we refer the reader to [29]. The classification theorem of Jordan, von Neumann and Wigner [30] establishes that there are only two non-isomorphic FRJAs of dimension 3. The first is the associative Jordan algebra A_1 = R ⊕ R ⊕ R, with componentwise product. Its positive domain and cone are, respectively, C̄(A_1) = [0, +∞)³ and C(A_1) = R+ × R+ × R+. The second option corresponds to two non-associative and naturally isomorphic Jordan algebras, namely A_2 = H(2, R) ≅ R ⊕ R², where H(2, R) denotes the Jordan algebra of 2 × 2 symmetric matrices with real entries equipped with the matrix Jordan product (8), and the vector space R ⊕ R² becomes the so-called spin factor when endowed with the Jordan product defined by (α, v) • (β, w) := (αβ + ⟨v, w⟩, αw + βv), where α, β ∈ R, v, w ∈ R² and ⟨, ⟩ is the Euclidean inner product of R². The natural isomorphism between the two representations of A_2 is provided by the following mapping: (α, (v_1, v_2)) ↦ [[α + v_1, v_2], [v_2, α − v_1]]. Thanks to this isomorphism, the positive domains and cones of the two representations of A_2 are in one-to-one correspondence. Simple computations show that C(A_2) ≅ L⁺ and C̄(A_2) ≅ L̄⁺, (15) where L⁺ = {(α, v) ∈ R ⊕ R² : α > ‖v‖}, ‖·‖ being the Euclidean norm, is called the future lightcone and its closure is L̄⁺ = {(α, v) ∈ R ⊕ R² : α ≥ ‖v‖}; in matrix terms, H⁺(2, R) is the cone of positive-definite 2 × 2 real matrices and H̄⁺(2, R) is the set of positive semi-definite 2 × 2 real matrices. For later purposes, we underline here that the positive cone (actually C(A) is a symmetric cone, i.e., an open convex regular homogeneous self-dual cone, for every FRJA A and, by the Koecher-Vinberg theorem, see, e.g., [27,31,32], every symmetric cone is isomorphic to the positive cone of a FRJA) C(A) of a FRJA A has the remarkable property of being self-dual, see, e.g., [27], i.e., C(A) = C*(A), where C*(A) is called the dual cone and is defined as follows: C*(A) := {b ∈ A : ⟨a, b⟩ > 0, ∀a ∈ C̄(A), a ≠ 0}. The isomorphism above is a direct consequence of the Riesz representation theorem, which allows us to identify every element a ∈ A with one and only one linear functional ω ∈ A*, the dual of the vector space underlying A. ω ∈ A* is called positive if ω(a) ≥ 0 for all a ∈ C(A). If we denote with A*_+ the set of positive functionals on A then, by self-duality, we have the identification A*_+ ≅ C̄(A), so, thanks to (15), A*_+ ≅ L̄⁺ ≅ H̄⁺(2, R) in the case of A_2. The results that we have recalled so far allow us to show how Resnikoff's finding about the classification of the possible perceived color spaces C_1 and C_2 appearing in (3) can be related to Jordan algebra theory. In fact, on one side, C_1 coincides with the positive cone of the associative Jordan algebra A_1 = R ⊕ R ⊕ R. On the other side, every matrix X of H⁺(2, R), the positive cone of the non-associative Jordan algebra A_2 = H(2, R), has a strictly positive determinant, so we can always decompose it as X = (det X)^{1/2} Y, where Y ∈ H⁺_1(2, R), the subset of H⁺(2, R) given by matrices with unit determinant. In other words, the two color spaces found by Resnikoff can be identified with the positive cones of the only two non-isomorphic formally real Jordan algebras of dimension 3. This is as far as Resnikoff went in his paper [14]. In Section 4, we will show how to extend Resnikoff's ideas to a quantum theory of color perception. The exposition will be clearer if we first explain, in the next subsection, how Jordan algebras relate to quantum theories.
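The correspondence (15) can be checked numerically; the sketch below verifies, using the matrix form of the isomorphism written above, that (α, v) lies in the future lightcone α > ‖v‖ exactly when the associated symmetric matrix is positive-definite.

```python
# Sanity check: lightcone membership vs positive-definiteness under the
# spin-factor / H(2,R) isomorphism written in the text.

import numpy as np

def to_matrix(alpha: float, v: np.ndarray) -> np.ndarray:
    return np.array([[alpha + v[0], v[1]],
                     [v[1],         alpha - v[0]]])

rng = np.random.default_rng(1)
for _ in range(5):
    alpha = rng.uniform(-2, 2)
    v = rng.uniform(-2, 2, size=2)
    in_cone = alpha > np.linalg.norm(v)
    pos_def = bool(np.all(np.linalg.eigvalsh(to_matrix(alpha, v)) > 0))
    print(in_cone, pos_def)   # the two booleans always agree
```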
Jordan Algebras and Algebraic Formulation of Quantum Theories The birth (and also the name) of quantum mechanics is related to the need to explain the outcomes of physical experiments involving energy quantization. After the early formalization attempts of Born, Heisenberg and Jordan with the so-called 'matrix mechanics' and of Schrödinger with his 'wave mechanics', first Dirac in [33] and then von Neumann in [34] provided the abstract setting based on Hilbert spaces and Hermitian operators that, nowadays, we call the ordinary axiomatization of (non-relativistic) quantum mechanics. The work of Dirac and von Neumann is unanimously considered extraordinary not only for their rigorous formalization of quantum mechanics, but also because they boldly gave up preconceptions about nature, such as continuity and deterministic behavior, and simply built the quantum theory from scratch out of the experiments, by searching for the minimal mathematical framework and the most suitable names, objects and laws that described the outcomes of the experiments. They did not allow previous philosophical dogmas about how nature 'should' work to tell them what to do; instead, they let nature speak for itself through mathematics. In this sense, quantum mechanics, despite its great mathematical abstractness, is as attached to practical experiments and measurements as it can be. This explains why mathematical definitions always go hand in hand with 'operational' definitions in quantum mechanics. Following this tradition, we start with the operational definitions that we will mimic in Section 4 for a perceptual system: • A physical system S is described as a setting where we can perform physical measurements giving rise to quantitative results in conditions that are as isolated as possible from external influences; • A state of S is the way it is prepared for the measurement of its observables; • Observables in S are the objects of measurements and are associated with the physical apparatus used to measure them on a given state; • An expected value of an observable in a given state is the average result of multiple measurements of the observable when the system is prepared each time in the same state. It is clear that observables characterize a state through their measurements and, vice versa, the preparation of a particular state characterizes the experimental results that will be obtained on the observables. This observable-state duality, as we will see in this subsection, can be formalized mathematically. In the ordinary mathematical axiomatization of quantum mechanics, a physical system is associated to a Hilbert space H, a state to a ray of H (i.e., the linear span of a vector of H), an observable to an Hermitian operator A : H → H and, finally, the expected value of A on a state is associated to an element of its spectrum. Besides this ordinary axiomatization of a quantum system, other, more profound, axiomatizations emerged later. Probably the most general, and surely the best suited for our purposes, is the so-called algebraic formulation, pioneered by Jordan, von Neumann and Wigner in [30]. von Neumann massively contributed to the Hilbert space formalization of quantum mechanics while he was an assistant of Hilbert in Göttingen between 1926 and 1932, the year in which he published the book [34]. However, he soon came to the conclusion that, from an algebraic point of view, the Hilbert space formulation of quantum mechanics was not optimally suited [35]: in fact, linear operators on a Hilbert space form an algebra under composition, but two Hermitian operators (associated to quantum observables) are stable under composition if and only if they commute; moreover, on the operational side, the composition of observables makes sense only under restrictive conditions. These considerations led him to warmly welcome Jordan's 1932 proposal [26] to replace the non-commutative and operationally problematic composition product with the commutative and operationally significant Jordan matrix product defined in Equation (8), even if that meant renouncing associativity.
The meaningfulness of the product a ∘ b = ½(ab + ba) lies in Jordan's 'brilliantly trivial' observation that it can be re-written as

a ∘ b = ½[(a + b)² − a² − b²],

i.e., purely in terms of squares of observables, which are themselves operationally meaningful. As underlined in [37], the algebraic formulation of a physical theory is perhaps the most general because it can encompass both classical and quantum systems. The postulates of direct interest for us are the following:
• A physical system S is described by its observables, which are elements of an algebra A with unit 1 endowed with a partial ordering. Notice that this does not mean that all the elements of A are observables, but only that the observables of S are contained in A;
• If A is a commutative and associative algebra, then we deal with a classical system; otherwise, we call S a quantum system;
• A state is a linear functional ω on A that is normalized, ω(1) = 1, and positive, in the following sense:
– If a ∈ A is positive, according to the partial ordering of A, then ω(a) ≥ 0;
– Given an observable a ∈ A and a preparation of the system, i.e., a state ω, we can associate the number ⟨a⟩_ω := ω(a), called the expectation value of the observable a ∈ A on the state ω. ω(a) is operationally obtained by performing replicated measurements of a on identically prepared states and by taking the average over the outcomes of the measurements.
It is important to motivate why the lack of commutativity or associativity of A is the key property establishing the quantum-like character of a theory. The real philosophical and mathematical core of a quantum system is not energy quantization (in fact, also in quantum mechanics there can be continuous energy bands, e.g., in solid state quantum physics, corresponding to the continuous part of the spectrum of a Hermitian operator), but Heisenberg's uncertainty principle [38], i.e., paraphrasing the beautiful description contained in [36], the empirical observation of the existence of observables that cannot be measured simultaneously: the measurement of one of them introduces an unavoidable limit in the precision by which another can be measured, as happens for the observables of the so-called Heisenberg algebra [39]. Mathematically, this profound physical fact has nothing to do with the discrete spectrum of a Hermitian operator on a Hilbert space, but with the fact that such operators form an associative but non-commutative algebra. Crucially, Jordan, von Neumann and Wigner proved in [30] that a commutative but non-associative algebra of observables can also be used to encode this fundamental fact. Hence, the passage from the associative and commutative algebra of observables (real-valued functions on the phase space) of classical mechanics to the either non-associative or non-commutative algebraic structure of quantum mechanics is the profound and crucial distinction between the two kinds of theories. As we have seen, a FRJA is always commutative, but it can be associative or not; in Section 4 we will use this property to interpret ordinary colorimetry as a classical theory and to establish a novel quantum theory of color perception through the choice of a non-associative FRJA of observables. A similar use of Jordan algebras was made by Emch in statistical mechanics and quantum field theory [40]. Before doing that, let us complete this section by recalling the fundamental concept of density matrix and its relation to pure and mixed states. In Section 3.1, we have seen that Riesz's representation theorem and self-duality imply that A*₊ ≅ C(A), i.e., the set of positive functionals of a FRJA A can be identified with the positive elements of A, which form a closed convex cone.
Thanks to this result, it is very easy to associate states to elements of A by imposing normalization. For historical reasons, these elements are typically denoted by ρ and called density matrices. Recalling Equation (9), the normalization condition can be simply written as follows: 1 = ω_ρ(1) = ⟨ρ, 1⟩ = Tr(ρ ∘ 1) = Tr(ρ). As a consequence, the 'duality state-observable', i.e., the isomorphism between the set of states S(A) (a subset of A*) and the set of density matrices DM(A) (a subset of A), is given by:

S(A) ≅ DM(A) := {ρ ∈ C(A) : Tr(ρ) = 1}.

So, for every state ω ∈ S(A) ⊂ A*, there is one and only one density matrix ρ ∈ DM(A) ⊂ A such that the expectation value of a ∈ A is given by:

⟨a⟩_ω = ω(a) = ⟨ρ, a⟩ = Tr(ρ ∘ a).

Density matrices can be associated with both pure and mixed states. We say that a state ω is the mixture of the two states ω₁, ω₂ if we can write ω as the convex combination of ω₁, ω₂, i.e., if there exists 0 < λ < 1 such that ω = λω₁ + (1 − λ)ω₂. More generally, a mixed state ω is the convex combination of n states ωᵢ, i = 1, ..., n, n ≥ 2, i.e.,

ω = Σᵢ λᵢ ωᵢ, with λᵢ > 0 and Σᵢ λᵢ = 1.

A state ω is pure if it cannot be written as a convex linear combination of other states. Geometrically speaking, a pure state does not lie in any open line segment joining two states. So, for example, if the set of states is represented by a disk or a sphere, then the pure states are those lying on the perimeter of the disk or on the spherical surface, respectively. In Section 4, we will confirm this fact for an important case of interest for our theory of color. Remarkably, if we associate a density matrix ρ to a state ω, then we can establish whether ω is pure or mixed thanks to this very simple criterion, see, e.g., [37] or [41]:

ω is pure ⟺ Tr(ρ ∘ ρ) = 1 (and ω is mixed ⟺ Tr(ρ ∘ ρ) < 1).

Another way to characterize the purity of a state is through the so-called von Neumann entropy, defined as S(ρ) = −Tr(ρ ∘ log ρ). It is possible to prove that, see, e.g., [42] or [43]:
• If A is a real matrix Jordan algebra, then

S(ρ) = −Σₖ λₖ log λₖ,

where the numbers λₖ are the eigenvalues of ρ repeated as many times in the sum as their algebraic multiplicity.

A Quantum Theory of Color Perception

We are now ready to state our operational axioms for a theory of color perception by mimicking those of the algebraic formulation of a physical theory:
• A visual scene is a setting where we can perform psycho-visual measurements in conditions that are as isolated as possible from external influences;
• A perceptual chromatic state is represented by the preparation of a visual scene for psycho-visual experiments;
• A perceptual color is the observable identified with a psycho-visual measurement performed on a given perceptual chromatic state;
• A perceived color is the expected value of a perceptual color after a psycho-visual measurement.
It is worthwhile underlining two facts about the previous assumptions:
• Well-known colorimetric definitions such as additive or subtractive synthesis of color stimuli, aperture or surface color, color in context, and so on, are incorporated in the concept of preparation of a perceptual chromatic state. A first example of preparation is the setup of a visual scene where an observer in a dark room looks at a screen on which a light stimulus with foveal aperture w.r.t. the observer provokes a color sensation. A second example of preparation is given by an observer adapted to an illuminant in a room who looks at the patch of a surface. The perceptual chromatic states identified by these two preparations are, in general, different;
• The instruments used to measure the observables are not physical devices, but the sensory system of a human being.
Moreover, the results may vary from person to person, thus the averaging procedure needed to experimentally define the expected value of an observable on a given state is, in general, observer-dependent. The response of an ideal standard observer can be obtained through a further statistical average over the observer-dependent expected values of an observable in a given state. On the mathematical side, the only axiom that we consider is the following, first introduced in [8]:

TRICHROMACY AXIOM: The space of perceptual colors is the positive cone C of a formally real Jordan algebra of real dimension 3.

Notice that we associate C with perceptual colors, i.e., observable colors, and not with perceived colors, i.e., their expectation values after measurements. In Section 5, we will motivate the reason why we consider this association more appropriate by discussing the subtle concept of measurement implicitly involved in the definition of perceived color. As we have discussed in Section 3.1, the only formally real Jordan algebras of real dimension 3 are the associative Jordan algebra R ⊕ R ⊕ R and the non-associative and naturally isomorphic Jordan algebras H(2, R) ≅ R ⊕ R². The positive cones of R ⊕ R ⊕ R and H(2, R) agree exactly with those found by Resnikoff by adding the homogeneity axiom to the set of experimentally well-established Schrödinger axioms. If we transpose the algebraic formulation of physical theories to the case of a perceptual theory of color, we immediately have that:
• A theory of color perception associated with the FRJA R ⊕ R ⊕ R is classical;
• A theory of color perception associated with the FRJAs H(2, R) ≅ R ⊕ R² is quantum-like.
As previously seen, standard colorimetry is associated with the FRJA R ⊕ R ⊕ R, and so it is a classical theory associated with a geometrically trivial cone of observables. In [8], the geometrically much richer 'quantum colorimetry' associated with the FRJAs H(2, R) ≅ R ⊕ R² has been investigated by exploiting, among other techniques, the results about density matrices and von Neumann entropy recalled above. The results obtained in [8] will be summarized in the following subsections. Before that, we would like to spend a few words motivating why a quantum theory of color perception is not only a valid option but, in our opinion, makes much more sense than a classical one. The well-known Copenhagen interpretation of quantum mechanics assumes that the nature of the microscopic world, contrary to the macroscopic one, is intrinsically probabilistic and, coherently with that, each quantum measurement must be interpreted in a probabilistic way. The same elusiveness characterizes color perception: it is well known that the outcome of multiple experiments in which an observer must choose, say, a Munsell chip to match a given color patch under a fixed illuminant is, in the large majority of cases, not a sharp choice, but a distribution of close selections around the most frequent one. If we use the terms introduced above, we have that, in spite of the fact that the visual scene has been prepared in the same chromatic state, the only way to characterize the perceived color of the patch is through the expected value of a probability distribution. Remarkably, the great theoretical physicist A. Ashtekar and his collaborators, A. Corichi and M. Pierri, foresaw in [44] the possibility of a quantum theory of color vision by writing: 'the underlying mathematical structure is reminiscent of the structure of states (i.e., density matrices) in quantum mechanics.
The space of all states is also convex-linear, the boundary consists of pure states and any mixed state can be obtained by a superposition of pure states. In the present case, the spectral colors are the analogs of pure states'. In the following subsections we will see how this intuition can be precisely formalized. Finally, let us also highlight a peculiar feature of the quantum theory of color perception: it is based on real numbers. This may sound odd, since quantum mechanics is usually thought to be an intrinsically complex theory; this, however, is a misconception. In fact, as remarked in [45], the quantum description of an observable as a Hermitian operator acting on a Hilbert space and of a state as a density matrix is based upon the spectral theorem and Gleason's theorem, respectively. Both theorems retain their validity on real or quaternionic Hilbert spaces, so, contrary to common belief, a real or quaternionic quantum theory of observables, states and related concepts is as legitimate as a complex quantum theory.

Pure and Mixed Quantum Chromatic States

If we specialize Equation (24) to the case A = H(2, R) we get: S(H(2, R)) ≅ DM(H(2, R)) = {ρ ∈ C(H(2, R)), Tr(ρ) = 1}, (30) but, thanks to (15), C(H(2, R)) = H⁺(2, R), which is a convex cone embedded in a 3-dimensional vector space. The linear constraint Tr(ρ) = 1 represents a 2-dimensional hyperplane, thus, geometrically, DM(H(2, R)) is expected to be 2-dimensional. If we add the condition Tr(ρ²) = 1 to obtain pure states, we further reduce the dimension to 1. Coherently with these considerations, we have:

DM(H(2, R)) = {ρ(v₁, v₂) = ½ [[1 + v₁, v₂], [v₂, 1 − v₁]] : (v₁, v₂) ∈ D}

and

{pure states} = {ρ(v₁, v₂) : (v₁, v₂) ∈ S¹},

where D is the closed unit disk in R² centered at the origin and S¹ is its border. The extension of these results to the spin factor R ⊕ R² is obtained immediately by applying the isomorphism defined in (14) to the matrices ρ(v₁, v₂):

s_v = ½ (1 + v), v = (v₁, v₂), ‖v‖ ≤ 1.

These results show that the states of our quantum color perception theory constitute the real version of the so-called Bloch sphere that, in complex quantum mechanics, harbors the states of a qubit (e.g., an electron with its two spin states), whose Hilbert state space is C². Following Wootters, see, e.g., [46], we call this system a rebit, the portmanteau of 'real qubit'. Since we lose a dimension passing from complex to real numbers, the Bloch sphere of a rebit becomes the Bloch disk D: the points in its interior parameterize mixed chromatic states, while those lying on the border S¹ parameterize pure chromatic states.

Von Neumann Entropy of Quantum Chromatic States: Saturation and Hue

Thanks to the von Neumann entropy, we can provide an explicit measure of the degree of purity of a quantum chromatic state. Let us start with the maximal von Neumann entropy, expressed by formula (29): the identity element of A = H(2, R) is I₂ = 2ρ(0, 0) and Tr(I₂) = 2, so

ρ₀ := ρ(0, 0) = ½ I₂, with S(ρ₀) = log 2,

i.e., the state of maximal von Neumann entropy for H(2, R) is parameterized by the origin of the unit disk D.
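The rebit picture just described lends itself to a direct numerical check. Assuming the matrix parameterization ρ(v₁, v₂) reconstructed above, the sketch below verifies the trace normalization, the purity criterion on the border S¹ of the Bloch disk, and the value S = log 2 at the origin:

```python
import numpy as np

def rho(v1, v2):
    """Rebit density matrix parameterized by the Bloch disk D (matrix convention assumed above)."""
    return 0.5 * np.array([[1 + v1, v2],
                           [v2, 1 - v1]])

def purity(r):
    return np.trace(r @ r)      # Tr(rho o rho); for symmetric matrices rho o rho = rho @ rho

def entropy(r):
    lam = np.linalg.eigvalsh(r)
    lam = lam[lam > 1e-12]      # convention 0 log 0 := 0
    return -np.sum(lam * np.log(lam))

for v1, v2 in [(0.0, 0.0), (0.3, -0.4), (1.0, 0.0), (0.6, 0.8)]:
    r = rho(v1, v2)
    print(f"|v|={np.hypot(v1, v2):.1f}  Tr={np.trace(r):.1f}  "
          f"Tr(rho^2)={purity(r):.3f}  S={entropy(r):.3f}")
# |v|=0.0 -> Tr(rho^2)=0.5, S=log 2 ~ 0.693 (maximally mixed, achromatic state)
# |v|=1.0 -> Tr(rho^2)=1.0, S=0       (pure chromatic states, on the border S^1)
```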
By using the isomorphism defined in (14), we can obtain the density matrix associated with the maximal von Neumann entropy in the case A = R ⊕ R², obtaining:

s₀ = ½ (1, 0, 0).

The von Neumann entropy of a generic quantum chromatic state can be computed easily if we express the parameters (v₁, v₂) ∈ D of a density matrix in the polar form (r cos ϑ, r sin ϑ), with r ∈ [0, 1], ϑ ∈ [0, 2π), the most natural parameterization of the disk D, obtaining:

ρ(r, ϑ) = ½ [[1 + r cos ϑ, r sin ϑ], [r sin ϑ, 1 − r cos ϑ]].

Thanks to Formula (28), by direct computation we get that the von Neumann entropy of a quantum state described by the density matrix ρ(r, ϑ) can be written as follows:

S(ρ(r, ϑ)) = −((1 + r)/2) log((1 + r)/2) − ((1 − r)/2) log((1 − r)/2).

S(ρ(r, ϑ)) is a radial concave bijective function on [0, 1]; its maximum and minimum values are log(2) and 0, reached at r = 0 and r = 1, respectively. Since r = 0 identifies the maximal von Neumann entropy of a quantum chromatic state and r = 1 identifies pure quantum chromatic states, it seems reasonable to associate the von Neumann entropy with the saturation of a perceptual color: when r = 0 we have the minimal chromatic information available, i.e., r = 0 describes achromatic colors; when r = 1, instead, we have the maximal chromatic information, i.e., r = 1 describes fully saturated colors, and this holds for all values of the angle ϑ ∈ [0, 2π). This implies that it also seems reasonable to associate the density matrices with r = 1 to quantum chromatic states of 'pure hue'. The easiest way to obtain a saturation formula from the von Neumann entropy that associates to achromatic colors the value 0 and to pure hues the value 1 is the following:

σ(r) := 1 − S(ρ(r, ϑ))/log 2.

The graph of σ(r) can be seen in Figure 3. Its convex behavior, with a small slope near r = 0, a linear behavior near r = 1/2 and a large slope near r = 1, seems to fit well with common perception. The definition of saturation is probably the most elusive among color attributes, thus we are very interested in conducting careful tests in collaboration with psycho-physicists in order to validate or improve this definition of saturation, recalling once more that we are modeling color perception in very restrictive conditions and not in natural scenes.

Hering's Chromatic Opponency and Its Role in the Encoding of Visual Signals

One of the most important results of [8] is a mathematical explanation of Hering opponency [47], i.e., of the fact that no color is simultaneously perceived as reddish and greenish, or as yellowish and bluish, and, even more importantly, of the fact that the encoding of light signals performed by the human visual system, i.e., the superposition of achromatic plus chromatically opponent information performed mainly by ganglion cells, see, e.g., [48], can be intrinsically described by the mathematical framework of the quantum theory of color perception. This is very important because, to the authors' knowledge, only an a posteriori explanation of this physiological behavior based on natural image statistics was available, see, e.g., [49-51] and the references therein. In order to obtain the results claimed above it is necessary to introduce the two Pauli-like matrices σ₁, σ₂ given by

σ₁ = [[1, 0], [0, −1]], σ₂ = [[0, 1], [1, 0]],

and to notice that the set (σ₀, σ₁, σ₂), where σ₀ = I₂, is a basis for H(2, R).
By direct computation, the generic density matrix of H(2, R) can be obtained from this basis as follows:

ρ(v₁, v₂) = ½ (σ₀ + v₁ σ₁ + v₂ σ₂),

or, in polar coordinates,

ρ(r, ϑ) = ½ (σ₀ + r cos ϑ σ₁ + r sin ϑ σ₂).

Let us express σ₁ and σ₂ in terms of suitable density matrices by considering the following pure state density matrices corresponding to noticeable values of the angle ϑ: ρ(1, 0), ρ(1, π/2), ρ(1, π) and ρ(1, 3π/2). We have:

σ₁ = ρ(1, 0) − ρ(1, π), σ₂ = ρ(1, π/2) − ρ(1, 3π/2);

by introducing these expressions in Equation (41) we arrive at the formula

ρ(r, ϑ) = ρ₀ + (r cos ϑ)/2 [ρ(1, 0) − ρ(1, π)] + (r sin ϑ)/2 [ρ(1, π/2) − ρ(1, 3π/2)],

which implies that the generic quantum chromatic state represented by a density matrix ρ(r, ϑ), with (r cos ϑ, r sin ϑ) ∈ D, can be seen as the superposition of:
• The maximal von Neumann entropy state ρ₀, which represents the achromatic state;
• Two pairs of diametrically opposed pure hues.
This purely theoretical result can be connected to Hering's theory [47] by identifying the pairs of pure hues with red vs. green and yellow vs. blue, or to the neural coding theory of de Valois [52] by identifying them with pinkish-red vs. cyan and violet vs. greenish-yellow. Regardless of the particular identification, the important fact to underline is that our framework allows us to represent intrinsically the retinal encoding of visual signals without the need for any a posteriori analysis or manipulation. We note also that if we sum all the density matrices listed in Equation (42) we get 4ρ₀, so

ρ₀ = ¼ [ρ(1, 0) + ρ(1, π/2) + ρ(1, π) + ρ(1, 3π/2)],

i.e., the achromatic state ρ₀ is the mixed state obtained by a convex combination of pure chromatic states in which each one of them appears with the same probability coefficient. This fact further supports the interpretation of ρ₀ as the achromatic state. The expected values of the Pauli-like matrices σ₁ and σ₂ on the chromatic state represented by ρ(r, ϑ) carry very important information. To compute them, we can use the formula

⟨σᵢ⟩_ρ = Tr(ρ ∘ σᵢ), i = 1, 2,

which, by direct computation, gives ⟨σ₁⟩_{ρ(r,ϑ)} = r cos ϑ and ⟨σ₂⟩_{ρ(r,ϑ)} = r sin ϑ. Since the cosine and the sine are the projections of the unit vector of the disk D onto the horizontal and vertical axes, respectively, the interpretation of the previous results is immediate: the expected value of σ₁ (resp. σ₂) is the degree of opposition between the two pure color states that lie at the extreme points of the horizontal (resp. vertical) segment [−1, 1]. Figure 4 gives a graphical representation of what was just stated. It can be interpreted as a mathematically rigorous quantum version of Newton's circle. In the model that we have described, Hering's observation about the fact that a color cannot be perceived simultaneously as greenish and reddish, or as bluish and yellowish, is an immediate consequence of the fact that the values of cos ϑ and sin ϑ cannot be simultaneously positive and negative. Each pure state associated with a hue can be represented by the density matrix ρ(1, ϑ) or, equivalently, by the point (cos ϑ, sin ϑ) ∈ S¹. This implies that, in this model of color perception, each hue is uniquely associated with a pair of numbers belonging to [−1, 1] representing the degrees of opposition red-green and blue-yellow of that particular hue.

Uncertainty Relations for Chromatic Opponency

As we have previously said, the essence of a quantum theory is the existence of uncertainty relations for some observables. In this subsection we are going to derive such uncertainty relations for a generic couple of observables evaluated on arbitrary states, and then we will apply our result to the Pauli-like matrices to show uncertainty in the measurement of the degrees of chromatic opponency described in the previous subsection.
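Before turning to those relations, the opponency decomposition of the previous subsection can be checked numerically. The sketch below (with the Pauli-like convention assumed above) verifies that ρ₀ is the uniform mixture of the four 'noticeable' pure hues and that the expected values of σ₁ and σ₂ recover the opponency coordinates:

```python
import numpy as np

s0 = np.eye(2)
s1 = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli-like basis, convention assumed above
s2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def rho(r, theta):
    return 0.5 * (s0 + r * np.cos(theta) * s1 + r * np.sin(theta) * s2)

# rho_0 as the uniform mixture of the four pure hues at theta = 0, pi/2, pi, 3pi/2:
hues = [rho(1.0, k * np.pi / 2) for k in range(4)]
print(np.allclose(sum(hues) / 4, rho(0.0, 0.0)))         # True: achromatic state

# Expected values <sigma_i> = Tr(rho o sigma_i) reproduce the opponency coordinates:
r, theta = 0.7, 1.1
R = rho(r, theta)
print(np.isclose(np.trace(R @ s1), r * np.cos(theta)))   # True: degree of R-G opposition
print(np.isclose(np.trace(R @ s2), r * np.sin(theta)))   # True: degree of Y-B opposition
```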
In order to obtain these relations, we will not make use of Heisenberg's or Weyl's original arguments, see [38,53], respectively; instead, we will use Robertson's and, in particular, Schrödinger's refinements, see [54,55], respectively. Evaluated on the state ρ(r, ϑ), the Schrödinger-type bound for the pair σ₁, σ₂ takes the form of Formula (60), whose right-hand side is proportional to r⁴ sin²(2ϑ). If we identify the opponent axes with red vs. green (R-G) and yellow vs. blue (Y-B), then the interpretation of Formula (60) is the following:
• The only value of r that nullifies the right-hand side of (60) is r = 0, which corresponds to an achromatic color state, thus no uncertainty about the degrees of opposition R-G and Y-B is present in this case. This fact is coherent with both our physiological and perceptual knowledge of color perception: a perceived achromatic color is characterized by an 'equal amount of chromatic opponencies';
• The only values of ϑ that nullify the right-hand side of (60) are ϑ = 0, π/2, π, 3π/2, which identify the two opposition axes R-G and Y-B. Again, this seems coherent with common knowledge: suppose that we want to determine the couple of opponencies R-G and Y-B to match, say, a color perceived as red but with a non-maximal saturation (measured as a function of the von Neumann entropy of its chromatic state). Due to the redness of the percept, we will always set an equal opposition on the axis Y-B, which will not influence the search for the correct opposition R-G. Thus, the determination of the two chromatic oppositions will be compatible;
• Instead, for r ∈ (0, 1] and ϑ ∈ [0, 2π) \ {0, π/2, π, 3π/2}, there will be a lower bound strictly greater than 0 for the product of the quadratic dispersions of σ₁ and σ₂ on the state defined by ρ(r, ϑ). Moreover, this lower bound is a non-linear function of the variables (r, ϑ) and is maximal for pure hues, r = 1, halfway in between G and B, B and R, R and Y, and Y and G, i.e., ϑ = π/4 + kπ/2, k = 0, 1, 2, 3. If this interpretation is correct, then trying to adjust the R-G opposition to match, say, a color perceived as orange should introduce a 'perceptual disturbance' on the adjustment of the opposition Y-B.
Even if the implications of the uncertainty relations discussed above seem coherent with common perception, they remain purely theoretical at the moment, thus they need to be validated by accurate experiments. If the validation turns out to be faithful to the predictions of Formula (60), then this will provide a further firm confirmation of the non-classical nature of color perception.

Geometry and Metrics of Quantum Chromatic States

In this section we will deal with the geometry and metrics of the perceived color space and of quantum chromatic states, following [8]. We start by noticing that every matrix belonging to H⁺₁(2, R) = {X ∈ H⁺(2, R), det(X) = 1} can be written as

X = [[α + v₁, v₂], [v₂, α − v₁]],

with α > 0, to guarantee the positive-definiteness, and v = (v₁, v₂) ∈ R² satisfying α² − ‖v‖² = 1, to guarantee that det(X) = 1. Thanks to the isomorphism (14), H⁺₁(2, R) is in one-to-one correspondence with the level set of the future lightcone L⁺ given by

L₁ := {(α, v) ∈ L⁺ : α² − ‖v‖² = 1}.

As proven for instance in [56], the projection onto the plane in R ⊕ R² identified by the condition α = 0, i.e.,

π : L₁ → D, (α, v) ↦ w = v/(1 + α),

is an isometry between L₁ and the Poincaré disk D.
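A short numerical illustration of this isometry, under the projection w = v/(1 + α) assumed in the reconstruction above, compares the hyperboloid distance (computed with the Minkowski form) with the Poincaré-disk distance; the helper functions are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

def lift(v):
    """Lift v in R^2 to the hyperboloid L1 = {alpha^2 - |v|^2 = 1, alpha > 0}."""
    return np.sqrt(1 + v @ v), v

def to_poincare(alpha, v):
    return v / (1 + alpha)          # projection assumed above

def dist_L1(p, q):
    (a1, v1), (a2, v2) = p, q       # Minkowski form: cosh d = a1*a2 - <v1, v2>
    return np.arccosh(a1 * a2 - v1 @ v2)

def dist_poincare(w1, w2):
    num = 2 * np.sum((w1 - w2) ** 2)
    den = (1 - w1 @ w1) * (1 - w2 @ w2)
    return np.arccosh(1 + num / den)

p, q = lift(rng.normal(size=2)), lift(rng.normal(size=2))
w1, w2 = to_poincare(*p), to_poincare(*q)
print(np.isclose(dist_L1(p, q), dist_poincare(w1, w2)))   # True: the projection is an isometry
```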
The matrix X can be expressed in the w-parametrization, with of course ‖w‖ < 1, and, when written like that, every X ∈ H⁺₁(2, R) satisfies Equation (65), which equates the Rao-Siegel metric on H⁺₁(2, R) with ds²_D, the Riemannian metric of the Poincaré disk [8]. Thanks to the decomposition H⁺(2, R) = R⁺ × H⁺₁(2, R), we have that H⁺(2, R), the positive cone of H(2, R), is foliated with leaves isometric to the Poincaré disk. We recall that the left-hand side of Equation (65) is the Rao-Siegel metric encountered in Section 2.2 when we discussed the Resnikoff model, with the difference that Resnikoff applied it to the whole cone H⁺(2, R) and not to the level set H⁺₁(2, R). This metric has been used also in [57,58] in the context of Commission Internationale de l'Éclairage (CIE) colorimetry. This description is not the best suited to the metric purposes of the quantum theory of color perception because it does not take into account the role of density matrices. We are going to show that the correct way to deal with this issue is to consider an alternative description based on the spin factor R ⊕ R² and its positive cone L⁺. Again in [56] we can find a classic result of hyperbolic geometry which says that the projection

π₁ : L₁ → K, (α, v) ↦ x, with xᵢ = vᵢ/α, i = 1, 2,

is an isometry between the level set L₁ and the Klein disk K, whose Riemannian metric is given by:

ds²_K = (dx₁² + dx₂²)/(1 − ‖x‖²) + (x₁ dx₁ + x₂ dx₂)²/(1 − ‖x‖²)².

Moreover, the Klein and Poincaré disks, K and D, are isometric via the map defined by:

K → D, x ↦ w = x/(1 + √(1 − ‖x‖²)).

It is possible to verify that L⁺, the positive cone of R ⊕ R², is foliated by the level sets α = constant > 0, with leaves isometric to the Klein disk K. The leaf associated with α = 1/2 is particularly important. The reason is easy to understand: we have seen in (33) that the state density matrices ρ(v₁, v₂) can be identified with the elements s_v = ½(1 + v) = ½ + v/2, with ‖v‖ ≤ 1, of the spin factor R ⊕ R². If we set x̄ = v/2, then the projection

π_{1/2} : s_v ↦ x̄

maps the states onto the disk of radius 1/2, which implies that, if we define the Klein disk of radius 1/2 as

K_{1/2} := {x̄ ∈ R² : ‖x̄‖ ≤ 1/2},

then the map

φ : K → K_{1/2}, x ↦ x̄ = x/2,

is an isometry between K and K_{1/2}, where the Riemannian metric on the latter is given by the corresponding rescaling of ds²_K. We recall that the geodesics of the Klein disk are extremely simple, being straight line segments, i.e., the chords inside the disk, contrary to those of the Poincaré disk, which are the diameters and the arcs of circles contained in the disk and orthogonal to its boundary. Moreover, the Klein distance on K_{1/2} coincides with the Hilbert distance, defined as follows: let p and q be two interior points of the disk and let r and s be the two points of the boundary of the disk such that the segment [r, s] contains the segment [p, q]. The Hilbert distance between p and q is defined by:

d_H(p, q) := ½ log [r, p, q, s], where [r, p, q, s] = (‖r − q‖ ‖p − s‖)/(‖r − p‖ ‖q − s‖)

is the cross-ratio of the four points r, p, q and s; for the proof see, e.g., [59]. Without entering into details that would take too much space, we underline the importance of the Hilbert distance by noting that, in [11], this distance is shown to be intimately related to the relativistic Einstein-Poincaré addition law of the so-called perceptual chromatic vectors, which makes it possible to link color perception to the formalism of special relativity in a rigorous way, bypassing the heuristic analysis of Yilmaz recalled in Section 2.1.

Discussion

We have shown how Resnikoff's idea to abandon metameric classes of spectra and study the color space solely through its algebraic properties can be further refined by exploiting the properties of formally real Jordan algebras.
This leads to a real quantum theory of color vision that permits a rigorous definition of colorimetric attributes and an understanding of chromatic opponency by means of quantum features. In this paper, we have given a theoretical confirmation of the quantum nature of color perception by proving the existence of uncertainty relations satisfied by the chromatic opponencies. The birth of quantum mechanics is related to the theoretical analysis of very simple experiments, such as the observation of interference patterns in polarized light or spin measurements by a pair of Stern-Gerlach devices. Analogously, this novel theory of color perception is developed from mathematical properties of vision gathered from experiments in extremely simple conditions. In the same way as quantum mechanics evolved into the much more complicated quantum-relativistic gauge field theory, we expect that a highly non-trivial extension of the model recalled in this paper will be needed to understand color perception in more realistic conditions. For example, to deal with contextual effects such as chromatic induction, one may try to replace the closed quantum system described here with open ones, as suggested in [8], or to extend the model to a field theory of color perception through bundles and connections, as suggested in [60,61]. Another issue that we consider very intriguing is the deep comprehension of the concept of color measurement, which is intimately related to the definition of perceived color. As suggested in [8], the use of the so-called effects, in relation to generalized quantum observables and unsharp measurements, may play a significant role in this analysis. Furthermore, the extension of our theory to encompass cognitive phenomena, as described in [62], seems to be a quite natural issue to explore. On the geometrical side, it is quite remarkable that the set of quantum chromatic states can be embedded in the closure of the future lightcone L⁺ via the projective transformation (70), where it is identified with the Klein disk equipped with the Hilbert metric. This mathematical richness and clarity stands out even more if we compare it with the construction of the CIE xy chromaticity diagram, which is also built through a projective transformation of the CIE XYZ coordinates, namely x = X/(X + Y + Z), y = Y/(X + Y + Z) and z = Z/(X + Y + Z). The flag-like shape of the CIE chromaticity diagram is far from having the regularity of a disk; moreover, it must be artificially closed with the purple line and, most importantly, it does not come naturally equipped with any metric. This last fact generated a lot of confusion when the well-known MacAdam ellipses [63] were discovered and, eventually, when the 'uniform' color spaces such as CIELab, CIELuv, etc. were built, the choice of metric fell, arbitrarily, on the Euclidean distance. To cope with the non-Euclidean behavior of perceptual chromatic attributes, the coordinates of these spaces had to be artificially adjusted to fit the data with ad hoc transformations and parameters. A firm aim of our future work is to avoid this kind of heuristic procedure by using only the minimal and most natural mathematical tools to extend the theory that we have exposed. As an example, it seems natural to investigate the quantum counterpart of the MacAdam ellipses through the analysis of the uncertainty relations that we have proved to be satisfied by the observables of our theory. The discovery of such features would be perfectly coherent with the quantum-like behavior of color perception.
Author Contributions: The authors contributed equally to this paper. All authors have read and agreed to the published version of the manuscript. Acknowledgments: The authors would like to thank Nicoletta Prencipe for very fruitful discussions regarding the analytical expression of the von Neumann entropy and uncertainty relations. Conflicts of Interest: The authors declare no conflict of interest.
13,256.6
2021-02-01T00:00:00.000
[ "Physics" ]
Supplementation of Culture Media with Lysophosphatidic Acid Improves The Follicular Development of Human Ovarian Tissue after Xenotransplantation into The Back Muscle of γ-Irradiated Mice

Objective
The aim of the present study was to evaluate the effects of lysophosphatidic acid (LPA) supplementation of human ovarian tissue culture media on tissue survival, follicular development and expression of apoptotic genes following xenotransplantation.

Materials and Methods
In this experimental study, human ovarian tissue was collected from eight normal female-to-male transsexual individuals and cut into small fragments. These fragments were vitrified-warmed and cultured for 24 hours in the presence or absence of LPA, then xenografted into the back muscles of γ-irradiated mice. Two weeks post-transplantation, the morphology of the recovered tissues was evaluated by hematoxylin and eosin staining. The expression of genes related to apoptosis (BAX and BCL2) was analyzed by real-time reverse transcription polymerase chain reaction (RT-PCR), and detection of BAX protein was done by immunohistochemical staining.

Results
The percentages of normal and growing follicles were significantly increased in both grafted groups in comparison with the non-grafted groups; however, these rates were higher in the LPA-treated group than in the non-treated group (P<0.05). There was a higher expression of the anti-apoptotic gene, BCL2, but a lower expression of the pro-apoptotic gene, BAX, and a significantly lower BAX/BCL2 ratio in the LPA-treated group in comparison with the non-treated control group (P<0.05). No BAX-immunopositive cells were observed in the follicles and oocytes in either transplanted ovarian group.

Conclusion
Supplementation of human ovarian tissue culture medium with LPA improves follicular survival and development by promoting an anti-apoptotic balance in the transcription of the BCL2 and BAX genes.

Introduction
Fertility preservation by using cryopreserved ovarian tissue is critical for patients who are subjected to chemotherapy and radiotherapy or suffer from premature ovarian failure and autoimmune problems (1). Ovarian tissue is cryopreserved by two techniques, slow freezing and vitrification. Based on the literature, vitrification may be more effective than slow freezing, owing to less DNA damage in primordial follicles and better preservation of stromal cells (2). In vitro culture (IVC) followed by transplantation of cortical ovarian tissue is a potential technique to develop and grow the follicles after cryopreservation. The results obtained from these techniques in human ovarian tissue are very controversial, due to the large size of the fragments, the dense ovarian stroma and the long folliculogenesis period (3, 4). Apoptosis, which is induced by oxidative stress or physical and chemical triggers during IVC and transplantation of cryopreserved tissues, affects the quality, growth, survival and development of ovarian follicles (5-9). The use of appropriate growth factors, antioxidants and anti-apoptotic factors improves the quality of the tissue during both IVC and the grafting procedure (10-13). Lysophosphatidic acid (LPA) is a bioactive phospholipid that is present in all tissues and plays roles in several cell activities such as proliferation, differentiation and migration (14, 15). In the ovaries and uterus, LPA signaling is involved in early embryo development and in the preparation of the endometrium for embryo-maternal interactions (14, 16-21).
LPA and its active receptors have been reported to be expressed in the uterus, ovaries and placenta (15, 16). Recent studies on several mammalian species showed that LPA exerts its function through interactions with its receptors LPAR1-6 (16-22). Out of the six LPA receptors, LPAR4 is highly expressed in the cortex of human ovaries, and LPAR1-3 are detected in human granulosa-lutein cells (15). In addition, the effects of LPA as an anti-apoptotic factor on several cell types have been suggested in the literature (17, 21-23). However, there is little information regarding its effects on cell quality during IVC of human ovarian tissue. Therefore, the aim of the present study was to evaluate the effects of supplementation of human ovarian tissue culture media with LPA on tissue survival and follicular development after xenotransplantation, using morphological and immunohistochemical techniques as well as analysis of the expression of apoptosis-related genes.

Materials and Methods
All reagents used in the following experiments were obtained from Sigma-Aldrich (Germany), unless stated otherwise.

Ovarian tissue collection
In this experimental study, the human ovarian tissues were collected from 8 normal transsexual (female to male) individuals aged 18-35 years (median 26.1). The tissues were obtained following laparoscopic surgery, with the approval of the Ethics Committee of the Faculty of Medical Science of Tarbiat Modares University (Ref. No. 52/883). The ovarian cortical tissues were cut into approximately 2×1×1 mm pieces under sterile conditions (n=130). These fragments were vitrified-warmed, and all assessments in this study were performed on these samples. All samples were cryopreserved and stored in liquid nitrogen until they were used.

Experimental design
This experimental study was designed to assess the effect of LPA on human ovarian tissue morphology and the expression of some apoptosis-related genes after xenotransplantation. After vitrification and warming of the ovarian fragments, the tissues were cultured for 24 hours in the presence or absence of LPA, then xenotransplanted into the gluteus maximus muscles of γ-irradiated female NMRI mice. Before and after transplantation, tissue morphology and follicle counts were assessed by hematoxylin and eosin (H&E) staining. Analysis of the expression of the apoptosis-related genes (BAX and BCL2) was performed by real-time reverse transcription polymerase chain reaction (RT-PCR). Immunohistochemical staining for BAX protein was also done on the recovered transplanted tissue.

Ovarian tissue vitrification and warming
The ovarian cortical fragments (n=125) were vitrified according to the protocol described previously, in a solution of ethylene glycol, Ficoll and sucrose named EFS40% (6). The human ovarian tissues were equilibrated in three changes of vitrification solutions, then put into cryovials and stored in liquid nitrogen. For warming, the ovarian tissues were rehydrated in serially diluted sucrose solutions (1, 0.5 and 0.25 M in phosphate buffer) and equilibrated with culture media for 30 minutes before the following assessments. Some ovarian fragments (n=5) were fixed in Bouin's solution for evaluation of normal morphology after warming, and the other fragments were used for in vitro culturing (n=120).
Ovarian tissue culture
Vitrified-warmed tissue fragments were cultured (n=120 fragments in total) in multi-well culture plates containing 300 μl/well of α-MEM supplemented with 5 mg/ml human serum albumin (HSA), 0.1 mg/ml penicillin G, 0.1 mg/ml streptomycin, 10 μg/ml insulin-transferrin-selenium and 0.5 IU/ml human recombinant follicle stimulating hormone, with or without 20 μM LPA, at 37˚C in a humidified chamber with 5% CO2 (24). Some of these cultured ovarian fragments were used for histological evaluation, follicle counting and molecular analysis before transplantation (n=30 for each group), and the others were transplanted into γ-irradiated mice (n=30 for each group).

γ-irradiated mice preparation and xenotransplantation of human ovarian tissue
The 8-week-old female NMRI mice (n=30 mice for each group) were each given a single dose of 7.5 Gy whole-body γ-irradiation for 6 minutes (Theratron 780C, Canada). For human ovarian tissue transplantation, the mice were anesthetized by an intra-peritoneal injection of ketamine 10% (75 mg/kg body weight) and xylazine 2% (15 mg/kg), and their back muscles were bilaterally exposed (25). Each tissue fragment, derived from either the LPA-treated or the non-treated group, was individually inserted and stitched within each muscle (two ovarian fragments for each mouse), and the wound was sutured. The mice were sacrificed 14 days after transplantation, and the recovered tissues were randomly fixed for histological and immunohistochemical analyses (n=15 tissue fragments for each group) or kept at −80˚C for molecular studies (n=15 tissue fragments in each group, for triplicates).

Histological evaluation
For the light microscopic study, the fresh (n=5 fragments), vitrified-warmed (n=5 fragments), and LPA-treated and non-treated human ovarian fragments before (n=15 fragments for each group) and after transplantation (n=15 fragments for each group) were fixed in Bouin's solution and embedded in paraffin wax. Tissue sections were prepared serially at 5 μm thickness, and every 10th section was stained with H&E and observed under a light microscope (approximately 15-20 sections per fragment). Another set of tissue sections was prepared for immunohistochemistry. The follicle classification criteria were as follows: those containing an intact oocyte as well as granulosa cells (normal), those containing pyknotic oocyte nuclei or disorganized granulosa cells (degenerated), those containing only a single layer of flattened granulosa cells (primordial), those with cuboidal granulosa cells in a single layer (primary), and finally those with two or more layers of granulosa cells (growing follicles).

Immunohistochemical staining for BAX
The expression of the pro-apoptotic protein BAX in transplanted LPA-treated and non-treated ovarian tissue was confirmed by immunohistochemistry. After paraffin removal, antigen retrieval was performed by boiling the tissue sections in citrate buffer (10 mM, pH=6) in a microwave oven for 10 minutes at 700 W. They were then cooled at room temperature and washed in phosphate buffered saline (PBS). The tissue sections were separately incubated with a rabbit polyclonal immunoglobulin G (IgG) anti-BAX antibody (SC-493, 1:100) (Santa Cruz Biotechnology, UK) at 4˚C overnight, then washed three times in PBS. They were then incubated with a secondary goat anti-rabbit IgG antibody conjugated with fluorescein isothiocyanate (FITC) (Ab 6721, 1:100, Abcam, UK), diluted in PBS, for 2 hours at 37˚C.
Tissue sections from adult mouse ovaries were used as positive controls and were stained according to the same protocol. The samples were analyzed under a fluorescence microscope (Zeiss, Germany).

RNA extraction and cDNA synthesis
Total RNA was extracted from LPA-treated and non-treated ovarian tissue fragments before (n=15 in each group, in three repeats) and after (n=15 in each group, in three repeats) grafting, using Trizol reagent (Invitrogen, UK) according to the manufacturer's protocol. The RNA samples were treated with DNase prior to proceeding with the cDNA synthesis. RNA concentration was measured by spectrophotometry. The cDNA synthesis was performed using a commercial kit (Thermo Scientific, EU) at 42˚C for 60 minutes, and the reaction was terminated by heating the samples at 70˚C for 5 minutes. The obtained cDNA was stored at −80˚C until utilized.

Real-time reverse transcription polymerase chain reaction
Primers for the apoptosis-related genes, BAX and BCL2, and the housekeeping gene β-actin (Table 1) were designed using the online Primer3 software. One-step RT-PCR was performed on the Applied Biosystems (UK) real-time thermal cycler according to the QuantiTect SYBR Green RT-PCR kit (Applied Biosystems, UK, Lot No: 1201416). Real-time thermal cycling conditions were set up as follows: a holding step at 95˚C for 5 minutes; a cycling step at 95˚C for 15 seconds and 60˚C for 30 seconds; continued by a melt curve step at 95˚C for 15 seconds, 60˚C for 1 minute, and 95˚C for 15 seconds. Then, relative quantification of the target genes against the housekeeping gene (β-actin) was determined by the Pfaffl method (a brief worked sketch of this calculation is given below, after the first results). A no-template negative control sample was included in each run. These experiments were repeated at least three times.

Statistical analysis
All experiments were repeated in triplicate. All data were presented as mean ± SD and were analyzed using one-way ANOVA and the post hoc Duncan's Multiple Range Test. Statistical analysis was performed with SPSS 19.0 (Chicago, IL, USA). P<0.05 was considered statistically significant.

Histological observation
The normal morphology of human ovarian tissue after vitrification-warming, in comparison with a fresh sample, is presented in Figure 1A and B. As shown in this figure, the structure of the tissue is well preserved after cryopreservation, and no significant damage is seen in the follicles or stromal cells. The light microscopic observations of LPA-treated and non-treated human ovarian fragments after 24 hours of IVC are illustrated in Figure 2A-D. The normal morphology of the growing follicles with central oocytes is seen, and the oocytes are in close contact with the surrounding granulosa cells. Two weeks after grafting, primordial, primary and growing follicles are detected in the tissue sections (Fig.2E-H); however, detachment between the oocyte and granulosa cells is observed in some follicles in the non-treated grafted ovarian sections (Fig.2F).

The percentage of normal follicles in the study groups
The proportion of the follicles at different developmental stages in our study groups is summarized in Table 2. After 24 hours of culture of the ovarian fragments, the percentage of normal follicles in the LPA-treated group was significantly higher than that in the non-treated group [88.01 ± 2.62% vs. 81.72 ± 2.31% (P<0.05)]. Moreover, 14 days after transplantation, 91.62 ± 0.70% of the follicles in the LPA-treated group presented normal morphology, which was significantly higher (P<0.05) than in the non-treated group (87.97 ± 1.61%).
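As promised in the molecular methods above, here is a brief worked sketch of the Pfaffl relative-quantification calculation, which combines the amplification efficiencies (E) and the crossing-point shifts (ΔCt = Ct_control − Ct_sample) of the target and reference genes. The efficiencies and Ct values below are hypothetical placeholders, not data from this study:

```python
def pfaffl_ratio(e_target, dct_target, e_ref, dct_ref):
    """Relative expression ratio = E_target^dCt_target / E_ref^dCt_ref (Pfaffl method)."""
    return (e_target ** dct_target) / (e_ref ** dct_ref)

# Hypothetical example: BAX vs. beta-actin, LPA-treated sample against non-treated control.
ratio = pfaffl_ratio(e_target=1.95, dct_target=24.1 - 25.3,   # target amplifies later -> down-regulated
                     e_ref=2.00, dct_ref=18.0 - 18.1)
print(f"relative BAX expression: {ratio:.2f}")   # < 1, i.e., lower than in the control
```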
The percentage of follicles at different developmental stages in the study groups
The proportions of the follicles at different developmental stages in all experimental groups are compared and presented in Table 2. The percentages of primordial follicles in the non-treated and LPA-treated groups prior to transplantation are 41.78 ± 4.61% and 42.49 ± 1.13%, respectively. Following transplantation, these values decline significantly in the non-treated and treated groups to 30.46 ± 6.86 and 21.17 ± 6.01, respectively (P<0.05). However, this post-transplantation percentage is significantly lower in the LPA-treated group compared with the non-treated group (P<0.05). There is no significant difference between the percentages of primary follicles in the two study groups (Table 2). The total percentages of growing follicles in the LPA-treated group and the non-treated group prior to transplantation are 20.17 ± 2.39 and 19.49 ± 1.65, and increase after transplantation to 40.95 ± 2.11 and 29.44 ± 1.39, respectively (P<0.05). This increase is significantly greater in the LPA-treated group compared with the non-treated group (P<0.05, Table 2).

Immunohistochemistry
Representative images of BAX immunohistochemistry in both transplanted ovarian tissues and in the positive control tissue section are shown in Figure 3A-C. In spite of the presence of several BAX-positive cells (white arrow) in the follicular and stromal cells of the adult mouse ovarian tissue used as positive control (Fig.3C), no positive labeling for BAX was observed in the follicles and oocytes of either transplanted ovarian group.

Expression of apoptosis-related genes in the studied groups
The expression ratios of the BAX and BCL2 genes to the housekeeping gene (β-actin) in both study groups are shown in Figure 4. Our results indicate that the expression ratio of the BAX gene in the LPA-treated group is significantly lower than that in the non-treated group (P<0.05) both before and after grafting. Nonetheless, the level of BCL2 gene expression is significantly higher in the LPA-treated group compared with the non-treated ovarian tissue (P<0.05) before grafting (Fig.4A, B); the same result was obtained after grafting (P<0.05). Also, the ratio of BAX to BCL2 expression in the LPA-treated group is significantly lower than that in the non-treated ovarian tissue (P<0.05, Fig.4C).

Table 2 footnotes: LPA; lysophosphatidic acid, F; follicle. a; significant difference with the cultured-LPA− (without LPA) group in the same column (P<0.05), b; significant difference with the cultured-LPA+ (with LPA) group in the same column (P<0.05), and c; significant difference with the cultured-grafted-LPA− (without LPA) group in the same column (P<0.05). The number of follicles at different developmental stages was calculated according to the total number of normal follicles. Data are presented as mean ± SD (%).

Discussion
This is, to our knowledge, the first report to evaluate the effect of LPA on the improvement of the development and survival of human ovarian follicles after IVC and transplantation of ovarian tissue. Our morphological observations indicate an enhancement in the rate of normal follicles and a decrease in the percentage of degenerated follicles in the LPA-treated group in comparison with the non-treated group. This result shows the beneficial effects of LPA on the preservation of the follicles within human ovarian tissue during the IVC period and following transplantation.
These effects of LPA may be related to its function as an anti-apoptotic factor (17, 21-23). Apoptosis takes place within ovarian cells through two main pathways: the activation of caspase 8, and the mitochondrial pathway, which is controlled by BAX and BCL2 as regulatory proteins (26-28). In agreement with our morphological analysis, immunohistochemical staining showed a very low number of BAX-positive cells in the transplanted groups, especially in the LPA-treated group. According to the literature, the anti-apoptotic effects of LPA on oocytes, granulosa cells, ovarian cancer cells and the corpus luteum are documented (17, 21, 23, 27). In the study by Rapizzi et al. (29) it was shown that LPA induced migration and survival in the cervical cancer cell line HeLa. Similarly, in the bovine corpus luteum, it was demonstrated that LPA inhibited the expression of BAX, therefore contributing to the survival of the cells (23). Sinderewicz et al. (19) and Boruszewska et al. (30) demonstrated that, in healthy bovine follicles, LPA interacts with estradiol to stimulate the anti-apoptotic processes of granulosa cells. In addition, the molecular analysis in the present study revealed a significantly higher expression of BCL2 and a lower expression of BAX in the LPA-treated group in comparison with the non-treated group. Moreover, we found a significantly lower BAX/BCL2 ratio in the LPA-treated group compared with the non-treated one. As BCL2 and BAX have been detected in granulosa cells, it has been suggested that follicular viability and development may depend on a low level of pro-apoptotic gene expression, which prevents cell death within ovarian tissue. In agreement with our observations, in the study by Zhang et al. (18) the authors showed that exposure of blastocyst culture media to LPA reduces the expression of pro-apoptotic genes, while increasing the expression of anti-apoptotic genes. Similar results were obtained by Boruszewska et al. (17) in their study on bovine oocytes. Our current data demonstrate that LPA could enhance follicular growth and development, as the culture media used in our study seem to support the activation and development of growing follicles. The growth of follicles depends mainly on the proliferation rate of the granulosa cells. It is proposed that LPA could be involved in the proliferation and growth of the follicles directly, via its receptors, or indirectly, by stimulation of some other factors (16-22). In agreement with these suggestions, it has been previously revealed that the mitogen-activated protein kinase (MAPK)/p38 and phosphoinositide 3-kinase (PI3K)/Akt pathways are involved in the mitogenic effects of LPA on ovarian, tumor and amniotic cells (31-33). Kim et al. (31) also found that LPA modulates cellular activity and stimulates proliferation of human amnion cells in vitro. These authors also proposed that the LPA produced in leiomyoma may be involved in tumor cell proliferation. With regard to another suggestion, well demonstrated previously by Boruszewska et al. (30), it is possible that LPA, alone or together with follicle stimulating hormone, induces estradiol (E2) secretion in in vitro cultures of bovine granulosa cells. Thus, the secretion of these hormones causes an increase in the expression of the follicle stimulating hormone receptor and 17β-hydroxysteroid dehydrogenase (17β-HSD) genes, which are involved in follicular growth and development.
Our results are in agreement with those reported by Abedpour et al. (22, 24), who stated that LPA can improve the developmental and maturational rates of the follicles in cultured mouse ovarian tissue. Related reports show that LPA plays a significant role in the activation of primordial follicles and improves the nuclear and cytoplasmic maturation of mouse oocytes via its receptors (33). In 2015, Zhang et al. (18) performed a similar study and demonstrated that LPA had beneficial effects on the cytoplasmic maturation of porcine oocytes. The work by Boruszewska et al. (17) also revealed that supplementation of bovine oocyte maturation media with LPA increased the expression of some oocyte developmental genes, such as the growth differentiation factor 9 (GDF9) and follistatin (FST) transcripts. Hwang et al. (34), by treating porcine oocytes with different concentrations of LPA during in vitro maturation, showed that 30 μM LPA promotes and enhances cumulus cell expansion and oocyte nuclear and cytoplasmic maturation, and reduces the intracellular reactive oxygen species level. In contrast to our current report, in our previous study we had grafted the vitrified human ovarian tissue directly, and the rate of normal follicles was significantly decreased in the vitrified grafted tissues. In the present study, however, the tissue was cultured for 24 hours prior to transplantation. It is suggested that during the time of cultivation, especially in the presence of LPA, the harmful effects of cryopreservation are reversed to some extent. In a published study, Rahimi et al. (35), using groups similar to ours, observed a higher incidence of apoptosis in grafted vitrified ovarian tissue samples without any supplementary factors added to the transplanted tissue. To prove this suggestion, additional assessments are needed. Moreover, our observations showed that the percentage of normal follicles was higher in both transplanted groups compared with their respective non-transplanted tissues at the end of the culture period. An explanation for this result is that, in spite of the degeneration of some follicles due to ischemia in the grafted tissue, these damaged follicles disappeared during the two weeks following engraftment. It seems that the total number of follicles may decline in each tissue section (this was not calculated), while we analyzed the ratio of normal follicles to the total number of counted ones.

Conclusion
Supplementation of human ovarian tissue culture media with LPA could improve follicular survival and development by promoting an anti-apoptotic balance in the transcription of the BCL2 and BAX genes, leading to increased cell survival.
4,858
2019-12-15T00:00:00.000
[ "Biology" ]
Mid-infrared interband cascade lasers operating at ambient temperatures

We discuss the state-of-the-art performance of interband cascade lasers emitting in the 3-5 μm spectral band. Broad-area devices with five active stages display pulsed threshold current densities as low as 400 A cm⁻² at room temperature. Auger decay rates are extracted from the analysis of the threshold current densities and differential slope efficiencies of nearly 30 lasers, and are found to be significantly lower than was anticipated based on prior information. New designs also produce ICLs with room-temperature internal losses as low as ≈6 cm⁻¹. The combination of these advances with improvements to the processing of narrow ridges has led to the fabrication of a 4.4-μm-wide ridge emitting at 3.7 μm that lased to 335 K in continuous mode. This is the highest continuous-wave (cw) operating temperature for any semiconductor laser in the 3.0-4.6 μm spectral range. A 10-μm-wide ridge with high-reflection and anti-reflection facet coatings produced up to 59 mW of cw power at 298 K, and displayed a maximum wall-plug efficiency of 3.4%.

Introduction
For many decades after the first near-infrared double heterostructure and quantum well (QW) diode lasers were demonstrated in the 1970s, no semiconductor devices emitting at longer wavelengths in the 3-5 µm midwave-infrared (mid-IR) spectral band were capable of continuous-wave (cw) operation at temperatures high enough to permit thermoelectric cooling (≈260 K). The first technology to reach the key room-temperature-cw milestone was the quantum cascade laser (QCL), which employs intersubband optical transitions in a staircase geometry that connects multiple active stages in series [1]. At λ ≈ 4.6 µm [2,3], strain-balanced InGaAs/InAlAs QCLs have achieved cw operation up to T_max^cw = 373 K [2]. However, shifting to shorter wavelengths becomes increasingly challenging for this material system, because the strain required to maintain a sufficient conduction-band offset becomes excessive. For λ < 4.6 µm, the highest reported cw operating temperature is T_max^cw ≈ 330 K, and λ = 3.8 µm is the shortest wavelength for which room-temperature cw lasing has been attained [4]. While the larger conduction-band offsets in some less mature QCL materials such as InAs/AlSb [5] and InGaAs/AlAs(Sb) [6] have allowed pulsed lasing to well above ambient, it is unclear whether high-temperature cw operation will follow, in view of the rather high thresholds observed to date. In the meantime, conventional type-I mid-IR QW lasers have also reached room-temperature cw operation, but only at wavelengths a little beyond 3.0 µm [7]-[9]. Insufficient hole confinement appears to be responsible for much of the increasing difficulty at longer λ [8]. With a much stronger electronic confinement of both carrier types, the type-II material system based on InAs electron wells and GaInSb hole wells is a natural interband candidate for mid-IR emission [10,11]. The interband cascade laser (ICL) [12]-[14], which was first proposed [15] in 1994 and is illustrated schematically in figure 1, represents an especially elegant hybrid of the conventional diode laser and the QCL. Even though the lasing transition in its type-II 'W' active region is an interband process, multiple stages can be cascaded by energetically aligning the conduction and valence states at the structure's other type-II interface, between the electron and hole injectors.
This allows electrons in the valence band to scatter elastically back to the conduction band for recycling into the next active stage. At longer wavelengths, where rapid Auger decay tends to increase the current requirements, ohmic contributions to the bias voltage are usually substantial. This makes the in-series connection of the active wells in a cascade architecture advantageous over the effectively parallel current flow in a conventional multiple-QW laser. The lower current is combined with a higher bias voltage, whose minimum value is the photon energy multiplied by the number of stages. While the first laboratory demonstration of an ICL occurred more than a decade ago [16], the practical milestone of ambient-temperature cw operation was surpassed only recently, when our group at the Naval Research Laboratory (NRL) obtained T_max^cw = 319 K at λ = 3.75 µm [17]. The early development was limited in part by material growth issues, coupled with an incomplete understanding of the relevant physical processes and design tradeoffs. These were compounded by the inadequacy of resources available to support ICL research (a very small fraction of those devoted to QCLs). Since NRL's first ICL growth by molecular beam epitaxy (MBE) in 2005, progress has resulted from systematic improvements of both the MBE quality and the level of understanding of the dominant optical and electronic mechanisms. While our advanced modeling capability allows simulations to be compared in detail with laboratory data, the theoretical guidance is often combined with empirical, trial-and-error variations of the relevant design parameters. This paper will summarize the current state of the art for ICL development at NRL. It will describe both previously published performance characteristics and very recent advances. Growth and processing The five-stage ICLs described in this work were grown on n-GaSb (100) substrates in a Riber Compact 21T MBE system, using methods described in [18]. In particular, no attempt was made to force the bond type at the interfaces. The active region designs were generally similar to those described in [19,20], except for somewhat different thicknesses and compositions of the GaInSb hole wells and other relatively minor alterations. The n-type doping of the InAs:Si/AlSb optical cladding layers was reduced from that in earlier 10-stage structures, in order to minimize internal losses. The first 1 µm of each cladding region next to the waveguide core was doped to 1.5 × 10^17 cm−3, while the rest was doped to 5 × 10^17 cm−3. Emission wavelengths were shifted by fixing the GaInSb hole well parameters while varying the thicknesses of the two active InAs electron wells that sandwich the GaInSb. The designs of samples commencing with T080828 were further altered with the specific goal of reducing the internal loss. In order to provide rapid feedback for wafer screening and design evaluation, standardized broad-area ridges of width 70–150 µm were produced by contact lithography and wet chemical etching for pulsed characterization. The etch proceeded to a GaSb separate-confinement layer (SCL) below the active region. The mask set introduced intentional lateral corrugation of the ridge sidewalls, in order to frustrate parasitic lasing modes that can otherwise achieve feedback following strong reflections from the straight and deep-etched ridge boundaries [21].
We found that lasing in those modes can artificially suppress the observed differential slope efficiency, since the out-coupling is weak and difficult to collect due to its large angle with respect to the cavity axis. For wide ridges, the corrugations confined to several microns of the sidewalls do not appear to appreciably increase the internal loss or lasing threshold of the desired Fabry-Perot modes. The broad-area devices were cleaved to a standard cavity length (L_cav) of 2 mm. Narrow ridges for cw measurements were also processed from a few of the wafers. The ridges of width 4–10 µm were fabricated by photolithography and reactive-ion etching (RIE) using a Cl-based inductively coupled plasma (ICP) process that again stopped at the GaSb SCL below the active QWs. The ridges were subsequently cleaned with a phosphoric-acid-based wet etch to minimize damage from the ICP RIE process. The wet etch was observed to induce some selective undercutting of the AlSb barriers in the active region, owing to preferential etching by the phosphoric acid. A 200-nm-thick Si3N4 dielectric layer was deposited by plasma-enhanced chemical vapor deposition, and a top contact window was etched back using SF6-based ICP. Next, 100 nm of SiO2 was sputtered to block occasional pinholes in the Si3N4. The ridges were metallized and electroplated with 5 µm of gold to improve the heat dissipation. Following the division into cavities guided by cleaving lanes in the electroplated gold, high-reflection (HR) or anti-reflection (AR) coatings were in some cases deposited on one or both facets. Each device was finally mounted epitaxial-side-up on a copper heat sink attached to the cold finger of an Air Products Heli-Tran Dewar. Pulsed characterization and analysis of internal loss and Auger coefficient We obtain the most reliable information about 'intrinsic' device performance, uncomplicated by lattice heating and the sidewall quality of narrow ridges, by measuring the pulsed properties of broad-area lasers. The cavity-length dependence [17,22,23] of the pulsed differential slope efficiency then provides a determination of the internal loss and internal efficiency. In this section, we discuss a further correlation of the threshold current density (j_th) with theoretical optical gain versus carrier density to derive the carrier lifetime and Auger coefficient. However, since cavity-length measurements are too laborious to perform on a large number of samples, we will assume internal efficiencies η_i based on those obtained from a detailed temperature-dependent investigation of the five-stage sample T080227 (listed in table 1 below) [17]. In that case, η_i decreased gradually with increasing temperature, from 83% at 78 K to 64% at 300 K. While this assumption will affect the quantitative estimate for the internal loss, we find that the derived Auger coefficient corresponding to the measured differential slope efficiency (dP/dI) and threshold current density is surprisingly insensitive to the value of η_i. We also note that the laser's internal loss, as determined from the standard relation α_i = [ln(1/R)/L_cav] × (η_i/η_d − 1), where η_d is the external differential quantum efficiency per stage extracted from dP/dI, has a weak dependence on the facet reflectivity R. While many previous studies used a value near 30%, a more careful examination of the reflectivity problem for well-confined TE-polarized modes [24] produces a better estimate of R ≈ 40%. This leads to a minor upward adjustment of the Auger coefficient reported previously [19] for one of the samples treated below.
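To make the loss extraction concrete, a minimal Python sketch of the inversion is given below. The function name and packaging are ours, and the inputs (a per-facet slope efficiency of 239 mW A−1 doubled to account for both facets, η_i = 64%, a 2 mm cavity and R = 0.40) are simply representative figures quoted in this section; the sketch reproduces the ≈5.7 cm−1 internal loss and ≈28.3% per-stage external differential quantum efficiency discussed later in the section.

import numpy as np

H_C_Q = 1.23984  # eV*um: photon energy in eV = H_C_Q / (wavelength in um)

def internal_loss(dP_dI_total, wavelength_um, n_stages, eta_i, L_cav_cm, R=0.40):
    # Mirror loss of the uncoated Fabry-Perot cavity (cm^-1).
    alpha_m = np.log(1.0 / R) / L_cav_cm
    # External differential quantum efficiency per stage.
    eta_d = dP_dI_total / (n_stages * (H_C_Q / wavelength_um))
    # Invert eta_d = eta_i * alpha_m / (alpha_m + alpha_i).
    alpha_i = alpha_m * (eta_i / eta_d - 1.0)
    return alpha_i, eta_d

# Per-facet 239 mW/A doubled for both facets; eta_i = 64%; 2 mm cavity.
alpha_i, eta_d = internal_loss(2 * 0.239, 3.7, 5, 0.64, 0.2)
print(f"eta_d per stage ~ {eta_d:.1%}, alpha_i ~ {alpha_i:.1f} cm^-1")

Because η_d enters only through the ratio η_i/η_d, an error in the assumed η_i shifts α_i but largely cancels in the subsequent Auger analysis, consistent with the insensitivity noted above.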
Table 1 summarizes the pulsed characteristics of five-stage ICLs fabricated from 12 representative wafers with room-temperature lasing wavelengths from 2.95 to 5.02 µm (the table does not include all of the samples characterized). Figure 2 plots the temperature dependence of threshold current densities for several samples selected from the table. In order to minimize lattice heating, the characterizations employed 100–350 ns pulses for T ≥ 300 K, 1 µs pulses in the 150–300 K temperature range, and cw injection at the lowest temperatures (where pulsed measurements are impractical due to the very low currents). The repetition rate was 200 Hz for measurements up to 300 K and 50 Hz at higher T. Figure 2. Threshold current densities versus temperature for five broad-area ICLs with 2-mm-long cavities. The measurements employed 100–350 ns pulses for T ≥ 300 K, 1 µs pulses in the 150–300 K temperature range, and cw injection for the lowest temperatures. The line represents T_0 = 47 K, the characteristic temperature for T080227 over the 78–320 K temperature range. The threshold current densities at 78 K were extremely low, e.g. 1.7 A cm−2 for T080618, which makes them effectively negligible for devices producing significant optical powers. The observed variations at low T are most likely due to differing densities of the traps responsible for Shockley-Read recombination. The room-temperature thresholds of ≈400 A cm−2 for the best ICLs emitting near 3.7 µm are comparable to those measured previously for 10-stage ICLs with higher doping levels in the optical cladding layers [23]. The threshold current densities exhibit surprisingly little variation over the 3.0–4.2 µm spectral range, which is attributable to a very weak dependence of the Auger coefficient on wavelength, as discussed below. We do, however, find an increase of j_th(300 K) at the longest wavelengths, e.g. ≈1500 A cm−2 for λ = 5.0 µm. Over the entire T = 78–300 K range, the data imply characteristic temperatures of T_0 ≈ 37–49 K, although these are not the best predictors of high-T cw performance, since j_th is governed by different non-radiative mechanisms at low and high temperatures (Shockley-Read versus Auger). Nevertheless, the T_0 values near ambient are comparable to those derived for the full temperature range, with the minor variations correlating well with the Auger coefficients deduced below. The two primary figures of merit for achieving high-temperature cw operation are the threshold power density (P_th), which is obtained from the product of j_th and the bias voltage at threshold (V_th), and the rate of the threshold power density's increase above room temperature (i.e. the characteristic temperature associated with P_th). Figure 3 illustrates threshold voltages versus temperature for the same samples shown in figure 2. The behavior is similar to that observed previously [23], in that V_th first drops as the turn-on voltage decreases (due in part to the decreasing energy gap, and sometimes also to thermal activation of the carrier transport), reaches a minimum at some intermediate temperature, and then rises rapidly at high T due to the increasing ohmic contribution associated with the larger j_th (ΔV = j_th × ρ_s^th, where the differential series resistivity at threshold ρ_s^th is typically in the range 0.7–1.1 mΩ cm² at 300 K). Interband semiconductor lasers reach transparency when the bias separates the electron and hole quasi-Fermi levels by the energy gap (in each stage).
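As an aside on how these figures of merit are formed in practice, the short sketch below fits a characteristic temperature to threshold data of the form j_th(T) = j_0 exp(T/T_0) and combines the result with an assumed threshold voltage to form P_th. The data points and the 2.2 V value are illustrative placeholders of roughly the right magnitude, not measurements from table 1.

import numpy as np

def fit_T0(T_K, j_th):
    # ln(j_th) = ln(j0) + T/T0, so a linear fit gives T0 = 1/slope.
    slope, intercept = np.polyfit(T_K, np.log(j_th), 1)
    return np.exp(intercept), 1.0 / slope

# Hypothetical points consistent with T0 ~ 47 K and j_th(300 K) ~ 400 A/cm^2.
T = np.array([240.0, 260.0, 280.0, 300.0, 320.0])
j = 400.0 * np.exp((T - 300.0) / 47.0)

j0, T0 = fit_T0(T, j)
V_th = 2.2                      # assumed threshold voltage (V) for 5 stages
P_th = 400.0 * V_th             # threshold power density at 300 K (W/cm^2)
print(f"T0 ~ {T0:.0f} K, P_th(300 K) ~ {P_th:.0f} W/cm^2")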
Given this transparency condition, the minimum required voltage is roughly N × ħω/q, plus the ohmic contribution and the additional voltage of order k_B T/q per stage needed to overcome losses. To assess whether any other major sources of excess voltage are present in addition to these mechanisms, figure 4 plots the quantity V_th − V_ohmic − N × k_B T/q as a function of N × ħω/q at room temperature for the five-stage samples listed in the table, along with other 3-, 5- and 10-stage ICLs grown in 2008. It appears that the best structures require little or no additional voltage, although the ohmic term may include resistivity contributions that can be further minimized once they are better understood. Other structures require up to 0.7 V of excess bias in order to operate. Two of the worst five-stage devices in the figure were found to have inadequate Te doping in their SCLs (the SCLs may have been p-type). However, in most other cases it is unclear at present why some of the ICLs require much more voltage than others. Figure 5 plots temperature-dependent differential slope efficiencies for the samples of figures 2 and 3. Note that the redesign of T080828 quite effectively increased the efficiency at both low and high temperatures. Figure 6. Wavelength dependence of the internal loss at room temperature, as estimated from the pulsed differential slope efficiencies of a wide variety of broad-area ICLs. For ICLs emitting at λ ≈ 3.7 µm, typical dP/dI values of 450–490 mW A−1 at 78 K and 140–180 mW A−1 at 300 K before the redesign increased to 621 and 239 mW A−1, respectively, after it. Using a fixed room-temperature internal efficiency of 64% from the cavity-length study on sample T080227, as discussed above, the internal loss in each broad-area ICL can be estimated from its slope efficiency. Results are listed in table 1, and also plotted versus wavelength in figure 6 for these samples and a number of other 3-, 5- and 10-stage devices. For ICLs emitting near 3.7 µm, the internal loss before the redesign was as low as 8.7 cm−1 but more typically ≈12 cm−1. After the redesign, α_i(300 K) decreased to the range 5.7–8.7 cm−1. The lowest value is only slightly larger than the mirror loss of ln(1/R)/L_cav ≈ 4.5 cm−1 for the 2-mm-long cavity with uncoated facets. Taking both facets into account, α_i = 5.7 cm−1 implies an external differential quantum efficiency per stage of 28.3%. Figure 6 indicates little dependence on wavelength between 3.6 and 4.2 µm (thus far the redesign has been applied only to a narrow central range near 3.7 µm). However, the measured losses increase substantially at λ ≳ 4.5 µm, with α_i = 19 and 31 cm−1 being obtained for the ICLs emitting at 4.5 and 5.0 µm, respectively. This may reflect stronger hole absorption in the active region and/or increased free-electron absorption in the SCL and cladding regions. Curiously, the internal loss also increases at shorter wavelengths, λ ≲ 3.4 µm, with α_i ≈ 18 cm−1 for λ = 2.95–3.4 µm. A photoluminescence study of wafers containing only cladding layers (no active QWs) showed trap emission in roughly this same spectral range. We therefore suspect that parasitic absorption within the InAs/AlSb superlattices may be responsible for the additional loss at shorter wavelengths. As outlined above, the derived internal loss, measured threshold current density and calculated modal gain can be combined to estimate an Auger coefficient (γ_3) for each structure.
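A schematic version of that combination is sketched below. The threshold sheet carrier density would in practice come from the calculated modal gain crossing α_i + α_m; here it is an assumed placeholder value, and recombination at threshold is taken to be purely Auger-dominated, so the result should be read only as an order-of-magnitude illustration.

Q = 1.602e-19   # elementary charge (C)
D_QW = 10e-7    # assumed net active QW thickness: 10 nm, in cm

def auger_coefficient(j_th, n_sheet):
    # Convert the sheet density to a 3D density, as in the text below.
    n3d = n_sheet / D_QW                      # cm^-3
    # j_th = q * n_sheet / tau per stage, with 1/tau ~ gamma_3 * n3d^2
    # when Auger recombination dominates at threshold.
    return j_th / (Q * n_sheet * n3d**2)      # cm^6 s^-1

# Hypothetical gain-model output n_sheet ~ 1.7e12 cm^-2 at threshold,
# with a measured j_th of 400 A/cm^2:
print(f"gamma_3 ~ {auger_coefficient(400.0, 1.7e12):.1e} cm^6 s^-1")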
Since the active energy levels of the ICL are 2D in nature, we convert to more standard γ_3 units (cm^6 s−1) by assuming a typical net active QW thickness of 10 nm to normalize all sheet carrier concentrations to 3D densities. Results at room temperature are listed numerically in table 1, and are plotted versus wavelength in figure 7 for a much larger sampling of 3-, 5- and 10-stage ICLs with a variety of active-region designs. Figure 7. Room-temperature Auger coefficient versus emission wavelength (lasers) or effective wavelength corresponding to the energy gap (other devices), for a wide variety of pulsed, broad-area ICLs from this work and [20] (solid points), along with type-I and type-II data from [25] (open points). The additional data for bulk and type-I QW materials, and for optically pumped type-II structures, are taken mostly from [25]. Clearly, at most λ the derived Auger coefficients for type-II QWs in ICLs are much lower than had been anticipated based on the earlier optically pumped laser and photoconductivity experiments. Furthermore, the dependence on wavelength is much weaker than was implied by the earlier work. Both of these findings are highly beneficial to the prospects for high-temperature lasers operating in the 3.0–4.2 µm window. The observed γ_3 values are (4–6) × 10^−28 cm^6 s−1 in this range, with any trends being weaker than the rather modest variations between samples. When λ is further increased to 4.5–5.0 µm, the derived Auger coefficients increase slightly to 7 × 10^−28 cm^6 s−1. It is quite surprising (and fortunate) that the ICL Auger coefficients in figure 7 lie well below published theoretical projections [26,27] (although some of the difference may result from the use of different gain models [28]). This cannot result from any especially optimal valence subband alignments in our active region designs, since numerous structures with a variety of GaInSb thicknesses and compositions exhibit quite similar thresholds and slope efficiencies, and hence Auger coefficients. In fact, the observation that γ_3 remains nearly constant when the hole QW thickness and composition remain fixed and λ is varied appears to rule out any significant role for intervalence resonances (although this could reflect a substantial broadening of the resonances rather than the unimportance of intervalence processes). On the other hand, it may be significant that the present structures (along with a recent optically pumped device represented by the open star in figure 7) have much higher MBE quality than those grown more than a decade ago for the investigation of [25]. For example, defect- or interface-roughness-assisted Auger recombination may have dominated the previous lifetimes while being much less prominent in the recent ICLs [26]. Nevertheless, further research is needed to reconcile the differences between earlier theories for Auger recombination in type-II 'W' QWs and the present experimental results. High-temperature cw operation We previously reported that an ICL fabricated from T080227 operated in cw mode at λ = 3.75 µm up to a maximum temperature of 319 K [17]. Subsequent narrow ridges fabricated from T080619 lased cw up to T_max^cw = 334 K, which is attributable primarily to the higher T_0 characteristic of those devices emitting at a somewhat shorter wavelength (3.3 µm).
Although these first demonstrations of semiconductor lasers operating cw in this wavelength range at temperatures above ambient represent a breakthrough in mid-IR laser development, both the device designs and the processing used to etch those ridges were far from optimized. The improved etch recipe described in section 2 has substantially reduced undercutting in the most recent narrow-ridge ICLs, as illustrated by the SEM micrographs in figure 8. This recipe was applied to sample T080828 (λ ≈ 3.7 µm), whose slope efficiency is the highest of all the devices listed in table 1. Electroplated ridges were fabricated with widths w = 4.4 and 7.0 µm (at the level of the active region), L_cav = 2 mm and uncoated facets. Figure 9 plots cw threshold current densities for the two devices at temperatures in the range T ≥ 250 K. At T = 300 K, j_th = 778 and 651 A cm−2 for the w = 4.4 and 7.0 µm devices, respectively, which compare to 439 A cm−2 for the broad-area device from the same wafer. ICLs processed using the earlier etch recipe typically experienced larger degradations of the threshold current densities. The characteristic temperatures derived from these data for the 250–300 K range are T_0 = 52–53 K. The maximum operating temperature of 335 K for the w = 4.4 µm ridge is a new record for ICLs, and the highest reported for any semiconductor laser emitting in the 3.0–4.6 µm spectral range. Since this result was measured for a device with a relatively short cavity and uncoated facets, further improvements in T_max^cw may be expected. Figure 10 shows cw light-current characteristics of the same two narrow-ridge devices at T = 300 K. The maximum cw output power for the w = 7.0 µm ridge is 24 mW per facet. That device's maximum slope efficiency of 164 mW A−1 per facet is 69% of the pulsed dP/dI reported in table 1. The inset to figure 10 shows that the w = 7.0 µm ridge displays a maximum cw wall-plug efficiency (WPE) of 2.8% per facet at T = 300 K. Additional Au-electroplated narrow ridges were processed from the same wafer to have widths of 10 µm, cavity lengths of 3 mm and coated facets. The red points in figure 11 are cw L-I characteristics at T = 298 K for an ICL with HR and AR coatings, while the blue points are data for a device with one HR and one uncoated facet. Note that while the HR/U ridge has a lower lasing threshold, with increasing current the HR/AR device eventually displays slightly higher output power and WPE. Up to 600 mA, the maximum powers are 59 and 51 mW for the HR/AR and HR/U devices, respectively, while the maximum WPEs are 3.4 and 3.1%. While these early efficiencies are lower than the value of ≈11% recently obtained for a 4-mm-long HR-coated QCL emitting at λ = 4.6 µm, they nonetheless show the promise of ICLs to perform well in the intermediate λ = 3.0–4.2 µm spectral band. Furthermore, the input powers needed to achieve lasing are already much lower for ICLs than QCLs, due to lower threshold current densities and especially lower bias voltages associated with the much smaller number of stages. This can be an important consideration in field applications that require powering from batteries. Conclusions We have reviewed the state-of-the-art performance characteristics of ICLs emitting in the mid-IR spectral band at ambient temperatures and above. Our device development strategy relies heavily on characterizing the pulsed properties of broad-area lasers and then optimizing their designs.
The most significant discovery has been that the Auger non-radiative decay rates in these type-II structures are much lower than was expected based on previous experiments and theoretical projections. Auger coefficients extracted from the lasing thresholds, differential slope efficiencies and other properties of nearly 30 lasers are remarkably consistent, and display no statistically significant dependence on wavelength in the 3.0–4.2 µm window. The γ_3 values rise only gradually beyond that, out to 5 µm. A direct consequence of the unanticipated Auger suppression is that lasing thresholds are lower than had been thought possible throughout the 3–5 µm band. Further optimization of the ICL design has reduced internal losses to as low as 5.7 cm−1 at 300 K. In the structures studied to date, losses are lowest for wavelengths in the intermediate 3.4–4.2 µm range, but increase at both longer and shorter λ. In parallel with the development of improved ICL designs, we have also refined the fabrication procedures for narrow-ridge devices. As a result, a 2-mm-long cavity without facet coatings achieved a record-high cw operating temperature of 335 K (at λ = 3.7 µm). Another narrow ridge with HR/AR coatings produced up to 59 mW of cw output at 298 K, and had a maximum WPE of 3.4%. The structure's flexibility provides ample opportunity for further modifications of the design space, to assure ongoing advances in ICL performance.
5,807.2
2009-12-01T00:00:00.000
[ "Physics", "Engineering" ]
The Emerging Role of Virtual Reality as an Adjunct to Procedural Sedation and Anesthesia: A Narrative Review Over the past 20 years, there has been a significant reduction in the incidence of adverse events associated with sedation outside of the operating room. Non-pharmacologic techniques are increasingly being used as peri-operative adjuncts to facilitate and promote anxiolysis, analgesia and sedation, and to reduce adverse events. This narrative review will briefly explore the emerging role of immersive reality in the peri-procedural care of surgical patients. Immersive virtual reality (VR) is intended to distract patients with the illusion of "being present" inside the computer-generated world, drawing attention away from their anxiety, pain, and discomfort. VR has been described for a variety of procedures that include colonoscopies, venipuncture, dental procedures, and burn wound care. As VR technology develops and the production costs decrease, the role and application of VR in clinical practice will expand. It is important for medical professionals to understand that VR is now available for prime-time use and to be aware of the growing body of literature that supports VR. Introduction In the past 20 years, sedation delivery outside of the operating room has evolved in a number of ways designed to reduce sedative side effects [1]. Risks of significant adverse events have been reduced by a number of improvements in safety, such as the better identification of which patients can safely be treated outside of the operating room, training, physiologic monitoring, and broadened sedative options [2][3][4][5][6][7][8][9]. Examples include the use of intranasal dexmedetomidine for the sedation of children during non-painful imaging procedures [10] and the use of adjunctive dexmedetomidine during painful procedures, e.g., to reduce reliance on opioids [11][12][13][14][15][16]. The evolution of national guidelines, standardized definitions of the depth of sedation and outcomes, and the application of non-pharmacologic techniques have further contributed to reducing the risk of significant adverse events [17]. Once-common practices such as immobilization through papoosing (physically restraining a child to keep them still during a medical procedure) have almost universally been replaced with distraction techniques that range from the rudimentary (a book) to the more sophisticated (a tablet) [18]. The development of more powerful non-drug adjuncts has intensified, in light of the following Food and Drug Administration (FDA) warning [19]. "The FDA is warning that repeated or lengthy use of general anesthetic and sedation drugs during surgeries or procedures in children younger than 3 years or in pregnant women during their third trimester may affect the development of children's brains" (FDA, 2017 p 1); see also [20]. In children, the untoward effects of general anesthesia and procedural sedation have been reported to include short-term behavioral and emotional changes, a decline in academic achievement, maladaptive behavior (eating and sleeping difficulties, withdrawal, apathy, enuresis), and fear of future medical procedures [21][22][23][24][25][26][27]. With the heightened media coverage, fueled by the above-mentioned FDA warning concerning the possible deleterious effects of anesthesia and sedation on the developing neonate and infant brain, there is an increased urgency to consider new approaches to decrease anesthetic exposure.
There is growing evidence that the patient's psychological state of mind can influence the dosage of sedatives needed to achieve the target sedation level. For example, a patient having a mild panic attack as they enter the operating room is likely to be more challenging to sedate than a patient who is calm and collected as they are being sedated. Over the past decade, child life specialists and other non-pharmacologic supports have become increasingly present in the pre-procedural and pre-operative areas [28,29]. Clowns have become increasingly visible in these areas, entertaining the children to relieve parental and patient anxiety, with the goal of reducing pharmacologic analgesics and anxiolytics [30,31]. Many traditional distraction techniques are modestly helpful, but a more powerful distraction with little or no side effects is greatly needed. As briefly reviewed in the current narrative review paper, immersive virtual reality appears to be an unusually effective technique and is quickly becoming a "distraction on steroids". According to Webster's Dictionary, Virtual Reality (VR) is "an artificial environment which is experienced through sensory stimuli (such as sights and sounds) provided by a computer and in which one's actions partially determine what happens in the environment." [32]. There is currently intense interest in the application of immersive virtual reality as a non-pharmacologic technique to reduce the need for anxiolytic, analgesic, and sedative delivery in the pre-procedure/operative period, to help reduce post-operative pain, and to reduce reliance on medications. Immersive VR is increasingly being used as a non-drug analgesic, anxiolytic, and digital sedative for procedural sedation. With the application of VR, patients have reported reductions in pain and anxiety during painful medical procedures [33][34][35], and VR may help reduce reliance on opioids for pain management [36,37]. The application of VR for the pediatric population, as an adjunct or in some cases a replacement for procedural sedation, could offer important benefits [38]. This narrative review briefly explores some of the relevant literature on VR applications and discusses how the recent increase in usability and affordability is facilitating the proliferation of VR in peri-operative and surgical environments. The current brief narrative review is an overview and synthesis based on information from a selection of individual published scientific studies, meta-analyses, and the personal experience of the originator of VR analgesia. This review presents a broad perspective on the topic of virtual reality analgesia for acute pain control, especially during procedures that often involve sedation. This paper also includes a brief history of several major steps in the development of VR pain management over the past 20 years. This review targets clinicians and is designed to help bring practitioners of pediatric sedation up to date on growing opportunities to use VR in their clinical practice. Virtual Therapeutics The application of VR as a non-pharmacologic anxiolytic and peri-analgesic technique has evolved over the past decade. At its essence, patients don VR goggles (see Figure 1) and interact with virtual objects in a computer-generated world. During VR physical therapy (where motion is desired), with hand, head, and body movements, patients are able to transform their environment: stir the witches' cauldron, dance with a robot, walk a pirate's plank and pick up treasures, etc.
However, with regard to sedation, VR experiences have been customized to minimize physical body movements. The patients use eye movements and/or mouse tracking, e.g., Hoffman et al., 2019 [35], to aim snowballs at virtual snowmen and click a mouse button to throw snowballs; see Figure 2. Several VR systems have been tailored to the procedure, specifically designed to distract patients from the painful or anxiety-generating stimulus without interfering with the wound care nurse's ability to work on the patient. For example, eye tracking can help patients interact with the virtual world while the patient remains physically still during the medical procedure [39]. Figure 1. Some VR systems have been customized for use during wound care and wound debridement. Some patients have face or head burns that make it difficult to wear traditional head-mounted VR goggles. The "articulated arm" shown above holds the goggles near the patient's eyes, with little or no contact with the patient [35]. Photo and copyright Hunter Hoffman, www.vrpain.com, accessed on 28 November 2022. To track eye movements, six small infrared lights embedded inside the VR goggles are shined onto the surface of the patient's eyes, creating a pattern of dots on the outer eye. The pattern changes as they look in different directions. The changes in where the patient is looking are then captured using miniature video cameras pointed at the patients' eyes [39]. The computer chip in the VR goggles can tell from the patterns what virtual objects the patient is looking at in VR, such as to aim virtual snowballs. VR is designed to transport the patient to another reality, immerse them in their new virtual environment, and distract them by eliciting their engagement in an interactive game designed around their new environment.
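As a purely hypothetical illustration of the aiming mechanism just described (no code from the cited systems is implied), the snippet below casts a ray from the eye along the measured gaze direction and selects whichever virtual target lies closest to that ray; every name and value in it is invented for this sketch.

import numpy as np

def pick_target(eye_pos, gaze_dir, targets, max_angle_deg=3.0):
    # Return the index of the target nearest the gaze ray, or None
    # if no target lies within the angular tolerance.
    gaze = np.asarray(gaze_dir, float)
    gaze /= np.linalg.norm(gaze)
    best, best_angle = None, np.radians(max_angle_deg)
    for i, t in enumerate(targets):
        to_t = np.asarray(t, float) - eye_pos
        cosang = np.dot(to_t, gaze) / np.linalg.norm(to_t)
        angle = np.arccos(np.clip(cosang, -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = i, angle
    return best

snowmen = [(0.0, 0.0, 10.0), (3.0, 1.0, 12.0)]   # hypothetical positions
hit = pick_target(np.zeros(3), (0.25, 0.08, 1.0), snowmen)
print(f"gaze selects snowman #{hit}")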
VR transports the patient to another 3-dimensional reality, e.g., SnowWorld [35,40], Mindfulness River World [41,42], or VR Animal Rescue World [43]. Even the slightest eye movement or mouse movement can change where the patients are looking in VR and allow patients to interact with objects in VR. Interactivity increases people's illusion of "being there" in the computer-generated world, as if it is a place they are visiting [44]. The patients describe a unique sense of "being present" in this computer-generated world. VR transports patients from their unpleasant, anxiety- and pain-provoking environment to an alternate, engaging, and entertaining computer-generated virtual world designed to help patients think about something other than their pain. VR can reduce acute pain and anxiety related to the most painful procedures: the scrubbing, cleaning, and debridement of burn wounds [35,45] or venipuncture [46,47]. Burn wound care is typically severely painful, despite the use of powerful opioid analgesics. SnowWorld VR is a frigid polar climate within which patients throw snowballs at targets (e.g., snowmen, penguins, and woolly mammoths; see Figure 3). SnowWorld is tailor-made to distract patients from the flashbacks of fire and the pain associated with the dressing change. VR analgesia enables patients to tolerate these procedures without increasing traditional analgesics.
The literature on the safety, efficacy, and acceptability of VR has grown significantly in recent years [48,49]. For example, in children presenting for surgical procedures, VR has been shown to alleviate preoperative anxiety and facilitate the induction of anesthesia [50,51]. A review of outcomes in 213 children (6–18 years), primarily in the perioperative (60%, n = 128) and clinic (15%, n = 32) settings, demonstrated that VR-related adverse events were rare, self-limiting, and minor, such as occasional increases in anxiety (3.8%, n = 8), nausea (0.5%, n = 1), and dizziness (0.5%, n = 1) [25]. Importantly, the costs of developing, creating, and implementing VR have decreased greatly over the past decade, thereby increasing its applicability and affordability as a cost-effective tool for clinical practice [49,52]. The increased feasibility of widespread dissemination is occurring at a time of greatly increased demand for non-pharmacologic analgesics, in light of the opioid death epidemic [53,54] and a growing awareness of the need to avoid oversedation. The mechanism of action of immersive VR is not yet fully understood, but is thought to be related to VR's ability to divert attention [34,44]. fMRI studies demonstrate that VR significantly reduces pain-related brain activity in the anterior cingulate cortex, primary and secondary somatosensory cortex, thalamus, and insula [55][56][57], and show that VR can provide analgesia comparable to hydromorphone. VR as a Non-Pharmacologic Anxiolytic, Analgesic, and Sedative VR reduces procedural anxiety [58] by redirecting the patient's attention into the computer-generated world, as the VR "transports" the patient from the clinical setting to another 'reality' [44]. The application of VR has been shown to decrease diastolic blood pressure, heart and respiratory rate, temperature, muscular tension, skin conductance, and serum carbon dioxide levels [59]. VR has been used during a wide range of medical procedures.
The current non-exhaustive narrative review briefly explores some studies of VR use during several medical procedures that commonly involve sedation outside of the operating room, with a focus on colonoscopies, venipuncture, dental procedures, MRI scans with autistic children, and wound care of children hospitalized with severe burn injuries. VR Sedation during Colonoscopies In the United States, it is estimated that 15 million colonoscopies are performed annually; it is a recommended medical procedure to screen for the third most commonly diagnosed cancer [60]. However, many people who could benefit from the preventative removal of pre-cancerous polyps avoid receiving their colonoscopy, for fear it will be unpleasant [61]. Some of these people who avoid colonoscopies die of colon cancer that could easily have been prevented via endoscopic polyp removal. In one recent survey about why patients avoid receiving a colonoscopy, "Anxiety was a key barrier cited by patients and SSPs, arising from the moment the patient received the invitation letter. Notably, procedural-related anxieties centred upon the fear of pain and discomfort and test invasiveness." ([62] p. 1). Colonoscopies are most frequently prescribed to older adults. Additionally, there is in fact some cause for concern about rare but real side effects of the sedation often used during colonoscopies. The moderate and deep sedation, as well as general anesthesia, that are utilized for these procedures all carry inherent associated risk [63][64][65]. A systematic review of four randomized clinical trials (RCT) found that VR was superior to the controls and had similar efficacy to traditional pharmacologic sedation [66]. VR Distraction during Venipuncture For the induction and maintenance of anesthesia and sedation, an intravenous (IV) catheter is often placed in the pre-operative or pre-procedure location (e.g., in the patient's forearm). Needle phobia is common amongst toddlers, children, and some adolescents. Venipunctures can elicit exaggerated anxiety, and the fear of needles has contributed to the avoidance of healthcare and reduced adherence to childhood immunization schedules [46,47,67,68]; an unpleasant venipuncture can get a sedation off to an unpleasant start. Immersive VR can alleviate pain and anxiety in children and reduce the parents' and nursing staff's ratings of the children's procedural pain and anxiety [69]. In a randomized controlled trial assessing the feasibility and efficacy of incorporating VR into the routine care of venipunctures in 143 children (10–21 years old), VR significantly reduced acute procedural pain and anxiety. VR reduced caregiver and patient distress, increased clinician satisfaction, and improved efficiency and throughput in the outpatient phlebotomy clinics. These results are consistent with other recent applications of VR for venipuncture [46,68,70–72]. Similarly, children undergoing blood draws and intravenous placement in the emergency department reported significantly greater reductions in fear of pain using immersive VR than with the active control conditions (watching television or receiving standard distraction). Although children reported significantly less fear and greater satisfaction during VR compared to the active control, no significant reductions in pain intensity were noted in that study [73]. In another study, VR did not reduce pain in children undergoing procedural sedation under local anesthesia [38].
VR Sedation during Dental Procedures VR has been used with success in the dental setting for children and adult patients presenting with mild to moderate fear and anxiety: VR distraction reduced the self-reported levels and physiological indicators of anxiety, fear, and pain [47,59,74]. In the pediatric population, VR decreased reported pain and anxiety levels, with accompanying decreases in the pulse rate and oxygen saturation, before, during, and after restorative dental treatment (p < 0.0001) [75]. Similar responses were reported in healthy children requiring buccal infiltration anesthesia [76] and inferior alveolar nerve block (IANB) for mandibular tooth extraction [77]. In another study of the analgesic effect of VR, patients undergoing periodontal scaling and root planing procedures reported significantly reduced pain perception compared to an active comparator group (watching a movie) and passive controls (p < 0.001), with accompanying reductions in blood pressure and pulse rate (p < 0.001) [78]. In another study, children undergoing dental extractions did not report a significant reduction in anxiety [79]. VR Sedation Is Especially Challenging in Children with Autism VR has been used with success for children with autism spectrum disorders (ASD). Children with ASD can display delayed or repetitive language, sensory sensitivity, and elevated anxiety levels [80][81][82]. Repetitive behaviors such as rocking, jumping, twirling, and pacing around, and other "hyper", impulsive, and sometimes aggressive behaviors make it challenging for children with autism to stay physically still during medical procedures (e.g., MRI scans). VR distraction has been used in this clinical setting for dental procedures of children with ASD, alleviating the accompanying behavioral challenges during the procedures [83]. The cognitive and behavioral challenges associated with ASD can limit the child's ability to tolerate other medical procedures often necessary to evaluate their comorbidities (immune, gastrointestinal, and neurologic disorders) [84,85]. Magnetic Resonance Imaging (MRI) studies can be challenging for patients with autism, who often require immobility for extended periods of time in the cold, noisy, and claustrophobic MRI environment. With the FDA and Society for Pediatric Anesthesia warnings on the potential for neurotoxicity of sedation drugs to the developing brain, parents can be reluctant to consent to traditional pharmacologic anesthesia or sedation [86][87][88][89]. If unsedated autistic patients move around during imaging, the scan can be aborted or blurred, reducing the quality of medical care. In the days prior to their MRI scans, training in simulations can help familiarize autistic children with the MRI environment and can help them remain calmer during subsequent scans. For example, autistic children can practice being in an fMRI at home [89]. VR distraction and other types of distraction may also help enable children with ASD to successfully complete an MRI (see a related pilot study of the successful use of distraction and VR with autistic children [90], and the successful SnowWorld VR distraction of clinically claustrophobic adults during mock MRI scans [91]). Transnasal Endoscopies and PICC Line Insertions In addition to radiological imaging studies, VR has been used as an adjunct to pediatric gastrointestinal procedures. Upper esophagogastroendoscopies are frequently performed with sedation or general anesthesia in the pediatric population [92].
Children with eosinophilic esophagitis typically require frequent follow-up endoscopies to follow disease progression. In these patients, some studies have demonstrated that VR has eliminated the need for sedation or anesthesia for transnasal endoscopy and reduced the associated costs [93]. Transnasal endoscopy involves inserting a tube into the patient's body through their nose. Peripherally Inserted Central Catheter (PICC) lines are thin flexible tubes inserted peripherally (e.g., into the arm) until one end is near the larger veins close to the patient's heart (e.g., to inject intravenous fluids, to give blood transfusions, chemotherapy, and other drugs, and to take blood samples). In a group of 10 children, when VR distraction was used, PICC line insertion did not require patient restraint, and VR was associated with reductions in anxiety and greater caregiver satisfaction [94]. Analgesic Potential of VR during Other Painful Medical Procedures VR is increasingly being used as an adjunct or alternative to pharmacologic analgesia and sedation for a wide range of medical procedures [95]. VR distraction has been incorporated into a multimodal approach to the management of procedural pain: diverting the patient's focus so they spend less time thinking about pain, while increasing their pain threshold and tolerance, and making medical procedures significantly more fun [45]. Patients continue to benefit from VR when it is used repeatedly [35]. VR has been used as an anxiolytic and analgesic adjunct for orthopedic outpatient surgical procedures [36], as well as for transurethral microwave thermotherapy in elderly patients [96]. A study investigating the feasibility and effectiveness of VR for pain associated with atrial fibrillation (AF) ablation under conscious sedation in a group of 48 patients (mean age 63.0, SD 10.9 years; n = 16, 33.3% females) reported lower mean perceived pain (3.5 [SD 1.5] vs. 4.3 [SD 1.6]; p = 0.004) and greater comfort (7.5 [SD 1.6] vs. 6.8 [SD 1.7]; p = 0.03) than control. Although VR significantly reduced pain and was not associated with procedural complications or an increase in fluoroscopy duration, VR was not associated with any reductions in morphine consumption [100]. By reducing or in some cases eliminating the need for sedation, VR can lower the incidence of apnea [101]. In addition, studies of adults undergoing knee replacement have demonstrated improved intra-operative hemodynamics (reduced hypotension), decreased fentanyl requirement, and lower post-operative pain scores [102,103]. VR Sedation/Analgesia during Burn Wound Cleaning VR has been trialed for its potential to provide anxiolysis, analgesia, and distraction for severe burn patients. In the pediatric population, burn wound care is a profoundly painful and traumatic experience [35,104]. Burns are a leading cause of emergency department visits and hospitalizations (e.g., scalds and contact with hot household appliances) [105]. Traditional pharmacologic analgesics (opioid and non-opioid) and adjuncts (e.g., nonsteroidal anti-inflammatory drugs, benzodiazepines, and neuroleptics) are often unable to control the pain associated with wound care and debridements [106].
Although nearly all of the research on VR analgesia for pediatric burn patients has studied children of 6 years and older, a hybrid projector-based VR protocol and desktop VR have even demonstrated a significant effect in reducing pain related to hydrotherapy procedures in very young pediatric burn patients, as young as 2 years old [43,107]. VR for Military and Veteran Patients The military has long used VR for training. The U.S. Army recently purchased 22 billion dollars' worth of Microsoft HoloLens goggles, which can be used for either VR or see-through displays that superimpose computer-generated virtual images onto the user's view of the real world, known as Augmented Reality (AR) [108]. There is also growing interest from the U.S. Office of Veterans Affairs in using VR to help reduce the acute and chronic pain of U.S. veterans [109,110]. The military is very interested in acute pain management techniques that do not cloud the soldiers' decision process and that facilitate deployment readiness [111,112]; see also [109]. The Tactical Combat Casualty Care Guidelines are the "standard of care for the modern battlefield" [113]. Peterson et al., 2021, p. 11 [109] recommend revising the current Tactical Combat Casualty Care Guidelines to include "the addition of VR as an effective and hemodynamically safe approach to the current management of acute trauma pain in military personnel during medical procedures". Additional research and development of VR analgesia designed to meet the unique needs of military and VA patients, including both acute and chronic pain, is recommended [114,115]; see also [116]. Meta-Analyses of VR Distraction Systematic reviews and meta-analyses have demonstrated that immersive VR provides superior analgesia to controls during dressing changes/wound care in hospitalized children and adolescents (Cohen's d = −0.94, a large effect size; 95% CI 0.62–1.27; Z = 5.70; p < 0.00001). Fully immersive VR is considered to be a useful adjuvant in pediatric burn care [117]. These recommendations are supported by another meta-analysis of 18 studies that demonstrates VR efficacy in reducing procedural pain associated with burn care (Cohen's d = −0.49, a medium effect size; 95% CI −0.78, −0.15; I² = 41%) [118]. A systematic review and meta-analysis of 16 studies reported on the efficacy of VR for alleviating pain and anxiety in children undergoing a range of medical procedures (venous access, burn, dental, and oncology care). The results showed large effect sizes (Cohen's d) in favor of VR at reducing children's self-reported pain [119]. However, note that these unusually large effect sizes are due in part to biases frequent in these relatively early studies in this research program. The clinical efficacy of VR in managing pediatric procedural anxiety and pain is further supported by a recent literature review, primarily covering burn wound care and post-burn physiotherapy, needle-related, and dental procedures between 2000 and 2020. This review supported the efficacy of VR in addressing procedural pain and anxiety in children aged from 6 months to 18 years. The authors concluded that VR is redefining pain and anxiety management with nurses, who may play a leading role in the implementation of VR into clinical care [120]. VR improves the perioperative patient experience and shows promise at reducing intra-operative anesthetic requirements.
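For readers unfamiliar with the effect-size metric used in these meta-analyses, the short sketch below computes a pooled-standard-deviation Cohen's d. The group means and SDs echo the AF-ablation figures quoted earlier, but the per-arm sample sizes are assumed for illustration and are not from any of the cited trials.

import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    # Standardized mean difference using the pooled standard deviation.
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# e.g., VR-group pain 3.5 (SD 1.5) vs control 4.3 (SD 1.6),
# with a hypothetical 24 participants per arm:
d = cohens_d(3.5, 1.5, 24, 4.3, 1.6, 24)
print(f"Cohen's d ~ {d:.2f}")   # negative sign: lower pain with VR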
A systematic review and meta-analysis of 20 experimental and quasi-experimental trials published between January 2007 and December 2018 on VR as an analgesic agent for acute and chronic pain in adults showed benefits of VR in reducing peri- and post-procedural acute pain [121]. Across all trials included in the meta-analyses, despite a high degree of statistical heterogeneity, VR was associated with reductions in pain scores, with a moderate effect size (Cohen's d = −0.49; 95% CI −0.83 to −0.41; p = 0.006) [114]. Until recently, technological limitations of VR have been a barrier to dissemination. Due in part to the ongoing opioid overdose death crisis [122], VR analgesia/sedation is currently a research topic of intense scientific and clinical investigation. Over the past 25 years, despite growing empirical evidence of its efficacy, VR has not been widely adopted in everyday clinical practice. This is partly because, until recently, VR systems were very rare, expensive, and complicated. During the early days of VR [33,34], the VR system was heavy and costly (USD 60,000 for one VR system), and included a heavy computer monitor, a 55 lb Silicon Graphics supercomputer, an 8-pound VR helmet, and a separate USD 8000 Polhemus FastTrak electromagnetic tracking system. Transport and set-up required several hours, were labor-intensive, and demanded considerable technical troubleshooting skills. Over the years, VR technology has evolved and improved, becoming much lighter (less than 2 pounds total for the entire VR system), less expensive, and requiring less technological sophistication to set up; but, until recently, VR was still not easy for nurses to use without help, the wide-field-of-view VR helmets were still very expensive, and most patients who received VR were participants in research studies. Some major breakthroughs in VR technology occurred in 2016, as high-tech companies began mass-producing and mass-marketing VR helmets. In 2019, an untethered VR system was released that was wireless, self-contained, and easy to use for novice computer users (e.g., nurses). The Oculus Quest (2019) and Oculus Quest2 (2020) VR systems (see Figure 4) are now very inexpensive, widely available worldwide, and increasingly being used for VR analgesia. Thus, the biggest technological limitations to the dissemination of VR pain and anxiety reduction have recently been overcome, and the technology continues to improve (e.g., miniaturization and increased resolution, camera-based hand tracking, face tracking, eye tracking, improved graphics, etc.). The stand-alone Meta Quest2 now costs under USD 500 per helmet and does not require a laptop. The new VR systems with eye tracking and hand tracking are more immersive and more distracting [39,44], and the immersiveness of VR (and the analgesia effectiveness) is currently increasing dramatically each year, due in part to multibillion-dollar investments and competition amongst the big tech companies. For example, one single company, Facebook, has invested well over USD 2 billion into VR technology so far, and it has announced that, as of July 2022, it has sold over 15 million Quest2 VR helmets worldwide in less than 2 years. Although Facebook's stock tumbled in 2022, several other well-known tech companies such as Apple, Sony, Microsoft, Google, and Samsung are also creating VR technologies, anticipating potentially lucrative new VR-related markets [123].
Figure 4. Above right is a photo of the Meta Quest2 VR helmet. This lightweight helmet weighs only 17.7 ounces, is wireless, and uses cameras to optically track head and hand movements. The image on the left shows a screenshot from an optically hand-tracked game named Waltz of the Wizard, aldin.com, accessed on 28 November 2022; photo on the right by Hunter Hoffman; both images copyright Hunter Hoffman, www.vrpain.com, accessed on 28 November 2022. Conclusions As a result of huge private investments into the hardware by high-tech companies such as those just mentioned, and custom software by researchers and small VR therapy companies tailored to benefit patients, VR has become an affordable tool that could substantially improve patient outcomes. Future clinical research and development of VR is critical as we seek means to improve the patient care experience and minimize risks and discomfort. As briefly reviewed in the current narrative review, immersive VR can help reduce anxiety before medical procedures, can help reduce anxiety and pain during medical procedures, and can help reduce post-surgical pain. Between 1998 and 2016, a growing body of clinical studies in the literature, including Mayday Fund- and NIH-funded research, has shown large reductions in pain and anxiety during painful medical procedures. However, until 2016, VR was rarely used in clinical practice, and most patients able to use VR were research subjects. Many of the most serious barriers that prevented more widespread use of VR in everyday clinical practice (e.g., very high expense, difficulty of use, and bulkiness) have recently been eliminated. Because of multibillion-dollar investments by private industry, VR technology is currently improving at a fast pace, thus increasing its potential for medical dissemination.
8,045.8
2023-01-20T00:00:00.000
[ "Medicine", "Computer Science" ]
The Energy Impact of Building Materials in Residential Buildings in Turkey In Turkey, heat loss from existing and new buildings constitutes a large part of energy waste, so the use of suitable construction materials is quite important. The building selected in this study was analyzed by applying different building materials, considering the allowed annual energy consumption, according to the different heat zones and the different thicknesses of insulation material required. The most suitable building material for each region, in terms of energy uptake and cost, was determined; the results were evaluated against the maximum allowable annual heating energy requirement, and the optimum values were determined. The optimum values and the total energy consumption rates were compared for the analyzed cities. Introduction Climate change is a tremendous long-term challenge facing the Earth today. The energy and thermal performance of buildings has gained global importance in recent years, due to the aim of maintaining thermal comfort with a more efficient approach [1,2]. It is reported that public and commercial buildings in Europe consume an estimated 40% of total energy [3]. Residential buildings alone represent most of the final energy demand in many rural areas, which makes them one of the largest single energy consumers. The operational energy demand, with a specific emphasis on thermal aspects, covers a large part of the overall energy consumption of residential buildings and their users [4]. The construction sector must take responsibility for environmental problems, as energy is consumed at some level in every phase of the construction life cycle. Construction materials represent an important share of this consumption, and the energy consumed by building materials during their life cycle becomes a significant parameter in determining the energy efficiency of a construction [5]. All building materials are exposed over their life cycle to different negative factors that influence their durability; one of them is carbon dioxide (CO2) [6]. The building construction industry uses a lot of energy and emits large amounts of CO2 into the atmosphere. Energy is used to extract, transport, process and combine materials, and CO2 is released into the environment through fossil fuel burning, land use practices, and industrial process reactions. Recently, increased knowledge has suggested that building with Autoclaved Aerated Concrete (AAC) and wood-based materials can result in lower CO2 emissions compared to other materials, such as concrete, brick, or steel [7]. Concrete is the most widely used construction material in the world, with a prevailing consumption of 1 m³ per person per year [8]. A variety of factors affect the energy and CO2 balances of building materials over their life cycle. A large portion of energy is wasted in buildings with no insulation or inefficient insulation [17]. While designing and constructing buildings, paying attention to the climatic characteristics of the region contributes to the total cost economy and efficiency of the building. Energy conservation to reduce life-cycle energy costs has also become an important consideration while designing buildings [18]. Building materials for energy efficiency have been contemplated by many researchers, of which some are related to this survey [19][20][21][22][23]. Material and Method In this paper three different building materials are applied to the same 3-story residential building, located in various climate zones of Turkey, to determine and compare energy validity. The methodological framework of the study is presented in Figure 1.
The energy performance of the different types of buildings, the calculation method for annual heating energy demand, the thermal transmittance "U" values for each region, determined by using the "degree-day method" in TS 825, and the maximum heating demand values according to region were reported. The maximum U-value requirements according to TS 825 [24] are given in Table 1. After establishing the maximum heating demand values, the monthly outdoor temperature and solar radiation, which were taken into consideration to calculate the heating loads of the buildings, were classified separately according to each region and month. In addition, the maximum heating loads were applied according to the A/V (area/volume) ratios of the buildings for each region (Table 1) [21]. In this study, four sample cities located in different thermal regions (Antalya, Istanbul, Ankara and Erzurum) are selected to demonstrate the effect of decisions regarding the thickness of insulation materials with various building materials. Istanbul and Ankara are the moderate cities, in the second and third degree-day regions, respectively. Turkey is divided into four climatic regions depending on the heating degree-days, according to the Turkish thermal insulation regulation (TS 825), as given in Figure 2 [15,25,26]. The selected cities and their data are depicted in Table 2.
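As an illustration of the U-value check that underlies Table 1, the following minimal Python sketch computes the thermal transmittance of a layered exterior wall and compares it with per-zone limits. The wall build-up and the limit values are illustrative assumptions patterned on TS 825; they are not data taken from the study.

# U-value of a layered wall: U = 1 / (R_si + sum(d_i / k_i) + R_se).
RSI, RSE = 0.13, 0.04  # assumed internal/external surface resistances (m2K/W)

def u_value(layers):
    """Thermal transmittance (W/m2K); layers are (thickness m, conductivity W/mK) pairs."""
    return 1.0 / (RSI + sum(d / k for d, k in layers) + RSE)

# Assumed build-up: inner plaster, 19 cm brick, 5 cm EPS, outer render.
wall = [(0.02, 0.87), (0.19, 0.45), (0.05, 0.035), (0.008, 0.87)]

# Assumed maximum wall U-values per degree-day zone (W/m2K), in the spirit of Table 1.
U_LIMIT = {1: 0.70, 2: 0.60, 3: 0.50, 4: 0.40}

u = u_value(wall)
for zone, limit in U_LIMIT.items():
    verdict = "meets" if u <= limit else "exceeds"
    print(f"Zone {zone}: U = {u:.2f} W/m2K {verdict} the limit of {limit}")

With these assumed layers the wall passes the milder zones but fails the coldest one, which mirrors the pattern the study reports for brick walls.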
Additionally, the energy savings and cost analysis resulting from the use of the building materials and insulation were compared against a base "Maximum Allowable Annual Energy Requirement", which was estimated individually for each region. When the annual average temperature data for Antalya are examined, it is found that the annual maximum average temperature is 24.1 °C. The monthly average maximum temperatures range between 14.9 and 34 °C. The average maximum temperature difference between Antalya's hottest month and coldest month is 20.1 °C (Figure 3). When the yearly average temperature data for Istanbul are examined, the annual maximum average temperature is 18.7 °C. The monthly average maximum temperatures range between 8.8 and 28.9 °C. The average temperature difference between Istanbul's hottest month and coldest month is 19.1 °C (Figure 4). When the annual average temperature data for Ankara are examined, the annual maximum average temperature is 17.8 °C. The monthly average maximum temperatures range between 4.1 and 30.4 °C. The lowest average temperature occurs in February (−2.3 °C), while the average maximum temperature of August is 30.4 °C. The average temperature difference between Ankara's hottest month and coldest month is 26.3 °C (Figure 5). When the year-round average temperature data for Erzurum are examined, the annual maximum average temperature is 11.9 °C. The monthly average maximum temperatures range between −4 °C and 27.2 °C. The maximum average temperature difference between Erzurum's hottest month and coldest month is 28.5 °C (Figure 6).
Description of the Building and Building Material The studied building is a 3-story residential building, consisting of a ground floor and three floors, with a gross area of about 910 m². Each unit has two bedrooms, one living room, one kitchen and one bathroom. The building has two dwelling units on each floor, with about 455 m², and the floor plan of the house is shown in Figure 7. The structure of the building envelope components is shown in Table 3. The systems (Table 3) and costs are similar in all regions of Turkey for products easily available in traditional construction. The systems discussed in the study are oriented toward providing the best, most suitable and most economical exterior wall structure system according to the best annual heating energy demand value. The m² design and all features of the building are the same, and the annual heating energy demand values are equal.
There is no demand for insulation in the AAC systems in the 1st- and 2nd-degree-day regions, as the wall alone provides the values required by the regulation (TS 825); nevertheless, insulation was required for the brick and pumice systems in the 1st and 2nd zones, and for all the systems in the 3rd and 4th zones. Calculation of Annual Heating Energy Data of Building In the study, the optimum annual heating requirement was determined for all external wall systems. The heat loss and energy demand of the cities belonging to the four different degree-day zones are calculated monthly according to the TS 825 regulation. In the calculations, the quantity of fuel to be consumed per unit volume or unit area, 860 × Q_year/(calorific value of fuel × system efficiency), was taken as 1.17 kg/m³, with the calorific value in kcal/kg. The annual heating energy demand for a single building section is calculated with the following equation: Q_year = Σ Q_month = Σ [H·(θ_i − θ_e) − η_month·(φ_i,month + φ_s,month)]·t, where Q_year is the annual heating energy (J); Q_month the monthly heating energy (J); H the specific heat loss of the building (W/K); θ_i the average monthly internal temperature (°C); θ_e the average monthly outside temperature (°C); η_month the monthly average utilization factor for gains (unitless); φ_i,month the monthly average internal gains (can be taken as fixed) (W); φ_s,month the monthly average solar energy gain (W); and t the time (one month in seconds = 86,400 × 30). All calculation results are represented graphically in Figures 8-13. As can be seen from the figures, the heat demand in the four different degree-day regions decreases towards the summer months, and the heat loss decreases proportionally over the same period. Since the annual heat requirement values of the building materials were optimized by calculation and this value was kept constant and equal, the heat demands and heat losses of the materials are approximately the same in the graphics. Climatically, the 1st degree-day zone is the lowest in terms of heating need and the highest in terms of cooling need. For this reason, Erzurum, the selected province in the coldest zone, shows the highest heat demand and heat loss among the studied provinces (Antalya, İstanbul and Ankara).
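As an illustration of how the monthly balance and the fuel figure above combine, here is a minimal Python sketch of the calculation. The climate profile, gain values, specific heat loss, calorific value and efficiency below are invented for demonstration and are not values from the study.

# Sketch of the TS 825-style monthly heating balance described above:
# Q_month = [H*(theta_i - theta_e) - eta_month*(phi_i + phi_s)] * t.
T_MONTH = 86_400 * 30  # one month in seconds, as in the text

def q_year_joule(H, theta_i, theta_e, eta, phi_i, phi_s):
    """Annual heating energy in joules; all monthly inputs are 12-element lists."""
    total = 0.0
    for m in range(12):
        q = (H * (theta_i[m] - theta_e[m]) - eta[m] * (phi_i[m] + phi_s[m])) * T_MONTH
        total += max(q, 0.0)  # months with net gains contribute no heating demand
    return total

# Illustrative inputs: constant 19 C indoors, an assumed outdoor profile, fixed gains.
theta_i = [19.0] * 12
theta_e = [2, 3, 7, 12, 17, 21, 24, 24, 19, 14, 8, 4]
eta = [0.9] * 12            # monthly utilization factor for gains
phi_i = [2000.0] * 12       # internal gains (W), taken as fixed
phi_s = [800.0] * 12        # solar gains (W)

q = q_year_joule(400.0, theta_i, theta_e, eta, phi_i, phi_s)  # H = 400 W/K assumed
q_kwh = q / 3.6e6
# Fuel estimate with the 860 kcal/kWh conversion used in the text,
# for an assumed calorific value (kcal/kg) and system efficiency:
fuel_kg = 860.0 * q_kwh / (10_000.0 * 0.9)
print(f"Q_year = {q_kwh:.0f} kWh, fuel = {fuel_kg:.0f} kg")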
In areas with a high cooling load, proper insulation will also provide long-term economy and comfort in the structure over its operating period. Figure 13. Heating energy demand for pumice exterior wall system. Condensation and Evaporation Amounts in Building Elements In buildings situated in countries where external ambient temperatures vary over a wide range, such as Turkey, insulation applications are becoming more important day by day in order to bring down the heat losses in the winter months and the heat gains in the summer months. The condensation that occurs as an outcome of water vapor diffusion negatively affects the heat transfer occurring in building materials. Condensation or perspiration on the surfaces of building materials, especially in cold seasons, changes their physical and thermal properties. As a result, condensation increases the overall heat transfer coefficient of the material, to the point that it can disrupt the structure of the material and cause increased heat loss. This phenomenon, called condensation or sweating, causes undesirable outcomes such as damage to the materials, reduced strength levels and increased heat losses due to the increased overall heat transfer coefficient. Condensation occurs due to a lack of insulation or insufficient insulation [28][29][30][31][32][33]. The condensation and evaporation amounts of the building materials according to the climatic zones are presented in Figures 14-25.
According to the results obtained from the analysis, no condensation occurred in most of the building elements according to TS 825. Since the temperature difference between the inner surface and the indoor environment is less than 3 degrees in the exterior walls made using AAC, those walls comply with the regulations. Condensation conditions occurred in only one component of one building element: the quantity of condensed water in the 3rd component, 5.44 kg/m², exceeds the 0.5 kg/m² limit that TS 825 specifies for condensation in the heat insulation, waterproofing or air layer. Since the temperature difference between the inner surface and the indoor environment is nevertheless less than 3 degrees, and the mass of the condensed water is less than the mass of the evaporated water, the condensation is considered harmless.
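The acceptance criteria used above can be collected into a small sketch. The logic follows the three criteria as stated in the text; the sample temperatures are illustrative, while the 5.44 kg/m² case reproduces the failing component just described.

# TS 825-style condensation checks as described in the text:
# (1) the inner surface may be at most 3 degrees below the indoor air temperature,
# (2) condensate in the insulation/waterproofing/air layer must not exceed 0.5 kg/m2,
# (3) any condensed mass must be less than the mass that evaporates again.

def condensation_acceptable(t_indoor, t_inner_surface, condensed, evaporated):
    surface_ok = (t_indoor - t_inner_surface) < 3.0
    amount_ok = condensed <= 0.5              # kg/m2 limit from TS 825
    drying_ok = condensed == 0.0 or condensed < evaporated
    return surface_ok and amount_ok and drying_ok

# An AAC wall with no condensate passes; the component with 5.44 kg/m2 of
# condensate fails the 0.5 kg/m2 limit even though it dries out over the year
# (the study treats such a drying component as harmless in practice).
print(condensation_acceptable(20.0, 18.5, condensed=0.0, evaporated=0.0))   # True
print(condensation_acceptable(20.0, 18.5, condensed=5.44, evaporated=6.0))  # False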
Cost Analysis The energy savings and wall unit costs per m² (USD) are given in Table 4. The construction configuration selected for the study, with the three different materials, was evaluated in Turkey against the four distinct limit values demanded by the TS 825 regulations. According to the results, even though 10 cm thick insulation material was applied to the wall materials in Zone 4, the brick and pumice products could not conform to the limit values defined in TS 825. In the 1st- and 2nd-degree-day zones, AAC provides the desired values without the need for any extra insulation material, and the demanded limit values in the 3rd- and 4th-degree zones were achieved by adding a layer of insulation material.
Results and Discussion While designing a project/building, the standards, manufacturing terms, price, and mechanical and physical properties of the chosen material are important. Eco-friendly production has also become essential due to recently increasing levels of global warming and the environmental problems that come with it. The energy parameters of the preferred materials are taken into consideration in building design. In a structure designed in Turkey or anywhere in the world, the basic principles of building design are similar, except for the environmental conditions. When cost is calculated, the choice of materials that meet the standards narrows. Brick, pumice, and cement-based blocks have high levels of CO2 emissions during production, which contributes to global warming. In addition, the quality and standard of these materials vary from country to country, because the ambient conditions affect the setting process of concrete during the transition of the material from the plastic state to the hardened state, so the quality of the concrete changes. In the production of concrete and pumice, the ambient conditions and even the source differences of the raw materials in the mixture change the quality of the material. AAC production requires a serious investment and technical infrastructure anywhere in the world, so almost the same standard and quality can be provided globally. Different insulation thicknesses were added to the external walls in order to meet the TS 825 regulations in zones where the annual heat values of the building materials do not reach the demanded limit value. Even where a 10 cm thickness of insulation material was added, in some zones the limit values were still not attained. The heat values calculated according to the TS 825 annual heat amounts and materials are represented in Table 5. When the ratio of the calculated annual heating energy demand to the allowed limit value is at most 0.80, the building is classified as an A-type building. According to the standards of the thermal insulation rules in buildings, the annual heating requirement was estimated as "Q_year" in line with the architectural characteristics and proportions of the buildings. The annual heating energy demand should be smaller than the standard limit values in line with the building's architectural features and dimensions. Tables 6-8 express the limit values according to the regions and whether those values are sufficient or not. The energy efficiency index of the building considered in the study, according to the materials, can be seen in Table 9. Conclusions Turkey's energy demand is increasing rapidly in proportion to its rising population. Considering this situation, the limited energy resources of the country, and its dependence on foreign resources, energy saving becomes more and more significant every day. In Turkey, heat losses from buildings are one of the primary sources of energy waste. Consequently, significant energy savings can be achieved by designing and constructing buildings with suitable building materials and insulation materials. The type and thickness of the building and insulation materials play an important role in the heating and cooling of buildings in terms of energy consumption. Using proper building and insulation materials will cut down energy usage in buildings. For the building material of the external wall, the cost analysis in terms of the various building materials was calculated for the four different climatic regions in Turkey. The three most commonly used materials were selected for our study.
These materials, which are applied according to the standards, were analyzed depending on the environmental variables and insulation materials. The prevalence of the materials used in the analysis is similar in Turkey and around the world. As a result of this study, it has been revealed that AAC is more widely used in buildings where energy consumption is important. Less energy is used in the production of AAC compared to brick and pumice building materials. In addition, when evaluated in terms of recycling, sustainability, and energy, AAC, which uses fewer natural resources, provides an advantage. These parameters and our study will be supportive for housing production in environments similar to the conditions in Turkey, as well as for any part of the world. In this paper, calculations were made for four cities in different climate zones, and the following conclusions have been drawn based on these calculations. The EPS insulation material was selected for the 19 cm horizontal perforated brick, 19 cm pumice block and 20 cm aerated concrete block wall materials, which are widely used in Turkey for building exterior wall structures. For the structures with different exterior wall materials, heat calculations were made according to the four different regions, and it was checked whether the values obtained could fulfill the demanded limit values in the TS 825 regulations. The building considered in the study had a gross usable area of 460 m², consisting of six independent sections on three floors, and an external wall area of 320 m². The building is a class 4A building, according to the definition of the Republic of Turkey Ministry of Environment and Urbanization, which determined the unit cost per square meter of 4A-type buildings as 198.72 USD/m² for 2020. Based on these data, the cost of this building was estimated at USD 90,416 in this analysis. At the end of the study, it can be seen that for a structure of this size the AAC wall product provided 2.8% less cost in total, as there was no need for an additional layer of thermal insulation material in the 1st- and 2nd-degree zones. Comparing alternative systems in the 3rd-degree zone, the AAC-plus-insulation system was 0.36% more expensive than the other systems. Even taking into account the application of 10 cm thick EPS insulation material on the wall layers in the 4th-degree zone, the brick and pumice systems could not meet the TS 825 demand limit values. The cost of insulation is generally higher in cold regions than in warmer regions, but the payback time for insulation is much shorter. When this is taken into consideration, short-term investments could reduce Turkey's dependence on limited fuel sources and make important contributions to the Turkish economy; thus, considerable energy savings can be obtained by using proper materials in buildings.
7,544.6
2021-05-24T00:00:00.000
[ "Environmental Science", "Engineering" ]
The function of the topos philophronesis in the letter to the Philippians - a comparison with three ancient letters of friendship The objective of this article is to determine the function of the τόπος φιλοφρόνησις in the letter to the Philippians. According to Koskenniemi (1956:35-46) the τόπος φιλοφρόνησις means that the ancient letter served the purpose of expressing the friendly relationship between two persons. Koskenniemi also identifies typical philophronetic phrases and formulae. Because it is expected that the τόπος φιλοφρόνησις will be more prominent in letters of friendship (see Dahl, 1976:539), three papyri letters, which were classified by Stowers (1986:58-76) as letters of friendship, are analysed. The letter to the Philippians is compared to the ancient epistolary practice when identifying and determining the function of the τόπος φιλοφρόνησις. It is concluded that the τόπος φιλοφρόνησις serves the function of ἦθος and πάθος and that the relationship between addresser and addressee can be determined by analysing the φιλοφρόνησις. 1. Introduction The aim of this article is to identify the (possible) presence of the τόπος φιλοφρόνησις in the letter to the Philippians, to determine the function of such a topos and the relation between the topos and the τύπος of the letter. The concepts τόπος and τύπος are closely related. A τόπος is a department or heading containing arguments of the same kind. Koskenniemi (1956:35-46) extrapolates three general characteristics of the Greek letter which he takes to be crucial to understanding the uniqueness, purpose and function of the Greek letter: * The first and most important is the τόπος φιλοφρόνησις, which expresses the friendly relationship between two persons. * The second is the τόπος παρουσία, meaning that the letter is intended to revive and sustain the existence of a friendship when the correspondents are physically separated. * The third is the τόπος ὁμιλία, known as the epistolary discourse. A τύπος is a type of letter. Ancient epistolary theorists distinguish between τύποι of letters according to their style. Cicero (Ad Familiares 2.4.1f; 4.13.1; 6.10.4) distinguishes between simple letters with factual information and letters communicating the mood of the writer, the genus familiare et iocosum and the genus severum et grave. Pseudo-Demetrius (Τύποι Ἐπιστολικοί) divides letters into twenty-one τύποι. Philostratus (De Epistulis) mentions certain types of style used in letters, Julius Victor (Ars Rhetorica 21) distinguishes between litterae negotiales and familiares, and Pseudo-Libanius (Ἐπιστολιμαῖοι Χαρακτῆρες) divides letters into forty-one τύποι according to their style (Malherbe, 1988:12-13). In an attempt to determine the relationship between τύπος and τόπος, it is useful to notice that, according to Dahl (1976:539), the τόπος φιλοφρόνησις occurs mainly in the opening and closing segments of a letter and is most elaborate in letters of friendship and diplomatic correspondence. It thus seems possible to infer a close relationship between the τύπος φιλικός (friendly type) and the τόπος φιλοφρόνησις (friendly relationship).
Stowers (1986:60) classifies the letter to the Philippians as a letter that employs τόποι and language from the friendly letter tradition. Does Stowers mean that Philippians employs the τόπος φιλοφρόνησις? And if φιλοφρόνησις is present in Philippians, how does it function? What is the relationship between the τόπος φιλοφρόνησις and the τύπος of Philippians? Could the presence of the τόπος φιλοφρόνησις mean that Philippians is a friendly letter? In order to answer these kinds of questions, the following will be done: 1. The τόπος φιλοφρόνησις is described by applying the following procedure: * Ancient epistolary theorists and rhetoricians are consulted. * Modern theorists are consulted on the ancient epistolary practice. * A practical epistolographical analysis of a sample of ancient letters is attempted. 2. The letter to the Philippians is compared to ancient epistolary practice when identifying and determining the function of the τόπος φιλοφρόνησις. Because it is probable that the τόπος φιλοφρόνησις will be more prominent in letters of friendship (see Dahl, 1976:539), three papyri letters, classified by Stowers (1986:58-76) as letters of friendship and written between 95 B.C.E. and 58 C.E., are analysed. The ancient theories and practice of epistolography and rhetoric date from 200 B.C.E. Because the letter to the Philippians was possibly written between 52 C.E. and 61 C.E., it seems justified to interpret it in comparison with the three papyri letters from a rhetorical and epistolographical perspective, since all these letters were written within societies which applied the same rhetorical and epistolographical rules. These letters, as well as the letter to the Philippians, are formally analysed according to the theories of White (1972, 1986). Aristotle (Τέχνης Ῥητορικῆς 5.10) defines ὁ τόπος as "a place to look for a store of something, or the store itself; a heading or department, containing a number of rhetorical arguments of the same kind". Τόποι are of two kinds: κοινοὶ τόποι, or simply τόποι, the topics common to any kind of communication (Aristotle, Τέχνης Ῥητορικῆς ii.26.1), and εἴδη or ἴδια, specific topics, propositions of limited applicability, chiefly derived from ethics and politics (Aristotle, Τέχνης Ῥητορικῆς i.2.21). Cicero (Topica) defines a τόπος as a residing place of arguments and distinguishes between inherent τόποι (τόποι derived from the whole, the part, meaning and connection) and extrinsic τόποι (arguments not invented by the art of the orator) (Murphy, 1972:146-147). Quintilian defines the τόποι for arguments as those areas of the mind to which one may go for specific sources of proof (Quintilian, Institutio Oratoria 5.10). Although the ancient rhetoricians do not deal with the τόπος φιλοφρόνησις as such, Aristotle refers to something similar when he deals with the ἔντεχνοι. Ἔντεχνοι are rhetorical strategies which have to be invented by the orator himself. The ἔντεχνοι at one's disposal are ἦθος, πάθος and λόγος (Aristotle, Τέχνης Ῥητορικῆς i.2.3-6). The orator persuades by ἦθος (moral character) when the speech is delivered in such a manner as to make the speaker worthy of confidence. When one persuades by πάθος, one persuades by means of the hearers, by addressing their emotions. Lastly, persuasion is produced by the speech itself, called the λόγος. The ancient epistolary practice is described by modern theorists such as Dahl (1976) and Koskenniemi (1956). Φιλοφρόνησις as an ancient epistolary practice Dahl (1976:539) proposes 'friendly disposition' as a translation for φιλοφρόνησις. He states that philophronetic statements often prepare the way for expressions of disappointment, embarrassment, reproach, irony or warnings resulting from the friendship.
Koskenniemi deals with φιλοφρόνησις in detail. According to him, Demetrius of Phalerum views a letter as a postulation of friendship, and that is why he considers the friendly disposition the most important essence of the letter and mentions this function of the letter as fundamental (Koskenniemi, 1956:35, 37). Because Koskenniemi (1956:1-214) bases his study on ancient epistolary practice, and does a detailed study of philophronetic statements, his research will be summarized here. Koskenniemi (1956:128-154) considers the following as typical philophronetic phrases and formulae: * General expressions of concern about the recipient's welfare For example: ἐχάρη(ν) λαβὼν σοῦ ἐπιστολήν, ὡς ὑγιαίνεις - "I rejoiced at receiving your letter, that you are well". * Greetings For example: ἔρρωσο καὶ ὁ θεὸς διαφυλάττοι σε - "Be well and may God be with you". * Closing clause/paragraph For example: ἀσπάζομαί σε, ἄδελφε, καὶ εὔχομαί σε ὑγιαίνειν - "I salute you, brother, and pray that you are well". It is thus clear that the τόπος φιλοφρόνησις is not an unknown concept in the works of modern epistolary theorists - works which are mere descriptions of the ancient epistolary practice. Further details about the τόπος φιλοφρόνησις will now be identified by means of an analysis of ancient friendly letters. A practical analysis of a sample of ancient letters In order to see how φιλοφρόνησις functions in ancient letters, the following ancient letters will be analysed with the help of information given by Koskenniemi (1956:128-154) on philophronetic statements and formulae: PMert (A Descriptive Catalogue of the Greek Papyri in the Collection of Wilfred Merton) I 12, Ex Hercul (Excavations of Herculaneum) 176, and SelPap (Select Papyri) I 103. Papyri Merton I 12: A letter to Dionysios the physician (58 C.E.) This is a letter of Chairas to his friend Dionysios, a physician. The text and translation are from White (1986:145-146). The identification of philophronetic elements in PMert I 12 In order to determine the function(s) of φιλοφρόνησις, typical philophronetic elements will be identified within the different parts of the letters. The analysis (in each case) is my own, while information provided by White (1986:198-212) was found useful. * Letter opening The letter opening (lines 1-3) contains the typical greeting Χαιρᾶς Διονυσίωι τῶι φιλτάτωι πλεῖστα χαίρειν and the typical health wish καὶ διὰ παντὸ(ς) ὑγιαίνειν. According to Koskenniemi (1956:97-100) φίλτατος is used primarily in private letters. However, it does not concern family relationships or any friendship between the writer and reader. The qualification τῷ φιλτάτῳ indicates a business letter and is generally not taken to express feeling but an objective. This phrase is very seldom used in letters of friendship - the recipient of a friendly letter is seldom addressed as τῷ φιλτάτῳ. The greeting in this letter is thus not typical of a philophronetic statement. Regarding the health wish, διὰ παντὸ(ς) ὑγιαίνειν (lines 2-3) is, although a shorter parallel form, a typical health formula, and according to Koskenniemi (1956:128), the health wish and other statements on the welfare of the recipient are common philophronetic formulae. Letter body A phrase such as κομισάμενος ...
φιλοστοργίᾳ (lines 3-14) is a typical exclamation of joy at the receipt of a letter (see White, 1986:201). According to White such an exclamation is more characteristic of the opening of a letter. The opening of the letter seems to be the logical part to contain such a phrase if one takes the possible function of such a phrase into consideration. A possible function of the phrase κομισάμενος ... φιλοστοργίᾳ (lines 3-14) could be to express the writer's good attitude towards the recipient and to make sure that the recipient is also positive and ready for the rest of his argument or any innovations. Because lines 3-14 do not, however, only express joy at the receipt of the letter but contain an almost philosophical argument about friendship, they may be considered as the opening of the letter body. Although lines 3-14 do not fit into a specific philophronetic formula or phrase, they seem to form some kind of philophronetic paragraph. Chairas describes his joy at receiving Dionysios' letter (lines 3-4) and explains why he has not written sooner (lines 4-14). In line 14 another subject is dealt with when the writer turns to what the recipient had written in his letter. This subject serves as the background for the advice Chairas is about to ask. Lines 14-20 thus serve as the middle of the letter body. In the closing of the letter body, according to Koskenniemi (1956:145), Chairas could mean that Dionysios should remember him, and could express both his attitude towards their being friends as well as his current need. * Synthesis In this letter to Dionysios, the philophronetic elements are present in all three parts of the letter. Whereas one would expect a lot of philophronetic elements in the opening and closing of the letter body, if one considers the fact that Stowers considers this letter to be of the friendly type, this letter seems to be poor in the τόπος φιλοφρόνησις. Ex Hercul 176: A letter from Epicurus to a child This is a letter from Epicurus (the well-known philosopher) to a child (possibly an orphan of a certain Metrodorus, of whom Epicurus took charge). The text and translation are from Milligan (1927:5). "We have arrived in health at Lampsacus, myself and Pythocles and Hermarchus and Ctesippus, and there we have found Themistas and the rest of the friends in health. It is good if you also are in health and your grandmother, and obey your grandfather and Matron in all things, as you have done before. For be sure, the reason why both I and all the rest love you so much is that you obey these in all things ..." Formal analysis Because this letter is fragmentary, it is uncertain whether the whole belongs to the letter body or whether one can divide it into letter opening and letter body or even letter closing. The identification of philophronetic elements According to Koskenniemi (1956:133-134) one often finds in the body closing a short 'warning' to the recipient to take care of himself or herself. Typical clichés in this formula are καλῶς ποιήσεις and εὐχαριστήσεις μοι. Koskenniemi mentions that the degree of intimacy is determined by what is added to the formula. It is surprising that this information from Koskenniemi exactly describes lines 1-10 of the letter of Epicurus. We get a variation on the cliché καλῶς ποιήσεις - εὖ ποιεῖς (line 6) with ὑγιαίνεις (line 7). What is added to this to make it more intimate is the same health wish for ἡ μάμμη - a person near to the recipient - and even a bit of information on the state of health not only of the writer himself but also of his friends. This letter from Epicurus contains philophronetic elements, but because it is fragmentary, it is not possible to determine to what extent the philophronetic elements dominate. Select Papyri I 103: Petesouchos to his brothers and friends (95 B.C.E.)
This is a letter of Petesouchos to his brothers and friends, consisting of greetings, farewells and assurances of the writer's welfare. The text and translation are from White (1986:54-55). "Petesouchos, son of Panebchounios, to Peteharsemtheos and Paganis, sons of Panebchounios, and Pathemis, son of Paras, and Peteharsemtheos, son of Harsenouphis, and Peteharsemtheos, son of Psennesis, and Horos, son of Pates, greeting and good health. I myself am also well, along with Esthautis and Patous ..." The identification of philophronetic elements * Letter opening To the letter opening belong greetings and health wishes (White, 1986:198-202). This means that lines 1-12 cover the letter opening. In the letter opening we find a health wish - ἐρρῶσθαι (line 8) - with the assurance that the writer as well as his friends are well (lines 8-12). Koskenniemi (1956:132) is of the opinion that the writer may consider it important to express his interest in things or persons close to the recipient as part of the health wish. Petesouchos knows his brothers and friends well enough to know exactly what and who are of great importance to them. This enables him to show that they have a mutual interest in these persons. The extensiveness of the health wish in the letter from Petesouchos thus reveals something of the kind of relationship between Petesouchos and his friends. * Letter body White (1986:208) identifies lines 12-16 as the opening of the letter body. According to White (1986:211), the phrase περί with the genitive ὧν (line 16) is sometimes used in the middle of the letter body but more often at a later point in the body than at the very beginning. In this case, however, it seems that the prepositional phrase περί with the genitive ὧν introduces the middle of the letter body (περὶ ... ἥκατε, lines 16-21). From this part of the letter body it becomes clear that Petesouchos is worried about the apparent bad circumstances of the recipients. Concluding the letter body, Petesouchos asks the recipients to take care of themselves (lines 21-24). According to White (1986:205), this is a typical formula for concluding the letter body. Petesouchos, however, extends this health wish by adding the phrase translated "you would favour us" (line 22). This extension of the health wish increases the intimacy between the parties (Koskenniemi, 1956:134). * Letter closing The letter is closed with the typical greeting formula ἔρρωσθε (line 24), which is a philophronetic element as such (see 2.1). * Synthesis In this letter from Petesouchos almost the whole letter consists of philophronetic elements. The letter opening as well as the letter closing contains typical philophronetic elements (as discussed by Koskenniemi, 1956:128-154). The letter body, on the other hand, is a complete discussion of mutual interests and care. Conclusion The letters to Dionysios (PMert I 12) and from Epicurus (Ex Hercul 176) do contain philophronetic elements, but only minimally so. In the letter from Petesouchos to his brothers and friends (SelPap I 103), almost the whole letter consists of greetings and statements concerning welfare and love. This can be ascribed to the fact that the purpose of this letter is only to express friendship. What do all the philophronetic statements have in common? I concur with Koskenniemi (1956:132) that the health wishes probably concern matters of importance to the recipient. The analysis above would seem not only to confirm this point, but also to suggest that all philophronetic elements serve to make the recipient feel good about himself. To persuade by means of πάθος means to use statements about those things that are important to the recipient, in order to arouse feelings of pity, sorrow, sympathy or compassion. In the first letter, Chairas writes about his joy when receiving his friend's letter and wishes him health. These statements are used in the letter opening, body opening as well as in the letter closing, to make Dionysios open-minded with regard to the request in the middle of the letter body, namely the request for advice. In the second letter, Epicurus is most concerned about the child's health and actions. Whatever Epicurus wants to achieve, these statements are still everything the child likes to hear. In the third letter all the greetings, the farewell and the assurances of the writer's welfare serve the function of πάθος.
It thus seems as if philophronetic elements serve the function of the ἔντεχνος ἦθος, but primarily πάθος. I would, however, not consider it as merely πάθος. Petersen (1985:53) states the obvious fact that letters are surrogates for the personal presence of the addresser with the addressee. According to Petersen, Koskenniemi has demonstrated that the letter's fundamental structure reflects what happens in the face-to-face meeting of friends. And, Petersen continues, a letter thus functions to establish or maintain a relationship when the parties cannot meet in person. According to Koskenniemi (1956:94) one must take into account that the common epistolary style also contains philophronetic elements (not only the friendly letter). But is there a difference in the use of philophronetic statements in a friendly letter and in a letter of recommendation? The function of the τόπος φιλοφρόνησις In letter types other than that of friendship, one would expect the philophronetic elements to be only part of the letter opening and closing, whereas in friendly letters they are expected to be also part of the letter body. And the more the purpose of a letter is to express friendship, the more philophronetic elements will appear in the letter body. If the purpose is simply to express friendship, one would expect philophronetic elements also in the middle of the letter body. It is, however, important to keep in mind that other types of letters may also contain philophronetic elements in the letter body. This may be an indicator of a mixed type of letter. Since the τόπος φιλοφρόνησις is an essential element of all letters, it is possible to determine on the basis of the position of philophronetic elements whether the letter fits into the friendly type or not. The integrity of Philippians The problem surrounding the integrity of Philippians handicaps a formal analysis of Philippians. Although we have Philippians as one letter in UBS III, it is important for a study on ancient letters to consider it in its original form. Kümmel (1965) and Garland (1985) discuss the whole matter. According to some, the transmitted letter to the Philippians has secondarily been compiled by joining two or three originally independent epistles or fragments of letters. Advocates of this view point out that Paul in Philippians, until 3:1, offers the paragon of a clear and precise letter, but that in 3:1 an epistolary conclusion begins which is interrupted in 3:2 by a warning, while 4:4 connects very well with 3:1. On the basis of these considerations some critics suppose that 3:3-4:3 is an interpolation. Other critics find that the thanks for the gift of the Philippians (4:10-20) is also out of place at the end of the letter. Moreover, 3:2-4:3 presupposes no imprisonment of Paul. As a result, we have the view that Philippians is composed of three letters, each chronologically following upon the previous one (Kümmel, 1965:235). Because of this problem modern theorists also have difficulty in dealing with the letter. In his analysis of the letter, White (1972:73-90) ignores 2:25-4:9.
For Kümmel (1965:237), however, there is no sufficient reason to doubt the original unity of the transmitted Philippians. Garland (1985:143) is of the opinion that the arguments against the integrity of the letter are just as plausible as the counter-arguments, and he describes this debate as a 'stalemate' in argumentation. Watson (1988) analyses Philippians rhetorically in order to address the unity question. He (Watson, 1988:88) concludes his article with the following assumptions: * If the partition is maintained, one must assume that the host letter and the interpolated letters were redacted so that the rhetoric of the whole has been unified in the present form. * Since the present form of Philippians conforms well to the classical rhetorical conventions, the integrity can be assumed. Although I am of the opinion that Watson uses the rhetorical perspective incorrectly to analyse a letter formally, this article is proof of the fact that the debate on the integrity of Philippians has certainly reached stalemate. It is, however, beyond the limits of this article to survey this discussion in detail. For the purpose of this article it can be assumed that Philippians as we have it today is a single unit and can be interpreted as such. * Letter opening The letter opening consists of a salutation in 1:1-2 and a thanksgiving in 1:3-11. * Letter body The letter body of the letter to the Philippians is introduced by the typical formula γινώσκειν δὲ βούλομαι ... (1:12). It can be divided into the following parts: Body opening: 1:12-26 Body middle: 1:27-4:9 Body closing: 4:10-20 In the middle of the letter body Paul switches from the I to the you (1:27). This can be considered as the transition from the opening of the body to the middle. The middle of the letter body is, in 2:19-30, interrupted by the information about Timothy. * Letter body: opening: 1:12-26: Following upon the thanksgiving, this is an autobiographical paragraph concluded by 1:26 - ἵνα τὸ καύχημα ὑμῶν περισσεύῃ ... ἐν ἐμοί ... This paragraph can be considered as another example of φιλοφρόνησις in this letter, because such a paragraph tells us something about the nature of the relation between Paul and the Philippians. Paul expresses his concern for their well-being, their growth in faith and joy. In 4:10 Paul is delighted in God for the Philippians: ἐχάρην ... ὅτι ... ἀνεθάλετε τὸ ὑπὲρ ἐμοῦ φρονεῖν. This is an example of the proskynema formula. By using this formula, Paul refers to the good characteristics of the recipients. He actually thanks them for their caretaking. It is evident that the writer and the recipients have a special relationship. * Letter closing: The letter closing is covered by 4:21-23. The body closing (4:10-20) offers a switch to the I again, and the letter closing contains the typical secondary greetings (4:21) and blessing (4:23). * Synthesis From the analysis of Philippians it is clear that Paul uses φιλοφρόνησις in every part of the letter. This, however, does not seem to make Philippians a τύπος φιλικός, because the philophronetic elements in the letter body of Philippians do not dominate when one realizes the extensiveness of the letter body.
The whole opening of the body of the letter to the Philippians is an example of persuading by ëOoq, when Paul tries to increase his trustworthiness.The proskynema fontiulae in the letter opening and closing of the letter body are examples of persuasion by means of mOoq.Another example of rtdOoq is found in the middle of the letter body, when Paul talks about feelings to be shared.All these exam-pies arouse the readers' emotions aiid make them open-minded with regard to tlie infomiation given in the rest of the letter body. The function o f the TOJto(; cpiXo9póvr|oi(; in Philippians It seems as if one may be able to detennine fi-om the philophronelic elements the nature of the relationship between writer and recipient.With the help of the fol lowing fundamental theses illustrated by Petersen (1985:63-64), this relationship will be made clear: From this it should be clear that by reading and interpreting the philophronelic elements, one will perhaps be in a better position when attempting to read also between the lines.By studying (piXo<ppóvTicn<; in Philippians, for example, one can catch a glimpse of what the relationship between Paul and the Philippians might have been.When Paul uses the word yivoxTKexe in 2:22, it implies that the relationship between him and the readers is an already existing relationship which he maintains by (piXo<pp6vticn(;.The frequency of philophronelic elements in all the parts of the letter is an indicator of a high degree of intimacy between Paul and the Philippians. Conclusions The following can be concluded: Thus (piAxxppóvricnq is a xójtoq (present in all letters) that is useful in many ways for the writer of a letter.Studying (piXo(ppóvr|CTiq in letters also enables one to see something about the writer's means of persuasion, the type of letter and the relationship between addresser and addressee. B ibliograp hy e n tifíc a tio n o f xÓTtcx; (piX,o<ppóvTiai(; in a n c ie n (1956:128-154) on philophronelic statements and fonnulae: PMert (A De scriptive Catalogue of the Greek Papyri in the Collection of Wilfred Merton) I 12, Ex Hercul (Excavations of Herculaneum) 176, and SelPap (Select Papyri) I. The fim ction o f the topos philophronesis in the letter to the Philippidiix_____________________ This is a letter from Epicums (the well-known philosopher) to a child (possibly an orphan of a certain Metrodonis, of whom Epicums took charge).The text and translation are fromMilligan (1927:5 The function o f the topos pliilophronesis in the letter lo the PhiHpptans 1986:208) id en tifies lin es 12-16 (nf| ... é 7 tin E )iéX .T ix a i) as the o p en in g in g to W hite(1986:211), the phrase Jiepl w ith the gen itiv e mv (lin e 16) is so m etim es u sed in the m iddle o f the letter b od y but m ore often at a later point in the b o d y than at the very beginning.In this c a se it, h o w ev e r, se e m s that the pre p osition al phrase jcepi w ith the g en itive (bv, introduces the m iddle o f the letter b od y ( j K p i ... 
fÍK a x E (lin es 16-21).From this part o f the letter b o d y it b e c o m e s clear that P ete so u ch o s is w orried about the apparent bad circu m stan ces o f the Petosouchos, son o f Panebchounios, to Peteharsemtheos and Paganis, sons o f Panebchounios, and Pathemis, son o f Paras, and Peteharsemtheos, son o f Harsenouphis, and Peteharsemtheos, son o f Psennesis, and Horos, son o f Pates, greeting and good health.I my self am also well, along with Esthautis and Patous * The part of the letter where 9 iX,o(pp6vr|aiq appears, as well as the quantity of philophronelic elements, depends on the type of letter.Thus the xwoq of a letter and the tórox; are closely related.All tújioi (piX.iKoícontain xónoq (piAxxppóvriou;, but not all letters with tótioí; (piAxxppóvriCTi<; are xÚTtoi 9 i^ikoí.* What is added to the typical philophronelic elements increases the degree of intimacy between the addresser and addressee.* Philophronelic elements serve the function of roxBoq and ê0oq.* An analysis o f philophronelic elements can shed light on aspects of the nature of the relationship between addresser and addressee.* The letter to the Philippians also employs philophronelic elements.* The function of tlie philophronelic elements in Piiilippians fiinction as TwiBoq and eOo<; and it is i\o indication of a friendly letter type. ARISTOTLE MCMXLVII T é x v n q 'PnTOpiKtii; (Loeb Classical Library) London : Wil liam Heinemann DAHL, N. 1976 Letter (//; The Interpreter's Dictionary o f the Bible, Supplementary vol ume.Nashville ; Abingdon p 538-540 ) DOTY, W G 1983.Letters in Primitive Christianity Philadelphia Fortress Press GARLAND, D 1985.The Composition and Unity o f Philippians Novum Teslameiiliim, 21 141-173 KOSKENNIEM I, II 1956 Studien zur Idee und Phraseologie des Griechischen Briefes bis 400 n Chri Helsinki : Suomalaisen Kirjallisuuden Kirjapaino KuMMEL, W G 1965 Introduction to the New Testament London : SCM Press MALHERBE, A J 1988 Ancient Epistolary Theorista Georgia : Scholars Press MILLIGAN, G 1927 Selections from the Greek Papyri Cambridge : University Press MURPHY, J.J 1972 A Synoptic History o f Classical Rhetoric New York : Random House PETERSEN, N R 1985.Rediscovering Paul Philadelphia : Fortress Press QUINTILIAN 1947.Im litiilio O raiona (Loeb Classical Library) London : William Heinemann STOW ERS, S K 1986 Letter Writing in Greco-Roman Antiquity Philadelphia : The W'estminster Press WATSON, D F 1988 A Rhetorical Analysis o f Philippians and Its Implications for the Unity Question Novum Teslamenlum.30:57-88 WHITE, J.L 1972 The Form and Function o f the Body o f the Greek Letter in Non-Literary Papyri and in Paul the Apostle Michigan Edwards Brothers, Inc W HITE, J L 1986 Light from Ancient Letters (Foundations and Facets; New Testament) Philadelphia Fortress Evrexvoi are rhetorical strategies which have to be invented by the orator himself.The ivrexvoi that are to one's disposal, are ë0o<;, róBoq and XóycK; (Aristotle, Téxvriq 'PtixopiKfiq i.2.3-6).The orator persuades by ë0o<; (moral character), when the speech is delivered in such a manner as to make the speaker worthy of confidence.Wlien one persuades by m0o<;, one persuades by means of the hearers, by addressing their emotions.Lastly, persuasion is pro duced by the speech itself, called the Ax>yo<;. 
When inventing a speech (or any other vehicle of communication), one uses either axExvoi orêvxExvoi as a means of persuasion (Aristotle, Téxvriq 'Pr|xopiKf|q i.2.2, 15.1).'Axexvoi are rhetorical strategies which are independent of art, being already in existence and ready for use -rhetorical strategies such as witnesses and contracts.' 23 y(á)p (K)ax' áváyKr|v ÉTrEÍyonav 24 jcepi 5ê xfjq OKA.r|p5í gypayai; 5íx) 25 yévri Eivai. xó xtïí; 5ia-^.uxiKfjq 26 noi ypacpTov jcÉ|j.Vj/ov É'axiv yócp 27 Ktti f) XEXpacpápnaKoq OKX,r|pá, ii 5Ê 28 éTciaxoXfi aijxn xaúxTi ao i 29 éo(ppáyi(axai). Eppcoao Kai 30 |i£nvr|ao xSv EÍpri|i(Évcúv) 31 e(£xo\ x;) Népwvoí; xou Ktipíov), 32 ^ir|vó<; repnaviKoC a Aiovuaícoi 33 íaxpGi Eppcjoo ... elpTm(évcov) (line 29-30) covers the letter closing.According to Koskennieini ëppcxro (line 29) is a typical philophronetic fonnula(1956:151).I would like to add to this nénvtioo (line 30) as another philophronetic statement.Although we do not know what Chairas means by eiprm(évcov) (line 30) -his request for advice (lines 17-23), or perhaps his explanation why friends do not thank by words -this statement could correspond to the 'remembrance' formula as expounded by The closing of the letter body is introduced by a typical request such as epcoxw (line 20) (seeWhite, 1986:208).* L etter closing '' and Almentis and Phibis and Psenosiris and Phaphis and all our people Do not be grieved at the de parted ones.They were expected to be killed.He did nothing bad to us but, quite to the contrary, he has taken care o f us.Concerning this matter, if you want, write to me We heard that the mice have eaten up the crop Please come here to us or, if you prefer, to Diospolis to buy grain For the rest, you would favour us by taking care o f yourselves that you stay healthy, Horos and Petosiris are well.(lines 16-19) and that he would like to share in seeking a solution (lines 19-21). recipients
7,074.2
1994-06-11T00:00:00.000
[ "History", "Linguistics" ]
Ultrathin Amorphous Carbon as Active Part of Vibrating MEMS †
Amorphous carbon in ultra-thin thicknesses shows amazing mechanical properties that make it particularly interesting for MEMS, especially as a vibrating membrane. We present the experimental results obtained on devices comprising composite membranes a few nanometers thick suspended above cavities of 1 to 2 μm in width. The behaviors in quasi-static mode (at low frequency) and in resonant mode were observed and measured. Resonance frequencies of 20 MHz to 110 MHz, depending on the geometry, were measured.
Introduction
Ultrasound techniques are widely used as an analysis tool in various fields, from medical imaging to materials science. The spatial resolution of the resulting information is related in particular to the geometry of the transducers. In the case of Micromachined Ultrasonic Transducers (MUT), the active part is a vibrating membrane operated at its natural frequency. The reduction of the vibrating surface of the devices to dimensions in the micrometer range is to be associated with a decrease in the thickness of the membrane to achieve functional displacements.
We present here preliminary studies [1] of MEMS devices having micrometer-large vibrating areas. An appropriate membrane material, a stamping transfer process and an original characterization setup have been implemented to produce natural resonant frequency measurements and discussion, and even to allow the mapping of nodes and antinodes for higher-order modes.
Ultrathin Membrane Material
Reducing the thickness of the moving part to a few nanometers is a way to get significant amplitudes of vibration. It implies the use of a material having suitable flexibility and tenacity even at ultralow thicknesses. Amorphous carbon is a particular arrangement of carbon atoms between graphene and diamond that meets these mechanical criteria [2].
In this study, the amorphous carbon layer is coated with platinum on one face and with nickel on the other face, in order to ensure electrical conductivity for capacitive actuation and for technological needs. The membrane material is therefore a three-layer metal/carbon/metal stack.
Two series of membranes differing in carbon thickness were studied (referred to as A and B). The respective carbon thicknesses of A and B are 3.7 ± 0.3 nm and 8.1 ± 0.3 nm; platinum and nickel thicknesses are 4.5 ± 0.3 nm and 7.0 ± 0.3 nm. The three-layer stack is directly deposited onto a sacrificial PMMA layer using an Electron Cyclotron Resonance (ECR) process [3].
Device Realization
Each test chip was previously manufactured with a set of 0.8 to 2.3 μm wide trenches (50 and 100 μm long). The electrode set is designed for high-frequency operation: top ground electrodes are patterned along the trench banks and signal electrodes lie all along the bottom of the trenches, with vertical vias to connect surface coplanar pads.
The transfer operation is a stamping process (Figure 1): the platinum layer of the membrane stack is placed in contact with the surface of the test chip, and then the sacrificial layer is dissolved using acetone [1]. The three-layer stack lies on the surface of the chip, locally suspended above the trenches, clamped on the trench banks, and electrically connected to the top ground electrodes.
Contact pads are opened through the membrane stack by a standard lithography process; during the removal of residual resins, the nickel layer protects the integrity of the carbon core of the stack.
Measurement Setup
Devices are actuated by applying a voltage between the signal and ground electrodes. Amplitudes of deflection of the mobile part are measured using an AFM in tapping mode [1,4]. The AFM tip is coated with an insulator to avoid electrical contact between the equipment and the sample; the spring constant ranges from 1 to 7 N/m and the cantilever resonance frequency fc is about 75 kHz.
When a DC voltage is applied, the suspended area may be scanned by the AFM tip, resulting in a static deflection mapping.
When a low-frequency voltage is applied (i.e., the voltage generator frequency fg << fc), the AFM tip follows the deflection of the membrane. The local deflection versus time is recorded.
When a high-frequency (fg >> fc) voltage is applied, the AFM tip does not follow the membrane displacements; it is repelled to an upper position and the actual measurement taken is the envelope of the membrane vibration. This configuration allows for measurements of the local amplitude of vibration depending on the applied frequency, and for spectrum analysis. Moreover, when the applied signal fits one of the resonance frequencies and the AFM tip is set to scan an area, the resulting data build a map of nodes and antinodes.
Low Frequency
At low frequencies, the displacement of the membrane instantly follows the applied voltage oscillations [5]. The amplitude of the displacement depends on the voltage swing (Figure 2).
Frequency Spectrum
The study of vibration amplitudes as a function of frequency is carried out from 1 MHz up to 110 MHz. For each value of the applied voltage, the displacement measured at 1 MHz (i.e., far from any resonance frequency) is considered as the origin of the spectrum amplitudes.
A combination of DC bias (Vdc) and AC excitation voltage (vac) is applied to the samples. The instantaneous driving force takes the standard capacitive form (reconstructed here from the surrounding definitions, as the equation did not survive extraction)

F(t) = A(z) [Vdc + vac sin(2π fg t)]²,  (1)

in which A(z) is a coefficient depending on the membrane displacement z, and fg is the frequency of the AC excitation voltage.
For low AC excitation voltages, the resonance fr clearly appears at fr = fg [1,6] (Figure 3). When the AC voltage is increased (0.5 V < vac < 5 V), the spectrum gets more complex (Figure 4a): harmonic resonance frequency peaks appear at fg = 2fr, and even at fg = 3fr. Peaks also appear at fg = fr/2 and even fg = 3fr/2, corresponding to a membrane vibration frequency fr = 2fg. In addition, nonlinear phenomena such as Duffing behavior arise, inducing hysteresis and widening the resonance peaks. Very high amplitudes of deflection were measured around the main resonance peak. AFM scans taken on the vibrating area at different frequencies reveal node and antinode locations in accordance with high-order transversal and longitudinal modes of resonance (Figure 4b) [1,7].
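The appearance of drive components at both fg and 2fg follows directly from expanding the square in (1). A minimal numerical sketch is given below; the constant A and the voltage values are illustrative placeholders, not the paper's calibrated parameters:

import numpy as np

# Illustrative parameters: Vdc = 4 V, vac = 0.5 V, drive frequency fg = 26 MHz,
# and a constant geometric coefficient A (in reality A depends on z).
A, Vdc, vac, fg = 1.0, 4.0, 0.5, 26e6
t = np.arange(0, 2e-6, 1 / (fg * 64))          # 2 us of signal, 64 samples/period
F = A * (Vdc + vac * np.sin(2 * np.pi * fg * t)) ** 2  # capacitive force ~ V(t)^2

spec = np.abs(np.fft.rfft(F)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
for f0 in (fg, 2 * fg):
    k = np.argmin(np.abs(freqs - f0))
    print(f"component at {freqs[k] / 1e6:.1f} MHz: {spec[k]:.3f}")
# Expanding the square gives a term in sin(2*pi*fg*t) with weight 2*A*Vdc*vac and a
# term at 2*fg with weight A*vac^2/2, so the membrane resonance fr can be excited
# either with fg = fr or with fg = fr/2, as observed in the measured spectra.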
Conclusions
Membranes made with ultrathin amorphous carbon as a core material and metal coating have demonstrated capacitive actuation, resonant behavior, and large deflection capabilities. They appear to be quite relevant for implementation as the active part of vibrating MEMS.
Author Contributions: All three authors contributed to the device fabrication, the experiments, and the results analysis.
Figure 1. Realization of the devices: (a) scheme of the stamping process; (b) SEM pictures of membranes suspended above the trench.
Figure 2. AFM measurements of the displacement of a point of the membrane versus time at low frequency: (a) with the frequency set at 5 Hz and various AC voltage amplitudes; (b) with the AC voltage amplitude set at 16 V and various low frequencies. The operation of this measurement setup appears limited to frequencies lower than 100 Hz.
Figure 3. Natural resonance observation and measurement at low excitation level. The AFM is used to measure the envelope of vibration of the membrane. (a) Frequency spectrum taken on a 1.3 μm wide sample device, with Vdc = 4 V and vac = 0.5 V; the resonant frequency appears to be f0 = 52 MHz. (b) Resonance frequencies related to trench width for the two sets of devices differing in the thickness of amorphous carbon. Resonance frequencies range from 20 MHz for the 2.3 μm wide trenches to 110 MHz for the 0.8 μm wide trenches.
Figure 4. Resonances at high excitation of a 2.3 μm wide device with an A-type membrane with natural frequency f0 = 26 MHz: (a) spectrum measured for Vdc = 15 V and vac = 5 V; bullets indicate the frequencies corresponding to the scans shown in (b). The amplitude of vibration around 40 MHz is 80 nm on a 2.3 μm large suspended width. (b) Amplitude scans taken when biased with Vdc = 15 V and vac = 5 V, at frequencies of 14 MHz and 62 MHz on a 30 μm × 30 μm area, at 30 MHz on a 10 μm × 10 μm area, and at 84 MHz on an 8 μm × 8 μm area.
1,873.2
2018-12-04T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
Genome‐wide SNP identification in Fraxinus linking genetic characteristics to tolerance of Agrilus planipennis
Abstract
Ash (Fraxinus spp.) is one of the most widely distributed tree genera in North America. Populations of ash in the United States and Canada have been decimated by the introduced pest Agrilus planipennis (Coleoptera: Buprestidae; emerald ash borer), with negative impacts on both forest ecosystems and economic interests. The majority of trees succumb to attack by A. planipennis, but some trees have been found to be tolerant to infestation despite years of exposure. Restriction site-associated DNA (RAD) sequencing was used to sequence ash individuals, both tolerant and susceptible to A. planipennis attack, in order to identify single nucleotide polymorphism (SNP) patterns related to tolerance and health declines. De novo SNPs were called using SAMtools and, after filtering criteria were implemented, a set of 17,807 SNPs was generated. Principal component analysis (PCA) of SNPs aligned individual trees into clusters related to geography; however, five tolerant trees clustered together regardless of geographic location. A subset of 32 outlier SNPs identified within this group, as well as a subset of 17 SNPs identified based on vigor rating, are potential candidates for the selection of host tolerance. Understanding the mechanisms of host tolerance through genome-wide association has the potential to restore populations with cultivars that are able to withstand A. planipennis infestation. This study was successful in using RAD-sequencing to identify SNPs that could contribute to tolerance of A. planipennis, a first step toward uncovering the genetic basis for host tolerance to A. planipennis. Future studies are needed to identify the functionality of the loci where these SNPs occur and how they may be related to tolerance of A. planipennis attack.
| INTRODUCTION
Agrilus planipennis Fairmaire (Coleoptera: Buprestidae; emerald ash borer) is a metallic green beetle native to northeastern Asia that has become a pest of North American ash (Fraxinus spp. L.; McCullough et al., 2004). This pest was introduced into the Detroit/Windsor area of Michigan, USA/Ontario, Canada, and quickly dispersed via human assistance, including movement of firewood, nursery stock, and wood packing material (Buck & Marshall, 2008; Cappaert et al., 2005). In its native range, A. planipennis coevolved with Manchurian ash (F. mandshurica Rupr.) and is a secondary pest of this tree species, requiring a primary stressor for successful attack (Rebek et al., 2008; Whitehill et al., 2012). Kelly et al. (2020) confirmed that Asian ash species occur in distinct phylogenetic lineages with candidate genes for tolerance. Ash species in North America lack this natural resistance and succumb to attack, regardless of the presence of a primary stressor, often within one to four years after initial attack (Eyles et al., 2007; Rebek et al., 2008). While black (F. nigra Marsh.), green (F. pennsylvanica Marsh.), and white (F. americana L.) ash are the most susceptible in the introduced range of A. planipennis, all North American ash are susceptible (Anulewicz et al., 2006; Cappaert et al., 2005; Poland & McCullough, 2006).
In this introduced range, the life cycle of A. planipennis is typically completed within one year (Herms & McCullough, 2014). Beginning in May, adults emerge through D-shaped exit holes and mature within seven days as they feed on canopy leaves.
Males identify suitable mates via visual and contact cues, and females feed on foliage for an additional five to seven days after mating before oviposition begins (Lelito et al., 2007, 2009; Poland & McCullough, 2006). During oviposition, females deposit eggs in bark cracks and crevices, where eggs hatch within two weeks, and larvae tunnel into the bark to feed on the phloem and vascular cambium of the tree from late summer to autumn. Phloem consumption creates serpentine-shaped galleries, which sever photosynthate transport, leading to eventual tree mortality. After completing four instars, larvae overwinter in pupal chambers and pupation occurs the following spring (Herms & McCullough, 2014; Poland & McCullough, 2006). While a one-year life cycle is most common, a two-year life cycle does occur, especially at more northern latitudes, with larvae overwintering in intermediate instars within the phloem (Wei et al., 2007).
The proliferation of A. planipennis throughout forests in North America has caused the mortality of millions of ash trees, producing devastating ecological and economic impacts (Flower et al., 2013; Kovacs et al., 2011; Poland & McCullough, 2006). These impacts have created long-lasting changes to North American forest ecosystems and may require substantial restoration efforts (Herms & McCullough, 2014; Marshall, 2020; Poland & McCullough, 2006). Additional negative impacts include a reduction in the amount of wood products produced from ash and diminished esthetics in urban and suburban neighborhoods (Flower et al., 2013; Poland & McCullough, 2006). The cost of removal and replacement of ash trees in urban landscapes has been estimated at $12.5 billion from 2010 to 2020 (Kovacs et al., 2011). Additionally, the estimated loss to timberlands in the United States is $300 billion (Poland & McCullough, 2006). Given the large-scale distribution of A. planipennis, management options for control of the pest remain limited, which is further complicated by natural and urban landscapes. Therefore, a long-term solution to preserving ash will depend on successfully identifying resistant or tolerant populations.
Resistance to wood-boring beetles is typically a function of female host selection and larval survival rate (Hanks, 1999). Therefore, resistance mechanisms can be placed into three general categories: antixenosis, antibiosis, and tolerance (Kogan & Ortman, 1978; Painter, 1951). Antixenosis traits are aimed at decreasing preferences for feeding and/or oviposition, while antibiosis results from traits that negatively affect insect growth, survival, and/or fecundity. Lastly, tolerance is the ability of the host to withstand infestation while remaining relatively healthy compared to other individuals undergoing the same level of attack.
There is evidence of antixenotic traits in the interaction between A. planipennis and its hosts. Adults of A. planipennis express variation in both feeding and oviposition host preferences. When given a choice, adult beetles preferentially feed on white, green, and black ash compared with Manchurian, blue, and European ash (F. excelsior L.; Pureswaran & Poland, 2009). North American ash species receive more eggs compared with Manchurian ash, suggesting a female choice of susceptible hosts in order to increase larval performance (Gripenberg et al., 2010; Rigsby et al., 2014). Within North American ash species, inter- and intraspecific variation in volatile emissions and oviposition preferences of A. planipennis have been shown to play a role in resistance (Anulewicz et al., 2008; Chen et al., 2011; Koch et al., 2015).
Bark of blue ash has a phenolic composition that may contribute to its resistance relative to white, green, and black ash (Whitehill et al., 2012). Bark smoothness, as a physical characteristic, may be a limiting factor in oviposition locations and subsequently limits the number of larvae that could attack a tree at a given time (Marshall et al., 2013). Additionally, variability in ash growth rates has been related to susceptibility to A. planipennis, with trees tolerant of attack having more rapid and constant growth compared to susceptible trees (Boyes et al., 2019).
Antibiosis interactions also exist in larval development. Mechanisms that affect larval performance mainly focus on variation in phenolic and defense protein chemistry (Cipollini et al., 2011; Villari et al., 2016; Whitehill et al., 2011, 2012). Previous studies comparing phenolic and lignin profiles of ash species found that Manchurian ash contains unique profiles that may contribute to its resistance to A. planipennis (Cipollini et al., 2011; Whitehill et al., 2011, 2012). Four potential defense-related proteins are expressed more than fivefold higher in Manchurian ash than in other species and may contribute to resistance.
Mechanisms of tolerance are more difficult to quantify and therefore have not been as well studied. Identifying the genetic variants that allow these surviving trees in North America to tolerate infestation would greatly aid in the conservation of ash (Villari et al., 2016). Even with severe levels of ash mortality in the introduced range, certain trees have been able to survive after years of repeated exposure (Marshall et al., 2013). This has led to the identification of trees with differing apparent tolerance levels to A. planipennis attack. Trees classified as tolerant survive in spite of signs of A. planipennis attack and damage (Marshall et al., 2013).
The objectives of this study were to (1) identify ash single nucleotide polymorphisms associated with the tolerance-susceptibility gradient to A. planipennis, (2) identify phenotypic and genotypic relationships between trees relative to this tolerance-susceptibility gradient, and (3) test the hypothesis that tolerance and susceptibility are linked to identifiable genetic markers.
Trees were sampled from six geographic locations, among them Houghton County, Michigan, USA (n = 5; Figure 1). All individuals were naturally occurring and not planted. Within most of these locations, green ash was the dominant species, with white ash being less common. However, in Houghton County, white ash dominated. Selection of trees was based on their occurrence along an apparent conceptual gradient from high tolerance to high susceptibility (i.e., low tolerance) to A. planipennis attack, similar to a gradient described by Hietala (2013). Apical buds were collected for DNA analysis.
| Tree assessment
Selected trees were identified to species and assessed on vigor (overall tree health: categorical 1-5, with 1 being high vigor [crown with relatively few dead twigs; normal foliage color and density] and 5 being low vigor [more than half of crown dead]), crown dieback (percent of dead branch tips: 5%-100%), and signs of A. planipennis attack (presence/absence: bark splits, exit holes, woodpecker damage, epicormic sprouts).
Assessments followed those conducted in previous studies (Clark et al., 2015; Marshall et al., 2009, 2010, 2012, 2013), which were derived from Millers et al. (1991). After assessment, 47 individuals were selected for analysis and given an overall categorization of tolerant or susceptible to A. planipennis infestation. This tolerant-susceptible categorization was similar to Hietala (2013). Individuals with a vigor ≤3 and dieback of ≤30% were considered tolerant. Individuals with a vigor of ≥3 and dieback >30% were considered susceptible. Chi-square analysis was used to test the null hypothesis that tolerance categorization was independent of species. Additionally, diameter at breast height (dbh, 1.37 m above soil surface) was collected as a size metric and compared between species and tolerance groups using a two-way analysis of variance (ANOVA).
| DNA extraction and quantification
Entire bud samples (two to three buds) were homogenized using sterile ceramic mortars and pestles, which were first cooled with liquid nitrogen. A DNeasy Plant Mini Kit (QIAGEN) was used to extract total genomic DNA following the manufacturer's protocol. DNA from each sample was quantified using UV spectrophotometry (NanoDrop 1000) absorbance. All samples were subsequently diluted to a concentration of 25 ng/μl.
| Library creation and SNP discovery
Genomic DNA was converted into nextRAD genotyping-by-sequencing libraries (SNPsaurus, LLC) as described by Russello et al. (2015). Briefly, genomic DNA was first fragmented with Nextera reagent (Illumina, Inc.), which also ligates short adapter sequences to the ends of the fragments. The Nextera reaction was scaled for fragmenting 7 ng of genomic DNA, although 14 ng of genomic DNA was used as input to compensate for the amount of degraded DNA in the samples and to increase fragment sizes.
The genotyping analysis used custom scripts (SNPsaurus, LLC) that trimmed the reads using bbduk (BBMap tools, http://sourceforge.net/projects/bbmap/). The command was as follows:
bash bbmap/bbduk.sh in=$file out=$outfile ktrim=r k=17 hdist=1 mink=8 ref=bbmap/resources/nextera.fa.gz minlen=100 ow=t qtrim=r trimq=10
Next, a de novo reference genome was created by collecting 10 million reads in total, evenly from the samples, and excluding reads that had counts fewer than 7 or more than 700. The remaining loci were then aligned to each other to identify allelic loci and collapse allelic haplotypes to a single representative. All reads were mapped to the reference with an alignment identity threshold of 95% using bbmap (BBMap tools). In order to assess the proportion of sequence reads that originated from Fraxinus spp. versus other species, 1,000 high-quality reads from each sample were subjected to BLASTN analysis in the NCBI database.
Genotype calling was done using SAMtools and BCFtools (Li et al., 2009). The resulting VCF file was filtered to remove alleles with a population frequency of less than 0.03. Loci were removed that were heterozygous in all samples or had more than two alleles in a sample (suggesting collapsed paralogs). The absence of artifacts was checked by counting single nucleotide polymorphisms (SNPs) at each read nucleotide position and determining that SNP number did not increase with reduced base quality at the end of the read. All polymorphic sequences retained were subjected to BLASTN analysis in the NCBI database.
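The first-pass filters just described (dropping alleles below a population frequency of 0.03 and loci heterozygous in every sample) are simple to express programmatically. Below is a minimal sketch over a toy 0/1/2 genotype matrix; the encoding and data are illustrative assumptions, and multi-allelic removal, which needs the raw VCF records, is omitted:

import numpy as np

# Toy genotype matrix: rows = loci, columns = samples; 0/2 = homozygous for the
# reference/alternate allele, 1 = heterozygous, -1 = missing call.
G = np.array([[0, 1, 2, 1],
              [1, 1, 1, 1],      # heterozygous in every sample -> dropped
              [0, 0, 1, 0],
              [2, 2, 2, 1]])

def keep_locus(g, maf_min=0.03):
    called = g[g >= 0]
    if called.size == 0 or np.all(called == 1):   # het in all samples
        return False
    alt_freq = called.sum() / (2 * called.size)   # alternate-allele frequency
    maf = min(alt_freq, 1 - alt_freq)
    return maf >= maf_min                          # population-frequency filter

mask = np.array([keep_locus(g) for g in G])
print(G[mask])   # loci surviving the first-pass filters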
VCFtools (Danecek et al., 2011) was used to further filter SNPs based on the following criteria: (1) Phred quality score, (2) minor allele frequency, (3) maximum missing genotype, and (4) a minimum mean read depth of 14 (Table 1). Loci that failed to meet the quantification threshold for any of the filtering criteria were removed and excluded from subsequent analyses. Samples were not filtered based on Hardy-Weinberg expectations because the goal of this study was to identify polymorphic loci under selection, which are expected to deviate from equilibrium. The VCF file was converted into the file formats necessary for analysis using PGDSpider 2.1.1.3 (Lischer & Excoffier, 2012).
Using a permutation test of pairwise identity-by-state (IBS) distance, we tested the hypothesis that green and white ash individuals were less similar between than within putative species. Phenotypes were then coded as tolerance groups.
| Detection of markers under selection
BAYESCAN v2.1 (Foll & Gaggiotti, 2008) was used to identify outlier loci based on populations defined by the PCA clusters, as well as populations defined by vigor rating. BAYESCAN uses a hierarchical Bayesian method to estimate population-specific FST coefficients as a fixation index, as described by Beaumont and Balding (2004). A conservative neutral model available in BAYESCAN (prior odds = 1,000) was used to minimize the number of false positives. Prior odds, or prior probability, is the likelihood of the null hypothesis being true before the test is performed. This increase in prior odds corresponds to the selection model being 1,000 times less likely than the neutral model, which was a more appropriate assumption given the number of SNPs included in this analysis (Lotterhos & Whitlock, 2014). After 100,000 iterations, SNPs with a posterior distribution over 0.95 were considered outliers. High FST values (outliers) suggest that the locus has undergone directional selection (in contrast to balancing selection). Fisher's exact test was used in PLINK v1.07 (Purcell et al., 2007) to test the association of the above-identified outliers with the tolerance groups.
| Phenotypic classification
The 47 ash trees selected for this study were sampled from six different geographic locations (Figure 1a). Based on morphological characteristics, nine trees were identified as white ash and the remaining trees were identified as green ash. The trees were classified as tolerant or susceptible (Figure 2).
| SNP variation
The pairwise IBS distance test comparing SNP similarity between species (green and white) failed to reject the hypothesis that pairs between the two species were less similar than pairs within species (p-value = .240). This lack of variability between pairs extended into the other pairwise tests, with green pairs not more similar than white pairs (p-value = .756). Additionally, pairs between tolerance groups (tolerant and susceptible) were not less similar than pairs within groups (p-value = .620). However, susceptible pairs were more similar than tolerant pairs (p-value = .027).
Four major clusters (labeled as right, lower, upper, and middle) were identified in the PCA based on 17,807 SNPs (Figure 3a). Diverging substantially from all other groups, the cluster on the right contained trees from different geographic locations, including Houghton and three Metroparks (Kensington, Oakwoods, and Willow).
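A PCA of this kind projects each tree's genotype vector onto the leading axes of variation. The sketch below illustrates the construction on a random stand-in for the real 47 × 17,807 genotype matrix (the data, seed, and 0/1/2 coding are assumptions for illustration only):

import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the real data: 47 trees x 17,807 SNPs coded 0/1/2.
G = rng.integers(0, 3, size=(47, 17807)).astype(float)

# Center each SNP, then take the top principal components via SVD.
Gc = G - G.mean(axis=0)
U, S, Vt = np.linalg.svd(Gc, full_matrices=False)
pcs = U[:, :2] * S[:2]                 # coordinates of each tree on PC1/PC2
var_explained = S**2 / np.sum(S**2)
print(pcs.shape, var_explained[:2])
# Trees that land close together on (PC1, PC2) share correlated genotypes,
# which is how a cluster such as the five tolerant 'right' trees stands out.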
Furthermore, all the individuals in this cluster were classified as tolerant to A. planipennis attack (Figure 3a). IBS clustering resulted in a multidimensional scaling ordination plot that did not differ from the visual clusters present in the PCA ordination plot.
| Outlier SNPs
Outlier detection based on the four PCA clusters identified 32 outlier SNPs within 28 different loci (Table 2). Of these outliers detected with the PCA clusters, nine were significantly associated with the tolerant group. Outlier detection based on vigor rating identified 17 outlier SNPs within 13 different loci (Table 3). Of these outliers detected with vigor rating, five were significantly associated with the tolerant group and five were significantly associated with the susceptible group (Table 3). All outlier FST values were skewed to the right and were relatively high, suggesting directional selection (Figure 4). There were no outliers detected when trees were grouped based on tolerance and susceptibility.
Out of all 41 outlier loci, only one matched a known sequence from the NCBI nucleotide database (locus 30133_137; top blast hit: XM_011468015.1). This locus mapped to a PTI1-like tyrosine-protein kinase receptor. Of the outliers detected between the PCA clusters, ten had a clear pattern of the polymorphic nucleotide being predominantly present in the five right-cluster individuals (Figure 5). These patterns were slightly offset by similar genetic trends between the middle and right clusters; however, the right cluster clearly had the highest occurrence of these polymorphisms. Interestingly, one set of outlier SNPs, all occurring at the locus 16669_22, was present in all trees except the five in the right group and two trees from the middle group (Figure 6). These trees retained the reference nucleotide.
Of the outliers detected between groups based on vigor rating, four had a clear pattern of the polymorphic nucleotide being present exclusively in trees with high vigor (Figure 7). Three of these SNPs were present at one locus (4467_128). One outlier SNP at locus 10225_13 displayed a pattern of the reference nucleotide occurring more frequently in trees with high vigor (i.e., low vigor rating values), whereas trees with low vigor (i.e., high vigor rating values) all had the polymorphic nucleotide (Figure 8).
| DISCUSSION
This study identified polymorphic loci in Fraxinus spp. using RAD-sequencing genotyping-by-sequencing. The filter settings were selected to ensure high-quality nucleotide data with sufficient coverage across individuals (Bao et al., 2014; Nielsen et al., 2011). The resulting SNPs were used to highlight insights into a potential genetic basis for host tolerance to A. planipennis. While the two species differed in mean dbh, there were no differences between the two tolerance groups.
Table 3. Summary of outlier loci detected between populations based on vigor rating.
Figure 4. Frequency of FST values of all SNPs computed between populations based on the PCA clusters (a) and populations based on vigor rating (b).
Figure 5. Proportion of trees in each PCA cluster that had the polymorphic nucleotide (white bars) or the reference nucleotide (black bars) for the ten outlier loci identified as having a clear presence in the right group. Within each locus, there was only one outlier SNP identified. Sample sizes for each cluster were as follows: right (n = 5), lower (n = 7), upper (n = 30), middle (n = 5). Samples with missing data for that locus were not included.
Size is an important factor in A. planipennis infestation, with larger trees (≥25 cm dbh) being the most likely to be attacked (Marshall et al., 2009). However, the importance of size in subsequent mortality is not as strong, as other physiological factors (e.g., increased bark roughness, decreased growth rates) may be more important in mortality (Boyes et al., 2019; Marshall et al., 2013).
| Patterns of genetic variation
While the distance pairwise tests did not identify tolerance group pairs as more similar than pairs across the two groups, there was less variation in the susceptible group pairs compared to the tolerant group pairs. PCA plots provided a visual representation of genetic divergence among individuals. The PCA adds to the quantitative pairwise analysis in that the visual clusters contained both tolerant and susceptible individuals, except for the cluster labeled "Right." There was no clear relationship between the PCA clusters and geographic distribution, as three of the four clusters contained trees from multiple locations.
Field-identified green and white ash did not separate as expected either, qualitatively in the PCA or quantitatively in the pairwise distance analysis. This is not the first occurrence of genetic overlap of these two species. White ash is a polyploid species (2n = 46, 92, and 138) but most often diploid, similar to green ash, and hybridization appears to confound genetic results (Schlesinger, 1990; Wallander, 2008; Whittemore et al., 2018). In some cases, white ash individuals group with green ash in phylogenetic analyses, a result that was attributed to the white ash samples likely being polyploid hybrids with green ash (Wallander, 2008). Additionally, there is low genetic differentiation between white, velvet (F. velutina Torr.), and green ash (Hinsinger et al., 2013). Rapid radiation or recent exchange of genetic material could have led to these relationships (Hinsinger et al., 2013). This lack of interspecific variation is especially present in chloroplasts (Jeandroz et al., 1997). The co-occurrence of white and green ash in all sampling locations presents the possibility of hybridization between the two species; therefore, data were analyzed as Fraxinus spp. due to expected low genetic differentiation. Additionally, the pairwise IBS distance analysis in PLINK did not find green and white individuals to be less similar, further suggesting that our SNP data were unable to separate the two species.
Across the PCA, there was little separation based on the tolerance and susceptibility categories applied from field assessment data. The exception to this pattern was the right cluster, which contained five tolerant individuals from various geographic locations in Michigan. Four of those individuals were in close proximity to the de facto introduction epicenter for A. planipennis (Siegert et al., 2014), indicating they have been exposed to A. planipennis for nearly 20 years and are still able to tolerate infestation (Marshall et al., 2013). These five trees were located in Houghton County and three Metroparks (Kensington, Oakwoods, and Willow). For this reason, outlier SNPs were identified between the four clusters on the PCA to determine which SNPs were likely causing the variation in this group.
| SNP candidates for tolerance selection
All outlier SNPs detected between the PCA clusters had high FST values and appeared to be responsible for the divergence of the right cluster.
However, subsequent PCA on SNP variation with these outliers removed (results not shown) revealed that the five individuals in the right cluster still displayed the same pattern of divergence, indicating that these 28 loci are not the only source of variation within this group.
Throughout the outliers identified, there were similar genetic trends between trees in the middle and right clusters. The similarities between these two clusters are evident when looking at just the ten polymorphic loci that showed a pattern of almost exclusive presence in the right group. For seven of the ten loci, one to two trees from the middle cluster also had the polymorphic nucleotide. Two of these trees from the middle cluster were classified as susceptible; however, all of the trees from the middle group that had genotypic similarities with the five right-group trees had no signs of A. planipennis attack (i.e., lacking bark splits, exit holes, woodpecker activity, and sprouting), despite being located in areas where A. planipennis is present.
The outlier locus 30133_137 mapped to a PTI1-like tyrosine-protein kinase 2. This protein is known to be involved in growth and development, as well as defense responses (Anthony et al., 2006; Floriduz et al., 2002). PTI1 serine/threonine protein kinases were described as key components of speck disease resistance in tomatoes, amplifying signaling pathways (Sessa et al., 2000). It is possible that this gene may also play a role in defense against A. planipennis by amplifying pathways necessary to tolerate infestation.
Figure 6. Proportion of trees in each PCA cluster that had the polymorphic nucleotide (white bars) or the reference nucleotide (black bars) for the locus 16669_22. Four SNPs were identified as outliers within this one locus (Table 3). Sample sizes for each cluster were as follows: right (n = 5), lower (n = 7), upper (n = 30), middle (n = 5). Samples with missing data were not included.
From the outliers identified based on the visual PCA clusters, several were significantly associated with the tolerant group. These may be important in identifying host tolerance and susceptibility. Locus 16669_22 is another potentially important locus for host tolerance. Four outlier SNPs at this locus had distinctive patterns in individuals, with either all present as the polymorphism or all in their reference form. For the five individuals in the right cluster, the reference nucleotides were retained for all four SNPs at this locus. Unfortunately, BLAST analysis did not map this locus to any known genes. Additionally, two trees that clustered in the middle PCA group also had the reference nucleotides at this locus. These two trees were classified as susceptible but, interestingly, they were the only two susceptible trees with no signs of A. planipennis attack. This exemplifies the coarseness of categorizing tolerance based on phenotypic characteristics of vigor and dieback. These two trees with poor vigor and high dieback may have simply displayed other manifestations of decline not associated with A. planipennis (e.g., the Houghton County trees were along a highway and subject to salt spray).
Outliers detected between trees based on vigor rating resulted in an additional 13 loci being identified as potential candidates for host tolerance. None of these mapped to any known functional genes; however, the five outliers that did show a pattern of either the polymorphic or the reference nucleotide being present exclusively in high-vigor trees are of particular interest.
With five outliers associated with each of the tolerant and susceptible groups, there may be important SNPs for identifying potential host tolerance and susceptibility. One constraint to mapping sequences is the amount of recombination due to divergence and the availability of a representative reference genome (Sousa & Hey, 2013). Unmapped reads can also be attributed to conserved sequences across individuals (Gouin et al., 2015).
Figure 7. Proportion of trees in each vigor rating that had the polymorphic nucleotide (white bars) or the reference nucleotide (black bars) for the four outlier loci identified as having a clear pattern of presence in the high-vigor groups. Sample sizes for each group were as follows: 1 (n = 11), 2 (n = 9), 3 (n = 10), 4 (n = 8), 5 (n = 9). Samples with missing data were not included.
Figure 8. Proportion of trees in each vigor rating that had the polymorphic nucleotide (white bars) or the reference nucleotide (black bars) for the locus 10225_13. Sample sizes for each group were as follows: 1 (n = 11), 2 (n = 9), 3 (n = 10), 4 (n = 8), 5 (n = 9). Samples with missing data were not included.
Future analyses characterizing the functionality of the outlier loci detected in this study, specifically those with associations with tolerance and susceptibility, could expose the importance of these genes and the role they may play in tolerance.
A clear link between genotypic and phenotypic data was not identified. Most likely, this was due to the coarseness of phenotype classification, which then failed to correlate with complex genetic diversity. Phenotypes were defined by tree assessments, which included categorical tree health observations and the presence or absence of signs of A. planipennis attack. A future study may have more success if these signs are quantified at finer scales (e.g., number of exit holes per square meter, area of phloem regrowth, and location of bark splits) as opposed to whole-tree values. Boyes et al. (2019) demonstrated that phloem regrowth in artificially damaged trees occurred at higher rates in tolerant individuals and in those without exit holes. Detailed phenotypic data would allow for more robust analyses linking genotype and phenotype, such as a mixed linear model. Finally, the power of genome-wide associations is affected by the genetic complexity and heritability of a trait (Burghardt et al., 2017). As tolerance is expected to be a complex genetic trait, this increases the chance of false positive associations. To remedy this issue in future studies, as many genotypes as possible should be used, along with high-quality nucleotide data.
| CONCLUSIONS
Agrilus planipennis has devastated populations of Fraxinus spp. in North America; however, the survival of some individuals despite years of exposure to A. planipennis is evidence of host tolerance. Understanding the mechanisms of host tolerance through genome-wide association has the potential to restore populations with cultivars that are able to persist in the presence of A. planipennis. Despite the caveats presented above, this study was successful in using RAD-sequencing to identify SNPs that are potential candidates for tolerance to A. planipennis. This was a first step toward uncovering a genetic basis for host tolerance to A. planipennis. Future studies are needed to identify the functionality of the outlier loci detected in this study.
ACKNOWLEDGMENTS
The authors would like to thank Nick White for field assistance and SNPsaurus, LLC, for contributing to a portion of the methods. Financial support for this work was partially provided by the Purdue University Fort Wayne Institute for Research, Scholarship, and Creative Endeavors Collaborative Grant Program.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
6,855.8
2019-09-07T00:00:00.000
[ "Biology" ]
Water-mediated interactions enable smooth substrate transport in a bacterial efflux pump
Efflux pumps of the Resistance-Nodulation-cell Division superfamily confer multi-drug resistance to Gram-negative bacteria. AcrB of Escherichia coli is a paradigm model of these polyspecific transporters. The molecular determinants and the energetics of the functional rotation mechanism proposed for the export of substrates by this protein have not yet been unveiled. To this aim, we implemented an original protocol that allows mimicking substrate transport in silico. We show that the conformational changes occurring in AcrB enable the formation of a layer of structured waters on the surface of the substrate transport channel. This, in turn, allows for a fairly constant hydration of the substrate that facilitates its diffusion. Our findings reveal a new molecular mechanism of transport in polyspecific systems, whereby waters contribute by screening potentially strong substrate-protein interactions. The mechanistic understanding of a fundamental process related to multi-drug transport provided here could help rationalize the behavior of other polyspecific systems.
Multi-Drug Resistant (MDR) pathogens represent one of the most pressing health concerns of the XXI century due to their ability to elude the action of most (in some instances all) antibiotics (1-4). A special family of membrane transport proteins, the so-called efflux pumps, plays a major role in conferring MDR by shuttling a broad spectrum of chemically unrelated cytotoxic molecules out of bacteria (5-9). Polyspecificity and partial overlap among the substrate specificities of different pumps are striking properties of these efflux machineries (10,11), making them a key survival tool of bacteria. The efflux systems of the Resistance-Nodulation-cell Division (RND) superfamily, which span the entire periplasm connecting the inner and the outer membranes, are mainly involved in the onset of MDR in Gram-negative bacteria (5,12-14). The AcrABZ-TolC efflux pump of Escherichia coli is the paradigm model and the most studied RND efflux pump. It is composed of the outer membrane efflux duct TolC, the inner membrane-anchored adaptor protein AcrA, the small transmembrane protein AcrZ and the inner membrane RND protein AcrB (15). The last of these is a drug/H+ antiporter fuelled by the proton gradient across the inner membrane and involved in the recognition and translocation of a very broad range of compounds (16).
The multi-drug recognition capabilities and the postulated efflux mechanism of RND transporters are linked through an intriguing structural puzzle, which raised the question of how these proteins achieve their special features. An important step in this direction was made with the publication of the structure of AcrB (Figure 1A), revealing an asymmetric homotrimer in which the monomers assume different conformations, named Loose (L), Tight (T), and Open (O) [or, alternatively, Access (A), Binding (B), Extrusion (C)] (17-19). This conformation was postulated to represent the active state of the transporter. A "functional rotation" mechanism was proposed explaining substrate export in terms of peristaltic motions induced within the internal channels of the transporter. In the simplest hypothesis (see e.g. (20-22) for a more complex picture), recognition of substrates should start at an affinity site, the Access Pocket (AP), in the L monomer (20,23).
Triggered by substrate binding, a conformational transition from L to T would then occur, accompanied by tight binding of the substrate within a deeper site, the so-called Deep or Distal Pocket (DP) (17-19). Successively, a second conformational change from T to O (supposed to be the energy-requiring step (24)) should drive the release of the substrate toward the upper (Funnel) domain through a putative exit Gate (hereafter, simply Gate (19)) (Figure 1B). After substrate release, the O conformation would relax back to L (coupled to proton release in the cytosol), restarting the cycle. Note that later different mechanisms of recognition were proposed for high vs. low molecular mass compounds, involving binding to the AP of monomer L and to the DP of monomer T, respectively (20).
The feasibility of the functional rotation mechanism at a molecular level, however, remains to be established. Indeed, while the need for concerted conformational changes in monomers of AcrB was demonstrated by several experiments (25,26), no study has addressed so far if and how substrate transport really occurs through the proposed mechanism. In fact, neither the molecular determinants nor the energetics of the process have been elucidated to date. Direct inspection of the functional rotation mechanism at an atomic level will ultimately provide a better understanding of how RND-type transporters work (possibly shedding light on rules governing polyspecific transport) and precious information for antibacterial drug discovery. In particular, knowledge of the mechanistic details of the LTO→TOL step of the functional rotation (the underline indicates the monomer transporting the substrate; hereafter T→O) would represent a key milestone. Indeed, wherever recognition occurs, substrates should transit through the DP and the Gate in order to reach the Funnel domain of AcrB (17,18,20,21).
Access to the atomistic dynamics of complex molecular machines can be achieved nowadays by means of computer simulations, which represent an alternative and complementary approach to biochemical, biophysical, and structural experiments (27-34). As obtaining high-resolution structures of the complexes between RND transporters and known "good" substrates has proved to be very challenging (12), it is particularly important to develop computational protocols able to elucidate mechanisms such as those involved in the extrusion of substrates by the RND efflux pumps (35). A few computational studies by us and other groups investigated in part the functional rotation mechanism in AcrB by using either enhanced-sampling all-atom molecular dynamics (MD) simulations (36-40) or a coarse-grained representation of the protein and its substrates (41,42). While these studies provided the first insights into the link between conformational changes in the protein and translocation of substrates, they were limited in scope. Specifically, the coarse-grained approaches employed to study RND transporters used a single bead to represent each amino acid in the protein; thus, they cannot dissect the role of different atomic-level interactions occurring along the translocation pathways of the substrates. In particular, they cannot address the role of water in the process, which is likely to be important for translocation as substrate diffusion channels are filled with solvent.
Regarding the all-atom simulations performed to date: (i) they were relatively short considering the size of the system under study, and (ii) the conformational changes in the protein and the translocation of compounds were decoupled. These limitations hampered a quantitative understanding of how the former process drives the latter. Most importantly, no study evaluated (to the best of our knowledge) the energetics associated with the diffusion of compounds from the DP to the Funnel domain of AcrB during the T→O conformational change. Thus, we do not know how the functional rotation mechanism facilitates the diffusion of substrates of RND transporters.
Prompted by these considerations, we developed a computational protocol based on multiple-bias MD simulations to characterize for the first time the key step T→O of the functional rotation mechanism, assessing its energetics and the role of the solvent in the process. The protocol was employed to study the translocation of doxorubicin (hereafter DOX, which until very recently was, together with minocycline, the only drug co-crystallized within the DP of AcrB (17,23,43)) from the DP to the Funnel domain driven by conformational changes occurring in AcrB. Our results demonstrated the effectiveness of the functional rotation mechanism in facilitating smooth transport of substrates. The peristaltic motions occurring within the internal channels of AcrB enabled a water layer soaking the internal surface of the translocation channel. This in turn permitted a fairly constant wetting of DOX during transport. The mediating action of water leveled off the free energy profile associated with the displacement of DOX. We speculate that the above mechanism can be generalized to rationalize the transport of substrates other than DOX, and that the mediating action of water is crucial for polyspecific transport by AcrB. Plausibly, other multi-drug transporters could exploit a very similar mechanism to perform their biological functions.
Results and Discussion
In this section we first describe our computational protocol in brief. Then, we summarize and discuss our main results, comparing them with the available experimental data. We describe the structural, dynamical and energetic features of the functional rotation mechanism as seen in our simulations.
Multi-bias MD simulations allow mimicking the functional rotation mechanism in silico
The conformational changes occurring in AcrB along the T→O step of the functional rotation were simulated by means of targeted MD (TMD) (44). This technique mimics structural rearrangements involving large protein regions by applying, to selected atoms (Cαs here), a force proportional to their displacement from a linear path connecting the initial and final structures. DOX translocation from the DP to the Funnel domain within AcrB was induced by steered MD (SMD), in which one end of a virtual spring is attached to the molecule and the other end is pulled along a predefined direction (45,46).
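The two biases can be summarized in a minimal one-dimensional sketch; the spring constants, velocities and coordinates below are illustrative placeholders (actual TMD/SMD implementations live inside MD engines and act on full atomic coordinates):

import numpy as np

# TMD: harmonic restraint steering the RMSD from the target structure toward a
# linearly decreasing reference value rho(t) along a linear path.
def tmd_force(rmsd_now, t, t_total, rmsd_start, k_tmd=100.0):
    rho = rmsd_start * (1.0 - t / t_total)    # reference RMSD at time t
    return -k_tmd * (rmsd_now - rho)          # restoring force on the RMSD

# SMD: one end of a virtual spring attached to the substrate, the other end
# moved at constant velocity v along the chosen pulling direction.
def smd_force(x_now, t, x0, v=1e-5, k_smd=5.0):
    anchor = x0 + v * t                       # spring anchor position at time t
    return k_smd * (anchor - x_now)           # force applied to the substrate

print(tmd_force(rmsd_now=2.5, t=1e5, t_total=1e6, rmsd_start=3.0))
print(smd_force(x_now=0.1, t=1e5, x0=0.0))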
to their different binding affinities, sizes, etc. Moreover, we do not know a priori the details of such a process even for a single substrate, including DOX. To cope with this issue, we simulated different possible couplings between the two processes. Namely, we performed three simulations in which DOX was pulled from the DP towards the Gate while the T→O conformational transition of AcrB was completed within the first 10%, 20%, or 30% of the pulling time (Table S1). Hereafter, we refer to these simulations as T rot_10%T pull, T rot_20%T pull and T rot_30%T pull, respectively. In addition, we performed a 1 µs long SMD simulation without concomitant induction of conformational changes in AcrB (referred to as T pull_1µs). Though not representative of any putative transport mechanism, the T pull_1µs simulation is very useful for comparative purposes. Finally, to quantitatively address to what extent the functional rotation mechanism promotes the diffusion of substrates, we evaluated the free energy profile associated with the translocation of DOX by means of umbrella sampling (US) simulations (47,48) (Tables S1 and S2).

The in silico substrate translocation mechanism is consistent with experimental data

First, we validated our computational protocol by assessing the consistency of the mechanism of substrate translocation in silico with the available experimental data. Overall, the translocation of DOX as seen in our simulations turns out to be quite unaffected by the details of the computational setup, as long as the conformational changes of the protein are mimicked concomitantly with the displacement of the substrate. Importantly, the substrate is always transported through the putative Gate of AcrB (Figure 2A-B). More generally, during translocation DOX interacts with protein residues suggested experimentally (49) to be part of the extended DP and of the putative Gate (Figure 2C). Being an extension of the vast and malleable DP, the path toward the Gate enables consistent substrate rearrangement, as documented by the change in the orientation of DOX observed in all simulations (Figure S1). Previous experimental data demonstrated that DOX binds to the DP of AcrB assuming almost flipped orientations (17,23) (Figure S2). Our results show that rotation of the substrate is possible also during its transport, at least within the channel leading from the upper part of the DP to the Gate. Importantly, such reorientation of DOX occurs at almost no cost (see below). The simulated translocation process is thus compatible with several experimental findings.

The functional rotation facilitates diffusion of substrates by lowering conformational strain and allowing for their continuous wetting within the transport channel

The overall agreement among the main results obtained from the T rot_10/20/30%T pull simulations discussed above is consistent with the resemblance of the profiles of the pulling force F_pull applied to induce the translocation of DOX (Figure 2D). Note that all profiles are relatively smooth and do not feature large values of F_pull, which would be associated with transport bottlenecks. In particular, there is no evidence for high values of F_pull near the Gate. On the contrary, a prominent peak of almost 10 kcal·mol⁻¹·Å⁻¹ in F_pull appears near the Gate when the translocation of DOX is mimicked in the complete absence of conformational changes in AcrB, i.e., for T pull_1µs (Figure 2D). Thus, inducing the T→O conformational change in the first part of the simulation facilitates substrate translocation.
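To make the two biasing schemes concrete, the following minimal sketch shows how a TMD restraint on the Cα atoms and an SMD spring on the DOX center of mass generate forces. The force constants, pulling speed, and function names are illustrative assumptions and are not taken from the actual NAMD input files used in this study; it is a sketch of the harmonic biases described in the text, not the engine's implementation.

```python
import numpy as np

# Illustrative parameters (assumptions, not the production values).
K_TMD = 1.0    # kcal/mol/A^2, per-atom restraint on the Ca atoms
K_SMD = 1.0    # kcal/mol/A^2, spring on the DOX center of mass
V_PULL = 10.0  # A/ns, speed of the moving anchor along the DP -> Gate direction

def tmd_forces(x_ca, x_start, x_target, progress):
    """Force on each Ca atom toward a linear interpolation between the
    initial (T) and target (O) structures; progress runs from 0 to 1."""
    x_ref = (1.0 - progress) * x_start + progress * x_target
    return -K_TMD * (x_ca - x_ref)            # shape (n_atoms, 3)

def smd_force(com, com0, direction, t_ns):
    """Spring force on the substrate center of mass; the anchor moves at
    constant speed V_PULL along the unit vector 'direction'."""
    anchor = com0 + V_PULL * t_ns * direction
    disp = np.dot(anchor - com, direction)    # displacement along the path
    return K_SMD * disp * direction           # the quantity behind F_pull (Fig. 2D)
```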
In order to shed light on the microscopic determinants of the process, we performed a detailed analysis of the interactions involving DOX, the protein and the solvent. As expected, in the absence of the T→O conformational change the passage of DOX through the Gate induces a clear structural strain in the substrate (Figure 3). More importantly, the opening of the Gate following the T→O transition enables a fairly structured hydration of almost the entire internal surface of the translocation channel leading from the DP to the Funnel domain (Figure 4). This is particularly evident from the plot of the spatial distribution function (SDF) of water molecules (Figure 4A), which provides a picture of the order in liquid water and reveals specific details of its local structure (50). Note that structured hydration of the translocation channel is compatible with the relatively hydrophilic character of the channel connecting the DP to the Funnel domain (39). By comparing the hydration properties of the translocation channel and of the substrate in the T rot_10%T pull and T pull_1µs simulations, it is clear that this phenomenon is a feature of the functional rotation mechanism (Figure 4). As a consequence of the reduced screening effect of waters in T pull_1µs, a negative peak appeared in the corresponding DOX-AcrB interaction energy profile (Figure 4B and Figure S3). The presence of a roughly continuous layer of waters within the translocation channel enables a fairly constant wetting of the substrate, as well as several water-mediated interactions with the protein (Figure 4B).

Substrate translocation occurs over a smooth free energy profile

We calculated the free energy profile associated with the transport of DOX from the DP to the rear of the Gate (Figure 5), that is, the part of the translocation process occurring within the AcrB channel (corresponding to a change of about 35 Å in the relative displacement from the DP, d_rel). The profile is relatively smooth, with barriers lower than 5 kcal·mol⁻¹. Furthermore, the affinities of DOX for the DP and the Gate are virtually identical (Figure 5A); thus, the probability of finding the substrate near the second site increases in response to conformational cycling in AcrB (see the next section for biological implications). We noticed that the orientation of DOX along its translocation path changes near the Gate (inset in Figure 5B, and Figure S1). Therefore, we included an angular variable α_DOX/DP-Funnel representing DOX rotation (see Materials and Methods) to estimate a 2D free energy profile in the corresponding region of the translocation process (Figure 5C). Importantly, the addition of α_DOX/DP-Funnel had a very small impact on the results discussed above. Indeed, both the free energy difference found between the states representing almost flipped orientations of DOX (labeled 3 and 4 in Figure 5A) and the barrier associated with the rotation remain vanishingly small. Clearly, a proper comparison of our results with experiments is hampered by several factors, including the lack of a free energy profile associated with substrate translocation through the whole AcrABZ-TolC efflux pump. However, it is worth noticing the absence in our profile of high free energy barriers, which would correspond to efflux kinetics inconsistent with the typical extrusion times of substrates by RND efflux pumps (up to 10³ compounds per pump per second depending on the substrate (51-53)).
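As an illustration of the wetting analysis above, a per-frame count of water molecules in the first hydration shell of DOX can be obtained along the lines of the following sketch. It assumes the MDAnalysis library; the file names, residue names, and the 3.5 Å cutoff are hypothetical placeholders (the analyses in this work were performed with the in-house scripts mentioned under Materials and Methods).

```python
import MDAnalysis as mda

# Hypothetical topology/trajectory names for one of the biased runs.
u = mda.Universe("acrb_dox.prmtop", "trot10_tpull.nc")
dox = u.select_atoms("resname DOX")

# Water oxygens within 3.5 A of any DOX atom, re-evaluated at every frame.
shell = u.select_atoms("resname WAT and name O and around 3.5 group lig",
                       lig=dox, updating=True)

wetting = []
for ts in u.trajectory:
    wetting.append((ts.time, shell.n_atoms))  # roughly constant if DOX stays wet
```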
Biological implications

In this section we discuss further the possible implications of our findings in relation to the current view of how RND efflux pumps work.

AcrB substrates can adopt multiple binding modes also outside the DP. We showed that DOX assumes different orientations during transport. While such rotation can be facilitated by the increased hydration of DOX upon detachment from the DP (Figure 4B), we cannot exclude the presence of multifunctional sites (that is, sites able to recognize various types of functional groups, from hydrophobic to polar and charged) (54) within the channel leading from this site to the Gate. Note that the presence of such multifunctional sites has been demonstrated within the DP (54), and is coherent with the possibility for DOX to adopt (at least) two different orientations within this site (17,23). A new possibility arising from our findings is that multidrug recognition and transport are not restricted to one or more binding sites (e.g., the DP), but rather dictated by the physico-chemical properties of the entire substrate translocation pathway through AcrB. These findings are compatible with polyspecific transport by AcrB.

A "one stroke - one drug" mechanism of substrate expulsion is not necessary for AcrB. As a result of the T→O conformational change in AcrB, the interaction strengths of DOX with the Gate and with the DP are comparable (Figure 5A). Thus, AcrB could contribute to efflux by "just" favoring, through a functional rotation mechanism, the accumulation of substrates in the central region of the upper Funnel domain, beyond the Gate. This should create a concentration gradient driving translocation of a pile of compounds through the AcrA/TolC channel, as hypothesized earlier (24). Such a mechanism may explain how certain substrates (e.g., aminoacyl-β-naphthylamides (55)) are pumped out at rates far exceeding those expected for common transporters, suggesting that many substrate molecules might be pushed out in one stroke. The largely prevalent hydrophilic character identified in the internal surfaces of MexA and OprM (the homologs of AcrA and TolC, respectively, in Pseudomonas aeruginosa) further corroborates our hypothesis (56). Moreover, an analysis of the hydrophilic character of the internal surfaces of AcrA and TolC in the recently published structure of the complete AcrABZ-TolC assembly (57) confirmed these findings (Figure S4). Therefore, it is plausible that upon crossing the Gate, the substrate will wander within an environment that would drive it out (56).

The transport rate bottleneck is not due to diffusion of substrates within AcrB. The smooth free energy profile associated with the translocation of DOX implies that the bottleneck in terms of rate of transport comes from the concerted conformational changes occurring in AcrB. This hypothesis is in line with the current understanding of how many active transporters work (58), and is supported by the comparison of our data with experimental studies reporting AcrB efflux rates of several compounds (51,52,59). For instance, the values of the turnover number k_cat estimated by Nikaido and co-workers (51,52) for the efflux of cephalosporins and penicillins range from ~10 s⁻¹ to ~10³ s⁻¹. Using simple arguments from transition state theory to get an approximate value of the effective free energy barrier ΔG‡ that would be compatible with this rate, we obtained ~13 kcal·mol⁻¹, which is well above the values found here.
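The transition-state estimate quoted above can be reproduced with a few lines, assuming an Eyring-type prefactor k_B·T/h at 310 K; the choice of prefactor is an assumption of this back-of-the-envelope estimate and shifts the resulting barrier by a few kcal·mol⁻¹.

```python
import math

R = 1.987e-3                 # gas constant, kcal/(mol K)
T = 310.0                    # simulation temperature, K
KB_T_OVER_H = 2.084e10 * T   # Eyring prefactor kB*T/h in 1/s (kB/h = 2.084e10 1/(s K))

def dg_from_kcat(kcat):
    """Effective free energy barrier (kcal/mol) compatible with a rate kcat (1/s),
    assuming k = (kB*T/h) * exp(-dG / (R*T))."""
    return R * T * math.log(KB_T_OVER_H / kcat)

for kcat in (10.0, 1e3):
    print(f"kcat = {kcat:8.1f} 1/s  ->  dG ~ {dg_from_kcat(kcat):.1f} kcal/mol")
# prints ~16.8 and ~13.9 kcal/mol, the same order as the ~13 kcal/mol quoted above
```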
Interestingly, the effective free energy barrier calculated for the translocation of DOX through the TolC channel amounted to almost 10 kcal·mol⁻¹ (60), a value similar to that extrapolated from experimental data. These findings further highlight the key effect of the conformational changes in AcrB on substrate diffusion, and support our statement about efflux rate bottlenecks.

Water is key for polyspecific transport by AcrB. According to our results, the relief of steric hindrance and the formation of a continuous layer of structured waters crucially facilitate substrate transport inside AcrB. The mechanism we propose would: i) match the increasing ratio of hydrophilic over hydrophobic residues along the channel leading from the DP to the Funnel domain (39); ii) be compatible with the many structural waters found within internal surfaces of AcrB in the highest-resolution crystal structure reported to date (PDB ID 4DX5 (23)). Moreover, it is likely that this very general mechanism would facilitate the diffusion of several chemically unrelated substrates dissociating from the DP, thus enabling polyspecific transport. Indeed, by shielding potentially (too) strong interactions between chemical groups of compounds and AcrB, water would in part "hide" chemical differences among diverse substrates. Therefore, translocation of neutral, zwitterionic, anionic and cationic compounds could occur along a similar path and with comparable overall costs. Although verifying such a hypothesis with other compounds would be very computationally demanding, we point out that among the drugs co-crystallized so far within the DP of AcrB (including minocycline (17,23) and puromycin (43)), DOX features overall the largest molecular mass, van der Waals volume and minimal projection area (see, e.g., data at www.dsf.unica.it/translocation/db (61)). Moreover, it is as soluble as the other substrates (all have high solubility in water, the values of intrinsic logS being -3.6, -3.2 and -2.3 for DOX, puromycin and minocycline, respectively, according to Chemicalize - https://chemicalize.com). Therefore, it is plausible to expect that compounds smaller than and similarly soluble to DOX could be transported via the mechanism described above. These considerations strongly suggest that water mediates the transport of (at least) low-molecular mass substrates recognized at the DP.

Recognition and transport of substrates in AcrB. On the basis of our findings and previous literature, the mechanism by which AcrB and its homologous proteins recognize and transport their substrates would be as follows:
• Recognition occurs via interaction of substrates with one among the multiple binding sites present in AcrB, each endowed with a few multifunctional sites (17,20,21,23).
• Concerning the DP, the interaction of the substrate with this site triggers the T→O conformational change in the protein, which decreases the affinity of the compound.
• Upon unbinding from the DP, the substrate will find itself in a relatively hydrated environment favoring smooth diffusion towards the Gate or, at least, disfavoring specific interactions that could hinder transport. This hypothesis matches the physico-chemical traits of typical AcrB substrates, whose unique common feature is some degree of lipophilicity (5,16). Furthermore, the latter scenario is compatible with the transport of a pile of compounds through repeated conformational cycling of the pump (24).
Note that, according to the mechanism described above, the distinction between a substrate and an inhibitor should lie in the way they bind to the same region of AcrB (binding strength and/or position), which should induce different rearrangements of the pump, as already suggested by several studies (43,(62)(63)(64)).

Conclusions and Perspectives

In this work we validated a novel computational protocol to mimic in silico the key step (T→O) of the functional rotation mechanism by which RND transporters such as AcrB are believed to export their substrates. To the best of our knowledge, this is the first computational study: i) addressing the coupling between the conformational transitions occurring in RND-type transporters and the translocation of their substrates, and ii) providing an estimate of the free energy profile associated with the key step of the process, that is, the transport of a substrate from the Distal Pocket to the Funnel domain. Thanks to this unprecedented computational effort we characterized the molecular determinants of substrate translocation caused by peristaltic-like motions occurring within internal channels of AcrB. Using doxorubicin as a probe we showed how these structural changes favor substrate transport along a path that is fully compatible with that proposed on the basis of X-ray data and whole-cell assays. Moreover, we propose a rationale for the polyspecific transport by the RND-type multidrug efflux pump AcrB, in which water molecules play a key role. Clearly, water-mediated transport could be a general feature of the multidrug transport mechanism. Accurate computational protocols such as the one used here represent a valid and highly informative strategy to understand the molecular mechanisms of recognition and transport by RND proteins. Moreover, given the robustness of the methodology with respect to implementation details, we are confident that it can be successfully applied to study the transport of other substrates by AcrB and homologous proteins, and easily adapted to investigate complex processes in other biological systems.

Materials and Methods

In the following we describe in detail the system we studied and the computational protocol we employed. We also discuss the possible limitations of our approach. An extensive validation of the methodology and a comparison of our results with previously published computational work are reported in the Supplementary Information.

Simulated System

The system under study has been described in detail earlier (36). The starting structure consisted of a molecule of DOX bound to the DP of monomer T of the asymmetric homotrimeric AcrB, and was taken from the equilibrium dynamics reported in (36). The protein was embedded in a 1-palmitoyl-2-oleoyl-phosphatidylethanolamine (POPE) membrane bilayer model, and the whole system was solvated with a 0.15 M aqueous KCl solution. The total number of atoms in the system was ~450,000. The parameters of DOX (freely available at http://www.dsf.unica.it/translocation/db (61)) were taken from the GAFF force field (65) or generated using the tools of the AMBER 14 package (66). In particular, atomic restrained electrostatic potential (RESP) charges were derived using antechamber, after a structural optimization performed with Gaussian09 (67). The force field for POPE molecules was taken from (36). The AMBER ff14SB force field (68) was used for the protein, the TIP3P model (69) was employed for water, and the parameters for the ions were taken from ref. (70).
Computational protocol

In order to mimic DOX translocation from the DP to the Funnel domain in response to the T→O conformational transition of AcrB, we devised an original computational protocol that couples two well-established methods to enhance the sampling of biological processes. Namely, we mimicked the conformational change of the protein via TMD simulations (44) while pulling the substrate along its putative path through the Gate by means of SMD simulations (45,46). To the best of our knowledge, these two computational methodologies had never been combined to date. The resulting DOX translocation pathway was discretized into several snapshots used as starting conformations to perform 1D and 2D US MD simulations (47,48). Finally, the weighted histogram analysis method (WHAM) (71) was used to estimate the free energy profile associated with the transport of DOX from the DP to the rear of the Gate. Simulations were analyzed with in-house tcl, bash and perl scripts and with the cpptraj tool of the AMBER 14 package (66).

Targeted and steered MD simulations

These simulations (Table S1) were performed using the NAMD 2.9 package (74). A time step of 1.5 fs was used to integrate the equations of motion. Periodic boundary conditions were employed, and electrostatic interactions were treated using the particle-mesh Ewald method, with a real space cutoff of 12 Å and a grid spacing of 1 Å per grid point in each dimension. The van der Waals energies were calculated using a smooth cutoff (switching radius 10 Å, cutoff radius 12 Å). MD simulations were performed in the NPT ensemble. The temperature was maintained around 310 K by applying Langevin forces to all heavy atoms, with a damping constant of 5 ps⁻¹. The pressure was kept at 1 atm using the Nosé-Hoover Langevin piston pressure control with default parameters. TMD simulations (44) allowed mimicking the conformational transitions between the two known conformational states (LTO and TOL) of AcrB. It was recently shown that TMD simulations produce reliable transition paths as compared to other more refined techniques (75). To prevent any steric hindrance induced on the T monomer by its neighbors, we targeted all of them toward their next state along the functional rotation cycle. Furthermore, to allow for the largest conformational freedom of the protein along the pathway traced by the targeting restraints, these were applied only to the Cα atoms of AcrB. The force constant was set to 1 kcal·mol⁻¹·Å⁻² for each carbon atom, much lower than that employed in earlier studies (36,39). Regarding SMD simulations (45,46), a relatively low force constant (1 kcal·mol⁻¹·Å⁻²) was applied to the center of mass of the non-hydrogen atoms of the substrate. This choice allowed the molecule to deviate from a straight pathway and to optimize its interactions with the surrounding environment during transport.

Umbrella sampling simulations

To estimate the energetics associated with the translocation of DOX we performed extensive 1D and 2D US simulations (47,48). Since the orientation of DOX changed near the Gate (namely at values of d_Funnel-DP between ~5 Å and ~12 Å; Figure 5 and Figure S1), a single reaction coordinate could be insufficient to evaluate the free energy profile correctly. Therefore, a 2D free energy surface was evaluated in that region by introducing the additional angular variable α_DOX/DP-Funnel, which quantifies the orientation of DOX with respect to the axis connecting the DP to the Funnel domain. The values of d_Funnel-DP and α_DOX/DP-Funnel were saved every 2 ps (corresponding to 1333 simulation steps).
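The umbrella windows defined above are post-processed with WHAM, as described next. For orientation, a minimal self-consistent 1D WHAM iteration looks like the following sketch; it is a simplified stand-in for the g_wham implementation, assuming pre-binned histograms and harmonic biases, with illustrative variable names.

```python
import numpy as np

def wham(hist, centers, x0, k, kT, n_iter=5000, tol=1e-6):
    """hist[i, b]: counts of window i in bin b; centers[b]: bin centers;
    x0[i]: umbrella center of window i; k[i]: spring constant; kT in matching
    energy units. Returns the free energy G(x) up to an additive constant."""
    n_win, n_bin = hist.shape
    N = hist.sum(axis=1)                                  # samples per window
    bias = 0.5 * k[:, None] * (centers[None, :] - x0[:, None]) ** 2
    f = np.zeros(n_win)                                   # window free energies
    for _ in range(n_iter):
        # unbiased probability from all windows (WHAM equation 1)
        denom = (N[:, None] * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
        p = hist.sum(axis=0) / denom
        # self-consistent update of the window offsets (WHAM equation 2)
        f_new = -kT * np.log((p[None, :] * np.exp(-bias / kT)).sum(axis=1))
        converged = np.max(np.abs(f_new - f)) < tol       # cf. the 1e-6 tolerance
        f = f_new
        if converged:
            break
    return -kT * np.log(p / p.max())                      # G(x), minimum set to 0
```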
The WHAM method (71), as implemented in the g_wham tool of GROMACS, was used to extract the free energy profiles and surfaces, using a tolerance of 10⁻⁶ for the convergence of the probability. Simple Bayesian bootstrapping was utilized to estimate the statistical sampling errors using 500 randomly chosen data sets with the same data size. The RMSD of the complexes with respect to their initial conformation revealed minor structural changes in all windows (Figure S6A). A fairly flat profile was indeed reached after ~20 ns. In addition, the orientation of DOX did not change significantly in any of the 1D US windows (Figure S6B). The relatively good stability of the systems was mirrored in the fairly good convergence of the free energy profile (Figure S6C). The free energy profile (surface) reported in Figure 6A (B) was estimated over the last 12.5 (6.25) ns of the simulation.

Limitations of our approach

Our methodology is based on all-atom classical MD simulations with predefined protonation states of all molecules. As such, it neglects the coupling between conformational changes occurring in the periplasmic region of AcrB and the flux of protons across the TM domain. However, this limitation will hardly affect the outcome, since the translocation of DOX occurs fully within the periplasmic domain. Furthermore, we are mimicking exactly the process (i.e., the LTO→TOL conformational change) induced by the change in the protonation states of key residues within the TM region. Another limitation of our approach consists in neglecting the AcrB partners forming the full AcrABZ-TolC assembly (43,57,76,77). In particular, the extrusion process could be affected by the interaction between AcrB and AcrA, which could alter, e.g., the flexibility and the hydration properties of the upper part of the Funnel domain. However, we believe that restricting our study to AcrB does not constitute a major drawback, as the translocation of DOX simulated here occurs mainly within internal AcrB channels.

(Figure caption fragment: colored points indicate conformations of DOX extracted from the T rot_10%T pull simulation and featuring different orientations, shown with the same color code in the inset of B.)
Multilevel and Multi-index Monte Carlo methods for the McKean–Vlasov equation

We address the approximation of functionals depending on a system of particles, described by stochastic differential equations (SDEs), in the mean-field limit when the number of particles approaches infinity. This problem is equivalent to estimating the weak solution of the limiting McKean–Vlasov SDE. To that end, our approach uses systems with finite numbers of particles and a time-stepping scheme. In this case, there are two discretization parameters: the number of time steps and the number of particles. Based on these two parameters, we consider different variants of the Monte Carlo and Multilevel Monte Carlo (MLMC) methods and show that, in the best case, the optimal work complexity of MLMC, to estimate the functional in one typical setting with an error tolerance of TOL, is $O(\mathrm{TOL}^{-3})$ when using the partitioning estimator and the Milstein time-stepping scheme. We also consider a method that uses the recent Multi-index Monte Carlo method and show an improved work complexity in the same typical setting of $O(\mathrm{TOL}^{-2}(\log \mathrm{TOL}^{-1})^2)$. Our numerical experiments are carried out on the so-called Kuramoto model, a system of coupled oscillators.

Introduction

In our setting, a stochastic particle system is a system of coupled d-dimensional stochastic differential equations (SDEs), each modeling the state of a "particle". Such particle systems are versatile tools that can be used to model the dynamics of various complicated phenomena using relatively simple interactions, e.g., pedestrian dynamics [22,17], collective animal behavior [10,9], interactions between cells [8], and in some numerical methods such as Ensemble Kalman filters [25]. One common goal of the simulation of these particle systems is to average some quantity of interest computed on all particles, e.g., the average velocity, average exit time or average number of particles in a specific region.
Under certain conditions, most importantly the exchangeability of particles and sufficient regularity of the SDE coefficients, the stochastic particle system approaches a mean-field limit as the number of particles tends to infinity [28]. Exchangeability of particles refers to the assumption that all permutations of the particles have the same joint distribution. In the mean-field limit, each particle follows a single McKean–Vlasov SDE in which the advection and/or diffusion coefficients depend on the distribution of the solution to the SDE [11]. In many cases, the objective is to approximate the expected value of a quantity of interest (QoI) in the mean-field limit as the number of particles tends to infinity, subject to some error tolerance, TOL. While it is possible to approximate the expectation of these QoIs by estimating the solution to a nonlinear PDE using traditional numerical methods, such methods usually suffer from the curse of dimensionality. Indeed, the cost of these methods is usually $O(\mathrm{TOL}^{-wd})$ for some constant $w > 1$ that depends on the particular numerical method. Using sparse numerical methods alleviates the curse of dimensionality but requires increasing regularity as the dimensionality of the state space increases. On the other hand, Monte Carlo methods do not suffer from this curse with respect to the dimensionality of the state space. This work explores different variants and extensions of the Monte Carlo method when the underlying stochastic particle system satisfies certain crucial assumptions. We theoretically show the validity of some of these assumptions in a somewhat general setting, while verifying the other assumptions numerically on a simple stochastic particle system, leaving further theoretical justification to a future work.

Generally, the SDEs that constitute a stochastic particle system cannot be solved exactly and their solution must instead be approximated using a time-stepping scheme with a number of time steps, N. This approximation parameter and a finite number of particles, P, are the two approximation parameters involved in approximating a finite average of the QoI computed for all particles in the system. Then, to approximate the expectation of this average, we use a Monte Carlo method. In such a method, multiple independent and identical stochastic particle systems, approximated with the same number of time steps, N, are simulated; the average QoI is computed from each, and an overall average is then taken. Using this method, a reduction of the variance of the estimator is achieved by increasing the number of simulations of the stochastic particle system or by increasing the number of particles in the system. Section 3.1 presents the Monte Carlo method more precisely in the setting of stochastic particle systems. Particle methods that are not based on Monte Carlo were also discussed in [2,3]. In these methods, a single simulation of the stochastic particle system is carried out and only the number of particles is increased to reduce the variance.
As an improvement over Monte Carlo methods, the Multilevel Monte Carlo (MLMC) method was first introduced in [21] for parametric integration and in [13] for SDEs; see [14] and references therein for an overview. MLMC improves the efficiency of the Monte Carlo method when only an approximation, controlled by a single discretization parameter, of the solution to the underlying system can be computed. The basic idea is to reduce the number of required samples on the finest, most accurate but most expensive discretization, by reducing the variability of this approximation with a correlated, coarser and cheaper discretization as a control variate. More details are given in Section 3.2 for the case of stochastic particle systems. The application of MLMC to particle systems has been investigated in many works [4,17,27]. The same concepts have also been applied to nested expectations [14]. More recently, a particle method applying the MLMC methodology to stochastic particle systems was also introduced in [26], achieving, for a linear system with a diffusion coefficient that is independent of the state variable, a work complexity of $O(\mathrm{TOL}^{-2}(\log \mathrm{TOL}^{-1})^5)$.

Recently, the Multi-index Monte Carlo (MIMC) method [19] was introduced to tackle high-dimensional problems with more than one discretization parameter. MIMC is based on the same concepts as MLMC and improves the efficiency of MLMC even further, but requires mixed regularity with respect to the discretization parameters. More details are given in Section 3.3 for the case of stochastic particle systems. In that section, we demonstrate the improved work complexity of MIMC compared with the work complexity of MC and MLMC, when applied to a stochastic particle system. More specifically, we show that, when using a naive simulation method for the particle system with quadratic complexity, the optimal work complexity of MIMC is $O(\mathrm{TOL}^{-2}(\log \mathrm{TOL}^{-1})^2)$ when using the Milstein time-stepping scheme and $O(\mathrm{TOL}^{-2}(\log \mathrm{TOL}^{-1})^4)$ when using the Euler-Maruyama time-stepping scheme. Finally, in Section 4, we provide numerical verification of the assumptions that are made throughout the current work and of the derived rates of the work complexity.

In what follows, the notation $a \lesssim b$ means that there exists a constant $c$ that is independent of $a$ and $b$ such that $a < cb$.

Problem Setting

Consider a system of P exchangeable stochastic differential equations (SDEs) where, for $p = 1, \dots, P$, the state $X_{p|P}(t) \in \mathbb{R}^d$ satisfies
$$\mathrm{d}X_{p|P}(t) = A\big(t, X_{p|P}(t), \lambda^{P}_{\mathbf{X}(t)}\big)\,\mathrm{d}t + B\big(t, X_{p|P}(t), \lambda^{P}_{\mathbf{X}(t)}\big)\,\mathrm{d}W_p(t), \qquad (1)$$
where $\mathbf{X}(t) = \{X_{q|P}(t)\}_{q=1}^{P}$, for some (possibly stochastic) functions A and B, and where
$$\lambda^{P}_{\mathbf{X}(t)} = \frac{1}{P}\sum_{q=1}^{P}\delta_{X_{q|P}(t)},$$
with $\delta$ the Dirac measure, is called the empirical measure. In this setting, $\{W_p\}_{p\ge 1}$ are mutually independent d-dimensional Wiener processes. If, moreover, the initial conditions $\{x^0_p\}_{p\ge 1}$ are i.i.d., then under certain conditions on the smoothness and form of A and B [28], as $P \to \infty$, for any $p \in \mathbb{N}$ the stochastic process $X_{p|\infty}$ satisfies the limiting McKean–Vlasov SDE
$$\mathrm{d}X_{p|\infty}(t) = A\big(t, X_{p|\infty}(t), \mu_t\big)\,\mathrm{d}t + B\big(t, X_{p|\infty}(t), \mu_t\big)\,\mathrm{d}W_p(t), \qquad (2)$$
where $\mu_t$ is the law of $X_{p|\infty}(t)$, with density $\rho_\infty(t,\cdot)$, and $\rho_\infty(0,\cdot)$ is the pdf of $x^0_p$, which is given and is independent of p. Due to (2) and $x^0_p$ being i.i.d., $\{X_{p|\infty}\}_p$ are also i.i.d.; hence, unless we want to emphasize the particular path, we drop the p-dependence in $X_{p|\infty}$ and refer to the random process $X_\infty$ instead. In any case, we are interested in computing $\mathbb{E}[\psi(X_\infty(T))]$ for some given function, ψ, and some final time, $T < \infty$.

Kuramoto Example (Fully connected Kuramoto model for synchronized oscillators). Throughout this work, we focus on a simple, one-dimensional example of (1).
For $p = 1, 2, \dots, P$, we seek $X_{p|P}(t) \in \mathbb{R}$ that satisfies
$$\mathrm{d}X_{p|P}(t) = \Big(\vartheta_p + \frac{1}{P}\sum_{q=1}^{P}\sin\big(X_{p|P}(t) - X_{q|P}(t)\big)\Big)\,\mathrm{d}t + \sigma\,\mathrm{d}W_p(t), \qquad (3)$$
where $\sigma \in \mathbb{R}$ is a constant and $\{\vartheta_p\}_p$ are i.i.d. and independent from the set of i.i.d. random variables $\{x^0_p\}_p$ and the Wiener processes $\{W_p\}_p$. The limiting SDE as $P \to \infty$ is
$$\mathrm{d}X_{p|\infty}(t) = \Big(\vartheta_p + \int_{\mathbb{R}}\sin\big(X_{p|\infty}(t) - y\big)\,\mu_t(\mathrm{d}y)\Big)\,\mathrm{d}t + \sigma\,\mathrm{d}W_p(t).$$
Note that in terms of the generic system (1) we have $A(t, x, \lambda) = \vartheta + \int \sin(x - y)\,\lambda(\mathrm{d}y)$, with $\vartheta$ a random variable, while $B = \sigma$ is a constant. We are interested in the total synchronization, $r = \big(\mathbb{E}[\cos(X_\infty(T))]^2 + \mathbb{E}[\sin(X_\infty(T))]^2\big)^{1/2}$, a real number between zero and one that measures the level of synchronization in the system with an infinite number of oscillators [1], with zero corresponding to total disorder. In this case, we need two estimators: one where we take $\psi(\cdot) = \sin(\cdot)$ and the other where we take $\psi(\cdot) = \cos(\cdot)$.

While it is computationally efficient to approximate $\mathbb{E}[\psi(X_\infty(T))]$ by solving the McKean–Vlasov PDE that $\rho_\infty$ satisfies when the state dimensionality, d, is small (cf., e.g., [17]), the cost of a standard full-tensor approximation increases exponentially as the dimensionality of the state space increases. On the other hand, using sparse approximation techniques to solve the PDE requires increasing regularity assumptions as the dimensionality of the state space increases. Instead, in this work, we focus on approximating the value of $\mathbb{E}[\psi(X_\infty)]$ by simulating the SDE system in (1). Let us now define
$$\phi_P = \frac{1}{P}\sum_{p=1}^{P}\psi\big(X_{p|P}(T)\big). \qquad (4)$$
Here, due to exchangeability, $\{X_{p|P}(T)\}_{p=1}^{P}$ are identically distributed, but they are not independent since they are taken from the same realization of the particle system. Nevertheless, we have $\mathbb{E}[\phi_P] = \mathbb{E}[\psi(X_{p|P}(T))]$ for any p and P. In this case, with respect to the number of particles, P, the cost of a naive calculation of $\phi_P$ is $O(P^2)$, due to the cost of evaluating the empirical measure in (1) for every particle in the system. It is possible to take $\{X_{p|P}\}_{p=1}^{P}$ in (4) as i.i.d., i.e., for each $p = 1,\dots,P$, $X_{p|P}$ is taken from a different independent realization of the system (1). In this case, the usual law of large numbers applies, but the cost of a naive calculation of $\phi_P$ is $O(P^3)$. For this reason, we focus in this work on the former method of taking identically distributed but not independent $\{X_{p|P}\}_{p=1}^{P}$. Following the setup in [7,20], our objective is to build a random estimator, A, approximating $\mathbb{E}[\psi(X_\infty(T))]$ with minimal work, i.e., we wish to satisfy the constraint
$$\mathrm{P}\Big[\big|A - \mathbb{E}[\psi(X_\infty(T))]\big| \ge \mathrm{TOL}\Big] \le \epsilon \qquad (5)$$
for a given error tolerance, TOL, and a given confidence level determined by $0 < \epsilon \ll 1$. We instead impose the following, more restrictive, two constraints:
Bias constraint: $\big|\mathbb{E}\big[A - \psi(X_\infty(T))\big]\big| \le (1-\theta)\,\mathrm{TOL}$, (6)
Statistical constraint: $\mathrm{P}\big[|A - \mathbb{E}[A]| \ge \theta\,\mathrm{TOL}\big] \le \epsilon$, (7)
for a given tolerance splitting parameter, $\theta \in (0,1)$, possibly a function of TOL. To show that these bounds are sufficient, note that $|A - \mathbb{E}[\psi(X_\infty(T))]| \le |A - \mathbb{E}[A]| + |\mathbb{E}[A - \psi(X_\infty(T))]|$; then imposing (7) gives (5). Next, we can use the Markov inequality and impose $\mathrm{Var}[A] \le \epsilon\,(\theta\,\mathrm{TOL})^2$ to satisfy (7). However, by assuming (at least asymptotic) normality of the estimator A, we can get a less stringent condition on the variance as follows:
Variance constraint: $\mathrm{Var}[A] \le \big(\theta\,\mathrm{TOL}/C_\epsilon\big)^2$. (8)
Here, $C_\epsilon > 0$ is such that $\Phi(C_\epsilon) = 1 - \tfrac{\epsilon}{2}$, where Φ is the cumulative distribution function of a standard normal random variable, e.g., $C_\epsilon \approx 1.96$ for $\epsilon = 0.05$. The asymptotic normality of the estimator is usually shown using some form of the Central Limit Theorem (CLT) or the Lindeberg–Feller theorem (see, e.g., [7,19] for CLT results for the MLMC and MIMC estimators, and Figure 3-right).

As previously mentioned, we wish to approximate the values of $X_\infty$ by using (1) with a finite number of particles, P. For a given number of particles, P, a solution to (1) is not readily available. Instead, we have to discretize the system of SDEs using, for example, the Euler–Maruyama time-stepping scheme with N uniform time steps of size $\Delta t = T/N$: for $n = 0, 1, 2, \dots, N-1$,
$$X^{N}_{p|P}(t_{n+1}) = X^{N}_{p|P}(t_n) + A\big(t_n, X^{N}_{p|P}(t_n), \lambda^{P}_{\mathbf{X}^N(t_n)}\big)\,\Delta t + B\big(t_n, X^{N}_{p|P}(t_n), \lambda^{P}_{\mathbf{X}^N(t_n)}\big)\,\Delta W_{p,n},$$
where $t_n = n\,\Delta t$ and the Wiener increments $\Delta W_{p,n} \sim \mathcal{N}(0, \Delta t\,I_d)$ are independent.
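As a concrete illustration of the scheme above, the following sketch simulates one realization of the Kuramoto system (3) with the Euler–Maruyama method and returns the two averages needed for the synchronization estimators. The value of σ and the distributions of $x^0_p$ and $\vartheta_p$ are illustrative choices, not the ones used in the experiments of Section 4; note the $O(P^2)$ cost of evaluating the empirical-measure drift.

```python
import numpy as np

rng = np.random.default_rng(0)

def kuramoto_em(P, N, T=1.0, sigma=0.4, rng=rng):
    """One realization of the P-particle Kuramoto system (3) under the
    Euler-Maruyama scheme; returns phi_P^N for psi = cos and psi = sin."""
    dt = T / N
    x = rng.normal(0.0, 0.2, size=P)         # x0_p, i.i.d. (illustrative)
    theta = rng.uniform(-0.2, 0.2, size=P)   # vartheta_p, i.i.d. (illustrative)
    for _ in range(N):
        # empirical-measure drift (1/P) sum_q sin(x_p - x_q): O(P^2) naive cost
        drift = theta + np.mean(np.sin(x[:, None] - x[None, :]), axis=1)
        x = x + drift * dt + sigma * rng.normal(0.0, np.sqrt(dt), size=P)
    return np.cos(x).mean(), np.sin(x).mean()

# total synchronization r = sqrt(E[cos X]^2 + E[sin X]^2), crude MC estimate:
c, s = np.mean([kuramoto_em(P=64, N=32) for _ in range(100)], axis=0)
print("r ~", np.hypot(c, s))
```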
At this point, we make the following assumptions:
(P1) $\big|\mathbb{E}\big[\phi^N_P - \psi(X_\infty(T))\big]\big| \lesssim N^{-1} + P^{-1}$;
(P2) $\mathrm{Var}\big[\phi^N_P\big] \lesssim P^{-1}$.
These assumptions will be verified numerically in Section 4. In general, they translate to smoothness and boundedness assumptions on A, B and ψ. Indeed, in (P1), the weak convergence of the Euler–Maruyama method with respect to the number of time steps is a standard result, shown, for example, in [23] by assuming four-times differentiability of A, B and ψ. Showing that the constant multiplying $N^{-1}$ is bounded for all P is straightforward by extending the standard proof of weak convergence of the Euler–Maruyama method in [23, Chapter 14] and assuming boundedness of the derivatives of A, B and ψ. On the other hand, the weak convergence with respect to the number of particles, i.e., $\mathbb{E}[\psi(X_{p|P})] \to \mathbb{E}[\psi(X_\infty)]$, is a consequence of the propagation of chaos, which is shown, without a convergence rate, in [28] for ψ Lipschitz, B constant and A of the form
$$A(t, x, \lambda) = \int_{\mathbb{R}^d}\kappa(t, x, y)\,\lambda(\mathrm{d}y), \qquad (9)$$
where $\kappa(t,\cdot,\cdot)$ is Lipschitz. On the other hand, for one-dimensional systems and using the results from [24, Theorem 3.2], we can show the weak convergence rate with respect to the number of particles and the convergence rate for the variance of $\phi_P$, as the following lemma shows. Below, $\mathcal{C}(\mathbb{R})$ is the space of continuous bounded functions and $\mathcal{C}^k(\mathbb{R})$ is the space of continuous bounded functions whose i'th derivative is in $\mathcal{C}(\mathbb{R})$ for $i = 1, \dots, k$.

Lemma 2.1 (Weak and variance convergence rates w.r.t. the number of particles). Consider (1) and (2) with B constant and A of the form (9), where the norms of $\kappa$ and ψ are assumed to be uniform with respect to the arguments, x and y, respectively. If, moreover, ψ and $\kappa(t,\cdot,\cdot)$ are sufficiently smooth, then
$$\big|\mathbb{E}\big[\psi(X_{p|P}(T)) - \psi(X_\infty(T))\big]\big| \lesssim P^{-1} \qquad (10)$$
and
$$\mathrm{Var}[\phi_P] \lesssim P^{-1}. \qquad (11)$$

Proof. The system in this lemma is a special case of the system in [24, Theorem 3.2]. From there, and given the assumptions of the current lemma, (10) immediately follows. Moreover, from the same reference, we can further conclude that
$$\mathrm{Cov}\big[\psi(X_{p|P}(T)),\,\psi(X_{q|P}(T))\big] \lesssim P^{-1}$$
for $1 \le p \ne q \le P$. Using this, we can show (11) since
$$\mathrm{Var}[\phi_P] = \frac{1}{P}\,\mathrm{Var}\big[\psi(X_{p|P}(T))\big] + \frac{P-1}{P}\,\mathrm{Cov}\big[\psi(X_{p|P}(T)),\,\psi(X_{q|P}(T))\big] \lesssim P^{-1}.$$
From here, the rate of convergence for the variance of $\phi^N_P$ can be shown by noting that
$$\mathrm{Var}\big[\phi^N_P\big] \le \big|\mathrm{Var}[\phi^N_P] - \mathrm{Var}[\phi_P]\big| + \mathrm{Var}[\phi_P],$$
noting that $\mathrm{Var}[\phi_P] \lesssim P^{-1}$, and then showing that the first term satisfies $\big|\mathrm{Var}[\phi^N_P] - \mathrm{Var}[\phi_P]\big| \lesssim N^{-1}P^{-1}$ because of the weak convergence with respect to the number of time steps. ∎

Finally, as mentioned above, with a naive method, the total cost to compute a single sample of $\phi^N_P$ is $O(NP^2)$. The quadratic power of P can be reduced by using, for example, a multipole algorithm [5,16]. In general, we consider the work required to compute one sample of $\phi^N_P$ to be $O(NP^{\gamma_p})$ for a constant $\gamma_p \ge 1$.

Monte Carlo methods

In this section, we study different Monte Carlo methods that can be used to estimate the previous quantity, $\phi_\infty$. In the following, we use the notation $\omega^{(m)}_{1:P} = \{\omega^{(m)}_q\}_{q=1}^{P}$, where, for each q, $\omega^{(m)}_q$ denotes the m'th sample of the set of underlying random variables that are used in calculating $X^N_{q|P}$, i.e., the Wiener path, $W_q$, the initial condition, $x^0_q$, and any random variables that are used in A or B. Moreover, we sometimes write $\phi^N_P(\omega^{(m)}_{1:P})$ to emphasize the dependence of the m'th sample of $\phi^N_P$ on the underlying random variables.
Monte Carlo (MC)

The first estimator that we look at is a Monte Carlo estimator. For a given number of samples, M, number of particles, P, and number of time steps, N, we can write the MC estimator as follows:
$$A_{\mathrm{MC}}(M, P, N) = \frac{1}{M}\sum_{m=1}^{M}\phi^N_P\big(\omega^{(m)}_{1:P}\big).$$
Here, $\mathbb{E}[A_{\mathrm{MC}}(M, P, N)] = \mathbb{E}[\phi^N_P]$ and the total work is $\mathrm{Work}[A_{\mathrm{MC}}(M, P, N)] = M\,N\,P^{\gamma_p}$. Hence, due to (P1), we must have $P = O(\mathrm{TOL}^{-1})$ and $N = O(\mathrm{TOL}^{-1})$ to satisfy (6) and, due to (P2), we must have $M = O(\mathrm{TOL}^{-1})$ to satisfy (8). Based on these choices, the total work to compute $A_{\mathrm{MC}}$ is $O(\mathrm{TOL}^{-2-\gamma_p})$.

Kuramoto Example. Using a naive calculation method of $\phi^N_P$ (i.e., $\gamma_p = 2$) gives a work complexity of $O(\mathrm{TOL}^{-4})$. See also Table 1 for the work complexities for different common values of $\gamma_p$.

Multilevel Monte Carlo (MLMC)

For a given $L \in \mathbb{N}$, define two hierarchies, $\{N_\ell\}_{\ell=0}^{L}$ and $\{P_\ell\}_{\ell=0}^{L}$, satisfying $P_{\ell-1} \le P_\ell$ and $N_{\ell-1} \le N_\ell$ for all ℓ. Then, we can write the MLMC estimator as follows:
$$A_{\mathrm{MLMC}} = \sum_{\ell=0}^{L}\frac{1}{M_\ell}\sum_{m=1}^{M_\ell}\Big(\phi^{N_\ell}_{P_\ell}\big(\omega^{(\ell,m)}_{1:P_\ell}\big) - \varphi^{N_{\ell-1}}_{P_{\ell-1}}\big(\omega^{(\ell,m)}_{1:P_\ell}\big)\Big), \qquad (12)$$
with the convention $\varphi^{N_{-1}}_{P_{-1}} = 0$, where we later choose the function $\varphi^{N_{\ell-1}}_{P_{\ell-1}}$ so that $\mathbb{E}\big[\varphi^{N_{\ell-1}}_{P_{\ell-1}}\big] = \mathbb{E}\big[\phi^{N_{\ell-1}}_{P_{\ell-1}}\big]$, as required by the telescopic sum. For MLMC to have better work complexity than that of Monte Carlo, $\phi^{N_\ell}_{P_\ell}(\omega^{(\ell,m)}_{1:P_\ell})$ and $\varphi^{N_{\ell-1}}_{P_{\ell-1}}(\omega^{(\ell,m)}_{1:P_\ell})$ must be correlated for every ℓ and m, so that their difference has a smaller variance than either term alone, for all $\ell > 0$.

Given two discretization levels, $N_\ell$ and $N_{\ell-1}$, with the same number of particles, P, we can generate a correlated sample of $\varphi^{N_{\ell-1}}_{P}(\omega^{(\ell,m)}_{1:P})$. That is, we use the same samples of the initial values, $\{x^0_p\}_{p\ge 1}$, the same Wiener paths, $\{W_p\}_{p=1}^{P}$, and, in case they are random as in (3), the same samples of the advection and diffusion coefficients, A and B, respectively. We can improve the correlation by using an antithetic sampler as detailed in [15], or by using a higher-order scheme like the Milstein scheme [12]. In the Kuramoto example, the Euler–Maruyama and the Milstein schemes are equivalent since the diffusion coefficient is constant.

On the other hand, given two different sizes of the particle system, $P_\ell$ and $P_{\ell-1}$, with the same discretization level, N, we can generate a sample of $\varphi^{N}_{P_{\ell-1}}(\omega_{1:P_\ell})$ by taking
$$\varphi^{N}_{P_{\ell-1}}\big(\omega_{1:P_\ell}\big) = \phi^{N}_{P_{\ell-1}}\big(\omega_{1:P_{\ell-1}}\big). \qquad (13)$$
In other words, we use the same $P_{\ell-1}$ sets of random variables, out of the total $P_\ell$ sets of random variables, to run an independent simulation of the stochastic system with $P_{\ell-1}$ particles. We also consider another estimator that is more correlated with $\phi^{N}_{P_\ell}(\omega_{1:P_\ell})$. The "antithetic" estimator was first independently introduced in [17, Chapter 5] and [4], and subsequently used in other works on particle systems [27] and nested simulations [14]. In this work, we call this estimator a "partitioning" estimator to clearly distinguish it from the antithetic estimator in [15]. We assume that $P_\ell = \beta_p P_{\ell-1}$ for all ℓ and some positive integer $\beta_p$ and take
$$\hat\varphi^{N}_{P_{\ell-1}}\big(\omega_{1:P_\ell}\big) = \frac{1}{\beta_p}\sum_{i=1}^{\beta_p}\phi^{N}_{P_{\ell-1}}\big(\omega_{(i-1)P_{\ell-1}+1\,:\,iP_{\ell-1}}\big). \qquad (14)$$
That is, we split the underlying $P_\ell$ sets of random variables into $\beta_p$ identically distributed and independent groups, each of size $P_{\ell-1}$, and independently simulate $\beta_p$ particle systems, each of size $P_{\ell-1}$. Finally, for each particle system, we compute the quantity of interest and take the average of the $\beta_p$ quantities; a sketch of both couplings in the particle dimension follows.
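The following sketch contrasts the two couplings in the particle dimension, (13) and (14), for a single correlated MLMC sample. Here phi is a placeholder for a black-box routine that simulates the particle system from the supplied per-particle random inputs (e.g., the Kuramoto simulator above, refactored to accept its random inputs explicitly); the helper name is hypothetical.

```python
import numpy as np

def mlmc_particle_difference(phi, omega, beta_p, partitioning=True):
    """One correlated sample of phi_{P_l} - varphi_{P_{l-1}}.
    omega: list of P_l per-particle random inputs (Wiener increments,
    initial values, ...); assumes P_l = beta_p * P_{l-1}."""
    P_fine = len(omega)
    P_coarse = P_fine // beta_p
    fine = phi(omega)
    if partitioning:
        # sampler (14): average of beta_p independent coarse systems
        groups = [omega[i * P_coarse:(i + 1) * P_coarse] for i in range(beta_p)]
        coarse = np.mean([phi(g) for g in groups])
    else:
        # sampler (13): reuse only the first P_{l-1} sets of random inputs
        coarse = phi(omega[:P_coarse])
    return fine - coarse
```

The partitioning branch touches all $P_\ell$ random inputs on the coarse level as well, which is what makes the coarse term more strongly correlated with the fine term than the plain sampler (13).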
In the following subsections, we look at different settings in which either $P_\ell$ or $N_\ell$ depends on ℓ while the other parameter is constant for all ℓ. We begin by recalling the optimal convergence rates of MLMC when applied to a generic random variable, Y, with a trivial generalization to the case when there are two discretization parameters: one that is a function of the level, ℓ, and another, $\bar{L}$, that is fixed for all levels.

Theorem 3.1 (Optimal MLMC complexity). Let $Y_{\bar{L},\ell}$ be an approximation of the random variable Y for every $(\bar{L},\ell) \in \mathbb{N}^2$. Denote by $Y^{(\ell,m)}$ a sample of Y and denote its corresponding approximation by $Y^{(\ell,m)}_{\bar{L},\ell}$, where we assume that the samples $\{Y^{(\ell,m)}\}_{\ell,m}$ are mutually independent. Consider the MLMC estimator
$$A_{\mathrm{MLMC}}(\bar{L}, L) = \sum_{\ell=0}^{L}\frac{1}{M_\ell}\sum_{m=1}^{M_\ell}\Big(Y^{(\ell,m)}_{\bar{L},\ell} - Y^{(\ell,m)}_{\bar{L},\ell-1}\Big)$$
with $Y^{(\ell,m)}_{\bar{L},-1} = 0$ and, for $\bar\beta, \bar{w}, \bar\gamma, \bar{s}, \beta, w, \gamma, s > 0$ where $s \le 2w$, assume the following:
(i) $\big|\mathbb{E}\big[Y - Y_{\bar{L},\ell}\big]\big| \lesssim \bar\beta^{-\bar{w}\bar{L}} + \beta^{-w\ell}$;
(ii) $\mathrm{Var}\big[Y_{\bar{L},\ell} - Y_{\bar{L},\ell-1}\big] \lesssim \bar\beta^{-\bar{s}\bar{L}}\,\beta^{-s\ell}$;
(iii) $\mathrm{Work}\big[Y_{\bar{L},\ell} - Y_{\bar{L},\ell-1}\big] \lesssim \bar\beta^{\bar\gamma\bar{L}}\,\beta^{\gamma\ell}$.
Then, for any $\mathrm{TOL} < e^{-1}$, there exist $\bar{L}$, L and a sequence $\{M_\ell\}_{\ell=0}^{L}$ such that
$$\mathrm{P}\Big[\big|A_{\mathrm{MLMC}}(\bar{L}, L) - \mathbb{E}[Y]\big| \ge \mathrm{TOL}\Big] \le \epsilon \qquad (15)$$
and
$$\mathrm{Work}\big[A_{\mathrm{MLMC}}(\bar{L}, L)\big] \lesssim \mathrm{TOL}^{-2-\frac{\bar\gamma-\bar{s}}{\bar{w}}-\max\left(0,\frac{\gamma-s}{w}\right)}\,\big(\log\mathrm{TOL}^{-1}\big)^{2\cdot\mathbb{1}\{s=\gamma\}}. \qquad (16)$$

Proof. The proof can be straightforwardly derived from the proof of [6, Theorem 1]; we sketch here the main steps. First, we split the constraint (15) into bias and variance constraints similar to (6)-(8), with tolerances $(1-\theta)\mathrm{TOL}$ and $\theta\,\mathrm{TOL}$, respectively. Then, since $\mathbb{E}\big[A_{\mathrm{MLMC}}(\bar{L}, L)\big] = \mathbb{E}\big[Y_{\bar{L},L}\big]$, given the first assumption of the theorem, imposing the bias constraint yields $\bar{L} = O\big(\tfrac{1}{\bar{w}\log\bar\beta}\log(\mathrm{TOL}^{-1})\big)$ and $L = O\big(\tfrac{1}{w\log\beta}\log(\mathrm{TOL}^{-1})\big)$. Finally, given $\bar{L}$ and L, solving for $\{M_\ell\}_{\ell=0}^{L}$ to minimize the work while satisfying the variance constraint, and using the assumptions on the variance and work, gives the desired result. ∎

MLMC hierarchy based on the number of time steps

In this setting, we take $N_\ell = (\beta_t)^{\ell}$ for some $\beta_t > 0$ and $P_\ell = P_L$ for all ℓ, i.e., the number of particles is a constant, $P_L$, on all levels. We make an extra assumption in this case, namely:
$$\mathrm{Var}\Big[\phi^{N_\ell}_{P_L} - \phi^{N_{\ell-1}}_{P_L}\Big] \lesssim (\beta_t)^{-s_t\ell}\,P_L^{-1}$$
for some constant $s_t > 0$. The factor $(\beta_t)^{-s_t\ell}$ is the usual assumption on the variance convergence of the level difference in MLMC theory [13], and is a standard result for the Euler–Maruyama scheme with $s_t = 1$ and for the Milstein scheme with $s_t = 2$ [23]. On the other hand, the factor $P_L^{-1}$ can be motivated from (P2), which states that the variance of each term in the difference converges at this rate. Due to Theorem 3.1, we can conclude that the work complexity of MLMC in this setting is $O(\mathrm{TOL}^{-1-\gamma_p})$ for $s_t > 1$, with an additional $(\log\mathrm{TOL}^{-1})^2$ factor when $s_t = 1$.

Kuramoto Example. In this example, using the Milstein time-stepping scheme, we have $s_t = 2$ (cf. Figure 1), and a naive calculation method of $\phi^N_P$ ($\gamma_p = 2$) gives a work complexity of $O(\mathrm{TOL}^{-3})$. See also Table 1 for the work complexities for different common values of $s_t$ and $\gamma_p$.

MLMC hierarchy based on the numbers of time steps and particles

Kuramoto Example. We choose $\beta_p = \beta_t$ and use a naive calculation method of $\phi^N_P$ (yielding $\gamma_p = 2$) and the partitioning sampler (yielding $s_p = 1$). Finally, using the Milstein time-stepping scheme, we have $s_t = 2$; refer to Figure 1 for numerical verification. Based on these rates, we have, in (19), $s = 2\log(\beta_p)$, $w = \log(\beta_p)$ and $\gamma = 3\log(\beta_p)$. The MLMC work complexity in this case is $O(\mathrm{TOL}^{-3})$. See also Table 1 for the work complexities for different common values of $s_t$ and $\gamma_p$.

Multi-index Monte Carlo (MIMC)

Following [19], for every multi-index $\alpha = (\alpha_1, \alpha_2) \in \mathbb{N}^2$, let $P_{\alpha_1} = (\beta_p)^{\alpha_1}$ and $N_{\alpha_2} = (\beta_t)^{\alpha_2}$, and define the first-order mixed-difference operator in two dimensions as
$$\Delta\phi^{N_{\alpha_2}}_{P_{\alpha_1}} = \Big(\phi^{N_{\alpha_2}}_{P_{\alpha_1}} - \hat\varphi^{N_{\alpha_2}}_{P_{\alpha_1-1}}\Big) - \Big(\phi^{N_{\alpha_2-1}}_{P_{\alpha_1}} - \hat\varphi^{N_{\alpha_2-1}}_{P_{\alpha_1-1}}\Big),$$
where all terms are computed from the same underlying random variables: differences with respect to the number of time steps reuse these variables on the coarser time grid, differences with respect to the number of particles use the partitioning sampler (14), and any term with a negative index is set to zero. The MIMC estimator is then written for a given index set $\mathcal{I} \subset \mathbb{N}^2$ as
$$A_{\mathrm{MIMC}} = \sum_{\alpha\in\mathcal{I}}\frac{1}{M_\alpha}\sum_{m=1}^{M_\alpha}\Delta\phi^{N_{\alpha_2}}_{P_{\alpha_1}}\big(\omega^{(\alpha,m)}_{1:P_{\alpha_1}}\big).$$
At this point, similar to the original work on MIMC [19], we make the following assumptions on the convergence of $\Delta\phi^{N_{\alpha_2}}_{P_{\alpha_1}}$:
(MIMC1) $\big|\mathbb{E}\big[\Delta\phi^{N_{\alpha_2}}_{P_{\alpha_1}}\big]\big| \lesssim P_{\alpha_1}^{-1}\,N_{\alpha_2}^{-1}$;
(MIMC2) $\mathrm{Var}\big[\Delta\phi^{N_{\alpha_2}}_{P_{\alpha_1}}\big] \lesssim P_{\alpha_1}^{-1-s_p}\,N_{\alpha_2}^{-s_t}$.
Assumption (MIMC1) is motivated from (P1) by assuming that the mixed first-order difference, $\Delta\phi^{N_{\alpha_2}}_{P_{\alpha_1}}$, gives a product of the convergence terms instead of a sum. Similarly, (MIMC2) is motivated from (MLMC1) and (MLMC2). To the best of our knowledge, there are currently no proofs of these assumptions for particle systems, but we verify them numerically for (3) in Figure 2; a sketch of the mixed-difference sampler follows.
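A sketch of one sample of the mixed difference for an interior multi-index ($\alpha_1, \alpha_2 \ge 1$) follows. As before, phi is a placeholder black box, here taking the random inputs and the number of time steps; coupling across the time-step dimension is assumed to be achieved inside phi by reusing the same inputs on the coarser grid, and the function name is hypothetical.

```python
def mimc_mixed_difference(phi, omega, beta_p, beta_t, N_fine):
    """One sample of the first-order mixed difference for an interior
    multi-index; omega holds P_{alpha_1} = len(omega) per-particle inputs."""
    def particle_diff(N):
        # inner difference in the particle dimension, via partitioning (14)
        P_coarse = len(omega) // beta_p
        fine = phi(omega, N)
        groups = [omega[i * P_coarse:(i + 1) * P_coarse] for i in range(beta_p)]
        return fine - sum(phi(g, N) for g in groups) / beta_p
    # outer difference in the time-step dimension
    return particle_diff(N_fine) - particle_diff(N_fine // beta_t)
```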
Henceforth, we will assume that $\beta_t = \beta_p$ for easier presentation. Following [19, Lemma 2.1] and recalling the assumption on the cost per sample, $\mathrm{Work}\big[\Delta\phi^{N_{\alpha_2}}_{P_{\alpha_1}}\big] \lesssim P_{\alpha_1}^{\gamma_p}\,N_{\alpha_2}$, then, for every value of $L \in \mathbb{R}^{+}$, the optimal index set can be written as
$$\mathcal{I}(L) = \big\{\alpha \in \mathbb{N}^2 : \alpha_1 + 2\alpha_2 \le L\big\}, \qquad (21)$$
and the optimal computational complexity of MIMC is $O\big(\mathrm{TOL}^{-2-2\max(0,\zeta)}(\log\mathrm{TOL}^{-1})^{p}\big)$, where the rate ζ and the logarithmic exponent p are determined by the bias, variance and work rates along the two index directions, as detailed in [19].

Kuramoto Example. Here again, we use a naive calculation method of $\phi^N_P$ (yielding $\gamma_p = 2$) and the partitioning sampler (yielding $s_p = 1$). Finally, using the Milstein time-stepping scheme, we have $s_t = 2$. Hence, $\zeta = 0$, $z = 1$, and $\mathrm{Work}[A_{\mathrm{MIMC}}] = O\big(\mathrm{TOL}^{-2}(\log\mathrm{TOL}^{-1})^{2}\big)$. See also Table 1.

Numerical Example

In this section we provide numerical evidence for the assumptions and work complexities stated in Section 3. This section also verifies that the constants of the work complexity (which were not tracked) are not significant for reasonable error tolerances. The results in this section were obtained using the mimclib software library [18] and GNU parallel [29].

Figure 1 shows the absolute expectation and variance of the level differences for the different MLMC settings that were outlined in Section 3.2. These figures verify Assumptions (P1), (P2) and (MLMC1)-(MLMC3) with the values $s_t = 2$, and $s_p = 0$ for the $\varphi$ sampler in (13) or $s_p = 1$ for the $\hat\varphi$ sampler in (14). For the same parameter values, Figure 2 provides numerical evidence for Assumptions (MIMC1) and (MIMC2) for the $\hat\varphi$ sampler (14).

We now compare the MLMC method [13] in the setting that was presented in Section 3.2.3 and the MIMC method [19] that was presented in Section 3.3. In both methods, we use the Milstein time-stepping scheme and the partitioning sampler, $\hat\varphi$, in (14). Recall that, in this case, we verified numerically that $\gamma_p = 2$, $s_p = 1$ and $s_t = 2$. We also use the MLMC and MIMC algorithms that were outlined in their original works, and use an initial 25 samples on each level or multi-index to compute a corresponding variance estimate that is required to compute the optimal number of samples. In the following, we refer to these methods as simply "MLMC" and "MIMC". We focus on the settings in Sections 3.2.3 and 3.3 since checking the bias of the estimator in those settings can be done straightforwardly by checking the absolute value of the level differences in MLMC or the multi-index differences in MIMC. On the other hand, checking the bias in the settings outlined in Sections 3.1, 3.2.1 and 3.2.2 is not as straightforward, and determining the number of time steps and/or the number of particles needed to satisfy a certain error tolerance requires more sophisticated algorithms. This makes a fair numerical comparison with these latter settings somewhat difficult.
Figure 3-left shows the exact errors of both MLMC and MIMC for different prescribed tolerances. This plot shows that both methods estimate the quantity of interest up to the same error tolerance; comparing their work complexity is thus fair. On the other hand, Figure 3-right is a PP plot, i.e., a plot of the cumulative distribution function (CDF) of the MLMC and MIMC estimators, normalized by their variance and shifted by their mean, versus the CDF of a standard normal distribution. This figure shows that our assumption in Section 2 of the asymptotic normality of these estimators is well founded. Figure 4 shows the maximum discretization level, for both the number of time steps and the number of particles, for MLMC and MIMC (cf. (22)). Recall that, for a fixed tolerance in MIMC, $2\alpha_2 + \alpha_1$ is bounded by a constant (cf. (21)). Hence, Figure 4 has a direct implication on the results reported in Figure 5, where we plot the maximum cost of the samples used in both MLMC and MIMC for different tolerances. This cost represents an indivisible unit of simulation for both methods, assuming we treat the simulation of the particle system as a black box. Hence, Figure 5 shows that MIMC has better parallelization scaling, i.e., even with an infinite number of computation nodes, MIMC would still be more efficient than MLMC.

(Figure caption fragment: ... according to (22). From the right plot, we can confirm that $s_t = 2$ for the Milstein method. We can also deduce that using the $\varphi$ sampler in (13) yields $s_p = 0$ in (MIMC2), i.e., no variance reduction compared to $\mathrm{Var}[\phi^N_P]$, while using the $\hat\varphi$ sampler in (14) yields $s_p = 1$ in (MIMC2), i.e., $O(P^{-2})$.)

Finally, we show in Figure 6 the cost estimates of MLMC and MIMC for different tolerances. This figure clearly shows the performance improvement of MIMC over MLMC, and shows that the complexity rates that we derived in this work are reasonably accurate.

Conclusions

This work has shown, both numerically and theoretically under certain assumptions that could be verified numerically, the improvement of MIMC over MLMC when used to approximate a quantity of interest computed on a particle system as the number of particles goes to infinity. The application to other particle systems (or, equivalently, other McKean–Vlasov SDEs) is straightforward, and similar improvements are expected. The same machinery was also suggested for approximating nested expectations in [14], and the analysis here applies to that setting as well. Moreover, the same machinery, i.e., a multi-index structure with respect to the number of time steps and the number of particles coupled with a partitioning estimator, could be used to create control variates to reduce the computational cost of approximating quantities of interest on stochastic particle systems with a finite number of particles.

Future work includes analyzing the optimal level separation parameters, $\beta_p$ and $\beta_t$, and the behavior of the tolerance splitting parameter, θ. Another direction could be applying the MIMC method to higher-dimensional particle systems, such as the crowd model in [17]. On the theoretical side, the next step is to prove the assumptions that were postulated and verified numerically in this work for certain classes of particle systems, namely: the second-order convergence with respect to the number of particles of the variance of the partitioning estimator (14), and the convergence rates for the mixed differences, (MIMC1) and (MIMC2).
Table 1: The work complexity of the different methods presented in this work in common situations, encoded as (a, b) to represent $O(\mathrm{TOL}^{-a}(\log(\mathrm{TOL}^{-1}))^{b})$. When appropriate, we use the partitioning estimator (i.e., $s_p = 1$). In general, MIMC always has the best complexity. However, when $\gamma_p = 1$, MIMC does not offer an advantage over an appropriate MLMC method.
CREB-AP1 Protein Complexes Regulate Transcription of the Collagen XXIV Gene (Col24a1) in Osteoblasts*

Collagen XXIV is a newly discovered and poorly characterized member of the fibril-forming family of collagen molecules, which displays unique structural features of invertebrate fibrillar collagens and is expressed predominantly in bone tissue. Here we report the characterization of the proximal promoter of the mouse gene (Col24a1) and its regulation in osteoblastic cells. Using well characterized murine models of osteoblast differentiation, we found that the Col24a1 gene is activated sometime before onset of the late differentiation marker osteocalcin. Additional analyses revealed that Col24a1 produces equal amounts of two alternatively spliced products with different 5′-untranslated sequences that originate from distinct transcriptional start sites. Cell transfection experiments in combination with DNA binding assays demonstrated that Col24a1 promoter activity in ROS17/2.8 osteosarcoma cells is under the control of an upstream cis-acting element, which is shared by both transcripts and is recognized by specific combinations of c-Jun, CREB1, ATF1, and ATF2 dimers. Consistent with these results, overexpression of c-Jun, ATF1, ATF2, or CREB1 in transiently transfected osteoblastic cells stimulated transcription from reporter gene constructs driven by the Col24a1 promoter to different degrees. Moreover, chromatin immunoprecipitation experiments showed that these nuclear factors bind the same upstream sequence of the endogenous Col24a1 gene. Collectively these data provide new information about transcriptional control of collagen fibrillogenesis, in addition to implicating for the first time CREB-AP1 protein complexes in the regulation of collagen gene expression in osteoblasts.

Vertebrate collagens represent a very large superfamily of extracellular proteins that impart specific physical properties to the connective tissue of virtually every organ system (1)(2)(3). There are more than 42 collagen α-chains that form 27 distinct trimers or types, which in turn give rise to a large variety of specialized macroaggregates. The most abundant and ubiquitous collagen macroaggregates are the highly ordered banded fibrils made of the so-called fibrillar collagens (types I-III, V, and XI) (1)(2)(3). All members of the fibrillar collagen family share a common structure that consists of a long triple helical domain, which is made of uninterrupted Gly-X-Y triplets and flanked at both ends by noncollagenous propeptides (1)(2)(3). These structural features also characterize collagen molecules that form fibrils in the extracellular matrices of primitive invertebrates, such as sponges, annelids, echinoderms, and mollusks (4,5). Unlike the vertebrate counterparts, invertebrate fibrillar collagens display short interruptions in the triple helices and unique structural features in the amino- and carboxyl-terminal propeptides (4,5). Vertebrate fibrillar collagens are either widely distributed in soft and hard tissues (types I, III, and V) or are restricted predominantly to cartilage (types II and XI) (1). Genetic evidence from animal and human studies has indicated that the quantitatively minor types V and XI collagens regulate the diameter of the major types I and II fibrils, respectively, by participating in fibril assembly (2,6-9). These studies have also demonstrated the importance of heterotypic collagen I/V and II/XI fibrils in skeletal development and integrity.
As a result of the Human Genome effort, several new collagens have been recently identified which had escaped prior biochemical detection. Two among them (types XXIV and XXVII) bear the structural characteristics of fibrillar collagens and, specifically, of invertebrate fibrillar collagens (10-12). Gene expression analyses in the mouse have revealed that Col24a1 and Col27a1 display mutually exclusive patterns in the developing and adult skeleton. These studies have in fact shown that whereas Col24a1 transcripts accumulate at ossification centers of the craniofacial, axial, and appendicular skeleton, Col27a1 activity is instead confined to the cartilaginous anlagen of skeletal elements (10-12). Additionally, structural considerations have suggested that collagens XXIV and XXVII are likely to form distinct homotrimers (11). Together these observations have been interpreted to indicate that these newly discovered fibrillar collagens may participate in the control of important physiological processes in bone and cartilage, such as collagen fibrillogenesis and/or matrix calcification and mineralization (10-12). Bone formation is a complex and tightly regulated genetic program that involves two distinct pathways at different anatomical locations (13)(14)(15). In intramembranous ossification, mesenchymal cells condense and differentiate directly into collagen I-producing osteoblasts. In endochondral bone formation, cells at condensation sites differentiate into chondrocytes that form a cartilage (collagen II-rich) anlagen, which is replaced by a bony (collagen I-rich) matrix and bone marrow following chondrocyte hypertrophy, matrix calcification, and vascular invasion. At the same time, cells around the condensations form the perichondrial layer that gives rise to the osteoblast-forming periosteum and ultimately to cortical bone. Distinct transcriptional codes control osteoblastogenesis and chondrogenesis and, thus, assembly of the collagen I-rich bone matrix and the collagen II-rich cartilage matrix (14,15). The canonical Wnt signaling pathway has been shown recently to direct differentiation of mesenchymal cells toward either the osteoblast or chondrocyte lineages (16,17). Previous investigations, on the other hand, have implicated the transcription factors Runx2 and Osterix in the progression of osteoblastogenesis during intramembranous and endochondral ossification, as well as the Sox5, 6, and 9 nuclear proteins in the regulation of chondrogenesis (18-22). A number of ubiquitous transcription factors have also been involved in osteoblast differentiation and function, including Msx proteins, Dlx5, Twist, and members of the AP1 complexes (23)(24)(25)(26)(27)(28). Similarly, several studies have identified DNA cis-acting elements and nuclear trans-acting factors that regulate cartilage-specific expression of the collagen II and XI genes (29-32). By contrast, significantly less is known about the regulation of fibrillar collagen genes in osteoblasts. One of our research interests is the study of the regulation and function of fibrillar collagen genes in normal and diseased conditions. As part of this ongoing effort, the present study was designed to characterize the proximal promoter of the mouse Col24a1 gene using a combination of cell transfection and DNA binding assays.
The results of these experiments suggest that Col24a1 is activated during the mid to late phase of osteoblast differentiation, mostly through the binding of CREB-AP1 complexes to an upstream sequence, which is shared by two alternative transcription start sites. This study therefore extends our knowledge of the transcriptional regulation of collagen fibrillogenesis, in addition to implicating for the first time CREB-AP1 protein complexes in the expression of a fibrillar collagen gene in osteoblasts.

Cell Transfection Assays-Col24a1 promoter-luciferase (LUC) reporter gene constructs were derived from clone pBeroBAC RP23-205C6 using PCR amplification. Amplified products were cloned into the pGEM-T Easy vector (Promega, Madison, WI) and sequenced. Internal deletions and nucleotide substitutions were generated by site-directed mutagenesis as described previously (34). Transient transfections were performed using the Lipofectamine Plus reagent system (Invitrogen), and luciferase activity was assayed 48 h later using the Dual-luciferase™ reporter assay system (Promega). The pRL-TK Renilla reniformis luciferase expression vector was used as an internal control for transfection efficiency. Results were expressed as the mean ± S.E. of five to seven independent experiments and evaluated by Student's t test. Expression vectors for ATF1, ATF2, CREB1, and c-Jun were kindly provided by Drs. Gerard Karsenty (Baylor College of Medicine, Houston, TX) and Lionel Ivashkiv (Hospital for Special Surgery, New York, NY).

DNA Binding Assays-Preparation of nuclear extracts and DNA binding assays were carried out according to published protocols (34, 35). Wild-type and mutant oligonucleotide probes were generated by PCR amplification using HindIII site-linked primers. PCR products were subcloned into the pGEM-T Easy vector, cleaved with HindIII, and radiolabeled with [α-32P]dCTP using the Klenow enzyme (34, 35). DNA-nuclear protein binding was carried out at 25°C for 30 min in 25 µl of reaction buffer containing 3 µg of poly(dI-dC). DNA-bound protein complexes were separated in a 4.5% nondenaturing polyacrylamide gel in 0.25% TBE buffer. For competition and antibody interference assays, unlabeled probes or antibodies were added to the reaction mixture for 1 h at 4°C before the addition of the labeled probe. The anti-CREB1 antibody was purchased from Upstate Biotechnology (Lake Placid, NY), and the other antibodies were from Santa Cruz Biotechnology (Santa Cruz, CA). Chromatin immunoprecipitation (ChIP) assays were performed using a commercial kit (Upstate Biotechnology) (34). Quantitative PCR was carried out for 35 cycles using 5 µl of sample DNA solution per 50-µl reaction, and amplification products were separated in a 2.5% agarose gel in 1× TAE buffer.

Col24a1 Contains Two Alternative Promoters-Previous work established the entire coding sequence of the human α1(XXIV) collagen (COL24A1) gene and of only part of the mouse Col24a1 gene (11). We used this information to complete the primary structure of the mouse α1(XXIV) collagen chain by identifying mouse expressed sequence tags in GenBank™ and by generating PCR amplification products covering sequence gaps. As a result, we found a 19-amino acid insertion in the noncollagenous amino-terminal sequence of the mouse compared with the human chain (see GenBank™ accession numbers AY244357 and DQ157748).
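Before continuing with the results, it may help to make the dual-luciferase read-out described under "Cell Transfection Assays" above concrete: promoter-driven firefly counts are divided by the pRL-TK Renilla counts of the same well, and the normalized ratios are summarized as mean ± S.E. and compared by Student's t test. The sketch below is a minimal illustration of that workflow; all luminescence values, and the function name, are hypothetical.

```python
# Minimal sketch of the dual-luciferase normalization described above.
# Raw luminescence values are hypothetical; only the workflow
# (firefly/Renilla ratio, mean +/- S.E., Student's t test) follows the text.
from statistics import mean, stdev
from math import sqrt
from scipy import stats

def normalized_activity(firefly, renilla):
    """Normalize promoter-driven firefly counts to the pRL-TK Renilla control."""
    return [f / r for f, r in zip(firefly, renilla)]

# Hypothetical replicate measurements for a promoter construct and the empty vector.
construct = normalized_activity([52000, 48000, 55000, 50000, 47000],
                                [2100, 1900, 2200, 2000, 1950])
empty     = normalized_activity([5100, 4800, 5300, 5000, 4700],
                                [2000, 2050, 1980, 2020, 1990])

se = lambda xs: stdev(xs) / sqrt(len(xs))   # standard error of the mean
t, p = stats.ttest_ind(construct, empty)    # Student's t test, as in the text
print(f"construct: {mean(construct):.2f} +/- {se(construct):.2f} (S.E.)")
print(f"empty:     {mean(empty):.2f} +/- {se(empty):.2f} (S.E.)")
print(f"t = {t:.2f}, p = {p:.4f}")
```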
These experiments were also instrumental in identifying the foremost exon of Col24a1 as consisting of a 5′-untranslated region (UTR) of undetermined length and a coding segment corresponding to the first 94 amino acid residues of the α-chain. The oligonucleotide-capping RACE approach was therefore employed to determine the Col24a1 start site of transcription and, implicitly, the 5′-boundary of exon 1. As the source of template for the reaction, we utilized RNA purified from the eye and bone, in which accumulation of Col24a1 transcripts has been found to be the highest (11). Sequencing of nearly 40 independent cDNA clones from each set of RNA samples revealed the presence of two different 5′-UTRs that, upon subsequent analysis of genomic clones, were accounted for by a combination of alternative splicing and transcriptional start sites. To be precise, half of the cDNA clones contained a 353-nucleotide-long 5′-UTR that is continuous with the genomic sequence of the exon originally identified as the first exon of the human gene and which was now renamed exon 1a (Fig. 1A) (11). The other half of the cDNA clones instead contained the 87 nucleotides immediately upstream of the ATG codon, in addition to an 80-nucleotide-long 5′-UTR corresponding to an upstream exon (named exon 1b) that is separated from exon 1a by a 152-bp intervening sequence (Fig. 1A). Both transcripts 1a and 1b are spliced correctly into exon 2, leaving the open reading frame unaffected, and consequently, they are predicted to translate into identical α1(XXIV) chains (Fig. 1A). In summary, Col24a1 contains two alternative start sites of transcription, thereby identified as -1 (exon 1a) and +232 (exon 1b), two alternatively spliced transcripts with different 5′-UTRs, the shorter of which (transcript 1b) splices into nucleotide +509 of exon 1a, and the same start site of translation, located at nucleotide +586 (Fig. 1A). The functional significance of the Col24a1 alternative promoters and 5′-UTR heterogeneity was not addressed in the present study. Comparison of the 5′-end sequences of the COL24A1 and Col24a1 genes revealed three segments of high homology, which span from nucleotides -100 to +133, from +198 to +256, and from +470 to +632 (Fig. 1A).

Collagen XXIV has been estimated to represent about 4% of the amount of collagen I in bone, thus slightly less than collagen V, the regulator of collagen I fibrillogenesis (11). The estimate was based on gene expression analyses that in addition documented coexpression of the collagen I and XXIV genes at ossification centers in the mouse embryo (11). Osteoblasts were therefore chosen as the experimental system in which to study the transcriptional regulation of Col24a1. Reverse transcription-PCR amplifications were used to assess the levels of Col24a1 expression in ROS17/2.8 and ROS25 osteosarcoma cell lines, which represent late and early stages of osteoblast differentiation, respectively (33). As positive and negative controls, PCR amplifications were also performed with RNAs purified from MCC and NIH-3T3 fibroblasts. Osteoblast-specific genes included Col1a2 (early osteoblast differentiation marker) and osteocalcin (late osteoblast differentiation marker), whereas the ubiquitous GAPDH gene served as the normalizing control. The results of these experiments suggested that the onset of Col24a1 expression occurs sometime after Col1a2 and prior to osteocalcin gene activation (Fig. 1B).
Implicitly, they also identified ROS17/2.8 cells as the most suitable model in which to study the anatomy of the minimal Col24a1 promoter.

A Short Upstream Sequence Promotes Col24a1 Transcription in Osteoblasts-Cell transfection experiments were initially employed to delineate the shortest promoter sequence of Col24a1 capable of directing transcription in ROS17/2.8 cells. To this end, we engineered two distinct sets of LUC reporter gene constructs representative of the alternative promoters of Col24a1. The first set of Col24a1 promoter-LUC constructs shared the same 3′-end at position +509 and included both start sites of transcription, whereas the 3′-ends of the second set of Col24a1 promoter-LUC constructs were located at +80 and excluded the start site of transcript 1b (Fig. 2A). Both sets of promoter-LUC constructs included progressive 5′-deletions of the upstream Col24a1 sequence (Fig. 2A). Irrespective of the 3′-end of the promoter-LUC construct, cell transfection assays assigned maximal transcriptional activity to the region between nucleotides -144 and +80 (Fig. 2B). This promoter segment contains one of the three homology sequences of the COL24A1 and Col24a1 genes (Fig. 1A). Next, an electrophoretic mobility shift assay (EMSA) was employed to identify potential DNA-nuclear protein interactions within the -144 to +81 segment of the Col24a1 promoter. To this end, four overlapping probes (p1-p4) spanning from nucleotide -163 to nucleotide +116 were each incubated with ROS17/2.8 nuclear extracts (Fig. 3A). Specific band shifts were obtained only with the overlapping probes p2 and p3, which together cover the sequence between nucleotides -98 and +51 (Fig. 3A). To be precise, p2 yielded four retarded bands (b1-b4), whereas p3 yielded a single retarded band that appeared to co-migrate with band b2 of probe p2 (Fig. 3A). In support of this postulate, band b2 disappeared from the p2 EMSA pattern when competed with a molar excess of unlabeled probe p3; conversely, formation of the p3 retarded band was eliminated by competition with a molar excess of the p2 sequence (Fig. 3B). Taken together, these results mapped the b1, b3, and b4 binding sites between nucleotides -98 and -33, and the binding site of b2 between nucleotides -33 and -15 (Fig. 4). The functional contribution of the segment encompassing the b1-b4 binding sites was evaluated by cell transfection experiments using the -144/+509 LUC plasmid bearing internal deletions of the -98 to -33 sequence (p2D; b1,3,4 binding sites), the overlapping -33 to -15 sequence (p2/3D; b2 binding site), or both of them (p2,3D; b1-b4 binding sites) (Fig. 4A). Unlike elimination of the b2 binding site in the p2/3D construct, deletion of the upstream sequence that gives rise to retarded bands b1, b3, and b4 led to a drop in luciferase activity of construct p2D nearly equal to that of the -144/+509 LUC construct with the internal deletion of all binding sites (p2,3D) or of the shorter -52/+509 LUC plasmid (Fig. 4B). We therefore focused on the characterization of the factors binding to the sequence between -98 and -33, because this cis-acting element appears to drive most of Col24a1 promoter expression in ROS17/2.8 cells.

Binding of CREB-ATF and AP1 Complexes Is Required for Col24a1 Transcription-A series of shorter oligonucleotides encompassing the -98 to -33 sequence was used in the EMSA to compete in vitro binding to the p2 probe, in order to narrow down the site(s) of nuclear protein interaction within this cis-acting element.
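The mapping above is essentially interval bookkeeping over promoter coordinates. The sketch below encodes the binding-site positions stated in the text and checks which probe overlaps which site; the exact probe ends are an assumption inferred from the stated overlap (-33 to -15), and the helper names are illustrative.

```python
# Minimal sketch of the coordinate bookkeeping behind the mapping above.
# Binding-site intervals are taken from the text; probe ends are inferred
# from the stated p2/p3 overlap and are therefore approximate.
from typing import Dict, Tuple

Interval = Tuple[int, int]  # (start, end) in promoter coordinates

probes: Dict[str, Interval] = {
    "p2": (-98, -15),   # assumed ends; p2 and p3 overlap between -33 and -15
    "p3": (-33, +51),
}
sites: Dict[str, Interval] = {
    "b1,b3,b4": (-98, -33),  # mapped by the p2D deletion
    "b2":       (-33, -15),  # mapped by the p2/3D deletion
}

def overlaps(a: Interval, b: Interval) -> bool:
    """True if two closed intervals share at least one nucleotide."""
    return a[0] <= b[1] and b[0] <= a[1]

for probe, p_iv in probes.items():
    hits = [s for s, s_iv in sites.items() if overlaps(p_iv, s_iv)]
    print(f"{probe} {p_iv} overlaps: {', '.join(hits)}")
```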
The results clearly demonstrated that all p2 retarded complexes (b1, b3, and b4) were competed by the p2b oligonucleotide that extends from nucleotides -55 to -38 (Fig. 5A). Inspection of the -55 to -38 sequence identified within it the TGACGTCA sequence, which represents a perfect match of the cyclic AMP-responsive element (CRE), the CREB/ATF binding site (Fig. 5B) (36). Indeed, competition experiments using a molar excess of a mutated CRE oligonucleotide failed to interfere with formation of the b1, b3, and b4 complexes (Fig. 5B). The significance of the binding assays was corroborated further by cell transfection experiments that showed a loss of promoter activity in the -144/+509 LUC plasmids bearing the same p2b mutation (Fig. 4). Members of the CREB-ATF family of nuclear factors bind with high affinity to CRE, but AP1 dimers can also interact with CREB-ATF sites, depending on the composition of their flanking sequences (36). Antibodies specific for AP1 and CREB family members were therefore used in a screen to identify which of the possible protein combinations bind the p2b element of Col24a1. The results showed supershifts or binding interferences only with c-Jun, CREB1, ATF1, and ATF2 antisera (Fig. 6). Importantly, each of the antibodies was noted to affect formation of different retarded bands, implying that distinct combinations of AP1 and CREB-ATF proteins bind to the p2b element that directs Col24a1 transcription in ROS17/2.8 cells (Fig. 6, A and B). This last conclusion was corroborated by EMSAs in which different combinations of antibodies suggested that band b1 corresponds to c-Jun/ATF2 heterodimers, band b3 to CREB1 homodimers, and band b4 to CREB1/ATF1 heterodimers (Fig. 6, C and D). Expression vectors for c-Jun, ATF1, ATF2, or CREB1 were each cotransfected into ROS17/2.8 cells together with the wild-type or p2b mutant -144/+509 LUC plasmid to correlate DNA-protein binding with promoter function. These functional assays demonstrated that each of the four recombinantly expressed nuclear factors was capable of stimulating Col24a1 promoter activity in a dose-dependent manner and only when the integrity of the p2b sequence was preserved (Fig. 7A). Independent confirmation of the functional assays was obtained by a ChIP assay that documented in vivo occupancy by CREB1, ATF1, c-Jun, and ATF2 of the p2b element in the endogenous Col24a1 promoter (Fig. 7B). Consistent with the EMSA data, this in vivo assay also provided evidence for CREB-AP1 specificity by showing lack of JunD and ATF4 binding to the collagen promoter but not to the TIMP1 or osteocalcin promoters (Fig. 7B) (37, 38).

DISCUSSION

The present study demonstrated that Col24a1 is a marker of late osteoblast differentiation which is positively regulated by the binding of specific combinations of CREB-AP1 proteins to an upstream cis-acting element shared by the two alternative promoters of the gene. These findings advance knowledge of the transcriptional pathways that regulate formation of fibrillar collagen assemblies in the skeleton. They will also inform the characterization of genetic programs that may be negatively affected by the loss of collagen XXIV in the developing and adult mouse bone. The role of collagen I fibrils in bone physiology is well established, as is the contribution of the minor collagen V to guiding nucleation of collagen I fibrils in the skin and the eyes and, by extrapolation, in bone tissue (1-3, 6, 8, 9).
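As a compact summary of the band assignments established above (Figs. 5 and 6), the sketch below records the band-to-dimer mapping as a small lookup table. It is bookkeeping only: the compositions are those stated in the text, and band b2, which was mapped but not assigned a dimer, is deliberately left out.

```python
# Minimal sketch recording the band-to-dimer assignments stated above.
# Purely illustrative bookkeeping; b2 was mapped but not assigned a dimer.
EMSA_BANDS = {
    "b1": ("c-Jun", "ATF2"),    # c-Jun/ATF2 heterodimer
    "b3": ("CREB1", "CREB1"),   # CREB1 homodimer
    "b4": ("CREB1", "ATF1"),    # CREB1/ATF1 heterodimer
}

def factors_detected() -> set:
    """All nuclear factors implicated across the assigned bands."""
    return {f for dimer in EMSA_BANDS.values() for f in dimer}

print(sorted(factors_detected()))  # ['ATF1', 'ATF2', 'CREB1', 'c-Jun']
```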
The expression pattern of Col24a1 in the developing bone and the eye is at least consistent with a similar role of this macromolecule in collagen I fibrillogenesis, even though the lack of suitable antibodies has hampered experimental confirmation of this hypothesis. Our preliminary findings indicate that Col24a1 is inactive in ROS25 cells, which correspond to early differentiating osteoblasts, but is expressed in ROS17/2.8 cells, which represent late differentiating osteoblasts, and in calvarial osteoblasts, albeit at seemingly lower levels than in ROS17/2.8 cells. Within the limitations of this experimental system, these results nonetheless suggest that collagen XXIV is an integral part of the genetic program of osteoblast terminal differentiation (39). That collagen XXIV is expressed at a lower level in nonskeletal tissues, such as the brain and the eye, also suggests a potentially broader role in organogenesis (11).

Our study adds AP1 and CREB-ATF proteins to the list of transcription factors that are involved in the regulation of fibrillar collagen genes, particularly in osteoblastic cells. A large body of work has demonstrated the critical contributions of these two families of basic leucine zipper proteins to bone formation and remodeling (40-42). For example, genetic alterations in the functions of AP1 and related proteins have been shown to negatively affect osteoblast differentiation and function as well as bone development (40). Similarly, transgenic interference with CREB protein activity greatly impairs the normal process of endochondral bone formation (43). Members of the AP1 or CREB-ATF family of nuclear factors can form homodimeric or heterodimeric protein complexes, which transduce distinct signals and exert discrete transcriptional responses on various promoter targets (36). The heterogeneity in dimer composition is the main determinant of the functional diversification of AP1 and CREB-ATF complexes, which include dimers within and between selected members of each family of transcription factors (36, 40, 41). Obligatory combinations of CREB-ATF proteins recognize the octameric TGACGTCA element, as do heterodimers between ATFs and specific Jun and/or Fos proteins (36). In line with this last consideration, our antibody interference experiments indicate that CREB1, CREB1/ATF1 and c-Jun/ATF2 dimers can specifically bind in vitro to the evolutionarily retained TGACGTCA sequence (p2b element) of the Col24a1 and COL24A1 promoters. Further confirmation of this in vitro finding was provided by the ChIP assay, which showed that the endogenous p2b site of the rat gene is occupied in ROS17/2.8 cells by the CREB1, ATF1, ATF2, and c-Jun proteins. Consistent with this result, we showed that overexpression of each of these four nuclear proteins in osteoblasts stimulates transcription from the wild-type Col24a1 promoter, but not from the same promoter harboring four nucleotide substitutions in the CREB-ATF binding site. Although our experiments left unresolved whether and how the various CREB-AP1 complexes may compete for p2b binding, they clearly indicate that binding of these nuclear proteins stimulates transcription from the minimal Col24a1 promoter in osteoblasts.

FIGURE 4. Functional contribution of the b1,3,4 and b2 binding sites to minimal promoter activity. Luciferase activity of various mutant promoter sequences was evaluated in transiently transfected ROS17/2.8 cells in relationship to that of the wild-type -144 to +509 promoter construct, in which the relative positions of the b1,3,4 and b2 binding sites are shown.
Mutations include deletions of the sequence encompassing probes p2 and p3 (p2,3D plasmid), only probe p2 (p2D plasmid), only probe p3 (p3D plasmid), or the overlap between probes p2 and p3 (p2/3D plasmid), as well as single nucleotide substitutions in the nuclear protein binding sites of probe p2 (plasmid p2bmt; see also top of Fig. 5B). Bars indicate the S.E. of the means.

FIGURE 5. Mapping of the nuclear protein binding sites within the probe p2 sequence. A, EMSA using the p2 probe in the absence (-) or in the presence (+) of a 100-fold molar excess of unlabeled oligonucleotides p2a-c that cover the p2 sequence (nucleotide positions shown above the autoradiographs). B, top, the CREB-ATF binding site within the p2b sequence is highlighted, and the nucleotide substitutions (mt) are indicated (p2bmt; see also the relevant functional assay of Fig. 4); bottom, EMSAs showing formation of retarded complexes between the p2b probe and ROS17/2.8 nuclear extracts in the absence (-) or in the presence (+) of an increasing (10-100-fold) molar excess of unlabeled wild-type (wt) or mutant (mt) p2b oligonucleotides.

This conclusion is based on the absolute requirement of p2b element integrity for promoter activity and on the positive effect on promoter activity of each of the four nuclear proteins overexpressed in ROS17/2.8 cells. The expression profiles of the proteins that bind the p2b element during osteoblast differentiation in vitro are also consistent with the time of Col24a1 onset estimated by the present study. Jun proteins are in fact highly expressed in differentiating osteoblasts prior to matrix production and mineralization, and phosphorylated CREB reaches its highest level in the early mineralization stage (44, 45). Irrespective of whether or not these findings are functionally correlated, the characterization of the Col24a1 promoter further supports the emerging notion that distinct regulatory pathways coordinate expression of different fibrillar collagen genes in bone tissue. Transgenic studies have in fact indicated that cooperation between an Sp1 binding site in the proximal promoter and uncharacterized complex(es) interacting with a far upstream enhancer directs bone-specific expression of the human COL1A2 gene (46). The same kind of approach has identified positive osteoblast-specific elements in the upstream promoter of the mouse and rat Col1a1 genes, as well as a δEF1/ZEB1 binding site further upstream that represses transcription of the mouse promoter in osteoblasts (47-49). Work in progress is extending the present work to the characterization of possible interactions of the CREB-AP1 complexes with other nuclear proteins, in addition to evaluating these in vitro findings within the physiological context of the transgenic mouse model.
“Optimization of Mudaraba Sharia bank finance through agency theory perspective”

This study aims to analyze the implementation of mudaraba financing at Sharia banks, to consider the relationship between a principal and an agent in mudaraba financing at Sharia banks, and to explore efforts to optimize the implementation of mudaraba financing at Sharia banks. This research was conducted at the Bank Muamalat Ternate Branch. The study used a qualitative method with a single case study approach. The analysis used is the interactive model developed by Miles and Huberman. The results of the research are presented in the sections below.

INTRODUCTION

The sharia financial industry in Indonesia began in 1992 with the establishment of Bank Muamalat Indonesia. Sharia banking in Indonesia has grown quite rapidly after the government improved banking regulations. Law No. 7 of 1992 was amended by Law No. 10 of 1998 concerning the banking system. It was further improved by the issuance of Law No. 21 of 2008 on sharia banking, which encouraged the growth of sharia banking in the country (BI, 2013).

The growth of sharia banking, however, does not reflect the basic philosophy of sharia banks. The community recognizes sharia banks as profit-sharing banks. A sharia bank operates using the principle of profit sharing as its core product, both in fundraising and in channeling funds (Antonio, 2001; Muhammad, 2010; Tarsidin, 2010; Ascarya & Yumanita, 2007). The principle of profit sharing is known as profit and loss sharing (PLS). One of the agreements under the PLS principle is mudaraba contract financing. Mudaraba contract financing is a form of cooperation contract in which the capital owner (shahibul maal) provides money to an entrepreneur (mudarib) to run a business, with the profit shared between them. The capital owner (shahibul maal) is called the principal, and the owner of expertise/management (mudarib) is called the agent (Muhammad, 2010). Jensen and Meckling (1976) declared in their agency theory that, in a contract between a principal as capital owner (shareholder) and an agent as the management that operates the company, the principal will delegate decision-making authority to the agent in order for the contract to run smoothly.

LITERATURE REVIEW

The agency problem arises when the actions of one of the parties are not in line with the interests or the purposes of the other party. The agency problem occurs because of information asymmetry between the two parties (principal and agent). Asymmetric information is a condition under which the agent knows more about the completion of a task and thus has an informational advantage over the principal.

Asymmetric information can arise due to moral hazard and adverse selection. Moral hazard is a decision or action of the agent that emphasizes the agent's own satisfaction and ignores the principal's interests or satisfaction. Adverse selection is the inability of the principal to know the agent's real characteristics (Manzilati, 2011). Sadr and Iqbal (2000) state that a mudaraba contract is a financial contract that is loaded with asymmetric information. Warde (1999) and Karim (2000) note that adverse selection and moral hazard are obstacles in mudaraba financing. Khalil, Rickwood, and Muride (2000) mention that there is a risk caused by adverse selection and moral hazard. The risk of adverse selection and moral hazard is an agency problem in mudaraba financing.
This phenomenon occurs in Indonesian non-profit-sharing financing products, which dominate sharia bank financing, especially products based on the murabaha buying-and-selling principle in fund channeling. Warde (1999) explains that, in fact, sharia banks are eager to develop financing products based on profit sharing, but societal conditions have not provided the desired climate. Mudaraba financing has many problems. Saeed (1996) explains that there are several issues behind the low implementation of mudaraba financing in sharia banks: moral standards, the ineffectiveness of the profit-sharing model in relation to entrepreneurs, high cost, technical aspects including the lack of professional human resources to handle mudaraba financing, a less attractive profit-sharing system in business activity, and efficiency issues.

This study examines the role of monitoring and the importance of a consultant in overcoming the agency problem of mudaraba financing in sharia banks. The role of monitoring is emphasized in the Fatwa of DSN MUI No. 7/2000, which indicates that, in mudaraba financing, the shahibul maal should not interfere in managing the business but possesses the right to conduct full monitoring of the mudarib's business. Monitoring is optimally conducted by the principal/shahibul maal either before the contract is signed or during the contract. Therefore, the shahibul maal needs to make optimal use of the right of supervision to avoid the occurrence of the two main barriers (adverse selection and moral hazard) in mudaraba financing. Moreover, a consultant in mudaraba financing acts as the connector between the principal/shahibul maal and the agent/mudarib. The consultant has an active and formal role and is formally involved in the mudaraba financing, but only gives considerations and advice to the shahibul maal and the mudarib as the main actors in the mudaraba financing.

In addition to maximizing supervision (monitoring), Islamic banks need to implement an incentive system in mudaraba financing to ensure that the financing is incentive compatible. The incentive system for mudaraba financing could be implemented through the optimization of profit sharing between the shahibul maal and the mudarib (Tarsidin, 2010). Incentives given to management as managers can reduce the occurrence of agent moral hazard (Weston & Brigham, 1994; Jensen & Meckling, 1976).

METHODS

This research uses a qualitative method with a case study approach. Case study research examines a contemporary phenomenon as a whole, and thoroughly, in real conditions, using various data sources. Case study research is grouped into explanatory, exploratory, and descriptive case studies. The type of case study research in this article is an explanatory case study, aimed at answering the hows and the whys of the case. Through such questions, the substantial elements contained in the case studied can be explored in depth (Yin, 2009).

This study offers a change in the process of managing an object to complement, refine, or craft new theories, concepts, or mechanisms for object management. The changes to the process are the innovative efforts required to encourage mudaraba financing at sharia banks (Stake, 2005; Creswell, 2009; Yin, 2009).
To explore experiences and views on mudaraba financing, informants are required to have experienced the process and to understand the nature of mudaraba financing in sharia banks. The selection of informants was based on their involvement and capability concerning the subject. Informants in this study consisted of informants from the shahibul maal, namely two senior staff of the Bank Muamalat Ternate branch (the head of the branch and a marketing manager), and cooperative mudarib informants represented by the cooperative management, consisting of four people: a chairman, two secretaries, and a treasurer. In addition, data were also obtained from an expert informant on sharia banking.

Data in this research are primary data and secondary data. Primary data are those obtained from informants through interviews. Interviews are conversations with a specific purpose. The conversation was conducted by two parties, namely the interviewer, who asked questions, and the interviewees, who provided answers to the questions. This study uses an unstandardized interview type. Secondary data are those obtained from institutions, or documents relating to the completeness of the primary data, through documentation study techniques.

This study uses interactive data analysis techniques similar to the model developed by Miles and Huberman (1992). Data analysis takes place simultaneously with the data collection process, in the following stages: data collection, data reduction, data display, and drawing conclusions or verification.

Qualitative research is valid if it meets the required trustworthiness criteria. To determine the validity of the data, inspection techniques are required. The implementation of inspection techniques is based on a number of specific criteria. Moleong (2009) suggests that four criteria are used: credibility, transferability, dependability, and confirmability.

An implementation of Mudaraba financing in Sharia banks

Saeed (1996) explains that Islamic banking theorists expect Islamic banks' investment activities to be based on two legal concepts, namely mudaraba and musharaka, commonly known as profit and loss sharing (PLS). These theorists argue that Islamic banks should provide their vast fund resources on the principle of risk sharing, unlike interest-based financing, where the borrower bears all risks. In practice, however, Islamic banks have generally come to realize that the PLS system, especially mudaraba financing as theorists have imagined it, cannot be widely used in Islamic banking due to the potential risks to the bank.

As is common for cooperation contracts, a mudaraba must comply with the terms and conditions of the mudaraba contract. The mechanism of mudaraba financing is such that the mudarib gives authority to the shahibul maal to decide the implementation process of the mudaraba contract.

After financing is given, supervision and monitoring of the financing are done intensively and continuously. Supervision of the customer's business performance is conducted in a continuous and strict manner. This activity is carried out by applying requirements to be fulfilled by the customer and agreed upon in the agreement made, such as requiring the client to make reports containing information on the condition of the business managed by the customer, including the delivery of financial statements, the income statement, and inventory reports of goods existing or used in the business process.
In solving financing issues, the bank takes various persuasive actions in accordance with the level of the problems that occur, by evaluating the conditions of the problems and whether the management and financial factors are still salvageable.

Principal-agent relationship in Mudaraba financing in Sharia banks

Jensen and Meckling (1976) explain in their agency theory that the structure of ownership affects the behavior of individuals within a company. The company is a legal fiction that serves as a nexus of contractual relationships among individuals. The contractual relationship between individuals is an agency relationship, a contract mechanism between the principal and the agents. These contractual relationships arise when the principal asks the agent to act on behalf of the principal.

The characteristics of the agency relationship as described by Jensen and Meckling (1976) above are similar to the shahibul maal and mudarib partnership in mudaraba financing at a sharia financial institution (sharia bank). Muhammad (2010) explains that the principal's relationship with the agent in sharia bank financing can be realized in the form of the mudaraba financing contract, namely the contract between the capital owner (bank/shahibul maal/principal) and the business actor (customer/mudarib/agent). The mudaraba financing contract is a contract that accounts for the profit and loss between the capital owner (principal/shahibul maal) and the business manager (agent/mudarib). Thus, in such a contractual relationship, there must be mutual disclosure between the two parties (principal and agent) regarding the profit and loss of the business run.

Why does asymmetric information occur in mudaraba contracts? Sadr and Iqbal (2000) explained that when Islamic banks develop an investment contract, whether an investment in a mudaraba contract or a musharaka contract, the sharia bank will be faced with a project in a state of asymmetric information, in which the contract carries a high level of adverse selection and moral hazard.

In mudaraba financing, the business is the common undertaking of the capital owner (shahibul maal) and the business manager (mudarib). However, the right of ownership of the mudaraba capital still belongs to the shahibul maal, while the profits generated by the mudaraba cooperative business belong to both parties. The division is based on a mutually agreed profit-sharing ratio. Therefore, the shahibul maal, to protect the capital, must be able to set rules or requirements, thereby reducing the chance for the mudarib to perform harmful actions. This is in line with the hadith of the Prophet SAW:

There are two forms of asymmetric information, namely hidden action and hidden information. Hidden action brings moral hazard, and hidden information brings adverse selection. As to moral hazard and adverse selection issues, Sadr and Iqbal (2000) explain: adverse selection occurs in a debt contract when the borrower has an unfavorable quality of financing beyond the limits of certain profit-level conditions, and moral hazard is connected to conducting irregular activities or posing a greater risk in the contract. Bashir (1990) explains that when the production process begins in mudaraba contracts, the agent shows good ethics toward the actions to be agreed upon. After some time has passed, the mudarib conducts irregular activities, namely actions that cannot be observed (moral hazard), and business ethics that cannot be known by the capital owner (adverse selection).
Asymmetric information is the main problem encountered in financing with a profit-sharing system, including mudaraba financing. Asymmetric information comprises adverse selection and moral hazard. It arises because one of the parties has information that the other party does not. In this case, the mudarib/agent has private information about his or her own type and characteristics, while the principal/shahibul maal does not know this private information, both because of its character and for technical reasons: acquiring the information requires a big budget and is therefore not efficient for the shahibul maal/principal. One of the forms of asymmetric information, as explained above, is moral hazard within the parties who have made a cooperation contract such as mudaraba financing. The moral hazard problem refers to the private information on the effort made by the mudarib outside the shahibul maal/principal's observation, and the level of utility required by the mudarib in the mudaraba financing contract is a problem that appears concerning whether the mudarib/agent uses the financing received as promised.

Optimizing the implementation of Mudaraba financing in Sharia banks

The understanding of human resources (SDI) of mudaraba financing is still low, not to mention that the varied businesses of customers require specific knowledge and expertise to analyze, in order to assess the business opportunity or the business proposed by the prospective customer. Due to these limited human resources, banks need strategic and management experts to evaluate the project or business more seriously. This has the consequence of increasing the costs borne by the bank.

On the other hand, customers are unknowledgeable about the mudaraba financing products with profit sharing offered by sharia banks, especially about how to calculate the profit sharing. Saeed (1996) explains that entrepreneurs and industrialists do not accurately know the cost of funding based on the profit and loss sharing system. This condition makes entrepreneurs uninterested in mudaraba financing with a profit-sharing system. Sumiyanto (2005) explains that businessmen are less interested in using mudaraba financing. The low utilization of profit-sharing financing, especially mudaraba financing, encourages various parties, namely sharia banking practitioners, academics, researchers, and others, to make various efforts, among others through research.

Given the various findings on the factors behind the low application of mudaraba financing in sharia banking, experts and researchers of sharia banking have provided some recommendations for improvement. In terms of reducing the occurrence of asymmetric information in mudaraba financing, Saeed (1996) points out the need to establish an incentive structure. Tarsidin (2010) explains that, for profit-sharing (mudaraba) financing, there needs to be a fair profit-sharing scheme between the capital owner (shahibul maal) and the business manager (mudarib). Muhammad (2010) argued that, to minimize the agency problem (asymmetric information) in mudaraba financing, the shahibul maal/principal must be able to supervise the mudarib and the activities conducted. Furthermore, Maharani (2008) suggests that, to reduce the agency problem in mudaraba financing, one prospective method is to apply mudaraba muqayyadah financing.
From the mudarib's perspective, there is a weakness in understanding the provisions of mudaraba financing. Due to this weakness, the mudarib is less interested in pursuing mudaraba financing. This is in line with Sumiyanto's (2005) findings, which state that there is low interest among company managers in utilizing mudaraba financing.

Given this phenomenon, the shahibul maal takes strong precautionary action against asymmetric information, leading the shahibul maal to risk-aversion behavior; consequently, the shahibul maal will not dare to channel mudaraba financing, which is risky. In addition, the lack of knowledge and understanding of the shahibul maal of the various business variations of mudarib candidates makes the shahibul maal unable to analyze a potential mudarib business.

On the other hand, the mudarib has a weakness in knowledge of mudaraba financing, so the mudarib is not interested in using the financing. This condition causes mudaraba financing to stagnate; it cannot develop as desired by the community, even though sharia banks are meant to be profit-sharing banks, characterized by profit-sharing systems in their products, both funding and financing.

The scheme of mudaraba financing involves two parties that enter into a cooperation contract in doing business. Those parties are the shahibul maal and the mudarib (see Figure 1), and the steps of the scheme are as follows:

2) The sharia bank provides 100% of the funds (capital) to finance the project.
3) The customer does not contribute any funds at all but, as the project manager, possesses the skills to manage the project/business financed by the sharia bank.
4) Project/business management is conducted by the customer (mudarib); the sharia bank (shahibul maal) does not intervene in the management of the project/business.
5) After the business is executed and income is obtained, the result of the business is divided based on the ratio (in percentage: X% for the shahibul maal and Y% for the mudarib) agreed at the beginning of the contract; this split is illustrated in the sketch at the end of this subsection. The higher the income earned by the mudarib, the greater the income earned by both the shahibul maal (sharia bank) and the mudarib (customer).
6) At the end of the mudaraba financing period, the customer returns the capital provided by the shahibul maal (sharia bank) as a whole (100%) or in accordance with the capital situation at the end of the contract period.

The mudaraba financing scheme shown in Figure 1, when observed at the research locus, shows that both the shahibul maal and the mudarib have weaknesses causing the stagnation of mudaraba financing. The weakness on the side of the shahibul maal is the implementation of risk aversion as a form of prudential banking to avoid asymmetric information. This selective nature causes difficulty in obtaining mudaribs, as does the limited knowledge and skill of the shahibul maal (BMI Ternate) regarding various projects/businesses; hence, they cannot analyze a potential project/business, which discourages them from financing it. This condition requires a breakthrough to ensure that mudaraba financing can be applied in sharia banking. In this context, the researchers emphasize the importance of a third party acting as a consultant/advisor to overcome the weaknesses of both parties (the sharia bank and the customer) in the mudaraba financing cooperation contract.

Adding a third-party variable to the mudaraba financing scheme follows the considerations of experts on the mudaraba scheme, such as Algaout and Lewis (2003). They stated that one of the problems behind the low level of profit-sharing financing is caused by regulatory aspects, namely the absence of supporting institutions to encourage the use of profit sharing.
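To make step 5 of the scheme above concrete, the sketch below computes the division of realized income under an agreed ratio and the capital returned at the end of the contract. All figures and names are hypothetical; the logic simply restates steps 2, 5, and 6.

```python
# Minimal sketch of the mudaraba cash flows in steps 2, 5, and 6 above.
# All figures and names are hypothetical; the split follows the agreed ratio.

def mudaraba_split(income: float, bank_share: float) -> tuple[float, float]:
    """Divide realized business income between shahibul maal (bank) and mudarib.

    bank_share is the agreed X% for the shahibul maal, as a fraction;
    the mudarib receives the remaining Y% = 1 - X%.
    """
    assert 0.0 <= bank_share <= 1.0
    bank_income = income * bank_share
    mudarib_income = income - bank_income
    return bank_income, mudarib_income

capital = 100_000_000      # step 2: 100% of capital provided by the bank (IDR)
income = 30_000_000        # realized income of the financed business
x = 0.40                   # agreed ratio: 40% shahibul maal / 60% mudarib

bank_income, mudarib_income = mudaraba_split(income, x)
print(f"shahibul maal receives: {bank_income:,.0f}")
print(f"mudarib receives:       {mudarib_income:,.0f}")
print(f"capital returned at end of contract: {capital:,.0f}")  # step 6
```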
In the sharia context, the role of a consultant in mudaraba financing can be seen in the events involving Amirul Mukminin Umar bin Khattab r.a., recounted below. Let us take a closer look at the problematic mudaraba financing, especially that occurring at the research locus, which until now has not redistributed mudaraba financing. Based on the objective conditions at the research locus as described above, it is necessary to improve the management of mudaraba financing. Governance is improved by adding a third-party element or variable (a consultant) to the mudaraba financing scheme. The consultant acts as the party giving considerations to both parties (shahibul maal and mudarib). The existence of a consultant in mudaraba financing is allowed by three madhhab imams/religious leaders (Hanafi, Maliki, and Shafi'i), as long as the third party is a worker of the shahibul maal (Muhammad, 2010). The proposed mudaraba financing scheme is presented in Figure 2, with the following steps:

2) The sharia bank provides 100% of the funds (capital) to finance the project/business.
3) The customer does not contribute any funds at all but, as the project manager, has the skills to manage the project/business financed by the sharia bank.
4) Project/business management is conducted by the customer (mudarib); the sharia bank (shahibul maal) does not intervene in the management of the project/business.
5) After the business is executed and income is realized, the result of the business is divided based on the ratio (in percentage: X% for the shahibul maal and Y% for the mudarib) agreed at the beginning of the contract. The higher the income earned by the mudarib, the greater the income earned by both the shahibul maal (sharia bank) and the mudarib (customer).
6) At the end of the mudaraba financing period, the customer returns the capital provided by the shahibul maal (sharia bank) as a whole (100%) or in accordance with the capital situation at the end of the contract period.
7) The consultant (external) gives considerations to the shahibul maal related to the characteristics of the business to be financed, and to the mudarib related to the operational provisions of mudaraba financing.

The placement of consultants in the mudaraba financing scheme was made so that they actively participate in providing consultations or advice to both parties (shahibul maal and mudarib) to ensure that mudaraba financing is carried out effectively and efficiently. The placement of this consultant is a response to the signals of sharia banking experts about the low implementation of profit-sharing (mudaraba) financing in sharia banks from the regulatory aspect, namely that there is no supporting institution to encourage profit sharing (Ascarya & Yumanita, 2007). The researchers regard such a supporting institution as a consultant who must play an active role in conducting mudaraba financing properly.

In order for mudaraba financing to become the main product in sharia banks, it is not enough to rely on the efforts of merely the two parties in mudaraba financing (shahibul maal and mudarib). As long as this remains the case, the growth of mudaraba financing in sharia banking will ultimately decline. This condition is happening not only in Indonesia but also worldwide (Saeed, 1996). This phenomenon should be a common concern of Muslims, working together to encourage mudaraba financing to become a core business in sharia banking.
Cooperation is highly recommended in Islam, where it is known as the "at-ta'awun" principle of helping each other and working together among members of society for good. This is affirmed by Allah SWT in the Qur'an (QS. 5:2): "Aid others in conducting good and piety deeds, do not provide aid in sin and transgression." (QS. 5:2)

The contextual meaning of the above verse, if associated with mudaraba financing as one part of the development of the Islamic economy, should be a common concern. Mettwally (1995) in Arifin (2006) asserted that one of the economic principles in Islam is "cooperation", which is the main driver of the Islamic economy. In order to improve the Islamic economy, especially sharia banking, mudaraba financing ought to be the core business. This is done to ensure that the growth and development of Islamic banking have a broad impact on economic growth and development, and especially to be one indicator of the growth and development of the Islamic economy.

The consultant in the proposed mudaraba financing scheme is expected to be part of the efforts to develop the Islamic economy, especially in sharia banking as one of the sharia financial institutions. The consultant redesigns the mudaraba financing scheme to strengthen the cooperative relationship between the shahibul maal and the mudarib in implementing mudaraba financing contracts. In addition, the role of the consultant is to reduce the agency problem in mudaraba financing.

The existence of a consultant in the redesigned mudaraba financing scheme adds to the costs incurred. Mudaraba financing is based on the principle of profit sharing. Profit sharing is expressed in terms of a ratio (percentage) of the profit earned. Asiyah (2015) explains that the financing sharing ratio is determined by considering two main points.

Figure 2. Proposed Mudaraba financing scheme.

One reason for the lack of interest of sharia banks in channeling profit-sharing financing, as explained by Sumiyanto (2005), is the high risk of asymmetric information in profit-sharing financing, especially mudaraba financing. The high prudence of sharia banks toward the emergence of asymmetric information, in the forms of adverse selection and moral hazard, causes profit-sharing financing, especially mudaraba financing, to remain low in the funds channeled by Islamic banks.

Sayyid Sabiq (1987) recounts the story of Umar's sons Abdullah and Ubaidillah. The incident, as mentioned by Al Hafiz Ibn Hajar in Sayyid Sabiq (1987):

It is narrated that Abdullah and Ubaidillah, the sons of Umar bin Al Khaththab r.a., marched out with Iraqi forces. When they returned, they stopped at the subordinate of Umar bin Khattab r.a., namely Abu Musa Al Ash'ari (Governor of Basrah), who received them with pleasure and said: "Had I been able to help you, I would do it." Then he said: "Actually, this is part of God's treasure that I want to send to Amirulmukminin. I lend it to you to buy things that are in Iraq, then you sell them in Medina. You return the principal capital to Amirulmukminin, so you get the benefit." Both of them (Abdullah and Ubaidillah) then said: "We are happy to do it," and so Abu Musa did it, and wrote a letter to Umar that he had lent the treasure to both. Once they arrived, they sold the goods and earned a profit. Umar then said: "Are all the troops lent…"

The role of the two figures is interpreted, in the current context, as that of a consultant who plays an active role in giving considerations to ensure that both parties conduct the mudaraba contract cooperation.
DEHP Impairs Zebrafish Reproduction by Affecting Critical Factors in Oogenesis

Public concern about the distribution of phthalates in the environment has been increasing, since they can cause liver cancer and structural abnormalities and reduce sperm counts in the male reproductive system. However, few data are actually available on the effects of di-(2-ethylhexyl)-phthalate (DEHP) on the female reproductive system. The aim of this study was to assess the impacts of DEHP on zebrafish oogenesis and embryo production. Female Danio rerio were exposed to environmentally relevant doses of DEHP, and a significant decrease in ovulation and embryo production was observed. The effects of DEHP on several key regulators of oocyte maturation and ovulation, including bone morphogenetic protein-15 (BMP15), luteinizing hormone receptor (LHR), membrane progesterone receptors (mPRs) and cyclooxygenase (COX)-2 (ptgs2), were determined by real-time PCR. The expressions of BMP15 and mPR proteins were further determined by Western analyses to strengthen the molecular findings. Moreover, plasma vitellogenin (vtg) titers were assayed by an ELISA procedure to determine the estrogenic effects of DEHP and its effects on oocyte growth. A significant reduction of fecundity in fish exposed to DEHP was observed. The reduced reproductive capacity was associated with an increase in ovarian BMP15 levels. This rise, in turn, was concomitant with a significant reduction in LHR and mPRβ levels. Finally, ptgs2 expression, the final trigger of ovulation, was also decreased by DEHP. By an in vitro maturation assay, the inhibitory effect of DEHP on germinal vesicle breakdown was further confirmed. In conclusion, DEHP, by affecting signals involved in oocyte growth (vtg), maturation (BMP15, LHR, mPRs) and ovulation (ptgs2), deeply impairs ovarian functions, with serious consequences for embryo production. Since there is a significant genetic similarity between D. rerio and humans, the harmful effects observed at the oocyte level may be relevant for further molecular studies on humans.

Introduction

Endocrine disruptors (EDs) are able to disrupt the activity of the endocrine system and therefore modulate the metabolic activity of organs, tissues, cells and target structures [1]. Many EDs can interact with estrogen or androgen receptors and thus act as agonists or antagonists of endogenous hormones. Increasing evidence shows that EDs may also modulate the activities/expressions of steroidogenic enzymes [2]. Recent screening studies carried out in industrialized countries to detect contaminants in human urine samples have revealed the population's ubiquitous exposure to various plasticizers belonging to the group of phthalates (esters of phthalic acid). Di-(2-ethylhexyl)-phthalate (DEHP) is the most commonly used plasticizer in PVC formulation for a wide variety of applications including medical devices, construction products, clothing and car products. DEHP is also used in non-polymer materials such as lacquers and paints, adhesives, fillers, printing inks and cosmetics [3]. As a result, DEHP has been found everywhere in the environment, and is universally considered to be a ubiquitous environmental contaminant [4]. In the last 3 years, the number of studies reporting a relationship between exposure to environmental phthalic acid esters (PAEs) and human health has rapidly increased.
These studies suggest possible associations between environmental exposure to PAEs and adverse effects on human reproduction and health [5-7], similar to those already described for rats dosed during gestation and/or lactation with phthalates [8]. In addition, studies have shown that exposure of pregnant laboratory animals to high doses of DEHP led to effects similar to those caused by antiandrogens [9,10].

Follicle development, oocyte maturation and ovulation in fish are controlled by hormones, including the follicle-stimulating hormone (FSH) and the luteinizing hormone (LH), as well as growth factors and hormones produced by the ovary [11]. The bone morphogenetic protein-15 (BMP15), a member of the transforming growth factor β (TGFβ) superfamily, has recently been demonstrated to prevent precocious oocyte maturation [12,13] by inhibiting the expression of the LH receptor (LHR) and membrane progestin receptors (mPRs) [14,15], which are known to have a pivotal role in the final steps of maturation [16]. By preventing small follicles from undergoing maturation, BMP15 may be important in maintaining oocyte quality and subsequent ovulation, fertilization, and embryo development [17]. The ovulatory process, the last step of oogenesis, which ultimately leads to the rupture of the follicle wall and the release of oocytes, involves a complex series of biochemical and biophysical events. The pre-ovulatory surge of LH triggers a marked and obligatory increase in follicular prostaglandin synthesis prior to ovulation, and the cyclooxygenase (COX) enzyme is a key rate-limiting step in the biosynthesis of prostaglandins. It in fact catalyzes the conversion of arachidonic acid to prostaglandin H2, which is involved in the ovulation process [18]. Recently, the zebrafish and human genomes have been shown to share extensive conserved syntenic fragments, and many zebrafish genes and their human homologs display structural and functional similarities. These results, in addition to providing sound elements for environmental risk assessment, can be considered a starting point for further molecular studies on humans [19,20]. While the effects of DEHP on humans and mammalian species have largely been investigated, especially in males, few data are available on its effects on the reproduction of aquatic organisms such as fish, important sentinels of environmental quality. Considering that the Environmental Protection Agency (EPA) has established a DEHP safety concentration limit in drinking water of 6 ppb (µg/l), in this study the impact of relevant environmental concentrations [21], ranging from 0.02 to 40 µg/l, on zebrafish oocyte maturation, ovulation and fecundity was analyzed. The molecular mechanisms of the adverse effects of DEHP were also investigated.

Experimental design

Adult Danio rerio (zebrafish) females were purchased from a commercial dealer (Acquario di Bologna, BO, IT). They were kept in aquaria at 28°C in oxygenated water. Fish were fed twice daily with commercial food (Vipagran, Sera, Germany) and two more times with Artemia salina. Eggs laid by the parental fish were kept and grown. Six-month-old adult zebrafish were used for the toxicological studies. Females were exposed for three weeks, in semi-static conditions, to nominal 0.02, 0.2, 2, 20 and 40 µg/l concentrations of DEHP. In order to evaluate DEHP estrogenic potency, one group was exposed to the positive control, 17α-ethynylestradiol (EE2, 25 ng/l).
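Since the doses above are quoted in both ppb and µg/l, a quick sanity check of the unit equivalence, and of where each nominal dose sits relative to the EPA's 6 ppb drinking-water limit, may be useful; the sketch below is illustrative only (1 ppb in water corresponds to 1 µg/l, assuming a water density of 1 kg/l).

```python
# Minimal sketch relating the nominal DEHP doses above to the EPA limit.
# 1 ppb in water corresponds to 1 ug/l (assuming water density of 1 kg/l).
EPA_LIMIT_UG_PER_L = 6.0                       # 6 ppb drinking-water limit
doses_ug_per_l = [0.02, 0.2, 2.0, 20.0, 40.0]  # nominal exposure series

for dose in doses_ug_per_l:
    ratio = dose / EPA_LIMIT_UG_PER_L
    flag = "below" if dose < EPA_LIMIT_UG_PER_L else "above"
    print(f"{dose:>6.2f} ug/l = {dose:>6.2f} ppb ({ratio:.3f}x the limit, {flag})")
```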
To investigate the potential effect of the solvent, the vehicle control (EtOH) was used as the control for all experimental groups. For each concentration, the treatment was performed in three different tanks (30 fish each), at a mean density of 1 fish/l, under a constant day/night photoperiod (12L/12D).

Reproductive performance

At the end of the three weeks' exposure, females (n = 10) from each experimental group were transferred to spawning tanks containing non-contaminated water together with non-treated zebrafish males (ratio: 10 females/7 males). Fecundity, defined as the daily number of fertilized eggs (embryos), was determined for the next 14 days. For each pollutant concentration, three spawning tanks were set up.

Percent of follicles at each stage of development

Following exposure, adult females (n = 5) from each experimental group were sacrificed and the stages of the oocyte follicles were determined. The ovary of zebrafish is in fact asynchronous, and oocytes at different stages of development are simultaneously present [22]. The oocytes were divided into three different groups according to their sizes: previtellogenic (0.15-0.34 mm Ø), vitellogenic (0.35-0.69 mm Ø) and postvitellogenic (0.70-0.75 mm Ø). Follicles were manually isolated using micro tweezers under a microscope equipped with a micrometric scale in the objective. Each follicle stage was expressed as a percent of the total number of follicles from both ovaries of each female used.

Enzyme-linked immunosorbent assay (ELISA)

Rabbit anti-zebrafish vtg polyclonal antibody was purchased from Biosense Laboratories AS (Thormøhlensgt. 55, N-5008 Bergen, Norway). The assay to determine vtg concentration was performed on the plasma of 5 fish. Standard curves were obtained by adding increasing doses of vtg from 10 to 1280 ng [23]. A reliable calibration curve enables the antigen titer (vtg) to be measured in all culture media.

Western blot analysis

Ovary homogenates were prepared, electrophoresed, and transferred to a PVDF membrane. The membranes were then probed with anti-BMP15, anti-mPRα and anti-mPRβ antisera as previously reported [25]. Data were normalized against β-tubulin protein levels. Anti-β-tubulin antibody (1 µg/ml) (GeneTex, Inc.) was used to normalize the sample loading. The antibody reaction was visualized with a chemiluminescent reagent for Western blot. The densitometric analysis was performed with ImageJ software for Windows.

Follicle in vitro maturation

Maturation assays were performed as previously described [13]. Briefly, gravid female zebrafish were anesthetized using 3-aminobenzoic acid ethyl ester (Sigma-Aldrich Canada Inc., Oakville, ON, Canada) and decapitated. The ovaries were removed; follicles were staged according to their size, and stage IIIB oocytes were collected. These oocytes were then pre-incubated for 4 hrs with 10 nM or 100 nM DEHP before the addition of 17,20β-dihydroxy-4-pregnen-3-one (17,20βP) (100 ng/ml, Sigma-Aldrich, Milan, Italy). Additional groups were incubated with DEHP and 17,20βP simultaneously.

Figure 2 (legend). Stage IIIB follicles were collected from female zebrafish and were pre-exposed to DEHP for 4 hrs before addition of 17,20βP, or simultaneously exposed to DEHP and 17,20βP. Data represent mean ± SD of three replicate wells from one experiment. The experiment was repeated three times with similar results. Different letters indicate statistically significant differences compared to the control group (p<0.05). doi:10.1371/journal.pone.0010201.g002
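The vtg titers reported below are read off the ELISA standard curve built from the 10-1280 ng standards described above. A minimal sketch of that interpolation follows; the absorbance values are hypothetical, and a simple log-linear fit stands in for whatever curve model was actually used.

```python
# Minimal sketch of reading vtg titers off an ELISA standard curve.
# Absorbances are hypothetical; a log-linear fit stands in for the real model.
import numpy as np

std_ng = np.array([10, 20, 40, 80, 160, 320, 640, 1280])  # doubling series from the text
std_abs = np.array([0.09, 0.16, 0.28, 0.47, 0.78, 1.15, 1.52, 1.83])  # hypothetical ODs

# Fit absorbance as a linear function of log10(dose).
slope, intercept = np.polyfit(np.log10(std_ng), std_abs, 1)

def vtg_ng(absorbance: float) -> float:
    """Invert the fitted standard curve to estimate vtg mass in a sample."""
    return 10 ** ((absorbance - intercept) / slope)

for od in (0.35, 0.90):  # hypothetical plasma sample readings
    print(f"OD {od:.2f} -> {vtg_ng(od):7.1f} ng vtg")
```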
The rate of maturation, indicated by germinal vesicle breakdown (GVBD), was scored 12 hrs after incubation. Each treatment was conducted with approximately 20 follicles per well, and all experiments were carried out at least three times.

Data Analysis

Statistical analysis was performed with GraphPad Prism version 5.00 for Windows (GraphPad Software, San Diego, California, USA). The normal distribution of every variable analysed was checked by the Kolmogorov-Smirnov test. Data for dose-response studies were analyzed for statistical significance by one-way ANOVA, and Bonferroni's multiple comparison tests were used to determine differences among groups. Significance was set at p<0.05.

Results

Effects of DEHP exposure in female zebrafish on oocyte growth and maturation

Treatment with EE2 or the 2 µg/l DEHP dose led to a significant increase in the number of vitellogenic oocytes. This increase was associated with a significant decrease in pre-vitellogenic oocytes in the same experimental groups; EE2 and 2 µg/l DEHP shifted pre-vitellogenic oocytes towards the vitellogenic stage. Interestingly, no post-vitellogenic oocytes were found in females exposed to EE2, 20 or 40 µg/l DEHP (Figure 1A). In all treated groups the GSI (gonadosomatic index) became higher, although significantly so only in the EE2 group (data not shown). A significant increase in vtg levels in the plasma of treated females was observed, with the highest induction found with 40 µg/l DEHP, clearly showing the estrogenic activity of DEHP (Figure 1B). After three weeks of DEHP exposure, females were paired with control males and, in the following 14 days, the embryos were collected and counted daily. Fecundity was remarkably compromised by DEHP at all doses tested. The most dramatic effects were observed at the highest dose of DEHP, where the number of embryos was about 1% of that produced by the control (Figure 1C). The number of embryos collected from females exposed to EE2 was similar to that obtained with 20 µg/l DEHP.

Effects of DEHP on in vitro GVBD

To further confirm the effect of DEHP on oocyte maturation, in vitro maturation assays were performed. Approximately 75% of stage IIIB follicles underwent GVBD following treatment with 17,20βP alone for 12 hrs. In contrast, steroid-induced oocyte maturation (GVBD) was inhibited significantly when stage IIIB oocytes were pre-exposed to two different doses of DEHP for 4 hrs prior to the 17,20βP treatment, compared to 17,20βP treatment alone. When DEHP and 17,20βP were added simultaneously, no inhibition of steroid-induced GVBD was observed (Figure 2).

Effects of DEHP exposure on ovarian BMP15 protein level

Since exposure to DEHP caused a strong inhibition of oocyte maturation, we tested the effect of DEHP on BMP15, which is known to be involved in this process. A significant dose-dependent increase in BMP15 protein was observed in all groups treated with DEHP, except for the 0.02 µg/l dose and EE2 groups (Figure 3A).

Effects of DEHP exposure on ovarian LHR gene expression

A significant decrease in LHR mRNA levels was observed in the oocytes of females exposed to DEHP or EE2 compared to the vehicle control group (p<0.05). The lowest expression of LHR was obtained in fish exposed to EE2 or to DEHP, with the lowest DEHP doses (0.02 and 0.2 µg/l) being the most injurious (Figure 3B).
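As a minimal illustration of the dose-response testing described under Data Analysis above (Kolmogorov-Smirnov normality check, one-way ANOVA, Bonferroni-corrected pairwise comparisons), the following Python sketch uses scipy; the group values are invented placeholders, not the study's measurements.

```python
# Sketch of the statistical workflow described under "Data Analysis".
# Fecundity counts below are hypothetical placeholders.
from itertools import combinations
from scipy import stats

groups = {  # embryos per female per day (invented numbers)
    "control":   [102, 95, 110, 98, 105],
    "DEHP_0.02": [60, 55, 70, 58, 62],
    "DEHP_40":   [2, 1, 3, 1, 2],
}

# Kolmogorov-Smirnov check of each group against a standard normal
for name, values in groups.items():
    z = stats.zscore(values)
    print(name, "KS p =", stats.kstest(z, "norm").pvalue)

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print("ANOVA p =", p_anova)

# Bonferroni correction: compare each pair at alpha / number_of_comparisons
pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)
for g1, g2 in pairs:
    p = stats.ttest_ind(groups[g1], groups[g2]).pvalue
    print(g1, "vs", g2, "significant:", p < alpha_corrected)
```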
Effects of DEHP on ovarian mPRs gene and protein expression

A significant decrease in both mPRβ mRNA (Figure 4A) and protein (Figure 4B) levels was observed in the oocytes of females exposed to all doses of DEHP or to EE2, compared to those in the vehicle control group. The lowest expression of mPRβ was obtained in fish exposed to EE2 or to DEHP at doses of 0.02 and 0.2 µg/l. No significant variations in mPRα gene and protein expression were observed with the same treatments (data not shown).

Effects of DEHP exposure on ovarian cyclooxygenase-2 gene expression

Compared to controls, a significant dose-dependent decrease in the expression of the cyclooxygenase-2 (ptgs2) gene was observed, with the lowest levels at the 40 µg/l dose (Figure 5).

Discussion

In this study, we provide evidence that environmentally relevant concentrations [21] of DEHP interfere with zebrafish reproductive performance, representing a concrete risk for aquatic populations living in polluted areas. We found that exposure of female zebrafish to DEHP or EE2 led to a significant increase in the circulating levels of vitellogenin. Since estrogen is well known for promoting vitellogenesis, this observation supports our hypothesis that DEHP has estrogenic effects in zebrafish. However, in fish exposed to the highest DEHP dose, there was a significant decrease in the number of post-vitellogenic oocytes, and the number of ovulated eggs was dramatically decreased by all treatments. According to a model proposed by Nagahama and collaborators [26], vitellogenesis and oocyte maturation are regulated primarily by FSH and LH, respectively. During oocyte maturation, LH induces 17,20βP production [27] and enhances the expression of membrane progestin receptors [28,29]. Studies in zebrafish have demonstrated that the effects of these hormones may be modulated and/or mediated by locally produced regulators, such as TGFβ family members [12]. Since a member of this family, BMP15, plays a physiological role in the zebrafish ovary by preventing precocious oocyte maturation [13], in the present study its levels were determined in the gonad of DEHP-treated females; the protein level was seen to increase significantly, representing one of the possible factors responsible for the observed block in oocyte maturation. In contrast to the increase in BMP15 level, we found that DEHP caused an inhibition of LHR and mPRβ expression. The strongest effect on these maturation factors was induced by the lowest doses of DEHP, which behaved like EE2. It has been reported that BMP15 inhibits the expression of mPRβ, but not of mPRα [14,25], and might also suppress LHR expression [17]. mPRβ has been demonstrated to be involved in initiating the resumption of meiosis during Xenopus oocyte maturation [30] and in regulating the in vitro maturation of pig cumulus-oocyte complexes [31]. Moreover, while previous studies have shown that microinjection of mPRα antisense oligonucleotides into oocytes resulted in partial inhibition of 17,20βP-induced oocyte maturation in goldfish [28], similar experiments conducted on zebrafish oocytes using the mPRβ subtype resulted in complete inhibition of oocyte maturation, suggesting that mPRβ has a key role in the control of this process in zebrafish [29]. The lack of egg production observed in the present study may be due to the reduction of the mPRβ subtype by DEHP exposure.
In addition, the differences in expression between the two mPRs observed here might also be due to the lower frequency of stage IV follicles in treated females, where the α isoform is commonly more abundant [32,33]. The in vivo results were supported by the in vitro GVBD inhibition by DEHP, which occurred only when the oocytes were exposed to this contaminant four hours before the incubation with 17,20βP, while no effects were observed when DEHP and 17,20βP were concomitantly added to the oocyte cultures. These results suggest that DEHP may act on the synthesis of local factors involved in GVBD, counteracting oocyte maturation when 17,20βP is added a few hours later. Ovulation, in turn, is controlled by prostaglandins, which are synthesized under the influence of 17,20βP. In this regard, a relationship was also demonstrated between ovarian ptgs2, the gene coding for the enzyme essential for the ovulation process [34], and the lack of ovulation in fish exposed to DEHP, with the highest doses being the most detrimental. The similar effects on vitellogenesis and ovulation, as well as on the expression of LHR, mPRβ and ptgs2, caused by DEHP or EE2 suggest that these two chemicals work in a similar way, inhibiting the maturation/ovulation process, as already found for EE2 in mammals [35]. The inhibition of ptgs2 gene transcription may lead to a reduction in the cyclooxygenase (COX) products, the prostaglandins, which in turn are essential for vertebrate ovulation [34,36]. On the contrary, BMP15 expression was induced by all doses of DEHP except the lowest, but not by EE2. Therefore, it is possible that DEHP may have additional functions that are not related to its estrogenic activity. It remains to be determined how DEHP regulates the expression of BMP15. In summary, the data presented here provide new insight into the molecular control of oogenesis by phthalates in zebrafish. We can conclude that all environmentally relevant doses of DEHP affect vitellogenesis, demonstrating its estrogenic potency. Different dose-related effects have been observed in relation to the maturation and ovulation processes: the lowest doses have stronger negative effects on signals inducing maturation (LHR and mPRβ), while the highest doses have a greater impact on the inhibition of ovulation (ptgs2). The results of this study, both in vivo and in vitro, clearly demonstrate that all doses of DEHP strongly impair oocyte maturation and ovulation by influencing the expression of factors involved in these processes. These results could help to build an integrated vertebrate model of the effects of this environmentally relevant compound during oogenesis, an emerging field of investigation.

Acknowledgments

The authors wish to thank the members of their laboratory who contributed to the study presented here.
Are there narrow flavor-exotic tetraquarks in large-$N_c$ QCD? A salient feature shared by all tetraquark candidates observed in experiment is the absence of flavor-exotic states of the type $\bar a b\bar c d$, with four different quark flavors. This phenomenon may be understood from the properties of large-$N_c$ QCD: On the one hand, consistency conditions for flavor-exotic Green functions, potentially containing these tetraquark poles, require the existence of two tetraquarks $T_A$ and $T_B$: each of them should decay dominantly via a single two-meson channel, $T_A\to M_{\bar a b}M_{\bar c d}$ and $T_B\to M_{\bar a d}M_{\bar c b}$, with suppressed rates $T_A\to M_{\bar a d}M_{\bar c b}$ and $T_B\to M_{\bar a b}M_{\bar c d}$. On the other hand, we have at hand only one diquark-antidiquark flavor structure $(\bar a \bar c)(b d)$ that might produce a compact tetraquark bound state. Taking into account that the diquark-antidiquark structure is the only viable candidate for a compact tetraquark state, one concludes that it is impossible to have two different narrow tetraquarks decaying dominantly into different two-meson channels. This contradiction suggests that large-$N_c$ QCD does not support the existence of narrow flavor-exotic tetraquarks. This argument does not rule out the possible existence of broad molecular-type flavor-exotic states, or of molecular-type bound states lying very close to the two-meson thresholds.

MOTIVATION

In recent years, many narrow near-threshold hadron resonances which have a favourable interpretation as tetraquark and pentaquark hadrons (i.e., hadrons with minimal parton configurations consisting of four and five quarks, respectively) have been observed experimentally [1,2]. An intriguing feature shared by all exotic candidates is the absence of states with a flavor-exotic structure, i.e., with four different quark flavors, which cannot be realized in a single ordinary hadron. The only flavor-exotic tetraquark candidate, the X(5568) reported by D0 [3], was not confirmed by LHCb [4], CMS [5], CDF [6], and ATLAS [7]. Lattice calculations also seem to rule out the existence of tetraquarks with such a structure [8], at least with the quark content $(c s\bar u d)$. In this paper, we attempt to understand this phenomenon from the large-$N_c$ perspective and give arguments why in large-$N_c$ QCD no narrow compact flavor-exotic states may exist. QCD with a large number of colors $N_c$ (i.e., $SU(N_c)$ gauge theory for large $N_c$, with quarks in the fundamental representation) and a simultaneously decreasing coupling $\alpha_s \sim 1/N_c$ [9,10] has proven to be a useful theoretical tool to explain the essential properties of hadron interactions and, in particular, the properties of possibly existing tetraquark and pentaquark hadrons [11][12][13][14][15][16][17][18][19]. Recently, we have formulated rigorous criteria to be satisfied by the four-point Green functions of bilinear quark currents appropriate for tetraquark poles [17,18]: any diagram which contributes to the potential tetraquark pole in the Mandelstam variable $s$, at $s = M_T^2$, where $M_T$ is the tetraquark mass, should satisfy the following two almost self-evident criteria: (i) The diagram should have a nontrivial (i.e., non-polynomial) dependence on $s$. (ii) It should support four-quark intermediate states and the corresponding cuts starting at $s = (m_1 + m_2 + m_3 + m_4)^2$, where $m_i$ are the masses of the quarks forming the tetraquark bound state.
The presence or absence of this cut is established by solving the Landau equations for the corresponding diagram [20]. Hereafter, we refer to diagrams which satisfy these criteria as tetraquark-phile diagrams. Here, we take a closer look at the four-point Green functions of quark bilinear currents (omitting spin and Lorentz indices, which do not play a fundamental role) of exotic flavor content. We show that the tetraquark-phile diagrams have a cylinder topology: For the direct (D) Green functions $T\{j_{\bar a b}\, j_{\bar c d}\, j^\dagger_{\bar a b}\, j^\dagger_{\bar c d}\}$ and $T\{j_{\bar a d}\, j_{\bar c b}\, j^\dagger_{\bar a d}\, j^\dagger_{\bar c b}\}$, the $N_c$-leading contributions are given by diagrams with multiple nonintersecting gluon exchanges lying on the tube; all these diagrams behave like $O(N_c^0)$. Cylinder diagrams with $k$ gluonic handles, which avoid intersection of gluon lines on the tube, behave like $O(N_c^{-2k})$. The consistency conditions between these Green functions then require the existence of two different tetraquarks, each of them decaying via one preferred two-meson channel [17,18]. Next, since the only viable flavor structure of narrow compact tetraquark bound states, resulting from confinement, is the diquark structure [21][22][23][24][25][26][27][28][29], and in the flavor-exotic case one has only one flavor diquark-antidiquark combination, $(\bar a \bar c)(b d)$ (a product of antisymmetric representations in color space), one encounters a contradiction with the requirement of the existence of two tetraquarks. One thus has to conclude that, without the presence of a dynamical fine-tuning mechanism, narrow flavor-exotic compact states do not exist in large-$N_c$ QCD.

NARROW FLAVOR-EXOTIC TETRAQUARK STATES AT LARGE $N_c$

We consider the case of possibly existing narrow flavor-exotic tetraquark states and study the properties of the tetraquark-phile diagrams for the direct and recombination Green functions.

A. Direct 4-point Green functions

According to the formulated criteria for selecting tetraquark-phile diagrams (which give necessary, but not sufficient, conditions for the existence of tetraquark poles), the lowest-order diagrams in $\alpha_s$ are the cylinder diagrams of Fig. 1, with two-gluon exchanges between the quark loops. They have cylinder topology, contain two color loops and are of order $\alpha_s^2 N_c^2 = O(N_c^0)$. We now add one more gluon; as an example, we consider adding a gluon to the diagram of Fig. 1. It is easy to establish the connection between the cylinder diagrams of Fig. 2 and the planar diagrams of Fig. 3: we break the quark lines $a$ and $d$ and put the two separated points of each line at $-\infty$ and $+\infty$, respectively. Then the classification into planar/nonplanar gluon exchanges becomes obvious. When calculating the color factors, one has to take into account that the left and the right ends of the line $a$ ($d$) are in fact joined together and form a color loop.

B. Recombination 4-point Green function

The $N_c$-leading tetraquark-phile diagram for the recombination Green function is shown in Fig. 4: it is a cylinder with one handle, has one color loop, and is of order $\alpha_s^2 N_c = O(N_c^{-1})$. Interestingly, there are no tetraquark-phile diagrams without handles in the recombination channel.

COMPARISON OF DIRECT AND RECOMBINATION GREEN FUNCTIONS

Applying our criteria for tetraquark-phile diagrams, one finds that the $N_c$-leading direct and recombination diagrams have different large-$N_c$ behaviors. If tetraquark poles emerge at all, they should emerge in the $N_c$-leading tetraquark-phile diagrams (any different setup is difficult to justify from the perspective of bound-state equations for tetraquarks).
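Before proceeding, the scalings quoted above, together with the resulting consistency requirements, can be collected schematically; since Eqs. (3.1) are not reproduced in the text, the form below is a reconstruction following Refs. [17,18], with $\alpha_s \sim 1/N_c$ understood and $A$ denoting a tetraquark-meson-meson coupling:
\begin{align*}
&\text{direct, tetraquark-phile (cylinder, no handle):} && \alpha_s^2 N_c^2 = O(N_c^0),\\
&\text{cylinder with $k$ gluonic handles:} && O(N_c^{-2k}),\\
&\text{recombination, tetraquark-phile (one handle):} && \alpha_s^2 N_c = O(N_c^{-1}),
\end{align*}
which can be matched by two poles with couplings
\begin{align*}
A(T_A \to M_{\bar a b} M_{\bar c d}) = O(N_c^{-1}), \qquad & A(T_A \to M_{\bar a d} M_{\bar c b}) = O(N_c^{-2}),\\
A(T_B \to M_{\bar a d} M_{\bar c b}) = O(N_c^{-1}), \qquad & A(T_B \to M_{\bar a b} M_{\bar c d}) = O(N_c^{-2}),
\end{align*}
so that each dominant channel gives $\Gamma(T_{A,B}) \sim |A|^2 = O(N_c^{-2})$.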
Then, one needs two narrow tetraquark states $T_A$ and $T_B$, each decaying via one preferred meson-meson channel, in order to satisfy the consistency conditions between the D and R Green functions, Eqs. (3.1) [17,18] (cf. the schematic form above). If the bound states exist, their widths $\Gamma(T_{A,B})$ will be determined by the dominant channels, which yields $\Gamma(T_{A,B}) = O(N_c^{-2})$, thus confirming the narrow-width property of the tetraquark candidate states. The $N_c$-matching conditions also allow us to deduce the properties of the effective tree-level meson-meson interactions. The direct-channel $N_c$-leading connected diagrams are OZI-suppressed [10,11] and the corresponding effective meson-meson interactions come out to be of order $1/N_c^2$, resulting either from contact terms (Fig. 5(a)) or from glueball exchanges (Fig. 5(b)) [18]. On the other hand, the recombination-channel interactions are of the generic order $1/N_c$, resulting either from contact terms (Fig. 6(a)) or from meson exchanges (Fig. 6(b)) [18]. (They are provided by the $N_c$-leading diagrams of the recombination channel.) Taking into account the above properties and Eqs. (3.1), one deduces the dominant structure of each of the two tetraquarks: $T_A$ has the structure $(\bar a d)(\bar c b)$, while $T_B$ has the structure $(\bar a b)(\bar c d)$, both of them being products of two color-singlet clusters; their dominant decay channels proceed through the recombination (quark-exchange) process, rather than through the dissociation one. However, in the diquark-antidiquark mechanism of tetraquark formation [21][22][23][24][25], one disposes of only one flavor-exotic combination, $(\bar a\bar c)(bd)$, in the form of a product of color-antisymmetric representations, which leads to a contradiction with the above requirement of the existence of two tetraquarks. One is thus led to conclude that in large-$N_c$ QCD flavor-exotic compact narrow tetraquarks might not exist. This conclusion rests on the observed different behaviors of the tetraquark-phile contributions to the direct and recombination Green functions. Can one formulate different consistent criteria for selecting tetraquark-phile diagrams? It is conceivable that, for some dynamical reasons, tetraquarks do not necessarily contribute to the generic leading diagrams that have been taken into account. The authors of [19] consider such a possibility by imposing more stringent selection rules. These are based on two main assumptions: (i) Tetraquark-phile diagrams have a nonplanar topology with one gluonic handle. (ii) Only one class of diagrams, either D or R, contributes to the tetraquark formation. For phenomenological reasons, it is channel D that is chosen as admissible for tetraquark emergence. Then a single tetraquark may accommodate the consistency conditions, with a coupling of order $N_c^{-2}$ to the two sets of available meson pairs. Without intending to discard the possibility of a selection mechanism as described in [19], which demands, however, a more detailed investigation on dynamical grounds, we would like to draw attention to one argument that does not seem well founded. The main justification in [19] for imposing on the channel-D tetraquark-phile diagrams the requirement of having a handle is based on the assertion that D-type planar diagrams do not describe mutual interactions of meson or $(q\bar q)$ pairs. However, this is contradicted by the existence of two-meson intermediate states contributing through planar diagrams to meson-meson scattering.
First, the $N_c$-leading diagrams of the R channel, which do not have four-quark singularities, still contribute to the effective meson-meson interaction through the diagrams of Fig. 6, the global coupling being of order $N_c^{-1}$. Second, the unitarity condition requires that the diagrams of Fig. 6 generate meson-loop diagrams of the type of Fig. 7, which are genuine parts of the meson-meson scattering amplitude in channel D. They are of order $N_c^{-2}$, i.e., of the same order as the leading planar diagrams of channel D. This could not happen if their underlying QCD diagrams were not of the planar type. A typical such diagram is presented in Fig. 8. The intermediate states obtained from a vertical cut are precisely those corresponding to the quark-exchange process. Here, color rearrangement plays a physical role by converting the pair of initial mesons into the other pair. This shows that the leading-order planar diagrams of channel D are not physically empty and describe a part of the meson-meson scattering process. The question of whether a compact tetraquark pole may emerge from such a process or not still remains a relevant issue. Meson-meson interactions are expected to be short-ranged, and if a tetraquark pole exists in the corresponding scattering amplitude, as a bound state or a resonance, it should be loosely bound and would presumably correspond to a molecular-type state. This possibility is examined in Sec. 4. The possible existence of a hidden dynamical mechanism which favors the emergence of compact tetraquarks in fully exotic channels is an open question and deserves further study. We also emphasize that the conclusions obtained in the main part of this section apply only to the fully exotic case (four different quark flavors). For systems with a smaller number of quark flavors, additional QCD diagrams, not present in the fully exotic case, may invalidate some of the results obtained above and may allow for the existence of one tetraquark [17,18].

MOLECULAR STATES AT LARGE $N_c$

Tetraquarks may also have a molecular structure, resulting from meson-meson interactions and existing either in the form of bound states or of resonances [30][31][32][33][34][35][36]. Meson-meson interactions are generally formulated in the form of effective Lagrangians or of empirical potentials. In the large-$N_c$ limit, these interactions are expected to scale as $1/N_c$ [10,11]. In the case of mesons made of light quarks ($u$, $d$, $s$) and, in particular, involving the lightest pseudoscalar mesons, chiral perturbation theory (ChPT) [37,38] provides the general effective Lagrangian suited for the description of the corresponding interactions at low energies. An extension of the energy domain of validity of ChPT is achieved with the aid of the unitarization condition for the scattering amplitudes, together with the use of dispersion relations [39,40]. In sectors involving heavy quarks, the masses of the latter introduce new scales into the system, which must be taken into account. The heavy mesons, in addition to their contact-type interactions, also interact through light-meson exchanges of Yukawa type [34,35,41,42] (cf. Fig. 6(b), where the heavy quarks correspond to those denoted $b$ and $d$). The latter interactions provide additional opportunities for the emergence of bound states or of resonances. A more systematic study can be done with the use of Heavy Quark Effective Field Theory and the associated spin and flavor symmetries [36][43][44][45][46][47][48].
The possible dynamical emergence of bound states or of resonances from meson-meson interactions is studied by summing, in the evaluation of the scattering amplitude, chains of diagrams of similar structure; Fig. 9, where we have explicitly factored out the $N_c$ dependence of the effective coupling constant, schematically displays the summation of bubble diagrams. These diagrams are generally divergent and, accordingly, the effective coupling constants undergo renormalization [46,49,50]. According to the signs of the renormalized coupling constants, which might be determined from other experimental data, a bound-state or a resonance pole may emerge. The important qualitative feature of the resulting dynamical pole is that its mass squared, in the case of a resonance, or its binding energy, in the case of a bound state, is essentially proportional, up to small corrections, to the inverse of the effective coupling constant [46,49,50]. Therefore, assuming that the renormalized effective coupling constant remains, like the bare one, inversely proportional to $N_c$, the resonance mass will be pushed towards infinity with a broad width, while the bound state will disappear from the spectrum. In the case of light quarks, a detailed study of the problem has been presented in [39,40]. The general result is that, in the scalar-isoscalar $s$-channel of $\pi\pi$ scattering, a dynamical resonance, corresponding to the observed $f_0(500)$ resonance, emerges, having a dominant structure of two quarks and two antiquarks, in distinction from the ordinary mesons [51]. This result has also been confirmed by a direct solution of the four-quark Bethe-Salpeter equation [52]. At large $N_c$, the mass and the width of the resonance behave as $\sqrt{N_c}$, as expected from the general qualitative features outlined above. Generalization of the calculations to other combinations of the light quarks is expected to provide similar qualitative conclusions. In the case of sectors involving heavy quarks, the above conclusions would remain true in the formal limit of $N_c$ going to infinity, but for finite values of $N_c$ the observable effects might be less striking, since the mass gap between the two-meson threshold and the resonance position would be relatively reduced as compared to the light-quark-sector case. Can molecular-type tetraquarks contribute to the large-$N_c$ analysis of Green functions? The answer is negative, since the emergence of a molecular-type pole necessitates the summation of a chain of diagrams with different orders in $1/N_c$ (Fig. 9). This is in contrast to the case of compact tetraquarks, where the summation of planar diagrams is done at the same order in $N_c$ together with the creation of the pole; this makes possible the matching of the pole contributions in the Green function, on the one hand, and in the Feynman diagrams, on the other. For molecular-type tetraquarks, the latter contribute to Feynman diagrams at leading order in $N_c$ only through their contact terms and one-meson-exchange terms (Fig. 6), and therefore a matching-type analysis is not possible. In that case, one has to deduce the properties of the tetraquark from the explicit summation itself. Particular attention should be paid to the case of molecular-type bound states, also called deuteron-like, which may emerge very close to the two-meson threshold, with an unnaturally small binding energy [36,53].
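The pole-position statement above can be made explicit with a schematic bubble summation (a sketch, not the detailed treatment of Refs. [39,40,46,49,50]): with an effective coupling $g \sim \lambda/N_c$ and a two-meson loop function $B(s)$, the chain of Fig. 9 resums as
\[
T(s) \;=\; g + g\,B(s)\,g + g\,B(s)\,g\,B(s)\,g + \cdots \;=\; \frac{g}{1 - g\,B(s)},
\]
so that a pole requires $B(s_{\rm pole}) = 1/g = O(N_c)$. As $N_c$ grows, the pole is thus pushed to large $s$ (consistent with the $\sqrt{N_c}$ growth of the resonance mass quoted above), while a bound state of fixed mass drops out of the spectrum.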
Such near-threshold bound states are also characterized by a large (negative) value of the S-wave scattering length, much greater in magnitude than the natural scale provided by the physical parameters of the system, and exhibit universality properties [54]. According to the general properties of the emergence of dynamical poles in the scattering amplitude as outlined above, these bound states might appear only in the strong-coupling limit of the effective theory, while the large-$N_c$ limit drives the theory to its weak-coupling limit. Therefore, in the formal limit of large $N_c$, one also predicts the disappearance of these states. In general, the details of the creation mechanism of these states not being well known, it is admitted that underlying fine-tuning processes might be at work for their existence [54]. This would mean that they are very sensitive to variations of their physical parameters and, in particular, of $N_c$. Results obtained at large $N_c$ might not correctly describe their physical properties at finite values of $N_c$. Coming now back to the case of flavor-exotic tetraquarks, we recall that the direct-channel effective meson-meson interactions are actually of order $1/N_c^2$ (cf. Fig. 5), i.e., they are much weaker than in the generic case ($\sim 1/N_c$). The recombination-channel interactions remain of order $1/N_c$ (cf. Fig. 6); however, since they represent off-diagonal-type contributions in a coupled-channel formalism, their effective contributions to the resonance or bound-state pole formation will still be as in the direct-channel case. Therefore, the possibly existing resonance-pole positions will be pushed even more strongly to infinity than in the generic cases, while bound-state poles will be absent from the spectrum. In conclusion, narrow-width molecular-type tetraquarks, with masses that remain fixed at large $N_c$, are not generally expected to occur in flavor-exotic sectors. Assuming that the continuation to finite values of $N_c$ remains a smooth operation in the theory, this statement would still be valid in the physical world, except possibly in the particular case of a bound state lying very close to the two-meson threshold.

CONCLUSIONS

We have considered, in the large-$N_c$ limit of QCD, the possibility of the existence of narrow four-quark states of exotic flavor content, involving four quarks of different flavors (which requires two quarks and two antiquarks as a minimal parton configuration). The two cases of compact and molecular tetraquarks have been examined. Compact tetraquarks are the genuine candidates in the quest for narrow-width states at large $N_c$. In the sectors of flavor-exotic states, the consistency constraints, coming from the direct- and recombination- (or quark-exchange-) type channels, require the existence of two different tetraquarks, each having a structure made of two color-singlet clusters or mesons, and decaying into a preferred two-meson channel, fixed by the dominance of the recombination-type effective interaction. On the other hand, the formation mechanism of tetraquarks through a primary formation of diquarks and antidiquarks predicts the existence of one tetraquark, decaying with equal weights, up to small corrections, into the two different two-meson channels. This contradiction suggests that compact tetraquarks do not exist in flavor-exotic sectors, unless some hidden dynamical mechanism favors their emergence [19,26,29].
Molecular tetraquarks, because of the weakening of the effective meson-meson interactions at large $N_c$, might only exist as resonances with masses and widths that increase like $\sqrt{N_c}$. In the case of the presence of heavy mesons, the mass gap between the resonance position and the two-meson thresholds might be substantially reduced at finite values of $N_c$. In the flavor-exotic case, the effective interactions are much weaker than in the generic cases and, because of this feature, the masses of the possibly existing resonances are repelled to higher values. Therefore, at large $N_c$, no molecular-type tetraquarks with fixed masses and narrow widths are expected to emerge. An exceptional case might occur, at finite $N_c$, with the emergence of a single bound state lying very close to the two-meson threshold. Up to now, experimental data, as well as lattice calculations, do not provide evidence for the existence of flavor-exotic tetraquarks in sectors involving one heavy quark, $c$ or $b$. Flavor-exotic sectors involving two heavy quarks, $c$ and $b$, seem to be as yet unexplored. Therefore, experimental data and lattice calculations for these sectors would be of great help for the understanding of the underlying dynamics of QCD.
Structural variation turnovers and defective genomes: key drivers for the in vitro evolution of the large double-stranded DNA koi herpesvirus (KHV)

Structural variations (SVs) constitute a significant source of genetic variability in virus genomes. Yet knowledge about SV variability and its contribution to the evolutionary process in large double-stranded (ds)DNA viruses is limited. Cyprinid herpesvirus 3 (CyHV-3), also commonly known as koi herpesvirus (KHV), has the largest dsDNA genome among herpesviruses. This virus has become one of the biggest threats to common carp and koi farming, resulting in high morbidity and mortality of fish, serious environmental damage, and severe economic losses. A previous study analyzing CyHV-3 virulence evolution during serial passages onto carp cell cultures suggested that CyHV-3 evolves, at least in vitro, through an assembly of haplotypes that alternatively become dominant or under-represented. The present study investigates the SV diversity and dynamics in the CyHV-3 genome during 99 serial passages in cell culture using, for the first time, ultra-deep whole-genome and amplicon-based sequencing. The results indicate that KHV polymorphism mostly involves SVs. These SVs display a wide distribution along the genome and exhibit high turnover dynamics, with a clear bias towards inversion and deletion events. Analysis of the pathogenesis-associated ORF150 region in ten intermediate cell passages highlighted mainly deletion, inversion and insertion variations that deeply altered the structure of ORF150. Our findings indicate that SV turnovers and defective genomes represent key drivers in the viral population dynamics and in vitro evolution of KHV. Thus, the present study can contribute to the basic research needed to design safe live-attenuated vaccines, classically obtained by viral attenuation after serial passages in cell culture.

Defective viral genomes (DVGs) were first described as incomplete viral genomes that can interfere with wild-type virus replication (Vignuzzi and López 2019). Since then, the role of DVGs in antiviral immunity and viral persistence, and their negative impact on virus replication and production, has been established (Bull et al., 2003; Li et al., 2011; Vignuzzi and López 2019; Loiseau et al., 2020). Nowadays, DVGs have been described in most RNA viruses and, to a lesser extent, in dsDNA viruses (Vignuzzi and López 2019; Loiseau et al., 2020). Despite the critical role of SVs in virus infection dynamics, knowledge about structural variation diversity and its evolutionary impact in viral populations, especially those with large dsDNA genomes, is limited. The large dsDNA Cyprinid herpesvirus 3 (CyHV-3), more commonly known as koi herpesvirus (KHV), is one of the most virulent viruses of fish. It is a lethally infectious agent that infects common carp and koi (Cyprinus carpio) at all stages of their life (Hedrick et al., 2000; Haenen et al., 2004). KHV infections are usually associated with high morbidities and mortalities (up to 95%), resulting in serious environmental damage and severe economic losses (Sunarto et al., 2011; Rakus et al., 2013). This threatening virus has had a rapid worldwide spread due to the global fish trade and international ornamental koi exhibitions (Gotesman et al., 2013). Classified within the family Alloherpesviridae, genus Cyprinivirus, CyHV-3 is the subject of an increasing number of studies and has become the archetype of alloherpesviruses (Boutier et al., 2015).
Despite this "status", only 19 isolates have been entirely sequenced so far (source: NCBI) since the release of the first complete genome sequences in 2007 (Aoki et al., 2007). Such a low number of full genomes impairs large-scale phylogenomic studies (Gao et al., 2018). On the other hand, KHV infections have been shown to be the result of haplotype mixtures, both in vivo and in vitro (Hammoumi et al, 2016;Klafack et al, 2019). If mixed-haplotype infections probably represent an additional source of diversification for KHV (Renner and Szpara, 2018), they make genomic comparisons more challenging. KHV has the largest genome among all known herpesviruses, with a size of approximately 295 kb and 156 predicted open reading frames (ORFs) (Aoki et al. 2007). Several studies focusing on the analysis of viral ORFs have shown the implication of some of them in KHV virulence (Boutier et al., 2015;Fuchs et al. 2014). KHV isolates are known to carry mutations in ORFs that are likely to alter gene functions, and these mutations may vary from virus to virus (Gao et al., 2018) and even within viruses (Hammoumi et al., 2016). Aoki et al (2007) hypothesized that virulent KHV would have arisen from a wild-type ancestor by loss of gene function. However, nearly 15 years later, this hypothesis has still not been tested, probably because of the lack of extensive genomic comparisons. SVs may play a key role in this gene function loss, as recently shown by Klafack et al (2019). These authors conducted a comparative study of a cell culture-propagated isolate that suggested that CyHV-3 evolves through an assemblage of haplotypes whose composition changes within cell passages. This study revealed a deletion of 1,363 bp in the ORF150 of the majority of haplotypes after 78 passages (P78), which was not detected after 99 passages. Furthermore, experimental infections showed that the virus passaged 78 times was much less virulent compared to the original wild-type on the one hand and slightly less virulent compared to the same virus passaged 99 times (P99), highlighting the potentially important role of the ORF150 in the virulence of KHV. Besides, this study demonstrated that haplotype assemblies evolve very rapidly along successive in vitro cell passages during infectious cycles, and raised many questions regarding the mechanisms leading to such rapid gene loss and gain in vitro. The present study sought to characterize the SV diversity and dynamics in the KHV genome using viruses propagated onto cell cultures. First, P78 and P99 whole virus genomes were sequenced using ultra-deep long-read sequencing, a first with KHV. Then, the obviously pathogenesis-associated ORF150 region (~5 kb) was sequenced in ten intermediate successive cell passages through an Oxford nanopore® amplicon-based sequencing approach to gain insights into the gene loss and gain mechanisms. Extraction of high molecular weight DNA from P78 and P99 cell culture passages The virus isolate used in this study was the same as that previously described in Klafack et al. (2019), i.e. an isolate collected from an infected koi in Taiwan (KHV-T) and passed 99 times onto common carp brain (CCB) cells. Considering previous results, a special focus was made on passages 78 (P78) and 99 (P99). Genomic DNA was extracted from cell cultures stored at -80°C, using the MagAttract HMW DNA Kit (Qiagen). Each frozen culture was thawed quickly in a 37°C water bath, equilibrated to room temperature (25°C) and divided into 12 cell culture aliquots of 250 µL. 
Tubes were centrifuged at 3,000× g for 1 minute and supernatants were transferred into new 2-mL tubes containing 200 µL of proteinase K and RNase A solution. DNA was subsequently extracted according to the manufacturer's recommendations and eluted in 200 µL of the distilled water provided in the kit. The 12 replicates of each sample were pooled together and evaporated at room temperature using a vacuum concentrator, to reach a final volume of around 60 µL. Concentrated DNA was quantified by fluorometry (Qubit, ThermoFisher Scientific) and its quality was evaluated by spectrophotometry (Nanodrop 2100) and agarose gel electrophoresis. The final concentrations of P78 and P99 were 14.4 and 2.6 ng·µL⁻¹, respectively. The qPCR amplification program consisted of an initial denaturation at 95 °C for 5 min followed by 45 cycles of amplification at 95 °C for 10 sec, annealing at 60 °C for 20 sec and elongation at 72 °C for 10 sec with a single fluorescence measurement. After amplification, a melting step was applied, which comprised a denaturation at 95 °C for 5 sec, a renaturation at 65 °C for 60 sec and a heating step from 65 to 97 °C with a ramp of 0.1 °C per second and continuous fluorescence acquisition. The specificity of amplification was verified by visual inspection of the melting profiles, and the ratio between cellular and viral DNA was estimated as 2^(-ΔCq), assuming that each primer pair has an amplification efficiency close to 2 and that each amplicon is present as a single copy per genome.

Genomic library preparation and Oxford Nanopore whole genome sequencing

High-quality genomic DNA from the two samples (P78 and P99) was sequenced using Oxford Nanopore technology.

Amplicon-based Oxford Nanopore sequencing

To specifically investigate the ORF150 region, a fragment of ~4.3 kb encompassing the whole ORF was amplified, with each primer at 10 µM. Cycling conditions were as follows: initial denaturation at 95 °C for 10 min, amplification with 40 cycles of 95 °C for 10 sec, 60 °C for 20 sec and 72 °C for 3 min, and a final extension at 72 °C for 5 min. PCR products were purified using 1X Agencourt AMPure XP beads, tested for purity using the NanoDrop™ One spectrophotometer (ThermoFisher Scientific), and quantified fluorometrically using the Qubit dsDNA High Sensitivity kit. DNA libraries were prepared using the Rapid Barcoding kit (SQK-RBK004), following the manufacturer's instructions. For each sample, 400 ng of purified amplicon were adjusted with nuclease-free water to a total volume of 7.5 µL and supplemented with 2.5 µL of Fragmentation Mix RB01-4 (one for each sample). Four barcoded samples were combined at an equimolar ratio by mixing 2.5 µL of each sample in a total volume of 10 µL. Furthermore, two additional barcodes (RB01-2) were used with the passages P78 and P99 as described above. Pooled libraries were sequenced on 3 R9.4.1 flow cells (4 barcodes, 4 barcodes, 2 barcodes) for 24 hours, and the sequencing runs were controlled with MinKNOW version 0.49.3.7.

DNA sequence analysis

For each sample, bases from raw FAST5 files with a pass sequencing tag were re-called, and long reads were retained in order to span large structural variants. After the mapping step using minimap2 (Li, 2018), the sequencing depth was calculated for each sample using the plotCoverage tool implemented in the deepTools2.0 tool suite (Ramírez et al., 2016). Sequencing coverage was assessed with the bamCoverage tool from the same tool suite and normalized using the RPGC (reads per genome coverage) method (Figure 1).
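Returning to the qPCR estimate above, the cellular-to-viral DNA ratio follows from the Cq difference under the stated assumptions (amplification efficiency close to 2, single-copy amplicons). A minimal Python sketch, with hypothetical Cq values:

```python
# Sketch of the 2^(-ΔCq) ratio estimate described above; Cq values invented.
def cellular_to_viral_ratio(cq_cellular: float, cq_viral: float) -> float:
    """Each Cq cycle of difference corresponds to a factor of ~2 in copies:
    the template present in larger amounts crosses the threshold earlier."""
    delta_cq = cq_cellular - cq_viral
    return 2.0 ** (-delta_cq)

# Example: cellular target crossing ~15 cycles earlier than the viral one
print(cellular_to_viral_ratio(cq_cellular=12.0, cq_viral=27.1))  # ~3.5e4
```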
Structural variant detection

To detect structural variants (SVs) in the P78 and P99 whole genomes, a mapping step followed by BAM filtering was performed. Two aligners were used to map the raw long reads against the KHV-J AP008984.1 reference genome: minimap2 (Li, 2018) and NGMLR (Sedlazeck et al., 2018). BAM files were then filtered using the '-F' option of samtools view, with the flag "4" to keep only mapped reads and with the flag "0x800" to remove chimeric reads (inconsistent/supplementary mappings). For P78 and P99, 99.33% and 97.77% of the reads were mapped, respectively. Chimeric reads represented 28.05% and 17.74% of the mapped reads in P78 and P99, respectively. The resulting filtered BAM files from each mapper were used as input data for the SV caller Sniffles (Sedlazeck et al., 2018). Only SVs ≥ 30 bp and supported by at least 10 reads were kept in the final VCF files. A cross-validation step was performed using SURVIVOR (Jeffares et al., 2017) by extracting the SVs common to each mapper/caller combination for each sample. Although the KHV-J reference genome used for the mapping is phylogenetically close to the KHV-T isolate, some genetic diversity exists (Klafack et al., 2017). Hence, to exclude inter-isolate SVs, a pairwise comparison between P78 and P99 was made. The distribution of the different SVs along the P78 and P99 genomes was assessed by estimating their occurrences using a 5 kb sliding window. SNP and indel variants were called in P78 and P99 by medaka_variant, implemented in medaka (1.4.4), using KHV-J AP008984.1 as the reference genome. To detect structural variants in the amplified region (nt 257,103-261,345) in the P10 to P99 samples, a size-filtering step using guppyplex was added to the steps described above. Only reads from 1.5 kb to 8 kb were used for the analysis.

Main features of sequencing data for P78 and P99

A total of 4,900,000 and 2,293,830 long reads were obtained for P78 and P99, respectively. After filtering, 462,982 long reads with an average length of 4.96 kb were retained for P78, and 418,034 reads with an average length of 7.06 kb for P99 (Table 1, Table S1). 100% of the sampled bases from the P78 genome had at least 5,000 overlapping reads, and 100% of the sampled bases from the P99 genome had at least 7,500 overlapping reads (Figure S2). Both the P78 and P99 genomes were entirely covered by the sequencing data (Figure 1). The ratio between cellular and viral DNA, calculated by qPCR, was 34,800 and 33,200 for P78 and P99, and the percentages of mapped reads were 99.33% and 97.77%, respectively.

[Figure 1. Normalized sequencing coverage for the P78 and P99 samples using the RPGC (reads per genome coverage) method. Both the P78 and P99 genomes were totally covered by the sequencing. For P78, the coverage break (green triangle) corresponds to the 1.3 kb deletion.]

[Table 1 notes: Q20 = 1 in 100 probability of an incorrect base call; Q30 = 1 in 1,000 probability of an incorrect base call. 59.53% of P78 bases and 58.27% of P99 bases were ≥ Q20.]

SV distribution in P78 and P99

For P78, the mapper/caller combination minimap2/Sniffles detected 731 structural variations (SVs), and the combination NGMLR/Sniffles detected 460 SVs (Table S2). For P99, the combination minimap2/Sniffles detected 210 SVs and NGMLR/Sniffles detected 397 SVs (Table S2). Independently of the mapper/caller combination used, P78 showed more SVs than P99 (Table S2). After the cross-validation step (extracting the SVs common to each mapper/caller combination), 236 and 87 SVs were kept for P78 and P99, respectively.
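A hedged sketch of the read- and SV-filtering steps described above, written with pysam rather than the samtools/SURVIVOR command line actually used; the file names are placeholders, and the SVLEN/RE INFO keys are assumed to be those written by Sniffles:

```python
# Sketch of the BAM filtering described above: keep mapped reads (-F 4)
# and drop chimeric/supplementary alignments (-F 0x800), using pysam.
import pysam

with pysam.AlignmentFile("P78.minimap2.bam", "rb") as bam_in, \
     pysam.AlignmentFile("P78.filtered.bam", "wb", template=bam_in) as bam_out:
    for read in bam_in:
        if read.is_unmapped:        # equivalent of samtools view -F 4
            continue
        if read.is_supplementary:   # equivalent of samtools view -F 0x800
            continue
        bam_out.write(read)

# Post-filtering of the SV calls: keep SVs >= 30 bp supported by >= 10 reads,
# assuming SVLEN and RE INFO fields as written by Sniffles.
kept = []
with open("P78.sniffles.vcf") as vcf:
    for line in vcf:
        if line.startswith("#"):
            continue
        info_field = line.split("\t")[7]
        info = dict(kv.split("=", 1) for kv in info_field.split(";") if "=" in kv)
        if abs(int(info.get("SVLEN", 0))) >= 30 and int(info.get("RE", 0)) >= 10:
            kept.append(line)
print(len(kept), "SVs retained")
```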
For comparison, the number of small variations (with a size < 30 nt, which were excluded from this study) amounted to 57 and 77 for P78 and P99, respectively (Figure 2B). The frequencies of these SVs were low and did not exceed 1% of the total reads (with a few exceptions, Table S2). In spite of such low frequencies, it is interesting to note that the most frequent SVs were located in ORFs potentially involved in DNA replication and encapsidation, e.g. ORF33, 46, 47 and 55 (Aoki et al., 2007; Table S2). Altogether, these results highlight high SV turnover dynamics during the in vitro infection cycles (from passage 78 to passage 99), with a clear trend, or bias, towards inversion and deletion events.

Dynamics and impacts of SVs in the ORF150 region

Taking advantage of the high-resolution SV detection provided by long-read sequencing, we looked for SV events around the potentially virulence-linked ORF150 in P78 and P99 (nt 257,103-261,345 according to AP008984.1). The results confirmed that P99 had a reference-like profile with an unmodified ORF150. In P78, the deletion (nt 258,154-259,517; D258153) was found in 6,902 reads (100% of the reads), whereas the reference haplotype was also detected in 30 reads, representing 0.44% of the total 6,734 supporting reads (Figure 3, Table S3). Surprisingly, 26 reads revealed a previously unidentified haplotype (INV258153), consisting of an inversion of the same length (1,363 bp) and at the same breakpoints as the deletion. The inverted haplotype (INV258153) in P78 deeply altered ORF150, by inverting the first 1200 bp of the ORF and 160 bp of the 5'UTR in the middle of the ORF (Figure 3). In order to trace the unexpected dynamics of gain and loss of the full ORF150 along the passages, we searched for SV turnovers during 10 intermediate passages (P10, P20, P30, P40, P50, P70, P78, P80, P90, P99). This analysis revealed the presence of haplotype D258153 at low frequency (from 0.05 to 0.15% of the reads) in passages P10 to P40 and a strong increase in its frequency at P50 (88.7% of the reads) (Figure 4). The frequency of haplotype D258153 reached a maximum at P78 (100% of the reads), then dropped quickly at P80 (30.7% of the reads) to stabilize at low frequency (0.31% of the reads) at P90, as during the first 40 passages (Figure 4, Table S3). Interestingly, shorter deletions of 119 and 881 bp were observed near the 5' end of ORF150 in P40 and P80, respectively, at low frequencies (0.42% in P40 and 0.18% in P80) (Figure 4, Table S3). Haplotype D258153 completely disappeared at passage 99 (Figure 4, Table S3). This analysis also evidenced several other SVs that alter the structure of ORF150 and of its upstream region, including the beginning of ORF149 (Figure 5). Besides the large deletion, inversions and insertions were also observed in the ORF149-ORF150 region. Inversions were at a low frequency (between 0.01% and 0.53% of the supporting reads) in all passages except P70, P90 and P99. P10 and P40 showed the lowest and the highest inversion frequencies, respectively (Figure 5, Table S3). A large insertion of about 1 kb appeared in P50 and P70 at moderate frequencies (14.34% and 16.01% of the supporting reads, respectively), disappeared in P78, and re-appeared at a lower frequency (6.79% of the supporting reads) in P80. The consensus sequence of this insertion corresponds to the fragment 259,517-260,477 of the KHV genome, with an identity of about 90%.
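The haplotype frequencies reported above are simple supporting-read proportions per passage; a small Python sketch of this bookkeeping (the counts are illustrative placeholders loosely echoing the D258153 trend, not the actual read counts):

```python
# Per-passage haplotype frequency = supporting reads / total reads spanning
# the ORF150 amplicon. Counts below are illustrative placeholders only.
passages = {
    "P40": {"reference": 19_990, "D258153": 10},      # ~0.05% deleted
    "P50": {"reference": 2_260,  "D258153": 17_740},  # ~88.7% deleted
    "P78": {"reference": 30,     "D258153": 6_902},   # deletion dominant
    "P99": {"reference": 20_000, "D258153": 0},       # deletion cleared
}

for passage, counts in passages.items():
    total = sum(counts.values())
    freqs = {hap: 100.0 * n / total for hap, n in counts.items()}
    print(passage, {hap: f"{f:.2f}%" for hap, f in freqs.items()})
```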
In P90, an intriguing inverted-duplicated haplotype was observed at a low proportion (0.054% of the supporting reads). Surprisingly, P99 exhibited a unique reference-like, SV-free haplotype (Figure 5, Table S3). All these variations deeply impacted the structure of ORF150 (and sometimes that of ORF149 as well): the deleted, inserted, inverted and inverted-duplicated haplotypes shrank or increased its size, fused ORF149 and ORF150, inverted ORF150 sequences, or duplicated ORF150 (Figure 5).

Discussion

SVs significantly impact the adaptation of viruses to their natural host and environment (Pérez-Losada et al., 2015). Yet the role of SV diversity and dynamics in large DNA viruses is barely known. Ultra-deep long-read sequencing opens unprecedented ways to gain insights into these untapped viral genome polymorphisms. The present study started to tackle the impact of SVs on the evolution of the large dsDNA KHV during cell culture serial passages, using ultra-deep whole-genome and amplicon-based sequencing. The sequence data showed a wide distribution of various SVs along the genome, associated with high SV turnover dynamics during the in vitro infection cycles and a clear bias towards inversion and deletion events. Analysis of the pathogenicity-associated ORF150 region in ten serial passages mainly highlighted deletions, inversions and insertions that deeply altered the structure of ORF150. Serial passages of viruses in cell culture may lead to the accumulation of mutations and gene disruptions (Spatz 2010; Colgrove et al., 2014). These mutations can modify viral adaptation and increase or decrease virulence (López-Muñoz et al., 2021). In the case of KHV, previous work using short-read sequencing showed that 99 consecutive in vitro passages onto CCB cells resulted in the accumulation of less than 60 small variations (<100 nt) (Klafack et al., 2019). It also showed that the haplotype composition can vary quickly along the infection cycles of KHV in vitro. The present study unexpectedly highlighted a high number of structural variations: 87 for P99 and 236 for P78. In contrast, the accumulation of small variations was consistent with what had been observed with short-read sequencing (Klafack et al., 2019). These findings illustrate that long-read sequencing is highly suitable for genome-wide comparisons of viruses. Most importantly, they revealed a hidden source of virus diversification, which had never been reported so far for KHV. They also confirmed that P78 consists of a mixture of undeleted and deleted haplotypes, revealing that the undeleted haplotype did not correspond to the native one but to an inverted version of it. We experimentally validated this inversion in P78 by PCR followed by sequencing. Moreover, the location of the inversion in the middle of the reads excluded the formation of in silico chimeras, i.e., chimeras produced by the basecaller when two molecules are sequenced in the same pore undergoing fast reloading (Martin and Legget, 2021). These structural variations form a 'mosaic' of viral subpopulations that seem to result from multiple rearrangement events, mainly inversions and deletions and, to a lesser extent, insertions and duplications. Such sub-viral variants often lead to Defective Viral Genomes (DVGs) (Vignuzzi and López 2019).
Because of their negative impact on viral replication, some forms of DVGs have been extensively studied, and three pathogenesis-related functions have been well described: interference with viral replication, immunostimulation, and viral persistence (Marriott and Dimmock 2010; Vignuzzi and López 2019). DVGs play a role in viral production interference by accumulating at higher rates than the full-length viral genomes; they consequently interfere with viral replication by taking up the polymerase activity and competing for structural proteins (Calain and Roux 1995; Portner and Kingsbury 1971). In addition, DVGs can act as primary stimuli and trigger antiviral immunity by inducing the expression of some interleukins and pro-inflammatory cytokines (for a review, see Vignuzzi and López 2019). The ORF150 product contains a Really Interesting New Gene (RING) domain and is predicted to span 628 amino acids (Li et al., 2015; Aoki et al., 2007). This RING domain contains the HC (C3HC4) type of RING structure, which is involved in the ubiquitination pathway by acting as an E3 ubiquitin-protein ligase (He and Kwang 2008). Viruses that encode RING finger-like ubiquitin ligases (E3s) may evade host immune responses and also hijack the host's RING E3s to enhance their replication (Zhang et al., 2018). In aquatic viruses, RING family genes have been reported to be involved in virus latency, replication, and host protein degradation (Shekar and Venugopal 2019; Wang et al., 2021). As previously observed, the most significant difference affecting all viral haplotypes between P78 and P99 was around ORF150. For this reason, we focused on this interesting region to determine the passage number at which the deletion and the inversion first arose, by sequencing selected representative passages (P10, P20, P30, P40, P50, P70, P80 and P90). We found that deleted and inverted haplotypes appeared as early as P10 and that their proportions varied along the successive passages. The most striking feature was the rapid and total disappearance of the deletion between P78 and P99, raising many questions regarding the mechanisms that led to the clearance of this major haplotype. The rapid SV turnover of DNA viruses, including herpesviruses, likely involves recombination (Szpara and VanDoorslaer, 2021; Renner and Szpara, 2018; Cudini et al., 2019; Kolb et al., 2017; Tomer et al., 2019), which is often linked to replication and DNA repair, as well as to errors during viral genome replication (Kulkarni and Fortunato 2011; Xiaofei and Kowalik 2014). In the present case, the good conservation of the breakpoints around ORF150 may be a sign of homologous recombination. However, whether this recombination occurs within the same genomic entities or between different viruses remains an open question. It would be interesting to assess the involvement of each of these mechanisms in generating the observed structural diversity. The multiple mechanisms of DNA virus evolution beyond single-nucleotide substitutions likely confer on KHV a high level of evolutionary adaptability. Classically, the generation of live-attenuated vaccines is achieved by passaging the virus in cell culture under different conditions (in different host species or at lower or higher temperatures), in order to induce the accumulation of mutations that support viral adaptation to the specific conditions and provide viral attenuation (Minor, 2015; Hanley, 2011).
With the exception of the OPV polio vaccine viruses (Kew et al., 2005), the exact mechanisms by which these mutations lead to attenuated phenotypes are usually poorly characterized (Lauring et al., 2010). However, live-attenuated viruses can revert to virulent phenotypes either by reversions (as shown here between P78 and P99), by the introduction of compensatory mutations, or by recombination with viruses belonging to the same genus (Cann et al., 1984; Bull et al., 2018; Muslin et al., 2019). Additionally, the combination of multiple live-attenuated viruses may result in competition or facilitation between the individual vaccine viruses, resulting in undesirable increases in virulence or decreases in immunogenicity (Hanley 2011; Pereira-Gomez et al., 2021). Recently, genetic engineering has led to many novel approaches for generating live-attenuated virus vaccines that contain modifications to prevent reversion to virulence (Yeh et al., 2020) and to improve interference among multiple vaccine strains (Pereira-Gomez et al., 2021).

Conclusion

Our findings confirm that CyHV-3 can evolve rapidly during infectious cycles in cell culture, and that SV turnovers and defective genomes represent key drivers of its in vitro evolution.

Conflict of interest disclosure

The authors declare they have no conflict of interest relating to the content of this article.
Graphene based sulfonated polyvinyl alcohol hydrogel nanocomposite for flexible supercapacitors

Graphene based sulfonated polyvinyl alcohol (PVA) hydrogel was synthesized and its performance as a nanocomposite gel polymer electrolyte was investigated for application in quasi solid-state flexible supercapacitors. Hydrothermally reduced graphene (HRG) was synthesized through hydrothermal reduction of graphene oxide (GO). Sulfonated PVA hydrogel (SPVA) was synthesized with predetermined quantities of HRG to obtain nanocomposite gel polymer electrolytes coded as SPVA-HRG-x (x = content (wt.%) of HRG). The amorphous nature of SPVA-HRG-x was determined using the X-ray diffraction (XRD) technique. The electrochemical performance of SPVA-HRG-x was evaluated using techniques like cyclic voltammetry (CV), galvanostatic charge-discharge (GCD) and electrochemical impedance spectroscopy (EIS) studies of a lab-scale supercapacitor cell, fabricated using hydrothermally reduced carbon cloth (CCHy) current collectors coated with HRG (HRG-CCHy). In the SPVA-HRG-0.5 electrolyte, HRG-CCHy exhibited a specific capacitance of 200 F g⁻¹ at 1 A g⁻¹ and a specific energy of 6.1 Wh kg⁻¹ at a specific power of 1 kW kg⁻¹, and it retained 93% of its initial capacitance even after 5000 GCD cycles. This behaviour can be attributed to the increase in the amorphous nature of SPVA upon incorporation of 0.5 wt.% HRG, which in turn lowers the impedance of SPVA-HRG-0.5. This contributed to the remarkable supercapacitive behaviour of HRG-CCHy, demonstrating the potential of SPVA-HRG-0.5 as a gel polymer electrolyte (GPE) for application in quasi solid-state flexible supercapacitors.

Introduction

In recent years, there has been a sharp increase in the development of flexible supercapacitors (FSCs) as power sources for electronic devices, which have to be ultra-thin and flexible in order to serve their purpose [1][2][3][4]. FSCs with high specific energy, specific power and excellent cycle life are much desired to supply the rising market of flexible and wearable electronic devices in the near future [5,6]. To increase the performance of supercapacitors, the usage of various types of carbons and of mixed/binary metal oxide and sulfide-based electrode materials like nanochains [7], nanoflowers [8] and other nanostructures [9][10][11] has been widely reported. Carbon based materials are the most widely used electrode materials in supercapacitor applications, due to their high surface area and capability of storing charge in the form of an electric double layer [12,13]. Currently, reduced graphene oxide (RGO), a graphene-based material with high ionic conductivity and surface area, has emerged as a potential electrode material in FSCs and an alternative to activated carbon [14]. The main components of an FSC are flexible electrodes, a solid or quasi-solid-state electrolyte and a porous separator to prevent short-circuits [15]. The synergy between the electrolyte and the electrode material has a significant impact on the properties of a supercapacitor, like its charge-discharge capabilities, cyclic stability, energy storage in the form of charge, and power delivery [16]. The rate performance and specific power of a supercapacitor can be increased by increasing the ionic conductivity of the electrolyte [17]. Most FSC assemblies use gel polymer electrolytes (GPEs) to prevent the leakage and packaging issues encountered with liquid electrolytes [18,19]. GPEs contain a discontinuous phase of solvent entrapped inside a continuous phase of a three-dimensional polymer network [20].
Hydrogels of polyvinyl alcohol (PVA) contain hydroxyl groups which contribute to their hydrophilic nature by absorbing large amounts of water, in turn enhancing the conductivity of electrolyte ions [26] and establishing stable contact at the interface of electrode and electrolyte [21]. Inorganic fillers like nano SiO₂ [35], nano TiO₂ [36], Sb₂O₃ [37] and graphene oxide (GO) [38] have been incorporated for improving performance in PVA based GPEs. Several reports have mentioned the usage of GO as a nanofiller in GPEs in various electrochemical devices [39][40][41]. Yang et al. [42] reported that incorporation of GO into polyvinylidene difluoride (PVDF) based GPE enhances its ionic conductivity by forming 3D network structures in the polymer matrix, facilitating the transport of ions. The current paper deals with the incorporation of hydrothermally reduced graphene oxide (HRG) into SPVA hydrogels, performed for the first time to obtain nanocomposite GPEs and evaluate their electrochemical performance for application in quasi solid-state flexible supercapacitors. GO was synthesized using a modified Hummers' method and reduced hydrothermally to obtain HRG. Carbon cloth (CC) was modified using a hydrothermal method to obtain hydrothermally reduced carbon cloth (CCHy) and use it as a flexible current collector. Nanocomposite GPEs were prepared by sulfonating PVA hydrogel using H₂SO₄, followed by addition of calculated amounts of HRG ranging from 0.1 to 1.0 wt.% to obtain HRG incorporated SPVA GPEs, hereafter referred to as SPVA-HRG-x (x = content (wt.%) of HRG). The electrochemical performance of the developed SPVA-HRG-x was evaluated using an in-house fabricated supercapacitor single cell with electrodes of HRG coated CCHy (HRG-CCHy). Synthesis of HRG HRG was synthesized following the procedure reported in our previous work [43]. Firstly, a modified Hummers' method was followed for synthesizing GO. In 50 mL of conc. H₂SO₄, 1 g of NaNO₃ was added and stirred for a few minutes, followed by dispersing 1 g of graphite powder into it. The above dispersion was stirred at <5 °C in an ice bath for 4 h continuously, followed by slow addition of 6 g of KMnO₄. The above mixture was stirred uninterruptedly for 48 h at room temperature (RT). 92 mL of DI water was slowly added to the above mixture and stirred for two more hours. Later, 10 mL of 30 % H₂O₂ was added to the mixture, leading to a change in dispersion color from brown to yellow, indicating the formation of GO. The collected GO precipitate was washed with 1 M HCl and DI water and centrifuged. The resultant precipitate was finally washed using ethanol and vacuum dried at 70 °C for 12 h. Later, the sample was finely crushed to obtain GO powder. 200 mg of GO powder was dispersed in 200 mL of DI water through ultrasonication. The pH of the dispersion was maintained at 11 using NaOH pellets. The dispersion was then transferred into a Teflon-lined hydrothermal reactor (300 mL) and autoclaved for 14 h at 180 °C. The precipitate collected after the reaction was washed several times using DI water and dried under vacuum for 12 h at 60 °C. The dried sample was crushed to obtain a fine powder of HRG. Preparation of CCHy CCHy was prepared following the procedure reported in our previous work [44]. Commercially obtained carbon cloth (CC) (50 cm²) was oxidized via a chemical route.
An acidic mixture of 20 mL of H₂SO₄ and 10 mL of HNO₃ was prepared, into which a pristine CC was dropped and stirred at RT, followed by slow addition of 3 g of KMnO₄ and 100 mL of DI water, and stirred for 3 h. Later, 5-10 mL of 30 % H₂O₂ was added to the above mixture, resulting in a clear solution with the oxidized carbon cloth in it. The oxidized CC was washed with DI water, transferred into a Teflon-lined hydrothermal reactor (300 mL) filled with DI water and autoclaved for 14 h at 180 °C. Later, the reduced CC was vacuum dried at 70 °C for 6 h. The hydrothermally reduced CC (CCHy) thus obtained was used as a current collector in the following two-electrode cell studies. Preparation of SPVA To prepare SPVA, 270 μL of H₂SO₄ was added to 4 mL of DI water. Then, 0.5 g of PVA was added to this mixture and stirred at 80 °C until all the PVA dissolved, resulting in a transparent viscous liquid. Preparation of SPVA-HRG-x HRG based SPVA nanocomposites were prepared by adding predetermined quantities of HRG, ranging from 0.1 to 1.0 wt.%, to SPVA, separately. The HRG was dispersed in isopropyl alcohol (IPA) using ultrasonication, added to the cooled SPVA and stirred continuously at 80 °C for 30 min, resulting in dark coloured SPVA-HRG-x. The obtained SPVA-HRG-x were coded as SPVA-HRG-0.1, SPVA-HRG-0.2, SPVA-HRG-0.5 and SPVA-HRG-1.0 for SPVA incorporated with 0.1, 0.2, 0.5 and 1.0 wt.% of HRG, respectively. Figure 1 shows the optical images of the prepared SPVA and SPVA-HRG-x. Characterization studies The X-ray diffraction (XRD) technique (Rigaku Miniflex 600) was used to analyse all GPEs. An electrochemical workstation (PARSTAT PMC 2000A) was used to evaluate the electrochemical performance of the prepared SPVA-HRG-x. HRG-CCHy was prepared by coating HRG (1 mg cm⁻²) over CCHy. An ink of HRG was prepared by dispersing 1.8 mg of HRG in 100 μL of N-methyl-2-pyrrolidone (NMP) along with 2 μL of 10 wt.% PVDF/NMP solution by ultrasonication. The HRG ink was then deposited over flexible CCHy current collectors, followed by drying under vacuum for 15 min at 120 °C. Strands of HRG coated CCHy (HRG-CCHy) were placed on both sides of a GPE coated Whatman filter paper and packed tightly in between acrylic plates to fabricate a cell. Approximately 100 μL of GPE was utilized during fabrication of each cell. The prepared cell was then tested in a two-electrode configuration using cyclic voltammetry (CV), galvanostatic charge-discharge (GCD) and electrochemical impedance spectroscopy (EIS) techniques to evaluate the performance of SPVA-HRG-x. The specific capacitance (Cs), specific energy (Ed) and specific power (Pd) of HRG-CCHy in all GPEs were calculated from GCD data using equations (1) to (3) (sketched below), where I represents the constant discharge current, Δt represents the discharge time, ΔV represents the discharge potential window and m represents the mass of the active material on one electrode. Figure 2 shows X-ray diffraction patterns of pure PVA, SPVA and SPVA-HRG-x. The diffraction pattern of pure PVA shows a characteristic semi-crystalline peak at a 2θ value of around 19.6° [46]. In the case of SPVA, H₂SO₄ addition disturbs the semi-crystalline nature of pure PVA, thereby increasing its amorphous nature [47]. From the diffraction patterns of all SPVA-HRG-x, it can be inferred that with increase in HRG concentration the intensity of the peak around the 2θ value of 19.6° decreased, indicating an increase in the amorphous nature of the GPEs [48].
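The explicit forms of equations (1) to (3) did not survive extraction here. The sketch below encodes one commonly used convention for symmetric two-electrode cells; the factor of 2 in the single-electrode capacitance and the extra factor of 4 separating electrode- and cell-level energy are assumptions on our part, not statements from the paper. The illustrative inputs are chosen only to show that this convention lands near the reported 200 F g⁻¹ and ~6 Wh kg⁻¹ figures.

```python
# Hedged reconstruction of the GCD relations (not the paper's verbatim equations).
# Assumed convention: C_s = 2*I*dt/(m*dV) per electrode; E_d referenced to the
# total active mass of both electrodes, hence the additional factor of 4.
def gcd_metrics(i_a_per_g, dt_s, dv_volts):
    c_s = 2.0 * i_a_per_g * dt_s / dv_volts       # specific capacitance, F g^-1
    e_d = c_s * dv_volts**2 / (4.0 * 2.0 * 3.6)   # specific energy, Wh kg^-1
    p_d = e_d * 3600.0 / dt_s                     # specific power, W kg^-1
    return c_s, e_d, p_d

# Illustrative only: 1 A g^-1 discharge over a 0.93 V effective window (1 V
# window minus the reported ~0.07 V IR drop), dt chosen so C_s = 200 F g^-1.
c_s, e_d, p_d = gcd_metrics(1.0, 93.0, 0.93)
print(f"C_s = {c_s:.0f} F/g, E_d = {e_d:.2f} Wh/kg, P_d = {p_d:.0f} W/kg")
```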
The addition of HRG may contribute to the increase in the amorphous nature of the SPVA-HRG-x [42], thereby enhancing the rate of penetration and conduction of ions [49]. Electrochemical studies CV studies The electrochemical performance of HRG-CCHy was evaluated by executing CV studies for all five GPEs at various scan rates, ranging from 10 to 100 mV s⁻¹, within the potential window 0 to 1 V. The area under the near-rectangular CV curve is proportional to the double-layer capacitance of the electrode material [50]. The reduced CV loop area of HRG-CCHy in SPVA-HRG-1.0, compared to SPVA-HRG-0.5, indicates a decline in the double-layer capacitance of HRG-CCHy in SPVA-HRG-1.0 due to the excess concentration of HRG in SPVA-HRG-1.0, which could resist the flow of ions due to the formation of agglomerates by restacking of HRG layers [45,48]. The CV curves of HRG-CCHy in SPVA-HRG-0.5 electrolyte at multiple scan rates ranging from 10 to 100 mV s⁻¹ (Figure 3b) indicate better rate capability and reversibility [51]. GCD studies The charge-discharge behaviour of HRG-CCHy in all five GPEs was evaluated by performing GCD studies at various constant current densities ranging from 0.5 to 10 A g⁻¹, within the potential window 0 to 1 V. Figure 4a compares GCD curves of HRG-CCHy in all GPEs at 1 A g⁻¹. It is obvious that HRG-CCHy shows better charging and discharging ability in SPVA-HRG-0.5 electrolyte, compared to the rest of the GPEs, with an impressive specific capacitance of 200 F g⁻¹ at 1 A g⁻¹ and the lowest IR drop of around 0.07 V. Figure 4b shows GCD curves of HRG-CCHy in SPVA-HRG-0.5 electrolyte at multiple constant current densities ranging from 0.5 to 10 A g⁻¹. Figure 4c presents the IR drop plot of HRG-CCHy in all GPEs at various constant current densities ranging from 0.5 to 10 A g⁻¹, indicating the lower equivalent series resistance (ESR) and superior conductivity of HRG-CCHy when SPVA-HRG-0.5 was used as the electrolyte [52]. Figure 4d presents the Ragone plot of HRG-CCHy in all GPEs, indicating the superior specific energy to power ratio of HRG-CCHy in SPVA-HRG-0.5, namely 6.1 Wh kg⁻¹ of specific energy at a specific power of 1 kW kg⁻¹. In SPVA-HRG-1.0 electrolyte, HRG-CCHy showed a sharp increase in IR drop and a decrease in specific energy at current densities ≥7 A g⁻¹, indicating an abrupt increase in the ESR of the cell. This distinct behaviour of SPVA-HRG-1.0 electrolyte at higher current densities could result from the excess of HRG in SPVA-HRG-1.0, which could lead to restacking of HRG layers forming agglomerates, in turn resisting the flow of ions [39,42]. Cyclic stability Figure 5a presents the cyclic stability of HRG-CCHy in all GPEs for 5000 GCD cycles at 2 A g⁻¹. Even after 5000 GCD cycles, HRG-CCHy retained 82, 86, 89, 93 and 87 % of its initial specific capacitance in SPVA, SPVA-HRG-0.1, SPVA-HRG-0.2, SPVA-HRG-0.5 and SPVA-HRG-1.0, respectively. Figure 5b shows GCD curves of HRG-CCHy in SPVA-HRG-0.5 before and after cycling for 5000 cycles, which clearly establishes the notable charge-discharge stability of HRG-CCHy in SPVA-HRG-0.5. Table 1 presents a comparison of the supercapacitive performances of some carbon-based electrode materials in sulfonated PVA based GPEs. EIS studies The EIS studies were performed at an AC amplitude of 5 mV within the frequency range of 100 kHz to 0.1 Hz. Figure 6a presents Nyquist plots of HRG-CCHy in all five GPEs. For all SPVA-HRG-x, the imaginary impedance part in the low frequency region was nearly perpendicular to the real axis, indicating near-ideal capacitive behaviour of the cell.
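As a qualitative illustration of the Nyquist features discussed here and in the following passage, the sketch below evaluates a generic simplified Randles-type circuit; the component values are assumptions chosen for shape only and are not fitted to the measured SPVA-HRG-x data.

```python
import numpy as np

# Z(w) = Rs + Rct/(1 + j*w*Rct*Cdl) + 1/(j*w*Clf): series (electrolyte)
# resistance, a charge-transfer/double-layer semicircle, and a low-frequency
# capacitive tail.
rs, rct, cdl, clf = 1.0, 2.0, 1e-3, 0.5        # ohm, ohm, F, F (illustrative)
freq = np.logspace(-1, 5, 400)                  # 0.1 Hz to 100 kHz, as in the EIS sweep
w = 2.0 * np.pi * freq
z = rs + rct / (1.0 + 1j * w * rct * cdl) + 1.0 / (1j * w * clf)
# High-frequency real-axis intercept ~ Rs; semicircle diameter ~ Rct; a steep
# low-frequency rise in -Im(Z) mimics near-ideal capacitive behaviour.
print(f"Re(Z) at 100 kHz ~ {z.real[-1]:.2f} ohm; -Im(Z) at 0.1 Hz ~ {-z.imag[0]:.1f} ohm")
```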
From the Nyquist plots, it is evident that HRG-CCHy has lower impedance in SPVA-HRG-0.5 compared to the other GPEs [56]. The inset of Figure 6a shows a magnified view of the Nyquist plots, where the SPVA-HRG-0.5 electrolyte exhibited lower electrolyte resistance compared to the other GPEs [57]. Figure 6b presents the Nyquist plot of HRG-CCHy in SPVA-HRG-0.5 before and after 5000 GCD cycles. The size of the semicircle in the high frequency region of the Nyquist plot indicates the charge transfer resistance [58,59]. The inset of Figure 6b shows a magnified view of the Nyquist plots, from which it can be inferred that the diameter of the semicircle of SPVA-HRG-0.5 increased after 5000 GCD cycles. This confirms the increase of charge transfer resistance after cycling [57,58]. The near-vertical line observed in the low frequency region of the Nyquist plot for SPVA-HRG-0.5 indicates superior capacitive behaviour compared to the other GPEs [60]. Conclusion SPVA-HRG-x were prepared by introducing HRG into SPVA and characterized using the XRD technique. A supercapacitor assembled using HRG-CCHy and the GPEs was characterized by CV, GCD and EIS. As confirmed by the XRD measurements, incorporation of HRG may have increased the amorphous nature of SPVA-HRG-x, thereby improving ionic conductivity and lowering impedance. HRG-CCHy in SPVA-HRG-0.5 exhibited a specific capacitance of 200 F g⁻¹ at 1 A g⁻¹ with the lowest IR drop of around 0.07 V and an impressive specific energy of 6.1 Wh kg⁻¹ at a specific power of 1 kW kg⁻¹. Even after 5000 GCD cycles, HRG-CCHy retained 93 % of its initial capacitance in SPVA-HRG-0.5. The notable performance of HRG-CCHy in SPVA-HRG-0.5 may be attributed to the relatively lower impedance of SPVA-HRG-0.5. From the performed electrochemical studies, it can be inferred that SPVA-HRG-0.5 can be considered a potential GPE for application in quasi solid-state flexible supercapacitors.
3,526.6
2021-08-03T00:00:00.000
[ "Materials Science", "Engineering" ]
Applying the engineering design process to teach the physics course for engineering students using the flipped classroom combined with an instructional design model Purpose – This study aims to examine the perceptions of students about learning science and physics using the engineering design process (EDP). Design/methodology/approach – The study employed a mixed-methods research design: the quantitative session features a pre–post-test control group study. In the qualitative aspect, the study conducted semistructured interviews for data collection. In the experimental group, the flipped classroom (FC) model and an instructional design are combined to design, develop and implement a physics course using the steps of the EDP, while the conventional method was applied to the control group. The respondents are students of the Department of Mechanical Engineering. Introduction In any engineering course, engineering sciences are based on mathematics and basic sciences. In general, engineering courses require physics knowledge as the basis of science. The majority of engineering courses begin with a foundation unit in physics. In these courses, students learn the fundamental science that forms the foundations of specific engineering disciplines and the application of the acquired skills and physics knowledge to real-world technical problems. However, many students claim that physical laws are less related to their actual experiences in the world of engineering. As a result, they become disengaged from the learning process. Thus, achieving the intended learning objectives becomes difficult in terms of motivation and academic achievement due to the lack of interest in physics courses among students. Therefore, facilitating engineering students in finding meaningful learning activities in physics courses is crucial. To enhance their physics learning activities, the study proposes the use of the engineering design process (EDP) (National Research Council [NRC], 2012). The EDP is a series of steps that engineering students can follow, from defining a problem to testing and validating a proposed solution. Among these steps, one helps students identify potential solutions on the basis of physical laws, concepts and principles. Thus, using the EDP to teach physics lessons to engineering students is possible. Numerous studies have been conducted on methods for applying the EDP to develop student competencies using problem-based learning or project methods in the K-12 curriculum. However, less research has been conducted on using the EDP to teach physics to first-year engineering students in college. Hence, finding teaching and learning approaches for engineering students that render physics content relevant and useful to them is the major concern of this study.
The EDP requires students to learn by action, and they should adopt deep learning models such as scientific inquiry and engineering design. This approach also requires students to apply basic science and mathematics to solve an engineering problem related to a real-world problem (NRC, 2012). The EDP consists of several steps; from the perspective of the study, class time is insufficient for accomplishing these steps. Thus, to implement the EDP effectively, teaching and learning approaches need to be reorganized. By conducting a literature review on science education, we find that the flipped classroom (FC) model requires students to acquire knowledge prior to class by watching instructional videos and completing learning tasks. We propose that applying the EDP using the FC model combined with instructional design to teach physics to engineering students may effectively enhance student learning in physics courses. This study aims to design, develop and implement the EDP based on the FC to teach physics courses to engineering students. Literature review Problem-solving is considered to be a central activity of engineering practice (McNeill et al., 2016). The research on science education indicates that learning is promoted when learners are involved in solving real-world problems. Moreover, learning is activated when learners independently construct new knowledge based on prior knowledge and skills. Importantly, new knowledge is integrated into the learner's world (Merrill, 2002). Thus, to implement the EDP effectively, the course needs to be designed carefully. The EDP is composed of a series of steps, including defining a problem that addresses a need, identifying criteria and constraints, constructing science principles, investigating potential solutions, designing and building a proposed solution, testing the proposed solution and collecting data and evaluating the proposed solution compared with the criteria and constraints (NRC, 2012). Thus, the goal of the EDP is to solve real-world problems with engineer-designed activities (Putra et al., 2021). In addition, the EDP motivates students to apply critical thinking when designing a product (Yu et al., 2020). The EDP facilitates communication and collaboration among students (Hoeg and Bencze, 2017). Using the EDP also provides students with opportunities for communicating with peers to construct a better solution based on concepts in physics (Putra et al., 2021). Once students are involved in the EDP, they endeavor to exchange ideas at the stage of planning a solution, trying their design, testing and deciding on a design (Giri and Paily, 2020).
In this study, we use the analyze, design, develop, implement and evaluate (ADDIE) model to approach the EDP through the FC model. The ADDIE model consists of five stages: analysis, design, development, implementation and evaluation (Branch, 2009). In the analysis stage, the teacher should analyze contextual learning, the learning needs of students, engineering problem solving, the learning program, learning outcomes and course content. While analyzing learning needs, the teacher should include the profiles of students. In the design stage, the teacher clearly states the learning outcomes to students, plans assessment strategies and elaborates on learning activities. For the development stage, the teacher elaborates on the learning materials, including the learning scenario and guidelines for the cycle of the EDP. Regarding the implementation of the course, dynamic interactive learning is promoted, such as teacher-student, student-content and peer interactions. These types of interactions directly influence learner engagement and achievement. During the EDP, collaboration is key to identifying solutions to complex learning problems (Sulaeman et al., 2021). Lastly, in the evaluation stage, the teacher compares the academic achievement of students against the goals and intended learning outcomes. At this stage, course assessment is conducted to evaluate the students' perception of the effectiveness of the course in the learning process. To help students analyze a problem situation, identify and define the problem, find solutions and select and implement the best solution to the problem, students need more time to work in collaboration in a community of learning. In this regard, the FC approach can be used. The FC model is composed of three stages (Kong, 2015). The first is pre-class activities, in which students are required to exert effort to study declarative knowledge by watching an instructional video and completing learning tasks on their own before coming to class. During class, the teacher reserves more time for students to practice their problem-solving skills by connecting the learned knowledge to determine a potential solution and to design, build and test a product. The last stage pertains to post-class activities, in which students are tasked to link the activities during class time to solve a complex problem independently. This study designs the FC model by combining the EDP (NRC, 2012) and the ADDIE model (Branch, 2009). Teaching physics using the EDP is established based on NRC (2012; see Table 1). At the analysis stage, we analyze the learning needs of the students to determine the desired learning outcomes, assessment plans and learning activities (Table 1). In relation to the design stage, assessment strategies and learning activities are planned according to the FC model. The learning material is designed for students to learn outside of class based on the oriented questions and the defined problem. For the development stage, learning materials and assessment plans are fitted together into a learning module. Moreover, learning activities are elaborated to promote the FC format. With regard to the implementation stage, teacher-student, student-content and student-student interactions were effectuated in the steps of identifying, building and testing the proposed solution. Evaluation occurred during the three phases of the class, namely, pre-, in- and post-class.
Research questions RQ1. Is there a significant difference between the experimental and control groups regarding their perception of physics learning outcomes using the EDP combined with the FC model and the stages of the ADDIE model? RQ2. How do students perceive their learning outcomes in physics using the EDP? Research design The study uses a mixed-methods design. First, it applied a quasi-experimental design (pre–post-test control group). Both groups are exposed to the same content except for the intervention method. In the control group, students were taught using the conventional method, while the EDP was applied to the experimental group. The independent variables were the two types of instructional approaches, namely, the EDP and the conventional method. The dependent variable was the perception of the students of their ability to apply the learned physics knowledge to solve the engineering problem. In the experimental research design, each group received pre- and post-tests to measure the dependent variable before and after the intervention. Second, to elucidate the perception of the students about learning physics using the EDP, the study conducted semistructured interviews with the experimental group, which was classified into eight subgroups. One student was randomly selected from each subgroup. The selected students are coded S1-S8. Instrument Based on NRC (2012), we developed a questionnaire on the perceptions of students about physics learning activities using the EDP as a tool for assessing the problem-solving steps of the respondents. Their perceptions of learning activities were measured using a five-point Likert scale ranging from 1 = strongly disagree to 5 = strongly agree. Table 2 presents the questionnaire elaborated under the steps of the EDP. Validity of the instrument. This study used content validity to validate the questionnaire items for the pre- and post-intervention design (Chiwaridzo et al., 2017). Two physics teachers and two engineering teachers reviewed the items, which were modified according to their feedback. Reliability of the instrument. The study used Cronbach's alpha test to determine the reliability of the instrument by conducting a pilot study on 50 first-year engineering students who were not part of the sample. After data analysis, we found that the Cronbach's alpha coefficients of the variables IDP, BK, AP and IK are 0.70, 0.72, 0.75 and 0.81, respectively. Therefore, we conclude that the instrument is suitable for the study (a computational sketch of this reliability check is given below). Designing and developing the experimental procedure Analyzing the context of learning. The author examined the learning outcomes, student profiles, learning needs of students, contextual learning, engineering problem-solving, content knowledge and assessment strategy. The author also states a product related to science and engineering practice (Table 3).
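As flagged above, the following is a minimal sketch of the Cronbach's alpha reliability check. The simulated 50-respondent pilot responses are hypothetical stand-ins, since the study's raw data are not available here; only the formula itself is standard.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)      # shape: (n_respondents, n_items)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(0)
# Hypothetical pilot: 50 students, 3 Likert items for one construct (e.g. IDP1-IDP3),
# generated around a shared latent tendency so the items correlate.
latent = rng.integers(2, 6, size=(50, 1))
responses = np.clip(latent + rng.integers(-1, 2, size=(50, 3)), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```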
Designing and developing the teaching procedure. First, the instructional videos are designed, developed and elaborated according to the principles of flipped learning. Specifically, the learning scenario is explained in detail, including contextual and objective learning, the learning outcome and the problem that the students must solve. The diagnostic assessment is elaborated to evaluate the prior knowledge and skills of the students. Second, they are given learning tasks with the support of the basic theoretical knowledge related to engineering problem-solving. The instructional videos and learning scenarios are uploaded to a learning management system (Table 4). Implementing the lesson plan. Students in the experimental and control groups were divided into eight subgroups composed of 10 students each. Flipped learning was used in the experimental group, while the traditional method was used in the control group. Table 5 lists the sequence of the lessons. Data collection Quantitative data collection. Prior to the intervention study, data were collected using the questionnaire to examine the learning activities of the students in the physics course and to identify the impact of the use of the EDP in teaching physics to engineering students. Qualitative data collection. As described in the Research design section, the experimental group was divided into eight subgroups and one student was randomly selected from each subgroup (S1-S8). After the intervention study, the study conducted semistructured interviews with these students to determine their perception of the use of the EDP in learning physics using the FC model. This study employed the concept of saturation in qualitative research to gather data from interviews according to the study objective (Saunders et al., 2018). Data processing Quantitative data processing. Data were collected before and after the application of the EDP to teaching physics with the combined FC and instructional design models. Data were analyzed using SPSS 22. The study used an independent-sample Mann-Whitney U test for data analysis due to the non-normal distribution of the data. To identify whether or not significant differences exist between the experimental and control groups in terms of their perception of learning physics using the EDP, we test a null hypothesis (H0) and an alternative hypothesis (H1) as follows: H0. No significant differences exist in the two teaching approaches between the experimental and control groups. H1. Significant differences exist in the two teaching approaches between the experimental and control groups. Table 4 summarizes the aspects of design and development against the EDP steps and contents: (i) Designing problem-solving scenarios: children in urban areas need an educational toolkit to discover the sciences, especially the field of engineering (e.g. an electric lift model); the participants play the role of engineering students and design an electric model of an elevator for these children. (ii) Building the criteria and constraints of the product and determining the problem using the criteria and constraints: designing an electric elevator that carries objects weighing 5 kg from the ground floor to a height of 1 m, using wood, metal, plastic and a speed-reducer motor, with dimensions of 100 × 25 × 20 cm and a power supply of 12 V. (iii) Designing questions for determining solutions to the problem: providing a set of questions for seeking solutions, such as sources of information related to motion, force, work of a force, power, energy and the law of conservation of energy; in addition, the teacher presents students with the engineering problem-solving process. (iv) Designing the learning material that helps students select an appropriate solution: designing a guideline on engineering design methodology for students to select an appropriate solution to the problem that meets the criteria and constraints. (v) Formulating questions that help students build the protocol for the solution. Qualitative data processing. When the researcher intends to understand the viewpoint of interviewees about new topics across a dataset, such as audio data, thematic analysis is used (Braun and Clarke, 2006). Two approaches can be used for thematic analysis, namely, inductive and deductive. This study employed the deductive approach because it aims to understand the opinions of the students based on the EDP during the learning process, with some of the themes we expect to find in our collected data. Kiger and Varpio (2020) mentioned that thematic analysis is composed of six steps. First, the author transcribes audio data and immerses in them by reading and checking the transcriptions several times. In step 2, the author performs a coding process using the inductive or deductive approach. The third step involves reviewing codes that are categorized into themes. In step 4, the author connects the themes to consider their coherence. In step 5, the author defines and names these themes. The final step involves writing up the final analysis and description of findings. Quantitative results session The data collected in the pre-test were statistically analyzed. Table 6 presents the results. The study recruited 160 students (experimental group: 80; control group: 80). To determine their perception of learning physics using the EDP and to meet the assumption that both groups were equivalent, they were requested to complete the questionnaire on teaching physics using the EDP before the intervention study. Table 6 (mean scores of the pretest of both groups) indicates that the experimental and control groups obtained mean ranks between 72.50 and 85.03 and between 68.50 and 83.93, respectively; the mean rank of the experimental group was slightly higher than that of the control group. However, to determine if a statistically significant difference exists between the two groups, we need to examine the statistical test. All p-values are >0.05 at the 95% confidence interval (95% CI) with the Mann-Whitney U and Z values (Table 6); thus, H0 is supported. In other words, prior to the intervention study, the experimental and control groups displayed the same achievement level in physics. After teaching both groups, they were asked to complete the same questionnaire again. Table 7 depicts that the experimental and control groups obtained mean ranks between 87.33 and 107.42 and between 53.58 and 73.68, respectively. In summary, the mean rank of the experimental group was higher than that of the control group (a sketch of this test procedure is given below).
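The sketch below shows how the Mann-Whitney U comparison described above can be run in practice. The Likert scores are synthetic placeholders, since the study's data are not reproduced here, so the printed U and p values are purely illustrative.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical post-test Likert scores for one questionnaire item, n = 80 per group
experimental = rng.integers(3, 6, size=80)   # skewed toward agreement
control = rng.integers(1, 5, size=80)

u_stat, p_value = mannwhitneyu(experimental, control, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
# p < 0.05 would reject H0 (no difference between the two teaching approaches)
```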
To verify whether or not a statistically significant difference exists between the two groups, the study examines the results using the Mann-Whitney U test. Table 7 presents the U statistics and the corresponding p-values, evaluated at the 0.05 significance level. First, we find that the p-values for the identification and definition of the problem (IDP1, IDP2 and IDP3) are <0.05, which supports H1. Second, the p-values for building knowledge through contextual learning (BK1, BK2 and BK3) are <0.05, which confirms that contextual learning plays an important role in connecting physics concepts to the field of engineering. Third, the p-values for applying physics and mechanical engineering to problem solving (AP1, AP2 and AP3) are <0.05, which suggests that the EDP is effective for the practice of physics and engineering. Finally, the p-values for the integration of knowledge (IK1, IK2 and IK3) are <0.05, which confirms H1 that the EDP facilitates the integration of physics concepts into real-life situations and into the field of engineering. We reject the null hypothesis and conclude that a significant difference exists between the two teaching approaches of the experimental and control groups because all p-values are <0.05. In other words, the problem-solving skills of the students in the experimental group (EDP) were higher than those of the control group (conventional method). In summary, the use of the EDP in science teaching and learning improved the problem-solving skills of the experimental group, especially their ability to solve physical and engineering problems on the topic of energy. The discussion section elucidates the effectiveness of learning physical science using the EDP. Qualitative results session RQ2. What is the perception of the students about using the EDP to teach physics to engineering students? Table 8 presents the coding of the students' opinions obtained from the interviews according to the results of thematic analysis. Data were analyzed and presented under two categories.
The first was classified as the positive effects of the use of the EDP to teach physical science to engineering students, and the second category contained the negative effects of the EDP. The qualitative data revealed that learning physical science through the EDP exposed a few positive and negative characteristics. Regarding the contextual learning themes, six students mentioned that the engineering problem made the learning experience more relevant and meaningful and provided them with the opportunity to connect physics concepts to engineering problem-solving. They also expressed that the knowledge and skills they gained through the EDP are relevant to their intended profession. However, two students stated that they spent a lot of time discovering the problem or identifying the need for change or improvement to an existing solution. In terms of finding and selecting a technical solution, seven students commented that learning physics concepts by formulating solutions to problems is an interesting method for learning physics; it motivated them and engaged them in the learning process. Nevertheless, one student mentioned that building new knowledge through the use of the EDP is difficult, because the student was not used to learning in a manner that constructs physics concepts to solve technical problems. Moreover, five students stated that integrating physics concepts into mechanical engineering helped them recognize the significance of physical science in relation to mechanical engineering, while three students pointed out that they needed more time to analyze and identify the problem to be solved. In general, students perceived that learning physical science through the EDP is meaningful for mechanical engineering. Using the EDP to teach physical science is one method for increasing intrinsic learning motivation for engineering students, which leads to academic success. Discussion Teaching physics using the EDP combined with the FC model was likely to promote the effectiveness of learning in the physics course for engineering college students. The EDP is a crucial strategy for engineering students in exploring and learning new knowledge to identify solutions to engineering problems. Moreover, using the EDP to teach first-year physics also increased student motivation and engagement by integrating physics concepts into the context of engineering design. In our opinion, learning physics concepts using the EDP is an effective approach for connecting physics to engineering. To facilitate the effectiveness of the exploration and construction of physics concepts, defining the problem and the need to create a product based on the criteria and constraints of the product is a means for instilling the desire to learn in students. Moreover, based on the acquired physics concepts and engineering design, students investigated and selected the best solution, improved and built the proposed solution, and tested the operation of the product on the basis of its criteria and constraints. Thus, learning the physical sciences using the EDP enhances the ability of engineering students to solve real-world problems. Furthermore, using the EDP to learn the physical sciences is useful for studying physics in the context of mechanical engineering, because it helps them integrate physics concepts into the field of engineering.
In addition, the EDP requires students to become proactive in the learning process, such that they no longer need to absorb information from only the teacher. In other words, using the EDP improved the academic performance of students. Indeed, after the intervention study, the experimental group obtained better results than did the control group (Table 7). This study elucidated the use of the EDP combined with the FC model, which was designed under the stages of the ADDIE model. Before the class, the experimental group was given contextual learning and engineering problem solving; they then identified and defined the problem that addresses the learning needs and determined the criteria as well as constraints of a successful solution. When the criteria for success and constraints are examined in advance, the practice session can be devoted to exploring and learning new knowledge to determine the potential solutions to the problem. From the perspective of the study, when teaching and learning physics in the context of the EDP, identifying and defining the problem and establishing the criteria for the technical solution or product are important aspects, such that students develop the concepts or principles of physics. These criteria help students engage in crosscutting concepts and disciplinary core ideas, as defined by the NRC (2012). During class, the teacher spends less time teaching the concepts to the students; therefore, the majority of the class time can be spent on finding solutions to the problem based on the concepts, criteria and constraints. Moreover, the prototype can be built and improved, the proposed solution can be implemented and tested, and the product of the project can be communicated. In the post-class phase, the students reviewed and improved the learning task during class and prepared learning tasks for the next lesson. Using the EDP to teach physics to engineering students is an effective strategy for motivating and inspiring students to practice science and engineering. Science teaching and learning is necessary for acquiring the fundamental skills in engineering for the future careers of students. Linking the learned knowledge using the traditional method for teaching physics courses is difficult, because scientific teaching and learning differ from contextual learning. In contrast, using the EDP to teach physics helps students use science to solve engineering problems. The finding is consistent with that of Brand (2020), who emphasized the importance of science and engineering practice. The author mentioned that using the EDP to learn science enables students to build essential skills. This teaching method provides students with an opportunity for developing an understanding of engineering as a discipline and as a potential career path. Moreover, the finding aligns with that of the NRC (2012), that is, the objective of using the EDP is to reinforce science and engineering practice. Moreover, according to the NRC, the EDP is limited to K-12; thus, the current study implemented the EDP on first-year engineering students. The study provides a significant contribution toward the improvement of the teaching of physical science to engineering students using the EDP.
The qualitative data revealed that the majority of the students stated that using the EDP encouraged them to learn physical science, but a few students encountered difficulties when learning physics concepts through the EDP. The students who commented on the negative aspects of the EDP stated that they spent a considerable amount of time searching for information and finding solutions to the problem. Thus, they lacked sufficient time for establishing the relationship between physics concepts and mechanical engineering. To eliminate this problem, creating digital learning materials and then uploading them to a learning management system is necessary to help them reduce the time spent searching for information related to solutions. Conclusion This study implemented the EDP in teaching physics to first-year engineering students in the Department of Mechanical Engineering using the combined FC and instructional design models. The results revealed that a difference exists between the experimental and control groups in the perception of the students in terms of integrating the EDP into learning physics. The experimental group, who underwent the EDP, obtained better results than did the control group, which used the conventional method. The results demonstrated that the EDP encouraged the students to explore and learn new content knowledge by selecting the appropriate solution to the problem. The EDP also helped them integrate new knowledge and engineering skills into mechanical engineering. This research also introduced a new perspective on physics teaching and learning using the EDP for engineering college students. Research implications Teaching physics using the EDP to engineering college students exerted a positive effect on the engineering problem-solving skills of the students. This teaching and learning strategy improved academic performance and helped them connect physics concepts to engineering. The findings are important for physics teaching and learning using the EDP in the context of engineering education. Thus, educators can integrate physics teaching and learning into the EDP to motivate and engage students in learning. However, the study was limited to a single product on the topic of energy. Therefore, future studies could use more learning products and examine the application of the results across topics. Second, assessment strategies, tools and technical assessments of the EDP need to be implemented. Finally, this study is limited by its sample size, such that the findings can be considered generalizable only if the results are obtained from a larger sample size.
Table 5. Sequence of the lessons (continued). Week 1 — Pre-class: the teacher provides students with the contextual and learning situations; students are tasked to identify the problem to be solved; the teacher explains the learning objective and introduces the basic theory related to motion and force. In-class: students clarify the problem through collaboration; students are tasked to determine the criteria and constraints for the solution to the problem based on previous solutions. Post-class: students do their homework and collect information based on the learning activities in class. Week 2 — Pre-class: students read the learning material; students explore physics knowledge such as work of a force, power, energy, types of energy and the law of conservation of energy; students watch instructional videos related to the engineering problem-solving method. In-class: the teacher presents the concepts of work and power; the teacher engages students in the revision of previous knowledge by correcting errors and misconceptions and by adding new information; students work in groups to select the appropriate solution to the problem based on the criteria and constraints. Post-class: students do their homework and improve the selected solution. Week 3 — Pre-class: students read the learning material related to the application of the energy conservation law; students are asked to build a prototype. In-class: the teacher presents the law of conservation of energy; students discuss and validate the prototype for the solution to the problem by planning and assigning tasks to each group member; students test and improve the proposed solution; students discuss and detail a list of the materials needed to implement the prototype for the proposed solution. Post-class: students do their homework and buy materials to develop the product. [Displaced table captions: Table 3. Analysis step of the EDP; Table 6. Mean score of the pretest of both groups; Table 7. Mean score of the posttest of both groups.]
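A short worked example of the physics behind the elevator design task in the lesson sequence above: the required work follows directly from the conservation-of-energy topic, while the lift time and drivetrain efficiency below are assumed values, since the task specification (5 kg load, 1 m height, 12 V supply) does not fix them.

```python
# Worked numbers for the elevator task: lift m = 5 kg through h = 1 m at 12 V.
m, g, h = 5.0, 9.81, 1.0
work = m * g * h                    # minimum work against gravity: ~49 J
t_lift = 10.0                       # assumed lift time in seconds (not specified)
p_mech = work / t_lift              # ~4.9 W average mechanical power
v_supply = 12.0                     # V, from the stated design constraint
eta = 0.5                           # assumed motor + speed-reducer efficiency
i_draw = p_mech / (eta * v_supply)  # ~0.8 A estimated supply current
print(f"W = {work:.1f} J, P = {p_mech:.2f} W, I ~ {i_draw:.2f} A")
```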
6,419
2024-02-08T00:00:00.000
[ "Engineering", "Physics", "Education" ]
Feasibility and Performance of the Staged Z-Pinch: A One-dimensional Study with FLASH and MACH2 Z-pinch platforms constitute a promising pathway to fusion energy research. Here, we present a one-dimensional numerical study of the staged Z-pinch (SZP) concept using the FLASH and MACH2 codes. We discuss the verification of the codes using two analytical benchmarks that include Z-pinch-relevant physics, building confidence in the codes' ability to model such experiments. Then, FLASH is used to simulate two different SZP configurations: a xenon gas-puff liner (SZP1*) and a silver solid liner (SZP2). The SZP2 results are compared against previously published MACH2 results, and a new code-to-code comparison on SZP1* is presented. Using an ideal equation of state and analytical transport coefficients, FLASH yields a fuel convergence ratio (CR) of approximately 39 and a mass-averaged fuel ion temperature slightly below 1 keV for the SZP2 scheme, significantly lower than the full-physics MACH2 prediction. For the new SZP1* configuration, full-physics FLASH simulations furnish large and inherently unstable CRs (>300), but achieve fuel ion temperatures of many keV. While MACH2 also predicts high temperatures, the fuel stagnates at a smaller CR. The integrated code-to-code comparison reveals how magnetic insulation, heat conduction, and radiation transport affect platform performance and the feasibility of the SZP concept. I. INTRODUCTION The Z-pinch concept is fundamentally a cylindrical plasma implosion onto the symmetry axis by a J × B force provided by a current pulse. There are many variations on the target plasma, such as foils, wire arrays, jets, gas-puffs, pre-filled cylinders, or combinations thereof. 1 Furthermore, additional materials can be used as liners to assist the implosion, with yet more variations on how the liner is created and which material is used. When such a system is driven by modern pulsed-power drivers, the current pinching the target can reach many MA, leading to plasmas that achieve keV temperatures at near-solid densities. These plasmas are of interest to the fusion community and are useful scientific platforms for atomic physics, radiation transport, and laboratory astrophysics studies. 1,2 The Z Machine at Sandia National Laboratories in Albuquerque (SNL) is the most powerful pulsed-power device in the world, providing up to 30 MA of peak current to a Z-pinch target. 3 In recent years, the Magnetized Liner Inertial Fusion (MagLIF) concept has been a focus of research and development at SNL. MagLIF is a specific type of Z-pinch that utilizes an externally applied axial magnetic field to reduce thermal conduction losses and an on-axis laser to preheat the fuel (typically close to 100 eV), which reduces the implosion velocity required to reach ignition temperatures. 4,5 The axial magnetic field is initially 10-20 T but is compressed to much larger values, which may help confine alpha particles when deuterium-tritium fuel is used. MagLIF typically uses an aluminum or beryllium liner to compress a deuterium target and requires sufficient liner thickness to avoid significant degradation due to the Rayleigh-Taylor instability. The staged Z-pinch (SZP) is an alternative fusion concept in which energy is transferred to the target plasma in stages. The SZP name was first used for a configuration with an on-axis cryogenic deuterium fiber (i.e., target) compressed by an argon or krypton liner. 6
A current pre-pulse through the fiber would create the target plasma and pre-magnetize the liner, and a subsequent main Z-pinch current pulse would implode the liner. A theorized benefit of the SZP is the control and mitigation of the magneto-Rayleigh-Taylor (MRT) instability at the fuel/liner interface; 7 however, this point is beyond the scope of this work and will be addressed in future publications. Current SZP configurations typically employ a gas fill for the target load and high atomic number liners (gas-puff liners 8,9 or solid liners 10,11). The working hypothesis is that a high atomic number liner will radiate more efficiently, and the resulting colder liner will allow more magnetic diffusion towards the fuel/liner interface. This would in turn result in a stronger magnetic pressure on the target plasma and potentially reduce thermal conduction losses. It is well known that fusion output is severely inhibited when high atomic number impurities are mixed into the fuel plasma, because this increases radiative losses (i.e., reduces fuel temperatures). Therefore, the high atomic number liners used by the SZP concept will only perform well if the fuel/liner interface remains relatively stable during the implosion. The magnetic, thermal, and radiation transport properties of the system become crucially important, as they can all affect the time scales of the implosion, the fuel heating, and the stability of the fuel/liner interface. We focus on the transport properties and their effects in the simulations presented in this work, but, as previously mentioned, we do not include a stability analysis, as these simulations are one-dimensional. In this paper, we have modeled two different SZP configurations: a new xenon gas-puff liner (SZP1*) with different initial conditions as compared to the original xenon gas-puff liner (SZP1 9), and the original silver solid liner setup (SZP2 10). Fig. 1 shows schematics of the SZP2 and SZP1* configurations with approximate dimensions. The SZP concept has drawn criticism in recent publications. 15,16 Most of the criticism has been aimed at the interpretation of key shock physics and the calculations behind the fusion energy output. The FLASH code can now contribute to this debate courtesy of our on-going collaboration between the Flash Center for Computational Science at the University of Rochester and MIFTI, made possible by funding from the ARPA-E BETHE program. For the present work, we focus on specific physics and code-to-code comparisons, and we exclude calculations and discussion of fusion yield and energy production, in part because FLASH does not have this capability. This paper is written with several goals in mind. The structure of the paper is as follows: in Section II we describe the two codes used in this work, FLASH and MACH2. Then, in Section III, we present results from two analytical test problems with SZP-relevant physics: a radiative shock problem and the Noh cylindrical implosion problem. We show FLASH results from an ideal equation of state (EOS) silver liner (SZP2) model in Section IV, and we briefly discuss how it compares to the originally published MACH2 SZP2 results. In Section V, we include more realistic EOS tables and physics to present both FLASH and MACH2 simulation results of a xenon gas-puff liner configuration (SZP1*). Lastly, we present our conclusions in Section VI.
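To give a rough sense of scale for the J × B drive discussed in this introduction, the sketch below evaluates the azimuthal field and magnetic pressure at a liner surface. The 30 MA current echoes the Z Machine figure cited above, while the few-mm radius is an assumed, merely SZP-like value rather than a parameter from the paper.

```python
import math

# Order-of-magnitude estimate: azimuthal field B = mu0*I/(2*pi*r) around a
# current-carrying column, and the corresponding magnetic pressure B^2/(2*mu0).
mu0 = 4e-7 * math.pi
i_peak = 30e6        # A, comparable to the cited Z Machine peak current
r = 0.3e-2           # m, assumed ~3 mm liner radius (illustrative scale only)
b_theta = mu0 * i_peak / (2 * math.pi * r)   # tesla
p_mag = b_theta**2 / (2 * mu0)               # pascal
print(f"B ~ {b_theta:.0f} T, P ~ {p_mag/1e9:.0f} GPa (~{p_mag/1e11:.0f} Mbar)")
```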
II. NUMERICAL METHODS FLASH 17 is a publicly available, parallel, multi-physics, adaptive mesh refinement (AMR), finite-volume Eulerian hydrodynamics and MHD code, developed at the University of Rochester by the Flash Center for Computational Science (for more information on the FLASH code, visit: https://flash.rochester.edu). FLASH scales well to over 100,000 processors, and uses a variety of parallelization techniques, like domain decomposition, mesh replication, and threading, to optimally utilize hardware resources. The FLASH code has a world-wide user base of more than 4,350 scientists, and more than 1,300 papers have been published using the code to model problems in a wide range of disciplines, including plasma astrophysics, combustion, fluid dynamics, high energy density physics (HEDP), and fusion energy. Over the past decade and under the auspices of the U.S. DOE NNSA, the Flash Center has added to FLASH extensive HEDP and extended-MHD capabilities 18 that make it an ideal tool for the multi-physics modeling of the SZP platform. These include multiple state-of-the-art hydrodynamic and MHD shock-capturing solvers, 19 three-temperature extensions 18 with anisotropic thermal conduction that utilizes high-fidelity magnetized heat transport coefficients, 20 heat exchange, multi-group radiation diffusion, tabulated multi-material EOS and opacities, laser energy deposition, and circuit models. 21 The FLASH code and its capabilities have been validated through benchmarks and code-to-code comparisons, 25-27 as well as through direct application to numerous plasma physics experiments, 28-34 leading to innovative science and publications in high-impact journals. For pulsed-power experiments, FLASH has been able to reproduce past analytical models, 35 is being applied in the modeling of capillary discharge plasmas, 36 and is being validated against gas-puff experiments at CESZAR. 37 The Flash Center is also collaborating with Los Alamos National Laboratory (LANL) in the modeling of laser-driven experiments of cylinder implosions 38 at the Omega Laser Facility at the University of Rochester and the National Ignition Facility at Lawrence Livermore National Laboratory, in a successful integrated inertial confinement fusion (ICF) verification and validation (V&V) effort with xRAGE. 39,40 The Multi-block Arbitrary Coordinate Hydromagnetic (MACH2) code 41 is a single-fluid, multi-material, three-temperature resistive MHD code, developed by the Center for Plasma Theory and Computation at the Air Force Research Laboratory (AFRL), Phillips Research Site. It solves the usual set of MHD equations: mass conservation, momentum conservation, electron, ion, and radiation energy, and Faraday's law of induction for the magnetic field. One fundamental difference between FLASH and MACH2 lies in the formulation of the total energy equation. Although MACH2 advances the total energy in a non-conservative manner, this has been proven not to impact the code's ability to capture MHD shocks, provided that adequate grid resolution is used. 42
Radiation is calculated using a single energy group (gray radiation), with a flux-limited, non-equilibrium model. The EOS and transport coefficients (opacities, thermal conductivities, magnetic resistivity) can be obtained from the LANL SESAME tables. The code also contains options to use a gamma-law EOS and certain analytical transport coefficients (e.g., Spitzer thermal conductivity). This code has an adaptive mesh algorithm, which can alter the computational grid every time step according to user-specified criteria. Its Arbitrary Lagrangian-Eulerian (ALE) framework allows simulations to be run in pure Lagrangian, pure Eulerian, or a combination of the two methods. In the pure Eulerian mode, the code still takes a Lagrangian step, but maps the result back to the fixed Eulerian mesh. The grid spacing is potentially adjusted by the adaptive algorithm, depending on magnetic or fluid pressure gradients (or both), which can provide increased accuracy in regions of interest while saving computational time. The Eulerian method, where the computational grid is fixed in space for the entire duration of a simulation, is perhaps the easiest to conceptualize and analyze. However, it may require increased resolution in certain regions to properly model important phenomena driving the system dynamics. New MACH2 results in this work use the pure Eulerian method for comparison with FLASH. MACH2 contains a self-consistent circuit model, which is intended to represent the refurbished Z pulsed-power machine at Sandia National Laboratory (SNL). 21 The input open-circuit voltage profile and other circuit parameters are described in a previous paper. 10 This same circuit model is also now implemented in FLASH. MACH2 has been successfully used for a variety of studies, which supports its use as a code that has gone through an extensive amount of V&V. These studies include, but are not limited to, explosive magnetic generators, plasma opening switches, 43,44 compact toroid schemes, 45-47 ICF and alternative fusion concepts, 48 and Z-pinches with solid liners. 49,50 Some have questioned whether previous SZP simulations used MACH2 correctly, with appropriate boundary conditions and sufficient spatial resolution. The code-to-code comparisons reported in this paper are intended to build confidence that these codes can accurately model Z-pinches and help guide experiments. MACH2 is also actively used and developed at the Naval Research Laboratory (NRL), and newer versions of the code may have significant differences from the version used in this study. One of the purposes of this work is to assess the SZP platform with FLASH within the context of MIFTI's previous and current research using the version of MACH2 in their possession. Therefore, it is not our intention to fix any potential errors that may be discovered in MACH2. Any code-to-code discrepancies described in this work pertain specifically to this version of MACH2 and should not have any bearing on newer versions of the code being used by other research groups. III. ANALYTICAL TESTS In published MACH2 simulations of the SZP, shock waves were identified as crucial in preheating the target plasma and piling up liner mass at the liner/target interface. 14 The interpretation of these shock waves has come under some criticism in recent years. 12,13
Nevertheless, these shock waves are present in the MACH2 simulations, and they are complex phenomena, as they develop in a magnetized medium with important radiative effects. For these reasons, we decided to test both FLASH and MACH2 with simpler analytical problems in which Z-pinch-relevant shock physics is important. In the subsections that follow, we present test results from a radiative shock problem and from the cylindrical Noh problem. One purpose of these tests is to help build confidence in each code's ability to accurately model the constituents that make up the fluid dynamics and thermodynamics of the complicated SZP.

A. Radiative Shock Problem

Radiative shocks (and radiation in general) are essential elements of SZP simulations. As reported by Ruskov et al.16 (cf. Fig. 9 therein), MACH2 simulations indicate that a more radiative (higher-Z) liner is compressed more for the same liner mass and driver, consequently coupling its kinetic energy into target internal energy more efficiently and ultimately resulting in higher yield.

The radiative shock problem presented here follows the prescription described in Lowrie and Edwards,51 and its simulation setup is also described in the FLASH user's guide.52 It is an analytical solution for a 1-D, steady, radiative shock in which the electron and ion temperatures are in equilibrium, but the radiation temperature differs. Constant opacities are used for a single radiation energy group (gray). The purpose of the problem is to test a code's radiation transfer and shock-capturing capabilities, both of which are important for the modeling of the SZP concept.

The Planck opacities (absorption and emission) are set to approximately 423 cm^-1, and the Rosseland opacity (transport) is approximately 788 cm^-1. An ideal EOS is used with adiabatic index γ = 5/3, atomic number Z = 1 (also the constant ionization state), and atomic mass A = 2 amu. Electron, ion, and radiation temperatures are initially in equilibrium, and their upstream (pre-shock) value is 100 eV. The electron and ion temperatures remain in equilibrium throughout the domain for the duration of the simulation, due to heat exchange with an enforced reduction of the equilibration time, but the radiation temperature can change. The upstream density is set to 1.0 g cm^-3, and the remaining upstream and all downstream conditions are set appropriately to maintain a steady shock at the prescribed Mach number.

Fig. 2 shows the analytical solution for the electron (same as ion) and radiation temperatures as well as simulation results from both codes. The grid resolutions used for these simulations were approximately 0.146 µm and 0.293 µm for FLASH and MACH2, respectively. A resolution convergence study was conducted with MACH2, and the results did not change with higher resolution. Note that Fig. 2 does not contain every data point from either code, to avoid over-crowding the plot. At a relatively early time of 2 ns, it appears that both FLASH and MACH2 recover the analytical solution, providing confidence that the radiation transfer algorithms give accurate results. However, at later times, we see that only FLASH captures the exact position of the shock, whereas MACH2 shows an increasing positional offset in the electron/ion temperature jump. This result shows that MACH2 does not capture electron-radiation coupling as accurately as FLASH, especially in the presence of shocks. This kind of discrepancy will play a role in the SZP1* simulations presented later in this work (Section V).
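A discrepancy like this is easy to quantify in post-processing. The following is a minimal, hypothetical diagnostic sketch (in Python; it is not part of either code, and all array names are placeholders): it locates the shock front in a 1-D temperature profile as the point of steepest gradient and reports the offset from the analytical front.

```python
import numpy as np

def shock_position(x, T):
    """Locate the shock front as the point of steepest |dT/dx|,
    a simple but common way to track a sharp front in 1-D data."""
    dTdx = np.gradient(T, x)
    return x[np.argmax(np.abs(dTdx))]

def shock_offset(x_code, T_code, x_exact, T_exact):
    """Signed offset between a simulated and an analytical shock position."""
    return shock_position(x_code, T_code) - shock_position(x_exact, T_exact)

# Hypothetical usage, with profiles sampled at the same output time:
#   offset = shock_offset(x_mach2, Te_mach2, x_analytic, Te_analytic)
```

Applied at successive output times, such a diagnostic would trace the growing positional error described above.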
In this version of MACH2, the radiation transport calculation occurs before the hydrodynamic advection calculation, which is a typical operator-split approach. However, the MACH2 result shows a small positional error on the order of a fraction of a cell width, which accumulates over time; hence the position of the shock continuously drifts farther away from the analytical solution at later times. FLASH utilizes a similar operator-split approach but does not exhibit the same error. Additional tests were conducted to match the order of operations, but the results did not change. It is also important to note that MACH2 took on the order of 100,000 computational time steps for the radiative shock test problem, whereas the SZP1* simulation discussed later required over 2.6 million steps, so the aforementioned error accumulation could be significant in the SZP1* simulation.

B. Magnetized Noh Problem

The classical Noh problem53 provides a test to benchmark the accuracy of hydrodynamic codes in capturing shock dynamics in a convergent geometry. Initially, an infinite, uniform medium is set up with a homogeneous inward velocity in cylindrical coordinates. Due to the singularity at the origin, an accretion shock wave is generated that propagates outwards, decelerating the incoming fluid mass. The magnetized extension of the problem, derived by Velikovich et al.,42 is well reproduced by FLASH.54 We have repeated the simulation here since we are using a newer version of FLASH, and we compare with results from the relevant version of MACH2 used in the published SZP simulations.9-11,14 A simulation setup for this test is also included in the release version of FLASH and described in the user's guide.52 It was observed that, for both codes, the numerical results converge towards the analytical solution with increased resolution. Fig. 3 shows that both codes approximately reproduce the analytical solution for the mass density. The key features that are modeled accurately are the peak density and the location of the shock. Note that this test only involves hydromagnetic advection, and therefore the MACH2 result does not suffer from the error accumulation described in Section III A.

IV. SILVER LINER MODELS

We first present simulations of the silver liner on a DT target configuration proposed in Wessel et al.,10 referred to as SZP2. This configuration was chosen because it was used in Ruskov et al.14 in response to SZP criticism,12 and it is therefore employed here in the FLASH calculations.

Our setup consists of a flat density profile for both fuel (ρ_fuel = 9.8 × 10^-3 g cm^-3) and liner (ρ_liner = 0.6 g cm^-3). Initially, the fuel region extends from r = 0 to r = 0.2 cm, and the thickness of the liner is 0.1 cm. The computational domain consists of a uniform grid of 1,024 points that extends to r = 0.4 cm. This ensures a resolution of 10 cells to describe the fuel region at stagnation. At t = 0, the system is in thermal equilibrium at an initial temperature of 2 eV. In this first comparison, we have taken an ideal-EOS approach, in which a gamma-law EOS with γ = 5/3, constant ionization of Z = 1 for the fuel and Z = 10 for the liner, and analytical formulas for radiation opacities corresponding to free-free electron transitions (bremsstrahlung radiation)55,56 are assumed. More precisely, we take the Planck mean opacity K_P^ff for the emission and absorption opacities and the Rosseland mean opacity K_R^ff for the transport opacity, where K_P^ff and K_R^ff are given (in c.g.s. units) by the standard free-free expressions of Refs. 55 and 56. Here, A refers to the mass number.
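For orientation, the snippet below evaluates a generic Kramers-type free-free scaling, κ^ff ∝ Z³ρ/(A²T^3.5). This is only a sketch of the parameter dependence, with a placeholder normalization constant; it is not the specific expressions of Refs. 55 and 56 used in the runs.

```python
def kramers_ff_opacity(rho, T_eV, Z, A, C=1.0):
    """Kramers-type free-free opacity scaling (arbitrary normalization).

    rho in g/cm^3, T_eV in eV; Z is the (constant) ionization state and A
    the mass number. C is a placeholder constant, NOT the coefficient of
    Refs. 55/56 -- only the scaling with Z, A, rho, and T is meaningful."""
    return C * Z**3 * rho / (A**2 * T_eV**3.5)

# Illustrative evaluation at the initial 2 eV state of the SZP2 setup:
kappa_fuel  = kramers_ff_opacity(9.8e-3, 2.0, Z=1,  A=2)    # DT fuel
kappa_liner = kramers_ff_opacity(0.6,    2.0, Z=10, A=108)  # silver liner
```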
The motivation behind the ideal-EOS approach is to verify that FLASH can accurately solve for the fundamental physics that model and govern a pinch implosion. In particular, appropriate treatment of the vacuum region is essential in an Eulerian code like FLASH. The vacuum region is modeled as a low-density fluid whose task is to transfer the magnetic field from the outer boundary, placed at r = 0.4 cm, to the outer surface of the liner while adhering to a current-free profile, B ∼ 1/r. To ensure this, an artificially high value of magnetic diffusivity in the vacuum region was used, namely η_vac ∼ 10^11 cm^2 s^-1. Additionally, a temperature ceiling was imposed in the vacuum to avoid a potential build-up of thermal pressure that could affect the pinch dynamics. The temperature ceiling has the side benefit of keeping thermal conduction low in the vacuum, hence reducing liner-vacuum heat losses. A signature of correct behavior of the vacuum is the robustness of the implosion dynamics to changes in the parameters that model the vacuum region. This is shown in Fig. 4, where the temporal evolution of the mass-averaged ion temperature is minimally affected by the value of the initial vacuum density (provided that it is sufficiently low), the vacuum diffusivity (provided that it is sufficiently high), and the temperature ceiling.

The dynamics of the implosion are sketched in Fig. 5, where the evolution of the fuel radius and the implosion velocity are shown for runs with radiation physics on (black) and off (red). Initially, the fuel is slowly compressed as a result of a pressure imbalance present in the initial conditions. This is a result of the three regions (fuel, liner, and vacuum) being initialized with a homogeneous temperature of 2 eV. At t = 80 ns, the trajectories of the two runs depart. In the run with radiation physics turned off, a jump-off velocity of the liner of 7 cm/µs is observed when the leading shock breaks out into the fuel at t = 109 ns. This is consistent with, albeit slightly above, the ∼6 cm/µs implosion velocity reported both in Lindemuth et al.12 (cf. Fig. 4 therein) and in Ruskov et al.14 (cf. Fig. 3(b) therein). Thermal and magnetic pressure profiles at the time of shock breakout are depicted in Fig. 6(a) for this run. The run where radiation physics is turned on shows more complex dynamics. Radiation keeps the liner colder, which allows more magnetic field to diffuse into the fuel. At its interface with the liner, significant magnetic pressure builds up, and the magnetic piston thereby formed drives the initial stages of fuel compression. This stage lasts until shock breakout.

After shock breakout, the shock travels in the fuel, heating it non-adiabatically (viz., shock preheating). Eventually, the shock reaches the symmetry axis, rebounds, and propagates outward in the form of a weaker shock or sound wave. Subsequent shock preheating can take place due to shock rebound at the inner surface of the liner, or due to additional shocks launched by the driver. Shock preheating ends when the fuel temperature is raised to a value where further compression becomes subsonic.

The evolution of the ion temperature as a function of the convergence ratio provides insights into the implosion dynamics. Throughout this work, the convergence ratio (CR) is defined as the ratio of the initial outer fuel radius to the compressed outer fuel radius. This is plotted in Fig. 7 and compared to its counterpart full-physics MACH2 run from Ruskov et al.14
Highlighted in light gray is the stage of the implosion driven by magnetic pressure, whereas the darker gray region denotes the period during which the main shock propagates in the fuel. It can be seen that, despite the different early dynamics previously described, both the radiation-off and radiation-on simulations depart from approximately the same fuel temperature after shock preheating, at CR = 5. At the later stage of subsonic compression, the radiation-off run closely follows an adiabatic trajectory, while the radiation-on run deviates significantly. This indicates that radiation losses dominate over thermal conduction losses in the SZP2 configuration, as was anticipated in Lindemuth et al.12 (cf. Table III therein).

Comparing the MACH2 and FLASH simulations, it can be observed that the fuel adiabat after shock preheating is significantly lower in FLASH, resulting in lower fuel temperatures and a lower convergence ratio at stagnation. In this respect, the FLASH run is more consistent with the results reported in Lindemuth et al.12 (cf. Fig. 7 therein): stagnation temperatures slightly below 1 keV at CR ∼ 50. However, it should be highlighted that adopting an ideal-EOS framework resulted in different early dynamics. In the MACH2 run, the occurrence of secondary shock preheating, absent in the simulations in Lindemuth et al.,12 allows the fuel adiabat to rise and attain fusion conditions. In the FLASH run, only primary shock preheating is observed, preceded by compression due to magnetic pressure.

V. XENON GAS-PUFF LINER MODELS

We have developed a new configuration of a xenon gas-puff liner to enable a direct comparison between FLASH and MACH2 simulations of the SZP platform. Previously published SZP models of Xe gas-puff liners, sometimes referred to as SZP1,9 use different initial conditions. To avoid confusion, we refer to this new configuration as SZP1*. The main difference is a lower liner density, which was chosen to generate a faster implosion and thus stronger shocks, in the hope of achieving higher temperatures.

Another motivating factor for SZP1* is that the dominant thermal loss mechanism is thermal conduction, rather than radiation as in SZP1. We can estimate the ratio of thermal conduction losses to radiation losses for DT with Eq. (3), where T_e is the electron temperature in keV, CR is the fuel convergence ratio, K is a coefficient that accounts for the effect of the electron Hall parameter, χ_e, on the electron thermal conductivity, and n_i is the ion number density in units of 10^24 cm^-3. This equation is derived from the work of Lindemuth and Siemon57 and represents an estimate of the ratio of the rate of electron thermal conduction to the rate of bremsstrahlung radiation, specifically for DT. For hydrogen, K(χ_e) = (4.664 χ_e^2 + 11.92)/(χ_e^4 + 14.79 χ_e^2 + 3.77) was used.
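Since K(χ_e) is given explicitly, it is simple to evaluate. The short sketch below does so for the two Hall-parameter values quoted later in this section (the full ratio of Eq. (3) is not reproduced here).

```python
def K_hydrogen(chi_e):
    """Magnetization factor K(chi_e) for hydrogen, as quoted in the text,
    entering the conduction-to-radiation estimate of Eq. (3)."""
    return (4.664 * chi_e**2 + 11.92) / (chi_e**4 + 14.79 * chi_e**2 + 3.77)

# Hall-parameter values reported below for the fuel near the liner interface:
for chi in (58.55, 3138.0):
    print(f"chi_e = {chi:8.2f}  ->  K = {K_hydrogen(chi):.3e}")
```

Because K falls off roughly as χ_e^-2 at large magnetization, a strongly magnetized fuel layer has its conduction losses suppressed accordingly, which is the effect discussed below.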
The input parameters (from the FLASH simulations) and the resulting ratios for SZP1 and SZP1* are summarized in Table I. For the original SZP1 scheme, we calculated this ratio to be < 1 near stagnation, making radiation the primary heat loss mechanism. Conversely, for SZP1*, we estimate this ratio to be > 57; thus thermal conduction losses dominate near stagnation. This configuration is potentially advantageous because thermal conduction losses can be reduced if sufficient magnetic field is diffused into the fuel. The main contributing factors for this difference, according to Eq. (3) and Table I, are the attained CR and n_i. Since SZP1* reaches higher CR than SZP1 because of its lower density, the different thermal loss regimes are ultimately a result of the different densities.

The tail of the liner Gaussian from r = 1.722 to r = 2.016 cm is modeled as a "vacuum" region, with a floor density of ρ_min = 1 × 10^-7 g cm^-3 enforced for the duration of the simulation. Fig. 8 shows the initial mass density profile for the SZP1* configuration.

The mass density and magnetic field profiles at CR = 87 are shown in Fig. 10. This is the maximum CR attained in the MACH2 run, whereas the FLASH simulation continues to compress. While a cursory inspection of Fig. 10 might conclude that the simulations match fairly well, there are two key differences to note. First, the liner in the MACH2 simulation has compressed to larger densities than in the FLASH simulation. Second, in the MACH2 simulation we see a significant build-up of magnetic field just inside the fuel, abutting the fuel/liner interface, which lowers thermal conductivity, insulates the fuel, and reduces thermal losses. This disparity in magnetic field accumulation in the fuel is identified as the main cause of the observed difference in the maximum convergence ratios, at stagnation, between the simulations. The relatively larger magnetic field in MACH2 leaks into the fuel at approximately the same time as, or shortly after, the main shock in the liner breaks out into the fuel. This occurs relatively early (∼127 ns), at CR ∼ 1.11, and the simulations begin to diverge after this point. After shock breakout, the MACH2 simulation predicts a thin, cold region in the fuel next to the liner. The temperature drop leads to an increase in magnetic resistivity which, in turn, allows more magnetic field to diffuse inwards, further inhibiting thermal conduction.

In the limit of large magnetization, the perpendicular thermal conductivity is proportional to T_e^2.5/χ_e^2, where χ_e is the electron Hall parameter. This thin fuel region next to the liner is more magnetized in the MACH2 simulation than in the FLASH run, as shown in Fig. 11, with peak values of χ_e ≈ 3138 and χ_e ≈ 58.55, respectively. Taking also into account the different temperatures, we estimate that the thermal conductivity in this part of the fuel is more than 120 times greater in the FLASH simulation than in the MACH2 run. This observation explains why thermal conduction losses are higher in the FLASH simulation and is consistent with the continued compression of the fuel to higher CR.
Also note that the magnetic field spike inside the fuel in the MACH2 result (see Fig. 10) would require a return current at this location, and we do not generally expect to see return currents inside the fuel in Z-pinches. Nevertheless, the MACH2 result shows how increased fuel magnetization can benefit the SZP1* configuration by reducing thermal losses, in turn leading to higher temperatures and lower, more stable CR values.

The electron, ion, and radiation temperature profiles at CR = 87 are shown in Fig. 12. Here we see a much clearer discrepancy between the two simulations. The fuel in the MACH2 run has a much higher electron temperature, which helps explain why the implosion stagnates earlier than in the FLASH simulation. It is also noteworthy that, in the FLASH result, we have a fuel whose T_e < T_i. Conversely, in the MACH2 simulation, at stagnation, T_e > T_i. Generally, in Z-pinch experiments, one may expect the ion temperature to be higher than the electron temperature, since electrons lose energy via radiation, thermal conduction, and heat exchange with the ions, whereas ions are also subject to shock heating. Nevertheless, the temperature inversion observed in the MACH2 result is not necessarily nonphysical, given the large fuel magnetization. In such regimes, ions can be more thermally conductive than electrons, so it is possible for ions to lose more thermal energy and remain colder than electrons. Also, we again observe a discrepancy at the fuel/liner interface, where the temperatures in the MACH2 run sharply decrease to liner values before the interface is reached, while in the FLASH profile the temperatures decrease after the interface and inside the liner. This helps further explain the aforementioned presence of larger magnetic field values in the fuel in the MACH2 simulation: the lower-temperature region just inside the fuel/liner interface results in higher magnetic resistivity, which in turn allows more magnetic field to diffuse into the fuel.

FIG. 12. The fuel/liner interface is marked by a short-dash vertical line. In the FLASH result, the fuel has T_e < T_i, whereas in MACH2, at stagnation, T_e > T_i. Also, the temperatures in the MACH2 run decrease to liner values before the interface is reached, while in the FLASH profile the temperatures decrease after the interface and inside the liner. As a result, in the MACH2 run, the magnetic resistivity and the magnetization of the fuel adjacent to the interface are larger than in the FLASH simulation, insulating the fuel from heat conduction losses.

The FLASH model continues to compress and reaches a peak T_ion of about 18 keV, on-axis, at CR = 100, which occurs at 144.75 ns. After this peak, the FLASH model compresses further, for ∼265 ps, and reaches a CR of approximately 388. This latter compression is accompanied by thermal losses that result in lower-than-peak temperatures.
Fig. 13 shows a comparison of the mass density and azimuthal magnetic field from the FLASH model at CR = 100 and CR = 388 (stagnation). The fuel density has increased by an order of magnitude, which is consistent with the decrease in volume from a radius of 50 µm to 13 µm. The magnetic field in the fuel has also increased, but the plasma beta is still much larger than unity due to the high thermal pressure.

Despite having the same initial conditions, circuit model, transport coefficients, and EOS and opacity tables, we were not able to reproduce the MACH2 result with FLASH simulations. We see that the fuel stagnates at lower CR in the MACH2 run because the latter reaches much higher temperatures and thus has enough thermal pressure to halt the implosion. The ability of the fuel to retain its thermal energy (i.e., high temperatures) depends on its thermal losses via radiation and thermal conduction. The fact that discrepancies start becoming apparent after the shock breakout may call into question the codes' shock-capturing capabilities. We have shown that the version of MACH2 used in this work does not reproduce the analytical solution of the radiative shock test problem as accurately as FLASH (see Section III A). The integrated SZP1* simulations are more complicated than the simple benchmark problem, and the ability to accurately model radiative shocks at material interfaces in SZP1* is crucial for accurately predicting thermal conduction losses in the fuel. The MACH2 SZP1* simulation should have a similar error accumulation to that observed in Fig. 2, once the shock breaks out into the fuel. However, the error is potentially larger due to the greater number of computational time steps (∼2.65 million).

Another interesting observation from the MACH2 result is that the fuel electron temperature remains higher than the ion temperature (see Fig. 12). We see the opposite relation in the FLASH model, because the electrons are losing more energy without the thin, highly magnetized layer to insulate them. This layer, observed only in the MACH2 result, develops after shock breakout and is therefore susceptible to the errors associated with radiative shock modeling discussed in Section III A. One would expect the electrons to be radiating, losing energy via thermal conduction, and transferring energy to the ions, while the ions are also subject to compressional heating. Due to the aforementioned thermal loss mechanisms, the FLASH model is allowed to reach higher CR values, where thermal conduction losses become even more important.

The MACH2 code has been successfully used for, and validated against, several plasma, inertial confinement fusion, and high energy density physics experiments. However, the modeling of the SZP1* platform, with specific settings chosen to compare with FLASH, is a challenging problem for the particular version of MACH2 used in this work, due to its issues modeling radiative shocks. This deficiency, in this version of MACH2, leads us to conclude that FLASH gives more physically sensible results for SZP1*, even though the FLASH-predicted CR values are too large to be experimentally stable.
B. High-fidelity FLASH simulations of the SZP1*

We ran two additional SZP1* models with FLASH to determine the effects of using higher-fidelity physics implemented in FLASH. These include newer, higher-fidelity transport coefficients20,24 and multi-group radiation diffusion, neither of which is available in MACH2. The newer transport coefficients are more complicated functions of atomic number and the Hall parameter, and they are more accurate than the Spitzer coefficients. The multi-group radiation diffusion model also used the newer transport coefficients, as well as 40 radiation energy groups spanning the same energy range as the single-group (gray) models. We denote the FLASH runs in this subsection as follows: SP is the single-group run with Spitzer transport coefficients (the same run discussed in the previous subsection), DW 1G is the run with the newer transport coefficients and one radiation group, and DW 40G is the run with the newer transport coefficients and 40 radiation groups.

Table II gives a summary of key results in terms of CR, stagnation time, and mass-averaged fuel ion temperature at stagnation. Note that, for all FLASH simulations, these stagnation temperatures are lower than the peak ion temperatures. We observe that, with the newer coefficients, SZP1* converges slightly faster and to a smaller radius, but the ion temperature is slightly lower. The multi-group model converges the fastest and to the highest CR values encountered in this work, CR ∼ 560. At stagnation, the multi-group radiation diffusion run is hotter than both single-group FLASH runs, but its peak temperature, which occurs prior to stagnation, is lower.

Fig. 15 shows the mass-averaged fuel ion temperature as a function of CR for all SZP1* models. There are several important features to note in this figure: (1) all models show fuel preheating early (CR < 2); (2) all FLASH models continue to compress to higher CR values after peak T_ion, whereas the MACH2 model does not; (3) the FLASH models with newer transport coefficients reach higher CR values; and (4) the multi-group model is on a lower adiabat and has a lower peak T_ion than all single-group models. The significance of shock preheating was discussed in Section IV in reference to the ideal-EOS SZP2 models, and similar points apply to SZP1* as well. However, in SZP1* there is also a radiation wave that provides significant additional fuel preheating. This was seen when analyzing the early-time behavior of the simulations, and by executing a separate test run with radiation transport switched off, in which the wave was absent. This radiation wave and the initial shock breakout into the fuel effectively set the adiabat of the compression.

Point (2) is essential for understanding the differences between the FLASH and MACH2 models. Thermal losses, which are more significant in the FLASH simulations, cool the fuel and allow for higher CR values. The reasons for the discrepant thermal losses were discussed in the previous subsection.
Points (3) and (4) are specific to the FLASH models. Use of the newer transport coefficients leads to more thermal conduction losses, which results in higher CR values and lower T_ion. The multi-group model converges slightly more than its single-group counterpart, while its stagnation temperature is higher. This result indicates that the liner is radiating more efficiently; a colder liner is easier to compress and will subsequently act as a more effective piston for compressing the fuel. Also, some of the increased liner radiation goes into the fuel, keeping it hot for a longer period. This speaks to the benefit of using a high-atomic-number liner and broadly supports the viability of the SZP concept. At these observed high temperatures, alpha-particle heating could be significant, but this physics capability is not available in FLASH, so we did not explore it with MACH2 either.

Any additional heat source or insulation, or an increase in the initial fuel density, would help stagnate the fuel at a lower CR value, thus improving stability. It should be emphasized that experiments with other SZP configurations, at smaller-than-Z pulsed-power facilities, have proven to be stable, and SZP1* is a theoretical platform in a different regime that may be more difficult to stabilize.

We eventually want to use FLASH to simulate the entire spatial and temporal evolution of the SZP with a reactor-level drive current in three dimensions, taking full advantage of the extended-MHD and transport capabilities of the code. The next immediate step is to conduct two-dimensional simulations of the models discussed in this work and in previous publications.14,16 Future work will assess the stability of the pinch (liner and target) to MHD instabilities in the presence/absence of axial magnetic fields, and will explore how FLASH's extended-MHD terms can affect implosion dynamics and plasma conditions at stagnation. This will shed light on the importance of previously unexplored physical processes at play in the SZP concept and contribute to evaluating the feasibility of the concept for achieving fusion.

The goals of this work are (1) to introduce FLASH's new capability of modeling Z-pinches, (2) to further verify both FLASH and MACH2 against analytical test problems and with direct code-to-code comparisons of SZP simulations, (3) to provide a new SZP configuration (SZP1*) for additional verification and future experimental validation, and (4) to shed more light on some of the previously published work by presenting SZP2 results from FLASH.

FIG. 1. Schematics showing the SZP2 (a) and SZP1* (b) configurations. The SZP2 configuration uses DT fuel and a solid silver liner, whereas SZP1* uses DT fuel and a xenon gas-puff liner.
FIG. 2. Analytical solution for the electron (same as ion, black) and radiation (red) temperatures in the radiative shock test problem, compared to FLASH and MACH2 simulation results. The analytical solution is shown as a solid line, whereas the FLASH and MACH2 results are circle-dot and cross symbols, respectively. The top panel (2 ns) appears to show good agreement, but at later times (4.5 ns and 7 ns) we see an increasingly discrepant shock position in the MACH2 result.

FIG. 3. Analytical solution for the mass density in the magnetized Noh problem, compared to FLASH and MACH2 simulation results. The analytical solution is shown as a solid line, whereas the FLASH and MACH2 results are circle-dot and cross symbols, respectively. Both codes recover the expected profile.

FIG. 5. Fuel radius (a) and implosion velocity V_i (b) as a function of time in the ideal-EOS SZP2 run. The black line denotes a simulation in which the radiation transport operates normally, whereas the red line denotes the simulation in which the radiation transport is artificially switched off.

FIG. 6. Profiles of thermal pressure (solid) and magnetic pressure (dashed) at the time of shock breakout in the ideal-EOS SZP2 runs, for (a) radiation physics turned off (t = 109 ns) and (b) radiation physics turned on (t = 107 ns).

FIG. 9. Comparison of shell trajectories (i.e., fuel outer radius) from SZP1* simulations with MACH2 (dashed black) and FLASH (solid cyan). The load current resulting from the circuit model in FLASH (red) is also shown.

FIG. 10. Comparison of mass density (solid) and magnetic field (dashed) at CR = 87 from SZP1* simulations with MACH2 (red) and FLASH (black). The fuel/liner interface is marked by a short-dash vertical line.

FIG. 11. Comparison of the electron Hall parameter at CR = 87 from SZP1* simulations with MACH2 (red) and FLASH (black). The fuel/liner interface is marked by a short-dash vertical line. The fuel adjacent to the fuel/liner interface in the MACH2 simulation is significantly more magnetized than in the FLASH result.

Fig. 14 shows a comparison of the electron, ion, and radiation temperatures from the FLASH model at CR = 100 and CR = 388 (stagnation). From this comparison, we observe that thermal losses have begun to dominate beyond CR = 100. These are primarily due to thermal conduction, as we estimated in the analysis at the beginning of this section (see Eq. (3) and Table I).
Meanwhile, the density increases due to compression, eventually causing the fuel to stagnate when the pressure is sufficiently high. Possible sources of discrepancy are differences in the codes' algorithms. In general, the FLASH SZP1* simulations reach higher (potentially unstable) CR values than the MACH2 simulations, and the MACH2 simulations reach higher temperatures than all FLASH simulations. The discrepant results highlight the sensitivity of the SZP1* configuration to heat transport processes (i.e., thermal conduction and radiation). The high CR values are the result of significant fuel thermal conduction losses. As previously discussed, the SZP1* concept would benefit from decreasing the thermal conductivity via fuel magnetization, as was shown (perhaps erroneously) in the MACH2 model. Such fuel magnetization could be achieved experimentally by applying an axial magnetic field to the configuration. Despite the different results, all SZP1* simulations with both FLASH and MACH2 generally agree on reaching peak fuel ion temperatures above 15 keV. The highest-fidelity run, the FLASH multi-group diffusion model, reaches the lowest peak ion temperature (see Fig. 15), which in turn shows the importance of accurate radiation transport modeling for SZP1*.
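As a closing illustration of what the multi-group setup of Section V B involves, the sketch below builds 40 radiation energy-group boundaries. The logarithmic spacing and the energy bounds are illustrative assumptions, not the values used in the DW 40G run (which spans the same range as the gray model).

```python
import numpy as np

n_groups = 40
e_min_eV, e_max_eV = 0.1, 1.0e5          # assumed bounds, for illustration only
edges = np.logspace(np.log10(e_min_eV), np.log10(e_max_eV), n_groups + 1)
# 41 edges bounding 40 groups; a radiation-diffusion solve is then carried
# out per group, with group-averaged opacities replacing the gray values.
print(len(edges) - 1, edges[0], edges[-1])
```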
10,123.4
2023-11-13T00:00:00.000
[ "Physics", "Engineering" ]
Symmetry, Integrability and Geometry: Methods and Applications

The Non-Autonomous Chiral Model and the Ernst Equation of General Relativity in the Bidifferential Calculus Framework

The non-autonomous chiral model equation for an $m \times m$ matrix function on a two-dimensional space appears in particular in general relativity, where for $m=2$ a certain reduction of it determines stationary, axially symmetric solutions of Einstein's vacuum equations, and for $m=3$ solutions of the Einstein-Maxwell equations. Using a very simple and general result of the bidifferential calculus approach to integrable partial differential and difference equations, we generate a large class of exact solutions of this chiral model. The solutions are parametrized by a set of matrices, the size of which can be arbitrarily large. The matrices are subject to a Sylvester equation that has to be solved and generically admits a unique solution. By imposing the aforementioned reductions on the matrix data, we recover the Ernst potentials of multi-Kerr-NUT and multi-Demianski-Newman metrics.

Introduction

The bidifferential calculus framework makes it possible to elaborate solution-generating methods for a wide class of nonlinear "integrable" partial differential or difference equations (PDDEs) to a considerable extent on a universal level, i.e. detached from specific examples. It takes advantage of the simple rules underlying the calculus of differential forms (on a manifold), but allows for a generalization of the latter, which is partly inspired by noncommutative geometry. For a brief account of the basic structures and some results we refer to [1] (also see the references therein), but all that is needed for the present work is provided in Section 2.

In this framework we explore the non-autonomous chiral model equation

(ρ g_ρ g^{-1})_ρ + (ρ g_z g^{-1})_z = 0    (1.1)

for an m × m matrix g, where ρ > 0 and z are independent real variables, and a subscript indicates a corresponding partial derivative. It apparently first appeared, supplemented by certain reduction conditions (see Section 5), as the central part of the stationary axially symmetric Einstein vacuum (m = 2) and Einstein-Maxwell (m = 3) equations (see in particular [2,3,4,5,6,7,8]). For m > 3 this equation is met in higher-dimensional gravity, with a correspondingly enlarged number of Killing vector fields (see e.g. [9,10,11,12,13,14,15,16,17,18,19]). A version of the above equation also arises as the cylindrically symmetric case of the (2 + 1)-dimensional principal chiral model [20] and as a special case of the stationary Landau-Lifshitz equation for an isotropic two-dimensional ferromagnet [21].

The first construction of "multi-soliton" solutions of (1.1) was carried out by Belinski and Zakharov [5,6] (also see [7]) using the "dressing method"¹. Here (1.1) is expressed as the integrability condition of a linear system, which depends on a (spectral) parameter and involves derivatives with respect to the latter. Another approach is based on a linear system that depends on a variable spectral parameter, i.e. a parameter that depends on the variables ρ and z [3]. In Appendix B we show that both linear systems arise from a universal linear system (see Section 2) in the bidifferential calculus framework (also see [31] for a relation between the two linear systems).
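Equation (1.1) is easy to check directly in simple cases. The sketch below verifies it numerically, via finite differences, for the z-independent diagonal solution g = diag(1, ρ²); this particular seed is chosen purely for illustration.

```python
import numpy as np

# Check (rho g_rho g^{-1})_rho + (rho g_z g^{-1})_z = 0 for g = diag(1, rho^2).
rho = np.linspace(0.5, 2.0, 401)
h = rho[1] - rho[0]

def g(r):
    return np.array([[1.0, 0.0], [0.0, r**2]])

def rho_grho_ginv(r):
    g_r = (g(r + h) - g(r - h)) / (2.0 * h)      # central difference in rho
    return r * g_r @ np.linalg.inv(g(r))

# Since g is z-independent, the second term vanishes; the first term is the
# rho-derivative of rho*g_rho*g^{-1}, which here equals diag(0, 2) for all rho:
vals = np.array([rho_grho_ginv(r) for r in rho[1:-1]])
print(np.max(np.abs(vals - vals[0])))  # ~ machine precision, so (1.1) holds
```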
In the present work, we concentrate on a surprisingly simple general solution-generating result in the bidifferential calculus framework, which has already been successfully applied in various other cases of integrable (soliton) equations [1,32,33,34] to generate multi-soliton families. In order to make it applicable to the non-autonomous chiral model, a slight generalization is required, however (see Section 3 and Appendix A). Section 4 then elaborates it for the m × m non-autonomous chiral model. We obtain solutions parametrized by four matrices. Two of them arise as solutions of an n × n matrix version of the quadratic equation for pole trajectories that first appeared in the solution-generating method of Belinski and Zakharov [5,6,7]. It then remains to solve a Sylvester equation, in which two further constant matrices enter, of sizes m × n and n × m, respectively. Since n can be arbitrarily large, we obtain an infinite family of solutions. The Sylvester equation is easily solved if the first two matrices are chosen diagonal, and in this case one recovers "multi-soliton" solutions. Additional solutions are obtained if the two n × n matrices are non-diagonal. In this case it is more difficult to solve the Sylvester equation, though a not very restrictive spectrum condition ensures the existence of a unique solution. Except for an example in Section 5, we will not elaborate this case further in this work.

Section 5 addresses reductions, in particular to the Ernst equation of general relativity. It turns out that the "multi-soliton" solutions of the stationary, axially symmetric Einstein vacuum and Einstein-Maxwell equations are indeed in the generated class of solutions of the non-autonomous chiral model. We thus obtain a new representation of these solutions. It has the property that the superposition of two (or more) "solitons" (e.g. black holes) simply corresponds to block-diagonal composition of the matrix data parametrizing the constituents. This puts a new perspective on an old result about one of the most important integrable equations in physics.

We would like to stress that the solutions of the non-autonomous chiral model and the Ernst equation(s), (re)derived in this work, originate from a universal result that also generates multi-soliton solutions of various other integrable equations in a non-iterative way. The crucial step is to find a "bidifferential calculus formulation" of the respective equation. This may be regarded as a generalization of the problem of formulating the equation as a reduction of the self-dual Yang-Mills equation. Indeed, in the case under consideration, it is of great help that an embedding of the non-autonomous chiral model in the (m × m) self-dual Yang-Mills equation is known [35,36,37,38,39,40], and a bidifferential calculus formulation is then obtained from that of the self-dual Yang-Mills equation [1], see Section 4. Once this is at hand, the remaining computations are rather straightforward. Section 6 contains some concluding remarks.

Preliminaries

Basic definitions. A graded algebra is an associative algebra Ω over C with a direct sum decomposition Ω = ⊕_{r≥0} Ω^r into a subalgebra A := Ω^0 and A-bimodules Ω^r, such that Ω^r Ω^s ⊆ Ω^{r+s}. A bidifferential calculus (or bidifferential graded algebra) is a unital graded algebra Ω equipped with two (C-linear) graded derivations d, d̄ : Ω → Ω of degree one (hence dΩ^r ⊆ Ω^{r+1}, d̄Ω^r ⊆ Ω^{r+1}), with the properties

d_κ² = 0 for all κ ∈ C, where d_κ := d̄ − κ d    (2.1)

(equivalently d² = d̄² = dd̄ + d̄d = 0), and the graded Leibniz rule d(χχ′) = (dχ)χ′ + (−1)^r χ(dχ′) for all χ ∈ Ω^r and χ′ ∈ Ω.
This means that d and d̄ both satisfy the graded Leibniz rule. In Section 3 we consider a more narrow class of graded algebras. A bidifferential calculus within this class is then specified in Section 4.

Dressing a bidifferential calculus. Let (Ω, d, d̄) be a bidifferential calculus. Replacing d_κ in (2.1) by a dressed operator D_κ built with a 1-form A (i.e. an element of Ω¹), the resulting condition D_κ² = 0 (for all κ ∈ C) can be expressed as a pair of equations (2.2) for A. If these equations are equivalent to a PDDE or a system of PDDEs for a set of functions, we say we have a bidifferential calculus formulation for it. This requires that A depends on these functions and that the derivations d, d̄ involve differential or difference operators. There are several ways to reduce the two equations (2.2) to a single one. Here we only consider two of them.

1. We can solve the first of (2.2) by a suitable ansatz for A in terms of α ∈ A, which is d-constant², and β ∈ A, which is d̄-constant; both have to be invertible. This leads to the linear equation d̄X = AX + (dX)P (2.7), with integrability condition (2.9). If P satisfies

d̄P = (dP)P,    (2.8)

this reduces to the equation under consideration.

A solution generating method

Let Λ(C^N) denote the exterior (Grassmann) algebra of the vector space C^N and Mat(m, n, B) the set of m × n matrices with entries in some unital algebra B. We choose A as the algebra of all finite-dimensional matrices (with entries in B), where the product of two matrices is defined to be zero if the sizes of the two matrices do not match, and assume that Ω = A ⊗ Λ(C^N) is supplied with the structure of a bidifferential calculus. In the following, I = I_m and Ĩ = I_n denote the m × m, respectively n × n, identity matrix. The solution-generating result, Proposition 3.1, asserts that matrix data (P, R, X, U, V) subject to the system (3.1) determine, via (3.2), a solution pair (φ, g) of the Miura transformation equation (2.10).

Proof. Using the last three of (3.1), multiplication by U from the left and by V from the right, and using d̄Ĩ = 0, leads to d̄g = U(dX^{-1})V g = (dφ)g. Hence φ and g solve the Miura transformation equation (2.10). We did not use the first of (3.1), but it arises as an integrability condition: 0 = d̄²X = (dX)[(dP)P − d̄P], so that the last factor has to vanish. If P and R are sufficiently independent, this implies that the third of (3.1) is satisfied. In particular, this holds if B is the algebra of complex functions of some variables and if P and R have no eigenvalue in common.

Appendix A explains how Proposition 3.1 arises from a theorem that has been applied in previous work to generate soliton solutions of several integrable PDDEs.

The non-autonomous chiral model

The PDE defining the non-autonomous chiral model can be obtained as a reduction of the self-dual Yang-Mills (sdYM) equation (see e.g. [35,36,37,38,39,40]). In an analogous way, a bidifferential calculus for the non-autonomous chiral model can be derived from a bidifferential calculus for the sdYM equation (also see [41]). In coordinates ρ, z, θ, where ρ > 0, it is given by (4.1). Here, e.g., f_z denotes the partial derivative of a function f (of the three coordinates) with respect to z, and ζ_1, ζ_2 is a basis of Λ¹(C²). d and d̄ extend to matrices of functions and moreover to Ω = A ⊗ Λ(C²) with A = Mat(m, m, C), treating ζ_1, ζ_2 as constants. The coordinate θ is needed to have the properties of a bidifferential calculus, but we are finally interested in equations for objects that do not depend on it. A (matrix-valued) function is d-constant (d̄-constant) iff it is z-independent and depends on the variables θ, ρ only through the combination ρe^θ (respectively ρe^{−θ}). It is d- and d̄-constant iff it is constant, i.e. independent of z, θ, ρ.
For an m × m matrix-valued function g, (2.4) takes a form involving an arbitrary constant c and a θ-independent g̃; for the latter we obtain the non-autonomous chiral model equation (4.2).³ In Section 4.1, we derive a family of exact solutions by application of Proposition 3.1. In Appendix B we recover two familiar linear systems (Lax pairs) for this equation. Writing φ in terms of a θ-independent φ̃, we obtain an equation for φ̃ which is related to the non-autonomous chiral model by the Miura transformation.

Symmetries. (4.2) is invariant under each of a family of transformations, and thus, more generally, under any combination of them.

A family of exact solutions

Let us first consider the equation d̄P = (dP)P, which is the first of (3.1). Using the above bidifferential calculus, and assuming that P̃ does not depend on θ, it translates to the system (4.4). The proof of the following result is provided in Appendix C.

Lemma 4.1. The following holds. (1) If P̃ and I + P̃² are invertible, the system (4.4) implies (4.5), with a constant matrix B.

Remark 4.1. If P̃ is diagonal, then (4.5) becomes the set of quadratic equations (2.11) in [6] (or (1.67) in [7]), which determine the "pole trajectories" in the framework of Belinski and Zakharov. In our approach there are more solutions, since P̃ need not be diagonal.

Remark 4.2. One could further restrict P̃ as in [42], but this would be unnecessarily restrictive, see Section 4.2.

Remark 4.3. Under an assumption spelled out in Appendix C (see (C.1)), for the bidifferential calculus under consideration, d̄P = (dP)P is equivalent to d̄P = P dP. The latter is one of our equations for R in Proposition 3.1. Setting R accordingly, with R̃ θ-independent, the invertible matrices P̃ and R̃ both have to solve (4.5).

The third of (3.1) becomes (4.6). Assuming that U and V are θ-independent, and recalling the θ-dependence of φ, the formula for φ in (3.2) requires X = e^θ X̃ with θ-independent X̃. Hence the last of (3.1) becomes the θ-independent Sylvester equation (4.7). Now Proposition 3.1 implies the following.

Proposition 4.1. Let the n × n matrices P̃ and R̃ be solutions of (4.5) (with a matrix B, respectively B′), with the properties that they commute with their derivatives with respect to ρ and z, and that I + P̃² and I + R̃² are invertible. Furthermore, let spec(P̃) ∩ spec(R̃) = ∅ and let X̃ be an invertible solution of the Sylvester equation (4.7) with constant m × n, respectively n × m, matrices U and V. Then g̃ given by (4.8), with any constant invertible m × m matrix⁴ g_0, solves the non-autonomous chiral model equation (4.2).

Proof. As a consequence of the spectrum condition, a solution X̃ of the Sylvester equation (4.7) exists and is unique. The further assumptions for P̃ and R̃ are those of Lemma 4.1, part (2). Furthermore, (4.6) is a consequence of (4.7) if the spectrum condition holds (also see Remark 3.1). Now our assertion follows from Proposition 3.1 and the preceding calculations.

Remark 4.4. The determinant of (4.8) is obtained via Sylvester's theorem, where we used the Sylvester equation (4.7) and assumed that it has an invertible solution.

Remark 4.5. As an obvious consequence of (4.7), U and V enter g̃ given by (4.8) only modulo an arbitrary nonzero scalar factor. We also note that a transformation with constant invertible n × n matrices T_1, T_2 leaves (4.5), (4.6), (4.7) and (4.8) invariant. As a consequence, without restriction of generality, we can assume that the matrix B in (4.5), and the corresponding matrix related to R̃, both have Jordan normal form.
Example 4.1. If B and B′ (chosen diagonal) have no eigenvalue in common, then (4.7) has a unique solution, given by a Cauchy-like matrix X̃. It remains to solve (4.5) (choosing B diagonal), which yields functions of the form

p_i = ρ^{-1} (z + b_i ± √((z + b_i)² + ρ²)),    (4.9)

and correspondingly r_i with constants b′_i. Since we assume that {p_i} ∩ {r_i} = ∅, the assumptions of Proposition 4.1 are satisfied. It follows that, with the above data, (4.8) solves the non-autonomous chiral model equation. The case where P̃ or R̃ is non-diagonal is exploited in the next subsection. But Example 4.1 will be sufficient to understand most of Section 5.

More about the family of solutions

Introducing matrices A and L via A = (zI + B)² + ρ²I and P̃ = ρ^{-1}(L + zI + B), we can take B in Jordan normal form, according to Remark 4.5. Let us first consider the case where B is a single r × r Jordan block. Then L can be computed by use of the generalized binomial expansion formula, noting that M_r^r = 0 as a consequence of N_r^r = 0 (with N_r the nilpotent part of the block). Hence we obtain a solution of (4.5) which is an upper triangular Toeplitz matrix. These matrices are nested and, from one to the next, only the entry in the upper right corner is new. For the above Jordan normal form of B, solutions of (4.5) are now given by⁵ P̃ = block-diag(P̃_{n_1}, . . . , P̃_{n_s}), where the blocks typically involve different constants replacing b, i.e. different eigenvalues of B. Since P̃_ρ and P̃_z obviously commute with P̃, and since I + P̃² is invertible⁶ for ρ > 0, Lemma 4.1, part (2), ensures that P̃ solves (4.4). If P̃ has the above form, and R̃ a similar form, and if P̃ and R̃ have disjoint spectra, it remains to solve the Sylvester equation⁷ (4.7) in order that (4.8) yields solutions of the non-autonomous chiral model equation. This leads to a plethora of exact solutions. We postpone an example to Section 5, where additional conditions considerably reduce the freedom we have here, see Example 5.2.

Reductions of the non-autonomous chiral model to Ernst equations

According to Section 4, a particular involutive symmetry of the non-autonomous chiral model (4.2) allows the generalized unitarity reduction g̃†γg̃ = γ, which means that g̃ belongs to the unitary group U(m; γ).⁸ Another reduction, associated with an involutive symmetry, is g̃† = g̃. Imposing both reductions simultaneously amounts to setting up conditions that translate into γP†γ = P and P² = P. In particular, P is a projector. If we require in addition that rank(P) = 1, which for a projector is equivalent to tr(P) = 1 [43, Fact 5.8.1], the following parametrization of g̃ can be achieved (also see e.g. [44,45,46,47]), where v is an m-component vector with v†γv = 0. This parametrization is invariant under v → cv with a nowhere-vanishing function c, so that the first component of v can be set to 1 in the generic case where it is different from zero. If γ has signature m − 1, this recovers parametrizations used in [44,45,46]. The condition tr(P) = 1 then corresponds to a normalization condition on v. We also note that det(g̃) = −det(γ).

The following result, which we prove in Appendix C, shows how the reduction conditions (5.1) and (5.3) can be implemented on the family of solutions of the non-autonomous chiral model obtained via Proposition 4.1.

⁶ Note that I_r + P̃² is invertible for ρ > 0. ⁷ Under the stated conditions the Sylvester equation possesses a unique solution, and a vast literature exists on how to express it. ⁸ If γ has p positive and q negative eigenvalues, this group is commonly denoted U(p, q).

(2) If, moreover, suitable reality conditions on the matrix data hold, then g̃ given by (4.8) is Hermitian. Then we have Γ² = I and (5.4) holds. If spec(P̃) ∩ spec(R̃) = ∅, the corresponding Sylvester equation has a unique solution X̃.
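For readers who want to experiment numerically, the sketch below solves a generic Sylvester equation AX + XB = Q with SciPy and checks the eigenvalue condition that guarantees uniqueness. The random matrices are placeholders; this does not implement the specific data P̃, R̃, V U of (4.7).

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # stand-in coefficient matrix
B = rng.standard_normal((n, n))   # stand-in coefficient matrix
Q = rng.standard_normal((n, n))   # stand-in right-hand side

# AX + XB = Q has a unique solution iff spec(A) and spec(-B) are disjoint,
# the analogue of the condition spec(P) ∩ spec(R) = ∅ in the text.
assert not set(np.round(np.linalg.eigvals(A), 8)) & set(np.round(np.linalg.eigvals(-B), 8))

X = solve_sylvester(A, B, Q)
print(np.allclose(A @ X + X @ B, Q))  # True
```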
According to part (1) of Proposition 5.1, g̃ given by (4.8) satisfies the reduction conditions (5.6). This is a way to superpose solutions from the class obtained in Section 3, preserving the constraints (5.4): we simply block-diagonally compose the matrix data associated with the constituents. In an obvious way, this method can be extended to part (2) of Proposition 5.1.

Solutions of the Ernst equation of general relativity

We choose m = 2 and parametrize g̃ in terms of a complex function E and its complex conjugate Ē. Then (5.2) takes a corresponding form, so that (4.2) now becomes the Ernst equation

(Re E) (∂_ρ² + ρ^{-1} ∂_ρ + ∂_z²) E = (∂_ρ E)² + (∂_z E)²,

where, e.g., ∂_ρ denotes the partial derivative with respect to ρ. This equation determines solutions of the stationary axially symmetric Einstein vacuum equations. The following statements are easily verified, with p_i, r_i given by (4.9). Choosing g_0 so that det(g_0) = 1, the second of the reduction conditions (5.11) is solved by setting r_i equal to the expression for p_i with j_i exchanged by −j_i, noting that this expression equals −1/p_i. We shall write p, r instead of p_1, r_1. With U = (u_ij) and V = (v_ij), the remaining constraint g̃_12 = g̃_21 is solved by a corresponding condition on the entries.¹¹ In the following we assume that u_11 and v_11 are different from zero. Then u_11, u_12, v_11, v_21 drop out of g̃. Without restriction of generality, we can therefore choose u_11 = 1, u_12 = −u, v_11 = 1 and v_21 = −v. Then U and V commute with γ. g̃ is real in particular if either of the following conditions is fulfilled: (1) p, r real (which means b, b′ real) and u, v real; (2) the corresponding data related by complex conjugation. By a shift of the origin of the coordinate z, we can arrange in both cases that (5.12) holds, where b ∈ R in case (1) and b ∈ iR in case (2). Using (5.12) and introducing suitable constants, we obtain the Ernst potential (5.14). Setting jj′ = −1, the cases (1) and (2) simply distinguish the non-extreme and the hyperextreme Kerr-NUT space-times (see e.g. [50]). The constants satisfy m² + l² − a² = b².

Here p = ρ^{-1}(z + b + r) with r = ±√((z + b)² + ρ²), and r_i is also of the form (4.9) with a constant b′_i = b. The conditions for U and V restrict these matrices to a form with constants u_i, v_i. If r_1 = r_2 =: r (i.e. b′_1 = b′_2), it turns out that g̃ does not depend on r and v_i, and we obtain an Ernst potential with parameters which satisfy m² − a² + l² = 0. E is the Ernst potential of an extreme Kerr-NUT space-time.

Solutions of the Ernst equations in the Einstein-Maxwell case

See e.g. [6,24,51] for other derivations, and also [52], as well as the references cited there (also see [53,47]). We use a parametrization in which ⊺ denotes transposition. In the following we consider the case m = 3, where (4.2) becomes the system of Ernst equations for the potentials E and Φ, which determine solutions of the stationary axially symmetric Einstein-Maxwell equations (without further matter fields). If E = 1 and Φ = 0, then g̃ reduces to a constant matrix corresponding to the Minkowski metric.

Example 5.4 (Demiański-Newman). Let n = 2, with p, r as in (4.9) with constants b, b′. Solving g_0γU = UΓ and ΓV = V g_0γ, and recalling that U and V enter the solution formula (4.8) only up to an overall factor, leads to a specific form of U and V. According to Proposition 5.1, part (1), in order to obtain solutions of the Ernst equations it remains to determine conditions under which g̃ is Hermitian. By explicit evaluation one finds that this is so if one of two sets of conditions is satisfied. Without restriction of generality we can set b′ = −b, so that p and r are given by (5.12).
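The pole trajectories entering these examples are explicit functions of (ρ, z), so their defining property is easy to verify numerically. The sketch below evaluates p = ρ^{-1}(z + b + r) with r = ±√((z+b)² + ρ²) and checks that both branches satisfy the quadratic p² − 2ρ^{-1}(z+b)p − 1 = 0, which follows directly from the quoted expression and is the scalar counterpart of the pole-trajectory equations mentioned in Remark 4.1; the grid and the constant b are arbitrary illustrative choices.

```python
import numpy as np

def pole_p(rho, z, b, branch=+1):
    """Pole trajectory p = (z + b + branch*sqrt((z+b)^2 + rho^2)) / rho."""
    return (z + b + branch * np.sqrt((z + b)**2 + rho**2)) / rho

rho, z = np.meshgrid(np.linspace(0.1, 2.0, 40), np.linspace(-1.0, 1.0, 40))
b = 0.3                                   # arbitrary constant
p_plus  = pole_p(rho, z, b, +1)
p_minus = pole_p(rho, z, b, -1)

quad = lambda p: p**2 - 2.0 * ((z + b) / rho) * p - 1.0
print(np.max(np.abs(quad(p_plus))), np.max(np.abs(quad(p_minus))))  # ~0, ~0
print(np.max(np.abs(p_plus * p_minus + 1.0)))  # the two branches obey p p' = -1
```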
The Ernst potential E is again of the form (5.14), with correspondingly modified constants. The second Ernst potential is given by (5.19). In both cases, the parameters a, l, m, q_e, q_m are real and satisfy an algebraic relation analogous to the Kerr-NUT relation above. Cases (1) and (2) correspond to a non-extreme, respectively hyperextreme, Demiański-Newman space-time (see e.g. [50]). q_e and q_m are the electric and magnetic charge, respectively. Whereas (2) can be neatly expressed via (5.7), we have so far been unable to find a corresponding formulation of the conditions (1) in terms of the matrices P̃, R̃, U, V; also see Remark 5.4.

Conclusions

We addressed the m × m non-autonomous chiral model in a new way, starting from a very simple and universal solution-generating result within the bidifferential calculus approach. This resulted in an infinite family of exact solutions for any matrix size m, parametrized by matrices subject to a Sylvester equation. Solving the latter is a well-studied and fairly simple problem. To our knowledge, these solutions have not previously appeared in the literature, at least not in the compact form presented in this work.

The non-autonomous chiral model originally appeared in reductions of Einstein's equations. We demonstrated in Section 5 that the "multi-solitons" on a flat background, known in the case of stationarity, axial symmetry and vacuum, respectively electrovacuum, are indeed contained in the family of solutions that we obtained in Section 4 for the non-autonomous chiral model equation. More precisely, we found conditions to be imposed on the matrices that parametrize the latter solutions such that (in the cases m = 2, respectively m = 3) they become solutions of the Ernst equation(s) of general relativity. Only in the case of non-extreme multi-Demiański-Newman solutions were we not (yet) able to find a corresponding characterization of the matrix data.

Beyond the solutions found e.g. by Belinski and Zakharov, which in the present work correspond to diagonal matrices P̃ and R̃, there are solutions associated with non-diagonal matrix data. It may well be that such solutions can be obtained alternatively, e.g. in the Belinski-Zakharov framework with a dressing matrix involving higher-order poles, or by taking suitable limits in which some poles coincide. In any case, our approach yields these solutions directly. Moreover, relaxing the spectral condition for the matrices P̃ and R̃, the Sylvester equation has further solutions, provided that the matrix V U on its right-hand side is appropriately chosen. This is another possibility for obtaining new solutions. Finally, we should mention the possibility of making sense of the limit¹⁴ n → ∞ (where n × n is the size of the matrices that parametrize the solutions). In conclusion, at present it is not quite clear what the generated class of solutions really embraces.

Moreover, using the original method of Belinski and Zakharov, in the Einstein-Maxwell case no appropriate reduction conditions could be found (cf. [54]), and a different approach had to be developed [23,7]. We had fewer problems in this respect. On the other hand, the Belinski-Zakharov approach, the modified approach of Alekseev [23] in the Einstein-Maxwell case, and others can also be used to generate "solitons" on a non-flat background. Perhaps a corresponding extension of Proposition 3.1 exists. This is also suggested by the relation with Darboux transformations in Appendix A.
In any case, here we have a limitation of Proposition 3.1 (though not of the bidifferential calculus framework, which offers various other methods [1]), but also the advantage of a very simple and general result that covers physically interesting cases.

The appearance of a Sylvester equation is a generic feature of the solution-generating result formulated in Proposition 3.1 and in Appendix A (also see [1]). Sylvester equations and their simplest solutions, Cauchy-like matrices, have frequently appeared in the integrable systems literature, but this is the first time we have come across a Sylvester equation involving non-constant matrix data. A particularly nice feature is the fact that solutions can be superposed by simply composing their matrix data into bigger block-diagonal matrices. The corresponding Sylvester equation still has to be solved, but a unique solution exists if we impose a not very restrictive spectral condition on these matrix data.

In Appendix B we recovered two familiar Lax pairs for the non-autonomous chiral model from the general linear equation (2.7) in the bidifferential calculus framework. Our route toward exact solutions in Section 4 is more closely related to Maison's Lax pair than to that of Belinski and Zakharov: we eliminated the θ-dependence, whereas in the Lax pair of Belinski and Zakharov the θ-dependence is kept and the system involves derivatives with respect to this "spectral parameter".

Our results extend beyond the Einstein-Maxwell case and are also applicable to higher-dimensional gravity theories (see e.g. [9,10,11,12,13,14,15,16,17,18,19]). Besides that, other reductions of the non-autonomous chiral model (for some m) are of interest, see e.g. [20,21], and the set of solutions that we obtained in this work will typically be reducible to solutions of them. Proposition 3.1 generates solutions of (2.3), respectively (2.4), from solutions of linear equations; however, the equations for P and R arise as integrability conditions of the latter. In previous work [1,32,33,34], we chose P and R as d- and d̄-constant matrices, which indeed reduces the equations that have to be solved to linear ones only, and we recovered (and somewhat generalized) known soliton solution families. In the case of the non-autonomous chiral model and, more specifically, its reduction to the Ernst equation, it turned out to be necessary to go beyond this level, and thus to consider genuine solutions of the nonlinear equations for P and R, in order to obtain relevant solutions like those associated with multi-Kerr-NUT space-times and their (electrically and magnetically) charged generalizations. This also suggests a corresponding application of the theorem (or Proposition 3.1) to other integrable PDDEs.

A Via a Darboux transformation and a projection to a non-iterative solution generating result

Lemma A.1. Let P be invertible. The transformation (A.1), where X is an invertible solution of (2.7) with A = dφ = (dg)g⁻¹ and d̄P = (dP)P, maps a solution of the Miura transformation equation (2.10) into another solution. (A.1) is an essential part of a Darboux transformation, cf. [1]. In the following we will use this result to derive a theorem which essentially reduces to Proposition 3.1, see Remark A.2.

Proof. Since (φ, g) is assumed to solve (2.10), we have the corresponding identity. Multiplying by U from the left and by V from the right, we obtain an equation which is equivalent to (2.10).

Theorem A.1. Suppose (−R, S) solves the Miura transformation equation (2.10) in Mat(n, n, B). In addition we require (A.5) and (A.6), where U ∈ Mat(m, n, B) and V ∈ Mat(n, m, B) are d- and d̄-constant, and Y ∈ Mat(n, n, B).
Then φ and g given by (A.7) also solve the Miura transformation equation (2.10), and thus (2.3), respectively (2.4).

Proof. Since we assume that (−R, S) solves the Miura transformation equation (2.10) in Mat(n, n, B), according to Lemma A.1 this also holds for the transformed pair. Using the first of (A.6), we find that (A.2) holds with φ̃ = Y X⁻¹. Now (A.3) yields the asserted formulas for φ and g. According to Lemma A.2, φ and g solve the Miura transformation equation (2.10). Together with (A.5), the first of (A.6) implies a further condition, which is satisfied if the last two conditions of (A.6) hold.

Remark A.1. This theorem generalizes a previous result in [1], which has been applied in [1,32,33,34] with d- and d̄-constant P, R, in which case only linear equations have to be solved in order to generate solutions of (2.3), respectively (2.4). The above derivation shows that the theorem may be regarded as a combination of a Darboux transformation (Lemma A.1), on the level of matrices of arbitrary size, and a projection mechanism (Lemma A.2). The projection idea can be traced back to work of Marchenko [57]. More generally, the above result can be formulated in terms of suitable operators, replacing the matrices that involve a size n. The next remark shows that, with mild additional assumptions, Theorem A.1 reduces to Proposition 3.1.

Remark A.2. A transformation of the data by any invertible Q ∈ Mat(n, n, B) leaves the expressions for φ and g in (A.7) invariant. It is also a symmetry transformation of (A.5) and (A.6) if d̄Q⁻¹ = (dQ⁻¹)P. As a consequence, under the assumption that Y is invertible, without restriction of generality we can set Y = I, where I is the n × n identity matrix. Then φ is given by the expression in (3.2). We further note that (A.4) and the second of (A.6) imply d̄(RS) = 0. Assuming that R is invertible, we thus have S = R⁻¹C with an invertible d̄-constant C. The expression for g in (A.7) then involves the factor U C⁻¹ V, assuming temporarily the invertibility of U C⁻¹ V. Together with φ, this remains a solution of (2.10) if we drop the last factor; the resulting expression also makes sense without the above additional invertibility assumptions. We can still translate it into a simpler form: from the first of (A.6), which now has the form of the last of (3.1), we obtain a relation whose multiplication by U from the left and by V from the right, used in our last formula for g, leads to the expression for g in (3.2).

B Linear systems for the non-autonomous chiral model

B.1 Maison's Lax pair

Choosing P = pI with a function p(ρ, z), the equations for P̃ can easily be integrated, which results in¹⁵ p = ρ⁻¹(z + b + r) with r = ±√((z + b)² + ρ²), where b is an arbitrary constant. In terms of X = g⁻¹X̃, the above linear system, simplified by setting c₁ = c₂ = 0, takes the form

X_ρ = −[p/(1 + p²)] (g⁻¹g_z + p g⁻¹g_ρ) X,   X_z = [p/(1 + p²)] (g⁻¹g_ρ − p g⁻¹g_z) X.

This system is equivalent to a linear system for the non-autonomous chiral model first found by Maison in 1979 [4] (also see [41]).

B.2 The Belinski-Zakharov Lax pair

Using, instead of θ, the variable λ = −ρe^θ, we consider the linear system (2.7) with P = I, which trivially solves (2.8), i.e. d̄X = AX + dX. Writing the system in terms of λ, the integrability condition (2.9) takes a form in which we assume that A, B are λ-independent. Solving the first (zero curvature) condition, the second equation becomes the non-autonomous chiral model equation (ρ g_ρ g⁻¹)_ρ + (ρ g_z g⁻¹)_z = 0. The above linear equation then leads to the Belinski-Zakharov Lax pair [6] (also see [7, Chapter 8]). We note that the "spectral parameter" λ has its origin in a coordinate of the self-dual Yang-Mills equation. We also note that A = (dg)g⁻¹ (using g_λ = 0).
C.2 Proof of Proposition 5.1

Using the solution formula (4.8), the first of the reduction conditions can be written out explicitly. Expanding the left-hand side and using the Sylvester equation (4.7) to eliminate V U, it indeed turns out to be satisfied. To complete the proof of (1), it remains to derive the trace formula, which follows by a direct computation. In order to prove (2), we consider the Hermitian conjugate of the Sylvester equation (4.7). By use of (5.7), and with the help of g₀γU = UΓ, ΓV = V g₀γ and (g₀γ)² = I, it takes a form which, by comparison with the original Sylvester equation and use of the spectrum condition, allows us to conclude that X̃† = −ΓX̃Γ. It follows that U(R̃X̃)⁻¹V g₀ is Hermitian, and thus also g̃ given by (4.8).
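The vacuum Ernst equation displayed in Section 5 can also be checked numerically for a candidate potential. A minimal sketch (Python with NumPy; the finite-difference scheme, the grid, and the choice of the Curzon solution E = exp(−2m/√(ρ² + z²)), a classical exact solution of this equation, are illustrative and not taken from the paper):

```python
import numpy as np

def ernst_residual(E, rho, z):
    """Central-difference residual of the vacuum Ernst equation
    Re(E)(E_rr + E_r/rho + E_zz) - (E_r^2 + E_z^2) on a (rho, z) grid."""
    drho, dz = rho[1] - rho[0], z[1] - z[0]
    E_r, E_z = np.gradient(E, drho, dz)       # first derivatives (axes 0, 1)
    E_rr = np.gradient(E_r, drho, axis=0)     # second derivatives
    E_zz = np.gradient(E_z, dz, axis=1)
    R = rho[:, None]                          # broadcast rho over the z axis
    return E.real * (E_rr + E_r / R + E_zz) - (E_r**2 + E_z**2)

# Test with the static Curzon potential, avoiding the singular point at the origin.
m = 1.0
rho = np.linspace(0.5, 3.0, 200)
z = np.linspace(-1.5, 1.5, 200)
Rg, Zg = np.meshgrid(rho, z, indexing="ij")
E = np.exp(-2 * m / np.sqrt(Rg**2 + Zg**2))
print(np.abs(ernst_residual(E, rho, z)).max())  # small: only discretization error
```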
Near theoretical ultra-high magnetic performance of rare-earth nanomagnets via the synergetic combination of calcium-reduction and chemoselective dissolution

Rare earth permanent magnets with superior magnetic performance have generally been synthesized through chemical methods incorporating calcium thermal reduction. However, a large challenge still exists with regard to the removal of remaining reductants, byproducts and trace impurities generated during the purifying process, which serve as inhibiting intermediates, inducing productivity and purity losses and a reduction in magnetic properties. Nevertheless, the importance of the post-calciothermic reduction process has never been seriously investigated. Here, we introduce a novel approach for the synthesis of a highly pure samarium-cobalt (Sm-Co) rare earth nanomagnet with near theoretical ultra-high magnetic performance via consecutive calcium-assisted reduction and chemoselective dissolution. The chemoselective dissolution effect of various solution mixtures was evaluated through the purity, surface microstructure and magnetic characteristics of the Sm-Co. As a result, only the NH4Cl/methanol solution mixture was capable of selectively rinsing out impurities without damaging the Sm-Co. Furthermore, treatment with NH4Cl led to substantially improved magnetic properties, over 95.5% of the Ms of bulk Sm-Co. The mechanisms behind the enhanced phase purity and magnetic performance were fully elucidated on the basis of analytical results and calculated thermodynamic parameters. We further demonstrate the potential application of chemoselective dissolution to other intermetallic magnets.

In order to achieve further enhanced magnetic properties [14,15], one way to precisely control the fibre diameter of magnetic structures is to employ an electrospinning process. When the fibre dimension reaches the single-domain size of a given magnet (e.g., Sm2Co17: sub-micron scale), the theoretical maximum coercivity can be achieved [16].

The reduction process is an indispensable step in all chemical approaches that prepare rare earth magnets from oxides. Because rare earth elements possess a highly negative reduction potential (e.g., Sm3+/Sm = −2.41 V, whereas for a transition metal, Co2+/Co = −0.28 V) and a low free energy of oxidation (e.g., Sm = −5.73 × 10^5 J at 25 °C), rare earth oxides are extremely stable and difficult to reduce to their metallic phase under H2 [12,17,18]. Calcium (Ca) granules or calcium hydride (CaH2) powder, with an oxidation energy (e.g., Ca = −6.04 × 10^5 J at 25 °C) lower than that of all rare earth metals, enable this reduction, producing metallic rare earth magnets and calcium oxide (CaO) [19]. To eliminate the unconsumed or residual calcium phases, a post-calciothermic reduction process is imperative; thus far, dilute acidic solutions and/or deionized water have conventionally been used for rinsing out leftover reductants [20-25]. However, byproducts unavoidably formed, resulting in poor magnetic properties of the resultant nanomagnets. Because the residue reacts intensely with water, liberating an enormous amount of heat, Ca/CaO forms a water-insoluble calcium compound that remains as a non-magnetic product. Moreover, H2 gas is produced vigorously and protons (H+) form in acidic solution, causing serious damage to the nanomagnets. In the worst case, magnetic phase decomposition can occur [26].
Thus, even after well-controlled synthesis of the nanomagnets, the presence and unsuccessful removal of unwanted impurities led to inferior magnetic properties as well as surface damage to the hard magnetic nanomaterials. The surface defects further hindered exchange-coupling interactions at hard/soft-magnetic interphases. However, most previously reported studies focused only on the synthetic results, without covering the loss in magnetic properties induced by side products and the interaction between byproducts and the treatment solutions [22,27-29]. Interestingly, Wang et al. proposed a novel washing route (i.e., ethyl alcohol-water; a two-step process) for the synthesis of Nd-Fe-B nanoparticles with excellent magnetic properties; however, they could not avoid the problems of oxidation and the formation of serious defects on the surface of the metallic magnetic phase [30]. To the best of our knowledge, there have been no detailed studies addressing the effects of a chemoselectively dissolving solution on the surface characteristics and magnetic properties of nanoscale magnetic structures prepared by the R-D process.

Here, 1-D highly pure samarium-cobalt nanostructures with near theoretical ultra-high magnetic performance were synthesized via consecutive electrospinning, calcium thermal reduction and chemoselective dissolution. The chemoselective effects of various conventional solutions were evaluated and discussed in terms of the magnetic properties and surface microstructural characteristics of the treated rare earth magnetic Sm2Co17 nanofibres. Moreover, the applicability of the most efficient selective-dissolving solution to other rare earth based magnetic phases (e.g., the SmCo5 and Nd2Fe14B systems) is also discussed, with a view to obtaining high purity and outstanding magnetic performance and to demonstrating further potential as a raw material for exchange-coupled magnets. A graphical summary of our experimental procedure can be seen in Fig. 1.

Results and Discussion

Preparation of solutions for chemoselective dissolution. Solutions for chemoselective dissolution must fulfill the following condition: they must possess a high Ca/CaO solubility or react fully with these calcium compounds. Pure alcohols (e.g., ethanol, methanol) were not selected because these solvents cannot react with the byproducts; additionally, CaO and CaH2 possess extremely low solubility in alcohols [31,32]. Ethylene glycol and glycerol are potentially usable; however, these chemicals were excluded due to their sluggish reactivity with CaO [33]. Distilled water and dilute acidic solutions (e.g., acetic acid, hydrochloric acid (HCl)) have already been widely reported as traditional washing solutions [20,21,34]. In this study, the use of a strong acid was not considered, even in largely diluted form, as it could lead to serious corrosion of the metallic magnetic compounds. CaO has been reported to be soluble in water-based sugar solutions; for example, a 34 w/v% sucrose solution dissolved 9.45 mass% of CaO at 25 °C [35]. Glucose, rhamnose, lactose and raffinose could be employed as sugars; however, these powders exhibit relatively low solubility compared to sucrose, which possesses a solubility of 201 g/100 mL of water [36]. The application of an NH4Cl-methanol mixed solution after calcium-assisted thermal reduction has been reported for the synthesis of a single-element magnet (i.e., α-Fe) and some nonmagnetic materials (i.e., LaNiO2, La2CuO4) [34,37-42].
However, only a few papers have been published for binary or ternary alloy magnets [27,43,44]. All things considered, four different solutions were selected: pure distilled water, 0.1 M dilute acetic acid solution, 85 w/v% sucrose solution and 0.1 M NH4Cl in methanol.

Figure 2 shows the surface morphology of Sm-Co nanofibres obtained after washing and drying under the various dissolution conditions. As-reduced samples were coated with a rough layer of CaO and residual CaH2 particles. There was no distinct difference in morphology between samples obtained before and after dissolution using only distilled water (Fig. 2(a)). Different layer morphologies were observed when samples were treated with dilute acid or sucrose solutions, as can be seen in Fig. 2(b,c). Interestingly, nanofibres with a smooth surface morphology were obtained when the NH4Cl/methanol solution was employed (Fig. 2(d)).

Powder X-ray diffraction patterns of the treated Sm-Co nanofibres can be seen in Fig. 3. Prior to treatment, the unreacted CaH2, CaO and pure Sm2Co17 phases were observed. As CaO and residual CaH2 reacted with distilled water to generate calcium hydroxide (Ca(OH)2), the diffraction patterns of this insoluble phase and some CaO were obtained (Fig. 3(a); see the results for different numbers of washes in Fig. S6) [45]. When the aqueous acid and NH4Cl/methanol solutions were applied, a clear Sm2Co17 diffraction pattern was observed (Fig. 3(b,d)). However, a broadened pattern at low angles, in the range 20-40°, was also visible in Fig. 3(b), suggesting damage to the Sm-Co such as phase amorphization. Meanwhile, a high-intensity water-insoluble CaCO3 phase was observed together with low-intensity Sm2Co17 in Fig. 3(c); this can be attributed to side reactions between the calcium compounds and the saccharose solution [35,46-48].

The best dissolution candidate should be able to selectively rinse away Ca without dissolving Sm or Co. To confirm this, the concentrations of three elements (Ca, Sm and Co) in each solution collected during the dissolution process were determined by ICP-OES, with an error of approximately ±0.1% (Fig. 4(a)). For all the given solution conditions, over 600 mg/L of Ca was successfully removed. However, considerable amounts of Sm and Co, both over 150 mg/L, were also detected in the dilute acetic acid solution. It is considered that the acidic solution had an undesirable impact on the physical characteristics (i.e., structural and magnetic properties) of the Sm-Co. There was also a colour change in some solutions (Fig. 4(b)): the dilute acetic acid solution and the sucrose juice turned red and yellow, respectively, while the distilled water and the NH4Cl/methanol solution remained colourless. This may be related to the presence of hydrogen ions, H+, during the rinsing process. It has been reported that an excess amount of H+ in an acidic solution changes the colour of the cobalt(II) acetate ionic complex to pinkish red, which accounts for the colour change seen with metallic Sm-Co; colourless samarium acetate, Sm(CH3COO)3·xH2O, also formed [49]. The reaction of calcium with the saccharose solution yielded viscous yellow-brown solutions in which monocalcium saccharates were dispersed [48]. To investigate the microstructure and phase of the surface, the as-rinsed Sm2Co17 fibres were analysed via TEM (see Fig. 5).
With regard to the water-treated sample, altocumulus-like layers consisting of Ca(OH)2 (JCPDS No. 70-5492) and CaO (JCPDS No. 76-8925) were observed (Fig. 5(a)). The filiform layer produced by the acidic solution corresponded to Sm2Co17H5 (JCPDS No. 79-9700) (Fig. 5(b)). In Fig. 5(c), there were several amorphous layers and CaCO3 (JCPDS No. 05-0586) on the surface of the sucrose-treated Sm2Co17 nanofibres, which may be attributed to the unexpected reaction between CaO/Ca(OH)2 and the sugar derivatives. It is believed that a small amount of calcium saccharate, Ca(C12H22O11)2, with a very long carbon chain, remained on the surface as an amorphous layer [50]. Naked Sm2Co17 crystals (JCPDS No. 65-7762) were clearly seen on the surface of the fibres treated with the NH4Cl/methanol solution, implying that no damage occurred to the resulting fibres (Fig. 5(d)). The elemental profiles of the surface for Sm, Co, Ca, carbon (C) and oxygen (O) were investigated using TEM-EDS, and the results were in accordance with the phases formed in each solution. There was an appreciable quantity of Ca in the water-treated Sm2Co17 samples, indicating that distilled water was inadequate for removing CaH2 and CaO. The sucrose solution turned out to be unsuitable as a treatment solution because of a large portion of C originating from the CaCO3 phase, as in the water-treatment case. In the acid-treated Sm2Co17 there was a small amount of Ca; however, a considerable amount of O was observed. By contrast, for the NH4Cl/methanol-treated Sm2Co17 sample, the TEM-EDS data confirmed the presence of Sm and Co without any impurities, including Ca and C, within the standard error range of about ±1%.

The magnetic properties of the Sm2Co17 powder samples were investigated using a PPMS, without compaction, magnetic alignment or sintering. The magnetic hysteresis loops as a function of treatment solution can be seen in Fig. 6(a). The corresponding saturation magnetisation (Ms), remanence (Mr), intrinsic coercivity (Hci) and squareness (Mr/Ms) values are given in Fig. 6(b). Despite utilization of the same Sm2Co17 as the starting powder material, the magnetic properties varied depending on the solution used during the dissolution treatment, because the Hci and Ms values are strongly affected by the purity of the phase [51]. Pure bulk Sm2Co17 is known to possess a high Ms (Mbulk) of 114.0 emu/g (calculated from 1.0 MA/m with a Sm2Co17 density of 8.769 g/cm³) [52]. For the NH4Cl/methanol-treated nanofibres, Ms was found to be 108.9 emu/g, which is 95.5% of the theoretical value for Sm2Co17. It is worth mentioning that this difference (~5 emu/g) is attributable to a magnetically "dead" layer on the surface of the Sm-Co fibres within the nanoscale regime [53,54]. By contrast, Ms deteriorated to as low as 62% of the bulk value in the other cases. For the as-washed samples treated with distilled water or sucrose juice, the decrease in Ms resulted from a considerable amount of impurities, including CaO and Ca(OH)2, while Hci was not affected by these diamagnetic materials [55]. With regard to the surface treatment with the acidic solution, a remarkable drop in Hci occurred due to the formation of a thin Sm2Co17Hx layer with soft magnetic characteristics. Moreover, interstitial hydrogen in the magnetic phase reduces the anisotropy field, leading to a further drastic decrease in Hci [22,56,57].
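The bulk reference value quoted above follows from a simple unit conversion (1 emu/cm³ = 10³ A/m, and mass magnetisation is the volume magnetisation divided by the density). A quick check in Python, using only the numbers quoted in the text:

```python
# Mass magnetisation from volume magnetisation: Ms[emu/g] = M[A/m] / (1000 * rho[g/cm^3])
M_vol = 1.0e6                  # 1.0 MA/m, bulk Sm2Co17 (value quoted in the text)
density = 8.769                # g/cm^3, Sm2Co17
M_bulk = M_vol / (1000 * density)
print(f"M_bulk = {M_bulk:.1f} emu/g")            # 114.0 emu/g

Ms_treated = 108.9             # emu/g, NH4Cl/methanol-treated nanofibres
print(f"Ms/M_bulk = {Ms_treated / M_bulk:.1%}")  # about 95.5%
```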
Therefore, it can be concluded that the NH4Cl/methanol solution is the most suitable chemoselective dissolution medium, leading to high-magnetic-performance 1-D Sm2Co17 nanostructures.

Effect of chemoselective dissolution induced by different reaction mechanisms. All of the aforementioned dissolution reactions were evaluated in terms of spontaneity, i.e., a negative Gibbs free energy change, where the standard Gibbs free energy (ΔG°) and the enthalpy change (ΔH°) were calculated using the HSC Chemistry software, assuming a reaction temperature of 25 °C throughout [58].

Distilled water. CaO and residual CaH2 react with water as

CaO + H2O → Ca(OH)2  (1)
CaH2 + 2H2O → Ca(OH)2 + 2H2  (2)

The produced Ca(OH)2 layer located on the outer Sm2Co17 surface could readily coat the surface, preventing further reaction of residual CaO with water deeper inside the body. Thus, both CaO and Ca(OH)2 phases were observed in the X-ray diffraction pattern and micrographs. The negative ΔH of Equations (1) and (2) indicates that the two reactions are exothermic and produce a large amount of heat.

Dilute acetic acid solution. When dilute acetic acid was utilized, CaO/CaH2 and the dilute acetic acid could react as follows:

CaO + 2CH3COOH → Ca(CH3COO)2 + H2O  (3)
CaH2 + 2CH3COOH → Ca(CH3COO)2 + 2H2  (4)

Compared to the case of distilled water, the large exothermic enthalpy of Equations (3) and (4) accelerated the generation of activated H2, resulting in hydrogenation of the Sm-Co nanostructures [57,59]. Although the hydride phase was indistinguishable in the X-ray diffraction pattern (see Fig. S2), the deteriorated magnetic properties provided conclusive evidence for the formation of Sm2Co17Hx with a small Hci. Also, because samarium and cobalt have lower ionization potentials in water than hydrogen, remaining protons (H+) from the acid could attack the Sm-Co alloy, leading to the release of Sm3+ and Co2+ [60]. These metal ions could form ionic complexes, and cobalt acetate could form amorphous cobalt(II) oxide as in Equation (5) [61]. The large amount of oxygen in Fig. 5(b) is believed to be due to such an amorphous CoO phase.

Sucrose solution. It has been reported that greater CaO solubility can be obtained with a higher sugar concentration in solution, owing to the formation of additional calcium saccharate [35]. Considering that sucrose has a melting temperature greater than 160 °C and a solubility of 201 g/100 mL of water, a sucrose concentration of 85 w/v% in water was selected as appropriate [36,63]. However, as shown in Fig. 5(c), the organic material caused CaCO3 formation: the large amount of heat produced by the exothermic reactions between the calcium compounds and water (Equations (1) and (2)) led to partial thermal breakdown of the sugar (C12H22O11), generating CO2 gas and organic structures containing one carbon atom fewer than the parent molecule. When this CO2 gas met the residual calcium compounds, CaCO3 formed readily (Equations (8) and (9)) [47].

NH4Cl/methanol solution. The methanol solubilities of NH4Cl powder and CaCl2 at 25 °C are 3.54 g and 29.2 g per 100 g of methanol, respectively; additionally, NH4OH is well soluble, at 31.3 g/100 g of methanol [64]. The solubility of hydrogen gas, which leads to rare earth magnet deactivation, was much lower in methanol than in water [65]. Compared to the two cases above, the lower enthalpy change shows that Equations (10) and (11), the reactions of CaO and CaH2 with NH4Cl, occur favourably while reducing hydrogenation of the Sm2Co17. Hence, the byproducts were removed completely by rinsing with methanol, without any damage to the Sm-Co.
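The sign and magnitude of ΔH° and ΔG° for a reaction such as Equation (1) can be checked against tabulated standard formation values. A minimal sketch (Python; the formation enthalpies and free energies below are standard textbook values at 25 °C, not the HSC Chemistry outputs used in the study):

```python
# Standard formation values at 25 C in kJ/mol (textbook values, not from the paper)
dHf = {"CaO": -635.1, "H2O(l)": -285.8, "Ca(OH)2": -985.2}
dGf = {"CaO": -603.3, "H2O(l)": -237.1, "Ca(OH)2": -897.5}

def reaction_delta(table, reactants, products):
    """Hess's law: sum over products minus sum over reactants,
    weighted by stoichiometric coefficients."""
    side = lambda d: sum(n * table[s] for s, n in d.items())
    return side(products) - side(reactants)

# Equation (1): CaO + H2O -> Ca(OH)2
reactants, products = {"CaO": 1, "H2O(l)": 1}, {"Ca(OH)2": 1}
print(f"dH = {reaction_delta(dHf, reactants, products):.1f} kJ/mol")  # ~ -64.3, exothermic
print(f"dG = {reaction_delta(dGf, reactants, products):.1f} kJ/mol")  # ~ -57.1, spontaneous
```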
That is to say, the NH4Cl/methanol combination is a uniquely suitable solution for the chemoselective dissolution treatment of Sm2Co17 nanostructures.

Effect of NH4Cl concentration and dissolution time. Magnetic hysteresis curves of the solution-treated Sm-Co nanofibres can be seen in Fig. 7(a). As the NH4Cl concentration increased from 0.05 M to 0.1 M, Ms increased from 102.0 emu/g to 108.9 emu/g and Hci increased from 7497.2 Oe to 7994.3 Oe. When the concentration was further increased to 0.5 M, all magnetic properties were retained. Likewise, varying the dissolution time from 10 to 120 min produced no distinct difference in the X-ray patterns of the treated samples (see Fig. S5). Therefore, NH4Cl concentrations above 0.1 M and longer treatment durations did not further enhance the magnetic properties of the hard-phase material, while byproducts were still selectively rinsed away.

Applicability of the chemoselective dissolution effect to other hard magnetic phases. As mentioned previously, a near-theoretical value of Ms is a clear manifestation of a highly pure single-phase magnet, as verified by the XRD, TEM and ICP-OES analyses. To discuss the applicability of the chemoselective dissolution effect not only to Sm2Co17 but also to other rare earth based magnetic phases (e.g., SmCo5, Nd2Fe14B) through a comparison of Ms, we summarize in Table 1 the experimental Ms values obtained in the current study and in other literature for each hard magnetic phase. Hci values are not included because coercivity is governed by extrinsic factors such as the shape, size and microstructure of the magnet [67]. All the listed nanomaterials were synthesized via calcium thermal reduction, were composed of a single phase and were obtained after solution dissolution in their own particular way. Even allowing for differences in treatment and measurement conditions, the highest Ms/Mbulk ratios, over 80%, were achieved only with NH4Cl-treated samples, while most of the other cases yielded small Ms values of less than half of Mbulk. Furthermore, when the same solution-treatment processes were applied to samples of the same phase, similar levels of Ms were obtained regardless of the synthetic method. This convincingly demonstrates that, beyond the loss attributable to the "dead" layer on the surface of nanostructured magnets, a substantial Ms loss arises from side products formed by chemical interactions during the rinsing processes for Ca removal. In this regard, chemoselective dissolution with NH4Cl plays a decisive role in the preparation of high-purity nanomagnets. Further investigations and experiments are needed for other intermetallic systems; nevertheless, the possibility of Ms improvements for the Sm-Co and Nd-Fe-B systems has been tentatively confirmed by this comparison.

Conclusion

In this study, we examined the magnetic properties of calcium-reduced Sm2Co17 nanofibres prepared with various treatment solutions (i.e., distilled water, dilute acetic acid, sugar solution and NH4Cl/methanol solution) and discussed the effects of chemoselective dissolution on the purity, surface microstructure and magnetic characteristics of the Sm-Co. A comparative study was performed using calculated thermodynamic parameters, namely the changes in Gibbs free energy (ΔG°) and enthalpy (ΔH°) for each reaction at room temperature.
Despite utilizing the same Sm2Co17 nanofibres as the starting powder material, clear enhancements of Ms (about 108.9 emu/g, near the theoretical value) and a high Hci (about 7994.3 Oe) were obtained via the chemoselective reactivity of the NH4Cl/methanol solution, without any damage to the Sm-Co, whereas the other water-based solutions led to the formation of side products or to rare earth magnet deactivation. No further time or concentration dependence was observed for NH4Cl. Compared to previously reported studies, we deduce that the combination of calcium-assisted thermal reduction and subsequent treatment with NH4Cl/methanol is the key to achieving high purity and thus high magnetic properties. This concept is expected to overcome the otherwise inevitable property loss of calcium-reduced materials, can be extended to other magnetic materials obtained through calcium thermal reduction processes, and may help to prepare high-purity phases as raw materials for exchange-coupled magnets.

Methods

Chemicals. Calcium

Preparation of calcium-reduced hard magnetic nanostructures. As a hard magnetic material, Sm2Co17 is comparatively easier to prepare than three-element systems such as Nd-Fe-B. Sm2Co17 nanofibre synthesis was performed through a procedure modified from our previous work, involving electrospinning and several annealing processes [12]. The precursor fibres, consisting of Sm2O3 and fcc-Co, were mixed with CaH2 granules (CaH2:as-prepared nanofibre = 2:1 by volume) and the reduction-diffusion (R-D) process with CaH2 was performed at 700 °C for 3 h in argon (see the morphology, size distribution and phase transformation of the as-synthesized Sm2Co17 nanofibres in Fig. S1). After the reduction, most residual CaH2 granules were removed by sifting through a fine 16-mesh sieve in a glove box under nitrogen. The isolated powder was stored in a vacuum desiccator.

Chemoselective dissolution procedure. 0.1 g of calcium-reduced Sm2Co17 nanopowder was added to 100 mL of each solution and mixed for 30 min at 25 °C using a shaking incubator (60 rpm); the mixtures were then centrifuged at 8000 rpm for 20 min to separate the solid powder from the solutions. The obtained Sm-Co samples were mixed with 100 mL of each fresh solution and filtered again. All solutions were collected for elemental analysis. The nanofibres were finally rinsed with acetone to remove any residual solution and were stored in a vacuum oven until characterization, to limit partial oxidation and any unexpected reactions.

Data Availability

All data generated or analysed during this study are included in this published article and its Supplementary Information files.
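For reference, the solute amounts implied by the dissolution procedure are easy to recompute; a short sketch (Python; the molar masses are standard values, and the 100 mL batch size is the one stated in the Methods):

```python
# Solute masses for 100 mL batches of the washing solutions
V = 0.100                 # batch volume in litres (as stated in the Methods)

M_NH4Cl = 53.49           # g/mol (standard molar mass)
M_AcOH = 60.05            # g/mol, acetic acid (standard molar mass)

print(f"0.1 M NH4Cl in methanol: {0.1 * V * M_NH4Cl:.2f} g")  # ~0.53 g
print(f"0.1 M acetic acid:       {0.1 * V * M_AcOH:.2f} g")   # ~0.60 g
print("85 w/v% sucrose: 85 g per 100 mL of solution")
```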
The cyclooxygenase-2 selective inhibitor NS-398 does not influence trabecular or cortical bone gain resulting from repeated mechanical loading in female mice

Summary
A single injection of the cyclooxygenase-2 (COX-2) selective inhibitor NS-398 reduces bone's osteogenic response to a single period of mechanical loading in female rats, while women taking COX-2 selective inhibitors do not have lower bone mass. We show that daily NS-398 injection does not influence bone gain from repeated loading in female mice.

Introduction
Prostaglandins are mediators of bone cells' early response to mechanical stimulation. COX-2 expression is up-regulated by exposure of these cells to mechanical strain or fluid flow, and the osteogenic response to a single loading period is reduced by COX-2 inhibition. This study determined, in female mice in vivo, the effect of longer-term COX-2 inhibition on adaptive (re)modelling of cortical and trabecular bone in response to repeated loading.

Methods
Nineteen-week-old female C57BL/6 mice were injected with vehicle or NS-398 (5 mg/kg/day) 5 days a week for 2 weeks. On three alternate days each week, the right tibiae/fibulae were axially loaded [40 cycles (7 min)/day] three hours after injection. Left limbs acted as internal controls. Changes in three-dimensional bone architecture were analysed by high-resolution micro-computed tomography.

Results
In control limbs, NS-398 was associated with reduced trabecular number but had no influence on cortical bone. In loaded limbs, trabecular thickness and cortical periosteally enclosed volume increased. NS-398 had no effect on this response.

Conclusion
Pharmacological inhibition of COX-2 by NS-398 does not affect trabecular or cortical bone's response to repeated mechanical loading in female mice and thus would not be expected to impair the functional adaptation of bone to physical activity in women.

Introduction
Mechanical loading is the principal functional determinant of bone mass and architecture [1-3], and numerous studies have shown that prostaglandin signalling plays a key role in mechanotransduction, with cyclooxygenase-2 (COX-2) expression being rapidly up-regulated in both osteoblasts and osteocytes following exposure to fluid flow or mechanical strain in vitro [4-6]. Blocking prostaglandin production with indomethacin in experimental animals in vivo has repeatedly been shown to impair the osteogenic response to a single period of mechanical loading in cortical and trabecular bone [7-9]. Two studies in female rats have also shown that a single injection of the COX-2 selective inhibitor NS-398, 3 h prior to a single period of mechanical loading, reduces the osteogenic response in the cortex [9,10]. However, the effect of more sustained COX-2 selective inhibition on the adaptive response to mechanical loading in cortical bone remains less clear, and it is unknown in trabecular bone. In the cortex, the osteogenic response to two episodes of mechanical loading was not impaired in genetically modified female mice lacking COX-2 [11]. This could be due to compensation for the complete absence of COX-2 over the animals' lifetime; such compensation would be relevant to the clinical situation of COX-2 selective inhibitor use only if it can also occur over the comparatively shorter term.
This issue is important to resolve, especially in women, who have a higher risk of osteoporosis-associated fragility fractures than men, because non-steroidal anti-inflammatory drugs (NSAIDs), including COX-2 selective inhibitors, are widely prescribed, and a decrease in the skeletal response to physical activity would result in bone loss. Interestingly, a recent randomized controlled trial [12] did not find a suppressive effect of ibuprofen, a non-selective COX inhibitor, on hip areal bone mineral density (BMD) in premenopausal women who performed weight-bearing exercise for 9 months. Consistent with this finding, among users of COX-2 selective inhibitors, hip areal BMD was normal in postmenopausal women using oestrogen replacement therapy and higher in those not using oestrogen replacement therapy [13]. These clinical data appear to imply that functional adaptation of bone to daily loads is not inhibited by COX-2 selective inhibitors in women. In the present study, we assessed whether NS-398 affects bone's response to repeated periods of mechanical loading in female mice, using the well-characterized non-invasive tibia/fibula axial loading model [14-16]. This model allows examination of the effect of local mechanical stimulation, distinct from that of exercise, in both the trabecular and cortical bone compartments. To our knowledge, this is the first study investigating the effects of a COX-2 selective inhibitor on trabecular and cortical bone's adaptive response to repeated periods of mechanical loading.

Experimental design
The experiment was conducted in July-August 2009 at the Royal Veterinary College (London, UK), with the approval of the relevant ethical committees. Nineteen-week-old female C57BL/6 mice (Charles River Laboratories, Inc., Margate, UK) were divided into two body-weight-matched groups (n = 8 in each group) and treated with subcutaneous injections of vehicle [dimethyl sulphoxide (2.5 ml/kg): Sigma Chemical Co., St. Louis, Missouri, USA] or NS-398 (Tocris Cookson Inc., Ellisville, Missouri, USA) at a dose of 5 mg/kg/day for 2 weeks (days 1-5 and 8-12). During this period, the right tibiae/fibulae were subjected to external mechanical loading [14-16] 3 h after injection on three alternate days per week (see the "External mechanical loading" section for details). We chose to use NS-398, rather than clinically available COX-2 selective inhibitors, to be able to compare the present data with those obtained using a single injection of NS-398 and a single period of mechanical loading [9,10]; the dose and timing of injection were therefore determined based on these previous studies [9,10]. The left tibiae/fibulae were used as internal controls, as validated in the present model [16] and confirmed by others in the rat ulna axial loading model [17], and normal activity within the cages was allowed between external loading periods. On day 15, the animals were euthanised and their left control and right loaded tibiae/fibulae collected for analysis of three-dimensional bone architecture. Ovariectomy was not performed in the present study because oestrogen withdrawal could modify the effects of COX-2 selective inhibitors on bone [13].

External mechanical loading
The apparatus (model HC10; Zwick Testing Machines Ltd., Leominster, UK) and protocol for non-invasively loading the mouse tibia/fibula have been reported previously [14-16].
The tibia/fibula was held in place by a low level of continuous static preload (0.5 N for approximately 7 min), onto which a higher level of intermittent dynamic load (13.0 N) was superimposed in a series of 40 trapezoidal-shaped pulses (0.025-s loading, 0.050-s hold at the 13.5 N peak, and 0.025-s unloading), with a 10-s rest interval between pulses. Although a peak load of 12.0 N had been shown previously to induce significant osteogenic responses [18], the higher peak load (13.5 N) was selected in order to assess the effect of NS-398 on both lamellar and woven bone, because a previous study described different effects of NS-398 on lamellar and woven bone formation induced by a single loading episode [9]. It has been shown previously that this higher peak load results in loading-related woven bone formation in the cortical region of the proximal to middle tibiae, and loading-related lamellar bone formation in the cortical region of the middle fibulae as well as in the trabecular region (secondary spongiosa) of the proximal tibiae [16]. Strain gauges attached ex vivo to the proximal lateral tibial shaft of similar 19-week-old female C57BL/6 mice showed that a peak load of 13.5 N engendered a peak strain of approximately 1,800 με [19].

High-resolution micro-computed tomography analysis
Because mouse bone is small and the present axial loading-related osteogenesis is site-specific, high-resolution micro-computed tomography (μCT; SkyScan 1172; SkyScan, Kontich, Belgium) with a voxel size of 5 μm was used to quantify three-dimensional bone architecture at precisely comparable sites of the loaded and contralateral control tibiae/fibulae, as reported previously [15,16,18,19]. Trabecular bone volume/tissue volume (BV/TV), trabecular number and trabecular thickness were measured in the secondary spongiosa of the proximal tibiae (0.05-1.00 mm distal to the growth plate). Cortical bone volume, periosteally enclosed volume, medullary volume and polar moment of inertia, a parameter of structural bone strength, were determined in 0.5-mm-long sections at four sites of the tibiae [25% (proximal), 37% (proximal/middle), 50% (middle) and 75% (distal) of bone length from the proximal end] and at the 50% (middle) site of the fibulae.

Statistical analysis
All data are shown as means and SEM. Statistical analysis was performed by one-way or two-way ANOVA using SPSS (version 17.0; SPSS Inc., Chicago, USA). P < 0.05 was considered statistically significant.

Effects of NS-398 on trabecular and cortical bone
In trabecular bone of the proximal tibiae, NS-398 was associated with significant decreases in BV/TV and trabecular number, but not trabecular thickness, as shown in Table 1 (values are means ± SEM, n = 8 in each group; two-way ANOVA, P < 0.05 considered significant). In contrast, no effect of NS-398 was detected in cortical bone of the tibiae/fibulae.

Effects of NS-398 on trabecular and cortical bone's response to mechanical loading
In trabecular bone, mechanical loading significantly increased BV/TV, trabecular thickness and trabecular number (Table 1). Loading-related woven bone formation was not seen in the secondary spongiosa (Fig. 1a), as confirmed previously in fluorochrome-labelled sections [16]. In cortical bone, the effects of mechanical loading were site-specific; a loading-related increase in bone volume was obtained in the proximal and middle tibiae and middle fibulae, but not in the distal tibiae (Table 1).
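The group comparisons reported here (factors: drug treatment and loading) correspond to a standard two-way ANOVA. A minimal sketch of an equivalent analysis in Python using statsmodels rather than SPSS (the column names and BV/TV values are hypothetical, purely for illustration):

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: two illustrative rows per drug x loading cell
# (the real study had n = 8 mice per group).
df = pd.DataFrame({
    "bv_tv":  [12.1, 11.8, 13.5, 14.0, 10.9, 11.2, 13.0, 13.8],
    "drug":   ["vehicle", "vehicle", "vehicle", "vehicle",
               "NS-398",  "NS-398",  "NS-398",  "NS-398"],
    "loaded": ["control", "control", "loaded", "loaded",
               "control", "control", "loaded", "loaded"],
})

# Two-way ANOVA with interaction: main effects of drug and loading, plus drug x loading.
model = ols("bv_tv ~ C(drug) * C(loaded)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # P < 0.05 taken as significant
```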
Consistent with a previous finding [16], there was loading-related apparent woven bone formation in the proximal to middle tibiae, whereas no such woven bone response was observed at the middle fibulae (Fig. 1a). The loading-related increases in cortical bone volume and polar moment of inertia (Fig. 1b) were associated primarily with increased periosteally enclosed volume. No effect of NS-398 was observed on any of the loading responses at any site.

Discussion
The present experimental design enabled evaluation of the effects of selective pharmacological inhibition of COX-2 by daily NS-398 injection on the changes in three-dimensional bone architecture induced by repeated episodes of mechanical loading, in trabecular as well as cortical bone. In control bones, which received no more than normal functional mechanical loading, NS-398 slightly but significantly decreased trabecular BV/TV of the proximal tibiae. This is compatible with the small reduction in bone mass reported in COX-2-deficient mice [11]. In bones that had been artificially loaded, COX-2 inhibition had no discernible effect on the loading-related lamellar or woven bone response in either the trabecular or the cortical compartment. As a result, NS-398 showed no influence on the loading-related increase in polar moment of inertia, a parameter of structural bone strength. Although a small inhibitory effect of NS-398 on bone's response to mechanical loading, detectable only by histomorphometry, cannot be excluded, such an effect would not alter the conclusion of the present study.

The present data are consistent with the evidence from female mice lacking COX-2 [11], showing that bone adaptation to two consecutive days of mechanical loading does not require a functional COX-2 gene. Those authors [11] suggested a compensatory effect of COX-1 in vivo, though this enzyme does not appear to be important for bone cells' response to a single period of fluid flow in vitro [20]. If such compensation exists, it does not seem to be immediately available, since in female rats a single injection of NS-398 reduces the cortical response to a single period of mechanical loading [9,10]. The data we present here suggest that compensation for the pharmacological inhibition of COX-2 function does exist and can occur sufficiently swiftly to ensure that adaptive (re)modelling of trabecular and cortical bone in response to artificial mechanical loading over a 2-week period is not impaired.

The relevance of the present experiment in female mice to the human condition must take into account a number of differences between the two situations. Importantly, however, our experimental data on three-dimensional bone architecture analysed by high-resolution μCT are compatible with the clinical evidence that women taking COX-2 selective inhibitors such as celecoxib and rofecoxib do not have lower hip areal BMD [13]. In contrast to women, the use of COX-2 selective inhibitors is associated with lower hip areal BMD in men [13]. It remains to be elucidated whether there are sex differences in the effects of COX-2 inhibition on bone's response to mechanical loading. In conclusion, our present data demonstrate that in female mice, pharmacological inhibition of COX-2 using daily NS-398 injection does not affect trabecular or cortical bone gain engendered by repeated periods of mechanical loading over a 2-week period.
Should this experimental finding be translated into the clinical situation, it would suggest that in women long-term use of a COX-2 selective inhibitor does not impair the adaptive response of either trabecular or cortical bone to habitual mechanical loading and thus is not expected to contribute to bone loss by interference with this mechanism.
Object Tracking in Wireless Sensor Networks: Challenges and Solutions

Wireless Sensor Networks (WSNs) are composed of small, inexpensive, battery-operated sensor nodes deployed over a geographical area. WSNs are used in many applications, such as border patrolling, military intrusion detection, wildlife animal monitoring, surveillance of natural disasters and healthcare systems. Mobile object tracking is a vital task in all these applications. The goal of this work is to highlight the most important challenges in the field of object tracking and to provide a survey of WSN architectural designs and implementation approaches for tackling this problem. To that end, we analyze how each approach responds to each challenge and where it falls short. This analysis should provide researchers with a state-of-the-art review and inspire them to propose novel solutions.

Introduction
A Wireless Sensor Network (WSN) is usually comprised of hundreds of small sensor devices, deployed randomly or manually in order to observe an event of interest. If sensors are in the proximity of the event of interest, they track it and periodically report the observed data back to the base station (sink). The base station serves as a gateway to remote command centers for further processing and data aggregation (Younis et al., 2014). A sensor is a small device that consists of four units: a sensing unit, which transforms a physical quantity into an electrical signal (Sendra et al., 2011); a processing unit (microcontroller and memory), which controls the sensor's activities and executes the communication protocols; a communication unit (wireless radio transceiver), used to communicate with the external world and neighboring nodes; and a power unit (the battery), which is usually non-rechargeable and cannot be replaced.

Over the past years, WSNs have proved highly effective in numerous applications such as border patrolling, military restricted areas, prison wall monitoring, campus security and rescue operations, surveillance, wildlife animal tracking, traffic control, home automation and remote healthcare monitoring systems (Tsukamoto et al., 2009; Rault et al., 2014; Prasanna and Rao, 2012). Border patrol systems, for example, have recently gained considerable interest for the purpose of watching and controlling borders and detecting and tracking intruders, enemy movements or any illegal activities. WSNs have also proved their importance in the field of public safety and military applications such as minefield detection and battlefield surveillance. Furthermore, WSNs have been used in healthcare systems, especially in what is called a Body Sensor Network (Sun et al., 2012; Alhmiedat et al., 2012), which is used to monitor the patient's body and gather clinical information to help rehabilitate physically impaired persons. In traffic-monitoring systems, WSNs are used on streets to collect data about traffic, helping people obtain the latest information regarding traffic jams in different areas and enabling intelligent transportation systems (Kafi et al., 2013).
The salient task common to all these applications is object (or target) tracking. The object could be a fugitive, an intruder at a border, or an attacker at a military base. Tracking such mobile objects involves several sub-tasks, including object interception (or detection), localization and continual reporting of the object's position to the base station (Tsukamoto et al., 2009; Darabkh et al., 2012). Tracked objects may produce different types of signals to be sensed: for example, environmental changes such as light, temperature, pressure and acoustics, or chemical, biological and radiological changes in the case of security attacks (Chen and Varshney, 2004).

Our objective in this study is to provide new researchers interested in object tracking in sensor networks with a reasonably comprehensive review of the field and to inspire them to pursue it from new perspectives. To that end, our methodology relies on surveying tens of papers published over the last decade in order to identify and categorize the challenges that researchers faced. We then investigate the various solutions that tackled these challenges and classify them based on (1) the network architecture used and (2) the approach adopted. Through this effort, we hope to give researchers the essential background that inspires them to come up with novel solutions. Our paper is structured as follows. Section 1 defines the common challenges encountered in object tracking research. Section 2 discusses the frequently found network architectures and the most prominent approaches used for object tracking. In Section 3, we discuss and analyze our findings. We conclude our paper in Section 4.

Challenges in Object Tracking
Throughout our research journey (Ismail et al., 2015; Darabkh et al., 2012), we were able to identify several challenges pertaining to object tracking in wireless sensor networks. Next, we present these challenges as application-independent categories.

Scalability
Scalability is a twofold challenge: the number of sensor nodes in the network and the number of objects that need to be tracked simultaneously. It is not uncommon to have a WSN deployment consisting of thousands of sensor nodes, and the number may even reach millions in some applications (Li et al., 2008). With such a large number of nodes, it is not easy to attend to each one, due to several factors: nodes may not be physically reachable, nodes may fail, and new ones may join the network. In such an unpredictable, dynamic environment, scalable coordination and management functions are necessary for robust WSNs. Consequently, designers of tracking algorithms are typically concerned with optimization problems germane to the size of the network, efficient scheduling of active vs. inactive nodes, energy consumption and communication overhead among sensor nodes.

Furthermore, the number of objects to be tracked manifests another facet of the scalability challenge. For example, tracking algorithms should be able to uniquely identify each moving object, especially when the number of issued packets is increased for the sake of accuracy (Naderan et al., 2012). They should also be optimized and adopt efficient scheduling mechanisms in order to intercept and track multiple objects simultaneously while remaining energy-conservative.
Stability
Since sensor nodes are likely to be installed in harsh outdoor conditions or in hostile environments, they are commonly subject to device failures and may be displaced from their initial deployment positions by environmental influences such as wind or waterfall (Marks, 2010). Therefore, it is crucial for any object tracking system to demonstrate a reasonable degree of fault tolerance and to adopt some recovery mechanism (Tseng et al., 2003).

Node Deployment
Depending on the application, WSN deployment can be either deterministic, where nodes are placed manually in a pre-planned manner (Yick et al., 2008) at certain Cartesian coordinates (Sendra et al., 2011), or randomized, where nodes are deployed across a certain geographical area in an ad-hoc manner (Yang and Sikdar, 2003). Compared to random networks, deterministic networks feature lower complexity and lower cost of network maintenance and management, because their nodes are placed at specific locations that ensure coverage. Random deployment, on the other hand, can leave uncovered areas (Yick et al., 2008). In addition, location identification for each sensor node is a must after deployment and before putting the network into operation. Location can be determined using the Global Positioning System (GPS), manually by calculation (Garg and Jhamb, 2013), or by finding the relative location, given that each node is within the coverage of another node (Li et al., 2014). Unfortunately, random deployment is the only choice when WSNs must be set up in harsh, unsafe or hostile environments. However, in a newer type of WSN, some sensor nodes reposition themselves over time in order to maintain coverage. This goal can be achieved by one of two methods: the first depends on self-deployment, where sensors autonomously reposition themselves in order to improve coverage; the second depends on the relocation of redundant nodes in order to cover for failed nodes (Zhu et al., 2011).

Computation and Communication Costs
Any WSN consists of small sensors with constrained computation and communication capabilities. Typically, the cost of local computation is much lower than the communication cost (Ren et al., 2008), which makes reducing communication overhead a priority for any WSN algorithm.

Energy Constraints
Due to the difficulty of recharging, the lifetime of the battery in each sensor determines how long it can operate (Peynirci et al., 2014). Therefore, energy conservation should be kept in mind in all cases (Li et al., 2008). Usually, algorithms tend to minimize energy consumption by (1) scheduling when a node should be in the active or sleep state (Yang and Sikdar, 2003), or (2) minimizing the communication and computation costs as much as possible. Moreover, as suggested in (Misra et al., 2015), not all sensors that detect the target need take charge of the tracking process: the algorithm uses the sensor's residual energy to check, via a prediction formula, whether the sensor can remain available for the dwelling time during which the target will be within range. Any sensor node that does not meet this criterion is eliminated from the tracking process (a sketch of such an eligibility check is given below).
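A minimal sketch of such an energy-aware eligibility test (Python; the dwell-time prediction, the power draw and the field names are illustrative assumptions, not the actual formula from Misra et al., 2015):

```python
import math
from dataclasses import dataclass

@dataclass
class Sensor:
    x: float
    y: float
    residual_energy_j: float       # remaining battery energy, in joules
    active_power_w: float = 0.05   # assumed power draw while tracking, in watts

def predicted_dwell_time_s(sensor, target_xy, target_vel, sensing_range=30.0):
    """Crude upper bound on how long a constant-velocity target stays in range:
    distance to the far edge of the sensing disk divided by the target speed
    (an illustrative stand-in for the paper's prediction formula)."""
    speed = math.hypot(*target_vel) or 1e-9
    dist_to_exit = sensing_range + math.hypot(target_xy[0] - sensor.x,
                                              target_xy[1] - sensor.y)
    return dist_to_exit / speed

def eligible(sensor, target_xy, target_vel):
    """A sensor joins the tracking task only if its residual energy can
    sustain active tracking for the whole predicted dwelling time."""
    t = predicted_dwell_time_s(sensor, target_xy, target_vel)
    return sensor.residual_energy_j >= sensor.active_power_w * t

node = Sensor(x=0.0, y=0.0, residual_energy_j=2.5)
print(eligible(node, target_xy=(10.0, 5.0), target_vel=(1.0, 0.0)))  # True here
```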
Data Aggregation
Data aggregation is a common task in WSNs, whereby data spawned by individual sensors are combined and compressed at an intermediate sensor node before being relayed to the final base station, resulting in a minimal number of transmitted packets (Jung et al., 2011a). However, the extent of data aggregation depends on the intra-network spatiotemporal correlation of the signal of interest and on the nature of the application (Li et al., 2008; Płaczek and Bernas, 2013). It also relies on functions such as suppression, minimum, maximum and average (Sendra et al., 2011) and on other statistical techniques that help discover correlations (Naderan et al., 2012). The problem of data aggregation is more apparent when sensor nodes generate duplicate packets. Therefore, it is imperative for any algorithm to reduce the number of travelling packets in order to achieve less channel congestion and lower network latency.

Sensor Technology and Localization Techniques
Currently there are diverse types of sensors and localization techniques with different accuracies, but none of them is accurate enough for all possible WSN application scenarios. The best choice of sensor technology for a specific application is highly reliant on the needed distance range, signal propagation cost, precision, bandwidth, etc. For instance, infrared, ultrasonic, electromagnetic, optical and Radio Frequency Identification (RFID) systems are the typically used technologies. The most popular localization techniques are range-based, using distances or angles, such as Angle-of-Arrival, Time-of-Arrival, Time-Difference-of-Arrival and Received-Signal-Strength Indicator (Zhang et al., 2010) for indoor environments, while GPS is used for outdoor environments since line-of-sight is required (Muthukrishnan et al., 2005; Gu et al., 2009).

Other issues directly related to the design of object tracking algorithms are the following.

Tracking Accuracy
Accurate tracking algorithms imply a low probability of missing the moving object (Peynirci et al., 2014), low response latency and low sensitivity to external noise. Furthermore, they should be equipped with a recovery mechanism in case the object is lost.

Reporting Frequency
Reporting frequency poses a tradeoff between accuracy and energy consumption. Tracking algorithms face the challenge of balancing keeping the base station informed about the movement of the mobile object at a certain frequency (Garcia et al., 2010) against preserving energy, which is heavily consumed at high communication frequencies (Li et al., 2008). The sink node can adjust the reporting frequency during network operation and transmit the new value in a single broadcast message, so that each node adjusts its frequency accordingly (Mahmood et al., 2014). In a non-sink-centric approach, each node can increase its frequency in the case of retransmission and as part of the object recovery mechanism.

Localization Precision
The precision with which the WSN determines an object's location is proportional to the number of sensors used in the localization process. Generally, to determine the location of an object in 2D space at least three nodes are required, and in 3D space four nodes are required (Garg and Jhamb, 2013); a trilateration sketch is given below. To that end, object tracking algorithms face the challenging tradeoff between high precision and the need to conserve energy by lowering the number of active nodes participating in the localization process.
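As an illustration of the three-node requirement in 2D, the following sketch (Python; the anchor positions and measured distances are made-up values) solves the classic trilateration problem by linearizing the range equations and applying least squares:

```python
import numpy as np

def trilaterate_2d(anchors, distances):
    """Estimate (x, y) from >= 3 anchor positions and range measurements.
    Subtracting the first range equation from the others linearizes the system:
        2(xi - x0) x + 2(yi - y0) y = d0^2 - di^2 + (xi^2 + yi^2) - (x0^2 + y0^2)."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = anchors[0]
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0]**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - (x0**2 + y0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Made-up deployment: three anchor nodes and noiseless ranges to a target at (12, 7)
anchors = [(0.0, 0.0), (50.0, 0.0), (0.0, 40.0)]
true = np.array([12.0, 7.0])
dists = [np.linalg.norm(true - np.array(a)) for a in anchors]
print(trilaterate_2d(anchors, dists))  # ~ [12. 7.]
```

With more than three anchors (and noisy ranges), the same least-squares call averages out measurement error, which is why precision grows with the number of participating nodes.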
Sampling Frequency

One of the WSN parameters that an object tracking algorithm may need to optimize is the sampling frequency, that is, how often a sensor attempts to detect the presence of an object per unit of time. This parameter directly affects the precision of localization. A low sampling rate hides minor changes in the object's movements, resulting in lower tracking accuracy or even failing to intercept the object entirely, especially if it moves at high speed. Increasing the sampling rate, on the other hand, improves the tracking accuracy but drains the sensor's battery.

Security

Security is vital in mission-critical applications. In mission-critical WSNs, sensors are deployed in harsh, unsafe or hostile places where they can be easy targets for intruders who may falsify the collected data. Tracking algorithms need to take care of source authentication, data integrity and confidentiality (Oracevic and Ozdemir, 2014a). Violating any of these security properties can lead to serious risks. For instance, an object detection algorithm can be deceived by injecting malicious data into the network, garbling the gathered data or sending phony data. Therefore, object tracking algorithms, especially those used in sensitive application domains, must address security vulnerabilities prior to deployment.

Solutions for the Challenges

The literature is rich with approaches that aim to solve object tracking challenges from different perspectives and for various goals. In this section, we review the network architectures, with emphasis on the prominent approaches used and the tracking algorithms that operate on top of each approach. Figure 1 depicts our classification of object tracking architectures, as elucidated in the following subsections.

The Naïve Architecture

The naïve architecture is the simplest and most traditional WSN model, in which all sensors are always active, trying to intercept and monitor objects in their sensing area, and report to one centralized sink node (Tsukamoto et al., 2009; Ramya et al., 2012). With equal responsibility, each sensor independently observes, processes and transmits the monitored data to the sink node (Fayyaz, 2011). Under this centralized approach, the sink node solely undertakes the heavy computation tasks related to tracking and localizing the monitored objects (Sarna and Zaveri, 2010). Moreover, the more sensors the network has, the more messages are relayed to the sink node, increasing communication bandwidth consumption. This model is obviously not fault-tolerant, due to the single point of failure, and has limited scalability (Bhatti and Xu, 2009). It usually exhibits the worst energy efficiency because of its heavy communication and computation demands. This renders the naïve solution a baseline for comparison with other solutions (Feng et al., 2014).
In Tynan et al. (2009), the authors presented object tracking experiments based on a centralized architecture to analyze network performance when sensors use a certain state transition model, examining the trade-offs between energy, latency, density and accuracy. The localization techniques chosen for the experiments were maximum signal strength localization and weighted average localization. In weighted average localization, each sensor estimates the location of the object; the larger the sensed value, the greater its effect on the overall location estimate. In maximum signal strength localization, the object is assigned to the location of the active sensor with the maximum sensed signal value (both rules are sketched in code below). An example of this central approach is the work in (Feng et al., 2014), where researchers used a grid network structure. They proposed using real-time chain grid heads to relay the sensed data to the sink node while keeping other sensors asleep. Sensor nodes can decide their sleeping time in a distributed manner based on information from their neighbors. This enables distant nodes to sleep while nodes close to the object remain active. Simulation results of this augmented approach outperform the basic naïve approach described earlier.

Tree-Based Architecture

In an attempt to improve performance, some researchers adopted a sub-graph of the entire set of nodes that has a tree structure. The root of this tree is the sensor closest to the object, and other sensors are added or removed as the object moves (Tran and Yang, 2006a). In other words, the tree structure follows the object's trajectory (Demigha et al., 2013). This structure reduces energy consumption and communication flow by limiting data transmission from the root to the base station through a particular route. However, as the distance between the root node and the object increases, the rate at which the tree needs to be reconfigured also increases. As a result, the tree-based structure is not efficient for tracking high-speed objects.

Fig. 1. Network architectures used for object tracking in WSN (Fayyaz, 2011)

An example of the tree-based approach is the Optimized Communication and Organization (OCO) method, which has autonomous characteristics such as self-organizing and auto-routing capabilities throughout the tracking process (Tran and Yang, 2006b). The OCO method consists of four phases: position finding, processing, tracking and maintenance. A major shortcoming of this method is that sensors must be active all the time, which may deplete their batteries. In Shi et al. (2010), the authors proposed an algorithm that constructs a tree of sensors with minimum energy and a desirable level of quality at the fusion center. After tree initialization, the algorithm keeps adjusting and reconfiguring the tree in a way that reduces energy consumption and improves the estimation quality.
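Picking up the two localization rules from the Tynan et al. (2009) experiments above, here is a minimal sketch of both; the reading layout (node position plus sensed signal strength) is an illustrative assumption.

```python
def weighted_average_localization(readings):
    """Estimate the object position as a signal-strength-weighted centroid
    of the sensing nodes (sketch of the scheme described in Tynan et al.)."""
    total = sum(s for (_, _, s) in readings)
    x = sum(xi * s for (xi, _, s) in readings) / total
    y = sum(yi * s for (_, yi, s) in readings) / total
    return (x, y)

def max_signal_localization(readings):
    """Assign the object to the position of the strongest-reading node."""
    xi, yi, _ = max(readings, key=lambda r: r[2])
    return (xi, yi)

# readings: (node_x, node_y, sensed signal strength)
readings = [(0, 0, 0.2), (10, 0, 0.7), (5, 5, 0.1)]
print(weighted_average_localization(readings))  # (7.5, 0.5), pulled toward (10, 0)
print(max_signal_localization(readings))        # (10, 0)
```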
Cluster-Based Architecture

The premise of this architecture (also called the two-tier architecture) is to group sensor nodes into clusters in an effort to reduce the number of active nodes. Each cluster has a head sensor and numerous members (Jung et al., 2011b). Instead of reporting to a centralized sink node, cluster members report only to their cluster head, which aggregates the data and subsequently reports to the sink node. To that end, clustering is considered a hierarchical architecture (Abbasi and Younis, 2007; Gopal and Krishnamoorthy, 2013) that is used to efficiently minimize energy consumption in WSNs when transmitting data from all sensors to the sink node (Heinzelman et al., 2002).

Clustering can be a scalable solution for applications comprising hundreds or thousands of sensor nodes. Scalability requires efficient resource utilization and load balancing in order to increase the network lifetime. Load balancing is achieved through a lighter processing load on individual sensor nodes, while efficient resource utilization is accomplished by decreasing the communication load, reducing the possibility of data flow bottlenecks and providing high survivability, as there is no longer a single point of failure. Clustering can be remarkably effective in many-to-one, one-to-any, one-to-many, or one-to-all communications (Li et al., 2008). In many-to-one communication, for instance, clustering can reduce communication interference and support data aggregation (Younis and Fahmy, 2004).

In general, any clustering algorithm consists of four main stages:

• Geographical formation of clusters
• Selection of some sparsely deployed, highly capable sensors as cluster heads. The selection is based on their processing capabilities, communication range, residual energy, or location relative to the object. Note that cluster heads need to be well distributed over the sensor field to achieve high coverage. Typically, the failure of a cluster head entails re-clustering; however, some approaches can adapt the network topology by resorting to backup cluster heads (Younis et al., 2014)
• A data aggregation stage, in which the sensed data are gathered and combined into a smaller number of packets in preparation for being sent to the cluster heads (Jung et al., 2011a; Sinha and Lobiyal, 2013). Basically, sensor nodes provide their sensing information upon request (Suganya, 2008)
• A data transmission stage, which involves the transfer of the aggregated data from the cluster heads to the sink node

Based on the formation style of clusters, they are classified into static and dynamic, as explained next.

Static Clustering

In static clustering, clusters are formed statically at network deployment time, as shown in Fig. 2. The attributes of each cluster, such as the cluster size, the coverage area, the sensor members and the cluster head, are static (Li and Zhou, 2011). This means that the sensor nodes remain attached to the same cluster head throughout the network lifetime (Fayyaz, 2011). When the object enters a cluster area, the cluster head is activated, and it subsequently activates its cluster members to keep localizing and tracking the detected object. When the object departs the cluster vicinity for another, the current cluster head informs the new one to keep tracking the object (Darabkh et al., 2012).
Despite the simplicity of this cluster architecture, it suffers from several shortcomings. First, it is not fault-tolerant, due to the fixed membership. If a cluster head goes down, due to battery depletion for example, all the sensors in its cluster become useless. Second, and also because of fixed membership, sensor nodes in different clusters cannot share information and collaborate on data processing (Gopal and Krishnamoorthy, 2013). Finally, fixed membership prevents adaptation to dynamic scenarios in which nodes in regions of high (low) event concentration should stay active (go to sleep) (Li and Zhou, 2011).

Dynamic Clustering

While static clusters are formed at network design time, the construction of adaptive clusters is triggered by a special event of interest, such as the acoustic sounds of a moving object, as shown in Fig. 3 (Jin et al., 2006). When a sensor, ideally the one nearest to the object or the one with the highest energy, detects an object, it volunteers to play the role of a cluster head (Abbasi and Younis, 2007; Gopal and Krishnamoorthy, 2013). Typically, multiple sensor nodes may detect the event of interest, so multiple volunteers may exist. For this reason, some mechanism is used to ensure that only one sensor is selected as the cluster head. Nodes that are close to the cluster head are invited, as members, to form a cluster and report their collected data to the head (Jin et al., 2006). The cluster is dismantled when the object is no longer sensed (Jung et al., 2011a).

Unlike static clustering, nodes in a dynamic cluster may belong to different clusters at different times, contingent on object movements. Furthermore, since with high probability only one cluster is active within the vicinity of the object, redundant data are suppressed, leading to better tracking quality. Energy consumption is also reduced, since one cluster is active at a time in accordance with the object's movement (Gopal and Krishnamoorthy, 2013).

Generally, dynamic clustering is preferred when the WSN is required to cover a large area (Oracevic and Ozdemir, 2014b), while for dense networks, static clustering is more desirable in order to avoid the cluster overlapping and frequent cluster head elections that occur with the adaptive clustering technique.
In Darman and Ithnin (2014), the authors note that cluster-based approaches provide better bandwidth utilization and higher scalability than other approaches. They also classify cluster-based approaches into static ones, which have a pre-built backbone infrastructure, and dynamic ones, which are more suitable for highly dynamic scenarios. In Yan and Wang (2010), the authors presented a dynamic cluster formation algorithm for object tracking in WSNs. When an object moves into the sensing field, cluster heads are selected randomly by an algorithm that extends the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol (Heinzelman et al., 2002); the basic LEACH election rule is sketched below. The cluster heads then invite their neighboring sensor nodes to form a cluster. The major advantage of this algorithm is the dynamic number of sensors within each formed cluster: as the object moves around, the formed cluster changes and the set of sensors varies with time. However, the proposed algorithm lacks simulation-based validation, and the random selection of cluster heads will most likely not achieve the tracking accuracy required for such a system. An interesting combination of static and dynamic clustering techniques was proposed by Jung et al. (2011a). This mechanism switches to the appropriate clustering technique and aggregation mechanism depending on the network state.

In Darabkh et al. (2012), the authors proposed three cluster-based algorithms for object tracking: static head, adaptive head and selective static head. The static and selective static head schemes are based on static clustering, while the adaptive head scheme is based on dynamic clustering. They achieved promising tracking accuracy and energy preservation by asking only nearby nodes to pitch in on the tracking process while leaving the others in a sleep state. They showed that the adaptive head scheme is the most efficient with respect to energy consumption, while the static and selective static schemes are better at lowering the tracking error, especially when the object moves fast.

In Wang et al. (2013), a hybrid cluster-based object tracking approach was proposed that integrates static clustering with on-demand dynamic clustering to manage the tracking task. While static clusters are confined to sharing information within the cluster vicinity, on-demand dynamic clustering is used when the object enters and exits a boundary region, so that sensors from different static clusters that intercept the object can temporarily share information. In the same context of solving the boundary problem, the authors in (Akter et al., 2015) proposed combining static clustering with an incremental clustering algorithm to track an object consistently. In other words, incremental clusters are constructed at the boundaries of static clusters to continue the tracking task. The proposed algorithm tracks a moving object at the boundary regions better than other typical tracking protocols.
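For reference, the sketch below shows the basic LEACH election rule that the Yan and Wang (2010) algorithm extends: in each round, an eligible node becomes a cluster head with probability given by the standard LEACH threshold. The extension's additional criteria are not reproduced here.

```python
import random

def leach_threshold(p, round_no):
    """LEACH election threshold T(n) = p / (1 - p * (r mod 1/p)), applied to
    nodes that have not served as head in the last 1/p rounds (sketch; the
    cited extension in Yan and Wang (2010) adds further criteria)."""
    return p / (1 - p * (round_no % int(1 / p)))

def elect_heads(nodes, p, round_no):
    threshold = leach_threshold(p, round_no)
    heads = []
    for node in nodes:
        # Eligible nodes draw a uniform number; below the threshold -> head.
        if not node["was_head_recently"] and random.random() < threshold:
            heads.append(node["id"])
    return heads

nodes = [{"id": i, "was_head_recently": False} for i in range(100)]
print(elect_heads(nodes, p=0.05, round_no=3))  # roughly 5-6% of eligible nodes
```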
Hybrid Architecture

Hybrid architectures generally combine one of the previously mentioned architectures with some prediction mechanism. Prediction relies on heuristics and attempts to anticipate the upcoming position of the moving object based on its historical positions observed over time and on the spatial and temporal knowledge of the sensors (Zhenga et al., 2014). Based on this prediction, sensor nodes are scheduled to be either active or asleep (Ren et al., 2008) during each defined time step (Mirsadeghi and Mahani, 2014). Due to inevitable prediction mistakes, these algorithms include recovery mechanisms to compensate for inaccurate object localization. Unfortunately, such algorithms are typically too complex to be implemented on sensor nodes with constrained resources.

An example of a hybrid approach is the Hierarchical Prediction Strategy (HPS), which augments the cluster-based approach with a prediction mechanism. In the HPS strategy, the cluster is built using Voronoi division and the mobile object's next location is predicted (Wang et al., 2008). One of the major shortcomings of such algorithms is the additional complexity resulting from combining the two approaches. Furthermore, the performance overhead incurred was not assessed.

In Raza et al. (2009), the authors presented the Dead Reckoning object tracking protocol, which predicts the object's position by analyzing the time series of its historical locations over a time window. Using a position fix technique, the Dead Reckoning protocol provides a mechanism for error avoidance and error correction. The performance of the proposed scheme was assessed in terms of node sleep time, tracking error and object loss.

In Mirsadeghi and Mahani (2014), the authors presented a high-precision and energy-efficient tracking scheme based on a clustering architecture and object speed prediction. In each time step, only a few nodes in the vicinity of the predicted position of the object are activated as tracker nodes, selected by considering three parameters (distance, remaining energy, and the energy needed to send a packet to the cluster head), while the others remain in power-saving mode. Simulation results showed that the energy consumption of the non-prediction method is much higher than that of its prediction-based counterpart.

An adaptive sensor activation algorithm for target tracking in WSNs is presented in (Zhenga et al., 2014), where the authors used an auction mechanism for selecting the cluster head. In each iteration of the tracking operation, the cluster head tries to predict the region to which the target may move. Based on this predicted region, only nodes within the region are activated, and the rest remain asleep. The presented algorithm performs well in terms of network lifetime, energy efficiency and tracking accuracy.
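A minimal constant-velocity sketch of the position-prediction idea underlying these hybrid schemes follows. The actual Dead Reckoning protocol of Raza et al. (2009) analyzes a longer time series with a position fix mechanism; this two-fix extrapolation is a deliberate simplification.

```python
def dead_reckon(history, dt_ahead):
    """Predict the next position by constant-velocity extrapolation over the
    two most recent fixes (a minimal stand-in for the time-series analysis
    in the Dead Reckoning protocol of Raza et al. (2009))."""
    (t0, x0, y0), (t1, x1, y1) = history[-2], history[-1]
    vx = (x1 - x0) / (t1 - t0)
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * dt_ahead, y1 + vy * dt_ahead)

# Fixes as (time_s, x, y); object moving at +1 m/s along x.
history = [(0.0, 0.0, 2.0), (1.0, 1.0, 2.0)]
print(dead_reckon(history, dt_ahead=2.0))  # (3.0, 2.0)
```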
Tracking Multiple Objects

Tracking the paths of multiple objects is more challenging than tracking a single one, due to the need to identify each object moving in a different direction at a different speed and to maintain track continuity with good performance. If all energy-restricted nodes are kept active for the purpose of tracking multiple objects, the network traffic and the probability of failure increase dramatically. Consequently, more complex routing algorithms and energy minimization techniques have to be used. In addition, as each sensor node is responsible for detecting and tracking multiple objects, it should be able to distinguish objects by some means of signal processing (Naderan et al., 2013). In the literature, we can find many object classification algorithms (Panda et al., 2014; Pannetier et al., 2015) that adopt a set of weighted features for the purpose of identifying objects. An object's kinematic characteristics, such as its movement pattern, position, velocity and acceleration, are usually used in tracking multiple objects to narrow the tracking region (Rahman et al., 2010). Dense networks are often used to monitor multiple objects in order to maximize the number of sensors that cover all points in the objects' area. In such networks, eliminating redundancy is imperative for efficiency. This can be achieved by using hierarchical multi-tier networks or event-triggered solutions.

Figure 4 illustrates a three-tier cluster-based network in which the pre-determined cluster heads keep listening to the medium for any approaching objects and then activate their cluster members based on certain criteria in order to minimize network traffic and energy consumption. From these members, multiple Sub Cluster Heads (SCHs) can be elected when multiple objects are present within the same cluster's vicinity. The criteria could be a weighted average of multiple factors, such as the node's remaining energy, the Euclidean distance between the node and the object, and the type of sensor or sensor technology (if the network is heterogeneous). If multiple objects are in the vicinity of a certain cluster, one SCH can be assigned to each object. These Sub Clusters (SCs) detect and localize the objects and send their observations up through the upper-tier cluster heads to the end base station.
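A possible realization of the weighted SCH election criterion described above is sketched below; the weights, the normalizations and the type-fitness term are illustrative assumptions rather than values from any of the surveyed papers.

```python
import math

def sch_score(node, obj_xy, weights=(0.5, 0.3, 0.2)):
    """Rank a candidate Sub Cluster Head by a weighted combination of
    residual energy, proximity to the object and a sensor-type fitness
    term (weights and normalizations are illustrative assumptions)."""
    w_energy, w_dist, w_type = weights
    dist = math.dist((node["x"], node["y"]), obj_xy)
    # Higher energy and type fitness raise the score; distance lowers it.
    return w_energy * node["energy_frac"] + w_type * node["type_fitness"] - w_dist * dist

def elect_sch(candidates, obj_xy):
    return max(candidates, key=lambda n: sch_score(n, obj_xy))

candidates = [
    {"id": 7, "x": 1.0, "y": 1.0, "energy_frac": 0.9, "type_fitness": 1.0},
    {"id": 9, "x": 0.5, "y": 0.5, "energy_frac": 0.2, "type_fitness": 1.0},
]
print(elect_sch(candidates, obj_xy=(0.0, 0.0))["id"])  # 7: high energy wins
```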
Summary and Conclusion

Based on the above survey and analysis, and regardless of the type of object to be tracked and the signal to be sensed, we found that most tracking algorithms in WSNs share the following characteristics:

• An efficient algorithm strives to reduce the number of continually active sensor nodes in order to conserve energy
• Any tracking system should report the position of the object to the base station in a timely fashion
• Tracking algorithms care about eliminating (or reducing) correlated, inconsistent, or redundant data in order to reduce not only the number of transferred packets but also the number of collisions and the interference in the shared medium. For the same purpose, sensors should collaborate on processing the data and then send them, aggregated, to the base station

We also found that the majority of object tracking algorithms aim to tackle the challenge of balancing network resources such as communication bandwidth, energy and tracking accuracy. Table 1 presents the various object tracking architectures discussed in this study and compares them based on how each one tackles the challenges discussed above. Each entry in the table reflects how effective a given approach is at tackling the corresponding challenge. The level of effectiveness of a given approach with respect to a particular challenge is expressed as limited, low, moderate, high, or applied but with constraints. From this table, we can see that the cluster-based architecture apparently provides more scalability, stability, energy efficiency and tracking accuracy than the other approaches. The case of multiple-object tracking bears its own set of extra challenges, as the locations of multiple objects have to be tracked simultaneously. We have emphasized that increasing the number of objects to be tracked increases the network traffic. Consequently, more complex routing schemes and energy minimization techniques have to be adopted in order to retain acceptable network performance. Based on these challenges, we have suggested a future three-tier, cluster-based network for multi-object tracking. Other open issues are:

• How to deploy and manage heterogeneous nodes
• How to deal with node failure and adjust the network topology accordingly
• Studying the effects of mobility (target, node or sink relocation) on the quality of the tracking performance and the network adaptation required accordingly
• The use of efficient aggregation techniques
• The manufacturing of sensor nodes with more powerful batteries, faster processors and long-distance transceivers in order to optimize energy consumption and achieve better coverage (Can and Demirbas, 2013).
7,336
2016-05-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Determination of Material Requirements for 3D Gel Food Printing Using a Fused Deposition Modeling 3D Printer

The material requirements for printing gel food with a fused deposition modeling 3D printer were determined based on fidelity, shape retention, and extrudability, as described by the rheological parameters of storage modulus (G'), yield stress (τ0), and phase angle (δ). The material requirements were determined for printing gel food using three formulations containing gelatin, gelatin and pectin, and gum mixture as the gelling agents. As compared with formulations based on gelatin alone, pectin-containing gelatin-based formulations yielded higher δ and lower G' and τ0 values, while gum mixture-based formulations formed a gel with higher G' and δ values and a wider range of τ0. Overall, this study presents quantitative material requirements for printing gel products containing gelatin, gelatin–pectin, and gum mixtures.

Introduction

Fused deposition modeling (FDM) is the most popular three-dimensional (3D) printing method. Most of its printer parts are economical, and it does not require hazardous solvents or glues [1]. Printing materials, predominantly polymers, are fed into a heating barrel in which they melt, becoming highly viscous fluids [1]. The melted materials are extruded through a nozzle and then deposited layer-wise. FDM has a wide range of applications in the fields of smart homes, aerospace, and biomedicine [2]. Furthermore, FDM is currently extensively used and studied for 3D food printing [3]. The restaurant Food Ink, in the United Kingdom, recently launched a pop-up store in which it serves full-course meals (appetizers to desserts) printed with an extrusion-based 3D printer. Similarly, the Italian pasta maker Barilla has been attracting attention for its unique pasta designs achieved using an extrusion-based 3D printer [4]. Food manufacturing using FDM 3D printing has several advantages, including the easy production of small quantities of various types of foods, allowing the consumer to prepare food anywhere and to manufacture personalized food with the desired ingredients, taste, color, texture, and nutritional composition [5]. Applying 3D printing technology to gel food manufacturing enables the production of gels with exquisite shapes (which is yet to be accomplished), flavors, and aromas according to personal preferences. However, until now, research on 3D food printing has mainly focused on the extrusion performance and visual quality evaluation of the printed products. Furthermore, there are limited studies that quantitatively correlate the physical properties of food formulations with stable printed products and adequate extrusion performance. Thus, it is necessary to determine the suitability of a formulation for printing by measuring its physical properties and comparing them with predetermined requirements. When a formulation is found to be unsuitable, the physical properties that need improvement can be identified, guiding the development of an improved formulation with the desired features. Recently, FDM printers have become increasingly popular and economical for home and restaurant use. Considering the application of research findings to these economical units, room temperature is an appropriate temperature for research on FDM food printing, as no temperature control system then needs to be attached to the printer. Moreover, research on the material requirements for 3D printing could generate various gel food recipes.
Thus, the objective of this study was to propose standards for printing gels through 3D syringe-type extrusion printing with FDM, using various materials, based on the experimental quantification of their properties.

Sample Preparation

The formulation of the gel food samples was determined with reference to the current scientific literature on gel products (42 publications in total), the composition of international gel food products on the market (23 products), and information from personal communication with a gel food product expert at Cosmax NBT (Seoul, Korea). The ingredients commonly found in gel food include sugar and acidity regulators, such as citric acid, lactic acid, and malic acid. Gelatin was the most common gelling agent (based on 18 commercial gel products and 23 publications), followed by pectin (8 commercial gel products and 12 publications). In gelatin-based gel formulations, gelatin was in the range of 2–20% (w/w, wet basis) of the total content. In formulations using both gelatin and pectin, gelatin and pectin were used in the ranges of 2–10% and 0.5–10% (w/w, wet basis) of the total content, respectively. Sugar, which acts as a sweetener, was used in the range of 20–80% (w/w, wet basis) to enable the formation of the gel structure and prevent syneresis after gelation [6,7], while 1% citric acid was added to the gel for acidity [8]. In addition, a gum mixture was used as another gelling agent in this study, accounting for approximately 10% of the total content. Therefore, three formulations containing gelatin, gelatin and pectin, and gum mixture as the gelling agents, together with sugar, citric acid, and water, were used and labeled A, B, and C, respectively. Their compositions are listed in Table 1. The blending ratios of gelatin, pectin, gum mixture, sugar, and citric acid used in gel production were determined by preliminary experiments. Various contents of gelatin, pectin, or gum mixture were dispersed in distilled water. Sugar and citric acid were subsequently added to the mixture and mixed. The total amount of the mixed raw materials was adjusted to 100 g. For the formulation without pectin, the container was covered with aluminum foil and heated on a hot plate (85 °C) while stirring. The formulation with pectin was additionally heated for ~40 min with stirring at 85 °C using a water bath. The prepared formulation was then poured into a 30 mL Luer-Lok syringe (BD, Franklin Lakes, NJ, USA) and mounted on an FDM 3D printer.

3D Printing

The printing experiment was performed with a syringe-type extrusion 3D printing system (Changxing Shiyin Technology Co. Ltd., Hangzhou, China) using the FDM method. The 3D printing system consisted of an extrusion head with a heating barrel to maintain the temperature of the formulation in the syringe, a nozzle, and a print platform. The extrusion head was adjusted to move along the x, y, and z axes. The 3D modeling software Autodesk 123D Design (Autodesk, Inc., San Francisco, CA, USA) was used to design the 3D object (a cube with dimensions of 10 × 10 × 10 mm) to be printed. Simplify3D slicer software (Simplify3D, Cincinnati, OH, USA) was used to set the printing conditions, such as nozzle diameter, layer height, and extruder moving speed, and to slice the objects. The temperature of the extruder (comprising the heating barrel and nozzle) and of the print platform, as well as the nozzle diameter, layer height, coasting distance, and extruder moving speed used in the study, are listed in Table 2.
The temperature of the heating barrel, nozzle, and print platform was set at 24 °C to determine the material requirements for printing. This is because homes and small restaurants, which opt for the economical unit, can easily operate the 3D printer at 24 °C, approximately room temperature. It was difficult to immediately laminate the formulation that consisted only of gelatin, sugar, and citric acid because it lacked a hardening property at 24 °C after extrusion from the nozzle. Thus, the movement of the nozzle was paused after stacking each layer, and the gel was covered with a cooling cup at −2 °C for 1 min to promote lamination and harden the printed layer without it running down. Hence, the layer height was maintained at 0.2 mm, and the next layer was stably printed atop it.

Determination of the Gel Food Formulation

Three samples were printed for each formulation. The printing quality was determined based on the observation of the appearance and measurements of the dimensions of the printed sample (i.e., printability and dimensional stability). Photographs of the printed structures were acquired immediately after printing to evaluate their overall appearance, line arrangement, line shape, continuity, and filling condition [9]. The cube height was measured with a micrometer immediately and 1 h after printing. The printability and dimensional stability, which indicate the precision and shape stability of the printed structure, were obtained with the following equations, respectively [10].

Printability (%) = (Achieved height of the printed object / Target height of the object) × 100 (1)

Dimensional stability (%) = (Height of the printed object after 1 h / Height of the printed object immediately after printing) × 100 (2)

Determination of Printing Material Requirements

For the printing material requirements, fidelity, shape retention, and extrudability were considered. Fidelity indicates the printing accuracy of the shape and size of the printed structure compared with those of the target structure. Shape retention describes the shape stability after printing. Extrudability specifies the ease of extrusion through the nozzle [11–14]. Fidelity is characterized by the storage modulus (G') of the material [15,16], while shape retention is characterized by the G' and yield stress (τ0) of the material [16,17]. The phase angle (δ) of the material was used to describe its extrudability [18].

Determination of the Rheological and Mechanical Parameters

The rheological properties of the material were determined using a rotational rheometer (MCR 92, Anton Paar, Graz, Austria) with a cup-and-bob geometry (bob length, inner diameter, outer diameter, and measuring gap of 82 mm, 40 mm, 42 mm, and 0.5 mm, respectively) and were analyzed with Anton Paar RheoCompass (Anton Paar), the built-in software of the instrument. All measurements were repeated in triplicate at 24 ± 1 °C, and the average values were plotted. G' and δ were obtained from the amplitude sweep tests of the rheometer [19]. To determine the linear viscoelastic region, a strain sweep was conducted in the oscillation test mode in the range of 0.1–100% at a fixed frequency of 1 Hz over a test time of 200 s [12]. G' and δ were calculated as the average values in the linear viscoelastic region [9,17,20,21]. The yield point was determined as the initial drop section of G', i.e., the point at which, upon increasing the applied stress, the solid first shows liquid-like behavior [22].
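A small sketch of the quantitative read-outs defined in this section: the print-quality metrics of Eqs. (1) and (2), plus τ0 taken as the stress at the initial drop of G'. The 5% drop tolerance used to detect that drop is an illustrative assumption; the paper identifies the onset of the nonlinear region.

```python
def printability_pct(achieved_height_mm, target_height_mm):
    """Eq. (1): printing precision relative to the designed height."""
    return achieved_height_mm / target_height_mm * 100.0

def dimensional_stability_pct(height_after_1h_mm, height_immediate_mm):
    """Eq. (2): height retained one hour after printing."""
    return height_after_1h_mm / height_immediate_mm * 100.0

def yield_stress_pa(stress_pa, g_prime_pa, drop_frac=0.95):
    """Take tau_0 as the stress where G' first falls below a fraction of
    its linear-viscoelastic plateau (the tolerance is an assumption)."""
    plateau = g_prime_pa[0]  # sweep assumed to start inside the LVE region
    for stress, g in zip(stress_pa, g_prime_pa):
        if g < drop_frac * plateau:
            return stress
    return None  # no structural breakdown observed within the sweep

# A 10 mm target cube printed to 9.5 mm, settling to 9.2 mm after 1 h.
print(round(printability_pct(9.5, 10.0), 2))          # 95.0
print(round(dimensional_stability_pct(9.2, 9.5), 2))  # 96.84
print(yield_stress_pa([10, 50, 100, 200, 260], [3540, 3538, 3532, 3300, 2500]))  # 200
```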
The starting section of the nonlinear region appeared because of the structural breakdown of the sample; the stress at this point was taken as τ0 [13].

Determination of the Gel Food Formulation

The observation results and quality measurements of the printed gel foods with various compositions are listed in Table 3. As the gelatin content increased, the printed 3D structure became more stable under the same printing conditions. However, at gelatin concentrations of ≥20%, the formulations became sticky, with increased viscosity. Consequently, the extrusion patterns were inconsistent, with drop-shaped lumps forming at the tip of the nozzle during printing, indicating their unsuitability for additive manufacturing. In contrast, formulations with <12% gelatin demonstrated stable extrusion without clumping or clogging of the nozzle. However, it was difficult to obtain a cube shape because the previous layers were not hard enough, despite the use of the cooling cup; this resulted in the shape spreading, with a bulging center. Formulations with a gelatin concentration in the range of 14–18% (A1, A3, and A5) had uneven surfaces, resulting in unstable layered structures. Although these formulations printed and maintained cubic shapes, deformations were observed on the tops of the printed samples (Table 3). When the sugar content was varied with the gelatin content fixed at 14, 16, or 18%, the viscosity of the formulations containing 50% or more sugar increased. This caused the materials to stretch like a thread at the nozzle, which was not suitable for extrusion printing. Meanwhile, the formulations with 40% sugar (A2, A4, and A6) exhibited excellent shape stability, even with internal filling, owing to the seamless and stable extrusion of the materials through the nozzle along a line of consistent thickness. Based on the appearance of the printed samples at different gelatin concentrations with fixed pectin (7%), sugar (30%), and citric acid (1%) contents in the B formulations, formulations with ≥12% gelatin were unsuitable for printing owing to their high viscosity, whereas those with 10% gelatin were easily extruded, forming the intended cubic shape. Furthermore, although the formulation with 8% gelatin demonstrated excellent adhesion between layers, it was difficult to print an accurate cube shape, as the printed sample was easily disturbed by the moving nozzle during printing because of insufficient hardening. In addition, the printed sample was unable to support its own weight and began spreading. As the pectin content increased with the gelatin concentration fixed at 10%, more solid and stiff formations were obtained. Regarding the effect of the gum mixture concentration on the appearance of the printed samples with fixed 30% sugar and 1% citric acid contents, the formulation with 8% gum mixture (C1) formed a thin and weak gel, which was not hard enough at room temperature (24 ± 2 °C) and could not withstand its own weight. In contrast, a cube shape was printed with the formulations containing 10–12% gum mixture (C3 and C5); however, the line extruded through the nozzle was cluttered, making it difficult to deposit a flat layer. When the concentration of the gum mixture was fixed at 8, 10, or 12% and the sugar concentration increased, printed samples with high resolution and uniform, smooth surfaces were obtained. As the sugar concentration increased, the viscosity of the formulation increased, allowing seamless and stable extrusion without broken extrudate threads [10].
The A2, A4, A6, B4, B5, B6, C2, C4, and C6 formulations had adequate printing characteristics based on visual inspection; the layers were substantially fused. These formulations demonstrated printability in the range of 89.03–97.68% and dimensional stability in the range of 94.32–98.86%. These values denote printing precision and shape stability suitable for additive manufacturing with 3D printing. Previously, 3D printed samples with outstanding printing precision, formed with various gums mixed with orange concentrate–wheat starch blends, exhibited printability and dimensional stability in the ranges of 83.53–99.39% and 81.54–99.39%, respectively [10]. Therefore, the physical properties of the gel formulations that were found to be suitable for printing were analyzed to determine the material requirements for 3D gel food printing.

(Note to Table 3: for the formulation consisting of gelatin, water, sugar, and citric acid, a cooling cup (−5 °C, 1 min) was used for three-dimensional (3D) printing. All scale bars are equivalent to 5 mm.)

Fidelity

The rheological parameters that represent fidelity for the gelatin, pectin, and gum mixture formulations are listed in Table 4. Compared with the target dimensions, the shapes and sizes of the printed samples were highly accurate (i.e., high fidelity), with excellent printability based on visual observation.

Table 4. Rheological parameters for the fidelity, shape retention, and extrudability of the gel formulations as a function of the gelatin, pectin, gum mixture, sugar, and citric acid contents.

As the concentrations of gelatin, pectin, and gum mixture in the A, B, and C formulations increased, the G' values increased from 3539.70 to 7597.08, 208.74 to 470.48, and 6577.77 to 14,287.78 Pa, respectively (Table 4). Gelation occurred during cooling as the gelatin, dissolved as irregular coils in the heated solution, set. Upon cooling, small regions made of polypeptide chains that tend to return to the triple-helical structure were cross-linked to form a 3D network [23]. Meanwhile, as the concentration of gelatin increased, a denser gelatin network formed, improving the ability of the gel to entrap water. Therefore, the increase in G' with increasing gelatin concentration was attributed to the formation of a dense network. As the concentration of pectin increased, the strength of the pectin gel network increased, forming gels with increased elasticity and firmness [24]. This also increased the G' values, which indicate the firmness/consistency of the gel structure [24]. Notably, gums have been used as gelling agents in formulations to improve the cohesiveness, elasticity, and adhesion of the gel [25]. As the gum content increased, gelation was promoted and the gel structure hardened, resulting in an increase in G'. The tangent of δ (tan δ) values for the A2, A4, A6, B4, B5, B6, C2, C4, and C6 formulations were in the range of 0.08–0.67. A gel with a solid-like structure, low fluidity, and predominantly elastic behavior forms when tan δ < 1, whereas one with predominantly viscous behavior forms when tan δ > 1 [26]. In this study, the formulations that yielded excellent printing precision formed a fairly elastic gel network. As the gelatin concentration increased from 14% to 18% in the A formulations, tan δ decreased from 0.10 to 0.08, indicating that the formulation had relatively more solid-like rheological properties with poor fluidity.
This may be related to the frequent thread breakage during printing and the stickiness of the formulation due to its high viscosity, which disrupted extrusion through the nozzle when the gelatin concentration exceeded 20%. The decrease in tan δ from 0.67 to 0.55 with increasing pectin concentration from 7% to 9% in the B formulations was attributed to the formation of a firmer pectin gel structure with better elastic properties [24]. Based on these results, the printing requirements of the formulations that produced highly accurate shapes and sizes of the printed samples with respect to the target dimensions are summarized in Table 4. Therefore, a food item with the desired shape and size can be produced with excellent fidelity using a 3D printer by adjusting the physical properties of the printed materials to meet these requirements.

Shape Retention

The analysis of the rheological parameters of the A2, A4, A6, B4, B5, B6, C2, C4, and C6 formulations, which exhibited shape stability upon visual inspection and high values of dimensional stability (≥81.5%, i.e., high shape retention) [10], showed that the G' values of the formulations with gelatin, pectin, and gum mixtures were in the ranges of 3539.70–7597.08, 208.74–470.48, and 6577.77–14,287.78 Pa, respectively (Table 4). The G' value reflects the mechanical (structural) strength of the material [18]. A formulation with sufficient mechanical strength (G') can endure its own weight over time after printing, maintain the printed shape, and exhibit excellent resolution [18]. This is demonstrated by the A6 formulation (gelatin-based) and the C4 and C6 formulations (gum mixture-based), which had high G' values and maintained stable structures after printing, with high dimensional stability. The variations in the G' values as a function of the oscillatory frequency for the A, B, and C formulations with different amounts of gelatin, pectin, and gum mixture are shown in Figure 1. The small changes in the viscoelastic properties, demonstrated by the small changes in the modulus values with respect to frequency in formulations A, B, and C, suggested the formation of a strong and self-supporting gel with high shape retention [11,27]. In formulations A, B, and C, as the gelatin, pectin, and gum mixture concentrations increased, the G' value increased, implying that a higher mechanical strength was achieved (Table 4). This can be attributed to the formation of a denser network structure, as more gelatin molecules absorbed water and swelled at higher gelatin concentrations. As the concentration of pectin increased from 7% to 8% and 9% in the B formulations, the τ0 value increased from 207.06 to 224.41 and 281.45 Pa, respectively. High-methoxyl pectin forms a gel when non-covalent bonds between adjacent pectin chains interconnect to form a 3D network. The increase in the G' and τ0 values at higher pectin concentrations can be attributed to the increase in the number of elastically active chains owing to the increase in the number of junction zones [24]. The main ingredients, comprising 80% of the gum mixture used in this study, were locust bean and xanthan gums. When used together, these two gums demonstrate a synergistic effect, forming a solid and thermoreversible gel with improved syneresis and enhanced stability compared with their individual use [28]. This explains the excellent shape stability of the C formulations.
The τ0 values related to the shape retention of the printed objects were in the ranges of 258.30–283.09, 207.06–281.45, and 204.21–295.15 Pa for formulations A, B, and C, respectively. The τ0 value of the A6 formulation, which demonstrated the highest dimensional stability (98.60%) among the A formulations, was higher than that of the A2 formulation (dimensional stability: 94.32%), which slightly collapsed after printing. For the B and C formulations with pectin or gum mixtures, the τ0 values of the formulations with higher dimensional stability (B6 and C6) were higher than the corresponding values of the formulations with lower dimensional stability (B4 and C2). Therefore, formulations with higher G' and τ0 values showed outstanding dimension retention after extrusion, consistent with the earlier observation that G' increases with dimensional stability. The τ0 and mechanical strength relate to the ability of the printed sample to maintain its shape without collapsing under the gravity acting on the material and the stress generated by the material layers deposited on top of it. The material requirements for printing a 3D structure with the gelatin, pectin, and gum mixture formulations that can support their own weight could be determined from these results (Table 4). Therefore, formulations that meet the requirements for a 3D printed sample with excellent shape retention could produce food products that retain their shape over time.

Extrudability

The δ values related to extrudability are presented in Table 4. From the analysis of the rheological parameters of the A, B, and C formulations that exhibited excellent extrudability through the nozzles during printing, the values of δ ranged from 4.78° to 5.88°, 28.79° to 33.96°, and 7.55° to 12.86°, respectively. For viscoelastic materials, δ lies in the range of 0–90°. As the elasticity of the formulation increases, δ approaches 0°; as the viscosity increases, it approaches 90° [27]. For gels, typical δ values are in the range of 1.2–64°, with elasticity-dominant gel-like structures having δ smaller than 45° [11]. When δ is smaller than 10°, the formulation behaves like a viscoelastic solid [11]. Therefore, a solid-like gel structure with high elasticity formed as the gelatin and gum mixture contents increased, which decreased δ. In general, formulations with a large δ (45–90°) are liquid and non-self-supporting, whereas those with a small δ (0–3°) cannot be easily extruded through narrow nozzles [11]. In this study, the A2, A4, and A6 formulations (δ: 4.78–5.88°) and the C2, C4, and C6 formulations (δ: 7.55–12.86°) were stably extruded without breaking during printing and exhibited excellent dimensional stability in the range of 94.32–98.86% (Table 3).
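The δ ranges quoted above from [11] lend themselves to a simple screening rule; the sketch below bins a candidate formulation by phase angle. The cut-offs mirror the cited ranges and should be treated as heuristics, not hard limits.

```python
def extrudability_class(delta_deg):
    """Bin a gel by phase angle using the ranges quoted from [11]:
    below ~3 degrees, too stiff to extrude through a narrow nozzle;
    45-90 degrees, liquid-like and non-self-supporting; in between,
    an elasticity-dominant candidate for printing (heuristic bins)."""
    if delta_deg < 3.0:
        return "too elastic: hard to extrude"
    if delta_deg < 45.0:
        return "elasticity-dominant gel: candidate for printing"
    return "viscous/liquid-like: non-self-supporting"

for delta in (2.0, 5.5, 30.0, 60.0):
    print(delta, "->", extrudability_class(delta))
```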
A previous study reported a formulation composed of agar, carrageenan, gellan, and xanthan–gelatin with a δ in the range of 3–15°, resulting in self-supporting hydrocolloids with good extrudability [11], similar to the δ of the formulations in this study. Conversely, the B4, B5, and B6 formulations containing pectin had δ values of 28.79–33.96°, suggesting lower elastic properties than those of the A and C formulations, as reflected in their relatively high δ values (Table 4) [27]. Therefore, despite their high printing accuracies (printability: 96.79–97.68%), the dimensional stability of the B formulations was relatively low compared with the A and C formulations. These results are in line with those of Gholamipour-Shirazi et al. [11], in which the δ of the resulting formulations was in the range of 15–45° when non-self-supporting semisolids were printed with guar, locust bean, and xanthan gums. Therefore, the material requirement, with respect to δ, for seamless FDM-type gel printing (no nozzle clogging) that achieves suitable extrudability could be determined (Table 4). Seamless printing with formulations in compliance with the material requirements would enable food production with uniform surfaces through FDM-type printing.

Effects of the Use of Pectin and the Gum Mixture on the Printing Material Requirements

The addition of pectin to the gelatin-based formulation (B formulations) increased the values of δ and decreased the values of G' and τ0 compared with the gelatin-based formulation (A formulations). The conspicuous increase in δ and decrease in G' indicate that mixing gelatin and pectin forms a less elastic gel than gelatin alone. The G' and δ values of the gum mixture-based formulations were higher, and the range of their τ0 values wider, than those of the gelatin-based formulations. Compared with the gelatin-based formulations, the gelation of the gum mixture-based formulations resulted in a stronger and more elastic gel, which may be owing to the synergistic effect of locust bean gum and xanthan gum on the formation of a solid gel [29].

Conclusions

This study determined the material requirements for fidelity, shape retention, and extrudability that are suitable for printing gel food made of gelatin, pectin, and gum mixtures using an FDM 3D printer. The results suggest that mixing gelatin and pectin forms a less elastic gel than gelatin alone, while the gel formed with the gum mixture is stronger and more elastic than the gelatin-based formulations. The results of this study are expected to contribute to the commercialization of 3D printing technology for manufacturing gel food from various materials by tailoring the physical properties of each formulation to meet the imposed requirements.
6,028.4
2021-09-26T00:00:00.000
[ "Materials Science", "Agricultural and Food Sciences" ]
Transformer Monitoring with Electromagnetic Energy Transmission and Wireless Sensing

To ensure stable and normal transformer operation, light gas protection by the transformer Buchholz relay is essential. However, false alarms related to light gas protection are common, and troubleshooting them often requires on-site gas sampling by personnel. During this time, the transformer's operating state may rapidly deteriorate, posing a safety threat to field staff. To tackle these challenges, this work presents a near-field, thin-sliced transformer monitoring system that uses an Electromagnetic Energy Transmission and Wireless Sensing Device (ETWSD). The system leverages external wireless energy input to power gas monitoring sensors. Simultaneously, it employs Near-Field Communication to obtain real-time concentrations of light gases, along with the electrified state and temperature. In field testing conducted on the gas collection chambers of transformer relays, the ETWSD effortlessly monitors parameters within warning ranges, encompassing methane gas concentrations around 1000 ppm, leakage voltages ranging from 0–100 V, and relay working temperatures up to 90 °C. Additionally, to facilitate real-time diagnosis by electrical workers, we have developed an Android-based APP that displays the current light gas concentration, leakage voltage reading, and temperature, while also enabling threshold judgment, alarms, and data storage. The developed ETWSD is expected to aid on-site personnel in promptly and accurately evaluating false alarms of transformer light gas protection. It provides a method for the simultaneous, contactless, and rapid monitoring of multiple indicators.

Introduction

In power systems, transformers are among the most crucial components [1,2]. Any form of failure can lead to power supply disruptions, costly repairs, and even severe damage [3,4]. Transformers, after decades of continuous operation and exposure to varying operating conditions, face significant risks throughout their lifespan [4,5]. Therefore, protecting transformers is of utmost importance. Failures in oil-immersed power transformers are mainly categorized into external tank failures and internal tank failures [5,6]. Currently, electrical and non-electrical protections are used to isolate transformer faults and ensure their safe operation [2,7]. Electrical protection primarily involves forming a differential circuit from the currents on each side of the transformer [8], with protection schemes using the current differences in the differential circuit under internal faults, external faults, and normal operation as criteria for action, aimed at protecting against various inter-phase short circuit faults in transformer windings and lead wires [1,7]. Non-electrical protection mainly refers to gas protection, based on the large amount of free gas produced and the resulting oil flow surges during internal faults in the transformer tank. Light gas protection pertains to the degree of gas accumulation in the tank, while heavy gas protection relates to the speed of the oil flow in the tank [9,10].
However, the accuracy of light gas protection action has long been low [11,12]. This is because its criterion is the volume of accumulated gas, which is easily influenced by non-fault factors such as trapped gas, making the protection insufficiently reliable [13,14]. As a result, power departments in many countries, including in Europe and North America, typically use light gas relays only as an alarm. The Chinese State Grid has also established relevant regulations to improve the accuracy and reliability of light gas alarms: after a light gas action, further confirmation is required through manual on-site gas analysis [15,16]. Nevertheless, the transformer is still operational at this time, and in the event of a serious accident, this poses a significant safety hazard to workers. In response to this safety hazard, some scholars have studied gases dissolved in oil and developed online monitoring technology for them. This technology, by analyzing changes in the composition of dissolved gases in oil under different voltages, temperatures, and other conditions, can accurately reflect early internal defects in transformers [17–19]. However, this method has a long monitoring cycle, usually measured in years, months, or weeks. It is not sensitive enough to respond promptly when internal defects rapidly develop into serious faults, which can lead to severe accidents. In summary, it is particularly crucial to achieve rapid on-site qualitative diagnosis and avoid personnel casualties when staff perform gas extraction operations following light gas alarms from transformers. Current online diagnostic technologies mainly include gas chromatography and photoacoustic spectroscopy [20–23]. Nevertheless, both methods have long gas component detection cycles, typically measured in hours, and cannot respond sensitively and quickly when internal defects rapidly develop into serious faults. Additionally, both methods require adding extra equipment to existing transformers, and this equipment is costly and complex to install.

This study analyzes the principles of, and prevailing issues in, current light gas protection, presenting a novel Electromagnetic Energy Transmission and Wireless Sensing Device (ETWSD) system grounded in Near-Field Communication (NFC) and wireless electromagnetic energy transfer. The system comprises an NFC part, a wireless electromagnetic energy transmission part, and a sensor part. It enables swift qualitative analysis of the methane gas concentration within the gas relay, the electrified condition of the gas relay's casing, and the operating temperature of the gas relay during operation. Energy is transferred through electromagnetic induction to charge a supercapacitor, which then powers the methane gas sensor for real-time monitoring of the methane concentration in the transformer, achieving millisecond-level response. NFC technology is employed for the contactless collection of signals from the sensor unit, with real-time measurements of temperature, voltage, and methane gas concentration subsequently displayed in the supervisory control APP. The ETWSD assists field personnel in assessing the current operational state of the transformer, thereby preventing rapidly developing faults during transformer operation and ensuring the safety of on-site workers.
Fabrication of Wireless Electromagnetic Power Transfer

The Wireless Electromagnetic Power Transfer (WEPT) design employs the SCT63240 chip from SCT Company (Franklin Lakes, NJ, USA), working in conjunction with a C51 microcontroller, to realize a wireless high-power transmitter system that conforms to the WPC (Wireless Power Consortium) specifications. This device integrates a four-MOSFET full-bridge power stage, gate drivers, a 5 V step-down DC/DC converter, a 3.3 V Low Dropout Regulator (LDO), an accurate 2.5 V voltage reference, and an input current sensor for monitoring system efficiency. The two Pulse Width Modulation (PWM) signal input ports of this system can be controlled by the C51 microcontroller to operate the full-bridge inverter across a wide frequency range of 20 kHz to 400 kHz, fully covering the WPC specification's frequency requirement of 100 kHz to 250 kHz. The typical application circuit for the SCT63240 chip can be found in its data manual. A corresponding Printed Circuit Board (PCB) was designed according to the layout guideline, and the PWM1 and PWM2 pins of the chip were connected to the General-Purpose Input/Output (GPIO) ports of the C51 microcontroller. Within the C51 microcontroller, the timer is initialized, the PWM frequency is set, and PWM waveforms are generated by toggling the I/O port level states. The duty cycle of the PWM waveforms is adjusted by altering the durations of the high and low voltage levels, thus achieving control over the wireless charging frequency. Figure 1a shows the PCB layout of the WEPT. The power supply routing uses a large-area copper pour to reduce line impedance, giving the WEPT module better current-carrying capacity and higher energy transfer efficiency. Figure 1b displays an optical image of the WEPT module, which comprises two components: the circuit board and the copper wire coil. Figure 1c shows the experimental test structure of the WEPT. A 3D-printed housing with a silicon steel sheet is fixed between the circuit board and the coil to block electromagnetic interference and suppress the generation of eddy currents. Stacking the circuit board and coil in this way increases space utilization.
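The frequency and duty-cycle control described above reduces to simple timing arithmetic. The sketch below converts a target PWM frequency and duty cycle into the high/low dwell times that the firmware realizes by toggling a GPIO, and checks the frequency against the WPC band cited in the text; timer register details of the C51 are deliberately omitted.

```python
def pwm_timing(freq_hz, duty):
    """Convert a target PWM frequency and duty cycle into the high/low dwell
    times produced by toggling a GPIO (sketch of the arithmetic only; the
    real C51 firmware loads equivalent timer reload values)."""
    period_s = 1.0 / freq_hz
    return duty * period_s, (1.0 - duty) * period_s

def in_wpc_band(freq_hz):
    """WPC specification frequency window cited in the paper."""
    return 100e3 <= freq_hz <= 250e3

t_high, t_low = pwm_timing(freq_hz=125e3, duty=0.5)
print(t_high, t_low, in_wpc_band(125e3))  # 4e-06 s high, 4e-06 s low, True
```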
Fabrication of NFC, Wireless Electromagnetic Power Receiver and Sensor Circuit

The Wireless Electromagnetic Power Receiver (WEPR) part of the circuit uses the CP2101 chip from COPO Company (Nacka kommun, Sweden), which complies with the WPC specifications. The chip's typical topology schematic and layout guideline are given in its data manual, and the circuit is designed according to the manual's requirements. Figure 2a displays the circuit topology of the WEPR and the methane sensor. The inductive coil antenna, controlled by the CP2101 chip, collects electromagnetic energy from the supervisory control. The energy is then rectified, filtered, and stored in a supercapacitor. The voltage of the supercapacitor depends on the output current and voltage of the electromagnetic energy transmission system and is sensitive to environmental disturbances. To prevent overheating and damage to the methane gas sensor, a stable supply voltage is required; therefore, a low-dropout linear regulator is added in front of the methane gas sensor. Before normal operation, the methane gas sensor requires a 3 min preheating period, during which it needs a supply voltage of 2.4 V and a current of 32 mA. The WEPR system can charge quickly enough to meet the energy requirements of the sensor's preheating, sensing, and communication.
Figure 2b shows that the NFC part employs a minimal-system peripheral circuit design based on the RF430FRL152HCRGER (Texas Instruments, Dallas, TX, USA), with the NFC antenna fabricated using PCB printing technology. Impedance analysis is conducted with a vector network analyzer to match the impedance and frequency values specified in the chip's design manual. Additionally, a coupling capacitor is placed in parallel with the antenna, allowing the NFC antenna to reach the LC oscillation circuit's resonance frequency of 13.56 MHz while also suppressing interference. The temperature sensing module uses a thermistor-based measurement method: the voltage across the thermistor is measured with the analog-to-digital converter (ADC) port on the NFC chip, and since the resistance changes with temperature, this reading reflects the actual environmental temperature. The voltage measurement module adopts a resistive voltage divider for measuring high voltage. It uses resistors of 100 kΩ and 1 kΩ in series, and the NFC chip's built-in ADC collects the voltage across the 1 kΩ resistor; the actual voltage is then obtained through the voltage divider formula. The methane gas measurement unit uses the MiCS-5524 MEMS sensor (DFRobot, Chengdu, China) with an analog output interface; the built-in ADC of the NFC chip collects and analyzes this analog quantity (these three conversions are collected in a code sketch at the end of this section).

Data Analysis and Statistics

All line and curve graphs were created in Origin, while data processing was conducted in MATLAB R2023b. For the distance tests between the WEPR and NFC, the temperature comparison tests, and the long-term data display after ETWSD testing, the mean ± standard deviation was used, with a sample size of 3; the error bars were too small to be displayed. After binary conversion and methane concentration calculation in MATLAB, the graphs were plotted in Origin.
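To make the sensor front-end conversions described above concrete (the 100 kΩ/1 kΩ divider, the thermistor measurement, and the 13.56 MHz antenna resonance), the following C sketch collects them. Only the divider ratio and the 13.56 MHz resonance come from the text; the thermistor constants and the example coil inductance are assumed placeholder values, not the paper's parts.

```c
/* Sketch of the three conversions described above; thermistor constants
   (R0, T0, B) and the example inductance are assumed values. */
#include <math.h>
#include <stdio.h>

/* Resistive divider: the ADC measures the drop across the 1 kohm leg of
   a 100 kohm + 1 kohm series string, so the true voltage is 101x. */
double divider_voltage(double v_adc) {
    return v_adc * (100000.0 + 1000.0) / 1000.0;
}

/* NTC thermistor via the beta equation R = R0 * exp(B*(1/T - 1/T0)),
   inverted to obtain temperature from the measured resistance. */
double thermistor_kelvin(double r_ohms) {
    const double R0 = 10000.0, T0 = 298.15, B = 3950.0;  /* assumed part */
    return 1.0 / (1.0 / T0 + log(r_ohms / R0) / B);
}

/* Capacitance that resonates the antenna inductance at f = 13.56 MHz:
   f = 1/(2*pi*sqrt(L*C))  =>  C = 1/((2*pi*f)^2 * L). */
double resonance_cap(double l_henry) {
    const double w = 2.0 * 3.14159265358979 * 13.56e6;
    return 1.0 / (w * w * l_henry);
}

int main(void) {
    printf("0.891 V at ADC -> %.1f V on casing\n", divider_voltage(0.891));
    printf("10 kohm thermistor -> %.2f K\n", thermistor_kelvin(10000.0));
    printf("L = 1.5 uH -> C = %.1f pF\n", resonance_cap(1.5e-6) * 1e12);
    return 0;
}
```

For instance, with an assumed 1.5 µH antenna inductance the required resonating capacitance works out to roughly 92 pF, a typical order of magnitude for 13.56 MHz NFC tags.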
Electrical Measurement and Characterization

The electrical signals of the ETWSD were measured using a RIGOL MSO8204 oscilloscope (RIGOL, Suzhou, China). A RIGOL RP3500A probe was used for low-speed signal detection, while a RIGOL RP6150A probe was employed for high-speed signal analysis. Fixed-frequency signals were generated with a STANFORD RESEARCH SYSTEMS MODEL DS345 instrument (SRS, Inc., Sunnyvale, CA, USA). Throughout the debugging and development phase of the ETWSD, a KEITHLEY 2200-72-1 programmable power supply (Tektronix China Ltd., Shanghai, China) provided stable voltage and current inputs. The output capacity of the ETWSD was carefully calibrated with an A-BF DCT8730 electronic load meter (LCSC, Shenzhen, China). The redesign of the NFC signal antenna and the wireless charging coil necessitated recalibration of the antenna's impedance characteristics, frequency response, and amplitude response; this recalibration was performed with a ROHDE&SCHWARZ ZNC3 vector network analyzer (Rohde & Schwarz Hong Kong Ltd. (RSHK), Hong Kong, China).
Light Gas Protection Alarm

When oil-immersed transformers are operational, light gas alarms can be triggered under several circumstances: (1) transformer oil gradually deteriorates over time due to factors such as temperature and oxygen, producing gas; (2) the insulation materials inside the transformer age under high temperature or voltage stress, also producing gas; (3) partial discharges within the transformer, such as corona or spark discharges, generate small amounts of gas; (4) changes in oil level or severe fluctuations in oil temperature cause gas release; (5) mechanical faults within the transformer, such as loosening of windings, trigger minor gas alarms; (6) environmental changes around the transformer, such as temperature and humidity, occur, or external gas infiltrates the system; (7) failure of insulation between windings leads to turn-to-turn short circuits, phase-to-phase short circuits, or ground faults, producing a large amount of gas; (8) the light gas relay itself malfunctions [24,25].

In case of an insulation failure inside the oil tank of an oil-immersed transformer, the high-energy short-circuit arc generated at the fault point decomposes the transformer cooling oil and other insulation materials, rapidly producing a large amount of free gas. The gas relay protects the transformer based on the volume of gas produced and the speed of the oil flow it causes. Existing light gas protection is based on changes in the volume of free gas accumulated at the top of the gas relay, which is used to determine whether there are minor faults inside the transformer tank. Typically, the action value of light gas protection is set according to the gas chamber volume within the gas relay, generally between 250 and 300 mL; a light gas alarm is triggered when the gas volume in the chamber exceeds this set value.

After a light gas alarm is triggered, workers need to sample and test the gas while the transformer is in operation, a process that poses safety risks [26]: the transformer's condition may deteriorate rapidly during gas extraction, potentially leading to explosions and fires that threaten the workers. Under current practice for handling possible false light gas alarms, gas extraction operations are permitted while the transformer is in service, provided that the operating procedure and its steps are strictly followed; the non-contact auxiliary diagnosis carried out prior to this step also meets the safety distance requirements. This paper proposes a real-time qualitative diagnostic device powered by wireless electromagnetic energy, allowing workers to quickly assess the state of light gas alarms in transformers.

After statistical analysis of the gas component data reported in the existing literature for transformers following light gas alarms [24,27-31] (Table 1), it is found that, under correct operation of light gas protection, the proportions of hydrogen, methane, and ethylene show noticeable changes. Among these, methane, being the gas with the most significant and most distinctive changes, is used as the primary characteristic gas for detection.
Component of ETWSD System

As shown in Figure 3a, this work presents a conceptual diagram of a device system based on electromagnetic energy transmission and NFC wireless sensing, referred to as the ETWSD. The system comprises three parts: a smartphone installed with a development application, a wireless charging transmitter, and an ETWSD. The smartphone serves as the upper computer for the entire system: it obtains sensor data from the ETWSD through NFC communication technology and displays the data graphically in the app using post-processing algorithms and a display interface framework. The WEPT module is designed to transfer electrical energy from the phone to the ETWSD and can be omitted if the phone itself supports reverse wireless charging. The ETWSD module is an embedded system that integrates the functions of receiving electromagnetic energy, collecting the physical quantities describing the gas relay's operating status, and NFC communication.
Figure 3b shows the block diagram of the program structure of the ETWSD system. Both smartphones and computers can serve as the upper computer, but the portability of smartphones gives them the advantage. The upper computer supplies energy to the WEPT module through a universal serial bus cable. The wireless electromagnetic energy receiver module enables the use of high-power sensors on board, and the sensor module collects the temperature, the electrified state of the casing, and the methane concentration in the gas relay. The NFC data module is responsible for data communication with the supervisory control APP, and the WEPR module provides a stable power source for the sensor module. When the receiver is close to the transmitter, the two communicate via their metal coils, achieving a handshake function. The main controller continuously checks whether the circuit is operating in a normal energy transmission state, ensuring the efficiency and stability of energy transmission. At the same time, the main controller regulates the voltage adjustment module to filter, rectify, and step the input alternating electromagnetic signal down or up through a series of voltage transformation operations. The main controller can set the output voltage and current of the voltage regulation module through external resistor programming. Under a stable supply voltage, the gas sensor undergoes approximately 3 min of preheating; after preheating, it outputs an analog voltage signal to the NFC data module, enabling continuous monitoring of the concentration of specific gas components. The NFC data module collects the analog temperature, voltage, and gas concentration signals through three analog data lines, converts them into digital form through an ADC, and then calculates the actual values of temperature, voltage, and gas concentration through formulae. Subsequently, the RF modulation module modulates the data onto an electromagnetic signal of a specific frequency and transmits it wirelessly to the supervisory control. The supervisory control demodulates and records the signal and finally displays it in the supervisory control application, completing the entire process from power supply through measurement to display. Field workers can instantly view and export measurement data through the device.

Figure 3c shows the interface of the ETWSD supervisory control APP. The interface displays the connection status of the lower-level machine in real time through the Tag connection status, which can be used to check whether the supervisory control is properly placed. The program monitors the amplitudes of the data from the three sensors and provides brief auxiliary reminders through the error alarm. Meanwhile, the interface displays real-time line graphs of temperature, leakage voltage, methane concentration, and ADC port voltage. Measurement data can be saved using the SAVE button on the control panel. During transformer gas sampling operations, workers only need to place their smartphones in the sensing area and wait for the electromagnetic energy transmission to complete. After a few minutes, they can obtain the gas temperature inside the gas relay, voltage readings from leakage on the gas relay casing, and the methane concentration in the gas relay collection chamber.
Calibration of Methane Gas Sensors

Figure 4a displays the voltage and output current curves of the WEPR system under no-load conditions. The system maintains a stable output of 5 V, with the current gradually increasing to a maximum output of 1.2 A within 60 s. The stable voltage and rapid current response give the WEPR system good load-bearing capacity, providing the energy required by the sensor circuit. Figure 4b displays the output voltage of the WEPR system at different distances from the electromagnetic energy transmitter; the voltage output shows that the WEPR system can stably receive wireless electromagnetic energy within 40 mm. Figure 4c illustrates the charging curves of the WEPR system's internal supercapacitors for various capacitance values. The time required to charge a capacitor to 5 V depends on its capacitance: a 0.5 F capacitor takes only 15 s, a 1.5 F capacitor 41 s, a 5 F capacitor 67 s, and a 10 F capacitor 116 s. This rapid charging reduces the time needed for wireless electromagnetic energy transfer between the supervisory control and the lower-level machine. Figure 4d illustrates the relationship between the voltage on the supercapacitor in the WEPR and the voltage on the methane sensor. The graph shows that the voltage on the methane sensor remains stable at the expected value, and a 10 F capacitor can power the sensor's preheating and working processes. Furthermore, it has been demonstrated that, following a rapid charge of the storage capacitor, the system can supply sufficient power to the methane gas sensor for continuous measurement lasting over 8 min. Figure 4e depicts the methane concentration in the ambient environment: at the 100 s mark, upon opening of the gas supply valve, the sensor responds within 5 s and thereafter progressively converges to the actual methane concentration in the environment. This rapid response significantly enhances the measurement system's overall speed and stability. Finally, Figure 4f presents the correlation between the long-term operational data of the methane sensor and the corresponding calibrated values across various methane gas concentrations. The empirical data show only marginal divergence from the calibration values, underscoring the sensor's precision and stability in measuring methane concentrations. It is, however, noteworthy that at methane concentrations above 10,000 ppm the sensor initially exhibits a pronounced offset, which subsequently converges towards the calibrated values over time.
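As a rough consistency check on these figures, the following sketch estimates the energy budget of the 10 F supercapacitor. The 5 V charge level, the 2.4 V / 32 mA sensor load, and the 3 min preheat come from the text; the ideal regulator and the 2.6 V usable-voltage floor are assumptions.

```c
/* Rough energy-budget check for the 10 F supercapacitor, assuming an
   ideal regulator and an assumed 2.6 V usable-voltage floor. */
#include <stdio.h>

int main(void) {
    const double C = 10.0, V0 = 5.0, Vmin = 2.6;
    const double Vload = 2.4, Iload = 0.032;      /* sensor: 2.4 V, 32 mA */

    double usable_j  = 0.5 * C * (V0 * V0 - Vmin * Vmin);  /* ~91 J  */
    double preheat_j = Vload * Iload * 180.0;              /* ~14 J  */
    double load_w    = Vload * Iload;                      /* ~77 mW */

    printf("usable energy : %.1f J\n", usable_j);
    printf("preheat cost  : %.1f J\n", preheat_j);
    printf("runtime after preheat: %.0f s\n",
           (usable_j - preheat_j) / load_w);
    return 0;
}
```

Under these assumptions the capacitor holds roughly 91 J of usable energy against a 14 J preheat cost and a 77 mW steady load, giving well over 15 min of operation; even with substantial conversion losses, this comfortably covers the reported 8 min of continuous measurement.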
NFC Module Testing

To ensure the antenna has the appropriate shape and flexibility, we redesigned its shape and material. To achieve optimal signal and energy transmission, we used a vector network analyzer to establish the relationship between frequency and antenna impedance through frequency sweeping. By iteratively adjusting the antenna's design parameters, we obtained the antenna parameter curve shown in Figure 5a, matching the antenna's resonance frequency and equivalent impedance to the chip's port characteristics. In practical use, the distance between the NFC antennas affects the accuracy of data transmission. As shown in Figure 5b, the data collected by the ADC match the actual data exactly at distances below 15 mm; beyond this limit, data transmission fails. In the sensor system, the three measured quantities are acquired in analog form and converted into digital form through the ADCs. As shown in Figure 5c, the ADCs' precision has been calibrated, which helps correct the collected analog values: within the 0-0.9 V range, the average error between the ETWSD's acquired voltage and the supply voltage is as low as 0.49%.
Figure 5d compares the temperature sensor's output values, read via the supervisory control, with standard values, showing a notably low average error rate, as low as 0.36%, particularly at environmental temperatures below 70 °C. The measurement of leakage voltage with the casing electrified uses the resistive voltage divider method, where the reference voltage collected by the ADC provides a reading of the gas relay casing voltage. As shown in Figure 5e, this method's accuracy is acceptable, with a maximum error of less than 10%, which is sufficient for the qualitative determination of whether the gas relay casing is electrified. Figure 5f shows how we established the relationship between the analog quantities collected by the ADC and the methane concentration. Using professional methane concentration testing equipment, the curve of methane concentration over time was obtained; additionally, an oscilloscope was used to measure the voltage curve at the analog output port of the methane sensor. By fitting the curves with higher-order polynomials, the relationship between the measured analog voltage and the accurate digital methane concentration was established, thereby creating a function that maps voltage values to concentration values.

Laboratory Tests

The system's current fault determination method uses a threshold comparison approach: if any of the three parameters exceeds its designated threshold, the circular label in the APP changes from green to red, indicating abnormal sensing information. Figure 6a shows the actual working interface of the APP, which displays the connection status of the supervisory control. A green circle indicates no danger under the current data, while a red circle signifies that at least one sensor's current data are abnormal. The interface also displays the current temperature, gas relay casing voltage, and methane concentration. The APP interface is clear and concise, facilitating quick access to, and judgment of, the key information.
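The two post-processing steps just described, mapping the fitted analog voltage to a methane concentration and then applying the threshold comparison that drives the green/red indicator, can be sketched as follows. The polynomial coefficients and the threshold values are placeholders, not the paper's calibration.

```c
/* Sketch of the post-processing chain: polynomial calibration followed
   by threshold comparison. Coefficients and limits are placeholders. */
#include <stdio.h>

/* Evaluate the fitted higher-order calibration polynomial (Horner form).
   These coefficients are illustrative, not the paper's fit. */
double methane_ppm(double v) {
    const double c[4] = { -120.0, 2400.0, -900.0, 310.0 };
    double y = c[3];
    for (int i = 2; i >= 0; --i)
        y = y * v + c[i];
    return y;
}

/* Threshold comparison: abnormal (red) if any channel exceeds its limit. */
int is_abnormal(double temp_c, double casing_v, double ch4_ppm) {
    const double T_MAX = 90.0, V_MAX = 36.0, PPM_MAX = 1000.0; /* assumed */
    return temp_c > T_MAX || casing_v > V_MAX || ch4_ppm > PPM_MAX;
}

int main(void) {
    double ppm = methane_ppm(0.42);   /* hypothetical sensor output, volts */
    printf("CH4 ~ %.0f ppm -> %s\n", ppm,
           is_abnormal(35.0, 0.0, ppm) ? "RED" : "GREEN");
    return 0;
}
```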
Figure 6b depicts the actual working state in a laboratory setting. The sensing system operates in the collection chamber of the gas relay, using the chamber's glass panel to complete electromagnetic energy transfer and NFC-based data communication. By controlling the temperature and methane concentration in a sealed beaker, the environment inside an actual gas relay collection chamber is simulated and reproduced as closely as possible, thereby testing the system's stability and reliability. Figure 6c demonstrates the long-term stability and accuracy of the sensor system. However, the system has not yet been operated in the gas relay of an actual high-voltage oil-immersed power transformer in service. Although we have conducted experiments on electromagnetic radiation interference, the magnitude of the interference noise in those experiments is smaller than that of the electromagnetic noise emitted by an actual transformer; as a result, the quality of communication and electromagnetic transmission may be degraded in the actual working scenario.
Discussion

Under the current regulations and safety guidelines for gas relay protection operations, certain safety risks remain. Existing methods such as photoacoustic spectroscopy and gas chromatography can effectively analyze the components of gases dissolved in transformer oil, thereby providing early warnings of potential defects in oil-immersed power transformers. However, their detection cycle is typically long, usually exceeding 30 min, so in cases of rapidly developing defects they may not sensitively reflect the problems present in the transformer. Consequently, gas alarms may pose a threat to the personal safety of operators conducting gas sampling while the transformer is in operation. Furthermore, both existing methods require modifications to the transformer body and the installation of additional equipment, rendering them expensive and difficult to deploy widely. The proposed method can be seamlessly integrated with existing gas relays. It enables rapid detection and analysis of the methane concentration in the gas relay, the transformer temperature, and the electrification status of the relay casing 3-5 min before personnel engage in gas sampling operations, with the aim of ensuring the safety of on-site workers. Additionally, the proposed device is compact and low cost and can be produced using flexible PCBs; installing it in the gas collection chamber of gas relays would be an ideal scenario. The proposed sensor device therefore employs near-field communication and wireless power transmission technologies, whose electromagnetic signals can penetrate the transparent window of the gas collection chamber, enabling interaction with handheld portable electronic devices.

The designed sensor circuit features a flexible antenna with impedance matching at 13.56 MHz, laying a stable foundation for wireless communication. Moreover, after preheating, the circuit responds quickly to methane concentration, providing precise readings below 1000 ppm and qualitative detection above 1000 ppm. It can also swiftly detect the electrification status of the gas relay casing, offering accurate readings in the range of 0-100 V, and it enables rapid measurement of the transformer operating temperature up to 90 °C. However, the proposed sensor also has some drawbacks. It can detect only a limited range of gas types, which restricts its ability to provide detailed fault analysis of transformers and limits it to qualitative detection. Furthermore, its resistance to electromagnetic interference during communication in the field remains unverified: although we conducted electromagnetic interference experiments demonstrating the reliability of the near-field communication, we did not perform measurements on operational transformers, and the electromagnetic environment at actual operating sites may be more complex and variable, which could affect the reliability of the near-field communication.
Conclusions

In summary, this paper proposes a proximal, thin-slice transformer monitoring device using electromagnetic energy transmission and wireless sensing to assist workers in qualitatively diagnosing the operating condition of a transformer prior to gas extraction operations, following the triggering of a light gas relay alarm. This device differs from the online diagnostic technologies currently used in the power grid, such as gas chromatography and photoacoustic spectroscopy. Those methods, which require additional instrumentation or piping on the transformer and long-term testing, can detect minor defects and faults in an operating transformer by analyzing the dissolved gas components in the oil, helping to prevent transformer failures; however, they lack the agility to provide rapid diagnostic results in scenarios where internal defects develop rapidly or mechanical parts loosen and external gases enter the system. In contrast, the proposed device can be placed in the transparent gas collection compartment of the gas relay, where it rapidly measures the operating temperature of the transformer, the leakage voltage of the gas relay housing, and the methane concentration in the gas relay collection chamber. Together, these measurements allow on-site maintenance personnel to assess whether the transformer's current condition is safe for gas sampling operations. The overall system design comprises three main parts: the supervisory controller, the NFC communication and sensor data acquisition module, and the wireless power supply module. The low-power sensors are powered by NFC, which provides communication while consuming little power. For the gas sensor, which requires a stable, continuous supply, electromagnetic wireless energy transfer and collection are used to charge the supercapacitor in a short time, ensuring normal operation of the gas sensor. Finally, by combining NFC communication technology with electromagnetic wireless energy transfer technology, we realized a device for rapid on-site auxiliary diagnosis of light gas relay alarms.

Figure 1. The schematic diagram of the WEPT. (a) Altium Designer layout diagram of the WEPT. (b) Optical image of the WEPT. (c) Diagram of the WEPT test structure.

Figure 2. Partial circuit schematic of the ETWSD. (a) Circuit topology diagram of the WEPR. (b) Circuit topology diagram of the NFC circuit.

Figure 3. Conceptual and schematic diagrams of the ETWSD. (a) Conceptual diagram of the use of the ETWSD, along with schematic diagrams showing the basic structural components of the smartphone and sensor nodes. (b) Block diagram of the signal and energy flow in the ETWSD system, highlighting the important components of each part. (c) Interface display of the supervisory control APP, including annotations for each display item.
Figure 4. Wireless electromagnetic power receiver system. (a) Output voltage and current of the WEPR under no-load conditions. (b) Relationship between WEPR output voltage and transmission distance. (c) Charging of capacitors of different capacitances by the WEPR system. (d) Comparison of capacitor voltage and load voltage in the WEPR under load conditions. (e) Methane concentration response test. (f) Actual test data of the methane gas sensor for different methane gas concentrations.

Figure 5. NFC data transmission module. (a) Antenna impedance-frequency parameter diagram. (b) Correlation of data collected by the ADC. (c) Calibration of ADC-collected voltage against actual values. (d) Comparison of temperature values collected by the temperature collection module with actual values. (e) Collection of leakage voltage values. (f) Calibration of ADC voltage values against methane concentration values.
Figure 6. Laboratory test images of the ETWSD. (a) Display pages of the APP in both normal and abnormal states. (b) Actual optical images of the ETWSD under laboratory conditions. (c) Sensor data from the ETWSD under prolonged operation.

Table 1. Components of gas samples after light gas protection operation.
Theoretically motivated dark electromagnetism as the origin of relativistic MOND

The present paper is a modest attempt to initiate the research program outlined in this abstract. We propose that general relativity and relativistic MOND (RelMOND) are analogues of the broken electroweak symmetry. That is, $SU(2)_R \times U(1)_{YDEM} \rightarrow U(1)_{DEM}$ (DEM stands for dark electromagnetism), and GR is assumed to arise from the broken $SU(2)_R$ symmetry, in analogy with the weak force. RelMOND is identified with dark electromagnetism $U(1)_{DEM}$, the symmetry that remains unbroken after spontaneous symmetry breaking of the darkelectro-grav sector $SU(2)_R \times U(1)_{YDEM}$. This sector, as well as the electroweak sector, arises from the breaking of an $E_8 \times E_8$ symmetry in a recently proposed model of unification of the standard model with pre-gravitation, the latter being an $SU(2)_R$ gauge theory. The source charge for the dark electromagnetic force is the square root of mass, motivated by the experimental fact that the square roots of the masses of the electron, up quark, and down quark are in the ratio 1:2:3, which is a flip of their electric charge ratios 3:2:1. The introduction of the dark electromagnetic force helps explain the strange mass ratios of the second and third generations of charged fermions. We also note that in the deep MOND regime, acceleration is proportional to the square root of mass, which motivates us to propose the relativistic $U(1)_{DEM}$ gauge symmetry as the origin of MOND. We explain why the dark electromagnetic force falls off inversely with distance, as in MOND, and not as the inverse square of distance. We conclude that dark electromagnetism is a good mimicker of cold dark matter, and that the two are essentially indistinguishable in those cosmological situations where CDM is successful in explaining observations, such as CMB anisotropies and gravitational lensing.

I. INTRODUCTION

The cold dark matter paradigm is a cornerstone of modern cosmology. Dark matter plays a central role in explaining structure formation in the early universe, as well as the observed anisotropies of the cosmic microwave background. Dark matter is also successful in explaining the gravitational lensing of large-scale structures and the observed baryon acoustic oscillations. Anomalous velocity dispersions in galactic clusters are also accounted for by proposing the existence of dark matter, and, with some challenges, galaxy rotation curves can be explained by assuming a specific density profile for the dark matter. Thus a single assumption, that of a prevailing collisionless and gravitationally attractive fluid with mass density about 5-6 times that of ordinary matter, can account for a wide range of cosmological phenomena. A new elementary particle, beyond the standard model of particle physics, is considered the most likely origin of this dark fluid.
And yet, the greatest challenge for dark matter is that no such particle has been detected experimentally in the laboratory so far, despite a very large number of proposed theoretical candidates and an intense experimental search spanning some four decades and nearly fifty different experiments. Of course this does not imply that such a particle will never be detected, but it does make one ask if there could be an alternative explanation of cosmic phenomena in which dark matter is not required, but in which, say, the law of gravitation [Newton's law as well as general relativity] is modified on large scales. It is not impossible that such a modification of gravitation perfectly mimics the proposed dark fluid; what we call dark matter could in fact be modified gravity in disguise. It is this idea which is explored in the present paper, starting from a first-principles proposal for modified gravity, which we call dark electromagnetism.

The most successful proposal for a modified law of gravitation, as an alternative to dark matter, is Milgrom's Modified Newtonian Dynamics [MOND]. This is a phenomenological proposal in which, for accelerations far smaller than a certain low critical value, the law of gravitation (in the non-relativistic limit) changes from an inverse-square law to a 1/R law. MOND does very well at explaining galaxy rotation curves, whereas on cluster and cosmic scales it has modest success and also faces serious challenges. MOND's major challenge is that it lacks a first-principles theoretical explanation: what are the convincing theoretical reasons (independent of observations) which might compel us to modify general relativity, and in such a way that MOND is an inevitable consequence of the modified theory in its non-relativistic limit?

Our recent proposal for a unified theory of interactions provides strong evidence for a theoretical origin of MOND, in the form of a fifth force. Surprisingly, this unified theory does not provide any dark matter particle candidate, thus favouring modified gravity over dark matter. This prediction also makes the theory eminently falsifiable: it will be ruled out by a laboratory detection of dark matter. The present paper describes the origin of this fifth force (dark electromagnetism) and how it serves as the origin of MOND. The theory is briefly reviewed in Section II, which is followed by a brief review of MOND in Section III.

As we explain in Section VI of the paper, our proposal for MOND as dark electromagnetism also explains Verlinde's entropic criterion [20] for MOND. Keeping this connection with Verlinde in mind, in Section IV we review Verlinde's proposal for motivating MOND from entropy considerations. Section V is central to the paper, and explains how dark electromagnetism mimics MOND on various cosmic scales. Related empirical aspects of dark electromagnetism are discussed in Section VII. Section VIII critiques our findings, and we also comment on the current challenges faced by MOND.
II. THEORETICAL ORIGIN OF DARK ELECTROMAGNETISM

A theory of unification of the fundamental forces has recently been proposed [1], starting from the foundational requirement that there should exist a reformulation of quantum field theory which does not depend on classical time [2]. This theory is based on an $E_8 \times E_8$ symmetry group, in which each of the two $E_8$ groups is assumed to branch, as a result of a spontaneous symmetry breaking identified with the electroweak symmetry breaking, into an $SU(3)$ and an $E_6$, with each $E_6$ in turn branching into three $SU(3)$s.

Leaving the two $E_6$ aside for a moment, the $SU(3) \times SU(3)$ pair arising from the $E_8 \times E_8$ branching is mapped to an (8+8=16)-dimensional split-bioctonionic space, from which our 4D spacetime as well as the internal symmetry space for the standard model forces (and two newly predicted forces) are assumed to emerge. The three $SU(3)$s arising from the branching of each of the two $E_6$, with the right-most $SU(3)$ in each set branching as $SU(2) \times U(1)$, are interpreted as follows. In the first $E_6$, the first $SU(3)$ is $SU(3)_{genL}$ and describes three generations of left-handed standard model fermions (eight per generation, along with their anti-particles). The second $SU(3)$ is associated with the $SU(3)_{color}$ of QCD. The branched third $SU(3) \rightarrow SU(2)_L \times U(1)_Y$ describes the electroweak symmetry of the standard model, broken to $U(1)_{em}$.

In the second of the two $E_6$, the first $SU(3)$ is $SU(3)_{genR}$ and describes three generations of standard model right-handed fermions, including three types of sterile neutrinos (eight fermions per generation, along with their anti-particles). The second $SU(3)$ is identified with a newly predicted but yet to be discovered (likely short-range) 'sixth force' named $SU(3)_{grav}$. The third $SU(3) \rightarrow SU(2)_R \times U(1)_{YDEM}$ describes what we call the darkelectro-grav sector, which breaks to the newly predicted 'fifth force' $U(1)_{DEM}$, named dark electromagnetism, which we propose to be the relativistic MOND theory (a gauge theory) whose non-relativistic limit is Milgrom's MOND [3]. The broken $SU(2)_R$ symmetry is proposed to give rise to classical gravitation as described by the general theory of relativity (GR). At low accelerations, the fifth force of dark electromagnetism (DEM) dominates over GR, whereas at high accelerations GR dominates over DEM, with the transition occurring at the critical MOND acceleration $a_M \sim a_0/6 \approx cH_0/6$, where $a_0$ is the cosmological acceleration of the currently accelerating universe. We reiterate that standard general relativity is assumed to emerge from the broken $SU(2)_R$ symmetry, whereas $SU(3)_{grav}$ is a newly predicted unbroken symmetry (likely short-range and extremely weak, in which the charged leptons and the down family of quarks take part).
The particle content of this unification proposal has been described in detail in Kaushik et al. [1]. All the 248+248=496 degrees of freedom of $E_8 \times E_8$ are accounted for. The only fermions in the theory are three generations of standard model chiral fermions. Apart from the 12 standard model gauge bosons, there are 12 newly predicted spin-one gauge bosons associated with the $SU(3)_{grav} \times SU(2)_R \times U(1)_{YDEM}$ sector. Eight of these are so-called gravi-gluons, associated with the (likely short-range, and ultra-weak compared to QCD) $SU(3)_{grav}$. The gauge boson associated with $U(1)_{DEM}$ is named the dark photon; it is massless and has zero electric charge. Of the three bosons associated with the broken $SU(2)_R$ symmetry, two have zero electric charge but are as massive as the Planck mass, and hence mediate a Planck-length-range interaction: these are analogs of the $W^+$ and $W^-$ bosons of the weak force. The third is massless and has an insignificantly tiny electric charge (scaled down enormously, compared to the charge of the electron, by cosmological inflation) which can be set to zero for all practical purposes. This boson is the analog of the $Z^0$ of the weak force. The pre-gravitation $SU(2)_R$ symmetry is mediated by spin-one gauge bosons, with gravitation as described by the metric tensor of the general theory of relativity emerging only in the classical limit. In our approach, classical GR is not to be quantised, which is why the theory has no fundamental, non-composite, spin-2 graviton. This does not contradict the fact that classical GR admits the experimentally confirmed quadrupolar gravitational waves: the apparent spin-2 nature of gravitation emerges only in the classical limit. The underlying theory from which spacetime and GR emerge in the classical limit is a pre-quantum, pre-spacetime theory; gravitation and quantum theory are emergent phenomena.

There are two Higgs doublets in this theory, one being the standard model Higgs, which gives mass to left-chiral fermions upon spontaneous breaking of the electroweak symmetry. The second, newly predicted Higgs boson gives electric charge to the right-chiral fermions upon breaking of the darkelectro-grav symmetry, which coincides with the electroweak symmetry breaking. Unlike in the standard model, both Higgs are now predicted to be composite, being composed of the very fermions to which they give mass and electric charge. Of the 496 degrees of freedom in the theory, 32 are with the bosons (after including 4 each for the two Higgs). Another 32 degrees of freedom are with the internal generation space and pre-spacetime (16 each), and 144 d.o.f. are with the fermions. The remaining 288 d.o.f. go into making the two composite Higgs, 144 per Higgs. It is noteworthy that each Higgs has as many composite d.o.f. as the total number of d.o.f. in the fermions. The bosonic content of the theory is confirmed also by examining the Lagrangian of the theory, as has been done in Raj and Singh [4].
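For convenience, the degree-of-freedom bookkeeping just described can be summarised in one line (this is a restatement of the counts in the text, not a new computation):

$$
248 + 248 \;=\; 496 \;=\;
\underbrace{32}_{\text{gauge bosons + two Higgs}}
+\underbrace{16+16}_{\text{generation space + pre-spacetime}}
+\underbrace{144}_{\text{fermions}}
+\underbrace{2\times 144}_{\text{composite Higgs}} .
$$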
The source charge associated with $U(1)_{em}$ is of course the electric charge, and in the algebraic approach to unification it can be shown to be quantised, as was done for instance by Furey [5]. Electric charge is defined as the number operator constructed from generators of the Clifford algebra $Cl(6)$, which algebra is in turn generated by octonionic chains acting on octonions. It is shown that electric charge can only take the values (0, 1/3, 2/3, 1). Furthermore, the spinorial states associated with these charge values exhibit the following symmetry under the group $SU(3)$ (a maximal subgroup of the smallest exceptional group $G_2$, the automorphism group of the octonions): the states with charges 0 and 1 are singlets of $SU(3)$, states with charge 1/3 are anti-triplets, and states with charge 2/3 are triplets. This enables the interpretation that the states with charges (0, 1/3, 2/3, 1) are, respectively, the (left-handed) neutrino, the anti-down quark, the up quark, and the positron. The $SU(3)$ is hence identified with the $SU(3)_{color}$ of QCD. Anti-particle states are obtained by complex conjugation of particle states and are shown to have the opposite sign of electric charge, as anticipated. Note that these fermions are left-chiral particles, and their corresponding anti-particles are right-chiral. Furthermore, this quantisation of electric charge holds for every one of the three fermion generations; the Clifford algebra construction applies equally well to the second and third generations.

Consider next the symmetry $SU(3)_{grav} \times U(1)_{DEM}$ associated with the right-handed sector, these two being the two new forces [1]. The source charge associated with the $U(1)_{DEM}$ symmetry is the square root of mass, $\pm\sqrt{m}$, not electric charge. The motivation for proposing this interpretation (for the number operator made from the generators of the Clifford algebra $Cl(6)$ which define the right-chiral fermions) comes from the following remarkable fact [6]: the square roots of the masses of the electron, up quark, and down quark are in the ratio 1:2:3, which is a flip of their electric charge ratios 3:2:1. We treat electric charge and square root of mass on the same footing. The square root of mass also takes two signs, $+\sqrt{m}$ and $-\sqrt{m}$: the positive sign is for matter, the negative sign for anti-matter. Like signs attract under the dark electromagnetic force, and unlike signs repel. Note that the mass $m$, being the square of $\pm\sqrt{m}$, is necessarily positive. Three new colors for $SU(3)_{grav}$ are introduced: the right-handed neutrino and the down quark are singlets of these new colors, with $\sqrt{m}$ values 0 and 1, respectively; the electron is an anti-triplet of $SU(3)_{grav}$ with $\sqrt{m}$ value 1/3, and the up quark is a triplet of $SU(3)_{grav}$ with $\sqrt{m}$ value 2/3. Their anti-particles have the corresponding square-root mass values $-\sqrt{m}$. This mass quantisation is derived from first principles, just as for electric charge quantisation, and it holds for every one of the three generations, again just like for electric charge. Note that our proposal also gives a dynamical definition of matter versus anti-matter: matter has positive square root of mass ($+\sqrt{m}$) and anti-matter has negative square root of mass ($-\sqrt{m}$). The mass $m$ is of course positive for matter as well as anti-matter, being obtained by squaring $\pm\sqrt{m}$.
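As a rough numerical check of the quoted ratio, one may use the approximate values $m_e \approx 0.511$ MeV, $m_u \approx 2.2$ MeV, $m_d \approx 4.7$ MeV (the light-quark masses are scheme-dependent, so this is only indicative):

$$
\sqrt{m_e}:\sqrt{m_u}:\sqrt{m_d}
\;\approx\; 1:\sqrt{\tfrac{2.2}{0.511}}:\sqrt{\tfrac{4.7}{0.511}}
\;\approx\; 1:2.08:3.03 ,
\qquad
q_e:q_u:q_d \;=\; 1:\tfrac{2}{3}:\tfrac{1}{3} \;=\; 3:2:1 .
$$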
Why then do the second and third fermion generations have such strange mass ratios, as observed in experiments [7]? The answer is that even when we do experiments to measure particle masses, the measurements are electromagnetic in nature, and carried out using electric charge eigenstates. These electric charge eigenstates are not eigenstates of (square-root) mass. The exceptional Jordan algebra associated with the three fermion generations (one algebra for the electric charge eigenstates, which are left-chiral, and one algebra for the square-root mass eigenstates, which are right-chiral) can be used to express electric charge eigenstates as superpositions of square-root mass eigenstates, through the so-called Jordan eigenvalues. The weights of these superpositions reveal the observed mass ratios to a very good accuracy [8], and strongly support the proposal that the source charge associated with the dark electromagnetic force is square root of mass. The fact that the source charge for the MOND acceleration is also square-root of mass encourages us to identify dark electromagnetism with relativistic MOND.

In the very early universe, at the epoch of electroweak symmetry breaking, the enormous repulsive dark electromagnetic force segregated matter ($+\sqrt{m}$) from anti-matter ($-\sqrt{m}$), so that our part of the matter-antimatter symmetric universe is matter dominated [4]. (This scenario bears resemblance to the CPT-symmetric universe model proposed by Boyle and Turok [9-11].) As a result, the dark electromagnetic force in our matter-dominated universe is apparently attractive only (even though $U(1)_{DEM}$ is a vector interaction). Similarly, the emergent gravitational interaction, which is the classical limit of the $SU(2)_R$ gauge theory, is attractive only. We predict that the $U(1)_{DEM}$ force between an electron and a positron is repulsive. Another important aspect of the octonionic theory [2] (i.e. the one based on $E_8 \times E_8$ symmetry) is the 'square-root of spacetime'. The spinorial states which define the fermions and satisfy the Dirac equation are constructed from the algebra of the octonions acting on itself. In this sense, a spinor is the square of an octonion, and since spinors are defined on spacetime, this suggests the view that a space which is labeled by using octonions as coordinates is actually the square-root of spacetime. However, the absolute square modulus of an octonion should be assigned dimensions of length-squared, not length (as the Lagrangian of the octonionic theory suggests). This compels us to introduce the effective distance $R_{eff}^2 = R_H R$ in the unbroken theory, where $R_H$ is the deSitter horizon. An unbroken symmetry such as $U(1)_{DEM}$ thus ought to have a distance dependence (say in Coulomb's law) of $1/R_{eff}^2 \sim 1/R$, and not $1/R^2$. This is a possible explanation for the $1/R$ dependence of the MOND acceleration, which, taken together with the source charge for $U(1)_{DEM}$, can explain why the MOND acceleration behaves as $\sqrt{M}/R$, unlike gravitation, which goes as $M/R^2$ in the Newtonian limit. We can say that gravitation is the square of dark electromagnetism: the source current for DEM is $\sqrt{m}\,c\,u^i$, whereas the source current for gravitation is $(\sqrt{m}\,c\,u^i)(\sqrt{m}\,c\,u^j) = mc^2 u^i u^j$, which is nothing but the energy-momentum tensor.
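To see the claimed $1/R$ behaviour concretely, here is a small numerical sketch. The SI values of $G$, $H_0$ and the galaxy-scale mass $M$ are illustrative inputs, not outputs of the theory:

```python
import math

# Sketch: the effective distance R_eff^2 = R * R_H turns an inverse-square law
# into a 1/R law, a = sqrt(G M aM) / R, the deep-MOND form ~ sqrt(M)/R.
G, c, H0 = 6.674e-11, 2.998e8, 2.27e-18      # SI; H0 ~ 70 km/s/Mpc
R_H = c / H0                                 # deSitter / Hubble horizon
aM  = c * H0 / 6                             # Milgrom constant as used here
M   = 1.2e41                                 # kg, illustrative galactic mass

for R in (1e20, 1e21, 1e22):                 # metres
    a_newton = G * M / R**2                  # falls as 1/R^2
    a_mond   = math.sqrt(G * M * aM) / R     # falls as 1/R
    print(f"R={R:.0e} m  a_N={a_newton:.2e}  a_MOND={a_mond:.2e} m/s^2")
```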
In our proposal for DEM as relativistic MOND, the DEM force mimics Maxwell's electrodynamics, with electric charge replaced by square root of mass, and spatial distance replaced by an effective distance $R_{eff} \equiv \sqrt{R R_H}$, where $R_H$ is the Hubble radius, equivalently the deSitter horizon. The source term, in the non-relativistic limit, is the effective volume density of square-root mass: $\sqrt{M}/R_{eff}^3$. The left-hand side of the Poisson equation is the Laplacian made using the effective distance function. Such a Poisson equation yields MOND, in the deep MOND regime.

Since MOND has a $1/R$ distance dependence in the acceleration, the associated MOND potential is logarithmic. This is in principle consistent with the source being a surface square-root mass density $\sim \sqrt{M}/R^2$, as if the MOND dynamics were taking place effectively in two spatial dimensions. This is consistent with the logarithmic form of the Green's function of the Laplace equation in 2D, as we review in the Appendix in Section VIII. Note, however, that this surface density of square-root mass does not have a well-defined limit as $R \to 0$ (it diverges as $R^{-1/2}$), whereas the volume density of square-root mass defined using the effective distance does have a well-defined limit, which is finite. Also, we would not like to modify the structure of the left-hand side of the Poisson equation, and this is consistent with proposing MOND as the non-relativistic limit of the $U(1)_{DEM}$ gauge theory.

Why do we associate the $SU(2)_R$ gauge symmetry with gravitation, as in the general theory of relativity? The following arguments provide a number of independent hints in favour of the notion that the group $SU(2)_R$ (arising in the octonionic theory [1,2,4]) qualifies to describe a theory of gravity in 4 dimensions.

In the octonionic theory there appears the product group $SU(2)_L \times SU(2)_R$, where one copy is left-handed, the other right-handed. Now $SU(2)_L \times SU(2)_R$ is locally isomorphic to $SO(4)$, the rotation group in 4 Euclidean dimensions. A Wick rotation transforms $SO(4)$ into the Lorentz group $SO(1,3)$. So the Lorentz group in 4 dimensions is locally isomorphic to the product group $SU(2)_L \times SU(2)_R$.

That the left-handed subgroup $SU(2)_L$ accounts for the weak interaction within the standard model has been known for long. Here we claim that the right-handed subgroup $SU(2)_R$ can account for gravity in 4 dimensions.

To see how a graviton could possibly arise in this setting, consider the tensor product $1 \otimes 1$ of two copies of the 3-dimensional (spin-1) irrep of $SU(2)$. Now $1 \otimes 1 = 2 \oplus 1 \oplus 0$. The spin-2 representation can thus accommodate the graviton. We expect the spin-2 irrep to accommodate the emergent, spin-2 graviton, with the spin-1 being the gravitational analogue of the electroweak $W^{\pm}$ and $Z^0$. The spin-0 irrep might be the standard-model Higgs.

Moreover, Fermi's phenomenological theory of weak interactions has a Lagrangian that carries the Fermi constant $G_F$ multiplying the product of two currents; the dimension of $G_F$ is $[{\rm energy}]^{-2}$. On the other hand, general relativity has a Lagrangian that carries Newton's constant $G_N$, the coupling constant being actually the inverse $1/G_N$. Incidentally, the dimension of $G_N$ is again $[{\rm energy}]^{-2}$. However, $G_N$ appears downstairs within its Lagrangian, as opposed to $G_F$, which appears upstairs.

That both $G_F$ and $G_N$ are dimensionful makes the corresponding theories nonrenormalisable. Since the two are effective theories (low-energy limits of more fundamental theories), nonrenormalisability is not an issue.
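The dimension count behind the decomposition $1 \otimes 1 = 2 \oplus 1 \oplus 0$ can be checked in a couple of lines; a trivial sketch:

```python
# Sketch: dimension check for the SU(2) decomposition 1 (x) 1 = 2 (+) 1 (+) 0
# quoted above (labels are spins j; a spin-j irrep has dimension 2j + 1).
def dim(j):
    return 2 * j + 1

assert dim(1) * dim(1) == dim(2) + dim(1) + dim(0)   # 3 x 3 = 5 + 3 + 1
# spin 2: candidate emergent graviton; spin 1: analogue of W/Z; spin 0: Higgs-like.
```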
All these hints make one suspect that gravity and the weak force could share a common origin, namely the group $SU(2)_L \otimes SU(2)_R$ within the octonionic theory. That the product of the two coupling constants $G_F$ and $1/G_N$ is dimensionless suggests the intriguing possibility that Fermi's theory and general relativity might be each other's dual under a $Z_2$ duality transformation exchanging the weak and the strong coupling regimes. This duality is strongly reminiscent of analogous dualities put forward in the literature [12,13].

Altogether, this allows one to think of $SU(2)_L$ as the dual of the theory governed by $SU(2)_R$. Gravity would then appear as the weak dual of the Fermi theory, the latter being the strong counterpart. Mention should also be made of the several attempts that have been made in the past at gravi-weak unification [14].

Further evidence for a possible connection between the $SU(2)_R$ gauge symmetry and gravity comes from the work of Ashtekar [15], of Krasnov [16], and of Woit [17,18]. There is also the attractive fact that $SU(2)_R \times U(1)_{Y_{DEM}}$ (i.e. darkelectro-grav) is a renormalisable gauge theory, just as the electroweak theory $SU(2)_L \times U(1)_Y$ is.

The cosmological setting for our proposal of dark electromagnetism is as follows [19]. Subsequent to the big-bang creation event, the universe undergoes an inflation-like expansion. The expansion begins with a Planck-scale acceleration $\sim 10^{53}\ {\rm cm\,s^{-2}}$, and the acceleration falls inversely with the expanding scale factor. One input taken from observations is that the universe has $N \sim 10^{80}$ particles, and hence a total mass of about $10^{55}$ g. The inflating epoch undergoes a phase transition when the decreasing acceleration equals the surface gravity of a black hole with the same mass as the mass of the observed universe. This acceleration happens to be of the same order as the critical MOND acceleration $\sim 10^{-8}\ {\rm cm\,s^{-2}}$, as also the acceleration of the current universe. Hence there is an inflation of the scale factor by 61 orders of magnitude before the inflation-like phase ends. (Incidentally, this inflation by 61 orders of magnitude brings down the cosmological constant, which has dimensions of inverse squared length, by 122 orders of magnitude, to the same order as its currently observed value.) This phase transition is also a quantum-to-classical transition, and because black hole surface gravity is now higher than the inflationary acceleration, classical inhomogeneous structures can begin to form, and classical spacetime obeying the laws of general relativity emerges. This transition is also the electroweak symmetry breaking and the darkelectro-grav symmetry breaking. Near compact objects, GR as emergent from the broken $SU(2)_R$ symmetry dominates; whereas far from compact objects (once the induced acceleration falls below the critical MOND acceleration), the unbroken symmetry $U(1)_{DEM}$ of dark electromagnetism dominates. This latter is the deep MOND regime. Thus, in the presence of compact objects, the deSitter horizon does not immediately yield to GR; rather, the MOND zone mediates between the GR zone and the horizon. It is as if there is a phase transition between the GR zone and the MOND zone (similar to Verlinde's ideas [20]). This might be explicable via a generalisation of 'GR as thermodynamics' to '(GR + MOND) as thermodynamics' of an unbroken symmetry phase transforming to a broken symmetry phase. The GR-dominated phase exhibits the broken symmetry and is stiff; the MOND phase is that of unbroken symmetry and is elastic.
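The scale arithmetic in the preceding paragraph is simple to verify; a sketch with the inputs as quoted above:

```python
# Sketch: acceleration falls inversely with the scale factor, so 61 orders of
# inflation take the Planck-scale acceleration down to the MOND/cosmic scale;
# the cosmological constant (~ 1/length^2) falls by twice as many orders.
a_planck = 1e53                      # cm/s^2, initial acceleration
orders = 61
a_final = a_planck * 10**(-orders)
print(f"{a_final:.0e} cm/s^2")       # ~ 1e-8 cm/s^2, the critical MOND scale
lambda_suppression = 10**(-2 * orders)   # 122 orders of magnitude
```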
The deep MOND region carries a memory of the unbroken inflation-like phase, and also of the currently accelerating universe.

We note that grand unification (GUT) models based on $E_6$ symmetry have been considered by several researchers before [21-23], and the significance of $E_6$ has been noted repeatedly (it is the only exceptional Lie group which has complex representations). Our proposal, the octonionic theory, is not a GUT. We have an $E_6 \times E_6$ unification of the standard-model forces with gravitation, and we predict two new forces, $SU(3)_{grav}$ and $U(1)_{DEM}$. The inflation-like expansion resets the scale of quantum gravity from the Planck scale to the scale of electroweak symmetry breaking, i.e. $\sim 1$ TeV. This is also the scale of the breaking of the darkelectro-grav symmetry $SU(2)_R \times U(1)_{Y_{DEM}}$, when spacetime and gravitation emerge from the pre-quantum, pre-spacetime theory. Relativistic MOND $U(1)_{DEM}$ also emerges at this epoch.

The term dark electromagnetism / dark radiation / dark photon is sometimes used to refer to a hypothetical radiation which mediates interactions between dark matter particles. In our proposal, however, this dark radiation mediates a fifth force between ordinary baryonic matter particles (and of course between leptons as well). There is no dark matter in our theory, unless one wishes to refer to the dark photons of DEM as dark matter.

III. A BRIEF REVIEW OF MOND AND RELATIVISTIC MOND

The flattened rotation curves of galaxies are non-Keplerian [24], and it is observed that the departure of the rotation curve from Newtonian gravity sets in whenever the observed acceleration falls below the following universal value $a_M$ [3]:

$$a_M \simeq \frac{a_0}{6} \sim 10^{-10}\ {\rm m\,s^{-2}},$$

where $a_0$ is the observed cosmic acceleration. This discrepancy between Newtonian gravitation and the observed rotation curves can be explained by postulating that galaxies are surrounded by halos of dark matter. It seems difficult, though, to understand why the dark matter distribution becomes important precisely below the above-mentioned critical acceleration (instead of beyond a critical distance from the galactic centre), and why this critical acceleration should be so close to the observed cosmic acceleration. There is also the possibility that a new fundamental force (let us call it the fifth force) becomes more significant than Newtonian gravitation whenever the acceleration $a$ falls much below the critical acceleration $a_M$. Keeping this in view, Milgrom proposed in 1983 that the acceleration $a$ experienced by a test body of mass $m$ in the presence of a source mass $M$ is given by the following phenomenological law:

$$a = \sqrt{a_N\, a_M} = \frac{\sqrt{G M a_M}}{R} \qquad {\rm for}\ \ a \ll a_M,$$

where $a_N = GM/R^2$ is the Newtonian acceleration. In other words, the fifth force starts to dominate over Newtonian gravitation at sub-critical accelerations. This proposal is known as Modified Newtonian Dynamics (MOND) [3]. We do not interpret it as the breakdown of Newtonian gravitation / general relativity at low accelerations, but rather as the fifth force dominating Newtonian gravity. The test body of mass $m$ universally experiences Newtonian gravity as well as the fifth force, due to the presence of the mass $M$.
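For illustration, the sketch below contrasts the Newtonian and deep-MOND circular velocities for an illustrative baryonic mass; the deep-MOND curve is flat, with $v^4 = GMa_M$ (the Tully-Fisher scaling):

```python
import math

# Sketch: circular velocities with and without the deep-MOND law quoted above.
# Constants and the baryonic mass are illustrative inputs.
G, aM = 6.674e-11, 1.2e-10          # SI
M = 1.2e41                          # kg, ~6e10 solar masses of baryons

def v_newton(R):
    return math.sqrt(G * M / R)

def v_deep_mond(R):
    a = math.sqrt(G * M * aM) / R   # a = sqrt(a_N * a_M)
    return math.sqrt(a * R)         # v = (G M aM)^(1/4): flat, R-independent

for R_kpc in (5, 20, 80):
    R = R_kpc * 3.086e19
    print(f"{R_kpc:>3} kpc: v_N = {v_newton(R)/1e3:6.1f} km/s, "
          f"v_MOND = {v_deep_mond(R)/1e3:6.1f} km/s")
```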
The acceleration due to both forces is independent of the mass $m$ of the test particle, but the fifth force is proportional to the square root of the source mass $M$ and falls inversely with distance ($\sim \sqrt{M}/R$), as if it were the square-root of Newtonian gravitation ($\sim M/R^2$). Subsequently, we will view the MOND relation $a^2 = a_N a_M$ as a consequence of the introduction of the effective distance $R_{eff}^2 = R R_H$. This latter choice makes MOND analogous to Coulomb's law, and paves the way for relativistic MOND as a $U(1)$ symmetry sourced by square root of mass. An analogy can be made with the electroweak symmetry broken down to the weak force and electrodynamics. An electron in the presence of another electron experiences both the weak force and the much stronger Coulomb force. At energy scales approaching the electroweak scale $\sim 1$ TeV, the two forces have nearly equal strength and then get unified. At lower energies the electric force dominates the weak force, but this does not mean the weak force law breaks down at low energies; it just means the weak force is comparatively weaker. Analogously, the MOND force (i.e. the fifth force) dominates over Newtonian gravity (GR) at low accelerations, but that does not imply that GR is breaking down. In our work, MOND is to gravitation what electrodynamics is to the weak force. Electrodynamics (MOND) dominates over the weak force (GR) at low energies (accelerations). At high accelerations the fifth force and GR get unified (the darkelectro-grav symmetry $SU(2)_R \times U(1)_{Y_{DEM}}$ is restored).

The MOND phenomenology cannot derive the interpolating function between the Newtonian regime and the deep MOND regime: that can only come from the deeper theory from which MOND originates. Thus one introduces the unspecified interpolation function $\mu(x)$ relating the Newtonian acceleration $a_N$ to the MOND acceleration $g$ as

$$\mu\!\left(\frac{g}{a_M}\right) g = a_N, \qquad \mu(x)\to 1 \ \ (x\gg 1), \qquad \mu(x)\to x \ \ (x\ll 1).$$

If one does not wish to introduce MOND as a fifth force, it can be presented as modified Poissonian gravity [25], by modifying the left-hand side of the Poisson equation:

$$\nabla\cdot\left[\mu\!\left(\frac{|\nabla\phi|}{a_M}\right)\nabla\phi\right] = 4\pi G\rho.$$

This modified Poisson equation can be derived from a Lagrangian with an aquadratic kinetic term [26]. As Khoury notes: "However, as a theory of a fundamental scalar field, the non-analytic form of the kinetic term is somewhat unpalatable." For the same reason one might be skeptical about modifying the left-hand side of the Poisson equation; doing so will make it harder to relate MOND to other fundamental interactions, and to find a generalisation of GR which reduces to MOND in the non-relativistic limit, at low accelerations. We prefer to derive MOND from a Poisson equation in which the left-hand side is kept intact as the conventional Laplacian, and the right-hand side is a new source charge for a fifth force.
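For concreteness, two interpolating functions often used in the MOND literature (neither is predicted by MOND itself, as stressed above) are sketched below, together with the closed-form inversion that the 'simple' choice admits:

```python
import math

# Sketch: two commonly used interpolating functions, checking the limits
# mu(x) -> 1 for x >> 1 and mu(x) -> x for x << 1.
def mu_simple(x):
    return x / (1.0 + x)

def mu_standard(x):
    return x / math.sqrt(1.0 + x * x)

for x in (1e-3, 1.0, 1e3):
    print(f"x={x:8.3g}  simple={mu_simple(x):.4g}  standard={mu_standard(x):.4g}")

# Given a_N, the observed acceleration g solves mu(g/aM) * g = a_N; for
# mu_simple this quadratic gives g = (a_N + sqrt(a_N**2 + 4*a_N*aM)) / 2.
def g_from_aN(aN, aM):
    return 0.5 * (aN + math.sqrt(aN**2 + 4 * aN * aM))
```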
Nonetheless, as Milgrom writes [25], and we quote: "Very interestingly, its deep-MOND limit is invariant under space conformal transformations (Milgrom, 1997) [27]: Namely, beside its obvious invariance to translations and rotations, Eqn. (8) is invariant to dilatations, $\mathbf{r} \to \lambda\mathbf{r}$ for any constant $\lambda > 0$ [under which $\phi(\mathbf{r}) \to \phi(\mathbf{r}/\lambda)$], and to inversion about a sphere of any radius $a$, centered at any point $\mathbf{r}_0$, namely to $\mathbf{r} \to \mathbf{R}(\mathbf{r}) = \mathbf{r}_0 + a^2\,(\mathbf{r}-\mathbf{r}_0)/|\mathbf{r}-\mathbf{r}_0|^2$, with $\phi(\mathbf{r}) \to \phi[\mathbf{r}(\mathbf{R})]$ and $\rho(\mathbf{r}) \to J(\mathbf{r})\,\rho[\mathbf{r}(\mathbf{R})]$, where $J$ is the Jacobian of the transformation (9). This ten-parameter conformal symmetry group of Eqn. (8) is known to be the same as the isometry (geometric symmetry) group of a 4-dimensional de Sitter spacetime, with possible deep implications, perhaps pointing to another connection of MOND with cosmology (Milgrom, 2009a) [28]." This important fact about MOND is very encouraging for us, because our proposed $U(1)_{DEM}$ symmetry is indeed the left-over unbroken symmetry from the deSitter-like phase which precedes the darkelectro-grav symmetry breaking. This correspondence with deSitter provides justification for the use of the effective distance $R_{eff}^2 = R R_H$, because doing so enables the aforesaid invariance under dilatations.

MOND can also be presented as a modification of the law of inertia, instead of a modification of the law of gravitation:

$$m\,\mu\!\left(\frac{a}{a_M}\right)\mathbf{a} = \mathbf{F}.$$

In our proposal in this paper, MOND arises from a new (fifth) force obeying a modified law of inertia. Thus the law of gravitation, as well as the law of inertia, both get modified at low accelerations.

There have been several serious attempts to develop relativistic MOND, i.e. to generalise general relativity to a modified relativistic theory of gravitation from which MOND will emerge in the non-relativistic approximation, for accelerations $a \ll a_M$. These include the TeVeS theory developed by Bekenstein [29], which includes a vector field and a scalar field besides the spacetime metric. TeVeS was originally claimed to be able to explain gravitational lensing and other cosmological observations, but is seriously constrained by observations in the solar system and in binary stars [25]. Another prominent relativistic MOND has been proposed by Skordis and Zlosnik, dubbed RMOND, which is claimed to explain CMB anisotropies and the matter power spectrum [30].

Our reservation about these otherwise noteworthy relativistic generalisations is that they are expressly designed to meet the requirements of a relativistic MOND, and are not easy to motivate from first principles. The vector field and scalar field introduced in TeVeS are difficult to relate to the standard model of particle physics. The quantum field theoretic constraints on such theories are also challenging. On the other hand, the $U(1)_{DEM}$ proposed by us as RelMOND is a fallout of the $E_8 \times E_8$ unification, and was not invented to explain MOND. The use of $\sqrt{m}$ comes from consideration of the masses of the quarks and leptons of the first fermion generation. Furthermore, the $SU(2)_R \times U(1)_{Y_{DEM}}$ gauge symmetry is likely to be a renormalizable quantum field theory.

There is an extensive literature on MOND and its applications; we do not intend to review it here. The excellent Scholarpedia article by Milgrom is up to date and reviews MOND and its applications in all their aspects [25].

We make mention, though, of ongoing related research of great importance: testing the law of gravitation in GAIA DR3 wide binaries [31-33]. A large number of such binaries are known in the solar neighbourhood of the Milky Way, and these have orbital radii ranging from about 200 AU to 30000 AU. The orbital acceleration crosses the critical MOND value $a_M$ for radii around 1000 AU, transiting from the Newtonian regime (relatively low radii) to the alleged MOND regime (large radii). From around 2000 AU onwards, the measured acceleration should disagree with the Newtonian prediction, if MOND is right. The analyses by Chae [34,35] and by Hernandez [36,37] show that Newtonian gravitation is obeyed in the not-so-wide binaries, but breaks down for larger separations. Banik et al.
disagree [38]. See, however, Chae's critical response to Banik [35], and the responses of Lasenby, Boyle, and especially of Hernandez, after the recent OSMU23 lecture by Banik [39]. See also the recent rebuttal by Hernandez and Chae [40]. To our understanding, the conclusion of Chae, and of Hernandez, that Newtonian gravitation breaks down below the critical acceleration $a_M$, is correct. It is remarkable that wide binaries have the same critical acceleration scale $a_M$ as spiral galaxies do: there is no a priori reason for this to be so, unless the fifth force does indeed exist and begins to dominate gravitation below $a_M$. This anomaly in wide binaries cannot be explained by dark matter; therefore wide binaries are the likely smoking gun which will discriminate MOND from dark matter.

We also note that not all researchers agree on the presence of a universal critical acceleration scale in galaxies. In their analysis of 193 disk galaxies from the SPARC and THINGS databases, Rodrigues et al. conclude the absence of a fundamental acceleration scale in galaxies [41]. Whereas, in an accompanying paper, McGaugh et al. perform a Bayesian analysis of galaxy rotation curves from the SPARC database and find strong evidence for a characteristic acceleration scale [42]. See also the critical analysis in [43] and references therein. Our outlook is that these analyses are part of an ongoing debate, and for the purpose of our present theoretical discussion we will assume that a fundamental acceleration scale does exist on galactic scales.

Dark matter is a cornerstone assumption of contemporary cosmology, supported by evidence from galaxy rotation curves, the gravitational lensing of the Bullet Cluster, CMB fluctuations and baryon acoustic oscillations (BAO), and the formation of large-scale structure. Therefore it is important to assess how MOND fares vis-à-vis these aspects, as an alternative to dark matter. MOND does spectacularly well with galaxy rotation curves, being able to predict the rotation curve once the baryonic mass distribution of a galaxy is known from observations. In this regard it does better than the cold dark matter hypothesis, where a rotation curve has to be first known from observations, and then a CDM distribution is assumed so as to fit the curve of velocities. MOND also provides a natural explanation for the Tully-Fisher relation, which is a challenge for CDM. MOND does not adequately account for cluster velocity dispersions, where CDM certainly does better. It has, however, been suggested that MOND's shortfall on cluster scales could be accounted for by the missing baryons which are currently unaccounted for in clusters.

CDM has a definite upper hand when it comes to the formation of large-scale structure, CMB anisotropies, BAO and gravitational lensing. Here one is in need of a convincing relativistic MOND which generalises general relativity at low accelerations, and can then be convincingly applied in cosmology. The present proposal for dark electromagnetism is a step in that direction. For a detailed recent review of the status of MOND in astrophysics and cosmology, we refer the reader to Banik and Zhao [44].
A. Introduction

Roughly 95% of our Universe consists of a nonbaryonic form of matter/energy exhibiting mysterious properties. This is sufficient reason to suspect that perhaps our knowledge of gravity is incomplete. General Relativity might not be universally applicable (i.e. not in all regimes of parameter space), and spacetime might not be an irreducible, primary concept. Instead, our macroscopic notions of spacetime and gravity might emerge from an underlying microscopic description.

Verlinde [20,45] suggests that the observed dark energy and the phenomena usually attributed to dark matter have a common origin, and can both be connected to the emergent nature of spacetime instead. The key idea is the competition between bulk degrees of freedom and surface degrees of freedom in a de Sitter Universe containing matter: i) when surface degrees of freedom dominate the expression for the entanglement entropy (of spacetime plus matter), we have the standard GR regime; ii) when bulk degrees of freedom (in the entanglement entropy) take over, we enter the MOND regime.

This state of affairs corresponds to a glassy dynamics, i.e., a mechanics for the microscopic qubits of information in which two time scales are at work: i) a fast, short-range dynamics that is responsible for the area law for the entanglement entropy; ii) a slow, long-distance dynamics that exhibits slow relaxation, aging and memory effects, and is responsible for the MOND regime. This is the dark-matter phase of emergent gravity.

The transition between the two occurs precisely as one crosses the cosmological horizon of de Sitter spacetime. Verlinde interprets this as a true phase transition in the thermodynamic sense. The thermodynamic medium here would be a d-dimensional spacetime (de Sitter spacetime containing matter) exhibiting two phases: i) the GR regime, corresponding to a stiff phase of this medium; ii) the MOND regime, corresponding to an elastic phase.

In this picture, dark matter is not to be understood as being made up of some kind of particles. Rather, due to this phase transition in the fabric of spacetime itself, gravitational effects cease to be described by GR and come to exhibit MOND-like properties. GR regards spacetime as perfectly stiff; now we see that it can have elastic properties too. MOND is a consequence of the extremely small, but nonvanishing, elastic properties of spacetime. The net result is that dark matter is an apparent phenomenon, as its effects can be more economically understood in terms of the elastic properties of spacetime in this regime.

Altogether, Verlinde claims that: i) spacetime emerges from the entanglement of qubits of information; ii) their short-range entanglement (i.e., between neighbouring bits) produces an entropy scaling as in the Bekenstein-Hawking area law; iii) their long-range entanglement entropy (also called de Sitter entropy) gives rise to a volumetric law (contrary to an area law); iv) de Sitter entropy is equipartitioned between all bits; v) gravity is the force that describes the change in entanglement (i.e., in spacetime) due to matter.

B. The flattening of galaxy rotation curves

The flattening of galaxy rotation curves occurs only when the gravitational acceleration $GM/R^2$ drops below a certain acceleration scale $a_M$, i.e., whenever

$$\frac{GM}{R^2} < a_M. \qquad (11)$$

Here $a_M$ is Milgrom's acceleration scale [3,46], related to the cosmic acceleration scale $a_0$ as per $a_M = a_0/6$, and

$$a_0 = cH_0 \simeq 10^{-10}\ {\rm m\,s^{-2}}, \qquad (12)$$

with $H_0$ the Hubble constant.
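A quick numerical check of Eqs. (11)-(12), with an illustrative value of $H_0$:

```python
# Sketch: the acceleration scales of Eqs. (11)-(12) for H0 ~ 70 km/s/Mpc.
c  = 2.998e8                  # m/s
H0 = 70 * 1e3 / 3.086e22      # 1/s
a0 = c * H0                   # ~ 7e-10 m/s^2
aM = a0 / 6                   # ~ 1.1e-10 m/s^2, Milgrom's scale
print(f"a0 = {a0:.2e} m/s^2, aM = {aM:.2e} m/s^2")
```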
Denoting the observed gravitational acceleration by $g_{obs}$ and the acceleration due to baryonic matter by $g_{bar}$, Milgrom's proposal is that $g_{obs}$ is a certain function $f$ of $g_{bar}$ such that

$$g_{obs} = \begin{cases} g_{bar}, & g_{bar} \gg a_M \\ \sqrt{g_{bar}\, a_M}, & g_{bar} \ll a_M. \end{cases} \qquad (13)$$

Eq. (13) above can be regarded as an equivalent restatement of the Tully-Fisher law. Altogether we have two extreme regimes: i) when $a \gg a_M$ we have the standard Newtonian regime; ii) when $a \ll a_M$ we have the MOND regime, where Newton's second law gets modified. In the intermediate regime $a \simeq a_M$, MOND makes no assumptions regarding the function $f$.

Verlinde's analysis [45] applies to de Sitter (dS) spacetime, because dS is the space that best fits our Universe according to current data. Now $d$-dimensional dS spacetime, in static coordinates, has the metric

$$ds^2 = -\left(1 - \frac{r^2}{L^2}\right)dt^2 + \left(1 - \frac{r^2}{L^2}\right)^{-1} dr^2 + r^2\, d\Omega_{d-2}^2. \qquad (14)$$

All computations are done under the assumption of spherical symmetry. At $r = L$ there is a cosmological horizon, which carries a finite entropy and temperature. The surface acceleration $\kappa$ is given in terms of the Hubble parameter $H_0$ and the Hubble scale $L$ as

$$\kappa = cH_0 = \frac{c^2}{L}. \qquad (15)$$

Then: i) at scales much smaller than the Hubble radius $L$, gravity is well described by General Relativity (GR), because the entanglement entropy follows the Bekenstein-Hawking area law; ii) at large distances GR breaks down and MOND sets in. This corresponds to the de Sitter entropy (which follows a volumetric law) taking over.

Equivalently: i) gravity at accelerations greater than $a_M$ obeys GR. Spacetime in this regime, although dynamical, is regarded as stiff, meaning nonelastic; ii) this is opposed to gravity at accelerations below the scale $a_M$: in this MOND regime, gravity is modelled in ref. [20] as being due to the elastic properties of spacetime.

In GR, the definition of mass can be problematic. Strictly speaking, the ADM mass can only be defined at spatial infinity. However, dS spacetime has a cosmological horizon and no spatial infinity. In dS spacetime, an approximate analogue of the ADM mass can be defined under some assumptions. It turns out to be given by

$$M = \frac{1}{4\pi G}\oint_{S_\infty} \nabla_i\phi\; dA^i, \qquad (16)$$

where $dA^i = n^i\, dA$, $n^i$ is the outward normal vector to the surface $S_\infty$, and the latter is a large enough spherical surface enclosing the mass $M$ placed around the origin. This surface $S_\infty$ must be far away from the origin, so that the field produced by $M$ can be approximately spherically symmetric; at the same time, $S_\infty$ may not be too close to the horizon.

C. Entropy as a criterion for a phase transition

1. Entropy increase when a bit traverses a horizon

The addition or subtraction of $n$ bits (entering the horizon or leaving it) causes an increase or decrease $\Delta S$ in the entropy of the horizon. From Verlinde's first paper [45] we have the following result for the entropy increase of a horizon, as the latter is traversed by $n$ bits of information:

$$\Delta S = n\, k_B\, \frac{\Delta\phi}{2c^2}. \qquad (17)$$

Here $k_B$ is Boltzmann's constant (hereafter $k_B = 1$), and $\Delta\phi$ is the difference in Newtonian potential between the states before the $n$ bits traverse the horizon and after traversing it. Thus the Newtonian potential $\phi$ keeps track of the depletion of horizon entropy per bit of information traversing it.
2. Entropy of empty dS space

De Sitter spacetime has a certain microscopic structure, the precise form of which is unknown (and fortunately also irrelevant for our purposes). In consequence we can assign dS spacetime an entropy. For the moment we regard dS spacetime as being empty, or devoid of matter. The expansion of empty dS spacetime is driven by the dark energy. Verlinde computes the entropy of empty dS spacetime to be [20]

$$S_{DE}(r) = \frac{r}{L}\,\frac{A(r)}{4G\hbar}. \qquad (18)$$

The subindex DE stands for dark energy. The above expresses the entropy contained within a sphere of radius $r$ and surface area $A(r)$. We draw attention to the volume dependence of the right-hand side of (18), because of the product $rA(r)$. Happily, when evaluated at $r = L$, Eq. (18) yields back the area-dependent Bekenstein-Hawking entropy. However, $S_{DE}$ scales with the volume for $r < L$. The entropy $S_{DE}$ is carried by excitations of the qubits making up empty dS space, which lift the negative ground-state energy to the positive value associated with the dark energy. In other words, dS entropy corresponds to the dark energy that drives the expansion of the Universe.

3. Entropy reduction of dS space due to the addition of matter

Our actual Universe is of course not empty. Applying the Bekenstein upper bound [47], Verlinde shows that the addition of a mass $M$ causes the entropy of dS space to decrease by the amount

$$S_M(r) = -\frac{2\pi M r}{\hbar}, \qquad (19)$$

because the horizon size is being reduced. The entanglement between the two sides of the horizon diminishes by the addition of this mass, hence the negative entropy.

4. The missing mass problem in entropic terms

We return to Eq. (11), which we would like to re-express in entropic terms. Consider a spherical region with boundary area $A(r) = 4\pi r^2$, containing a total mass $M$. Then the gravitational phenomena attributed to dark matter occur only when the area density $\Sigma(r)$ of mass falls below a universal value determined by $a_M$ (equivalently by $a_0 = 6a_M$):

$$\Sigma(r) = \frac{M}{A(r)} < \frac{a_0}{8\pi G}. \qquad (20)$$

We have replaced condition (11), expressed in terms of accelerations, by condition (20), expressed in terms of surface density of mass. Next we recast the same condition in terms of entropies. For this, we rewrite (20) more suggestively as

$$2\pi M L < \frac{A(r)}{4G}. \qquad (21)$$

We multiply through with $r/L$ and use (15) to obtain

$$\frac{2\pi M r}{\hbar} < \frac{r}{L}\,\frac{A(r)}{4G\hbar}. \qquad (22)$$

Finally, using (18) and (19), we have

$$-S_M(r) < S_{DE}(r). \qquad (23)$$

To summarise: the gravitational phenomena commonly attributed to dark matter occur whenever the inequality (23) holds. The bulk entropy $S_{DE}$ scales with the volume, while the matter entropy $S_M$ scales linearly with $r$. The observations of galaxy rotation curves tell us that the nature of gravity changes depending on whether the matter added to dS space removes all, or just a fraction, of the entropy $S_{DE}$ of dS space. Therefore we have two regimes: i) the regime where $S_M(r) < S_{DE}(r)$, which corresponds to low surface mass density $\Sigma(r)$ and low gravitational acceleration: this is the MOND, or sub-Newtonian, or dark matter regime; ii) the regime where $S_M(r) > S_{DE}(r)$, which describes Newtonian gravity.
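Saturating condition (20) defines a crossover radius $r = \sqrt{2GM/a_0}$ beyond which the bulk entropy dominates; a sketch with an illustrative galactic mass:

```python
import math

# Sketch: the crossover radius where |S_M| = S_DE, i.e. where condition (20)
# saturates: M / (4 pi r^2) = a0 / (8 pi G)  =>  r = sqrt(2 G M / a0).
G, a0 = 6.674e-11, 7e-10       # SI
M = 1.2e41                     # kg, ~6e10 solar masses (illustrative)

r = math.sqrt(2 * G * M / a0)
print(r / 3.086e19, "kpc")     # ~ 5 kpc: roughly where rotation curves flatten
```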
Verlinde's goal is to explain why the laws of emergent gravity differ from those of General Relativity (GR) precisely when the inequality (20) (equivalently (23)) holds. His conclusions are: i) at scales much smaller than the Hubble radius, gravity is well described by GR, because the entanglement entropy is still dominated by the area law of the vacuum; this is identified as the stiff phase of spacetime; ii) at larger distances and/or longer time scales, the bulk dS entropy leads to modifications of the above laws. Precisely when the surface mass density falls below the value (20), the reaction force due to the thermal contribution takes over from the usual gravity governed by the area law. This is identified as the elastic phase of spacetime, in which MOND gravity takes over.

5. Newtonian gravity in terms of surface densities

Motivated by the previous arguments, we next rewrite the familiar laws of Newtonian gravity in terms of a surface mass density vector $\Sigma$. Given a Newtonian potential $\phi$ and the corresponding acceleration $g_i = -\nabla_i\phi$, define

$$\Sigma_i = \frac{1}{4\pi G}\,\nabla_i\phi. \qquad (24)$$

This is the usual gravitational acceleration vector $g_i$, with some convenient normalisation. The latter is so chosen that the differential expression of Gauss' law in d-dimensional dS spacetime now reads

$$\nabla_i\,\Sigma^i = \rho. \qquad (25)$$

That $\Sigma$ qualifies as a surface mass density follows from the equivalent integral expression of the Gauss law,

$$\oint_S \Sigma_i\, dA^i = M, \qquad (26)$$

where $M$ is the total mass enclosed by the surface $S$. Finally, the gravitational self-energy $U_{grav}$ of a mass distribution can also be expressed in terms of $\Sigma$:

$$U_{grav} = -2\pi G \int \Sigma^2\, dV. \qquad (27)$$

This rewriting of Newtonian gravity in terms of surface densities will facilitate its interpretation in terms of elasticity theory.

6. Elastic moduli in terms of gravitational parameters

Verlinde next proves that the ADM-like definition of mass (16) can be naturally translated into an expression for the strain tensor. Given the Newtonian potential $\phi$, the corresponding elastic displacement field $u_i$ is postulated to be

$$u_i = \frac{\phi}{a_0}\, n_i, \qquad (28)$$

where $n_i$ is the outward unit normal to a surface $S_\infty$. The latter encloses a mass determined by the strain tensor $\varepsilon_{ij}$ of the displacement field $u_i$, integrated over $S_\infty$ (Eq. (29)). Multiplying both sides of (29) by the acceleration scale $a_0$, one obtains a force, from which one identifies the stress tensor $\sigma_{ij}$ of the dark-matter elastic medium; this in turn yields the elastic moduli of the dark-matter medium in terms of gravitational parameters.

7. A derivation of the Tully-Fisher relation

Dark matter causes a gravitational pull, an acceleration $g_D$ which scales with $\sqrt{M_D}$, the square root of the dark mass $M_D$. This is opposed to baryonic matter, whose acceleration $g_B$ scales with the baryonic mass $M_B$. Verlinde finds that in d-dimensional dS spacetime one has the following analogue of (13):

$$g_D^2 = a_M\, g_B \qquad (33)$$

(up to a d-dependent coefficient). When $d = 4$, Eq. (33) is equivalent to the Tully-Fisher relation (13). Then $a_M = a_0/6$, which is the acceleration scale appearing in Milgrom's phenomenological fitting formula (13). Eq. (33) is a theoretical derivation of the phenomenological Tully-Fisher law, Eq. (13). This derivation from first principles can be seen as one of the main achievements of Verlinde's paper.

8. Apparent dark matter in terms of baryonic matter

Using the previous dictionary between elastic and gravitational quantities, Verlinde derives an expression for the density of apparent dark matter as a function of baryonic matter, namely $\Sigma_D$ as a function of the baryonic Newtonian potential $\phi_B$ (Eq. (34)). In the spherically symmetric case, and when $d = 4$, the latter can be integrated within a sphere of radius $R$ to yield

$$\int_0^R \frac{G\, M_D^2(r')}{r'^2}\, dr' = \frac{a_0 R}{6}\, M_B(R), \qquad (35)$$

where

$$M_B(R) = \int_0^R 4\pi r'^2\, \rho_B(r')\, dr' \qquad (36)$$

is the total mass inside the radius $R$.
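A useful consequence of Eq. (35): differentiating it for a baryonic mass that is constant beyond the visible disk gives an apparent dark mass growing linearly with $R$ and scaling as $\sqrt{M_B}$. A sketch with illustrative numbers:

```python
import math

# Sketch: apparent dark matter from Eq. (35).  For M_B constant beyond the
# disk, differentiating (35) gives G M_D(R)^2 / R^2 = (a0 / 6) M_B, i.e.
#   M_D(R) = sqrt(a0 * M_B / (6 G)) * R.
G, a0 = 6.674e-11, 7e-10
M_B   = 1.2e41                 # kg, illustrative baryonic mass
kpc   = 3.086e19

for R_kpc in (10, 50, 100):
    M_D = math.sqrt(a0 * M_B / (6 * G)) * R_kpc * kpc
    print(f"R = {R_kpc:>3} kpc: M_D/M_B = {M_D / M_B:.2f}")
```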
Eqs. (34) and (35) describe the amount of apparent dark matter in terms of the amount of baryonic matter; as such, they allow a direct comparison with observations. In ref. [48] it is claimed that the agreement is good.

V. DARK ELECTROMAGNETISM AS THE ORIGIN OF RELATIVISTIC MOND

We demonstrate that MOND can be written as a Coulomb-type law analogous to Maxwell's electrodynamics, by using an effective distance. Energy conservation, along with a modified inertia law, can then be used to show that, written as Coulomb's law, MOND mimics cold dark matter, including on cosmological scales. Furthermore, in the deep MOND regime, this formulation is the non-relativistic limit of the $U(1)_{DEM}$ gauge theory.

We have in the deep MOND regime that the acceleration $a$ of a test particle in the field of a mass $M$ is given by

$$a = \frac{\sqrt{G M a_M}}{R},$$

where $a_M$ is Milgrom's acceleration constant; $L_M = c^2/a_M$ is the MOND radius. We will assume that the MOND force $F$ on the test particle of mass $m$ is to be obtained by multiplying the acceleration by $\sqrt{m\, m_{Pl}}$. We write the force in terms of the dimensionless masses $\tilde m = m/m_{Pl}$ and $\tilde M = M/m_{Pl}$, so as to make it look more and more like electrodynamics. Assuming that we live in a deSitter universe, we multiply and divide by the Hubble radius $R_H = cH_0^{-1}$, which is also the deSitter horizon, and we introduce the effective distance $R_{eff}$ given by $R_{eff}^2 \equiv R R_H$. Now the force looks like Coulomb's law, in terms of the effective distance $R_{eff}$. If a spatial point $x$ is at a distance $|x|$ from the observer, it has to be stretched by a factor $R_H$. We can discuss the covariance of this procedure, but in a Robertson-Walker universe with cosmic time, this procedure seems well-defined. We assume that the Milgrom constant $a_M$ is $\eta$ times the cosmic acceleration $a_0 = cH_0$, and also that $a_0 = \beta\, a_{Pl}\,(L_P/R_H)$ is the scaling down of the Planck acceleration due to the deSitter expansion. Thus $L_M = c^2/a_M = c^2/\eta a_0 = c^2 R_H/\eta\beta\, a_{Pl} L_P$. We can hence write the force as

$$F = A\,\frac{\hbar c\,\sqrt{\tilde m\, \tilde M}}{R_{eff}^2}, \qquad (40)$$

where the dimensionless coupling $A$, built from $\eta$ and $\beta$ (Eqn. (41)), carries a factor of 3/2 which is deliberately introduced so as to get consistency with Verlinde's result, and to consistently derive the famous factor of 1/6 relating Milgrom's constant to the cosmic acceleration. We will take (40) as the defining force law of the $U(1)_{DEM}$ interaction, with $A$ as defined in (41), with the factor of 3/2 included. MOND is to be derived from this force law, even though initially we started from MOND so as to motivate this Coulomb-like force law.

Below we consider generalising this to a fully relativistic theory for the square-root mass current. The theory can be expected to be derivable from an action principle, just like Maxwell electrodynamics. For now, let us continue with the spherically symmetric Coulomb case.

This force law has an interesting parallel with, and an important difference from, Maxwell's electrodynamics. We can write Coulomb's law as $F = \hbar c\,(e^2/\hbar c)/R^2$. The charge is expressed in dimensionless units here, so a multiplication by $\hbar c$ appears, just as in the gravity case above. However, the gravitational coupling is scaled by an epoch-dependent factor, via the Hubble radius (with the understanding that $R_H = c^2/a_0$, and epoch dependence, if any, would come from the in-principle-allowed time variation of the cosmic acceleration). And gravity uses the effective distance, which is like a scaling of the actual distance.
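That the Coulomb-like form in $R_{eff}$ is the same $1/R$ law as deep MOND can be checked numerically. In the sketch below, the effective coupling $G_{eff} = \sqrt{G a_M}\, R_H$ is our shorthand for the purposes of the check, not a quantity defined in the text:

```python
import math

# Sketch: with R_eff^2 = R * R_H and G_eff = sqrt(G * aM) * R_H, the
# Coulomb-like form G_eff * sqrt(M) / R_eff^2 equals the deep-MOND form
# sqrt(G * M * aM) / R for every R.
G, c, H0 = 6.674e-11, 2.998e8, 2.27e-18
R_H = c / H0
aM  = c * H0 / 6
M   = 1.2e41                                # kg, illustrative

G_eff = math.sqrt(G * aM) * R_H
for R in (1e20, 1e21):
    lhs = G_eff * math.sqrt(M) / (R * R_H)  # Coulomb form in R_eff
    rhs = math.sqrt(G * M * aM) / R         # deep-MOND form
    assert abs(lhs - rhs) / rhs < 1e-12
print("Coulomb-in-R_eff form == deep-MOND form")
```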
The introduction of a characteristic acceleration $a_M$ related to the Hubble constant implies variability with cosmic time, suggesting that galaxies at different redshifts would exhibit distinct rotation curves. This aspect has been discussed carefully, for instance, in the MOND review by Milgrom [25] (see the subsection on 'The significance of the MOND acceleration constant' therein, as well as the reference to [50], where high-redshift rotation curves are discussed in the context of MOND). Milgrom concludes that there are strong observational constraints on the variation of $a_M$ with cosmic time, and a value of $4a_M$ at $z \sim 2$ is essentially ruled out, hence excluding a dependence such as $a_M \propto (1+z)^{3/2}$.

The force law can be derived from a potential $\phi$ via $F = d\phi/dR_{eff}$, so that

$$\phi = -\,A\,\frac{\hbar c\,\sqrt{\tilde m\, \tilde M}}{R_{eff}}.$$

We would now like to write down the energy conservation equation in the deep MOND regime, given this potential, and from that equation derive Verlinde's central equation (7.40) of his paper [20]. The energy conservation equation is obtained by starting from the equation of motion for the test mass $m$ at $R$, having velocity $v = dR/dt$. The left-hand side of this equation is a modified inertia law, and is in fact such that the MOND acceleration is independent of the square-root mass of the test particle. Thus we still have the equivalence principle, but this time arising from a cancellation of square-root mass, once the dark charge is identified with the inertial square-root mass.

Multiplying both sides by $v$, and noting that $dR_{eff}/dR = R_H/2R_{eff}$, we can write the equation as a total time derivative. If we make the crucial assumption that the time-dependence of $2R_{eff}/R_H$ can be ignored, this equation can be integrated to give an expression for a conserved energy, after substituting the form of the potential. As is done in the Newtonian derivation of the Friedmann equation (converting the force law into energy conservation), we equate the right-hand-side term to the source term of the Einstein equations, as if sourced by an apparent dark matter distribution of constant, uniform density. Squaring both sides gives Eqn. (45), which is consistent with Verlinde's Eqn. (7.40) in [20] if we assume $\eta = 1/6$. From here, following Verlinde, the MOND law can be easily derived.

It seems interesting that we get the same result for apparent dark matter as Verlinde does. This can be considered support for the proposed $U(1)_{DEM}$ symmetry. Furthermore, the introduction of the effective distance can be interpreted as a stretching of the distance $R$ to the larger distance $R_{eff}$, which is reminiscent of an elastic medium. We should explore how to relate our effective distance to Verlinde's elasticity approach to MOND: the two might be related to each other. Note that the amount of apparent dark matter $M_D$ is proportional to the square-root of the actual matter $M$. We hope to derive these results from first principles in future work.
We can now also try to prove that the total amount of apparent dark matter is about five times the ordinary matter. Verlinde's equation (7.40) reads

$$\int_0^R \frac{G\, M_D^2(r')}{r'^2}\, dr' = \frac{a_0 R}{6}\, M(R).$$

Assuming a uniform density $\rho_D$, one can integrate the left-hand side, after expressing mass in terms of density, to get

$$\frac{G\, M_D^2(R)}{5R} = \frac{a_0 R}{6}\, M \;\;\Longrightarrow\;\; M_D^2 = \frac{5}{6}\,\frac{a_0 R^2 M}{G} = \frac{5}{3}\, M^2.$$

The last equality follows by considering the entire universe, and writing the mass $M$ in terms of density, assuming the critical density $\rho = 3H_0^2/8\pi G$, which gives $H_0 = 1/(2GM)$ (in units $c = 1$). For $R$ we have assumed the value of the Hubble radius $R_H = H_0^{-1}$, which is also the deSitter horizon. This is the contribution to apparent dark matter from the Coulomb part of the potential energy. If we assume that each of the three vector components also contributes in equal measure, we get that the total apparent dark matter is $4 \times \sqrt{5/3} \approx 5.16$ times the ordinary matter. This agrees well with the standard LCDM model, according to which the dark matter to ordinary matter ratio is about 5.3. The assumption that the vector components contribute in equal measure with the Coulomb part is a reasonable one, because these considerations are being applied on the scale of the entire universe, including at high redshifts. Therefore relativistic motions must be taken into account, and as a result the 'magnetic part' of the four-potential is expected to be as significant as the Coulomb part.

A very important point is that only particles with non-zero rest mass take part in dark electromagnetism, just as only particles with non-zero electric charge take part in electromagnetism. Hence there is no $U(1)_{DEM}$ interaction between photons and baryonic matter: from this point of view, the apparent dark matter derived above is a perfect mimicker of dark matter. It will produce an additional gravitation-like attraction, but it will not have any impact on the CMB anisotropy produced by baryons interacting with electromagnetic radiation on the last scattering surface. We can, as usual, study the growth of linear density perturbations by working with apparent density fluctuations in the apparent dark matter.

Furthermore, the potential energy of the dark electromagnetic field serves as a source on the right-hand side of the Einstein equations, just as cold dark matter does. Therefore, insofar as causing gravitational lensing is concerned, the DEM field mimics cold dark matter.

The non-relativistic limit of dark electromagnetism (the dark equivalent of Coulomb's law) proposed above is the limit of relativistic dark electromagnetism, patterned after Maxwell electrodynamics.

We propose the following action principle for dark electromagnetism and general relativity. It is patterned entirely after the action for Maxwell's electrodynamics coupled to sources in a curved spacetime, with the electromagnetic field replaced by the DEM field:

$$S = \int d^4x\,\sqrt{-g}\;\frac{c^4 R}{16\pi G} \;+\; S_{matter} \;-\; \frac{1}{4}\int d^4x\,\sqrt{-g}\; F_{ij}F^{ij} \;-\; A\int d^4x\,\sqrt{-g}\; D_i\, J^i. \qquad (50)$$

The last term couples the dark electromagnetic potential $D_i$ to the current density $J^i$ of square-root mass, obtained by multiplying the latter by the four-velocity. The coupling constant $A$ was defined earlier in Eqn. (41), wherein $R_H$ is to be understood as $R_H = c^2/a_0$. The source for gravity is the energy-momentum tensor of mass, together with the energy-momentum tensor of the dark field. The dark current of a point particle is given by

$$J^i(x) = \frac{\sqrt{m}\, c\, u^i}{\sqrt{-g}}\;\delta^3(\mathbf{x} - \mathbf{x}(t)).$$
Here, the spatial distance $y$ is the effective distance, i.e. $|y|^2 = R_H\,|x|$, and the time $t = x^0/c$ is the cosmic time used in the Robertson-Walker metric and the Friedmann equations. The dark potential is also a function of the effective spatial distance $y$, and not of $x$. Thus, if we define $y^i = (t, \mathbf{x})$, then $F_{ij} = \partial_{y^i} D_j(y) - \partial_{y^j} D_i(y)$. The interaction term of the dark current does not contribute to the energy-momentum tensor which appears on the right-hand side of the Einstein equations, because the $\sqrt{-g}$ in the denominator of the expression for the current density cancels the $\sqrt{-g}$ in the numerator of the expression for the interaction action (the last term in the action above). This is the same as in Maxwell's electrodynamics, but in the present case of dark electromagnetism it has profound significance. Namely, the source term for GR (it being the energy-momentum tensor, proportional to the mass $m$) is completely distinct from the source term for dark electromagnetism, this being the current density of square-root mass. Two masses $m_1$ and $m_2$ interact both via GR and via DEM, and one interaction dominates over the other depending on the magnitude of the acceleration. Furthermore, the introduction of the effective distance in DEM, and the specific use of cosmic time as time, breaks Lorentz invariance. DEM as relativistic MOND picks out a specific reference frame, which we take to be the rest frame of the CMB.

The second-last term is the action for the DEM field, made from its field tensor, which also couples to gravitation. Its energy-momentum tensor will contribute as a source in the Einstein field equations. Varying the action with respect to the metric gives Einstein's field equations, sourced by dust and the DEM field; varying with respect to the DEM field gives Maxwell-like equations coupling the DEM field to the current density of square-root mass; and varying with respect to the particle position gives the geodesic equation of motion, which now also includes the effect of the DEM field as an external non-gravitational force.

More explicitly, variation of the action gives, schematically, the Einstein equations

$$G^{ij} = \frac{8\pi G}{c^4}\left(T^{ij}_{dust} + T^{ij}_{DEM}\right),$$

and Maxwell-like equations sourced by the current density of square-root mass, all written as functions of the effective spatial distance and cosmic time:

$$\frac{1}{\sqrt{-g}}\,\partial_j\!\left(\sqrt{-g}\, F^{ij}\right) = A\, J^i.$$

The solution to this equation determines the DEM field, which then enters the Einstein equations as a source, as an alternative to dark matter. This source term is the potential energy of the DEM field, and it represents the enhanced gravitational interaction amongst baryons, instead of having to invoke dark matter to provide the sought-for additional gravitational effects. The right-hand side of Eqn. (45) above is an illustration of this claim. There is also the geodesic equation, with an external force included (so that the motion becomes non-geodesic), the Maxwell-like force being proportional to square-root mass (analogous to electric charge) and a function of the effective spatial distance:

$$\frac{du^i}{ds} + \Gamma^i_{jk}\, u^j u^k \;\propto\; \frac{A}{\sqrt{m}}\; F^i_{\ j}\, u^j.$$

As and when the effects of DEM are insignificant, Lorentz invariance and GR are recovered, as expected. These field equations will reduce (in the Newtonian approximation, and in the homogeneous isotropic cosmological approximation) to the analysis in Section V above.

The treatment of the exact field equations is left for future work; if that analysis can be carried out, it might even yield the sought-for interpolating function mediating between Newtonian gravitation and MOND.

Milgrom [28] writes that: "...
one may conjecture that the MOND-cosmology connection is such that local gravitational physics would take exactly the deep-MOND form in an exact de Sitter universe. This is based on the equality of the symmetry groups of dS$_4$ and of the MOND limit of the Bekenstein-Milgrom formulation [49], both groups being $SO(4,1)$. The fact that today we see locally a departure from the exact MOND-limit physics (i.e., that the interpolating functions have the form they have, and that $a_0$ is finite and serves as a transition acceleration) stems from the departure of our actual space-time from exact dS$_4$ geometry: The broken symmetry of our space-time is thus echoed in the broken symmetry of local physics." Our proposal that $U(1)_{DEM}$ is the remnant unbroken symmetry after the breaking of $SU(2)_R \times U(1)_{Y_{DEM}}$ is entirely in support of this conjecture of Milgrom.

A few further remarks about the proposed action principle in Eqn. (50) are in order. This form of the action is assumed to come into play after the electroweak symmetry breaking, around the TeV scale. That is the epoch at which classical spacetime, general relativity and dark electromagnetism (i.e. relativistic MOND) emerge. This emergence is expected to give the same physical results as standard big-bang cosmology, with cold dark matter exchanged for dark electromagnetism. Prior to this emergence, cosmology is governed by the unified $E_8 \times E_8$ theory, above the TeV scale. Since cosmological data are not yet available at such high energy scales, there is no contradiction between the octonionic theory and the cosmology of the very early universe.

VI. DERIVATION OF VERLINDE'S ENTROPIC CRITERION FROM DARK ELECTROMAGNETISM

Consider the epoch of left-right symmetry breaking, at which the $SU(2)_R \times U(1)_{Y_{DEM}}$ symmetry is also broken. The deSitter expansion (as in the octonionic theory) ends with the formation of compact objects. The $U(1)$ symmetry remains unbroken, as in the electroweak sector, and becomes the $U(1)_{DEM}$ symmetry which we are currently examining. $U(1)_{DEM}$, like MOND, is a scale-invariant theory and carries a memory of the deSitter phase. GR arises as a result of the symmetry breaking. Consider a black hole arising from spontaneous localisation, which in fact is how the deSitter expansion ends. As Verlinde shows [20], the formation of a localised compact object reduces the deSitter entropy. The criterion for MOND to be dominant is that this reduction in entropy (which is proportional to area) is less than the volume entropy of deSitter in the volume occupied by the compact object. This is equivalent to saying that the memory of deSitter is retained under these conditions, and that $U(1)_{DEM}$ dominates over GR.

We can try to derive Verlinde's entropy criterion by starting from our $U(1)_{DEM}$ theory. Let us begin by asking: what is the temperature of a black hole whose radius is such that its surface gravity is less than the critical MOND acceleration? Assuming that the radius $R$ of the black hole is given as in GR, so that $R = 2GM/c^2$, and that the effective radius of the black hole is $R_{eff} = \sqrt{R R_H}$, the acceleration on the surface is

$$a = \frac{GM}{R_{eff}^2} = \frac{GM}{R\, R_H} = \frac{c^2}{2R_H},$$

where we have neglected numerical coefficients. The interesting point is that this acceleration is independent of the mass of the black hole, and if we associate a temperature with the black hole, it being proportional to the surface gravity, the temperature is set by $a_0$, just as for the deSitter horizon, and is independent of the mass of the black hole. This is an example of deSitter memory being retained in the $U(1)_{DEM}$-dominated deep MOND regime.
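The mass-independence claimed above is immediate numerically; a sketch:

```python
# Sketch: the MOND-regime surface acceleration GM / (R * R_H), with
# R = 2GM/c^2, is independent of the black hole mass.
G, c, H0 = 6.674e-11, 2.998e8, 2.27e-18
R_H = c / H0

for M in (1.989e30, 1e36, 1e42):     # from one solar mass to supermassive
    R = 2 * G * M / c**2             # Schwarzschild radius
    a = G * M / (R * R_H)            # = c^2 / (2 R_H): the mass drops out
    print(f"M = {M:.1e} kg  ->  a = {a:.3e} m/s^2")
# Every line prints the same value, c*H0/2 ~ 3.4e-10 m/s^2 ~ a0/2.
```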
The entropy of the black hole is then given by

$$S \sim \frac{Mc^2}{T} \sim \frac{2\pi M c R_H}{\hbar},$$

which is consistent with Verlinde's result. In the deep MOND regime, this entropy is less than the deSitter volume entropy, directly as a consequence of our $U(1)_{DEM}$ theory.

VII. COUPLING CONSTANTS IN THE DARKELECTRO-GRAV THEORY

For the electroweak sector $SU(2)_L \times U(1)_Y$, the derived fundamental constants are the low-energy fine structure constant $\alpha_{fsc} \equiv e^2/\hbar c$ and the weak mixing angle (Weinberg angle) $\theta_W$, the latter being the solution of the trigonometric Eqn. (56) of our paper [4]. The fine structure constant is made from the parameters $\alpha$ and $L$ appearing in the Lagrangian of the theory, as displayed e.g. in Eqn. (6) of the just-mentioned paper. The constants of the electroweak sector are expressible in terms of the fine structure constant and the weak mixing angle, along with the value of the Higgs mass $m_H$, whose value is to be predicted from cosmological downscaling (caused by the deSitter-like inflationary expansion) from the original Planck-scale value of the Higgs mass. It is significant that the standard-model Higgs comes from the right sector in the left-right symmetric model (whereas the standard-model forces arise from the left sector). The second Higgs, associated with the left sector, is a newly predicted Higgs which is electrically charged.

Thus the weak isospin coupling $g$ (i.e. the $SU(2)_L$ coupling) is given by $g = e/\sin\theta_W$, and the weak hypercharge coupling $g'$ (the $U(1)_Y$ coupling) is given by $g' = e/\cos\theta_W$. The Higgs mass is estimated as follows. When the mass ratios are computed in the octonionic theory, we assume that the lightest of the masses, i.e. the electron mass, is one in Planck units. (Likewise, the charge of the down quark, it being the smallest electric charge, is set to one while determining the fine structure constant.) Hence, the Higgs mass is initially about $3 \times 10^5\, m_P \sim 10^{24}$ GeV, because the Higgs is a composite of standard-model fermions and is expected to obtain its maximum contribution from the top quark, which at about 173 GeV is about $3 \times 10^5$ times heavier than the electron. An inflation by a factor of $10^{61}$ scales this mass down by a factor $10^{61/3}$, to a value of about $10^3$ GeV. This sets the weak coupling Fermi constant $G^0_F \sim 1/v^2$ (where $v \sim 246$ GeV is the Higgs VEV) to about $10^{-6}\ {\rm GeV}^{-2}$, whereas the experimentally measured value of the Fermi constant is about $10^{-5}\ {\rm GeV}^{-2}$.

This derivation of the reduced coupling constant $G^0_F = G_F/(\hbar c)^3 \sim g^2/M_W^2 c^4$ helps us arrive at a reasonable estimate of the W boson mass from first principles. We also observe that the Fermi constant has dimensions of length squared (same as $G_N$) and can be written as

$$G_F \sim G_N \left(\frac{m_{Pl}}{m_W}\right)^2.$$

The scaling down of the W mass from its Planck-scale value is responsible for the weak force becoming so much stronger than gravitation. In this theory, $G_N$ remains unchanged with epoch.

Knowing $m_W$, the mass of the Z boson is determined, as is conventional, by the relation $m_Z = m_W/\cos\theta_W$. In this way we have a handle on the fundamental constants and parameters of the electroweak sector (Higgs mass, Fermi constant, fine structure constant, weak mixing angle, masses of the weak bosons, weak isospin, and hypercharge). For an understanding of why there are sixty-one orders of magnitude of inflation, ending at the electroweak scale, please see [19]; the same result is also supported by the idea that the electroweak symmetry is broken below a critical acceleration (see the Discussion section below).
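The electroweak relations used above are easily evaluated; a sketch with standard illustrative inputs (the measured $\sin^2\theta_W$ and $m_W$ are external inputs here, not outputs of this paper's derivation):

```python
import math

# Sketch: the relations g = e/sin(theta_W), g' = e/cos(theta_W),
# m_Z = m_W/cos(theta_W), plus the cosmological downscaling of the Higgs mass.
alpha = 1 / 137.036
sin2w = 0.231
e  = math.sqrt(4 * math.pi * alpha)       # electric charge, natural units
g  = e / math.sqrt(sin2w)                 # SU(2)_L coupling
gp = e / math.sqrt(1 - sin2w)             # U(1)_Y coupling
m_W = 80.4                                # GeV
m_Z = m_W / math.sqrt(1 - sin2w)          # ~ 91.7 GeV
print(f"g = {g:.3f}, g' = {gp:.3f}, m_Z = {m_Z:.1f} GeV")

# Downscaling of the Planck-scale Higgs mass by 10^(61/3):
m_H = 1e24 / 10**(61 / 3)                 # GeV
print(f"m_H after inflation ~ {m_H:.1e} GeV")   # ~ 1e3-1e4 GeV, i.e. TeV scale
```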
Let us now discuss the coupling constants and parameters of the right-handed darkelectro-grav (DEM) sector $SU(2)_R \times U(1)_{Y_{DEM}}$, staying as close as possible to the above discussion of the electroweak sector. The DEM symmetry is broken along with the electroweak symmetry. It can also be shown, using the electric charge operator (i.e. the number operator associated with the $U(1)_{em}$ symmetry), that $W^+$ and $W^-$ have electric charge $+1$ and $-1$ respectively, and that $Z^0$ is electrically neutral. The corresponding situation for the $SU(2)_R \times U(1)_{Y_{DEM}}$ symmetry is interesting, because here the $U(1)_{DEM}$ number operator defines square root of mass (in Planck mass units); it does not define electric charge. Consequently, $W_R^+$ and $W_R^-$ have square-root mass $+1$ and $-1$ respectively, and hence their range of interaction is limited to the Planck length. They will also have an extremely tiny electric charge, some seventeen orders of magnitude smaller than the charge of the electron (analogous to the W mass being so small on the Planck scale). Whereas the $U(1)_{Y_{DEM}}$ boson (and the dark photon it transforms into) will have zero mass and zero electric charge. $Z_R^0$ will be massless, and will have an extremely tiny electric charge (like the $W_R$ bosons). It is possible that emergent gravitation is mediated at the quantum level by the $Z_R^0$ and the dark photon. They take the place of the spin-2 graviton in this theory.

The place of the fine structure constant is taken by the mass of the electron. The Weinberg angle satisfies the same equation, and hence has the same value, as in the electroweak case. Thus the right-sector analogs of the couplings $g$ and $g'$ can be obtained. GR is the result of the breaking of the $SU(2)_R$ symmetry [i.e. the quantum-to-classical transition]. The remaining unbroken symmetry is dark electromagnetism $U(1)_{DEM}$, which is the proposed origin of relativistic MOND. The cosmological origin of MOND is briefly discussed in [19].

During the deSitter-like inflationary phase, the $E_8 \times E_8$ symmetry is operational, and includes as a subset the unbroken electroweak symmetry $SU(2)_L \times U(1)_Y$, as well as the darkelectro-grav symmetry $SU(2)_R \times U(1)_{Y_{DEM}}$. Below the critical acceleration these symmetries are broken, giving rise to the emergence of classical spacetime (precipitated by the localisation of fermions). Near compact objects the gravitationally induced acceleration (GR/Newton) is higher than the critical acceleration, and GR dominates. In the far zone, the acceleration is below the critical acceleration: this is the deep MOND regime, where the unbroken symmetry $U(1)_{DEM}$ of dark electromagnetism dominates. This zone is the buffer between the deSitter horizon and the GR zone, and it has been identified also in Verlinde's work, using his entropy considerations.

All (left-handed) particles take part in the weak force, and all electrically charged particles take part in electromagnetism. Analogously, all right-handed particles take part in the $SU(2)_R$ interaction, whereas all particles with non-zero square-root mass take part in dark electromagnetism.
VIII. DISCUSSION

We somehow tend to think that R is the genuine distance and that the effective distance R_eff is introduced by brute force. This need not be true, and the actual situation can be the other way round. Let us rename the effective distance R_eff as the true distance R_true. We do that for the following reason. In our approach the universe begins as a deSitter-like universe, and the formation of structures such as black holes (GR dominated near BHs, MOND farther out) ends the deSitter phase. Let R_true be the physical distance of some point with respect to the observer. We propose that as a result of the spontaneous localisation which causes a classical structure such as a black hole to form, the distance R_true shrinks to R in the same ratio that R_true bears to the Hubble radius (event horizon distance). Therefore,

R = R_true²/R_H, i.e. R_true = √(R_H R).

This provides some physical basis, in terms of initial conditions, for using the effective distance.

A. Critical acceleration

It has been demonstrated by earlier researchers that if an inertial observer observes a spontaneously broken symmetry, then a Rindler observer concludes that the symmetry is not broken, provided the acceleration is above a certain critical value. See e.g. [51] and [52]. Padmanabhan was one of the early researchers to show this result; see in particular Section 7 and Eqn. (7.15) of the work of Padmanabhan [53].

The 2017 paper [54] derives the critical acceleration for the electroweak case. This result appears significant for what we are doing with dark electromagnetism arising from the breaking of SU(2)_R × U(1)_YDEM in the early universe. It helps us understand that classicality and GR emerge as a result of the acceleration of the universe coming down below the critical value. This critical value happens to be the same as the current acceleration of the universe.

These results could have important implications in early universe cosmology. In particular, it could be that the electroweak symmetry breaks when the acceleration of a quasi-deSitter expanding universe falls below a critical value (assuming the inflation-like phase ends at the electroweak scale). See however the critical analysis in this regard by Unruh and Weiss [55] and by Hill [56].

In our research we are investigating whether Milgrom's MOND arises as the result of the breaking of an SU(2)_R × U(1)_YDEM symmetry, which is the right-handed counterpart of the electroweak symmetry. After spontaneous symmetry breaking the U(1) becomes U(1)_DEM, which is dark electromagnetism. We are looking into whether this fifth force is an alternative to dark matter, and the sought-for theoretical basis of MOND. The critical acceleration result could be relevant in establishing the SSB criterion.

B. Limiting values

Consider the quantity ρ₀/R_eff³ = ρ₀/(R_H R)^(3/2), which, expanded around a spatial point, has the finite limit ρ₀/R_H³ as R → R_H. This reinforces the use of the effective distance. In a non-spherical situation the effective distance between two spatial points (having coordinate difference x − x′) is defined by new coordinates y − y′ such that |y − y′|²_eff = R_H |x − x′|.
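As a small numerical check (an illustration under standard assumed values of c, H₀ and Milgrom's a₀, not part of the paper's derivation), one can verify that the present cosmic acceleration cH₀ is of the order of the MOND scale, and that the effective distance R_eff = √(R_H R) has the claimed limiting behaviour:

```python
import math

c = 2.998e8                  # m/s
H0 = 2.2e-18                 # s^-1 (~67.7 km/s/Mpc)
R_H = c / H0                 # Hubble radius, m

a_cosmic = c * H0            # ~6.6e-10 m/s^2
a0_mond = 1.2e-10            # Milgrom's constant; note a0 ~ c*H0/(2*pi)
print(f"c*H0 = {a_cosmic:.2e} m/s^2, c*H0/(2 pi) = {c*H0/(2*math.pi):.2e}")

# Effective distance of Section B: R_eff = sqrt(R_H * R), so that
# rho0 / R_eff^3 tends to the finite value rho0 / R_H^3 as R -> R_H.
def R_eff(R):
    return math.sqrt(R_H * R)

for R in (1e20, 1e24, R_H):
    print(f"R = {R:.2e} m -> R_eff = {R_eff(R):.2e} m")
```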
C. Advantages in considering dark electromagnetism

We summarise here some of the key advantages of U(1)_DEM symmetry:

1. It arises from first principles from E8 × E8 theory.

2. It is a relativistic gauge theory.

3. It is plausible that the unbroken SU(2)_R × U(1)_YDEM symmetry is renormalizable, and is the correct theory of quantum gravity.

4. U(1)_DEM is sourced by square root of mass, just as desired by MOND.

5. Only particles with non-zero rest mass take part in U(1)_DEM. A photon does not interact with matter through U(1)_DEM. Therefore the additional force created by the baryon-U(1)_DEM interaction is the perfect dark matter mimicker. It will explain CMB anisotropies for the same reason that dark matter explains CMB anisotropies. It will also mimic dark matter vis-à-vis gravitational lensing.

6. There is a natural connection with deSitter, because U(1)_DEM is the leftover unbroken symmetry from deSitter.

7. We are able to derive Verlinde's results for apparent dark matter and the entropy criterion for MOND.

8. Earlier researchers have demonstrated that the electroweak symmetry is broken below a certain critical acceleration, and restored above it. The same result can be expected to hold for its right-handed counterpart, this being our GR-DEM theory. Analogous to electroweak, we could call it darkelectro-grav.

The dark photon, the massless gauge boson that mediates quantised DEM, could be thought of as dark matter. Its detection in the laboratory may however be beyond current technology. The same could be said about dark electromagnetic waves, though they could well be the early dark radiation [57,58] that has been proposed as one possible solution to the Hubble tension. Even though the dark photon can be called the sought-for dark matter, what is noteworthy is that the associated DEM field is MONDian in character: we have a newly predicted fifth force mimicking dark matter, but not a new fermionic elementary particle as dark matter. From the point of view of fundamental physics, this difference (i.e. whether dark matter is fermionic or bosonic) is significant. It decides whether there are only four fundamental forces, or more than four.

After this paper was published, we received a private communication from Davi C. Rodrigues, co-author of Ref. [41]. They have expressed concern about our remark on their work, on p. 13 above (towards the end of Section III). We quote their comment: "We clarify here that Ref. [41] did a Bayesian analysis, and computed the 5σ uncertainty interval of a₀ for each galaxy; concluding that there is a high level of tension between these results (more than 5 or 10 σ). Ref. [42] did a quite different analysis, based on the reduced χ². Criticisms raised by [42] were promptly answered in the reply [59]. The debate about the relevance of the priors in this context, likewise the development of analysis improvements, has continued after these works." We thank Davi Rodrigues for this clarification.
18,485.6
2023-12-14T00:00:00.000
[ "Physics" ]
Cavity-control of interlayer excitons in van der Waals heterostructures

Monolayer transition metal dichalcogenides integrated in optical microcavities host exciton-polaritons as a hallmark of the strong light-matter coupling regime. Analogous concepts for hybrid light-matter systems employing spatially indirect excitons with a permanent electric dipole moment in heterobilayer crystals promise realizations of exciton-polariton gases and condensates with inherent dipolar interactions. Here, we implement cavity-control of interlayer excitons in vertical MoSe2-WSe2 heterostructures. Our experiments demonstrate the Purcell effect for heterobilayer emission in cavity-modified photonic environments, and quantify the light-matter coupling strength of interlayer excitons. The results will facilitate further developments of dipolar exciton-polariton gases and condensates in hybrid cavity – van der Waals heterostructure systems.

Semiconductor transition metal dichalcogenides (TMDs) exhibit remarkable optoelectronic and valleytronic properties in the limit of direct band-gap monolayers (MLs) [1-4]. High oscillator strength renders the materials ideal for studies of collective strong-coupling phenomena mediated among excitons and photons by optical resonators [5]. This limit of new bosonic eigenstates of half-matter and half-light quasiparticles, known as exciton-polaritons, is routinely achieved for ML TMDs in various types of cavities [6-9]. In contrast, cavity-control of van der Waals heterobilayers (HBLs) has been elusive despite their potential for fundamental studies of dipolar gases with intriguing polarization dynamics upon expansion [10] and condensation phenomena [11]. Composed of two dissimilar MLs in staggered band alignment [12,13], such van der Waals heterostructures host layer-separated electron-hole pairs in response to optical excitation [14]. The spatial separation of Coulomb-correlated electrons and holes gives rise to a permanent exciton dipole moment along the stacking axis, and extended lifetimes up to hundreds of ns [14-17]. Although long lifetimes are beneficial for providing sufficient time scales for thermalization, finite exciton dipole moments ensure mutual interactions in exciton-polariton gases and condensates.

To date, however, the integration of HBLs into optical cavities has been impeded by the involved fabrication of exfoliation-stacked HBL systems, which require careful alignment of both MLs along the crystallographic axes to reduce momentum mismatch between electrons and holes residing in dissimilar layers. As opposed to exfoliation-stacking, chemical vapor deposition (CVD) realizes inherently aligned TMD heterostructures with atomically sharp interfaces both in lateral and vertical geometries [18,19]. However, even in the presence of inherent angular alignment, excitons in van der Waals stacks of incommensurate layers with dissimilar lattice constants are subject to moiré effects [20-24] akin to twisted HBL systems [25]. As such, CVD-grown MoSe2-WS2 HBLs with a lattice mismatch of a few percent feature moiré patterns with a period of ~10 nm [22]. In MoSe2-WSe2 heterostructures, on the other hand, the much smaller lattice mismatch of 0.1% can be accommodated by atomic vacancies to yield a fully commensurate HBL system free of moiré effects [26,27] in nearly ideal R- and H-type stacking geometries [28].
In our experiments we use such a moiré-free vertical MoSe2-WSe2 HBL, synthesized by overgrowth of ML MoSe2 with ML WSe2, to demonstrate cavity-control of interlayer excitons. Our studies focus on the dynamics of HBL photoluminescence (PL) in weak coupling to a tunable optical micro-cavity. Akin to previous reports, the interlayer exciton PL from our sample exhibits rich spectral and temporal characteristics subject to competing interpretations with respect to the underlying origin and details [14-17]. We interpret our observations in the framework of interlayer excitons in various spin and valley configurations, consistent with the theoretical framework of bright and dark excitons in commensurate HBLs. After establishing the signatures of interlayer excitons in continuous-wave and time-resolved PL spectroscopy and differential reflectance (DR), we present cavity-control of the respective HBL PL in a tunable micro-cavity configuration [29]. Specifically, we demonstrate Purcell enhancement in the light-matter interaction of interlayer excitons, as evidenced by the simultaneous increase of their PL intensity and radiative decay rate, and quantify the respective light-matter coupling strength.

Results

Confocal photoluminescence spectroscopy of MoSe2-WSe2 heterobilayer. Before demonstrating cavity-control of HBL excitons, we discuss the main signatures of intralayer and interlayer optical transitions in cryogenic spectroscopy. Confocal PL and DR spectra of our MoSe2-WSe2 sample recorded at 4.2 K are shown in Fig. 1a. The DR spectrum at 1.65 and 1.75 eV is dominated by ML excitons in MoSe2 and WSe2, respectively. In PL, the MoSe2 ML contributes a pair of peaks at ~1.65 eV stemming from neutral and charged intralayer excitons [30]. Consistent with previous studies of exfoliation-stacked heterostructures [14-17], the cryogenic PL shows vanishingly small emission from intralayer WSe2 excitons and a strong low-energy peak of interlayer excitons around 1.40 eV. This HBL peak arises from photo-generated electrons and holes that relax over the conduction band (CB) and valence band (VB) offsets of ≤350 and 250 meV, respectively, to form interlayer excitons [14].

The configuration of interlayer excitons in moiré-free HBL systems depends on the actual atomic registry. In Supplementary Note 1, we provide a description of the exciton manifolds in both R- and H-type commensurate vertical HBLs for three types of distinct atomic registries, as shown in Supplementary Fig. 2. We note that the optical selection rules derived from symmetry considerations and summarized in Supplementary Fig. 3 also hold locally for incommensurate heterostructures that feature different atomic registries over extended regions of moiré superlattices [31]. The HBL sample in our experiment corresponds to AB stacking in H-type registry with a rotation angle of a multiple of 60° between the two TMD layers. The assignment follows from second-harmonic generation (SHG) mapping, with lower intensity on HBL regions as compared to the SHG signal of ML regions [32]. Moreover, the positive degree of circular polarization P_C, shown in Fig. 1b, is consistent with MoSe2-WSe2 HBL in AB stacking [27]. For this specific stacking, we obtain from our symmetry analysis two optically active zero-momentum interlayer excitons.
Bright excitons, IX_B, involve an unoccupied spin-up (spin-down) VB state in WSe2 at K (K′) and an occupied spin-up (spin-down) CB state in MoSe2 at K′ (K). The gray exciton manifold with a smaller oscillator strength due to its antiparallel spin configuration [31], IX_G, involves an unoccupied spin-up (spin-down) VB state in WSe2 at K (K′) and an occupied spin-down (spin-up) CB state in MoSe2 at K′ (K). The onset of absorption in the DR spectrum at ~1.37 eV stems from the inhomogeneously broadened gray interlayer exciton state IX_G. These bright and gray exciton states, split by the CB spin-orbit splitting of MoSe2 [33] and degenerate with their respective time-reversal counterparts, contribute through their respective radiative decay channels to the HBL peak in Fig. 1.

In addition to zero-momentum interlayer excitons with dipolar-allowed optical transitions, finite-momentum interlayer excitons result from spin-like (IX_L) combinations of unoccupied spin-up (spin-down) VB states in WSe2 at K (K′) and occupied spin-up (spin-down) CB states in MoSe2 at K (K′), as well as spin-unlike (IX_U) combinations of unoccupied spin-up (spin-down) VB states in WSe2 at K (K′) and occupied spin-down (spin-up) CB states in MoSe2 at K (K′). These two doubly degenerate states IX_U and IX_L with non-zero center-of-mass momentum are resonant with IX_B and IX_G, respectively, yet void of direct radiative decay pathways due to momentum conservation constraints.

With this notion of interlayer excitons, we interpret the HBL peak in Fig. 1 as arising from dipolar-allowed recombination of IX_B and IX_G excitons as well as from phonon-assisted emission from momentum-dark excitons IX_L. The IX_U reservoir is assumed to be empty due to relaxation of the photoexcited population into energetically lower-lying states. Bright and gray excitons contribute zero-phonon line (ZPL) emission at their bare energy. Momentum-indirect excitons, on the other hand, contribute to the PL spectrum as phonon sidebands downshifted from their bare energy IX_L by the energy of acoustic or optical phonons (and their higher-order combinations) that compensate for the momentum mismatch in the light-matter coupling and thus promote radiative decay [34,35]. The corresponding spectral decomposition of the HBL peak, provided in Supplementary Note 3, yields the energies of IX_B and IX_U as indicated by the dashed lines in Fig. 1c and an inhomogeneous broadening of 40-55 meV. Alternatively, the asymmetric HBL peak can be interpreted as being composed of IX_B and IX_G emission and red-shifted localized excitons trapped in disorder potentials.

To substantiate the interpretation of the HBL peak as a convolution of IX_B and IX_G ZPLs and IX_L phonon sidebands, we carried out time-resolved PL experiments. Previous cryogenic studies of exfoliation-stacked MoSe2-WSe2 heterostructures reported interlayer exciton lifetimes in the range of 1-100 ns with single- or multi-exponential decay dynamics [10,14-17]. The spectrally broad interlayer HBL peak of our sample exhibited similar PL decay characteristics. The best approximation to the total HBL peak was obtained with three exponential decay channels with lifetimes of ~6, 44, and 877 ns (see Supplementary Figs. 6 and 7 in Supplementary Note 2). Consistent with our understanding of the HBL emission, the contributions of the individual decay channels to the total radiated PL energy varied significantly across the HBL peak.
By performing PL decay measurements in narrow spectral windows at variable energies, shown in Fig. 2a, we found that the relative weight of the slowest decay component with 877 ns decay constant increased at the expense of the more rapid components with 6 and 44 ns lifetimes as the spectral band of the measurement window was shifted to lower energies (Fig. 2b). In the red-most wing, interlayer PL was significantly delayed (note the prolonged rise-time of the PL traces in Fig. 2a recorded in the red wing) and dominated by the longest decay constant. The cross-over from short to long PL lifetimes upon progressive red-shift provides support for our interpretation of the HBL peak. Our model predicts a decrease of the PL contribution from the momentum-bright exciton IX_B upon increasing red-shift from its ZPL, and this trend is consistently supported by the data in the upper panel of Fig. 2b. In this framework, the shortest decay channel is attributed to bright excitons IX_B (data in the upper panel of Fig. 2b), the intermediate timescale to gray excitons IX_G (central panel of Fig. 2b), and the long lifetime to phonon-assisted decay channels at larger red-shifts (lower panel of Fig. 2b). Alternatively, one could assign the fast and intermediate decay components to IX_B and IX_G decay channels, respectively, and the long decay component to defect-localized interlayer excitons.

Purcell enhancement of MoSe2-WSe2 heterobilayer photoluminescence. In the following, we demonstrate cavity-control of the HBL peak PL dynamics. To this end, we positioned a fiber micro-mirror above the macroscopic mirror with CVD-grown MoSe2-WSe2 flakes on top. The schematic drawing of the cavity setup with independent translational degrees of freedom along all three dimensions is shown in Fig. 3a. The related details of the cavity setup are described in Supplementary Note 4 and include the transmission characteristics of the cavity as a function of variable cavity length in Supplementary Fig. 9. Displacement of the sample mirror enabled coarse-tuning of the cavity length as well as two-dimensional positioning and profiling of the sample. The respective cryogenic transmission and PL maps of the HBL flake with PL data in Figs. 1 and 2 are shown in Fig. 3b, c. The transmission map in Fig. 3b, recorded with the excitation laser at 635 nm, quantifies both absorption and scattering inside the cavity, with sizeable ML absorption in the range of several percent [36]. We positioned the cavity mode at the position in Fig. 3c where the data of Figs. 1 and 2 were recorded with confocal spectroscopy, and performed PL decay measurements as a function of the cavity length. The respective decay traces are shown in Fig. 4a for cavity lengths of 35, 17, and 6 μm. Clearly, the PL decay speeds up with decreasing cavity length. This reduction of the characteristic lifetimes with decreasing cavity length was accompanied by an increase of the total PL intensity by a factor of 2.6 (Fig. 4b) as a hallmark of cavity-induced Purcell enhancement of excitonic emission.

For a more quantitative analysis of Purcell enhancement, the PL traces recorded at different cavity lengths were modeled by a convolution of the instrument response function (IRF) and a three-exponential decay, with amplitudes and time constants of each decay channel as free fit parameters, as described in Supplementary Note 2. The corresponding model fits, shown as red solid lines in Fig. 4a, were used to extract the short, intermediate, and long decay time components for a given cavity length.
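The analysis just described amounts to a standard reconvolution fit. A minimal sketch is given below; the function names and starting values are illustrative, not the authors' actual analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_exp(t, a1, tau1, a2, tau2, a3, tau3):
    """Three-exponential decay, t in ns."""
    return (a1 * np.exp(-t / tau1) +
            a2 * np.exp(-t / tau2) +
            a3 * np.exp(-t / tau3))

def model(t, a1, tau1, a2, tau2, a3, tau3, irf):
    """Convolve the decay with a measured IRF sampled on the same grid."""
    decay = three_exp(t, a1, tau1, a2, tau2, a3, tau3)
    conv = np.convolve(decay, irf)[: t.size]
    return conv / conv.max()

# t, counts and irf would come from the TCSPC histogram; p0 uses the
# lifetimes quoted in the text (~6, 44, 877 ns) as starting values:
# popt, pcov = curve_fit(lambda t, *p: model(t, *p, irf), t, counts,
#                        p0=[0.5, 6.0, 0.3, 44.0, 0.2, 877.0])
```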
The respective set of data, shown in Fig. 4c, clearly demonstrates cavity-control of all three characteristic decay channels. The evolution of the lifetime shortening with decreasing cavity length is quantified by the ratio of the total decay rate in the cavity system, γ_tot = γ_fs + γ_c, to the bare free-space decay rate γ_fs as γ_tot/γ_fs = 1 + F_P, where F_P = γ_c/γ_fs is the Purcell enhancement factor due to the cavity-modified decay rate γ_c [37]. An estimate for the cavity-mediated Purcell enhancement can be obtained by identifying the values obtained from confocal PL dynamics with free-space lifetimes. Taking the smallest lifetime values for each decay channel from the data of Fig. 4c, this yields maximum measured Purcell factors F_P of 1.8 ± 0.3, 0.8 ± 0.1 and 0.9 ± 0.1 for the short, intermediate, and long lifetime components, respectively. The difference in the Purcell factors is consistent with the different nature of the coupling between the corresponding decay channels and the cavity field, with bright interlayer excitons IX_B exhibiting higher coupling efficiency than gray excitons IX_G and phonon-assisted decay channels of momentum-dark excitons. This finding can be understood in the framework of dipolar selection rules in AB stacking: the wavevector of circularly polarized IX_B emission is collinear with the cavity, which optimally enhances the respective decay channel. The enhancement is weaker for the decay channel of gray excitons IX_G with z-polarized in-plane emission. Momentum-dark excitons IX_L, finally, exhibit the same Purcell enhancement as IX_G, as they decay via the gray exciton channel through phonon-assisted spin-valley flipping processes.

All three decay channels responded consistently to cavity length detuning, as shown in Fig. 4c. At a cavity length of 35 μm, several cavity modes were resonant with the HBL emission peak, thus enhancing all possible emission channels simultaneously. For cavity lengths smaller than 9 μm, however, the free spectral range of the cavity exceeded the linewidth of the HBL emission peak, rendering cavity-coupling sensitive to the spectral resonance condition. Open circles in Fig. 4c show the results for off-resonant configurations, in accord with cavity-inhibited radiative decay.

[Fig. 4 caption: a PL decay traces for different cavity lengths; the solid lines are fits to the data with three exponential decay constants. Note the speed-up in the decay upon the reduction of the cavity length. b Spectra of interlayer exciton PL for the corresponding cavity lengths. c The evolution of the characteristic decay constants with the cavity length, shown by closed circles (error bars: least squares from the best fit with three exponential decay channels). The solid lines show model fits according to the theory of generalized Purcell enhancement. Open circles represent data where the cavity mode was spectrally detuned from resonance with the interlayer peak; data shown in light gray were discarded from the fit procedure due to presumable physical contact between the fiber and the mirror.]

In contrast, the on-resonance data (measured with a dense spacing of data points for ~6-8 μm cavity lengths in Fig. 4c) reflect the effect of cavity-enhancement, with anti-correlated trends for short and long decay components at the smallest cavity lengths consistent with spectrally distinct channels. At a nominal separation of ~5 μm (gray circles), physical contact between the fiber and the extended mirror was presumably reached, preventing further reduction of the cavity mode volume.
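Since γ_tot/γ_fs = 1 + F_P, the Purcell factor follows directly from the cavity-shortened and free-space lifetimes. A minimal sketch, with an illustrative cavity lifetime chosen to reproduce the reported F_P ≈ 1.8 for the short channel:

```python
def purcell_factor(tau_free_ns: float, tau_cavity_ns: float) -> float:
    """F_P = gamma_c / gamma_fs = tau_free / tau_cavity - 1."""
    return tau_free_ns / tau_cavity_ns - 1.0

# Example with the short decay channel (IX_B): a free-space lifetime of
# ~5.8 ns shortened to ~2.1 ns in the cavity would give F_P ~ 1.8, the
# value reported for the bright interlayer excitons.
print(purcell_factor(5.8, 2.07))   # ~1.8
```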
The data recorded in contact of the fiber and the macro-mirror, as well as all off-resonance data, were discarded from the following analysis of the cavity-induced Purcell enhancement in the presence of pure dephasing [38]. On resonance, the generalized Purcell factor is F_P = (4g²/γ_fs)/(κ + γ_fs + γ_d), where g is the coupling rate of the emitter to the cavity, κ is the cavity decay rate, and γ_d is the dephasing rate of the emitter. Both g and κ vary as a function of the cavity length [29,39,40]. By taking the inhomogeneous linewidth γ = 55 meV deduced from the data in Fig. 2b as an upper bound to the dephasing rate in our system (i.e. using γ_d ≤ γ), we fitted each data set of Fig. 4c according to the model for the generalized Purcell enhancement (see Supplementary Note 4 for details). The resulting best fits, shown as solid lines in Fig. 4c, were obtained with free-space lifetimes of 5.8 ± 0.4, 39 ± 2, and 760 ± 40 ns for the three sets of data in the respective panels of Fig. 4c. These asymptotic values at infinite cavity length extracted from the model fit agree well with the decay times determined in confocal PL spectroscopy (data in Fig. 2b).

With this strong confidence in the correspondence between the free-space lifetime values extracted from the model of generalized Purcell enhancement and the decay times obtained in the absence of the cavity with confocal PL spectroscopy, the model now allows us to extrapolate the maximum Purcell enhancement F_P^max that can be achieved at the peak wavelength λ of the HBL emission for a mirror separation of λ/2. The model yields F_P^max of 2.9 ± 0.2 for the short and 1.7 ± 0.1 for both the intermediate and long lifetime channels, respectively. For the same limit of the inter-mirror spacing of λ/2 and a cavity volume of ~λ³, the model also quantifies the light-matter coupling strength g as 195 ± 9, 58 ± 3, and 13 ± 0.9 μeV for IX_B, IX_G, and phonon-assisted decay of momentum-dark excitons, respectively. These values, in good quantitative agreement with the absorption contrast in Fig. 1, are quite robust against variations in the dephasing rate, with g changing by <25% for γ_d in the range of 10-70 meV. At the same time, light-matter coupling was sensitive to material and environmental characteristics, with up to 50% changes in g and about 30% variations in the free-space PL lifetimes on different positions of the same flake and different flakes.

Discussion

The values for the light-matter coupling strength g of interlayer excitons in our CVD-grown MoSe2-WSe2 HBL sample are two to three orders of magnitude smaller than the coupling rates reported for ML TMDs [6-9]. This striking difference in light-matter coupling, fully consistent with the spatially indirect nature of interlayer excitons in HBL systems, yields tight constraints on the observation of interlayer exciton-polariton phenomena in the strong-coupling regime of HBL-cavity hybrids. To ensure g > κ + γ_d for strong coupling, cavities with higher quality factors are readily available [41], yet much improved HBL crystals and environmental conditions will be required to reduce dephasing. However, in view of radiatively limited linewidths achieved for ML TMDs by encapsulation with hexagonal boron nitride [42-45], further progress towards the realization of dipolar exciton-polariton gases in cavity – van der Waals heterostructure systems seems feasible.
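Inverting the generalized Purcell relation gives an estimate of the coupling g from F_P^max. The sketch below assumes a cavity decay rate κ of order 0.4 eV for the short metallic-mirror cavity; κ is an assumption of this illustration, not a value given in the text:

```python
import math

HBAR_eVs = 6.582e-16  # eV*s

def rate_from_lifetime(tau_s: float) -> float:
    """Radiative rate gamma = hbar / tau, expressed in eV."""
    return HBAR_eVs / tau_s

def coupling_from_purcell(F_P: float, gamma_fs: float,
                          kappa: float, gamma_d: float) -> float:
    """Invert F_P = (4 g^2/gamma_fs)/(kappa + gamma_fs + gamma_d) for g."""
    return math.sqrt(F_P * gamma_fs * (kappa + gamma_fs + gamma_d) / 4.0)

gamma_fs = rate_from_lifetime(5.8e-9)   # short (IX_B) channel, tau ~ 5.8 ns
gamma_d = 55e-3                         # dephasing bounded by the 55 meV linewidth
kappa = 0.4                             # assumed low-finesse cavity linewidth (eV)
g = coupling_from_purcell(2.9, gamma_fs, kappa, gamma_d)
print(f"g ~ {g * 1e6:.0f} micro-eV")    # order of the 195 ueV quoted for IX_B
```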
Methods

Chemical vapor deposition of vertical TMD heterobilayers. First, the MoSe2 ML was grown by selenization of molybdenum trioxide (MoO3) powder. A SiO2/Si substrate along with a MoO3 powder boat was placed at the center of a chemical vapor deposition (CVD) furnace, which was heated to 750 °C in 15 min and held for 20 min. The SiO2/Si substrate was facing down in close proximity to the MoO3 powder. Selenium (Se) powder vaporized at 200 °C was used as the Se source, and a mixture of argon and hydrogen (15% hydrogen) at 50 SCCM was used as the carrier gas. The as-grown MoSe2/SiO2/Si was then transferred to a separate CVD setup for subsequent WSe2 growth, similar to the method for MoSe2. Specifically, selenization of tungsten oxide (WO3) was performed at 900 °C in the presence of 100 SCCM carrier gas. WSe2 grows on top of MoSe2 from its edges, creating MoSe2/WSe2 vertical heterostructures. No additional treatment was necessary prior to WSe2 growth due to thermal removal of possible physisorbed gas molecules on MoSe2 during the transfer in air. As-grown heterostructures were studied in spectroscopy or transferred onto a mirror using a polymer-supported wet transfer method. To this end, polymethylmethacrylate (PMMA) was spin-coated on the heterostructure and lifted off in 1 M potassium hydroxide (KOH) in water. Finally, the PMMA-supported film with MoSe2-WSe2 vertical heterostructures on the mirror was rinsed in three cycles of water to remove possible KOH residue.

Photoluminescence spectroscopy. PL experiments were performed in a lab-built cryogenic setup. The sample was mounted on piezo-stepping units (attocube systems ANPxy101 and ANPz102) for positioning with respect to a low-temperature objective (attocube systems LT-APO/NIR/0.81) or the cavity mode. The microscope was placed in a dewar with an inert helium atmosphere at a pressure of 20 mbar and immersed in liquid helium at 4.2 K. Excitation around 635-705 nm was performed with a wavelength-tunable supercontinuum laser (NKT SuperK Extreme and SuperK Varia) with repetition rates down to 2 MHz. In continuous-wave measurements, the PL was spectrally dispersed by a monochromator (Princeton Instruments Acton SP 2500) and recorded with a nitrogen-cooled silicon CCD (Princeton Instruments PyLoN). Time-resolved PL was detected with avalanche photodiodes (Excelitas SPCM-AQRH or PicoQuant τSPAD).

Scanning cavity microscopy. The cryogenic cavity was composed of a fiber micro-mirror and a macroscopic mirror with the MoSe2-WSe2 vertical HBL on top. The macro-mirror was coated with ~30 nm of silver and a spacer layer of SiO2 with thickness designed to place the HBL at a field antinode. The effective radius of curvature of the central depression in the laser-machined fiber end facet was 136 μm. The facet was coated with ~50 nm silver and a protection layer of SiO2. Three translational degrees of freedom of the sample on the mirror were accessible by cryogenic positioners (attocube systems ANPxy101 and ANPz102) to provide both lateral scans and coarse-tuning of the cavity length. Cavity fine-tuning was achieved by displacing the fiber-mirror with an additional piezo. Excitation by a supercontinuum laser (NKT SuperK Extreme and SuperK Varia) at 635 nm was provided via the optical fiber, and both transmission and PL were detected through the planar macro-mirror with the heterostructure on top. Two-dimensional scans were performed with a cavity length of ~22 μm, resulting in a mode-waist of 3.2 μm for the excitation laser and a mode-waist of 3.7 μm for the detected PL at ~880 nm.
Data availability The data that support the findings of this study are available from the corresponding author on reasonable request.
5,426.4
2019-08-16T00:00:00.000
[ "Physics" ]
The Fundamental Mechanism for the Collision/Pressure Induced Optic Effect

In this work, the fundamental mechanism of the collision and pressure induced optic effect is elucidated. Based on the concept of the collision-relaxation/pressure-release induced optic effect put forth here, a new laser technology may be developed. Furthermore, our work also makes the understanding of photon-involved chemical reactions much clearer and more rationalized.

Introduction

The interaction between light and matter has long been a central topic of scientific research. A series of important achievements were made in the field in the past, such as the electro-optic effect, the magneto-optic effect and the pressure-optic effect [1] [2] [3] [4] [5]. From the electro-optic effect, the concept of the particle-wave duality of the photon was eventually founded and accepted by the scientific world [6], whereas the pressure-optic effect revealed the relation between the refraction/reflection indexes and the structure of matter [7]. The establishment of the Maxwell equations marked a milestone in our understanding of the properties of light and founded the basis of the current theory of light [8].

By now, more and more applications of light have been developed, involving almost every aspect of human life. For instance, solar energy conversion [9] and electro-optic communication [10] have become essential tools for the daily activity of human beings. As part of the effort of the scientific world in this field, we carefully reviewed the theory of the interaction between light and matter. We found that the understanding of this interaction is far from perfect, let alone complete. For instance, the pressure-optic effect was noticed experimentally a long time ago and has been discussed extensively in the literature. Based on experimental and theoretical works, the relation between the refraction/reflection indexes and the structure of matter has been established [11]. Industry has taken over these scientific results and put them into application. However, to my knowledge, few publications give a satisfactory answer as to how defects in a material influence its light refraction/reflection indexes, or what role impurities in a material play during the interaction between light and matter. In most cases, experiment can only give collective results, not the individual behavior of defects or impurities in the interaction between light and matter; theoretically, however, we can explore the behavior of defects and impurities in matter during this interaction in detail.

In this work, we select the simplest system, a hydrogen atom, to start our discussion. We hope the results of this work can help us understand some fundamental aspects of the collision/pressure induced optic effect.

Theoretical Consideration and Discussion

For the hydrogen atom, we start with the ϕ_1s wave function, but we put a factor, α, onto the ϕ_1s wave function to mimic the behavior of a hydrogen atom experiencing a collision or under pressure, which leads to contraction of the wave function. From quantum mechanics, we get

E(α) = ⟨ϕ_1s(αr)|H|ϕ_1s(αr)⟩ = (α² − 2α) Ry,   (1)

where E is the total energy of the system; α is the wave function contracting factor; H is the Hamilton operator; ϕ_1s is the wave function of the hydrogen atom in the ground state.
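A minimal numerical sketch of Eq. (1) as reconstructed above (the closed form E(α) = (α² − 2α) Ry is the standard variational result for the α-scaled 1s trial function, assumed here to be what the contracting factor implements):

```python
RYDBERG_EV = 13.6057

def energy(alpha: float) -> float:
    """E(alpha) = (alpha^2 - 2*alpha) Ry for the alpha-contracted 1s state."""
    return (alpha**2 - 2.0 * alpha) * RYDBERG_EV

# Free-atom levels E_n = -Ry/n^2, the horizontal lines of Figure 1.
levels = {n: -RYDBERG_EV / n**2 for n in (1, 2, 3)}
print("free-atom levels (eV):", levels)

for a in (1.0, 1.5, 2.0, 3.0):
    print(f"alpha = {a}: E = {energy(a):+.2f} eV")
# alpha = 1 recovers the ground state (-13.6 eV); beyond alpha ~ 1.87 the
# energy already exceeds the free-atom 2s level (-3.4 eV).
```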
From Equation (1), we find that the energy of the hydrogen atom is a function of the pressure, or contracting factor, α. The variation of E with the pressure, or contracting factor, α is shown in Figure 1. The result in Figure 1 tells us that if the applied pressure is high enough, or the wave function contracts strongly enough in whatever way, the energy of the hydrogen atom will become very high, even higher than the energies of ϕ_2s, ϕ_3s, and higher states. For the convenience of our discussion, the energy levels corresponding to the different principal quantum numbers of the free hydrogen atom are also marked in Figure 1.

Channel 1: dissipation of energy through emitting infrared photons. From Figure 1, we know that if the starting point of the relaxation of the hydrogen atom is located between the second-lowest energy level and the ground energy level, then we can only observe infrared photons accompanying the relaxation process of the hydrogen atom.

Channel 2: dissipation of energy through emitting ultraviolet and infrared photons. If the starting point of the relaxation of the hydrogen atom is located above the second-lowest energy level, then the hydrogen atom will first experience down-conversion through the vibrational energy levels by emitting infrared photons; then the hydrogen atom will meet the electronic energy level, which is above the ground-state energy level. Now the hydrogen atom may go directly down to the different lower energy levels (if available) by emitting ultraviolet and infrared photons.

Channel 3: dissipation of energy through internal energy conversion. Even though we talk about the hydrogen atom here, we would like to mention that for a multi-atom system, such as a molecule, there exists internal energy conversion from translational energy to vibrational energy and vice versa. However, no matter in which way the internal energy conversion proceeds, the system finally still dissipates the energy by emitting photons (Figure 2).

From the discussion above, we find that this kind of wave function contraction, which corresponds to the collision/pressure induced optic effect, has four important aspects that deserve our attention.

Chemical Reaction

Chemical reactions are usually divided into two categories: thermal reactions [12] and optical reactions [13]. Based on our discussion regarding the behavior of the hydrogen atom under orbital contraction, we know that we can promote a molecule or atom to a high energy level by applying pressure onto the system; then, on releasing the pressure from the system, the energy of the system will redistribute among the molecules or atoms. One important process to stress is that the molecule or atom will dissipate the energy by emitting a photon. This emitted photon may simply leave the system or excite another molecule or atom to an excited state.

This kind of energy redistribution by emitting and absorbing photons among the reactants can make a chemical reaction occur. The special point of this kind of chemical reaction is that it needs neither heating/cooling the system to the temperature required by the reaction nor promoting the molecule or atom to the excited state by light radiation. Instead, the molecule or atom is promoted to the high energy level just by applying pressure onto the system.
Sometimes the photon can be detected as a by-product, not as the reaction starter as usual in an optical-chemical reaction. Sometimes no photon can be detected, if the photons are totally consumed in promoting molecules or atoms to high energy states. In this case, the energy transfer between the molecules or atoms still proceeds through photon emission and absorption, even though no photon is detected.

From the discussion above, we can define this kind of chemical reaction as an orbital-contraction-induced optical reaction. Since increasing the pressure of the system is the easiest way to cause the orbital contraction, we can also call this kind of reaction a pressure-induced optical reaction.

The Chemical Reaction Dynamics Study

In the literature, much research on chemical reaction dynamics has been performed based on the molecular beam technique [14]. Conventionally, the molecular beam technique studies molecular chemical dynamics through the collision of reactants and the distribution of the products. Based on the molecular beam technique, many chemical reaction mechanisms have been elucidated. However, to my knowledge, one important case has been missed: during the collision of two or more molecules/atoms, the chemical reaction may involve photon emission/absorption. Simply speaking, the conventional molecular beam technique only discusses the internal energy conversion from translational energy to vibrational energy and vice versa, whereas what we discuss here is photon emission/absorption induced by the collision or pressure. In fact, the collision of molecules/atoms and the application of pressure onto the system involve the same fundamental mechanism, that is, the contraction of the wave function of the molecule/atom. For photon emission induced by molecule/atom collision, the translational energy of the molecule/atom changes partly into photon energy and partly into rotational/vibrational energy. Searching the literature, we find no publications reporting photon emission by molecule/atom collisions, whereas for photon emission induced by pressure, most publications only discuss the pressure-induced change of the light refraction/reflection indexes, instead of the collision/pressure induced photon emission.
In reality, molecular-collision induced photon emission and pressure induced photon emission are very common phenomena around us. For example, if the energy of two hydrogen atoms in collision is high enough, the electron in a hydrogen atom may be kicked out of the atom entirely (Figure 2); if the energy of the two colliding hydrogen atoms is not high enough, the electron in a hydrogen atom will be kicked from a lower energy level to a higher energy level, and then the electron will relax back from the higher energy level to the lower energy level, emitting a photon. For pressure induced photon emission, the experiment is so easy that even a middle-school student can demonstrate it: hit a match head as hard as possible with a hammer, and photon emission will be observed by the naked eye. These phenomena of collision/pressure induced photon emission are easy to observe; therefore, there is no reason for the scientific world to ignore them.

From the discussion above, the wave function contraction due to the collision/pressure will increase the total energy of the system; then, following the relaxation of the system, a photon will be emitted. We use the contracting factor, α, to describe the wave function in contraction. From quantum mechanics, we know that the expectation distance of the electron from the nucleus can be described by ⟨r⟩, which is calculable if the wave function is available (Equation (2)).

The Expectation Value of ⟨r⟩

For the α-contracted ϕ_1s state,

⟨r⟩ = ⟨ϕ_1s(αr)|r|ϕ_1s(αr)⟩ = 3a₀/(2α),   (2)

where α is the wave function contracting factor; ϕ_1s is the wave function of the hydrogen atom in the ground state; ⟨r⟩ is the expectation distance between the electron and nucleus in the hydrogen atom, and a₀ is the Bohr radius. Therefore, ⟨r⟩ can be used to measure the distance between the atoms/molecules in collision or under pressure. The ⟨r⟩-α relation for the hydrogen atom is shown in Figure 3, and the ⟨r⟩ corresponding to each energy level is also marked there.

For example, if we detect the photon corresponding to the transition between certain energy levels of the hydrogen atom, then we know the closest distance between the two hydrogen atoms should be ≤ 2⟨r⟩. Only when this condition is met can the system emit the photon corresponding to the transition between those energy levels of the hydrogen atom. This work thus offers a way to mimic a chemical reaction by collision or under pressure and to make energy utilization more efficient.
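A short sketch of Equation (2) as reconstructed above; the closed form ⟨r⟩ = 3a₀/(2α) is the standard result for the α-scaled 1s state, assumed here to match the author's contracting factor:

```python
A0_ANGSTROM = 0.529177  # Bohr radius

def expectation_r(alpha: float) -> float:
    """<r> in Angstrom for the alpha-contracted hydrogen 1s state."""
    return 1.5 * A0_ANGSTROM / alpha

# Detecting a photon tied to a given contraction alpha implies a closest
# approach of at most 2<r> between the colliding atoms.
for alpha in (1.0, 1.87, 3.0):
    r = expectation_r(alpha)
    print(f"alpha = {alpha}: <r> = {r:.3f} A, closest distance <= {2*r:.3f} A")
```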
The Possibility for Developing New Laser Techniques

Here we would like to explore the possibility of developing a new laser technology based on the discussion above. In order to make a system lase, we first have to pump electrons to a high energy level and realize an inverted population of the energy levels; the collision-relaxation or pressure-release cycle discussed above offers a possible way to do so.

Conclusion

In this work, we take the hydrogen atom as an example to study wave function contraction and find that the contraction of the wave function leads to an increase of the total energy of the system. If we allow the system to relax freely, under certain conditions the system will emit photons. This kind of wave function contraction can be realized in a simple way, such as collisions among molecules/atoms or applying pressure onto the system. The collision-relaxation or pressure-release induced photon emission represents an important process, different from what is probed by the conventional molecular beam technique, that has been unreasonably ignored before. This collision-relaxation or pressure-release induced photon emission also offers a new way for us to extract chemical reaction information, such as the closest distance between the molecules/atoms in a reaction. Furthermore, our work points out a possibility to develop a new laser technology based on the collision-relaxation or pressure-release cycle, which could find wide application in scientific research and industry. Further work in this direction will be presented later.

Figure 1. The illustration of the wave function of the hydrogen atom in contraction/stretching.

Figure 2. The illustration of the molecules/atoms in the collision process.

Figure 3. The illustration of the dependence of ⟨r⟩ on the wave function contracting factor.
2,912
2018-04-11T00:00:00.000
[ "Physics" ]
Bridging relativistic jets from black hole scales to long-term electromagnetic radiation distances: A moving-mesh general relativistic hydrodynamics code with the HLLC Riemann solver

Relativistic jets accompany the collapse of massive stars, the merger of compact objects, or the accretion of gas in active galactic nuclei. They carry information about the central engine and generate electromagnetic radiation. No self-consistent simulations have been able to follow these jets from their birth at the black hole scale to the Newtonian dissipation phase, making the inference of central engine properties through astronomical observations undetermined. We present a general relativistic moving-mesh framework to achieve the continuity of jet simulations throughout space and time. We implement the general relativistic extension for the moving-mesh relativistic hydrodynamic code JET, and develop a tetrad formulation to utilize the Harten-Lax-van Leer Contact (HLLC) Riemann solver in the general relativistic moving-mesh code. The new framework is able to trace the radial movement of relativistic jets from central regions where strong gravity holds all the way to the distances of jet dissipation.

I. INTRODUCTION

Relativistic collimated outflows, known as jets, are associated with many astrophysical systems of vastly different scales, from stellar to galactic and even to extragalactic levels. Phenomena like microquasars, young stellar objects, gamma-ray bursts (GRBs), active galactic nuclei (AGN), and quasars demonstrate the prevalence of relativistic jets and highlight the ubiquity of the underlying physical processes that give rise to these phenomena.

A central aspect shared by these varied astrophysical systems is the phenomenon of accretion, in which matter is attracted and pulled into a dense celestial body, like a black hole or neutron star. As matter falls onto these objects, gravitational and magnetic forces play crucial roles in launching and collimating the relativistic jets. Studying relativistic jets across different scales provides astronomers with a unique opportunity to probe fundamental astrophysical processes and test our understanding of high-energy physics in extreme environments.

Commencing with the Penrose process [1,2], numerous theoretical investigations have been undertaken to explore jets and mass outflows near black holes. The Penrose process initially elucidated energy extraction from matter infalling into a rotating black hole. Subsequently, the seminal work by Blandford and Znajek (BZ) demonstrated that jet energy could be extracted from the rotational energy of large-scale magnetic fields surrounding spinning black holes. Later, Blandford and Payne (BP) highlighted that matter could also depart from the surface of the accretion disk due to magneto-centrifugal acceleration.

One of the fundamental questions in accretion disk physics is how angular momentum is transported within the disk. Initially, Shakura and Sunyaev introduced the 'α-disc' model in a groundbreaking paper. However, the source of the ad hoc viscosity in this model remains questionable. In contrast, recent years have seen widespread acceptance of the magneto-rotational instability (MRI; Balbus and Hawley) as the primary mechanism for angular momentum transport in accretion flows.
Another fundamental question in accretion disk physics is the generation of the large-scale poloidal magnetic field, as it is quite natural to assume a toroidal field configuration for accretion flows. To begin with, the orbital differential shear predominantly amplifies the toroidal magnetic field by shearing the seed poloidal magnetic field, the so-called Ω effect. It took simulators many years to achieve the necessary resolutions and finally report the self-generation of the large-scale poloidal magnetic field in black hole accretion disks due to the α-effect (which relies on the buoyancy and Coriolis forces to convert toroidal into poloidal magnetic flux) [7,8]. The general mean-field dynamo theory (see, e.g., [9-13]) has been widely used to investigate the generation of large-scale magnetic fields from small-scale turbulence.

Recent long-term general-relativistic (GR) neutrino-radiation magnetohydrodynamics (MHD) simulations of binary neutron star and black hole-neutron star mergers have shown that effective viscous processes and the α-Ω magnetic dynamo can lead to the generation of a large-scale magnetic field and post-merger mass ejection [14-16]. The analysis of the binary neutron star (BNS) merger remnant and post-merger ejecta has been investigated in detail (see, e.g., [17-19]). Still, the process of successfully launching a relativistic jet is undoubtedly complex. For a comprehensive understanding of the launching mechanism, general relativistic magnetohydrodynamic (GRMHD) simulations that integrate intricate microphysical processes are imperative.

On the other hand, relativistic outflows play a pivotal role in a multitude of astronomical phenomena. For example, it has been speculated that BNS merger remnants and relativistic ejecta are the central engines of gamma-ray bursts [20-23] and kilonovae [24-29]. Relativistic outflows or jets are instrumental in shaping the emission profiles and contribute significantly to the high-energy radiation observed. Understanding these electromagnetic observations requires tracking the propagation of relativistic jets and their interaction with the ambient medium over a long period of time. However, simulating the complete journey of relativistic jets and the related emission process is numerically challenging. Studies in the literature split their focus across various parts of the whole process. Many studies conduct MHD/GRMHD simulations to investigate the jet launching process and early propagation (see, e.g., [30-41]). Some other studies use special relativistic MHD/HD simulations to investigate the jet's interaction with the ambient medium, away from the central compact region (see, e.g., [42-53]).

In this study, we propose a formulation to achieve the continuum of jet simulations throughout space and time and potentially bridge these two research domains. The formulation is built upon the development of the moving-mesh technique [54-62], which has demonstrated its efficiency in simulating ultrarelativistic jets (see, e.g., [51,63,64]). The extension of the moving-mesh technique to general relativistic hydrodynamics has appeared only in recent years. We have seen several moving-mesh codes extended to include GR effects [60,62,65]. Most of these moving-mesh codes use the Harten-Lax-van Leer (HLL) or Harten-Lax-van Leer-Einfeldt (HLLE) Riemann solver [66,67]. However, the HLLC approximate Riemann solver [68]
resolves not only the extremal waves but also the contact discontinuity in the Riemann fan, and is useful for maintaining contact discontinuities with high precision. Its implementation in fixed-mesh GR employs a local frame transformation [69,70]. In this study, we provide the mathematical formulation for incorporating the HLLC Riemann solver into a general relativistic moving-mesh code and demonstrate its robustness in simulating fluid flows under strong gravity.

In Sec. II, we implement the general relativistic extension to the special relativistic moving-mesh hydrodynamic code JET [57] using the reference metric formulation [71-74]. In Sec. III, we illustrate the tetrad formulation for solving the HLLC Riemann problem in general relativity and the procedures to incorporate it into the moving-mesh framework. Section IV presents several code implementation techniques. In Sec. V, we conduct several simulations with fixed mesh to test the robustness of the GR extension in the code. In Sec. VI, we conduct numerical tests with the moving-mesh grid, demonstrating the code's capability to track and resolve relativistic outflows. For the first time in the literature, we successfully launch a relativistic jet from a black hole-torus system and simulate its complete propagation to the dissipation distance. Such a simulation provides additional evidence supporting the feasibility of full-time-domain jet simulations, as discussed in our earlier research [64]. Conclusions and future work are discussed in Sec. VII.

Throughout this paper, we use the Greek indices (α, β, µ, ν, ...) running from 0 to 3 to denote spacetime components, and the Latin indices (i, j, k, ...) running from 1 to 3 to denote space components. We adopt the geometric units G = c = M_⊙ = 1 throughout this paper. All length scales and timescales are expressed in units of the gravitational radius r_g = GM_⊙/c² and t_g = r_g/c, respectively, unless stated otherwise.

II. GENERAL RELATIVISTIC HYDRODYNAMICS IN A REFERENCE METRIC FORMULATION

The 2D special relativistic moving-mesh hydrodynamic code JET adopts spherical coordinates assuming axisymmetry. The cell interfaces orthogonal to the radial direction are allowed to move radially. The code is essentially Lagrangian in the radial direction, coupled laterally by transverse fluxes. This setup is particularly suitable for modeling relativistic radial outflows [55]. To minimize the modifications to the code, we derive the general relativistic hydrodynamic equations in a way that resembles their special relativistic counterparts. In the following, we lay out the implementation steps for clarity. Despite the axisymmetry of the JET code, throughout this paper we show all the derivations without imposing any symmetry, for completeness.
In the standard 3+1 decomposition (see, e.g., [75-77]), the spacetime is foliated by a family of spatial hypersurfaces Σ_t with future-pointing timelike unit normal vector denoted by n^µ, which decomposes the line element as

ds² = −α² dt² + γ_ij (dx^i + β^i dt)(dx^j + β^j dt),

where α is the lapse function, β^i is the shift vector, and γ_ij is the spatial metric induced on Σ_t. In terms of the lapse and shift, the normal vector n^µ can be expressed as

n_µ = (−α, 0, 0, 0),   n^µ = (1/α, −β^i/α).

We adopt a conformal decomposition of the spatial metric γ_ij,

γ_ij = ψ⁴ γ̄_ij,

where ψ := (γ/γ̄)^(1/12) is the conformal factor, γ̄_ij is the conformal spatial metric, and γ and γ̄ are the determinants of γ_ij and γ̄_ij respectively. Following the reference-metric formulation (as shown in [78]), we define the residual metric ϵ_ij as

ϵ_ij := γ̄_ij − γ̂_ij,

where γ̂_ij is a time-independent background reference metric. For our purpose, we specialize γ̂_ij to be the flat metric in spherical coordinates (r, θ, ϕ), γ̂_ij := diag(1, r², r² sin²θ). To make the conformal scaling unique, we set γ̄ = γ̂ := det(γ̂_ij) (see, e.g., [79]). We denote by ∇_µ, D_i, D̄_i and D̂_i the covariant derivatives of the spacetime metric g_µν, γ_ij, γ̄_ij and γ̂_ij respectively.

The equations of relativistic hydrodynamics are based on the conservation of rest mass and the conservation of energy-momentum,

∇_µ(ρu^µ) = 0,   ∇_ν T^{µν} = 0,

where ρ is the rest-mass density, u^µ is the fluid four-velocity and T^{µν} is the stress-energy tensor. Here we assume a perfect fluid for T^{µν} in the form

T^{µν} = ρh u^µ u^ν + P g^{µν},

where P is the pressure, ε is the specific internal energy and h := 1 + ε + P/ρ is the specific enthalpy. In the 3+1 decomposition, T^{µν} can be decomposed accordingly, where W := αu^t is the Lorentz factor and v^i := u^i/W + β^i/α is the fluid velocity measured by the normal observer.

We adopt the Valencia formulation in the reference metric formulation following [72,80] to rewrite the hydrodynamics equations in conservative form as

∂_t q + ∂_j f^j = s,     (9)

with state vector q built from the conserved variables, where (D, S_i, τ) := (ρW, ρhWu_i, ρhW² − P − D) are the density, momentum density and energy density variables in Valencia form, respectively. f^j and s represent the flux and source terms, respectively; their explicit forms and the detailed derivation are shown in Appendix A for the readers' interest.

One key ingredient of the reference metric method is to evolve tensorial quantities in an orthonormal basis with respect to the background metric. In this way, all tensor components are explicitly free of coordinate singularities. We follow the notation of [80] to distinguish between coordinate-basis and orthonormal-basis components. Plain Latin indices represent the tensor components in the standard coordinate basis, while Latin indices surrounded by curly braces denote the components in the background orthonormal basis. We also introduce a set of basis vectors ê_{k}^i that are orthonormal with respect to the background metric γ̂_ij,

γ̂_ij ê_{k}^i ê_{l}^j = δ_{kl}.

For the flat background metric in spherical coordinates, this leads to

ê_{r} = ∂_r,   ê_{θ} = (1/r) ∂_θ,   ê_{ϕ} = (1/(r sin θ)) ∂_ϕ.

Thus any tensor A^i_j defined in the standard coordinate basis can be decomposed into its orthonormal-basis counterpart A^{k}_{l} by contraction with ê_{k}^i and the corresponding co-basis ê^{k}_i. As an example, the residual metric can be expressed in terms of its orthonormal-basis components as ϵ_ij = ϵ_{k}{l} ê^{k}_i ê^{l}_j, while for the conserved momentum we have

(q_{S_r}, q_{S_θ}, q_{S_ϕ}) = ψ⁶ √(γ̄/γ̂) (S_{r}, r S_{θ}, r sin θ S_{ϕ}).

The complete set of general relativistic hydrodynamic equations in 3D spherical coordinates under the reference-metric formalism follows from (9), where K_ij is the extrinsic curvature, M := ψ⁶√(γ̄/γ̂) and v̄^{i} := v^{i} − β^{i}/α; the special relativistic counterpart, Eq. (A25) in Appendix A 4, is given for comparison (see also [81]). Note that in practice we compute ∂_k ϵ_{i}{j} numerically in the source terms instead of ∂_k ϵ_ij: writing ∂_k ϵ_ij = (∂_k ϵ_{l}{m}) ê^{l}_i ê^{m}_j + ϵ_{l}{m} ∂_k(ê^{l}_i ê^{m}_j), the second term is evaluated analytically.
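As an illustration of the conserved-variable construction just defined, the sketch below implements a generic Valencia prim2cons; it is not the JET code's actual routine, and an ideal-fluid enthalpy h = 1 + ε + P/ρ is assumed:

```python
import numpy as np

def prim2cons(rho, v_up, P, eps, gamma_ij):
    """Valencia conserved variables from primitives in one cell.

    rho: rest-mass density, v_up: fluid 3-velocity v^i (normal-observer frame),
    P: pressure, eps: specific internal energy, gamma_ij: spatial metric.
    """
    v_dn = gamma_ij @ v_up                  # v_i = gamma_ij v^j
    v2 = float(v_up @ v_dn)
    W = 1.0 / np.sqrt(1.0 - v2)             # Lorentz factor
    h = 1.0 + eps + P / rho                 # specific enthalpy
    D = rho * W                             # rest-mass density
    S_dn = rho * h * W**2 * v_dn            # S_i = rho h W u_i = rho h W^2 v_i
    tau = rho * h * W**2 - P - D            # energy density minus D
    return D, S_dn, tau

# Example: mildly relativistic radial flow on a flat spherical background,
# gamma_ij = diag(1, r^2, r^2 sin^2 theta) at r = 2, theta = pi/2.
gamma_ij = np.diag([1.0, 4.0, 4.0])
print(prim2cons(1.0, np.array([0.3, 0.0, 0.0]), 0.1, 0.15, gamma_ij))
```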
Note that in practice we compute ∂_k ϵ_{i}{j} numerically in the source terms instead of ∂_k ϵ_ij: writing ϵ_ij = ϵ_{l}{m} ê^{l}_i ê^{m}_j, the first term, ∂_k ϵ_{l}{m}, is differenced numerically, while the second term, involving ∂_k (ê^{l}_i ê^{m}_j), is evaluated analytically.

III. TETRAD FORMULATION AND THE HLLC RIEMANN SOLVER

To evaluate the numerical flux through cell interfaces, HLL-type (HLLE/HLLC) Riemann solvers have been designed for relativistic hydrodynamics in Minkowski spacetime [68,82]. Most GRHD/GRMHD codes in the literature use the HLLE Riemann solver in curved spacetime (see, e.g., [83][84][85][86]). The HLLC Riemann solver, which captures the contact discontinuity in the wave fan, has recently been added to GR codes [69,70,87]. We follow previous works for the implementation of the HLLC Riemann solver in general relativity [69,70,88]. The basic idea rests on the equivalence principle: physical laws in a local inertial frame of a curved spacetime have the same form as in special relativity. Once we define such an inertial frame, we can use the solution of Riemann problems in a local Minkowskian frame to construct the corresponding solution in curved spacetime. The previous section derives the general relativistic hydrodynamic equations in a reference-metric formulation; for the benefit of the coming discussion, we revert in this section to the original formulation [84], with g = det(g_µν) satisfying √−g = α√γ. The state vector F^0 and the flux vector F^i take the standard form of [84], and the source term in this formulation is denoted by S. Since the source is irrelevant to the tetrad formulation in the following discussion, we omit the explicit form of S here.

Let us consider a single computational cell of our discrete spacetime Ω, bounded by a closed three-dimensional surface ∂Ω. We take the 3-surface ∂Ω as the standard-oriented geometric object made up of two spacelike surfaces {Σ_{x^0}, Σ_{x^0+∆x^0}} plus timelike surfaces {Σ_{x^i_−}, Σ_{x^i_+}} that join the two temporal slices together, where x^i_± are the cell boundaries of Ω in the ±x^i directions. The integral form of the system (19) follows from integrating over the volume element of the cell Ω; from now on we drop the wedge symbol ∧ for simplicity. The integral form (21) can be rewritten in conservation form, where F̄^0_{x^0} is the volume integral of F^0 at x^0, given by Eq. (24), and F̄^i_± is the integrated spatial flux across the cell interfaces x^i_±, given by Eq. (25).

A. Tetrad formulation

Instead of attempting a direct resolution of the Riemann problem within the curved spacetime, our approach deliberately converts the left and right states at a given interface into a local Minkowskian frame of reference. This methodology enables the utilization of developments in the realm of special relativistic Riemann problems, as proposed by [88,89].

To begin with, we define a new tetrad basis e_(μ̂) that satisfies the following properties, as listed in [69]:

1. e_(μ̂) must be orthogonal to e_(ν̂) for all μ ≠ ν.
2. Each e_(μ̂) must be normalized to have an inner product of ±1 with itself, with e_(0̂) being timelike and e_(î) being spacelike.
3. e_(0̂) must be orthogonal to surfaces of constant x^0.
4. The projection of e_(î) onto the surfaces of constant x^0 is orthogonal to the surface of constant x^i within that submanifold.

Without loss of generality, let us only consider the conversion of the volume integral F̄^0_{x^0} in Eq. (24) and the first spatial flux integral F̄^1_+ in Eq. (25).
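The four tetrad properties can be realized numerically by a metric Gram-Schmidt procedure seeded with the normal vector. The sketch below enforces properties 1-3 by construction; the ordering of the spatial seed vectors is one particular choice and is not claimed to reproduce the exact construction of [69,70]:

import numpy as np

def build_tetrad(g):
    """Construct an orthonormal tetrad e_(mu)^nu for a metric g_{mu nu} by
    metric Gram-Schmidt, seeded with n^mu = (1/alpha, -beta^i/alpha)."""
    g = np.asarray(g, dtype=float)
    ginv = np.linalg.inv(g)
    alpha = 1.0 / np.sqrt(-ginv[0, 0])        # lapse from g^{tt} = -1/alpha^2
    beta_up = -ginv[0, 1:] * alpha**2         # shift from g^{ti} = beta^i/alpha^2
    e = np.zeros((4, 4))
    e[0] = np.concatenate(([1.0 / alpha], -beta_up / alpha))   # e_(0) = n
    seeds = np.eye(4)[1:]                      # coordinate basis vectors d/dx^i
    for a, s in enumerate(seeds, start=1):
        v = s.copy()
        for b in range(a):                     # project out earlier legs
            sgn = -1.0 if b == 0 else 1.0      # n.n = -1, spatial legs +1
            v -= sgn * (e[b] @ g @ v) * e[b]
        e[a] = v / np.sqrt(v @ g @ v)          # normalize spacelike legs to +1
    return e

# Check orthonormality, e_(mu).e_(nu) = eta_{mu nu}, on a Schwarzschild-type metric
M, r, th = 1.0, 5.0, np.pi / 3
g = np.diag([-(1 - 2 * M / r), 1 / (1 - 2 * M / r), r**2, (r * np.sin(th))**2])
E = build_tetrad(g)
assert np.allclose(E @ g @ E.T, np.diag([-1.0, 1.0, 1.0, 1.0]), atol=1e-12)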
We define the tetrad basis in spherical coordinates with x^µ = (t, r, θ, ϕ); the detailed derivation and the explicit coefficients can be found in the Appendices of [69,70]. The covariant components of the tetrad basis are given by e_(μ̂)µ = g_µν e_(μ̂)^ν. The transformations of vectors and tensors between the tetrad frame and the original Eulerian observer frame follow from contraction with the tetrad. Note that the upper and lower spatial tetrad components are the same. Therefore, we can define F_(μ̂) as the tetrad transformation of F^µ. Here, for the momentum components (F_{S_(ĵ)})_(μ̂) of F_(μ̂), one more tetrad transformation is needed owing to their tensorial nature. Since we only focus on the flux along the r direction, we write out the components F_(t̂) and F_(r̂) explicitly; the inverse transformation then recovers F^t and F^r.

In addition, we can reformulate the conservation form, Eqs. (24) and (25), in the tetrad basis. Note that the indices (t, r, θ, ϕ) and (t̂, r̂, θ̂, ϕ̂) are interchangeable with (x^0, x^1, x^2, x^3) and (x^(0̂), x^(1̂), x^(2̂), x^(3̂)), respectively. Making use of the invariance of the integral form and the tetrad transformation rule, we obtain (see also [84]) the conservation form in the tetrad basis, where ĝ := det(η_µν) = −1. This gives the volume integral of F^t (24) and the integrated spatial flux of F^r (25) in the local tetrad basis, with a nonzero interface velocity arising from a nonzero drift in the direction of interest, in agreement with [69,88].

With the tetrad-basis formulation, the procedure to obtain the numerical flux across the first spatial direction involves the following steps:

1. Obtain the values of the primitive variables (ρ, P, u^i) and the tetrad basis e_(ν̂).
2. Construct the conserved variables F_(0̂) and flux F_(1̂) for the left and right states in the tetrad frame.
3. Solve the Riemann problem in the tetrad frame with a nonzero interface velocity V^(r̂)_interface.
4. Once we have the updated solution of F_(0̂) and F_(1̂), obtain the numerical flux across the first spatial direction in the Eulerian observer frame according to Eq. (40).

B. HLLC Riemann solver in the tetrad frame

We solve the Riemann problem in the tetrad frame in its special relativistic form. We calculate the HLLC flux F_(r̂) by solving the one-dimensional conservation law [70]. Given an initial condition at a cell interface r̂_{j+1/2}, three characteristic waves and four states are established inside the Riemann fan, and the corresponding numerical flux across the interface r̂_{j+1/2} is F_(r̂), where λ̂_{L/R} is the characteristic speed of the left/right-going nonlinear wave. Explicitly, the left and right states are given componentwise by Eqs. (53a)-(53e). To reduce the number of unknowns and obtain a well-posed problem, we assume that Ŝ*_(r̂) = (τ̂* + P̂* + D̂*) λ̂* (see [68]). If one defines Ê := τ̂ + D̂ and evaluates (53b) − λ̂* × [(53a) + (53e)], one obtains an expression giving λ̂* in terms of P̂* [68]. By imposing P̂*_L = P̂*_R across the contact discontinuity, we find a quadratic equation for λ̂*. Once we obtain the speed of the contact discontinuity λ̂*, P̂* can be obtained from Eq. (54).
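For illustration, the contact speed λ̂* and pressure P̂* can also be obtained from the HLL-averaged state and flux, the closure of Mignone and Bodo (2005), which rests on the same physical assumptions (continuity of pressure and normal velocity across the contact) as the per-side derivation above; the sketch below uses that route rather than the paper's Eqs. (53)-(57):

import numpy as np

def hllc_star(lamL, lamR, E_L, m_L, E_R, m_R, FE_L, Fm_L, FE_R, Fm_R):
    """Contact speed lam* and pressure P* for the 1D relativistic HLLC solver.

    Sketch of the HLL-average closure: E = tau + D is the total energy, m the
    momentum along the interface normal, and (FE, Fm) the corresponding fluxes.
    """
    dl = lamR - lamL
    E_hll = (lamR * E_R - lamL * E_L + FE_L - FE_R) / dl
    m_hll = (lamR * m_R - lamL * m_L + Fm_L - Fm_R) / dl
    FE_hll = (lamR * FE_L - lamL * FE_R + lamL * lamR * (E_R - E_L)) / dl
    Fm_hll = (lamR * Fm_L - lamL * Fm_R + lamL * lamR * (m_R - m_L)) / dl
    a, b, c = FE_hll, -(E_hll + Fm_hll), m_hll        # a*lam^2 + b*lam + c = 0
    if abs(a) < 1e-12 * abs(b):
        lam_s = -c / b                                # degenerate (linear) case
    else:
        lam_s = (-b - np.sqrt(b * b - 4 * a * c)) / (2 * a)  # branch with |lam*| <= 1
    P_s = Fm_hll - lam_s * FE_hll
    return lam_s, P_s

# Sanity check: a static uniform state must give lam* = 0 and P* = P
print(hllc_star(-0.5, 0.5, E_L=2.0, m_L=0.0, E_R=2.0, m_R=0.0,
                FE_L=0.0, Fm_L=0.3, FE_R=0.0, Fm_R=0.3))   # -> (0.0, 0.3)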
The conserved quantities in the intermediate states are then given by the jump conditions. The left and right characteristic speeds λ̂_{L/R} follow Davis's estimate [68],

λ̂_L = min(λ̂_−(Û_L), λ̂_−(Û_R)),   (58)
λ̂_R = max(λ̂_+(Û_L), λ̂_+(Û_R)),   (59)

where v̂^2 = v̂^(î) v̂_(î) and c_s is the speed of sound. Equivalent expressions for the (θ, ϕ) directions are easily obtained. In the Eulerian observer frame, the minimum and maximum characteristic speeds λ_± are given in [84,90,91]. Making use of Eq. (36) and Eq. (60), we can derive the relation between the two. Note that Eq. (60) and Eq. (62) are derived in the special relativistic and general relativistic settings, respectively; Eq. (63) establishes their relationship consistently through the tetrad method.

In addition, we implement the HLLE Riemann solver [67,92] for comparison, adopting the same tetrad formulation. The HLLE Riemann solver is constructed by assuming an average intermediate state between the fastest and slowest waves in the tetrad frame; the Riemann fan then contains two characteristic waves and three states. The corresponding numerical flux across the interface r̂_{j+1/2} involves the intermediate state Û* and flux (F̂_(r̂))*, which can be derived from the jump condition [see Eq. (52)].

C. HLLC Riemann solver for the moving-mesh GR

For the moving mesh in the simulation domain, we naturally need to solve the Riemann problem on a moving interface with its own coordinate velocity V := dx/dt = (V^r, V^θ, V^ϕ). Let us denote the corresponding four-velocity as u^µ_gridface. In general, when we consider the 3+1 spacetime foliation Σ_t, we define a unit normal vector n^µ, and this unit normal vector corresponds by definition to the four-velocity of the Eulerian observer [75]. When we define the fluid's four-velocity as u^µ, the velocity of the fluid with respect to the Eulerian observer, v^µ = (0, v^i), satisfies u^µ = W(n^µ + v^µ), where W is the Lorentz factor of the fluid with respect to the Eulerian observer. When we move from a given hypersurface to the next following the normal direction, the change in the spatial coordinates is dx^i = −β^i dt [75], with β^µ = (0, β^i) being the shift vector. Then v^i is related to the coordinate velocity V by v^i = (V^i + β^i)/α. In our case, only the cell interfaces orthogonal to the radial direction can move, with a coordinate velocity denoted by V^r; the four-velocity of our radially moving interface follows accordingly.

In the above, we spell out the explicit definitions of the different velocities for clarity. For our moving-mesh code, the grid moves radially, and the integral of the radial flux over a short time interval dt is modified accordingly. Note that the resulting velocity relation connects to Eq. (46) and Eq. (63). From Eqs. (39) and (40), we obtain the moving-interface flux; compared with the tetrad formulation for the static mesh, the interface velocity term is modified to incorporate the effect of the moving interface into the flux integral.

In principle, the coordinate velocity of the moving interface can be set freely. At each instant, on the cell interface, the three characteristic waves and four states inside the Riemann fan depend only on the values of the primitive variables on the left and right sides of the interface. The interface velocity determines which state is selected for the numerical flux across the interface [see Eqs. (50) and (51)]. Based on this flexibility, we choose the contact-discontinuity velocity as the interface velocity. We find this choice performs well for the simulation of ultrarelativistic jets.
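A sketch of the wave-speed estimates and of the state selection for a moving interface follows; the characteristic-speed formula is the standard special relativistic result (cf. Eq. (60)), and the returned label indicates which of the four HLLC states supplies the flux, which for a moving face must then be combined as F − V·U in the local frame:

import numpy as np

def lam_pm(vx, v2, cs2):
    """Special relativistic characteristic speeds along x: vx is the velocity
    normal to the interface, v2 the squared total three-velocity, cs2 the
    squared sound speed."""
    disc = cs2 * (1.0 - v2) * (1.0 - v2 * cs2 - vx * vx * (1.0 - cs2))
    den = 1.0 - v2 * cs2
    root = np.sqrt(max(disc, 0.0))
    return ((vx * (1.0 - cs2) - root) / den,
            (vx * (1.0 - cs2) + root) / den)

def davis_bounds(vxL, v2L, cs2L, vxR, v2R, cs2R):
    """Davis-type outer wave-speed estimates, Eqs. (58)-(59)."""
    lmL, lpL = lam_pm(vxL, v2L, cs2L)
    lmR, lpR = lam_pm(vxR, v2R, cs2R)
    return min(lmL, lmR), max(lpL, lpR)

def pick_region(V_iface, lamL, lam_s, lamR):
    """Which HLLC state the moving interface samples, cf. Eqs. (50)-(51)."""
    if V_iface <= lamL:
        return "L"
    if V_iface <= lam_s:
        return "L*"
    if V_iface <= lamR:
        return "R*"
    return "R"

lamL, lamR = davis_bounds(0.2, 0.04, 1.0/3.0, -0.1, 0.01, 1.0/3.0)
print(lamL, lamR, pick_region(0.05, lamL, 0.06, lamR))   # interface sits in L*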
For the derivation of the tetrad formulation and the HLLC Riemann solver, we express every metric and fluid variable in the coordinate basis. In the implementation, we utilize those variables in the orthonormal basis instead; for example, the tetrad-basis calculation is carried out with the orthonormal components directly. In this way, the geometric factors do not appear explicitly in the tetrad-basis calculation. The derivation itself remains the same because of the invariance of the spacetime interval under coordinate transformations.

Making use of this invariance principle, we can handle the moving mesh in another way. First, boost the coordinate basis into the comoving coordinate basis of the interface. Second, boost the primitive velocities into the comoving coordinate basis. Third, making use of the invariance, calculate the corresponding metric components g_⟨µ⟩⟨ν⟩. Once we have the new lapse, shift and spatial metric in the comoving frame, we can derive the tetrad basis in the comoving coordinate basis and solve the HLLC Riemann problem accordingly. We lay out this approach for the reader's interest as well as for a more complete discussion.

IV. NUMERICAL TECHNIQUES

A. Implementation of equations

For the numerical implementation, we discretize the volume averages of Eq. (9). Using the divergence theorem, the discretized version of Eq. (9) in the cell (i, j) can be expressed in finite-volume form (since our code is 2.5D, we ignore the discretization in the ϕ direction) [85], in terms of the cell volume and volume average, and the surface area and surface average.

Note that when we perform the volume average ⟨·⟩ or surface average ⟨·⟩_i, we can strip out the geometric factors from the tensorial expressions in the coordinate basis and integrate them together with the volume factor √γ. In this way, tensorial variables in the orthonormal basis, like S_{θ}, become truly independent of the underlying geometry. For example, in spherical coordinates, the volume average for the conserved momentum q_{S_θ} is computed with the geometric factor absorbed into the volume integral. For our moving-mesh scheme, the cells in the radial direction continuously merge and divide. When we perform the above integral, variables in the orthonormal basis, like M S_{θ}, are better conserved. As an example, if we assume M S_{θ} is constant across cell_(i,j) and cell_(i+1,j), then merging these two cells gives a combined conserved momentum equal to the volume-weighted sum, where cell_(inew,j) is the combination of cell_(i,j) and cell_(i+1,j) with ∆V_{inew,j} = ∆V_{i,j} + ∆V_{i+1,j}. From the combined momentum, we can recover the variable M S_{θ} exactly.

In the code implementation, the contribution of the source term to the conserved variables inside a cell is defined as ⟨s⟩_{i,j} ∆V_{i,j}. We perform the volume integral on the singular factors such as 1/r and cot θ that appear in the source term [see Eq. (17)]. Explicitly, the integral of the 1/r factor gives (r_+^2 − r_-^2)/2, while the integral of cot θ leads to (sin θ_+ − sin θ_-). This practice turns out to reduce the numerical error of the source-term calculation near singular points. Finally, the cell volumes and surface areas are computed from these definitions.
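The merge step described above reduces to a volume-weighted average of the geometry-free conserved quantity; a minimal sketch (assuming the stored value is the volume average of M S_{θ}):

import numpy as np

def merge_cells(dV, MS_theta):
    """Volume-weighted merge of the geometry-free conserved momentum M*S_{theta}
    across radial neighbors; dV are the cell volumes."""
    dV = np.asarray(dV, dtype=float)
    MS = np.asarray(MS_theta, dtype=float)
    dV_new = dV.sum()
    MS_new = (MS * dV).sum() / dV_new
    return dV_new, MS_new

# If M*S_{theta} is uniform across the merged cells, the merge is exact:
dV_new, MS_new = merge_cells([0.4, 0.6], [1.7, 1.7])
assert np.isclose(MS_new, 1.7)   # recovered exactly, independent of the split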
B. Recovery of primitive variables

There are many possible ways to convert between conserved variables and primitive variables (e.g., [93]). Our current research focuses on relativistic jets propagating in an ambient medium, so we need to handle large variations of density and pressure in the jet simulations. The following cons-to-prim method proves to be robust for such a task. We use ρ, P and Wv^{i} = u^{i} + Wβ^{i}/α as our primitive variables, where Wv^{i} is the projected fluid velocity in the orthonormal basis. For the equation of state (EOS), we only consider the case of a single-component perfect gas for now. In this case, the specific enthalpy h ≡ 1 + ε + P/ρ is a function of a temperature-like variable Θ = P/ρ (see [94]). In the literature, the most widely used EOS is the ideal-gas EOS, P = (Γ − 1)ρε, where P is the gas pressure and ε is the specific internal energy; the corresponding enthalpy is h = 1 + ΓΘ/(Γ − 1), where Γ is the adiabatic index. The ideal-gas EOS has been applied to gas of either subrelativistic temperature with Γ = 5/3 or ultrarelativistic temperature with Γ = 4/3. For our simulations of relativistic outflow propagating in a cold ambient medium, a variable equivalent adiabatic index Γ_eq = (h − 1)/(h − 1 − Θ) is desirable, to account for transitions between the nonrelativistic and relativistic temperature regimes. There have been efforts to find EOSs that better describe the thermodynamics of a relativistic gas. Synge and Morse derived the correct EOS for the single-component perfect gas in the relativistic regime using modified Bessel functions. Mignone et al. proposed an approximate EOS (denoted the TM EOS) that is consistent with Taub's inequality [96], (h − Θ)(h − 4Θ) ≥ 1, for all temperatures; it differs by less than 4% from the theoretical value given in [94]. Ryu et al. proposed a new EOS (the RC EOS), which better fits the theoretical value. The specific enthalpy for the RC EOS reads h = 2(6Θ^2 + 4Θ + 1)/(3Θ + 2). Following the definitions of the general form of the polytropic index n and the general form of the sound speed c_s, their values can be calculated for the RC EOS. For both the TM and RC EOS, we correctly obtain c_s^2 → 5Θ/3 in the nonrelativistic temperature limit and c_s^2 → 1/3 in the ultrarelativistic temperature limit [97].

We can use these expressions to convert the conservative variables into primitive ones with a standard Newton-Raphson method (NRM) [98], using Θ as our independent variable and the (known) values of the conservative variables. First, by squaring the momentum equation, we get S^2 = D^2 h^2 (W^2 − 1), with h = h(Θ) given by the EOS. Using the relation P = DΘ/W, we get the energy density (excluding rest mass), τ = DhW − DΘ/W − D. We can then derive the corresponding root-finding identity [98]; together with Eq. (93), the derivative df/dΘ follows, where the relation W′ = −h′(W^2 − 1)/(hW) has been used (derived from Eq. (93), see also [98]). The derivative dh/dΘ depends on the particular EOS used. We adopt the RC EOS [see Eq. (89)] for the simulations of relativistic jets and the ideal-gas EOS for the remaining numerical tests.
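A sketch of the Θ-based Newton-Raphson recovery described above, specialized to the ideal-gas EOS (the RC EOS only changes h(Θ) and dh/dΘ); the initial guess and iteration control are illustrative choices, not the code's actual safeguards:

import numpy as np

def cons_to_prim(D, S, tau, Gamma=4.0/3.0, tol=1e-12, itmax=50):
    """Newton-Raphson recovery with Theta = P/rho as the unknown, using
    S^2 = D^2 h^2 (W^2 - 1) and f(Theta) = D h W - D Theta / W - D - tau.
    S is the momentum magnitude in the orthonormal basis."""
    hp = Gamma / (Gamma - 1.0)                    # dh/dTheta for the ideal gas
    Theta = max(tau / D, 1e-10)                   # crude initial guess
    for _ in range(itmax):
        h = 1.0 + hp * Theta
        W = np.sqrt(1.0 + (S / (D * h))**2)       # from the squared momentum relation
        f = D * h * W - D * Theta / W - D - tau
        Wp = -hp * (W * W - 1.0) / (h * W)        # dW/dTheta, as in the text
        df = D * (hp * W + h * Wp) - D / W + D * Theta * Wp / W**2
        dTheta = f / df
        Theta -= dTheta
        if abs(dTheta) < tol * max(Theta, 1e-30):
            break
    h = 1.0 + hp * Theta
    W = np.sqrt(1.0 + (S / (D * h))**2)
    rho = D / W
    return rho, rho * Theta, W                    # (rho, P, Lorentz factor)

# Round trip: rho=1, P=0.1, v=0.3 is recovered to machine precision
rho0, P0, v = 1.0, 0.1, 0.3
W0 = 1.0 / np.sqrt(1.0 - v * v)
h0 = 1.0 + 4.0 * P0 / rho0
D0, S0 = rho0 * W0, rho0 * h0 * W0**2 * v
tau0 = rho0 * h0 * W0**2 - P0 - D0
print(cons_to_prim(D0, S0, tau0))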
C. Reconstruction

We reconstruct the primitive variables (denoted by Q) on the left and right sides of each cell with the total variation diminishing (TVD) method described in [99], where ∆ξ_i = ξ_{i+1/2} − ξ_{i−1/2} is the cell width, ξ̄ is the cell center, and ∆Q_i is a slope-limited gradient written in terms of a nonlinear limiter function φ(v). We adopt the same modified monotonized central (MC) limiter as in [99]. To reconstruct the left and right states of cell i, the slope limiter utilizes the cell-average values ⟨Q⟩_{i−1}, ⟨Q⟩_i and ⟨Q⟩_{i+1}, defined at the cell centers ξ̄_{i−1}, ξ̄_i and ξ̄_{i+1}. This algorithm takes nonuniform spacing into account. The cell-center position ξ̄_i can be taken as either the volume-averaged cell center ("centroid of volume") or the arithmetic-mean cell center. In this study, we adopt the arithmetic-mean cell center for our simulations.

D. Treatment of numerical conditions

Robust numerical simulations require the treatment of several numerical conditions. One of them is the Courant-Friedrichs-Lewy (CFL) condition [100], which limits the time-step size in explicit numerical methods. The simulation domain of the JET code allocates all cells at the same temporal level, and a global time step is used to advance the simulation time. To find the global time step, we first calculate the time step δt of individual cells in the domain according to Eq. (103), where CFL is the CFL number; its value has been set to 0.4 for the simulations performed in this study. (λ_r)_± and (λ_θ)_± are again the minimum and maximum characteristic speeds for the cell in the radial and polar directions, respectively. V^r_cell is the cell's radial velocity, which approximates the velocity of the cell's upper interface. We then pick the smallest time step as the global one. The subtraction of the cell's radial velocity in Eq. (103) leads to a much larger time step, making long-term simulations of relativistic jets computationally efficient.

Another numerical condition that needs care is the boundary condition. For our cell-centered grid structure in spherical polar coordinates, we follow the boundary treatment described in [73,80]. We first allocate two layers of ghost zones for each of the four boundaries (two in the radial direction and two in the polar direction), and then fill the boundary ghost zones at the radial origin and at the θ boundary with values copied from the corresponding points in the interior of the grid, accounting for appropriate parity factors. For the outer boundary in the radial direction, we adopt the Dirichlet boundary condition and use the initial-data routine to set the ghost-zone values.
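The two ingredients above can be sketched as follows. The slope limiter is written in the generalized-minmod (modified MC) form commonly used in this family of moving-mesh codes, with a free parameter theta in [1, 2]; the per-cell time step subtracts the cell's radial velocity from the radial wave speeds in the spirit of Eq. (103). Both are assumptions about the precise forms, which are not reproduced in the text:

def limited_slope(Qm, Q0, Qp, xm, x0, xp, theta=1.5):
    """Slope-limited gradient on a nonuniform grid (generalized-minmod sketch)."""
    a = theta * (Q0 - Qm) / (x0 - xm)
    b = (Qp - Qm) / (xp - xm)        # centered slope over the nonuniform stencil
    c = theta * (Qp - Q0) / (xp - x0)
    if a > 0 and b > 0 and c > 0:
        return min(a, b, c)
    if a < 0 and b < 0 and c < 0:
        return max(a, b, c)
    return 0.0

def local_timestep(dr, r_dth, lam_r, lam_th, V_r, cfl=0.4):
    """Per-cell time step: the cell's radial velocity V_r is subtracted from
    the radial wave speeds before applying the CFL criterion. The exact
    combination of directions used in the code is a guess; this sketch simply
    sums the directional rates."""
    rate_r = max(abs(lam_r[0] - V_r), abs(lam_r[1] - V_r)) / dr
    rate_th = max(abs(lam_th[0]), abs(lam_th[1])) / r_dth
    return cfl / (rate_r + rate_th)

# A cell co-moving with a fast radial flow keeps a usable time step:
print(local_timestep(0.01, 0.05, lam_r=(0.94, 0.99), lam_th=(-0.2, 0.2), V_r=0.96))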
E. The adjusted moving-mesh scheme

Since the initial development of the JET code [57], the moving-mesh scheme has been continually updated to improve the accuracy and efficiency of relativistic jet simulations. The adjusted moving-mesh scheme in this study follows these rules. Inside the simulation domain, the radial interface of a grid cell moves at the local contact-discontinuity velocity of the flow. Each radial track moves independently. The inner and outer radial boundaries of the domain can also move. At each time step, the longest and shortest cells in each radial track are marked for refinement or derefinement according to the maximum or minimum cell aspect ratio (a := ∆r/(r∆θ)) allowed in the simulation (see [57] for more information). In ultrarelativistic jet simulations, we find that the domain cells can squeeze into an ultrathin shell, with the cell aspect ratio reaching 1/100 or even smaller. In order to resolve the relativistic thin shell, only cells with length ∆r < r/(8Γ^2) are marked for derefinement. In addition, we define an approximate second derivative of a fluid variable as an error measure E_i to mark regions of interest. At each time step, the cell along each radial track with the maximum error measure is marked for refinement if its aspect ratio is larger than twice the minimum aspect ratio and its error measure satisfies E_i > 0.9. Cells with E_i < 0.002 are considered for derefinement; the cell to be derefined is the one with the smallest time step (see [64]). To reduce CPU load imbalance, the number of cells in each radial track is balanced dynamically during the simulation.

V. FIXED-MESH NUMERICAL SIMULATIONS

A. Bondi accretion in maximally sliced trumpet coordinates

We first consider spherically symmetric, radial fluid accretion onto a nonrotating black hole (ingoing Bondi flow) [101,102]. Following previous work (e.g., [70,103]), we perform simulations of Bondi flow in maximally sliced trumpet coordinates [104,105]. The transformation between Schwarzschild coordinates and maximally sliced trumpet coordinates is given for reference in Appendix B. We set the fluid parameters according to Table 1 of [103]: the accretion rate Ṁ_bondi = 10^-4 M, the adiabatic index Γ = 4/3, and the critical radius R_s = 10M, where M is the mass of the central black hole. For simplicity, M is set to 1 in the simulation.

The simulation domain is an axisymmetric spherical grid spanning the region r ∈ [0.4 M, 10 M], θ ∈ [0, π/2]. We employ logarithmic grid spacing in the radial direction with the cell aspect ratio a set to one (i.e., a := ∆r/(r∆θ) = 1). The finest cell, located closest to the inner boundary, has a spacing ∆r = r_min(θ_max − θ_min)/Nt, where Nt is the number of cells in the azimuthal direction. To maintain the unit aspect ratio of the cells, the number of cells in the radial direction, Nr, follows from the logarithmic spacing. We conduct simulations at three different resolutions: low resolution with Nt = 128, medium resolution with Nt = 256, and high resolution with Nt = 512. For the benefit of the convergence test, we set the number of cells in the radial direction to Nr = 2Nt; in this case, the cell aspect ratio deviates from one slightly.
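The radial cell count implied by the logarithmic grid with unit aspect ratio can be checked directly; for this domain, ln(r_max/r_min)/∆θ is about 2.05 Nt, consistent with the adopted Nr = 2Nt and the slight deviation from unit aspect ratio noted above:

import numpy as np

# Radial cell count for a logarithmic grid with dr/r = dtheta (unit aspect ratio)
r_min, r_max = 0.4, 10.0
th_min, th_max = 0.0, np.pi / 2
for Nt in (128, 256, 512):
    dth = (th_max - th_min) / Nt
    Nr_unit = np.log(r_max / r_min) / dth      # exactly unit aspect ratio
    print(Nt, int(round(Nr_unit)), 2 * Nt)     # ~2.05*Nt, close to the adopted Nr = 2*Nt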
In Fig. 1, we show the radial profiles of the fluid rest-mass density (top) and the fluid velocity (middle) at times T = 0 M (init) and T = 50 M (fin) for the medium-resolution simulation. The profile of the Bondi flow is maintained throughout the simulation. In the bottom panel, we plot the L1-norm of the error in the rest-mass density; the L1-norm of the error is defined as in [69]. The Bondi simulations demonstrate second-order convergence of the L1-norm error with respect to resolution. The code adopts the second-order RK2 time integrator and the second-order piecewise linear reconstruction method (PLM) described in Sec. IV C. The presented convergence result is as expected and agrees with previous studies (see, e.g., [69,70]). We defer the implementation of higher-order reconstruction schemes on our unstructured grid in spherical geometry, such as the piecewise parabolic method (PPM) [106], weighted essentially nonoscillatory (WENO) schemes [107][108][109][110], or the monotonicity-preserving scheme (MP5) [109], to future work.

B. Tolman-Oppenheimer-Volkoff star

The next numerical test we consider is the Tolman-Oppenheimer-Volkoff (TOV) star, a spherically symmetric body of isotropic material in equilibrium [111,112]. We conduct two TOV-star tests based on [60]: a stationary case and a case with pressure depletion. The initial profile for the TOV star has a central rest-mass density ρ_c(0) = 1.28 × 10^-3. We adopt the polytropic EOS P(ρ) = Kρ^Γ, with (K, Γ) = (100, 2), for the initial data. For the evolution, we adopt the ideal-gas law. Additional parameters for the initial profile can be found in Table I, in cgs units.

In Fig. 2, we plot the variation of the central maximum density as a function of dynamical time (√ρ_c(0) t) for both cases. For the stationary case, we find the central maximum density varies within 0.5% over 14 dynamical times for the Nt = 64 simulation, confirming the stability of the star; when we increase the resolution to Nt = 128, the result improves further. For the pressure-depletion simulation, we reduce the TOV initial pressure profile by ten percent. The star falls out of equilibrium and undergoes radial oscillations. We conduct simulations at two resolutions (Nt = 64 and 128) and find consistent oscillation patterns, as shown in the bottom panel of Fig. 2. The result is equivalent to the test result in [60].
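A convergence check of the kind used in these tests can be sketched as follows; the exact normalization of the L1-norm in [69] may differ, and the error values below are fabricated placeholders illustrating second-order scaling, not simulation data:

import numpy as np

def l1_error(rho, rho_init, dV):
    """Volume-weighted L1-norm of the rest-mass density error (sketch; the
    normalization convention of [69] may differ)."""
    return np.sum(np.abs(rho - rho_init) * dV) / np.sum(np.abs(rho_init) * dV)

def convergence_order(err_lo, err_hi, refine=2.0):
    """Observed order from two resolutions differing by the factor 'refine'."""
    return np.log(err_lo / err_hi) / np.log(refine)

errs = {128: 4.0e-4, 256: 1.0e-4, 512: 2.5e-5}     # placeholder values only
print(convergence_order(errs[128], errs[256]))      # -> 2.0, i.e., second order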
C. Fishbone-Moncrief torus around a Schwarzschild black hole

Our next test concerns a stationary, axisymmetric, isentropic torus around a Schwarzschild black hole [113]. We consider a particular instance of the Fishbone-Moncrief solution in which the spin of the black hole is set to zero. We generate the initial data in Schwarzschild coordinates, with radius denoted by R; however, we evolve the system in isotropic coordinates of the Schwarzschild metric, with radius denoted by r (see Appendix B). The initial-profile generator follows the implementations in [79,86,114]. Table II shows the key parameter values for the torus. For the ambient atmosphere, we set ρ = ρ_min(R/R_g)^{-3/2} and P = P_min(R/R_g)^{-5/2}, where ρ_min = 10^-8, P_min = 10^-10, R_g = GM/c^2 is the black hole gravitational radius and M is the black hole mass. For the simulation, we employ the ideal-gas EOS P = (Γ − 1)ρε with Γ = 4/3. In the azimuthal direction, the simulation domain extends from θ_min = π/3 to θ_max = 2π/3. In the radial direction, the grid covers the region from r_min = 4 to r_max = 20. At the location of maximum pressure, r = 10.98 (R = 12), the orbital period of the torus is around 238.9. We set the final time of the simulation to 2000, roughly eight orbits. We conduct two simulations, with grid resolutions Nt = 256 and Nt = 512, and find consistent results.

Figure 3 illustrates contour plots of the black hole-torus system at the beginning (top panel) and at the end of the simulation (middle and bottom panels), taken from the Nt = 512 simulation for better visual clarity. The top panel shows the initial contour plot of the logarithmic density ρ. Comparing the two density snapshots, we first find that the torus maintains its density structure throughout the simulation. We check that the maximum rest-mass density stays within 4% of its original value during the simulation, and its radial position varies within 2%. Because the torus stays close to the black hole, the ambient gas falls into the black hole and grazes the torus surface in the infalling process. A bow shock appears in front of the torus, and a trailing tail fills the inner region between the torus and the central black hole. The falling gas slows down when it crosses the bow shock, as can be seen from the velocity contour plot. The stability of the torus structure near the black hole showcases the code's robustness in handling fluid rotation under strong gravity.

D. Rayleigh-Taylor instability for a modified Bondi flow

Previous work [57] with the original JET code captured the detailed nonlinear features of the Rayleigh-Taylor instability in a relativistic fireball, using the HLLC Riemann solver described in [55]. To test our general relativistic HLLC Riemann solver, we modify the Bondi flow to induce the Rayleigh-Taylor instability under strong gravity. The setup is similar to a Strömgren sphere around the central black hole: low-density hot gas is surrounded by high-density gas under gravitational acceleration [116]. Within the radius r < 3(1 + 0.1 × (1 + cos(80θ))/2.0), the density and pressure of the Bondi flow are modified as ρ = 0.1ρ_bondi, P = 50P_bondi, v^r = 0, where ρ_bondi and P_bondi are taken from the Bondi profile of Sec. V A. This setup creates a hot low-density bubble inside the Bondi flow with a rippled interface. As the hot low-density gas pushes against the heavier Bondi flow, the Rayleigh-Taylor instability (sometimes referred to as the Richtmyer-Meshkov instability in this case) develops. We perform this simulation with an azimuthal resolution of Nt = 512, covering the azimuthal angle from π/3 to 2π/3. Figure 4 shows the time evolution. Initially, the hot gas pushes outward and compresses the incoming Bondi flow to higher density, as shown at t = 5 M.
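For reference, the perturbed-bubble initial condition quoted above can be written compactly as a mask over the Bondi background (a direct transcription of the setup, with the unmodified radial velocity left untouched outside the bubble):

import numpy as np

def in_bubble(r, theta):
    """Perturbed hot-bubble region of Sec. V D: r < 3*(1 + 0.1*(1 + cos(80*theta))/2)."""
    return r < 3.0 * (1.0 + 0.1 * (1.0 + np.cos(80.0 * theta)) / 2.0)

def modified_bondi(r, theta, rho_bondi, P_bondi):
    """Apply the bubble modification to a Bondi background."""
    if in_bubble(r, theta):
        return 0.1 * rho_bondi, 50.0 * P_bondi, 0.0     # (rho, P, v_r)
    return rho_bondi, P_bondi, None                      # None: keep the Bondi v_r

print(in_bubble(3.1, 0.0), in_bubble(3.1, np.pi / 80))  # crest: True, trough: False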
Instability fingers develop and evolve inside the low-density region, and nonlinear features of the instability continue to evolve at t = 15 M. Later on, due to the attraction of the central black hole, the turbulent gas flows into the black hole. The implemented HLLC Riemann solver is able to capture the detailed structure of the instability in the strong-field regime, and it performs better than the HLLE Riemann solver.

VI. MOVING-MESH NUMERICAL SIMULATIONS

A. Spherical shock tube test

One advantage of our moving-mesh code is that cell faces are able to move with the contact velocity of the flow in the radial direction. It has been shown that the contact discontinuity is much better preserved when employing HLLC on the moving mesh (see Fig. 7 of [55]). What is more, the flow naturally adjusts the cell width in the radial direction; combined with robust refinement and derefinement schemes, the simulation domain is able to resolve the region of interest [64]. To test the accuracy of the moving-mesh scheme, we conduct the same spherical shock tube test as in [61]: within a radius of 0.25 (r < 0.25), the density ρ and pressure P are set to 1; outside this region, the density and pressure are 0.1. We adopt the Minkowskian frame for the test. Since the tetrad formulation for the HLLC Riemann solver also works for the Minkowski metric, no additional steps are needed for the special relativistic simulations.

In the azimuthal direction, the simulation domain extends from 0 to π/2 with Nt = 128. In the radial direction, the grid covers the region from r_min = 0.01 to r_max = 0.5. We adopt logarithmic spacing in the radial direction and set the initial cell aspect ratio to one. We first conduct the spherical shock tube test with different Riemann solvers in fixed-mesh simulations. Both the HLLE and HLLC Riemann solvers handle the test well and give almost the same results (as shown in Fig. 5). In Fig. 6a, we compare the final profiles for simulations with the fixed-mesh setup and the moving-mesh setup. The density plot exhibits a sharp transition at the contact discontinuity for the moving mesh and a relatively smooth one for the fixed mesh. Following the compression of the fluid in the shocked region, the cells squeeze between the contact discontinuity and the forward shock. The P/ρ plot reveals a jump at the contact discontinuity. We find this appears in our moving-mesh simulation as well as in the literature [55,61]. It may come from the physical squeezing of the fluid as the grid moves together with the flow in the moving-mesh simulation, or it may indicate that the TVD reconstruction scheme requires some adjustment for the moving mesh.

To investigate the jump's dependence on numerical resolution, we performed additional moving-mesh simulations at three resolutions: Nt = 32, Nt = 128 and Nt = 2048. Results from these simulations (see Fig. 7) demonstrate that the jump feature persists and that its magnitude is invariant under different numerical resolutions. We also performed additional fixed-mesh simulations at higher resolution and found no trace of the jump feature. Since the jump's magnitude does not increase with time or spatial resolution, and considering its minimal impact on the fluid dynamics in moving-mesh simulations, we leave this numerical phenomenon as an open question for now.

In the rarefaction region, the cells get elongated, leading to an aspect ratio larger than one.
Because of the increase in aspect ratio (i.e., the reduction of radial resolution), we find that the peak of the velocity profile in the moving-mesh simulation becomes less sharp than in the fixed-mesh simulation.

However, since we have full control over the grid refinement, we can specify a maximum aspect ratio in the simulation. We conduct another moving-mesh shock tube simulation in which the maximum aspect ratio is set to 1.5. When an elongated cell reaches this threshold, it splits into two cells. To show the effect of such a refinement scheme, we compare the profiles of the moving-mesh simulations with and without maximum-aspect-ratio control in Fig. 6b. With the maximum-aspect-ratio control, the resolution increases in the region where the cell aspect ratio reaches the threshold value, and the peak of the velocity profile becomes sharper compared with the moving-mesh simulation without aspect-ratio control. Overall, the implemented HLLC Riemann solver on the moving mesh is robust for simulating relativistic outflows.

B. Relativistic jet emerging from a black hole-torus system

The detection of the gravitational-wave (GW) signal GW170817, coupled with the observations of its electromagnetic (EM) counterpart, signifies the commencement of the multimessenger astronomy era [118]. Research has demonstrated that the structure of the emerging relativistic outflow plays a crucial role in shaping the afterglow emission of GRB170817A [51,[119][120][121][122][123][124][125]. This event provides an ideal candidate for using electromagnetic observations of the emergent outflow to infer the physics of binary neutron star (BNS) mergers. While the presented moving-mesh code is capable of simulating relativistic jets from various progenitor systems, in the following we use a pseudomodel inspired by the outcome of compact-binary-merger simulations (see, e.g., [28,[126][127][128][129][130]). We set up a black hole-torus system in isotropic coordinates of the Schwarzschild metric, with the mass of the central black hole set to M = 3 M_⊙ and the torus mass set to 0.2 M_⊙. The radius of the inner edge of the torus is 6 M, and the radius of its pressure maximum is set to 16 M [131]. For the simulation domain, the inner boundary is located at a radius of 12, and we use Nt = 256 cells to cover the half-spherical domain, with the cells' initial aspect ratio set to 1. We adopt reflecting boundary conditions in the azimuthal direction. Outside the torus, the domain is filled with an ejecta cloud with a total mass of 0.02 M_⊙. The cloud density structure follows a power law, ρ(r) = ρ_0(r/r_0)^{-n}, where ρ_0 = (n − 3)M_ejecta/(4πr_0^3) is derived to give a total ejecta mass M_ejecta. The pressure is P = Kρ^Γ + P_floor, with K = 0.54 and Γ = 4/3. We also add a density floor ρ_floor = 6.2 × 10^-5 g/cm^3 (i.e., 10^-22 in code units)
and a pressure floor P_floor = 10^-4 ρ_floor to the initial profile to avoid numerical precision errors. We set the density slope index to n = 3.5 to represent the postmerger ejecta profile. Here, we ignore the velocity of the ejecta profile for simplicity. The reference radius r_0 is set to 6 M_⊙. A jet engine with a variable luminosity L(t) = 2 × 10^51 exp(−t/t_decay) erg/s operates for 30 ms in the polar region just above the black hole-torus plane. The engine decay timescale t_decay has been set to 100 ms. This gives a total injected jet-engine energy of 5.2 × 10^49 erg. We choose this low-energy injection to test the code's capability of launching a relativistic jet under a tight energy budget; in jet simulations, it becomes easier to successfully launch a relativistic jet given a higher energy injection (see, e.g., [51,64]). The profile of the jet engine features a narrow nozzle with an opening angle of 0.1 rad. For the complete jet-engine profile, we refer readers to the description in Appendix C as well as in [63,64].

Figure 8 shows the jet-launching process during the first 30 ms. At the beginning of the simulation, the cloud flows into the black hole. In the polar direction, at a location centered around 130 km, a small amount of relativistic gas with a terminal Lorentz factor of 100 (i.e., the jet engine) is injected into the cloud. The injected gas has an initial boost velocity in the radial direction (see Appendix C). The addition of the relativistic gas slightly pushes the cloud gas in the polar direction, leading to a non-negative radial velocity (as can be seen from the radial-velocity plot at t = 0.1 ms). The continuous injection of hot relativistic gas drives shocks and changes the temperature profile in the polar direction. By t = 5 ms, a shocked cocoon develops and reveals a two-layer structure: a high-density outer layer resulting from the forward shock, while the inner cocoon, heated by the jet engine and the reverse shock, reaches a low-density regime [41,132,133]. Inside the inner cocoon, the shocked gas accelerates to high velocity, with a maximum Lorentz factor of around 8 at t = 5 ms. The moving-mesh scheme dynamically allocates cells to resolve the shocked region. The interfaces of the double-layer structure can be seen in the contour plot of the cell's radial resolution: the first interface lies at the shock front between the cocoon and the unperturbed cloud; the second interface lies between the cocoon's inner low-density hot relativistic core and its high-density colder part. At the bottom of the cocoon, the shock front hits the torus. At t = 10 ms, the shock front starts to move beyond the torus and wrap around it. At the head of the cocoon, the loaded matter diverts part of the shocked gas sideways. Below this region, the inner core of the cocoon accelerates to a higher Lorentz factor of 13. Throughout the acceleration period, the maximum Lorentz factor of the jet reaches 20 (at about t = 13 ms), smaller than the terminal Lorentz factor of the injected relativistic gas. This is largely due to the engine's relatively low energy budget (we refer readers to more energetic jet simulations in [51,64]). By the end of the jet-engine injection, t = 30 ms, at the base of the grid domain, the front of the shocked cocoon has passed the torus region. A relatively high-density buffer zone appears between the torus and the cocoon (see the density and temperature contour plots). The torus itself rotates stably during the jet-launching process, as illustrated by the inner contour plots in Fig. 8.
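Two of the quoted numbers can be verified directly: integrating the engine luminosity over the 30 ms injection gives the stated 5.2 × 10^49 erg, and the ejecta normalization ρ_0 = (n − 3)M_ejecta/(4πr_0^3) indeed returns the total mass for the assumed power-law profile (here checked in rescaled units with r_0 = M_ejecta = 1):

import numpy as np
from scipy.integrate import quad

# Total injected engine energy: integral of L(t) = 2e51 * exp(-t/t_decay) over 30 ms
L0, t_decay, t_on = 2.0e51, 0.1, 0.03             # erg/s, s, s
E_inj = L0 * t_decay * (1.0 - np.exp(-t_on / t_decay))
print(f"E_inj = {E_inj:.2e} erg")                  # ~5.2e49 erg, matching the text

# Ejecta normalization check: rho(r) = rho0 * (r/r0)^(-n) for r > r0
n, M_ej, r0 = 3.5, 1.0, 1.0
rho0 = (n - 3.0) * M_ej / (4.0 * np.pi * r0**3)
mass, _ = quad(lambda r: 4.0 * np.pi * r * r * rho0 * (r / r0)**(-n), r0, np.inf)
print(f"recovered ejecta mass = {mass:.6f}")       # -> 1.0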
The head of the shocked cocoon expands beyond the initial boundary of the grid domain. More cells are allocated in front of the boundary as the shock front propagates; the radius of the new boundary is chosen such that the head of the shock front stays below 0.8 of this new radius during the simulation.

Figure 9 illustrates the continued evolution of the relativistic cocoon emerging from the black hole-torus system after the jet engine has been turned off. For computational efficiency, we cut out the grid domain within a radius of 300 (around 440 km). We let the inner boundary move with a velocity equal to a fraction of the fluid's local maximum velocity. When the inner and outer boundaries expand outward together, the simulation enters a (weak) scaling regime in which the time step ∆t increases with the absolute time itself. In this way, the moving-mesh code in spherical coordinates can simulate the long-term evolution of relativistic jets over multiple orders of magnitude in time. When the jet engine turns off, it stops accelerating nearby gas, and the inner cocoon turns into a shrinking relativistic bubble. The relativistic bubble keeps pushing against the mass-loaded head, exhausting its internal and kinetic energy. By t = 50 ms, the top of the bubble starts to decelerate dramatically. The collision with the mass-loaded head converts part of its kinetic energy into thermal energy and drives a wave through the relativistic bubble, increasing the temperature of the inner cocoon all the way to its bottom (see the simulation video). Eventually, the relativistic bubble turns into a relativistic thin shell (see the velocity contour at t = 300 ms). The relativistic shell features a relativistic core with a mildly relativistic sheath, similar to previous special relativistic jet simulations (see, e.g., [51,64,126,134]). The outer layer of the cocoon goes through adiabatic expansion: the density within this layer keeps decreasing while the temperature structure roughly remains the same (see the contour plots of the temperature-like variable from t = 50 ms to t = 1 s). The interface between the inner and outer cocoon features a high-density pillar. The relativistic shell keeps sweeping through the medium while depleting its kinetic energy. By t = 100 s, the relativistic thin shell has been replaced by a mass-loaded, slow-moving core, and we see a morphological change in the outer shell structure. Finally, by the end of our simulation, t = 1000 s, the outflow velocity becomes completely Newtonian. We have now seen the complete life cycle of a relativistic jet, from its birth at the black hole scale to the distance of its dissipation.

In the following, we discuss two dynamical features of this specific simulation. The first feature of interest concerns the base of the cocoon.
In Fig. 9, we use a white circle to indicate this region of interest. It appears after the shock front of the cocoon has passed over the torus, and it originates from the buffer zone, or shock zone, between the original torus and the remaining cocoon. It propagates subrelativistically and maintains its hump shape before t = 1 s. Later on, the shock front accumulates enough matter and slows down to Newtonian velocities. When this happens, through hydrodynamical interaction, the morphology of the region changes and the hump shape disappears, leaving behind a broken filament, as shown at t = 100 s and t = 1000 s. The second feature of interest relates to the density pillar at the interface between the inner and outer cocoon. We mark it with a red square in the figure. Its formation, to some extent, results from the shutdown of the central jet engine at the end of the jet-launching period. At the beginning, when the central jet engine inflates a cocoon, it drives mass and energy into the cocoon's outer layer while creating a low-density hot inner funnel that generates the relativistic outflow. When the central engine shuts down, the inner cocoon quickly cools and stops pushing the outer layer (see the snapshots at t = 50 ms and t = 300 ms). The adiabatic expansion of the outer layer then further separates this interface from the shock front, as shown at t = 1 s. The interface pillar also has positive radial velocity and moves with the outer shell; however, the part connected to the outer shell moves faster. At some point, the pillar detaches from the outer shell and falls back to the inner region. This is what happens from t = 100 s to t = 1000 s. Because of the long-term simulation of the relativistic jet, we are able to capture such detailed hydrodynamical evolution, which may provide insights for the study of the morphologies of astronomical jets.

Throughout the simulation, the maximum grid resolution in the radial direction remains below 60, a value we set initially. We see that the moving-mesh scheme, combined with dynamical grid refinement and derefinement, can capture the detailed dynamical features of the relativistic jet simulation over many orders of magnitude in space and time. The adjusted moving-mesh scheme also makes the simulation of relativistic jets computationally efficient. The presented simulation was performed on a single high-performance computing node with 32 Intel Xeon Gold 6148 CPUs; the whole simulation consumed around 6400 core-hours.

FIG. 9. The full-time-domain evolution of the relativistic jet emerging from the black hole-torus system. The inner boundary moves outward at a fraction of the local maximum velocity. The simulation video is available on YouTube at [117] in high definition.

VII. CONCLUSION

This paper presents an advancement in computational astrophysics: the development and implementation of a general relativistic moving-mesh hydrodynamic code featuring an advanced Riemann solver in curvilinear coordinates. We showcase the details of integrating a general relativistic framework into the hydrodynamic simulation code JET, achieved through the reference-metric method.
With its ability to elegantly handle the intricate spacetime geometries inherent in general relativity, the tetrad formulation is an ideal choice for addressing the HLLC Riemann problem under strong gravity. The achievement of our work is the successful adaptation of the tetrad formulation to incorporate the HLLC Riemann solver into the general relativistic moving-mesh code. We have conducted a series of numerical simulations to validate and demonstrate the efficacy of the newly developed code. These simulations encompass both fixed-mesh and moving-mesh scenarios, allowing us to test the code's performance under various conditions. The results are particularly noteworthy in demonstrating the code's robustness and reliability in simulating fluid flows under the influence of strong gravitational fields.

Compared with the fixed-mesh approach, a moving-mesh scheme can increase the time step for fluid regions with high velocity, since it removes the limitation imposed by the bulk velocity (see, e.g., [61]). The moving-mesh approach makes long-term simulations of relativistic jets computationally feasible (see, e.g., [51,63,64]). By extending the JET code's capability of handling relativistic jets from astronomical scales down to the scale of a black hole, we have opened new avenues for full-time-domain simulations of relativistic jets, from their genesis to their dissipation. To demonstrate this possibility, we designed a representative prototype model featuring a torus around a central black hole, with a jet manually launched in the polar direction near the black hole-torus system. For the underlying jet-launching mechanisms, we refer readers to Blandford-Payne [4] or Blandford-Znajek [3] and related dynamo processes (see, e.g., [13,14]); in this work, we prescribe an engine profile to imitate the jet-launching process. This setup allows us to explore the dynamics of the jet's journey from its origin near the black hole-torus system to its final Newtonian phase. We found multiple new hydrodynamical features in this end-to-end simulation. For the first time, we have been able to simulate the complete life cycle of a relativistic jet, providing insights into the detailed structures of the cocoon and the emergent jet over the whole journey. Furthermore, such full-time-domain jet simulations enable the joint investigation of various electromagnetic phenomena associated with relativistic outflows. For the case of BNS mergers, the observational phenomena include the kilonova emission from the remnant ejecta (see, e.g., [24,25,135]), the GRB prompt and afterglow emission (see, e.g., [23,136,137]), and other related processes. By combining our simulations with GRMHD models of jet-launching processes, we will be able to extend the evolution of outflows to distances relevant to long-term electromagnetic observations. This integrative approach aligns with the era of multimessenger astronomy, allowing for an unprecedented understanding of the underlying physics of jet-launching systems.

While this paper sets the foundational steps in this direction, the complete realization of these ambitious goals remains a pursuit for future research. The potential for further advancements and discoveries in the field is vast, and our work may catalyze the next generation of astrophysical jet simulations, potentially revolutionizing our understanding of relativistic jets and their associated physics.
FIG. 1. Top: radial rest-mass density profile of the Bondi flow at the medium resolution Nt = 256. The mass accretion rate is Ṁ_bondi = 10^-4 M. The blue dotted line indicates the initial analytical solution at T = 0 M, while the yellow dashed line denotes the numerical solution at T = 50 M; the green dash-dotted line shows their difference. The black dotted vertical line indicates the radial location of the event horizon of the black hole, at r = 0.78 M in the maximally sliced trumpet coordinates. Middle: the radial fluid velocity profile, −Wv^r, where W is the Lorentz factor and v^r is the fluid velocity with respect to the Eulerian observer. Bottom: the L1-norm of the error in the rest-mass density as a function of the azimuthal grid number Nt. The dashed line indicates second-order convergence.

FIG. 2. Normalized variation of the central maximum density as a function of dynamical time for the TOV star at two resolutions (Nt = 64 and Nt = 128). Top: the time evolution for the stationary case. Bottom: the time evolution for the pressure-depleted star, whose equilibrium pressure has been reduced by ten percent globally.

Table II, footnote a: M is the mass of the central Schwarzschild black hole. R_in is the location of the inner edge of the torus. R_max is the pressure-maximum location of the torus in Schwarzschild coordinates. ρ_max is the peak density in the torus, and Φ_Rmax is the angular velocity at R_max (calculated from the simulation at Nt = 512 resolution). l = u^t u_ϕ is the constant specific angular momentum. κ and Γ define the EOS of the initial torus, P = κρ^Γ. Unless specified, the quoted values follow the G = c = M_⊙ = 1 unit convention.

FIG. 3. Contour plots of a stationary torus around a static black hole. The simulation adopts an axisymmetric spherical domain. The azimuthal angle extends from π/3 to 2π/3, and the radial grid extends from r_min = 4 to r_max = 20. The top panel shows the initial profile of the logarithmic density. The middle panel shows the density profile at the end of the simulation, t = 2000. The bottom panel shows the contour plot of the radial velocity Wv^r.

FIG. 4. Time snapshots of the density contours of a hot low-density bubble embedded inside a cold Bondi flow. The left panels show the simulation results with the HLLE Riemann solver, while the right panels present the comparative results from the simulation with the HLLC Riemann solver. The simulation video for the HLLC case is available on YouTube at [115].

FIG. 5. Profiles for the spherical shock tube test at t = 0.3 in fixed-mesh simulations with the HLLC (solid line) and HLLE (dotted line) Riemann solvers.

FIG. 6. (a) Shock tube test in spherical coordinates for the fixed-mesh (black dots) and moving-mesh (MM, red squares) simulations. The profiles are shown at t = 0.3. The initial discontinuity is located at x = 0.25. The inner (outer) state of the shock tube is ρ = 1, P = 1, v = 0 (ρ = 0.1, P = 0.1, v = 0). The EOS follows the ideal-gas law with Γ = 4/3. (b) Shock tube test in spherical coordinates for the moving-mesh simulations with different control schemes for the cell aspect ratio. The black-dot line represents the simulation without a maximum cell aspect ratio; the red-square line shows the simulation with the maximum aspect ratio set to 1.5.
FIG. 7. Profiles for the spherical shock tube test at t = 0.3 in the moving-mesh simulations at different grid resolutions. The purple diamonds show results from the simulation at resolution Nt = 2048, while the red squares and blue circles represent results from the Nt = 128 and Nt = 32 simulations, respectively.

FIG. 8. Jet-launching process from the black hole-torus system, visualized at four time snapshots. From left to right, the contour plots represent the logarithmic density, the temperature-like variable Θ, the normalized radial velocity Wv^r/c, and the cell's radial resolution (i.e., inverse aspect ratio). The inner contour plots zoom in on the central region of the domain. The inner boundary is stationary before t = 30 ms. The simulation video is available on YouTube at [117] in high definition.

TABLE I. Parameter values for the initial profile of the TOV star.

TABLE II. Parameters for the stationary torus around a black hole.(a)
Interaction between autophagy and senescence is required for dihydroartemisinin to alleviate liver fibrosis

Autophagy and cellular senescence are stress responses essential for homeostasis. Therefore, they may represent new pharmacologic targets for drug development to treat diseases. In this study, we sought to evaluate the effect of dihydroartemisinin (DHA) on senescence of activated hepatic stellate cells (HSCs), and to further elucidate the underlying mechanisms. We found that DHA treatment induced the accumulation of senescent activated HSCs in rat fibrotic liver, and promoted the expression of the senescence markers p53, p16, p21 and Hmga1 in the cell model. Importantly, our study identified the transcription factor GATA6 as an upstream molecule in the facilitation of DHA-induced HSC senescence. GATA6 accumulation promoted DHA-induced p53 and p16 upregulation, and contributed to HSC senescence. By contrast, siRNA-mediated knockdown of GATA6 dramatically abolished DHA-induced upregulation of p53 and p16, and in turn inhibited HSC senescence. Interestingly, DHA also appeared to increase autophagosome generation and autophagic flux in activated HSCs, which was the underlying mechanism for DHA-induced GATA6 accumulation. Autophagy depletion impaired GATA6 accumulation, while autophagy induction showed a synergistic effect with DHA. Notably, p62 was found to act as a negative regulator of GATA6 accumulation. Treatment of cultured HSCs with various autophagy inhibitors led to an inhibition of DHA-induced p62 degradation and, in turn, prevented DHA-induced GATA6 accumulation and HSC senescence. Overall, these results provide novel implications for the molecular mechanism of DHA-induced senescence, and point to the possibility of using DHA-based proautophagic drugs for the treatment of liver fibrosis.

Liver fibrosis is a reversible wound-healing response following liver injury, and its end stage, cirrhosis, is responsible for high morbidity and mortality worldwide. [1][2][3] Liver transplantation is the only treatment available for patients with advanced stages of liver fibrosis. [4][5][6] Therefore, new therapeutic agents and strategies are needed for the management of this disease. 7,8 Dihydroartemisinin (DHA), a natural and safe anti-malarial agent, exhibits an ample array of pharmacological activities, such as anti-tumor, 9 anti-bacterial 10 and anti-schistosomiasis properties. 11 We previously reported that DHA treatment improved the inflammatory microenvironment of liver fibrosis in vivo, 12 and inhibited activation and contraction of hepatic stellate cells (HSCs) in vitro. [13][14][15][16] In the current study, we aimed to evaluate the effect of DHA on HSC senescence and to further elucidate the underlying mechanisms.

Cellular senescence is a terminal arrest of proliferation triggered by various cellular stresses, including dysfunctional telomeres, 17 DNA damage 18 and oncogenic mutations. 19 Cellular senescence not only prevents the proliferation of damaged cells, thereby preventing tumorigenesis, but also affects the microenvironment through the secretion of proinflammatory cytokines, chemokines and proteases, a feature termed the senescence-associated secretory phenotype (SASP). 20 The mechanisms underlying the induction and maintenance of cell senescence remain largely elusive.
[21][22][23] Previous studies 21,22 have reported that p53 can lead to cell cycle arrest, DNA repair and apoptosis, predominantly when it becomes transcriptionally active in response to DNA damage, oncogene activation and hypoxia. Retinoblastoma 1 (pRb) inactivation mediated by p16 is also known to ensure durable cell cycle arrest, but is unlikely to be regulated by a canonical DNA damage response. 23 It is therefore of interest to explore the mechanism underlying the induction and maintenance of cell senescence in liver fibrosis.

Interestingly, several lines of evidence indicate a genetic relationship between autophagy and senescence. 24,25 However, whether autophagy acts positively or negatively on senescence is still subject to debate. 25 Through a specialized compartment known as the TOR-autophagy spatial coupling compartment (TASCC), autophagy generates a high flux of recycled amino acids, which are subsequently used by mTORC1 to support the massive synthesis of SASP factors and facilitate senescence. 25 In contrast, increased levels of reactive oxygen species upon autophagy inhibition partially contribute to cellular senescence. 25 We previously reported that DHA treatment stimulated autophagy activation via a ROS-JNK1/2-dependent mechanism in liver fibrosis. 12 Whether autophagy activation contributes to DHA-induced HSC senescence is therefore worth further study.

In the present study, we evaluated the effect of DHA on HSC senescence and further elucidated the underlying mechanisms. We found that DHA could induce senescence of activated HSCs to alleviate liver fibrosis via autophagy-dependent GATA6 accumulation. The results of the present study provide important information concerning the molecular mechanisms that underlie the antifibrotic activities of DHA, which is essential for investigating its potential for clinical application.

Results

DHA induces the accumulation of senescent activated HSCs in rat fibrotic liver. Our previous data [12][13][14][15][16] and the present results (Supplementary Figures 1A-C) have sufficiently demonstrated that DHA protected the liver against CCl4-induced injury and suppressed hepatic fibrogenesis in the rat model. To investigate the mechanisms underlying the protective effects of DHA, we proposed that DHA might induce senescence of activated HSCs to limit liver fibrosis. To identify senescent cells in situ, we stained liver sections from DHA- and vehicle-treated rats for a panel of senescence-associated markers, including SA-β-gal, p53 and p21. Immunofluorescence staining showed that cells staining positive for each senescence-associated marker accumulated in fibrotic livers and were invariably located along the fibrotic scar upon treatment with DHA in a dose-dependent manner (Figures 1a and b; Supplementary Figure 1D). Interestingly, we also found that these cells typically expressed multiple senescence markers and were not proliferating. As shown in Figures 1c and d, of the p16-positive cells identified in DHA-treated livers, more than 80% were positive for p53 staining, whereas less than 9% co-expressed the proliferation-associated marker Ki67. Although hepatocytes represent the most abundant cell type in the liver, the location of senescent cells along the fibrotic scar in rat livers raised the possibility that these cells were derived from activated HSCs, which initially proliferate following liver damage and are responsible for much of the extracellular matrix production in fibrosis.
To verify this hypothesis, DHA- and vehicle-treated liver sections were co-stained for the senescence-associated markers p53 and p16 together with the HSC marker desmin. As expected, cells expressing the senescence markers p53 and p21 co-localized with those expressing desmin (Figures 1e and f). Overall, these results indicate that DHA induces the accumulation of senescent activated HSCs in rat fibrotic liver.

DHA promotes activated HSC senescence in vitro. Previous studies have confirmed that HSC activation in vivo, as a result of different liver injuries, can be mimicked in vitro by plating freshly isolated HSCs exposed to platelet-derived growth factor-BB (PDGF-BB) on plastic tissue culture dishes. [12][13][14][15][16]26 Therefore, quiescent HSCs were freshly isolated from Sprague-Dawley rats as described 27 and then treated with 5, 10 and 20 ng/ml PDGF-BB. In agreement with previous findings, [12][13][14][15][16] HSC activation markers such as α-SMA (acta-2), fibronectin, procollagen 1α1 (procol1α1), TNF-α and TGF-β were significantly upregulated, showing that HSCs undergo an activation process in vitro as well (Supplementary Figures 2A-C). Subsequently, we used cultured HSCs to test whether DHA treatment could promote activated HSC senescence in vitro. Immunofluorescence assays showed that DHA and etoposide (used as a positive control) 28 significantly increased the expression of the senescence markers p53, p16, p21 and Hmga1 in this cell model (Figure 2a). In addition, we found that DHA treatment increased the number of SA-β-Gal-positive HSCs (Figure 2b). Western blot and real-time PCR analyses of senescence-associated proteins consistently showed that DHA treatment upregulated the expression of p53, p16, p21 and Hmga1 in activated HSCs (Figures 2c and d). Additional experiments were performed to verify the role of telomerase activity in DHA-induced HSC senescence. We found that telomerase activity was decreased in DHA-treated HSCs (Supplementary Figure 2D). A well-known feature of cellular senescence is cell cycle arrest, which largely accounts for the growth inhibition in senescent cells. 20 Next, we examined the cell cycle distribution by flow cytometry. As shown in Supplementary Figures 2E and F, HSCs treated with DHA or etoposide showed significantly higher proportions of G2/M cells and lower proportions of S-phase cells compared with untreated HSCs. The cell cycle is controlled by multiple cyclins and cyclin-dependent kinases (CDKs). 29 Real-time PCR analyses indicated that DHA treatment downregulated the expression of cyclin D1, cyclin E1 and CDK4 in activated HSCs (Supplementary Figure 2G). Taken together, these results show that DHA promotes activated HSC senescence in vitro.

The accumulation of GATA6 is required for DHA to induce activated HSC senescence in vitro. Cellular senescence is a terminal stress-activated program mainly controlled by the p53 and p16INK4a tumor suppressor proteins. 22,23 However, in contrast to the downstream functionality of p53 and p16, their upstream control is a relatively unexplored area. 30 In the current study, we hypothesized that GATA6 could have a pivotal role in DHA-induced upregulation of p53 and p16 in activated HSCs. To test this hypothesis, the status of the GATA6 protein was evaluated following DHA treatment. As shown in Figure 3a, DHA treatment markedly increased the level of GATA6 in a time- and dose-dependent manner.
To further define the role of GATA6 accumulation in DHA-induced senescence, activated HSCs were pre-treated with GATA6 siRNA or GATA6 plasmid, followed by DHA treatment (Figure 3b). As expected, SA-β-Gal staining showed that pretreatment with GATA6 siRNA significantly abrogated the DHA-induced increase of SA-β-Gal-positive HSCs, whereas GATA6 plasmid showed a synergistic effect with DHA (Figures 3e and h). In addition, to investigate the effect of GATA6 accumulation on DHA-induced p53 and p16 upregulation, the expression of p53, p16 and their downstream effectors was examined by western blot and real-time PCR analyses. The results revealed that GATA6 plasmid, mimicking DHA, promoted the expression of p53, p21, Hmga1 and p16, while GATA6 siRNA dramatically suppressed the ability of DHA and GATA6 plasmid to induce cellular senescence (Figures 3c, d, f and g). Furthermore, immunofluorescence assays also indicated that DHA as well as GATA6 plasmid significantly increased the abundance of proteins involved in senescence (Supplementary Figures 3B and D). However, pretreatment of cells with GATA6 siRNA dramatically eliminated the promoting effects of DHA on the expression of p53 and p16 in activated HSCs (Supplementary Figures 3A and C). Notably, accumulating evidence suggests that mitogen-activated protein kinases (MAPKs) have important roles in the activation of p53 and p16. [31][32][33] Thus, we assumed that GATA6 accumulation contributed to DHA-induced upregulation of p53 and p16 via a MAPK-dependent mechanism. To test this assumption, the phosphorylation status of the MAPK proteins was evaluated following GATA6 siRNA or GATA6 plasmid treatment. The results showed that GATA6 plasmid markedly increased the level of phosphorylated JNK1/2, but did not significantly affect the levels of phospho-ERK1/2 and phospho-p38 (Supplementary Figure 3E), suggesting the involvement of JNK1/2 in GATA6-induced upregulation of p53 and p16. To further determine the association between GATA6 accumulation and p53 or p16 upregulation, the selective JNK1/2 inhibitor SP600125 was used to inhibit the activity of JNK1/2. Interestingly, we found that pretreatment with SP600125 abolished GATA6 plasmid- or DHA-induced p53 and p16 upregulation (Supplementary Figures 3F and G), demonstrating that JNK1/2 can mediate the p53 and p16 upregulation induced by GATA6 accumulation. Collectively, these data demonstrate that the accumulation of GATA6 is required for DHA to induce HSC senescence in vitro.

DHA induces HSC senescence via a GATA6-dependent mechanism in vivo. We further examined whether the disruption of GATA6 accumulation could affect DHA-induced upregulation of p53 and p16 in vivo. Seventy mice were randomly divided into seven groups of ten animals each with comparable mean bodyweight. The seven groups were administered vehicle control, CCl4, CCl4+Ad.Fc, CCl4+DHA, CCl4+Ad.Fc+DHA, CCl4+Ad.shGATA6 or CCl4+Ad.shGATA6+DHA, respectively, throughout the 8-week period of CCl4 treatment. First, we evaluated the effect of interrupting GATA6 on liver injury in vivo. Gross examination showed that pathological morphological changes occurred in mouse livers exposed to CCl4 compared with normal livers, and DHA treatment ameliorated these pathological changes (Figure 4a). Interestingly, the improvement of liver injury by DHA was remarkably abrogated by Ad.shGATA6 (Figure 4a).
Liver fibrosis was also assessed by histological analyses. Hematoxylin and eosin (H&E), Masson and picro-Sirius red staining showed that intraperitoneal injection of DHA daily for 4 weeks significantly improved the histopathological features of liver fibrosis, characterized by decreased collagen deposition, whereas livers derived from mice treated with DHA plus Ad.shGATA6 exhibited more severe liver fibrosis compared with mice treated with DHA alone (Figure 4a). Next, primary HSCs were isolated for detection of cell senescence markers. Real-time PCR analysis showed that Ad.shGATA6 significantly reduced the GATA6 level of activated HSCs (Figure 4b). Western blot and real-time PCR analyses then demonstrated that interference with GATA6 significantly inhibited the expression of p53, p21 and p16, suggesting that the effect of DHA was at least partially reversed (Figures 4c and e). In addition, Ad.shGATA6 treatment not only decreased the number of SA-β-Gal-positive cells, but also markedly eliminated the regulatory effects of DHA on cell senescence (Figure 4d). More importantly, liver tissues were co-stained with the senescence markers p53 or p16 and the HSC activation marker desmin. Immunofluorescence staining showed that DHA induced the accumulation of senescent activated HSCs in fibrotic liver, whereas Ad.shGATA6 treatment impaired the induction of activated HSC senescence by DHA (Figure 4f). Altogether, these data suggest that GATA6 accumulation is involved in DHA-induced HSC senescence in vivo.

The activation of autophagy is associated with DHA-induced GATA6 accumulation and HSC senescence. Protein accumulation is controlled by two major pathways in eukaryotic cells: the ubiquitin-proteasome 34 and autophagy-lysosome 35 pathways. Interestingly, Kang et al. 36 showed that inhibition of the proteasome by MG-132, a proteasome inhibitor, had no effect on GATA4 abundance, whereas GATA4 protein was stabilized in cells treated with distinct lysosomal inhibitors known to block autophagy. In the present study, we assumed that the autophagy-lysosome pathway could have a pivotal role in DHA-induced GATA6 accumulation. To evaluate this assumption, activated HSCs were treated with various concentrations of DHA for 24 h or with 20 μM DHA for different durations. Western blot analysis showed that DHA induced the generation of autophagosomes in a dose- and time-dependent manner (Figure 5a). In addition, seven important autophagy-related genes were examined by western blot and real-time PCR analyses in DHA- or vehicle-treated cells. The results revealed that DHA treatment increased the levels of many indicators of autophagosome formation. Numerous studies have shown a crucial role for the mTOR signaling pathway in autophagosome generation. 37,38 Therefore, we evaluated whether DHA treatment affects the expression of p-ULK1, ULK1, p-mTOR and mTOR. Western blot analysis revealed that DHA-induced autophagosome generation was associated with an increase in p-ULK1 activity and a decrease in p-mTOR activity (Supplementary Figure 4A). Next, we further assessed the effect of DHA on autophagic flux in activated HSCs. First, tandem fluorescence RFP-GFP-LC3 (tf-LC3) staining was used to monitor autophagic flux. It has been documented that, in autophagosomes, the combination of RFP and GFP in the triple fusion yields yellow fluorescence, whereas autolysosomal delivery results in red fluorescence only. 39
As expected, we observed that both autophagosome and autolysosome formation were increased in DHA-treated HSCs (Figure 5c). Second, long-lived protein degradation was measured as an indicator of autophagic flux, because long-lived proteins are substrates for autophagy and their degradation rate is a key functional index of flux. 39 The long-lived protein degradation rate showed that DHA time-dependently increased autophagic flux (Figure 5d). Third, we observed an increase in the LC3-II level in cells cultured with DHA for 24 h followed by chloroquine (CQ) treatment, compared with cells treated with DHA alone, suggesting that autophagic flux is increased in DHA-treated HSCs (Supplementary Figure 4B). Lastly, transmission electron microscopy (TEM) was used to observe autophagy. 39 As expected, we observed a high level of autophagosomes or lysosomes in DHA-treated HSCs, whereas autophagosomes or lysosomes were difficult to observe in control HSCs (Figure 5e). Overall, these results support the conclusion that DHA increases autophagosome generation and autophagic flux in activated HSCs.

Disruption of autophagy impairs DHA-induced GATA6 accumulation and HSC senescence in vitro. To determine whether the activation of autophagy by DHA is directly involved in GATA6 accumulation and HSC senescence in vitro, we used Atg5 siRNA to block autophagosome formation and employed Atg5 plasmid to induce autophagy (Figures 6a and b). SA-β-Gal staining was then performed to measure the effects on cell senescence. As shown in Figure 6c, Atg5 plasmid, mimicking DHA, increased the number of SA-β-Gal-positive HSCs. Conversely, siRNA-mediated knockdown of Atg5 markedly suppressed the ability of DHA and Atg5 plasmid to induce cell senescence. Furthermore, real-time PCR analysis indicated that pretreatment of cells with Atg5 siRNA significantly altered the abundance of p53 and p16 mRNA induced by DHA treatment (Figure 6d). In addition, immunofluorescence assays showed that DHA as well as Atg5 plasmid treatment significantly increased the levels of cellular GATA6 and p53 compared with untreated cells, whereas treatment of cells with Atg5 siRNA, which downregulated the expression of cellular GATA6 and p53, dramatically diminished the effect of DHA or Atg5 plasmid in inducing cell senescence (Figure 6e). Additional experiments showed that DHA and Atg5 plasmid treatment significantly inhibited telomerase activity, whereas pretreatment with Atg5 siRNA abrogated the DHA-induced inhibitory effects (Supplementary Figure 5A). Lastly, we examined the effect of Atg5 plasmid or Atg5 siRNA on cell cycle distribution. As shown in Supplementary Figures 5B-D, pretreatment with Atg5 plasmid decreased the expression of cyclin D1, CDK4 and CDK6, while siRNA-mediated knockdown of Atg5 dramatically upregulated their expression and resulted in a pronounced and significant attenuation of the DHA-induced inhibitory effects. Collectively, these results support the conclusion that autophagy activation mediates DHA-induced GATA6 accumulation and cell senescence in activated HSCs.

Degradation of p62 is required for autophagy to mediate DHA-induced GATA6 accumulation and HSC senescence in vitro. To further investigate how DHA-induced autophagy promotes GATA6 accumulation, we hypothesized that the degradation of p62 has an important role in DHA-induced GATA6 accumulation. To test this hypothesis, the status of the p62 protein was evaluated following DHA treatment.
Western blot analysis showed that treatment with DHA for 18 h significantly reduced p62 levels, and this reduction was negatively correlated with the GATA6 accumulation of DHA-treated HSCs (Figure 7a). The interaction between p62 and GATA6 was then examined by immunoprecipitation assay, which revealed that DHA blocked the binding of p62 to GATA6 in a dose-dependent manner (Figure 7b). Interestingly, these data (Figure 7f). Furthermore, a panel of senescence-associated markers, including SA-β-gal, p53 and p21, was determined. Unsurprisingly, treatment with 3-MA, CQ and bafilomycin A1 completely abrogated DHA-induced p62 degradation and, in turn, decreased the number of SA-β-Gal-positive HSCs (Figure 7d) and p16 mRNA expression (Figure 7e). More importantly, a p62 overexpression plasmid also resulted in a pronounced and significant attenuation of DHA-induced GATA6 accumulation and consequently decreased the number of SA-β-Gal-positive HSCs and p53 or p16 mRNA expression (Supplementary Figures 6A-D). Taken together, these data suggest that the degradation of p62 is required for autophagy to mediate DHA-induced GATA6 accumulation and HSC senescence in vitro.

Discussion

Cellular senescence acts as a potent mechanism of tumor suppression. [17][18][19][20] However, its functional contribution to non-cancer pathologies is not fully understood. Notably, previous studies 22,40 have demonstrated the existence of senescent HSCs during the development of liver fibrosis. Krizhanovsky et al. showed that senescent activated HSCs reduced the secretion of ECM components, enhanced immune surveillance and facilitated the reversion of fibrosis. 40 Kong et al. also reported that interleukin-22 induced HSC senescence and restricted liver fibrosis in mice. 41 Consistent with these studies, 40,41 we showed that the senescence of activated HSCs induced by DHA treatment provides a brake on the fibrogenic response to damage by limiting the expansion of the cell type responsible for producing the fibrotic scar. To our knowledge, this is the first report that DHA can induce HSC senescence to alleviate liver fibrosis. Importantly, our study identified the transcription factor GATA6 as an upstream molecule in the facilitation of DHA-induced HSC senescence. The GATA family of transcription factors consists of six proteins (GATA1-6), which are involved in a variety of physiological and pathological processes. 42,43 Recently, Kang et al. reported that GATA4 functions as a key switch in the senescence regulatory network to activate the senescence-associated secretory phenotype (SASP). 36 In the present study, we found that the accumulation of GATA6 was required for DHA to induce HSC senescence in vitro and in vivo. siRNA-mediated knockdown of GATA6 dramatically abolished DHA-induced upregulation of p53 and p16 and in turn inhibited HSC senescence. Although our data suggest a direct connection between GATA6 and DHA-induced HSC senescence, we cannot exclude other effects that may mediate the protective action of DHA. Autophagy and cellular senescence are stress responses essential for homeostasis. 24 While recent studies indicate a genetic relationship between autophagy and senescence, whether autophagy acts positively or negatively on senescence is still subject to debate. 24,25 Garcia-Prat et al. reported that autophagy maintains stemness by preventing senescence. 44 Conversely, Liu et al. revealed that autophagy suppresses melanoma tumorigenesis by inducing senescence. 45
In the current study, we found that activation of autophagy is required for DHA to induce HSC senescence in both an animal model and a cell model. Downregulation of autophagy activity using Atg5 siRNA led to an inhibition of DHA-induced HSC senescence, while Atg5 plasmid enhanced the effect of DHA in vitro. Notably, we found that p62 may be a negative regulator of GATA6 accumulation, but this regulation was suppressed by DHA-induced autophagy activation. Treatment of cultured HSCs with various autophagy inhibitors or a p62 overexpression plasmid led to an inhibition of DHA-induced p62 degradation and, in turn, prevented DHA-induced GATA6 accumulation and HSC senescence. Although more experiments are needed to determine the exact role of autophagy in cell senescence, our results indicate a similar function in HSCs, consistent with previous reports. 24,25 Overall, these results provide the first mechanistic evidence that the interaction between autophagy and senescence is required for DHA to alleviate liver fibrosis (Figure 8). Since there are still no clinically effective anti-fibrotic drugs, understanding the mechanistic basis of action of natural dietary products such as DHA offers further insight into developing drugs for the prevention and treatment of liver fibrosis.

Animals and experimental design. All experimental procedures were approved by the institutional and local committee on the care and use of animals of Nanjing University of Chinese Medicine (Nanjing, China), and all animals received humane care according to the National Institutes of Health (USA) guidelines. Male Sprague-Dawley rats weighing approximately 180-220 g were procured from Nanjing Medical University (Nanjing, China). A mixture of CCl4 (0.1 ml/100 g bodyweight) and olive oil (1:1 (w/v)) was used to induce liver fibrosis in rats. Fifty rats were randomly divided into five groups of ten animals each with comparable mean bodyweight (a simple stratified allocation scheme is sketched below). Rats of group 1 served as vehicle controls and were intraperitoneally (i.p.) injected with olive oil. Rats of group 2 were i.p. injected with CCl4. Rats of groups 3, 4 and 5 served as treatment groups and were i.p. injected with CCl4 and treated with DHA at 3.5, 7 and 14 mg/kg, respectively. Rats of groups 2-5 were i.p. injected with CCl4 every other day for 8 weeks. DHA was suspended in sterile PBS and given once daily by intraperitoneal injection during weeks 5-8. At the end of the experiment, rats were sacrificed after anesthetization with an injection of 50 mg/kg pentobarbital. A small portion of the liver was removed for histopathological and immunohistochemical studies. Male ICR mice (6-8 weeks old) were purchased from Nanjing Medical University (Nanjing, China). Seventy mice were randomly divided into seven groups of ten animals each with comparable mean bodyweight. The seven groups were administered vehicle control, CCl4, CCl4+Ad.Fc (a control adenovirus encoding an IgG2α Fc fragment), CCl4+DHA (20 mg/kg, once a day), CCl4+Ad.Fc+DHA, CCl4+Ad.shGATA6 (an adenovirus encoding mouse GATA6 shRNA for inhibiting GATA6 expression) or CCl4+Ad.shGATA6+DHA, respectively, throughout the 8-week period of CCl4 treatment. Adenoviruses (2.5 × 10^7 pfu/g, once every 2 weeks) were injected into mice via the tail vein. A mixture of carbon tetrachloride (CCl4; 0.5 ml per 100 g bodyweight) and olive oil (1:9 (v/v)) was used to induce liver fibrosis in mice by i.p. injection.
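Both cohorts were allocated to groups "with comparable mean bodyweight". The original report does not specify the allocation procedure; one common way to achieve balanced group means is weight-stratified block randomization, sketched here under stated assumptions with hypothetical weights matching the rat study's design (50 animals, 5 groups):

```python
import numpy as np

rng = np.random.default_rng(0)

def stratified_groups(weights, n_groups):
    """Sort animals by bodyweight, then randomize within consecutive
    blocks of size n_groups so every group receives one animal per block."""
    order = np.argsort(weights)
    groups = {g: [] for g in range(n_groups)}
    for start in range(0, len(order), n_groups):
        block = rng.permutation(order[start:start + n_groups])
        for g, idx in enumerate(block):
            groups[g].append(idx)
    return groups

weights = rng.normal(200.0, 10.0, size=50)   # 50 rats, roughly 180-220 g (hypothetical)
groups = stratified_groups(weights, 5)       # five groups of ten
for g, idx in groups.items():
    print(f"group {g + 1}: n = {len(idx)}, mean bodyweight = {weights[idx].mean():.1f} g")
```

Because each block contributes exactly one animal to every group, the group means differ only by within-block noise, which is what "comparable mean bodyweight" requires.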
After 8 weeks, livers were fixed in 4% buffered paraformaldehyde for histological and immunostaining analyses of liver fibrosis, or HSCs were isolated for western blot analysis.

Histological analysis. Hematoxylin and eosin, Sirius red and Masson staining were performed on 4-μm-thick formalin-fixed, paraffin-embedded tissue sections. Sirius red- and Masson-stained areas from 10 fields (magnification ×200) from 3 to 6 mice per group were quantified with ImageJ.

Cell isolation, cell culture conditions and drug treatment. Primary rat HSCs were isolated from male Sprague-Dawley rats weighing approximately 180-220 g (Nanjing Medical University, Nanjing, China) as described. 27 Isolated HSCs were cultured in DMEM with 10% fetal bovine serum and 1% antibiotics, and maintained at 37 °C in a humidified incubator with 5% CO2 and 95% air. Cell morphology was assessed using an inverted microscope with a Leica Qwin system (Leica, Germany). DHA was dissolved in DMSO at a concentration of 10 mM and stored in a dark-colored bottle at −20 °C. The stock was diluted to the required concentration with DMSO when needed. Before DHA treatment, cells were grown to about 70% confluence and then exposed to DHA at different concentrations (0-20 μM) for different periods of time (0-24 h). Cells grown in medium containing an equivalent amount of DMSO without DHA served as controls.

Plasmid transfection. Atg5 siRNA, GATA6 siRNA, p62 siRNA, negative control siRNA, Atg5 plasmid, GATA6 plasmid, p62 plasmid, negative control vectors and the mRFP-GFP-LC3 plasmid were transfected into HSCs using MegaTran 1.0 transfection reagent according to the manufacturer's instructions. 12 After 24 h, cells were treated with selenite or PBS as a solution control. The transfection efficiency was confirmed by western blot analysis.

RNA isolation and real-time PCR. Total RNA was isolated and qPCR was performed using the QuantiTect SYBR Green PCR Kit (Qiagen, Valencia, CA, USA) in accordance with the manufacturer's instructions. 12 Actin levels were used for normalization, and fold change was calculated using the 2^(−ΔΔCt) method. Primer sequences are available on request.

Western blot analysis. Cells or tissue samples were lysed using mammalian lysis buffer (Sigma, St. Louis, MO, USA) and immunoblotting was performed as per the manufacturer's guidelines 12 (Bio-Rad, Hercules, CA, USA). Briefly, protein levels were determined using a BCA assay kit (Pierce, Rockford, IL, USA). Proteins (50 μg/well) were separated on SDS-polyacrylamide gels, transferred to a PVDF membrane (Millipore, Burlington, MA, USA) and blocked with 5% skim milk in Tris-buffered saline containing 0.1% Tween 20. Target proteins were detected with the corresponding primary antibodies and subsequently with horseradish peroxidase-conjugated secondary antibodies. Protein bands were visualized using chemiluminescence reagent (Millipore). Equivalent loading was confirmed using an antibody against β-actin. Densitometry analysis was performed using ImageJ software (NIH, Bethesda, MD, USA).

Immunoprecipitation assay. An immunoprecipitation assay was performed using extracts of activated HSCs as previously described. 46 Briefly, immunoprecipitation was performed using the Classic Magnetic IP/Co-IP Kit (Pierce, Carlsbad, CA, USA) to analyze the interaction between GATA6 and p62 (Abcam, Cambridge, UK). Activated HSCs were washed 3 times in PBS and lysed in IP lysis buffer (Abcam) on ice for 5 min. The protein lysate was collected by centrifugation.
Protein A/G magnetic beads (25 μl) were incubated with anti-GATA6 antibody (Abcam) for 1 h at room temperature, then added to the protein lysate and incubated overnight at 4 °C. The beads were then collected and washed 5 times in IP wash buffer. Proteins were dissolved in elution buffer and detected by western blot.

Immunofluorescence analysis. Immunofluorescence staining of liver tissues or treated cells was performed as we previously reported. 12 4′,6-Diamidino-2-phenylindole (DAPI) was used to stain the nuclei in liver tissues and HSCs. All images were captured with a fluorescence microscope, and representative images are shown. ImageJ software was used to quantitate the fluorescence intensity on the micrographs.

Analysis of HSC senescence. HSC senescence was determined by the detection of SA-β-gal (senescence-associated β-galactosidase) activity using an SA-β-gal staining kit (Cell Signaling). Briefly, adherent cells were fixed with 0.5% glutaraldehyde in PBS for 15 min, washed with PBS containing 1 mM MgCl2 and stained overnight in PBS containing 1 mM MgCl2, 1 mg/ml X-Gal, 5 mM potassium ferricyanide and 5 mM potassium ferrocyanide. All images were captured with a light microscope, and representative images are shown. Results are from triplicate experiments.

Cell cycle analysis by flow cytometry. The cell cycle distribution was determined by PI staining and flow cytometry. HSCs were seeded in six-well plates and cultured in DMEM supplemented with 10% FBS for 24 h, and then treated with DMSO, etoposide or DHA at the indicated concentrations for 24 h. Cells were then harvested and fixed, and the cell cycle was analyzed with a cellular DNA flow cytometric analysis kit (Nanjing KeyGen Biotech) according to the protocol. 22 Percentages of cells within cell cycle compartments (G0/G1, S and G2/M) were determined by flow cytometry (FACSCalibur; Becton, Dickinson and Company, Franklin Lakes, NJ, USA). The data were analyzed using Cell Quest software. Results are from triplicate experiments.

Calculations and statistics. Individual culture experiments and animal experiments were performed in duplicate or triplicate and repeated three times using matched controls, and the data were pooled. Results are expressed as mean ± S.D. or mean ± standard error of the mean (S.E.M.). The statistical significance of differences (*P < 0.05) was assessed by t-test.
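The fold-change calculation cited in the real-time PCR paragraph above is the standard 2^(−ΔΔCt) method: normalize each sample's Ct to the actin reference, subtract the control's normalized value, and exponentiate. A minimal sketch with hypothetical Ct values (not data from this study):

```python
def ddct_fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^(-ddCt) method.

    ct_target, ct_ref           -- Ct of target gene and actin, treated sample
    ct_target_ctrl, ct_ref_ctrl -- Ct of target gene and actin, control sample
    """
    dct_treated = ct_target - ct_ref            # dCt of the treated sample
    dct_control = ct_target_ctrl - ct_ref_ctrl  # dCt of the control sample
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical Ct values for a senescence marker after DHA treatment:
print(ddct_fold_change(24.1, 17.3, 26.5, 17.2))  # ~5.7-fold upregulation
```

A fold change above 1 indicates upregulation relative to the vehicle control; below 1, downregulation.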
Identification of Sclareol As a Natural Neuroprotective Cav1.3-Antagonist Using Synthetic Parkinson-Mimetic Gene Circuits and Computer-Aided Drug Discovery

Abstract Parkinson's disease (PD) results from selective loss of substantia nigra dopaminergic (SNc DA) neurons and is primarily caused by excessive activity-related Ca2+ oscillations. Although L-type voltage-gated calcium channel blockers (CCBs) selectively inhibiting CaV1.3 are considered promising candidates for PD treatment, drug discovery is hampered by the lack of high-throughput screening technologies permitting isoform-specific assessment of CaV-antagonistic activities. Here, a synthetic-biology-inspired drug-discovery platform enables identification of PD-relevant drug candidates. By deflecting CaV-dependent activation of nuclear factor of activated T-cells (NFAT) signaling to repression of reporter gene translation, a cell-based assay is engineered in which reporter gene expression is activated by putative CCBs. By using this platform in combination with in silico virtual screening and a trained deep-learning neural network, sclareol is identified from an essential-oils library as a structurally distinctive compound that can be used for PD pharmacotherapy. In vitro studies, biochemical assays and whole-cell patch-clamp recordings confirmed that sclareol inhibits CaV1.3 more strongly than CaV1.2 and decreases firing responses of SNc DA neurons. In a mouse model of PD, sclareol treatment reduced DA neuronal loss and protected striatal network dynamics as well as motor performance. Thus, sclareol appears to be a promising drug candidate for neuroprotection in PD patients.

Introduction

Parkinson's disease (PD) is an age-related neurodegenerative disorder characterized by progressive motor impairments such as tremors, rigidity, and bradykinesia. [1][2][3] These symptoms are primarily driven by the selective loss of mesencephalic dopamine-producing neurons in the pars compacta of the substantia nigra (SNc). Because CaV1.2 channels are expressed at very high levels in cardiac tissues, cross-antagonism to these channels severely limits the dose of dihydropyridines (DHPs) that can be used for neuroprotective purposes. [7,12] Therefore, CaV1.3-selective blockers without CaV1.2-mediated cardiovascular side effects are highly sought-after, but so far elusive, candidates for PD drug discovery. [9] Here, by capitalizing on synthetic-biology-inspired gene circuits that can flexibly program cells to perform application-specific biological tasks with high robustness and precision, [13] we have custom-designed a mammalian cell-based drug-discovery platform for high-throughput screening (HTS) of isoform-specific calcium channel blockers (CCBs). Specifically, deflection of CaV-dependent NFAT activation to the repression of reporter protein translation allowed for the engineering of an antagonist-inducible reporter system (CaB-A assay) that effectively reduced the susceptibility to false negatives associated with cytotoxicity-mediated signal decrease. After validating this technology with a selection of clinically approved CCB drugs, we identified five plant-derived essential oils that could effectively block CaV1.2 and CaV1.3.
Further integration of in silico virtual screening and deep-learning technology eventually enabled the identification of sclareol, an essential constituent of the long-established Mediterranean medicinal herb Salvia sclarea, as the most relevant bioactive compound, inhibiting CaV1.3 more strongly than CaV1.2. Finally, we demonstrated the neuroprotective activities of sclareol both in vitro and in vivo. Using whole-cell patch-clamp recordings, we provide evidence that sclareol decreases excessive neuronal activity of substantia nigra dopaminergic (SNc DA) cells. Furthermore, we show that sclareol reduces SNc DA neuronal degeneration in a mouse model of PD and protects striatal cellular network dynamics and motor performance, as compared to vehicle-treated mice. Thus, sclareol appears to be a promising lead compound/candidate drug for neuroprotection in PD patients. We believe the combined application of synthetic-biology-inspired technology, advanced computational methods and molecular medicine, as exemplified here, represents an efficient platform that could help to set the stage for next-generation drug discovery in a variety of contexts.

Engineering of a Cell-Based Drug Screening Platform for Multiplexed and Use-Dependent Analysis of CaV1 Channel Blockers

PD drug discovery would greatly benefit from multiplexed drug screening, allowing simultaneous assessment of multiple disease-specific drug targets within a single experiment [14] (Figure 1A). In cell-based assay designs, secreted reporter proteins such as human placental secreted alkaline phosphatase (SEAP) or Gaussia princeps luciferase (GLuc) are advantageous for qualitative, non-disruptive and high-throughput recording of gene expression, while intracellular reporter systems such as fluorescent proteins facilitate resource-efficient and simple experimental setups. [15] To design a CCB-regulated reporter protein assay, we created a synthetic excitation-transcription coupling system. [16] Activation of CaV1 channels by membrane depolarization triggers a surge in cytosolic [Ca2+]i, initiating different signal-transduction pathways that modulate endogenous calcium-specific promoters [17] (Figure S1A, Supporting Information). Among the different calcium-specific promoters (CSPs) known to respond to CaV1-dependent cell signaling, [18][19][20] the synthetic NFAT promoter PNFAT3 (pMX57, PNFAT3-SEAP-pA; PNFAT3, (NFATIL4)5-Pmin) showed the most suitable CaV1.2- and CaV1.3-dependent SEAP induction profiles triggered by potassium chloride (KCl)-mediated depolarization (Figure S1B,C, Supporting Information). After validating dose-dependent excitation-transcription coupling with different secreted (Figure S2A,B, Supporting Information) and fluorescent reporter systems (Figure S2C, Supporting Information), we tested the potential of the cell-based SEAP assay for CCB drug discovery. In a genetic configuration enabling CCB-repressible reporter expression (CaB-R assay) (Figure S3A, Supporting Information), the presence of CCBs blocking CaV1.2 and CaV1.3 inhibits NFAT signaling and causes a dose-dependent decrease of SEAP production (Figure S3B,C, Supporting Information). When we validated CaB-R with a selection of clinically approved CCB drugs, the IC50 values determined in this study generally lay within the reference ranges reported for both CaV1-channel isoforms (Table S1, Supporting Information).
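IC50 values of this kind are typically extracted by nonlinear regression of the normalized reporter readout against log concentration, as also indicated in the Statistical Analysis section below. A minimal curve-fitting sketch with made-up readout values (not data from this study) might look as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def norm_response(log_c, log_ic50, hill):
    """Normalized inhibition curve (100% -> 0%) vs log10 concentration."""
    return 100.0 / (1.0 + 10.0 ** ((log_c - log_ic50) * hill))

# Hypothetical CaB-R SEAP readout (% of control) for a CCB dilution series
conc_m   = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])   # molar
response = np.array([97.0, 88.0, 55.0, 18.0, 5.0])    # % of control

popt, _ = curve_fit(norm_response, np.log10(conc_m), response, p0=[-7.0, 1.0])
print(f"IC50 = {10 ** popt[0] * 1e6:.2f} uM, Hill slope = {popt[1]:.2f}")
```

The fitted log(IC50) parameters of two data sets can then be compared directly, for example with the extra sum-of-squares F test named later in the methods.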
In addition, CaB-R allowed for use-dependent analysis of repetitive CCB-mediated channel inhibition and activation, which is a critical but often elusive screening requirement in ion channel drug discovery. [21] Following prolonged depolarization of cells loaded with CaV1.2- or CaV1.3-dependent CaB-R systems, the representative CCB nicardipine showed a typical use-dependent channel antagonism profile characterized by stronger inhibition at hyperactive channel states (KCl = 20, 40 mM) compared to the degree of inhibition at baseline channel activities (KCl = 0 mM) (Figure S3D,E, Supporting Information). [22,23]

Engineering of an Antagonist-Inducible Reporter Assay to Reduce False-Positive Results

In many drug-screening studies that involve the use of living cells, cytotoxicity-mediated signal decrease often interferes with antagonist-associated reporter signals, thus generating false positives. [24,25] To overcome this limitation, we engineered a CCB-activated reporter assay (CaB-A) that operates in a reversed configuration, allowing depolarization-dependent NFAT signaling to repress reporter protein expression (Figure S4A, Supporting Information). The presence of CCBs antagonizes CaV1-mediated NFAT activation and triggers de-repression of reporter gene transcription (Figure S4B, Supporting Information). Not only did CaB-A reveal all CCB-mediated drug effects in the expected dose-dependent manner (Figure S4C,D, Supporting Information), but it also effectively reduced the risk of obtaining false positives, as expected. For example, cytotoxicity control experiments might have led to the classification of the CCB-repressible effect of flunarizine as a false positive in CaB-R (Figure S4E, Supporting Information), but the potency of flunarizine in activating gene expression in CaB-A corroborated the true channel-blocking efficacy of this drug (Figure S4C,D, Supporting Information). Baseline signal levels of CaB-A assays could be further fine-tuned by choosing different splice variants of each channel isoform (Figure S4F, Supporting Information), as we demonstrated with two alternatively spliced CaV1.3 α1-domains characterized by different basal channel activities [10] (Figure S4G, Supporting Information). In CaB-A (Figure S4B, Supporting Information), CCB-activated gene expression results from inhibition of the NFAT-dependent expression of a synthetic transcription factor, which binds to and silences synthetic cognate promoters driving constitutive expression of the reporter gene. However, most synthetic transcription factors, especially those having a TetR-family repressor domain, are inherently under allosteric control by particular ligands. [26,27] Indeed, when we used a paraben-dependent mammalian trans-silencer (PMS, PmeR-KRAB) [28] as the NFAT-driven repressor, we found that nicardipine and benidipine interfered with de-repression of PMS-specific promoters at high concentrations (>1 μM) (Figure S4H, Supporting Information), which would likely cause erroneous interpretation of the CaB-A results (Figure S4C,D, Supporting Information). To improve screening accuracy, we designed an optimized CaB-A configuration in which the synthetic NFAT promoter controls the expression of L7Ae, an archaeal ribosome-derived RNA-binding protein with high affinity for a C/D-box aptamer motif [29,30] (Figure 1B).
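The intended behavior of this configuration can be captured in a toy steady-state model: depolarization drives NFAT-dependent L7Ae production, L7Ae represses translation of the C/D-box-tagged reporter mRNA, and a blocker relieves that repression by lowering CaV1 activity. The sketch below uses entirely hypothetical parameter values and is meant only to illustrate the antagonist-inducible readout described next:

```python
def cab_a_seap(ccb_um, kcl_mm=30.0, ic50_um=0.5, hill=1.0, k_rep=0.15):
    """Toy steady-state model of the optimized CaB-A readout.

    ccb_um  -- blocker concentration (uM); all constants are hypothetical
    kcl_mm  -- depolarization stimulus (mM KCl)
    Returns relative SEAP output in [0, 1].
    """
    # CaV1 activity saturates with depolarization and is reduced by the CCB
    cav = (kcl_mm / (kcl_mm + 15.0)) / (1.0 + (ccb_um / ic50_um) ** hill)
    l7ae = cav                                   # NFAT-driven L7Ae expression
    # L7Ae bound to the 5'-UTR C/D box represses SEAP translation
    return 1.0 / (1.0 + l7ae / k_rep)

for c in (0.0, 0.1, 1.0, 10.0):
    print(f"CCB = {c:5.1f} uM -> relative SEAP = {cab_a_seap(c):.2f}")
```

Running the sketch shows the CaB-A signature: reporter output rises monotonically with blocker concentration instead of falling, which is what decouples genuine antagonism from cytotoxic signal loss.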
The presence of CCBs prevents NFAT-dependent L7Ae expression (pMX125, PNFAT4-L7Ae-pA; PNFAT4, (NFATIL4)7-Pmin; Figure S5A, Supporting Information) and de-represses translation of reporter mRNA engineered to contain cognate C/D-box motifs in the 5'-UTR (Figure 1B). Depolarization-dependent production of L7Ae could knock down translation of SEAP mRNA harboring either one (pMX195, PSV40-(C/D box)1-SEAP-pA) or two C/D-box repeats (pMX199, PSV40-(C/D box)2-SEAP-pA), with the vector combination pMX125/pMX199 affording optimal nicardipine-inducible SEAP expression characterized by low background signals and high induction profiles for use-dependent CaV1 inhibition (Figure 1C,D). Importantly, this modified CaB-A assay is no longer influenced by potential crosstalk between CCBs and the L7Ae-C/D box interaction (Figure S5B-D, Supporting Information), and thus it enables accurate assessment of dose-dependent CCB-channel antagonism (Figure 1E,F).

Multiplexed and High-Throughput Screen of Plant Essential Oils to Identify CaV1.2 and CaV1.3 Antagonism

Next, we used CaB-A to perform a pilot HTS with a random selection of plant essential oils. Plant-derived natural compounds have historically been proven to have great pharmacological potential, [31,32] especially for neurodegenerative diseases. [33] In particular, plant-derived compounds have inherently high "metabolite-likeness" and bioavailability, and thus represent promising starting points for drug discovery. [34] As plant-derived natural products, essential oils can further be regarded as naturally selected packages of biocompatible, bioavailable and bioactive substances. [35] Among the 42 essential oil products tested (Table S2, Supporting Information), the CaB-A assay identified five oils (i.e., rose flower, Cistus ladanifer, Pinus sylvestris, ginger and clary sage) that most effectively inhibited CaV1.2 and CaV1.3 (Figure 2A). All five essential oils dose-dependently activated SEAP expression in the CaB-A assay (Figure 2B,C), and control experiments confirmed that none of these essential oils interfered with L7Ae activity or intracellular calcium signaling (Figure S6A, Supporting Information). Notably, the results obtained with clary sage (Salvia sclarea) essential oil corroborated the advantage of CaB-A: although high concentrations of this oil were cytotoxic according to a reporter-based assay determining protein production capacity (Figure S6B, Supporting Information), the unique antagonism-inducible gene expression readout of CaB-A (Figure 2B,C) ensured that the clary sage data were not excluded as a false negative. In terms of assay quality, both the CaB-R and CaB-A assays have excellent Z' screening windows (Z' factor (CaB-R) = 0.68 ± 0.14; Z' factor (CaB-A) = 0.73 ± 0.07; n = 12 independent experiments) and should therefore be suitable for rapid, robust and resource-efficient HTS. As already mentioned, treatment of PD requires a compound that maximally inhibits CaV1.3, but not CaV1.2. [7] To quantify the antagonistic activities towards CaV1.3 (PD drug target) and CaV1.2 (PD anti-target) simultaneously (i.e., in a multiplexed screening configuration; Figure 1A), we mixed individual cell populations of CaV1.2-specific CaB-R and CaV1.3-specific CaB-A systems, each driving a different reporter protein.
This cell mixture was exposed to clinically approved CCB drugs (positive controls), negative control compounds (i.e., amitriptyline, tetracaine, lidocaine) and the five essential oil hits, in order to determine their impact on the depolarization-dependent expression of SEAP (reflecting CaV1.3 activity) and GLuc (reflecting CaV1.2 activity) (Figure S6C, Supporting Information). The experimental results confirmed the multiplexed screening capability of our system. All five essential oils showed the required basic selectivity profile of maximal CaV1.3 inhibition (highest CaB-A score vs control) and minimal CaV1.2 inhibition (closest CaB-R score to the control).

Integration of In Silico Virtual Screening and Deep Learning Enables the Discovery of (6)-Gingerol and Sclareol as Novel CaV1.3-Antagonists

To identify the putative active constituents of the five essential oils accounting for inhibition of CaV1.3, we used the LigandScout software to perform ligand-based virtual screening. [36,37] First, we generated 3D pharmacophore models of putative CaV1.3 blockers (Figure 3A) based on a series of CaV1 blockers [38] (positive reference), as well as CaV-independent ion channel modulators found in the multiplexed screening experiment (negative reference) (Figure S6C and Table S3, Supporting Information). Using these pharmacophore models, we performed in silico analysis of all constituents of rose flower, Cistus ladanifer, Pinus sylvestris, ginger and clary sage essential oils by computing the similarity of each chemical structure to a theoretically ideal pharmacologically active moiety. From a total of 198 different candidate molecules, this virtual screening experiment generated 13 hits as the most promising CaV1.3 antagonists (Table S4, Supporting Information), and structure clustering analysis enabled us to select the five chemicals diethyl phthalate, linalool oxide, zingerone, (6)-gingerol and sclareol as representative structures (Figure 3B). Parallel artificial intelligence (AI)-based validation experiments based on a directed-message passing neural network (D-MPNN) [39,40] gave similar results (Figure 3C; Table S5, Supporting Information), achieving a receiver operating characteristic curve-area under the curve (ROC-AUC) value of 97.78%. Experimental testing of these five candidate compounds with the CaB-A assay confirmed that (6)-gingerol and sclareol showed CaV1.3-antagonistic activity (Figure 3D). Notably, both compounds showed a stronger inhibitory effect on CaV1.3-mediated reporter gene expression than on the CaV1.2-dependent CaB system (Figure 3E; Figure S6D-F, Supporting Information), and were also predicted to have optimal drug-likeness properties according to Lipinski's Rule of Five criteria. [41]

[Figure 3. A) Pharmacophore-based virtual screening; the illustration exemplifies the alignment of (6)-gingerol to the CaV1.3-blocking pharmacophores. B) Structure clustering analysis of candidate compounds: all 13 hits from the virtual screening of 198 candidate molecules derived from GC-MS data of the essential oils (Table S3, Supporting Information) were clustered by structure similarity using the ChemMine tool; right panel, chemical structures of the five compounds selected as representatives of the clusters. C) Validation of virtual screening using a trained deep-learning neural network: a D-MPNN [40] trained with reported CCBs (Table S6, Supporting Information) and randomly chosen compounds from MUV datasets was used to validate the 198 candidate molecules screened by LigandScout; virtual screening hits in (B) are highlighted in blue (before clustering) and red (after clustering); rank-ordered prediction scores are listed in Table S5, Supporting Information; an arbitrary cut-off of 0.5 was chosen to assess general goodness. D) Assessment of putative CaV1.3 antagonism by the PD drug candidates: HEK-293 cells transfected with the CaV1.3-dependent CaB-A system were depolarized with 30 mM KCl and immediately seeded into culture wells containing the drug candidates at 10 or 100 μM; data are mean FOC (fold of DMSO control) of SEAP levels scored at 48 h (n = 3 independent experiments). E) Quantification of CaV1 antagonism by sclareol using CaB-R: HEK-293 cells transfected with CaV1.2- or CaV1.3-dependent CaB-R were depolarized with 30 mM KCl and immediately seeded into culture wells containing different concentrations of sclareol; cells transfected with a constitutive SEAP expression vector (pSEAP2-Control; PSV40-SEAP-pA) served as a reference for putative cytotoxicity, and cells transfected with a bacterial expression vector (pViM41; PT7-mCherry-MCS) served as a negative control indicating CaV-unrelated assay readouts; data are mean ± SD of SEAP levels scored at 48 h after drug exposure (n = 3 independent experiments). F) Quantification of CaV1 antagonism by sclareol and nifedipine on different CaV1.3 mutants (WT, pCaV1.3/pKK56/pMX57; CaV1.3 Y1048A, pWH154/pKK56/pMX57; CaV1.3 Y1365A,A1369S,I1372A, pWH155/pKK56/pMX57), with the same depolarization, controls and readout as in E (n = 3 independent experiments).]
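The structure-similarity clustering step above used the ChemMine web tool; as an illustrative stand-in, the same kind of pairwise Tanimoto comparison can be sketched with RDKit fingerprints. The three SMILES below (zingerone, diethyl phthalate and the DHP reference nifedipine) are standard literature structures, not values taken from the paper's GC-MS dataset:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Illustrative constituents plus nifedipine as a DHP reference;
# Morgan fingerprints stand in for ChemMine's similarity engine here.
smiles = {
    "zingerone":         "CC(=O)CCc1ccc(O)c(OC)c1",
    "diethyl phthalate": "CCOC(=O)c1ccccc1C(=O)OCC",
    "nifedipine":        "COC(=O)C1=C(C)NC(C)=C(C(=O)OC)C1c1ccccc1[N+](=O)[O-]",
}
fps = {name: AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, 2048)
       for name, s in smiles.items()}

names = list(fps)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        sim = DataStructs.TanimotoSimilarity(fps[a], fps[b])
        print(f"Tanimoto({a}, {b}) = {sim:.2f}")
```

Compounds with high pairwise similarity fall into the same cluster, so one representative per cluster suffices for experimental follow-up, which is how the five representative structures above were chosen.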
Importantly, sclareol (IC50 = 8.8 ± 1.0 μM; Figure 3E) had a more than threefold lower IC50 value for CaV1.3 than (6)-gingerol (30.5 ± 6.3 μM; Figure S6E, Supporting Information) and is also structurally divergent from all currently known CCB compounds (Table S3, Supporting Information), such as the dihydropyridines (DHPs), represented by nifedipine (Figure 3B). Indeed, when we created putative DHP-insensitive CaV1.3 mutants based on amino acid alterations previously shown to be critical for the CCB sensitivity of the related L-type CaV1.1 channel, [42,43] we found that our synthetic CaV1.3 Y1365A, A1369S, I1372A mutant was no longer inhibited by nifedipine, but still retained full sensitivity to sclareol (Figure 3F). This result suggests that the binding modes of sclareol and DHPs to CaV1.3 are different. These features render sclareol a promising lead compound for PD pharmacotherapy.

Validation of Neuroprotective Activity of Sclareol In Vitro and in Mice

To assess the potential in vivo efficacy of sclareol, we first confirmed the presence of its molecular target in SNc DA neurons by immunostaining of midbrain-containing brain slices for CaV1.3 and tyrosine hydroxylase (TH), which are known to be co-expressed in this brain area [44] (Figure S7A, Supporting Information). To demonstrate functional CaV1.3 inhibition by sclareol, we next performed whole-cell patch-clamp recordings of SNc DA neurons. Bath application of 10 μM sclareol led to a significant neuronal hyperpolarization (−62 ± 2 mV vs −80 ± 2 mV, p = 0.0001). This was accompanied by an increased spiking threshold (rheobase 3.75 ± 0.6 pA vs 11 ± 1.8 pA, p = 0.0107) and decreased firing responses to incremental current injection steps (two-way repeated-measures ANOVA, sclareol effect F(1,12) = 19.49, p = 0.0008; Figure S7B, Supporting Information). These in vitro findings confirm that sclareol significantly decreases the excitability of SNc DA neurons. To confirm sclareol's neuroprotective effect in experimental PD, we concomitantly profiled the locomotion behavior and neuronal dynamics of live sclareol-treated versus vehicle-treated PD model mice [45] (Figure 4, Figure 5, Movies S1 and S2, Supporting Information). An express probe consisting of a GRIN lens coated with the genetically encoded calcium sensor AAV-CaMKII-GCaMP6m was first implanted above the dorsal striatum (DS) to enable real-time monitoring of calcium dynamics from striatal neurons, along with a guide cannula above the ipsiversive SNc to infuse the PD-triggering neurotoxin 6-hydroxydopamine (6-OHDA) (Figure 4A-C). Three weeks after the surgery, a miniaturized microscope was mounted on the animals' heads to image calcium-associated striatal neuron dynamics in real time as the animals explored an open arena, thereby setting the baselines for behavioral and neuronal activities. The animals were treated with daily doses of either sclareol (55 mg kg−1) or vehicle as a negative pharmacologic control, starting two days before and continuing throughout the 30 days after the single 6-OHDA infusion (Figure 4A,B). The extent of PD-associated neuronal degeneration was confirmed by immunostaining of the DA neuron-specific marker TH, which was found to be significantly decreased in the ipsiversive DS compared to the contralateral side (Figure 4C,E, 54 ± 10%). In contrast, in the sclareol-treated mouse group little-to-no loss of DA neurons was observed compared to the contraversive control side as well as to the vehicle-treated mouse group (Figure 4D,E, 92 ± 4%; p = 0.0061, sclareol- vs vehicle-control group). When allowed to explore an open arena to monitor locomotion performance, vehicle-treated PD mice carrying the head-mounted mini-endoscopic microscope and a cannula developed strong locomotion alterations from baseline throughout the entire experimental timespan, while sclareol-treated animals showed consistently stable travel distance, mobility time and velocity, and appeared to be unaffected by PD (Figure 4F and Figure 5A-D). Consistently, vehicle-treated PD mice also manifested locomotion impairments, exhibiting unilateral rotations contraversive to the 6-OHDA-affected hemisphere (Figure 4G and Movie S1, Supporting Information). Interestingly, sclareol-treated mice showed neither the 6-OHDA-triggered locomotion increase nor compensatory rotational behavior (Figure 4F,G; Figure 5A-D; Movie S2, Supporting Information).
Additionally, there was an inverse correlation between rotational behavior and neuronal integrity, suggesting that sclareol was indeed preserving the locomotion capabilities of treated animals and protecting them from PD-associated deficiencies (Figure 4H). Monitoring of striatal neuron activities recorded in real time using mini-endoscopic live single-cell calcium imaging showed a significant difference in the calcium dynamics of sclareol-treated mice compared with PD mice (Figure 4I-M). Decreased calcium dynamics are known to correlate with a reduced event rate of striatal neurons owing to a PD-associated loss of DA neuromodulation. [46] Here, the calcium transient event rate (Figure 4K, two-way repeated-measures ANOVA, F(8,40) = 4.64, p < 0.0001) was indeed reduced in vehicle-treated mice. Most importantly, the relative calcium event rate (Figure 4K, two-way repeated-measures ANOVA, sclareol effect F(10,80) = 6.90, p < 0.0001), the relative variance of the active neuronal fraction (Figure 4L, two-way repeated-measures ANOVA, sclareol effect F(10,80) = 11.08, p < 0.0001) and the neuronal redundancy over time (Figure 4M, two-way repeated-measures ANOVA, sclareol effect F(10,80) = 9.50, p < 0.0001; Figure 5E,F) were all significantly different in PD mice compared to sclareol-treated animals. Thus, the calcium live-imaging recordings confirm that striatal neurons in these mouse groups encode motion in distinct ways. Overall, these data support the idea that sclareol protects the animals against the development of PD-associated deficiencies in locomotion programmed by DS neurons.

Discussion

Synthetic biology is currently undergoing a transition from a design-driven era of creating template circuits into a demand-driven discipline focused on the creation of problem-solving cell functions. [47][48][49] By engineering synthetic gene circuits customized to quantify CaV1.2 and CaV1.3 activities individually, we were able to overcome a major technical obstacle to drug discovery for PD. Compared to the FLIPR (fluorescence imaging plate reader) assay, which is considered the current gold standard for HTS of ion channel modulators, our cell-based CaB-R and CaB-A assays offer numerous advantages: i) high Z' scores, which are pivotal for HTS of large sample volumes; ii) compatibility with use-dependent analysis of CCBs to increase the information content of individual screens; [21,50] and iii) multiplexed target analysis enabling one-step assessment of massively parallel drug targets and anti-targets under identical screening conditions. [24,51] By using CaB-R and CaB-A in combination with computer-aided technologies, such as virtual screening and deep learning, we were able to identify sclareol as a novel drug candidate for neuroprotection in PD patients. In terms of in silico drug discovery, we used LigandScout for the screening of novel drug candidates, as it uses an effective algorithm to rationally compute pharmacophores based on the molecular structures of chemical compounds with known drug properties. [37] Deep-learning-based approaches can also be used for this purpose, [39] but we only trained our D-MPNN with known CCBs (Table S6, Supporting Information) without further optimizing the model through iterative cycles between experimental validation and additional training. Nevertheless, it showed excellent utility for the validation of our LigandScout results.
Interestingly, both LigandScout and the D-MPNN predicted stronger channel antagonism for linalool oxide and gingerol than for sclareol, but sclareol prevailed in subsequent experimental validations. This suggests that cell-based assays may provide a more advanced proxy than in silico technology for drug discovery. Sclareol, a natural compound derived from the Mediterranean medicinal plant Salvia sclarea (clary sage), has selectivity for CaV1.3 over CaV1.2, was previously demonstrated to attenuate growth and cell cycle progression of human leukemic cells, [52] shows low systemic toxicity and good bioavailability in vivo, and can penetrate the blood-brain barrier following oral intake. In addition, sclareol is structurally divergent from all L-type voltage-gated CCBs known to date, and might therefore have a unique pharmacological profile without the common limitations of currently available PD drugs. All these features are favorable for potential clinical application. In a mouse model of PD, we could indeed confirm promising therapeutic effects of sclareol, including prevention of SNc DA neuronal degeneration and maintenance of motor performance compared to control PD mice. Notably, as PD patients typically show decreases in locomotion, the abnormally high locomotion parameters of our vehicle-treated PD mice may seem counter-intuitive. However, this observation can be readily explained in terms of the constraints of our experimental model, since the 6-OHDA infusions were done unilaterally and not bilaterally. Such unilateral infusions trigger an unbalanced motor command between hemispheres, leading to unilateral rotations contraversive to the PD-affected hemisphere. This phenomenon is widely known, and similar findings of contraversive rotational behavior were reported in unilaterally 6-OHDA-infused rodents treated with L-DOPA. [53] Collectively, we believe this work illustrates the value of multi-faceted experimental and computational drug discovery platforms, and especially the utility of cell-based solutions created with synthetic-biology-inspired engineering principles, which we employed here to tailor the first high-throughput multiplexed drug screening system for ion channel-related diseases. This platform enabled us to identify sclareol as a structurally distinctive lead compound/candidate drug for neuroprotection in PD patients. We anticipate that the combination of molecular medicine, high-throughput technologies and AI exemplified in this work will have a major impact on many areas of biomedicine.

[Figure 4 caption fragment: ...behind the lens, with expression of GCaMP6m immediately below; the tissue was stained for the DA neuron-specific marker TH (red). E) Sclareol-mediated neuronal protection level quantified by differential TH staining against the contraversive hemispheres in the sclareol and vehicle treatment groups.]

Experimental Section

Vector Design: Comprehensive design and construction details for all expression plasmids are provided in Table S7, Supporting Information.

Computer-Aided Drug Screening: Representative pharmacophore models were created with the LigandScout software [37] based on the reference ion channel blockers listed in Table S3, Supporting Information. To perform alignments to the pharmacophore, all constituents of rose flower, Cistus ladanifer, Pinus sylvestris, ginger and clary sage, obtained from GC-MS data kindly provided by Welfine Beijing Science & Technology Development Co. Ltd (Beijing, China), were encoded in the chemical SMILES (Simplified Molecular-Input Line-Entry System) language.
Suggested hits of the virtual screening were refined by structure similarity analysis using the free ChemMine software (http://chemmine.ucr.edu/). [54] To train a D-MPNN, [40] isomeric SMILES strings of published CCBs (Table S6, Supporting Information; positive reference, labeled 0) and 400 random compounds from maximum unbiased validation (MUV) datasets [55] (negative reference, labeled 1) were used. After training, the binary classification model was applied to predict the goodness of candidate molecules from the same GC-MS dataset used for virtual screening (309 constituents from five essential oils, 198 non-redundant chemical compounds). Rank-ordered prediction scores are listed in Table S4, Supporting Information. Drug-likeness properties of candidate Ca V -blocker drugs, including pharmacokinetics, Lipinski's Rule of Five criteria, [41] and blood-brain barrier permeability, were evaluated using the ADME toxicity predictor SwissADME (https://www.click2drug.org/directory_ADMET.html); a local approximation of the rule-of-five check is sketched after the Ethics paragraph below. Immunohistochemistry: Mice were anesthetized with a lethal i.p. injection of pentobarbital (300 mg kg^-1) and perfused intracardially with cold PBS followed by 4% paraformaldehyde. The brains were extracted and kept in 4% sucrose until complete saturation. Slices (60 μm thick) containing the DS were cut with a cryostat and processed for TH and Ca V 1.3 immunohistochemistry. Slices were washed three times for 3 min each in TBS (1X), permeabilized with TBS containing 0.1% Tween 20 and 0.1% Triton X-100, washed again three times for 3 min each in TBS (1X), and blocked in 1% BSA-TBS solution for 2 h. Finally, mouse anti-TH antibody and/or anti-Ca V 1.3 antibody (1/500, Sigma) was added and the slices were gently shaken at 4 °C overnight. The next day, the slices were washed three times for 3 min each in TBS (1X) and incubated with Alexa 488 donkey anti-rabbit antibody or Alexa 555 donkey anti-mouse antibody (1/500, Sigma) for 2 h at room temperature. After a final round of TBS washing, the slices were mounted on slides and imaged with a confocal microscope (Zeiss LSM700). Images were processed with ImageJ and the fluorescence intensity was analyzed with MATLAB (MathWorks). In Vivo Single-Cell Calcium Imaging: An express probe (carrying the AAV1.Camk2a.GCaMP6m.WPRE.SV40, Ready to Image virus, Inscopix) was unilaterally placed above the DS ipsiversive to the cannula (AP: +0.6; L: −1.7; and DV: −2 mm). The probe was fixed with a UV-light-curable glue (Henkel). A custom-made head bar (2 cm long, 0.4 cm wide, 0.1 cm tall) was placed for future handling. A fixed headcap was built from layers consisting of super-glue (Cyberbond), UV-light-curable glue (Loctite), and dental cement (Lang). Small screws were anchored in the skull to improve adhesion between the skull and the headcap. The headcap was secured to the skin with Vetbond tissue adhesive glue (3M). The expression of GCaMP6m and the clearing of the lens were assessed regularly starting from 10 days post-surgery. Calcium transients were recorded with the nVoke2 system, pre-processed in the Inscopix Data Processing Software (IDPS, v1.3, Inscopix), and finally processed and analyzed with Python in collaboration with Inscopix. Ethics: Male and female C57BL/6JRj mice were bred in-house. No sex differences were observed, and the data were thus pooled. All experimental procedures were approved by the Institutional Animal Care Office of the University of Basel and the Cantonal Veterinary Office under License Number 2742.
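As flagged in the drug-screening subsection above, the Lipinski Rule-of-Five part of the drug-likeness evaluation can also be approximated locally. The sketch below uses RDKit rather than the SwissADME tool actually used in the study, with the classical thresholds (MW ≤ 500, logP ≤ 5, H-bond donors ≤ 5, H-bond acceptors ≤ 10).

```python
# A hedged local approximation of the Lipinski Rule-of-Five check; the paper
# used the SwissADME web predictor, so this is only an equivalent sketch.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five_violations(smiles: str) -> int:
    """Count Lipinski violations: MW > 500, logP > 5, HBD > 5, HBA > 10."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"unparsable SMILES: {smiles}")
    violations = 0
    violations += Descriptors.MolWt(mol) > 500
    violations += Descriptors.MolLogP(mol) > 5       # Crippen logP estimate
    violations += Lipinski.NumHDonors(mol) > 5
    violations += Lipinski.NumHAcceptors(mol) > 10
    return violations

# Drug-like by the classical criterion: at most one violation.
print(rule_of_five_violations("CCO"))  # ethanol: 0 violations
```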
Statistical Analysis: CCB activity in CaB-R assays was calculated as a "percentage of control", with reporter protein levels normalized to maximum average counts (100%; 40 mM KCl addition) and minimum average counts (0%; 10 μM nicardipine). Normalization calculations, nonlinear regression curve fittings (log(inhibitor) vs. normalized response, variable slope), and statistical analyses were all conducted in Prism 7.0 (GraphPad Software, San Diego, CA, USA). For statistical analysis, an extra sum-of-squares F test was performed to determine the significance of differences in Log(IC 50) among the data sets of Figure S3D,E, Supporting Information. Fold of control (FOC) values were calculated as FOC = Xi/avg(c+) × 100, where Xi is the measurement of the i-th compound and avg(c+) is the average measurement of the DMSO-treated samples. The Z' value was calculated between the positive (10 μM nicardipine) and negative (0.1% DMSO) controls according to the reported equation (Zhang et al., 1999). All values for in vitro experiments are expressed as the mean ± SD. Whole-cell patch-clamp results (Figure S7B, Supporting Information) were analyzed by paired t-test or two-way repeated-measures ANOVA. Immunohistochemistry results (Figure 4E) were analyzed by unpaired t-test. Mouse behavior and in vivo single-cell calcium imaging results were analyzed by two-way repeated-measures ANOVA. All the above data sets for sclareol efficacy tests in vivo are shown as mean ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
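The two normalization statistics above have closed forms: FOC divides each measurement by the mean of the DMSO controls, and the Z' factor of Zhang et al. (1999) is Z' = 1 − 3(SD_p + SD_n)/|mean_p − mean_n|. A minimal sketch on hypothetical plate-reader counts (not study data, and not the Prism workflow actually used):

```python
# Sketch of the fold-of-control (FOC) and Z'-factor calculations quoted above,
# using hypothetical well counts.
import numpy as np

def fold_of_control(x, dmso_controls):
    """FOC = X_i / avg(c+) x 100, with DMSO-treated wells as c+."""
    return np.asarray(x) / np.mean(dmso_controls) * 100.0

def z_prime(positive, negative):
    """Z' = 1 - 3*(SD_p + SD_n) / |mean_p - mean_n| (Zhang et al., 1999)."""
    p, n = np.asarray(positive), np.asarray(negative)
    return 1.0 - 3.0 * (p.std(ddof=1) + n.std(ddof=1)) / abs(p.mean() - n.mean())

# Illustrative numbers only: nicardipine wells vs. 0.1% DMSO wells.
nicardipine = np.array([110.0, 120.0, 115.0, 118.0])
dmso = np.array([1000.0, 980.0, 1020.0, 995.0])
print(f"Z' = {z_prime(nicardipine, dmso):.2f}")  # > 0.5 indicates an excellent assay
print(fold_of_control([500.0, 900.0], dmso))
```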
7,105
2022-01-18T00:00:00.000
[ "Medicine", "Computer Science", "Chemistry" ]
The Role of microRNAs in Cholangiocarcinoma Cholangiocarcinoma (CCA), an aggressive malignancy, is typically diagnosed at an advanced stage. It is associated with dismal 5-year postoperative survival rates, generating an urgent need for prognostic and diagnostic biomarkers. MicroRNAs (miRNAs) are a class of non-coding RNAs that are associated with cancer regulation, including modulation of cell cycle progression, apoptosis, metastasis, angiogenesis, autophagy, therapy resistance, and epithelial–mesenchymal transition. Several miRNAs have been found to be dysregulated in CCA and are associated with CCA-related risk factors. Accumulating studies have indicated that altered miRNAs could act as oncogenic or suppressor miRNAs in the development and progression of CCA and contribute to clinical diagnosis and prognosis prediction as potential biomarkers. Furthermore, miRNAs and their target genes also contribute to targeted therapy development and aid in the determination of drug resistance mechanisms. This review aims to summarize the roles of miRNAs in the pathogenesis of CCA, their potential use as biomarkers of diagnosis and prognosis, and their utilization as novel therapeutic targets in CCA. Introduction Cholangiocarcinoma (CCA) includes a diverse group of biliary epithelial malignancies that involve all points of the biliary tree. Depending on the anatomic location, CCAs are classified into three subtypes: intrahepatic (iCCA), perihilar (pCCA), and distal (dCCA) [1,2]. Among them, pCCA and dCCA are also referred to as "extrahepatic CCA" (eCCA). pCCA, the most common CCA, accounts for 50-60% of all CCAs, followed by dCCA, which accounts for 20-30% of all cases [1]. iCCA is the second most common primary liver cancer after hepatocellular carcinoma (HCC) and accounts for 10-15% of all primary hepatic malignancies [3]. Additionally, a rare type of primary liver cancer, mixed hepatocellular cholangiocarcinoma (HCC-CCA), accounts for <1% of all cases according to the World Health Organization (WHO) [4] (Figure 1). CCAs are aggressive tumors that account for approximately 3% of all gastrointestinal cancers [5]. CCAs are usually asymptomatic in the early stages and are typically diagnosed at an advanced stage. Although surgery is a therapeutic strategy for patients with CCAs, the 5-year postoperative survival rate (7-20%) remains low because of the challenge of diagnosing patients at an early stage [1,6]. Therefore, developing advanced diagnostic techniques and exploring the mechanisms underlying CCA development and progression can be effective approaches to improve the outcomes for patients with CCA. MicroRNAs (miRNAs) are small, non-coding RNAs with a length of 17-25 nucleotides [7]. miRNA biogenesis is a multistep process that is categorized into transcription, nuclear cropping, export to the cytoplasm, and cytoplasmic dicing [8]. miRNA genes are transcribed as primary RNA (pri-miRNA) by RNA polymerase II (pol-II) and are processed by Drosha, a nuclear enzyme of the RNase III family, in the nucleus to release a hairpin-shaped precursor called "pre-miRNA". Pre-miRNA is recognized by the Exportin 5/Ran-GTP transporter and is exported from the nucleus to the cytoplasm. The pre-miRNA is then cleaved by Dicer and the TAR RNA-binding protein to produce a miRNA duplex, which is then loaded onto the Argonaute (AGO) protein to assemble the RNA-induced silencing complex (RISC). One strand remains on the AGO protein to form the mature miRNA, while the other strand is degraded.
The mature miRNA represses gene expression by interacting with the complementary sequences in the 3′-untranslated region of the target mRNAs [8][9][10] (Figure 2). Over 5000 miRNAs from diverse organisms are registered in online databases, such as the miRBase (www.mirbase.org, accessed on 9 May 2021). In humans, approximately one-third of the miRNAs are organized in clusters and contain two or more miRNAs with similar sequences [11], possibly leading to combinatorial diversity and synergy in the biological effects of the miRNAs. Furthermore, approximately 30% of the human genes are regulated by miRNAs via signaling pathways [12]. Cancer is a complex genetic disease associated with gene mutations and deregulation of the gene expression. During the last decade, many studies have focused on miRNAs and cancer and have highlighted the impact of miRNAs on gene expression. In this review, we have comprehensively discussed the association between miRNAs and CCA; we have also summarized the roles of miRNAs in the pathogenesis of CCA, their potential use as biomarkers of diagnosis or prognosis, and their possible use as novel therapeutic targets in CCA. Epidemiology The mortality rates of iCCA have increased globally in recent years, with the highest rates reported from 2010 to 2014 (1.5-2.5/100,000 in men and 1.2-1.7/100,000 in women) based on the data of 32 selected countries from the WHO and Pan American Health Organization databases [13].
In addition, in Japan, the mortality rate associated with eCCA is 2.8/100,000 in men and 1.4/100,000 in women [13]. The data from the National Center for Health Statistics between 1999 and 2014 in the USA showed that CCA mortality was 36% higher in patients with age >25 years, and the mortality was lower in females than in males (risk ratio [RR] = 0.78, 95% confidence interval [CI] = 0.77-0.79) [14]. Differences in CCA incidence rates have been reported among different racial and ethnic groups, with the highest rates reported in Southeast Asia and the lowest in Australia [15]. A study in Western Europe indicated that the incidence rates of iCCA increased considerably between 1999 and 2009, especially in the population in the age group of 45-59 years [16]. In contrast, research from the USA has shown that the incidence of iCCA remained stable from 1992 to 2007, whereas the incidence of eCCA has been increasing considerably [17]. In Japan, from 1976 to 2013, a total of 14,287 cases of CCA were identified, and iCCA was more likely to develop in younger patients. The prognosis of iCCA was poorer in comparison to that of eCCA; however, the prognosis of both iCCA and eCCA cases improved after 2006 [18]. Primary sclerosing cholangitis (PSC), a chronic cholestatic liver disease with an unclear etiology, is characterized by inflammation and fibrosis with multifocal biliary strictures. Additionally, PSC is closely associated with inflammatory bowel disease (IBD), and about two-thirds of the patients in Northern Europe and the USA have PSC concurrently with IBD [30]. The incidence of CCA in patients with PSC is approximately 0.6-1.5% per year [31]. A genetic study of 186 patients with PSC-biliary tract cancer showed eCCA with high genomic alterations in TP53 (35.5%), KRAS (28.0%), CDKN2A (14.5%), and SMAD4 (11.3%), and even in underlying druggable mutation genes, such as HER2/ERBB2 [32].
Moreover, a 10-year nationwide population-based study from the UK suggested that development of PSC increases the risk of CCAs (hazard ratio [HR] = 28.46, p < 0.001) in patients with PSC-IBD, and it also increases the risk of HCC (HR = 21.00, p < 0.001), gallbladder cancer (HR = 9.19, p < 0.001), pancreatic cancer (HR = 5.26, p < 0.001), and colorectal cancer (HR = 2.43, p < 0.001) [33]. Certain regions of Southeast Asia, such as North and Northeast Thailand, where Opisthorchis viverrini infestation is prevalent, show a high CCA burden, with 19.3% and 15.7% of the population having the infection, and CCA incidence rates of 14.6/100,000 and 85/100,000, respectively [34,35]. Southeastern and Northeastern China, Korea, Northern Vietnam, and Eastern Russia show a prevalence of human Clonorchis sinensis infections [36]. Furthermore, the prevalence of liver fluke infection (OR = 10.088, 95% CI = 1.085-93.775) is reportedly higher in patients with hilar CCA than in healthy controls [19]. Additionally, there are reports of CCA associated with Schistosoma japonicum infection [37]. Approximately 57% of global cirrhosis cases are induced by chronic infection with hepatitis B (HBV) and hepatitis C (HCV) viruses [38]. Several meta-analyses have indicated that HBV or HCV infection is associated with an increased risk of CCA, especially iCCA [39][40][41]. Cirrhosis, diabetes, and obesity are also risk factors for CCA [41]. A case-control study showed that cirrhosis is a major risk factor for iCCA; other risk factors include nonspecific cirrhosis (adjusted OR = 27.2), HCV infection (adjusted OR = 6.1), diabetes (adjusted OR = 2.0), and alcoholic liver disease (adjusted OR = 7.4) [23]. Furthermore, another study indicated that risk factors associated with both iCCA and eCCA were nonspecific cirrhosis, chronic pancreatitis, diabetes, alcoholic liver disease, biliary cirrhosis, and cholelithiasis. Factors associated with iCCA include non-alcoholic fatty liver disease (NAFLD), obesity, and smoking [42]. The Role of miRNAs in CCA In recent decades, several studies have focused on the role of miRNAs in cancers. miRNAs play a key role in cancer cell regulation, and are associated with the progression of the cell cycle, apoptosis, metastasis, angiogenesis, glycolysis/Warburg effect, autophagy, therapy resistance, and epithelial-mesenchymal transition (EMT). miRNAs Associated with CCA Risk Factors miRNAs play an important role in regulating physiological and pathophysiological functions. In gallstone disease, upregulated miR-210 reduces the expression of its target gene, ATP11A, in human gallbladder epithelial cells to regulate the ABC transporter pathway [43]. The pathological mechanism of hepatolithiasis is closely related to chronic inflammation and overexpression of mucin 5AC (MUC5AC). miR-130b inhibits the expression of specificity protein 1 (Sp1), which is followed by a decrease in the expression of MUC5AC [44]. In addition, a clinical control study indicated that the expression levels of miR-21 and miR-221 were upregulated in CCA associated with hepatolithiasis [45]. In PSC, increased melatonin reduces biliary hyperplasia and liver fibrosis by overexpressing arylalkylamine N-acetyltransferase (AANAT) in the pineal gland. Moreover, inhibition of miR-200b reduces the expression of fibrotic and angiogenic genes, such as Col1a1, Fn-1, Vegf-a/c, Vegfr-2/3, Angpt1/2, and Tie-1/2 [46]. In schistosomiasis, miR-21, miR-96, miR-351, miR-146a/b, and miR-27b promote hepatic fibrosis by regulating signaling pathways [47,48].
During liver cirrhosis progression, miR-378 plays a key role in promoting hepatic inflammation and fibrosis via the NF-κB-TNFα axis in non-alcoholic steatohepatitis [49]. Increased miR-30a expression downregulates extracellular matrix-related gene expression, such as that of α-SMA, TIMP-1, and collagen I, and prevents liver fibrogenesis by directly targeting Beclin1-mediated autophagy [50]. Activation of hepatic stellate cells (HSCs) is a major step in the initiation and progression of hepatic fibrosis, and overexpression of miR-214-3p suppresses the expression of suppressor-of-fused homolog (Sufu) to promote HSC activation and fibrosis development [51]. HBV infection induces a spectrum of liver diseases ranging from acute infection to chronic hepatitis, cirrhosis, and HCC [52]. Wang et al. have indicated that miR-98, miR-375, miR-335, miR-199a-5p, and miR-22 are involved in HBV infection [53]. The expression of miR-192-3p is negatively associated with increased levels of HBV DNA in the serum of patients with HBV. HBV induces autophagy to promote its replication through the miR-192-3p-XIAP axis via NF-κB signaling [54]. Additionally, miR-224 [55] and miR-1231 [56] suppress HBV replication by inhibiting SIRT1-mediated autophagy and targeting the core mRNA, respectively. Other studies have reported that the miR-99 family promotes replication by enhancing autophagy through the mTOR/ULK1 signaling [57]. HCV infection is a global health problem that leads to chronic carriage in 70-80% of all cases and presents a high risk of cirrhosis and cancer [58]. miR-215 promotes HCV replication by targeting TRIM22 [59], and miR-21-5p enhances the HCV life cycle and steatosis induced by the viral core 3a protein [60]; other promoters include miR-122 [61]. Overexpression of miR-199a suppresses HCV genome replication [62], and miR-130a inhibits HCV replication via an Atg5-dependent autophagy pathway [63]. Inflammation is associated with cancer origin and is based on an environment rich in inflammatory cells and factors, such as activated stroma and DNA-damage-promoting agents. Moreover, activation of inflammation signaling pathways enhances cell proliferation [64]. In IBD, miR-301a is overexpressed in peripheral blood mononuclear cells and inflamed mucosa, which promotes mucosal inflammation by inducing IL-17A and TNF-α expression [65]. Increased miR-31 expression in the colon tissues of patients with IBD reduces the inflammatory response by inhibiting the expression of IL-7R, IL-17RA, and signaling proteins (GP130). In addition, miR-31 promotes epithelial regeneration via the Wnt and Hippo signaling pathways [66]. In chronic pancreatitis, upregulated miR-15 and miR-16 expression can alleviate apoptosis and fibrosis by targeting both BCL-2 and SMAD5 [67]. Alcohol consumption is closely associated with liver injury, especially alcoholic liver disease. A recent study indicated that alcohol decreases the expression of miR-148a in hepatocytes through FoxO1, thereby facilitating TXNIP overexpression and NLRP3 inflammasome activation-induced hepatocyte apoptosis [73]. miR-155 promotes alcohol-induced steatohepatitis and fibrosis [74]. Tobacco smoke induces miR-25-3p maturation via m6A modification, which in turn promotes the development and progression of cancers [75,76] (Table 1). Tumor Suppressor miRNAs in CCA The tumor-suppressive role of miRNAs such as miR-34a [91], miR-122 [102], miR-22 [105], and miR-101 [108] is well established in many cancer types.
miR-34a expression is often decreased in cancers. It is transcriptionally controlled by TP53 and regulated by multiple p53-independent mechanisms. miR-34a, a candidate tumor suppressor miRNA, regulates multiple targets, such as MYC, MET, CDK4/6, NOTCH1, NOTCH2, BCL2, and CD44, all of which have been implicated in tumorigenesis and cancer progression [123]. miR-122 is a tumor suppressor in various cancer types, including CCA. It inhibits proliferation and metastasis by targeting ALDOA [103] and chloride intracellular channel 1 (CLIC1) [124]. Moreover, miR-122 is a regulator in various liver diseases, including HCC [125]. miR-22 plays an important role in many cancer types and has been shown to modulate some oncogenic processes, such as proliferation, apoptosis, angiogenesis, immune response, and metastasis [105]. In an in vitro study on CCA, overexpression of miR-433 and miR-22 was demonstrated to suppress cell proliferation and cellular migration by targeting HDAC6 [105]. In addition, a survival analysis indicated that DEPDC1, FUT4, MDK, PACS1, PIWIL4, miR-22, miR-551b, and the cg27362525 and cg26597242 CpGs can be used as prognostic markers of CCA. Although miR-22 is a known tumor suppressor, its high expression is correlated with poor survival of patients with CCA [126]. miR-101 has been shown to be a tumor suppressor in certain cancers. For instance, miR-101 overexpression considerably inhibits CCA cell proliferation and angiogenesis by targeting vascular endothelial growth factor (VEGF), cyclooxygenase-2 (COX-2) [108], and E2F8 [109]. EZH2 is also a target gene of miR-101 that regulates CCA cell proliferation [127] (Figure 3). miRNAs as Biomarkers for CCA CCA is commonly diagnosed through a combination of clinical details, biochemical information, radiological imaging, and histology. Histology is usually considered the "gold standard" to confirm a diagnosis. Radiological imaging techniques, such as ultrasound, computed tomography, magnetic resonance imaging/magnetic resonance cholangiopancreatography, and positron emission tomography, have been used to diagnose CCA subtypes. Carbohydrate antigen 19-9 (CA19-9), a non-specific tumor biomarker, helps in the diagnosis of CCA; however, this biomarker lacks sensitivity and specificity, particularly in the early stages of CCA [2]. Most patients with early-stage CCA are usually asymptomatic, and are thus diagnosed at an advanced stage. Tumor biomarkers have been widely used to improve early-stage diagnosis and prognosis prediction. In recent years, many studies have examined miRNAs as potential biomarkers for the early diagnosis and prognosis prediction in case of CCA (Table 3).
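The diagnostic performance figures quoted below (AUC, sensitivity, specificity) all derive from ROC analysis of marker levels in patients versus controls. The following is a hedged, self-contained sketch of that computation on synthetic data; none of the values are study data, and scikit-learn is assumed to be available.

```python
# Illustration of how AUC, sensitivity, and specificity are obtained for a
# serum-miRNA marker. Labels and expression values are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# 1 = CCA patient, 0 = control; higher marker level in patients (e.g., miR-26a).
y_true = np.concatenate([np.ones(50), np.zeros(50)])
marker = np.concatenate([rng.normal(2.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])

auc = roc_auc_score(y_true, marker)
fpr, tpr, thresholds = roc_curve(y_true, marker)

# Choose the cutoff maximizing Youden's J = sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.3f}")
print(f"sensitivity = {tpr[best]:.1%}, specificity = {1 - fpr[best]:.1%} "
      f"at cutoff {thresholds[best]:.2f}")
```

Combining two markers (e.g., a miRNA plus CA19-9, or the miR-16/miR-877 pair below) typically means feeding both into a logistic model and running the same ROC analysis on the model score.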
Numerous studies have reported the use of miRNAs as biomarkers in CCA. miR-21, an oncogenic miRNA [82], is a potential biomarker for both diagnosis [45] and prognosis [121]. In addition, miR-21 and miR-221 aid in the diagnosis of CCA associated with hepatolithiasis with an increased accuracy (AUC = 0.911) and a sensitivity and specificity of 77.42% and 97.50%, respectively [45]. Serum miR-26a is upregulated in patients with CCA, and its expression levels are associated with the tumor-node-metastasis stage. miR-26a can be clinically beneficial as a diagnostic biomarker for CCA, as it has previously shown an AUC = 0.899, 84.8% sensitivity and 81.8% specificity [128]. Moreover, high expression of serum miR-26a is also an independent predictor of poor overall (HR = 3.461, 95% CI = 1.331-5.364) and progression-free survival in patients with CCA (HR = 4.226, 95% CI = 1.415-10.321) [128]. The expression of serum miR-150-5p is downregulated in patients with CCA, demonstrating a 91.43% sensitivity and 80% specificity for diagnosis. When combined with CA19-9 expression, the sensitivity increases to 93.33% and the specificity to 96.88% [129]. To discriminate dCCA from pancreatic ductal adenocarcinoma and improve diagnosis, a recent study revealed that miR-16 is downregulated and miR-877 is upregulated in patients with dCCA. The combination of the two miRNAs, miR-16 and miR-877, showed an AUC = 0.90, 79% sensitivity, and 90% specificity for diagnosis, and an AUC = 0.88, 71% sensitivity, and 90% specificity for discriminating dCCA from pancreatic ductal adenocarcinoma [138]. miRNAs in CCA Therapy Resistance For certain patients with advanced-stage CCA, for whom surgical resection or liver transplantation is not feasible, a combination of gemcitabine and cisplatin (GemCis) is used as a first-line systemic therapy irrespective of the CCA subtype [2]. Furthermore, a combination of 5-fluorouracil, leucovorin, oxaliplatin, and irinotecan (FOLFIRINOX) [139] or a combination of gemcitabine, cisplatin, and nab-paclitaxel [140] is associated with improved patient survival. In recent years, an increasing number of studies have focused on genetic pathophysiology, and carcinogenic and mutant genes have been identified in many malignancies, including CCA, which in turn promotes the development of targeted therapy for CCA. However, drug resistance remains a key issue during such treatment. Numerous studies have reported the role of miRNAs in CCA therapy resistance. For instance, miR-210 sustains HIF-1α activity by targeting HIF-3α, which reduces the sensitivity to gemcitabine in CCA cells [141]. miR-130a-3p increases gemcitabine resistance by targeting PPARG [142]. Overexpression of miR-199a-3p enhances cisplatin sensitivity by inhibiting the mTOR signaling pathway in CCA cells [143]. miR-106b overexpression increases 5-fluorouracil sensitivity by targeting ZBTB7A, and miR-106b downregulation is related to poor prognosis in patients with CCA [144] (Table 4). miRNA-Based Therapies As previously mentioned, miRNAs play a key role as oncogenic or suppressor miRNAs in the development and progression of CCA, and they act as regulators of drug resistance. miRNA-based therapy is a novel targeted therapy for cancers that is based on the concept of overexpressing suppressor miRNAs or decreasing the expression of oncogenic miRNAs using miRNA sponges.
Although numerous in vitro and xenograft model-based studies using miRNA mimics, inhibitors, and plasmids have indicated the functions of miRNAs [82,84,91], miRNA-based therapy has not yet achieved clinical translation. Because of the heterogeneity of tumor cells and the complexity of miRNA functions, the same miRNA can have opposite functions in different malignancies. Furthermore, one miRNA can target different genes to regulate protein expression. Additionally, the complexity of the in vivo environment can also affect the therapy. Finally, determining the approach and accuracy of drug delivery, evaluating efficacious doses, and predicting off-target effects are aspects that need to be considered. Currently, these act as obstacles that prevent the use of miRNA-based therapy in clinical applications [145,146]. Conclusions Studying the biological functions of miRNAs, especially their roles in malignancies, is a growing field of research. miRNAs have been reported to play key roles in tumorigenesis, cell proliferation, apoptosis, metastasis, angiogenesis, EMT, and autophagy. In this review, we summarized the functional roles and related target genes of oncogenic and suppressor miRNAs implicated in the development and progression of CCA. The dysregulated expression of miRNAs in CCA has been utilized as a source of potential biomarkers for clinical diagnosis and prognosis prediction. Furthermore, miRNAs and their target genes contribute toward the development of targeted therapy and the determination of drug resistance mechanisms. Although accumulating evidence has demonstrated that miRNAs may be potential biological targets for CCA treatment in preclinical studies, they are not yet suitable for clinical practice because of tumor cell heterogeneity as well as the complexity of the in vivo environment and miRNA functions. Meanwhile, determining drug delivery approaches, evaluating efficacious doses, and predicting off-target effects remain obstacles that prevent the clinical application of miRNA-based therapy. Further research and analyses of miRNAs will provide more evidence and novel insights into the pathogenesis of CCA and will prove useful for the diagnosis, therapy, and prognosis prediction in patients with CCA.
5,524.4
2021-07-01T00:00:00.000
[ "Biology" ]
Biomimetic Sensor for Detection of Hydrochlorothiazide Employing Amperometric Detection and Chemometrics for Application in Doping in Sports This work describes a simple and cost-effective method for the quantitative determination of hydrochlorothiazide using a biomimetic sensor based on a carbon paste modified with hemin (a biomimetic catalyst of the P450 enzyme). The sensor was evaluated using cyclic voltammetry and amperometry for its electrochemical characterization and quantification, respectively. Amperometric analyses were carried out at 800 mV vs. Ag/AgCl (KCl(sat)), using a 0.1 mol L^-1 phosphate buffer solution at pH 8.5 as the supporting electrolyte. Optimization of the experimental parameters was performed using a multivariate methodology. The proposed method was successfully applied to the analysis of hydrochlorothiazide in spiked urine and pharmaceutical formulations, demonstrating that it is a reliable, feasible and efficient alternative method for the detection of hydrochlorothiazide in doping control and can also be used for quality control. Introduction Hydrochlorothiazide (HCTZ) is a diuretic drug belonging to the thiazide class. Diuretics are substances that increase urine flow and reduce the capacity of tubular absorption of sodium and water. Their use reduces blood volume, decreases blood return to the heart and consequently reduces cardiac output (lowering blood pressure). HCTZ has wide indications in the treatment of diseases such as renal edema, hypertension, hypercalcemia and nephrogenic diabetes insipidus [1]. It is absorbed from the gastrointestinal tract and is mostly eliminated by renal excretion, with an average of 65-72% being excreted in urine [2,3]. In sports, thiazide diuretics are used to flush out prohibited substances previously taken, through forced diuresis, and also, where weight classes are involved, to achieve acute and instantaneous weight loss [4]. For this reason, all diuretics are banned in sports by the World Anti-Doping Agency (WADA) [5]. Evidently, a sensitive and reliable analytical method to determine diuretics in urine and/or plasma is an important prerequisite in sport activities [6][7][8]. Classical analytical techniques, such as gas chromatography (GC) [9], high-performance liquid chromatography (HPLC) [10][11][12][13][14][15], capillary electrophoresis [16] and mass spectrometry [10], are very sensitive and standardized techniques for the determination of these substances prohibited in sport, and have been commonly used. These techniques have some disadvantages, such as complexity, extensive time consumption, high cost and the need for qualified technical workers. For these reasons, the development of a rapid, simple and alternative method for screening and quantitative detection of hydrochlorothiazide is still very important [35][36]. The artificial enzyme, which may be a biomimetic catalyst [37], carries out the same processes performed by the biological catalyst, but it is not necessary to follow the same mechanism. The artificial enzymes on the electrode surfaces will provide the same function as the corresponding enzyme for the detection of its substrates, [32][33][34][35][36][37] and are known as biomimetic sensors.
The aim of this work is to develop a sensitive and selective biomimetic sensor for the detection of HCTZ (Figure 1) based on the hemin complex using the amperometry technique. The modifier chosen for this work presents many advantages. Hemin is a well-known natural metalloporphyrin and can be used as a redox biomimetic catalyst, increasing the sensitivity of the electrochemical sensor [35]. Experimental design methodologies were used to optimize the measurement conditions, and the method was validated in urine samples and pharmaceutical formulations. Reagents and solutions The materials used in this work were HCTZ, chloroprotoporphyrin IX iron(III) (hemin), mineral oil and graphite powder (99.99% purity, particle size < 45 µm), all obtained from Sigma-Aldrich®. KH2PO4, HCl and ethanol were purchased from Synth®, and methanol was provided by J. T. Baker®. All buffer solutions were prepared with deionized water (Milli-Q Direct-0.3, Millipore®). The stock solution of hydrochlorothiazide was prepared in ethanol. Electrochemical measurements The electrochemical measurements were carried out using a potentiostat/galvanostat model Autolab PGSTAT30 (Autolab/Eco Chemie). The experimental conditions were controlled with the General Purpose Electrochemical System (GPES) software, and all measurements were conducted at 25 °C in a conventional electrochemical cell containing three electrodes: a commercial Ag/AgCl (KCl(sat)) reference electrode (Analion®), a platinum coil as the counter electrode and the modified carbon paste as the working electrode. The electroanalytical techniques used in this work were cyclic voltammetry and amperometry. Cyclic voltammetry was first performed to investigate the electrochemical behavior of hydrochlorothiazide on the sensor surface. Then, for the quantification of hydrochlorothiazide, the amperometric measurements were subsequently carried out at an adequate potential based on the results of the voltammetric experiments. Biomimetic sensor construction The modified carbon paste biomimetic sensor was prepared by mixing 14 mg of hemin with 86 mg of graphite powder and 1.0 mL of 0.1 mol L^-1 phosphate buffer solution (pH 7.0). The material was carefully homogenized with a stainless steel spatula and left to dry at room temperature. Mineral oil (Nujol) was added to the dry material to obtain the carbon paste. For comparative purposes, carbon pastes were prepared in the presence and in the absence of hemin. The paste was placed into the cavity of a glass tube (4 mm internal diameter, 1 mm depth), and a platinum slide was inserted for electrical contact with the paste. HPLC analyses Chromatographic analyses were performed using a Shimadzu® model 20A liquid chromatograph coupled with an SPD-20A UV-Vis detector, a SIL-20A autosampler and a DGU-20A5 degasser. The chromatography system was controlled by a microcomputer. A C18 column (250 mm × 4.6 mm, Shimadzu Shim-Pack CLC-ODS) was used in all chromatographic experiments. The mobile phase was a mixture of methanol, 2.5 × 10^-3 mol L^-1 phosphate buffer at pH 8.0 and water. Gradient elution chromatography was carried out using the following gradient steps of solvents A (water), B (phosphate buffer at pH 8.0) and C (methanol): 97:3:0 (A:B:C) for 8 min, then 20:80:0 for 8 min at a flow rate of 1.0 mL min^-1 with a sample injection volume of 10 µL. After the run was complete, the column re-equilibration time was 3 min. The absorbance of hydrochlorothiazide was measured at 220 nm [38].
Study of selectivity/interference The selectivity/interference of the sensor was investigated by means of the amperometric response to 13 drugs of different pharmacological classes. Stock solutions (1.0 × 10^-2 mol L^-1) of all drugs were prepared in water/ethanol (9:1 v/v) solution. Sensor application using pharmaceutical formulations Four pharmaceutical formulations containing 25 mg of HCTZ, from different batches and trademarks, were purchased in local drugstores in Araraquara City (Brazil) and analyzed by the proposed method: two were of the generic class, one of the reference class and the other of the "similar" class [39]. All tablets of each commercial trademark were weighed exactly and ground to a fine powder. A portion of this powder was accurately weighed and dissolved in 10.0 mL of ethanol (proposed method) or methanol (comparative method). These solutions were filtered, an aliquot of 2.5 mL of each filtrate was transferred to a 10.0 mL volumetric flask, and the volume was completed with water or methanol, respectively. Sensor application using urine samples Urine samples were collected from six volunteers. Samples A to E were from healthy people aged between 20 and 60 years; sample F was collected from a volunteer who consumed two doses of the diuretic amiloride daily for the control of arterial pressure. Each sample was enriched with hydrochlorothiazide. An aliquot of 10 mL of the sample was centrifuged for 10 min at 2000 rpm. The supernatant was diluted 2 times with water and the solution was transferred into the voltammetric cell for analysis without any further pretreatment. The standard addition method was used for the determination of HCTZ in the real samples, and the results were compared with a chromatographic method described in the literature [38]. Electrochemical characterization of HCTZ on the biomimetic sensor The electrochemical characterization of hydrochlorothiazide using the proposed hemin-based biomimetic sensor was performed by cyclic voltammetry. Figure 2 shows the cyclic voltammogram in the presence of HCTZ, which exhibits a behavior related to the irreversible oxidation of hydrochlorothiazide at a potential of 800 mV. The oxidation of HCTZ was studied through cyclic voltammetry experiments at varying scan rates to evaluate the electrochemical behavior of the analyte on the sensor surface. Analyzing the anodic peak current as a function of the square root of the scan rate, a linear dependence was observed (data not shown) in the range between 5 and 200 mV s^-1, indicating that the mass transport of HCTZ to the sensor surface is controlled by diffusion. To investigate the possibility of an electrocatalytic process for the HCTZ oxidation on the sensor surface, a graph was plotted of the scan-rate-normalized current (i υ^-1/2) against the scan rate (υ) (Figure 3), whose profile suggests that the oxidation of HCTZ is a chemical/electrochemical (CE) catalytic process, characterized by two steps, a chemical reaction coupled with an electrochemical process, [40] such as occurs in other biomimetic sensors [41,42].
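The linearity check described above (anodic peak current versus the square root of the scan rate) amounts to a simple linear regression. A small sketch of that computation follows; the currents are synthetic stand-ins for the unpublished data.

```python
# Sketch of the diffusion-control check: i_p should scale linearly with
# sqrt(scan rate). Synthetic, illustrative currents only.
import numpy as np

scan_rates = np.array([5, 10, 25, 50, 100, 200], dtype=float)  # mV s^-1
i_peak = np.array([1.1, 1.6, 2.5, 3.6, 5.1, 7.2])              # uA, illustrative

x = np.sqrt(scan_rates)
slope, intercept = np.polyfit(x, i_peak, 1)
r = np.corrcoef(x, i_peak)[0, 1]
print(f"i_p = {slope:.3f}*sqrt(v) + {intercept:.3f}, r = {r:.4f}")
# r close to 1 over 5-200 mV s^-1 supports diffusion-controlled mass transport.
```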
With the results described above, it was possible to propose a mechanism for the sensor (Figure 4). Initially, the chemical reaction (chemical step) between hydrochlorothiazide and a hemin molecule promotes the oxidation of hydrochlorothiazide and the reduction of the iron in the porphyrin. In the electrochemical step coupled with the chemical one, the re-oxidation of the metal ion (Fe2+ to Fe3+) in the complex occurs at the electrode surface, leading to the anodic current observed due to the presence of hydrochlorothiazide in the measurement cell. In the next step, in order to evaluate the sensor response for HCTZ quantification, amperometric experiments were carried out to optimize the analytical parameters influencing its response. The values selected were based on the highest sensitivity from analytical curves obtained under the diverse conditions studied. Optimization of variables Factorial and central composite designs are widely used in screening experiments where many variables involved in the reaction are considered in order to identify those with the greatest effects [43]. The optimization of the conditions of the proposed method was performed by multivariate analysis employing a central composite experimental design, after prior application of a fractional factorial design to select the variable values that maximized the analytical signal. Initially, a 2^4 full factorial design was carried out, which allowed four factors that could have an important effect on the current obtained with the proposed sensor to be studied simultaneously. The factors of interest were the amount of modifier in the paste (%, m/m), pH, buffer type and buffer concentration (mol L^-1). All variables that make up the system were studied at two levels, low (-1) and high (+1), requiring a total of 16 experiments, which were performed in triplicate and randomized to minimize any environmental effects. The highest and lowest values of each variable were defined based on our preliminary experiments. As a result of the full factorial design, a Pareto chart was drawn in order to visualize the estimated effects of the main variables. It can be observed from Figure 5 that only one variable (pH) was considered to be significant. Since the central composite design requires two variables, the percentage of modifier was chosen as the second variable, as it influences the cost of analysis with the proposed sensor. These variables (pH and percentage of modifier) were further studied by central composite design at five levels within the range -1.41 to +1.41, including four central points for statistical validity. Figure 6 shows the response surface estimated as a function of pH and percentage of modifier in the paste. It can be observed from Figure 6 that the optimal region was found and that the maximum response was achieved when the pH was 8.5 and the percentage of modifier was 14% m/m. The mathematical model that describes the surface of Figure 6 is the quadratic regression of equation 1: z = -2.21894 + 0.51456x - 0.03001x^2 + 0.01871y - 0.00072y^2 - 0.00031xy (1), where z is the response factor corresponding to the sensitivity value, x is the pH, and y is the percentage of modifier. In this model, the R^2 value was greater than 95%, implying that the equation fits the data well at the 95% confidence level for the sensitivity of the sensor.
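Equation 1 can be interrogated directly: its stationary point follows from setting both partial derivatives to zero. The sketch below does this numerically; small differences from the reported optimum (pH 8.5, 14% m/m) can arise from rounding of the published coefficients.

```python
# Evaluate the quadratic response-surface model of equation 1 and solve for
# its stationary point (a maximum, since the Hessian is negative definite).
import numpy as np

def z(x, y):
    """Sensitivity model: x = pH, y = modifier percentage (% m/m)."""
    return (-2.21894 + 0.51456 * x - 0.03001 * x**2
            + 0.01871 * y - 0.00072 * y**2 - 0.00031 * x * y)

# dz/dx = 0 and dz/dy = 0 give a 2x2 linear system in (x, y).
A = np.array([[2 * 0.03001, 0.00031],
              [0.00031, 2 * 0.00072]])
b = np.array([0.51456, 0.01871])
x_opt, y_opt = np.linalg.solve(A, b)
print(f"stationary point: pH = {x_opt:.2f}, modifier = {y_opt:.1f}% "
      f"(z = {z(x_opt, y_opt):.4f})")
```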
Using this multivariate optimization, it was possible to establish the parameters of the proposed biomimetic sensor for hydrochlorothiazide quantification. Thus, for further experiments the modified paste was prepared with 14% (m/m) of hemin, and the measurements were carried out in 0.1 mol L^-1 phosphate buffer at pH 8.5. Validation The proposed analytical method was validated in terms of the linear dynamic range, precision, accuracy, limit of detection (LOD), limit of quantification (LOQ), interferences and recovery values. These parameters were obtained using amperometry (Figure 7 inset). A typical calibration curve (Figure 7) showed good linearity in the concentration range between 2.0 × 10^-5 and 1.2 × 10^-4 mol L^-1. The value of the coefficient of determination (R^2) obtained was 0.993 ± 0.001, whereas the slope obtained from the analytical curve was 85671 ± 2457. The LOD and LOQ were calculated according to the Brazilian National Health Surveillance Agency (ANVISA) recommendations [44]. The LOD and LOQ were 5.8 × 10^-6 and 1.9 × 10^-5 mol L^-1, respectively. These values are comparable with values reported by other research groups for the electrocatalytic oxidation of HCTZ obtained by electrochemical methods (Table 1). The precision of the proposed method was evaluated using three inter-day and three intra-day replicates (n = 5 for each experiment) for a 1.0 × 10^-3 mol L^-1 HCTZ standard solution. The relative standard deviations for inter-day and intra-day measurements were 4.2 and 4.6%, respectively. These levels of precision were considered acceptable for the employed method. Study of the biomimetic characteristics of the sensor Since hemin is a recognized biomimetic catalyst of the oxidoreductase P450 enzyme [35], its biomimetic character was investigated under the previously optimized conditions. A hyperbolic profile was obtained (Figure 8 inset) beyond the linear range of concentrations, which is characteristic of enzymatic biosensors and biomimetic sensors that follow Michaelis-Menten kinetics [23]. A Lineweaver-Burk plot (Figure 8) was constructed in order to calculate the apparent Michaelis-Menten constant (K_MM^app) for hydrochlorothiazide on the proposed sensor. The value of 1.2 × 10^-3 mol L^-1 indicates a good affinity between the catalyst (hemin) and the analyte. A useful biomimetic sensor must show a high level of selectivity. In order to evaluate the selectivity/interference of the proposed sensor, its response was tested with the following drugs at a concentration of 1.0 × 10^-2 mol L^-1: bumetanide, furosemide, methyldopa, captopril, ketoprofen, ciprofloxacin, aminophylline, nifedipine, urea, uric acid, ascorbic acid and xanthine. Different pharmacological classes were studied because hydrochlorothiazide is generally used in association with other drugs. The substances urea, uric acid, ascorbic acid and xanthine were assessed as possible interferences because they are present in urine. The results obtained showed that the sensor was highly selective for hydrochlorothiazide; only bumetanide caused a small interference in the analysis, because it also belongs to the class of diuretics. Application The proposed sensor for HCTZ was applied to four samples of pharmaceutical formulations (Table 2) and six enriched samples of human urine (Table 3). The results obtained by the two methodologies were very close, including the values obtained for the "similar" pharmaceutical formulation, which showed a value of only 70% of the expected one, confirming the reliability of the proposed sensor.
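For reference, the Lineweaver-Burk estimate used above follows from the linearization 1/i = (K_MM/i_max)(1/[S]) + 1/i_max, so K_MM^app = slope/intercept of the fitted line. A minimal sketch, with synthetic currents generated from an assumed K_MM rather than the measured amperograms:

```python
# Lineweaver-Burk extraction of the apparent Michaelis-Menten constant.
# Currents are synthetic, generated from an assumed K_MM of 1.2e-3 mol L^-1.
import numpy as np

K_MM, i_max = 1.2e-3, 10.0                       # assumed "true" values
conc = np.array([2e-5, 5e-5, 1e-4, 3e-4, 1e-3])  # substrate, mol L^-1
current = i_max * conc / (K_MM + conc)           # Michaelis-Menten hyperbola

inv_s, inv_i = 1.0 / conc, 1.0 / current
slope, intercept = np.polyfit(inv_s, inv_i, 1)   # 1/i = (K_MM/i_max)/[S] + 1/i_max
print(f"K_MM(app) = {slope / intercept:.2e} mol L^-1")  # recovers ~1.2e-3
```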
The proposed method was compared statistically (using t-tests at the 95% confidence level) with the comparative method [38] and showed good agreement (Tables 2 and 3). The calculated t-values did not exceed the critical ones, indicating that there was no significant difference between the two methods in terms of precision and accuracy. The results confirm the viability of the proposed sensor as a reliable, simple and cheap method for HCTZ determination in quality control and in the combat against doping in sports. Conclusions The proposed sensor presents a viable alternative for the determination of hydrochlorothiazide in a simple, rapid, selective and sensitive way for application in doping control and quality control. It is also cost effective, safer and environmentally friendly compared with other methods reported in the literature. The technique is sufficiently precise and accurate to offer an attractive alternative in the control of sports doping. Figure 4. Schematic representation of a plausible mechanism of response of the proposed biomimetic sensor. Figure 5. Pareto diagram for visualizing the effects of the chemical variables on the amperometry measurements using a 2^4 factorial design. Figure 6. Central composite design response surface obtained for sensitivity values as a function of pH and percentage of modifier. Figure 7. Analytical curve of the proposed sensor. Measurements performed under optimized conditions. Inset: a typical amperogram obtained for successive additions of hydrochlorothiazide. Applied potential of 0.8 V. Figure 8. Lineweaver-Burk plot for the HCTZ oxidation catalyzed by the hemin-based sensor. Inset: hyperbolic profile of the sensor. Table 1. Comparison of the efficiency of some methods in the determination of HCTZ. LOD: limit of detection; DPP: differential pulse polarography; SWV: square wave voltammetry; DPV: differential pulse voltammetry. Table 2. Determination of hydrochlorothiazide in pharmaceutical formulations. (a) Declared value: 25 mg hydrochlorothiazide per tablet; (b) standard deviation of three replicates; (c) critical values of t at 95% confidence level, t_t = 4.303. Values obtained considering the value supplied by the comparative method as true. Table 3. Recoveries of hydrochlorothiazide added to urine samples. (a) Added value: 1.0 × 10^-3 mol L^-1; (b) standard deviation of three replicates; (c) critical values of t at 95% confidence level, t_t = 4.303. Values obtained considering the value supplied by the comparative method as true.
4,053
2015-10-01T00:00:00.000
[ "Chemistry" ]
Enhancing scalar productions with leptoquarks at the LHC The Standard Model (SM), when extended with a leptoquark (LQ) and right-handed neutrinos, can have interesting new implications for Higgs physics. We show that sterile neutrinos can induce a boost to the down-type quark Yukawa interactions through a diagonal coupling associated with the quarks and a scalar LQ of electromagnetic charge $1/3$. The relative change is moderately larger in the case of the first two generations of quarks, as they have vanishingly small Yukawa couplings in the SM. The enhancement in the couplings would also lead to a non-negligible contribution from the quark fusion process to the production of the 125 GeV Higgs scalar in the SM, though the gluon fusion always dominates. However, this may not be true for a general scalar. As an example, we consider a scenario with a SM-gauge-singlet scalar $\phi$ where an $\mathcal O(1)$ coupling between $\phi$ and the LQ is allowed. The $\phi q \bar{q}$ Yukawa couplings can be generated radiatively only through a loop of LQ and sterile neutrinos. Here, the quark fusion process can have a significant cross section, especially for a light $\phi$. It can even supersede the normally dominant gluon fusion process for a moderate to large value of the LQ mass. This model can be tested/constrained at the high luminosity run of the LHC through a potentially large branching fraction of the scalar to two jets. I. INTRODUCTION The discovery of a Standard Model-(SM-)like Higgs boson of mass 125 GeV at the LHC [1,2] and the subsequent measurements of its couplings to other SM particles have played a significant role in understanding the possible physics beyond the Standard Model (BSM). The Higgs couplings to the third generation fermions and the vector bosons have already been measured within 10%-20% of their SM predictions [3]. However, it is difficult to put strong bounds on the Yukawa couplings ($y_f$) of the first two generations of fermions. This is an interesting point since, at the LHC, a change in the light-quark Yukawa couplings opens up the possibility of light-quark contributions to the production of a Higgs. It motivates us to investigate whether it is possible to enhance the Yukawa couplings of the first two generations of quarks in some existing minimal extension of the SM. Therefore, in this paper, we study a simple extension of the SM augmented with a scalar leptoquark (LQ) of electromagnetic charge $1/3$ (generally denoted as $S_1$) and right-handed neutrinos. We find that the Yukawa couplings of the down-type quarks receive some new contributions and, for perturbative values of the free coupling parameters, can be moderately enhanced, especially for a SM-like Higgs ($h_{125}$). However, for a singlet Higgs ($\phi$), this enhancement could be more significant and could open up the $q\bar{q} \to \phi$ production channel. Here, we systematically study the production of both $h_{125}$ and $\phi$ at the 14 TeV LHC through the quark and gluon fusion channels in the presence of an $S_1$ and right-handed neutrinos. LQs are bosons that couple simultaneously to a quark and a lepton. They appear quite naturally in several extensions of the SM, especially in theories of grand unification like the Pati-Salam model [4], $SU(5)$ [5], or $SO(10)$ [6] (for a review, see [7]). Though, in principle, LQs can be either scalar or vector in local quantum field theories, the scalar states are more attractive, as the vector ones may lead to some problems with loops [8][9][10].
In recent times, LQ models (with or without right-handed neutrinos) have drawn attention for various reasons. For example, they can be used to explain different B-meson anomalies [11][12][13][14][15][16][17][18][19][20] or to enhance flavor violating decays of the Higgs and leptons like $\tau \to \mu\gamma$ and $h \to \tau\mu$ [21]. LQs may also play a role in accommodating the dark matter abundance [22,23] or in mitigating the discrepancy in the anomalous magnetic moment of the muon, $(g-2)_\mu$ [24][25][26]. Direct production of TeV scale right-handed neutrinos at the LHC can be strongly enhanced if one considers the neutrino mass generated at tree level via the inverse-seesaw mechanism within LQ scenarios [27]. An $S_1$-Higgs coupling can help to stabilize the electroweak vacuum [28]. The collider phenomenology of various LQs has also been extensively discussed in the literature [7,[29][30][31][32][33][34][35][36][37][38]. In the scenario that we consider, there are three generations of right chiral neutrinos in addition to the $S_1$. Generically, such a scenario is not very difficult to realize within grand unified frameworks. In fact, considering sterile neutrinos in this context is rather well motivated because of the existence of nonzero neutrino masses and mixings, which have been firmly established by now. It is known that an $\mathcal{O}(1)$ Yukawa coupling between the chiral neutrinos and TeV scale masses for the right-handed neutrinos can explain the experimental observations related to neutrino masses and mixing angles even at tree level if one extends the SM to a simple setup like the inverse seesaw mechanism (ISSM) [39][40][41]. Of course, this requires the presence of an additional singlet neutrino state $X$ in the model (see footnote 1). Interestingly, the production cross sections of sterile neutrinos at the LHC can be enhanced significantly if the ISSM is embedded in a LQ scenario [27]. Similarly, as indicated earlier, a $\nu_R$ state in a loop accompanied with $S_1$ may influence the production of a Higgs at the LHC and its decays to the SM fermions, especially to the light ones. Observable effects can be seen in scenarios with a general scalar sector that may include additional Higgs states, a TeV scale $\nu_R$, and $\mathcal{O}(1)$ Yukawa couplings between the left and right chiral neutrinos. In this paper, we shall explore this in some detail. Notably, the gluon fusion process (ggF) for producing a Higgs scalar gets boosted in the presence of a LQ [51]. Our study is general: it can be applied to both SM-like and BSM Higgs bosons. Specifically, we consider the following two cases: (a) A 125 GeV SM-like Higgs boson ($h_{125}$). We investigate how the light-quark Yukawa couplings can get some positive boosts. However, obtaining a free rise of the Yukawa parameters is not possible in our model (see footnote 2) and, as we shall see, for perturbative new couplings and TeV scale new physics masses, the boosts are moderate and lead to some enhancement of both production and decays of $h_{125}$ at the LHC. (b) A singlet scalar $\phi$ (BSM Higgs). We also study the production and decays of a scalar $\phi$ that is a singlet under the SM gauge group. Such a scalar has been considered in different contexts in the literature. For example, it may serve as a dark matter candidate. Similarly, a singlet scalar can help solve the so-called $\mu$ problem in the Minimal Supersymmetric Standard Model [55]. To produce such a singlet at the LHC, one generally relies upon its mixing with the doublet-like Higgs states present in the theory.
If the mixing is nonnegligible, then the leading-order production process turns out to be gluon fusion (though vector boson fusion may also become relevant in specific cases [56]). One may also consider the production of $\phi$ through cascade decays of the doublet Higgs state(s); however, such a process is generally much suppressed. Now, as we shall see, in the presence of a scalar LQ and sterile neutrinos we can have a new loop contribution to the quark fusion production process (qqF). The LQ also contributes to the gluon fusion process. In such a setup, the singlet Higgs can potentially be tested at the LHC via its decays to the light-quark states.

It is important to note that, in the presence of higher-dimensional operators, a large Yukawa coupling need not always induce a large correction to the corresponding quark mass. Such enhancements of the light-quark Yukawa couplings can even be probed at the LHC. An analysis of Higgs boson pair production suggests that in the future the High Luminosity LHC (HL-LHC) may offer a handle on this [53]. An updated analysis with 3000 fb$^{-1}$ of integrated luminosity suggests (though not in a fully model-independent way) that it may be possible to narrow down the d- and s-quark Yukawa couplings to about 260 and 13 times their SM values, respectively [54], i.e., $\kappa_d \lesssim 260$ and $\kappa_s \lesssim 13$ (1), where the Yukawa coupling modifier is defined as $\kappa_q = y_q / y_q^{\rm SM}$ (2).

The rest of the paper is organized as follows. In Sec. II, we introduce the model Lagrangian and discuss the new interactions. In Sec. III, we discuss the additional contributions to the production and decays of $h_{125}$. In Sec. IV, we discuss bounds on the parameters. In Sec. V, we investigate the case of the singlet scalar $\phi$. Finally, we summarize our results and conclude in Sec. VI.

Footnote 1: The ISSM or inverse-seesaw-extended supersymmetric models may lead to interesting phenomenology at low energy [42-50].

Footnote 2: This may be possible in an effective theory with free parameters. For example, Ref. [52] considers a dimension-6 operator of the form $f_d (H^\dagger H/\Lambda^2)(\bar q_L H d_R) + {\rm H.c.}$ (where $\Lambda \sim$ TeV) in addition to the SM Yukawa terms; the two contribute differently to the physical quark masses and the effective quark Yukawa couplings. Thus, by choosing $f_d$ one may raise the Yukawa parameters while keeping the physical masses unchanged, though this may require some fine-tuning among the parameters of the model.

II. THE MODEL: A SIMPLE EXTENSION OF THE SM

As explained in the Introduction, our model is a simple extension of the SM with chiral neutrinos and an additional scalar LQ of electromagnetic charge $1/3$, normally denoted as $S_1$. The LQ transforms under the SM gauge group as $(3, 1, 1/3)$, with $Q_{\rm EM} = T_3 + Y$. In the notation of Ref. [7], the general fermionic interaction Lagrangian for $S_1$ can be written as in Eq. (3), where we have suppressed the color indices. The superscript $C$ denotes charge conjugation; $\{i,j\}$ and $\{a,b\}$ are flavor and $SU(2)$ indices, respectively. The SM quark and lepton doublets are denoted by $Q_L$ and $L_L$, respectively. We then add the scalar interaction terms of Eq. (4) to the Lagrangian in Eq. (3). Here, $H$ denotes the SM Higgs doublet, and $M_\phi$ and $\bar M_{S_1}$ define the bare mass parameters for $\phi$ and $S_1$, respectively. We denote the physical Higgs field after electroweak symmetry breaking as $h \equiv h_{125}$. The singlet $\phi$ does not acquire any vacuum expectation value (VEV). Physical masses can then be obtained via Eq. (5), where the SM Higgs VEV is $v \simeq 246$ GeV.
We assume the mixing between $H$ and $\phi$, controlled by the dimensionless coupling $\mu$, to be small, to ensure that the presence of a singlet Higgs does not significantly affect the production and decays of $h_{125}$ via mixing. Notice that, unlike the dimensionless $\lambda$ or $\mu$, $\lambda'$ is a parameter of mass dimension one. We define the physical mass of $S_1$ to be $M_{S_1}$, as given in Eq. (6). The above Lagrangian simplifies a bit if we ignore the mixing among quarks and neutrinos (i.e., set $V_{\rm CKM} = U_{\rm PMNS} = I$). For example, we can expand Eq. (4) for the first generation as in Eq. (7), where we have abbreviated $(y^X_1)_{ii}$ as $y^X_i$. Since the flavor of the neutrino is irrelevant for the LHC, from here on we shall simply write $\nu$ to denote neutrinos.

The terms in Eq. (7) have the potential to boost some production/decay modes of $h$ and $\phi$. For example, they lead to an additional contribution to the effective $hgg$ coupling (see Fig. 1) [51]. Similarly, the decay $h \to d\bar d$, which is negligible in the SM, may now get a boost, as long as some of the new couplings are not negligible. The processes are illustrated in Figs. 1(a)-1(c) [the first diagram is independent of $v$, while the other two are of $\mathcal O(v^2)$], where the Higgs decays to a $d\bar d$ pair via a triangle loop mediated by $S_1$ and chiral neutrinos. There are two possibilities: the Higgs couples directly either to the chiral neutrinos or to the LQ. Since the contributions of these diagrams appear as corrections to $y_d$, it is easy to see that the fermion in the loop (i.e., the neutrino) has to undergo a chirality flip. In this case, the right-handed neutrino from the third term in Eq. (3) helps to obtain a nonzero contribution. One can, of course, imagine similar diagrams with charged leptons in the loops, contributing to the $h \to u\bar u$ (or any other up-type quark-antiquark pair) decay. However, the contributions of such diagrams would be small, as they are suppressed by the tiny charged-lepton Yukawa couplings, at least for the first two generations.

If we restrict ourselves to flavor-diagonal couplings in Eq. (3) (i.e., we allow only $i = j$ terms), only the top Yukawa $y_t$ would be modified appreciably. If we allow off-diagonal couplings, one can get contributions to the first two generations of up-type Yukawa couplings, namely $y_u$ and $y_c$. However, one needs to be careful, as off-diagonal LQ-quark-lepton couplings are constrained, particularly for the first two generations [7,57]. Hence, we consider only flavor-diagonal couplings and look only at the modification of the Higgs couplings to down-type quarks. Thus, one may always set $(y^{RR}_1)_{ij} = 0$ for all values of $i$ and $j$. This may even lead to a somewhat favorable situation in some cases for accommodating rare decays of fermions through LQ exchange.

Before we discuss the production and decays of $h_{125}$ and $\phi$ in our model, a few comments are in order. As we shall see in the next section, an order-1 $h\nu_L\nu_R$ coupling, i.e., $y_\nu \sim \mathcal O(1)$, and a TeV-scale mass for the $\nu_R$ are helpful for raising the Yukawa couplings of the light quarks. Typically, models like the ISSM can accommodate such a scenario. In the ISSM, an additional gauge-singlet neutrino, usually denoted by $X$, is assigned a Majorana mass term $\mu_X XX$, while $\nu_R$ receives a Dirac mass term of the form $M\bar\nu_R X$. For our purposes, we may assume that this singlet $X$ does not directly interact with any other particle we consider. However, since it interacts exclusively with the $\nu_R$ fields via $M$, it modifies the $\nu_R$ propagators.
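As a quick cross-check of the scalar-sector bookkeeping above, the sketch below expands the quartic portal term $\lambda (H^\dagger H)(S_1^\dagger S_1)$ in unitary gauge to recover both the trilinear $h S_1 S_1$ vertex and the $\mathcal O(v^2)$ mass shift. The reading $M_{S_1}^2 = \bar M_{S_1}^2 + \lambda v^2/2$ is our assumption based on Eqs. (5)-(6), since the displayed equations did not survive extraction.

```python
# Minimal symbolic sketch (assumption: the portal term is lambda*(H^dag H)(S1^dag S1)).
# Substituting H -> (0, (v + h)/sqrt(2)) generates the trilinear lambda*v*h vertex and
# the mass shift lambda*v^2/2, so M_S1^2 = Mbar_S1^2 + lambda*v^2/2 in our reading.
import sympy as sp

v, h, lam, S = sp.symbols('v h lambda S', positive=True)
HdH = (v + h) ** 2 / 2                # H^dagger H in unitary gauge
expr = sp.expand(lam * HdH * S ** 2)  # S**2 stands in for S1^dagger S1
print(expr)
# -> lambda*S**2*h**2/2 + lambda*S**2*h*v + lambda*S**2*v**2/2 (up to ordering):
# the h*v term is the trilinear vertex; the v**2/2 term shifts the S1 mass.
```

This also makes explicit the relative factor of 2 between the $\lambda v h$ vertex and the $\lambda v^2/2$ mass term that the text returns to when discussing the external-leg corrections.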
In this case, it is useful to define a "fat $\nu_R$ propagator" [58] that includes all the effects of the sequential insertions of the $X$ field. For simplicity, we do not display this interaction and the mass term of the right-handed neutrinos explicitly in Eq. (3). One can explicitly consider an ISSM in the backdrop of our analysis and easily accommodate fat $\nu_R$ propagators without any change in our results.

III. CONTRIBUTION TO THE PRODUCTION AND DECAYS OF $h_{125}$

In this section, we first look into the additional contributions to the Yukawa couplings of the down-type quarks with $h_{125}$. The relevant interactions can be read off from Eq. (7). We shall then discuss the role of these loops in the production of $h_{125}$ and its decays to the down-type quarks. In this paper, we compute all the loop diagrams using dimensional regularization and Feynman parametrization and then match the results onto Passarino-Veltman (PV) integrals [59]. We evaluate the PV integrals with two publicly available packages, FeynCalc [60] and LoopTools [61].

A. Correction to Yukawa couplings of the down-type quarks

In our calculation, we assume that left-handed neutrinos are massless while right-handed ones are massive. Also, since we consider Higgs decays to down-type quarks only, we can safely ignore the quark masses ($m_q = 0$) and set $m_q^2 = 0$ in the loop functions (see Fig. 1). The correction to $y_d$ coming from the diagram shown in Fig. 1(a), $\tilde y^{(a)}$, is given by Eq. (8), where $P_{L/R}$ are the chirality projectors. From here on, we shall suppress the generation index of the leptoquark couplings and simply write $g^2_i$ as $g^2$. After Feynman parametrization and dimensional regularization, we obtain the regularized form given in Eqs. (9)-(11). [Fig. 1 caption fragment: see Eq. (7); the diagrams for s- and b-quarks are similar to the last two diagrams. Note that we absorb a factor of $1/\sqrt{2}$ into the definition of the Yukawa couplings in the mass basis, i.e., we write $y_\nu$ instead of $y_\nu/\sqrt{2}$.]

The divergent piece at $\mathcal O(v^0)$, $\Delta_\epsilon = \frac{2}{\epsilon} - \gamma + \ln(4\pi) + \mathcal O(\epsilon)$, is canceled by a similar contribution from diagrams with a bubble on an external quark line. The bubble in the quark lines is obtained by replacing the Higgs field in Fig. 1(a) with $v$ and amputating the external quark lines; see Fig. 2(a). This extra contribution is given in Eq. (12). Putting these two together, we obtain the finite correction $y^{(a)}$. Proceeding along the same lines, we obtain the correction $y^{(b)}$ from the diagram in Fig. 1(b). Similarly, the correction term corresponding to Fig. 1(c) can be obtained as in Eq. (15); it is finite, like $y^{(b)}$. The last term of Eq. (15) is actually canceled by the $\mathcal O(v^2)$ correction to the external quark propagators, as shown in Fig. 2(b). This is similar to the cancellation at $\mathcal O(v^0)$ in $y^{(a)}$; in this case, the bubble is obtained by replacing the Higgs field in Fig. 1(b) with $v$ and amputating the external quark lines. However, one has to be careful with the factors here. After electroweak symmetry breaking, one can expand the Higgs-$S_1$ interaction term in Eq. (4): the $\lambda v h (S_1^\dagger S_1)$ term contributes to $y^{(b)}_q$, but the propagator correction comes from the $\lambda v^2 (S_1^\dagger S_1)/2$ term, i.e., with a different prefactor. The $\mathcal O(v^2)$ external-leg correction to the Yukawa coupling is proportional to $\lambda v^2 (S_1^\dagger S_1)/2$ and can be written as in Eq. (17). Once this is added, we obtain the finite correction $y^{(c)}$. [Fig. 2 caption: $\mathcal O(v^2)$ corrections to the quark propagator from loop diagrams mediated by $S_1$ and chiral neutrinos. The couplings are $g_L = y^{LL}_1$ and $g_R = y^{X}_1$; see Eq. (7). The diagrams for s- and b-quarks are similar. These corrections are independent of the external momentum $p$ and hence contribute as mass corrections.]
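The authors evaluate the PV integrals with FeynCalc and LoopTools. Purely as an illustration of what such an evaluation involves, the following sketch computes the scalar triangle $C_0$ at vanishing external momenta from its Feynman-parameter representation and checks it against the analytic equal-mass limit $C_0 = -1/(2m^2)$. The masses are hypothetical benchmarks, not the paper's fit points.

```python
# Illustrative sketch (not the paper's actual amplitude): numerical evaluation of the
# scalar Passarino-Veltman triangle at zero external momenta,
#   C0(0,0,0; m1^2,m2^2,m3^2) = -Int_0^1 dx Int_0^{1-x} dy / (x*m1^2 + y*m2^2 + (1-x-y)*m3^2).
# In practice one would use LoopTools/FeynCalc, as done in the text.
from scipy.integrate import dblquad

def C0_zero_momenta(m1sq, m2sq, m3sq):
    """Scalar PV triangle C0(0,0,0; m1^2, m2^2, m3^2) via Feynman parameters."""
    integrand = lambda y, x: -1.0 / (x * m1sq + y * m2sq + (1.0 - x - y) * m3sq)
    val, _ = dblquad(integrand, 0.0, 1.0, lambda x: 0.0, lambda x: 1.0 - x)
    return val  # in GeV^-2 if masses are in GeV

# Hypothetical benchmark masses: 2 TeV leptoquark, 1 TeV right-handed neutrino.
MS1, MnuR = 2000.0, 1000.0
print(C0_zero_momenta(MS1**2, MS1**2, MnuR**2))
# Sanity check against the equal-mass analytic limit C0 = -1/(2 m^2):
print(C0_zero_momenta(MS1**2, MS1**2, MS1**2), -1.0 / (2.0 * MS1**2))
```

The same parametric $1/M^2$ decoupling visible here is what drives the fall-off of the loop-induced Yukawa corrections with $M_{S_1}$ discussed below.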
Therefore, the effective $hd\bar d$ coupling can be written as $y^{\rm eff}_d = y^{\rm SM}_d + \delta y$ [Eq. (19)], where $y^{\rm SM}_d = m_d/v$ is the d-quark Yukawa coupling in the SM (with $m_d$ the physical mass) and $\delta y$ is the sum of the loop corrections obtained above [Eqs. (12)-(17)]. This is similar to the case in which the SM is augmented with dimension-6 operators [52]. Equation (19) can also be written in terms of PV integrals [Eq. (20)], where $D_0$, $C_0$, and $B_0$ are the four-point, triangle, and bubble integrals, respectively. The expressions for the s- and b-quarks are exactly the same as the above, with $m_d$ and $g^2 = g^2_i$ suitably modified.

B. Relative couplings

To get some idea of how the extra contributions from the loops depend on the parameters, we use the Yukawa coupling modifiers [Eq. (2)]. Since we ignore the masses of the quarks, $\delta y$ is independent of the flavor of the down-type quark to which the Higgs couples, as long as $g^2 y_\nu$ remains the same. Hence, $\delta y/y^{\rm SM}_q$ should scale as $1/y^{\rm SM}_q \sim 1/m_q$. Using this and Eq. (20), we see that $\kappa_q$ depends linearly on $1/m_q$, $\lambda$, and the combination $g^2 y_\nu$; but, a priori, its dependence on $M_{S_1}$ or $M_{\nu_R}$ is not so simple. In Table I we list the individual contributions for some illustrative choices of $M_{\nu_R}$ and $M_{S_1}$. With $g^2 y_\nu = \lambda = 1$, we see that there is some cancellation between these contributions. Note that this choice of couplings is not restricted by the rare decays [7,57]. [Table I caption: contributions of the diagrams in Fig. 1 to the Yukawa couplings obtained from Eq. (19) or (20), for some illustrative choices of the right-handed neutrino mass $M_{\nu_R}$ and the leptoquark mass $M_{S_1}$, keeping $g^2 y_\nu = 1$ and $\lambda = 1$.]

In Fig. 3, we show the variations of $\kappa_d$, $\kappa_s$, and $\kappa_b$ for $500 \le M_{S_1} \le 3000$ GeV for three different choices of $M_{\nu_R}$. As expected, the lightest of the three quarks, the d-quark, shows the maximum deviation of $\kappa_q$ from unity. The b-quark coupling hardly moves from its SM value in the considered parameter range. However, all the deviations are well within the ranges allowed by Eq. (1).

C. Decays of $h_{125}$

As mentioned before, we use $h$ and $h_{125}$ interchangeably to denote the 125 GeV SM-like Higgs boson. In the SM, the total decay width of the 125 GeV Higgs boson is computed to be $\Gamma^{\rm SM}_h = 4.07 \times 10^{-3}$ GeV, with a relative theoretical uncertainty of $+4.0\%/-3.9\%$ [62]. Because of the additional loop contribution, the total decay width increases in our model. We can use Eq. (19) or (20) to compute the partial decay width for the $h \to q\bar q$ decay in the rest frame of the Higgs, Eq. (22), where the invariant amplitude $i\mathcal M_{\rm tot}$ is proportional to $y^{\rm eff}_q$ and $N_c = 3$ accounts for the colors of the quark. Similarly, the $h \to gg$ partial width also gets a positive boost in the presence of $S_1$ [7]; the relevant diagrams are shown in Figs. 1(d) and 1(e). In our model, the $h \to gg$ partial width can be expressed as in Eq. (23) [7,63], with the relevant one-loop functions given in Eqs. (24)-(26). Now, Eqs. (22) and (23) can be used to obtain the total width in our model, Eq. (27). Ideally, we should also include corrections to the partial widths of other decay modes, like $h \to \gamma\gamma$ or three-body decays, in the above expression; however, since their contributions to the total width are relatively small, we ignore them. From Eqs. (22) and (27), we compute the new branching ratios (BRs) of the $h \to q\bar q$ modes in our model, Eq. (28).

In Fig. 4, we show BR($h \to q\bar q$) for different quarks for $g^2 y_\nu = 1$ (for all generations) and $\lambda = 1$. Equation (22) indicates BR($h \to q\bar q$) $\sim |y^{\rm SM}_q + \delta y|^2$; i.e., it increases with $y^{\rm SM}_q$ (recall that, for $g^2 y_\nu = 1$, $\delta y$ is the same for all the quarks).
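To make the width and branching-ratio bookkeeping of Eqs. (19), (22), (27) and (28) concrete, here is a minimal numerical sketch. It assumes the standard massless-quark two-body width $\Gamma(h \to q\bar q) = N_c m_h |y^{\rm eff}_q|^2/(8\pi)$, which is our normalization convention rather than a formula quoted from the paper, and the value of $\delta y$ is a made-up placeholder.

```python
# Minimal sketch of the width-rescaling logic, not the authors' full Eqs. (22)-(28).
import math

v, m_h = 246.0, 125.0        # GeV
Gamma_SM_total = 4.07e-3     # GeV, SM total width quoted in the text
N_c = 3

def gamma_qq(y_eff):
    # Massless-quark two-body width with our normalization y_q^SM = m_q / v.
    return N_c * m_h * abs(y_eff) ** 2 / (8.0 * math.pi)

m_d = 4.7e-3                 # GeV, illustrative d-quark mass
y_sm = m_d / v
delta_y = 5.0e-5             # hypothetical flavor-universal loop correction

# New total width: remove the SM q-qbar piece, add the corrected one, Eq.-(27)-style.
Gamma_new = Gamma_SM_total - gamma_qq(y_sm) + gamma_qq(y_sm + delta_y)
kappa_d = (y_sm + delta_y) / y_sm             # Yukawa modifier, Eq. (2)
BR_dd = gamma_qq(y_sm + delta_y) / Gamma_new  # Eq.-(28)-style branching ratio
print(f"kappa_d = {kappa_d:.2f}, BR(h->dd) = {BR_dd:.2e}")
```

Because $\delta y$ is flavor universal while $y^{\rm SM}_q \propto m_q$, rerunning the same lines with $m_s$ or $m_b$ immediately reproduces the hierarchy $\kappa_d > \kappa_s > \kappa_b$ described in the text.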
Hence, we expect BR($h \to b\bar b$) > BR($h \to s\bar s$) > BR($h \to d\bar d$), as $y^{\rm SM}_q$ increases with the mass of the quark. This can be seen in Fig. 4. However, even for order-1 $y_\nu$ couplings and TeV-scale $S_1$ and $\nu_R$, the relative shift of the $h \to b\bar b$ branching ratio from its SM value is not large (as expected from Fig. 3). For the lighter quarks, the branching ratios become much larger than their SM values, even though they remain small compared to other decay modes like $h \to b\bar b$. The $h \to gg$ branching fraction is almost unaffected by the variation of $M_{S_1}$, as the SM contribution always dominates.

D. Production of $h_{125}$

For a quantitative understanding of the quark- and gluon-fusion production of $h_{125}$, we normalize the fusion cross section with respect to its SM value. We define the "normalized production" factor $\mu_F$ as in Eq. (29). It is a function of the BSM parameters and measures the relative enhancement of the production cross section in the fusion channel; the subscript "F" stands for the fusion channel. In the denominator, we ignore $\sigma(b\bar b \to h)_{\rm SM}$, as it is much smaller than $\sigma(gg \to h)_{\rm SM}$ because of the small b-quark parton distribution function (PDF) in the initial state.

In our model, the leading-order gluon fusion cross section at parton level can be expressed as in Eq. (30) [7,63-65], where $\Gamma_{h \to gg}$ is given in Eq. (23). Similarly, the quark fusion cross section at parton level can be expressed in terms of $\Gamma_{h \to q\bar q}$ from Eq. (22), as in Eq. (31) [62]. Naively, one would expect $\hat\sigma(q\bar q \to h)$ for the heavier quarks to be larger than for the lighter ones, as $\Gamma_{h \to q\bar q}$ is proportional to the square of $y^{\rm eff}_q$ (which increases linearly with $m_q$). However, there is a trade-off between $m_q$ and the PDFs, as the heavier-quark PDFs are suppressed compared to those of their lighter counterparts. We compute $\sigma(q\bar q \to h)$ at the 14 TeV LHC using the NNPDF2.3QED LO PDF set [66]. For the gluon channel, we use the next-to-next-to-leading-order plus next-to-next-to-leading-logarithmic QCD prediction for the 14 TeV LHC, which gives $\sigma(gg \to h)_{\rm SM} \simeq 49.47$ pb [67]. We use these results to compute $\mu_F$. We show $\mu_F$ as a function of $M_{S_1}$ in Fig. 5(a), assuming $g^2 y_\nu = 1$ for all the generations and $\lambda = 1$; for this plot, we set $M_{\nu_R} = 1$ TeV. Since the gluon fusion cross section is much larger than the quark fusion ones, $\mu_F$ is largely insensitive to $M_{\nu_R}$.

To get an idea of the contributions of the different modes to $\mu_F$, we define the two ratios of Eqs. (32) and (33); the difference between them lies in the interference between the SM and BSM contributions. We show these ratios in Figs. 5(b) and 5(c) for $\sigma(d\bar d \to h)$. Of course, because of the large gluon PDF, $\sigma(gg \to h)$ is larger than any quark fusion cross section.

IV. LIMITS ON PARAMETERS

Any increase in either the production or the decays of $h_{125}$ is constrained by the existing measurements [3] (see also [68] for future projections). However, we see from Figs. 4 and 5 that the parameters we consider, i.e., $g^2_i = y^{LL}_i y^{X}_i = 1$, $y^{RR}_i = 0$, $\lambda = 1$, $y_\nu = 1$, and TeV-scale $M_{S_1}$ and $M_{\nu_R}$ for all three generations, are quite consistent with the present and projected $h_{125}$ limits. Concerning the bounds on $S_1$: in our parameter region of interest, the LQ $S_1$ can decay to all the SM fermions. According to Eqs. (4) and (7), a heavy $S_1$ has six decay modes for $M_{S_1} \le M_{\nu_R}$, $S_1 \to \{ue, c\mu, t\tau, d\nu, s\nu, b\nu\}$ (34), with roughly equal BR ($\sim 1/6$) in each mode (if we ignore the differences among the masses of the decay products in the different modes).
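The hadronic quark-fusion rate is a narrow-width convolution of the partial width with the PDFs. The sketch below shows the structure of that convolution with a toy PDF shape; the partonic normalization $\hat\sigma = (4\pi^2/9)(\Gamma_{q\bar q}/m_h)\,\delta(\hat s - m_h^2)$ is our assumed spin/color-averaged form, and the paper's Eq. (31) together with the NNPDF sets is what the actual numbers rely on.

```python
# Schematic narrow-width sketch of sigma(q qbar -> h); toy PDF, NOT a fitted set.
import math
from scipy.integrate import quad

GEV2_TO_PB = 3.894e8  # 1 GeV^-2 expressed in pb

def f_toy(x):
    # Hypothetical valence-like shape f(x) ~ x^-0.7 * (1 - x)^4, for illustration only.
    return x ** -0.7 * (1.0 - x) ** 4

def sigma_qq_to_h(Gamma_qq, m_h, sqrt_s):
    """Narrow-width resonant cross section with our assumed (4*pi^2/9) normalization."""
    s = sqrt_s ** 2
    tau = m_h ** 2 / s
    # Parton luminosity: integral of f_q(x) * f_qbar(tau/x) / x over x in [tau, 1];
    # the factor 2 covers the two assignments q(x1)qbar(x2) and qbar(x1)q(x2).
    lum, _ = quad(lambda x: f_toy(x) * f_toy(tau / x) / x, tau, 1.0)
    return (4.0 * math.pi ** 2 / 9.0) * (Gamma_qq / m_h) * (2.0 * lum) / s

# Illustrative inputs: Gamma(h->qq) = 1e-5 GeV, m_h = 125 GeV, sqrt(s) = 14 TeV.
print(sigma_qq_to_h(1e-5, 125.0, 14000.0) * GEV2_TO_PB, "pb (toy numbers)")
```

The competition described in the text is visible here: a heavier quark raises $\Gamma_{q\bar q}$ through its larger Yukawa, but a steeper-falling PDF shrinks the luminosity integral.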
The LHC has put exclusion bounds on scalar leptoquarks in the light-leptons + jets ($\ell\ell jj/\ell\nu jj$) [69-71] and $bb\nu\nu/tt\tau\tau$ [72-74] channels (see also [75,76]). The strongest exclusion limit ($\sim 1.5$ TeV) comes from the $\ell\ell jj$ channel, for 100% BR in the $S_1 \to \ell j$ decay. These searches target pair production of scalar leptoquarks, where the observable signal cross sections are proportional to the square of the BR involved. Hence, in our case, the limit on $S_1$ becomes much weaker: a conservative estimate indicates that the limit drops below a TeV when the BR decreases to about 1/6. Also, pair production of leptoquarks is QCD driven and thus cannot be used to put limits on the fermion couplings. The CMS Collaboration has performed a search with the 8 TeV data for single production of scalar leptoquarks that excludes masses up to 1.75 TeV for an order-1 coupling to the first generation [77]. However, even that limit comes down below 1 TeV once we account for the reduction in the BR. A recast of the CMS 8 TeV data for the first generation ($eejj/e\nu jj$) nevertheless indicates that, for order-1 $g_{L/R}$, $M_{S_1} \gtrsim 1.1$ TeV [78] (see footnote 3). To be on the conservative side, we may use $M_{S_1} \gtrsim 1.5$ TeV as a mass limit for $S_1$ with $g^2 y_\nu = 1$ for all generations. If, however, $M_{S_1} > M_{\nu_R}$, the LQ can decay to three more final states with right-handed neutrinos; we would then expect a further reduction of the limits on $S_1$ [27]. Moreover, specifically for first-generation fermions, the choices of $g_L$ and $g_R$ are restricted further: the atomic parity violation measurements in $^{133}$Cs [79] put a strong constraint on them. Typically, all existing constraints can be satisfied easily for $M_{S_1} \gtrsim 2$ TeV and $g^2 \approx 1$ with $g_L = g_R$.

Footnote 3: Recasting limits from the single-production searches is trickier than for the pair-production case because here the production processes also depend on the unknown couplings. Even though the parton-level cross section scales easily with these couplings, one cannot account for the PDF variation for different quarks in such a simple manner. Since we are interested in a conservative limit, we have ignored the PDF variation to obtain this number.

V. THE SINGLET HIGGS $\phi$

Unlike the case of $h_{125}$, the parameters of the singlet scalar defined in Eq. (4) are largely unconstrained. Generally, to probe a heavy BSM scalar, its decays to fermion pairs like $\tau\tau$ or to the massive gauge bosons are assumed to be promising; for a singlet scalar, however, these decay modes lose importance. Also, most of the BSM singlet scalar searches rely on mixing of the singlet state with the doublet one(s), either $h_{125}$ or other heavy BSM Higgs states. In our model, by contrast, $\phi$ can be produced from, and decay to, a pair of gluons or quarks via a loop of $S_1$ and neutrinos without relying, in general, on the mixing of $\phi$ with the doublet Higgs. Hence, its phenomenology at a hadron collider is different from what is generally considered in the literature.

A. Effective coupling

We first calculate the effective couplings of $\phi$ to the light quarks, as we did for $h_{125}$. The $\phi q\bar q$ effective coupling $Y^{\rm eff}_q$ (where $q$ is any down-type quark) receives contributions from diagrams like the one shown in Fig. 6, which is similar to the one shown in Fig. 1(b). Because of the singlet nature of $\phi$, the tree-level $\phi\nu_L\nu_R$ coupling does not exist, so in this case there is no diagram like the one shown in Fig. 1(a). Proceeding as before, we obtain the expression in Eq. (35); written in terms of PV integrals, it becomes Eq. (36).
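The rescaling of the pair-production limits quoted earlier in this section follows from the signal scaling as BR$^2$. The toy calculation below, with a hypothetical exponentially falling pair-production cross section (not the real QCD prediction), reproduces the qualitative statement that a $\sim 1.5$ TeV limit at 100% BR drops below a TeV at BR $\sim 1/6$.

```python
# Back-of-the-envelope sketch of the BR^2 rescaling of a pair-production mass limit.
# sigma(M) = sigma0 * exp(-M/m0) is an invented stand-in for the QCD cross section;
# sigma0, m0 and sigma_excl are tuned only to land near the benchmarks in the text.
import math

def mass_limit(BR, sigma0=22.0, m0=150.0, sigma_excl=1.0e-3):
    """Solve sigma0*exp(-M/m0) * BR**2 = sigma_excl for M (pb and GeV units)."""
    return m0 * math.log(sigma0 * BR ** 2 / sigma_excl)

print(mass_limit(BR=1.0))        # ~1500 GeV benchmark at 100% BR
print(mass_limit(BR=1.0 / 6.0))  # ~960 GeV: the limit drops below a TeV
```

The key point is structural rather than numerical: because the excluded signal scales as $\sigma \times {\rm BR}^2$, spreading the BR over six modes relaxes the cross-section bound by a factor of $\sim 36$.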
We present our results in Fig. 7, which shows the variation of $Y^{\rm eff}_q$ as a function of $M_{S_1}$ for two values of $M_{\nu_R}$ and $M_\phi = 500$ GeV. Here, $\lambda'$ is a dimensionful parameter [see Eq. (4)] that can be taken to be of the order of the largest mass in the model spectrum. The coupling $Y^{\rm eff}_q$ decreases as $M_{S_1}$ increases. Since $\phi$ has only loop-level interactions with the SM quarks, the effective coupling is the same for all three generations of down-type quarks for the same value of $g^2 y_\nu$.

B. Branching ratios and cross sections

The expressions for the partial decay widths and the production cross section of $\phi$ are essentially identical to the ones for $h_{125}$ if we replace $y^{\rm eff}_q \to Y^{\rm eff}_q$ and $m_h \to M_\phi$; the partial decay widths then take the form of Eq. (37). The Feynman diagrams for the $\phi \to \gamma\gamma$ process are similar to those in Figs. 1(d) and 1(e), with the gluons replaced by two photons and the $\alpha_s$ coupling substituted by the $\alpha_{\rm em}$ coupling. As earlier, we can express the cross sections in these modes in terms of the partial widths: in the $gg$ channel, Eq. (38), and in the $q\bar q$ channel, Eq. (39). The total width of $\phi$ is given by Eq. (40).

We now present our numerical results. We begin with Fig. 8, where we show the variation of the BRs of the different decay modes of $\phi$. For the most part, the curves for the quarks overlap, as $\Gamma_{\phi \to q\bar q}$ is essentially independent of $m_q$ [see Eq. (37)]. Here, without any singlet-doublet mixing, $\phi$ can decay only to down-type quark, gluon, or photon pairs. As a result, when $M_{S_1}$ increases, BR($\phi \to gg/\gamma\gamma$) decreases and BR($\phi \to q\bar q$) goes up if $M_{\nu_R}$ is held fixed. We see that for a 2 TeV $S_1$, $\phi \to q\bar q$ is the dominant decay mode for $g^2 y_\nu = 1$ and $M_{\nu_R} = 1$ TeV (the BRs are independent of $\lambda'$).

In Figs. 9(a) and 9(b), we plot the cross sections of $\phi$ in different decay modes at the 14 TeV LHC, considering both the gluon and quark fusion processes: we show the production cross section times the branching ratio for all the modes against $M_{S_1}$ and $M_\phi$. Note that, in the parameter space that we consider, we find $\Gamma_\phi \ll M_\phi$, which makes the narrow-width approximation used in our computation a valid one. Here, we use the same set of PDFs as in the $h_{125}$ case. To gain some intuition about the strengths of the different production channels, we scale the cross sections by $\sigma(gg \to h_{M_\phi})$, where $h_{M_\phi}$ represents a BSM Higgs of mass $M_\phi$ whose couplings to the SM particles are the same as those of $h_{125}$; its production cross section in the gluon fusion mode can be computed from Eq. (30). We then define the scaled cross sections $R_\phi$ as in Eq. (41).

In Figs. 10(a) and 10(b), we show the variation of $R_\phi$ with $M_{S_1}$ and $M_\phi$. Recall that a SM-singlet $\phi$ cannot be produced at tree level; the leading-order contribution to $\sigma(ii \to \phi)$ starts at one-loop level. In Fig. 10(b), we observe a crossover where qqF becomes the dominant process over ggF, i.e., $\sigma(q\bar q \to \phi) > \sigma(gg \to \phi)$, for a fixed value of the LQ mass ($= 2$ TeV). This is not a generic pattern and can be understood from Eqs. (37) and (38) by varying a few of the free parameters. For example, for a relatively large value of the LQ mass ($M_{S_1} \ge 2$ TeV), one may obtain $\Gamma_{\phi \to gg} \le \Gamma_{\phi \to q\bar q}$ when $M_\phi$ is not large, i.e., $M_\phi \le 250$ GeV. In this case, the quark fusion process gives the leading contribution. If one increases $M_{S_1}$ further, $\Gamma_{\phi \to gg}$ decreases more rapidly than $\Gamma_{\phi \to q\bar q}$ with $M_\phi$, ensuring that the $q\bar q \to \phi$ process remains the dominant one over a larger range of $M_\phi$. For example, if one sets $M_{S_1} \sim 3$ TeV, we find that quark fusion remains dominant for $M_\phi \le 350$ GeV.
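The ggF/qqF crossover discussed above can be illustrated with purely parametric toy scalings for the two widths. The coefficients below are invented to roughly mimic the quoted crossover points ($M_\phi \lesssim 250$ GeV at $M_{S_1} = 2$ TeV) and make no claim to the actual loop functions of Eqs. (37)-(38).

```python
# Toy illustration of the crossover logic only; the real comparison uses the paper's
# Eqs. (37)-(38) convolved with PDFs. We model Gamma(phi->gg) ~ c_g * Mphi^3 / MS1^4
# (heavy-LQ loop) against Gamma(phi->qq) ~ c_q * Mphi * (1 TeV / MS1)^2 (loop Yukawa);
# c_g and c_q are hypothetical constants.
from scipy.optimize import brentq

def crossover_mass(MS1, c_g=1.0, c_q=1.6e-8):
    diff = lambda Mphi: c_g * Mphi ** 3 / MS1 ** 4 - c_q * Mphi * (1.0e3 / MS1) ** 2
    return brentq(diff, 50.0, 2000.0)  # root: where the two toy widths are equal

for MS1 in (2000.0, 3000.0):
    print(f"MS1 = {MS1:.0f} GeV -> qqF dominates below Mphi ~ {crossover_mass(MS1):.0f} GeV")
```

The sketch captures the stated trend: because the $gg$ width grows faster with $M_\phi$ but decouples faster with $M_{S_1}$, raising the LQ mass widens the $M_\phi$ window in which quark fusion wins.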
However, the relative contributions are insensitive to the value of $\lambda'$ chosen. [Fig. 9 legend residue: $\sigma \times$ BR for the $s\bar s$, $b\bar b$, and $gg$ modes, panel (a).]

C. Prospects at the LHC

It is clear that the scalar $\phi$ in our model would offer some novel and interesting phenomenology at the LHC. However, a detailed analysis is beyond the scope of this paper; instead, we simply make a few comments on its prospects. It may be possible to put a bound on $\sigma_\phi(M_\phi)$ from the dijet resonance searches. For example, the one performed by the CMS Collaboration at the 13 TeV LHC [80] indicates that $\sigma_\phi \times$ BR($\phi \to gg$) has to be less than about 1 pb for $M_\phi = 1$ TeV and about 20 pb for $M_\phi = 600$ GeV. Similarly, in the quark mode, $\sigma_\phi \times$ BR($\phi \to d\bar d + s\bar s + b\bar b$) has to be less than about 1 pb for $M_\phi = 1$ TeV and about 5 pb for $M_\phi = 600$ GeV. Figure 9(b) (which is obtained for the 14 TeV LHC) indicates that our choice of parameters easily satisfies these limits. Future searches in this channel would put stronger bounds on $\sigma_\phi$ and/or $M_\phi$. The LHC has also searched for such a state in the $\gamma\gamma$ final states, though the present bound from this channel is weaker [81] than the dijet one. In our model, this channel is not at all promising, as can be seen in Figs. 9 and 10; even the HL-LHC might not be able to probe the singlet state in the $\gamma\gamma$ mode.

VI. CONCLUSION

In this paper, we have considered a simple extension of the SM in which we have a scalar LQ ($S_1$) with electromagnetic charge $1/3$ and heavy right chiral neutrinos. While the presence of both BSM particles may have its origin in a grand unified framework, we have simply considered their interactions at the TeV scale. The motivation for considering such an extension comes from the fact that it can accommodate Yukawa couplings of the down-type quarks that are enhanced compared to the SM expectations. We have shown that the LQ and the right chiral neutrinos can enhance the production cross section of the SM-like Higgs through a triangle loop. We have calculated the one-loop contributions to the Yukawa couplings of the down-type quarks and have found the enhancements (which we have parametrized by the usual $\kappa_{d,s,b}$) for order-1 new couplings and TeV-scale new particles. We have then further extended our analysis to include a SM-singlet scalar $\phi$ in the model, with a dimension-1 coupling to $S_1$ but no tree-level mixing with the SM-like Higgs. We have found that, for a similar choice of parameters, the gluon fusion (through a LQ in the loop) and the quark fusion (mediated by a LQ and neutrinos in a loop) processes can lead to a significant cross section for producing $\phi$ at the LHC; they also enhance the decay width of the singlet. Interestingly, we have found that for a light $\phi$, quark fusion can become more important than the gluon fusion process as long as the mass of the LQ remains high ($\sim$ TeV). In both cases, precise measurements of branching fractions or partial widths of the 125 GeV SM-like Higgs or the singlet scalar, i.e., $h_{125}, \phi \to d\bar d, s\bar s, b\bar b$, would be crucial for testing or constraining the model at the high luminosity run of the LHC.

ACKNOWLEDGMENTS

Our computations were supported in part by SAMKHYA, the high performance computing facility of the Institute of Physics (IoP), Bhubaneswar, India. A. B. and S. M. acknowledge support from the Science and Engineering Research Board (SERB), DST, India, under Grant No. ECR/2017/000517. We thank P. Agrawal for helpful discussions. S. M. also acknowledges the local hospitality at IoP, Bhubaneswar during the meeting IMHEP-19, where this work was initiated.
On J/ψ and Υ transverse momentum distributions in high energy collisions

The transverse momentum distributions of final-state particles are very important for high-energy collision physics. In this work, we investigate the J/ψ and Υ meson distributions in the framework of a particle-production source into which Tsallis statistics are consistently integrated. The results are in good agreement with the experimental data from proton-proton (pp) and proton-lead (p-Pb) collisions at LHC energies. The temperature of the emission source and the nonequilibrium degree of the collision system are extracted.

Introduction

The creation and study of nuclear matter at high energy densities are the purpose of the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) [1-7]. As a new state of matter, the quark gluon plasma (QGP) is a thermalized system consisting of strongly coupled quarks and gluons in a limited region. The suppression of the J/ψ meson with respect to proton-proton (pp) collisions is regarded as a distinctive signature of QGP formation and brings valuable insight into the properties of nuclear matter. In proton-nucleus (p-A) collisions, prompt J/ψ suppression has also been observed at large rapidity [8]. However, the QGP is not expected to be created in such a small system; heavy quarkonium production can instead be suppressed by cold-nuclear-matter (CNM) effects, such as nuclear absorption, nuclear shadowing (antishadowing) and parton energy loss.

The transverse momentum ($p_T$) spectra of identified particles produced in the collisions are a vital research topic for physicists. Different models have been developed to describe the $p_T$ distributions of final-state particles in high energy collisions [9-12], such as the Boltzmann distribution, the Rayleigh distribution, the Erlang distribution, the multisource thermal model, Tsallis statistics and so on. Different phenomenological models of initial coherent multiple interactions and particle transport have been proposed to discuss particle production in high-energy collisions. In condensed matter research, Tsallis statistics can deal with non-equilibrated complex systems [13]; Tsallis statistics were subsequently developed to describe particle production [14-18]. In our previous work [11], the temperature information was understood indirectly through an excitation degree, and we obtained specifically the dependence of the excitation degree on the emission source location. In this paper, the temperature of the emission source is given directly by combining a picture of the particle-production source with Tsallis statistics.

Tsallis statistics in an emission source

According to the multisource thermal model [11], at the initial stage of nucleon-nucleon (or nucleon-nucleus) collisions, a projectile cylinder and a target cylinder are formed in rapidity ($y$) space as the projectile and target nucleons pass each other. The projectile and target cylinders can be regarded as one emission source with a rapidity width. The source emits the observed particles, which follow a certain distribution. In order to describe the transverse momentum spectra in high-energy collisions, several versions of the Tsallis distribution have been proposed, all originating from Ref. [13]; the meson number is given by the corresponding Tsallis form, in which $T$ is the effective temperature and the entropy index $q$ characterizes the degree of nonequilibrium of the system.

Transverse momentum spectra and discussions

The spectra are presented for the Υ(1S), Υ(2S), and Υ(3S) mesons, respectively, where $B$ is the dimuon branching fraction. The experimental data are taken from Ref. [20].
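As an illustration of the fitting procedure behind the extracted $T$ and $q$ values, the sketch below fits one commonly used Tsallis form of the $p_T$ spectrum. The functional form is a standard variant from the Tsallis literature, not necessarily the exact expression of this paper, and the data points are fabricated placeholders rather than the LHCb measurements.

```python
# Minimal sketch of a Tsallis fit to a pT spectrum:
#   dN/dpT ~ C * pT * mT * [1 + (q - 1) * mT / T]^(-q/(q-1)),  mT = sqrt(pT^2 + m0^2).
# T (effective temperature) and q (nonequilibrium degree) are the fit parameters.
import numpy as np
from scipy.optimize import curve_fit

m0 = 3.097  # GeV, J/psi mass

def tsallis(pt, C, T, q):
    mt = np.sqrt(pt ** 2 + m0 ** 2)
    return C * pt * mt * (1.0 + (q - 1.0) * mt / T) ** (-q / (q - 1.0))

# Fabricated pseudo-data: the true curve with 3% noise, for a runnable example.
pt = np.linspace(0.5, 12.0, 12)
data = tsallis(pt, 1.0, 0.60, 1.05) * np.random.default_rng(1).normal(1.0, 0.03, pt.size)

popt, pcov = curve_fit(tsallis, pt, data, p0=(1.0, 0.5, 1.1))
C_fit, T_fit, q_fit = popt
print(f"T = {T_fit:.3f} GeV, q = {q_fit:.3f}")
```

Fitting each rapidity bin separately with this procedure is what yields the $T(y)$ and $q(y)$ trends discussed below.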
The symbols and lines represent the same meanings as those in Fig. 1. The results are also in agreement with the experimental data, and the parameters $T$ and $q$ used in the calculation are listed in Table 1. A similar comparison is made for pp collisions at 13 TeV, with the experimental data taken from Ref. [21]. The symbols and lines again represent the same meanings as those in Fig. 1, and the results agree with the experimental data; the $T$ and $q$ values used in the calculation are listed in Table 2. The temperature $T$ decreases with increasing rapidity: as the emission source draws closer to the center of the rapidity space, the excitation degree increases. The $T$ values are larger than those at $\sqrt{s_{NN}} = 8$ TeV in the same $y$ range, while the behavior of $q$ is similar to that in Figs. 1 and 2. For comparison, Fig. 5 presents the double-differential cross section $d^2\sigma/(dp_T\,dy)$, with the experimental data taken from Ref. [22]. The symbols and lines represent the same meanings as those in Fig. 1, and the results are again in agreement with the experimental data; the parameters $T$ and $q$ are listed in Table 2. The emission source temperature $T$ decreases with increasing rapidity, and the values of $q$ lie between 1.031 and 1.065.

Conclusions

In the framework of the emission source, into which Tsallis statistics are consistently integrated, we have investigated the transverse momentum spectra of J/ψ and Υ mesons produced in pp and p-Pb collisions over an energy range from 5 to 13 TeV. The results agree with the experimental data of the LHCb Collaboration at the LHC. By comparing with the experimental data, the emission source temperature $T$ is extracted; it decreases with increasing rapidity. This is consistent with the rapidity dependence noted above: the closer the emission source is to the center of $y$ space, the larger the excitation degree. The temperature $T$ increases with increasing collision energy, so the excitation degree of the emission source also increases with the collision energy. The parameter $q$ does not show an obvious change, which means the collision system is not very unstable. Final-state particle production in high-energy collisions has attracted much attention, since attempts have been made to understand the properties of the strongly coupled QGP by studying the production mechanisms. Thermal-statistical models have been successful in describing particle production.
Spermatozoa in mice lacking the nucleoporin NUP210L show defects in head shape and motility but not in nuclear compaction or histone replacement

Biallelic loss-of-function mutation of NUP210L, encoding a testis-specific nucleoporin, has been reported in an infertile man whose spermatozoa show uncondensed heads and histone retention. Mice with a homozygous transgene intronic insertion in Nup210l were infertile, but spermatozoa had condensed heads. Expression from this insertion allele is undefined, however, and residual NUP210L production could underlie the milder phenotype. To resolve this issue, we have created Nup210l^em1Mjmm, a null allele of Nup210l, in the mouse. Nup210l^em1Mjmm homozygotes show uniform mild anomalies of sperm head morphology and decreased motility, but nuclear compaction and histone removal appear unaffected. Thus, our mouse model does not support that NUP210L loss alone blocks spermatid nuclear compaction. Re-analyzing the patient's exome data, we identified a rare, potentially pathogenic, heterozygous variant in the nucleoporin gene NUP153 (p.Pro485Leu), and showed that, in mouse and human, NUP210L and NUP153 colocalize at the caudal nuclear pole in elongating spermatids and spermatozoa. Unexpectedly, in round spermatids, NUP210L and NUP153 localisation differs between mouse (nucleoplasm) and human (nuclear periphery). Our data suggest two explanations for the increased phenotypic severity associated with NUP210L loss in human compared to mouse: a genetic variant in human NUP153 (p.Pro485Leu), and inter-species divergence in nuclear pore function in round spermatids.

Spermatogenesis produces haploid round spermatids that transform into mature spermatozoa during spermiogenesis. In this final phase, spermatids undergo extensive and unique structural modifications, involving elongation and condensation of the nucleus, the histone-to-protamine chromatin transition, acrosome formation, and flagellum biosynthesis, to become highly specialized motile spermatozoa [4], optimized for the faithful transmission of the paternal genetic and epigenetic information to the next generation [6,7]. The interpretation of genetic data can, however, be complicated by the genetic heterogeneity of male infertility, which means that independent confirmatory cases are often not available, even in large studies of common phenotypes such as azoospermia [8]. For this reason, candidate human gene mutations must often be transposed to an animal model to validate their pathogenicity and elucidate the physiological role of the affected gene. Recently, a homozygous loss-of-function mutation in the Nucleoporin 210 like gene (NUP210L) was identified in an infertile man whose seminal analysis revealed spermatozoa with large uncondensed round heads that retain histones and have abnormal motility, indicating that the NUP210L protein contributes to chromatin compaction and flagellar biogenesis during human spermiogenesis [9]. Such a phenotype is extremely rare and has only been described in this one case.
NUP210L is predicted to be a nuclear pore complex (NPC) protein and is predominantly expressed in the testis (https://gtexportal.org/home/gene/NUP210L). Based on its primary sequence, its structure is similar to that described for its more widely expressed paralogue NUP210 (also known as gp210 or POM210): a peptide signal at the N-terminus, a large 1783-amino-acid domain located in the luminal space between the two nuclear membranes, a transmembrane domain that enables NUP210 to be addressed to the nuclear pore, and a short cytoplasmic C-terminal tail [10]. With NDC1 and POM121, NUP210 and NUP210L make up the known mammalian transmembrane nucleoporins (also known as pore membrane proteins, POMs) [11]. With its relatively large luminal domain, NUP210/NUP210L is probably the major component of the luminal ring that forms a cushion around the center of the NPC, possibly anchoring the NPC to the pore membrane and buffering its transport functions from forces in the nuclear envelope [12]. According to the Human Protein Atlas, NUP210 and NUP210L appear to have complementary expression during human spermatogenesis, with NUP210 in spermatogonia and early spermatocytes, and NUP210L in late spermatocytes and spermatids (https://www.proteinatlas.org/), suggesting a specialized role for NUP210L in NPC function during spermiogenesis.

Male mice homozygous for a ROSA26-EGFP transgenic insertion in intron 17 of Nup210l have been reported to be infertile with elevated numbers of abnormal spermatozoa: immotile (86%) and with mild sperm head defects (80%) [13]. In contrast to the human case, there was no evidence of nuclear compaction failure in the ROSA26-EGFP transgene insertion mouse. The effect of the ROSA26-EGFP insertion on NUP210L expression was not reported, however, and so residual expression of NUP210L or production of a functional truncated isoform might explain the milder phenotype in the mouse compared to the published human case, where the mutation is predicted to result in a complete loss of NUP210L function [9]. To address this issue and determine whether NUP210L is essential for nuclear compaction in mouse spermatids, we created a mouse mutant allele, Nup210l^em1Mjmm. Here, we demonstrate that NUP210L is absent from germ cells in male mice homozygous for Nup210l^em1Mjmm. Their spermatozoa have a uniform, moderately abnormal head morphology, frequent flagellar anomalies and decreased progressive motility. Nevertheless, Nup210l^em1Mjmm homozygous males are fertile and do not show compromised histone removal or nuclear compaction failure, demonstrating that the Nup210l-KO mouse model does not validate the hypothesis that, in the human case, loss of NUP210L alone is sufficient to cause failure of germ cell chromatin condensation during spermiogenesis. We reveal that the human case is heterozygous for a rare missense variant with predicted pathogenicity in another nucleoporin gene, encoding NUP153. We show that NUP153 and NUP210L colocalize to the nuclear periphery of elongating spermatids, in mouse and human, while in round spermatids they localize to the nuclear periphery in human but the nucleoplasm in mouse. We discuss how our findings might explain the exceptional nuclear compaction failure in the human case.
| Nup210l-knockout mice

Founder mice carrying the null allele Nup210l^em1Mjmm were created using CRISPR-Cas9 methodology. Fertilized C57BL/6NCrl oocytes were injected with Cas9 nuclease, tracrRNA and the crRNA (guide RNA) at the transgenic animal service (SEAT) at Gustave Roussy. The targeting sequence of the crRNA was 5'-TACTAAGAGATATGTATCGT-3'. For the creation and study of the Nup210l^em1Mjmm model, the necessary ethical approval was obtained from the animal experimentation ethics committee of Marseille (Comité d'Ethique en Expérimentation Animale de Marseille, CEEA14).

| Human samples

Human testicular material (T0140007) came from a 50-year-old man with brain death, within a protocol, approved by the French Research Ministry, for the procurement of human tissue for use in research. We declared our protocol through the French Biomedicine Agency (Agence de la biomédecine) in January 2013. Testes were recovered during multi-organ retrieval for transplantation while the donor was under extracorporeal circulation and respiratory assistance. The donor had normal spermatogenesis, based on histological analysis and the presence of numerous epididymal and testicular spermatozoa. The infertile man, 13-4587 [9], was recruited at our public fertility center in Marseille and gave his informed consent for his DNA to be used in research. DNA was obtained from the Centre of Biological Resources at La Timone Hospital, Marseille (CRB AP-HM).

| Mouse sperm analysis

The cauda epididymis was dissected into 2 mL of DMEM-HEPES, cut into small pieces and then incubated for 10 min at 37 °C to liberate spermatozoa. Spermatozoa were counted and their motility assessed by direct microscopic observation. To assess morphology, sperm smears were stained with SpermBlue following the manufacturer's protocol (Microptic, Barcelona, Spain) and with silver nitrate [14].

| Mouse histology

Testes and epididymides were collected from WT and Nup210l-/- mice, fixed in Bouin's solution overnight at room temperature and embedded in paraffin using standard protocols. Sections were stained with Mayer's hematoxylin, with periodic acid-Schiff (PAS) for testes or eosin for epididymides. Seminiferous tubule stages were determined according to Russell [15].

| Antibodies and lectin

The following primary antibodies were used for immunofluorescence (IF) and Western blot (WB) analyses at the given dilutions: anti-NUP210L.

| Statistical analyses

The significance of the difference between the means of the study groups was calculated using Student's t-test. A p-value of <0.05 was considered significant.

| Additional methodology

Details are given in the Supporting Information for: mouse sperm analysis; mouse histology; tissue preparation for immunostaining; immunostaining; chromomycin A3 staining; Western blot analysis; and transmission electron microscopy.

| NUP210L is not expressed in testis of Nup210l^em1Mjmm/em1Mjmm KO mice

To study the biological function of NUP210L in vivo, we used CRISPR-Cas9 to create a mouse line carrying a null allele of Nup210l.
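A minimal sketch of the statistical comparison described above (two-sample Student's t-test with significance at p < 0.05) is given below; the litter-size values are invented placeholders, not the study's data.

```python
# Illustrative two-sample Student's t-test, as used for the group comparisons in the text.
# The numbers below are hypothetical stand-ins for the litter-size measurements.
from scipy import stats

wt_litters = [7, 8, 6, 9, 7, 8, 7]       # hypothetical Nup210l+/+ males (n = 7)
ko_litters = [6, 7, 8, 7, 6, 8, 7, 7]    # hypothetical Nup210l-/- males (n = 8)

t_stat, p_value = stats.ttest_ind(wt_litters, ko_litters)  # equal-variance Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}; significant at 0.05: {p_value < 0.05}")
```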
We selected an allele, Nup210l^em1Mjmm (hereinafter referred to as Nup210l-), with a single base insertion in exon 6 of the 40 exons that compose Nup210l (Figure 1A). To determine the transcripts produced by Nup210l-, we used RT-PCR to amplify across the targeted exon 6 (primers in exons 5 and 10). We obtained a single predominant band of the expected size for wild-type and Nup210l- heterozygotes, but two bands for Nup210l- homozygotes (Figure 1B). Purifying and sequencing each RT-PCR product confirmed the presence of the insertion mutation in exon 6 and revealed the upper band to consist of all exons from 5 to 10, while the lower band did not include exon 6 (Figure 1C). Thus, the mutation reduces the efficiency of exon 6 splicing. Since exon 6 is 134 bp in length, neither transcript produced from the Nup210l- allele preserves the Nup210l reading frame, and both are predicted to be substrates for nonsense-mediated decay. We conclude that Nup210l- is a null allele.

We confirmed that the NUP210L protein is not expressed in male mice homozygous for Nup210l- using immunofluorescence with an antibody raised against human NUP210L (HPA064245; immunogen amino acids 530-613, exons 12-14; Figure 1D). In WT and heterozygous males, the antibody gave a diffuse granular nucleoplasmic signal in round spermatids that became concentrated at the posterior pole in late round and early elongating spermatids and was detected only at the caudal site in most elongating spermatids and spermatozoa. In testis from adult Nup210l-/- mice, no signal was observed in testicular germ cells, showing both that NUP210L is not produced from the Nup210l- allele and that the antibody HPA064245 detects NUP210L specifically by immunofluorescence. Western blot analysis detected a weak band of approximately 210 kDa in WT and heterozygous testis that is absent in Nup210l-/- testis; a strong band at 150 kDa and a weaker band at 100 kDa were detected in all three genotypes, showing that they are nonspecific (Figure 1E).

| Mice lacking NUP210L are fertile with normal testicular histology

Male and female Nup210l-/- mice developed normally and showed no signs of distress. To test fertility, male littermates Nup210l+/+ (n = 7) and Nup210l-/- (n = 8) were mated with two females for 10 weeks. No differences were seen in the frequency of litters or in litter size (Figure 2A). No age-dependent decrease in fertility was observed up to 6 months. The average testis weight of 6-month-old Nup210l-/- mice was comparable with that of Nup210l+/+ mice of the same age (Figure 2B). Nup210l-/- females also showed no signs of reduced fertility (normal litter size and frequency; unpublished data).

To determine the state of spermatogenesis in mice lacking NUP210L function, we performed histological analysis of testis sections stained with PAS-H. Histological analysis of Nup210l-/- testis revealed that all seminiferous tubules were rich in germ cells, with abundant spermatozoa visible in the lumen (Figure 2D). The germ cell content and arrangement at each tubule stage were normal in Nup210l-/- adult males (Figure 3A). In addition, we quantified seminiferous tubule stages and did not observe a significant difference in their distribution between Nup210l-/- and Nup210l+/+ males (Figure 3B), indicating that no delay occurs during spermatogenesis in Nup210l-/- mice.
| Spermatozoa quality is diminished in mice lacking NUP210L

We next investigated spermatozoa quality in Nup210l-/- mice and Nup210l+/+ controls. Epididymal spermatozoa were recovered and we determined their number, motility and morphology. No difference was observed in the number of spermatozoa recovered from Nup210l-/- and control mice, showing that the rate of spermatozoa production is not affected by the absence of NUP210L during spermatogenesis (Figure 4A). In contrast, spermatozoon motility and morphology were affected in Nup210l-/- mice. We scored nonprogressive and progressive motility for six males of each genotype. All genotypes had spermatozoa with progressive motility; the Nup210l-/- males, however, consistently had a lower percentage of progressively motile spermatozoa compared to their Nup210l+/+ littermates (Figure 4B).

The morphology of the sperm was examined by SpermBlue and silver nitrate staining. Flagella of Nup210l-/- sperm frequently formed a hairpin loop (40%), bent in the distal midpiece, or were coiled (15%). These sperm tail abnormalities were significantly more common in Nup210l-/- mice than in WT (Figure 4C,D). In addition, the sperm head was uniformly affected in homozygous mice, with shortening of the acrosomal hook and narrowing of the base of the head (Figure 4D).

Transmission electron microscopy (TEM) was used to investigate the state of chromatin condensation as well as the organization and position of the flagellar components. Epididymal sperm heads (n > 35) were examined for each of three Nup210l-/- males and one Nup210l+/- male. Sperm chromatin appears tightly compacted in both Nup210l+/- and Nup210l-/- mice, as evidenced by the high density of nuclear staining observed by TEM (Figure 5A). We observed the presence of straight, coiled and bent (double cross-section) flagella in homozygotes, consistent with our prior findings in bright-field analysis. The structure of the axoneme and the mitochondrial sheath appears normal in the straight and bent flagella (Figure 5B), but aberrant in the coiled sections, as indicated by disordered axonemal microtubules (Figure 5C). The head-tail junction appeared normal in spermatozoa with a straight or a coiled flagellum (Figure 5A,C).

| Normal chromatin compaction and histone removal in Nup210l-/- spermatozoa

To determine whether histone removal during spermatid elongation is affected in Nup210l-/- mice, we used immunofluorescence to track acetylated histone H4 (acH4) (Figure 6A). We observed normal acH4 labeling in Nup210l+/+ and Nup210l-/- mice: the acH4 signal was first detected weakly throughout the nucleus of round spermatids at step 8, but became much stronger as spermatids began to elongate at step 9. As elongation progressed, the acH4 signal was gradually lost, receding from under the acrosome to the posterior nuclear pole in early condensed elongated spermatids, and was not detectable in later condensed spermatids or in spermatozoa. Our results indicate that histone removal from spermatid chromatin is unaffected in mice lacking NUP210L.

We then used chromomycin A3 (CMA3) staining to evaluate whether protamines were present in the chromatin of mature spermatozoa. CMA3 binds histonylated chromatin but not chromatin packaged with protamines. We observed that the frequency of CMA3-positive spermatozoa was low in both WT and Nup210l-/- spermatozoa (Figure 6B,C), indicating that the histone-to-protamine transition is essentially normal during spermiogenesis in Nup210l-/- mice.

| NUP210L-KO man carries a potentially pathogenic variant in nucleoporin NUP153

Our findings indicate that the absence of NUP210L in mice has no major effect on fertility, while it has been linked to a failure of nuclear remodeling and male infertility in humans [9].
To search for a genetic basis for this discrepancy, we re-analyzed the exome sequencing data of the infertile man (13-4587), looking for potentially pathogenic variants in known components of the NPC. We identified a heterozygous missense variant in the nucleoporin NUP153, p.Pro485Leu, which we verified by Sanger sequencing (Figure 7A,B). The affected proline is conserved in all mammals, and the variant is predicted to be pathogenic by PolyPhen-2 (probably damaging) and SIFT (deleterious). In gnomAD, the frequency of NUP153-p.Pro485Leu is 0.0004 overall, but is highest (0.0022) in Bulgarian and southern European groups (n = 14,274 alleles). Based on a lower than expected number of LoF mutations (LOEUF = 0.174), gnomAD estimates that there is a high probability of intolerance to loss of a single copy of NUP153 in human. Thus, if the p.Pro485Leu substitution has a negative effect on NUP153 function, a reduction in functional NUP153, coupled to complete loss of NUP210L, could explain the severity of the phenotype observed in the infertile man.

| NUP210L and NUP153 colocalize in mouse and human elongating spermatids

To evaluate whether NUP210L and NUP153 could act together in human, we used IF analysis to determine their relative localization in spermatids. We observed that NUP210L localizes at the nuclear periphery of elongating spermatids, in human and mouse, and is gradually restricted to the posterior pole of the nucleus (Figure 7C). This caudal localization persists in both mouse and human spermatozoa. In round spermatids, however, NUP210L localization diverges between the two species: in human, NUP210L is restricted to the nuclear periphery except under the acrosome, while in mouse it is nucleoplasmic. NUP153, like NUP210L, is nucleoplasmic in mouse round spermatids, but otherwise colocalizes perfectly with NUP210L, at the nuclear periphery in human round spermatids and at the caudal extremity in mouse and human elongating spermatids (Figure 7D). We also show that NUP153 localization in elongating spermatids is not altered in mice lacking NUP210L (Figure 7E). The colocalization of NUP153 and NUP210L shows that they could function together in elongating spermatids, supporting the possibility that a partial loss of NUP153 function may explain the severe nuclear compaction failure associated with loss of NUP210L in human. Nevertheless, the distinct localization of NUP210L and NUP153 in human and mouse round spermatids implies that species differences in NPC organization in round spermatids must be considered to explain the severe human phenotype.

| DISCUSSION

Loss of NUP210L function has recently been reported in an infertile man producing predominantly abnormal spermatozoa with large uncondensed heads and an elevated level of histone retention, indicating that NUP210L and the NPC play a central role in the remodeling of the spermatid nucleus during spermiogenesis [9]. However, in a mouse model with a ROSA26-EGFP transgene insertion into intron 17 of Nup210l, homozygous males showed infertility but produced spermatozoa with condensed sperm heads [13]. Since the effect of the ROSA26-EGFP intronic insertion on the expression of NUP210L was not determined, a possible explanation for this discrepancy was that the insertion did not completely inactivate Nup210l. Here, we have resolved this issue by creating a knockout mouse model in which we show that NUP210L is not expressed. Our results demonstrate that, in the mouse, the absence of NUP210L has no overt effect on histone removal or chromatin condensation, establishing that NUP210L loss in the mouse does not strengthen the case that loss of NUP210L alone is the cause of the striking failure of nuclear compaction observed in the human case [9].
Comparison with the published data for the ROSA26-EGFP model [13] indicates similarities between the sperm phenotypes of the two models: short acrosomal hook, tapering of the caudal extremity of the sperm head and a reduced percentage of progressively motile spermatozoa. Nonetheless, the homozygous ROSA26-EGFP male mice clearly show signs of a more severe phenotype: age-related degeneration of the seminiferous epithelium with reduced sperm production, acrosome anomalies and male infertility at all ages. The greater phenotypic severity in the ROSA26-EGFP model, compared to our model, may be due to differences in the genetic background or to a dose-dependent negative effect of the ROSA26-EGFP allele, such as the production of a truncated isoform of NUP210L or altered expression of a neighboring gene. The knockout model presented here represents a reference for the phenotype associated with loss of NUP210L function in the mouse.

We identified that the infertile man 13-4587 [9] is heterozygous for a potentially pathogenic variant, p.Pro485Leu, in the nucleoporin NUP153, a critical component of the NPC nucleoplasmic basket. In somatic cells, NUP153 has diverse roles, including regulating the export of mRNAs [16], building chromatin architecture through interactions with CTCF and cohesins [17], and the post-mitotic formation of the nuclear lamina (NL) through the targeting of B-type lamins to the inner nuclear membrane [18]. In spermatids, the NL is composed of B-type lamins and is dismantled during spermiogenesis [19,20]. NUP153 might therefore also facilitate B-type lamin extraction from the nucleus and contribute to the breakdown of the spermatid nuclear lamina. Of note in this regard, in Caenorhabditis elegans the NUP210 orthologue is required for disassembly of the NL and nuclear envelope (NE) breakdown during mitosis [21]. Importantly, in human, we show here that NUP153 and NUP210L colocalize to the nuclear periphery in round spermatids and at the caudal nuclear pole in elongating spermatids and mature spermatozoa, showing that they could have a functional interaction during human spermiogenesis. Furthermore, the underrepresentation of loss-of-function alleles for NUP153 in gnomAD indicates that there is intolerance for haploinsufficiency of NUP153 in human, implying that a partial reduction in NUP153 functionality could negatively impact the NPC and sensitize it to the absence of NUP210L in spermatids. Testing whether this genetic combination underlies the severe human phenotype will require further similar human cases or a mouse model with the appropriate mutations in NUP210L and NUP153.

Given the phenotype of large round uncondensed sperm heads in the infertile man, it seems reasonable to assume that the primary defect occurs in his round spermatids as elongation begins. We show here that, in round spermatids, NUP210L and NUP153 localize to the nuclear periphery in human, but to the nucleoplasm in the mouse. Since this is observed for two nucleoporins, it favors the interpretation that NPC or NE organization in round spermatids is distinct between human and mouse. The localization of mouse NUP210L to the nucleoplasm is unexpected, because NUP210 proteins carry a signal peptide to direct them to the NPC via the endoplasmic reticulum [10], and mouse and human NUP210L proteins localize to the nuclear periphery when ectopically expressed in HeLa cells (unpublished data). This indicates that the signal peptide carried by mouse NUP210L may not be efficient in round spermatids; indeed, SignalP 6.0 does not predict the peptide for mouse NUP210L, but does for human NUP210L (unpublished data) [22]. Our data show that in human, but not in mouse, NUP210L could play a role at the nuclear envelope in round spermatids, and this may underlie the greater phenotypic severity observed with the loss of NUP210L function in human compared to mouse.
In elongating spermatids, in both mouse and human, we show that NUP210L and NUP153 concentrate at the caudal pole of the nucleus. This is entirely consistent with electron microscopy studies in mouse and bovine showing that NPCs shift into a high-density array at the caudal pole when the manchette forms and the spermatid nucleus begins to elongate [23,24]. This NPC reorganization to the caudal pole indicates that this must represent the major site of exchange between the nucleoplasm and the cytoplasm during the elongation steps of spermatid differentiation. NUP210 has been shown to influence NPC spacing in HeLa cells [25], and in Xenopus laevis there is recent evidence that NUP210 forms a buffering transluminal cushion around the NPC [12]. The absence of NUP210L could therefore conceivably reduce the capacity for traffic into and out of the elongating spermatid nucleus by negatively impacting the exceptionally tight NPC packing, or the size and plasticity of the pore channel. In support of the idea that a stressed histone-to-protamine transition could be one consequence of NUP210L loss, mice lacking the chromatin remodeling proteins TNP1 or PRM1 have spermatozoa with a short acrosomal hook and reduced motility, as observed in our Nup210l-KO mice [26]. Despite the absence of a severe phenotype in the NUP210L knockout mice presented here, our results show that NUP210L does play a role in morphogenesis of the sperm head and is required for efficient flagellum formation in the mouse. We conclude that, although NUP210L is not required for spermatid nuclear condensation or male fertility in the mouse, it may optimize NPC array organization and functionality at the posterior pole in elongating spermatids.

FIGURE 1 Generation of Nup210l-/- mice. (A) Schematic illustration of Nup210l (40 exons), including the location of the C insertion mutation in exon 6. (B) RT-PCR analysis of adult mouse testis RNA; PCR products were separated on a 3% agarose gel. Lane 1 (1 kb+) is the 1 kb Plus size marker (ThermoFisher). The blue line indicates the wild-type band, while the red and green ones mark the two bands detected in Nup210l-/- mice. (C) Sanger sequencing of RT-PCR bands: top (blue), wild-type (WT) RT-PCR amplicon; middle (red), upper Nup210l-/- amplicon with the "C" insertion in exon 6; and bottom (green), lower Nup210l-/- amplicon showing skipping of exon 6.
(D) Immunofluorescence revealing the absence of NUP210L protein in Nup210l−/− mouse testis: anti-NUP210L (green), lectin PNA labeling the acrosome (red) and DAPI (blue). Scale bar, 5 μm. (E) Western blot analysis of testicular lysates from adult mice with anti-NUP210L. A band of approximately 210 kDa (arrowhead) was detected in wild-type (WT) and Nup210l+/− but not Nup210l−/− lysates. Sizes of the Spectra size marker (ThermoFisher) are indicated. [Colour figure can be viewed at wileyonlinelibrary.com]

TEM also revealed coiled and bent (double cross-sections) flagella in homozygotes, consistent with our prior findings in bright-field analysis. The structure of the axoneme appeared normal in uncoiled flagella but abnormal in coiled ones (Figure 5).

FIGURE 2 Fertility test and histological analysis of the testis from WT and NUP210L-deficient mice. (A) No significant difference was observed in litter size between Nup210l−/− and wild-type males (p-value = 0.1). (B) No difference was observed in testis weight between Nup210l−/− and wild-type males; seven mice of each genotype were used (p-value = 0.93). (C) Testes of Nup210l−/− mutant mice compared with their wild-type littermates. (D) PAS-hematoxylin staining of testis (i, iii) and H&E staining of cauda epididymis (ii, iv) from WT and Nup210l−/− mice. On the right are images at higher magnification showing compacted sperm heads in greater detail (ii′, iv′). [Colour figure can be viewed at wileyonlinelibrary.com]

FIGURE 3 Normal spermatid development and stage proportions in WT and Nup210l−/− males. (A) Nup210l−/− mice have a typical arrangement of germ cells with normal spermatid development. Stages of the seminiferous tubules were identified according to the morphology of spermatocytes and spermatids; spermatids at different steps are illustrated. Pl, preleptotene; L, leptotene; Z, zygotene; P, pachytene; D, diplotene; RS, round spermatids; ES, elongating spermatids. (B) Quantification of the proportion of stages in Nup210l−/− and wild-type males. Three mice of each genotype were analyzed. p-value = 0.79 for stages I-III, p-value = 0.41 for stages IV-V, p-value = 0.61 for stages VI-VIII, p-value = 1 for stage IX, and p-value = 0.84 for stages X-XII; p-values were calculated by Student's t-test. [Colour figure can be viewed at wileyonlinelibrary.com]

FIGURE 4 Sperm quality analysis of Nup210l−/− mice. (A) Nup210l−/− males have a normal sperm count. Sperm from the cauda epididymis of 6-month-old wild-type (n = 10) and Nup210l−/− (n = 10) mice were counted (p-value = 0.07). (B) Progressive motility of sperm was reduced in Nup210l−/− males. Six mice of each genotype were analyzed; p-value < 0.0001 for progressive motility, p-value = 0.0001 for non-progressive motility and p-value = 0.0008 for immotile sperm. (C) Quantification of coiled, bent and normal tails observed in the spermatozoa of Nup210l−/− and WT mice; p-values < 0.0001 for coiled tail, 0.006 for bent tail and 0.001 for normal tail. (D) Sperm morphology, as labeled by sperm blue stain (i-vi) and silver nitrate (vii-xii). Nup210l−/− sperm show bent and coiled flagella, as well as changes in head morphology: a shortened acrosomal hook (arrowheads) and narrowing of the base of the head (arrows).
FIGURE 5 TEM analysis reveals normal chromatin condensation in Nup210l−/− mice. (A) Sperm heads of Nup210l−/− mice exhibit normal electron density under electron microscopy and a normal head-tail coupling apparatus (HTCA); a magnified view of the sperm head is on the right. (B) Ultrastructure of the Nup210l−/− sperm flagellum showing a normal microtubule structure in an uncoiled flagellum, but abnormal structures in a coiled one (C).

These observations support the possibility that a partial loss of NUP153 function may explain the severe nuclear compaction failure associated with loss of NUP210L in human. Nevertheless, NUP210L and NUP153 show distinct localization in human and mouse round spermatids.

FIGURE 6 Normal chromatin compaction in Nup210l−/− mice. (A) Immunostaining images of testicular sections from wild-type (WT) and Nup210l−/− mice are shown: anti-acetylated histone H4 (acH4, green), lectin PNA (red) and DAPI (blue). White arrows and S8, S9, S12 and S16 indicate spermatids at steps 8, 9, 12 and 16. Scale bar, 20 μm. (B) CMA3 was used to stain epididymal sperm (green), and DNA was counterstained with Hoechst. Scale bar, 20 μm. (C) CMA3-positive and CMA3-negative sperm were counted in Nup210l−/− and WT mice. Three mice of each genotype were analyzed, with 200 sperm counted for each mouse (p-value = 0.37). [Colour figure can be viewed at wileyonlinelibrary.com]

Furthermore, the underrepresentation of loss-of-function alleles for NUP153 in gnomAD indicates that there is intolerance for haploinsufficiency of NUP153 in human, implying that a partial reduction in NUP153 functionality could negatively impact the NPC and sensitize it to the absence of NUP210L in spermatids. Testing whether this genetic combination underlies the severe human phenotype will require further similar human cases or a mouse model with the appropriate mutations in NUP210L and NUP153.

FIGURE 7 Patient 13-4587 is heterozygous for a potentially pathogenic variant in the gene encoding nucleoporin NUP153, which colocalizes with NUP210L to the caudal nuclear pole in elongating spermatids and spermatozoa in human and mouse. (A) Position of the NUP153 missense variant, p.Pro485Leu, heterozygous in patient 13-4587. (B) Sanger chromatogram displaying the sequences of both the control and the patient (13-4587), with the affected codon in the blue rectangle. (C) Immunolocalization of NUP210L (green) during human and mouse spermatogenesis. The spermatocyte and subsequent stages of spermatid development are labeled. Different steps of spermatids are identified using lectin PNA (acrosome, red) and DAPI (DNA, blue). Scale bar, 5 μm. (D) Colocalization of NUP210L (red) and NUP153 (green) in human and mouse spermiogenesis. Scale bar, 5 μm. (E) No difference in NUP153 localization occurs in Nup210l−/− mice. Scale bar, 5 μm. [Colour figure can be viewed at wileyonlinelibrary.com]
6,574.4
2023-12-21T00:00:00.000
[ "Biology" ]
Estimating Null Values in Database Using CBR and Supervised Learning Classification Databases and database systems are used widely in almost all life activities. Sometimes data items are missing, appearing as null values in database tables. This paper proposes the design of a supervised learning system to estimate missing values found in a university database. The values of the estimated data items, and of the data items used in the estimation, are numeric and not computed from other attributes. The system performs data classification based on Case-Based Reasoning (CBR) to estimate missing marks of students. A data set is used to train the system under the supervision of an expert. After being trained to classify and estimate null values under expert supervision, the system starts classifying and estimating null data by itself. Keywords: Database (DB); Data Mining; Case-Based Reasoning (CBR); Classification; Null Values; Supervised Learning

I. INTRODUCTION
A database is a collection of related data representing some aspects of the real world, sometimes called the mini-world or the universe of discourse. It has become an essential component of everyday life in modern society. In the course of a day, most of us encounter several activities that involve some interaction with a database.

RDBMSs are the most widely used database systems today. These systems organize a database into many relations. Each relation holds data about a certain entity type or class and consists of rows, where each row represents a record of an entity or object. The state of the whole database corresponds to the states of all its relations at a particular point in time.

Data mining is an essential process in which intelligent methods are applied in order to extract data patterns. Data mining algorithms look for patterns in data. While most existing data mining approaches look for patterns in a single data table, relational data mining (RDM) approaches look for patterns that involve multiple tables (relations) from a relational database [1].

In recent years, the most common types of patterns and approaches considered in data mining have been extended to the relational case, and RDM now encompasses relational association rule discovery and relational decision tree induction, among others. RDM approaches have been successfully applied to a number of problems in a variety of areas, most notably bioinformatics [2].

Knowledge discovery in databases (KDD), also called data mining, has recently received wide attention from practitioners and researchers. There are several attractive application areas for KDD, and it seems that techniques from machine learning, statistics, and databases can be profitably combined to obtain useful methods and systems for KDD [3].
The KDD area should be largely guided by (successful) applications, although theoretical work in the area is also needed. A typical KDD process is one in which the analyzer first produces many potentially interesting rules, subgroup descriptions, patterns, etc., and then interactively selects the truly interesting ones from these [4].

The presented system uses CBR classification to estimate null values in a DB. The basic idea is to locate a classified case (a student object) in the system Knowledge Base (KB) that is the closest case to the student row containing a null value. After that, the system can estimate the null value using three methods and their average. The weight of each attribute varies, to represent its effect on the total mark. The total mark at any moment is the sum of the marks already registered in the fields of the table. At any time, the weight of each attribute is computed by dividing the attribute value for a student by the sum of the maximum values of all registered attributes for that course.

Section II presents a survey of related work, Section III outlines the structure of the database records used by the system, and Section IV explores the system knowledge base. Sections V and VI explain the training of the system and the system classification experiments and results, while Section VII discusses the conclusion and future work.

II. RELATED WORK
A lot of research effort has been devoted to estimating null values in DBs. The pioneers in this area, Chen et al., used a new method to estimate null values in a relational database [5]. They improved their method by creating a fuzzy rule base [6] and used genetic algorithms to generate weighted fuzzy rules [7]. They then applied an automatic clustering algorithm for clustering the tuples in the relational database [8], and later presented a new method to estimate null values in relational database systems having negative dependency relationships between attributes [9], where the "Benz secondhand car database" was used for the experiment.

Wang, C.H. Cheng, and W.T. Chang [10] utilized stepwise regression to select the important attributes from the database and a partitional approach to build the data category; they applied the clustering method to cluster output data. Chen and Hsiao [11] and Cheng and Lin [12] also utilized clustering algorithms to cluster data and calculate coefficient values between different attributes by minimizing the average error.

Jain and Suryawanshi [13] proposed an efficient approach for handling null values in web logs. They used a Tabu search-KNN classifier to perform feature selection of K-NN rules. C.H. Cheng, J.R. Chang, and L.Y. Wei [14] used adaptive learning techniques, based on clustering, to resolve the issue of null values in relational database systems; that study uses clustering algorithms to group data and calculates the degree of influence between the independent attributes (variables) and the dependent attribute through an adaptive learning method.

Lee and Wang [15] proposed a modular method for high-reliability relational database estimation; the structure of the proposed method is composed of three phases, comprising partition determination, automatic fuzzy system generation, and relational database estimation. Mridha and Banik used a novel evolutionary algorithm for generating weighted fuzzy rules to estimate null values [16].
Sadiq, S.A. Chawishly, and N.J. Sulaka [17] proposed a hybrid approach for solving the null values problem; it hybridizes rough set theory with the ID3 (Iterative Dichotomiser 3) decision tree induction algorithm. The proposed approach is a supervised learning model: a large set of complete data, called learning data, is used to find the decision rule sets that are then used in handling the incomplete data. An intelligent swarm algorithm is then used for feature selection, combining the bees algorithm, as a heuristic search algorithm, with rough set theory as the evaluation function [18].

III. DATABASE APPLICATION
The proposed approach is tested on a relational database (DB) of university students. This database consists of many relations, each holding the records of a certain entity set. The target table is the STUDY table, shown in Table 1, which holds the marks and grades of the students in their registered courses.

Sometimes there are missing or null values in a column of certain records in a database; for example, some marks of some student exams are missing. These null values might result from missing exam grades or from data-entry mistakes.

As an example, the estimation of null values is applied for a course with the following assessments: two quizzes (Q1 and Q2), five homeworks, a project, a midterm exam and a final exam; it is also possible to add or remove assessments to or from this list. The STUDY table has these attributes, as shown in Table 1, which shows some records of student marks. The experiment is applied to the marks data of the course "Compilers Construction" in the Computer Science department.

The attributes (columns) of any database table that have null values are classified into four types, according to the reason for the missing values and the ability to estimate them:
1) Type 1 is NullColumn, where a column, like MidTerm for example, may have all of its values null. This means that the column values have not been inserted or computed yet.
2) Type 2 is NotEstimated, where the attribute's null values can't be estimated by any system; examples are the attributes Student_num (St#), Name, Address and Cours_Code.
3) Type 3 is Derived or Computed, like Total, where the attribute's null value can be computed or derived from other attribute(s).
The action for the first three types is to run the program that computes or acquires those null data, or to have the user fill them in.
4) Type 4 is Can-Estimate, where a value of an attribute in a certain row (student record) is missing. This null value can be estimated by the system. The proposed method estimates null values in every column of type 4, based on the values of the known marks in the database; thus the known and estimated values are numeric. The system then computes the total mark.
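Since the paper gives no code for this preprocessing step, the following minimal Python sketch shows how the four-type tagging above might be applied to the STUDY table. The NOT_ESTIMABLE and DERIVED sets, and the example rows, are illustrative assumptions, not taken from the paper:

```python
# Hypothetical preprocessing pass that tags each column of the STUDY
# table with one of the four null-value types described above.
NOT_ESTIMABLE = {"St#", "Name", "Address", "Cours_Code"}  # Type 2
DERIVED = {"Total"}                                       # Type 3

def classify_columns(rows, columns):
    """rows: list of dicts mapping column name -> value (None = null)."""
    types = {}
    for col in columns:
        if col in NOT_ESTIMABLE:
            types[col] = "Type 2 (NotEstimated)"   # cannot be estimated
        elif col in DERIVED:
            types[col] = "Type 3 (Derived)"        # recomputed from others
        elif all(r.get(col) is None for r in rows):
            types[col] = "Type 1 (NullColumn)"     # nothing inserted yet
        else:
            types[col] = "Type 4 (Can-Estimate)"   # numeric mark, estimable
    return types

# Example: the missing Q1 mark is the only Type 4 null to estimate.
rows = [{"St#": 1, "Q1": 8, "MidTerm": 25, "Total": None},
        {"St#": 2, "Q1": None, "MidTerm": 28, "Total": None}]
print(classify_columns(rows, ["St#", "Q1", "MidTerm", "Total"]))
```

Only the Type 4 columns feed the CBR estimation described in the following sections; the other three types are resolved by recomputation or user input.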
IV. KNOWLEDGE REPRESENTATION
The system acquires the basic knowledge needed to build its KB, shown in Fig. 1. This process trains the system to learn classification and estimation under the supervision of the expert. Fig. 2 demonstrates the algorithm for this process.

Each student object scanned for classification is stored as a case. Each case is described by the attributes of a certain row in Table 1. The values of these attributes are used in the classification (categorization) of student objects. The category gives an impression of the level of the student objects related to it: it refers to the range within which the total of their registered attribute grades, divided by the total of the maximum marks of those attributes in a certain course, falls. The actual categories are, for example: APLUS, A, BPLUS, B, ..., FAIL, LOWFAIL.

Fig. 1. System Knowledge Base.

Each classified student object is related to a category and is known as an exemplar of that category. It is represented as a combination of values of the assessment attributes. There is no restriction on the number or names of categories and exemplars.

Student object, student level, attribute, and exemplar are represented as C++ classes. These classes and their relationships constitute the knowledge base of the proposed system.

V. TRAINING AND SUPERVISED LEARNING PROCESS
On running for the first time, the system reads the student objects (rows of Table 1) and checks their attributes for null values. It then reports the null value types according to the preliminary classification given in Section III, and it marks the rows with null values to be estimated later, as described in Section VI.

For each attribute, a scale of possible values is determined. The combination of all possible attribute values defines all possible mark states within this description. The task is to classify each student object's state.

When student objects (cases) are scanned by the system for the first time for classification, it can do nothing: it has no categories and no classified exemplars to match. It asks the expert for help in classifying the cases and in clarifying the reasons for those classifications. The first distinct cases are classified by the expert and added to the KB as exemplars, related to newly created categories.

On reading a new row of a student object (an unclassified case), the system starts the classification process to select a category from the KB categories, based on the values of the case's attributes. If the category is not in the KB yet, the system asks the expert to create and name a new one. Category names are listed in Section IV. Within each category there are many exemplars, each with its own level, which should lie within the range of the category:

Exemplar level = (sum of actual values of all encountered attributes) / (sum of maximum values of all encountered attributes).

For a new case, the system looks for a similar exemplar. If it finds a category, it presents its suggestion to the expert. If the expert accepts, the new case is related to the category, and a new exemplar is created if the expert wants. If the expert refuses that classification, or the system fails to find a category, it asks the expert to explain why, to classify the case himself, and to give reasons; then a new category, and an exemplar (the new case) related to that category, are created.

The expert may classify the new case into an existing category or even a new one. The algorithm for system training and classification is shown in Fig. 2. Supervised learning continues in the estimation process, as seen in the next section.

A. System Classification of Student Objects
CBR classification is mainly used to locate the case (exemplar) closest to the student case that has null data. The student object with the null value is the object that has to be classified and assigned to a certain class (category). It is constructed from its marks of assessments (attributes) and their weights. Clearly, an expert instructor uses such knowledge to characterize a marks condition; we assume that, in order to make preliminary conclusions, the expert uses a finite number of assessment marks.
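The training step just described amounts to computing the case level CL = TAV/TMV and finding (or creating) a category whose scope contains it, as in the algorithm of Fig. 2. The paper implements its KB as C++ classes; the following minimal Python sketch is illustrative only, and the class layout, the category scope [0.70, 0.80) and the ask_expert callback are assumptions:

```python
# Minimal sketch of the Fig. 2 training step: compute CL = TAV/TMV and
# find, or create with the expert's help, a category whose scope covers it.
class Category:
    def __init__(self, name, low, high):
        self.name, self.low, self.high = name, low, high
        self.exemplars = []                      # list of (attributes, level)

    def contains(self, level):
        return self.low <= level < self.high

def case_level(actual, maximum):
    """actual: registered marks of one case; maximum: full marks per attribute."""
    tav = sum(actual.values())                   # Total of Actual Values
    tmv = sum(maximum[a] for a in actual)        # Total of Maximum Values
    return tav / tmv                             # CL = TAV / TMV

def classify(case, maximum, categories, ask_expert):
    cl = case_level(case, maximum)
    for cat in categories:
        if cat.contains(cl):
            break
    else:                                        # no existing scope covers CL
        cat = ask_expert(cl)                     # expert names a new category
        categories.append(cat)
    cat.exemplars.append((dict(case), cl))       # store the case as an exemplar
    return cat

# Example: the expert seeds a "B" category covering levels [0.70, 0.80).
categories = []
expert = lambda cl: Category("B", 0.70, 0.80)
case = {"Q1": 8, "Q2": 7, "MidTerm": 22, "Final": 38}
maximum = {"Q1": 10, "Q2": 10, "MidTerm": 30, "Final": 50}
print(classify(case, maximum, categories, expert).name)  # -> B (CL = 0.75)
```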
Each attribute has a weight, based on the range of minimum and maximum mark values that can be assigned to it, compared with the total of the values of all registered attributes for the students. Some attributes may not be available at a certain moment for a course: an assessment may be canceled, or not held yet. The N/A attributes should therefore be excluded for a while from the list of attributes that describe the student objects (cases), until they are included in the DB, just as is done with columns of type 1, type 2 and type 3. When a new case (student object) with a null value in one of its attributes is found, the system starts its classification process. It looks for a category for that case and discovers the exemplar (classification) most similar to it. If it fails, it asks for an expert classification. If it succeeds, it starts estimating the null value for the current classified student (case). The algorithm for system classification and estimation is presented in Fig. 3.

After classifying the student object into a certain category, the system retrieves the exemplars related to the same category. It can use any of four methods to estimate null values. The value of a null attribute A in the current student record is estimated by any of the following methods:
1) the value of the same attribute A in the most similar exemplar;
2) the average of the values of attribute A in all exemplars related to the classification category;
3) the average level of all exemplars related to the classification category, multiplied by the maximum value of that attribute (out of marks);
4) the average of the results of methods 1, 2 and 3.
The system then offers its estimated values to the expert, to get his selection and guidance. The expert should choose one of them or refuse them all; in every case, the system asks the expert for the reasons behind his decision.

Most of the time, the expert's reason was that the selected method is suitable for the nature, weight, and difficulty of the particular assessment (attribute). In later runs, the system uses this knowledge to choose the method itself. A comparison of the results of estimating assumed null values is presented next; finally, the system calculates the average of all methods.
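The four estimation methods lend themselves to a direct implementation. Below is a minimal Python sketch under stated assumptions: exemplars are stored as (attributes, level) pairs, and similarity is taken as closeness of case levels, which is a plausible reading since the paper does not spell out its similarity measure:

```python
# Sketch of the four null-value estimation methods for attribute A,
# given the exemplars of the category the case was classified into.
def estimate_null(A, case, maximum, exemplars):
    """exemplars: list of (attrs, level); case: attrs with case[A] missing."""
    known = {k: v for k, v in case.items() if v is not None}
    cl = sum(known.values()) / sum(maximum[k] for k in known)

    # Method 1: value of A in the most similar exemplar (closest level).
    attrs, _ = min(exemplars, key=lambda e: abs(e[1] - cl))
    m1 = attrs[A]

    # Method 2: average of A over all exemplars of the category.
    m2 = sum(e[0][A] for e in exemplars) / len(exemplars)

    # Method 3: average exemplar level times the maximum mark of A.
    m3 = (sum(e[1] for e in exemplars) / len(exemplars)) * maximum[A]

    # Method 4: average of the three estimates above.
    m4 = (m1 + m2 + m3) / 3
    return m1, m2, m3, m4

# Example: estimate a missing MidTerm mark from two "B"-category exemplars.
maximum = {"Q1": 10, "Q2": 10, "MidTerm": 30, "Final": 50}
exemplars = [({"Q1": 8, "Q2": 7, "MidTerm": 22, "Final": 38}, 0.75),
             ({"Q1": 7, "Q2": 8, "MidTerm": 24, "Final": 34}, 0.73)]
case = {"Q1": 8, "Q2": 8, "MidTerm": None, "Final": 36}
print(estimate_null("MidTerm", case, maximum, exemplars))
```

With these numbers the case level is about 0.743, so method 1 picks the first exemplar (level 0.75) and returns 22, method 2 returns 23, and method 3 returns 0.74 × 30 = 22.2; method 4 averages the three.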
B. Experiment Results
Assume that there are n records (R1, R2, ..., Rn) in the STUDY table of the database, where, for example, the value of the attribute "MidTerm" of record Ri is "Ri.MidTerm". Also assume that the estimated values of Ri.MidTerm are ERi.MidTerm(method 1), ERi.MidTerm(method 2), ERi.MidTerm(method 3) and ERi.MidTerm(method 4). To estimate a missing MidTerm value, those four values are computed according to the four methods listed above.

Referring to the STUDY table shown in Table 1, and assuming that there is a null value in certain records, five scenarios are tested; Table 3 collects the results of the following experiment to estimate null values in an attribute of five columns of different records:
1) The 15th student record has a null value in the MidTerm column, while the other attributes are given their values.
2) The 5th student record has a null value in the homework H1 column, while the other attributes are given their values.
3) The 8th student record has a null value in the Final Exam column, while the other attributes are given their values.
4) The 10th student record has a null value in the quiz Q1 column, while the other attributes are given their values.
5) The 12th student record has a null value in the Project column, while the other attributes are given their values.
As seen in Table 3, no single method is preferred for all attributes, while the average of all the estimates is reasonably accurate and applicable. It is also noticed that as the number of rows increases, the precision of the estimation increases as well.

VII. CONCLUSIONS
This paper presented a supervised learning system for estimating null values found in a database. The system performs data classification based on CBR to estimate missing marks of students. A moderate data set is used to train the system under the supervision of an expert; the system then starts classifying the objects that have null values and estimating those values using four methods. It is found that the average of the estimated values is the most reasonable and applicable. In the future, improvements will be applied to increase the precision of the estimated values: a bigger training data set will be used to train the system, and the estimation task will be extended to enable the system to estimate multiple null values, not only a single null value, in the same record.

Fig. 2. Algorithm of system training and classification:
1. Read the actual and maximum values for each attribute of the encountered columns.
2. Compute the Total of Actual Values (TAV).
3. Compute the Total of Maximum Values (TMV).
4. Compute the Case Level (CL) = TAV/TMV.
5. If there is no category whose scope includes CL, consult the expert to get the name and scope of a new category, and create it.
6. Set the exemplar level (EL) = CL.
7. Add the new exemplar, related to the specified category.
8. Ask the expert for the classification reason, and learn.

Table 2 presents the universe of discourse of the attributes: homeworks, quizzes, MidTerm, Project, and Final Exam.

TABLE I. STUDY TABLE WITH ACTUAL VALUES OF HOMEWORKS, QUIZZES, MIDTERM, PROJECT, AND FINAL EXAMS.
4,082.4
2014-01-01T00:00:00.000
[ "Computer Science" ]
Circulating Fibroblast Growth Factor 21 Levels Are Closely Associated with Hepatic Fat Content: A Cross-Sectional Study Background and Aims: Fibroblast growth factor 21 (FGF21), a liver-secreted endocrine factor involved in regulating glucose and lipid metabolism, has been shown to be elevated in patients with non-alcoholic fatty liver disease (NAFLD). This study aimed to evaluate the quantitative correlation between serum FGF21 level and hepatic fat content. Methods: A total of 138 subjects (72 male and 66 female) aged from 18 to 65 years with abnormal glucose metabolism and B-ultrasonography-diagnosed fatty liver were enrolled in the study. Serum FGF21 levels were determined by an in-house chemiluminescence immunoassay, and hepatic fat contents were measured by proton magnetic resonance spectroscopy. Results: Serum FGF21 increased progressively with the increase of hepatic fat content, but when hepatic fat content increased to the fourth quartile, FGF21 tended to decline. Serum FGF21 concentrations were positively correlated with hepatic fat content, especially in subjects with mild/moderate hepatic steatosis (r = 0.276, p = 0.009). Within the range of hepatic steatosis from the first to the third quartile, FGF21 was superior to all other traditional clinical markers, including ALT, in reflecting hepatic fat content. When the patients with severe hepatic steatosis (the fourth quartile) were included, the quantitative correlation between FGF21 and hepatic fat content was weakened. Conclusions: Serum FGF21 is a potential biomarker of hepatic fat content in patients with mild or moderate NAFLD. In severe NAFLD patients, FGF21 concentration might decrease due to liver inflammation or injury.

Introduction
Fibroblast growth factor 21 (FGF21) belongs to a distinct "endocrine" subgroup within the FGF superfamily, consisting of FGF19, FGF21 and FGF23 [1-3]. Due to the lack of the conventional FGF-heparin binding domain, these FGFs can escape the body's vast deposition of heparan sulphate proteoglycans, be released into the circulation and function as endocrine factors [4]. FGF21 is predominantly synthesized in the liver, where it is induced by the peroxisome proliferator-activated receptors PPARα [5] and PPARγ [6]. In addition, FGF21 is also expressed in pancreas, adipose tissue and muscle [7-10]. FGF21 acts via FGF receptors (FGFRs); although FGFRs are widely distributed in almost every tissue of the body, FGF21 is anticipated to function in a selective set of tissues including liver, adipose tissue and pancreas, where β-Klotho, a cofactor required for FGF21 to activate FGFR, is selectively expressed [10,11].

Physiologically, elevated FGF21 in the liver can induce gluconeogenesis, fatty acid oxidation and ketogenesis in the context of prolonged fasting and starvation [12]. FGF21 has been shown to be an important protective factor against various glucose and lipid metabolic disorders in animal models [13-15]. For example, FGF21 activates glucose uptake in adipocytes and protects animals from diet-induced obesity [13]. Transgenic overexpression of FGF21 improves insulin sensitivity and reduces blood glucose and triglycerides to near-normal levels in both ob/ob and db/db mice [13]. Similarly, in diabetic rhesus monkeys, FGF21 significantly decreases fasting glucose, insulin, glucagon and triglycerides [14]. A recent study showed that treatment with recombinant murine FGF21 exerts beneficial effects on hepatic steatosis [15].
Several recent studies have also examined the role of FGF21 in humans, though none of these studies directly supports a metabolic regulatory role of FGF21. Circulating FGF21 concentrations are increased in subjects who are either overweight or have type 2 diabetes or impaired glucose tolerance [16,17]. Mai et al. showed that both lipid infusion and artificial hyperinsulinemia increase FGF21 levels in vivo [18]. However, another study found that the function of FGF21 is closely related to lipid metabolism rather than insulin sensitivity in humans [19]. FGF21 levels also correlate with gamma-glutamyl transferase (γ-GT) and aspartate aminotransferase (AST), indicating a close relationship between FGF21 and liver diseases [19].

Since the liver is the major site of FGF21 expression and hepatic steatosis is highly correlated with impairment of glucose and lipid metabolism in humans, the relationship between hepatic steatosis and FGF21 has been investigated in several recent studies. Li et al. [20] reported that serum FGF21 levels were significantly higher in the non-alcoholic fatty liver disease (NAFLD) group compared with controls and had a high positive correlation with intrahepatic triglyceride content (r = 0.662, p < 0.001). This study, along with recent reports by Dushay et al. [21] and Yilmaz et al. [22], contributed greatly to expanding our knowledge of plasma FGF21 levels in patients with NAFLD, and indicates a role for FGF21 in regulating hepatic lipid metabolism.

Although the aforementioned studies suggest that FGF21 could be a potential biomarker to screen or monitor NAFLD patients [23], the methods utilized to assess the severity of hepatic steatosis, such as B-mode ultrasound or pathological scoring systems, were qualitative or semi-quantitative and did not accurately reflect the quantitative association between serum FGF21 and hepatic fat content. Moreover, in the study by Li et al., liver biopsies were obtained from patients undergoing resection for benign liver disease, and the number of patients with precise information on hepatic fat content was rather small, which might preclude a reliable conclusion [20]. In the current study, we used 1H magnetic resonance spectroscopy (1H MRS) to quantify hepatic fat content in a relatively large number of participants with impaired glucose metabolism and without known liver disease other than different degrees of hepatic steatosis, and further analyzed the quantitative association between serum FGF21 level and hepatic fat content.

Ethics Statement
The study was approved by the human research ethics committee of Zhongshan Hospital and was conducted according to the principles of the Declaration of Helsinki. Written informed consent was obtained from all subjects.

Subjects
The subjects were participants in a clinical intervention study named Role of Pioglitazone and Berberine in the Treatment of Non-Alcoholic Fatty Liver Disease (http://clinicaltrials.gov/, NCT00633282), which was an open, randomized, controlled clinical trial. From March 2008 to July 2010, 160 subjects (88 men and 72 women) were recruited initially from the outpatient department of endocrinology, Shanghai Zhongshan Hospital, China. All participants were diagnosed with impaired glucose regulation (including impaired fasting glucose or impaired glucose tolerance or both) or newly diagnosed diabetes, and with fatty liver by B-ultrasonography during clinical screening tests, according to the inclusion criteria of the clinical trial.
No subjects took anti-diabetic medications (see exclusion criteria below; details on the inclusion criteria of the clinical trial: http://clinicaltrials.gov/ct2/show/NCT00633282?term=NCT00633282&rank=1). Szczepaniak and colleagues [24] analyzed the distribution of hepatic fat content (HFC) in 2,349 participants from the Dallas Heart Study by 1H MRS and found that 5.56% could be considered a cut-off for NAFLD. Following that study, we took HFC > 5.56% as the criterion for the diagnosis of NAFLD in our study as well.

All subjects underwent comprehensive physical examinations, routine biochemical analyses of blood, a 75 g oral glucose tolerance test, hepatitis B surface antigen and hepatitis C virus antibody testing, and 1H MRS. All participants completed a uniform questionnaire containing questions about the histories of present and past illnesses and medical therapy. Subjects with the following conditions were excluded from this study: (1) alcohol consumption ≥ 140 g/week for men or 70 g/week for women; (2) acute or chronic viral hepatitis; (3) biliary obstructive diseases; (4) drug-induced liver disease; (5) total parenteral nutrition; (6) autoimmune hepatitis; (7) Wilson's disease; (8) known hyperthyroidism or hypothyroidism; (9) presence of cancer; (10) current treatment with systemic corticosteroids; (11) patients who have taken or are taking oral hypoglycemic or hypolipidemic drugs; and (12) pregnancy. As the intensity of the interventions in the clinical trial mentioned above was mild, patients with obvious metabolic abnormalities were also excluded for their own health, including diabetic patients with hemoglobin A1c (HbA1c) > 7.5% on the initial visit, patients with serum triglycerides ≥ 5.0 mmol/L, and patients with significantly impaired liver function (alanine aminotransferase (ALT) or AST ≥ 150 U/L). Of the 160 subjects, the study was performed on the 138 subjects (72 men and 66 women) aged from 18 to 65 years who remained after excluding the 22 subjects who met the above exclusion criteria.

Anthropometric and biochemical measurements
Body mass index (BMI) was calculated as the weight in kilograms divided by the square of the height in meters. Waist circumference was measured at the midpoint between the inferior costal margin and the superior border of the iliac crest on the midaxillary line. Waist-hip ratio (WHR) was calculated as waist circumference divided by hip circumference. Blood pressure (BP) was measured three times, with 5-minute intervals between measurements, in the seated position with a mercury sphygmomanometer in the morning. The first and fifth Korotkoff sounds were used to designate systolic (SBP) and diastolic BP (DBP), respectively, and the average of the three BPs was used as the final BP.

The biochemical indexes were measured on a Hitachi 7600 analyzer (Hitachi, Tokyo, Japan). Serum fasting glucose (FBG) and 2-hour glucose were measured by the glucose oxidase method. Serum levels of total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-c), and low-density lipoprotein cholesterol (LDL-c) were determined enzymatically. Apolipoproteins A, B and E (APOA, APOB, APOE) were measured by immunoturbidimetric assay. ALT, AST, γ-GT and lactate dehydrogenase (LDH) were measured by standard enzymatic methods. HbA1c was measured by high-performance liquid chromatography with an HLC-723G7 automated glycohemoglobin analyzer (Tosoh, Tokyo, Japan).
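As a small illustration of the derived anthropometric measures defined above, the following Python snippet computes BMI and WHR; the numeric values in the example are illustrative, not study data:

```python
# Derived anthropometric measures as defined in the Methods section.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2   # kg/m^2

def whr(waist_cm, hip_cm):
    return waist_cm / hip_cm           # waist-hip ratio

# Example with illustrative measurements:
print(round(bmi(80.0, 1.75), 1))       # -> 26.1
print(round(whr(96.0, 100.0), 2))      # -> 0.96
```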
Measurement of serum FGF21
Circulating FGF21 concentrations were measured with an in-house chemiluminescence immunoassay [25] (Antibody and Immunoassay Services, University of Hong Kong). The assay was proven to be highly specific to human FGF21 and did not cross-react with other members of the FGF family (for details see Supplement S1).

Measurement of HFC
Localized proton magnetic resonance spectroscopy (1H MRS) images of the liver were acquired using a 1.5-T Avanto MR system (Siemens AG, Erlangen, Germany) by an experienced radiologist. Sagittal, coronal, and axial slices through the right lobe of the liver were acquired, and an 8 cm³ volume of liver parenchyma was selected for further study. Spectra were collected using a Q-body coil for radiofrequency transmission and signal reception and a double-echo point-resolved spectroscopy sequence for 128 acquisitions. The areas of the resonances from protons of water and of the methylene groups in fatty acid chains were obtained with a time-domain nonlinear fitting routine using commercial software (Syngo spectroscopy VB15, Siemens AG). HFC was calculated by dividing the integral of the methylene groups in the fatty acid chains of the hepatic triglycerides by the sum of the methylene groups and water [26].

Statistical analysis
Statistical analyses were performed with SPSS software version 13.0 (SPSS, Inc., Chicago, IL). Normally distributed data were expressed as means ± SD. Data that were not normally distributed, as determined using the Kolmogorov-Smirnov test, were logarithmically transformed before analysis and expressed as medians with interquartile ranges. One-way ANOVA was used for comparisons among groups, and multiple testing was corrected using the LSD method (equal variances assumed) or the Games-Howell method (equal variances not assumed). Pearson's correlations and multiple stepwise regression analysis were used to examine the associations among HFC, serum FGF21, and other parameters. In all statistical tests, p values < 0.05 were considered significant.
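The HFC definition and the quartile-based grouping used throughout the analysis reduce to a few lines of arithmetic. The Python sketch below is illustrative only; the study used the scanner's fitting software and SPSS, and the simple empirical-quartile rule here is an assumption:

```python
# Hepatic fat content from 1H MRS peak areas, as defined above:
# HFC = S_methylene / (S_methylene + S_water), expressed as a percentage.
def hepatic_fat_content(s_methylene, s_water):
    return 100.0 * s_methylene / (s_methylene + s_water)

def quartile_cutpoints(values):
    """Three cut points splitting the sorted values into quartiles."""
    v = sorted(values)
    n = len(v)
    return [v[n // 4], v[n // 2], v[3 * n // 4]]  # simple empirical quartiles

# Example: a subject with methylene area 0.32 and water area 0.68.
print(hepatic_fat_content(0.32, 0.68))            # -> 32.0 (% HFC)
print(quartile_cutpoints([2.5, 10, 18, 25, 33, 41, 55, 70]))  # -> [18, 33, 55]
```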
Results
Among the 138 subjects, 76 had impaired glucose regulation (FBG ≥ 5.6 mmol/L and/or a 2-hour glucose value ≥ 7.8 mmol/L) and 62 had newly diagnosed diabetes (FBG ≥ 7.0 mmol/L and/or a 2-hour glucose value ≥ 11.1 mmol/L). The hepatic fat contents (HFCs) of all the study subjects determined by 1H MRS were distributed normally, from 2.47% to 81.95%, with a mean and standard deviation of 32.30% and 15.95%, respectively. Using HFC > 5.56% as the criterion for the diagnosis of NAFLD [24], 136 of the study subjects had NAFLD.

The general characteristics of the subjects (Table 1)
By dividing the distribution of HFC into quartiles, we found that there were more male subjects than female subjects in the groups with higher HFC, and γ-GT showed a tendency to increase as HFC increased gradually (p = 0.057), with the highest value, up to 59.96 U/L, in the fourth quartile, despite an obvious drop to 38.31 U/L in the third quartile.

HFC and FGF21
With the increase of HFC, serum FGF21 also increased progressively in patients with HFC below the fourth quartile. The FGF21 concentrations were 194.12 ± 126.96 pg/ml, 219.65 ± 141.74 pg/ml and 326.44 ± 149.47 pg/ml when HFC was in Q1, Q2 and Q3, respectively. Interestingly, once HFC further increased to the fourth quartile, FGF21 tended to decline, to 258.75 ± 124.69 pg/ml (compared with the third quartile, p = 0.059) (Figure S1A).

In light of the fact that FGF21 increased progressively when HFC increased from the first quartile to the third quartile, but decreased in the fourth quartile, we analyzed the association between serum FGF21 concentration and HFC in subjects within the first three quartiles of HFC and in all subjects, respectively (Figure S2). When HFC was in Q1 to Q3, there was a significant positive association between FGF21 and HFC (r = 0.276, p = 0.009); however, the significant association between HFC and FGF21 no longer existed when HFC was in Q4 (r = −0.087, p = 0.671). We also analyzed the association between HFC and other parameters in subjects within the first three quartiles of HFC and in all subjects, respectively. In univariate correlation analyses, HFC in Q1-Q4 was positively associated with AST (r = 0.487, p < 0.001), among other parameters.

To compare the diagnostic value of FGF21 and other common clinical metabolic parameters in reflecting HFC, we conducted multivariate stepwise regression analysis between HFC and the variables that were significant in the univariate analysis and relevant to HFC, including sex, FGF21, ALT, AST, γ-GT, LDH, TC, TG, APOA, APOB and APOE when HFC was in Q1-Q4, and age, FGF21, ALT, AST, LDH, TC, TG and APOB when HFC was in Q1-Q3. FGF21 has already been shown to be correlated with age in some studies [27]; therefore, age was also adjusted for in the multiple regression analysis when HFC was in Q1-Q4. We found that in all subjects, AST and sex (female) were independently associated with HFC (all p < 0.05). However, when the subjects with the highest quartile of HFC were excluded from the analysis, FGF21 became the strongest factor independently associated with HFC (Table 3).
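The contrast between the Q1-Q3 and Q1-Q4 correlations can be reproduced schematically: with a rise-then-decline relationship, restricting the sample to the first three quartiles raises the Pearson r. The sketch below uses synthetic numbers, not study data:

```python
# Pearson r between FGF21 and HFC, over the Q1-Q3 subset and over all
# subjects, illustrating how the Q4 decline weakens the correlation.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def r_by_subset(hfc, fgf21, q3_cut):
    sub = [(h, f) for h, f in zip(hfc, fgf21) if h <= q3_cut]
    r_q1q3 = pearson_r([h for h, _ in sub], [f for _, f in sub])
    r_all = pearson_r(hfc, fgf21)
    return r_q1q3, r_all

# Synthetic rise-then-decline data: FGF21 climbs with HFC, then drops in Q4.
hfc = [10, 18, 25, 33, 41, 55, 62, 70]
fgf21 = [150, 190, 230, 280, 320, 300, 260, 240]
print(r_by_subset(hfc, fgf21, q3_cut=41))  # strong r in Q1-Q3, weaker overall
```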
Discussion
In the present study, we demonstrated the close association of serum FGF21 concentrations with intrahepatic fat content in 138 patients with abnormal glucose metabolism and B-ultrasound-diagnosed hepatic steatosis, whose hepatic fat contents were distributed over a large range (2.47%-81.95%). To the best of our knowledge, this study is the first to show the quantitative correlation between serum FGF21 concentrations and hepatic fat content measured by 1H MRS in patients with impaired glucose metabolism. Interestingly, we found that in patients with mild or moderate hepatic steatosis (HFC in Q1-Q3), FGF21 was the strongest factor independently associated with HFC among all metabolic parameters measured. However, when the hepatic fat content increased to the fourth quartile, serum FGF21 concentration no longer increased, but on the contrary tended to decrease.

A previous study showed that in 17 patients with pathological liver triglycerides ranging from 10% to 40%, serum FGF21 concentration was highly positively correlated with hepatic fat content [20], similar to the results of our current study. Mounting evidence has suggested FGF21 as a protective metabolic regulator against a series of abnormalities in glucose and lipid metabolism. FGF21 is most abundantly expressed in the liver and can be directly induced by free fatty acids (FFAs) through PPARα, whose responsive elements have been found in the promoter regions of human FGF21 genes [28]. The liver is the main processing site of FFAs released from white adipose tissue (WAT); therefore, hepatic cells are able to directly "sense" alterations in circulating FFAs and regulate the concentration of FGF21 accordingly. A recent study reported that circulating FGF21 level is closely related to the daily oscillation of free fatty acids [25], which also supports the FFA-dependent activation of FGF21 in humans.

Under the conditions of obesity and insulin resistance, excessive influx of FFAs to the liver would induce FGF21 over-expression; elevated FGF21 could in turn decrease the level of serum FFAs through the inhibition of lipolysis in WAT [29] and inhibit hepatic triglyceride generation and hepatic steatosis through the promotion of fatty acid oxidation and ketogenesis [30]. Therefore, it is possible that the elevation of FGF21 is a hepatic protective response to the whole-body lipid metabolic burden arriving at the liver, with the hepatic fat content directly reflecting the excessive FFAs that enter the lipid synthesis pathway in the liver; serum FGF21 would then increase with the degree of hepatic steatosis to maintain a balance of hepatic lipid metabolism. In addition, since the liver is the predominant organ for FGF21 production and action, it is possible that fat accumulated in hepatic cells could also directly stimulate the secretion of FGF21, or cause an attenuated functional response to FGF21 (FGF21 resistance), thus leading to a compensatory FGF21 up-regulation.

Interestingly, when hepatic fat content increased to the fourth quartile, we found that the serum FGF21 concentration began, on the contrary, to decrease (Figure S1A). In line with our finding, a recent study reported that serum FGF21 levels were increased in individuals with NASH, but the FGF21 level in NASH patients was much lower than that in NAFLD patients [21]. In the current study, in patients with hepatic fat content in the fourth quartile, the serum concentration of ALT, a well-established marker of hepatic injury, was also elevated (Table 1, Figure S1B), indicating the presence of hepatic injury in these patients. We therefore speculated that the decrease of FGF21 in patients with severe hepatic steatosis might be explained by hepatic cell injury or death caused by lipotoxicity and hepatic inflammation, such that the remaining hepatic cells were unable to produce as much FGF21 as needed. If this assumption turns out to be true, then a decrease of FGF21 level in a NAFLD patient might indicate a decompensatory stage of the disease and might be accompanied by an acute deterioration of a series of metabolic disorders. As we have shown, the FGF21 concentration in patients with mild or moderate hepatic steatosis was elevated in parallel with the serum ALT level, but this balance would break down in severe NAFLD patients, whose biochemical indexes show an obviously elevated ALT concentration but only a slight, non-parallel elevation of FGF21 concentration, probably due to the presence of hepatic injury. Therefore, it is possible that an insufficiency of FGF21 relative to the elevation of ALT concentration might serve clinically as a warning of hepatic cell injury.

Our study also found that in patients with hepatic fat content below the fourth quartile, serum FGF21 was better than any other metabolism-related parameter, including ALT, AST and TG, at reflecting the hepatic fat content. Traditionally, ALT has been the parameter most commonly used to reflect hepatic impairment, including NAFLD. However, our study indicates that FGF21 might be a better serum biomarker for NAFLD than ALT, though the clinical value of FGF21 as a NAFLD biomarker still needs to be validated by further large-scale studies in the general population.

There are several limitations in this study.
Firstly, since the study examined the quantitative correlation between FGF21 and hepatic fat content in a specific group of participants with abnormal glucose metabolism and B-ultrasound-diagnosed hepatic steatosis, the average hepatic fat content in our subjects was much higher than in the general population, so further studies are needed to determine the clinical value of FGF21 as a biomarker for NAFLD in the general population. Secondly, as a non-invasive imaging technique, 1H MRS can detect fatty infiltration of the liver but, unlike the "gold standard" liver biopsy, it is limited in its ability to detect coexisting inflammation or fibrosis. However, in this article we were more concerned with the relationship of HFC to FGF21 and other metabolic parameters than with the pathological changes of the liver, and liver biopsy is an invasive examination that cannot easily be accepted by patients. Furthermore, it has been demonstrated that histology correlates well with 1H MRS in evaluating hepatic triglyceride content [31], and several clinical trials [32,33] on NAFLD have used 1H MRS as an outcome measurement. Therefore, 1H MRS may be a more appropriate reference standard than histology for accurately assessing fat content, especially in a relatively large-sample study. Thirdly, we inferred the presence of hepatic injury in patients with severe hepatic steatosis from the ALT concentration, a simple marker of hepatic injury; biopsy-proven data are needed to confirm the hepatic pathological features of severe NAFLD patients in our future work.

In summary, our study demonstrated that FGF21 is strongly correlated with hepatic fat content in people with mild or moderate hepatic steatosis and can reflect hepatic fat content better than any known serum parameter. Furthermore, we found a decrease of FGF21 in patients with severe hepatic steatosis, which might indicate the presence of hepatic injury. These results support the role of FGF21 as a potential biomarker for NAFLD and further suggest an important role of FGF21 in regulating hepatic lipid metabolism in humans.
4,804.8
2011-09-16T00:00:00.000
[ "Medicine", "Biology" ]